Posted on Friday, 16th March 2012

Please read Chapter11-MAR16.pdf and post a comment.

Posted in Class | Comments (15)

  1. Eric VanEpps Says:

    On page 333, I believe you mean to write θ = β1 rather than the β2 that is written.

    Section 11.1.5 states that the likelihood ratio test is optimal for simple hypotheses, but I don’t really understand why. Is it better than something like a t-test, and if so, how? Is the main advantage of the likelihood ratio test that it doesn’t rely on as many assumptions as the traditional hypothesis tests we’ve already discussed?

  2. David Zhou Says:

    I’m a little confused by the use of the nuisance parameter. Is it basically what we call a free parameter? Is the nuisance parameter always denoted by w?

  3. Shubham Debnath Says:

    Good to see an example on source localization; I used to do similar research as an undergrad.

  4. Sharlene Flesher Says:

    Does lowering the significance threshold from 0.05 to 0.01 for multiple tests (Section 11.3.2) cause some results to be incorrectly considered insignificant? If so, how would these be found and corrected for?
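
    A minimal sketch (hypothetical numbers, not from the chapter) of the Bonferroni idea behind moving from 0.05 to 0.01: with m = 5 tests, each is judged at alpha/m so that the chance of any false positive stays near alpha. The cost, as the question suggests, is power: borderline real effects can fall below the stricter cutoff.

        # Bonferroni correction for m simultaneous tests (toy numbers).
        alpha, m = 0.05, 5
        p_values = [0.001, 0.020, 0.030, 0.200, 0.800]   # hypothetical results
        cutoff = alpha / m                               # 0.05 / 5 = 0.01
        for p in p_values:
            # p = 0.02 and p = 0.03 pass at 0.05 but fail at 0.01: possible
            # false negatives, the price of controlling false positives.
            print(p, "significant" if p < cutoff else "not significant")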

  5. Ben Dichter Says:

    When would one use the AIC penalty vs the BIC penalty? Does the theory behind the derivations of these penalties apply better to some cases than others?
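
    A minimal sketch (hypothetical fit values) contrasting the two penalties: both adjust the same maximized log-likelihood, but BIC’s penalty grows with the sample size n, so once log n > 2 (n above about 8) it penalizes parameters more heavily than AIC and tends to favor smaller models.

        import numpy as np

        def aic(logL, p):
            return -2 * logL + 2 * p           # penalty: 2 per free parameter

        def bic(logL, p, n):
            return -2 * logL + p * np.log(n)   # penalty grows with sample size

        logL, p, n = -1234.5, 3, 500           # hypothetical maximized log-likelihood
        print(aic(logL, p))                    # 2475.0
        print(bic(logL, p, n))                 # 2487.6 -- stiffer penalty at n = 500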

  6. Jay Scott Says:

    p330 s11.1.1: “Furthermore, like ML estimation, it turns out to have an important optimality property in large samples.” But no explicit explanation is given.

    p338 ex 11.1
    Comparing fits of the exponential, gamma, and inverse Gaussian distributions, the lower AIC value is used to show that the inverse Gaussian distribution is the better fit, but I am unclear as to why.
    If AIC is the penalty only and not the actual maximized log-likelihood function, how can it be used to do this? Is 2p the penalty *for* AIC, or is AIC the penalty 2p in an adjusted maximized log-likelihood function?
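
    A minimal toy sketch (simulated data, not the book’s Example 11.1) of how such a comparison works: AIC is not the penalty alone but the penalized criterion -2*logL + 2p, computed once per candidate model, and the model with the lowest value is preferred.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        x = stats.invgauss.rvs(1.5, size=500, random_state=rng)   # toy data

        # AIC = -2 * (maximized log-likelihood) + 2p; p counts free parameters.
        for name, dist, p in [("exponential", stats.expon, 1),
                              ("gamma", stats.gamma, 2),
                              ("inverse Gaussian", stats.invgauss, 2)]:
            params = dist.fit(x, floc=0)              # maximum likelihood fit
            logL = np.sum(dist.logpdf(x, *params))
            print(name, -2 * logL + 2 * p)            # lowest AIC wins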

  7. Rex Tien Says:

    Why would we want to use the bootstrap over the permutation test? Are there any pitfalls we must watch out for with permutation tests?
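
    A minimal toy sketch contrasting the two resampling schemes: the bootstrap resamples with replacement within each group (handy for standard errors and confidence intervals), while the permutation test relabels observations across groups to test a null hypothesis directly. One known pitfall of permutation tests is that they require the observations to be exchangeable under the null.

        import numpy as np

        rng = np.random.default_rng(3)
        x = rng.normal(0.0, 1.0, 30)   # hypothetical group 1
        y = rng.normal(0.5, 1.0, 30)   # hypothetical group 2

        # Bootstrap: resample WITH replacement within each group.
        boot = np.array([rng.choice(x, x.size, replace=True).mean()
                         - rng.choice(y, y.size, replace=True).mean()
                         for _ in range(10_000)])
        print(np.percentile(boot, [2.5, 97.5]))   # 95% bootstrap interval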

  8. Rich Truncellito Says:

    From my reading of the chapter, it seems that there is no need to correct a combined p-value obtained from testing a single hypothesis on multiple independent data sets, even though p-values from tests of multiple hypotheses do, in general, need correction. Could you explain why it is OK to accept the combined single-hypothesis p-value without correction?
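
    A minimal sketch, assuming Fisher’s method as the combination rule (the per-dataset p-values below are hypothetical). The key point: combining k independent p-values for one hypothesis yields a single test with a single p-value, so there is no family of hypotheses to correct over.

        # Fisher's method: under H0, -2 * sum(log p_i) ~ chi-squared with 2k df.
        from scipy import stats

        p_values = [0.08, 0.12, 0.04]   # hypothetical, one per independent dataset
        stat, p_combined = stats.combine_pvalues(p_values, method="fisher")
        print(stat, p_combined)         # one combined test of the one hypothesis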

  9. Yijuan Du Says:

    Does the likelihood ratio test only work for a parametric specification? What about a hypothesis like θ1 = θ2?

  10. Kelly Says:

    In the continuation of Example 5.7 on p. 350, the authors obtained 100,000 sets of pseudo-data. Is there any danger of having too large a number of pseudo-data sets and depressing the p-value more than is warranted by the actual values obtained from the real data? (This is sort of analogous to the criticism that, with a large enough sample size, almost any test will come out significant.) I’m guessing that since the MEG recordings contain so many measurements to begin with, this is much more reasonable than running a standard psychological test on 100,000 participants. Is there some ballpark ratio (or range of ratios) of data points in a single dataset to the number of pseudo-data sets, to balance the problem of testing many comparisons against the concern of falsely depressing the p-value?
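
    A minimal toy sketch (not the book’s MEG analysis) of a permutation test for a mean difference. Increasing the number of pseudo-data sets only reduces the Monte Carlo error in estimating the permutation p-value; unlike enlarging the real sample, it does not push the true p-value downward, so the “large n makes everything significant” worry does not apply here.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 30
        x = rng.normal(0.0, 1.0, n)          # hypothetical group 1
        y = rng.normal(0.5, 1.0, n)          # hypothetical group 2
        observed = abs(x.mean() - y.mean())

        pooled = np.concatenate([x, y])
        n_perm = 100_000                     # number of pseudo-data sets
        count = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)              # relabel under the null hypothesis
            count += abs(pooled[:n].mean() - pooled[n:].mean()) >= observed
        print((count + 1) / (n_perm + 1))    # Monte Carlo estimate of the p-value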

  11. Scott Kennedy Says:

    The utility of the likelihood ratio test is unclear to me. Are we using it only to get a p-value? If we know the MLE, what would our null hypothesis be? The permutation and bootstrap tests are much clearer; I’m just not sure how to use the likelihood ratio test.
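
    A minimal toy sketch (my own example, not the chapter’s) of the test in use. The null hypothesis constrains the parameter vector; here H0 fixes the gamma shape at 1 (an exponential model) while H1 leaves it free. Both models are fit by maximum likelihood, and Wilks’ theorem gives the statistic an approximate chi-squared null distribution, from which the p-value comes.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        x = stats.gamma.rvs(2.0, size=200, random_state=rng)   # toy data

        # H1: gamma with free shape and scale (2 free parameters).
        a1, _, s1 = stats.gamma.fit(x, floc=0)
        logL1 = np.sum(stats.gamma.logpdf(x, a1, loc=0, scale=s1))

        # H0: shape fixed at 1, i.e. exponential (1 free parameter).
        _, s0 = stats.expon.fit(x, floc=0)
        logL0 = np.sum(stats.expon.logpdf(x, loc=0, scale=s0))

        lr = -2.0 * (logL0 - logL1)          # likelihood ratio statistic
        print(stats.chi2.sf(lr, df=1))       # df = number of constrained parameters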

  12. Amanda Markey Says:

    I believe the assumptions of one-way ANOVA are independence, homogeneity of variance, normality, and no outliers. What are the assumptions of the likelihood ratio test, and are there certain tests that are favored in the field (e.g. Shapiro-Wilk over Kolmogorov-Smirnov for testing normality)?
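
    A minimal sketch of the two normality tests named above, on toy data. Which test a field favors varies; one known caveat of the Kolmogorov-Smirnov version is noted in the comment.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        x = rng.normal(size=100)        # toy sample
        print(stats.shapiro(x))         # Shapiro-Wilk
        # Caveat: the standard K-S null distribution assumes the normal's
        # parameters were fixed in advance; estimating them from the same
        # data makes the test conservative (the Lilliefors correction exists
        # for exactly this case).
        print(stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))))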

  13. Rob Rasmussen Says:

    The likelihood ratio seems somewhat similar to the Bayesian method of finding the posterior distribution of an estimated parameter. Is it possible to go from the Bayesian posterior distribution to the likelihood ratio and significance testing?

  14. Noah Says:

    If the likelihood ratio test of a parameter vector is distributed as chi-squared, does the test for a component of that vector, with the others treated as nuisance parameters, also follow a chi-squared distribution? Is this always the case?

  15. Thomas Kraynak Says:

    Can you go over how you would use penalties to adjust the dimensionality of the parameters when adjustment is needed? (p. 337)
