Posted on Tuesday, 14th February 2012

Please read Section 10.4 and post a comment.

Posted in Class | Comments (14)

  1. Eric VanEpps Says:

    I had never seen ROC curves before, and I appreciated the discussion of their interpretation. That said, I’m still not sure exactly how they’re generated such that I could make an ROC curve myself. Can we go over an example?

  2. Rob Rasmussen Says:

    I am confused by the last sentence before the detail subsection on pg. 300 compared with Section 10.4.6 – the interpretation of the p-value seems different in the two parts.

  3. Amanda Markey Says:

    I do not have a firm grasp on the ROC curve. What affects the shape of the curve?

    It’s clear how a larger delta (shift in mean) affects the curve, but it’s less clear to me how other factors that affect power shape the ROC curve (e.g. increasing the sample size, increasing the effect size, since more extreme manipulations yield more power, or decreasing alpha). How do these factor in?

  4. Shubham Debnath Says:

    Very glad for sections 10.4.6 and 10.4.7 about the p-value. I have encountered people in the past who have interpreted the p-value completely incorrectly in both of the fashions described.

  5. Sharlene Flesher Says:

    You mentioned using the power to determine sample size; can you elaborate on how to do that?

  6. nsnyder Says:

    With regard to using the confidence interval to test H0: since we often do not have the true standard deviation, would it be appropriate to adjust the alpha level to make the calculation more accurate (especially with small sample sizes)?

  7. Rex Tien Says:

    I’m still having trouble understanding the implications of, and the reason for, uniformly distributed p-values. My intuition would tell me that when the null hypothesis is true, the p-values should cluster near 1. Maybe I am thinking about it the wrong way.

  8. mpanico Says:

    When inverting the confidence interval, we are essentially saying there is a 10% chance the interval is either above or below the null hypothesis. Since it cannot be both, if the hypothesis is outside the CI, can we give it a p-value of .025?

  9. Jay Scott Says:

    Sec. 10.4.7 states the Neyman-Pearson conceptions “have proven their worth in theoretical work”. How? Is this a reference to using alternative hypotheses to evaluate type II errors and weigh hypotheses against one another, or is there another point being made?

  10. Scott Kennedy Says:

    I’m surprised that a p-value of .05 usually corresponds to a Bayesian probability of .5 to .7, which seems to indicate that the null hypothesis is rather likely, instead of the opposite.

    Also, I found the use of p both as the sample mean and as the p-value a little confusing in the illustration in Section 10.4.8.

  11. Yijuan Says:

    Section 10.4.8 says a non-significant test by itself does not support H0, but I don’t understand Example 10.4.2. In addition, if we get a p-value > 0.95, can we say that is in support of H0?

  12. Matt Bauman Says:

    I’m confused by the statement “A non-significant test of H_0 : θ = θ_0 could occur because H_0 holds or because the variability is so large that it is difficult to determine the value of the unknown parameter.” What’s the unknown parameter in this case? Isn’t it H_0’s test of θ = θ_0?

    (Aside, is there any form of *markup* _processing_ $\frac{x}{2}$ done?)

  13. Rich Truncellito Says:

    What is the significance of the p-value having a uniform distribution when the null hypothesis holds?

  14. Thomas Kraynak Says:

    It’s mentioned that if the standard error of the estimate is small and a test fails to reject, one can interpret that H0 : θ = θ0 holds. What is the rule of thumb for knowing whether the SE is small enough, and if it isn’t small enough, does that mean that more tests should be done?
