Posted on Tuesday, 14th February 2012
Please read Section 10.4 and post a comment.
Posted in Class | Comments (14)
February 15th, 2012 at 9:24 am
I had never seen ROC curves before, and I appreciated the discussion of their interpretation. That said, I’m still not sure exactly how they’re generated such that I could make an ROC curve myself. Can we go over an example?
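Not from the text, but a minimal sketch of how one could generate an empirical ROC curve by hand: draw scores under the null and under the alternative, then sweep a decision threshold and record the false-positive and true-positive rates at each cutoff. The distributions and the shift of 1.5 are my own illustration.

```python
# Sketch: building an empirical ROC curve by sweeping a threshold
# over scores simulated from a null and an alternative distribution.
import random

random.seed(0)
null_scores = [random.gauss(0, 1) for _ in range(1000)]    # H0 cases
alt_scores  = [random.gauss(1.5, 1) for _ in range(1000)]  # H1 cases

# Sweep every observed score as a candidate cutoff, largest first.
thresholds = sorted(null_scores + alt_scores, reverse=True)
roc = []
for t in thresholds:
    fpr = sum(s >= t for s in null_scores) / len(null_scores)
    tpr = sum(s >= t for s in alt_scores) / len(alt_scores)
    roc.append((fpr, tpr))

# Each pair is (false-positive rate, true-positive rate) at one cutoff;
# plotting the pairs in order traces out the ROC curve from (0,0)-ish
# to (1,1).
```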
February 15th, 2012 at 10:17 pm
I am confused by the last sentence before the detail subsection on pg 300 in comparison to section 10.4.6 – the interpretation of the p-value seems different in the two parts.
February 15th, 2012 at 10:39 pm
I do not have a firm grasp on the ROC curve. What affects the shape of the curve?
It’s clear how a larger delta (shift in mean) affects the curve, but it’s less clear to me how things that affect power affect the ROC curve (e.g. increasing the sample size, increasing the effect size (more extreme manipulations yield more power), and decreasing alpha). How do these factor in?
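One way to see this, sketched below for a one-sided z-test of a mean shift (my own example, not from the text): the ROC curve of the test statistic is TPR = Φ(Φ⁻¹(FPR) + √n·δ/σ), so anything that raises √n·δ/σ (larger n, larger δ, smaller σ) lifts the whole curve, while changing α does not change the curve at all; it only selects the operating point along it.

```python
# Sketch: theoretical ROC curve for a one-sided z-test of a mean shift.
# TPR at a given FPR depends only on sqrt(n) * delta / sigma.
from statistics import NormalDist

nd = NormalDist()

def tpr(fpr, delta, sigma, n):
    """True-positive rate at a given false-positive rate."""
    return nd.cdf(nd.inv_cdf(fpr) + (n ** 0.5) * delta / sigma)

# Larger n lifts the curve: compare TPR at FPR = 0.05 for n = 10 vs 40.
for n in (10, 40):
    print(n, round(tpr(0.05, delta=0.5, sigma=1.0, n=n), 3))
```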
February 15th, 2012 at 11:53 pm
Very glad for sections 10.4.6 and 10.4.7 about the p-value. I have encountered people in the past who have interpreted the p-value completely incorrectly in both of the fashions described.
February 15th, 2012 at 11:55 pm
You mentioned using power to determine sample size – can you elaborate on how to do that?
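For the two-sided one-sample z-test, the standard normal-approximation answer can be sketched as follows: solve the power equation for n, giving n = ((z_{1−α/2} + z_{power})·σ/δ)². The numbers here (δ = 0.5, σ = 1, 80% power) are my own illustration.

```python
# Sketch: smallest n giving the target power against a shift delta,
# for a two-sided one-sample z-test with known sigma (normal approx).
import math
from statistics import NormalDist

def sample_size(delta, sigma, alpha=0.05, power=0.80):
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)  # critical value for level alpha
    z_b = nd.inv_cdf(power)          # quantile matching the target power
    n = ((z_a + z_b) * sigma / delta) ** 2
    return math.ceil(n)

# Detecting a half-standard-deviation shift with 80% power:
print(sample_size(delta=0.5, sigma=1.0))  # → 32
```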
February 16th, 2012 at 12:02 am
With regard to using the confidence interval to test H0: since we often do not have the true standard deviation, would it be appropriate to adjust the alpha level to make the calculation more accurate (especially with small sample sizes)?
February 16th, 2012 at 12:52 am
I’m still having trouble understanding the implications of, and the reason for, uniformly distributed p-values. My intuition tells me that when the null hypothesis is true, the p-values should cluster near 1. Maybe I am thinking about it the wrong way.
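A quick simulation (my own, not from the text) makes the uniformity visible: run many z-tests with H0 actually true and bin the resulting p-values. Under H0 the test statistic is just as likely to land anywhere in its null distribution, so each decile of p-values catches about the same share of repetitions; they do not pile up near 1.

```python
# Sketch: simulate repeated two-sided z-tests with H0 true (mu = 0)
# and check that the p-values spread out uniformly on [0, 1].
import random
from statistics import NormalDist

random.seed(1)
nd = NormalDist()
n, reps = 30, 2000
pvals = []
for _ in range(reps):
    xbar = sum(random.gauss(0, 1) for _ in range(n)) / n  # H0 is true
    z = xbar * n ** 0.5                                   # sigma = 1
    pvals.append(2 * (1 - nd.cdf(abs(z))))                # two-sided p

# Count p-values per decile; each bin should hold roughly reps/10.
bins = [sum(lo / 10 <= p < (lo + 1) / 10 for p in pvals)
        for lo in range(10)]
print(bins)
```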
February 16th, 2012 at 2:12 am
When inverting the confidence interval, we are essentially saying there is a 10% chance the interval falls entirely above or entirely below the null value. Since it cannot be both, if the hypothesized value is outside the CI, can we give it a p-value of .025?
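The duality can be checked numerically (a sketch of my own for the z-interval, not from the text): the hypothesized value θ0 falls outside the two-sided (1−α) confidence interval exactly when the two-sided p-value is below α, with each tail carrying α/2.

```python
# Sketch: for a z-test with known sigma, theta0 lies outside the
# two-sided (1 - alpha) CI exactly when the two-sided p-value < alpha.
from statistics import NormalDist

nd = NormalDist()
alpha, sigma, n, theta0 = 0.05, 1.0, 25, 0.0

def ci_excludes(xbar):
    half = nd.inv_cdf(1 - alpha / 2) * sigma / n ** 0.5
    return theta0 < xbar - half or theta0 > xbar + half

def pvalue(xbar):
    z = (xbar - theta0) * n ** 0.5 / sigma
    return 2 * (1 - nd.cdf(abs(z)))

# The two criteria agree for every sample mean tried:
for xbar in (0.1, 0.3, 0.5, -0.6):
    assert ci_excludes(xbar) == (pvalue(xbar) < alpha)
```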
February 16th, 2012 at 2:26 am
Sec. 10.4.7 states the Neyman-Pearson conceptions “have proven their worth in theoretical work”. How? Is this a reference to using alternative hypotheses to evaluate type II errors and weigh hypotheses against one another, or is there another point being made?
February 16th, 2012 at 8:27 am
I’m surprised that a p-value of .05 usually corresponds to a bayesian probability of .5 to .7, which seems to indicate the null hypothesis is rather likely, instead of the opposite.
Also, I found the use of p both for the sample mean and for the p-value a little confusing in the illustration in Section 10.4.8.
February 16th, 2012 at 9:15 am
Section 10.4.8 says a non-significant test by itself does not support H0, but I don’t understand Example 10.4.2. In addition, if we get a p-value > 0.95, can we say that supports H0?
February 16th, 2012 at 9:20 am
I’m confused by the statement “A non-significant test of H_0 : θ = θ_0 could occur because H_0 holds or because the variability is so large that it is difficult to determine the value of the unknown parameter.” What’s the unknown parameter in this case? Isn’t it just the θ in H_0 : θ = θ_0?
(Aside, is there any form of *markup* _processing_ $\frac{x}{2}$ done?)
February 16th, 2012 at 9:24 am
What is the significance of the p-value’s having a uniform distribution when the null hypothesis holds?
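A one-line derivation answers this (assuming a continuous test statistic $T$ with null CDF $F$, so that $F(T)$ is Uniform$(0,1)$ when $H_0$ holds):

$$p = 1 - F(T), \qquad \Pr(p \le u) = \Pr\big(F(T) \ge 1 - u\big) = 1 - (1 - u) = u, \quad u \in [0,1].$$

Uniformity is exactly what makes the rule “reject when $p \le \alpha$” have type-I error $\alpha$ for every choice of $\alpha$ at once.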
February 16th, 2012 at 9:26 am
It’s mentioned that if the standard error of the estimate is small and the test fails to reject, one can interpret this as support that H0 : θ = θ0 holds. What is the rule of thumb for knowing whether the SE is small enough, and if it isn’t small enough, does that mean that more tests should be done?
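One common way to make “SE small enough” precise (my own sketch, with an illustrative margin that is not from the text) is to fix an equivalence margin m in advance and conclude θ is practically equal to θ0 only when the whole 90% confidence interval sits inside (θ0 − m, θ0 + m), the idea behind two one-sided tests (TOST).

```python
# Sketch: replace "SE small enough" with an explicit equivalence margin m.
# Support for H0 is claimed only when the 90% CI for theta lies entirely
# inside (theta0 - m, theta0 + m). The margin 0.2 is illustrative.
from statistics import NormalDist

nd = NormalDist()

def supports_h0(xbar, se, theta0=0.0, margin=0.2, alpha=0.05):
    half = nd.inv_cdf(1 - alpha) * se  # 90% CI half-width
    return theta0 - margin < xbar - half and xbar + half < theta0 + margin

print(supports_h0(xbar=0.02, se=0.05))  # tight CI inside the margin → True
print(supports_h0(xbar=0.02, se=0.50))  # too noisy to support H0 → False
```

If the SE is too large for the CI to fit inside the margin, the usual remedy is more data (a smaller SE), not more tests.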