<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Feb 16</title>
	<atom:link href="http://www.stat.cmu.edu/~kass/smnp/?feed=rss2&#038;p=89" rel="self" type="application/rss+xml" />
	<link>http://www.stat.cmu.edu/~kass/smnp/?p=89</link>
	<description></description>
	<lastBuildDate>Thu, 26 Apr 2012 14:07:02 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: Thomas Kraynak</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-124</link>
		<dc:creator>Thomas Kraynak</dc:creator>
		<pubDate>Thu, 16 Feb 2012 14:26:44 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-124</guid>
		<description>It&#039;s mentioned that if the standard error of the estimate is small, and a test fails, one can interpret that H0 : θ = θ0 holds. What is the rule of thumb for knowing whether the SE is small enough, and if it isn&#039;t small enough, does that mean that more tests should be done?</description>
		<content:encoded><![CDATA[<p>It&#8217;s mentioned that if the standard error of the estimate is small, and a test fails, one can interpret that H0 : θ = θ0 holds. What is the rule of thumb for knowing whether the SE is small enough, and if it isn&#8217;t small enough, does that mean that more tests should be done?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rich Truncellito</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-123</link>
		<dc:creator>Rich Truncellito</dc:creator>
		<pubDate>Thu, 16 Feb 2012 14:24:49 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-123</guid>
		<description>What is the significance of the p-value&#039;s having a uniform distribution when the null hypothesis holds?</description>
		<content:encoded><![CDATA[<p>What is the significance of the p-value&#8217;s having a uniform distribution when the null hypothesis holds?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Matt Bauman</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-122</link>
		<dc:creator>Matt Bauman</dc:creator>
		<pubDate>Thu, 16 Feb 2012 14:20:06 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-122</guid>
		<description>I&#039;m confused by the statement &quot;A non-significant test of H_0 : θ = θ_0 could occur because H_0 holds or because the variability is so large that it is difficult to determine the value of the unknown parameter.&quot; What&#039;s the unknown parameter in this case?  Isn&#039;t it H_0&#039;s test of θ = θ_0?

(Aside, is there any form of *markup* _processing_ $\frac{x}{2}$ done?)</description>
		<content:encoded><![CDATA[<p>I&#8217;m confused by the statement &#8220;A non-significant test of H_0 : θ = θ_0 could occur because H_0 holds or because the variability is so large that it is difficult to determine the value of the unknown parameter.&#8221; What&#8217;s the unknown parameter in this case?  Isn&#8217;t it H_0&#8217;s test of θ = θ_0?</p>
<p>(Aside, is there any form of *markup* _processing_ $\frac{x}{2}$ done?)</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Yijuan</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-121</link>
		<dc:creator>Yijuan</dc:creator>
		<pubDate>Thu, 16 Feb 2012 14:15:47 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-121</guid>
		<description>In 10.4.8, a non-significant test by itself does not support H0. I don&#039;t understand the example 10.4.2. In addition, if we get a p-value &gt; 0.95, can we say it&#039;s in support of H0?</description>
		<content:encoded><![CDATA[<p>In 10.4.8, a non-significant test by itself does not support H0. I don&#8217;t understand the example 10.4.2. In addition, if we get a p-value &gt; 0.95, can we say it&#8217;s in support of H0?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Scott Kennedy</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-120</link>
		<dc:creator>Scott Kennedy</dc:creator>
		<pubDate>Thu, 16 Feb 2012 13:27:37 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-120</guid>
		<description>I&#039;m surprised that a p-value of .05 usually corresponds to a Bayesian probability of .5 to .7, which seems to indicate the null hypothesis is rather likely, instead of the opposite.

Also, I found the use of p as the sample mean and as the p-value a little confusing in the illustration in section 10.4.8.</description>
		<content:encoded><![CDATA[<p>I&#8217;m surprised that a p-value of .05 usually corresponds to a Bayesian probability of .5 to .7, which seems to indicate the null hypothesis is rather likely, instead of the opposite.</p>
<p>Also, I found the use of p as the sample mean and as the p-value a little confusing in the illustration in section 10.4.8.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jay Scott</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-119</link>
		<dc:creator>Jay Scott</dc:creator>
		<pubDate>Thu, 16 Feb 2012 07:26:17 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-119</guid>
		<description>Sec. 10.4.7 states the Neyman-Pearson conceptions &quot;have proven their worth in theoretical work&quot;.  How?  Is this a reference to using alternative hypotheses to evaluate type II errors and weigh hypotheses against one another, or is there another point being made?</description>
		<content:encoded><![CDATA[<p>Sec. 10.4.7 states the Neyman-Pearson conceptions &#8220;have proven their worth in theoretical work&#8221;.  How?  Is this a reference to using alternative hypotheses to evaluate type II errors and weigh hypotheses against one another, or is there another point being made?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: mpanico</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-118</link>
		<dc:creator>mpanico</dc:creator>
		<pubDate>Thu, 16 Feb 2012 07:12:36 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-118</guid>
		<description>When inverting the confidence interval, we are essentially saying there is a 10% chance the interval is either above or below the null hypothesis. Since it cannot be both, if the hypothesis is outside the CI, can we give it a p-value of .025?</description>
		<content:encoded><![CDATA[<p>When inverting the confidence interval, we are essentially saying there is a 10% chance the interval is either above or below the null hypothesis. Since it cannot be both, if the hypothesis is outside the CI, can we give it a p-value of .025?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rex Tien</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-117</link>
		<dc:creator>Rex Tien</dc:creator>
		<pubDate>Thu, 16 Feb 2012 05:52:31 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-117</guid>
		<description>I&#039;m still having trouble understanding the implications of and the reason for uniformly distributed p-values. My intuition would tell me that when the null hypothesis is true, the p-values should cluster near 1. Maybe I am thinking about it the wrong way.</description>
		<content:encoded><![CDATA[<p>I&#8217;m still having trouble understanding the implications of and the reason for uniformly distributed p-values. My intuition would tell me that when the null hypothesis is true, the p-values should cluster near 1. Maybe I am thinking about it the wrong way.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: nsnyder</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-116</link>
		<dc:creator>nsnyder</dc:creator>
		<pubDate>Thu, 16 Feb 2012 05:02:12 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-116</guid>
		<description>With regards to using the confidence interval to test H0, since we often do not have the true standard deviation would it be appropriate to adjust the alpha level to make the calculation more accurate (especially with small sample sizes)?</description>
		<content:encoded><![CDATA[<p>With regards to using the confidence interval to test H0, since we often do not have the true standard deviation would it be appropriate to adjust the alpha level to make the calculation more accurate (especially with small sample sizes)?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Sharlene Flesher</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-115</link>
		<dc:creator>Sharlene Flesher</dc:creator>
		<pubDate>Thu, 16 Feb 2012 04:55:27 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=89#comment-115</guid>
		<description>You mentioned using the power to determine sample size; can you elaborate on how to do that?</description>
		<content:encoded><![CDATA[<p>You mentioned using the power to determine sample size; can you elaborate on how to do that?</p>
]]></content:encoded>
	</item>
</channel>
</rss>

