<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Tuesday Mar 20</title>
	<atom:link href="http://www.stat.cmu.edu/~kass/smnp/?feed=rss2&#038;p=105" rel="self" type="application/rss+xml" />
	<link>http://www.stat.cmu.edu/~kass/smnp/?p=105</link>
	<description></description>
	<lastBuildDate>Thu, 26 Apr 2012 14:07:02 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: Thomas Kraynak</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-212</link>
		<dc:creator>Thomas Kraynak</dc:creator>
		<pubDate>Tue, 20 Mar 2012 13:25:18 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-212</guid>
		<description>Can you go over how you would use penalties to adjust the dimensionality of the parameters when adjustment is needed?  (p. 337)</description>
		<content:encoded><![CDATA[<p>Can you go over how you would use penalties to adjust the dimensionality of the parameters when adjustment is needed?  (p. 337)</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Noah</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-211</link>
		<dc:creator>Noah</dc:creator>
		<pubDate>Tue, 20 Mar 2012 13:12:49 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-211</guid>
		<description>If the likelihood ratio test statistic for a parameter vector is distributed as chi-squared, does the test statistic for a nuisance parameter from that vector also follow a chi-squared distribution? Is this always the case?</description>
		<content:encoded><![CDATA[<p>If the likelihood ratio test statistic for a parameter vector is distributed as chi-squared, does the test statistic for a nuisance parameter from that vector also follow a chi-squared distribution? Is this always the case?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rob Rasmussen</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-210</link>
		<dc:creator>Rob Rasmussen</dc:creator>
		<pubDate>Tue, 20 Mar 2012 13:03:49 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-210</guid>
		<description>The likelihood ratio seems somewhat similar to the Bayesian method of finding the probability distribution of an estimated parameter. Is it possible to go from the Bayesian posterior distribution to the likelihood ratio and significance testing?</description>
		<content:encoded><![CDATA[<p>The likelihood ratio seems somewhat similar to the Bayesian method of finding the probability distribution of an estimated parameter. Is it possible to go from the Bayesian posterior distribution to the likelihood ratio and significance testing?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Amanda Markey</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-209</link>
		<dc:creator>Amanda Markey</dc:creator>
		<pubDate>Tue, 20 Mar 2012 12:43:56 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-209</guid>
		<description>I believe the assumptions of one-way ANOVA are independence, homogeneity of variance, normality, and no outliers.  What are the assumptions of the likelihood ratio test, and are there certain tests that are favored in the field (e.g. Shapiro-Wilk over Kolmogorov-Smirnov for testing normality)?</description>
		<content:encoded><![CDATA[<p>I believe the assumptions of one-way ANOVA are independence, homogeneity of variance, normality, and no outliers.  What are the assumptions of the likelihood ratio test, and are there certain tests that are favored in the field (e.g. Shapiro-Wilk over Kolmogorov-Smirnov for testing normality)?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Scott Kennedy</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-208</link>
		<dc:creator>Scott Kennedy</dc:creator>
		<pubDate>Tue, 20 Mar 2012 12:42:07 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-208</guid>
		<description>The utility of the likelihood ratio test is unclear to me. Are we using it only to get a p-value? If we know the MLE, what would our null hypothesis be? The permutation and bootstrap tests are much clearer; I&#039;m just not sure how to use the likelihood ratio test.</description>
		<content:encoded><![CDATA[<p>The utility of the likelihood ratio test is unclear to me. Are we using it only to get a p-value? If we know the MLE, what would our null hypothesis be? The permutation and bootstrap tests are much clearer; I&#8217;m just not sure how to use the likelihood ratio test.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Kelly</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-207</link>
		<dc:creator>Kelly</dc:creator>
		<pubDate>Tue, 20 Mar 2012 12:28:16 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-207</guid>
		<description>In the continuation of example 5.7 on p. 350, the authors obtained 100,000 sets of pseudo-data.  Is there any danger here of having too many pseudo-data sets and depressing the p-value more than is warranted by the actual values obtained from the real data (sort of analogous to the criticism that, with a large enough sample size, almost any test will come out significant)?  I&#039;m guessing that since the MEG recordings involve so many measurements to begin with, this is much more reasonable than it would be to run a standard psychological test on 100,000 participants.  Is there some ballpark ratio (or range of ratios) of the number of data points in a single dataset to the number of sets of pseudo-data that balances the problem of testing many comparisons against the concern of falsely depressing the p-value?</description>
		<content:encoded><![CDATA[<p>In the continuation of example 5.7 on p. 350, the authors obtained 100,000 sets of pseudo-data.  Is there any danger here of having too many pseudo-data sets and depressing the p-value more than is warranted by the actual values obtained from the real data (sort of analogous to the criticism that, with a large enough sample size, almost any test will come out significant)?  I&#8217;m guessing that since the MEG recordings involve so many measurements to begin with, this is much more reasonable than it would be to run a standard psychological test on 100,000 participants.  Is there some ballpark ratio (or range of ratios) of the number of data points in a single dataset to the number of sets of pseudo-data that balances the problem of testing many comparisons against the concern of falsely depressing the p-value?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Yijuan Du</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-206</link>
		<dc:creator>Yijuan Du</dc:creator>
		<pubDate>Tue, 20 Mar 2012 12:25:07 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-206</guid>
		<description>Does the likelihood ratio test only work for parametric specifications? What about hypotheses like θ1 = θ2?</description>
		<content:encoded><![CDATA[<p>Does the likelihood ratio test only work for parametric specifications? What about hypotheses like θ1 = θ2?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rich Truncellito</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-205</link>
		<dc:creator>Rich Truncellito</dc:creator>
		<pubDate>Tue, 20 Mar 2012 11:40:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-205</guid>
		<description>From my reading of the chapter, it seems that there is no need to correct the combined p-values obtained from tests of a single hypothesis on multiple independent data sets, even though p-values obtained from tests of multiple hypotheses generally do need correction. Could you explain why it is OK to accept the combined one-hypothesis p-values without correction?</description>
		<content:encoded><![CDATA[<p>From my reading of the chapter, it seems that there is no need to correct the combined p-values obtained from tests of a single hypothesis on multiple independent data sets, even though p-values obtained from tests of multiple hypotheses generally do need correction. Could you explain why it is OK to accept the combined one-hypothesis p-values without correction?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rex Tien</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-204</link>
		<dc:creator>Rex Tien</dc:creator>
		<pubDate>Tue, 20 Mar 2012 05:55:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-204</guid>
		<description>Why would we want to use the bootstrap over the permutation test? Are there any pitfalls we must watch out for with permutation tests?</description>
		<content:encoded><![CDATA[<p>Why would we want to use the bootstrap over the permutation test? Are there any pitfalls we must watch out for with permutation tests?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jay Scott</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-203</link>
		<dc:creator>Jay Scott</dc:creator>
		<pubDate>Tue, 20 Mar 2012 04:53:43 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=105#comment-203</guid>
		<description>p330 s11.1.1:  &quot;Furthermore, like ML estimation, it turns out to have an important optimality property in large samples.&quot;  But no explicit explanation is given.

p338 ex 11.1
Comparing the fits of the exponential, gamma, and inverse Gaussian distributions, the lower AIC value is used to show that the inverse Gaussian distribution is the better fit, but I am unclear as to why.
If AIC is the penalty only and not the actual maximized log-likelihood function, how can it be used to do this?  Is 2p the penalty *for* AIC, or is AIC the penalty 2p in an adjusted maximized log-likelihood function?</description>
		<content:encoded><![CDATA[<p>p330 s11.1.1:  &#8220;Furthermore, like ML estimation, it turns out to have an important optimality property in large samples.&#8221;  But no explicit explanation is given.</p>
<p>p338 ex 11.1<br />
Comparing the fits of the exponential, gamma, and inverse Gaussian distributions, the lower AIC value is used to show that the inverse Gaussian distribution is the better fit, but I am unclear as to why.<br />
If AIC is the penalty only and not the actual maximized log-likelihood function, how can it be used to do this?  Is 2p the penalty *for* AIC, or is AIC the penalty 2p in an adjusted maximized log-likelihood function?</p>
]]></content:encoded>
	</item>
</channel>
</rss>