<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Tues Feb 7</title>
	<atom:link href="http://www.stat.cmu.edu/~kass/smnp/?feed=rss2&#038;p=81" rel="self" type="application/rss+xml" />
	<link>http://www.stat.cmu.edu/~kass/smnp/?p=81</link>
	<description></description>
	<lastBuildDate>Thu, 26 Apr 2012 14:07:02 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: Thomas Kraynak</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-85</link>
		<dc:creator>Thomas Kraynak</dc:creator>
		<pubDate>Tue, 07 Feb 2012 14:33:14 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-85</guid>
		<description>From your explanation on p. 179, I still don&#039;t quite understand why we can consider theta a theoretical value.</description>
		<content:encoded><![CDATA[<p>From your explanation on p. 179, I still don&#8217;t quite understand why we can consider theta a theoretical value.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Matt Bauman</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-84</link>
		<dc:creator>Matt Bauman</dc:creator>
		<pubDate>Tue, 07 Feb 2012 14:20:35 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-84</guid>
		<description>I found the section on maximum likelihood very confusing initially.  It took a few reads before I finally began understanding it.  One lingering question: how can x have five possible values if n=4?</description>
		<content:encoded><![CDATA[<p>I found the section on maximum likelihood very confusing initially.  It took a few reads before I finally began understanding it.  One lingering question: how can x have five possible values if n=4?</p>
]]></content:encoded>
	</item>
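	<!--
	A note on the closing question above, assuming (as an inference from
	context, not stated in the comment) that x is a binomial count over
	n = 4 trials: the count records the number of successes, so its support
	has n + 1 points,

	    x \in \{0, 1, 2, 3, 4\}, \qquad
	    P(X = x) = \binom{4}{x} \theta^x (1 - \theta)^{4 - x}.

	Four trials admit five possible counts because zero successes is itself
	an outcome.
	-->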
	<item>
		<title>By: Rich Truncellito</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-83</link>
		<dc:creator>Rich Truncellito</dc:creator>
		<pubDate>Tue, 07 Feb 2012 14:18:55 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-83</guid>
		<description>I found the descriptions and explanations in this chapter very clear. So the one detail I find myself missing is a small technical one: how does V(T) = p(1-p)/(n^2) follow from V(Y) = np(1-p)? Mainly, how does (n^2) become the denominator of V(T)?</description>
		<content:encoded><![CDATA[<p>I found the descriptions and explanations in this chapter very clear. So the one detail I find myself missing is a small technical one: how does V(T) = p(1-p)/(n^2) follow from V(Y) = np(1-p)? Mainly, how does (n^2) become the denominator of V(T)?</p>
]]></content:encoded>
	</item>
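	<!--
	A sketch of the derivation asked about above, assuming T = Y/n is the
	sample proportion (that definition is inferred from context):

	    V(T) = V\!\left(\frac{Y}{n}\right)
	         = \frac{1}{n^2} V(Y)
	         = \frac{np(1-p)}{n^2}
	         = \frac{p(1-p)}{n}.

	The n^2 enters because V(aY) = a^2 V(Y) for any constant a; one factor
	of n then cancels against the n in V(Y) = np(1-p).
	-->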
	<item>
		<title>By: Amanda Markey</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-82</link>
		<dc:creator>Amanda Markey</dc:creator>
		<pubDate>Tue, 07 Feb 2012 14:10:11 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-82</guid>
		<description>I&#039;m not clear on the costs and benefits of MLE vs. the Method of Moments.  When and why would you use one vs. the other?

Also, how frequently do they produce the same estimator? (never? often?)</description>
		<content:encoded><![CDATA[<p>I&#8217;m not clear on the costs and benefits of MLE vs. the Method of Moments.  When and why would you use one vs. the other?</p>
<p>Also, how frequently do they produce the same estimator? (never? often?)</p>
]]></content:encoded>
	</item>
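	<!--
	A small worked example bearing on the question above (the Poisson case
	is chosen here for illustration; it is not taken from the chapter): for
	X_1, ..., X_n iid Poisson(\lambda), the method of moments matches the
	first moment, E(X) = \lambda, giving \hat{\lambda} = \bar{x}. Maximizing
	the log likelihood

	    \ell(\lambda) = \sum_i x_i \log \lambda - n\lambda - \sum_i \log(x_i!)

	by setting \ell'(\lambda) = \sum_i x_i / \lambda - n = 0 also gives
	\hat{\lambda} = \bar{x}. The two estimators coincide here, as they do
	for several standard one-parameter families.
	-->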
	<item>
		<title>By: nsnyder</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-81</link>
		<dc:creator>nsnyder</dc:creator>
		<pubDate>Tue, 07 Feb 2012 14:00:08 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-81</guid>
		<description>It seems that when discussing the likelihood, maximizing L(theta) converges on a single parameter value. Is this correct, and is it always the case? In other words, if you combined L(theta) over the whole vector x, could there be multiple maxima?</description>
		<content:encoded><![CDATA[<p>It seems that when discussing the likelihood, maximizing L(theta) converges on a single parameter value. Is this correct, and is it always the case? In other words, if you combined L(theta) over the whole vector x, could there be multiple maxima?</p>
]]></content:encoded>
	</item>
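	<!--
	A brief note on the question above: uniqueness of the maximizer is not
	guaranteed in general. For many textbook families the log likelihood is
	concave in the parameter, so the maximum is unique; for example, with
	X ~ Binomial(n, \theta),

	    \ell(\theta) = x \log \theta + (n - x) \log(1 - \theta) + const,
	    \ell''(\theta) = -x/\theta^2 - (n - x)/(1 - \theta)^2 < 0,

	so \ell is strictly concave and has a single maximum. Mixture models and
	other irregular cases can produce multiple local maxima.
	-->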
	<item>
		<title>By: Yijuan</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-80</link>
		<dc:creator>Yijuan</dc:creator>
		<pubDate>Tue, 07 Feb 2012 13:45:47 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-80</guid>
		<description>Could you talk more about &#039;the choice of the “desired” SE2&#039;?</description>
		<content:encoded><![CDATA[<p>Could you talk more about &#8216;the choice of the “desired” SE2&#8217;?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jay Scott</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-79</link>
		<dc:creator>Jay Scott</dc:creator>
		<pubDate>Tue, 07 Feb 2012 13:28:01 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-79</guid>
		<description>The advantage of &quot;Method of Moments&quot; over &quot;Maximum Likelihood&quot; seems to be that it is much easier to compute, the tradeoff being a less reliable parameter estimate.  Aside from small sample size, are there any dataset characteristics indicating &quot;Method of Moments&quot; should not be used?</description>
		<content:encoded><![CDATA[<p>The advantage of &#8220;Method of Moments&#8221; over &#8220;Maximum Likelihood&#8221; seems to be that it is much easier to compute, the tradeoff being a less reliable parameter estimate.  Aside from small sample size, are there any dataset characteristics indicating &#8220;Method of Moments&#8221; should not be used?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Scott Kennedy</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-78</link>
		<dc:creator>Scott Kennedy</dc:creator>
		<pubDate>Tue, 07 Feb 2012 13:17:04 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-78</guid>
		<description>The premise of the maximum likelihood estimator is to choose some theta that gives the highest probability of some observed event x.

However, what if x lies along one of the tails of a distribution? If we assumed that the probability of x was high, wouldn&#039;t we pick the wrong distribution? Maximizing the probability of x seems like it should only be appropriate for the x&#039;s that are near the peak of the pdf.

I think what it boils down to is a question about how to quantify the total probability for a vector of many observations.  Given independence, is this just the product of the individual probabilities?</description>
		<content:encoded><![CDATA[<p>The premise of the maximum likelihood estimator is to choose some theta that gives the highest probability of some observed event x.</p>
<p>However, what if x lies along one of the tails of a distribution? If we assumed that the probability of x was high, wouldn&#8217;t we pick the wrong distribution? Maximizing the probability of x seems like it should only be appropriate for the x&#8217;s that are near the peak of the pdf.</p>
<p>I think what it boils down to is a question about how to quantify the total probability for a vector of many observations.  Given independence, is this just the product of the individual probabilities?</p>
]]></content:encoded>
	</item>
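	<!--
	On the last question above: yes, under independence the joint density of
	the observations factors, so the likelihood is the product of the
	individual terms,

	    L(\theta) = f(x_1, ..., x_n \mid \theta)
	              = \prod_{i=1}^{n} f(x_i \mid \theta),

	and in practice one maximizes the log likelihood
	\ell(\theta) = \sum_i \log f(x_i \mid \theta), which has the same
	maximizer. A single tail observation does not force the wrong model; it
	contributes one small factor, and it is the product over all
	observations that gets maximized.
	-->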
	<item>
		<title>By: Sharlene Flesher</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-77</link>
		<dc:creator>Sharlene Flesher</dc:creator>
		<pubDate>Tue, 07 Feb 2012 12:10:42 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-77</guid>
		<description>At the end of 7.2.1 it is mentioned that higher-order moments could be used; why would they be used? Also, at the end of 7.3.1 it says &quot;In many applications an estimator θ̂ follows an approximately normal distribution.&quot; How does the estimator follow a normal distribution?</description>
		<content:encoded><![CDATA[<p>At the end of 7.2.1 it is mentioned that higher-order moments could be used; why would they be used? Also, at the end of 7.3.1 it says &#8220;In many applications an estimator θ̂ follows an approximately normal distribution.&#8221; How does the estimator follow a normal distribution?</p>
]]></content:encoded>
	</item>
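	<!--
	A short note on the second question above, stated in the standard
	large-sample form (the general theorem, not a result quoted from this
	page): under regularity conditions the maximum likelihood estimator
	satisfies, approximately for large n,

	    \hat{\theta} \approx N\!\left(\theta, \frac{1}{n I(\theta)}\right),

	where I(\theta) is the Fisher information per observation. The estimator
	is a function of the data and hence itself a random variable; that is
	the sense in which it "follows" a distribution.
	-->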
	<item>
		<title>By: Kelly</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-76</link>
		<dc:creator>Kelly</dc:creator>
		<pubDate>Tue, 07 Feb 2012 08:41:49 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=81#comment-76</guid>
		<description>First off, I want to re-pose the question I asked on Thursday: I don&#039;t understand why it is true that sample means are normally distributed even when the distribution from which the underlying observations are drawn is not normal.

To move on to chapter 7, I noticed that you briefly discuss the method of moments and then move quickly to maximum likelihood estimation.  I&#039;m not quite clear on the relationship between the two methods of estimation (other than that they&#039;re mentioned close together). How, if at all, are they related, and are there particular cases in which one or the other method is better to use?</description>
		<content:encoded><![CDATA[<p>First off, I want to re-pose the question I asked on Thursday: I don&#8217;t understand why it is true that sample means are normally distributed even when the distribution from which the underlying observations are drawn is not normal.</p>
<p>To move on to chapter 7, I noticed that you briefly discuss the method of moments and then move quickly to maximum likelihood estimation.  I&#8217;m not quite clear on the relationship between the two methods of estimation (other than that they&#8217;re mentioned close together). How, if at all, are they related, and are there particular cases in which one or the other method is better to use?</p>
]]></content:encoded>
	</item>
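	<!--
	A minimal simulation sketch bearing on the first question above (this
	code is illustrative and not part of the original page; it uses only the
	Python standard library):

	import random
	import statistics

	random.seed(1)

	# Draw from a clearly non-normal parent distribution: Exponential(1).
	# For each replicate, compute the mean of n observations.
	n = 50          # sample size per mean
	reps = 10000    # number of sample means

	means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
	         for _ in range(reps)]

	# The parent is skewed, but the means cluster symmetrically around 1,
	# with spread close to the CLT prediction sd = 1/sqrt(n), about 0.141.
	print("mean of means:", round(statistics.fmean(means), 3))
	print("sd of means:  ", round(statistics.stdev(means), 3))

	This is the central limit theorem at work: averages of many independent
	draws are approximately normal even when each individual draw is not.
	-->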
</channel>
</rss>

