<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Thurs Mar 1</title>
	<atom:link href="http://www.stat.cmu.edu/~kass/smnp/?feed=rss2&#038;p=97" rel="self" type="application/rss+xml" />
	<link>http://www.stat.cmu.edu/~kass/smnp/?p=97</link>
	<description></description>
	<lastBuildDate>Thu, 26 Apr 2012 14:07:02 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: Thomas Kraynak</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-166</link>
		<dc:creator>Thomas Kraynak</dc:creator>
		<pubDate>Thu, 01 Mar 2012 14:24:54 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-166</guid>
		<description>When estimating parameters using ML in software, how many iterations are needed to obtain an efficient estimator?</description>
		<content:encoded><![CDATA[<p>When estimating parameters using ML in software, how many iterations are needed to obtain an efficient estimator?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rich Truncellito</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-165</link>
		<dc:creator>Rich Truncellito</dc:creator>
		<pubDate>Thu, 01 Mar 2012 14:22:25 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-165</guid>
		<description>Because the inverse of I(θ) may be equated to V(T), it seems that √[I(θ)]•(θ^ - θ) may be a rephrasing of the statistic ratio (θ^ - θ)/SE(θ^) in chapter 10. Is this apparent relationship accurate?</description>
		<content:encoded><![CDATA[<p>Because the inverse of I(θ) may be equated to V(T), it seems that √[I(θ)]•(θ^ &#8211; θ) may be a rephrasing of the statistic ratio (θ^ &#8211; θ)/SE(θ^) in chapter 10. Is this apparent relationship accurate?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Amanda Markey</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-163</link>
		<dc:creator>Amanda Markey</dc:creator>
		<pubDate>Thu, 01 Mar 2012 13:37:55 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-163</guid>
		<description>I&#039;m still not clear on the concept of efficiency - is it a measure of whether the variance of the estimator is bounded? 

Also, does the concept of efficiency relate to the following statement (and if so, how?): it is sometimes better to take a biased estimator over an unbiased estimator if the sampling variance of the biased estimator is sufficiently smaller.</description>
		<content:encoded><![CDATA[<p>I&#8217;m still not clear on the concept of efficiency &#8211; is it a measure of whether the variance of the estimator is bounded? </p>
<p>Also, does the concept of efficiency relate to the following statement (and if so, how?): it is sometimes better to take a biased estimator over an unbiased estimator if the sampling variance of the biased estimator is sufficiently smaller.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Scott Kennedy</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-162</link>
		<dc:creator>Scott Kennedy</dc:creator>
		<pubDate>Thu, 01 Mar 2012 13:08:57 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-162</guid>
		<description>When would I use the loglikelihood function instead of the methods in Chapter 7 to find the standard error? And vice versa?</description>
		<content:encoded><![CDATA[<p>When would I use the loglikelihood function instead of the methods in Chapter 7 to find the standard error? And vice versa?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: David Zhou</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-161</link>
		<dc:creator>David Zhou</dc:creator>
		<pubDate>Thu, 01 Mar 2012 08:12:11 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-161</guid>
		<description>What is the origin of using partial differentiation for maximum likelihood estimation? In addition to the math, I&#039;m interested in the significance of this method.

Can you talk some more about the mathematical care that needs to go into maximum likelihood estimation?</description>
		<content:encoded><![CDATA[<p>What is the origin of using partial differentiation for maximum likelihood estimation? In addition to the math, I&#8217;m interested in the significance of this method.</p>
<p>Can you talk some more about the mathematical care that needs to go into maximum likelihood estimation?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Kelly</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-160</link>
		<dc:creator>Kelly</dc:creator>
		<pubDate>Thu, 01 Mar 2012 07:17:05 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-160</guid>
		<description>I read through the section on Fisher information several times and am still confused about its meaning and derivation.  Also, is it only related to estimators, or does it relate somehow to observed data as well?</description>
		<content:encoded><![CDATA[<p>I read through the section on Fisher information several times and am still confused about its meaning and derivation.  Also, is it only related to estimators, or does it relate somehow to observed data as well?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Matt Panico</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-159</link>
		<dc:creator>Matt Panico</dc:creator>
		<pubDate>Thu, 01 Mar 2012 06:50:32 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-159</guid>
		<description>I&#039;m having trouble thinking of a situation in which a method of estimation would not be invariant to transformations of the parameter. If the actual parameter is transformed, why wouldn&#039;t its estimate be transformed in the same way?</description>
		<content:encoded><![CDATA[<p>I&#8217;m having trouble thinking of a situation in which a method of estimation would not be invariant to transformations of the parameter. If the actual parameter is transformed, why wouldn&#8217;t its estimate be transformed in the same way?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rex Tien</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-158</link>
		<dc:creator>Rex Tien</dc:creator>
		<pubDate>Thu, 01 Mar 2012 05:53:02 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-158</guid>
		<description>It is shown that the MLE of the standard deviation is not equal to the common standard deviation formula. Can you explain more why we want to depart from MLE in this case? And are there other concrete examples when it is preferable to stray from MLE?</description>
		<content:encoded><![CDATA[<p>It is shown that the MLE of the standard deviation is not equal to the common standard deviation formula. Can you explain more why we want to depart from MLE in this case? And are there other concrete examples when it is preferable to stray from MLE?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jay Scott</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-157</link>
		<dc:creator>Jay Scott</dc:creator>
		<pubDate>Thu, 01 Mar 2012 05:38:29 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-157</guid>
		<description>In 8.4.4 (p. 241) you mention that multiple local maxima in the loglikelihood can cause numerical ML estimation methods to get stuck in a region other than the actual maximum. Is there a rule of thumb for determining when this will be a problem? Is there a method for flattening the curve to reduce local variation yet still get a good ML estimate?</description>
		<content:encoded><![CDATA[<p>In 8.4.4 (p. 241) you mention that multiple local maxima in the loglikelihood can cause numerical ML estimation methods to get stuck in a region other than the actual maximum. Is there a rule of thumb for determining when this will be a problem? Is there a method for flattening the curve to reduce local variation yet still get a good ML estimate?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: nsnyder</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-156</link>
		<dc:creator>nsnyder</dc:creator>
		<pubDate>Thu, 01 Mar 2012 04:35:53 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=97#comment-156</guid>
		<description>In 8.4.4, the warning about applying numerical maximization to ML estimation concerns relative maxima in the loglikelihood. Is this only a problem when the data are not from a normal distribution? Do these maxima correspond to anything interesting about the data?</description>
		<content:encoded><![CDATA[<p>In 8.4.4, the warning about applying numerical maximization to ML estimation concerns relative maxima in the loglikelihood. Is this only a problem when the data are not from a normal distribution? Do these maxima correspond to anything interesting about the data?</p>
]]></content:encoded>
	</item>
</channel>
</rss>