<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Tues Mar 27</title>
	<atom:link href="http://www.stat.cmu.edu/~kass/smnp/?feed=rss2&#038;p=109" rel="self" type="application/rss+xml" />
	<link>http://www.stat.cmu.edu/~kass/smnp/?p=109</link>
	<description></description>
	<lastBuildDate>Thu, 26 Apr 2012 14:07:02 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: Kelly</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-242</link>
		<dc:creator>Kelly</dc:creator>
		<pubDate>Tue, 27 Mar 2012 13:30:55 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-242</guid>
		<description>I&#039;m still unclear on how cross-validation works. Can you clarify?</description>
		<content:encoded><![CDATA[<p>I&#8217;m still unclear on how cross-validation works. Can you clarify?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Thomas Kraynak</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-241</link>
		<dc:creator>Thomas Kraynak</dc:creator>
		<pubDate>Tue, 27 Mar 2012 13:30:52 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-241</guid>
		<description>If you want to simplify your model, what are some methods to determine whether to use forward selection, backward elimination, and/or stepwise regression?</description>
		<content:encoded><![CDATA[<p>If you want to simplify your model, what are some methods to determine whether to use forward selection, backward elimination, and/or stepwise regression?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Matt Bauman</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-240</link>
		<dc:creator>Matt Bauman</dc:creator>
		<pubDate>Tue, 27 Mar 2012 13:27:13 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-240</guid>
		<description>I&#039;m slightly confused by the remarks on shrinkage. Why does a high coefficient suggest an irrelevant variable? How does adding the penalty term fix problems with non-invertible matrices?</description>
		<content:encoded><![CDATA[<p>I&#8217;m slightly confused by the remarks on shrinkage. Why does a high coefficient suggest an irrelevant variable? How does adding the penalty term fix problems with non-invertible matrices?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rex Tien</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-239</link>
		<dc:creator>Rex Tien</dc:creator>
		<pubDate>Tue, 27 Mar 2012 13:15:44 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-239</guid>
		<description>It would be very helpful to have a picture of the example in 12.5.5.

Is there any way of selectively shrinking certain coefficients?</description>
		<content:encoded><![CDATA[<p>It would be very helpful to have a picture of the example in 12.5.5.</p>
<p>Is there any way of selectively shrinking certain coefficients?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rob Rasmussen</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-238</link>
		<dc:creator>Rob Rasmussen</dc:creator>
		<pubDate>Tue, 27 Mar 2012 13:05:32 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-238</guid>
		<description>Is it possible to use a form of multiple linear regression for non-cosine directional tuning, or would it be limited to using kernel regression methods or some other technique?</description>
		<content:encoded><![CDATA[<p>Is it possible to use a form of multiple linear regression for non-cosine directional tuning, or would it be limited to using kernel regression methods or some other technique?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Yijuan Du</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-237</link>
		<dc:creator>Yijuan Du</dc:creator>
		<pubDate>Tue, 27 Mar 2012 12:33:37 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-237</guid>
		<description>When we choose the explanatory variables, do we usually try to make them uncorrelated, or does it not matter whether they are correlated? Does their correlation only influence interpretation?</description>
		<content:encoded><![CDATA[<p>When we choose the explanatory variables, do we usually try to make them uncorrelated, or does it not matter whether they are correlated? Does their correlation only influence interpretation?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: David Zhou</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-236</link>
		<dc:creator>David Zhou</dc:creator>
		<pubDate>Tue, 27 Mar 2012 07:40:54 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-236</guid>
		<description>Can you go over the source localization problem for MEG that you discuss in Example 12.9? I&#039;m not clear on your use of penalized least squares in the minimum norm estimate. It sounds like an exciting statistical technique, but I&#039;d love to hear more about it.</description>
		<content:encoded><![CDATA[<p>Can you go over the source localization problem for MEG that you discuss in Example 12.9? I&#8217;m not clear on your use of penalized least squares in the minimum norm estimate. It sounds like an exciting statistical technique, but I&#8217;d love to hear more about it.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Matt Panico</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-235</link>
		<dc:creator>Matt Panico</dc:creator>
		<pubDate>Tue, 27 Mar 2012 06:30:53 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-235</guid>
		<description>Does the forward selection algorithm also apply to interaction effects? Should we consider interaction effects with every variable we add to the model, or only where we think it would make sense?</description>
		<content:encoded><![CDATA[<p>Does the forward selection algorithm also apply to interaction effects? Should we consider interaction effects with every variable we add to the model, or only where we think it would make sense?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jay Scott</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-234</link>
		<dc:creator>Jay Scott</dc:creator>
		<pubDate>Tue, 27 Mar 2012 05:35:47 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-234</guid>
		<description>Regarding model selection, will &#039;forward selection&#039; and &#039;backward elimination&#039; converge on the same model?</description>
		<content:encoded><![CDATA[<p>Regarding model selection, will &#8216;forward selection&#8217; and &#8216;backward elimination&#8217; converge on the same model?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Ben Dichter</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-233</link>
		<dc:creator>Ben Dichter</dc:creator>
		<pubDate>Tue, 27 Mar 2012 04:29:08 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=109#comment-233</guid>
		<description>When looking at interaction effects, you might end up with a very large number of variables. How do you know which combinations of variables might be important? How do you deal with over-fitting?</description>
		<content:encoded><![CDATA[<p>When looking at interaction effects, you might end up with a very large number of variables. How do you know which combinations of variables might be important? How do you deal with over-fitting?</p>
]]></content:encoded>
	</item>
</channel>
</rss>

