<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Thurs Mar 22</title>
	<atom:link href="http://www.stat.cmu.edu/~kass/smnp/?feed=rss2&#038;p=107" rel="self" type="application/rss+xml" />
	<link>http://www.stat.cmu.edu/~kass/smnp/?p=107</link>
	<description></description>
	<lastBuildDate>Thu, 26 Apr 2012 14:07:02 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: Thomas Kraynak</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-227</link>
		<dc:creator>Thomas Kraynak</dc:creator>
		<pubDate>Thu, 22 Mar 2012 13:30:27 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-227</guid>
		<description>The procedures for performing multiple regression with two, three, or four explanatory variables seem like a straightforward extension of simple linear regression with one variable.  Are there issues associated with using more variables than that?  What is the &quot;limit&quot; on the number of explanatory variables typically used in the literature?</description>
		<content:encoded><![CDATA[<p>The procedures for performing multiple regression with two, three, or four explanatory variables seem like a straightforward extension of simple linear regression with one variable.  Are there issues associated with using more variables than that?  What is the &#8220;limit&#8221; on the number of explanatory variables typically used in the literature?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Noah</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-226</link>
		<dc:creator>Noah</dc:creator>
		<pubDate>Thu, 22 Mar 2012 13:05:07 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-226</guid>
		<description>When the F-statistic does not follow an F-distribution, is it completely useless? Or can a different distribution be fit to the available F-statistics in order to draw conclusions about the data? I would assume this would require being able to generate a large number of F-statistics (as in the HW); is that correct?</description>
		<content:encoded><![CDATA[<p>When the F-statistic does not follow an F-distribution, is it completely useless? Or can a different distribution be fit to the available F-statistics in order to draw conclusions about the data? I would assume this would require being able to generate a large number of F-statistics (as in the HW); is that correct?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Amanda Markey</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-225</link>
		<dc:creator>Amanda Markey</dc:creator>
		<pubDate>Thu, 22 Mar 2012 12:42:53 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-225</guid>
		<description>We talked about corrections for multiple hypothesis tests yesterday.  Is there a penalty for fitting many multiple regressions (each time changing which x&#039;s are included)?</description>
		<content:encoded><![CDATA[<p>We talked about corrections for multiple hypothesis tests yesterday.  Is there a penalty for fitting many multiple regressions (each time changing which x&#8217;s are included)?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: David Zhou</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-224</link>
		<dc:creator>David Zhou</dc:creator>
		<pubDate>Thu, 22 Mar 2012 12:40:56 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-224</guid>
		<description>Does the least-squares principle basically say that with an overdetermined system you&#039;re never going to find a linear combination of the variables that fits the data exactly, so you project onto as many dimensions as you can? I think I&#039;m familiar with the linear algebra behind it, but can you go over the statistical principles involved?

(I realize this is one section ahead.) How do you use polynomial and cosine regression? Do you have to assume ad hoc that some model fits? Is there any way to compute the order of the polynomial, or is it something you just fit and compare? I guess this goes back to what you say - all models are wrong, but some are useful?</description>
		<content:encoded><![CDATA[<p>Does the least-squares principle basically say that with an overdetermined system you&#8217;re never going to find a linear combination of the variables that fits the data exactly, so you project onto as many dimensions as you can? I think I&#8217;m familiar with the linear algebra behind it, but can you go over the statistical principles involved?</p>
<p>(I realize this is one section ahead.) How do you use polynomial and cosine regression? Do you have to assume ad hoc that some model fits? Is there any way to compute the order of the polynomial, or is it something you just fit and compare? I guess this goes back to what you say &#8211; all models are wrong, but some are useful?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Yijuan Du</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-223</link>
		<dc:creator>Yijuan Du</dc:creator>
		<pubDate>Thu, 22 Mar 2012 12:33:51 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-223</guid>
		<description>How do we represent interactions between these explanatory variables?</description>
		<content:encoded><![CDATA[<p>How do we represent interactions between these explanatory variables?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Matt Panico</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-222</link>
		<dc:creator>Matt Panico</dc:creator>
		<pubDate>Thu, 22 Mar 2012 08:07:45 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-222</guid>
		<description>When bootstrapping multivariate regressions, the bootstrapped sample sets will almost always generate a better fit (and smaller residuals) than the original data, due to repeated data. Does this cause an issue with analysis?</description>
		<content:encoded><![CDATA[<p>When bootstrapping multivariate regressions, the bootstrapped sample sets will almost always generate a better fit (and smaller residuals) than the original data, due to repeated data. Does this cause an issue with analysis?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rex Tien</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-221</link>
		<dc:creator>Rex Tien</dc:creator>
		<pubDate>Thu, 22 Mar 2012 05:38:09 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-221</guid>
		<description>I am still not quite sure how to calculate the sigma^2 and s^2 in equation 12.56. Which x-values do they arise from?

Also, I appreciated the geometric argument for understanding the matrix form of the least-squares fit. However, why does this insight apply to least squares and not to other methods like least absolute value?</description>
		<content:encoded><![CDATA[<p>I am still not quite sure how to calculate the sigma^2 and s^2 in equation 12.56. Which x-values do they arise from?</p>
<p>Also, I appreciated the geometric argument for understanding the matrix form of the least-squares fit. However, why does this insight apply to least squares and not to other methods like least absolute value?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Ben Dichter</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-220</link>
		<dc:creator>Ben Dichter</dc:creator>
		<pubDate>Thu, 22 Mar 2012 05:38:08 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-220</guid>
		<description>Is linear regression always performed assuming normal noise? Are there instances where you would use a different noise model, and would you still be able to find a closed-form solution? Would you be able to perform an analogous ANOVA?</description>
		<content:encoded><![CDATA[<p>Is linear regression always performed assuming normal noise? Are there instances where you would use a different noise model, and would you still be able to find a closed-form solution? Would you be able to perform an analogous ANOVA?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Shubham Debnath</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-218</link>
		<dc:creator>Shubham Debnath</dc:creator>
		<pubDate>Thu, 22 Mar 2012 04:56:51 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=107#comment-218</guid>
		<description>When using matrices for multiple regression, is it helpful to reduce a matrix to row echelon or reduced row echelon form?  And how can one apply nonlinear regression models similarly?</description>
		<content:encoded><![CDATA[<p>When using matrices for multiple regression, is it helpful to reduce a matrix to row echelon or reduced row echelon form?  And how can one apply nonlinear regression models similarly?</p>
]]></content:encoded>
	</item>
</channel>
</rss>