<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Thurs Apr 5</title>
	<atom:link href="http://www.stat.cmu.edu/~kass/smnp/?feed=rss2&#038;p=116" rel="self" type="application/rss+xml" />
	<link>http://www.stat.cmu.edu/~kass/smnp/?p=116</link>
	<description></description>
	<lastBuildDate>Thu, 26 Apr 2012 14:07:02 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: Rob Rasmussen</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-281</link>
		<dc:creator>Rob Rasmussen</dc:creator>
		<pubDate>Thu, 05 Apr 2012 13:17:24 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-281</guid>
		<description>I am having trouble seeing how the spline regression is solved by fitting equation 16.3</description>
		<content:encoded><![CDATA[<p>I am having trouble seeing how the spline regression is solved by fitting equation 16.3</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rich Truncellito</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-280</link>
		<dc:creator>Rich Truncellito</dc:creator>
		<pubDate>Thu, 05 Apr 2012 12:44:20 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-280</guid>
		<description>In reading about choosing a number of knots for creating splines, I initially found it counterintuitive that choosing more knots to create more splines generates a less smooth fit than choosing fewer knots to create fewer splines. Does choosing more knots generate a less smooth fit, because with more knots the whole fitted curve becomes overly sensitive to slight changes in the data?</description>
		<content:encoded><![CDATA[<p>In reading about choosing a number of knots for creating splines, I initially found it counterintuitive that choosing more knots to create more splines generates a less smooth fit than choosing fewer knots to create fewer splines. Does choosing more knots generate a less smooth fit, because with more knots the whole fitted curve becomes overly sensitive to slight changes in the data?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Amanda Markey</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-279</link>
		<dc:creator>Amanda Markey</dc:creator>
		<pubDate>Thu, 05 Apr 2012 12:39:23 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-279</guid>
		<description>On page 486, I&#039;m not sure why x4 = (0,0,0,1,8,27,64)^T.</description>
		<content:encoded><![CDATA[<p>On page 486, I&#8217;m not sure why x4 = (0,0,0,1,8,27,64)^T.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Matt Bauman</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-278</link>
		<dc:creator>Matt Bauman</dc:creator>
		<pubDate>Thu, 05 Apr 2012 12:26:18 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-278</guid>
		<description>One powerful feature of regression is its ability to compare different datasets. How would one compare two datasets when fitting with splines? Do you keep the knot locations constant? Or simply the number of knots? Is this even done? More specifically, can you obtain a p-value in comparison to some null hypothesis (e.g., there&#039;s only one spline segment)?</description>
		<content:encoded><![CDATA[<p>One powerful feature of regression is its ability to compare different datasets. How would one compare two datasets when fitting with splines? Do you keep the knot locations constant? Or simply the number of knots? Is this even done? More specifically, can you obtain a p-value in comparison to some null hypothesis (e.g., there&#8217;s only one spline segment)?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Matt Panico</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-277</link>
		<dc:creator>Matt Panico</dc:creator>
		<pubDate>Thu, 05 Apr 2012 06:21:54 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-277</guid>
		<description>Are splines useful with multiple predictors? Solving that many equations might make the computation prohibitively slow.</description>
		<content:encoded><![CDATA[<p>Are splines useful with multiple predictors? Solving that many equations might make the computation prohibitively slow.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: David Zhou</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-276</link>
		<dc:creator>David Zhou</dc:creator>
		<pubDate>Thu, 05 Apr 2012 06:19:10 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-276</guid>
		<description>I realize that this might be problematic, but what is the major error in choosing knots at points of inflection, found from a derivative of the curve that you&#039;re trying to fit? I&#039;m guessing that builds some assumptions about the curve&#039;s behavior into the regression, so can you talk about some of the things the choice of knot set is trying to avoid?</description>
		<content:encoded><![CDATA[<p>I realize that this might be problematic, but what is the major error in choosing knots at points of inflection, found from a derivative of the curve that you&#8217;re trying to fit? I&#8217;m guessing that builds some assumptions about the curve&#8217;s behavior into the regression, so can you talk about some of the things the choice of knot set is trying to avoid?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jay Scott</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-275</link>
		<dc:creator>Jay Scott</dc:creator>
		<pubDate>Thu, 05 Apr 2012 05:09:38 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-275</guid>
		<description>On pp. 489 and 490, I am having a lot of difficulty understanding what you mean by &quot;A second intuition is obtained by replacing the least-squares problem of minimizing the sum of squares with the penalized least squares problem of minimizing the penalized sum of squares where λ is a constant.&quot;

Also, if the squared 2nd derivative is a &quot;roughness penalty&quot; and its integral is a roughness measure, does that simply mean that the 1st derivative is the roughness measure? If yes, would a derivative of any roughness measure be its penalty?</description>
		<content:encoded><![CDATA[<p>On pp. 489 and 490, I am having a lot of difficulty understanding what you mean by &#8220;A second intuition is obtained by replacing the least-squares problem of minimizing the sum of squares with the penalized least squares problem of minimizing the penalized sum of squares where λ is a constant.&#8221;</p>
<p>Also, if the squared 2nd derivative is a &#8220;roughness penalty&#8221; and its integral is a roughness measure, does that simply mean that the 1st derivative is the roughness measure? If yes, would a derivative of any roughness measure be its penalty?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Shubham Debnath</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-274</link>
		<dc:creator>Shubham Debnath</dc:creator>
		<pubDate>Thu, 05 Apr 2012 04:39:38 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-274</guid>
		<description>Are there other nonlinear smoothers aside from BARS? Are they useful at all? And how much slower are they than linear smoothers or BARS?</description>
		<content:encoded><![CDATA[<p>Are there other nonlinear smoothers aside from BARS? Are they useful at all? And how much slower are they than linear smoothers or BARS?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Ben Dichter</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-273</link>
		<dc:creator>Ben Dichter</dc:creator>
		<pubDate>Thu, 05 Apr 2012 04:39:21 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-273</guid>
		<description>It seems the roughness penalty plays a similar role to the correction term we used earlier to prevent overfitting in multivariate regression. Is the second derivative always something you want to minimize? Would you penalize anything else?</description>
		<content:encoded><![CDATA[<p>It seems the roughness penalty plays a similar role to the correction term we used earlier to prevent overfitting in multivariate regression. Is the second derivative always something you want to minimize? Would you penalize anything else?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rex Tien</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-272</link>
		<dc:creator>Rex Tien</dc:creator>
		<pubDate>Thu, 05 Apr 2012 03:54:27 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=116#comment-272</guid>
		<description>We focus mainly on cubic splines, but we could theoretically use any type of polynomial for splines. Are splines limited to polynomials so that the regression method can be used to fit them? Or can we use any type of function? If so, is there a method for determining what form of function is best for the splines?</description>
		<content:encoded><![CDATA[<p>We focus mainly on cubic splines, but we could theoretically use any type of polynomial for splines. Are splines limited to polynomials so that the regression method can be used to fit them? Or can we use any type of function? If so, is there a method for determining what form of function is best for the splines?</p>
]]></content:encoded>
	</item>
</channel>
</rss>