<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Thurs Apr 12</title>
	<atom:link href="http://www.stat.cmu.edu/~kass/smnp/?feed=rss2&#038;p=121" rel="self" type="application/rss+xml" />
	<link>http://www.stat.cmu.edu/~kass/smnp/?p=121</link>
	<description></description>
	<lastBuildDate>Thu, 26 Apr 2012 14:07:02 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0.4</generator>
	<item>
		<title>By: Thomas Kraynak</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-310</link>
		<dc:creator>Thomas Kraynak</dc:creator>
		<pubDate>Thu, 12 Apr 2012 13:29:46 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-310</guid>
		<description>Could you go over how you formulate the sample ACF?</description>
		<content:encoded><![CDATA[<p>Could you go over how you formulate the sample ACF?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Matt Bauman</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-309</link>
		<dc:creator>Matt Bauman</dc:creator>
		<pubDate>Thu, 12 Apr 2012 13:23:13 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-309</guid>
		<description>I&#039;ve heard some forms of AR(p) models being referred to as Wiener Cascades or Wiener Filters -- are there any differences between these three terms? Or are they just synonyms?</description>
		<content:encoded><![CDATA[<p>I&#8217;ve heard some forms of AR(p) models being referred to as Wiener Cascades or Wiener Filters &#8212; are there any differences between these three terms? Or are they just synonyms?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rob Rasmussen</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-308</link>
		<dc:creator>Rob Rasmussen</dc:creator>
		<pubDate>Thu, 12 Apr 2012 13:01:26 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-308</guid>
		<description>Could you explain more about the difference between ACF and PACF?</description>
		<content:encoded><![CDATA[<p>Could you explain more about the difference between ACF and PACF?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rich Truncellito</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-307</link>
		<dc:creator>Rich Truncellito</dc:creator>
		<pubDate>Thu, 12 Apr 2012 12:26:53 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-307</guid>
		<description>In describing the autocovariance function on p. 512, you define h = t - s. Does this definition of h also apply to the h in the description of &quot;strictly stationary&quot; on the previous page? In other words, does the number of elements in each set of variables {Xt, Xt+1,..., Xt+h} and {Xs, Xs+1,..., Xs+h} have to agree with h = t - s?</description>
		<content:encoded><![CDATA[<p>In describing the autocovariance function on p. 512, you define h = t &#8211; s. Does this definition of h also apply to the h in the description of &#8220;strictly stationary&#8221; on the previous page? In other words, does the number of elements in each set of variables {Xt, Xt+1,&#8230;, Xt+h} and {Xs, Xs+1,&#8230;, Xs+h} have to agree with h = t &#8211; s?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Yijuan Du</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-306</link>
		<dc:creator>Yijuan Du</dc:creator>
		<pubDate>Thu, 12 Apr 2012 11:38:16 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-306</guid>
		<description>In the example of EEG and EPSC, I don&#039;t quite understand &#039;the time scale their variation occurs&#039; compared to the observation interval.</description>
		<content:encoded><![CDATA[<p>In the example of EEG and EPSC, I don&#8217;t quite understand &#8216;the time scale their variation occurs&#8217; compared to the observation interval.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Amanda Markey</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-305</link>
		<dc:creator>Amanda Markey</dc:creator>
		<pubDate>Thu, 12 Apr 2012 10:24:46 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-305</guid>
		<description>You say, &quot;many physical phenomena may be described by applying this technique&quot;. What are some other examples?</description>
		<content:encoded><![CDATA[<p>You say, &#8220;many physical phenomena may be described by applying this technique&#8221;. What are some other examples?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: David Zhou</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-304</link>
		<dc:creator>David Zhou</dc:creator>
		<pubDate>Thu, 12 Apr 2012 08:13:30 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-304</guid>
		<description>I&#039;m not clear about the utility of spectral analysis - you mention it but it doesn&#039;t seem to factor into autocorrelation analysis - is it more important for Fourier analysis (which we skipped)?</description>
		<content:encoded><![CDATA[<p>I&#8217;m not clear about the utility of spectral analysis &#8211; you mention it but it doesn&#8217;t seem to factor into autocorrelation analysis &#8211; is it more important for Fourier analysis (which we skipped)?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jay Scott</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-303</link>
		<dc:creator>Jay Scott</dc:creator>
		<pubDate>Thu, 12 Apr 2012 05:15:37 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-303</guid>
		<description>I do not understand the generalization of 18.26 to 18.27:

X_t = phi*X_(t-1) + W_t
to
X_t = Summation i=1 to p of [phi_i*X_(t-i)] + W_t,

or how the magnitude of phi determines causality (pp. 527-8).</description>
		<content:encoded><![CDATA[<p>I do not understand the generalization of 18.26 to 18.27:</p>
<p>X_t = phi*X_(t-1) + W_t<br />
to<br />
X_t = Summation i=1 to p of [phi_i*X_(t-i)] + W_t,</p>
<p>or how the magnitude of phi determines causality (pp. 527-8).</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rex Tien</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-302</link>
		<dc:creator>Rex Tien</dc:creator>
		<pubDate>Thu, 12 Apr 2012 04:05:29 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-302</guid>
		<description>You mention that harmonics can often be fitted with the form omega_2 = k*omega_1, where k is an integer. Could we also use fractional k values? Is there some mathematical reason why harmonics with integer k&#039;s should improve the fit?</description>
		<content:encoded><![CDATA[<p>You mention that harmonics can often be fitted with the form omega_2 = k*omega_1, where k is an integer. Could we also use fractional k values? Is there some mathematical reason why harmonics with integer k&#8217;s should improve the fit?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Ben Dichter</title>
		<link>http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-301</link>
		<dc:creator>Ben Dichter</dc:creator>
		<pubDate>Thu, 12 Apr 2012 03:25:11 +0000</pubDate>
		<guid isPermaLink="false">http://www.stat.cmu.edu/~kass/smnp/?p=121#comment-301</guid>
		<description>Could you go through the math in the autocorrelation illustration?</description>
		<content:encoded><![CDATA[<p>Could you go through the math in the autocorrelation illustration?</p>
]]></content:encoded>
	</item>
</channel>
</rss>