Posted on Tuesday, 15th January 2013
Non-computational students: you should read KEB Sections 3.2.1–3.2.3, 7.3.1, 7.3.5, 7.3.8–7.3.9, 5.1–5.3, 5.4.1–5.4.3, 5.5.2, 6.1.1, 6.2.1, 6.3.1, 8.1, 8.2–8.3 (skim), and then post a comment or question on this blog. This is A LOT of material, and it is OK if you only get partway through. I have listed the sections in order of immediate importance. Part of what I will do in Thursday’s class is check in with you about how it is going. You should be frank with me.
POST YOUR COMMENT OR QUESTION BY 7:00 PM WEDNESDAY
Computational students: come to class at 11:30 with ideas about which paper you might like to present. If you are not sure, that’s OK; I hope you will at least have thought about possible topics.
Posted in Class | Comments (2)

January 16th, 2013 at 5:42 pm
I was confused by Slutsky’s theorem and how it was applied on page 198 (in 7.3.5) to derive the confidence interval. It’s not apparent to me how this application connects to equations 7.21 and 7.22.
In addition, I found it a bit difficult to keep the nomenclature straight. A brief review of what T and theta represent would be very useful, since at the moment they are scattered throughout the chapters, often in sections we are not assigned to read. Thanks!
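In case it helps anyone else puzzling over the same section: my current picture is that Slutsky’s theorem is what lets us swap the unknown sigma in the normal-limit interval for the sample estimate s without changing the limiting distribution. A quick simulation sketch of that idea (my own toy setup with made-up numbers, not from KEB):

```python
import math
import random

random.seed(0)

# Slutsky's theorem: if sqrt(n)*(Xbar - mu)/sigma -> N(0,1) and
# s -> sigma in probability, then sqrt(n)*(Xbar - mu)/s -> N(0,1) too.
# So the plug-in interval Xbar +/- 1.96 * s/sqrt(n) should cover mu
# about 95% of the time, even for a skewed population.
mu, n, reps = 2.0, 200, 2000
covered = 0
for _ in range(reps):
    xs = [random.expovariate(1 / mu) for _ in range(n)]  # exponential, mean mu
    xbar = sum(xs) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in xs) / (n - 1))
    half = 1.96 * s / math.sqrt(n)   # estimated sigma plugged in here
    if xbar - half <= mu <= xbar + half:
        covered += 1

coverage = covered / reps  # should land near 0.95
```

At least in simulations like this, the plug-in interval behaves as advertised, which I think is the point of invoking the theorem on page 198.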
January 16th, 2013 at 7:01 pm
I haven’t finished the readings yet, but I found the recasting of standard descriptive statistics (i.e., mean, variance, standard error) as probability functions helpful in connecting with my prior understanding. Based on the discussions of uncertainty, it sounds as though many statistical tests are just formalized ways of summarizing means and variances into a single distribution that assumes no real difference. Significance testing, then, is the logic of what falls outside the confidence interval. Is that close or way off base?