Posted on Tuesday, 7th February 2012
Please finish Chapter 7 and also read section 10.1.
I intend to focus on Sections 7.3.8 and 7.3.9, but
you may post a comment on anything in Chapter 7 or
section 10.1.
Posted in Class | Comments (12)

February 7th, 2012 at 10:05 pm
In section 7.3.9, it states that given that we know very little about our parameter a priori, it makes sense to use the uniform distribution as our prior pdf. Why is this the case, rather than a distribution like the standard normal?
Also, how is the posterior distribution at the top of page 199 calculated?
February 8th, 2012 at 9:18 am
Using Bayes' theorem, we have to choose a prior distribution. In the example, is the form of the prior distribution the same as the form of the data distribution? Is this always true? If so, are we simply optimizing the parameters for a distribution we must intuitively choose? Are there any guidelines for narrowing down the possibilities of distributions?
I found the distinction between the distribution of theta and the distribution of the data confusing. From what I understand, we always assume that the distribution of the data is binomial. And then, because we don't have any intuitive guess about theta, we assume a uniform distribution. Had we known something about theta, would we have chosen a distribution that incorporated that knowledge? Can you give an example of that?
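To make my question concrete, here is a quick sketch (my own, not from the book) of what I think the update in the example does: with a uniform prior on the binomial success probability theta, which is the same as a Beta(1, 1) distribution, the posterior after k successes in n trials is again a Beta distribution, Beta(1 + k, 1 + n - k).

```python
def beta_binomial_posterior(k, n, a=1.0, b=1.0):
    """Posterior Beta(a', b') parameters after observing k successes
    in n binomial trials, starting from a Beta(a, b) prior.
    a = b = 1 corresponds to the uniform prior on theta."""
    return a + k, b + (n - k)

# Hypothetical data: 7 successes in 10 trials, uniform prior.
a_post, b_post = beta_binomial_posterior(7, 10)

# The posterior mean of a Beta(a, b) distribution is a / (a + b).
posterior_mean = a_post / (a_post + b_post)
```

If this is right, choosing a prior from the same family as the data model is what keeps the posterior in a recognizable form, which is part of what I was asking about.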
February 8th, 2012 at 4:28 pm
I am still having trouble understanding how the degrees of freedom is calculated from the different scenarios.
February 8th, 2012 at 4:52 pm
When assigning a prior distribution, can one use something other than a uniform distribution? Uniform is obviously the simplest choice, but could you use, say, a normal distribution?
February 8th, 2012 at 11:25 pm
I don’t fully understand how the U and L of the CI are random variables. Can you clarify how they are random variables rather than constants?
February 8th, 2012 at 11:28 pm
It seems that the last sentence of 7.3.9 contradicts the arguments made earlier. So is it actually OK to think of the confidence interval as meaning there is a 95% probability that the parameter is in the interval? If not exactly true, is this still a useful way to think about it, or are there cases where this way of considering CIs leads us to significant errors?
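Trying to answer my own question, here is a small simulation (my own sketch, not from the text): the endpoints L and U change with every sample, and it is the long-run fraction of intervals that cover the fixed true mean that is 95%, not a probability statement about any one interval.

```python
import random
import statistics

random.seed(0)

TRUE_MEAN = 10.0   # fixed, unknown-in-practice parameter
SIGMA = 2.0        # assume sigma known, so a z-interval applies
N = 50             # sample size per experiment
TRIALS = 2000      # number of repeated experiments

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    xbar = statistics.mean(sample)
    half = 1.96 * SIGMA / N ** 0.5   # 95% half-width, known sigma
    L, U = xbar - half, xbar + half  # L and U are random: new each sample
    covered += (L <= TRUE_MEAN <= U)

coverage = covered / TRIALS          # should be close to 0.95
```

So the randomness is in the interval, not the parameter, which I think is the distinction the chapter is making.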
February 9th, 2012 at 12:34 am
Is the width of the confidence interval always similar to the width of the credible interval?
February 9th, 2012 at 1:43 am
Are we justified in modeling chi^2 as a Poisson just because it is a count? Are there ever counts that are poorly modeled by a Poisson distribution? Also, how would you handle a continuous distribution? You could bin it, but the size of the bin will be arbitrary. Is there a better way?
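As a toy example of my own second question (these numbers are made up): a Poisson model forces the variance to equal the mean, so counts that arrive in clusters can be badly overdispersed relative to a Poisson.

```python
import statistics

# Hypothetical clustered count data: mostly zeros with occasional bursts.
# A Poisson model would require variance ~ mean; here variance >> mean.
clustered_counts = [0, 0, 0, 0, 1, 0, 0, 12, 0, 0, 15, 0, 0, 0, 1]

m = statistics.mean(clustered_counts)
v = statistics.variance(clustered_counts)

# Dispersion index: ~1 for Poisson data, much larger when overdispersed.
dispersion = v / m
```

So "it is a count" alone does not seem to justify a Poisson model; the mean-variance relationship has to hold too.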
February 9th, 2012 at 9:19 am
What kinds of problems favor using the credible interval over the confidence interval?
February 9th, 2012 at 9:22 am
When should we use Bayes’ Theorem for uncertainty assessment?
February 9th, 2012 at 9:24 am
What significance does the π term have in the pdf π(θ) = f_θ(θ)? Is it a symbol simply chosen because of its independence from the other terms, and not because of its value?
February 9th, 2012 at 9:25 am
By π I mean pi; it’s not showing up well in the browser.