Posted on Saturday, 4th February 2012

Please read Sections 7.1-7.3.8 and 7.3.10, then post a comment.

Posted in Class | Comments (17)

  1. Eric VanEpps Says:

I’m still confused about how to carry out the method of maximum likelihood, and in particular how the “inversion” of the pdf into a function of the parameter works. Can we go over a couple of different examples in class?
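
    My best attempt at what I think the “inversion” means (a minimal sketch, assuming Python with NumPy and a made-up coin-flip sample): the same Bernoulli formula is evaluated with the data held fixed and the parameter varied, and the maximizer is read off a grid. Is this the right picture?

        import numpy as np

        x = np.array([1, 0, 1, 1, 0, 1, 1, 1])  # hypothetical coin-flip data
        thetas = np.linspace(0.01, 0.99, 99)    # candidate parameter values

        # Likelihood: the joint pmf of the fixed data, read as a function of theta
        L = np.array([np.prod(t**x * (1 - t)**(1 - x)) for t in thetas])

        print(thetas[np.argmax(L)])             # grid maximizer; here mean(x) = 0.75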

  2. Ben Dichter Says:

    How do I know when to use a t-distribution instead of a normal distribution?

  3. Shubham Debnath Says:

    How do you get the unknown parameters for encoding rate, latency, and memory capacity?

  4. Shubham Debnath Says:

    ^ In reference to the equation on page 176.

  5. Rob Rasmussen Says:

Does the standard error suffer from any bias when the sample mean is used, as in Example 1.2.1, in the same way that the uncorrected sample variance formula does?

  6. Rex Tien Says:

These estimation strategies all give equal weight to all of the data samples. Is it ever a good idea to “trust” certain data more and give them more weight when estimating parameters, and is there a method for doing this?
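
    Something like the following is what I have in mind (my own sketch, not from the chapter, assuming Python with NumPy and made-up per-observation noise levels): weighting each observation by the inverse of its variance.

        import numpy as np

        x = np.array([2.1, 1.9, 2.6, 2.0])    # hypothetical measurements
        var = np.array([0.1, 0.1, 1.0, 0.2])  # assumed known noise variance of each

        w = 1.0 / var                         # "trust" precise observations more
        print(np.sum(w * x) / np.sum(w))      # inverse-variance weighted mean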

  7. mpanico Says:

If we have to use the sample mean to estimate the standard deviation of the distribution, are confidence intervals (based on this standard deviation estimate) estimates as well?

  8. Kelly Says:

First off, I want to re-pose the question I asked on Thursday: I don’t understand why it is true that sample means are approximately normally distributed even when the distribution from which the data are drawn is not normal.
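
    For concreteness, here is the kind of thing I mean (a quick simulation sketch, assuming Python with NumPy; the exponential is just an arbitrary non-normal choice):

        import numpy as np

        rng = np.random.default_rng(0)

        # 10000 samples of size n = 50 from a skewed, non-normal distribution
        samples = rng.exponential(scale=1.0, size=(10000, 50))
        means = samples.mean(axis=1)

        # The 10000 sample means come out bell-shaped, centered near 1.0
        # with spread near 1/sqrt(50), even though each draw is exponential.
        print(means.mean(), means.std())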

To move on to chapter 7, I noticed that you briefly discuss the method of moments and then move quickly to maximum likelihood estimation. I’m not quite clear on the relationship between the two methods of estimation (other than that they’re mentioned close together). How, if at all, are they related, and are there particular cases in which one or the other method is better to use?

  9. Sharlene Flesher Says:

At the end of 7.2.1 it is mentioned that higher-order moments could be used. Why would they be used? Also, at the end of 7.3.1 it says, “In many applications an estimator θ̂ follows an approximately normal distribution.” How does the estimator come to follow a normal distribution?

  10. Scott Kennedy Says:

    The premise of the maximum likelihood estimator is to choose some theta that gives the highest probability of some observed event x.

    However, what if x lies along one of the tails of a distribution? If we assumed that the probability of x was high, wouldn’t we pick the wrong distribution? Maximizing the probability of x seems like it should only be appropriate for the x’s that are near the peak of the pdf.

    I think what it boils down to is a question about how to quantify the total probability for a vector of many observations. Given independence, is this just the product of the individual probabilities?
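
    If I have that right, it is the product that gets maximized, or equivalently its logarithm, which turns the product into a sum. A minimal sketch of what I mean (assuming Python with NumPy/SciPy and made-up normal data):

        import numpy as np
        from scipy.stats import norm

        x = np.array([1.2, 0.7, 1.9, 1.4, 0.9])  # hypothetical observations
        mus = np.linspace(0, 3, 301)             # candidate values of the mean

        # Independence: joint density = product of individual densities,
        # so the log-likelihood is a sum of log-densities.
        loglik = np.array([norm.logpdf(x, loc=m, scale=1.0).sum() for m in mus])

        print(mus[np.argmax(loglik)])            # maximizer; here the sample mean, 1.22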

  11. Jay Scott Says:

The advantage of the “Method of Moments” over “Maximum Likelihood” seems to be that it is much easier to compute, the tradeoff being a less reliable parameter estimate. Aside from small sample size, are there any dataset characteristics indicating the “Method of Moments” should not be used?

  12. Yijuan Says:

Could you talk more about ‘the choice of the “desired” SE^2’?

  13. nsnyder Says:

It seems that when discussing the likelihood, maximizing L(theta) will converge on a single parameter value. Is this correct, and is it always the case? In other words, if you combined L(theta) over all of the vector x, could there be multiple maxima?
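
    For example, here is a made-up case where the likelihood does seem to have two maxima (a sketch assuming Python with NumPy/SciPy, using two Cauchy observations placed far apart):

        import numpy as np
        from scipy.stats import cauchy

        x = np.array([-3.0, 3.0])                # two made-up observations, far apart
        thetas = np.linspace(-6, 6, 1201)
        loglik = np.array([cauchy.logpdf(x, loc=t).sum() for t in thetas])

        # This log-likelihood has two local maxima (near -2.8 and +2.8) with a
        # local minimum at theta = 0, so the maximizer need not be unique.
        left, right = thetas < 0, thetas > 0
        print(thetas[left][np.argmax(loglik[left])],
              thetas[right][np.argmax(loglik[right])])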

  14. Amanda Markey Says:

    I’m not clear on the costs and benefits of MLE vs Method of Moments. When and why would you use one vs. the other?

    Also, how frequently do they produce the same estimator? (never? often?)
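
    For what it’s worth, here is one made-up case where they clearly differ (a sketch assuming Python with NumPy): for Uniform(0, theta), the method of moments gives 2 times the sample mean, while maximum likelihood gives the largest observation.

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.uniform(0, 5.0, size=20)  # hypothetical Uniform(0, theta=5) data

        theta_mom = 2 * x.mean()          # method of moments: E[X] = theta/2
        theta_mle = x.max()               # maximum likelihood: largest observation
        print(theta_mom, theta_mle)       # generally two different numbers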

  15. Rich Truncellito Says:

I found the descriptions and explanations in this chapter very clear. So, the one detail that I find myself missing is a small technical one: how do we get V(T) = p(1-p)/n from V(Y) = np(1-p)? Mainly, where does the division by n^2 come in?
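
    My guess (assuming T = Y/n, the sample proportion) is that it is just the scaling rule for variances, V(aY) = a^2 V(Y) with a = 1/n:

        V(T) = V(Y/n) = (1/n^2) V(Y) = np(1-p)/n^2 = p(1-p)/n

    Is that right?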

  16. Matt Bauman Says:

    I found the section on maximum likelihood very confusing initially. It took a few reads before I finally began understanding it. One lingering question: how can x have five possible values if n=4?

  17. Thomas Kraynak Says:

I still don’t quite understand, from your explanation on p. 179, why we can consider theta a theoretical value.
