Posted on Sunday, 11th March 2012

Please read section 9.2 and post a comment.

Posted in Class | Comments (13)

  1. Eric VanEpps Says:

    You mention that, in practice, the parametric and nonparametric bootstraps often produce similar confidence intervals and standard error assessments, so choosing between them is often a matter of convenience. Aside from those examples in which the data might not be i.i.d., would there be any other reason not to use the nonparametric version, since it takes fewer steps to simply resample the data? In other words, are there any i.i.d. samples for which the parametric bootstrap is preferred to the nonparametric one?

  2. Yijuan Du Says:

    I’m still a little confused about the difference between the parametric bootstrap and simulation-based propagation of uncertainty.

  3. Scott Kennedy Says:

    Is it possible to use the nonparametric bootstrap as a way to check the validity of the assumed distribution used in a parametric bootstrap?

  4. Rob Rasmussen Says:

    I am confused as to how the estimator given in 9.21 would be more efficient than computing the sample mean.

  5. Noah Says:

    It seems that the nonparametric bootstrap will in most cases be easier and just as effective; are there any real benefits to using the parametric bootstrap? Also, it is mentioned that one of the shortcomings of the nonparametric bootstrap is that it requires an i.i.d. sample; however, wouldn’t that also be a requirement for the parametric bootstrap?

  6. Sharlene Flesher Says:

    Under what circumstances would you use the bootstrap, rather than another method, to estimate the SE?

  7. Shubham Debnath Says:

    Because the parametric and nonparametric methods produce similar results and can be applied to similar sets of data (albeit with a little manipulation), is one of them used more often, or perhaps preferred before trying the other?

  8. Matt Panico Says:

    It seems to me that the nonparametric bootstrap requires a very large sample set to allow the resampling to accurately represent the underlying CDF. Is there a good rule of thumb about sample size?

  9. Ben Dichter Says:

    I don’t quite understand the point of the parametric bootstrap. In order to generate data, we have to estimate the distribution’s parameters. Well, if we are willing to accept those parameters, why do we even care about generating data? If we must choose a distribution in order to use MLE, then we could just look up the variance of that distribution in a book and skip the data-generation step. Also, how are we to know how much we can trust our estimate of θ?

  10. Rex Tien Says:

    It seems like resampling the data could amplify any sort of bias or error that was present. Is there a way of detecting this?

    Similarly, is there a way of detecting when the data are not sufficiently i.i.d.? Can you give an example of a case where the sampling is not i.i.d.?

  11. Matt Bauman Says:

    It seems to me that both these methods would suffer greatly with smaller sample sizes. Is there a rule of thumb that you follow to determine the minimum number of samples for performing the bootstrap?

  12. Rich Truncellito Says:

    I’m not sure that I understand the distinction between arbitrary shuffles of the data and i.i.d. sampling, when both are drawn from the same model/pool.

  13. Thomas Kraynak Says:

    Can you go into more detail about how the unknown φ is obtained? The text seems to give two fairly distinct methods of using this value (the parameter vector φ = f(θ), or the distribution’s median), and I’m having trouble deciding when to use each one.
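
Several of the questions above (1, 5, 7, and 9 in particular) ask how the parametric and nonparametric bootstraps actually compare. Below is a minimal sketch of my own, not an example from section 9.2: it assumes a normal model fit by maximum likelihood, takes the median as the statistic of interest, and computes the bootstrap standard error both ways from a single i.i.d. sample.

```python
# Illustrative example (not from the text): normal model, median statistic.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=50)   # one observed i.i.d. sample
B = 2000                                      # number of bootstrap replicates

def stat(sample):
    return np.median(sample)                  # statistic of interest

# Nonparametric bootstrap: resample the observed data with replacement.
nonparam = np.array([stat(rng.choice(x, size=x.size, replace=True))
                     for _ in range(B)])

# Parametric bootstrap: fit the assumed model (normal, via ML estimates),
# then simulate fresh samples from the fitted model.
mu_hat, sigma_hat = x.mean(), x.std(ddof=0)
param = np.array([stat(rng.normal(mu_hat, sigma_hat, size=x.size))
                  for _ in range(B)])

print("nonparametric bootstrap SE of the median:", nonparam.std(ddof=1))
print("parametric bootstrap SE of the median:   ", param.std(ddof=1))
```

When the assumed model fits, the two standard errors typically agree closely; when it does not, they can diverge, which is one informal way to probe the model assumption along the lines of comment 3.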
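Comment 10 also asks for an example where the observations are not i.i.d. The second sketch, again my own illustration rather than anything from the text, simulates an autocorrelated AR(1) series and shows that naively resampling individual points (which treats them as exchangeable) tends to understate the standard error of the mean.

```python
# Illustrative example (not from the text): AR(1) data, naive resampling.
import numpy as np

rng = np.random.default_rng(1)

def ar1(n, phi=0.9, sigma=1.0):
    """Simulate an AR(1) series: x[t] = phi * x[t-1] + noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

n, B, reps = 200, 1000, 500

# "True" SE of the sample mean, approximated by simulating many series.
true_se = np.std([ar1(n).mean() for _ in range(reps)], ddof=1)

# Naive i.i.d. bootstrap SE computed from a single observed series.
x = ar1(n)
boot_means = [rng.choice(x, size=n, replace=True).mean() for _ in range(B)]
naive_se = np.std(boot_means, ddof=1)

print("approximate true SE of the mean:", true_se)
print("naive i.i.d. bootstrap SE:      ", naive_se)
```

With strong positive autocorrelation the naive bootstrap standard error comes out much too small, because resampling single points destroys the dependence structure; resampling variants such as the block bootstrap are designed for this situation.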
