Posted on Sunday, 11th March 2012
Please read section 9.2 and post a comment.
Posted in Class | Comments (13)
March 12th, 2012 at 9:53 am
You mention that in practice, the parametric and nonparametric bootstraps often produce similar confidence intervals and standard error assessments, so choosing between them is often a matter of convenience. Aside from those examples in which the data might not be i.i.d., would there be any other reason not to use the nonparametric bootstrap, since it takes fewer steps to just resample the data? In other words, are there any i.i.d. samples for which the parametric bootstrap is preferred to the nonparametric?
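To make the comparison in this question concrete, here is a minimal sketch of the two bootstraps side by side for the standard error of a sample mean. The normal model, sample size, and replicate count are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100)  # stand-in for an observed i.i.d. sample
B = 2000                                       # number of bootstrap replicates

# Nonparametric bootstrap: resample the observed data with replacement.
np_means = np.array([rng.choice(x, size=x.size, replace=True).mean()
                     for _ in range(B)])

# Parametric bootstrap: fit a model by MLE (here a normal),
# then simulate fresh samples from the fitted distribution.
mu_hat, sigma_hat = x.mean(), x.std()  # normal MLEs
p_means = np.array([rng.normal(mu_hat, sigma_hat, size=x.size).mean()
                    for _ in range(B)])

# Both standard deviations approximate SE of the mean, sigma/sqrt(n):
print(np_means.std(), p_means.std())
```

When the assumed model is roughly right, the two SE estimates come out close, which is why convenience often decides between them.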
March 12th, 2012 at 4:51 pm
I’m still a little confused about the difference between the parametric bootstrap and simulation-based propagation of uncertainty.
March 12th, 2012 at 7:05 pm
Is it possible to use the nonparametric bootstrap as a way to check the validity of the assumed distribution used in a parametric bootstrap?
March 12th, 2012 at 8:47 pm
I am confused as to how the estimator given in 9.21 would be more efficient than computing the sample mean.
March 12th, 2012 at 9:55 pm
It seems that the nonparametric bootstrap will in most cases be easier and just as effective; are there any real benefits to using the parametric bootstrap? Also, it is mentioned that one shortcoming of the nonparametric bootstrap is that it requires an i.i.d. sample; however, I would have thought that this would also be a requirement for the parametric bootstrap.
March 12th, 2012 at 10:46 pm
Under what circumstances would you use bootstrap, rather than another method, to estimate SE?
March 12th, 2012 at 11:08 pm
Because the parametric and nonparametric methods produce similar results and can be used for similar sets of data (albeit with a little manipulation), is one used more often? Or is one perhaps preferred before trying the other?
March 12th, 2012 at 11:12 pm
It seems to me that the nonparametric bootstrap requires a very large sample set to allow the resampling to accurately represent the underlying CDF. Is there a good rule of thumb about sample size?
March 12th, 2012 at 11:53 pm
I don’t quite understand the point of the parametric bootstrap. In order to generate data we have to estimate the distribution parameters. Well, if we are willing to accept those parameters, why do we even care about generating data? If we must choose a distribution to use MLE, then we could just look up the variance of that distribution in a book and skip the data-generation step. Also, how are we to know how much we can trust our estimate of theta?
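One reason the data-generation step earns its keep, which this question touches on, is that looking up a variance in a book only works for statistics with simple closed-form sampling distributions. For something like a sample median under a fitted model, there is generally only an asymptotic approximation, and the parametric bootstrap sidesteps that. A minimal sketch, with an exponential model and sample size chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=3.0, size=80)  # stand-in for an observed sample

# MLE for an exponential: the fitted scale is just the sample mean.
scale_hat = x.mean()

# The SE of the sample *median* under this model has no simple textbook
# formula, so we simulate from the fitted distribution and recompute it.
B = 2000
medians = np.array([np.median(rng.exponential(scale_hat, size=x.size))
                    for _ in range(B)])
se_median = medians.std()
print(se_median)
```

The same loop works for any statistic you can compute, which is the practical payoff of generating data rather than consulting a table.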
March 13th, 2012 at 1:30 am
It seems like resampling the data could amplify any sort of bias or error that was present. Is there a way of detecting this?
Similarly, is there a way of detecting when the data are not i.i.d.? Can you give an example of a case where the sampling is not i.i.d.?
March 13th, 2012 at 8:19 am
It seems to me that both these methods would suffer greatly with smaller sample sizes. Is there a rule of thumb that you follow to determine the minimum number of samples for performing the bootstrap?
March 13th, 2012 at 8:30 am
I’m not sure that I understand the distinction between arbitrary shuffles of the data and i.i.d. sampling, when both are drawn from the same model/pool.
March 13th, 2012 at 8:30 am
Can you go more into detail how the unknown φ is acquired? The text seems to give two pretty distinct methods of using this value (parameter vector φ = f(θ) or using the distribution’s median) and I’m having trouble deciding when to use each method.