Posted on Thursday, 5th April 2012
Please finish reading chapter 16 and post a comment.
Posted in Class | Comments (15)
April 6th, 2012 at 11:13 am
Is there a rule of thumb for selecting h? Would it make sense to try the standard deviation as the value for h first, and then adjust from there?
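One common starting point, not from the chapter itself but a standard heuristic, is Silverman's rule of thumb, which does roughly what the question suggests: it starts from the sample standard deviation and rescales it by a factor that shrinks with the sample size. A minimal sketch, assuming a Gaussian kernel:

```python
import numpy as np

def silverman_bandwidth(data):
    """Silverman's rule-of-thumb bandwidth for a Gaussian kernel:
    h = 1.06 * sigma * n^(-1/5)."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    sigma = data.std(ddof=1)  # sample standard deviation
    return 1.06 * sigma * n ** (-1 / 5)

rng = np.random.default_rng(0)
sample = rng.normal(size=500)
print(silverman_bandwidth(sample))  # roughly 0.3 for standard-normal data, n = 500
```

This only gives a default; adjusting up or down from it (or using cross-validation) is still the usual practice.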
April 9th, 2012 at 4:52 am
It seems like kernels may be useful for estimating quantized neurotransmitter packet release. Is that the right way to think about kernel estimation? Are there other problems this is usefully applied to?
April 9th, 2012 at 3:49 pm
When weighting, how does one decide what the suitable kernel K(u) is?
April 9th, 2012 at 7:05 pm
You mention that there are other kernel options for pdf estimation. What are they and how would we know to use one of them?
April 9th, 2012 at 8:09 pm
It seems like kernel regression is analogous to convolving the data points with a Gaussian kernel, or whichever kernel you choose. However, I am having trouble visualizing what is happening in the local regression.
April 9th, 2012 at 9:32 pm
I thought the grid concept mentioned at the end of 16.4.2 was very interesting, but I didn't fully understand it. Can you give a visual of the grid and how you'd get the estimate from it?
April 9th, 2012 at 9:34 pm
Is selection of the bandwidth parameter (h) completely arbitrary then? Just trial and error until one is “just right”?
April 9th, 2012 at 10:13 pm
In 16.3.3, pp. 500-501, can you rephrase "if the weights wi(x) become concentrated near x"? What does it mean for a weight to be concentrated? Is this a peak in the pdf? An overlap between kernels if the h value is large enough?
April 10th, 2012 at 12:47 am
Kernel regression does not seem to provide a simple equation for the fitted curve. Does this mean the weighted sum must be recomputed every time we make a prediction from the best-fit curve?
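That is essentially right for the Nadaraya-Watson estimator: there is no closed-form fitted curve to store, so the weighted sum over the training data is recomputed at each query point. A minimal sketch, assuming a Gaussian kernel (the helper name is mine, not from the text):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h):
    """Nadaraya-Watson kernel regression estimate at each query point.
    The weighted sum over all training points is recomputed per query --
    nothing like a coefficient vector is stored in advance."""
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    preds = []
    for x0 in np.atleast_1d(x_query):
        u = (x0 - x_train) / h
        w = np.exp(-0.5 * u ** 2)  # Gaussian kernel weights (constant cancels)
        preds.append(np.sum(w * y_train) / np.sum(w))
    return np.array(preds)

x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x)
print(nadaraya_watson(x, y, [0.25], h=0.05))  # near sin(pi/2) = 1
```

In practice the cost per prediction is O(n), which is one reason grid-based precomputation (as in 16.4.2) is attractive.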
April 10th, 2012 at 7:22 am
Given the sensitivity of histograms to bin width and the apparent robustness (and prettiness) of kernel density estimators, I would think that I’d see them more often in place of histograms. Is there a reason why this method isn’t more commonly used?
April 10th, 2012 at 7:27 am
Is there a preference for kernel regression or local polynomial regression in some situations? Could you go into more detail on bandwidth selection?
April 10th, 2012 at 7:54 am
I don’t understand the notation in the generalized additive model. g is the link function, but I don’t recognize any variable that allows that link to happen.
April 10th, 2012 at 7:58 am
Is there a general rule for determining whether kernel or local polynomial regression will provide a better fit? Or is it something that just needs to be applied to the data and compared?
April 10th, 2012 at 8:19 am
You mention that any type of PDF could be used as a kernel. Can you give an example of one where a non-normal kernel would be a better choice?
April 10th, 2012 at 8:25 am
Can you go over how we get the kernel density estimate?