Posted on Friday, 17th February 2012
Please read Ch13a.pdf and Sections 12.1-12.4 of Chapter12-FEB17.pdf and post a comment.
Posted in Class | Comments (13)
February 19th, 2012 at 2:25 pm
As highlighted by the plots in figure 12.5, visual analysis of the data can be key in understanding regression fits. With so much recent neuro data having high dimensionality, what are ways to do a visual analysis for high dimensional (more than 3) data?
February 20th, 2012 at 2:07 am
In what types of experiments might it be important to formulate the additional null hypothesis in two-way ANOVA, that all the increments added to the overall mean are zero?
I’m unclear about what residual analysis does. How does it represent unstructured noise? What are standardized residuals?
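As a computational sketch of standardized residuals (plain Python, made-up data; note this simple scaling omits the leverage correction that "studentized" residuals include):

```python
def standardized_residuals(xs, ys):
    """Fit y = a + b*x by least squares, then scale each residual by the
    estimated residual standard deviation. (Textbook 'studentized'
    residuals also divide by sqrt(1 - leverage); this is the simpler form.)"""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    s = (sum(e ** 2 for e in resid) / (n - 2)) ** 0.5  # n - 2 df after fitting a line
    return [e / s for e in resid]

# Hypothetical, roughly linear data with small noise
r = standardized_residuals([1, 2, 3, 4, 5], [2.0, 4.1, 5.9, 8.2, 9.8])
```

If the noise really is unstructured, these scaled residuals should look like draws from a standard normal, with no trend or pattern against the fitted values; that is what residual analysis checks by eye.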
February 21st, 2012 at 11:29 am
I don’t know if I’ve ever seen a confidence interval for a correlation coefficient before, and I must admit that I got lost in trying to understand the derivation of the confidence interval for rho in 12.4.3. Can you explain how this is derived, as well as when such a confidence interval would be important in practice?
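The usual route is Fisher's z-transform: z = arctanh(r) is approximately normal with variance 1/(n-3), so you build an ordinary normal interval on the z scale and map the endpoints back with tanh. A small sketch with hypothetical numbers:

```python
import math

def corr_ci(r, n, zcrit=1.96):
    """Approximate 95% CI for rho via Fisher's z-transform:
    arctanh(r) ~ Normal(arctanh(rho), 1/(n-3)), then map back with tanh."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - zcrit * se), math.tanh(z + zcrit * se)

lo, hi = corr_ci(0.6, 50)  # e.g. r = 0.6 observed from n = 50 pairs
```

Note the interval is asymmetric around r because tanh is nonlinear; in practice such an interval matters whenever you report a correlation as a result and need to convey its uncertainty, not just its point value.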
February 22nd, 2012 at 12:11 pm
Trying to use ANOVA now…this chapter is a good resource and will be useful.
February 22nd, 2012 at 9:44 pm
I am confused by the purpose of ANOVA. When there are more than two groups, it seems to be throwing information away. Why not just use multiple t-tests? It seems like it would be a rare case where there are so many groups that running t-tests becomes too cumbersome.
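One standard answer: each t-test carries its own false-positive risk, and those risks compound across comparisons, whereas ANOVA gives a single overall test. With k independent tests at level 0.05, the chance of at least one spurious "significant" result is 1 - 0.95^k, which grows quickly. A back-of-the-envelope check:

```python
# Family-wise error rate for k independent tests, each at alpha = 0.05.
# (Pairwise tests on shared group means are not truly independent, so
# this is only a rough illustration of how the error rate inflates.)
alpha = 0.05
fwer = {k: 1 - (1 - alpha) ** k for k in (1, 3, 6, 10)}
# Note: with just 5 groups there are already 10 pairwise comparisons.
```

So even at 5 groups, the naive multiple-t-test approach has roughly a 40% chance of a false positive somewhere.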
February 22nd, 2012 at 11:31 pm
The R^2 measure can tell us if two things are linearly related. However, a low R^2 measure could mean the points are totally scattered or that they follow a highly non-linear relationship. Is there a measure of how “continuous” or “tight” the points are? (i.e. something that would be high when the points are arranged in a sine wave, and low when the points are scattered) Or do we have to just do it by eye?
February 22nd, 2012 at 11:43 pm
Can you reiterate how the degrees of freedom for the error are determined?
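As a quick worked example for the one-way case (the general logic is "observations minus estimated parameters"): with N total observations and k estimated group means, N - k degrees of freedom are left for the error.

```python
# One-way ANOVA: k group means are estimated from the data, so the
# error term keeps N - k degrees of freedom, and the between-groups
# term gets k - 1; together they account for the total N - 1.
N, k = 30, 3            # hypothetical: 3 groups of 10 observations
df_between = k - 1      # 2
df_error = N - k        # 27
df_total = N - 1        # 29 = df_between + df_error
```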
February 22nd, 2012 at 11:45 pm
I guess my question is, is there a measure that might tell us when to claim the two variables are unrelated, or when to seek out a highly non-linear relationship?
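There are general-dependence measures designed for exactly this, e.g. mutual information or distance correlation, which are near zero only when the variables are (roughly) independent; a low R^2 together with a high value of one of those suggests hunting for a nonlinear relationship. As a quick illustration of why R^2 alone can't make the distinction, a noiseless sine wave gets a low R^2 even though the points are perfectly "tight" (data here is made up for the demo):

```python
import math

def r_squared(xs, ys):
    """Squared Pearson correlation: measures *linear* association only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy ** 2 / (sxx * syy)

xs = [i * 4 * math.pi / 200 for i in range(200)]   # two full periods
sine = [math.sin(x) for x in xs]                   # perfectly tight, nonlinear
line = [2 * x + 1 for x in xs]                     # perfectly linear

r2_sine = r_squared(xs, sine)   # low, despite zero scatter
r2_line = r_squared(xs, line)   # essentially 1
```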
February 23rd, 2012 at 1:29 am
When doing linear regression over many variables, isn’t there an increasing risk of overfitting? How do you know if you are overfitting and how do you choose parameters accordingly?
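Yes: adding predictors (or flexibility) always improves the training fit, but out-of-sample performance can collapse. The usual checks are held-out data or cross-validation, or penalized criteria such as AIC/BIC. A small made-up demo of the extreme case, comparing a polynomial that interpolates noisy points exactly (zero training error by construction) against a plain linear fit, judged on fresh points:

```python
import random

def lagrange_predict(xs, ys, x):
    """Evaluate the degree-(n-1) polynomial through all n points at x:
    the most extreme 'overfit', with zero training error by construction."""
    total = 0.0
    for i, xi in enumerate(xs):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (x - xj) / (xi - xj)
        total += ys[i] * w
    return total

def linear_fit(xs, ys):
    """Ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def mse(pred, xs, ys):
    return sum((pred(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

random.seed(0)
truth = lambda x: 1.0 + 2.0 * x                       # true relationship is linear
train_x = [i / 9 for i in range(10)]                  # 10 training points
test_x = [(i + 0.5) / 9 for i in range(9)]            # fresh points in between
train_y = [truth(x) + random.gauss(0, 0.3) for x in train_x]
test_y = [truth(x) + random.gauss(0, 0.3) for x in test_x]

interp = lambda x: lagrange_predict(train_x, train_y, x)
line = linear_fit(train_x, train_y)

interp_train, interp_test = mse(interp, train_x, train_y), mse(interp, test_x, test_y)
line_train, line_test = mse(line, train_x, train_y), mse(line, test_x, test_y)
```

The interpolant wins on the training points and loses badly on the new ones; that train/test gap is the practical diagnostic for overfitting, and it is what cross-validation measures systematically.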
February 23rd, 2012 at 1:38 am
In the finger-tapping example used for the two-way ANOVA, would it have been reasonable to subtract out the placebo rate as a “baseline” from each individual and compare the on-drug changes to a no-change case?
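That is essentially a paired (repeated-measures) analysis: subtract each subject's placebo rate and test whether the mean change differs from zero. A sketch with made-up tapping rates (this collapses the multi-dose structure of the actual example, where a repeated-measures ANOVA would be the fuller treatment):

```python
# Hypothetical finger-tapping rates (taps/sec) for 5 subjects
placebo = [10.1, 9.8, 11.0, 10.5, 9.9]
drug    = [11.5, 10.9, 12.2, 11.8, 11.1]

diffs = [d - p for d, p in zip(drug, placebo)]   # per-subject change from baseline
n = len(diffs)
mean = sum(diffs) / n
sd = (sum((x - mean) ** 2 for x in diffs) / (n - 1)) ** 0.5
t = mean / (sd / n ** 0.5)   # one-sample t vs. "no change", df = n - 1
```

Pairing removes between-subject variability from the error term, which is why it can be much more sensitive than comparing group means directly.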
February 23rd, 2012 at 7:51 am
Regarding Section 12.4, Correlation and Regression: is there a reason to prefer correlation over regression (or vice versa) under some conditions?
February 23rd, 2012 at 9:25 am
Can you go over why the one-way ANOVA splits up the variability into group and individual variability?
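The split is an algebraic identity: each observation's deviation from the grand mean decomposes as (group mean - grand mean) + (observation - group mean), and the cross terms sum to zero, so SS_total = SS_between + SS_within. That is easy to verify numerically on made-up data:

```python
# Hypothetical data: 3 groups of 3 observations each
groups = [[4.1, 3.9, 4.5], [5.0, 5.2, 4.8], [3.0, 3.4, 3.2]]
all_vals = [v for g in groups for v in g]
grand = sum(all_vals) / len(all_vals)
means = [sum(g) / len(g) for g in groups]

ss_total = sum((v - grand) ** 2 for v in all_vals)
ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)

# Identity: ss_total == ss_between + ss_within (up to rounding),
# and the F statistic compares the two pieces per degree of freedom.
F = (ss_between / (len(groups) - 1)) / (ss_within / (len(all_vals) - len(groups)))
```

The point of the split is that under the null (equal group means) both mean squares estimate the same noise variance, so their ratio F hovers near 1; real group differences inflate only the between-groups piece.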
February 23rd, 2012 at 9:29 am
Does subtracting the mean of the x variable also work to reduce the correlation of x with transformations of x other than x^2?
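Not in general: centering removes the correlation between x and x^2 when x is symmetrically distributed (it zeroes the E[x^3]-type cross term), but it does not decorrelate x from odd transformations like x^3. A quick numerical check with simulated data:

```python
import random

def corr(a, b):
    """Sample Pearson correlation."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5

random.seed(1)
xs = [random.uniform(2, 10) for _ in range(2000)]  # all-positive x
c = [x - sum(xs) / len(xs) for x in xs]            # centered copy

raw_sq = corr(xs, [x ** 2 for x in xs])   # near 1: x and x^2 move together
cen_sq = corr(c, [x ** 2 for x in c])     # near 0 once x is centered (symmetric x)
cen_cu = corr(c, [x ** 3 for x in c])     # still large: odd power survives centering
```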
Also, could you explain what AIC and BIC are?
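Briefly, AIC and BIC are penalized model-comparison scores: both reward goodness of fit through the log-likelihood and charge for each fitted parameter, with BIC penalizing harder when n is large. For a Gaussian linear model they reduce, up to an additive constant, to functions of the residual sum of squares; lower is better. A sketch with hypothetical numbers:

```python
import math

def aic_bic(rss, n, k):
    """AIC/BIC for a Gaussian model, up to an additive constant:
      AIC = n*ln(RSS/n) + 2k
      BIC = n*ln(RSS/n) + k*ln(n)
    where k is the number of fitted parameters. Lower is better."""
    base = n * math.log(rss / n)
    return base + 2 * k, base + k * math.log(n)

# Hypothetical comparison: adding one predictor shrinks RSS from 12.0 to 11.5
aic_small, bic_small = aic_bic(12.0, n=100, k=2)
aic_big, bic_big = aic_bic(11.5, n=100, k=3)
```

With these particular numbers the two criteria disagree: AIC favors the larger model while BIC's heavier penalty keeps the smaller one, which reflects their different aims (AIC targets predictive accuracy; BIC more strongly resists adding parameters as n grows).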