Posted on Tuesday, 3rd April 2012

Please read sections 16.1-16.2 of Chapter16-APR3.pdf and post a comment.

Posted in Class | Comments (14)

  1. Eric VanEpps Says:

    How do we interpret coefficients that are generated when splines are used to fit linear models? Is the purpose of generating these fits using nonparametric methods just to fit the data as well as possible, even if interpretation is more difficult?

  2. Yijuan Du Says:

    I’m still a little confused about the linear smoother. Can you give an example showing how to do it?
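    One concrete example, as a hedged sketch: a Gaussian kernel (Nadaraya–Watson) smoother is a linear smoother, because the fitted values are ŷ = L y for a matrix L built from the x values alone. The bandwidth of 0.4 below is an arbitrary illustration, not a value from the chapter:

    ```python
    import numpy as np

    # A linear smoother produces fitted values y_hat = L @ y, where the
    # smoothing matrix L depends only on the x values, not on y.
    # This sketch uses a Gaussian kernel (Nadaraya-Watson) smoother.

    def kernel_smoother_matrix(x, bandwidth=0.5):
        """Build the smoothing matrix L for a Gaussian kernel smoother."""
        diffs = x[:, None] - x[None, :]
        weights = np.exp(-0.5 * (diffs / bandwidth) ** 2)
        return weights / weights.sum(axis=1, keepdims=True)  # rows sum to 1

    rng = np.random.default_rng(0)
    x = np.linspace(0, 2 * np.pi, 50)
    y = np.sin(x) + rng.normal(scale=0.3, size=x.size)

    L = kernel_smoother_matrix(x, bandwidth=0.4)
    y_hat = L @ y  # the smoothed estimate is linear in y
    ```

    Because ŷ is a linear function of y, standard linear-model machinery (e.g., degrees of freedom from the trace of L) carries over.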

  3. Scott Kennedy Says:

    It is not intuitive to me how a natural spline would reduce the probability that the x variables would be correlated.

  4. Sharlene Flesher Says:

    How are the B-splines mentioned at the end of 16.2.2 found?

  5. Rex Tien Says:

    We focus mainly on cubic splines, but we could theoretically use any type of polynomial for splines. Are splines limited to polynomials so that the regression method can be used to calculate them, or can we use any type of function? If so, is there a method for determining what form of function is best for the splines?

  6. Ben Dichter Says:

    It seems the roughness penalty plays a role similar to the correction term we used earlier to prevent overfitting in multivariate regression. Is the second derivative always something you want to minimize? Would you penalize anything else?

  7. Shubham Debnath Says:

    Are there other nonlinear smoothers aside from BARS? Are they useful at all? And how much slower are they than linear smoothers or BARS?

  8. Jay Scott Says:

    On pp. 489–490, I am having a lot of difficulty understanding what you mean by “A second intuition is obtained by replacing the least-squares problem of minimizing the sum of squares with the penalized least squares problem of minimizing the penalized sum of squares where λ is a constant.”

    Also, if the squared 2nd derivative is a “roughness penalty” and its integral is a roughness measure, does that simply mean that the 1st derivative is the roughness measure? If yes, would the derivative of any roughness measure be its penalty?
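    To make the penalized least squares idea concrete, here is a discrete (Whittaker-style) sketch; the second-difference matrix D2 stands in for the second derivative, and the closed-form solution below is standard ridge-type algebra, not the book's notation:

    ```python
    import numpy as np

    # Penalized least squares with a discrete roughness penalty:
    # minimize ||y - f||^2 + lam * ||D2 f||^2, where D2 is the
    # second-difference matrix (a discrete analogue of the 2nd derivative).
    # Closed-form solution: f = (I + lam * D2.T @ D2)^{-1} y.

    def whittaker_smooth(y, lam=10.0):
        n = y.size
        D2 = np.diff(np.eye(n), n=2, axis=0)  # (n-2) x n second differences
        return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

    rng = np.random.default_rng(1)
    y = np.sin(np.linspace(0, 3, 40)) + rng.normal(scale=0.2, size=40)
    f_interp = whittaker_smooth(y, lam=0.0)   # lam = 0 reproduces the data
    f_smooth = whittaker_smooth(y, lam=1e6)   # huge lam approaches a line
    ```

    The constant λ trades off fidelity to the data against roughness: λ = 0 interpolates, and λ → ∞ forces the fit toward a function with zero second derivative (a straight line).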

  9. David Zhou Says:

    I realize this might be problematic, but what is the major error in choosing knots from the inflection points of a derivative of the curve you’re trying to fit? I’m guessing that builds some assumptions about the behavior into the regression, so can you talk about some of the things that the choice of knot set is trying to avoid?

  10. Matt Panico Says:

    Are splines useful with multiple predictors? That many equations might make the computation prohibitively slow.

  11. Matt Bauman Says:

    One powerful feature of regression is its ability to compare different datasets. How would one compare two datasets when fitting with splines? Do you keep the knot locations constant? Or simply the number of knots? Is this even done? More specifically, can you obtain a p-value in comparison to some null hypothesis (e.g., there’s only one spline segment)?

  12. Amanda Markey Says:

    On page 486, I’m not sure why x4 = (0,0,0,1,8,27,64)T.
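    One way those numbers can arise (my assumption about the example's setup, not a quote from page 486): with design points x = 1, …, 7 and a knot at ξ = 3, the cubic truncated-power basis function (x − ξ)³₊ evaluates to exactly that vector:

    ```python
    import numpy as np

    # Truncated power basis function for a cubic spline: (x - xi)^3 when
    # x > xi, and 0 otherwise. Assuming design points x = 1..7 and a knot
    # at xi = 3 (hypothetical values chosen to reproduce the vector):
    x = np.arange(1, 8)
    xi = 3
    x4 = np.clip(x - xi, 0, None) ** 3
    print(x4)  # [ 0  0  0  1  8 27 64]
    ```

    The leading zeros come from the positive-part operation: the basis function is identically zero until x passes the knot.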

  13. Rich Truncellito Says:

    In reading about choosing a number of knots for creating splines, I initially found it counterintuitive that choosing more knots to create more splines generates a less smooth fit than choosing fewer knots to create fewer splines. Does choosing more knots generate a less smooth fit, because with more knots the whole fitted curve becomes overly sensitive to slight changes in the data?

  14. Rob Rasmussen Says:

    I am having trouble seeing how the spline regression is solved by fitting equation 16.3.
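    For readers puzzling over the same point: a regression spline fit reduces to ordinary least squares on a spline basis. The sketch below uses a cubic truncated-power basis with arbitrary knots (a generic illustration under those assumptions, not the book's equation 16.3):

    ```python
    import numpy as np

    # Regression splines via ordinary least squares: build a design matrix
    # whose columns are spline basis functions (here the truncated power
    # basis: 1, x, x^2, x^3, and (x - k)^3_+ for each knot k), then solve
    # for the coefficients with lstsq.

    def cubic_spline_design(x, knots):
        cols = [np.ones_like(x), x, x**2, x**3]
        cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
        return np.column_stack(cols)

    rng = np.random.default_rng(2)
    x = np.linspace(0, 10, 80)
    y = np.sin(x) + rng.normal(scale=0.2, size=x.size)

    B = cubic_spline_design(x, knots=[2.5, 5.0, 7.5])
    beta, *_ = np.linalg.lstsq(B, y, rcond=None)
    y_hat = B @ beta  # fitted spline values
    ```

    Once the knots are fixed, the basis columns are just transformed predictors, so the coefficients come from the usual least-squares machinery.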
