36-402, Section A, Spring 2019
19 February 2019
\[ \newcommand{\TrueRegFunc}{\mu} \newcommand{\EstRegFunc}{\widehat{\mu}} \newcommand{\Expect}[1]{\mathbb{E}\left[ #1 \right]} \newcommand{\Var}[1]{\mathbb{V}\left[ #1 \right]} \DeclareMathOperator*{\argmin}{argmin} % thanks, wikipedia! \]
Yes: if the bias and the variance of \(\EstRegFunc\) both \(\rightarrow 0\) as \(n \rightarrow \infty\), then the estimate must converge on the true regression function \(\TrueRegFunc\)
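The reasoning is the bias-variance decomposition of the mean squared error at any point \(x\) (with \(\TrueRegFunc(x)\) fixed):

\[ \Expect{\left(\EstRegFunc(x) - \TrueRegFunc(x)\right)^2} = \left(\Expect{\EstRegFunc(x)} - \TrueRegFunc(x)\right)^2 + \Var{\EstRegFunc(x)} \]

The first term on the right is the squared bias, the second is the variance; if both \(\rightarrow 0\), the MSE \(\rightarrow 0\), so \(\EstRegFunc(x)\) converges on \(\TrueRegFunc(x)\) in mean square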
Two R packages fit these models: the original `gam` package, and the newer, nicer `mgcv`; both provide a function called `gam()` (for Generalized Additive Model). We'll use `mgcv`.

`gam()` works like `lm`, but you can tell it to smooth variables:

```r
library(mgcv)
# Simulated data: x1 enters linearly, x2 nonlinearly, plus a little noise
x <- matrix(runif(1000, min=-pi, max=pi), ncol=2)
y <- 0.5*x[,1] + sin(x[,2]) + rnorm(n=nrow(x), sd=0.1)
df <- data.frame(y=y, x1=x[,1], x2=x[,2])
babys.first.am <- gam(y ~ s(x1) + s(x2), data=df)
par(mfrow=c(1,2)); plot(babys.first.am)
```

We read `s()` as "smooth", but it really means "spline"; `gam()` knows about lots of other smoother techniques
(`te()`, `ti()`, etc., etc., etc.)

Methods for `mgcv::gam` objects:

- `predict`: works just like with `lm` (or `npreg`, for that matter); see `help(predict.gam)` for details
- `fitted`, `residuals`: work just like with `lm`
- `plot`: we just saw this; many options, some described in the text, see `help(plot.gam)` for details
- `coefficients`: mostly coefficients of the spline polynomials; NOT INTERPRETABLE

The formula can mix linear and smooth terms, as in `gam(y ~ x1+s(x2))`, or smooth two variables jointly, as in `gam(y ~ s(x1, x2))`.

```r
# Handling missing values by padding in NAs:
am.fits <- predict(chicago.am, newdata=chicago, na.action=na.pass)
plot(death ~ time, data=chicago)
points(chicago$time, am.fits, pch="x", col="red")
```

Same plotting recipe as before, just using `gam` rather than `lm`
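As a minimal sketch of the mixed linear-and-smooth form, here is the same simulated setup as above refit as a partially linear model (the object names `plm` and `new.pts` are just for illustration). Since `x1` really does enter linearly, its coefficient is interpretable, unlike the spline coefficients:

```r
library(mgcv)
set.seed(42)
# Same simulation as before: x1 is linear, x2 is nonlinear
x <- matrix(runif(1000, min=-pi, max=pi), ncol=2)
y <- 0.5*x[,1] + sin(x[,2]) + rnorm(n=nrow(x), sd=0.1)
df <- data.frame(y=y, x1=x[,1], x2=x[,2])
# Partially linear: x1 gets an ordinary regression coefficient, x2 a spline
plm <- gam(y ~ x1 + s(x2), data=df)
coef(plm)["x1"]   # should be close to the true slope, 0.5
# predict() works just as it does with lm()
new.pts <- data.frame(x1=c(0, 1), x2=c(0, pi/2))
predict(plm, newdata=new.pts)
```

Note that `gam()` picked how much to smooth `s(x2)` automatically (by generalized cross-validation or REML); we did not have to choose a bandwidth by hand.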