Recitation 3 April 30
Recitation 3, April 30
Spline and Kernel Methods; Gaussian Processes
Penalized Cubic Regression Splines
• gam() in the "mgcv" library
• gam(y ~ s(x, bs="cr", k=n.knots), knots=list(x=c(…)), data=dataset)
• By default, the optimal smoothing parameter is selected by GCV
• R Demo 1
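The demo above uses mgcv's cubic-regression-spline basis with the smoothing parameter chosen by GCV. As an illustration of the underlying idea only (not mgcv's algorithm), the sketch below fits a penalized cubic spline in NumPy using a truncated-power basis and a simple ridge penalty with a hand-picked λ; the basis, penalty matrix, and fixed λ are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 80))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 80)

knots = np.linspace(0.1, 0.9, 9)
# Truncated-power cubic spline basis: 1, x, x^2, x^3, (x - k)_+^3
B = np.column_stack([x**p for p in range(4)] +
                    [np.clip(x - k, 0, None)**3 for k in knots])

lam = 1e-4
# Ridge penalty on the knot coefficients only (a stand-in for the
# integrated-squared-second-derivative penalty mgcv actually uses)
D = np.diag([0.0] * 4 + [1.0] * len(knots))
beta = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
fitted = B @ beta
```

In practice λ would be chosen by minimizing the GCV score over a grid rather than fixed by hand, which is what gam() does by default.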
Kernel Methods
• Nadaraya-Watson locally constant model
• Locally linear polynomial model
• How to define "local"? By a kernel function, e.g. the Gaussian kernel
• R Demo 1
• R package: "locfit"; function: locfit(y ~ x, kern="gauss", deg=, alpha=)
• Bandwidth selected by GCV: gcvplot(y ~ x, kern="gauss", deg=, alpha=bandwidth range)
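The Nadaraya-Watson estimator averages nearby responses with kernel weights, f̂(x0) = Σᵢ K_h(x0 − xᵢ) yᵢ / Σᵢ K_h(x0 − xᵢ). The locfit calls above do this in R; below is a minimal NumPy sketch with a Gaussian kernel and a bandwidth h fixed by hand (an illustrative choice, not the GCV-selected value):

```python
import numpy as np

def nw_estimate(x0, x, y, h):
    """Nadaraya-Watson locally constant fit with a Gaussian kernel."""
    # Weight of each training point for each evaluation point
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h)**2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 100)
grid = np.linspace(0, 1, 50)
fhat = nw_estimate(grid, x, y, h=0.05)
```

A small h tracks the data closely (low bias, high variance); a large h gives a smoother, flatter fit, which is why the bandwidth is the key tuning parameter selected by GCV.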
Gaussian Processes
• A distribution on functions
• f ~ GP(m, κ), where m is the mean function and κ is the covariance function
• Any finite set of function values is jointly Gaussian: (f(x1), ..., f(xn)) ~ N_n(μ, K), with μ = [m(x1), ..., m(xn)] and K_ij = κ(x_i, x_j)
• Idea: if x_i and x_j are similar according to the kernel, then f(x_i) and f(x_j) are similar
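The finite-dimensional definition above can be checked directly: pick a grid of inputs, build K from a kernel, and draw from N_n(μ, K). A minimal NumPy sketch, where the squared-exponential kernel, zero mean, and length-scale are illustrative choices:

```python
import numpy as np

def sq_exp_kernel(x1, x2, ell=0.5, sigma_f=1.0):
    """Squared-exponential covariance: kappa(x, x') = sigma_f^2 exp(-(x-x')^2 / (2 ell^2))."""
    return sigma_f**2 * np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / ell**2)

x = np.linspace(0, 5, 60)
mu = np.zeros(len(x))       # m(x) = 0
K = sq_exp_kernel(x, x)     # K_ij = kappa(x_i, x_j)

rng = np.random.default_rng(2)
# Each draw from N_n(mu, K) is one random function evaluated on the grid;
# the small jitter keeps the covariance numerically positive definite
samples = rng.multivariate_normal(mu, K + 1e-8 * np.eye(len(x)), size=3)
```

Plotting the rows of `samples` against `x` shows smooth random curves, which is the sense in which a GP is a distribution over functions.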
Gaussian Processes – Noise-Free Observations
• Example task: learn a function f(x) to estimate y from data (x, y)
• A function can be viewed as a random variable of infinite dimension
• A GP provides a distribution over functions
Gaussian Processes – Noise-Free Observations
• Model:
• (x, f): observed locations and values (training data)
• (x*, f*): test (prediction) locations and values
• After observing noise-free data (x, f), the predictive distribution is
  f* | x*, x, f ~ N(K*ᵀ K⁻¹ f, K** − K*ᵀ K⁻¹ K*),
  where K = κ(x, x), K* = κ(x, x*), and K** = κ(x*, x*)
• The length-scale of the kernel controls how quickly correlations decay with distance
• R Demo 2
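Conditioning the joint Gaussian on the observed values gives the standard posterior above (Rasmussen and Williams, Ch. 2). A minimal NumPy sketch, where the squared-exponential kernel and the toy data are assumptions for illustration; note that with no noise the posterior mean interpolates the observations exactly:

```python
import numpy as np

def sq_exp(a, b, ell=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

# Noise-free training data (x, f) and test locations x*
x  = np.array([-2.0, 0.0, 1.5])
f  = np.sin(x)
xs = np.linspace(-3, 3, 50)

K   = sq_exp(x, x)    # kappa(x, x)
Ks  = sq_exp(x, xs)   # kappa(x, x*)
Kss = sq_exp(xs, xs)  # kappa(x*, x*)

# Posterior: f* | x*, x, f ~ N(Ks' K^{-1} f, Kss - Ks' K^{-1} Ks)
Kinv = np.linalg.inv(K + 1e-10 * np.eye(len(x)))
post_mean = Ks.T @ Kinv @ f
post_cov  = Kss - Ks.T @ Kinv @ Ks
```

The diagonal of `post_cov` shrinks to zero at the training points and grows back toward the prior variance far from them.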
Gaussian Processes – Noisy Observations (GP for Regression)
• Model: y = f(x) + ε, with ε ~ N(0, σ²)
• (x, y): observed locations and values (training data)
• (x*, f*): test (prediction) locations and values
• After observing noisy data (x, y), the predictive distribution is
  f* | x*, x, y ~ N(K*ᵀ (K + σ²I)⁻¹ y, K** − K*ᵀ (K + σ²I)⁻¹ K*)
• R Demo 3
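With noise, K is replaced by K + σ²I in the posterior, and the mean no longer interpolates the data. A minimal NumPy sketch of GP regression, where the kernel, length-scale, noise level, and data are illustrative assumptions:

```python
import numpy as np

def sq_exp(a, b, ell=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

rng = np.random.default_rng(3)
x  = np.linspace(-3, 3, 25)
sigma2 = 0.1**2
y  = np.sin(x) + rng.normal(0, 0.1, len(x))   # y = f(x) + eps
xs = np.linspace(-3, 3, 100)

K   = sq_exp(x, x)
Ks  = sq_exp(x, xs)
Kss = sq_exp(xs, xs)

# Posterior: f* | x*, x, y ~ N(Ks' (K + s^2 I)^{-1} y, Kss - Ks' (K + s^2 I)^{-1} Ks)
A = np.linalg.inv(K + sigma2 * np.eye(len(x)))
post_mean = Ks.T @ A @ y
post_var  = np.diag(Kss - Ks.T @ A @ Ks)
```

The σ²I term both models the observation noise and regularizes the matrix inverse, so no extra jitter is needed here.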
Reference
• Chapter 2 of Gaussian Processes for Machine Learning, Carl Edward Rasmussen and Christopher K. I. Williams
• 527 lecture notes by Emily Fox