Locally weighted linear regression

1. Guide

  [Figure: three fits to the same dataset, left to right: a linear fit, a quadratic fit, and a 5th-order polynomial fit.]

  The leftmost figure shows the result of fitting y = θ0 + θ1x1 to a dataset. We see that the data doesn’t really lie on a straight line, so the fit is not very good. This is called underfitting.---there is only one feature; the hypothesis is too simple for the data.

      So, we add an extra feature x1^2 and fit y = θ0 + θ1x1 + θ2x2, where x2 = x1^2. The middle figure shows this better fit.

      The rightmost figure is the result of fitting a 5th-order polynomial. This is called overfitting.---there are too many features compared to the size of the dataset.

  As discussed previously, and as shown in the example above, the choice of features is important to ensuring good performance of a learning algorithm.
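  As a rough illustration (a minimal NumPy sketch with a made-up dataset, not the actual figures from the notes), comparing a linear, a quadratic, and a 5th-order polynomial fit on the same points shows the progression from underfitting to overfitting:

```python
import numpy as np

# Toy dataset (hypothetical, for illustration only): y is roughly quadratic in x.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 8)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(scale=0.05, size=x.shape)

for degree in (1, 2, 5):
    # Design matrix with columns 1, x1, x1^2, ..., x1^degree.
    X = np.vander(x, degree + 1, increasing=True)
    # Fit theta by least squares, i.e. minimize sum_i (y(i) - theta^T x(i))^2.
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    training_error = np.sum((X @ theta - y) ** 2)
    print(f"degree {degree}: training error = {training_error:.5f}")

# degree 1 underfits (high training error); degree 5 drives the training error
# toward zero but oscillates between the data points (overfitting).
```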

 

2. LWR

  Locally weighted linear regression (LWR) is an algorithm which, assuming there is sufficient training data, makes the choice of features less critical.

  In the original linear regression algorithm, to make a prediction at a query point x (i.e., to evaluate h(x)), we would:

  a. Fit θ to minimize ∑i(y(i) − θT x(i))2.

  b. Output θT x.
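  For concreteness, here is a minimal sketch of these two steps using the normal equations (the variable names and toy data are illustrative assumptions, not from the notes):

```python
import numpy as np

def fit_theta(X, y):
    # Step (a): choose theta minimizing sum_i (y(i) - theta^T x(i))^2,
    # via the normal equation theta = (X^T X)^(-1) X^T y (solved directly).
    return np.linalg.solve(X.T @ X, X.T @ y)

def h(theta, x_query):
    # Step (b): output theta^T x at the query point.
    return x_query @ theta

# Hypothetical usage; the first column of X is the intercept term x0 = 1.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([0.9, 2.1, 2.9, 4.2])
theta = fit_theta(X, y)
print(h(theta, np.array([1.0, 1.5])))  # prediction at x1 = 1.5
```

  Note that once θ has been computed, the training data is no longer needed; only θ is kept for future predictions.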

  In contrast, the locally weighted linear regression algorithm does the following:

  a. Fit θ to minimize ∑iw(i)(y(i) − θT x(i))2.

  b. Output θT x.

  Here, the w(i)’s are non-negative valued weights. Intuitively, if w(i) is large for a particular value of i, then in picking θ, we’ll try hard to make (y(i) − θT x(i))2 small. If w(i) is small, then the (y(i) − θT x(i))2 error term will be pretty much ignored in the fit.

  A fairly standard choice for the weights is:

                          w(i) = exp(−(x(i) − x)2 / (2τ2))

  ps. If x is vector-valued, this is generalized to be w(i) = exp(−(x(i) − x)T(x(i) − x)/(2τ2)), or w(i) = exp(−(x(i) − x)TΣ−1(x(i) − x)/2), for an appropriate choice of τ or Σ.
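  In code, these weight choices could look like the following sketch (τ and Σ are left to the user; the function names are just illustrative):

```python
import numpy as np

def weight_scalar(x_i, x_query, tau):
    # w(i) = exp(-(x(i) - x)^2 / (2 tau^2)) for scalar-valued x.
    return np.exp(-((x_i - x_query) ** 2) / (2.0 * tau**2))

def weight_vector(x_i, x_query, Sigma):
    # w(i) = exp(-(x(i) - x)^T Sigma^(-1) (x(i) - x) / 2) for vector-valued x.
    d = np.asarray(x_i) - np.asarray(x_query)
    return np.exp(-0.5 * d @ np.linalg.solve(Sigma, d))
```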

  Note that the weights depend on the particular point x at which we’re trying to evaluate h(x). Moreover, if |x(i) − x| is small, then w(i) is close to 1; and if |x(i) − x| is large, then w(i) is small. Hence, θ is chosen giving a much higher “weight” to the (errors on) training examples close to the query point x. (Note also that while the formula for the weights takes a form that is cosmetically similar to the density of a Gaussian distribution, the w(i)’s do not directly have anything to do with Gaussians, and in particular the w(i) are not random variables, normally distributed or otherwise.)---if |x(i) − x| is large, then w(i) is small, so the (y(i) − θT x(i))2 error term is pretty much ignored in the fit.

  The parameter τ controls how quickly the weight of a training example falls off with distance of its x(i) from the query point x; τ is called the bandwidth parameter.
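  Putting these pieces together, here is a minimal sketch of one LWR prediction (NumPy; the weighted fit is done via the weighted normal equations, and the dataset is a made-up example):

```python
import numpy as np

def lwr_predict(X, y, x_query, tau):
    # Weights: w(i) = exp(-||x(i) - x||^2 / (2 tau^2)); examples near the
    # query point get weight close to 1, distant ones close to 0.
    # (The constant intercept column contributes zero to the distance.)
    diffs = X - x_query
    w = np.exp(-np.sum(diffs**2, axis=1) / (2.0 * tau**2))
    W = np.diag(w)
    # Step (a): fit theta minimizing sum_i w(i) (y(i) - theta^T x(i))^2,
    # i.e. theta = (X^T W X)^(-1) X^T W y.
    theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    # Step (b): output theta^T x.
    return x_query @ theta

# Hypothetical usage; note the whole training set (X, y) is needed at query time.
x1 = np.linspace(0.0, 5.0, 20)
X = np.column_stack([np.ones_like(x1), x1])   # intercept column plus x1
y = np.sin(x1)
print(lwr_predict(X, y, np.array([1.0, 2.5]), tau=0.5))
```

  A small τ makes the fit very local (only the nearest examples matter), while a large τ makes every w(i) close to 1, so the prediction approaches ordinary (unweighted) linear regression.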

  Locally weighted linear regression is the first example we’re seeing of a non-parametric algorithm. The (unweighted) linear regression algorithm we saw earlier is known as a parametric learning algorithm, because it has a fixed, finite number of parameters (the θi’s), which are fit to the data. Once we’ve fit the θi’s and stored them away, we no longer need to keep the training data around to make future predictions. In contrast, to make predictions using locally weighted linear regression, we need to keep the entire training set around. The term “non-parametric” (roughly) refers to the fact that the amount of stuff we need to keep in order to represent the hypothesis h grows linearly with the size of the training set. (We store the dataset.)

    

posted on 2013-04-13 10:39 BigPalm
