The perceptron learning algorithm

1. Guide

  Consider modifying the logistic regression method to “force” it to output values that are either 0 or 1 exactly. To do so, it seems natural to change the definition of g to be the threshold function:

        $$g(z) = \begin{cases} 1 & \text{if } z \ge 0 \\ 0 & \text{if } z < 0 \end{cases}$$

  If we then let $h_\theta(x) = g(\theta^T x)$ as before, but using this modified definition of g, and if we use the update rule

           $$\theta_j := \theta_j + \alpha \left( y^{(i)} - h_\theta\!\left(x^{(i)}\right) \right) x_j^{(i)},$$

  then we have the perceptron learning algorithm.
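  As a concrete illustration, here is a minimal NumPy sketch of the update rule above run over a toy dataset. The function names, the learning rate alpha, the number of epochs, and the data values are illustrative assumptions rather than part of the original notes; X is assumed to already include an intercept column, and y to contain labels in {0, 1}.

```python
import numpy as np

def g(z):
    """Threshold function: 1 if z >= 0, else 0."""
    return np.where(z >= 0, 1, 0)

def perceptron_train(X, y, alpha=0.1, epochs=10):
    """Run the perceptron update rule over the training set.

    X: (m, n) array of inputs, assumed to already include an intercept column.
    y: (m,) array of labels in {0, 1}.
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(epochs):
        for i in range(m):
            h = g(X[i] @ theta)                        # h_theta(x^(i)) = g(theta^T x^(i))
            theta = theta + alpha * (y[i] - h) * X[i]  # theta_j := theta_j + alpha*(y^(i) - h)*x_j^(i)
    return theta

# Toy usage on a small linearly separable dataset (illustrative values).
X = np.array([[1.0,  2.0,  1.0],
              [1.0,  1.5,  2.5],
              [1.0, -1.0, -0.5],
              [1.0, -2.0,  1.0]])   # first column is the intercept term
y = np.array([1, 1, 0, 0])
theta = perceptron_train(X, y)
print(g(X @ theta))                 # predictions after training: [1 1 0 0]
```

  Note that when the prediction already matches the label, the term $y^{(i)} - h_\theta(x^{(i)})$ is zero and the parameters are left unchanged; updates only occur on misclassified examples.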

 

2. Note

  In the 1960s, this “perceptron” was argued to be a rough model for how individual neurons in the brain work. Given how simple the algorithm is, it will also provide a starting point for our analysis when we talk about learning theory later in this class. Note, however, that even though the perceptron may be cosmetically similar to the other algorithms we talked about, it is actually a very different type of algorithm from logistic regression and least-squares linear regression; in particular, it is difficult to endow the perceptron’s predictions with meaningful probabilistic interpretations, or to derive the perceptron as a maximum likelihood estimation algorithm.

  

  
