【Machine Learning】Supervised Learning

  Concept:

Supervised learning means telling the algorithm what the right answer is for a number of examples, and then asking the algorithm to produce more of the same right answers on new inputs.

Part I  Linear Regression

Linear function (the hypothesis):

$$h_\theta(x) = \theta_0 + \theta_1 x_1 + \cdots + \theta_n x_n = \sum_{i=0}^{n} \theta_i x_i = \theta^T x \qquad (x_0 = 1)$$

Cost function:

$$J(\theta) = \frac{1}{2} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$

where $m$ is the number of training examples.

When $J(\theta)$ is minimized, the corresponding $\theta$ is the optimal solution.
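To make the definitions concrete, here is a minimal NumPy sketch of the hypothesis and the cost function above. The names theta, X, y and the tiny dataset are illustrative choices of mine, not from the original notes:

    import numpy as np

    def hypothesis(theta, X):
        # h_theta(x) = theta^T x, evaluated for every row of X at once;
        # X is assumed to already include the intercept column x_0 = 1.
        return X @ theta

    def cost(theta, X, y):
        # J(theta) = (1/2) * sum_i (h_theta(x^(i)) - y^(i))^2
        residuals = hypothesis(theta, X) - y
        return 0.5 * np.sum(residuals ** 2)

    # Tiny check: three examples, one feature plus the intercept column.
    X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
    y = np.array([1.0, 2.0, 3.0])
    print(cost(np.zeros(2), X, y))  # J at the initial guess theta = 0 -> 7.0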

Part II  LMS (Least Mean Squares) Algorithm: an iterative method

We use the gradient descent algorithm to choose $\theta$.

Update rule:

$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta) \qquad \text{(for every } j\text{)}$$

where $\alpha$ is the learning rate.

We start with some "initial guess" for $\theta$, and repeatedly change $\theta$ to make $J(\theta)$ smaller, until hopefully we converge to a value of $\theta$ that minimizes $J(\theta)$.

Using one training example $(x, y)$, we can derive:

$$\frac{\partial}{\partial \theta_j} J(\theta) = \frac{\partial}{\partial \theta_j} \, \frac{1}{2} \left( h_\theta(x) - y \right)^2 = \left( h_\theta(x) - y \right) \cdot \frac{\partial}{\partial \theta_j} \left( h_\theta(x) - y \right) = \left( h_\theta(x) - y \right) x_j$$

So, for a single training example, the update rule is:

$$\theta_j := \theta_j + \alpha \left( y - h_\theta(x) \right) x_j$$

For $m$ training examples, the update rule is:

  Repeat until convergence {

    $$\theta_j := \theta_j + \alpha \sum_{i=1}^{m} \left( y^{(i)} - h_\theta(x^{(i)}) \right) x_j^{(i)} \qquad \text{(for every } j\text{)}$$

  }

  This is called batch gradient descent.
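As a sketch (assuming the X, y convention from the earlier snippet; the learning rate and iteration count here are arbitrary illustrative choices):

    def batch_gradient_descent(X, y, alpha=0.01, n_iters=1000):
        # Each step sums the error over ALL m examples:
        # theta_j := theta_j + alpha * sum_i (y^(i) - h_theta(x^(i))) * x_j^(i)
        theta = np.zeros(X.shape[1])
        for _ in range(n_iters):
            errors = y - X @ theta                  # y^(i) - h_theta(x^(i)) for every i
            theta = theta + alpha * (X.T @ errors)  # simultaneous update of every theta_j
        return theta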

But if $m$ is large, batch gradient descent has to scan through the entire training set before taking a single step, which is costly.

Stochastic gradient descent deals with this by updating $\theta$ using one training example at a time:

  for i=1 to m, {

    $$\theta_j := \theta_j + \alpha \left( y^{(i)} - h_\theta(x^{(i)}) \right) x_j^{(i)} \qquad \text{(for every } j\text{)}$$

  }
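A matching sketch of stochastic gradient descent (again with illustrative alpha and epoch count):

    def stochastic_gradient_descent(X, y, alpha=0.01, n_epochs=100):
        # Updates theta after each single example, instead of after a full pass.
        theta = np.zeros(X.shape[1])
        for _ in range(n_epochs):
            for i in range(X.shape[0]):               # for i = 1 to m
                error = y[i] - X[i] @ theta           # y^(i) - h_theta(x^(i))
                theta = theta + alpha * error * X[i]  # update every theta_j at once
        return theta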

However, the accuracy of this algorithm is lower than that of batch gradient descent: $\theta$ may never settle at the exact minimum, instead oscillating around it.

  Part III Normal Equations

1. Matrix derivatives

For a function $f : \mathbb{R}^{m \times n} \to \mathbb{R}$ that maps matrices to real numbers, the derivative of $f$ with respect to $A$ is the matrix of partial derivatives:

$$\nabla_A f(A) = \begin{bmatrix} \frac{\partial f}{\partial A_{11}} & \cdots & \frac{\partial f}{\partial A_{1n}} \\ \vdots & \ddots & \vdots \\ \frac{\partial f}{\partial A_{m1}} & \cdots & \frac{\partial f}{\partial A_{mn}} \end{bmatrix}$$

2. Trace operator

For a square $n \times n$ matrix $A$, the trace is the sum of its diagonal entries:

$$\operatorname{tr} A = \sum_{i=1}^{n} A_{ii}$$

With these tools, writing the training inputs as the rows of a design matrix $X$ and the targets as a vector $\vec{y}$, setting $\nabla_\theta J(\theta) = 0$ gives the normal equations

$$X^T X \, \theta = X^T \vec{y}$$

whose closed-form solution is

$$\theta = (X^T X)^{-1} X^T \vec{y}$$
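The closed form can be checked directly with NumPy. Here np.linalg.solve is used to solve $X^T X \theta = X^T \vec{y}$ rather than forming the explicit inverse, which is a standard numerical-stability choice on my part, not something the notes prescribe:

    # Solve the normal equations (X^T X) theta = X^T y for the tiny dataset above.
    theta = np.linalg.solve(X.T @ X, X.T @ y)
    print(theta)  # approximately [0. 1.]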

posted @ 2013-10-21 21:46  ssdut-deng