Regularization: Logistic Regression

The cost function for logistic regression is

\[J\left( \theta  \right) =  - \left[ {\frac{1}{m}\sum\limits_{i = 1}^m {{y^{\left( i \right)}}\log {h_\theta }\left( {{x^{\left( i \right)}}} \right) + \left( {1 - {y^{\left( i \right)}}} \right)\log \left( {1 - {h_\theta }\left( {{x^{\left( i \right)}}} \right)} \right)} } \right]\]

With the regularization term added, it becomes

\[J\left( \theta  \right) =  - \left[ {\frac{1}{m}\sum\limits_{i = 1}^m {{y^{\left( i \right)}}\log {h_\theta }\left( {{x^{\left( i \right)}}} \right) + \left( {1 - {y^{\left( i \right)}}} \right)\log \left( {1 - {h_\theta }\left( {{x^{\left( i \right)}}} \right)} \right)} } \right] + \frac{\lambda }{{2m}}\sum\limits_{j = 1}^n {\theta _j^2} \]
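The regularized cost above can be sketched in NumPy. This is a minimal illustration, not code from the post; the variable names and the convention that `X` carries a leading column of ones (so `theta[0]` is the intercept $\theta_0$) are my assumptions. Note that the sum in the regularization term starts at $j = 1$, so `theta[0]` is excluded.

```python
import numpy as np

def sigmoid(z):
    """Logistic function 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y, lam):
    """Regularized logistic-regression cost J(theta).

    X: (m, n+1) design matrix with a leading column of ones.
    y: (m,) labels in {0, 1}.
    lam: regularization strength lambda.
    """
    m = len(y)
    h = sigmoid(X @ theta)
    # Cross-entropy term, averaged over the m examples.
    ce = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
    # Penalty (lambda / 2m) * sum_{j>=1} theta_j^2; theta_0 is not penalized.
    reg = lam / (2 * m) * np.sum(theta[1:] ** 2)
    return ce + reg
```

With `lam = 0` this reduces to the unregularized cost, so the two formulas above can be checked against each other directly.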

The gradient descent algorithm is then

Repeat {

\[{\theta _0}: = {\theta _0} - \alpha \left[ {\frac{1}{m}\sum\limits_{i = 1}^m {\left( {{h_\theta }\left( {{x^{\left( i \right)}}} \right) - {y^{\left( i \right)}}} \right)x_0^{\left( i \right)}} } \right]\]

\[{\theta _j}: = {\theta _j} - \alpha \left[ {\frac{1}{m}\sum\limits_{i = 1}^m {\left( {{h_\theta }\left( {{x^{\left( i \right)}}} \right) - {y^{\left( i \right)}}} \right)x_j^{\left( i \right)} + \frac{\lambda }{m}{\theta _j}} } \right]\left( {j = 1,2,...,n} \right)\]

}

(Note: although this update rule looks identical to the one for regularized linear regression, the hypothesis differs. In logistic regression $h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}}$, whereas in linear regression $h_\theta(x) = \theta^T x$, so they are different algorithms.)
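The update rule above, with its separate treatment of $\theta_0$, can be sketched as a single vectorized step. Again a minimal illustration under the same assumptions as before (leading column of ones in `X`, `theta[0]` is the intercept); the function name is mine, not from the post.

```python
import numpy as np

def sigmoid(z):
    """Logistic function 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

def gradient_step(theta, X, y, alpha, lam):
    """One simultaneous gradient descent update for all theta_j.

    Implements the two update formulas at once: theta_0 gets no
    regularization term, theta_1..theta_n get (lambda/m) * theta_j.
    """
    m = len(y)
    h = sigmoid(X @ theta)
    # (1/m) * sum_i (h(x^(i)) - y^(i)) * x_j^(i), for every j at once.
    grad = X.T @ (h - y) / m
    # Add the regularization term only for j = 1..n.
    grad[1:] += lam / m * theta[1:]
    return theta - alpha * grad
```

Calling `gradient_step` repeatedly inside a loop corresponds to the "Repeat { ... }" block above; the simultaneous vectorized update avoids mixing old and new parameter values within one iteration.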

posted @ 2018-10-28 18:32 by qkloveslife