10_Machine Learning_Regularized Linear Regression and Classification

Regularized linear regression

How do we decide which parameters need to be regularized?

A simpler model is less likely to overfit, so we add a penalty on large parameters to the cost function:

\[ J(\vec{w},b)=\frac{1}{2m}\sum\limits_{i=1}^{m}\left(f_{\vec{w},b}(\vec{x}^{(i)})-y^{(i)}\right)^2+\frac{\lambda}{2m}\sum\limits_{j=1}^{n}w_j^2 \]

where $\frac{\lambda}{2m}\sum\limits_{j=1}^{n}w_j^2$ is the regularization term.

Sometimes a $\frac{\lambda}{2m}b^2$ term is also included, but its influence is small in practice, so we usually regularize only the $w_j$ and leave $b$ alone.

Minimizing this cost pursues two goals at once:

- minimize the mean squared error: fit the data
- minimize the regularization term: keep the $w_j$ small

The $\lambda$ we choose is essential: if it is too big the model underfits, if it is too small the model overfits.

so we need to choose a good $\lambda $ to balance both goals.
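A minimal NumPy sketch of this regularized cost (the function and variable names here are my own, not from the course):

```python
import numpy as np

def regularized_linear_cost(X, y, w, b, lambda_):
    """Squared-error cost plus the (lambda / 2m) * sum(w_j^2) regularization term."""
    m = X.shape[0]
    predictions = X @ w + b                           # f_{w,b}(x^(i)) for every example
    mse_term = np.sum((predictions - y) ** 2) / (2 * m)
    reg_term = (lambda_ / (2 * m)) * np.sum(w ** 2)   # note: b is not regularized
    return mse_term + reg_term
```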

Gradient descent for regularized linear regression

In the gradient descent update, the effect of the regularization term is just to shrink each $w_j$ a little on every iteration.

The update rules (repeated simultaneously until convergence):

\[ w_j := w_j-\alpha\left[\frac{1}{m}\sum\limits_{i=1}^{m}\left(f_{\vec{w},b}(\vec{x}^{(i)})-y^{(i)}\right)x_j^{(i)}+\frac{\lambda}{m}w_j\right] \]

\[ b := b-\alpha\,\frac{1}{m}\sum\limits_{i=1}^{m}\left(f_{\vec{w},b}(\vec{x}^{(i)})-y^{(i)}\right) \]
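One gradient descent step implementing these updates might look like the sketch below (again, the names are mine):

```python
import numpy as np

def regularized_gradient_step(X, y, w, b, alpha, lambda_):
    """One gradient descent step for regularized linear regression."""
    m = X.shape[0]
    err = (X @ w + b) - y                         # prediction error for each example
    dj_dw = (X.T @ err) / m + (lambda_ / m) * w   # dJ/dw_j, including the shrinkage term
    dj_db = np.sum(err) / m                       # b is not regularized
    return w - alpha * dj_dw, b - alpha * dj_db
```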

Regularized logistic regression

The cost is regularized just like in linear regression: add the same $\frac{\lambda}{2m}\sum\limits_{j=1}^{n}w_j^2$ term to the logistic cost.

The gradient descent updates for the two look exactly the same.

**But remember that $f_{\vec{w},b}(\vec{x}^{(i)})$ is different: for logistic regression it is the sigmoid $\frac{1}{1+e^{-(\vec{w}\cdot\vec{x}^{(i)}+b)}}$, not the linear model.**
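A sketch of the same update step for logistic regression, assuming the hypothetical helper names below; only the prediction line changes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def regularized_logistic_gradient_step(X, y, w, b, alpha, lambda_):
    """Same update shape as linear regression; only f (the prediction) changes."""
    m = X.shape[0]
    err = sigmoid(X @ w + b) - y                  # the only line that differs from linear regression
    dj_dw = (X.T @ err) / m + (lambda_ / m) * w
    dj_db = np.sum(err) / m
    return w - alpha * dj_dw, b - alpha * dj_db
```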
