[np-ml] Ridge Regression

Algorithm Description

Ridge regression uses the same linear regression model as ordinary least squares but adds a penalty on the L2-norm of the coefficients to the loss function. This is sometimes known as Tikhonov regularization.

In particular, the ridge model is the same as the ordinary least squares model:

$$y = bX + \epsilon$$

where $\epsilon \sim \mathcal{N}(0, \sigma^{2})$, except now the loss for the model is calculated as:

$$\mathcal{L} = \|y - bX\|_2^2 + \alpha \|b\|_2^2$$

The value of the model parameter $b$ that minimizes this loss (equivalently, the MAP estimate under a Gaussian prior on $b$) can be computed in closed form via the adjusted normal equation:

$$\hat{b}_{\text{ridge}} = (X^{\top} X + \alpha I)^{-1} X^{\top} y$$

where $(X^{\top} X + \alpha I)^{-1} X^{\top}$ is the Moore-Penrose pseudo-inverse adjusted for the L2 penalty on the model coefficients.
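Since the penalized loss is strictly convex for $\alpha > 0$, the closed-form solution can be sanity-checked numerically: any perturbation of it should increase the loss. A minimal sketch (the data and coefficients below are made up for illustration; note it uses the design-matrix convention $Xb$):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
alpha = 1.0

# Adjusted normal equation: b_hat = (X^T X + alpha I)^{-1} X^T y
b_hat = np.linalg.inv(X.T @ X + alpha * np.eye(X.shape[1])) @ X.T @ y

def ridge_loss(b):
    # L = ||y - Xb||_2^2 + alpha * ||b||_2^2
    return np.sum((y - X @ b) ** 2) + alpha * np.sum(b ** 2)

# b_hat should beat any random perturbation of itself
for _ in range(100):
    assert ridge_loss(b_hat) < ridge_loss(b_hat + 0.01 * rng.normal(size=3))
```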

Implementation

import numpy as np

def fit(X, y, fit_intercept=True, alpha=1):
    # Prepend a column of ones so the first coefficient acts as the intercept.
    # Note that with this formulation the intercept is penalized as well.
    if fit_intercept:
        X = np.c_[np.ones(X.shape[0]), X]
    # Adjusted normal equation: beta = (X^T X + alpha * I)^{-1} X^T y
    A = alpha * np.eye(X.shape[1])
    pseudo_inverse = np.linalg.inv(X.T @ X + A) @ X.T
    beta = pseudo_inverse @ y
    return beta
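A minimal usage sketch of the fit above on synthetic data (restated here as a self-contained `ridge_fit` so it runs on its own; the data and coefficients are made up). A small `alpha` nearly recovers the true coefficients, while a large `alpha` shrinks the coefficient vector toward zero:

```python
import numpy as np

# Self-contained restatement of the closed-form ridge fit
def ridge_fit(X, y, fit_intercept=True, alpha=1.0):
    if fit_intercept:
        X = np.c_[np.ones(X.shape[0]), X]
    A = alpha * np.eye(X.shape[1])
    return np.linalg.inv(X.T @ X + A) @ X.T @ y

# Synthetic data: y = 3 + 2*x1 - 1*x2 + noise
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))
y = 3.0 + X @ np.array([2.0, -1.0]) + 0.05 * rng.normal(size=200)

beta = ridge_fit(X, y, alpha=0.1)            # beta[0] is the intercept
beta_shrunk = ridge_fit(X, y, alpha=1000.0)  # heavy regularization

assert np.allclose(beta, [3.0, 2.0, -1.0], atol=0.1)
assert np.linalg.norm(beta_shrunk) < np.linalg.norm(beta)
```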


posted @ WrRan