Formula Derivations and Algorithm Implementations for Linear Models in *Machine Learning*

Linear regression

Based on the linear regression section of *Machine Learning* (the "watermelon book").

Given a training set \(D=\{(\boldsymbol x_1, y_1), (\boldsymbol x_2, y_2), \ldots, (\boldsymbol x_n, y_n)\}\), where \(\boldsymbol {x_i} = (x_{i1};x_{i2}; \ldots; x_{id})\) and \(y_i\in \mathbb{R}\), linear regression tries to learn a linear model that predicts the real-valued output label as accurately as possible.

We first use univariate linear regression to walk through the least-squares estimation of \(w\) and \(b\) in detail.

Linear regression tries to learn: $$f(x_i)=wx_i+b, \quad \text{such that } f(x_i)\simeq y_i$$

  • The least-squares method estimates the parameters \(w\) and \(b\) by minimizing the squared error between the predicted and true values, i.e.:

    \[\eqalign{ (w^*,b^*) & = \mathop {\arg \min }\limits_{(w,b)} \sum\limits_{i = 1}^n {{{(f({x_i}) - {y_i})}^2}} \cr & = \mathop {\arg \min }\limits_{(w,b)} \sum\limits_{i = 1}^n {{{({y_i} - w{x_i} - b)}^2}} \cr}\]

  • The process of finding the \(w\) and \(b\) that minimize \({E_{(w,b)}} = \sum\limits_{i = 1}^n {{{({y_i} - w{x_i} - b)}^2}}\) is called least-squares "parameter estimation" of the linear regression model.

  • Taking the derivative of \(E_{(w,b)}\) with respect to \(w\) and \(b\) respectively gives

    \[\eqalign{ & {{\partial E(w,b)} \over {\partial w}} = 2\left( {w\sum\limits_{i = 1}^n {x_i^2 - \sum\limits_{i = 1}^n {({y_i} - b){x_i}} } } \right) \cr & {{\partial E(w,b)} \over {\partial b}} = 2\left( {nb - \sum\limits_{i = 1}^n {({y_i} - w{x_i})} } \right) \cr}\]

  • Setting both derivatives to zero yields the closed-form solution for the optimal \(w\) and \(b\):

    \[\eqalign{ w & = {{\sum\limits_{i = 1}^n {{y_i}({x_i} - \bar x)} } \over {\sum\limits_{i = 1}^n {x_i^2 - {\textstyle{1 \over n}}{{\left( {\sum\limits_{i = 1}^n {{x_i}} } \right)}^2}} }} \cr b & = {\textstyle{1 \over n}}\sum\limits_{i = 1}^n {({y_i} - w{x_i})} \cr}\]

    where $$\bar x = {\textstyle{1 \over n}}\sum\limits_{i = 1}^n {{x_i}} $$ is the mean of \(x\).
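The closed-form expressions above translate directly into NumPy. This is a minimal sketch on made-up toy data (the variable names and values are mine, not from the book):

```python
import numpy as np

# toy data roughly following y = 2x + 1 (illustrative values)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.1, 4.9, 7.2, 9.0, 10.8])

n = len(x)
x_bar = x.mean()  # mean of x
# w = sum(y_i * (x_i - x_bar)) / (sum(x_i^2) - (sum x_i)^2 / n)
w = np.sum(y * (x - x_bar)) / (np.sum(x ** 2) - np.sum(x) ** 2 / n)
# b = (1/n) * sum(y_i - w * x_i)
b = np.mean(y - w * x)
print(w, b)  # ≈ 1.95 1.15
```

The result agrees with `np.polyfit(x, y, 1)`, which solves the same least-squares problem.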

The more general case

Given a dataset D whose samples are described by d attributes, we now try to learn

\(f(\boldsymbol {x_i}) = \boldsymbol {w^T}\boldsymbol {x_i} + b\), such that \(f(\boldsymbol{x_i}) \simeq {y_i}\)

This is called "multivariate linear regression".

Similarly, \(\boldsymbol w\) and \(b\) can be estimated by least squares. For convenience, we absorb \(\boldsymbol w\) and \(b\) into a single vector \(\hat {\boldsymbol w} = (\boldsymbol w;b)\).

Correspondingly, the dataset D is represented as an \(m \times (d+1)\) matrix \({\bf{X}}\), where each row corresponds to one sample: the first d elements of the row are the sample's d attribute values, and the last element is fixed to 1, i.e.

\[{\bf{X}} = \left( {\matrix{ {{x_{11}}} & \cdots & {{x_{1d}}} & 1 \cr {{x_{21}}} & \cdots & {{x_{2d}}} & 1 \cr \vdots & \ddots & \vdots & \vdots \cr {{x_{m1}}} & \cdots & {{x_{md}}} & 1 \cr } } \right) = \left( {\matrix{ {\boldsymbol x_1^T} & 1 \cr {\boldsymbol x_2^T} & 1 \cr \vdots & \vdots \cr {\boldsymbol x_m^T} & 1 \cr } } \right)\]

Writing the labels in vector form as well, \(\boldsymbol{y}=(y_1;y_2;\ldots;y_m)\), we have

\[\hat {\boldsymbol {w}^*} = \mathop {\arg \min }\limits_{\hat w} \boldsymbol{(y - X\hat w)^T}\boldsymbol {(y - X\hat w)} \]

Let \({E_{\hat w}} = {(\boldsymbol y - {\bf{X}}\boldsymbol {\hat w})^T}(\boldsymbol y - {\bf{X}}\boldsymbol {\hat w})\). Taking the derivative with respect to \(\hat {\boldsymbol w}\) gives:

\[{{\partial {E_{\hat w}}} \over {\partial \hat {\boldsymbol w}}} = 2{{\bf{X}}^T}({\bf{X}}\hat w - \boldsymbol y) \]

Setting this to zero yields the closed-form solution for the optimal \(\hat {w}\).

When \({{\bf{X}}^T}{\bf{X}}\) is a full-rank or positive definite matrix, setting the derivative to zero gives:

\[\boldsymbol {\hat w^*} = {({{\bf{X}}^T}{\bf{X}})^{ - 1}}{{\bf{X}}^T}\boldsymbol {y} \]

Letting \(\boldsymbol {\hat x_i} = (\boldsymbol{x_i};1)\), the final learned multivariate linear regression model is

\[f({\boldsymbol{\hat x_i}}) = \boldsymbol {\hat x_i}^T{({{\bf{X}}^T}{\bf{X}})^{ - 1}}{{\bf{X}}^T}\boldsymbol y \]
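The multivariate closed form is just as short in NumPy. Below is a sketch on synthetic, noise-free data (all names and numbers are mine); `np.linalg.solve` is used instead of an explicit matrix inverse, which is the numerically preferred way to evaluate \((X^TX)^{-1}X^Ty\):

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 50, 3
X = rng.normal(size=(m, d))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 3.0                 # true intercept b = 3.0, no noise for clarity

X_hat = np.c_[X, np.ones(m)]         # append the constant-1 column
# solve (X^T X) w_hat = X^T y, i.e. w_hat = (X^T X)^{-1} X^T y
w_hat = np.linalg.solve(X_hat.T @ X_hat, X_hat.T @ y)
print(w_hat)                         # ≈ [ 1.5 -2.   0.5  3. ]
```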

Generalized linear models

Linear regression assumes the mapping from input space to output space is linear, but many real-world problems are nonlinear. To broaden its applicability, we can pass the linear prediction through a nonlinear transformation to approximate the true label; the resulting family of models is collectively called generalized linear models:

\[y = {g^{ - 1}}(\boldsymbol{w^T}\boldsymbol x + b) \]

where the function \(g(\cdot)\) is called the "link function"; it should be continuous and sufficiently smooth.

When the link function is \(g(\cdot)=\ln(\cdot)\), the model is called log-linear regression, i.e.

\[\ln y= \boldsymbol{w^T}\boldsymbol x + b \\ y = e^{\boldsymbol{w^T}\boldsymbol x + b}\]
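In code, log-linear regression amounts to running ordinary least squares on \(\ln y\). A minimal sketch on synthetic data (the parameter values 0.8 and 0.3 are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 2.0, size=100)
y = np.exp(0.8 * x + 0.3)        # noise-free data from y = e^{w x + b}

# fit ln y = w x + b by ordinary least squares
w, b = np.polyfit(x, np.log(y), 1)
print(w, b)                      # ≈ 0.8 0.3
```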

Logistic regression

Everything introduced so far handles regression tasks. What if we want to do classification instead?

Under the generalized-linear-model framework above, a classification task amounts to finding a suitable function \(g(\cdot)\). To simplify matters, consider binary classification first, with output labels \(y \in \{ 0,1\}\). The linear model produces a real-valued prediction \(z=\boldsymbol{w^T}\boldsymbol {x} + b\), so we need to convert the real value z into a 0/1 label. The ideal choice is the "unit step function":

\[y=\begin{cases} 0,&z<0 \\ 0.5, &z=0 \\ 1,&z>0 \end{cases} \]

That is, a prediction z greater than zero is judged positive, less than zero negative, and a prediction exactly at the critical value zero may be assigned arbitrarily, as shown in the figure.

[Figure: the unit step function and the logistic function]

The figure shows that the unit step function is discontinuous and therefore cannot be used directly as the link function \(g(\cdot)\). We thus look for a surrogate function that approximates the unit step to some extent while being monotone and differentiable around the critical point.

The logistic function is exactly such a commonly used surrogate:

\[y = \frac{1}{{1 + {e^{ - (\boldsymbol{w^T}\boldsymbol x + b)}}}} \]

As the figure shows, its graph is S-shaped; functions with this shape are called "sigmoid functions", and the logistic function is their most important representative.
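A quick numerical check (not from the original post) of how the logistic function approximates the unit step while remaining smooth:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

print(sigmoid(0.0))    # 0.5 at the critical point, like the step function
print(sigmoid(-10.0))  # ≈ 0: strongly negative z maps to class 0
print(sigmoid(10.0))   # ≈ 1: strongly positive z maps to class 1
```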

A model that uses the logistic function to approximate the log-odds of the true label in this way is called "logit regression", more commonly "logistic regression". Despite the word "regression" in its name, it is actually a classification method.

Logistic regression has several advantages:

  • 1. It directly models the class probability, treating y as the probability that sample x is positive;
  • 2. It requires no prior assumption about the data distribution, avoiding problems caused by an inaccurate distributional assumption;
  • 3. Its objective is a convex function with derivatives of every order, so existing numerical optimization algorithms can be applied directly to find the optimal solution.

Determining the parameters \(\boldsymbol w\) and \(b\)

Treat y as the probability that sample \(\boldsymbol x\) is positive. Then clearly:

\[p(y=1|\boldsymbol x) = \frac{{{e^{\boldsymbol {w^T}\boldsymbol x + b}}}}{{1 + {e^{\boldsymbol {w^T}\boldsymbol x + b}}}} \]

\[p(y=0|\boldsymbol x) = \frac{1}{{1 + {e^{\boldsymbol {w^T}\boldsymbol x + b}}}} \]
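Dividing these two probabilities shows where the name "log-odds" comes from: \(\ln \frac{p(y=1|\boldsymbol x)}{p(y=0|\boldsymbol x)} = \boldsymbol w^T\boldsymbol x + b\). A small numerical check (the weights and input below are arbitrary values of mine):

```python
import numpy as np

w = np.array([0.7, -1.2])
b = 0.4
x = np.array([2.0, 1.0])

z = w @ x + b                     # linear predictor w^T x + b
p1 = np.exp(z) / (1 + np.exp(z))  # p(y=1|x)
p0 = 1 / (1 + np.exp(z))          # p(y=0|x)

print(np.log(p1 / p0), z)         # the log-odds equals z (= 0.6 here)
```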

The parameters \(\boldsymbol w\) and \(b\) can then be estimated by "maximum likelihood". Given a dataset \(\{ (\boldsymbol {x_i},{y_i})\} _{i = 1}^m\), the "log-likelihood" is:

\[l(\boldsymbol w,b) = \sum\limits_{i = 1}^m {\ln p({y_i}|\boldsymbol {x_i};\boldsymbol w,b)} \]

For convenience, let

\[\eqalign{ & \boldsymbol \beta = (\boldsymbol w;b),\quad \hat {\boldsymbol x} = (\boldsymbol x;1) \cr & {\boldsymbol \beta ^T}\hat {\boldsymbol x} = \boldsymbol{w^T}\boldsymbol x + b \cr & {p_1}(\boldsymbol {\hat x};\boldsymbol \beta ) = p(y = 1|\boldsymbol {\hat x};\boldsymbol \beta ) \cr & {p_0}(\boldsymbol {\hat x};\boldsymbol \beta ) = p(y = 0|\boldsymbol {\hat x};\boldsymbol \beta ) \cr} \]

Then the likelihood term can be rewritten as:

\[p({y_i}|\boldsymbol {x_i};\boldsymbol w,b) = {y_i}{p_1}({\boldsymbol {\hat x_i}};\boldsymbol \beta ) + (1 - {y_i}){p_0}({\boldsymbol {\hat x_i}};\boldsymbol \beta ) \]

Substituting this into the log-likelihood:

\[\eqalign{ l(\beta ) & = \sum\limits_{i = 1}^m {\ln \left( {{y_i}{p_1}({{\hat x}_i};\beta ) + (1 - {y_i}){p_0}({{\hat x}_i};\beta )} \right)} \cr & = \sum\limits_{i = 1}^m {\ln \left( {{y_i}\frac{{{e^{{\beta ^T}{{\hat x}_i}}}}}{{1 + {e^{{\beta ^T}{{\hat x}_i}}}}} + (1 - {y_i})\frac{1}{{1 + {e^{{\beta ^T}{{\hat x}_i}}}}}} \right)} \cr & = \sum\limits_{i = 1}^m {\ln \left( {\frac{{{y_i}{e^{{\beta ^T}{{\hat x}_i}}} + (1 - {y_i})}}{{1 + {e^{{\beta ^T}{{\hat x}_i}}}}}} \right)} \cr & = \sum\limits_{i = 1}^m {\ln \left( {\frac{{{y_i}({e^{{\beta ^T}{{\hat x}_i}}} - 1) + 1}}{{1 + {e^{{\beta ^T}{{\hat x}_i}}}}}} \right)} \cr & = \sum\limits_{i = 1}^m {\left( {\ln ({y_i}({e^{{\beta ^T}{{\hat x}_i}}} - 1) + 1) - \ln (1 + {e^{{\beta ^T}{{\hat x}_i}}})} \right)} \cr & = \sum\limits_{i = 1}^m {\begin{cases} - \ln (1 + {e^{{\beta ^T}{{\hat x}_i}}}), & {y_i} = 0 \\ {\beta ^T}{{\hat x}_i} - \ln (1 + {e^{{\beta ^T}{{\hat x}_i}}}), & {y_i} = 1 \end{cases}} \cr} \]

Since \(y_i \in \{0, 1\}\), maximizing \(l(\boldsymbol w,b)\) is equivalent to minimizing

\[l(\boldsymbol \beta ) = \sum\limits_{i = 1}^m {( - {y_i}\boldsymbol{\beta ^T}\boldsymbol {\hat x_i} + \ln (1 + {e^{{\beta ^T}{{\hat x}_i}}}))} \]
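This loss is straightforward to write in NumPy. A sketch (names are mine): `np.logaddexp(0, z)` computes \(\ln(1+e^z)\) without the overflow that the naive formula suffers from for large z.

```python
import numpy as np

def logistic_loss(beta, X_hat, y):
    """l(beta) = sum_i ( -y_i * beta^T x_hat_i + ln(1 + e^{beta^T x_hat_i}) )."""
    z = X_hat @ beta
    return np.sum(-y * z + np.logaddexp(0.0, z))

# tiny example with two samples; the last column of X_hat is the constant 1
X_hat = np.array([[1.0, 1.0],
                  [2.0, 1.0]])
y = np.array([1.0, 0.0])
print(logistic_loss(np.zeros(2), X_hat, y))  # 2*ln(2) ≈ 1.386 at beta = 0
```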

This can then be solved with numerical optimization algorithms, i.e.

\[\beta ^* = \mathop {\arg \min }\limits_\beta l(\beta ) \]

Logistic regression: code implementation

The model:

\[y = \frac{1}{{1 + {e^{ - z}}}} \]

The loss function (to be minimized):

\[l(\boldsymbol \beta ) = \sum\limits_{i = 1}^m {( - {y_i}\boldsymbol{\beta ^T}\boldsymbol {\hat x_i} + \ln (1 + {e^{{\beta ^T}{{\hat x}_i}}}))} \]

\[\beta ^* = \mathop {\arg \min }\limits_\beta l(\beta ) \]

Take Newton's method as an example: the update for the solution at iteration \(t+1\) is

\[\boldsymbol \beta ^{t+1}=\boldsymbol \beta ^t-\left ( \frac{\partial ^2l(\boldsymbol \beta)}{\partial {\boldsymbol \beta}\partial {\boldsymbol \beta} ^T} \right )^{-1}\frac{\partial l(\boldsymbol \beta)}{\partial \boldsymbol \beta} \]

where the first- and second-order derivatives with respect to \(\boldsymbol \beta\) are

\[\frac{\partial l(\beta)}{\partial \beta} = -\sum ^m_{i=1}\hat x_i(y_i-p_1(\hat x_i;\beta)) \]

\[\frac{\partial ^2l(\beta)}{\partial \beta\partial \beta ^T}=\sum ^m_{i=1}\hat x_i\hat x_i^Tp_1(\hat x_i;\beta)(1-p_1(\hat x_i; \beta)) \]

import numpy as np
import pandas as pd

data = pd.read_csv("../data/irisdata.txt")
# Keep only two of the three classes for a binary classification task
data = data[data['name'] != 'Iris-setosa']
data['name'].value_counts()
# Iris-versicolor    50
# Iris-virginica     50
# Name: name, dtype: int64

# Separate the labels and map them to 0/1
y = data['name'].map({'Iris-versicolor': 1, 'Iris-virginica': 0})
X = data.drop('name', axis=1)

# Split into training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5)

class LogisticRegressionClassifier:
    def __init__(self, max_iter):
        self.max_iter = max_iter

    def sigmoid(self, z):
        return 1 / (1 + np.exp(-z))

    def fit(self, X, y):
        X_hat = np.c_[X, np.ones(X.shape[0])]                # append a constant-1 column
        self.beta = np.random.normal(size=X_hat.shape[1])    # one parameter vector beta = (w; b)
        print(self.loss(X_hat, y))                           # loss before training
        for _ in range(self.max_iter):                       # Newton's method iterations
            p1 = self.sigmoid(X_hat @ self.beta)             # p(y=1|x) for every sample
            grad = -X_hat.T @ (y - p1)                       # first derivative of l(beta)
            hess = (X_hat * (p1 * (1 - p1))[:, None]).T @ X_hat  # second derivative (Hessian)
            self.beta = self.beta - np.linalg.solve(hess, grad)  # Newton update
        print(self.loss(X_hat, y))                           # loss after training

    def loss(self, X_hat, y):
        # l(beta) = sum_i (-y_i * beta^T x_hat_i + ln(1 + e^{beta^T x_hat_i}))
        z = X_hat @ self.beta
        return np.sum(-y * z + np.log(1 + np.exp(z)))

    def predict(self, X):
        X_hat = np.c_[X, np.ones(X.shape[0])]
        # classify as positive when p(y=1|x) >= 0.5
        return (self.sigmoid(X_hat @ self.beta) >= 0.5).astype(int)

clf = LogisticRegressionClassifier(10000)
clf.fit(X_train.values, y_train.values)
# prints the loss before and after training
y_pred = clf.predict(X_test.values)
# Accuracy on the test set
sum(y_pred == y_test) / len(y_test)
# e.g. 0.96 (depends on the random split)
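As a sanity check (not part of the original post), the same binary task can be handed to scikit-learn's LogisticRegression, which should reach a comparable accuracy on a versicolor-vs-virginica split:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# drop class 0 (setosa); classes 1 and 2 are versicolor and virginica
iris = load_iris()
mask = iris.target != 0
X, y = iris.data[mask], iris.target[mask]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # typically in the 0.9-0.96 range
```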

posted @ 2019-07-05 09:40  HeoLis