Maximum Likelihood Estimation

Maximum likelihood estimation can be understood simply as follows: we have a set of data (independent and identically distributed, i.i.d.), and we posit a model intended to generate such data. Maximum likelihood estimation finds the parameters under which the model has the highest probability of producing exactly this data. This is a statistics problem.

Contrast with probability: in a probability problem the parameter \(\theta\) is known and we predict outcomes. For example, for a standard Gaussian \(X \sim N(0, 1)\) we know the exact density, so we can roughly anticipate what the model will produce. In a statistics problem the outcomes are known in advance: given, say, 10000 samples (assumed to follow some distribution, here a Gaussian), the goal is to estimate \(\mu\) and \(\sigma\) so that the assumed model generates the observed samples with the highest probability.

1. Definition of the Likelihood Function

The likelihood function is a function of the parameters of a statistical model, expressing the plausibility of those parameters; it is written \(L\). Given an observed output \(x\), the likelihood of the parameter \(\theta\), \(L(\theta|x)\), is numerically equal to the probability of the variable \(X\) taking the value \(x\) given \(\theta\):

\[L(\theta|x) = P(X=x|\theta) \]

In statistical learning we have \(N\) samples \(x_{1}, x_{2}, x_{3}, \dots, x_{N}\). Assuming they are mutually independent, the likelihood function is

\[L(\theta) = P(X_{1}=x_{1}, X_{2}=x_{2}, \dots, X_{N}=x_{N}|\theta) = \prod_{i=1}^{N}p(X_{i}=x_{i}|\theta) = \prod_{i=1}^{N}p(x_{i}|\theta) \]

The goal of maximum likelihood estimation is to solve for the \(\theta\) that maximizes \(L(\theta)\).
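
To make this concrete, here is a minimal numerical sketch in Python (assuming NumPy and SciPy are available; the data, sample size, and true parameters are made up for illustration): generate Gaussian samples and recover \(\mu\) and \(\sigma\) by maximizing the log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=10_000)  # "observed" samples

def neg_log_likelihood(params):
    mu, log_sigma = params           # optimize log(sigma) so sigma stays > 0
    sigma = np.exp(log_sigma)
    # -ln L = -sum_i ln p(x_i | mu, sigma) for the Gaussian density
    return -np.sum(-0.5 * np.log(2 * np.pi) - np.log(sigma)
                   - (x - mu) ** 2 / (2 * sigma ** 2))

res = minimize(neg_log_likelihood, x0=[0.0, 0.0])
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)  # should be close to the true 2.0 and 1.5
```

Optimizing over \(\log\sigma\) rather than \(\sigma\) is only a convenience to keep the scale parameter positive; it does not change the maximizer.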

2. Unbiasedness of the Maximum Likelihood Estimators

Here we use the one-dimensional Gaussian distribution to examine whether the estimators of \(\mu\) and \(\sigma^2\) are unbiased or biased. The one-dimensional Gaussian density is

\[f(x|\theta)=f(x|\mu, \sigma)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma ^2}} \]

and the maximum likelihood estimate is

\[MLE: \hat\theta = \underset {\theta}{\operatorname {arg\,max}}~\ln L(X|\mu, \sigma) \]

There are three cases.

(1) \(\sigma^{2}\) known, \(\mu\) unknown: find the maximum likelihood estimator \(\hat\mu\) of \(\mu\)

Likelihood function: \(L(X|\mu)=\prod_{i=1}^{N}p(x_{i}|\mu)=\prod_{i=1}^{N}\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x_{i}-\mu)^2}{2\sigma ^2}}\)

Taking logarithms on both sides: \(\ln L(X|\mu)=\ln\prod_{i=1}^{N}p(x_{i}|\mu)=-\frac{N}{2}\ln(2\pi)-N\ln\sigma-\frac{1}{2\sigma^2}\sum_{i=1}^{N}(x_{i}-\mu)^2\)

Differentiating both sides with respect to \(\mu\) and setting the derivative to zero:

\[\frac{d\ln L(X|\mu)}{d\mu}=\sum_{i=1}^{N}\frac{1}{\sigma^2}(x_{i}-\mu)=0 \\ \sum_{i=1}^{N}(x_{i}-\mu)=0 \rightarrow \sum_{i=1}^{N}x_{i}-N\mu=0 \\ \hat \mu = \frac{1}{N}\sum_{i=1}^{N}x_{i}= \overline{X} \]

Observe that when \(\sigma^{2}\) is known, the maximum likelihood estimator of \(\mu\) depends only on the samples, and \(\hat \mu\) is an unbiased estimator of \(\mu\):

\(E[\hat \mu]=E[\frac{1}{N}\sum_{i=1}^{N}x_{i}]=\frac{1}{N}\sum_{i=1}^{N}E[x_{i}]=\frac{1}{N}N\mu=\mu\)
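A quick empirical check (a sketch with arbitrary, made-up constants): averaging \(\hat\mu\) over many repeated experiments should land on the true \(\mu\).

```python
import numpy as np

rng = np.random.default_rng(1)
true_mu, sigma, N, trials = 3.0, 2.0, 50, 100_000
# each row is one experiment of N samples; mu_hat is the row mean
mu_hats = rng.normal(true_mu, sigma, size=(trials, N)).mean(axis=1)
print(mu_hats.mean())  # ~3.0, consistent with E[mu_hat] = mu
```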

(2) \(\mu\) known, \(\sigma^{2}\) unknown: find the maximum likelihood estimator \(\hat\sigma^{2}\) of \(\sigma^{2}\)

Likelihood function: \(L(X|\sigma^{2})=\prod_{i=1}^{N}p(x_{i}|\sigma^{2})=\prod_{i=1}^{N}\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x_{i}-\mu)^2}{2\sigma ^2}}\)

Taking logarithms on both sides: \(\ln L(X|\sigma^{2})=\ln\prod_{i=1}^{N}p(x_{i}|\sigma^{2})=-\frac{N}{2}\ln(2\pi)-N\ln\sigma-\frac{1}{2\sigma^2}\sum_{i=1}^{N}(x_{i}-\mu)^2\)

Differentiating both sides with respect to \(\sigma^{2}\) and setting the derivative to zero:

\[\frac{d\ln L(X|\sigma^{2})}{d\sigma^{2}}=-\frac{N}{2\sigma^{2}}+\frac{1}{2\sigma^{4}}\sum_{i=1}^{N}(x_{i}-\mu)^{2}=0 \\ \hat \sigma^{2} = \frac{1}{N}\sum_{i=1}^{N}(x_{i}-\mu)^2 \]

Observe that when \(\mu\) is known, the maximum likelihood estimator \(\hat \sigma^{2}\) depends on the samples and the known mean \(\mu\), and \(\hat \sigma^{2}\) is an unbiased estimator of \(\sigma^{2}\):

\(E[\hat \sigma^{2}]=E[\frac{1}{N}\sum_{i=1}^{N}(x_{i}-\mu)^{2}]=E[\frac{1}{N}\sum_{i=1}^{N}x_{i}^{2}-\frac{1}{N}\sum_{i=1}^{N}2x_{i}\mu+\frac{1}{N}\sum_{i=1}^{N}\mu^{2}] = E[\frac{1}{N}\sum_{i=1}^{N}x_{i}^{2}-2\mu^{2}+\mu^{2}] \\ = E[\frac{1}{N}\sum_{i=1}^{N}x_{i}^2-\mu^{2}] = \frac{1}{N}\sum_{i=1}^{N}(E(x_{i}^2)-E^{2}(x_{i})) = D(x_{i}) = \sigma^{2}\)
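The same kind of simulation (again with made-up constants) confirms that, with \(\mu\) known, the estimator averages to \(\sigma^{2}\):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma2, N, trials = 0.0, 4.0, 20, 100_000
samples = rng.normal(mu, np.sqrt(sigma2), size=(trials, N))
sigma2_hats = ((samples - mu) ** 2).mean(axis=1)  # uses the known mu
print(sigma2_hats.mean())  # ~4.0: unbiased when mu is known
```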

(3) Both \(\mu\) and \(\sigma^{2}\) unknown: find the maximum likelihood estimators \(\hat\mu\) and \(\hat\sigma^{2}\)

Likelihood function: \(L(X|\mu, \sigma^{2})=\prod_{i=1}^{N}p(x_{i}|\mu, \sigma^{2})=\prod_{i=1}^{N}\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x_{i}-\mu)^2}{2\sigma ^2}}\)

Taking logarithms on both sides: \(\ln L(X|\mu, \sigma^{2})=\ln\prod_{i=1}^{N}p(x_{i}|\mu, \sigma^{2})=-\frac{N}{2}\ln(2\pi)-N\ln\sigma-\frac{1}{2\sigma^2}\sum_{i=1}^{N}(x_{i}-\mu)^2\)

  • Differentiating both sides with respect to \(\mu\):

\[\frac{d\ln L(X|\mu)}{d\mu}=\sum_{i=1}^{N}\frac{1}{\sigma^2}(x_{i}-\mu)=0 \\ \sum_{i=1}^{N}(x_{i}-\mu)=0 \rightarrow \sum_{i=1}^{N}x_{i}-N\mu=0 \\ \hat \mu = \frac{1}{N}\sum_{i=1}^{N}x_{i}= \overline{X} \]

  • Differentiating both sides with respect to \(\sigma^{2}\):

\[\frac{d\ln L(X|\sigma^{2})}{d\sigma^{2}}=-\frac{N}{2\sigma^{2}}+\frac{1}{2\sigma^{4}}\sum_{i=1}^{N}(x_{i}-\hat\mu)^{2}=0 \\ \hat \sigma^{2} = \frac{1}{N}\sum_{i=1}^{N}(x_{i}-\hat \mu)^2 = \frac{1}{N}\sum_{i=1}^{N}(x_{i}-\overline X)^2 \]

Observe that the maximum likelihood estimator \(\hat \mu\) of \(\mu\) depends only on the samples (\(\sigma^{2}\) cancels in the computation), so \(\hat \mu\) is an unbiased estimator of \(\mu\):

\(E[\hat \mu]=E[\overline X]=E[\frac{1}{N}\sum_{i=1}^{N}x_{i}]=\frac{1}{N}\sum_{i=1}^{N}E[x_{i}]=\frac{1}{N}N\mu=\mu\)

However, the maximum likelihood estimator \(\hat \sigma^{2}\) of \(\sigma^{2}\) depends not only on the samples but also on \(\mu\); since \(\mu\) is unknown, it can only be replaced by the computed \(\hat \mu\). The calculation below shows that \(\hat \sigma^{2}\) is a biased estimator of \(\sigma^{2}\):

\[\begin{aligned} E[\hat \sigma^{2}] &= E[\frac{1}{N}\sum_{i=1}^{N}(x_{i}-\overline X)^{2}] = E[\frac{1}{N}\sum_{i=1}^{N}x_{i}^{2}-\frac{1}{N}\sum_{i=1}^{N}2x_{i}\overline X+\frac{1}{N}\sum_{i=1}^{N}\overline X^{2}] \\ & = E[\frac{1}{N}\sum_{i=1}^{N}x_{i}^{2}-2\overline X^{2}+\overline X^{2}] = E\{(\frac{1}{N}\sum_{i=1}^{N}x_{i}^2-\mu^{2})-(\overline X^{2}-\mu^{2})\} \\ & = \frac{1}{N}\sum_{i=1}^{N}[E(x_{i}^2)-E^{2}(x_{i})]-[E(\overline X^{2})-E^{2}(\overline X)] \quad (\text{since } E(x_{i})=E(\overline X)=\mu) \\ & = D(x_{i})-D(\overline X) = \sigma^{2}-\frac{\sigma^{2}}{N} =\frac{N-1}{N}\sigma^{2} \end{aligned} \]

Hence, when computing the sample variance \(S^{2}\), a correction factor is needed in front: \(S^{2}=\frac{N}{N-1}\hat \sigma^{2}=\frac{1}{N-1}\sum_{i=1}^{N}(x_{i}-\overline X)^{2}\), so that \(E[S^{2}]=\sigma^{2}\).
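
The bias and its correction are easy to see in simulation. The sketch below (arbitrary constants; NumPy's ddof argument selects the divisor) compares the \(1/N\) and \(1/(N-1)\) versions:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2, N, trials = 4.0, 10, 200_000
samples = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
mle_var = samples.var(axis=1, ddof=0)    # divides by N   (the MLE, biased)
corrected = samples.var(axis=1, ddof=1)  # divides by N-1 (Bessel-corrected)
print(mle_var.mean())    # ~ (N-1)/N * sigma2 = 3.6
print(corrected.mean())  # ~ sigma2 = 4.0
```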

3. Relationship Between Maximum Likelihood and Least Squares

When the noise is Gaussian, maximum likelihood and least squares give the same solution.

Suppose the model is a linear regression model with Gaussian noise.

The model is \(f_{\mathbf{w}}(\mathbf{x}) = \mathbf{x}\mathbf{w}^{T}\), with each observation \(y_{i} = x_{i}\mathbf{w}^{T}+\epsilon_{i}\). Assuming \(\epsilon_{i} \sim N(0, \sigma^{2})\), it follows that \(y_{i}|x_{i},\mathbf{w} \sim N(x_{i}\mathbf{w}^{T}, \sigma^{2})\).

Solving by the maximum likelihood approach derived above, the log-likelihood is: \(\ln L(w)=\ln\prod_{i=1}^{N}p(y_{i}|x_{i},w)=-\frac{N}{2}\ln(2\pi)-N\ln\sigma-\frac{1}{2\sigma^2}\sum_{i=1}^{N}(y_{i}-x_{i}w^{T})^2\)

Since the first two terms do not depend on \(w\), the problem simplifies to: \(\underset {w}{\operatorname {arg\,max}}~\ln L(w)=\underset {w}{\operatorname {arg\,max}}~{-\frac{1}{2\sigma^2}\sum_{i=1}^{N}(y_{i}-x_{i}w^{T})^2}=\underset {w}{\operatorname {arg\,min}}~\sum_{i=1}^{N}(y_{i}-x_{i}w^{T})^2\)

And this is exactly the least squares objective: \(\underset {w}{\operatorname {arg\,min}}~f(w)\), where \(f(w)=\sum_{i=1}^{N}(y_{i}-x_{i}w^{T})^2 = \vert\vert \mathbf{y}-\mathbf{X}\mathbf{w}^{T}\vert\vert_{2}^{2}\)
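
As a sanity check, here is a sketch (synthetic data with made-up dimensions and weights) showing that the maximizer of the Gaussian log-likelihood coincides with the ordinary least squares solution:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
N, d = 200, 3
X = rng.normal(size=(N, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.3, size=N)  # y_i = x_i w^T + eps_i

# least squares: minimize ||y - Xw||^2 in closed form
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# MLE: maximizing ln L(w) is minimizing sum (y_i - x_i w^T)^2
# (sigma^2 only scales the objective, so it drops out of the argmax)
neg_ll = lambda w: np.sum((y - X @ w) ** 2)
w_mle = minimize(neg_ll, x0=np.zeros(d)).x

print(np.allclose(w_ls, w_mle, atol=1e-4))  # True: the two estimates coincide
```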

posted @ 2021-09-20 20:45  harrytea