Linear Regression and Maximum Likelihood Estimation
Imagination is an outcome of what you have learned. If you can imagine the world, that means you have learned what the world is about.
Actually, we don't know how we see, or at least it is really hard to know, so we cannot simply write a program that tells a machine how to see.
One of the most important parts of machine learning is to introspect how our brain learns subconsciously. If we cannot introspect it, it is fairly hard to replicate a brain.
Linear Models
Supervised learning of linear models can be divided into 2 phases:
- Training:
  - Read training data points with labels $\{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$, where $\mathbf{x}_i \in \mathbb{R}^d$;
  - Estimate the model parameters $\boldsymbol\theta$ by certain learning algorithms.
  Note: the parameters are the information the model has learned from the data.
- Prediction:
  - Read a new data point $\mathbf{x}_*$ without a label (typically one the model has never seen before);
  - Along with the parameters $\boldsymbol\theta$, estimate the unknown label $\hat{y}_*$.
1-D example:
First of all, we create a linear model:

$$\hat{y} = \theta_0 + \theta_1 x$$

Both $\theta_0$ (intercept) and $\theta_1$ (slope) are scalars in this case.
Then we, for example, take SSE (Sum of Squared Errors) as our objective / loss / cost / energy / error function[1]:

$$J(\boldsymbol\theta) = \sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 = \sum_{i=1}^{n}\left(y_i - \theta_0 - \theta_1 x_i\right)^2$$
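As a quick sanity check, here is a minimal NumPy sketch (with made-up 1-D data points) that fits this model by minimizing the SSE via `np.linalg.lstsq`:

```python
import numpy as np

# Hypothetical 1-D data; the column of ones models the intercept theta_0.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])
X = np.column_stack([np.ones_like(x), x])   # design matrix, shape (4, 2)

# Least-squares fit: this minimizes exactly the SSE defined above.
theta, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
sse = residuals[0] if residuals.size else np.sum((y - X @ theta) ** 2)
print("theta_0, theta_1 =", theta)
print("SSE =", sse)
```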
Linear Prediction Model
In general, each data point $\mathbf{x}$ has $d$ dimensions, and the corresponding number of parameters is $d+1$ (including the bias / intercept term).
The mathematical form of the linear model is:

$$\hat{y}_i = \theta_0 + \theta_1 x_{i1} + \theta_2 x_{i2} + \cdots + \theta_d x_{id}$$

The matrix form of the linear model is:

$$\begin{bmatrix}\hat{y}_1\\ \hat{y}_2\\ \vdots\\ \hat{y}_n\end{bmatrix} = \begin{bmatrix}1 & x_{11} & \cdots & x_{1d}\\ 1 & x_{21} & \cdots & x_{2d}\\ \vdots & \vdots & \ddots & \vdots\\ 1 & x_{n1} & \cdots & x_{nd}\end{bmatrix}\begin{bmatrix}\theta_0\\ \theta_1\\ \vdots\\ \theta_d\end{bmatrix}$$

Or in a more compact way:

$$\hat{\mathbf{y}} = X\boldsymbol\theta$$

Note that the matrix form is widely used not only because it's a concise way to represent the model, but also because it's straightforward to code in MATLAB or Python (NumPy).
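For instance, a minimal NumPy sketch of the compact matrix form (with made-up shapes and random values) might look like this:

```python
import numpy as np

# A minimal sketch of y_hat = X @ theta with made-up shapes and random values.
rng = np.random.default_rng(0)
n, d = 5, 3
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, d))])  # prepend bias column -> (n, d + 1)
theta = rng.normal(size=d + 1)                              # d + 1 parameters

y_hat = X @ theta                                           # all n predictions at once
print(y_hat.shape)                                          # (5,)
```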
Optimization Approach
In order to optimize the model prediction, we need to minimize the quadratic cost:

$$J(\boldsymbol\theta) = (\mathbf{y} - X\boldsymbol\theta)^T(\mathbf{y} - X\boldsymbol\theta)$$

by setting the derivative w.r.t. the vector $\boldsymbol\theta$ to zero, since the cost function is strictly convex and the domain of $\boldsymbol\theta$ is convex[2]:

$$\frac{\partial J(\boldsymbol\theta)}{\partial \boldsymbol\theta} = -2X^T(\mathbf{y} - X\boldsymbol\theta) = 0$$

So we get the analytical solution:

$$\hat{\boldsymbol\theta} = (X^TX)^{-1}X^T\mathbf{y}$$

Having gone through these steps, we can see that learning is just about adjusting the model parameters so as to minimize the objective function.
Thus, the prediction function can be rewritten as:

$$\hat{\mathbf{y}} = X\hat{\boldsymbol\theta} = X(X^TX)^{-1}X^T\mathbf{y} = H\mathbf{y}$$

where $H = X(X^TX)^{-1}X^T$ is called the hat matrix, because it "puts the hat" on $\mathbf{y}$.
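A hedged NumPy sketch of this analytical solution on simulated data (using `np.linalg.solve` instead of an explicit inverse for numerical stability) could look like:

```python
import numpy as np

# Sketch of the closed-form solution on simulated data (assumed, not from the notes).
rng = np.random.default_rng(0)
n, d = 100, 3
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, d))])
true_theta = np.array([1.0, 2.0, -3.0, 0.5])
y = X @ true_theta + rng.normal(scale=0.1, size=n)

# Solve the normal equations; solve() is numerically safer than an explicit inverse.
theta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Hat matrix H = X (X^T X)^{-1} X^T maps the observed y to the fitted y_hat.
H = X @ np.linalg.solve(X.T @ X, X.T)
print(np.allclose(H @ y, X @ theta_hat))   # True
```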
Multidimensional Label
So far we have been assuming $y_i$ to be a scalar. But what if the model has multiple outputs (e.g. $c$ outputs)? Simply align $c$ parameter vectors into a parameter matrix:

$$\hat{Y} = X\Theta, \quad \text{where } Y \in \mathbb{R}^{n\times c},\ \Theta \in \mathbb{R}^{(d+1)\times c}$$
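A short sketch (again with made-up data) showing that the same normal equations handle a label matrix, giving one least-squares parameter vector per output column:

```python
import numpy as np

# Sketch: with a label matrix Y of shape (n, c), the same normal equations give
# one least-squares parameter vector per output column (made-up data).
rng = np.random.default_rng(0)
n, d, c = 50, 3, 2
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, d))])
Y = rng.normal(size=(n, c))

Theta_hat = np.linalg.solve(X.T @ X, X.T @ Y)   # shape (d + 1, c)
print(Theta_hat.shape)
```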
Linear Regression with Maximum Likelihood
If we assume that each label $y_i$ is Gaussian distributed with mean $\mathbf{x}_i^T\boldsymbol\theta$ and variance $\sigma^2$:

$$y_i \sim \mathcal{N}\!\left(\mathbf{x}_i^T\boldsymbol\theta,\ \sigma^2\right), \quad \text{i.e.} \quad p(y_i \mid \mathbf{x}_i, \boldsymbol\theta, \sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(y_i - \mathbf{x}_i^T\boldsymbol\theta)^2}{2\sigma^2}\right)$$
Likelihood
With a reasonable i.i.d. assumption over the labels $y_{1:n}$, we can decompose the joint distribution of the likelihood:

$$p(\mathbf{y} \mid X, \boldsymbol\theta, \sigma^2) = \prod_{i=1}^{n} p(y_i \mid \mathbf{x}_i, \boldsymbol\theta, \sigma^2) = \prod_{i=1}^{n}\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(y_i - \mathbf{x}_i^T\boldsymbol\theta)^2}{2\sigma^2}\right)$$
Maximum Likelihood Estimation
Then our goal is to maximize the probability of the labels in our Gaussian linear regression model w.r.t. $\boldsymbol\theta$ and $\sigma^2$.
Instead of minimizing the SSE cost function (the length of the blue lines in the figure), this time we maximize the likelihood (the length of the green lines) to optimize the model parameters.
Since the $\log$ function is monotonic and simplifies the exponential, here we use the log-likelihood:

$$\log p(\mathbf{y} \mid X, \boldsymbol\theta, \sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - \mathbf{x}_i^T\boldsymbol\theta\right)^2$$
MLE of $\boldsymbol\theta$:

$$\frac{\partial \log p}{\partial \boldsymbol\theta} = \frac{1}{\sigma^2}X^T(\mathbf{y} - X\boldsymbol\theta) = 0 \quad\Rightarrow\quad \hat{\boldsymbol\theta}_{\text{ML}} = (X^TX)^{-1}X^T\mathbf{y}$$

There's no surprise that the maximum likelihood estimate is identical to that of the least-squares method.
Besides telling us where the "line" is, MLE with a Gaussian also gives us the uncertainty, or confidence, of the prediction as another parameter.
MLE of $\sigma^2$:

$$\frac{\partial \log p}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^{n}\left(y_i - \mathbf{x}_i^T\boldsymbol\theta\right)^2 = 0$$

Thus, we get:

$$\hat\sigma^2_{\text{ML}} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \mathbf{x}_i^T\hat{\boldsymbol\theta}\right)^2$$
which is the standard estimate of variance, or mean squared error (MSE).
However, this uncertainty estimator does not work very well. We'll see another uncertainty estimator later that is very powerful.
Again, we analytically obtain the optimal parameters for the model to describe labeled data points.
Prediction
Since we now have the optimal parameters of our linear regression model, making a prediction simply means taking the mean of the Gaussian at a new test data point $\mathbf{x}_*$:

$$\hat{y}_* = \mathbf{x}_*^T\hat{\boldsymbol\theta}$$

with uncertainty $\hat\sigma^2$.
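Putting the pieces together, here is a hedged NumPy sketch of the whole pipeline on simulated data (the values are assumptions for illustration): fit $\boldsymbol\theta$ by least squares, estimate $\sigma^2$ by MLE, then predict with uncertainty.

```python
import numpy as np

# Sketch of the Gaussian-MLE pipeline on simulated data (assumed values):
# MLE of theta (= least squares), MLE of sigma^2 (= MSE), then a prediction.
rng = np.random.default_rng(1)
n, d = 200, 2
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, d))])
true_theta, true_sigma = np.array([0.5, 2.0, -1.0]), 0.3
y = X @ true_theta + rng.normal(scale=true_sigma, size=n)

theta_hat = np.linalg.solve(X.T @ X, X.T @ y)      # MLE of theta
sigma2_hat = np.mean((y - X @ theta_hat) ** 2)     # MLE of sigma^2 (the MSE)

x_star = np.array([1.0, 0.2, -0.4])                # new test point, bias term included
y_star = x_star @ theta_hat                        # predictive mean
print(f"prediction = {y_star:.3f} +/- {np.sqrt(sigma2_hat):.3f}")
```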
Frequentist Learning
Maximum Likelihood Learning is part of frequentist learning.
Frequentist learning assumes there is a true parameter $\theta^*$ (a true model): if we had adequate data, we would be able to recover that truth. The core of learning in this case is to guess / estimate / learn the parameter w.r.t. the true model given a finite number of training data points.
Maximum likelihood essentially tries to approximate the model parameter by maximizing the likelihood (the joint probability of the data given the parameter), i.e.

$$\hat{\boldsymbol\theta}_{\text{ML}} = \arg\max_{\boldsymbol\theta}\ p(y_{1:n} \mid \mathbf{x}_{1:n}, \boldsymbol\theta)$$

Given data points $\mathbf{x}_{1:n}$ with corresponding labels $y_{1:n}$, we choose the value of the model parameter $\boldsymbol\theta$ that is most probable to have generated such data.
Also note that frequentist learning relies on the Law of Large Numbers.
KL Divergence and MLE
Given the i.i.d. assumption on data $x_{1:n}$ drawn from the true distribution $p(x \mid \theta^*)$, the MLE is:

$$\hat\theta = \arg\max_{\theta} \prod_{i=1}^{n} p(x_i \mid \theta) = \arg\max_{\theta} \sum_{i=1}^{n} \log p(x_i \mid \theta)$$

Then we add the constant value $-\sum_{i=1}^{n}\log p(x_i \mid \theta^*)$ onto the equation (it does not depend on $\theta$, so the maximizer is unchanged) and then divide by the constant number $n$:

$$\hat\theta = \arg\max_{\theta} \frac{1}{n}\sum_{i=1}^{n}\log\frac{p(x_i \mid \theta)}{p(x_i \mid \theta^*)}$$

Recall the Law of Large Numbers, that is: as $n \to \infty$,

$$\frac{1}{n}\sum_{i=1}^{n} f(x_i) \to \int p(x \mid \theta^*)\, f(x)\, dx$$

where $x_i$ is simulated from $p(x \mid \theta^*)$.
Again, we know from frequentist learning that each data point $x_i \sim p(x \mid \theta^*)$. Hence, as $n$ goes to $\infty$, the MLE of $\theta$ becomes

$$\hat\theta \to \arg\max_{\theta} \int p(x \mid \theta^*)\log\frac{p(x \mid \theta)}{p(x \mid \theta^*)}\, dx = \arg\min_{\theta} \mathrm{KL}\!\left(p(x \mid \theta^*)\,\big\|\, p(x \mid \theta)\right)$$
Therefore, maximizing likelihood is equivalent to minimizing KL divergence.
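To make the equivalence concrete, here is a small NumPy sketch under an assumed setup: for 1-D Gaussian data with known variance 1, the MLE of the mean is the sample mean, and the closed-form KL divergence to the true distribution, $\mathrm{KL} = (\mu^* - \hat\mu)^2/2$, shrinks as $n$ grows.

```python
import numpy as np

# Toy illustration under an assumed setup: 1-D Gaussian data with known variance 1.
# The MLE of the mean is the sample mean, and KL(N(mu*,1) || N(mu_hat,1)) = (mu* - mu_hat)^2 / 2,
# so the KL divergence shrinks toward 0 as n grows.
rng = np.random.default_rng(0)
mu_true = 2.0
for n in (10, 100, 10_000):
    x = rng.normal(loc=mu_true, scale=1.0, size=n)
    mu_hat = x.mean()                              # MLE of the mean
    kl = 0.5 * (mu_true - mu_hat) ** 2             # closed-form KL between the two Gaussians
    print(f"n={n:6d}  mu_hat={mu_hat:.4f}  KL={kl:.6f}")
```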
Entropy and MLE
In the last part, we get

$$\mathrm{KL}\!\left(p(x \mid \theta^*)\,\big\|\, p(x \mid \theta)\right) = \int p(x \mid \theta^*)\log p(x \mid \theta^*)\, dx - \int p(x \mid \theta^*)\log p(x \mid \theta)\, dx$$

The first integral in the equation above is the negative entropy w.r.t. the true parameter $\theta^*$, i.e. the information in the world, while the second integral is the negative cross entropy w.r.t. the model parameter $\theta$ and the true parameter $\theta^*$, i.e. the information from the model. The equation says: if the information from the model matches the information in the world (the KL divergence is zero), then the model has learned!
Statistical Quantities of Frequentist Learning
There are 2 quantities that frequentists often estimate (standard definitions are sketched below):
- bias
- variance
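For reference, a sketch of the standard definitions (not spelled out in the original notes): for an estimator $\hat\theta$ of the true parameter $\theta^*$, with expectations taken over datasets drawn from the true model,

$$\mathrm{bias}(\hat\theta) = \mathbb{E}\big[\hat\theta\big] - \theta^*, \qquad \mathrm{Var}(\hat\theta) = \mathbb{E}\Big[\big(\hat\theta - \mathbb{E}[\hat\theta]\big)^2\Big]$$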
Reference: CPSC 540, UBC