I. Supervised Learning
1. Regression Models
1.1 Linear regression model
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209104218360-1990744685.png)
Solving the model
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209104307997-1061552749.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209110445143-1557150122.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209110459762-1870436959.png)
Least squares
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209111106897-1021618395.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209111129238-2146261677.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209111230544-13557956.png)
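The closed-form least-squares solution above can be sketched in a few lines of NumPy (a minimal sketch; the normal-equation form w = (XᵀX)⁻¹Xᵀy assumes XᵀX is invertible, so `lstsq` is used here as the numerically safer solver):

```python
import numpy as np

def least_squares(X, y):
    """Closed-form least squares: solve (X^T X) w = X^T y.

    Prepends a bias column, so the returned vector is (intercept, weights...).
    """
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # add intercept column
    # lstsq is numerically safer than explicitly inverting X^T X
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

# Noise-free points on the line y = 1 + 2x are recovered exactly
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
w = least_squares(X, y)  # w ≈ [1.0, 2.0]
```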
Gradient descent
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209111505784-223124309.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209111517741-1291121875.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209111527504-448522935.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209111558626-633216696.png)
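The iterative update above can be sketched as follows (a minimal sketch; the learning rate 0.1 and step count are illustrative choices, not from the original notes):

```python
import numpy as np

def gd_linear_regression(X, y, lr=0.1, steps=1000):
    """Fit w, b for y ≈ Xw + b by gradient descent on mean squared error."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(steps):
        err = X @ w + b - y            # residuals
        w -= lr * (2 / n) * X.T @ err  # gradient of MSE w.r.t. w
        b -= lr * (2 / n) * err.sum()  # gradient of MSE w.r.t. b
    return w, b

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
w, b = gd_linear_regression(X, y)  # converges to w ≈ [2.0], b ≈ 1.0
```

On this tiny noise-free dataset the iterates approach the same solution the closed-form least-squares method gives directly.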
2. Classification Models
2.1 K-nearest neighbors (KNN)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209112947949-191634410.png)
Example
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209113124639-1742332840.png)
KNN distance computation
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209113206749-1454652605.png)
The KNN algorithm
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209113238951-1839889756.png)
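The algorithm above (Euclidean distance, majority vote among the k nearest neighbors) can be sketched as:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]              # indices of the k closest
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Two well-separated clusters, three points each
X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y_train = ["a", "a", "a", "b", "b", "b"]
label = knn_predict(X_train, y_train, np.array([0.5, 0.5]))  # "a"
```

Note that KNN has no training phase at all; every prediction scans the stored training set, which is why it is called a lazy learner.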
2.2 Logistic regression
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209114103757-1391110524.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209114114821-2102251815.png)
Logistic regression as a classification problem
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209114127217-2112350316.png)
The sigmoid (squashing) function
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209114414909-899404810.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209114425631-273556000.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209114435750-89828517.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209114449730-1215069706.png)
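A minimal sketch of the squashing function σ(z) = 1 / (1 + e⁻ᶻ) shown above:

```python
import numpy as np

def sigmoid(z):
    """Squash any real number into (0, 1); sigmoid(0) == 0.5."""
    return 1.0 / (1.0 + np.exp(-z))

# The curve is symmetric about 0.5: sigmoid(-z) == 1 - sigmoid(z)
```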
Logistic regression loss function
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209115003198-1307846907.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209115016352-1229147450.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209115027082-1240501611.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209115037205-1861968664.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209115048648-951995752.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209115102192-957224516.png)
Solving by gradient descent
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209115114846-844990943.png)
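Putting the pieces together, logistic regression trained by gradient descent on the cross-entropy loss can be sketched as below (the learning rate and step count are illustrative assumptions; the gradient of the cross-entropy loss with respect to the logits is simply the prediction error p − y):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, steps=2000):
    """Minimize cross-entropy loss by gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)    # predicted P(y = 1 | x)
        grad = p - y              # d(loss)/d(logits), one term per sample
        w -= lr * X.T @ grad / n
        b -= lr * grad.mean()
    return w, b

# Two linearly separable clusters on the number line
X = np.array([[-2.0], [-1.5], [-1.0], [1.0], [1.5], [2.0]])
y = np.array([0, 0, 0, 1, 1, 1])
w, b = train_logreg(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(int)  # matches y
```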
2.3 Decision trees
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209131204799-620408968.png)
Example
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209131338328-204630716.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209131518635-1224388698.png)
Decision trees and if-then rules
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209131616131-1690718607.png)
The objective of a decision tree
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209131708598-680815088.png)
Feature selection
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209131913351-1935473829.png)
Random variables
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209131953568-7991111.png)
Entropy
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209132208249-996365072.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209132226459-1632688094.png)
Example
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209132257235-1781212804.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209132311637-1859558438.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209132324460-736413694.png)
The objective of a decision tree
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209132529126-303431816.png)
Conditional entropy
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209132611277-1629962827.png)
Information gain
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209132651498-1234616660.png)
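The three quantities above (entropy, conditional entropy, information gain) can be sketched together; here gain(feature) = H(labels) − H(labels | feature), so a feature that splits the labels perfectly has gain equal to the full entropy:

```python
import math
from collections import Counter

def entropy(labels):
    """H = -sum p_i * log2(p_i) over the empirical label distribution."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Entropy of labels minus the conditional entropy given the feature."""
    n = len(labels)
    groups = {}
    for f, y in zip(feature, labels):
        groups.setdefault(f, []).append(y)
    # conditional entropy: weighted average of each subgroup's entropy
    cond = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - cond

labels = ["yes", "yes", "no", "no"]           # H(labels) = 1 bit
gain = information_gain(["a", "a", "b", "b"], labels)  # perfect split: 1.0
```

ID3-style tree generation repeatedly picks the feature with the largest information gain, splits the data on it, and recurses on each subset.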
Decision tree generation algorithm
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209132726947-1900439674.png)
II. Unsupervised Learning
1. Clustering: k-means
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209133002195-2025418156.png)
![](https://img2020.cnblogs.com/blog/1267983/202102/1267983-20210209133013707-598758931.png)
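The k-means procedure above (Lloyd's algorithm: alternate assigning points to the nearest center and moving each center to the mean of its points) can be sketched as follows; the random initialization and the empty-cluster fallback are implementation choices assumed here, not part of the original notes:

```python
import numpy as np

def kmeans(X, k, steps=100, seed=0):
    """Lloyd's algorithm: alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # random initial centers
    for _ in range(steps):
        # assign each point to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned points
        # (keep the old center if a cluster ends up empty)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):  # converged: centers stopped moving
            break
        centers = new
    return centers, labels

# Two well-separated clusters are recovered regardless of initialization
X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], float)
centers, labels = kmeans(X, 2)
```

Unlike the supervised models above, k-means never sees labels: the cluster assignments come purely from the geometry of the data.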
posted @ 2021-02-09 13:34 勤奋的园