I KKT

1: [https://www.cnblogs.com/liaohuiqiang/p/7805954.html]

2: [https://www.zhihu.com/search?type=content&q=kkt]
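
For quick reference, the standard statement both links build up to: for minimizing f(x) subject to g_i(x) ≤ 0 and h_j(x) = 0, a point x* with multipliers μ, λ satisfies the KKT conditions below (textbook material).

```latex
% KKT conditions for: minimize f(x)  s.t.  g_i(x) <= 0,  h_j(x) = 0
\begin{aligned}
\nabla f(x^\ast) + \sum_i \mu_i \nabla g_i(x^\ast)
  + \sum_j \lambda_j \nabla h_j(x^\ast) &= 0 &&\text{(stationarity)} \\
g_i(x^\ast) \le 0, \qquad h_j(x^\ast) &= 0 &&\text{(primal feasibility)} \\
\mu_i &\ge 0 &&\text{(dual feasibility)} \\
\mu_i \, g_i(x^\ast) &= 0 &&\text{(complementary slackness)}
\end{aligned}
```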

II PCA

1: [http://blog.codinglabs.org/articles/pca-tutorial.html]
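
The tutorial derives PCA as an eigendecomposition of the covariance matrix; a minimal numpy sketch of that procedure (function and variable names are mine):

```python
import numpy as np

def pca(X, k):
    """Project n samples (rows of X) onto the top-k principal components."""
    X_centered = X - X.mean(axis=0)              # zero-mean each feature
    cov = np.cov(X_centered, rowvar=False)       # d x d covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigh: cov is symmetric
    top_k = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # top-k eigenvectors
    return X_centered @ top_k                    # n x k projection

X = np.random.randn(100, 5)
print(pca(X, 2).shape)  # (100, 2)
```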

III SVM

1: [MT] (link too long to paste here)
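
Since the link itself was dropped, here is the standard soft-margin SVM primal for quick reference (textbook material, not necessarily what the lost link covered):

```latex
% Soft-margin SVM primal: C trades off margin width against slack \xi_i
\min_{w,\,b,\,\xi}\ \frac{1}{2}\lVert w\rVert^2 + C\sum_{i=1}^{n}\xi_i
\quad \text{s.t.}\quad y_i\,(w^\top x_i + b) \ge 1 - \xi_i,\qquad \xi_i \ge 0
```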

IV EM Algorithm

1: Personally, I find Li Hang's treatment a bit difficult; this one is better: EM-最大期望算法 | D.W's Notes - Machine Learning

2: Three-coin model [the maximum-likelihood function there is wrong; Statistical Learning Methods has it right]: 三硬币模型与EM算法 - 知乎
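
A minimal numpy sketch of EM for the three-coin model as set up in Statistical Learning Methods (hidden coin A with head probability pi picks coin B or C, with head probabilities p and q; initial values below are arbitrary):

```python
import numpy as np

def three_coin_em(y, pi=0.5, p=0.6, q=0.5, iters=100):
    """EM for the three-coin model: y are the observed 0/1 tosses."""
    y = np.asarray(y, dtype=float)
    for _ in range(iters):
        # E-step: posterior probability each toss came from coin B
        b = pi * p**y * (1 - p)**(1 - y)
        c = (1 - pi) * q**y * (1 - q)**(1 - y)
        mu = b / (b + c)
        # M-step: re-estimate parameters from the posteriors
        pi = mu.mean()
        p = (mu * y).sum() / mu.sum()
        q = ((1 - mu) * y).sum() / (1 - mu).sum()
    return pi, p, q

print(three_coin_em([1, 1, 0, 1, 0, 0, 1, 0, 1, 1]))
```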

V Gradient Descent and Backpropagation

1: [https://blog.csdn.net/qq_36459893/article/details/82796304] (Hung-yi Lee explains this especially well)

2: Notes from Hung-yi Lee's Machine Learning course
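
A tiny numpy example of gradient descent with hand-written backprop, the two ideas these links cover (a one-hidden-layer net on a toy regression; all shapes and names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                 # toy inputs
t = X.sum(axis=1, keepdims=True)             # toy regression target
W1, b1 = rng.normal(size=(3, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)
lr = 0.1

for step in range(200):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    y = h @ W2 + b2
    loss = ((y - t) ** 2).mean()
    # backward pass: chain rule, layer by layer
    dy = 2 * (y - t) / len(X)                # dL/dy
    dW2, db2 = h.T @ dy, dy.sum(axis=0)
    dh = dy @ W2.T                           # dL/dh
    dz = dh * (1 - h ** 2)                   # through tanh: 1 - tanh^2
    dW1, db1 = X.T @ dz, dz.sum(axis=0)
    # gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.4f}")
```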

VI TensorFlow

1: [TensorFlow实战] (book)
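
The book works in the TF 1.x graph-and-session style; a minimal sketch of that idiom (softmax regression on flattened 28x28 images, roughly the book's opening MNIST example; this will not run on TF 2.x without tf.compat.v1):

```python
import tensorflow as tf  # assumes TensorFlow 1.x, as used in the book

x = tf.placeholder(tf.float32, [None, 784])       # flattened images
y_true = tf.placeholder(tf.float32, [None, 10])   # one-hot labels
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

cross_entropy = -tf.reduce_mean(tf.reduce_sum(y_true * tf.log(y), axis=1))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # sess.run(train_step, feed_dict={x: batch_xs, y_true: batch_ys})
```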

VII word2vec

1: [https://www.cnblogs.com/pinard/p/7243513.html] Huffman tree
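
The post covers the Huffman tree behind hierarchical softmax: frequent words sit near the root, so their softmax path is short. A minimal sketch of building that tree with heapq (tree structure only, not the full word2vec training):

```python
import heapq
import itertools

def huffman_codes(freqs):
    """Build a Huffman tree over {word: count}, return {word: bit-string}.
    In hierarchical softmax each bit is one inner-node binary decision,
    so frequent words get short paths."""
    counter = itertools.count()            # tie-breaker: heapq never compares trees
    heap = [(f, next(counter), w) for w, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)  # merge two least-frequent subtrees
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(counter), (left, right)))
    codes = {}
    def walk(node, code):
        if isinstance(node, tuple):        # inner node: recurse with 0/1
            walk(node[0], code + "0")
            walk(node[1], code + "1")
        else:                              # leaf word ("or" handles 1-word corpus)
            codes[node] = code or "0"
    walk(heap[0][2], "")
    return codes

print(huffman_codes({"the": 50, "cat": 10, "sat": 8, "zyzzyva": 1}))
```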

VIII Bias-Variance Decomposition

1: https://blog.csdn.net/pipisorry/article/details/50638749
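
For reference, the decomposition itself (the standard squared-loss result the post derives): with y = f(x) + ε, Var(ε) = σ², and a learned predictor f̂ whose randomness comes from the training set,

```latex
\mathbb{E}\big[(y - \hat f(x))^2\big]
= \underbrace{\big(f(x) - \mathbb{E}[\hat f(x)]\big)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}\big[(\hat f(x) - \mathbb{E}[\hat f(x)])^2\big]}_{\text{variance}}
+ \underbrace{\sigma^2}_{\text{noise}}
```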

IX Multivariate Gaussian Distribution

1: https://zhuanlan.zhihu.com/p/58987388
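
The density in question, for reference:

```latex
% d-dimensional Gaussian with mean \mu and covariance \Sigma (positive definite)
\mathcal{N}(x \mid \mu, \Sigma)
= \frac{1}{(2\pi)^{d/2}\,\lvert\Sigma\rvert^{1/2}}
  \exp\!\Big(-\tfrac{1}{2}(x-\mu)^\top \Sigma^{-1} (x-\mu)\Big)
```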

X Deep Learning Models

1 GRU [https://zhuanlan.zhihu.com/p/32481747]

2 RNN (vanishing gradients) [https://zhuanlan.zhihu.com/p/28687529]

3 LSTM (how it mitigates vanishing gradients; see the sketch after this list) [https://weberna.github.io/blog/2017/11/15/LSTM-Vanishing-Gradients.html]

4 BERT (Hung-yi Lee) [https://www.jianshu.com/p/f4ed3a7bec7c]

5 Adam [https://blog.paperspace.com/intro-to-optimization-momentum-rmsprop-adam/]

6 GNN [https://towardsdatascience.com/a-gentle-introduction-to-graph-neural-network-basics-deepwalk-and-graphsage-db5d540d50b3]

7 GAN [https://github.com/Yangyangii/GAN-Tutorial/blob/master/MNIST/VanillaGAN.ipynb]

8 GraphSAGE [https://blog.csdn.net/yanhe156/article/details/97793589]

9 GNN [https://blog.csdn.net/weixin_40871455/article/details/86515934]
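
On items 2 and 3 above: in a vanilla RNN the gradient through T steps is a product of T factors, each shrunk by tanh' ≤ 1 times the recurrent weight, so it decays exponentially; the LSTM cell state accumulates additively, so the factor along that path is just the forget gate f_t per step. A toy scalar illustration (all numbers are mine, and the LSTM line ignores the gates' own dependence on the state):

```python
import numpy as np

T = 50
w = 0.9           # recurrent weight of a toy scalar "RNN"
a = np.tanh(0.5)  # a typical tanh activation value along the path
f = 0.95          # a typical LSTM forget-gate activation

# Vanilla RNN: dh_T/dh_0 = prod over t of  tanh'(z_t) * w,  with tanh' = 1 - a^2 < 1
rnn_grad = ((1 - a**2) * w) ** T

# LSTM cell-state path: dc_T/dc_0 ~= prod over t of f_t (no squashing nonlinearity)
lstm_grad = f ** T

print(f"RNN gradient factor after {T} steps:   {rnn_grad:.2e}")   # ~3e-8, vanished
print(f"LSTM cell-state factor after {T} steps: {lstm_grad:.2e}") # ~8e-2, survives
```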

XI PyTorch

1: torchtext

http://mlexplained.com/2018/02/08/a-comprehensive-tutorial-to-torchtext/

http://mlexplained.com/2018/02/15/language-modeling-tutorial-in-torchtext-practical-torchtext-part-2/
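
Both tutorials use the legacy torchtext 0.x Field/BucketIterator API (removed from modern torchtext); a minimal sketch in that style, with made-up file and column names:

```python
# Legacy torchtext 0.x API, as used in the tutorials above
from torchtext.data import Field, TabularDataset, BucketIterator

TEXT = Field(sequential=True, tokenize=lambda s: s.split(), lower=True)
LABEL = Field(sequential=False)

# "train.csv" with columns (text, label) is a placeholder dataset
train = TabularDataset(path="train.csv", format="csv",
                       fields=[("text", TEXT), ("label", LABEL)])
TEXT.build_vocab(train, max_size=10000)
LABEL.build_vocab(train)

train_iter = BucketIterator(train, batch_size=32,
                            sort_key=lambda ex: len(ex.text))
for batch in train_iter:
    x, y = batch.text, batch.label   # x: [seq_len, batch] LongTensor
    break
```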

XII MATH

1: Composite functions and the chain rule (the diagrams make it clear at a glance)

https://zhuanlan.zhihu.com/p/61585348
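
The rule the diagrams illustrate, with one concrete instance:

```latex
% Chain rule for a composite z = f(g(x)):
\frac{dz}{dx} = f'\!\big(g(x)\big)\, g'(x)
% e.g.  z = \sin(x^2)  \;\Rightarrow\;  \frac{dz}{dx} = \cos(x^2)\cdot 2x
```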

2: Fixed-point iteration and Newton's iteration

https://wenku.baidu.com/view/48350321dd36a32d7375818e.html
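
A small comparison of the two schemes on x² = 2; the key connection is that Newton's method is itself a fixed-point iteration on g(x) = x − f(x)/f'(x), just one with quadratic convergence near a simple root (the damped map below is my arbitrary choice):

```python
def fixed_point(g, x0, iters):
    """Iterate x <- g(x); converges linearly when |g'(x*)| < 1."""
    x = x0
    for _ in range(iters):
        x = g(x)
    return x

def newton(f, df, x0, iters):
    """Newton's method: fixed-point iteration on g(x) = x - f(x)/f'(x)."""
    x = x0
    for _ in range(iters):
        x = x - f(x) / df(x)
    return x

# Solve x^2 = 2 two ways; both approach sqrt(2) ~ 1.41421356
print(fixed_point(lambda x: x - 0.2 * (x * x - 2), 1.0, 20))  # slow, linear
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0, 5))   # fast, quadratic
```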