Deep Learning Algorithm Index (continuously updated)
https://zhuanlan.zhihu.com/p/26004118
Machine learning has taken off like a beast let out of its cage in recent years, and material floods the internet. As I study, I am building an index that collects the most promising (not the most exhaustive) machine learning algorithms, the best tutorials, and open-source code close to the industrial state of the art. My own abilities are limited, so corrections and additions from fellow Zhihu readers are welcome.
Models
1. Reinforcement Learning
Leading figure: David Silver
Tutorials
David Silver's 2015 UCL Course on RL: Teaching
David Silver's Tutorial: Deep Reinforcement Learning: http://hunch.net/~beygel/deep_rl_tutorial.pdf
UC Berkeley's Spring 2017 course: CS 294 Deep Reinforcement Learning, Spring 2017
David Silver's AlphaGo paper in Nature: Mastering the Game of Go with Deep Neural Networks and Tree Search
2014: Deterministic Policy Gradient Algorithms
ICLR 2016, DeepMind's DDPG algorithm: Continuous Control with Deep Reinforcement Learning, https://arxiv.org/pdf/1509.02971v2.pdf
2015: Deep Reinforcement Learning with Double Q-learning (see the target-computation sketch after this list)
2015: Massively Parallel Methods for Deep Reinforcement Learning
2016: Prioritized Experience Replay
2016: Dueling Network Architectures for Deep Reinforcement Learning
2017: Value Iteration Networks
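Most of the value-based papers above (DQN variants, Double Q-learning, prioritized replay, dueling networks) revolve around how the bootstrap target is computed. As a rough illustration of the Double Q-learning idea, here is a minimal NumPy sketch of the Double DQN target: the online network picks the next action, the target network evaluates it. The array names and shapes are my own assumptions for illustration, not code from any of the papers.

```python
import numpy as np

def double_dqn_targets(rewards, dones, q_online_next, q_target_next, gamma=0.99):
    """Double DQN bootstrap targets (hypothetical array layout).

    rewards:       (batch,)          rewards r_t
    dones:         (batch,)          1.0 if the episode ended at step t, else 0.0
    q_online_next: (batch, actions)  Q(s_{t+1}, .) from the online network
    q_target_next: (batch, actions)  Q(s_{t+1}, .) from the target network
    """
    # The online network selects the greedy next action ...
    best_actions = np.argmax(q_online_next, axis=1)
    # ... and the target network evaluates it; this decoupling reduces overestimation.
    next_values = q_target_next[np.arange(len(rewards)), best_actions]
    return rewards + gamma * (1.0 - dones) * next_values
```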
Blogs
In Depth | David Silver's full walkthrough of deep reinforcement learning: from basic concepts to AlphaGo
Heavyweight | Facebook's Yuandong Tian explains in detail: how does deep learning do game reasoning?
2. GAN
Leading figure: Ian Goodfellow
2014: Ian Goodfellow introduces GANs: [1406.2661] Generative Adversarial Networks
2017: WGAN: Wasserstein GAN, with a source implementation at martinarjovsky/WassersteinGAN (see the critic-loss sketch after this list)
Reddit discussion: [R] [1701.07875] Wasserstein GAN • r/MachineLearning
2017: Google Brain's AdaGAN: https://arxiv.org/pdf/1701.02386.pdf
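The key practical change in WGAN versus the original GAN is the critic objective: drop the log/sigmoid loss, maximize the gap between the mean critic scores on real and generated samples, and keep the critic roughly Lipschitz (weight clipping in the original paper). A minimal NumPy sketch of those losses, assuming the critic scores have already been computed elsewhere:

```python
import numpy as np

def wgan_losses(critic_real, critic_fake):
    """WGAN losses given raw (unbounded) critic scores.

    critic_real: (batch,) critic outputs on real samples
    critic_fake: (batch,) critic outputs on generated samples
    """
    # The critic maximizes E[f(real)] - E[f(fake)], i.e. minimizes the negative.
    critic_loss = -(np.mean(critic_real) - np.mean(critic_fake))
    # The generator maximizes E[f(fake)], i.e. minimizes -E[f(fake)].
    generator_loss = -np.mean(critic_fake)
    return critic_loss, generator_loss

def clip_weights(weights, c=0.01):
    """Weight clipping used in the original WGAN paper to bound the critic."""
    return [np.clip(w, -c, c) for w in weights]
```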
Blogs
3. Deep Learning
Leading figures: the big three of Hinton, LeCun, and Bengio
Tutorials
A course taught by Google principal research scientist Vincent Vanhoucke, clear and accessible: From Machine Learning to Deep Learning (Udacity)
Convolutional neural network tricks: A guide to convolution arithmetic for deep learning (https://arxiv.org/pdf/1603.07285v1.pdf); see the output-size sketch after this list
The Deep Learning book by Ian Goodfellow, Yoshua Bengio, and Aaron Courville: Deep Learning
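The convolution-arithmetic guide above is largely about one bookkeeping formula: the spatial output size of a convolution given input size, kernel size, padding, and stride. A small sketch of that formula, using the guide's i, k, p, s notation (the helper function itself is my own, for illustration):

```python
def conv_output_size(i, k, p=0, s=1):
    """Spatial output size of a convolution: floor((i + 2p - k) / s) + 1."""
    return (i + 2 * p - k) // s + 1

# Examples: a 'same' 3x3 convolution keeps the size, a stride-2 one halves it.
assert conv_output_size(28, k=3, p=1, s=1) == 28
assert conv_output_size(28, k=3, p=1, s=2) == 14
```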
4. RNNs and LSTMs
Leading figure: Alex Graves; his personal page: Home Page of Alex Graves
Blogs
What tutorials are there for LSTM (Long Short-Term Memory) and RNN (recurrent) networks? (a minimal LSTM-step sketch follows below)
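For readers opening the LSTM material above for the first time, here is a minimal single-step LSTM cell in NumPy using the standard input/forget/output-gate formulation. The weight names, shapes, and gate ordering are assumptions for illustration, not any particular library's API.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step.

    x:              (input_dim,)   current input
    h_prev, c_prev: (hidden_dim,)  previous hidden and cell state
    W: (4*hidden_dim, input_dim), U: (4*hidden_dim, hidden_dim), b: (4*hidden_dim,)
    Gate order in the stacked weights: input, forget, output, candidate.
    """
    z = W @ x + U @ h_prev + b
    hidden = h_prev.shape[0]
    i = sigmoid(z[0 * hidden:1 * hidden])   # input gate
    f = sigmoid(z[1 * hidden:2 * hidden])   # forget gate
    o = sigmoid(z[2 * hidden:3 * hidden])   # output gate
    g = np.tanh(z[3 * hidden:4 * hidden])   # candidate cell state
    c = f * c_prev + i * g                  # new cell state
    h = o * np.tanh(c)                      # new hidden state
    return h, c
```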
5. Attention Models (the encoder-decoder framework)
Leading figure: ?
Neural Machine Translation by Jointly Learning to Align and Translate (Yoshua Bengio): arXiv [1409.0473] (see the additive-attention sketch after this list)
Encoding Source Language with Convolutional Neural Network for Machine Translation (Hang Li): https://arxiv.org/abs/1503.01838
Survey on Attention-based Models Applied in NLP
@Tao Lei's answer to the Zhihu question "What is an attention-based model, and what problem does it solve?"
Sequence to Sequence Learning with Neural Networks, with source code at https://www.tensorflow.org/tutorials/seq2seq
A Neural Attention Model for Abstractive Sentence Summarization
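The Bahdanau et al. paper listed above ("Jointly Learning to Align and Translate") adds one component to the plain encoder-decoder: a softmax over alignment scores between the previous decoder state and each encoder annotation, which yields a context vector for the next decoding step. A minimal NumPy sketch of that additive attention; the parameter names Wa, Ua, va loosely follow the paper's notation but the shapes are my own assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def additive_attention(s_prev, encoder_states, Wa, Ua, va):
    """Bahdanau-style additive attention for one decoder step.

    s_prev:         (dec_dim,)          previous decoder hidden state
    encoder_states: (src_len, enc_dim)  encoder annotations h_1..h_T
    Wa: (att_dim, dec_dim), Ua: (att_dim, enc_dim), va: (att_dim,)
    """
    # Alignment scores e_tj = va . tanh(Wa s_{t-1} + Ua h_j), one per source position.
    scores = np.tanh(Wa @ s_prev + encoder_states @ Ua.T) @ va
    alphas = softmax(scores)            # attention weights over source positions
    context = alphas @ encoder_states   # context vector c_t fed to the decoder
    return context, alphas
```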
Blogs
Attention models in natural language processing: what they are and why
Source Code
1. DMLC
Distributed (Deep) Machine Learning Community
2. TensorFlow
3. Caffe/Caffe2
Caffe | Deep Learning Framework
4. Microsoft open-source projects
5. Facebook
Open-source computer Go program: facebookresearch/darkforestGo, led by @田渊栋 (Yuandong Tian)
Applications
Deep reinforcement learning applied to Go, i.e., AlphaGo.
YouTube video recommendation: Deep Neural Networks for YouTube Recommendations
Google's CTR-prediction model: Wide & Deep Learning for Recommender Systems, with open-source code at https://www.tensorflow.org/versions/r0.12/tutorials/wide_and_deep/
Microsoft's DSSM model: DSSM can be used to develop latent semantic models that project entities of different types (e.g., queries and documents) into a common low-dimensional semantic space for a variety of machine learning tasks such as ranking and classification. DSSM - Microsoft Research (see the similarity sketch after this list)
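The DSSM description above boils down to: embed queries and documents with two towers into the same low-dimensional space, then rank by cosine similarity. A minimal sketch of that scoring step, assuming the tower outputs are already computed as plain vectors; this is not the Microsoft reference implementation.

```python
import numpy as np

def dssm_scores(q_vec, d_vecs):
    """Cosine similarity between a query and candidate documents in the shared space.

    q_vec:  (dim,)        query embedding from the query tower
    d_vecs: (n_docs, dim) document embeddings from the document tower
    """
    q_norm = q_vec / (np.linalg.norm(q_vec) + 1e-8)
    d_norms = d_vecs / (np.linalg.norm(d_vecs, axis=1, keepdims=True) + 1e-8)
    return d_norms @ q_norm   # one cosine score per document, used for ranking

# Usage: order candidates by descending similarity.
# ranking = np.argsort(-dssm_scores(q_vec, d_vecs))
```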
~~ Updated 2017-04-16 ~~
Position-bias correction: Position-Normalized Click Prediction in Search Advertising (toy sketch below)
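The position-bias paper above is about separating "was the ad seen" from "was it clicked given it was seen". A common simplified statement of this idea (the examination hypothesis; the paper's actual factorization and fitting procedure differ in the details) is P(click | ad, position) ≈ P(seen | position) × P(click | ad, seen), so raw CTRs can be normalized by a per-position examination probability. A toy sketch with made-up numbers:

```python
# Hypothetical per-position examination probabilities and observed CTRs.
examination_prob = {1: 0.70, 2: 0.45, 3: 0.30}
observed_ctr = {("ad_a", 1): 0.035, ("ad_b", 3): 0.018}

def position_normalized_ctr(ad, position):
    """Estimate P(click | seen) by dividing out the position effect."""
    return observed_ctr[(ad, position)] / examination_prob[position]

# ad_b at position 3 looks worse in raw CTR but better after normalization:
print(position_normalized_ctr("ad_a", 1))  # ~0.05
print(position_normalized_ctr("ad_b", 3))  # ~0.06
```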