A survey of deep learning

I first got into DL at the beginning of November 2013, but what with a busy boss and various other problems, my understanding of DL is nowhere near as deep as that of the CSDN gurus like zouxy09. Mainly I feel I haven't made much progress and have been wasting my days (embarrassing, after all this time...). So I'm starting this series to record how I stumbled along this road one step at a time, and to put my own thinking in better order. Dear readers, please go easy on me (if there are mistakes I will fix them right away; thanks to everyone for the corrections. Of course, whether anyone reads this at all is another question, haha).

Recommended: tornadomeet's study notes on cnblogs
http://www.cnblogs.com/tornadomeet/category/497607.html

zouxy09's study notes on CSDN
http://blog.csdn.net/zouxy09

sunmenggmail's DL paper roundup on CSDN
http://blog.csdn.net/sunmenggmail/article/details/20904867

falao_beiliu's materials on CSDN
http://blog.csdn.net/mytestmy/article/category/1465487

Rachel-Zhang, the DL goddess of Zhejiang University, on CSDN
http://blog.csdn.net/abcjennifer/article/details/7826917

Below are the survey-style articles; these are the only ones I can recall for now.

2009 Learning Deep Architectures for AI

2010 Deep Machine Learning: A New Frontier in Artificial Intelligence Research

2011 An Introduction to Deep Learning
https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2011-4.pdf

2012 Representation Learning: A Review and New Perspectives
2014 Deep Learning in Neural Networks: An Overview
http://arxiv.org/abs/1404.7828

2014 Object Detection with Deep Learning (CVPR 2014 Tutorial)

2014 Deep Learning: Methods and Applications
Microsoft's Li Deng works mainly on speech, but he has also written quite a few survey-style pieces: http://research.microsoft.com/en-us/people/deng/

2014A Tutorial Survey of Architectures, Algorithms, and Applications for Deep Learning
http://research.microsoft.com/en-us/people/deng/
Actually a lot of the slide decks are excellent too, e.g. Hinton's, Andrew Ng's, Yann LeCun's, and Yoshua Bengio's; you can find plenty of useful articles from their home pages. As the big names of the field they need no introduction, and what is especially valuable is that, being teachers, they have also put out videos and lots of accessible introductory material for newbies like us. There are also Wu Lide's lecture videos (available on Youku, though with a lot of background noise).

http://blog.coursegraph.com/公开课可下载资源汇总 (a very complete collection of downloadable course videos, e.g. Ng's machine learning, Hinton's machine learning, all sorts of natural language processing courses, and so on.)

Reading lists: there is the one at http://deeplearning.net/reading-list/, and there is also the list Yoshua Bengio recommends, reproduced below (some of its links are dead; I only discovered it this month, so if it's too old or whatever, please ignore me).

Reading lists for new LISA students
Research in General
● How to write a great research paper
Basics of machine learning
● http://www.iro.umontreal.ca/~bengioy/DLbook/math.html
● http://www.iro.umontreal.ca/~bengioy/DLbook/ml.html
Basics of deep learning
● http://www.iro.umontreal.ca/~bengioy/DLbook/intro.html
● http://www.iro.umontreal.ca/~bengioy/DLbook/mlp.html
● Learning deep architectures for AI
● Practical recommendations for gradient-based training of deep architectures
● Quick’n’dirty introduction to deep learning: Advances in Deep Learning
● A fast learning algorithm for deep belief nets
● Greedy Layer-Wise Training of Deep Networks
● Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion (a minimal training-loop sketch follows this subsection)
● Contractive autoencoders: Explicit invariance during feature extraction
● Why does unsupervised pre-training help deep learning?
● An Analysis of Single Layer Networks in Unsupervised Feature Learning
● The importance of Encoding Versus Training With Sparse Coding and Vector Quantization
● Representation Learning: A Review and New Perspectives
● Deep Learning of Representations: Looking Forward
● Measuring Invariances in Deep Networks
● Neural networks course at USherbrooke [youtube]
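
A side note from me: the autoencoder papers above all share the same basic training loop, so here is a minimal NumPy sketch of one denoising-autoencoder layer (Vincent et al.'s idea: corrupt the input, reconstruct the clean version). The layer sizes, corruption level, and learning rate below are my own illustrative choices, not values from any of these papers.

```python
import numpy as np

rng = np.random.RandomState(0)
n_vis, n_hid = 784, 256            # illustrative sizes, not from the papers
lr, corruption = 0.1, 0.3          # illustrative learning rate / noise level

W = rng.normal(0.0, 0.01, (n_vis, n_hid))   # tied encoder/decoder weights
b_h, b_v = np.zeros(n_hid), np.zeros(n_vis)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dae_step(x):
    """One SGD step of a denoising autoencoder on a mini-batch x in [0, 1]."""
    global W, b_h, b_v
    x_tilde = x * (rng.rand(*x.shape) > corruption)   # zero-masking corruption
    h = sigmoid(x_tilde @ W + b_h)                    # encode the corrupted input
    x_hat = sigmoid(h @ W.T + b_v)                    # reconstruct
    # Cross-entropy loss is measured against the *clean* input x.
    d_v = (x_hat - x) / len(x)             # gradient at the output pre-activation
    d_h = (d_v @ W) * h * (1.0 - h)        # backprop through the decoder
    W -= lr * (x_tilde.T @ d_h + d_v.T @ h)  # tied weights collect both terms
    b_h -= lr * d_h.sum(axis=0)
    b_v -= lr * d_v.sum(axis=0)

x = (rng.rand(32, n_vis) > 0.5).astype(float)  # stand-in for a batch of binary data
dae_step(x)
```
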
Feedforward nets
● http://www.iro.umontreal.ca/~bengioy/DLbook/mlp.html
● “Improving Neural Nets with Dropout” by Nitish Srivastava (the core trick is sketched after this subsection)
● “Deep Sparse Rectifier Neural Networks”
● “What is the best multi-stage architecture for object recognition?”
● “Maxout Networks”
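
Srivastava's dropout paper boils down to a two-line change in the forward pass, so here is a hedged sketch of the "inverted dropout" form (rescale at training time so the test-time pass is unchanged); the keep probability of 0.5 is just the common default, not something the list above prescribes.

```python
import numpy as np

rng = np.random.RandomState(0)

def dropout_forward(h, p_keep=0.5, train=True):
    """Inverted dropout: zero each unit with probability 1 - p_keep during
    training and rescale by 1 / p_keep, so test time needs no change."""
    if not train:
        return h
    mask = (rng.rand(*h.shape) < p_keep) / p_keep
    return h * mask

h = rng.randn(4, 8)                        # activations of some hidden layer
print(dropout_forward(h))                  # noisy, rescaled training activations
print(dropout_forward(h, train=False))     # identical to h at test time
```
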
MCMC
● Iain Murray’s MLSS slides
● Radford Neal’s Review Paper (old but still very comprehensive; a toy Metropolis sampler is sketched after this subsection)
● Better Mixing via Deep Representations
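
For readers like me with no MCMC background: the whole topic starts from samplers like the one below. A toy random-walk Metropolis sampler (the kind covered in Neal's review); the step size and the sanity-check target are my own choices.

```python
import numpy as np

rng = np.random.RandomState(0)

def metropolis(log_p, x0, n_steps=5000, step=0.5):
    """Random-walk Metropolis for an unnormalized log-density log_p."""
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + step * rng.randn()
        # Accept with probability min(1, p(proposal) / p(x)).
        if np.log(rng.rand()) < log_p(proposal) - log_p(x):
            x = proposal
        samples.append(x)
    return np.array(samples)

# Sanity check: sample a standard Gaussian.
s = metropolis(lambda x: -0.5 * x * x, x0=0.0)
print("mean %.2f, std %.2f" % (s.mean(), s.std()))   # should be near 0 and 1
```
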
Restricted Boltzmann Machines
● Unsupervised learning of distributions of binary vectors using 2-layer networks
● A practical guide to training restricted Boltzmann machines
● Training restricted Boltzmann machines using approximations to the likelihood gradient
● Tempered Markov Chain Monte Carlo for training of Restricted Boltzmann Machine
● How to Center Binary Restricted Boltzmann Machines
● Enhanced Gradient for Training Restricted Boltzmann Machines
● Using fast weights to improve persistent contrastive divergence
● Training Products of Experts by Minimizing Contrastive Divergence (the CD-1 update is sketched after this subsection)
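
Most of the RBM papers above are about approximating the likelihood gradient, and CD-1 (one-step contrastive divergence, from the last item) is the baseline they all build on. A minimal sketch for a binary RBM; the sizes and learning rate are again just illustrative.

```python
import numpy as np

rng = np.random.RandomState(0)
n_vis, n_hid, lr = 784, 128, 0.05    # illustrative sizes and learning rate

W = rng.normal(0.0, 0.01, (n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0):
    """One CD-1 update for a binary RBM on a mini-batch v0 of 0/1 rows."""
    global W, b_v, b_h
    # Positive phase: hidden probabilities and a sample given the data.
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.rand(*ph0.shape) < ph0).astype(float)
    # One Gibbs step: reconstruct the visibles, then recompute the hiddens.
    pv1 = sigmoid(h0 @ W.T + b_v)
    ph1 = sigmoid(pv1 @ W + b_h)
    # "Data statistics minus model statistics" approximation of the gradient.
    n = len(v0)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

v = (rng.rand(32, n_vis) > 0.5).astype(float)   # stand-in for binary data
cd1_step(v)
```
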
Boltzmann Machines
● Deep Boltzmann Machines (Salakhutdinov & Hinton)
● Multimodal Learning with Deep Boltzmann Machines
● Multi-Prediction Deep Boltzmann Machines
● A Two-stage Pretraining Algorithm for Deep Boltzmann Machines
Regularized Auto-Encoders
● The Manifold Tangent Classifier
Regularization
Stochastic Nets & GSNs
● Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation
● Learning Stochastic Feedforward Neural Networks
● Generalized Denoising Auto-Encoders as Generative Models
● Deep Generative Stochastic Networks Trainable by Backprop
Others
● Slow, Decorrelated Features for Pretraining Complex Cell-like Networks
● What Regularized Auto-Encoders Learn from the Data Generating Distribution
● Why the logistic function?
Recurrent Nets
● Learning long-term dependencies with gradient descent is difficult (the effect is demonstrated in the toy example after this subsection)
● Advances in Optimizing Recurrent Networks
● Learning recurrent neural networks with Hessian-free optimization
● On the importance of momentum and initialization in deep learning
● Long short-term memory (Hochreiter & Schmidhuber)
● Generating Sequences With Recurrent Neural Networks
● Long Short-Term Memory in Echo State Networks: Details of a Simulation Study
● The "echo state" approach to analysing and training recurrent neural networks
● Backpropagation-Decorrelation: online recurrent learning with O(N) complexity
● New results on recurrent network training: Unifying the algorithms and accelerating convergence
● Audio Chord Recognition with Recurrent Neural Networks
● Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription
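
The phenomenon behind the first two items is easy to see numerically: in backprop through time the gradient is multiplied by the recurrent Jacobian at every step, so it shrinks or blows up exponentially. A toy linear-RNN demo, plus the gradient-norm clipping fix discussed in "Advances in Optimizing Recurrent Networks"; the sizes and threshold are my own choices.

```python
import numpy as np

rng = np.random.RandomState(0)
n, T = 50, 100                        # illustrative state size / sequence length

for scale in (0.05, 0.3):             # small vs. large recurrent weights
    W = rng.randn(n, n) * scale
    g = rng.randn(n)                  # gradient arriving at the final time step
    for _ in range(T):                # backprop through time for a linear RNN
        g = W.T @ g
    print("scale %.2f -> gradient norm %.3e" % (scale, np.linalg.norm(g)))

def clip_by_norm(grad, threshold=1.0):
    """Norm clipping: the standard fix for the exploding-gradient case."""
    norm = np.linalg.norm(grad)
    return grad * (threshold / norm) if norm > threshold else grad
```
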
Convolutional Nets
● http://www.iro.umontreal.ca/~bengioy/DLbook/convnets.html
● Generalization and Network Design Strategies (LeCun)
● ImageNet Classification with Deep Convolutional Neural Networks, Alex Krizhevsky, Ilya Sutskever, Geoffrey E Hinton, NIPS 2012 (the convolution primitive itself is sketched after this subsection)
● On Random Weights and Unsupervised Feature Learning
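
The convnet references assume you already know what the convolution itself computes, so here is the bare operation in NumPy (a "valid" cross-correlation, which is what most DL code actually implements); real layers add channels, stride, and padding on top of this.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D cross-correlation of a single-channel image with one kernel."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(25.0).reshape(5, 5)
edge = np.array([[1.0, -1.0]])        # a horizontal edge detector
print(conv2d_valid(img, edge))        # constant -1s: the image is a ramp
```
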
Optimization issues with DL
● Curriculum Learning
● Evolving Culture vs Local Minima
● Knowledge Matters: Importance of Prior Information for Optimization
● Efficient Backprop
● Practical recommendations for gradient-based training of deep architectures
● Natural Gradient Works Efficiently (Amari 1998)
● Hessian Free
● Natural Gradient (TONGA)
● Revisiting Natural Gradient
NLP + DL
● Natural Language Processing (Almost) from Scratch
● DeViSE: A Deep Visual-Semantic Embedding Model
● Distributed Representations of Words and Phrases and their Compositionality
● Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection
CV + RBM
● Fields of Experts
● What makes a good model of natural images?
● Phone Recognition with the mean-covariance restricted Boltzmann machine
● Unsupervised Models of Images by Spike-and-Slab RBMs
CV + DL
● Imagenet classification with deep convolutional neural networks
● Learning to relate images
Scaling Up
● Large Scale Distributed Deep Networks
● Random search for hyper-parameter optimization (sketched after this subsection)
● Practical Bayesian Optimization of Machine Learning Algorithms
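
Of the three, the random-search paper is the easiest to try yourself: draw each hyper-parameter independently from a sensible distribution instead of walking a grid. A sketch with a made-up objective standing in for "train the model, measure validation error"; the search ranges are my own illustrative choices.

```python
import numpy as np

rng = np.random.RandomState(0)

def validation_error(lr, n_hidden):
    """Stand-in for training a model and measuring validation error."""
    return (np.log10(lr) + 2.5) ** 2 + (n_hidden - 300) ** 2 / 1e5 + rng.rand() * 0.1

best = (np.inf, None)
for _ in range(50):                         # 50 random trials instead of a grid
    lr = 10 ** rng.uniform(-5, 0)           # log-uniform learning rate
    n_hidden = rng.randint(50, 1000)        # uniform hidden-layer size
    err = validation_error(lr, n_hidden)
    if err < best[0]:
        best = (err, (lr, n_hidden))

print("best error %.3f at lr=%.2e, n_hidden=%d" % (best[0], *best[1]))
```
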
DL + Reinforcement learning
● Playing Atari with Deep Reinforcement Learning (paper not officially released yet!)
Graphical Models Background
● An Introduction to Graphical Models (Mike Jordan, brief course notes)
● A View of the EM Algorithm that Justifies Incremental, Sparse and Other Variants (Neal & Hinton, important paper to the modern understanding of Expectation-Maximization)
● A Unifying Review of Linear Gaussian Models (Roweis & Ghahramani, ties together PCA, factor analysis, hidden Markov models, Gaussian mixtures, k-means, linear dynamical systems)
● An Introduction to Variational Methods for Graphical Models (Jordan et al, mean-field, etc.)
Writing
● Writing a great research paper (video of the presentation)
Software documentation
● Python, Theano, Pylearn2, Linux (bash) (at least the first 5 sections), git (first 5 sections), github/contributing to it (Theano doc), vim tutorial or emacs tutorial
Software lists of built-in commands/functions
● Bash commands
● List of Built-in Python Functions
● vim commands
Other Software stuff to know about:
● screen
● ssh
● ipython
● matplotlib
