Deep learning source code collection
- Theano – CPU/GPU symbolic expression compiler in Python (from the MILA lab at the University of Montreal); a minimal symbolic-expression sketch follows this list.
- Torch – provides a Matlab-like environment for state-of-the-art machine learning algorithms in Lua (from Ronan Collobert, Clement Farabet and Koray Kavukcuoglu)
- Pylearn2 - Pylearn2 is a library designed to make machine learning research easy.
- Blocks - A Theano framework for training neural networks
- Tensorflow - TensorFlow™ is an open source software library for numerical computation using data flow graphs.
- MXNet - MXNet is a deep learning framework designed for both efficiency and flexibility.
- Caffe - Caffe is a deep learning framework made with expression, speed, and modularity in mind.
- Lasagne - Lasagne is a lightweight library to build and train neural networks in Theano.
- Keras - A Theano-based deep learning library.
- Deep Learning Tutorials – examples of how to do Deep Learning with Theano (from LISA lab at University of Montreal)
- DeepLearnToolbox – A Matlab toolbox for Deep Learning (from Rasmus Berg Palm)
- Cuda-Convnet – A fast C++/CUDA implementation of convolutional (or more generally, feed-forward) neural networks. It can model arbitrary layer connectivity and network depth. Any directed acyclic graph of layers will do. Training is done using the back-propagation algorithm.
- Deep Belief Networks. Matlab code for learning Deep Belief Networks (from Ruslan Salakhutdinov).
- RNNLM - Tomas Mikolov's recurrent neural network based language model toolkit.
- RNNLIB - RNNLIB is a recurrent neural network library for sequence learning problems. Applicable to most types of spatiotemporal data, it has proven particularly effective for speech and handwriting recognition.
- matrbm. Simplified version of Ruslan Salakhutdinov’s code, by Andrej Karpathy (Matlab).
- deeplearning4j - Deeplearning4j is an Apache 2.0-licensed, open-source, distributed neural net library written in Java and Scala.
- Estimating Partition Functions of RBM’s. Matlab code for estimating partition functions of Restricted Boltzmann Machines using Annealed Importance Sampling (from Ruslan Salakhutdinov).
- Learning Deep Boltzmann Machines Matlab code for training and fine-tuning Deep Boltzmann Machines (from Ruslan Salakhutdinov).
- The LUSH programming language and development environment, which is used at NYU for deep convolutional networks.
- Eblearn.lsh is a LUSH-based machine learning library for doing Energy-Based Learning. It includes code for “Predictive Sparse Decomposition” and other sparse auto-encoder methods for unsupervised learning. Koray Kavukcuoglu provides Eblearn code for several deep learning papers on this page.
- deepmat - Matlab-based deep learning algorithms.
- MShadow - MShadow is a lightweight CPU/GPU matrix/tensor template library in C++/CUDA. It aims to be an efficient, device-invariant, and simple tensor library for machine learning projects, balancing simplicity and performance. Supports CPU/GPU/multi-GPU and distributed systems.
- CXXNET - CXXNET is a fast, concise, distributed deep learning framework based on MShadow. It is a lightweight, easily extensible C++/CUDA neural network toolkit with a friendly Python/Matlab interface for training and prediction.
- Nengo - Nengo is a graphical and scripting-based software package for simulating large-scale neural systems.
- Eblearn is a C++ machine learning library with a BSD license for energy-based learning, convolutional networks, vision/recognition applications, etc. EBLearn is primarily maintained by Pierre Sermanet at NYU.
- cudamat is a GPU-based matrix library for Python. Example code for training Neural Networks and Restricted Boltzmann Machines is included.
- Gnumpy is a Python module that interfaces in a way almost identical to numpy, but does its computations on your computer’s GPU. It runs on top of cudamat.
- The CUV Library (github link) is a C++ framework with Python bindings for easy use of Nvidia CUDA functions on matrices. It contains an RBM implementation, as well as annealed importance sampling code and code to calculate the partition function exactly (from the AIS lab at the University of Bonn).
- 3-way factored RBM and mcRBM is Python code calling CUDAMat to train models of natural images (from Marc’Aurelio Ranzato).
- Matlab code for training conditional RBMs/DBNs and factored conditional RBMs (from Graham Taylor).
- mPoT is Python code using CUDAMat and gnumpy to train models of natural images (from Marc’Aurelio Ranzato).
- neuralnetworks is a Java-based GPU library for deep learning algorithms.
- ConvNet is a Matlab-based convolutional neural network toolbox.
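Most of the Python entries above (Theano, Pylearn2, Blocks, Lasagne, Keras) revolve around the symbolic-expression style noted in the first item. The snippet below is a minimal sketch of that style using the classic Theano API (`theano.tensor`, `theano.function`); the 5-to-3 logistic layer and variable names are arbitrary assumptions for illustration, not code from any of the listed projects.

```python
# Minimal Theano sketch: declare a symbolic graph for a small logistic
# layer, then compile it into a callable function (CPU or GPU).
import numpy as np
import theano
import theano.tensor as T

rng = np.random.RandomState(0)

x = T.dmatrix('x')                             # symbolic input, shape (n_samples, 5)
W = theano.shared(rng.randn(5, 3), name='W')   # parameters live in shared variables
b = theano.shared(np.zeros(3), name='b')

y = T.nnet.sigmoid(T.dot(x, W) + b)            # symbolic expression; nothing runs yet
predict = theano.function([x], y)              # compile the graph into callable code

print(predict(rng.randn(4, 5)).shape)          # (4, 3)
```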
Theano
http://deeplearning.net/software/theano/
code from: http://deeplearning.net/
Deep Learning Tutorial notes and code
https://github.com/lisa-lab/DeepLearningTutorials
code from: lisa-lab
A Matlab toolbox for Deep Learning
https://github.com/rasmusbergpalm/DeepLearnToolbox
code from: Rasmus Berg Palm
deepmat
Matlab code for Restricted/Deep Boltzmann Machines and autoencoders (a CD-1 update sketch follows this entry)
https://github.com/kyunghyuncho/deepmat
code from: KyungHyun Cho http://users.ics.aalto.fi/kcho/
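deepmat above, matrbm, and the Salakhutdinov code linked further down all train Restricted Boltzmann Machines with contrastive divergence. As a reference point only, here is a single CD-1 update for a binary-binary RBM in plain NumPy; the function name `cd1_step`, the learning rate, and the layer sizes in the usage lines are illustrative assumptions, not taken from any of these packages.

```python
# One CD-1 update for a binary-binary RBM (illustrative sketch only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0, W, b_vis, b_hid, lr=0.1, rng=np.random):
    # Positive phase: hidden probabilities and a sample, given the data.
    h0_prob = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random_sample(h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: one Gibbs step back to the visibles and up again.
    v1_prob = sigmoid(h0 @ W.T + b_vis)
    h1_prob = sigmoid(v1_prob @ W + b_hid)
    # Gradient approximation: <v h>_data - <v h>_model.
    n = v0.shape[0]
    W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / n
    b_vis += lr * (v0 - v1_prob).mean(axis=0)
    b_hid += lr * (h0_prob - h1_prob).mean(axis=0)
    return W, b_vis, b_hid

# Illustrative usage with random binary data (not MNIST).
rng = np.random.RandomState(0)
W = 0.01 * rng.randn(784, 256)
b_vis, b_hid = np.zeros(784), np.zeros(256)
v_batch = (rng.random_sample((32, 784)) < 0.1).astype(float)
W, b_vis, b_hid = cd1_step(v_batch, W, b_vis, b_hid)
```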
Training a deep autoencoder or a classifier on MNIST digits
http://www.cs.toronto.edu/~hinton/MatlabForSciencePaper.html
code from: Ruslan Salakhutdinov and Geoff Hinton
CNN - Convolutional neural network class
http://www.mathworks.cn/matlabcentral/fileexchange/24291
Code from: MATLAB Central File Exchange
Neural Network for Recognition of Handwritten Digits (CNN)
http://www.codeproject.com/Articles/16650/Neural-Network-for-Recognition-of-Handwritten-Digi
cuda-convnet
A fast C++/CUDA implementation of convolutional neural networks (a slow reference convolution sketch follows this entry)
http://code.google.com/p/cuda-convnet/
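cuda-convnet and the CNN entries above are about making convolutional layers fast; the NumPy sketch below only spells out what a single-channel "valid" convolution (cross-correlation, as most CNN code computes it) does. The function name `conv2d_valid` and the 28x28/5x5 sizes are assumptions for illustration; this is a slow reference, not how cuda-convnet is implemented.

```python
# Naive "valid" 2D cross-correlation: what one channel of a conv layer
# computes, written for clarity rather than speed.
import numpy as np

def conv2d_valid(image, kernel):
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.random.randn(28, 28)
k = np.random.randn(5, 5)
print(conv2d_valid(img, k).shape)   # (24, 24)
```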
matrbm
a small library that can train Restricted Boltzmann Machines, and also Deep Belief Networks of stacked RBMs.
http://code.google.com/p/matrbm/
code from: Andrej Karpathy
Exercises from the UFLDL Tutorial:
http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutorial
and tornadomeet’s blog: http://www.cnblogs.com/tornadomeet/tag/Deep%20Learning/
and https://github.com/dkyang/UFLDL-Tutorial-Exercise
Conditional Restricted Boltzmann Machines
http://www.cs.nyu.edu/~gwtaylor/publications/nips2006mhmublv/code.html
from Graham Taylor http://www.cs.nyu.edu/~gwtaylor/
Factored Conditional Restricted Boltzmann Machines
http://www.cs.nyu.edu/~gwtaylor/publications/icml2009/code/index.html
from Graham Taylor http://www.cs.nyu.edu/~gwtaylor/
Marginalized Stacked Denoising Autoencoders for Domain Adaptation
http://www1.cse.wustl.edu/~mchen/code/mSDA.tar
code from: http://www.cse.wustl.edu/~kilian/code/code.html
Tiled Convolutional Neural Networks
http://cs.stanford.edu/~quocle/TCNNweb/pretraining.tar.gz
http://cs.stanford.edu/~pangwei/projects.html
tiny-cnn:
A C++11 implementation of convolutional neural networks
https://github.com/nyanp/tiny-cnn
myCNN
https://github.com/aurofable/18551_Project/tree/master/server/2009-09-30-14-33-myCNN-0.07
Adaptive Deconvolutional Network Toolbox
http://www.matthewzeiler.com/software/DeconvNetToolbox2/DeconvNetToolbox.zip
Deep learning handwritten character recognition, C++ code
http://download.csdn.net/detail/lucky_greenegg/5413211
from: http://blog.csdn.net/lucky_greenegg/article/details/8949578
convolutionalRBM.m
A MATLAB / MEX / CUDA-MEX implementation of Convolutional Restricted Boltzmann Machines.
https://github.com/qipeng/convolutionalRBM.m
from: http://qipeng.me/software/convolutional-rbm.html
rbm-mnist
C++11 implementation of Geoff Hinton's deep learning Matlab code
https://github.com/jdeng/rbm-mnist
Learning Deep Boltzmann Machines
http://web.mit.edu/~rsalakhu/www/code_DBM/code_DBM.tar
http://web.mit.edu/~rsalakhu/www/DBM.html
Code provided by Ruslan Salakhutdinov
Efficient sparse coding algorithms (an ISTA inference sketch follows this entry)
http://web.eecs.umich.edu/~honglak/softwares/fast_sc.tgz
http://web.eecs.umich.edu/~honglak/softwares/nips06-sparsecoding.htm
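This entry and the sparse coding links below (ScSPM, SPAMS, sparsenet) all solve variants of min_z 0.5*||x - D z||^2 + lam*||z||_1 for a fixed dictionary D. The sketch below is a plain ISTA (iterative soft-thresholding) solver for that inference step in NumPy; the function name `ista`, the step-size rule, and the iteration count are illustrative assumptions, and the linked toolboxes use much faster specialized solvers.

```python
# Sparse coding inference by ISTA:
#   minimize_z  0.5 * ||x - D z||^2 + lam * ||z||_1   (dictionary D fixed)
# Plain NumPy sketch; illustrative only.
import numpy as np

def ista(x, D, lam=0.1, n_iter=200):
    L = np.linalg.norm(D, ord=2) ** 2          # Lipschitz constant of the smooth part
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)               # gradient of 0.5*||x - D z||^2
        z = z - grad / L                       # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return z

# Tiny usage example with a random dictionary (illustrative only).
rng = np.random.RandomState(0)
D = rng.randn(20, 50)
x = rng.randn(20)
print(np.count_nonzero(ista(x, D)))
```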
Linear Spatial Pyramid Matching Using Sparse Coding for Image Classification
http://www.ifp.illinois.edu/~jyang29/codes/CVPR09-ScSPM.rar
http://www.ifp.illinois.edu/~jyang29/ScSPM.htm
SPAMS
(SPArse Modeling Software) is an optimization toolbox for solving various sparse estimation problems.
http://spams-devel.gforge.inria.fr/
sparsenet
Sparse coding simulation software
http://redwood.berkeley.edu/bruno/sparsenet/
fast dropout training (a plain-dropout sketch follows this entry for contrast)
https://github.com/sidaw/fastdropout
http://nlp.stanford.edu/~sidaw/home/start
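Fast dropout replaces the Bernoulli sampling of ordinary dropout with a Gaussian approximation so that no sampling is needed. For contrast, the sketch below is plain inverted dropout on a batch of activations in NumPy; the function name `dropout` and the drop probability are assumptions for illustration, and this is the sampled baseline, not the fast-dropout algorithm from the linked repository.

```python
# Plain inverted dropout on a batch of activations (the sampled baseline
# that fast dropout approximates analytically).
import numpy as np

def dropout(activations, p_drop=0.5, train=True, rng=np.random):
    if not train or p_drop == 0.0:
        return activations                     # no-op at test time
    keep = 1.0 - p_drop
    mask = rng.random_sample(activations.shape) < keep
    return activations * mask / keep           # rescale so E[output] == input

h = np.random.randn(32, 100)
print(dropout(h).shape, dropout(h, train=False).shape)
```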
Deep Learning of Invariant Features via Simulated Fixations in Video
http://ai.stanford.edu/~wzou/deepslow_release.tar.gz
Sparse filtering
http://cs.stanford.edu/~jngiam/papers/NgiamKohChenBhaskarNg2011_Supplementary.pdf
k-means (a minimal Lloyd's iteration sketch follows this entry)
http://www.stanford.edu/~acoates/papers/kmeans_demo.tgz
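The linked demo applies k-means to image patches for unsupervised feature learning. The sketch below is only the generic Lloyd iteration in NumPy; the function name `kmeans`, the random initialization, and the fixed iteration count are arbitrary assumptions, and the demo's patch-extraction and whitening pipeline is not reproduced here.

```python
# Generic Lloyd's k-means in NumPy (illustrative sketch only).
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    rng = np.random.RandomState(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest center, then recompute the means.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

X = np.random.randn(500, 16)
centers, labels = kmeans(X, k=10)
print(centers.shape)   # (10, 16)
```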
others:
Posted on 2015-11-30 21:10 by alexanderkun.