What are some good books/papers for learning deep learning?
If you have prior machine learning experience, it should be easy to get started with deep learning. The Deep Learning book should certainly help, and so would the deep learning summer school. See also Yoshua Bengio's other answer: How can one get started with machine learning?
I'd first start by working through this checklist: Karlijn Willems' answer to How does a total beginner start to learn machine learning if they have some knowledge of programming languages?, which is concerned with learning Machine Learning.
The following seven steps (plus resources!), listed below, are included in the answer above:
- Assess, refresh and learn math and stats.
- Don’t be scared of investing in “theory”.
- Get hands-on.
- Practice.
- Don’t be scared of projects.
- Don’t stop.
- Make use of all the material that is out there.
It's clear from step 6 that you never stop learning. And since deep learning is a subfield of Machine Learning (a family of algorithms inspired by the structure and function of the brain), the same advice applies to deep learning as well.
The steps that I outlined will still stay the same, but you’ll probably want to make use of Deep Learning-specific material:
Tutorials
- Keras Tutorial: Deep Learning in Python
- TensorFlow Tutorial For Beginners
- keras: Deep Learning in R
- http://www.marekrei.com/blog/the...
Courses
- Deep Learning | Coursera
- Deep Learning in Python
- fast.ai · Making neural nets uncool again
- CS224d: Deep Learning for Natural Language Processing
Books
Sites
- Deep Learning
- http://playground.tensorflow.org
- TensorFlow
- Keras Documentation
- Welcome - Theano 0.9.0 documentation
Video
Cheat sheets, infographics, …
- Keras Cheat Sheet: Neural Networks in Python (cheat sheet)
- The Neural Network Zoo - The Asimov Institute (infographic)
Relevant posts, papers, …
Blogs
Podcasts
- Episodes
- This Week in Machine Learning and AI Podcast
- Learning Machines 101: A Gentle Introduction to Artificial Intelligence and Machine Learning
This is just a small overview; there is much more material out there!
First of all, I would recommend going through this video, which gives a concise overall description of Deep Learning:
If you have no prior experience with Machine Learning, you may find it difficult to start with Deep Learning, since Deep Learning is a subset of Machine Learning. But don't worry: if you follow a proper structure, you will be able to do it. Now, let me give you the structure that I believe one should follow in order to get started with Deep Learning:
- Basic Mathematics - Many of the Deep Learning concepts involves maths, if you are interested in knowing things in and out. Therefore, before diving deep into Deep Learning, you should have basic knowledge of statistics, probability, linear algebra and machine learning algorithm.
- Programming Tool/Library: There are many tools or libraries out there that provide all the required build - in functions and data structure need for implementing different type of Deep Neural Networks. You need to choose one of them based on the latest trend and how much familiar you are with the programming language used by them. I would recommend you to go ahead with TensorFlow library based on Python that has been developed by Google and is quite trending nowadays. Following is the video on TensorFlow Tutorial that explains all the basics first and then take you through various use cases:
- Perceptron and Neural Network - A perceptron is the fundamental unit of a neural network and mimics the functioning of a biological neuron. You need to understand how it works: how the output of one layer in a neural network is propagated forward to the next layer, so that the network can learn the intrinsic features of the input data set.
- Gradient Descent and Backpropagation - Once you are familiar with the standard neural network, you need to understand the mechanism by which these networks are trained to obtain correct results. In fact, you need to understand this part quite well, so that you can choose the variant of gradient descent, or other optimization technique, that suits your use case.
- Hands-on Experience in Training Neural Networks - Although it is quite evident, I still think it is necessary to cast light on the importance of hands-on experience. You need to work on projects that are already out there on the internet as well as devise some of your own. In fact, you need to experiment a lot: tweak different parameters and play around with the different APIs provided by the framework you have chosen for Deep Learning.
- Advanced Neural Network Types - Once you are familiar with neural networks and have gained enough experience training them, you can move ahead and explore more advanced topics such as CNNs (Convolutional Neural Networks), RNNs (Recurrent Neural Networks), RBMs (Restricted Boltzmann Machines) and Auto-Encoders.
- CNNs are a special case of deep neural network that have been successfully applied to analyzing visual imagery.
- RNNs are a type of deep neural network that is good at processing arbitrary sequences of inputs, and they are therefore quite effective for speech recognition and other sequences with temporal dependencies.
- RBMs, also a special kind of artificial neural network, are used for dimensionality reduction, classification, collaborative filtering, feature learning and topic modelling.
- Auto-encoders are artificial neural networks used for unsupervised learning of efficient codings.
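To make the perceptron step above concrete, here is a minimal NumPy sketch (illustrative only, not tied to TensorFlow or any other framework): a single unit with a step activation, trained with the classic perceptron learning rule on the linearly separable AND function.

```python
import numpy as np

# A single perceptron: weighted sum of inputs plus a bias, passed
# through a step activation.
def predict(x, w, b):
    return 1 if np.dot(w, x) + b > 0 else 0

# Classic perceptron learning rule: nudge the weights toward any
# misclassified example until the data is separated.
def train(X, y, epochs=20, lr=0.1):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = yi - predict(xi, w, b)
            w += lr * err * xi
            b += lr * err
    return w, b

# Learn the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train(X, y)
print([predict(xi, w, b) for xi in X])  # [0, 0, 0, 1]
```

This single unit is the building block; stacking layers of such units (with smooth activations) gives the networks discussed in the later steps.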
You may also start with this introductory blog on Deep Learning, which explains the idea behind deep learning in a nutshell, or go through the following video on the Deep Learning Tutorial:
Lastly, I want to remind you again of the importance of hands-on experience, because it is the only way to gain real insight into the workings of a neural network. Hope it helps!
Free Online Books
- Deep Learning by Yoshua Bengio, Ian Goodfellow and Aaron Courville (05/07/2015)
- Neural Networks and Deep Learning by Michael Nielsen (Dec 2014)
- Deep Learning by Microsoft Research (2013)
- Deep Learning Tutorial by LISA lab, University of Montreal (Jan 6 2015)
- neuraltalk by Andrej Karpathy : numpy-based RNN/LSTM implementation
- An introduction to genetic algorithms
- Artificial Intelligence: A Modern Approach
- Deep Learning in Neural Networks: An Overview
Courses
- Machine Learning - Stanford by Andrew Ng in Coursera (2010-2014)
- Machine Learning - Caltech by Yaser Abu-Mostafa (2012-2014)
- Machine Learning - Carnegie Mellon by Tom Mitchell (Spring 2011)
- Neural Networks for Machine Learning by Geoffrey Hinton in Coursera (2012)
- Neural networks class by Hugo Larochelle from Université de Sherbrooke (2013)
- Deep Learning Course by CILVR lab @ NYU (2014)
- A.I - Berkeley by Dan Klein and Pieter Abbeel (2013)
- A.I - MIT by Patrick Henry Winston (2010)
- Vision and learning - computers and brains by Shimon Ullman, Tomaso Poggio, Ethan Meyers @ MIT (2013)
- Convolutional Neural Networks for Visual Recognition - Stanford by Fei-Fei Li, Andrej Karpathy (2015)
- Deep Learning for Natural Language Processing - Stanford
- Neural Networks - usherbrooke
- Machine Learning - Oxford (2014-2015)
- Deep Learning - Nvidia (2015)
Videos and Lectures
- How To Create A Mind By Ray Kurzweil
- Deep Learning, Self-Taught Learning and Unsupervised Feature Learning By Andrew Ng
- Recent Developments in Deep Learning By Geoff Hinton
- The Unreasonable Effectiveness of Deep Learning by Yann LeCun
- Deep Learning of Representations by Yoshua Bengio
- Principles of Hierarchical Temporal Memory by Jeff Hawkins
- Machine Learning Discussion Group - Deep Learning w/ Stanford AI Lab by Adam Coates
- Making Sense of the World with Deep Learning By Adam Coates
- Demystifying Unsupervised Feature Learning By Adam Coates
- Visual Perception with Deep Learning By Yann LeCun
- The Next Generation of Neural Networks By Geoffrey Hinton at GoogleTechTalks
- The wonderful and terrifying implications of computers that can learn By Jeremy Howard at TEDxBrussels
- Unsupervised Deep Learning - Stanford by Andrew Ng in Stanford (2011)
- Natural Language Processing By Chris Manning in Stanford
Papers
- ImageNet Classification with Deep Convolutional Neural Networks
- Using Very Deep Autoencoders for Content Based Image Retrieval
- Learning Deep Architectures for AI
- CMU’s list of papers
- Neural Networks for Named Entity Recognition zip
- Training tricks by YB
- Geoff Hinton's reading list (all papers)
- Supervised Sequence Labelling with Recurrent Neural Networks
- Statistical Language Models based on Neural Networks
- Training Recurrent Neural Networks
- Recursive Deep Learning for Natural Language Processing and Computer Vision
- Bi-directional RNN
- LSTM
- GRU - Gated Recurrent Unit
- GFRNN
- LSTM: A Search Space Odyssey
- A Critical Review of Recurrent Neural Networks for Sequence Learning
- Visualizing and Understanding Recurrent Networks
- Wojciech Zaremba, Ilya Sutskever, An Empirical Exploration of Recurrent Network Architectures
- Recurrent Neural Network based Language Model
- Extensions of Recurrent Neural Network Language Model
- Recurrent Neural Network based Language Modeling in Meeting Recognition
- Deep Neural Networks for Acoustic Modeling in Speech Recognition
- Speech Recognition with Deep Recurrent Neural Networks
- Reinforcement Learning Neural Turing Machines
- Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
- Google - Sequence to Sequence Learning with Neural Networks
- Memory Networks
- Policy Learning with Continuous Memory States for Partially Observed Robotic Control
- Microsoft - Jointly Modeling Embedding and Translation to Bridge Video and Language
- Neural Turing Machines
- Ask Me Anything: Dynamic Memory Networks for Natural Language Processing
Tutorials
- UFLDL Tutorial 1
- UFLDL Tutorial 2
- Deep Learning for NLP (without Magic)
- A Deep Learning Tutorial: From Perceptrons to Deep Networks
- Deep Learning from the Bottom up
- Theano Tutorial
- Neural Networks for Matlab
- Using convolutional neural nets to detect facial keypoints tutorial
- Torch7 Tutorials
- The Best Machine Learning Tutorials On The Web
- VGG Convolutional Neural Networks Practical
Datasets
- MNIST Handwritten digits
- Google House Numbers from street view
- CIFAR-10 and CIFAR-100
- IMAGENET
- Tiny Images - 80 Million tiny images
- Flickr Data 100 Million Yahoo dataset
- Berkeley Segmentation Dataset 500
- UC Irvine Machine Learning Repository
- Flickr 8k
- Flickr 30k
- Microsoft COCO
- VQA
- Image QA
- AT&T Laboratories Cambridge face database
- AVHRR Pathfinder
- Air Freight - The Air Freight data set is a ray-traced image sequence along with ground truth segmentation based on textural characteristics. (455 images + GT, each 160x120 pixels). (Formats: PNG)
- Amsterdam Library of Object Images - ALOI is a color image collection of one-thousand small objects, recorded for scientific purposes. In order to capture the sensory variation in object recordings, we systematically varied viewing angle, illumination angle, and illumination color for each object, and additionally captured wide-baseline stereo images. We recorded over a hundred images of each object, yielding a total of 110,250 images for the collection. (Formats: png)
- Annotated face, hand, cardiac & meat images - Most images & annotations are supplemented by various ASM/AAM analyses using the AAM-API. (Formats: bmp,asf)
- Image Analysis and Computer Graphics
- Brown University Stimuli - A variety of datasets including geons, objects, and "greebles". Good for testing recognition algorithms. (Formats: pict)
- CAVIAR video sequences of mall and public space behavior - 90K video frames in 90 sequences of various human activities, with XML ground truth of detection and behavior classification (Formats: MPEG2 & JPEG)
- Machine Vision Unit
- CCITT Fax standard images - 8 images (Formats: gif)
- CMU CIL's Stereo Data with Ground Truth - 3 sets of 11 images, including color tiff images with spectroradiometry (Formats: gif, tiff)
- CMU PIE Database - A database of 41,368 face images of 68 people captured under 13 poses, 43 illumination conditions, and with 4 different expressions.
- CMU VASC Image Database - Images, sequences, stereo pairs (thousands of images) (Formats: Sun Rasterimage)
- Caltech Image Database - about 20 images - mostly top-down views of small objects and toys. (Formats: GIF)
- Columbia-Utrecht Reflectance and Texture Database - Texture and reflectance measurements for over 60 samples of 3D texture, observed with over 200 different combinations of viewing and illumination directions. (Formats: bmp)
- Computational Colour Constancy Data - A dataset oriented towards computational color constancy, but useful for computer vision in general. It includes synthetic data, camera sensor data, and over 700 images. (Formats: tiff)
- Computational Vision Lab
- Content-based image retrieval database - 11 sets of color images for testing algorithms for content-based retrieval. Most sets have a description file with names of objects in each image. (Formats: jpg)
- Efficient Content-based Retrieval Group
- Densely Sampled View Spheres - Densely sampled view spheres - upper half of the view sphere of two toy objects with 2500 images each. (Formats: tiff)
- Computer Science VII (Graphical Systems)
- Digital Embryos - Digital embryos are novel objects which may be used to develop and test object recognition systems. They have an organic appearance. (Formats: various formats are available on request)
- Univerity of Minnesota Vision Lab
- El Salvador Atlas of Gastrointestinal VideoEndoscopy - High-resolution images and videos of gastrointestinal video endoscopy studies. (Formats: jpg, mpg, gif)
- FG-NET Facial Aging Database - Database contains 1002 face images showing subjects at different ages. (Formats: jpg)
- FVC2000 Fingerprint Databases - FVC2000 is the First International Competition for Fingerprint Verification Algorithms. Four fingerprint databases constitute the FVC2000 benchmark (3520 fingerprints in all).
- Biometric Systems Lab - University of Bologna
- Face and Gesture images and image sequences - Several image datasets of faces and gestures that are ground truth annotated for benchmarking
- German Fingerspelling Database - The database contains 35 gestures and consists of 1400 image sequences that contain gestures of 20 different persons recorded under non-uniform daylight lighting conditions. (Formats: mpg,jpg)
- Language Processing and Pattern Recognition
- Groningen Natural Image Database - 4000+ 1536x1024 (16 bit) calibrated outdoor images (Formats: homebrew)
- ICG Testhouse sequence - 2 turntable sequences from different viewing heights, 36 images each, resolution 1000x750, color (Formats: PPM)
- Institute of Computer Graphics and Vision
- IEN Image Library - 1000+ images, mostly outdoor sequences (Formats: raw, ppm)
- INRIA's Syntim images database - 15 color images of simple objects (Formats: gif)
- INRIA
- INRIA's Syntim stereo databases - 34 calibrated color stereo pairs (Formats: gif)
- Image Analysis Laboratory - Images obtained from a variety of imaging modalities -- raw CFA images, range images and a host of "medical images". (Formats: homebrew)
- Image Analysis Laboratory
- Image Database - An image database including some textures
- JAFFE Facial Expression Image Database - The JAFFE database consists of 213 images of Japanese female subjects posing 6 basic facial expressions as well as a neutral pose. Ratings on emotion adjectives are also available, free of charge, for research purposes. (Formats: TIFF Grayscale images.)
- ATR Research, Kyoto, Japan
- JISCT Stereo Evaluation - 44 image pairs. These data have been used in an evaluation of stereo analysis, as described in the April 1993 ARPA Image Understanding Workshop paper ``The JISCT Stereo Evaluation'' by R.C.Bolles, H.H.Baker, and M.J.Hannah, 263--274 (Formats: SSI)
- MIT Vision Texture - Image archive (100+ images) (Formats: ppm)
- MIT face images and more - hundreds of images (Formats: homebrew)
- Machine Vision - Images from the textbook by Jain, Kasturi, Schunck (20+ images) (Formats: GIF TIFF)
- Mammography Image Databases - 100 or more images of mammograms with ground truth. Additional images available by request, and links to several other mammography databases are provided. (Formats: homebrew)
- ftp://ftp.cps.msu.edu/pub/prip - many images (Formats: unknown)
- Middlebury Stereo Data Sets with Ground Truth - Six multi-frame stereo data sets of scenes containing planar regions. Each data set contains 9 color images and subpixel-accuracy ground-truth data. (Formats: ppm)
- Middlebury Stereo Vision Research Page - Middlebury College
- Modis Airborne simulator, Gallery and data set - High Altitude Imagery from around the world for environmental modeling in support of NASA EOS program (Formats: JPG and HDF)
- NIST Fingerprint and handwriting - datasets - thousands of images (Formats: unknown)
- NIST Fingerprint data - compressed multipart uuencoded tar file
- NLM HyperDoc Visible Human Project - Color, CAT and MRI image samples - over 30 images (Formats: jpeg)
- National Design Repository - Over 55,000 3D CAD and solid models of (mostly) mechanical/machined engineering designs. (Formats: gif, vrml, wrl, stp, sat)
- Geometric & Intelligent Computing Laboratory
- OSU (MSU) 3D Object Model Database - several sets of 3D object models collected over several years to use in object recognition research (Formats: homebrew, vrml)
- OSU (MSU/WSU) Range Image Database - Hundreds of real and synthetic images (Formats: gif, homebrew)
- OSU/SAMPL Database: Range Images, 3D Models, Stills, Motion Sequences - Over 1000 range images, 3D object models, still images and motion sequences (Formats: gif, ppm, vrml, homebrew)
- Signal Analysis and Machine Perception Laboratory
- Otago Optical Flow Evaluation Sequences - Synthetic and real sequences with machine-readable ground truth optical flow fields, plus tools to generate ground truth for new sequences. (Formats: ppm,tif,homebrew)
- Vision Research Group
- ftp://ftp.limsi.fr/pub/quenot/op... - Real and synthetic image sequences used for testing a Particle Image Velocimetry application. These images may be used for the test of optical flow and image matching algorithms. (Formats: pgm (raw))
- LIMSI-CNRS/CHM/IMM/vision
- LIMSI-CNRS
- Photometric 3D Surface Texture Database - This is the first 3D texture database which provides both full real surface rotations and registered photometric stereo data (30 textures, 1680 images). (Formats: TIFF)
- SEQUENCES FOR OPTICAL FLOW ANALYSIS (SOFA) - 9 synthetic sequences designed for testing motion analysis applications, including full ground truth of motion and camera parameters. (Formats: gif)
- Computer Vision Group
- Sequences for Flow Based Reconstruction - synthetic sequence for testing structure from motion algorithms (Formats: pgm)
- Stereo Images with Ground Truth Disparity and Occlusion - a small set of synthetic images of a hallway with varying amounts of noise added. Use these images to benchmark your stereo algorithm. (Formats: raw, viff (khoros), or tiff)
- Stuttgart Range Image Database - A collection of synthetic range images taken from high-resolution polygonal models available on the web (Formats: homebrew)
- Department Image Understanding
- The AR Face Database - Contains over 4,000 color images corresponding to 126 people's faces (70 men and 56 women). Frontal views with variations in facial expressions, illumination, and occlusions. (Formats: RAW (RGB 24-bit))
- Purdue Robot Vision Lab
- The MIT-CSAIL Database of Objects and Scenes - Database for testing multiclass object detection and scene recognition algorithms. Over 72,000 images with 2873 annotated frames. More than 50 annotated object classes. (Formats: jpg)
- The RVL SPEC-DB (SPECularity DataBase) - A collection of over 300 real images of 100 objects taken under three different illumination conditions (Diffuse/Ambient/Directed). Use these images to test algorithms for detecting and compensating for specular highlights in color images. (Formats: TIFF)
- Robot Vision Laboratory
- The Xm2vts database - The XM2VTSDB contains four digital recordings of 295 people taken over a period of four months. This database contains both image and video data of faces.
- Centre for Vision, Speech and Signal Processing
- Traffic Image Sequences and 'Marbled Block' Sequence - thousands of frames of digitized traffic image sequences as well as the 'Marbled Block' sequence (grayscale images) (Formats: GIF)
- IAKS/KOGS
- U Bern Face images - hundreds of images (Formats: Sun rasterfile)
- U Michigan textures (Formats: compressed raw)
- U Oulu wood and knots database - Includes classifications - 1000+ color images (Formats: ppm)
- UCID - an Uncompressed Colour Image Database - a benchmark database for image retrieval with predefined ground truth. (Formats: tiff)
- UMass Vision Image Archive - Large image database with aerial, space, stereo, medical images and more. (Formats: homebrew)
- UNC's 3D image database - many images (Formats: GIF)
- USF Range Image Data with Segmentation Ground Truth - 80 image sets (Formats: Sun rasterimage)
- University of Oulu Physics-based Face Database - contains color images of faces under different illuminants and camera calibration conditions as well as skin spectral reflectance measurements of each person.
- Machine Vision and Media Processing Unit
- University of Oulu Texture Database - Database of 320 surface textures, each captured under three illuminants, six spatial resolutions and nine rotation angles. A set of test suites is also provided so that texture segmentation, classification, and retrieval algorithms can be tested in a standard manner. (Formats: bmp, ras, xv)
- Machine Vision Group
- Usenix face database - Thousands of face images from many different sites (circa 994)
- View Sphere Database - Images of 8 objects seen from many different view points. The view sphere is sampled using a geodesic with 172 images/sphere. Two sets for training and testing are available. (Formats: ppm)
- PRIMA, GRAVIR
- Vision-list Imagery Archive - Many images, many formats
- Wiry Object Recognition Database - Thousands of images of a cart, ladder, stool, bicycle, chairs, and cluttered scenes with ground truth labelings of edges and regions. (Formats: jpg)
- 3D Vision Group
- Yale Face Database - 165 images (15 individuals) with different lighting, expression, and occlusion configurations.
- Yale Face Database B - 5760 single light source images of 10 subjects each seen under 576 viewing conditions (9 poses x 64 illumination conditions). (Formats: PGM)
- Center for Computational Vision and Control
Frameworks
- Caffe
- Torch7
- Theano
- cuda-convnet
- convnetjs
- Ccv
- NuPIC
- DeepLearning4J
- Brain
- DeepLearnToolbox
- Deepnet
- Deeppy
- JavaNN
- hebel
- Mocha.jl
- OpenDL
- cuDNN
- MGL
- KUnet.jl
- Nvidia DIGITS - a web app based on Caffe
- Neon - Python based Deep Learning Framework
- Keras - Theano based Deep Learning Library
- Chainer - A flexible framework of neural networks for deep learning
- RNNLM Toolkit
- RNNLIB - A recurrent neural network library
- char-rnn
- MatConvNet: CNNs for MATLAB
- Minerva - a fast and flexible tool for deep learning on multi-GPU
Miscellaneous
- Google Plus - Deep Learning Community
- Caffe Webinar
- 100 Best Github Resources in Github for DL
- Word2Vec
- Caffe DockerFile
- Toronto Deep Learning convnet
- gfx.js
- Torch7 Cheat sheet
- Misc from MIT's 'Advanced Natural Language Processing' course
- Misc from MIT's 'Machine Learning' course
- Misc from MIT's 'Networks for Learning: Regression and Classification' course
- Misc from MIT's 'Neural Coding and Perception of Sound' course
- Implementing a Distributed Deep Learning Network over Spark
- A chess AI that learns to play chess using deep learning.
- Reproducing the results of "Playing Atari with Deep Reinforcement Learning" by DeepMind
- Wiki2Vec. Getting Word2vec vectors for entities and word from Wikipedia Dumps
- The original code from the DeepMind article + tweaks
- Google deepdream - Neural Network art
- An efficient, batched LSTM.
- A recurrent neural network designed to generate classical music.
We've just relaunched a new course on TensorFlow: Creative Applications of Deep Learning with TensorFlow | Kadenze
Unlike other courses, this is an application-led course: it teaches you the fundamentals of TensorFlow as well as state-of-the-art algorithms, encouraging exploration through the development of creative thinking and creative applications of deep neural networks. We've already built a very strong community, with an active forum and Slack where students can ask each other questions and learn from each other's approaches to the homework. I highly encourage you to try this course. There are plenty of *GREAT* resources for learning deep learning and TensorFlow, but this is the only comprehensive online course that will both teach you how to use TensorFlow and develop your creative potential for applying these techniques in building neural networks. The feedback has been overwhelmingly positive. Please have a look!
Course Information:
This course introduces you to deep learning: the state-of-the-art approach to building artificial intelligence algorithms. We cover the basic components of deep learning, what it means, how it works, and develop code necessary to build various algorithms such as deep convolutional networks, variational autoencoders, generative adversarial networks, and recurrent neural networks. A major focus of this course will be to not only understand how to build the necessary components of these algorithms, but also how to apply them for exploring creative applications. We'll see how to train a computer to recognize objects in an image and use this knowledge to drive new and interesting behaviors, from understanding the similarities and differences in large datasets and using them to self-organize, to understanding how to infinitely generate entirely new content or match the aesthetics or contents of another image. Deep learning offers enormous potential for creative applications and in this course we interrogate what's possible. Through practical applications and guided homework assignments, you'll be expected to create datasets, develop and train neural networks, explore your own media collections using existing state-of-the-art deep nets, synthesize new content from generative algorithms, and understand deep learning's potential for creating entirely new aesthetics and new ways of interacting with large amounts of data.
SCHEDULE
Session 1: Introduction To Tensorflow
We'll cover the importance of data with machine and deep learning algorithms, the basics of creating a dataset, how to preprocess datasets, then jump into Tensorflow, a library for creating computational graphs built by Google Research. We'll learn the basic components of Tensorflow and see how to use it to filter images.
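The computational-graph idea mentioned here can be sketched in a few lines of plain Python. This toy is only an illustration of the concept (operations as nodes whose outputs feed other nodes), not the actual TensorFlow API:

```python
# Toy computational graph: each node stores an operation and its input
# nodes, and evaluation walks the graph recursively. A deliberately
# simplified illustration of the idea behind TensorFlow.
class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def eval(self):
        return self.op(*(i.eval() for i in self.inputs))

class Const(Node):
    def __init__(self, value):
        self.value = value

    def eval(self):
        return self.value

# Build the graph for (2 + 3) * 4, then run it.
a, b, c = Const(2), Const(3), Const(4)
add = Node(lambda x, y: x + y, a, b)
mul = Node(lambda x, y: x * y, add, c)
print(mul.eval())  # 20
```

The key point is that building the graph and running it are separate steps, which is what lets a real framework optimize and differentiate the computation.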
Session 2: Training A Network W/ Tensorflow
We'll see how neural networks work, how they are "trained", and see the basic components of training a neural network. We'll then build our first neural network and use it for a fun application of teaching a neural network how to paint an image, and explore how such a network can be extended to produce different aesthetics.
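The "training" described here can be illustrated with the simplest possible case: fitting a line with stochastic gradient descent. This is a framework-free NumPy sketch of the idea, not code from the course:

```python
import numpy as np

# Minimal illustration of "training": fit y = 2x + 1 with stochastic
# gradient descent on a squared-error loss.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 2 * X + 1

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(50):
    for xi, yi in zip(X, y):
        pred = w * xi + b
        grad = pred - yi        # d(loss)/d(pred) for 0.5 * (pred - yi)^2
        w -= lr * grad * xi     # chain rule: d(pred)/dw = xi
        b -= lr * grad          # chain rule: d(pred)/db = 1
print(round(w, 2), round(b, 2))  # converges to w ≈ 2.0, b ≈ 1.0
```

A neural network's training loop has exactly this shape; only the model and the gradient computation get more elaborate.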
Session 3: Unsupervised And Supervised Learning
We explore deep neural networks capable of encoding a large dataset, and see how we can use this encoding to explore "latent" dimensions of a dataset or for generating entirely new content. We'll see what this means, how "autoencoders" can be built, and learn a lot of state-of-the-art extensions that make them incredibly powerful. We'll also learn about another type of model that performs discriminative learning and see how this can be used to predict labels of an image.
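As a toy illustration of the autoencoder idea mentioned above (a sketch, not course material), here is a linear autoencoder in NumPy that compresses 2-D points lying on a line down to a single number and reconstructs them:

```python
import numpy as np

# A minimal linear autoencoder: encode 2-D points into a 1-D "latent"
# code, decode back to 2-D, and train both maps by gradient descent.
rng = np.random.default_rng(1)
t = rng.uniform(-1, 1, size=200)
X = np.stack([t, 2 * t], axis=1)     # every point sits on the line y = 2x

enc = rng.normal(size=2)             # encoder weights: 2-D -> 1-D code
dec = rng.normal(size=2)             # decoder weights: 1-D code -> 2-D
lr = 0.01
for _ in range(300):
    for x in X:
        code = enc @ x               # encode
        recon = dec * code           # decode
        err = recon - x
        dec -= lr * err * code       # gradients of 0.5 * ||recon - x||^2
        enc -= lr * (dec @ err) * x

recon_all = (X @ enc)[:, None] * dec
print(np.mean((recon_all - X) ** 2))  # reconstruction error near zero
```

Because the data is intrinsically one-dimensional, a single latent number suffices; real autoencoders do the same thing with many more dimensions and nonlinear layers.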
Session 4: Visualizing And Hallucinating Representations
This session works with state-of-the-art networks and shows how to understand what "representations" they learn. We'll see how this process actually allows us to perform some really fun visualizations, including "Deep Dream", which can produce infinite generative fractals, and "Style Net", which allows us to combine the content of one image and the style of another to produce wildly different painterly aesthetics automatically.
Session 5: Generative Models
The last session offers a teaser into some of the future directions of generative modeling, including some state of the art models such as the "generative adversarial network", and its implementation within a "variational autoencoder", which allows for some of the best encodings and generative modeling of datasets that currently exist. We also see how to begin to model time, and give neural networks memory by creating "recurrent neural networks" and see how to use such networks to create entirely generative text.
In terms of resources, I think the best one available out there is definitely Coursera - Neural Networks for Machine Learning by Geoffrey Hinton. The first half of the material gives a very good picture of the things you need to know when starting out, and the second half covers a lot of the more advanced material.
I also think it is a good idea to start implementing things as soon as possible. Kaggle contests, in particular Digit Recognizer might be a good place to start with that.
Reading research papers might make more sense after you do the above two. Geoffrey Hinton, Andrew Ng, and the people they cite often is a good place to start off.
Hope you have a good time making machines smarter. :)
What's the most effective way to get started with deep learning?
There has been a great deal of discussion recently about how our educational systems should focus more on deep learning to encourage students to understand subject matter, as opposed to simply memorizing the key terms and basic facts of a subject. Deep learning is the key to developing students' abilities to assimilate and apply what they learn long after they complete a course.
I recommend this site: http://bit.ly/DeepLearningOnline. These days it is one of the most effective ways to get started with deep learning, and there you can find excellent resources to begin with.
Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input) layer to the last (output) layer, possibly traversing some layers multiple times.
Deep learning adds the assumption that these layers of factors correspond to levels of abstraction or composition. Varying the number of layers and the layer sizes can provide different amounts of abstraction.
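The layer-by-layer signal flow described above can be sketched in NumPy. The layers here are hypothetical, randomly initialized ones, purely to show the mechanics: each applies a linear transform followed by a nonlinearity, and its output becomes the next layer's input.

```python
import numpy as np

# Forward propagation through stacked layers: signal enters at the
# input, is transformed by each layer in turn, and exits at the output.
def relu(z):
    return np.maximum(0, z)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                  # input "signal", 4 features
layers = [rng.normal(size=(4, 8)),      # layer 1: 4 -> 8 units
          rng.normal(size=(8, 8)),      # layer 2: 8 -> 8 units
          rng.normal(size=(8, 2))]      # output layer: 8 -> 2 units

h = x
for W in layers:
    h = relu(h @ W)                     # linear transform, then nonlinearity
print(h.shape)  # (2,)
```

Each successive `h` is a re-representation of the input; the "levels of abstraction" idea is that deeper layers capture increasingly composite features.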
Learning deeply is all about repetition. It is the one formula that never fails you. If it is an audio lesson you have been given, make sure you listen to it with full concentration until you have learnt it completely. Yes, even if you need to do it a hundred times. It may sound tedious, but deep learning can actually be great fun if you do it the right way.
Then, you could read through this paper written by Hinton: A Fast Learning Algorithm for Deep Belief Nets, one of the most important papers in Deep Learning. If you would like a more rigorous introduction to Deep Learning, you could also read through Yoshua Bengio's manuscript: Page on iro.umontreal.ca. This might not be very beginner-friendly, though, and can be a pretty tough read.
Thereafter, you should get your hands dirty and start implementing some of these deep learning methods. You could follow the tutorial set here: Deep Learning Tutorials. These tutorials also give very beginner-friendly introductions to many deep learning methods. You'll have to use Theano, a Python library; though it might be pretty unintuitive at times, it's pretty awesome (it performs automatic differentiation).
Deep Learning is a relatively new field and new methods are being devised at an incredibly fast pace. You have to keep up, and the best way is to continually read research papers from conferences such as NIPS. A useful reading list is this: Reading List « Deep Learning
Good Luck!
There are already amazing answers here, but what has not been mentioned is that learning never stops; you need to commit to a long learning process. You need to build side projects, like building a basic deep learning library from scratch.
Implement backpropagation, stochastic gradient descent (SGD), convolutional neural networks (ConvNets) and long short-term memory (LSTM) networks, and run them on well-known datasets like MNIST. Implement mini versions from scratch using your mini DL library.
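As a starting point for such a mini library, here is a from-scratch backpropagation sketch on the XOR toy problem. Everything here (layer sizes, learning rate, step count) is an illustrative choice, and for brevity the updates are full-batch gradient descent rather than true stochastic SGD:

```python
import numpy as np

# Tiny 2-layer network trained on XOR, with the backward pass by hand.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass (binary cross-entropy + chain rule)
    dz2 = (p - y) / len(X)
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * (1.0 - h ** 2)        # tanh derivative
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)
    # gradient descent update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

pred = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(pred.ravel())  # [0 1 1 0] once training has converged
```

Writing this yourself, and then debugging it when a sign or a transpose is wrong, is exactly the practice the answer above is recommending.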
Trust me, DL is not so easy to work with in practice. The theory may ignore the fact that you won't know the right number of layers, or the right number of neurons per layer, beforehand. Heck, you won't even know where bugs can mess things up. Building practical systems can be frustrating, but you learn a great deal from the experience.
The debugging process, even though cumbersome, can give you a lot of insight into the working principles of DL architectures. When it finally works, you will transition into a euphoric state that is worth the hassle.
As you do that, make sure to explore any shortcomings by implementing your own novel approaches in your mini library. That way you will know these things inside out, unlike with reading alone. Without practice you will quickly forget the stuff you are reading; practicing is a good way to retain knowledge. Another good technique for knowledge retention is to answer questions about the material you are reading, and one of the best places to do that is on Quora, because the audience is diverse, with people from all backgrounds.
The mind is like a library, and a library has many books on a variety of subjects. That literature needs to be easy to find, otherwise the library becomes useless. Reading is like stocking up books in the library: without sorting them in a searchable manner, you risk being unable to find what you are looking for later. In short, you will forget easily without practice.
Answering questions and practicing is like rearranging the books in the mental library so that you can relate that knowledge more effortlessly and efficiently to the problems you encounter in the field. Your knowledge becomes indexed, like Google search, when you practice and answer questions.
Of course, don't just jump into implementation. Get an overview of deep learning, which is essentially about stacking many processing layers one atop the other, by reading some introductory material, as most answers have pointed out. It is even easier if you have knowledge of machine learning (ML), because DL architectures are just neural networks, so it is not such a big deal for someone with an ML background.
You can even quickly implement them by treating them as black boxes, using well-known libraries like TensorFlow. In fact, once you have built your mini library and have a more in-depth understanding of DL architectures, it is fine to implement them with such high-level libraries and treat them as black boxes, because your focus at that point is a high-level understanding of a wide range of DL architectures.
And finally try to apply DL to a real-world problem. Find something you want to solve in which DL can be a module or can solve the whole problem altogether. Start that side project as a GitHub open-source project or a personal project.
For example, I am trying to build a sign language interpreter using ConvNets + LSTMs + my own ideas on mobile devices. It is such a challenging project that it has pushed me to the extreme; it motivates me to read new research papers, implement things, see how they work in real life and actively pursue a long-term goal, which makes me happy in the end.
You can also get into competitions, such as those on Kaggle, for the sake of practicing and not just winning.
Hope this helps.
Baby Steps for Deep Learning with Python:
- Be comfortable with Python. For this, you can take a course at Codecademy and complete it. After completing it, you will be very familiar with the basics of object-oriented programming.
- Theano: It’s another requirement for Deep Learning, as you will be working with data represented in the form of tensors. For Theano, you can go to Deep Learning, where there are enough tutorials to make you familiar with its syntax and structure.
- One of the main prerequisites for working in this field is a Machine Learning (ML) background. If you don’t know ML, you can do the following things.
- Brush up your skills in probability and statistics before taking any course on Machine Learning, as it involves probability. After that, you can take the Coursera course “Machine Learning” by Andrew Ng of Stanford University. Do all the assignments, which are available in the course, in Python.
- By now, you are familiar with neural networks, which are the building blocks of Deep Learning. Now I’d recommend taking the CS231n course (Convolutional Neural Networks for Visual Recognition) by Dr. Fei-Fei Li, Andrej Karpathy and Justin Johnson of Stanford University. The course covers a lot about Deep Learning.
- It’s time to apply your skill set to the MNIST, CIFAR-10 and CIFAR-100 datasets. If possible, you can work with the ImageNet dataset as well.
You can learn Deep Learning online. There are various online courses; I will suggest the best Deep Learning online courses.
Deep Learning A-Z™: Hands-On Artificial Neural Networks [BEST]
==> Deep Learning Specialization by Andrew Ng, Co-founder of Coursera
If you want to break into AI, this Specialization will help you do so. Deep Learning is one of the most highly sought after skills in tech. We will help you become good at Deep Learning.
In five courses, you will learn the foundations of Deep Learning, understand how to build neural networks, and learn how to lead successful machine learning projects. You will learn about Convolutional networks, RNNs, LSTM, Adam, Dropout, BatchNorm, Xavier/He initialization, and more. You will work on case studies from healthcare, autonomous driving, sign language reading, music generation, and natural language processing. You will master not only the theory, but also see how it is applied in industry. You will practice all these ideas in Python and in TensorFlow, which we will teach.
You will also hear from many top leaders in Deep Learning, who will share with you their personal stories and give you career advice.
AI is transforming multiple industries. After finishing this specialization, you will likely find creative ways to apply it to your work.
We will help you master Deep Learning, understand how to apply it, and build a career in AI.
Relevant Courses
1. Zero to Deep Learning™ with Python and Keras
2. Data Science: Deep Learning in Python
All the best.
I think the best way to get started with Deep Learning is to fully understand the problems it arose to solve. The most important point being that Deep Learning is hierarchical feature learning.
Deep learning models are able to automatically extract features from raw data, otherwise known as feature learning.
Yoshua Bengio describes deep learning in terms of its capability to uncover and learn good representations:
“Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features.”
This quote is from his 2012 paper titled “Deep Learning of Representations for Unsupervised and Transfer Learning”, which I would recommend as a starting point to anyone with a background in machine learning.
If you are looking for a full curriculum, take a look at the open-source Deep Learning curriculum.
As soon as you are familiar with the core concepts, dive into a project where you are forced to implement the algorithms. Machine Learning Mastery is a great place to start: How to Run Your First Classifier in Weka - Machine Learning Mastery
Happy learning and hacking.
10-805 DEEP LEARNING
The following link has a very comprehensive reading list for deep learning papers. Someone familiar with machine learning fundamentals will gain much from a selected reading off the page:
Reading List « Deep Learning
Also, it is best to spread your reading across prominent groups such as those of Bengio, Hinton, Ng, LeCun and others. This way you will develop a much broader understanding of the subject.
The following techniques are for feed-forward nets and baseline classifiers such as deep belief networks.
Deep Learning is composed of both Unsupervised and Supervised Learning.
The first thing to understand with respect to the fundamentals is how a neural network learns.
Restricted Boltzmann machines and denoising autoencoders are generative models that learn, via repeated Gibbs sampling of corrupted input data, to come up with a good approximation of the data.
If we think about images, this would mean learning good representations of images such as different parts of a face, or the strokes in MNIST digits.
With text, this would be context, or how the word is used.
From here, these are stacked such that the output of one goes into another.
Eventually the network learns features good enough to feed a classifier.
This, in combination with logistic regression as an output layer, creates a deep belief network used for classification.
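A minimal sketch of how an RBM learns, assuming Bernoulli units, no bias terms, and a made-up two-pattern dataset. It uses CD-1 (a single Gibbs step), the standard practical shortcut to the repeated Gibbs sampling described above:

```python
import numpy as np

# Minimal Bernoulli RBM trained with one-step contrastive divergence (CD-1).
# Sizes, learning rate and data are illustrative only; biases are omitted
# for brevity, which a real RBM would include.
rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(scale=0.01, size=(n_visible, n_hidden))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy data: two binary patterns the RBM should learn to reconstruct
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)

for epoch in range(2000):
    v0 = data
    ph0 = sigmoid(v0 @ W)                     # P(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0  # sample the hidden units
    pv1 = sigmoid(h0 @ W.T)                   # one Gibbs step back down
    ph1 = sigmoid(pv1 @ W)                    # and up again
    # CD-1 update: positive phase minus negative phase
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(data)

recon = sigmoid(sigmoid(data @ W) @ W.T)
print(np.round(recon, 1))  # should resemble the two training patterns
```

The positive/negative phase difference is what nudges the model's reconstructions toward the data, which is the "good approximation of the data" the paragraph above refers to.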
From here, it comes down to the different knobs you can turn to generalize better, such as adaptive learning rates to help with or speed up learning, among other things.
Convolutional Restricted Boltzmann Machines are basically this but with a moving window over various subsets of something like an image to learn good features.
It also learns by repeated Gibbs sampling over what we call different slices of the data.
Convolutional nets are a bit harder to understand because they require tensors, multi-dimensional arrays whose two-dimensional cross-sections are known as slices.
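The tensor/slice vocabulary is easy to see concretely. Below, a hypothetical batch of grayscale images is a 3-D array; a 2-D slice is one image, and the moving window from the convolutional description above is just repeated sub-slicing:

```python
import numpy as np

# A tensor is just a multi-dimensional array. Here: a hypothetical batch
# of 4 grayscale images, each 5x5 pixels, stored as a 3-D array.
images = np.arange(4 * 5 * 5).reshape(4, 5, 5)

slice0 = images[0]           # a 2-D slice: the first image (5x5)
patch = images[0, 1:4, 1:4]  # a 3x3 window, one position of a moving filter

# The "moving window" of a convolution is repeated slicing:
kernel = np.ones((3, 3)) / 9.0          # a 3x3 averaging filter
out = np.zeros((3, 3))
for i in range(3):                      # valid vertical positions
    for j in range(3):                  # valid horizontal positions
        out[i, j] = (images[0, i:i+3, j:j+3] * kernel).sum()

print(images.shape, slice0.shape, patch.shape, out.shape)
```

Real convolutional layers do the same sliding-window arithmetic, just with learned kernels, many channels, and heavily optimized code.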
When you get into recurrent and recursive nets, you can handle sequential data.
That's a somewhat decent overview though.
Get started with deep learning through this course. It is one of the best-selling courses on deep learning on the Internet. Let’s see what you will learn through this course:
What Will You Learn?
- Understand the intuition behind Artificial Neural Networks
- Apply Artificial Neural Networks in practice
- Understand the intuition behind Convolutional Neural Networks
- Apply Convolutional Neural Networks in practice
- Understand the intuition behind Recurrent Neural Networks
- Apply Recurrent Neural Networks in practice
- Understand the intuition behind Self-Organizing Maps
- Apply Self-Organizing Maps in practice
- Understand the intuition behind Boltzmann Machines
- Apply Boltzmann Machines in practice
- Understand the intuition behind AutoEncoders
- Apply AutoEncoders in practice
Course Link-Deep Learning A-Z™: Hands-On Artificial Neural Networks | Learn to create Deep Learning Algorithms in Python
Learn to create Deep Learning Algorithms in Python from two Machine Learning & Data Science experts. Templates included.
Course Description By Course Instructor-
Artificial intelligence is growing exponentially. There is no doubt about that. Self-driving cars are clocking up millions of miles, IBM Watson is diagnosing patients better than armies of doctors and Google Deepmind's AlphaGo beat the World champion at Go - a game where intuition plays a key role.
But the further AI advances, the more complex become the problems it needs to solve. And only Deep Learning can solve such complex problems and that's why it's at the heart of Artificial intelligence.
--- Why Deep Learning A-Z? ---
Here are five reasons we think Deep Learning A-Z™ really is different, and stands out from the crowd of other training programs out there:
1. ROBUST STRUCTURE
The first and most important thing we focused on is giving the course a robust structure. Deep Learning is very broad and complex and to navigate this maze you need a clear and global vision of it.
That's why we grouped the tutorials into two volumes, representing the two fundamental branches of Deep Learning: Supervised Deep Learning and Unsupervised Deep Learning. With each volume focusing on three distinct algorithms, we found that this is the best structure for mastering Deep Learning.
2. INTUITION TUTORIALS
So many courses and books just bombard you with the theory, and math, and coding... But they forget to explain, perhaps, the most important part: why you are doing what you are doing. And that's how this course is so different. We focus on developing an intuitive *feel* for the concepts behind Deep Learning algorithms.
With our intuition tutorials you will be confident that you understand all the techniques on an instinctive level. And once you proceed to the hands-on coding exercises you will see for yourself how much more meaningful your experience will be. This is a game-changer.
3. EXCITING PROJECTS
Are you tired of courses based on over-used, outdated data sets?
Yes? Well then you're in for a treat.
Inside this class we will work on Real-World datasets, to solve Real-World business problems. (Definitely not the boring iris or digit classification datasets that we see in every course). In this course we will solve six real-world challenges:
- Artificial Neural Networks to solve a Customer Churn problem
- Convolutional Neural Networks for Image Recognition
- Recurrent Neural Networks to predict Stock Prices
- Self-Organizing Maps to investigate Fraud
- Boltzmann Machines to create a Recommender System
- Stacked Autoencoders* to take on the challenge for the Netflix $1 Million prize
*Stacked Autoencoders are a brand-new technique in Deep Learning which didn't even exist a couple of years ago. We haven't seen this method explained anywhere else in sufficient depth.
4. HANDS-ON CODING
In Deep Learning A-Z™ we code together with you. Every practical tutorial starts with a blank page and we write up the code from scratch. This way you can follow along and understand exactly how the code comes together and what each line means.
In addition, we will purposefully structure the code in such a way so that you can download it and apply it in your own projects. Moreover, we explain step-by-step where and how to modify the code to insert YOUR dataset, to tailor the algorithm to your needs, to get the output that you are after.
This is a course which naturally extends into your career.
5. IN-COURSE SUPPORT
Have you ever taken a course or read a book where you have questions but cannot reach the author?
Well, this course is different. We are fully committed to making this the most disruptive and powerful Deep Learning course on the planet. With that comes a responsibility to constantly be there when you need our help.
In fact, since we physically also need to eat and sleep we have put together a team of professional Data Scientists to help us out. Whenever you ask a question you will get a response from us within 48 hours maximum.
No matter how complex your query, we will be there. The bottom line is we want you to succeed.
--- The Tools ---
TensorFlow and PyTorch are the two most popular open-source libraries for Deep Learning. In this course you will learn both!
TensorFlow was developed by Google and is used in their speech recognition system, in the new Google Photos product, Gmail, Google Search and much more. Companies using TensorFlow include Airbnb, Airbus, eBay, Intel, Uber and dozens more.
PyTorch is just as powerful and is being developed primarily by researchers at Facebook AI Research, with contributions from leading universities such as Stanford, Oxford and ParisTech. Companies using PyTorch include Twitter, Salesforce and Facebook.
So which is better and for what?
Well, in this course you will have the opportunity to work with both and understand when TensorFlow is better and when PyTorch is the way to go. Throughout the tutorials we compare the two and give you tips and ideas on which could work best in certain circumstances.
The interesting thing is that both these libraries are barely over 1 year old. That's what we mean when we say that in this course we teach you the most cutting edge Deep Learning models and techniques.
--- More Tools ---
Theano is another open-source deep learning library. It's very similar to TensorFlow in its functionality, but we will nevertheless still cover it.
Keras is an incredible library to implement Deep Learning models. It acts as a wrapper for Theano and Tensorflow. Thanks to Keras we can create powerful and complex Deep Learning models with only a few lines of code. This is what will allow you to have a global vision of what you are creating. Everything you make will look so clear and structured thanks to this library, that you will really get the intuition and understanding of what you are doing.
--- Even More Tools ---
Scikit-learn is the most practical Machine Learning library. We will mainly use it:
- to evaluate the performance of our models with the most relevant technique, k-Fold Cross Validation
- to improve our models with effective Parameter Tuning
- to preprocess our data, so that our models can learn in the best conditions
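k-Fold Cross Validation itself is simple enough to sketch without any library. The "model" below is a deliberately silly stand-in (it predicts the majority training label); scikit-learn's cross_val_score automates the same train/evaluate loop:

```python
# k-fold cross-validation by hand: split the data into k folds, train on
# k-1 of them, evaluate on the held-out fold, and average the scores.

def k_fold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for n samples and k folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i not in test]
        yield train, test
        start += size

def majority(labels):
    """Stand-in 'model': predict the most common training label."""
    return max(set(labels), key=labels.count)

y = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]  # toy labels
scores = []
for train, test in k_fold_indices(len(y), k=5):
    pred = majority([y[i] for i in train])             # "train"
    acc = sum(y[i] == pred for i in test) / len(test)  # "evaluate"
    scores.append(acc)
print(sum(scores) / len(scores))  # 0.7 for this toy data
```

The averaged score is a much more honest performance estimate than a single train/test split, which is why the technique matters for evaluating deep learning models too.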
And of course, we have to mention the usual suspects. This whole course is based on Python and in every single section you will be getting hours and hours of invaluable hands-on practical coding experience.
Plus, throughout the course we will be using Numpy for computations on high-dimensional arrays, Matplotlib to plot insightful charts and Pandas to import and manipulate datasets most efficiently.
--- Who Is This Course For? ---
As you can see, there are lots of different tools in the space of Deep Learning and in this course we make sure to show you the most important and most progressive ones so that when you're done with Deep Learning A-Z™ your skills are on the cutting edge of today's technology.
If you are just starting out in Deep Learning, then you will find this course extremely useful. Deep Learning A-Z™ is structured around special coding blueprint approaches, meaning that you won't get bogged down in unnecessary programming or mathematical complexities and instead will be applying Deep Learning techniques from very early on in the course. You will build your knowledge from the ground up and you will see how with every tutorial you are getting more and more confident.
If you already have experience with Deep Learning, you will find this course refreshing, inspiring and very practical. Inside Deep Learning A-Z™ you will master some of the most cutting-edge Deep Learning algorithms and techniques (some of which didn't even exist a year ago) and through this course you will gain an immense amount of valuable hands-on experience with real-world business challenges. Plus, inside you will find inspiration to explore new Deep Learning skills and applications.
--- Real-World Case Studies ---
Mastering Deep Learning is not just about knowing the intuition and tools, it's also about being able to apply these models to real-world scenarios and derive actual measurable results for the business or project. That's why in this course we are introducing six exciting challenges:
#1 Churn Modelling Problem
In this part you will be solving a data analytics challenge for a bank. You will be given a dataset with a large sample of the bank's customers. To make this dataset, the bank gathered information such as customer id, credit score, gender, age, tenure, balance, whether the customer is active, has a credit card, etc. Over a period of 6 months, the bank observed whether these customers left or stayed.
Your goal is to make an Artificial Neural Network that can predict, based on the geo-demographical and transactional information given above, whether any individual customer will leave the bank or stay (customer churn). In addition, you are asked to rank all the customers of the bank based on their probability of leaving. To do that, you will need to use the right Deep Learning model, one that is based on a probabilistic approach.
If you succeed in this project, you will create significant added value to the bank. By applying your Deep Learning model the bank may significantly reduce customer churn.
#2 Image Recognition
In this part, you will create a Convolutional Neural Network that is able to detect various objects in images. We will implement this Deep Learning model to recognize a cat or a dog in a set of pictures. However, this model can be reused to detect anything else and we will show you how to do it - by simply changing the pictures in the input folder.
For example, you will be able to train the same model on a set of brain images to detect whether they contain a tumor. But if you want to keep it fitted to cats and dogs, then you will literally be able to take a picture of your cat or your dog, and your model will predict which pet you have. We even tested it out on Hadelin’s dog!
#3 Stock Price Prediction
In this part, you will create one of the most powerful Deep Learning models. We will even go as far as saying that you will create the Deep Learning model closest to “Artificial Intelligence”. Why is that? Because this model will have long-term memory, just like us, humans.
The branch of Deep Learning which facilitates this is Recurrent Neural Networks. Classic RNNs have short memory, and were neither popular nor powerful for this exact reason. But a recent major improvement in Recurrent Neural Networks gave rise to the popularity of LSTMs (Long Short-Term Memory RNNs), which have completely changed the playing field. We are extremely excited to include these cutting-edge deep learning methods in our course!
In this part you will learn how to implement this ultra-powerful model, and we will take on the challenge of using it to predict the real Google stock price. A similar challenge has already been faced by researchers at Stanford University, and we will aim to do at least as well as they did.
#4 Fraud Detection
According to a recent report published by Markets & Markets the Fraud Detection and Prevention Market is going to be worth $33.19 Billion USD by 2021. This is a huge industry and the demand for advanced Deep Learning skills is only going to grow. That’s why we have included this case study in the course.
This is the first part of Volume 2 - Unsupervised Deep Learning Models. The business challenge here is about detecting fraud in credit card applications. You will be creating a Deep Learning model for a bank and you are given a dataset that contains information on customers applying for an advanced credit card.
This is the data that customers provided when filling in the application form. Your task is to detect potential fraud within these applications. That means that by the end of the challenge, you will literally come up with an explicit list of customers who potentially cheated on their applications.
#5 & 6 Recommender Systems
From Amazon product suggestions to Netflix movie recommendations - good recommender systems are very valuable in today's world. And specialists who can create them are some of the top-paid Data Scientists on the planet.
We will work on a dataset that has exactly the same features as the Netflix dataset: plenty of movies, thousands of users, who have rated the movies they watched. The ratings go from 1 to 5, exactly like in the Netflix dataset, which makes the Recommender System more complex to build than if the ratings were simply “Liked” or “Not Liked”.
Your final Recommender System will be able to predict the ratings of the movies the customers didn’t watch. Accordingly, by ranking the predictions from 5 down to 1, your Deep Learning model will be able to recommend which movies each user should watch. Creating such a powerful Recommender System is quite a challenge, so we will give ourselves two shots, meaning we will build it with two different Deep Learning models.
Our first model will be Deep Belief Networks, complex Boltzmann Machines that will be covered in Part 5. Then our second model will use the powerful AutoEncoders, my personal favorites. You will appreciate the contrast between their simplicity and what they are capable of.
And you will even be able to apply it to yourself or your friends. The list of movies will be explicit, so you will simply need to rate the movies you have already watched, input your ratings into the dataset, execute your model and voila! The Recommender System will tell you exactly which movies you would love for those nights when you are out of ideas about what to watch on Netflix!
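For intuition only: the rating-prediction task described above can be sketched with a much simpler technique than Boltzmann Machines or AutoEncoders, namely matrix factorization trained on the observed ratings. The data, rank and hyperparameters below are all made up:

```python
import numpy as np

# Toy matrix factorization for rating prediction: users x movies with some
# ratings missing (0 = unobserved). A simpler stand-in for the deep models
# mentioned above, but it captures the same "fill in the blanks" idea.
rng = np.random.default_rng(1)
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
n_users, n_items, k = R.shape[0], R.shape[1], 2
P = rng.normal(scale=0.1, size=(n_users, k))  # user factors
Q = rng.normal(scale=0.1, size=(n_items, k))  # movie factors
lr, reg = 0.02, 0.02

for epoch in range(3000):
    for u, i in zip(*R.nonzero()):            # observed ratings only
        err = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

pred = P @ Q.T
print(np.round(pred, 1))  # missing cells now hold predicted ratings
```

Ranking each user's predicted ratings from high to low gives the recommendation list, exactly as described for the deep models.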
--- Summary ---
In conclusion, this is an exciting training program filled with intuition tutorials, practical exercises and real-world case studies.
We are super enthusiastic about Deep Learning and hope to see you inside the class!
Kirill & Hadelin
Who is the target audience?
- Anyone interested in Deep Learning
- Students who have at least high school knowledge in math and who want to start learning Deep Learning
- Any intermediate level people who know the basics of Machine Learning or Deep Learning, including the classical algorithms like linear regression or logistic regression and more advanced topics like Artificial Neural Networks, but who want to learn more about it and explore all the different fields of Deep Learning
- Anyone who is not that comfortable with coding but who is interested in Deep Learning and wants to apply it easily on datasets
- Any students in college who want to start a career in Data Science
- Any data analysts who want to level up in Deep Learning
- Any people who are not satisfied with their job and who want to become a Data Scientist
- Any people who want to create added value to their business by using powerful Deep Learning tools
- Any business owners who want to understand how to leverage the Exponential technology of Deep Learning in their business
- Any Entrepreneur who wants to create disruption in an industry using the most cutting edge Deep Learning algorithms
Requirements
- Just some high school mathematics level
Course Link-Deep Learning A-Z™: Hands-On Artificial Neural Networks | Learn to create Deep Learning Algorithms in Python
Hi, I could not quite get the context of the question. Do you want to get started with deep learning for business purposes, or do you want to learn how to apply it while building technologies and products? To get started with deep learning, you can do the following:
- Go to Google Tensorflow and learn it through various examples and illustrations
- Try to build some small test program with the tensorflow
- You can also try Amazon's and IBM's developer tools
- Learn the standard languages used for building deep-learning-focused technologies, tools and products
- Once you are good with public libraries or open source platforms, you can try to build your own models from scratch
Keep practicing, learn new things and try new ideas. That will help further.
I tried to cover that in Learning Deep Learning with Keras.
In short: I recommend starting from image classification on simple datasets (especially notMNIST) with a high-level library such as Keras.
An alternative would be to start from a lower level, e.g. TensorFlow, and build neural networks as multi-dimensional array expressions, but only if you have the mathematical background (the ‘fun’ part will happen later).
Assuming you are a software engineer looking to up your game with DL, here is what you can do -
- Take a Python class from DataCamp or Codecademy
- Learn the basic theoretical concepts in machine learning / backpropagation networks. The most useful resource is Programming Collective Intelligence. Or try Andrew Ng's course on Coursera
- Once you know the basics, move on to the following deep learning courses: fast.ai · Making neural nets uncool again, or search for Udacity's free deep learning course
- Start looking at Kaggle. Look at published solutions from past competitions.
You might need a GPU machine as you progress in learning.
Consider signing up for AmpLabs - Up your game, if you can wait. This is something I have been working on.
Our Company has created Deep Learning Studio - A UI based drag and drop tool to create Deep Learning Models. Check it out at Visual Deep Learning in Cloud without Programming
The best part is that you do not need to know how to code or learn tools like TensorFlow
You will need to know about Deep Learning concepts, though. We are in the process of building a Deep Learning course that can help you get started with our tool (if you are new to Deep Learning)
Apart from the cloud version, we have just started shipping the Desktop and Enterprise (can run locally) versions
There are a few other companies trying to create a UI-based tool for Deep Learning, but they are quite behind in terms of the maturity of their tools
Mahesh Kashyap
Chief Digital Officer
Deep Cognition, Inc.
Be open-minded, and understand that all the “magic” is mathematics and that a deep learning algorithm is a complex neural network.
Knowing that, I recommend searching for free online courses on Coursera, Udacity, Big Data University, etc. Beginning to use a library (TensorFlow, Caffe, PyTorch...) will give you the skills to understand how they work at a high level, and reading papers will do the same at a low level. So you have to choose the road you take, but to start, online courses are best.
I would suggest two possible paths. The first one is based on a bunch of 6 courses on Udemy :
1. Deep Learning Prerequisites: Linear Regression in Python
2. Ensemble Machine Learning in Python: Random Forest, AdaBoost
3. Deep Learning Prerequisites: Logistic Regression in Python
4. Data Science: Deep Learning in Python - Udemy
5. Deep Learning: Convolutional Neural Networks in Python
6. Data Science: Practical Deep Learning in Theano + TensorFlow
The second one is the new Nanodegree offered by Udacity : Deep Learning Nanodegree Foundation | Udacity
I hope that I could help :)
Please don’t build a chatbot. The tech isn’t mature and you won’t get good results.
Here’s a list of resources for people starting out with deep learning:
The most comprehensive source of information on Deep Learning I’ve found is Stanford’s CS231n class. Just watch all the lectures and do all the assignments.
The best way to start to learn deep learning is to read this book from beginning to end: Deep Learning .
Hi, I am a director at TECLOV (GeeksHub Pvt Ltd). We provide online courses for Machine Learning and Deep Learning, with teachers from IIT and MIT who cover basic as well as advanced topics. You can visit our site, Home | Teclov, for the whole syllabus. As we are a startup, we are offering great discounts, so do visit.
Recently I have started reading this and found it very useful:
A Complete Guide on Getting Started with Deep Learning in Python