NIPS 2015 workshops
My experience during the two days of NIPS workshops after the main meeting (parts 1, 2, 3).
Statistical Methods for Understanding Neural Systems workshop
Organized by: Allie Fletcher, Jakob Macke, Ryan P. Adams, Jascha Sohl-Dickstein
Towards a theory of high dimensional, single trial neural data analysis: On the role of random projections and phase transitions
Surya Ganguli
Surya talked about conditions for recovering the embedding dimension of discrete neural responses from noisy single-trial observations (very similar to his talk at the NIPS 2014 workshop I organized). He models the neural response as SUX plus noise, where S is a sparse sampling matrix, U is a random orthogonal embedding matrix, and X is the latent manifold driven by P stimulus conditions. Assuming Gaussian noise, and using free probability theory [Nica & Speicher], he derives a phase-transition condition for recovery.
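To make the setup concrete, here is a toy simulation in the spirit of the talk (my own sketch, not Surya's analysis; the threshold below uses the Marchenko–Pastur bulk edge as a crude stand-in for the free-probability result):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, P, K = 1000, 100, 500, 5    # neurons, recorded subset, stimuli, latent dimension

X = rng.standard_normal((K, P))                   # latent manifold over P conditions
U, _ = np.linalg.qr(rng.standard_normal((N, K)))  # random orthogonal embedding (N x K)

idx = rng.choice(N, size=M, replace=False)        # sparse sampling: observe M of N neurons
S = np.zeros((M, N))
S[np.arange(M), idx] = 1.0

sigma = 0.1                                       # single-trial Gaussian noise level
R = S @ U @ X + sigma * rng.standard_normal((M, P))

# The latent dimensions are recoverable when the signal singular values rise
# above the noise bulk edge ~ sigma * (sqrt(M) + sqrt(P)).
sv = np.linalg.svd(R, compute_uv=False)
print("estimated dimension:", int((sv > sigma * (np.sqrt(M) + np.sqrt(P))).sum()))
```

Shrinking the recorded fraction M/N or raising sigma pushes the top singular values into the noise bulk; that loss of recoverability is the phase transition in question.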
Translating between human and animal studies via Bayesian multi-task learning
Katherine Heller
Katherine talked about using a hierarchical Bayesian model and variational inference to infer linear latent dynamics. She covered several ideas, including (1) a structural prior for connectivity, (2) a cross-spectral mixture kernel for LFP [Wilson & Adams ICML 2013; Ulrich et al. NIPS 2015], and (3) combining fMRI and LFP through shared dynamics.
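As a reference point, the single-output spectral mixture kernel from [Wilson & Adams ICML 2013] that the cross-spectral construction extends can be written down in a few lines. This is a generic sketch with illustrative parameter choices, not code from the talk:

```python
import numpy as np

def spectral_mixture_kernel(tau, weights, means, variances):
    """Stationary SM kernel: k(tau) = sum_q w_q exp(-2 pi^2 tau^2 v_q) cos(2 pi tau mu_q).
    Each component q models one spectral peak with center frequency mu_q
    and bandwidth v_q."""
    w, mu, v = map(np.asarray, (weights, means, variances))
    tau = np.asarray(tau)[..., None]     # broadcast lags over components
    return np.sum(w * np.exp(-2 * np.pi**2 * tau**2 * v) * np.cos(2 * np.pi * tau * mu),
                  axis=-1)

# e.g. covariance of an LFP-like signal with theta (~6 Hz) and gamma (~40 Hz) peaks
lags = np.linspace(0.0, 0.5, 200)        # seconds
k = spectral_mixture_kernel(lags, weights=[1.0, 0.5], means=[6.0, 40.0],
                            variances=[1.0, 25.0])
```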
Similarity matching: A new theory of neural computation
Dmitri (Mitya) Chklovskii
Principled derivation of local learning rules for PCA [Pehlevan & Chklovskii NIPS 2015] and NMF [Pehlevan & Chklovskii 2015].
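The selling point is that the derived updates are synaptically local. Below is a heavily simplified caricature of such a network (my sketch; the paper derives the exact updates and output dynamics from the similarity-matching objective): Hebbian feedforward weights, anti-Hebbian lateral weights, and outputs given by the fixed point of the recurrent dynamics.

```python
import numpy as np

def similarity_matching_pca(X, k, eta=0.01, seed=0):
    """Caricature of a Hebbian/anti-Hebbian similarity-matching network.

    Feedforward weights W learn with a Hebbian rule (y x^T), lateral weights M
    with an anti-Hebbian rule (y y^T), and the output is the fixed point of
    the recurrent dynamics y = W x - M y, i.e. y = (I + M)^{-1} W x.
    The outputs come to approximately span the top-k principal subspace of X.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.standard_normal((k, n)) / np.sqrt(n)   # feedforward weights
    M = np.zeros((k, k))                           # lateral inhibitory weights
    for x in X:
        y = np.linalg.solve(np.eye(k) + M, W @ x)  # recurrent fixed point
        W += eta * (np.outer(y, x) - W)            # Hebbian, local: uses only y_i, x_j
        M += eta * (np.outer(y, y) - M)            # anti-Hebbian, local: uses only y_i, y_j
        np.fill_diagonal(M, 0.0)                   # no self-inhibition
    return W, M
```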
Small Steps Towards Biologically Plausible Deep Learning
Yoshua Bengio
What should hidden layers do in a deep neur(on)al network? He talked about some happy coincidences: What is the objective function for STDP in this setting [Bengio et al. 2015]? Deep autoencoders and symmetric weight learning [Arora et al. 2015]. Energy-based models approximate back-propagation [Bengio 2015].
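The STDP observation, roughly: a weight update proportional to presynaptic activity times the temporal derivative of postsynaptic activity qualitatively reproduces the STDP curve and can be read as a stochastic gradient step [Bengio et al. 2015]. A toy version of that update rule (my illustration, not code from the talk):

```python
import numpy as np

def stdp_like_update(pre, post, dt, eta=1e-3):
    """Weight change ~ presynaptic activity x d(postsynaptic activity)/dt.

    pre: (T, n_pre) and post: (T, n_post) rate traces sampled every dt seconds.
    Post activity rising after pre is active gives potentiation, falling gives
    depression -- qualitatively the STDP curve discussed in [Bengio et al. 2015].
    """
    dpost = np.gradient(post, dt, axis=0)       # temporal derivative of post activity
    return eta * dpost.T @ pre / pre.shape[0]   # (n_post, n_pre) average update
```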
The Human Visual Hierarchy is Isomorphic to the Hierarchy learned by a Deep Convolutional Neural Network Trained for Object Recognition
Pulkit Agrawal
Which layers of various CNNs trained on image discrimination tasks best explain the fMRI voxels? [Agrawal et al. 2014] shows that the CNN hierarchy matches the visual hierarchy, and that this is not simply a consequence of receptive field sizes.
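The usual machinery behind such claims is an encoding model: regress each voxel's responses on a layer's features and compare held-out prediction accuracy across layers. A generic sketch (not the authors' pipeline; feature extraction is assumed to happen elsewhere):

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

def layer_voxel_score(layer_features, voxel_responses):
    """Cross-validated R^2 of a ridge regression from one CNN layer's
    activations (n_images, n_units) to one voxel's responses (n_images,)."""
    model = RidgeCV(alphas=np.logspace(-2, 4, 13))
    return cross_val_score(model, layer_features, voxel_responses,
                           cv=5, scoring="r2").mean()

# features: dict mapping layer name -> (n_images, n_units) activations, computed elsewhere
# best_layer = max(features, key=lambda name: layer_voxel_score(features[name], voxel))
```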
Unsupervised learning with deterministic reconstruction using What-where-convnet
Yann LeCun
CNNs often lose the ‘where’ information in the pooling process. The what-where convnet keeps the ‘where’ information at the pooling stage and uses it to reconstruct the image [Zhao et al. 2015].
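In modern PyTorch terms (my illustration, not the paper's implementation), the ‘where’ is just the argmax indices returned by max pooling, which the decoder reuses for unpooling:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 16, 32, 32)                     # feature map from some conv layer

# 'what' = pooled values; 'where' = argmax locations (the pooling switches)
what, where = F.max_pool2d(x, kernel_size=2, return_indices=True)

# the decoder puts each 'what' back at its 'where', recovering spatial detail
recon = F.max_unpool2d(what, where, kernel_size=2)  # same shape as x, zeros elsewhere
```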
Mechanistic fallacy and modelling how we think
Neil Lawrence
He came out as a cognitive scientist. He talked about System 1 (fast, subconscious, data-driven inference that handles uncertainty well) and System 2 (slow, conscious, symbolic inference that thinks it is driving the body), and how they could talk to one another. He gave an interesting take on variations of the trolley problem, where System 1 kicks in and gives the ‘irrational’ answer.
Approximation methods for inferring time-varying interactions of a large neural population (poster)
Christian Donner and Hideaki Shimazaki
Inference on an Ising model with latent diffusion dynamics on the parameters (both first and second order). Due to the large number of parameters, it needs multiple trials with an identical latent process to make good inferences.
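To see why repeated trials matter, here is a toy generator for such data (my sketch, not their approximation scheme): the Ising parameters follow a random walk, and the same latent trajectory is replayed across trials, which is what makes the per-time-bin parameters estimable.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, n_trials = 10, 200, 50

# first- and second-order parameters follow latent diffusion (random walks)
h = np.cumsum(0.02 * rng.standard_normal((T, N)), axis=0)
J = np.cumsum(0.01 * rng.standard_normal((T, N, N)), axis=0)
J = (J + J.transpose(0, 2, 1)) / 2                # symmetric couplings
J[:, np.arange(N), np.arange(N)] = 0.0            # no self-coupling

def gibbs_sample(h_t, J_t, sweeps=20):
    """One binary pattern from the Ising model p(x) ~ exp(h.x + x.J.x / 2)."""
    x = rng.integers(0, 2, size=N).astype(float)
    for _ in range(sweeps):
        for i in range(N):
            field = h_t[i] + J_t[i] @ x           # conditional input to unit i
            x[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-field)))
    return x

# the SAME latent parameter trajectory is replayed on every trial; this
# repetition is what makes the per-time-bin parameters estimable
data = np.array([[gibbs_sample(h[t], J[t]) for t in range(T)]
                 for _ in range(n_trials)])       # (n_trials, T, N)
```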
Panel discussion: Neil Lawrence, Yann LeCun, Yoshua Bengio, Konrad Kording, Surya Ganguli, Matthias Bethge
Discussion on the interface between neuroscience and machine learning. Are we focusing too much on ‘vision’ problems? What problems should neuroscience focus on to help advance machine learning? How can datasets and problems change machine learning? Should we train DNNs to perform more diverse tasks?
Correlations and signatures of criticality in neural population models (ML and stat physics workshop)
Jakob Macke
Jakob talked about how subsampling neural data to infer the dynamics of populations of different sizes can lead to misleading conclusions (especially about criticality).
Black Box Learning and Inference workshop
- Dustin Tran, Rajesh Ranganath, David M. Blei. Variational Gaussian Process. [arXiv 2015]
- Yuri Burda, Roger Grosse, Ruslan Salakhutdinov. Importance Weighted Autoencoders. [arXiv 2015]
A tighter lower bound on the marginal likelihood! Better generative model estimation! (See the sketch after this list.)
- Alp Kucukelbir. Automatic Differentiation Variational Inference in Stan. [NIPS 2015] [software]
- Jan-Willem van de Meent, David Tolpin, Brooks Paige, Frank Wood. Black-Box Policy Search with Probabilistic Programs. [arXiv 2015]
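For the importance weighted autoencoder item above: with importance weights w_i = p(x, h_i)/q(h_i|x), the k-sample bound L_k = E[log (1/k) sum_i w_i] satisfies ELBO = L_1 <= L_k <= log p(x) and tightens as k grows. A generic estimator sketch (the sampler and log-density functions are placeholders):

```python
import numpy as np
from scipy.special import logsumexp

def iwae_bound(x, sample_q, log_p_xh, log_q_hx, k=50):
    """Monte Carlo estimate of L_k = E[log (1/k) sum_i w_i],
    with w_i = p(x, h_i) / q(h_i | x).

    sample_q(x, k) draws k posterior samples; log_p_xh and log_q_hx return
    log densities. L_1 is the usual ELBO; L_k is non-decreasing in k and
    approaches log p(x) as k grows.
    """
    hs = sample_q(x, k)
    log_w = np.array([log_p_xh(x, h) - log_q_hx(h, x) for h in hs])
    return logsumexp(log_w) - np.log(k)
```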