Network Compression Paper Collection (network compression)
Convolutional Neural Networks
- ImageNet Models
- Architecture Design
- Activation Functions
- Visualization
- Fast Convolution
- Low-Rank Filter Approximation
- Low Precision
- Parameter Pruning
- Transfer Learning
- Theory
- 3D Data
- Hardware
ImageNet Models
- 2017 CVPR Xception: Deep Learning with Depthwise Separable Convolutions (Xception)
- 2017 CVPR Aggregated Residual Transformations for Deep Neural Networks (ResNeXt)
- 2016 ECCV Identity Mappings in Deep Residual Networks (Pre-ResNet)
- 2016 arXiv Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning (Inception V4)
- 2016 CVPR Deep Residual Learning for Image Recognition (ResNet)
- 2015 arXiv Rethinking the Inception Architecture for Computer Vision (Inception V3)
- 2015 ICML Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (Inception V2)
- 2015 ICCV Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (PReLU)
- 2015 ICLR Very Deep Convolutional Networks For Large-scale Image Recognition (VGG)
- 2015 CVPR Going Deeper with Convolutions (GoogLeNet/Inception V1)
- 2012 NIPS ImageNet Classification with Deep Convolutional Neural Networks (AlexNet)
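Several of these models trade accuracy against efficiency through their convolution structure; Xception in particular builds on depthwise separable convolutions (also the core of MobileNets in the next section). Below is a minimal NumPy sketch of the idea, with illustrative shapes rather than any paper's exact configuration:

```python
import numpy as np

def depthwise_separable_conv(x, depthwise_k, pointwise_k):
    """Depthwise conv (one kxk filter per input channel, 'valid' padding)
    followed by a 1x1 pointwise conv that mixes channels."""
    c_in, h, w = x.shape
    kh, kw = depthwise_k.shape[1:]          # depthwise_k: (c_in, kh, kw)
    c_out = pointwise_k.shape[0]            # pointwise_k: (c_out, c_in)
    oh, ow = h - kh + 1, w - kw + 1

    # Depthwise step: each channel is filtered independently.
    dw = np.zeros((c_in, oh, ow))
    for c in range(c_in):
        for i in range(oh):
            for j in range(ow):
                dw[c, i, j] = np.sum(x[c, i:i+kh, j:j+kw] * depthwise_k[c])

    # Pointwise step: 1x1 conv = per-pixel matrix multiply across channels.
    return np.einsum('oc,chw->ohw', pointwise_k, dw)

rng = np.random.default_rng(0)
c_in, c_out, k = 16, 32, 3
x = rng.standard_normal((c_in, 8, 8))
out = depthwise_separable_conv(x,
                               rng.standard_normal((c_in, k, k)),
                               rng.standard_normal((c_out, c_in)))
print(out.shape)  # (32, 6, 6)

# Parameter count vs. a standard conv with the same receptive field:
standard = c_out * c_in * k * k            # 4608
separable = c_in * k * k + c_out * c_in    # 656
print(standard, separable)
```

The parameter saving is the point: for the same receptive field and channel counts, the separable version here uses 656 weights where a standard convolution uses 4608.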
Architecture Design
- 2017 arXiv One Model To Learn Them All
- 2017 arXiv MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
- 2017 ICML AdaNet: Adaptive Structural Learning of Artificial Neural Networks
- 2017 ICML Large-Scale Evolution of Image Classifiers
- 2017 CVPR Aggregated Residual Transformations for Deep Neural Networks
- 2017 CVPR Densely Connected Convolutional Networks
- 2017 ICLR Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
- 2017 ICLR Neural Architecture Search with Reinforcement Learning
- 2017 ICLR Designing Neural Network Architectures using Reinforcement Learning
- 2017 ICLR Do Deep Convolutional Nets Really Need to be Deep and Convolutional?
- 2017 ICLR Highway and Residual Networks learn Unrolled Iterative Estimation
- 2016 NIPS Residual Networks Behave Like Ensembles of Relatively Shallow Networks
- 2016 BMVC Wide Residual Networks
- 2016 arXiv Benefits of depth in neural networks
- 2016 AAAI On the Depth of Deep Neural Networks: A Theoretical View
- 2016 arXiv SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size
- 2015 ICMLW Highway Networks
- 2015 CVPR Convolutional Neural Networks at Constrained Time Cost
- 2015 CVPR Fully Convolutional Networks for Semantic Segmentation
- 2014 NIPS Do Deep Nets Really Need to be Deep?
- 2014 ICLRW Understanding Deep Architectures using a Recursive Convolutional Network
- 2013 ICML Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures
- 2009 ICCV What is the Best Multi-Stage Architecture for Object Recognition?
- 1995 NIPS Simplifying Neural Nets by Discovering Flat Minima
- 1994 T-NN SVD-NET: An Algorithm that Automatically Selects Network Structure
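SqueezeNet (listed above) reaches its small model size largely through the Fire module: a 1x1 squeeze layer feeding a concatenated mix of 1x1 and 3x3 expand filters. A rough parameter-count sketch; the layer sizes below are illustrative, not SqueezeNet's exact configuration:

```python
def fire_module_params(c_in, s1x1, e1x1, e3x3):
    """Parameter count of a SqueezeNet-style Fire module (biases ignored).
    squeeze: s1x1 1x1 filters; expand: e1x1 1x1 and e3x3 3x3 filters."""
    squeeze = c_in * s1x1 * 1 * 1
    expand = s1x1 * e1x1 * 1 * 1 + s1x1 * e3x3 * 3 * 3
    return squeeze + expand

# Illustrative sizes (not necessarily SqueezeNet's exact configuration):
c_in, s1x1, e1x1, e3x3 = 96, 16, 64, 64
c_out = e1x1 + e3x3  # the two expand outputs are concatenated
fire = fire_module_params(c_in, s1x1, e1x1, e3x3)
plain = c_in * c_out * 3 * 3  # a standard 3x3 conv with the same c_in/c_out
print(fire, plain)  # 11776 vs 110592 for these sizes
```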
Activation Functions
- 2017 arXiv Self-Normalizing Neural Networks (SELU)
- 2016 ICLR Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) (ELU)
- 2015 arXiv Empirical Evaluation of Rectified Activations in Convolutional Network (RReLU)
- 2015 ICCV Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (PReLU)
- 2013 ICML Rectifier Nonlinearities Improve Neural Network Acoustic Models
- 2010 ICML Rectified Linear Units Improve Restricted Boltzmann Machines (ReLU)
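For reference, the activations listed above in a few lines of NumPy. The PReLU slope is learned in the original paper (0.25 is its initialization), and the SELU constants are the fixed values derived in Self-Normalizing Neural Networks:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def prelu(x, a=0.25):  # PReLU: slope `a` is a learned parameter; 0.25 is its init
    return np.where(x > 0, x, a * x)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.linspace(-3, 3, 7)
for f in (relu, prelu, elu, selu):
    print(f.__name__, np.round(f(x), 3))
```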
Visualization
- 2017 CVPR Network Dissection: Quantifying Interpretability of Deep Visual Representations
- 2015 ICMLW Understanding Neural Networks Through Deep Visualization
- 2014 ECCV Visualizing and Understanding Convolutional Networks
Fast Convolution
- 2017 ICML Warped Convolutions: Efficient Invariance to Spatial Transformations
- 2017 ICLR Faster CNNs with Direct Sparse Convolutions and Guided Pruning
- 2016 NIPS PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions
- 2016 CVPR Fast Algorithms for Convolutional Neural Networks (Winograd)
- 2015 CVPR Sparse Convolutional Neural Networks
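Fast Algorithms for Convolutional Neural Networks is the Winograd paper; its smallest 1-D case, F(2,3), produces two outputs of a 3-tap filter with 4 multiplications instead of 6. A NumPy sketch using the F(2,3) transform matrices from that paper:

```python
import numpy as np

# Winograd F(2,3): two outputs of a 3-tap correlation in 4 multiplications.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """d: 4 input values, g: 3 filter taps -> 2 outputs."""
    return AT @ ((G @ g) * (BT @ d))   # 4 elementwise multiplications

rng = np.random.default_rng(0)
d, g = rng.standard_normal(4), rng.standard_normal(3)
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(np.allclose(winograd_f23(d, g), direct))  # True
```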
Low-Rank Filter Approximation
- 2016 ICLR Convolutional Neural Networks with Low-rank Regularization
- 2016 ICLR Training CNNs with Low-Rank Filters for Efficient Image Classification
- 2016 TPAMI Accelerating Very Deep Convolutional Networks for Classification and Detection
- 2015 CVPR Efficient and Accurate Approximations of Nonlinear Convolutional Networks
- 2015 ICLR Speeding-up convolutional neural networks using fine-tuned cp-decomposition
- 2014 NIPS Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation
- 2014 BMVC Speeding up Convolutional Neural Networks with Low Rank Expansions
- 2013 NIPS Predicting Parameters in Deep Learning
- 2013 CVPR Learning Separable Filters
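The common thread above is factoring a weight tensor into a product of low-rank terms. The simplest instance, for a fully-connected layer (as in Exploiting Linear Structure Within Convolutional Networks), is a truncated SVD. A minimal sketch; the weight matrix is synthetic and constructed to be near low-rank, standing in for the approximately low-rank filters these papers observe in trained networks:

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate W (m x n) by U_r @ V_r with U_r: m x rank, V_r: rank x n.
    Replaces m*n parameters with rank*(m + n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]     # fold singular values into U
    V_r = Vt[:rank, :]
    return U_r, V_r

rng = np.random.default_rng(0)
# Synthetic near-low-rank weights (rank ~32 plus small noise).
W = rng.standard_normal((256, 32)) @ rng.standard_normal((32, 1024)) \
    + 0.01 * rng.standard_normal((256, 1024))
U_r, V_r = low_rank_factorize(W, rank=32)
x = rng.standard_normal(1024)
err = np.linalg.norm(W @ x - U_r @ (V_r @ x)) / np.linalg.norm(W @ x)
print(U_r.shape, V_r.shape, round(err, 4))  # tiny relative error at rank 32
# params: 256*1024 = 262144  ->  32*(256+1024) = 40960
```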
Low Precision
- 2017 arXiv BitNet: Bit-Regularized Deep Neural Networks
- 2017 arXiv Gradient Descent for Spiking Neural Networks
- 2017 arXiv ShiftCNN: Generalized Low-Precision Architecture for Inference of Convolutional Neural Networks
- 2017 arXiv Gated XNOR Networks: Deep Neural Networks with Ternary Weights and Activations under a Unified Discretization Framework
- 2017 arXiv The High-Dimensional Geometry of Binary Neural Networks
- 2017 NIPS Training Quantized Nets: A Deeper Understanding
- 2017 NIPS TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning
- 2017 ICML Analytical Guarantees on Numerical Precision of Deep Neural Networks
- 2017 arXiv Deep Learning with Low Precision by Half-wave Gaussian Quantization
- 2017 CVPR Network Sketching: Exploiting Binary Structure in Deep CNNs
- 2017 CVPR Local Binary Convolutional Neural Networks
- 2017 ICLR Towards the Limit of Network Quantization
- 2017 ICLR Loss-aware Binarization of Deep Networks
- 2017 ICLR Trained Ternary Quantization
- 2017 ICLR Incremental Network Quantization: Towards Lossless CNNs with Low-precision Weights
- 2016 arXiv Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
- 2016 arXiv Accelerating Deep Convolutional Networks using low-precision and sparsity
- 2016 arXiv Deep neural networks are robust to weight binarization and other non-linear distortions
- 2016 ECCV XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
- 2016 ICMLW Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks
- 2016 ICML Fixed Point Quantization of Deep Convolutional Networks
- 2016 NIPS Binarized Neural Networks
- 2016 arXiv Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
- 2016 CVPR Quantized Convolutional Neural Networks for Mobile Devices
- 2016 ICLR Neural Networks with Few Multiplications
- 2015 arXiv Resiliency of Deep Neural Networks under Quantization
- 2015 arXiv Rounding Methods for Neural Networks with Low Resolution Synaptic Weights
- 2015 NIPS Backpropagation for Energy-Efficient Neuromorphic Computing
- 2015 NIPS BinaryConnect: Training Deep Neural Networks with Binary Weights during Propagations
- 2015 ICMLW Bitwise Neural Networks
- 2015 ICML Deep Learning with Limited Numerical Precision
- 2015 ICLRW Training deep neural networks with low precision multiplications
- 2015 arXiv Training Binary Multilayer Neural Networks for Image Classification using Expectation Backpropagation
- 2014 NIPS Expectation Backpropagation: Parameter-Free Training of Multilayer Neural Networks with Continuous or Discrete Weights
- 2013 arXiv Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation
- 2011 NIPSW Improving the speed of neural networks on CPUs
- 1987 Combinatorica Randomized rounding: A technique for provably good algorithms and algorithmic proofs
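The recurring recipe in the binary-weight papers above (BinaryConnect, Binarized Neural Networks, XNOR-Net) is to keep real-valued "shadow" weights, binarize them with sign() in the forward pass, and pass gradients through sign() as if it were the identity (the straight-through estimator from the 2013 Bengio et al. entry above). A toy sketch on a single linear layer; the regression task and hyperparameters are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, lr = 8, 1, 0.1
W = rng.standard_normal((n_out, n_in)) * 0.1           # real-valued shadow weights
w_true = np.sign(rng.standard_normal((n_out, n_in)))   # a binary target model

for step in range(200):
    x = rng.standard_normal((32, n_in))
    y = x @ w_true.T                      # targets from the binary model
    Wb = np.sign(W)                       # forward pass uses binarized weights
    pred = x @ Wb.T
    grad_pred = 2 * (pred - y) / len(x)   # d(MSE)/d(pred)
    grad_W = grad_pred.T @ x              # straight-through: treat sign as identity
    W -= lr * grad_W
    W = np.clip(W, -1, 1)                 # keep shadow weights bounded (BinaryConnect)

print(np.mean(np.sign(W) == w_true))      # typically 1.0 for this toy setup
```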
Parameter Pruning
- 2017 ICML Beyond Filters: Compact Feature Map for Portable Deep Model
- 2017 ICLR Soft Weight-Sharing for Neural Network Compression
- 2017 ICLR Pruning Convolutional Neural Networks for Resource Efficient Inference
- 2017 ICLR Pruning Filters for Efficient ConvNets
- 2016 arXiv Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning
- 2016 arXiv Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures
- 2016 NIPS Learning the Number of Neurons in Deep Networks
- 2016 NIPS Learning Structured Sparsity in Deep Neural Networks
- 2016 NIPS Dynamic Network Surgery for Efficient DNNs
- 2016 ECCV Less is More: Towards Compact CNNs
- 2016 CVPR Fast ConvNets Using Group-wise Brain Damage
- 2016 ICLR Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
- 2016 ICLR Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications
- 2015 arXiv Structured Pruning of Deep Convolutional Neural Networks
- 2015 IEEE Access Channel-Level Acceleration of Deep Face Representations
- 2015 BMVC Data-free parameter pruning for Deep Neural Networks
- 2015 ICML Compressing Neural Networks with the Hashing Trick
- 2015 ICCV Deep Fried Convnets
- 2015 ICCV An Exploration of Parameter Redundancy in Deep Networks with Circulant Projections
- 2015 NIPS Learning both Weights and Connections for Efficient Neural Networks
- 2015 ICLR FitNets: Hints for Thin Deep Nets
- 2014 arXiv Compressing Deep Convolutional Networks using Vector Quantization
- 2014 NIPSW Distilling the Knowledge in a Neural Network
- 1995 ISANN Evaluating Pruning Methods
- 1993 T-NN Pruning Algorithms--A Survey
- 1989 NIPS Optimal Brain Damage
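The prune-and-retrain pipeline of Learning both Weights and Connections (and the pruning stage of Deep Compression) keeps the largest-magnitude weights and forces the rest to zero through a mask during further training. A minimal sketch of the masking step; the global-percentile threshold is one common policy, not the only one used in these papers:

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of W.
    Returns the pruned weights and the binary mask to re-apply after
    every subsequent gradient update, so pruned weights stay zero."""
    thresh = np.quantile(np.abs(W), sparsity)
    mask = (np.abs(W) > thresh).astype(W.dtype)
    return W * mask, mask

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
W_pruned, mask = magnitude_prune(W, sparsity=0.9)
print(round(1.0 - mask.mean(), 3))  # ~0.9 of the weights removed

# During retraining, re-apply the mask after each update:
# W = (W - lr * grad) * mask
```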
Transfer Learning
- 2016 arXiv What makes ImageNet good for transfer learning?
- 2014 NIPS How transferable are features in deep neural networks?
- 2014 CVPR CNN Features off-the-shelf: an Astounding Baseline for Recognition
- 2014 ICML DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
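DeCAF and CNN Features off-the-shelf both show that a simple classifier on frozen CNN features is a strong baseline. A sketch of that recipe with synthetic class-clustered vectors standing in for the CNN activations (no pretrained network is loaded here), using a closed-form ridge classifier:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 128, 5                      # samples, feature dim, classes

# Stand-ins for frozen CNN features: class-dependent Gaussian blobs.
labels = rng.integers(0, k, n)
centers = rng.standard_normal((k, d)) * 2
feats = centers[labels] + rng.standard_normal((n, d))

Y = np.eye(k)[labels]                      # one-hot targets
lam = 1.0                                  # ridge penalty
W = np.linalg.solve(feats.T @ feats + lam * np.eye(d), feats.T @ Y)
acc = np.mean(np.argmax(feats @ W, axis=1) == labels)
print(round(acc, 3))                       # high accuracy on these easy blobs
```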
Theory
- 2017 ICML On the Expressive Power of Deep Neural Networks
- 2017 ICML A Closer Look at Memorization in Deep Networks
- 2017 ICML An Analytical Formula of Population Gradient for two-layered ReLU network and its Applications in Convergence and Critical Point Analysis
- 2016 NIPS Exponential expressivity in deep neural networks through transient chaos
- 2016 arXiv Understanding Deep Convolutional Networks
- 2014 NIPS On the number of linear regions of deep neural networks
- 2014 ICML Provable Bounds for Learning Some Deep Representations
- 2014 ICLR On the number of response regions of deep feed forward networks with piece-wise linear activations
- 2014 ICLR Revisiting natural gradient for deep networks
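Two of the papers above count the linear pieces that a ReLU network carves its input space into. Since each piece corresponds to one ReLU activation pattern, sampling inputs and counting distinct patterns gives an empirical lower bound on the region count; a toy sketch with an arbitrary random two-layer network:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, widths = 2, [16, 16]                 # arbitrary toy architecture

# Random ReLU network weights.
Ws, bs, prev = [], [], d_in
for w in widths:
    Ws.append(rng.standard_normal((w, prev)))
    bs.append(rng.standard_normal(w))
    prev = w

def activation_pattern(x):
    """Bit pattern of which ReLUs fire; constant within one linear region."""
    bits = []
    for W, b in zip(Ws, bs):
        x = W @ x + b
        bits.extend(x > 0)
        x = np.maximum(0.0, x)
    return tuple(bits)

samples = rng.uniform(-1, 1, size=(20000, d_in))
patterns = {activation_pattern(x) for x in samples}
print(len(patterns))   # lower bound on the number of linear regions hit
```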
3D Data
- 2017 NIPS PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space
- 2017 ICCV Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs
- 2017 SIGGRAPH O-CNN: Octree-based Convolutional Neural Network for Understanding 3D Shapes
- 2017 CVPR PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
- 2017 CVPR OctNet: Learning Deep 3D Representations at High Resolutions
- 2016 NIPS FPNN: Field Probing Neural Networks for 3D Data
- 2016 NIPS Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling
- 2015 ICCV Multi-view Convolutional Neural Networks for 3D Shape Recognition
- 2015 BMVC Sparse 3D convolutional neural networks
- 2015 CVPR 3D ShapeNets: A Deep Representation for Volumetric Shapes
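The volumetric methods above (3D ShapeNets, OctNet) consume occupancy grids, while PointNet works on raw point sets. A minimal sketch of the voxelization step that turns a point cloud into a binary occupancy grid; the 32^3 resolution is an arbitrary choice:

```python
import numpy as np

def voxelize(points, resolution=32):
    """Map an (n, 3) point cloud into a binary occupancy grid."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    idx = (points - lo) / np.maximum(hi - lo, 1e-12) * (resolution - 1)
    idx = idx.astype(int)
    grid = np.zeros((resolution,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

rng = np.random.default_rng(0)
points = rng.standard_normal((2048, 3))     # stand-in for a scanned shape
grid = voxelize(points)
print(grid.shape, int(grid.sum()))          # (32, 32, 32), occupied voxel count
```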
Hardware
- 2017 ISVLSI YodaNN: An ultra-low power convolutional neural network accelerator based on binary weights
- 2017 ASPLOS SC-DCNN: Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing
- 2017 FPGA Can FPGAs Beat GPUs in Accelerating Next-Generation Deep Neural Networks?
- 2015 NIPS Tutorial High-Performance Hardware for Machine Learning