Corpus ID: 195798897

Neuron ranking - an informed way to condense convolutional neural networks architecture

@article{Adamczewski2019NeuronR,
  title={Neuron ranking - an informed way to condense convolutional neural networks architecture},
  author={Kamil Adamczewski and Mijung Park},
  journal={ArXiv},
  year={2019},
  volume={abs/1907.02519}
}
Convolutional neural networks (CNNs) have in recent years made a dramatic impact in science, technology and industry, yet the theoretical mechanism behind CNN architecture design remains surprisingly vague. CNN neurons, including their distinctive element, the convolutional filters, are known to learn features, yet their individual role in producing the output is rather unclear. The thesis of this work is that not all neurons are equally important and some of them contain more useful… 
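The thesis above, that neurons differ in how much they contribute, is often illustrated with simple magnitude-based proxies. A minimal sketch that ranks convolutional filters by the L1 norm of their weights, a common pruning heuristic used here purely for illustration and not necessarily the criterion proposed by the authors:

```python
import numpy as np

def rank_filters_by_l1(conv_weights):
    """Rank convolutional filters by the L1 norm of their weights.

    conv_weights: array of shape (num_filters, in_channels, kh, kw).
    Returns filter indices sorted from most to least important under
    this simple magnitude-based importance proxy.
    """
    scores = np.abs(conv_weights).reshape(conv_weights.shape[0], -1).sum(axis=1)
    return np.argsort(-scores)

# Toy layer with 3 filters; the middle one is scaled near zero,
# so under this proxy it ranks last (a natural pruning candidate).
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 2, 3, 3))
w[1] *= 0.01  # a weak filter
order = rank_filters_by_l1(w)
```

Structured pruning would then drop the lowest-ranked filters and fine-tune the condensed network.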

Citations

Q-FIT: The Quantifiable Feature Importance Technique for Explainable Machine Learning

A distribution over the explanation makes it possible to define a closed-form divergence that measures the similarity between feature importances learned under different models, and is used to study how feature importance trades off against essential notions in modern machine learning, such as privacy and fairness.

FaShapley: Fast and Approximated Shapley Based Model Pruning Towards Certifiably Robust DNNs

  • Computer Science
  • 2022
This paper designs a quantitative criterion, neuron Shapley, to evaluate neuron weight/filter importance within DNNs, leading to effective unstructured/structured pruning strategies that improve the certified robustness of the pruned models.
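The neuron Shapley criterion builds on the game-theoretic Shapley value, which FaShapley approximates for speed. A minimal sketch of the standard permutation-sampling (Monte Carlo) Shapley estimator on a toy additive game, where "players" stand in for neurons and the payoff stands in for model performance; the function names are illustrative, not the paper's API:

```python
import random

def shapley_values(players, value_fn, num_samples=200, seed=0):
    """Monte Carlo (permutation-sampling) estimate of Shapley values.

    players: list of player ids (here, neuron indices).
    value_fn: maps a frozenset of players to a scalar payoff
              (e.g. model accuracy with only those neurons active).
    """
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(num_samples):
        perm = players[:]
        rng.shuffle(perm)
        coalition = set()
        prev = value_fn(frozenset(coalition))
        for p in perm:
            # Marginal contribution of p to the growing coalition.
            coalition.add(p)
            cur = value_fn(frozenset(coalition))
            phi[p] += cur - prev
            prev = cur
    return {p: v / num_samples for p, v in phi.items()}

# Toy additive game: neuron 0 contributes 1.0, neuron 1 contributes
# 0.5, neuron 2 contributes nothing; Shapley values recover exactly
# these contributions.
contrib = {0: 1.0, 1: 0.5, 2: 0.0}
vals = shapley_values([0, 1, 2], lambda s: sum(contrib[p] for p in s))
```

Pruning then removes the neurons with the smallest estimated values.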

References

SHOWING 1-10 OF 38 REFERENCES

Less Is More: Towards Compact CNNs

This work shows that, by incorporating sparse constraints into the objective function, it is possible to decimate the number of neurons during the training stage; the number of parameters and the memory footprint of the neural network are thus reduced, which is desirable at test time.
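The sparse-constraint idea can be sketched with a group-lasso penalty that pushes each neuron's whole incoming-weight group toward exact zero during training; this is a generic illustration of such constraints, not the paper's exact formulation:

```python
import numpy as np

def group_lasso_penalty(weight, lam=0.1):
    """Group-lasso penalty that pushes entire rows (neurons) to zero.

    weight: (num_neurons, fan_in) matrix; each row is one neuron's
    incoming weights, treated as a single group. The sum of per-group
    L2 norms favors zeroing out whole groups rather than scattered
    individual weights.
    """
    return lam * np.sqrt((weight ** 2).sum(axis=1)).sum()

def total_loss(data_loss, weight, lam=0.1):
    # Sparse-constrained objective: task loss plus group sparsity.
    return data_loss + group_lasso_penalty(weight, lam)
```

Neurons whose groups reach (near-)zero norm can then be removed outright, shrinking both parameters and memory.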

Network Dissection: Quantifying Interpretability of Deep Visual Representations

This work uses the proposed Network Dissection method to test the hypothesis that interpretability is an axis-independent property of the representation space, then applies the method to compare the latent representations of various networks when trained to solve different classification problems.

A Unified Approach to Interpreting Model Predictions

A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
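For a linear model with independent features, SHAP values have a simple closed form, w_i * (x_i - baseline_i), which makes the framework's local-accuracy property (attributions sum to the prediction difference) easy to see; a minimal sketch:

```python
def linear_shap(weights, x, baseline):
    """Exact SHAP values for a linear model f(x) = sum_i w_i * x_i.

    Assuming feature independence, the SHAP value of feature i is
    w_i * (x_i - baseline_i); by local accuracy the attributions sum
    to f(x) - f(baseline).
    """
    return [w * (xi - bi) for w, xi, bi in zip(weights, x, baseline)]

w = [2.0, -1.0, 0.5]
x = [1.0, 3.0, 4.0]
b = [0.0, 0.0, 0.0]
phi = linear_shap(w, x, b)  # per-feature attributions
```

For nonlinear models, exact Shapley values require summing over feature coalitions, which is what approximations such as Kernel SHAP address.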

Learning Sparse Neural Networks through L0 Regularization

A practical method for L0-norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero, which allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way.
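The key device in this approach is the hard-concrete gate, a stochastic relaxation of a Bernoulli on/off switch that can take exact zeros while remaining reparameterizable for gradient descent. A minimal sketch of gate sampling; parameter names follow the paper's beta/gamma/zeta convention, and the defaults shown are the commonly used values, assumed here for illustration:

```python
import math
import random

def hard_concrete_gate(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1, u=None):
    """Sample a hard-concrete gate z in [0, 1].

    A sigmoid of logistic noise plus a learnable log_alpha is
    stretched to (gamma, zeta) and clipped to [0, 1], so exact zeros
    (pruned weights) occur with nonzero probability while the sample
    stays differentiable w.r.t. log_alpha almost everywhere.
    """
    if u is None:
        u = random.random()
    s = 1 / (1 + math.exp(-(math.log(u) - math.log(1 - u) + log_alpha) / beta))
    s_bar = s * (zeta - gamma) + gamma  # stretch to (gamma, zeta)
    return min(1.0, max(0.0, s_bar))   # hard clip: exact 0s and 1s
```

Each weight is multiplied by its gate; training penalizes the expected number of open gates, which approximates the L0 norm.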

Visualizing and Understanding Convolutional Networks

A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large convolutional network models; used in a diagnostic role, it helps find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.

Understanding Neural Networks Through Deep Visualization

This work introduces several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations of convolutional neural networks.

ImageNet classification with deep convolutional neural networks

A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
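Dropout, the regularizer highlighted in this abstract, zeroes random units at training time so the network cannot rely on any single co-adapted feature. A minimal sketch of "inverted" dropout, the now-standard variant that rescales at train time so no correction is needed at test time:

```python
import numpy as np

def dropout(x, p=0.5, rng=None, training=True):
    """Inverted dropout: zero each unit with probability p at train
    time and rescale survivors by 1/(1-p), so activations have the
    same expected value at train and test time."""
    if not training or p == 0.0:
        return x
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(x.shape) >= p  # keep each unit with prob 1-p
    return x * mask / (1.0 - p)
```

At test time (`training=False`) the layer is the identity, matching the expected train-time output.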

Recent advances in convolutional neural networks

Very Deep Convolutional Networks for Large-Scale Image Recognition

This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
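The design insight behind the very small filters is that a stack of stride-1 3x3 convolutions matches the receptive field of a single larger filter while using fewer parameters and adding nonlinearities. A minimal sketch of the receptive-field arithmetic:

```python
def receptive_field(kernel_sizes):
    """Receptive field of a stack of stride-1 convolutions:
    each k x k layer extends the field by k - 1 pixels."""
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

# Two stacked 3x3 layers see a 5x5 region; three see 7x7,
# matching a single 7x7 filter with fewer parameters.
two_stack = receptive_field([3, 3])
three_stack = receptive_field([3, 3, 3])
```

For C channels, three 3x3 layers cost 27C^2 weights versus 49C^2 for one 7x7 layer, which is part of why depth with small filters pays off.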