# Pruning via Iterative Ranking of Sensitivity Statistics

@article{Verdenius2020PruningVI, title={Pruning via Iterative Ranking of Sensitivity Statistics}, author={Stijn Verdenius and Maarten Stol and Patrick Forré}, journal={ArXiv}, year={2020}, volume={abs/2006.00896} }

With the introduction of SNIP [arXiv:1810.02340v2], it has been demonstrated that modern neural networks can effectively be pruned before training. Yet, its sensitivity criterion has since been criticized for not propagating the training signal properly, or even disconnecting layers. As a remedy, GraSP [arXiv:2002.07376v1] was introduced, compromising on simplicity. However, in this work we show that by applying the sensitivity criterion iteratively in smaller steps - still before training - we can…
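The iterative scheme the abstract alludes to is straightforward to sketch: instead of removing all weights at once, the network is pruned toward its target sparsity over several rounds, re-ranking the SNIP sensitivity statistics of the surviving weights between rounds. Below is a minimal PyTorch-style sketch of that loop; the function names, the geometric keep-fraction schedule, and the single-batch scoring are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def snip_scores(model, loss_fn, x, y, masks):
    """Connection sensitivity |grad * weight| for the surviving weights."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return {name: (p.grad * p.data).abs() * masks[name]
            for name, p in model.named_parameters() if name in masks}

def snip_it(model, loss_fn, x, y, target_sparsity=0.95, steps=5):
    """Prune to the target sparsity in several rounds, re-ranking the
    sensitivity statistics on the surviving weights after every round."""
    masks = {name: torch.ones_like(p)                 # weight tensors only
             for name, p in model.named_parameters() if p.dim() > 1}
    keep_final = 1.0 - target_sparsity
    for step in range(1, steps + 1):
        scores = snip_scores(model, loss_fn, x, y, masks)
        flat = torch.cat([s.flatten() for s in scores.values()])
        # Geometric schedule: shrink the keep fraction toward keep_final.
        k = max(1, int(keep_final ** (step / steps) * flat.numel()))
        threshold = torch.topk(flat, k).values.min()
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    masks[name] = (scores[name] >= threshold).float()
                    p.mul_(masks[name])   # next round ranks the pruned net
    return masks
```

The re-ranking between rounds is the point of the method: one-shot SNIP can assign low sensitivity to an entire layer and disconnect it, whereas with smaller steps a weight that becomes important once its neighbours are removed can still rise in the ranking.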


#### 8 Citations

ESPN: Extremely Sparse Pruned Networks

- Computer Science, Mathematics
- 2021 IEEE Data Science and Learning Workshop (DSLW)
- 2021

It is demonstrated that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks and outperform several existing pruning approaches in both test accuracy and compression ratio.

Sparse Training via Boosting Pruning Plasticity with Neuroregeneration

- Computer Science
- ArXiv
- 2021

A novel gradual magnitude pruning (GMP) method named gradual pruning with zero-cost neuroregeneration (GraNet) is designed, along with its dynamic sparse training (DST) variant (GraNet-ST); both advance the state of the art.
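For context, the "gradual" part of GMP-style methods is typically the cubic sparsity ramp of Zhu & Gupta (2017), which GraNet builds on before adding regeneration. A minimal sketch, with illustrative argument names:

```python
def gmp_sparsity(step, s_init=0.0, s_final=0.9, t_start=0, t_end=10000):
    """Cubic sparsity ramp: prune aggressively early, then taper off so
    the network can recover, reaching s_final exactly at t_end."""
    if step <= t_start:
        return s_init
    if step >= t_end:
        return s_final
    progress = (step - t_start) / (t_end - t_start)
    return s_final + (s_init - s_final) * (1.0 - progress) ** 3
```

Roughly speaking, GraNet pairs such a schedule with zero-cost neuroregeneration: after each magnitude-pruning step, an equal number of connections is revived based on gradient magnitude, so overall sparsity follows the schedule while the mask keeps adapting.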

Plant 'n' Seek: Can You Find the Winning Ticket?

- Computer Science, Mathematics
- ArXiv
- 2021

This work derives a framework to plant and hide target architectures within large, randomly initialized neural networks, and finds that the current limitations of pruning algorithms in identifying extremely sparse tickets are likely algorithmic rather than fundamental in nature.

RGP: Neural Network Pruning through Its Regular Graph Structure

- Computer Science
- ArXiv
- 2021

This paper proposes regular graph based pruning (RGP) to perform one-shot neural network pruning, and shows strong precision retention with extremely high parameter and FLOPs reduction.

COPS: Controlled Pruning Before Training Starts

- Computer Science
- IJCNN
- 2021

This work provides a framework for combining arbitrary GSSs to create more powerful pruning strategies; pruning with COPS is compared against state-of-the-art methods for different network architectures and image classification tasks, obtaining improved results.

Progressive Skeletonization: Trimming more fat from a network at initialization

- Computer Science
- ICLR
- 2021

This work progressively prunes connections of a given network at initialization, allowing parameters that were unimportant at earlier stages of skeletonization to become important at later stages, while keeping networks trainable and providing significantly better performance than recent approaches.

Pruning neural networks without any data by iteratively conserving synaptic flow

- Computer Science, Physics
- NeurIPS
- 2020

The data-agnostic pruning algorithm challenges the existing paradigm that, at initialization, data must be used to quantify which synapses are important, and consistently competes with or outperforms existing state-of-the-art pruning algorithms at initialization over a range of models, datasets, and sparsity constraints.
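The underlying score (synaptic flow) is computable with no data at all: conceptually, one replaces every weight by its absolute value, passes an all-ones input through the network, and scores each weight by |θ · ∂R/∂θ|, where R is the summed output. A rough PyTorch sketch under the assumption of a simple feed-forward model in eval mode; the actual algorithm applies this iteratively under an exponential sparsity schedule:

```python
import torch

def synflow_scores(model, input_shape):
    """Data-free saliency: |weight * d(summed output)/d(weight)| on a
    network whose weights are temporarily made non-negative."""
    signs = {}
    for name, p in model.named_parameters():
        signs[name] = torch.sign(p.data)
        p.data.abs_()                              # linearize the network
    model.zero_grad()
    R = model(torch.ones(1, *input_shape)).sum()   # all-ones "input"
    R.backward()
    scores = {name: (p.grad * p.data).abs()
              for name, p in model.named_parameters()}
    for name, p in model.named_parameters():
        p.data.mul_(signs[name])                   # restore original signs
    return scores
```

Iterating the score matters here too: the paper shows that one-shot use of any saliency can suffer from layer collapse at extreme sparsities, while iterative conservation of synaptic flow provably avoids it.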

Sparse Linear Networks with a Fixed Butterfly Structure: Theory and Practice

- Computer Science, Mathematics
- ArXiv
- 2020

The butterfly architecture used in this work can replace any dense linear operator with a gadget consisting of a sequence of logarithmically many sparse layers, containing a near-linear total number of weights, with little compromise in the expressibility of the resulting operator.
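To make the counting concrete: for n = 2^k, such a gadget is a product of log2(n) sparse factors, each mixing coordinate pairs at a fixed stride with 2x2 blocks, so every factor holds 2n nonzeros and the full stack about 2n·log2(n) weights instead of n^2. A hypothetical PyTorch sketch of one such construction (class names and initialization are illustrative):

```python
import torch
import torch.nn as nn

class ButterflyFactor(nn.Module):
    """One factor for dimension n = 2**k: coordinates i and i + stride
    inside each block of size 2 * stride are mixed by a 2x2 block, so
    the factor carries only 2 * n nonzero parameters."""
    def __init__(self, n, stride):
        super().__init__()
        self.n, self.stride = n, stride
        groups = n // (2 * stride)
        self.blocks = nn.Parameter(torch.randn(groups, stride, 2, 2) / 2 ** 0.5)

    def forward(self, x):                       # x: (batch, n)
        b, s = x.shape[0], self.stride
        x = x.view(b, -1, 2, s)                 # (batch, groups, 2, stride)
        # out[b, g, o, t] = sum_i blocks[g, t, o, i] * x[b, g, i, t]
        y = torch.einsum('gtoi,bgit->bgot', self.blocks, x)
        return y.reshape(b, self.n)

class Butterfly(nn.Module):
    """Product of log2(n) butterfly factors: ~2n*log2(n) parameters in
    place of the n**2 of a dense linear layer."""
    def __init__(self, n):
        super().__init__()
        assert n > 1 and n & (n - 1) == 0, "n must be a power of two"
        self.factors = nn.ModuleList(
            ButterflyFactor(n, 2 ** j) for j in range(n.bit_length() - 1))

    def forward(self, x):
        for factor in self.factors:
            x = factor(x)
        return x
```

For example, `Butterfly(256)` stacks 8 factors and about 4,096 parameters where a dense 256x256 matrix would need 65,536.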

#### References

Showing 1–10 of 95 references.

On Pruning Adversarially Robust Neural Networks

- Computer Science, Mathematics
- ArXiv
- 2020

It is shown that integrating existing pruning techniques with multiple types of robust training techniques, including verifiably robust training, leads to poor robust accuracy even though such techniques can preserve high regular accuracy.

The Search for Sparse, Robust Neural Networks

- Computer Science, Mathematics
- ArXiv
- 2019

An extensive empirical evaluation and analysis of the Lottery Ticket Hypothesis under adversarial training is performed, showing that this approach enables finding sparse, robust neural networks.

Towards Evaluating the Robustness of Neural Networks

- Computer Science
- 2017 IEEE Symposium on Security and Privacy (SP)
- 2017

It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.

SNIP: Single-shot Network Pruning based on Connection Sensitivity

- Computer Science
- ICLR
- 2019

This work presents a new approach that prunes a given network once at initialization prior to training, and introduces a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task.

The State of Sparsity in Deep Neural Networks

- Computer Science, Mathematics
- ArXiv
- 2019

It is shown that unstructured sparse architectures learned through pruning cannot be trained from scratch to the same test set performance as a model trained with joint sparsification and optimization, and the need for large-scale benchmarks in the field of model compression is highlighted.

A Signal Propagation Perspective for Pruning Neural Networks at Initialization

- Computer Science, Mathematics
- ICLR
- 2020

By viewing connection sensitivity as a form of gradient, this work formally characterizes initialization conditions that ensure reliable connection sensitivity measurements, which in turn yield effective pruning results; modifications to the existing pruning-at-initialization method lead to improved results on all tested network models for image classification tasks.

Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers

- Computer Science, Mathematics
- ICLR
- 2018

This paper proposes a channel pruning technique for accelerating the computations of deep convolutional neural networks (CNNs) that focuses on direct simplification of the channel-to-channel computation graph of a CNN, without the need to perform a computationally difficult and not-always-useful task.

Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models

- Computer Science, Mathematics
- ArXiv
- 2017

Foolbox is a new Python package that provides reference implementations of most published adversarial attack methods alongside some new ones, all of which perform internal hyperparameter tuning to find the minimum adversarial perturbation.

Pruning untrained neural networks: Principles and Analysis

- Computer Science
- ArXiv
- 2020

This paper provides a comprehensive theoretical analysis of pruning at initialization and training of sparse architectures, and proposes novel principled approaches which are validated experimentally on a variety of NN architectures.

Foolbox: A Python toolbox to benchmark the robustness of machine learning models

- 2017

Even today's most advanced machine learning models are easily fooled by almost imperceptible perturbations of their inputs. Foolbox is a new Python package to generate such adversarial perturbations…