Corpus ID: 201646137

DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures

@article{Yang2020DeepHoyerLS,
  title={DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures},
  author={Huanrui Yang and Wei Wen and Hai Li},
  journal={ArXiv},
  year={2020},
  volume={abs/1908.09979}
}
In the search for sparse and efficient neural network models, many previous works have investigated enforcing L1 or L0 regularization to encourage weight sparsity during training. The L0 regularizer measures parameter sparsity directly and is invariant to the scaling of parameter values, but it provides no useful gradients and therefore requires complex optimization techniques. The L1 regularizer is almost everywhere differentiable and can easily be optimized with gradient descent, yet it is not scale-invariant, so minimizing it shrinks all parameter magnitudes rather than purely promoting sparsity. This work proposes DeepHoyer, a set of sparsity-inducing regularizers based on the Hoyer measure (the ratio between the L1 and L2 norms of the parameters) that are both almost everywhere differentiable and scale-invariant, combining the advantages of the L1 and L0 regularizers.
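
As a rough illustration of the idea (not the authors' released code), an element-wise Hoyer-Square-style penalty, i.e. the squared ratio of the L1 and L2 norms, can be added to a standard PyTorch training loss as sketched below. The function name hoyer_square, the eps stabilizer, and the weight lambda_hs are illustrative assumptions, not names from the paper.

    import torch

    def hoyer_square(w: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
        # Squared ratio of L1 and L2 norms: (sum_i |w_i|)^2 / sum_i w_i^2.
        # Scale-invariant (rescaling w leaves the value unchanged) and
        # differentiable almost everywhere, so it can be minimized with SGD.
        return torch.sum(torch.abs(w)) ** 2 / (torch.sum(w * w) + eps)

    # Usage sketch: add the penalty over all weight matrices to the task loss.
    # model, task_loss and lambda_hs are placeholders for the user's own setup.
    # penalty = sum(hoyer_square(p) for p in model.parameters() if p.dim() > 1)
    # loss = task_loss + lambda_hs * penalty

A structured variant follows the same pattern by applying the ratio to per-group norms (e.g. per filter or channel) instead of individual weights.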
