Corpus ID: 211010897

Interpreting a Penalty as the Influence of a Bayesian Prior

@article{Wolinski2020InterpretingAP,
  title={Interpreting a Penalty as the Influence of a Bayesian Prior},
  author={Pierre Wolinski and Guillaume Charpiat and Yann Ollivier},
  journal={ArXiv},
  year={2020},
  volume={abs/2002.00178}
}
In machine learning, it is common to optimize the parameters of a probabilistic model, modulated by a somewhat ad hoc regularization term that penalizes some values of the parameters. Regularization terms appear naturally in Variational Inference (VI), a tractable way to approximate Bayesian posteriors: the loss to optimize contains a Kullback–Leibler divergence term between the approximate posterior and a Bayesian prior. We fully characterize which regularizers can arise this way, and provide…
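
For context, here is a minimal sketch of the VI loss the abstract refers to, in its standard negative-ELBO form; the symbols q_phi (approximate posterior over parameters theta), alpha (Bayesian prior), and D (data) are notational assumptions, not taken from the paper's text shown here:

  \mathcal{L}(\phi) \;=\; \mathbb{E}_{\theta \sim q_\phi}\bigl[ -\log p(\mathcal{D} \mid \theta) \bigr] \;+\; \mathrm{KL}\bigl( q_\phi \,\|\, \alpha \bigr)

The KL term is the part that behaves like a regularizer, so the question the abstract raises is which penalty terms can be written as KL(q_phi || alpha) for some prior alpha.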
