PEP: Parameter Ensembling by Perturbation
@article{Mehrtash2020PEPPE,
  title   = {PEP: Parameter Ensembling by Perturbation},
  author  = {Alireza Mehrtash and P. Abolmaesumi and P. Golland and T. Kapur and D. Wassermann and W. M. Wells},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2010.12721}
}
Ensembling is now recognized as an effective approach for increasing the predictive performance and calibration of deep networks. We introduce a new approach, Parameter Ensembling by Perturbation (PEP), which constructs an ensemble of parameter values as random Gaussian perturbations of the optimal parameter set found by training, governed by a single variance parameter. The variance is chosen to maximize the log-likelihood of the ensemble average ($\mathbb{L}$) on the validation data set. Empirically…
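The abstract describes the core recipe concretely enough to sketch: perturb the trained parameters with isotropic Gaussian noise of a single scale sigma, average the ensemble members' predictive distributions, and choose sigma to maximize validation log-likelihood. Below is a minimal PyTorch sketch under those assumptions; `pep_log_likelihood`, `tune_sigma`, and the `model`/`val_loader` names are illustrative rather than the authors' code, and a simple grid search stands in for whatever one-dimensional optimizer the paper actually uses.

```python
import copy
import torch
import torch.nn.functional as F

@torch.no_grad()
def pep_log_likelihood(model, val_loader, sigma, n_members=10, device="cpu"):
    """Mean validation log-likelihood of the PEP ensemble average for one sigma.

    Assumes `model` is a trained classifier on `device` and `val_loader`
    iterates (inputs, labels) in a fixed order (no shuffling), so member
    predictions stay aligned across passes.
    """
    model.eval()
    base_state = copy.deepcopy(model.state_dict())  # theta*, kept for restoring

    probs_sum, labels = None, None
    for _ in range(n_members):
        # Draw one ensemble member: theta_i = theta* + N(0, sigma^2 I).
        # Integer buffers (e.g., BatchNorm's num_batches_tracked) are left as-is.
        perturbed = {
            k: v + sigma * torch.randn_like(v) if v.is_floating_point() else v
            for k, v in base_state.items()
        }
        model.load_state_dict(perturbed)

        member_probs, member_labels = [], []
        for x, y in val_loader:
            member_probs.append(F.softmax(model(x.to(device)), dim=1).cpu())
            member_labels.append(y)
        member_probs = torch.cat(member_probs)
        probs_sum = member_probs if probs_sum is None else probs_sum + member_probs
        labels = torch.cat(member_labels)

    model.load_state_dict(base_state)   # restore the trained weights
    mean_probs = probs_sum / n_members  # ensemble-average predictive distribution
    ll = torch.log(mean_probs[torch.arange(len(labels)), labels] + 1e-12)
    return ll.mean().item()

def tune_sigma(model, val_loader, sigmas, **kwargs):
    """Pick the single variance parameter that maximizes validation L."""
    return max(sigmas, key=lambda s: pep_log_likelihood(model, val_loader, s, **kwargs))
```

A typical call under these assumptions would be `tune_sigma(model, val_loader, sigmas=[1e-4, 3e-4, 1e-3, 3e-3, 1e-2])`; once the best sigma is found, the same perturb-and-average loop yields the ensemble predictions at test time.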