Corpus ID: 208637294

Deep Ensembles: A Loss Landscape Perspective

@article{Fort2019DeepEA,
  title={Deep Ensembles: A Loss Landscape Perspective},
  author={Stanislav Fort and Huiyi Hu and Balaji Lakshminarayanan},
  journal={ArXiv},
  year={2019},
  volume={abs/1912.02757}
}
Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models. While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in practice, which suggests that there could be other explanations for why deep ensembles work well. Bayesian neural networks, which learn distributions over the parameters of the…
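
As a quick illustration of the setup the abstract describes (and not code from the paper itself), here is a minimal PyTorch sketch of a deep ensemble: several members share the same architecture and the same training data, differ only in their random initialization, and the ensemble prediction is the average of the members' predicted probabilities. The architecture, toy data, and hyperparameters below are invented for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

def make_member(seed: int) -> nn.Module:
    # Each ensemble member gets its own random initialization via a distinct seed.
    torch.manual_seed(seed)
    return nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))

def train(model: nn.Module, x: torch.Tensor, y: torch.Tensor, steps: int = 200) -> nn.Module:
    # Standard training; every member sees the same data (no bootstrap resampling).
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    return model

def ensemble_predict(members, x: torch.Tensor) -> torch.Tensor:
    # Average member probabilities (not logits): the usual deep-ensemble prediction.
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(x), dim=-1) for m in members])
    return probs.mean(dim=0)

if __name__ == "__main__":
    # Toy two-class problem for demonstration only.
    torch.manual_seed(0)
    x = torch.randn(512, 2)
    y = (x[:, 0] + x[:, 1] > 0).long()
    members = [train(make_member(seed), x, y) for seed in range(5)]
    print(ensemble_predict(members, x[:4]))

Because the members start from different random initializations, they typically converge to different modes of the loss landscape, which is the source of diversity the paper investigates.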
84 Citations
  • Accurate and Reliable Forecasting using Stochastic Differential Equations
  • Automated Cleanup of the ImageNet Dataset by Model Consensus, Explainability and Confident Learning
  • Learning Neural Network Subspaces
  • LiBRe: A Practical Bayesian Approach to Adversarial Detection
  • BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning
  • Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win
  • Accuracy-Privacy Trade-off in Deep Ensemble
  • Clinical Validation of Saliency Maps for Understanding Deep Neural Networks in Ophthalmology
  • Deep Ensembles for Low-Data Transfer Learning
  • One Versus all for deep Neural Network Incertitude (OVNNI) quantification

References

SHOWING 1-10 OF 38 REFERENCES
  • ImageNet: A large-scale hierarchical image database
  • Large Scale Structure of Neural Network Loss Landscapes
  • Cyclical Stochastic Gradient MCMC for Bayesian Deep Learning
  • Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision
  • A Simple Baseline for Bayesian Uncertainty in Deep Learning
  • Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
  • Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift (2019)
  • The Goldilocks zone: Towards better understanding of neural network loss landscapes
  • Averaging Weights Leads to Wider Optima and Better Generalization