Corpus ID: 235377065

Neural Ensemble Search for Uncertainty Estimation and Dataset Shift

@inproceedings{Zaidi2021NeuralES,
  title={Neural Ensemble Search for Uncertainty Estimation and Dataset Shift},
  author={Sheheryar Zaidi and Arber Zela and Thomas Elsken and Chris C. Holmes and Frank Hutter and Yee Whye Teh},
  booktitle={NeurIPS},
  year={2021}
}
Ensembles of neural networks achieve superior performance compared to standalone networks in terms of accuracy, uncertainty calibration and robustness to dataset shift. Deep ensembles, a state-of-the-art method for uncertainty estimation, only ensemble random initializations of a fixed architecture. Instead, we propose two methods for automatically constructing ensembles with varying architectures, which implicitly trade off individual architectures' strengths against the ensemble's diversity…
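
To ground the comparison: a deep ensemble simply averages the softmax outputs of independently trained networks, and the entropy of the averaged prediction is a common uncertainty score. A minimal sketch (not the authors' code; `member_probs` is a hypothetical list of per-model softmax outputs):

```python
# Minimal deep-ensemble sketch: average member softmax outputs and use
# predictive entropy as an uncertainty score. Each entry of `member_probs`
# is assumed to have shape (num_examples, num_classes).
import numpy as np

def ensemble_predict(member_probs):
    probs = np.mean(np.stack(member_probs, axis=0), axis=0)   # (N, C) averaged prediction
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)  # per-example uncertainty
    return probs.argmax(axis=1), entropy

# Hypothetical outputs of three ensemble members on two examples.
member_probs = [
    np.array([[0.7, 0.2, 0.1], [0.4, 0.3, 0.3]]),
    np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]]),
    np.array([[0.8, 0.1, 0.1], [0.3, 0.3, 0.4]]),
]
preds, uncertainty = ensemble_predict(member_probs)
```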
AutoDEUQ: Automated Deep Ensemble with Uncertainty Quantification
TLDR
The law of total variance is used to decompose the predictive variance of deep ensembles into aleatoric and epistemic uncertainties, and it is shown that AutoDEUQ outperforms probabilistic backpropagation, Monte Carlo dropout, deep ensemble, distribution-free ensemble, and hyper ensemble methods on a number of regression benchmarks.
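
The decomposition referenced here is the law of total variance: when each ensemble member predicts a mean and a variance, the total predictive variance splits into the mean of the member variances (aleatoric) plus the variance of the member means (epistemic). A minimal NumPy sketch with hypothetical member outputs for a single input:

```python
# Law-of-total-variance decomposition for a regression ensemble (sketch):
# total variance = E_i[sigma_i^2] (aleatoric) + Var_i[mu_i] (epistemic).
import numpy as np

mu = np.array([1.9, 2.1, 2.4])         # hypothetical member means
sigma2 = np.array([0.30, 0.25, 0.35])  # hypothetical member variances

aleatoric = sigma2.mean()   # average data noise predicted by the members
epistemic = mu.var()        # disagreement between member means
total_variance = aleatoric + epistemic
```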
Evolutionary Neural Cascade Search across Supernetworks
TLDR
ENCAS can be used to search over multiple pretrained supernetworks to achieve a trade-off front of cascades of different neural network architectures, maximizing accuracy while minimizing FLOPs count and leading to Pareto dominance in all computation regimes.
Simple Regularisation for Uncertainty-Aware Knowledge Distillation
TLDR
This work examines a simple regularisation approach for distribution-free knowledge distillation of an ensemble of machine learning models into a single NN that preserves the diversity, accuracy and uncertainty estimation characteristics of the original ensemble without any intricacies, such as fine-tuning.
No One Representation to Rule Them All: Overlapping Features of Training Methods
TLDR
A large-scale empirical study of models across hyper-parameters, architectures, frameworks, and datasets finds that model pairs that diverge more in training methodology display categorically different generalization behavior, producing increasingly uncorrelated errors.
A Taxonomy of Error Sources in HPC I/O Machine Learning Models
TLDR
This work analyzes multiple years of application, scheduler, and storage system logs on two leadership-class HPC platforms to understand why I/O models underperform in practice, and proposes a taxonomy consisting of five categories of I/O modeling errors: poor application and system modeling, inadequate dataset coverage, I/O contention, and I/O noise.
Deep-Ensemble-Based Uncertainty Quantification in Spatiotemporal Graph Neural Networks for Traffic Forecasting
TLDR
This work develops a scalable deep ensemble approach to quantify uncertainties for DCRNN, a state-of-the-art method for short-term traffic forecasting, and shows that it outperforms the current state-of-the-art Bayesian technique and a number of other commonly used frequentist techniques.
UncertaINR: Uncertainty Quantification of End-to-End Implicit Neural Representations for Computed Tomography
TLDR
This work studies UncertaINR, a Bayesian reformulation of INR-based image reconstruction for computed tomography (CT), and finds that it achieves well-calibrated uncertainty while retaining accuracy competitive with other classical, INR-based, and CNN-based reconstruction techniques.
Deep Ensembles Work, But Are They Necessary?
TLDR
While deep ensembles are a practical way to achieve performance improvement, the results show that they may be a tool of convenience rather than a fundamentally better model class.
Meta-Learning to Perform Bayesian Inference in a Single Forward Propagation
TLDR
It is demonstrated that PFNs can near-perfectly mimic Gaussian processes and also enable efficient Bayesian inference for intractable problems, with over 200-fold speedups in multiple setups compared to current methods.
Uncertainty Quantification in End-to-End Implicit Neural Representations for Medical Imaging
TLDR
This work proposes the first uncertainty-aware, end-to-end INR architecture for computed tomography (CT) image reconstruction, and indicates that, with adequate tuning, Hamiltonian Monte Carlo may outperform Monte Carlo dropout and deep ensembles.
...
...

References

SHOWING 1-10 OF 69 REFERENCES
NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search
TLDR
This work proposes NAS-Bench-201, an extension of NAS-Bench-101 with a different search space, results on multiple datasets, and additional diagnostic information such as fine-grained loss and accuracy, which can inspire new designs of NAS algorithms.
Regularized Evolution for Image Classifier Architecture Search
TLDR
This work evolves an image classifier---AmoebaNet-A---that surpasses hand-designs for the first time and gives evidence that evolution can obtain results faster with the same hardware, especially at the earlier stages of the search.
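
Regularized (aging) evolution, introduced in this reference and reused by evolutionary NAS and ensemble-search methods, repeatedly samples a tournament, mutates its fittest member, adds the child, and discards the oldest individual. A hedged sketch in which `random_architecture`, `mutate`, and `evaluate` are placeholders for a concrete search space:

```python
# Sketch of regularized (aging) evolution; the three callables are
# assumptions standing in for a real architecture search space.
import collections
import random

def regularized_evolution(random_architecture, mutate, evaluate,
                          population_size=50, sample_size=10, cycles=200):
    population = collections.deque()
    history = []
    # Seed the population with random architectures.
    for _ in range(population_size):
        arch = random_architecture()
        population.append((arch, evaluate(arch)))
        history.append(population[-1])
    # Tournament selection + mutation; remove the *oldest* member each cycle.
    for _ in range(cycles):
        tournament = random.sample(list(population), sample_size)
        parent_arch, _ = max(tournament, key=lambda item: item[1])
        child = mutate(parent_arch)
        population.append((child, evaluate(child)))
        history.append(population[-1])
        population.popleft()  # aging: discard the oldest member, not the worst
    return max(history, key=lambda item: item[1])
```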
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
TLDR
Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits.
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
TLDR
This paper standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications, and proposes a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations.
A Downsampled Variant of ImageNet as an Alternative to the CIFAR datasets
TLDR
This work proposes a downsampled version of ImageNet that contains exactly the same number of classes and images as ImageNet, with the only difference that the images are downsampled to 32×32 pixels.
DARTS: Differentiable Architecture Search
TLDR
The proposed algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques.
Bayesian Hyperparameter Optimization for Ensemble Learning
TLDR
This paper bridges the gap between hyperparameter optimization and ensemble learning by performing Bayesian optimization of an ensemble with respect to its hyperparameters, showing that this approach is better than both the best single model and a greedy ensemble construction over the models produced by standard Bayesian optimization.
Ensemble selection from libraries of models
TLDR
A method is presented for constructing ensembles from libraries of thousands of models using forward stepwise selection, optimizing a chosen performance metric such as accuracy, cross entropy, mean precision, or ROC area.
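
Concretely, forward stepwise selection greedily adds, with replacement, whichever library model most improves the ensemble's validation score. A rough sketch, assuming `val_probs` holds each candidate's validation softmax outputs and `score_fn` is the metric being optimized:

```python
# Sketch of forward stepwise ensemble selection from a model library.
import numpy as np

def forward_select(val_probs, val_labels, ensemble_size, score_fn):
    chosen = []        # indices of selected members (with replacement)
    member_stack = []  # their validation predictions
    for _ in range(ensemble_size):
        best_idx, best_score = None, -np.inf
        for idx, probs in enumerate(val_probs):
            avg = np.mean(np.stack(member_stack + [probs]), axis=0)
            score = score_fn(avg, val_labels)
            if score > best_score:
                best_idx, best_score = idx, score
        chosen.append(best_idx)
        member_stack.append(val_probs[best_idx])
    return chosen

def accuracy(avg_probs, labels):
    return float(np.mean(avg_probs.argmax(axis=1) == labels))
```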
Deep Ensembles: A Loss Landscape Perspective. ArXiv, 2019
Automated Machine Learning: Methods, Systems, Challenges. Springer, 2018
...
...