Latent Discriminant deterministic Uncertainty
@article{Franchi2022LatentDD,
  title   = {Latent Discriminant deterministic Uncertainty},
  author  = {Gianni Franchi and Xuanlong Yu and Andrei Bursuc and Emanuel Aldea and S{\'e}verine Dubuisson and David Filliat},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2207.10130}
}
Predictive uncertainty estimation is essential for deploying Deep Neural Networks in real-world autonomous systems, yet most successful approaches are computationally intensive. Recently proposed Deterministic Uncertainty Methods (DUM) can only partially meet these requirements, as their scalability to complex computer vision tasks is not obvious. In this work, we address these challenges in the context of autonomous driving perception tasks and advance a scalable and…
2 Citations
Window-Based Early-Exit Cascades for Uncertainty Estimation: When Deep Ensembles are More Efficient than Single Models
2023 · ArXiv
Experiments on ImageNet-scale data across a number of network architectures and uncertainty tasks show that the proposed window-based early-exit approach is able to achieve a superior uncertainty-computation trade-off compared to scaling single models.
Generative Transformer for Accurate and Reliable Salient Object Detection
2021
This paper conducts extensive research on exploiting the contributions of transformers for accurate and reliable salient object detection, and presents a latent variable model, namely the inferential generative adversarial network (iGAN), based on the generative adversarial network (GAN).
83 References
Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision
2020 · IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
This work proposes a comprehensive evaluation framework for scalable epistemic uncertainty estimation methods in deep learning and applies this framework to provide the first properly extensive and conclusive comparison of the two current state-of-the-art scalable methods: ensembling and MC-dropout.
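As a rough illustration of the MC-dropout technique compared above, the sketch below keeps dropout active at test time and aggregates several stochastic forward passes into a predictive mean and an epistemic variance estimate. The toy two-layer network and its weights are hypothetical, purely for illustration; they are not from any of the papers listed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-hidden-layer regression network (hypothetical weights).
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(4, 1))

def forward(x, drop_p=0.5):
    """One stochastic forward pass; dropout stays ON at test time for MC-dropout."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p      # sample a fresh dropout mask
    h = h * mask / (1.0 - drop_p)            # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, T=100):
    """T stochastic passes -> predictive mean and epistemic variance per output."""
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.var(axis=0)

x = rng.normal(size=(1, 8))
mean, var = mc_dropout_predict(x)
```

The computational cost this comparison highlights is visible here: one prediction requires T forward passes instead of one, which is exactly the overhead that deterministic (single-pass) uncertainty methods try to avoid.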
Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate
2022 · AAAI
Provides a systematic exploration of training-free uncertainty estimation for dense regression, an underexplored yet important problem, together with a theoretical construction justifying such estimates, which match or exceed the uncertainty estimation quality of training-required state-of-the-art methods.
StyleLess layer: Improving robustness for real-world driving
2021 · IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
This work proposes a novel type of layer, dubbed StyleLess, which enables DNNs to learn robust and informative features that can cope with varying external conditions, and proposes multiple variations of this layer that can be integrated in most of the architectures and trained jointly with the main task.
MUAD: Multiple Uncertainties for Autonomous Driving benchmark for multiple uncertainty types and tasks
2022 · BMVC
The MUAD dataset (Multiple Uncertainties for Autonomous Driving) consists of 10,413 realistic synthetic images with diverse adverse weather conditions; it allows a better assessment of the impact of different sources of uncertainty on model performance and is released so that researchers can methodically benchmark their algorithms in adverse conditions.
Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding
2017 · BMVC
A practical system which is able to predict pixel-wise class labels with a measure of model uncertainty, and shows that modelling uncertainty improves segmentation performance by 2-3% across a number of state of the art architectures such as SegNet, FCN and Dilation Network, with no additional parametrisation.
SLURP: Side Learning Uncertainty for Regression Problems
2021 · BMVC
SLURP is proposed, a generic approach for regression uncertainty estimation via a side learner that exploits the output and the intermediate representations generated by the main task model and has a low computational cost with respect to existing solutions.
Improving Deterministic Uncertainty Estimation in Deep Learning for Classification and Regression
2021 · ArXiv
This approach combines a bi-Lipschitz feature extractor with an inducing point approximate Gaussian process, offering robust and principled uncertainty estimation, and provides uncertainty estimates that outperform previous single forward pass uncertainty models.
Invertible Residual Networks
2019 · ICML
The empirical evaluation shows that invertible ResNets perform competitively with both state-of-the-art image classifiers and flow-based generative models, something that has not been previously achieved with a single architecture.
Uncertainty Estimates and Multi-hypotheses Networks for Optical Flow
2018 · ECCV
A new network architecture and loss function is introduced that enforce complementary hypotheses and provide uncertainty estimates efficiently with a single forward pass and without the need for sampling or ensembles.
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
2017 · NIPS
A Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty is presented, which makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.
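The input-dependent aleatoric loss that makes training robust to noisy data, as summarized above, is the standard heteroscedastic regression objective from that line of work: the network predicts both a mean and a log-variance, and residuals are down-weighted where the predicted noise is high. A minimal NumPy sketch (toy values, assuming this standard formulation):

```python
import numpy as np

def heteroscedastic_loss(y, mu, log_var):
    """Heteroscedastic regression loss: 0.5 * exp(-s) * (y - mu)^2 + 0.5 * s,
    with s = log sigma^2 predicted per sample. Large predicted noise shrinks a
    residual's weight, while the 0.5 * s term penalizes claiming noise everywhere."""
    return np.mean(0.5 * np.exp(-log_var) * (y - mu) ** 2 + 0.5 * log_var)

y = np.array([1.0, 2.0, 3.0])
mu = np.array([1.1, 1.9, 3.2])

# With log_var = 0 (sigma = 1) this reduces to half the mean squared error.
loss_low_noise = heteroscedastic_loss(y, mu, np.zeros(3))
# Raising the predicted noise on the third (worst-fit) point reweights its residual.
loss_mixed = heteroscedastic_loss(y, mu, np.array([0.0, 0.0, 2.0]))
```

Epistemic uncertainty is then captured separately (e.g. via MC-dropout over the same network), which is how the framework combines the two uncertainty types.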