Rapid Risk Minimization with Bayesian Models Through Deep Learning Approximation

@article{Lowe2021RapidRM,
  title={Rapid Risk Minimization with Bayesian Models Through Deep Learning Approximation},
  author={Mathias L{\"o}we and Jes Frellsen and Per Lunnemann Hansen and Sebastian Risi},
  journal={2021 International Joint Conference on Neural Networks (IJCNN)},
  year={2021},
  pages={1-8}
}
We introduce a novel combination of Bayesian Models (BMs) and Neural Networks (NNs) for making predictions with a minimum expected risk. Our approach combines the best of both worlds: the data efficiency and interpretability of a BM with the speed of an NN. For a BM, making predictions with the lowest expected loss requires integrating over the posterior distribution. When exact inference of the posterior predictive distribution is intractable, approximation methods are typically applied, e.g…
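The abstract's central quantity, the minimum-risk prediction under the posterior predictive, can be made concrete with a short sketch. The following is a minimal illustration, not the paper's implementation: it forms the posterior predictive by Monte Carlo averaging over posterior samples and then selects the label with the lowest expected loss. predict_proba (per-sample class probabilities), loss_matrix (entries L(y, a)), and the dummy model in the usage lines are all hypothetical placeholders.

import numpy as np

def posterior_predictive(x, posterior_samples, predict_proba):
    # p(y | x, D) ≈ (1/S) Σ_s p(y | x, θ_s), averaged over S posterior draws.
    probs = np.stack([predict_proba(x, theta) for theta in posterior_samples])
    return probs.mean(axis=0)  # shape: (num_classes,)

def min_risk_label(x, posterior_samples, predict_proba, loss_matrix):
    # Bayes-optimal action: argmin over a of Σ_y L(y, a) · p(y | x, D).
    p_y = posterior_predictive(x, posterior_samples, predict_proba)
    expected_risk = loss_matrix.T @ p_y  # expected loss of each candidate label a
    return int(np.argmin(expected_risk))

# Usage with 3 classes, 0-1 loss, and a dummy two-sample "posterior":
samples = [np.array([0.2]), np.array([0.4])]        # posterior draws of θ
proba = lambda x, th: np.array([th[0], 0.5 - th[0] / 2, 0.5 - th[0] / 2])
loss01 = 1.0 - np.eye(3)                            # L(y, a) = 1 if y != a
print(min_risk_label(0.0, samples, proba, loss01))  # -> 1

Under 0-1 loss this reduces to taking the mode of the posterior predictive; asymmetric loss matrices can shift the chosen label. Every call repeats the Monte Carlo loop over posterior samples, which is exactly the cost that, per the title and abstract, the paper amortizes by training an NN to approximate the BM's minimum-risk output in a single forward pass.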
