Probabilistic machine learning and artificial intelligence

@article{Ghahramani2015ProbabilisticML,
  title={Probabilistic machine learning and artificial intelligence},
  author={Zoubin Ghahramani},
  journal={Nature},
  year={2015},
  volume={521},
  pages={452--459}
}
How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and…
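The abstract's core operation, representing and manipulating uncertainty about a model, reduces to Bayes' rule. As a minimal illustration (not taken from the paper; the example and all names are ours), a grid-based posterior update for a coin's bias:

```python
import numpy as np

# Grid-based Bayesian update for a coin's bias theta, after observing k heads
# in n flips. Uniform prior; likelihood proportional to theta^k (1-theta)^(n-k).
theta = np.linspace(0.0, 1.0, 101)           # candidate biases on a grid
prior = np.ones_like(theta) / theta.size     # uniform prior
k, n = 7, 10                                 # observed: 7 heads in 10 flips
likelihood = theta**k * (1.0 - theta) ** (n - k)
posterior = prior * likelihood
posterior /= posterior.sum()                 # Bayes' rule: normalise

print(theta[np.argmax(posterior)])           # posterior mode at k/n = 0.7
```

The same normalise-a-product pattern underlies every exact-inference routine the paper surveys; the harder problems differ only in that the sum over hypotheses is no longer tractable on a grid.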

Paper Mentions

Automating inference, learning, and design using probabilistic programming
TLDR
The aim of this paper is to propose a novel approach to inference called Automated Variational Inference for Probabilistic Programming, which allows programmers to specify a stochastic process using syntax that resembles modern programming languages.
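The variational machinery such systems automate can be sketched in a few lines. A minimal illustrative example (our own target, names, and step sizes, not the paper's algorithm): fit a Gaussian q(x) = N(m, s²) to a target density by stochastic ascent on the ELBO with the reparameterisation trick:

```python
import numpy as np

rng = np.random.default_rng(0)

def dlogp(x):
    # Gradient of the (unnormalised) target log-density, here N(2, 0.5^2).
    return -(x - 2.0) / 0.25

m, log_s = 0.0, 0.0   # variational parameters: mean and log std of q
lr = 0.01
for _ in range(5000):
    eps = rng.normal(size=64)
    x = m + np.exp(log_s) * eps                  # reparameterisation trick
    g = dlogp(x)
    m += lr * g.mean()                           # dELBO/dm
    log_s += lr * ((g * np.exp(log_s) * eps).mean() + 1.0)  # dELBO/dlog s (+1 from entropy)

print(m, np.exp(log_s))   # converge toward the target's mean 2 and std 0.5
```

A probabilistic-programming system derives the equivalent of `dlogp` automatically from the program text, which is what makes this kind of inference "automated".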
Optimization for Probabilistic Machine Learning
TLDR
This dissertation presents a convex relaxation technique, based on semidefinite optimization, for dealing with the hardness of the optimization involved in the inference of probabilistic models; the technique has general applicability to polynomial optimization problems.
Probabilistic learning of nonlinear dynamical systems using sequential Monte Carlo
TLDR
This tutorial will provide a self-contained introduction to one of the state-of-the-art methods—the particle Metropolis-Hastings algorithm—which has proven to offer a practical approximation to the problem of learning probabilistic nonlinear state-space models.
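The building block inside particle Metropolis-Hastings is a particle filter that returns an unbiased estimate of the likelihood of the data given the parameters. A minimal bootstrap-filter sketch for an illustrative nonlinear state-space model (the model and all names are ours, not the tutorial's):

```python
import numpy as np

def particle_loglik(y, theta, n_particles=500, rng=None):
    """Bootstrap particle filter estimate of log p(y | theta) for a toy
    nonlinear state-space model (illustrative):
        x_t = theta * x_{t-1} + cos(x_{t-1}) + v_t,  v_t ~ N(0, 1)
        y_t = x_t + e_t,                             e_t ~ N(0, 0.5^2)
    """
    rng = rng or np.random.default_rng(0)
    x = rng.normal(size=n_particles)          # particles from the initial prior
    loglik = 0.0
    for obs in y:
        x = theta * x + np.cos(x) + rng.normal(size=n_particles)  # propagate
        logw = -0.5 * np.log(2 * np.pi * 0.25) - 0.5 * ((obs - x) / 0.5) ** 2
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())        # running log-likelihood estimate
        # multinomial resampling
        x = x[rng.choice(n_particles, size=n_particles, p=w / w.sum())]
    return loglik
```

Inside particle Metropolis-Hastings, this noisy estimate simply replaces the exact (intractable) likelihood in the usual accept/reject ratio; remarkably, the resulting chain still targets the exact posterior.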
Bayesian Inference with Anchored Ensembles of Neural Networks, and Application to Reinforcement Learning
TLDR
This work proposes one minor modification to the normal ensembling methodology which, it is proved, allows the ensemble to perform Bayesian inference, converging to the corresponding Gaussian process as both the total number of NNs, and the size of each, tend to infinity.
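The anchoring idea can be illustrated on Bayesian linear regression, where everything is available in closed form. This is a sketch of the general idea only, not the paper's exact recipe: each ensemble member is a regularised fit pulled toward its own "anchor" weights drawn from the prior, and the members' average recovers the exact posterior mean (matching the full posterior spread needs the paper's scaled regulariser):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: intercept 0.5, slope -1.0, small observation noise.
X = np.column_stack([np.ones(20), np.linspace(-1, 1, 20)])
true_w = np.array([0.5, -1.0])
y = X @ true_w + 0.1 * rng.normal(size=20)

prior_var, noise_var = 1.0, 0.01
A = X.T @ X / noise_var + np.eye(2) / prior_var   # posterior precision

ensemble = []
for _ in range(50):
    w_anchor = rng.normal(scale=np.sqrt(prior_var), size=2)  # fresh prior draw
    # Each member solves a ridge-style problem anchored at w_anchor.
    w = np.linalg.solve(A, X.T @ y / noise_var + w_anchor / prior_var)
    ensemble.append(w)
ensemble = np.array(ensemble)

print(ensemble.mean(axis=0))   # close to the exact posterior mean
```

For neural networks the same anchored L2 penalty is applied to each member's weights, which is the "minor modification" the summary refers to.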
Edward: A library for probabilistic modeling, inference, and criticism
TLDR
Edward enables the development of complex probabilistic models and their algorithms at a massive scale and builds on top of TensorFlow to support distributed training and hardware such as GPUs.
From Machine Learning to Explainable AI
  • Andreas Holzinger
  • Computer Science
  • 2018 World Symposium on Digital Intelligence for Systems and Machines (DISA)
  • 2018
TLDR
The goal of explainable AI is linking probabilistic learning methods with large knowledge representations (ontologies) and logical approaches, thus making results re-traceable, explainable and comprehensible on demand.
Probabilistic Models with Deep Neural Networks
TLDR
An overview of the main concepts, methods, and tools needed to use deep neural networks within a probabilistic modeling framework is provided.
An active inference model of concept learning
TLDR
This paper articulates a novel, biologically plausible approach to concept learning based on active inference, and on the idea that a generative model can be equipped with extra ‘slots’ that can be engaged when an agent learns about novel concepts.
Bayesian Inference with Anchored Ensembles of Neural Networks, and Application to Exploration in Reinforcement Learning
TLDR
This work proposes one minor modification to the normal ensembling methodology which, it is proved, allows the ensemble to perform Bayesian inference, converging to the corresponding Gaussian process as both the total number of NNs, and the size of each, tend to infinity.
What does the mind learn? A comparison of human and machine learning representations
TLDR
It is suggested that continued applications of machine learning techniques will allow cognitive researchers to model the complex real-world problems where machine learning has recently been successful, providing more complete behavioural descriptions.

References

Showing 1–10 of 135 references
Bayesian Reinforcement Learning
  • P. Poupart
  • Computer Science
  • Encyclopedia of Machine Learning
  • 2010
TLDR
This chapter surveys recent lines of work that use Bayesian techniques for reinforcement learning by explicitly maintaining a distribution over various quantities such as the parameters of the model, the value function, the policy or its gradient.
Model-based machine learning
  • Charles M. Bishop
  • Computer Science, Medicine
  • Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
  • 2013
TLDR
It is shown how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and a large-scale commercial application of this framework involving tens of millions of users is outlined.
Bayesian learning for neural networks
TLDR
Bayesian Learning for Neural Networks shows that Bayesian methods allow complex neural network models to be used without fear of the "overfitting" that can occur with traditional neural network learning methods.
Machine learning - a probabilistic perspective
  • K. Murphy
  • Computer Science
  • Adaptive computation and machine learning series
  • 2012
TLDR
This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Artificial Intelligence: A Modern Approach
The long-anticipated revision of this #1 selling book offers the most comprehensive, state-of-the-art introduction to the theory and practice of artificial intelligence for modern applications.
Practical Bayesian Optimization of Machine Learning Algorithms
TLDR
This work describes new algorithms that take into account the variable cost of learning-algorithm experiments and can leverage multiple cores for parallel experimentation, and shows that the proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
How to Grow a Mind: Statistics, Structure, and Abstraction
TLDR
This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems.
Practical Probabilistic Programming
TLDR
A new probabilistic programming language named Figaro is presented that was designed with practicality and usability in mind and can naturally represent models that have been difficult to represent in other languages, such as probabilistic relational models and models with undirected relationships with arbitrary constraints.
Gaussian Processes for Machine Learning
TLDR
The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics, and deals with the supervised learning problem for both regression and classification.
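The book's central supervised-learning computation, GP regression, fits in a few lines of NumPy. A minimal sketch with an RBF kernel (the data, lengthscale, and noise level are illustrative, not the book's):

```python
import numpy as np

def rbf(a, b, ell=0.2):
    # Squared-exponential (RBF) kernel between two 1-D input arrays.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

rng = np.random.default_rng(1)
X = np.linspace(0, 1, 8)
y = np.sin(2 * np.pi * X) + 0.05 * rng.normal(size=8)   # noisy training data
Xs = np.linspace(0, 1, 50)                               # test inputs

K = rbf(X, X) + 0.05**2 * np.eye(8)      # kernel matrix + observation noise
Ks = rbf(Xs, X)
mean = Ks @ np.linalg.solve(K, y)                        # posterior mean
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)        # posterior covariance
std = np.sqrt(np.clip(np.diag(cov), 0, None))            # pointwise uncertainty
```

The posterior mean and covariance here are exactly the standard GP regression equations; everything else in the supervised-learning treatment (hyperparameter learning, classification) builds on this computation.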
Probabilistic Inference Using Markov Chain Monte Carlo Methods
TLDR
The role of probabilistic inference in artificial intelligence is outlined, the theory of Markov chains is presented, and various Markov chain Monte Carlo algorithms are described, along with a number of supporting techniques.
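The simplest of the described algorithms, random-walk Metropolis, needs only an unnormalised log-density. A minimal illustrative sketch (names and tuning are ours):

```python
import numpy as np

def metropolis(logp, x0, n_steps, step=2.0, seed=0):
    """Random-walk Metropolis sampler for a 1-D target given its
    unnormalised log-density logp."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logp(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.normal()              # symmetric proposal
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept w.p. min(1, ratio)
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

# Target: standard normal; only the unnormalised log-density is needed.
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
print(samples[5000:].mean(), samples[5000:].std())   # ≈ 0 and ≈ 1
```

Because the acceptance ratio uses only a difference of log-densities, the normalising constant of the target cancels, which is what makes MCMC applicable to the intractable posteriors discussed throughout.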