Probabilistic machine learning and artificial intelligence

@article{Ghahramani2015ProbabilisticML,
  title={Probabilistic machine learning and artificial intelligence},
  author={Zoubin Ghahramani},
  journal={Nature},
  year={2015},
  volume={521},
  pages={452-459}
}
How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and… 
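As a concrete illustration of the probabilistic framework the abstract describes, the simplest possible case is a conjugate Beta-Binomial update: a prior distribution over an unknown quantity is combined with observed data to give a posterior. The coin-flip setting and all names below are illustrative, not from the paper:

```python
# Conjugate Beta-Binomial update: representing and manipulating
# uncertainty about an unknown coin bias with a probability distribution.
alpha_prior, beta_prior = 1.0, 1.0   # Beta(1, 1) = uniform prior over the bias
heads, tails = 7, 3                  # observed data

# Conjugacy: the posterior is Beta(alpha + heads, beta + tails).
post_a = alpha_prior + heads
post_b = beta_prior + tails

# Posterior mean and variance summarise the remaining uncertainty.
posterior_mean = post_a / (post_a + post_b)
posterior_var = (post_a * post_b) / ((post_a + post_b) ** 2 * (post_a + post_b + 1))
```

More data sharpens the posterior: the variance shrinks as counts grow, which is the sense in which the machine "learns from experience".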

Automating inference, learning, and design using probabilistic programming

TLDR
The aim of this paper is to propose a novel approach to inference called Automated Variational Inference for Probabilistic Programming, which allows programmers to specify a stochastic process using syntax that resembles modern programming languages.
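The paper's automated variational method is not reproduced here, but the underlying idea of probabilistic programming, writing a generative process in ordinary language syntax and then inferring its latent variables, can be sketched with simple likelihood weighting instead (a generic substitute for the paper's variational inference; the toy model and all names are illustrative):

```python
import math
import random

def model():
    # Generative program written as ordinary code: draw a latent rate.
    return random.gauss(0.0, 1.0)          # prior: rate ~ N(0, 1)

def loglik(rate, obs, noise=0.5):
    # Observation model: obs ~ N(rate, noise^2)
    return -0.5 * ((obs - rate) / noise) ** 2 - math.log(noise * math.sqrt(2 * math.pi))

def infer(obs, n=20000, seed=0):
    # Likelihood weighting: run the program many times from the prior,
    # weight each run by how well it explains the observation.
    random.seed(seed)
    samples = [model() for _ in range(n)]
    weights = [math.exp(loglik(s, obs)) for s in samples]
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, samples)) / total

posterior_mean = infer(obs=1.0)
```

For this conjugate toy model the exact posterior mean is `obs / (1 + noise**2) = 0.8`, so the estimate can be checked against a closed form.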

Learning and Optimizing Probabilistic Models for Planning Under Uncertainty

TLDR
This work presents a method that employs the Bayesian Optimization (BO) framework to learn MDPs autonomously from a set of execution traces, optimizing the expected value and performance in simulations over a set of tasks the underlying system is expected to perform.

Optimization for Probabilistic Machine Learning

TLDR
This dissertation presents a convex relaxation technique for dealing with the hardness of the optimization involved in the inference of probabilistic models, based on semidefinite optimization with general applicability to polynomial optimization problems.

Bayesian Inference with Anchored Ensembles of Neural Networks, and Application to Reinforcement Learning

TLDR
This work proposes one minor modification to the normal ensembling methodology, which it is proved allows the ensemble to perform Bayesian inference, hence converging to the corresponding Gaussian Process as both the total number of NNs, and the size of each, tend to infinity.
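The anchoring trick is easiest to see in a linear model, where the posterior is available in closed form: each ensemble member is regularized toward an independent draw from the prior, and the fitted members collectively behave like posterior samples. A sketch under these simplifying assumptions (linear regression rather than neural networks; all names and settings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Data from y = 2x + noise.
n, sigma, sigma_p = 50, 0.3, 1.0
X = rng.uniform(-1, 1, size=(n, 1))
y = 2.0 * X[:, 0] + rng.normal(0, sigma, size=n)

def anchored_fit(X, y, w_anchor):
    # Minimise ||y - Xw||^2 / sigma^2 + ||w - w_anchor||^2 / sigma_p^2
    # (a ridge penalty anchored at w_anchor instead of at zero).
    A = X.T @ X / sigma**2 + np.eye(X.shape[1]) / sigma_p**2
    b = X.T @ y / sigma**2 + w_anchor / sigma_p**2
    return np.linalg.solve(A, b)

# Each ensemble member is anchored at an independent prior draw,
# so the spread of the fitted members reflects posterior uncertainty.
anchors = rng.normal(0, sigma_p, size=(200, 1))
ensemble = np.array([anchored_fit(X, y, a) for a in anchors])

post_mean = ensemble.mean()
post_std = ensemble.std()
```

With plenty of data the ensemble concentrates near the true slope, mirroring the paper's claim that anchored members converge to the Bayesian posterior.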

Edward: A library for probabilistic modeling, inference, and criticism

TLDR
Edward enables the development of complex probabilistic models and their algorithms at a massive scale and builds on top of TensorFlow to support distributed training and hardware such as GPUs.

Probabilistic Models with Deep Neural Networks

TLDR
An overview of the main concepts, methods, and tools needed to use deep neural networks within a probabilistic modeling framework is provided.

From Machine Learning to Explainable AI

  • Andreas Holzinger
  • Computer Science
    2018 World Symposium on Digital Intelligence for Systems and Machines (DISA)
  • 2018
TLDR
The goal of explainable AI is linking probabilistic learning methods with large knowledge representations (ontologies) and logical approaches, thus making results re-traceable, explainable and comprehensible on demand.

An active inference model of concept learning

TLDR
This paper articulates a novel, biologically plausible approach to concept learning based on active inference, and on the idea that a generative model can be equipped with extra ‘slots’ that can be engaged when an agent learns about novel concepts.

What does the mind learn? A comparison of human and machine learning representations

...

References

SHOWING 1-10 OF 116 REFERENCES

Bayesian Learning for Neural Networks

TLDR
Bayesian Learning for Neural Networks shows that Bayesian methods allow complex neural network models to be used without fear of the "overfitting" that can occur with traditional neural network learning methods.

Bayesian Reinforcement Learning

  • P. Poupart
  • Computer Science
    Encyclopedia of Machine Learning
  • 2010
TLDR
This chapter surveys recent lines of work that use Bayesian techniques for reinforcement learning by explicitly maintaining a distribution over various quantities such as the parameters of the model, the value function, the policy or its gradient.
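The simplest instance of the Bayesian RL idea surveyed here, explicitly maintaining a distribution over unknown quantities and acting on it, is Thompson sampling on a Bernoulli bandit (a toy special case rather than a full MDP; the arm probabilities and names below are illustrative):

```python
import random

# Thompson sampling: keep a Beta posterior over each arm's reward
# probability, and pick the arm whose posterior sample is largest.
random.seed(0)
true_probs = [0.3, 0.7]          # unknown to the agent
alpha = [1.0, 1.0]               # Beta posterior parameters per arm
beta = [1.0, 1.0]
pulls = [0, 0]

for _ in range(2000):
    # Sample a plausible reward rate for each arm from its posterior.
    samples = [random.betavariate(alpha[i], beta[i]) for i in range(2)]
    arm = samples.index(max(samples))
    reward = 1 if random.random() < true_probs[arm] else 0
    # Conjugate update of the chosen arm's posterior.
    alpha[arm] += reward
    beta[arm] += 1 - reward
    pulls[arm] += 1
```

Because exploration is driven by posterior uncertainty, play concentrates on the better arm as its posterior sharpens.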

Model-based machine learning

  • Charles M. Bishop
  • Computer Science
    Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
  • 2013
TLDR
It is shown how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and a large-scale commercial application of this framework involving tens of millions of users is outlined.

Machine learning - a probabilistic perspective

  • K. Murphy
  • Computer Science
    Adaptive computation and machine learning series
  • 2012
TLDR
This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.

Artificial Intelligence: A Modern Approach

The long-anticipated revision of this #1 selling book offers the most comprehensive, state-of-the-art introduction to the theory and practice of artificial intelligence for modern applications.

Practical Bayesian Optimization of Machine Learning Algorithms

TLDR
This work describes new algorithms that take into account the variable cost of learning algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation and shows that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
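The paper's cost-aware and parallel algorithms are not reproduced here, but the basic Bayesian optimization loop they build on, a Gaussian-process surrogate plus an expected-improvement acquisition, can be sketched as follows (hand-rolled GP, toy 1D objective on a grid; all details are illustrative assumptions, not the paper's implementation):

```python
import math
import numpy as np

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel between two 1D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_obs, y_obs, x_grid, noise=1e-6):
    # Standard GP regression equations (zero prior mean, jittered Cholesky).
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_obs, x_grid)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)       # diag of RBF prior is 1
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sd, best):
    # EI acquisition for minimisation.
    z = (best - mu) / sd
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + np.vectorize(math.erf)(z / math.sqrt(2)))
    return (best - mu) * Phi + sd * phi

f = lambda x: (x - 0.7) ** 2                 # objective, minimum at 0.7
grid = np.linspace(0, 1, 200)
x_obs = np.array([0.1, 0.9])
y_obs = f(x_obs)

for _ in range(10):
    mu, sd = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y_obs.min()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, f(x_next))

best_x = x_obs[np.argmin(y_obs)]
```

Each iteration spends one (possibly expensive) objective evaluation where the surrogate predicts the greatest expected improvement, which is the economy the paper's extensions build on.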

How to Grow a Mind: Statistics, Structure, and Abstraction

TLDR
This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems.

Practical Probabilistic Programming

TLDR
A new probabilistic programming language named Figaro is presented that was designed with practicality and usability in mind and can naturally represent models that have been difficult to represent in other languages, such as probabilistic relational models and models with undirected relationships with arbitrary constraints.

Towards common-sense reasoning via conditional simulation: legacies of Turing in Artificial Intelligence

TLDR
This work describes a computational formalism centered around a probabilistic Turing machine called QUERY, which captures the operation of probabilistic conditioning via conditional simulation, and demonstrates how the QUERY abstraction can be used to cast common-sense reasoning as probabilistic inference in a statistical model of observations and the uncertain structure of the world that generated that experience.

A New Approach to Probabilistic Programming Inference

TLDR
A new approach to inference in expressive probabilistic programming languages based on particle Markov chain Monte Carlo is introduced, which supports accurate inference in models that make use of complex control flow, including stochastic recursion.
...