• Corpus ID: 17793133

Machine learning - a probabilistic perspective

@book{murphy2012machine,
  title={Machine learning - a probabilistic perspective},
  author={Kevin P. Murphy},
  series={Adaptive computation and machine learning series},
  year={2012}
}
  • K. Murphy
  • Published in Adaptive computation and…, 24 August 2012
  • Computer Science
Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as… 
Understanding Machine Learning - From Theory to Algorithms
The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way in an advanced undergraduate or beginning graduate course.
Introduction to Statistical Machine Learning
This introduction to Statistical Machine Learning provides a general introduction to machine learning that covers a wide range of topics concisely and will help you bridge the gap between theory and practice.
Machine Learning Background
This chapter provides an introduction to standard machine learning approaches that learn from tabular data representations, followed by an outline of approaches using various other data types.
Probabilistic Data Analysis with Probabilistic Programming by Feras
This thesis introduces composable generative population models (CGPMs), a computational abstraction that extends directed graphical models and can be used to describe and compose a broad class of probabilistic data analysis techniques.
General Purpose Probabilistic Programming Platform with Effective Stochastic Inference
This work formulated structure discovery as a form of “probabilistic program synthesis”, and showed that 10 lines of code are sufficient to extend ABCD into a nonparametric Bayesian clustering technique that identifies time series which share covariance structure.
Automating inference, learning, and design using probabilistic programming
The aim of this paper is to propose a novel approach to inference called Automated Variational Inference for Probabilistic Programming, which allows programmers to specify a stochastic process using syntax that resembles modern programming languages.
Learning Probabilistic Logic Programs in Continuous Domains
The first steps towards inducing probabilistic logic programs for continuous and mixed discrete-continuous data, without being pigeon-holed to a fixed set of distribution families are taken.
Machine learning methods for generating high dimensional discrete datasets
Two possible approaches to generating datasets that reflect the patterns of real ones are explored, each using a two-step approach: constraint-based generation and probabilistic generative modeling.
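A minimal instance of the probabilistic-generative route described above is to fit independent per-column Bernoulli marginals to a binary dataset and then sample new rows from them; this is a deliberately simple stand-in for the richer models the paper considers, and the data here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: learn a model from the "real" data
# (here, just the per-column Bernoulli rates of a toy binary matrix)
real = rng.random((500, 4)) < np.array([0.1, 0.5, 0.9, 0.3])
rates = real.mean(axis=0)

# Step 2: generate a synthetic dataset that reflects those marginal patterns
synthetic = rng.random((500, 4)) < rates
```

The two-step structure (fit, then sample) is the point; real generative models would also capture dependencies between columns, which independent Bernoullis ignore.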
A Survey on Large-Scale Machine Learning
A systematic survey on existing LML methods is offered to provide a blueprint for the future developments of this area and categorize the methods in each perspective according to their targeted scenarios and introduce representative methods in line with intrinsic strategies.
Deep Learning
Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.


Gaussian Processes for Machine Learning
The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics, and deals with the supervised learning problem for both regression and classification.
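The regression setting this book treats reduces to two closed-form equations for the posterior mean and covariance at test inputs. A minimal sketch, assuming a squared-exponential kernel and made-up one-dimensional training data:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """Posterior mean and covariance of a zero-mean GP at test inputs Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))   # train covariance + noise
    Ks = rbf(X, Xs)                          # train/test cross-covariance
    Kss = rbf(Xs, Xs)                        # test covariance
    mean = Ks.T @ np.linalg.solve(K, y)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, cov

X = np.array([-1.0, 0.0, 1.0])
y = np.sin(X)
mean, cov = gp_posterior(X, y, X)
# with tiny noise, the posterior mean interpolates the training targets
```

Classification needs an extra approximation step (the posterior is no longer Gaussian), which is where much of the book's depth lies.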
Probabilistic Graphical Models - Principles and Techniques
The framework of probabilistic graphical models, presented in this book, provides a general approach for causal reasoning and decision making under uncertainty, allowing interpretable models to be constructed and then manipulated by reasoning algorithms.
Computer Vision: Models, Learning, and Inference
This modern treatment of computer vision shows how to use training data to learn the relationships between the observed image data and the aspects of the world that the authors wish to estimate, such as the 3D structure or the object class, and how to exploit these relationships to make new inferences about the world from new image data.
Learning Determinantal Point Processes
This thesis shows how determinantal point processes can be used as probabilistic models for binary structured problems characterized by global, negative interactions, and demonstrates experimentally that the techniques introduced allow DPPs to be used for real-world tasks like document summarization, multiple human pose estimation, search diversification, and the threading of large document collections.
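The core computation behind such models is compact: under an L-ensemble DPP, the probability of selecting subset Y is det(L_Y) / det(L + I). A minimal sketch with an illustrative two-item kernel (not from the thesis):

```python
import numpy as np

def dpp_prob(L, Y):
    """Probability of subset Y under the L-ensemble DPP with kernel L:
    P(Y) = det(L_Y) / det(L + I); the empty submatrix has determinant 1."""
    L = np.asarray(L, dtype=float)
    Y = list(Y)
    num = np.linalg.det(L[np.ix_(Y, Y)]) if Y else 1.0
    den = np.linalg.det(L + np.eye(L.shape[0]))
    return num / den

# Toy diagonal kernel: no off-diagonal similarity, so items are
# selected independently and all four subsets are equally likely
L = np.eye(2)
probs = {(): dpp_prob(L, []), (0,): dpp_prob(L, [0]),
         (1,): dpp_prob(L, [1]), (0, 1): dpp_prob(L, [0, 1])}
```

Larger off-diagonal entries in L make similar items repel each other, which is the "global, negative interaction" the summary refers to.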
Inducing Features of Random Fields
The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated.
Probabilistic models of vision and max-margin methods
This paper shows that by placing bounds on the normalization constant the authors can obtain computationally tractable approximations to probabilistic methods including multi-class max-margin, ordinal regression, max-margin Markov networks and parsers, multiple-instance learning, and latent SVM.
Large-scale kernel machines
This volume offers researchers and engineers practical solutions for learning from large scale datasets, with detailed descriptions of algorithms and experiments carried out on realistically large datasets, and offers information that can address the relative lack of theoretical grounding for many useful algorithms.
Max-Margin Markov Networks
Maximum margin Markov (M3) networks incorporate both kernels, which efficiently deal with high-dimensional features, and the ability to capture correlations in structured data; a new theoretical bound for generalization in structured domains is also provided.
Experiments with a New Boosting Algorithm
This paper describes experiments carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems, and compares boosting to Breiman's "bagging" method when used to aggregate various classifiers.
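As a concrete reference point for the algorithm being evaluated, the discrete AdaBoost loop over decision stumps can be sketched as follows; the toy data and stump learner are illustrative, not from the paper:

```python
import numpy as np

def fit_stump(x, y, w):
    """Best 1-D threshold stump under sample weights w: (thresh, sign, err)."""
    best = (0.0, 1, np.inf)
    for t in np.unique(x):
        for s in (1, -1):
            pred = np.where(x >= t, s, -s)
            err = w[pred != y].sum()
            if err < best[2]:
                best = (t, s, err)
    return best

def adaboost(x, y, rounds=5):
    """Discrete AdaBoost: reweight examples, combine stumps by weighted vote."""
    n = len(x)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        t, s, err = fit_stump(x, y, w)
        err = max(err, 1e-10)                       # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)       # stump's vote weight
        pred = np.where(x >= t, s, -s)
        w *= np.exp(-alpha * y * pred)              # upweight mistakes
        w /= w.sum()
        ensemble.append((alpha, t, s))
    return ensemble

def predict(ensemble, x):
    score = sum(a * np.where(x >= t, s, -s) for a, t, s in ensemble)
    return np.sign(score)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([-1, -1, -1, 1, 1, 1])
model = adaboost(x, y)
```

The pseudo-loss variant the paper studies replaces the weighted 0/1 error above with a measure suited to multi-class weak learners; the reweighting structure is otherwise the same.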