A brief introduction to probabilistic machine learning with neuroscientific relations


My aim in this article is to summarize concisely what I consider the most important ideas of modern machine learning. I start with some general comments on organizational mechanisms and then focus on unsupervised, supervised and reinforcement learning. Another aim of this introductory review is to relate different approaches in machine learning, such as SVMs and Bayesian networks, or reinforcement learning and temporal supervised learning. Some examples of relations to brain processing are included, such as synaptic plasticity and models of the basal ganglia. I also provide Matlab examples for each of the three main learning paradigms, with programs available at www.cs.dal.ca/~tt/repository/MLintro2012.

1 Evolution, Development and Learning

Development and learning are both important ingredients for the success of natural organisms, and applying these concepts to artificial systems might hold the key to new breakthroughs in science and technology. This article is an introduction to machine learning with examples of its relation to neuroscientific findings. There has been much progress in this area, specifically by realizing the importance of representing uncertainties and the corresponding usefulness of a probabilistic framework.

1.1 Organizational mechanisms

Before focusing on the main learning paradigms that dominate much of our recent thinking in machine learning, I would like to start by briefly outlining some

Thomas Trappenberg, Dalhousie University, Halifax, Canada, e-mail: tt@cs.dal.ca
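The probabilistic framework emphasized in the introduction above can be made concrete with a minimal example of Bayesian belief updating. The paper's own examples are in Matlab; the sketch below is an illustrative Python version, not taken from the paper's repository, and all variable names and the coin-flip setup are my own assumptions. It maintains a discrete belief over a coin's unknown bias and updates it with Bayes' rule after each observed flip.

```python
# Minimal sketch (Python, not the paper's Matlab) of representing uncertainty
# with a probability distribution and updating it by Bayes' rule.
# The coin-flip setup and all names here are illustrative assumptions.
import numpy as np

theta = np.linspace(0.0, 1.0, 101)          # candidate values for p(heads)
posterior = np.ones_like(theta) / len(theta)  # start from a uniform prior

flips = [1, 1, 0, 1]                        # observed data: 1 = heads, 0 = tails
for x in flips:
    likelihood = theta if x == 1 else (1.0 - theta)
    posterior = posterior * likelihood      # Bayes' rule: prior times likelihood
    posterior = posterior / posterior.sum()  # renormalize to a distribution

print(theta[np.argmax(posterior)])          # posterior mode: 0.75 after 3 heads, 1 tail
```

The point of the exercise is that the learner keeps the full distribution `posterior`, not just a point estimate, so the remaining uncertainty about the bias is represented explicitly at every step.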


Cite this paper

@inproceedings{Trappenberg2012ABI,
  title={A brief introduction to probabilistic machine learning with neuroscientific relations},
  author={Thomas P. Trappenberg},
  year={2012}
}