Approximating Bayesian inference with a sparse distributed memory system

Abstract

Probabilistic models of cognition have enjoyed recent success in explaining how people make inductive inferences. Yet, the difficult computations over structured representations that are often required by these models seem incompatible with the continuous and distributed nature of human minds. To address this incompatibility, and to understand the implications of constraints on probabilistic models, we take the approach of formalizing the mechanisms by which cognitive and neural processes could approximate Bayesian inference. Specifically, we show that an associative memory system using sparse, distributed representations can be reinterpreted as an importance sampler, a Monte Carlo method for approximating Bayesian inference. This capacity is illustrated through two case studies: a simple letter reconstruction task and the classic problem of property induction. Broadly, our work demonstrates that probabilistic models can be implemented in a practical, distributed manner, and helps bridge the gap between algorithmic- and computational-level models of cognition.
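
The abstract frames the memory system as an importance sampler. For reference, below is a minimal sketch of generic self-normalized importance sampling in Python; the function names and the Gaussian toy model are illustrative assumptions, not the paper's SDM-based implementation.

```python
import numpy as np

# Self-normalized importance sampling: approximate a posterior expectation
# E[f(h) | d] by drawing hypotheses h from a proposal q(h) and weighting
# each sample by p(d | h) p(h) / q(h). Normalizing constants cancel when
# the weights are normalized, so unnormalized log densities suffice.

rng = np.random.default_rng(0)

def importance_sample(f, log_prior, log_likelihood, propose, log_q, n=10_000):
    """Estimate E[f(h) | data] with n weighted samples from the proposal."""
    samples = [propose(rng) for _ in range(n)]
    log_w = np.array([log_prior(h) + log_likelihood(h) - log_q(h)
                      for h in samples])
    w = np.exp(log_w - log_w.max())  # subtract max for numerical stability
    w /= w.sum()                     # self-normalize the weights
    return sum(wi * f(h) for wi, h in zip(w, samples))

# Toy example (assumed, for illustration): infer a Gaussian mean from one
# observation d = 1.2, with prior N(0, 1), likelihood N(h, 0.5^2), and a
# broad proposal N(0, 2^2). The analytic posterior mean is 0.96.
d = 1.2
est = importance_sample(
    f=lambda h: h,
    log_prior=lambda h: -0.5 * h**2,
    log_likelihood=lambda h: -0.5 * ((d - h) / 0.5) ** 2,
    propose=lambda r: r.normal(0.0, 2.0),
    log_q=lambda h: -0.5 * (h / 2.0) ** 2,
)
print(f"posterior-mean estimate: {est:.3f}")
```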


Cite this paper

@inproceedings{Abbott2013ApproximatingBI,
  title     = {Approximating Bayesian inference with a sparse distributed memory system},
  author    = {Joshua T. Abbott and Jessica B. Hamrick and Thomas L. Griffiths},
  booktitle = {CogSci},
  year      = {2013}
}