Marco F. Cusumano-Towner

This paper introduces a new technique for quantifying the approximation error of a broad class of probabilistic inference programs, including ones based on both variational and Monte Carlo approaches. The key idea is to derive a subjective bound on the symmetrized KL divergence between the distribution achieved by an approximate inference program and its …
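The quantity being bounded here is the symmetrized KL divergence, KL(p‖q) + KL(q‖p). As a minimal sketch of what that quantity measures, assuming two tractable densities standing in for the approximate inference program's output distribution and a gold-standard reference (the paper's contribution is bounding this quantity when such densities are not available), a direct Monte Carlo estimate looks like:

```python
import numpy as np
from scipy import stats

# Hypothetical stand-ins: q plays the role of an approximate inference
# program's output distribution, p a gold-standard reference. The paper's
# estimator does NOT require tractable densities; this sketch does.
p = stats.norm(loc=0.0, scale=1.0)
q = stats.norm(loc=0.5, scale=1.2)

n = 100_000
xp = p.rvs(size=n, random_state=0)  # samples from p
xq = q.rvs(size=n, random_state=1)  # samples from q

# Monte Carlo estimates of KL(p||q) and KL(q||p); their sum is the
# symmetrized KL divergence that the paper derives bounds on.
kl_pq = np.mean(p.logpdf(xp) - q.logpdf(xp))
kl_qp = np.mean(q.logpdf(xq) - p.logpdf(xq))
print("symmetrized KL estimate:", kl_pq + kl_qp)
```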
Figure: For AIS Markov chains with converged detailed-balance kernels, the gap is the sum of KL divergences between consecutive target distributions.
Figure: Estimated lower bounds on the ELBO, and upper bounds on the KL divergence to the posterior, for SMC samplers applied to Bayesian linear regression (left) and Dirichlet process mixture modeling (right). SMC samplers use MCMC …
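The identity referenced in the first caption can be written out as follows. This is a restatement of the caption in assumed notation (π_0, …, π_T for the annealing sequence of target distributions, Z_T for the final normalizing constant, and L_AIS for the AIS lower bound), not a derivation taken from the paper:

```latex
% Gap of the AIS lower bound under perfectly converged transition kernels,
% in assumed notation: \pi_0,\ldots,\pi_T are the annealing targets and
% \mathcal{L}_{\mathrm{AIS}} \le \log Z_T is the AIS estimate of the
% log normalizing constant.
\log Z_T - \mathcal{L}_{\mathrm{AIS}}
  = \sum_{t=1}^{T} \mathrm{KL}\!\left(\pi_{t-1} \,\middle\|\, \pi_t\right)
```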
Approximate probabilistic inference algorithms are central to many fields. Examples include sequential Monte Carlo inference in robotics, variational inference in machine learning, and Markov chain Monte Carlo inference in statistics. A key problem faced by practitioners is measuring the accuracy of an approximate inference algorithm on a specific dataset. …
This paper introduces the probabilistic module interface, which allows encapsulation of complex probabilistic models with latent variables alongside custom stochastic approximate inference machinery, and provides a platform-agnostic abstraction barrier separating the model internals from the host probabilistic inference system. The interface can be seen as …
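A minimal sketch of what such an abstraction barrier might look like, assuming a simulate/regenerate-style pair of operations; the method names and signatures below are assumptions for illustration, not the paper's actual API:

```python
from abc import ABC, abstractmethod
from typing import Any, Tuple

class ProbabilisticModule(ABC):
    """Hypothetical encapsulation boundary in the spirit of the paper's
    probabilistic module interface: the host inference system sees only
    these two operations, never the model's internals."""

    @abstractmethod
    def simulate(self, inputs: Any) -> Tuple[Any, float]:
        """Run the module's internal model and approximate inference
        machinery forward: return an output and a log-weight estimate."""

    @abstractmethod
    def regenerate(self, inputs: Any, output: Any) -> float:
        """Given an existing output, return a log-weight estimate, e.g.
        for use inside a host importance sampler or MCMC kernel."""
```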
Intelligent systems sometimes need to infer the probable goals of people, cars, and robots, based on partial observations of their motion. This paper introduces a class of probabilistic programs for formulating and solving these problems. The formulation uses randomized path planning algorithms as the basis for probabilistic models of the process by which …
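As a toy sketch of this modeling idea, assuming a goal prior and a randomized planner as the likelihood model; every name below, including the noisy straight-line "planner" standing in for a real randomized planner such as an RRT, is a hypothetical illustration:

```python
import random

# Uniform prior over a small set of hypothetical goal locations.
GOALS = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]

def random_walk_planner(start, goal, steps=20, noise=0.5):
    """Stand-in for a randomized path planner: noisy straight-line
    interpolation from start to goal."""
    path = []
    for i in range(1, steps + 1):
        t = i / steps
        x = start[0] + t * (goal[0] - start[0]) + random.gauss(0, noise)
        y = start[1] + t * (goal[1] - start[1]) + random.gauss(0, noise)
        path.append((x, y))
    return path

def sample_trajectory(start):
    """Generative model: draw a goal, then plan a path toward it."""
    goal = random.choice(GOALS)
    return goal, random_walk_planner(start, goal)

def infer_goal(start, observed, n_sims=200):
    """Crude simulation-based goal inference: simulate trajectories and
    pick the goal whose path best matches the observed motion prefix."""
    def score(path):
        m = min(len(path), len(observed))
        return sum((px - ox) ** 2 + (py - oy) ** 2
                   for (px, py), (ox, oy) in zip(path[:m], observed[:m])) / m
    best_goal, _ = min((sample_trajectory(start) for _ in range(n_sims)),
                       key=lambda gp: score(gp[1]))
    return best_goal

# Example: infer the goal from the first few observed waypoints.
obs = [(1.0, 1.1), (2.2, 1.9), (3.1, 3.0)]
print(infer_goal(start=(0.0, 0.0), observed=obs))
```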