Corpus ID: 8424615

Large Scale Variational Inference and Experimental Design for Sparse Generalized Linear Models

@inproceedings{Seeger2009LargeSV,
  title={Large Scale Variational Inference and Experimental Design for Sparse Generalized Linear Models},
  author={Matthias W. Seeger and Hannes Nickisch},
  booktitle={Sampling-based Optimization in the Presence of Uncertainty},
  year={2009}
}
Sparsity is a fundamental concept of modern statistics, and often the only general principle available at the moment to address novel learning applications with many more variables than observations. While much progress has been made recently in the theoretical understanding and algorithmics of sparse point estimation, higher-order problems such as covariance estimation or optimal data acquisition are seldom addressed for sparsity-favouring models, and there are virtually no algorithms for…
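As a rough illustration of the model class the abstract refers to (notation chosen here for exposition, not taken from the paper), a sparse linear model couples a Gaussian likelihood with a sparsity-favouring Laplace prior, and variational inference replaces the resulting non-Gaussian posterior by a tractable Gaussian:

    y = X u + \varepsilon, \quad \varepsilon \sim \mathcal{N}(0, \sigma^{2} I), \quad P(u) \propto \prod_i e^{-\tau |u_i|}
    P(u \mid y) \;\propto\; \mathcal{N}(y \mid X u, \sigma^{2} I) \prod_i e^{-\tau |u_i|} \;\approx\; Q(u) = \mathcal{N}(u \mid m, \Sigma)

In sequential experimental design, candidate measurements are typically scored by the expected reduction in posterior uncertainty, which is why covariance information beyond a point estimate is needed.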

Citations

Sparse linear models: Variational approximate inference and Bayesian experimental design
It is argued that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms.
Variational approximate inference in latent linear models
This thesis considers deterministic approximate inference methods based on minimising the Kullback-Leibler (KL) divergence between a given target density and an approximating 'variational' density, and presents a new method to perform KL variational inference for a broad class of approximating variational densities.
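For reference, a minimal statement of the KL construction this summary refers to (standard textbook form, not specific to the thesis): with a Gaussian approximating family Q = N(m, S), minimising the KL divergence from Q to the posterior is equivalent to maximising a lower bound on the log marginal likelihood,

    \mathrm{KL}(Q \,\|\, P(\cdot \mid y)) = \mathbb{E}_{Q}[\log Q(w)] - \mathbb{E}_{Q}[\log P(y, w)] + \log P(y)
    \log P(y) \;\ge\; \mathbb{E}_{Q}[\log P(y, w)] + \mathcal{H}[Q]

so the fitted Gaussian both approximates the posterior and yields an evidence estimate.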
Bayesian inference and experimental design for large generalised linear models
This thesis derives, studies and applies deterministic approximate inference and experimental design algorithms with a focus on the class of generalised linear models (GLMs), with special emphasis on algorithmic properties such as convexity, numerical stability, and scalability to large numbers of interacting variables.
Concave Gaussian Variational Approximations for Inference in Large-Scale Bayesian Linear Models
For a large class of models, the local variational method is equivalent to a weakened form of Kullback-Leibler Gaussian approximation, and the corresponding objective is a concave function of the Gaussian's parameters for log-concave sites.
Gaussian Kullback-Leibler approximate inference
Numerical results comparing G-KL and other deterministic Gaussian approximate inference methods are presented for: robust Gaussian process regression models with either Student-t or Laplace likelihoods, large scale Bayesian binary logistic regression models, and Bayesian sparse linear models for sequential experimental design.
Variational Bayesian Inference Techniques
Novel variational relaxations of Bayesian integration are described and characterized in relation to posterior maximization; they can be solved robustly for very large models by algorithms unifying convex reconstruction and Bayesian graphical model technology.
Gaussian sampling by local perturbations
This work presents a technique for exact simulation of Gaussian Markov random fields (GMRFs) that can be interpreted as locally injecting noise into each Gaussian factor independently, followed by computing the mean/mode of the perturbed GMRF; this leads to an efficient unbiased estimator of marginal variances.
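A minimal sketch of this local-perturbation idea in the simplest setting of a Bayesian linear-Gaussian model (illustrative code under assumed notation, not the paper's GMRF formulation): each Gaussian factor is perturbed by its own noise, and the mean/mode of the perturbed model is an exact posterior sample, so repeated draws give Monte Carlo estimates of marginal variances.

    import numpy as np

    def perturb_and_solve(X, y, sigma, tau, rng):
        """Draw one exact posterior sample for y = X w + N(0, sigma^2 I),
        prior w ~ N(0, (1/tau) I), by perturbing each Gaussian factor
        and returning the mode of the perturbed model."""
        n, d = X.shape
        A = X.T @ X / sigma**2 + tau * np.eye(d)        # posterior precision
        y_pert = y + sigma * rng.standard_normal(n)     # noise injected into the likelihood factor
        w_pert = rng.standard_normal(d) / np.sqrt(tau)  # noise injected into the prior factor
        b = X.T @ y_pert / sigma**2 + tau * w_pert
        return np.linalg.solve(A, b)                    # mean/mode of the perturbed model

    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 10))
    y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(50)
    samples = np.stack([perturb_and_solve(X, y, 0.1, 1.0, rng) for _ in range(2000)])
    marginal_var = samples.var(axis=0, ddof=1)  # Monte Carlo estimate of posterior marginal variances

In large sparse problems the dense solve would typically be replaced by an iterative (e.g. conjugate-gradient) solve, which is what makes this kind of sampler scale.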
glm-ie: Generalised Linear Models Inference & Estimation Toolbox
The glm-ie package is designed to be simple, generic and easily expansible, and provides a wide choice of penalty functions for estimation, potential functions for inference, and matrix classes with lazy evaluation for convenient modelling.
Causal and Probabilistic Inference: Empirical and Theoretical Analysis of Bayesian Inference in Gaussian Process Models
The Gaussian process (GP) predictor is a popular kernel method for expressing uncertainty about smooth functions [5]. In Bayesian inference, a GP prior is combined with the data to yield the…
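The snippet is cut off above; for completeness, the standard GP regression posterior it alludes to (textbook form with Gaussian noise variance \sigma^2, not a quotation from the cited work) is

    f_* \mid y \;\sim\; \mathcal{N}\!\big( k_*^\top (K + \sigma^2 I)^{-1} y,\;\; k_{**} - k_*^\top (K + \sigma^2 I)^{-1} k_* \big)

where K is the kernel matrix on the training inputs, k_* the kernel vector between training inputs and the test input, and k_{**} the prior variance at the test input.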
Regularization Strategies and Empirical Bayesian Learning for MKL
This paper shows how different MKL algorithms can be understood as applications of either regularization on the kernel weights or block-norm-based regularization, which is more common in structured sparsity and multi-task learning.
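The two views mentioned in this summary are linked by a standard variational identity (stated here in generic notation, not necessarily the paper's): the block 1-norm penalty over kernel-wise weight blocks w_m equals a kernel-weight regularization minimised over simplex weights \beta,

    \Big( \sum_m \|w_m\| \Big)^{2} \;=\; \min_{\beta_m \ge 0,\ \sum_m \beta_m = 1} \; \sum_m \frac{\|w_m\|^{2}}{\beta_m}, \qquad \beta_m^{*} \propto \|w_m\|

so sparsity in \beta corresponds to selecting a sparse combination of kernels.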
