
In this article we propose a new Rao-Blackwellized particle filtering-based algorithm for tracking an unknown number of targets. The algorithm is based on formulating probabilistic stochastic process models for target states, data associations, and birth and death processes. The tracking of these stochastic processes is implemented using sequential Monte…
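The full Rao-Blackwellized, multi-target machinery is beyond a snippet, but the sequential Monte Carlo core it builds on can be sketched as a bootstrap particle filter for a single 1D target. Everything below (the random-walk model, noise levels, particle count) is an illustrative assumption, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(observations, n_particles=500, proc_std=1.0, obs_std=1.0):
    """Bootstrap particle filter for a 1D Gaussian random-walk state."""
    particles = rng.normal(0.0, 1.0, n_particles)  # draws from an assumed prior
    estimates = []
    for y in observations:
        # Propagate each particle through the random-walk process model
        particles = particles + rng.normal(0.0, proc_std, n_particles)
        # Reweight by the Gaussian measurement likelihood
        log_w = -0.5 * ((y - particles) / obs_std) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))  # weighted posterior mean
        # Multinomial resampling to counteract weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(estimates)

# Simulate a random-walk trajectory with noisy measurements and track it
true_x = np.cumsum(rng.normal(0.0, 1.0, 50))
obs = true_x + rng.normal(0.0, 1.0, 50)
est = bootstrap_pf(obs)
```

The Rao-Blackwellization in the article goes further by handling part of the state analytically (with Kalman-filter updates) and sampling only the rest, which this plain sketch does not attempt.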

We give a short review on the Bayesian approach for neural network learning and demonstrate the advantages of the approach in three real applications. We discuss the Bayesian approach with emphasis on the role of prior knowledge in Bayesian models and in classical error minimization approaches. The generalization capability of a statistical model, classical…

- Simo Särkkä, Arno Solin, Aapo Nummenmaa, Aki Vehtari, Toni Auranen, Simo Vanni +1 other
- NeuroImage
- 2012

In this article we introduce the DRIFTER algorithm, which is a new model-based Bayesian method for retrospective elimination of physiological noise from functional magnetic resonance imaging (fMRI) data. In the method, we first estimate the frequency trajectories of the physiological signals with the interacting multiple models (IMM) filter algorithm. The…

- Aki Vehtari, Andrew Gelman, Jonah Gabry
- 2016

Leave-one-out cross-validation (LOO) and the widely applicable information criterion (WAIC) are methods for estimating pointwise out-of-sample prediction accuracy from a fitted Bayesian model using the log-likelihood evaluated at the posterior simulations of the parameter values. LOO and WAIC have various advantages over simpler estimates of predictive…
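As an illustration, both quantities can be computed directly from an S×n matrix of pointwise log-likelihood values (S posterior draws, n data points). The sketch below uses plain importance-sampling LOO without the Pareto-smoothing refinement that makes the published method robust, so it is a simplified stand-in, not the paper's estimator:

```python
import numpy as np

def waic(log_lik):
    """WAIC from an (S draws x n points) pointwise log-likelihood matrix."""
    lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))  # log pointwise predictive density
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))         # effective number of parameters
    return -2.0 * (lppd - p_waic)                            # deviance scale

def loo_is(log_lik):
    """Plain importance-sampling LOO estimate of elpd (no Pareto smoothing)."""
    # 1 / E_s[1 / p(y_i | theta_s)] approximates the leave-one-out density p(y_i | y_-i)
    return np.sum(-np.log(np.mean(np.exp(-log_lik), axis=0)))

# Invented example: 100 posterior draws, 20 data points, standard-normal log-densities
rng = np.random.default_rng(0)
log_lik = -0.5 * np.log(2 * np.pi) - 0.5 * rng.normal(size=(100, 20)) ** 2
w = waic(log_lik)
e = loo_is(log_lik)
```

In practice the raw importance ratios can have infinite variance, which is exactly the failure mode the smoothed versions of these estimators are designed to diagnose and repair.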

In this work, we discuss practical methods for the assessment, comparison, and selection of complex hierarchical Bayesian models. A natural way to assess the goodness of the model is to estimate its future predictive capability by estimating expected utilities. Instead of just making a point estimate, it is important to obtain the distribution of the…

Magnetoencephalography (MEG) allows millisecond-scale non-invasive measurement of magnetic fields generated by neural currents in the brain. However, localization of the underlying current sources is ambiguous due to the so-called inverse problem. The most widely used source localization methods (i.e., minimum-norm and minimum-current estimates (MNE and…
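The minimum-norm estimate mentioned above reduces to a regularized least-squares formula: among all source patterns consistent with the sensor data, pick the one with smallest norm. A toy NumPy sketch, where the lead-field matrix, sensor/source counts, and regularization value are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def minimum_norm_estimate(L, b, lam=0.1):
    """Regularized minimum-norm source estimate: s = L^T (L L^T + lam I)^{-1} b."""
    m = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(m), b)

# Toy underdetermined problem: 10 "sensors", 50 candidate "sources"
L = rng.normal(size=(10, 50))   # invented lead-field matrix
s_true = np.zeros(50)
s_true[7] = 1.0                 # a single active source
b = L @ s_true                  # noiseless measurements
s_hat = minimum_norm_estimate(L, b, lam=1e-6)
```

The recovered `s_hat` reproduces the measurements but spreads energy over many sources, which is exactly the ambiguity of the inverse problem that motivates priors beyond the minimum-norm one.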

The GPstuff toolbox is a versatile collection of Gaussian process models and computational tools required for Bayesian inference. The tools include, among others, various inference methods, sparse approximations and model assessment methods.
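GPstuff itself is a MATLAB/Octave toolbox, so as a language-neutral illustration of the core computation it provides, here is a minimal NumPy sketch of GP regression with a squared-exponential covariance. The kernel choice, hyperparameters, and test function are illustrative assumptions, not GPstuff's API:

```python
import numpy as np

def sq_exp_kernel(x1, x2, scale=1.0, length=1.0):
    """Squared-exponential covariance between two sets of 1D inputs."""
    d = x1[:, None] - x2[None, :]
    return scale**2 * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=0.1):
    """Posterior mean of GP regression with Gaussian observation noise."""
    K = sq_exp_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    K_star = sq_exp_kernel(x_test, x_train)
    return K_star @ np.linalg.solve(K, y_train)

# Recover a sine wave from 20 nearly noise-free observations
x = np.linspace(0.0, 2.0 * np.pi, 20)
y = np.sin(x)
mu = gp_posterior_mean(x, y, x, noise=0.01)
```

A toolbox like GPstuff adds what this sketch omits: non-Gaussian likelihoods, sparse approximations for large data, and the model assessment machinery mentioned in the abstract.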

We review the Akaike, deviance, and Watanabe-Akaike information criteria from a Bayesian perspective, where the goal is to estimate expected out-of-sample-prediction error using a bias-corrected adjustment of within-sample error. We focus on the choices involved in setting up these measures, and we compare them in three simple examples, one theoretical and…

Log Gaussian processes are an attractive way to construct intensity surfaces for the purposes of spatial epidemiology. The intensity surfaces are naturally smoothed by placing a Gaussian process (GP) prior over the relative log Poisson rate, and the spatial correlations between areas can be included in an explicit and natural way into the model via a…
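The construction described, a GP prior on the log relative rate with conditionally Poisson counts, can be simulated in a few lines. The grid size, covariance function, and baseline rate below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# 1D grid of areas with a squared-exponential spatial covariance (plus jitter)
coords = np.linspace(0.0, 1.0, 30)
d = coords[:, None] - coords[None, :]
K = np.exp(-0.5 * (d / 0.2) ** 2) + 1e-6 * np.eye(30)

# GP prior over the log relative rate; exponentiating gives a smooth, positive surface
f = np.linalg.cholesky(K) @ rng.normal(size=30)
expected = np.full(30, 50.0)          # assumed baseline expected counts per area
intensity = expected * np.exp(f)

# Observed counts are conditionally Poisson given the latent intensity surface
counts = rng.poisson(intensity)
```

Inference reverses this simulation: given observed counts, the posterior over `f` recovers a smoothed intensity surface, with spatial correlation between areas carried entirely by the GP covariance.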

We consider the input variable selection in complex Bayesian hierarchical models. Our goal is to find a model with the smallest number of input variables having statistically or practically at least the same expected utility as the full model with all the available inputs. A good estimate for the expected utility can be computed using cross-validation…