Theophane Weber

We present a framework for efficient inference in structured image models that explicitly reason about objects. We achieve this by performing probabilistic inference using a recurrent neural network that attends to scene elements and processes them one at a time. Crucially, the model itself learns to choose the appropriate number of inference steps. We use…
In a variety of problems originating in supervised, unsupervised, and reinforcement learning, the loss function is defined by an expectation over a collection of random variables, which might be part of a probabilistic model or the external world. Estimating the gradient of this loss function, using samples, lies at the core of gradient-based learning…
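As a hedged illustration of the setting described above (this sketch is not from the paper itself; the function and parameter names are invented for the example), the score-function, or likelihood-ratio, estimator is one classic way to estimate such gradients from samples, here applied to a Bernoulli random variable:

```python
import random

def score_function_grad(f, theta, n_samples=100_000, seed=0):
    # Estimate d/dtheta E_{x ~ Bernoulli(theta)}[f(x)] via the
    # score-function (likelihood-ratio) identity:
    #   grad = E[ f(x) * d/dtheta log p(x; theta) ]
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = 1 if rng.random() < theta else 0
        score = x / theta - (1 - x) / (1 - theta)  # d/dtheta log p(x; theta)
        total += f(x) * score
    return total / n_samples

# For f(x) = 3x + 1, E[f(x)] = 3*theta + 1, so the true gradient is 3.
est = score_function_grad(lambda x: 3 * x + 1, theta=0.4)
```

The estimator is unbiased but can have high variance, which is one motivation for the variance-reduction techniques such papers study.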
Stochastic event synchrony (SES) is a recently proposed family of similarity measures. First, "events" are extracted from the given signals; next, one tries to align events across the different time series. The better the alignment, the more similar the N time series are considered to be. The similarity measures quantify the reliability of the events (the…
A variety of (dis)similarity measures for one-dimensional point processes (e.g., spike trains) are investigated, including the Victor-Purpura distance metric, the van Rossum distance metric, the Schreiber et al. similarity measure, the Hunter-Milton similarity measure, the event synchronization proposed by Quiroga, and the stochastic event synchrony…
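To make one of these measures concrete: the Victor-Purpura distance is the minimal cost of transforming one spike train into the other, where inserting or deleting a spike costs 1 and shifting a spike by dt costs q*|dt|. A minimal dynamic-programming sketch (not taken from the paper; edge cases such as unsorted inputs are ignored):

```python
def victor_purpura(s, t, q):
    # s, t: sorted lists of spike times; q: cost per unit of time shift.
    # D[i][j] = minimal cost of transforming s[:i] into t[:j].
    n, m = len(s), len(t)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = float(i)          # delete the first i spikes of s
    for j in range(1, m + 1):
        D[0][j] = float(j)          # insert the first j spikes of t
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i - 1][j] + 1,                      # delete s[i-1]
                          D[i][j - 1] + 1,                      # insert t[j-1]
                          D[i - 1][j - 1] + q * abs(s[i - 1] - t[j - 1]))  # shift
    return D[n][m]
```

For q = 0 shifts are free and the distance reduces to the difference in spike counts; for large q it approaches deleting and reinserting every spike.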
We present a novel approach to quantify the statistical interdependence of two time series, referred to as stochastic event synchrony (SES). The first step is to extract events from the two given time series. The next step is to try to align events from one time series with events from the other. The better the alignment, the more similar the two time series are considered…
Stochastic event synchrony is a technique to quantify the similarity of pairs of signals. First, events are extracted from the two given time series. Next, one tries to align events from one time series with events from the other. The better the alignment, the more similar the two time series are considered to be. In Part I, the companion letter in this…
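The alignment step common to the SES abstracts above can be caricatured as follows. This is a deliberately simplified greedy stand-in for the probabilistic inference SES actually performs; the function name and tolerance parameter are invented for the example:

```python
def align_events(a, b, max_lag):
    # Pair each event time in `a` with the nearest unmatched event in `b`
    # that lies within `max_lag`; leftover events on either side count as
    # insertions/deletions. Returns the list of matched index pairs.
    pairs = []
    used = set()
    for i, ta in enumerate(a):
        best, best_j = None, None
        for j, tb in enumerate(b):
            if j in used:
                continue
            d = abs(ta - tb)
            if d <= max_lag and (best is None or d < best):
                best, best_j = d, j
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs

a = [0.1, 0.5, 0.9]
b = [0.12, 0.55, 1.4]
pairs = align_events(a, b, max_lag=0.1)  # the event at 0.9 stays unmatched
```

The more events that can be paired (and the smaller their timing offsets), the more similar the two series are judged to be.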
We consider a decision network on an undirected graph in which each node corresponds to a decision variable, and each node and edge of the graph is associated with a reward function whose value depends only on the variables of the corresponding nodes. The goal is to construct a decision vector which maximizes the total reward. This decision problem…
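For a tiny instance, the objective described above can be maximized by brute force, which makes the problem statement concrete (this sketch is illustrative only; the paper studies far more scalable approaches, and the data-structure choices here are invented):

```python
from itertools import product

def best_decision(nodes, edges, node_reward, edge_reward):
    # nodes: list of node ids 0..n-1; edges: list of (u, v) pairs.
    # node_reward[v][x_v] and edge_reward[(u, v)][x_u][x_v] give the
    # rewards for binary decisions x_v in {0, 1}.
    best_val, best_x = float("-inf"), None
    for x in product([0, 1], repeat=len(nodes)):
        val = sum(node_reward[v][x[v]] for v in nodes)
        val += sum(edge_reward[(u, v)][x[u]][x[v]] for (u, v) in edges)
        if val > best_val:
            best_val, best_x = val, x
    return best_x, best_val

# Two nodes, one edge penalizing choosing both:
nodes = [0, 1]
edges = [(0, 1)]
node_reward = {0: [0, 1], 1: [0, 1]}
edge_reward = {(0, 1): [[0, 0], [0, -5]]}
best_x, best_val = best_decision(nodes, edges, node_reward, edge_reward)
```

The search is exponential in the number of nodes, which is precisely why structured algorithms over the graph are needed.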
Finding the largest independent set in a graph is a notoriously difficult NP-complete combinatorial optimization problem. Unlike other NP-complete problems, it does not admit a constant-factor approximation algorithm for general graphs. Furthermore, even for graphs with largest degree 3, no polynomial-time approximation algorithm exists with a…
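A standard heuristic baseline for this problem (not the paper's algorithm) is the greedy rule that repeatedly takes a minimum-degree vertex and discards its neighbours. It always returns an independent set, though, consistent with the hardness results above, with no constant-factor guarantee in general:

```python
def greedy_independent_set(adj):
    # adj: dict mapping each vertex to a list of its neighbours.
    # Repeatedly pick a minimum-degree vertex among those remaining,
    # add it to the set, and remove it together with its neighbours.
    remaining = set(adj)
    chosen = set()
    while remaining:
        v = min(remaining,
                key=lambda u: sum(1 for w in adj[u] if w in remaining))
        chosen.add(v)
        remaining.discard(v)
        remaining -= set(adj[v])
    return chosen

# Path graph 0 - 1 - 2 - 3: an optimal independent set has size 2.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
result = greedy_independent_set(path)
```

On paths and trees this greedy rule happens to do well; the hard instances are dense general graphs.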
We introduce Dimple, a fully open-source API for probabilistic modeling. Dimple allows the user to specify probabilistic models in the form of graphical models, Bayesian networks, or factor graphs, and performs inference (by automatically deriving an inference engine from a variety of algorithms) on the model. Dimple also serves as a compiler for GP5, a…