Publications
Learning Networks of Stochastic Differential Equations
We consider linear models for stochastic dynamics. To any such model one can associate a network (namely, a directed graph) describing which degrees of freedom interact under the dynamics. We tackle …
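A minimal sketch of the setup behind this line of work, assuming a linear SDE dx = A x dt + dW whose interaction network is the sparsity pattern of A. The estimator here is plain least squares on discretized increments, used as a stand-in for the regularized estimators analyzed in the paper; the network, dimensions, and step size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse interaction matrix A for a 5-node network.
p, dt, T = 5, 0.01, 20000
A = np.zeros((p, p))
A[0, 1] = A[1, 2] = A[2, 0] = -0.5
A -= 0.8 * np.eye(p)                      # stabilize the dynamics

# Euler-Maruyama simulation of dx = A x dt + dW.
X = np.zeros((T, p))
for t in range(T - 1):
    X[t + 1] = X[t] + dt * A @ X[t] + np.sqrt(dt) * rng.standard_normal(p)

# Recover A by regressing increments on states: (x_{t+1} - x_t)/dt ~ A x_t.
dX = (X[1:] - X[:-1]) / dt
B, *_ = np.linalg.lstsq(X[:-1], dX, rcond=None)
A_hat = B.T

print(np.round(A_hat, 2))                 # nonzero pattern approximates the true network
```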
Generative Adversarial Active Learning
TLDR: We propose a new active learning by query synthesis approach using Generative Adversarial Networks (GAN).
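The sketch below is a toy, non-GAN stand-in for the query-synthesis idea: a fixed "generator" maps latent codes to data space, and at each round the synthetic point the current classifier is least certain about is sent to an oracle for labeling. The generator, oracle, and dimensions are hypothetical; the paper trains an actual GAN.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Stand-in "generator": a fixed linear map from a 2-D latent space to data space.
W = rng.standard_normal((2, 10))
generate = lambda z: np.tanh(z @ W)

# Hypothetical labeling oracle and a small labeled seed set.
oracle = lambda x: (x[:, :5].sum(axis=1) > 0).astype(int)
X = generate(rng.standard_normal((20, 2)))
y = oracle(X)

clf = LogisticRegression().fit(X, y)

for _ in range(5):
    # Query synthesis: sample latent codes, keep the point the classifier is
    # least certain about, and ask the oracle for its label.
    cand = generate(rng.standard_normal((500, 2)))
    idx = np.abs(clf.decision_function(cand)).argmin()
    X = np.vstack([X, cand[idx]])
    y = np.append(y, oracle(cand[idx:idx + 1]))
    clf.fit(X, y)

print("labeled pool size:", len(y))
```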
A message-passing algorithm for multi-agent trajectory planning
TLDR: We describe a novel approach for computing collision-free global trajectories for p agents with specified initial and final configurations, based on an improved version of the alternating direction method of multipliers (ADMM).
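A heavily simplified sketch of the splitting idea, assuming a single time step and two agents on a line: ADMM alternates between pulling each agent toward its goal and projecting onto a minimum-separation constraint. The paper handles full trajectories for p agents and richer constraints; the goals, separation r, and penalty rho below are illustrative.

```python
import numpy as np

d = np.array([0.0, 0.2])   # toy goal positions of the two agents
r, rho = 1.0, 1.0          # minimum separation and ADMM penalty parameter

def project_separation(v, r):
    """Project v onto {z : |z[0] - z[1]| >= r} (nearest of the two half-planes)."""
    gap = v[0] - v[1]
    if abs(gap) >= r:
        return v.copy()
    s = 1.0 if gap >= 0 else -1.0
    shift = (r - s * gap) / 2.0
    return np.array([v[0] + s * shift, v[1] - s * shift])

x = z = u = np.zeros(2)
for _ in range(100):
    # x-update: proximal step on the "reach your goal" quadratic objective.
    x = (2 * d + rho * (z - u)) / (2 + rho)
    # z-update: enforce the collision-avoidance (minimum separation) constraint.
    z = project_separation(x + u, r)
    # Dual update.
    u = u + x - z

print(np.round(z, 3))      # separated positions, roughly [-0.4, 0.6]
```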
The Boundary Forest Algorithm for Online Supervised and Unsupervised Learning
TLDR: We describe a new instance-based learning algorithm, the Boundary Forest (BF), which can be used for both supervised and unsupervised learning.
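A rough sketch of a single boundary tree, the building block of the Boundary Forest, assuming the greedy traversal described in the paper: a query walks from the root to whichever neighbor is closer until no neighbor improves, and a training point is stored only when the node it reaches would misclassify it. The metric, branching limit, and number of trees are simplified away here.

```python
import numpy as np

class BoundaryTree:
    """Minimal single boundary tree for nearest-prototype classification."""

    def __init__(self, x, label):
        self.nodes = [(np.asarray(x, float), label)]
        self.children = {0: []}            # node index -> child indices

    def _traverse(self, x):
        # Greedily descend: move to the closest child while that improves.
        cur = 0
        while True:
            best, best_d = cur, np.linalg.norm(x - self.nodes[cur][0])
            for c in self.children[cur]:
                d = np.linalg.norm(x - self.nodes[c][0])
                if d < best_d:
                    best, best_d = c, d
            if best == cur:
                return cur
            cur = best

    def query(self, x):
        return self.nodes[self._traverse(x)][1]

    def train(self, x, label):
        # Add x as a new node only if the tree currently gets it wrong.
        near = self._traverse(x)
        if self.nodes[near][1] != label:
            self.nodes.append((np.asarray(x, float), label))
            self.children[len(self.nodes) - 1] = []
            self.children[near].append(len(self.nodes) - 1)

# Toy usage: two Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
tree = BoundaryTree(X[0], y[0])
for xi, yi in zip(X[1:], y[1:]):
    tree.train(xi, yi)
acc = np.mean([tree.query(xi) == yi for xi, yi in zip(X, y)])
print(f"training accuracy: {acc:.2f}, nodes stored: {len(tree.nodes)}")
```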
Probabilistic document model for automated document composition
TLDR: We present a new paradigm for automated document composition based on a generative, unified probabilistic document model (PDM).
A Family of Tractable Graph Distances
TLDR: We define a broad family of graph distances that includes both the chemical distance and the CKS distance, and prove that they are all metrics.
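For intuition, the chemical distance mentioned in the TLDR can be written as a minimum over permutation matrices P of ||AP - PB||, where A and B are adjacency matrices. The brute-force sketch below evaluates it exactly on tiny graphs; the family defined in the paper replaces this intractable minimization with tractable relaxations. The graphs and norm choice are illustrative.

```python
import itertools
import numpy as np

def chemical_distance(A, B):
    """Brute-force min over permutations of ||A P - P B||_F (tiny graphs only)."""
    n = A.shape[0]
    best = np.inf
    for perm in itertools.permutations(range(n)):
        P = np.eye(n)[list(perm)]          # permutation matrix
        best = min(best, np.linalg.norm(A @ P - P @ B))
    return best

# Toy example: a 4-cycle vs. a 4-path (adjacency matrices).
cycle = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], float)
path  = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)

print(chemical_distance(cycle, cycle))     # 0.0 for isomorphic graphs
print(chemical_distance(cycle, path))      # > 0 for non-isomorphic graphs
```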
A metric for sets of trajectories that is practical and mathematically consistent
  • José Bento
  • Computer Science, Mathematics
  • ArXiv
  • 12 January 2016
TLDR: Metrics on the space of sets of trajectories are important for scientists in the fields of computer vision, machine learning, robotics, and general artificial intelligence.
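As an illustration of the kind of object involved (not the specific metric constructed in the paper), the sketch below compares two equal-size sets of trajectories by computing pairwise trajectory distances and an optimal one-to-one assignment. The base trajectory distance and equal-size assumption are ad hoc choices here; handling sets of different sizes is part of what a properly defined metric must address.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def trajectory_dist(p, q):
    """Average pointwise Euclidean distance between two equal-length trajectories."""
    return np.linalg.norm(p - q, axis=1).mean()

def set_distance(P, Q):
    """Best one-to-one matching cost between two equal-size sets of trajectories."""
    cost = np.array([[trajectory_dist(p, q) for q in Q] for p in P])
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

# Two sets of three 2-D trajectories, 10 time steps each.
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 10, 2)).cumsum(axis=1)
Q = P[[2, 0, 1]] + 0.1 * rng.standard_normal((3, 10, 2))   # permuted + noisy copy

print(round(set_distance(P, Q), 3))        # small: the matching recovers the permutation
```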
An explicit rate bound for over-relaxed ADMM
  • G. França, José Bento
  • Mathematics, Computer Science
  • IEEE International Symposium on Information…
  • 7 December 2015
TLDR: We provide an exact analytical solution to a semidefinite program and obtain a general and explicit upper bound on the convergence rate of the entire family of over-relaxed ADMM algorithms.
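To make the object of study concrete: over-relaxed ADMM mixes the fresh x-iterate with the previous z-iterate using a relaxation parameter alpha before the z- and dual-updates (alpha = 1 recovers standard ADMM; values up to 2 are common). The sketch below runs it on a toy lasso problem; the problem data, rho, and alpha are illustrative, and the rate bound derived in the paper is not computed here.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, lam, rho, alpha = 50, 20, 0.1, 1.0, 1.8    # alpha = 1.0 is plain ADMM

A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(m)

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
M = np.linalg.inv(A.T @ A + rho * np.eye(n))     # cached x-update matrix

x = z = u = np.zeros(n)
for _ in range(200):
    x = M @ (A.T @ b + rho * (z - u))
    x_hat = alpha * x + (1 - alpha) * z          # over-relaxation step
    z = soft(x_hat + u, lam / rho)
    u = u + x_hat - z

print(np.round(z[:5], 2))                        # sparse estimate close to x_true
```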
How is Distributed ADMM Affected by Network Topology?
When solving consensus optimization problems over a graph, there is often an explicit characterization of the convergence rate of Gradient Descent (GD) using the spectrum of the graph Laplacian. The …
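The first sentence refers to a standard fact: for consensus-style quadratic objectives, gradient descent's convergence is governed by the extreme nonzero eigenvalues of the graph Laplacian. The sketch below computes that GD rate for a cycle graph, minimizing (1/2) x^T L x with the optimally tuned constant step size; the analogous ADMM rate from the paper is not reproduced here, and the graph and size are illustrative.

```python
import numpy as np

def cycle_laplacian(n):
    """Graph Laplacian of the n-node cycle."""
    L = 2 * np.eye(n)
    for i in range(n):
        L[i, (i + 1) % n] = L[i, (i - 1) % n] = -1
    return L

n = 20
L = cycle_laplacian(n)
eig = np.sort(np.linalg.eigvalsh(L))
lam2, lam_max = eig[1], eig[-1]          # smallest nonzero and largest eigenvalue

# Gradient descent on f(x) = 0.5 * x^T L x with the optimal constant step size
# converges toward consensus at a linear rate governed by these two eigenvalues.
step = 2 / (lam2 + lam_max)
rate = (lam_max - lam2) / (lam_max + lam2)
print(f"lambda_2 = {lam2:.4f}, lambda_max = {lam_max:.4f}, GD rate = {rate:.4f}")
```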
Markov Chain Lifting and Distributed ADMM
TLDR: The time to converge to the steady state of a finite Markov chain can be greatly reduced by a lifting operation, although lifting sometimes provides only a marginal speedup.
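A small numerical illustration of the lifting phenomenon on the cycle (a classic example, not the ADMM connection the paper establishes): the lifted, non-reversible walk keeps moving in one direction and reverses only occasionally, and it approaches the uniform distribution far faster than the symmetric random walk. The cycle size, switch probability, and number of steps below are arbitrary.

```python
import numpy as np

n, steps = 51, 200

# Symmetric random walk on the n-cycle.
P = np.zeros((n, n))
for i in range(n):
    P[i, (i + 1) % n] = P[i, (i - 1) % n] = 0.5

# Lifted walk: two copies of the cycle ("clockwise" and "counter-clockwise");
# keep moving in the current direction w.p. 1 - 1/n, switch direction w.p. 1/n.
Q = np.zeros((2 * n, 2 * n))
for i in range(n):
    Q[i, (i + 1) % n] = 1 - 1 / n            # clockwise copy: advance
    Q[i, n + i] = 1 / n                      # ... or switch direction
    Q[n + i, n + (i - 1) % n] = 1 - 1 / n    # counter-clockwise copy: go back
    Q[n + i, i] = 1 / n

def evolve(T, steps):
    """Distribution after `steps` steps, starting from state 0."""
    mu = np.zeros(T.shape[0]); mu[0] = 1.0
    for _ in range(steps):
        mu = mu @ T
    return mu

tv = lambda mu: 0.5 * np.abs(mu - 1.0 / n).sum()   # TV distance to uniform on positions
pos_lifted = evolve(Q, steps)[:n] + evolve(Q, steps)[n:]  # marginal over positions

print("base walk   TV to uniform:", round(tv(evolve(P, steps)), 3))
print("lifted walk TV to uniform:", round(tv(pos_lifted), 3))   # much smaller
```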