Publications
Learning Networks of Stochastic Differential Equations
TLDR
The $\ell_1$-regularized least squares algorithm is analyzed, and it is proved that its performance guarantees are uniform in the sampling rate as long as the rate is sufficiently high, substantiating the notion of a well-defined 'time complexity' for the network inference problem.
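A minimal sketch of the kind of $\ell_1$-regularized least-squares estimator the summary refers to, assuming discretized samples of a linear SDE dX = A X dt + dW; the support of each estimated row of the drift matrix gives the inferred network. The drift matrix, sampling interval, and regularization weight below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Illustrative sparse drift matrix A (the "network" to recover) -- an assumption.
p, T, dt = 10, 20000, 0.01
A = np.zeros((p, p))
A[rng.random((p, p)) < 0.1] = -0.5
np.fill_diagonal(A, -2.0)

# Euler-Maruyama samples of dX = A X dt + dW.
X = np.zeros((T, p))
for t in range(T - 1):
    X[t + 1] = X[t] + A @ X[t] * dt + np.sqrt(dt) * rng.standard_normal(p)

# l1-regularized least squares: regress finite differences on the state,
# one row of A at a time; the support of each row is the estimated neighborhood.
dX = (X[1:] - X[:-1]) / dt
A_hat = np.zeros_like(A)
for i in range(p):
    lasso = Lasso(alpha=0.1, fit_intercept=False)
    lasso.fit(X[:-1], dX[:, i])
    A_hat[i] = lasso.coef_

print("true nonzeros:", np.sum(A != 0), "estimated nonzeros:", np.sum(np.abs(A_hat) > 1e-3))
```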
Generative Adversarial Active Learning
TLDR
Unlike regular active learning, the resulting algorithm adaptively synthesizes training instances for querying to increase learning speed; this is likely the first active learning work to use a GAN.
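A rough sketch of the query-synthesis idea, assuming a pretrained generator and a labelling oracle; here picking the random latent sample whose generated instance lies closest to the current decision boundary stands in for gradient-based optimization of the latent code, and the stand-in generator, oracle, and dimensions are all assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def generator(z):
    """Stand-in for a pretrained GAN generator G(z) -> instance (assumption)."""
    return np.tanh(z @ gen_weights)

# Fixed random "generator" weights and a small labelled seed set (all illustrative).
latent_dim, data_dim = 8, 2
gen_weights = rng.standard_normal((latent_dim, data_dim))
X_train = rng.standard_normal((20, data_dim))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # assumed oracle
clf = LogisticRegression().fit(X_train, y_train)

for _ in range(30):                      # active-learning rounds
    # Synthesize candidates with the generator and query the one closest to the
    # current decision boundary, instead of selecting from a fixed unlabelled pool.
    Z = rng.standard_normal((256, latent_dim))
    cand = generator(Z)
    query = cand[np.argmin(np.abs(clf.decision_function(cand)))]

    label = int(query[0] + query[1] > 0)  # ask the oracle for a label
    X_train = np.vstack([X_train, query])
    y_train = np.append(y_train, label)
    clf = LogisticRegression().fit(X_train, y_train)

print("training-set size after synthesis-based querying:", len(X_train))
```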
A message-passing algorithm for multi-agent trajectory planning
TLDR
A novel approach for computing collision-free global trajectories for p agents with specified initial and final configurations, based on an improved version of the alternating direction method of multipliers (ADMM), which allows for incorporating different cost functionals with only minor adjustments.
The Boundary Forest Algorithm for Online Supervised and Unsupervised Learning
We describe a new instance-based learning algorithm called the Boundary Forest (BF) algorithm that can be used for supervised and unsupervised learning. The algorithm builds a forest of trees whose …
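A minimal single-tree sketch consistent with the description above: each node stores one training example, queries descend greedily toward the closest stored example, and a new example is inserted only when the tree currently answers it incorrectly. The full Boundary Forest builds several such trees and, as I understand it, caps the number of children per node; those details are omitted here, and the data and thresholds are assumptions.

```python
import numpy as np

class BoundaryTreeNode:
    def __init__(self, x, y):
        self.x, self.y, self.children = np.asarray(x, float), y, []

def _descend(node, q):
    """Greedy traversal: keep moving to the closest child until the current
    node is at least as close to the query as every child."""
    while True:
        best = min(node.children + [node], key=lambda n: np.linalg.norm(n.x - q))
        if best is node:
            return node
        node = best

def query(root, q):
    return _descend(root, np.asarray(q, float)).y

def train(root, x, y):
    """Insert (x, y) only if the tree currently answers it incorrectly;
    the new example becomes a child of the node reached by the traversal."""
    node = _descend(root, np.asarray(x, float))
    if node.y != y:
        node.children.append(BoundaryTreeNode(x, y))

# Tiny illustrative usage (data and labels are assumptions).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = (X[:, 0] > 0).astype(int)
root = BoundaryTreeNode(X[0], y[0])
for xi, yi in zip(X[1:], y[1:]):
    train(root, xi, yi)
acc = np.mean([query(root, xi) == yi for xi, yi in zip(X, y)])
print("training accuracy of a single boundary tree:", acc)
```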
A Family of Tractable Graph Distances
TLDR
This work defines a broad family of graph distances that includes both the chemical and the CKS distances, proves that they are all metrics, and shows that the family includes metrics that are tractable.
A metric for sets of trajectories that is practical and mathematically consistent
TLDR
The proposed notion of closeness is the first to demonstrate the following three features: it can be computed quickly, it incorporates confusion of trajectories' identities in an optimal way, and it is a metric in the mathematical sense.
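A small sketch of one way to realize the "optimal confusion of identities" idea: compute pairwise trajectory distances and resolve identities with an optimal one-to-one assignment (Hungarian algorithm). The time-averaged Euclidean base distance, equal set sizes, and equal trajectory lengths are simplifying assumptions; the paper's metric also handles mismatched cardinalities.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def traj_dist(a, b):
    """Time-averaged Euclidean distance between two equally long trajectories."""
    return np.mean(np.linalg.norm(a - b, axis=1))

def set_distance(A, B):
    """Distance between two sets of trajectories: pairwise costs followed by an
    optimal one-to-one assignment of identities."""
    cost = np.array([[traj_dist(a, b) for b in B] for a in A])
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

# Illustrative usage: B is A with identities permuted plus small noise.
rng = np.random.default_rng(0)
A = [np.cumsum(rng.standard_normal((50, 2)), axis=0) for _ in range(4)]
B = [A[i] + 0.01 * rng.standard_normal((50, 2)) for i in (2, 0, 3, 1)]
print("set-to-set distance:", set_distance(A, B))
```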
How is Distributed ADMM Affected by Network Topology?
TLDR
A full characterization of the convergence of distributed over-relaxed ADMM for this type of consensus problem is provided in terms of the topology of the underlying graph, along with a proof of the aforementioned conjecture, showing that it holds for any graph, even graphs whose random walks cannot be accelerated via Markov chain lifting.
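A minimal sketch of distributed consensus ADMM over a graph, the kind of setup whose convergence the topology result concerns: each node holds a private quadratic cost, edges carry auxiliary consensus variables, and only neighbors exchange information. The path graph, scalar variables, penalty parameter, and omission of over-relaxation are all simplifying assumptions for illustration.

```python
import numpy as np

# Path graph on n nodes (topology is an illustrative assumption); node i holds
# a private scalar a_i and local cost f_i(x) = (x - a_i)^2 / 2, so the
# consensus optimum is the average of the a_i.
n, rho, iters = 8, 1.0, 200
edges = [(i, i + 1) for i in range(n - 1)]
rng = np.random.default_rng(0)
a = rng.standard_normal(n)

deg = np.zeros(n)
for i, j in edges:
    deg[i] += 1; deg[j] += 1

x = np.zeros(n)                     # local copies, one per node
z = np.zeros(len(edges))            # one auxiliary variable per edge
u = {(i, e): 0.0 for e, (i0, j0) in enumerate(edges) for i in (i0, j0)}  # scaled duals

for _ in range(iters):
    # x-update: each node solves its local quadratic using only the auxiliary
    # variables and duals of its incident edges.
    s = np.zeros(n)
    for e, (i, j) in enumerate(edges):
        s[i] += z[e] - u[(i, e)]
        s[j] += z[e] - u[(j, e)]
    x = (a + rho * s) / (1.0 + rho * deg)
    # z-update and dual update: performed per edge, using only its two endpoints.
    for e, (i, j) in enumerate(edges):
        z[e] = 0.5 * ((x[i] + u[(i, e)]) + (x[j] + u[(j, e)]))
        u[(i, e)] += x[i] - z[e]
        u[(j, e)] += x[j] - z[e]

print("consensus values:", x.round(4), "target average:", a.mean().round(4))
```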
Probabilistic document model for automated document composition
We present a new paradigm for automated document composition based on a generative, unified probabilistic document model (PDM). The model formally incorporates key …
Markov Chain Lifting and Distributed ADMM
TLDR
For a class of quadratic objectives, an analogous behavior is shown, in which a distributed alternating direction method of multipliers (ADMM) algorithm can be seen as a lifting of gradient descent.
Group recommendations via multi-armed bandits
TLDR
A recommendation policy for persistent groups that repeatedly engage in a joint activity is presented; it has logarithmic regret, and the regret is shown to depend linearly on d, the size of the underlying persistent group.
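A small sketch of a bandit recommendation loop with logarithmic regret, using plain UCB1 as a stand-in for the paper's group policy; the number of activities, reward model, and horizon are illustrative assumptions, and the dependence on group size d is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: candidate activities (arms) with unknown mean rewards for
# the group; UCB1 achieves logarithmic regret on this stochastic bandit.
n_arms, horizon = 10, 5000
true_means = rng.uniform(0.2, 0.8, size=n_arms)

counts = np.zeros(n_arms)
means = np.zeros(n_arms)
reward_total = 0.0

for t in range(1, horizon + 1):
    if t <= n_arms:
        arm = t - 1                                   # play each arm once
    else:
        ucb = means + np.sqrt(2.0 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    reward = float(rng.random() < true_means[arm])    # Bernoulli feedback
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]
    reward_total += reward

regret = horizon * true_means.max() - reward_total
print(f"empirical regret after {horizon} rounds: {regret:.1f}")
```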