An algorithm to learn optimal actions in convex distributed online problems is developed. Learning is online because cost functions are revealed sequentially, and distributed because they are revealed to agents of a network that can exchange information only with neighboring nodes. Learning is measured in terms of the global network regret, which is defined…
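Under illustrative assumptions (quadratic local costs and a fixed doubly stochastic mixing matrix over a four-agent ring, both hypothetical here, not taken from the paper), a decentralized online gradient scheme of this flavor can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, T = 4, 3, 300

# Doubly stochastic mixing matrix for a 4-agent ring (hypothetical network).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.zeros((n_agents, dim))          # each row is one agent's action
for t in range(T):
    # Local costs f_{i,t}(x) = 0.5 * ||x - b_{i,t}||^2 are revealed only
    # after the agents commit to their current actions.
    b = rng.normal(size=(n_agents, dim))
    grads = x - b
    # One consensus round with neighbors, then a local descent step.
    eta = 1.0 / np.sqrt(t + 1)
    x = W @ x - eta * grads
```

Averaging with W keeps the agents' iterates close while each descends its own sequentially revealed cost; network regret then compares the accumulated cost against the best fixed action in hindsight.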
We consider discriminative dictionary learning in a distributed online setting, where a network of agents aims to learn, from sequential observations, statistical model parameters jointly with data-driven signal representations. We formulate this problem as a distributed stochastic program with a nonconvex objective that quantifies the merit of the choice…
We consider stochastic optimization problems in multiagent settings, where a network of agents aims to learn parameters that are optimal in terms of a global convex objective, while giving preference to locally observed streaming information. To do so, we depart from the canonical decentralized optimization framework where agreement constraints are…
We consider supervised learning problems over training sets in which both the number of training examples and the dimension of the feature vectors are large. We focus on the case where the loss function defining the quality of the parameter we wish to estimate may be nonconvex, although the objective also includes a convex regularization term. We propose a Doubly Stochastic Successive…
We consider learning problems over training sets in which both the number of training examples and the dimension of the feature vectors are large. To solve these problems we propose the random parallel stochastic algorithm (RAPSA). We call the algorithm random parallel because it utilizes multiple processors to operate on a randomly chosen subset of…
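As a rough sketch of random parallel block updates (on a synthetic least-squares problem, with the block partition, minibatch size, and step size all chosen for illustration rather than taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, dim = 1000, 20
A = rng.normal(size=(n_samples, dim))
x_true = rng.normal(size=dim)
y = A @ x_true + 0.01 * rng.normal(size=n_samples)

n_procs, batch, eta = 2, 64, 0.05
blocks = np.arange(dim).reshape(4, -1)    # fixed partition into 4 blocks

x = np.zeros(dim)
for it in range(3000):
    # Each iteration: processors grab distinct random coordinate blocks,
    # and a random minibatch yields the stochastic gradient.
    chosen = rng.choice(len(blocks), size=n_procs, replace=False)
    idx = rng.choice(n_samples, size=batch, replace=False)
    grad = A[idx].T @ (A[idx] @ x - y[idx]) / batch
    for b in chosen:
        x[blocks[b]] -= eta * grad[blocks[b]]   # update only the owned block
```

Only the selected coordinate blocks move at each iteration, so the per-processor work and memory traffic stay small even when the feature dimension is large.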
We consider unconstrained convex optimization problems with objective functions that vary continuously in time. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of 1/h. The prediction step is derived by analyzing…
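A minimal single-agent sketch (with a hypothetical quadratic cost whose minimizer moves on the unit circle, and a finite difference standing in for the mixed partial derivative of the gradient in time) illustrates the predict-then-correct pattern:

```python
import numpy as np

h, gamma, T = 0.05, 0.5, 400   # sampling period h, correction step, horizon

def grad(x, t):
    # Gradient of f(x; t) = 0.5 * ||x - b(t)||^2 with moving minimizer b(t).
    return x - np.array([np.cos(t), np.sin(t)])

x = np.zeros(2)
errs = []
for k in range(1, T):
    t = k * h
    # Prediction: estimate the temporal drift of the gradient by finite
    # differences (the Hessian of this quadratic cost is the identity).
    drift = (grad(x, t) - grad(x, t - h)) / h
    x = x - h * drift
    # Correction: a gradient step on the freshly sampled objective.
    x = x - gamma * grad(x, t)
    errs.append(np.linalg.norm(grad(x, t)))   # distance to the minimizer
```

For this quadratic the prediction exactly compensates the minimizer's motion between samples, so the correction step drives the tracking error down geometrically.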
In pursuit of increasing the operational tempo of a ground robotics platform in unknown domains, we consider the problem of predicting the distribution of structural state-estimation error due to poorly modeled platform dynamics as well as environmental effects. Such predictions are a critical component of any modern control approach that utilizes…
We study networked unconstrained convex optimization problems where the objective function changes continuously in time. We propose a decentralized algorithm (DePCoT) with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and gradient-based correction steps, while sampling the problem data at a constant sampling…
We develop a framework for trajectory tracking in dynamic settings, where an autonomous system is charged with the task of remaining close to an object of interest whose position varies continuously in time. We model this scenario as a convex optimization problem with a time-varying objective function and propose an adaptive discrete-time sampling…
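To see why the sampling scheme matters, consider a correction-only baseline (one gradient step per sample toward a hypothetical target circling at unit speed; all parameters illustrative): it leaves a persistent tracking error proportional to how far the target moves between samples.

```python
import numpy as np

h, gamma = 0.1, 0.8            # sampling period and step size (illustrative)

def target(t):
    # Object of interest moving on the unit circle at unit speed.
    return np.array([np.cos(t), np.sin(t)])

x = target(0.0)
errs = []
for k in range(1, 300):
    t = k * h
    # One gradient step per sample on f(x; t) = 0.5 * ||x - target(t)||^2.
    x = x - gamma * (x - target(t))
    errs.append(np.linalg.norm(x - target(t)))
```

The error settles at a floor roughly proportional to the per-sample target motion (about h here), which is why adapting the sampling interval to the target's speed tightens tracking.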
An algorithm to learn optimal actions in distributed convex repeated games is developed. Learning is repeated because cost functions are revealed sequentially and distributed because they are revealed to agents of a network that can exchange information with neighboring nodes only. Learning is measured in terms of the global networked regret, which is the…