This paper examines the learning mechanism of adaptive agents over weakly connected graphs and reveals an interesting behavior in how information flows through such topologies. The results clarify how asymmetries in the exchange of data can mask local information at certain agents and make them totally dependent on other agents. A leader-follower …
In this paper, we study diffusion social learning over weakly-connected graphs. We show that the asymmetric flow of information hinders the learning abilities of certain agents regardless of their local observations. Under some circumstances that we clarify in this work, a scenario of total influence (or "mind-control") arises where a set of influential …
This work examines the performance of stochastic sub-gradient learning strategies, for both cases of stand-alone and networked agents, under weaker conditions than usually considered in the literature. It is shown that these conditions are automatically satisfied by several important cases of interest, including support-vector machines and sparsity-inducing …
This work examines the performance of stochastic sub-gradient learning strategies under weaker conditions than usually considered in the literature. The conditions are shown to be automatically satisfied by several important cases of interest including the construction of Linear-SVM, LASSO, and Total-Variation denoising formulations. In comparison, these …
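The stochastic sub-gradient strategy mentioned above can be sketched for the Linear-SVM case; the synthetic data, constant step-size, and regularization weight below are illustrative choices of ours, not values taken from the paper:

```python
import numpy as np

# Sketch: stochastic sub-gradient descent for a linear SVM
# (L2-regularized hinge loss). All parameters are assumptions.
rng = np.random.default_rng(0)
N, d = 200, 5
X = rng.normal(size=(N, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true)          # linearly separable labels

rho = 0.01                       # regularization weight (illustrative)
mu = 0.01                        # constant step-size
w = np.zeros(d)
for k in range(5000):
    i = rng.integers(N)          # sample one datum uniformly
    margin = y[i] * (X[i] @ w)
    # sub-gradient of (rho/2)||w||^2 + max(0, 1 - margin)
    g = rho * w
    if margin < 1:               # hinge is active: pick its sub-gradient
        g -= y[i] * X[i]
    w -= mu * g

acc = np.mean(np.sign(X @ w) == y)
```

The hinge loss is non-differentiable at `margin == 1`, which is exactly the situation where sub-gradient (rather than gradient) analysis is needed.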
This paper examines the convergence rate and mean-square-error performance of momentum stochastic gradient methods in the constant step-size and slow adaptation regime. The results establish that momentum methods are equivalent to the standard stochastic gradient method with a re-scaled (larger) step-size value. The equivalence result is established for all …
This work develops a fully decentralized variance-reduced learning algorithm for on-device intelligence where nodes store and process the data locally and are only allowed to communicate with their immediate neighbors. In the proposed algorithm, there is no need for a central or master unit while the objective is to enable the dispersed …
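The neighbor-only communication pattern can be sketched with a plain diffusion (adapt-then-combine) least-squares example; this is a simplified stand-in for the paper's variance-reduced algorithm, and the ring topology, combination weights, and data model are all our assumptions:

```python
import numpy as np

# Sketch: decentralized learning with no master node. Each agent takes a
# local stochastic gradient step, then averages only with its immediate
# ring neighbors. Simplified stand-in, not the paper's exact algorithm.
rng = np.random.default_rng(2)
K, d = 6, 3                       # number of agents, model dimension
w_star = rng.normal(size=d)
X = [rng.normal(size=(50, d)) for _ in range(K)]   # local datasets
y = [Xk @ w_star for Xk in X]

# doubly-stochastic combination matrix for a ring topology
A = np.zeros((K, K))
for k in range(K):
    A[k, k] = 0.5
    A[k, (k - 1) % K] = 0.25
    A[k, (k + 1) % K] = 0.25

mu = 0.01
W = np.zeros((K, d))              # one local model per agent
for _ in range(2000):
    for k in range(K):            # adapt: local stochastic gradient step
        i = rng.integers(50)
        grad = (X[k][i] @ W[k] - y[k][i]) * X[k][i]
        W[k] = W[k] - mu * grad
    W = A @ W                     # combine: immediate neighbors only

err = np.max(np.linalg.norm(W - w_star, axis=1))
```

No agent ever communicates beyond its two ring neighbors, yet all local models agree on the common minimizer.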
Several useful variance-reduced stochastic gradient algorithms, such as SVRG, SAGA, Finito, and SAG, have been proposed to minimize empirical risks with linear convergence properties to the exact minimizers. The existing convergence results assume uniform data sampling with replacement. However, it has been observed that random reshuffling can deliver …
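The two sampling schemes being contrasted can be made concrete on a toy empirical risk (the least-squares problem, epoch count, and step-size below are our illustrative assumptions): random reshuffling visits every sample exactly once per epoch, while uniform sampling draws with replacement.

```python
import numpy as np

# Sketch: SGD on an empirical least-squares risk under the two sampling
# schemes discussed above. Problem and parameters are illustrative.
rng = np.random.default_rng(3)
N, d = 100, 4
X = rng.normal(size=(N, d))
w_star = rng.normal(size=d)
y = X @ w_star                    # noiseless, so both schemes converge

def sgd(epochs, reshuffle, mu=0.01):
    w = np.zeros(d)
    for _ in range(epochs):
        if reshuffle:
            # random reshuffling: sample WITHOUT replacement each epoch
            order = rng.permutation(N)
        else:
            # uniform sampling WITH replacement (the standard assumption)
            order = rng.integers(N, size=N)
        for i in order:
            w -= mu * (X[i] @ w - y[i]) * X[i]
    return np.linalg.norm(w - w_star)

err_rr = sgd(50, reshuffle=True)
err_us = sgd(50, reshuffle=False)
```

The only difference between the two runs is the index sequence; this is the distinction that the reshuffling analyses must handle, since the without-replacement indices within an epoch are no longer independent.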