Corpus ID: 237559386

DiNNO: Distributed Neural Network Optimization for Multi-Robot Collaborative Learning

@article{Yu2021DiNNODN,
  title={DiNNO: Distributed Neural Network Optimization for Multi-Robot Collaborative Learning},
  author={Javier Yu and Joseph A. Vincent and Mac Schwager},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.08665}
}
We present a distributed algorithm that enables a group of robots to collaboratively optimize the parameters of a deep neural network model while communicating over a mesh network. Each robot only has access to its own data and maintains its own version of the neural network, but eventually learns a model that is as good as if it had been trained on all the data centrally. No robot sends raw data over the wireless network, preserving data privacy and ensuring efficient use of wireless bandwidth…
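As a rough illustration of the setting the abstract describes, the sketch below runs plain decentralized gradient descent with gossip averaging on a toy linear-regression task split across agents on a ring graph. This is NOT the paper's ADMM-based DiNNO update; the topology, step size, and data split are illustrative assumptions. Each agent keeps its own parameter vector and exchanges only parameters with its neighbors, never raw data, yet all agents approach the solution a central solver would find on the pooled data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, m = 5, 2, 20
x_true = np.array([1.0, -2.0])

# Each agent holds a private data shard (A_i, b_i); raw data is never shared.
A = [rng.normal(size=(m, dim)) for _ in range(n_agents)]
b = [A_i @ x_true + 0.1 * rng.normal(size=m) for A_i in A]

# Ring communication graph with Metropolis (doubly stochastic) weights:
# each agent mixes equally with itself and its two neighbors.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in (i, (i - 1) % n_agents, (i + 1) % n_agents):
        W[i, j] = 1.0 / 3.0

def grad(i, x):
    """Gradient of the local loss f_i(x) = 0.5 * ||A_i x - b_i||^2."""
    return A[i].T @ (A[i] @ x - b[i])

x = [np.zeros(dim) for _ in range(n_agents)]
alpha = 0.005
for _ in range(2000):
    # Gossip: average parameters with neighbors, then take a local gradient step.
    mixed = [sum(W[i, j] * x[j] for j in range(n_agents)) for i in range(n_agents)]
    x = [mixed[i] - alpha * grad(i, mixed[i]) for i in range(n_agents)]

# Centralized least-squares solution on the pooled data, for comparison.
x_central = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
```

With a small constant step size the agents reach approximate consensus near the centralized solution; exact convergence would require a diminishing step size or a bias-correcting method such as the ADMM-style updates the paper builds on.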


References

Showing 1–10 of 41 references
Collective robot reinforcement learning with distributed asynchronous guided policy search
This work proposes a distributed and asynchronous version of guided policy search and uses it to demonstrate collective policy learning on a vision-based door-opening task using four robots, describes how both policy learning and data collection can be conducted in parallel across multiple robots, and presents a detailed empirical evaluation of the system.
Parallel and distributed training of neural networks via successive convex approximation
  • P. Di Lorenzo, Simone Scardapane
  • Computer Science
  • 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP)
  • 2016
A theoretical framework for training neural network (NN) models when data is distributed over a set of agents connected through a sparse network topology, which naturally leads to distributed architectures where agents solve local optimization problems exploiting parallel multi-core processors.
Distributed Reinforcement Learning for Multi-robot Decentralized Collective Construction
It is shown that the sum of experience of all agents can be leveraged to quickly train a collaborative policy that naturally scales to smaller and larger swarms, in a fully observable system.
A Survey of Distributed Optimization Methods for Multi-Robot Systems
The Consensus Alternating Direction Method of Multipliers (C-ADMM) emerges as a particularly attractive and versatile distributed optimization method for multi-robot systems.
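The C-ADMM iteration highlighted in this survey can be sketched on a toy consensus least-squares problem as follows. The ring topology, penalty value, and the closed-form primal update (valid because the local losses are quadratic) are illustrative assumptions, not code from the survey: each agent alternates a dual ascent step on the consensus constraints with its neighbors and a penalized local minimization.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, dim, m = 5, 2, 20
x_true = np.array([1.0, -2.0])
A = [rng.normal(size=(m, dim)) for _ in range(n_agents)]
b = [A_i @ x_true + 0.1 * rng.normal(size=m) for A_i in A]

# Ring graph: neighbors of agent i.
nbrs = [[(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)]

rho = 1.0                                   # ADMM penalty (illustrative choice)
x = [np.zeros(dim) for _ in range(n_agents)]
p = [np.zeros(dim) for _ in range(n_agents)]

for _ in range(200):
    x_old = [xi.copy() for xi in x]
    for i in range(n_agents):
        # Dual ascent on the consensus constraints x_i = x_j with each neighbor.
        p[i] = p[i] + rho * sum(x_old[i] - x_old[j] for j in nbrs[i])
    for i in range(n_agents):
        # Primal update: argmin of f_i(x) + p_i^T x + rho * sum_j ||x - (x_i + x_j)/2||^2,
        # which is a linear solve for the quadratic loss f_i(x) = 0.5 * ||A_i x - b_i||^2.
        d = len(nbrs[i])
        H = A[i].T @ A[i] + 2 * rho * d * np.eye(dim)
        rhs = A[i].T @ b[i] - p[i] + rho * sum(x_old[i] + x_old[j] for j in nbrs[i])
        x[i] = np.linalg.solve(H, rhs)

x_central = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
```

Unlike plain decentralized gradient descent, C-ADMM drives the agents to exact consensus at the pooled-data minimizer, which is the property that makes it attractive as a building block for distributed learning.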
Adaptive Sampling and Online Learning in Multi-Robot Sensor Coverage with Mixture of Gaussian Processes
  • Wenhao Luo, K. Sycara
  • Computer Science
  • 2018 IEEE International Conference on Robotics and Automation (ICRA)
  • 2018
This work proposes a new approach with a mixture of locally learned Gaussian Processes for collective model learning, together with an information-theoretic criterion for simultaneous adaptive sampling in multi-robot coverage, demonstrating better generalization of the environment model and thus improved coverage performance without assuming the density function is known a priori.
Network Topology and Communication-Computation Tradeoffs in Decentralized Optimization
This paper presents an overview of recent work in decentralized optimization and surveys the state-of-the-art algorithms and their analyses tailored to these different scenarios, highlighting the role of the network topology.
Revisiting Parameter Sharing in Multi-Agent Deep Reinforcement Learning
It is shown that increasing centralization during learning arbitrarily mitigates the slowing of convergence due to nonstationarity, and a formal proof is given for a set of methods that allow parameter sharing to be applied in environments with heterogeneous agents.
Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms in Cooperative Tasks
This work consistently evaluates and compares three different classes of MARL algorithms across a diverse range of cooperative multi-agent learning tasks, and provides insights regarding the effectiveness of the different learning approaches.
EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
A novel decentralized exact first-order algorithm (abbreviated as EXTRA) to solve the consensus optimization problem, which uses a fixed, large step size that can be determined independently of the network size or topology.
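The EXTRA update summarized above can be sketched on a toy least-squares problem as follows. The ring graph, Metropolis weights, and step size are illustrative choices (with the second mixing matrix taken as W̃ = (I + W)/2, a standard instantiation); the key point is the gradient-correction term, which lets a fixed step size reach exact consensus at the pooled optimum.

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, dim, m = 5, 2, 20
x_true = np.array([1.0, -2.0])
A = [A_i for A_i in (rng.normal(size=(m, dim)) for _ in range(n_agents))]
b = [A_i @ x_true + 0.1 * rng.normal(size=m) for A_i in A]

# Metropolis weights on a ring graph.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in (i, (i - 1) % n_agents, (i + 1) % n_agents):
        W[i, j] = 1.0 / 3.0

def grads(X):
    """Stacked local gradients of f_i(x_i) = 0.5 * ||A_i x_i - b_i||^2."""
    return np.stack([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(n_agents)])

alpha = 0.01
X_prev = np.zeros((n_agents, dim))
X = W @ X_prev - alpha * grads(X_prev)        # first EXTRA step
for _ in range(2000):
    # x^{k+2} = (I + W) x^{k+1} - W_tilde x^k - alpha (grad^{k+1} - grad^k),
    # with W_tilde = (I + W) / 2.
    X_next = (X + W @ X - 0.5 * (X_prev + W @ X_prev)
              - alpha * (grads(X) - grads(X_prev)))
    X_prev, X = X, X_next

# EXTRA converges to the exact minimizer of the summed losses,
# i.e. the least-squares solution on the pooled data.
x_central = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
```

Dropping the correction term `grads(X) - grads(X_prev)` recovers ordinary decentralized gradient descent, which with a fixed step only reaches a neighborhood of the optimum; the correction is what makes EXTRA "exact."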
Neural-Swarm2: Planning and Control of Heterogeneous Multirotor Swarms using Learned Interactions
Experimental results demonstrate that Neural-Swarm2 is able to generalize to larger swarms beyond training cases and significantly outperforms a baseline nonlinear tracking controller with up to three times reduction in worst-case tracking errors.