Decentralized Statistical Inference with Unrolled Graph Neural Networks
@inproceedings{Wang2021DecentralizedSI,
  title={Decentralized Statistical Inference with Unrolled Graph Neural Networks},
  author={He Wang and Yifei Shen and Ziyuan Wang and Dongsheng Li and Jun Zhang and Khaled Ben Letaief and Jie Lu},
  booktitle={2021 60th IEEE Conference on Decision and Control (CDC)},
  year={2021},
  pages={2634-2640}
}
In this paper, we investigate the decentralized statistical inference problem, where a network of agents cooperatively recovers a (structured) vector from private noisy samples without centralized coordination. Existing optimization-based algorithms suffer from model mismatch and slow convergence, so their performance degrades when the number of communication rounds is limited. This motivates us to propose a learning-based framework, which unrolls well…
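To make the unrolling idea concrete, the sketch below unrolls classical ISTA iterations for sparse recovery into fixed "layers", where the per-layer step sizes and thresholds play the role of the parameters a learned network would train. This is a minimal illustration under assumed names (`unrolled_ista`, `soft_threshold`), not the architecture proposed in the paper:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_ista(A, y, step_sizes, thresholds):
    """Run len(step_sizes) ISTA iterations, one per 'layer'.

    In an unrolled network the per-layer step sizes and thresholds
    would be trained from data; here they are fixed inputs.
    """
    x = np.zeros(A.shape[1])
    for alpha, tau in zip(step_sizes, thresholds):
        x = soft_threshold(x - alpha * A.T @ (A @ x - y), tau)
    return x

# Tiny noiseless sparse-recovery example.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)) / np.sqrt(20)
x_true = np.zeros(10)
x_true[:2] = [1.0, -1.0]
y = A @ x_true
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth gradient
layers = 200
x_hat = unrolled_ista(A, y, [1.0 / L] * layers, [0.001 / L] * layers)
```

With hand-set parameters this is just ISTA; the point of unrolling is that making each layer's step size and threshold trainable yields a GNN-like network with far fewer iterations for the same accuracy.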
2 Citations
AI Empowered Resource Management for Future Wireless Networks
- Computer Science · 2021 IEEE International Mediterranean Conference on Communications and Networking (MeditCom)
- 2021
For the K-user interference management problem, it is theoretically shown that graph neural networks (GNNs) are superior to multi-layer perceptrons (MLPs), and that the performance gap between the two methods grows with $\sqrt{K}$.
Scalable Power Control/Beamforming in Heterogeneous Wireless Networks with Graph Neural Networks
- Computer Science · 2021 IEEE Global Communications Conference (GLOBECOM)
- 2021
A novel unsupervised learning-based framework named heterogeneous interference graph neural network (HIGNN) is proposed to empower each link to obtain its individual transmission scheme after limited information exchange with neighboring links.
References
Showing 1–10 of 31 references
A fast proximal gradient algorithm for decentralized composite optimization over directed networks
- Computer Science, Mathematics · Systems & Control Letters
- 2017
Graph Neural Networks for Scalable Radio Resource Management: Architecture Design and Theoretical Analysis
- Computer Science · IEEE Journal on Selected Areas in Communications
- 2021
This paper demonstrates that radio resource management problems can be formulated as graph optimization problems enjoying a universal permutation equivariance property, and identifies a family of neural networks, named message passing graph neural networks (MPGNNs), that generalize to large-scale problems while retaining high computational efficiency.
GLAD: Learning Sparse Graph Recovery
- Computer Science · ICLR
- 2020
A deep learning architecture, GLAD, is proposed, which uses an alternating minimization algorithm as its inductive bias and learns the model parameters via supervised learning; GLAD is shown to learn a very compact and effective model for recovering sparse graphs from data.
A Unified Algorithmic Framework for Distributed Composite Optimization
- Computer Science · 2020 59th IEEE Conference on Decision and Control (CDC)
- 2020
A by-product of the analysis is a tuning recommendation for several existing (non-accelerated) distributed algorithms, yielding the fastest provable (worst-case) convergence rate.
A Proximal Gradient Algorithm for Decentralized Composite Optimization
- Computer Science, Mathematics · IEEE Transactions on Signal Processing
- 2015
PG-EXTRA, a proximal gradient exact first-order algorithm, exploits the composite structure, attains the best known convergence rate, and is a nontrivial extension of the recent algorithm EXTRA.
A Decentralized Proximal-Gradient Method With Network Independent Step-Sizes and Separated Convergence Rates
- Computer Science · IEEE Transactions on Signal Processing
- 2019
This paper proposes a novel proximal-gradient algorithm for a decentralized optimization problem with a composite objective containing smooth and nonsmooth terms, achieving separated convergence rates that match the typical rates of general gradient descent and of consensus averaging.
COLA: Decentralized Linear Learning
- Computer Science · NeurIPS
- 2018
This work proposes COLA, a new decentralized training algorithm with strong theoretical guarantees and superior practical performance; it achieves communication efficiency, scalability, elasticity, and resilience to changes in data, and allows for unreliable and heterogeneous participating devices.
Graph Neural Networks for Decentralized Controllers
- Computer Science · ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2021
This paper proposes a framework that uses graph neural networks (GNNs) to learn decentralized controllers from data; GNNs are naturally distributed architectures, making them well suited to the task, and the framework adapts them to handle delayed communications as well.
On Nonconvex Decentralized Gradient Descent
- Computer Science · IEEE Transactions on Signal Processing
- 2018
Somewhat surprisingly, the decentralized consensus algorithms DGD and Prox-DGD retain most of the other properties known in the convex setting, and they accommodate a nonconvex constraint set as long as projection onto it is easy.
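The DGD scheme summarized above admits a compact illustration: each agent averages its iterate with its neighbors through a doubly stochastic mixing matrix and then takes a local gradient step. The toy example below uses hypothetical quadratic local objectives, not the paper's setup:

```python
import numpy as np

def dgd(grads, W, x0, alpha, iters):
    """Decentralized gradient descent: each agent mixes its iterate
    with neighbors via the doubly stochastic matrix W, then takes a
    local gradient step with step size alpha."""
    x = x0.copy()
    for _ in range(iters):
        x = W @ x - alpha * np.array([g(xi) for g, xi in zip(grads, x)])
    return x

# Three agents on a path graph; agent i minimizes f_i(x) = (x - b_i)^2 / 2,
# so the global minimizer is mean(b).
W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
b = np.array([1.0, 2.0, 3.0])
grads = [lambda x, bi=bi: x - bi for bi in b]
x = dgd(grads, W, np.zeros(3), alpha=0.05, iters=500)
# With a small constant step size, all agents approach the global
# minimizer mean(b) = 2, up to an O(alpha) consensus error.
```

With a constant step size DGD converges only to a neighborhood of the optimum; a diminishing step size removes the residual consensus error, matching the convergence behavior the cited analysis studies.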
How Powerful are Graph Neural Networks?
- Computer Science · ICLR
- 2019
This work characterizes the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, shows that they cannot learn to distinguish certain simple graph structures, and develops a simple architecture that is provably the most expressive among the class of GNNs.