# Distributed Inference With Sparse and Quantized Communication

    @article{Mitra2021DistributedIW,
      title   = {Distributed Inference With Sparse and Quantized Communication},
      author  = {Aritra Mitra and John A. Richards and Saurabh Bagchi and Shreyas Sundaram},
      journal = {IEEE Transactions on Signal Processing},
      year    = {2021},
      volume  = {69},
      pages   = {3906--3921}
    }

We consider the problem of distributed inference where agents in a network observe a stream of private signals generated by an unknown state, and aim to uniquely identify this state from a finite set of hypotheses. We focus on scenarios where communication between agents is costly, and takes place over channels with finite bandwidth. To reduce the frequency of communication, we develop a novel event-triggered distributed learning rule that is based on the principle of diffusing low beliefs on…
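The "diffusing low beliefs" principle rests on a min-rule style aggregation of beliefs. A minimal sketch of one update step is shown below; this is our own illustrative simplification (function and variable names are ours), and the paper's actual rule additionally involves event-triggering and quantized communication:

```python
import numpy as np

def min_rule_update(own_belief, neighbor_beliefs, likelihoods):
    """One step of a min-rule belief update (illustrative sketch only).

    own_belief: agent's current belief vector over the hypotheses.
    neighbor_beliefs: list of belief vectors received from neighbors.
    likelihoods: likelihood of the agent's latest private signal
                 under each hypothesis.
    """
    # Aggregate by taking, for each hypothesis, the minimum belief
    # across one's own belief and the neighbors' reported beliefs;
    # a low belief on a false hypothesis thus diffuses through the network.
    stacked = np.vstack([own_belief] + list(neighbor_beliefs))
    aggregated = stacked.min(axis=0)
    # Bayesian-style update with the private signal, then renormalize.
    updated = aggregated * likelihoods
    return updated / updated.sum()
```

With two hypotheses, a neighbor's low belief on hypothesis 1 pulls the agent's belief down even if the agent's own belief was high.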

## 7 Citations

Random Information Sharing over Social Networks

- Computer Science, ArXiv
- 2022

It is shown that agents can learn the true hypothesis even if they do not discuss it, at rates comparable to traditional social learning, and that using one’s own belief as a prior for estimating the neighbors’ non-transmitted components might create opinion clusters that prevent learning with full confidence.

Social Learning under Randomized Collaborations

- Computer Science, 2022 IEEE International Symposium on Information Theory (ISIT)
- 2022

It is shown that under this sparser communication scheme, the agents learn the truth eventually and the asymptotic convergence rate remains the same as the standard algorithms, which use more communication resources.

Non-Bayesian Social Learning on Random Digraphs with Aperiodically Varying Network Connectivity

- Mathematics, Computer Science, IEEE Transactions on Control of Network Systems
- 2022

It is shown by proof and an example that if the network of influences is balanced in a certain sense, then asymptotic learning occurs almost surely even in the absence of uniform strong connectivity.

Communication-Efficient and Fault-Tolerant Social Learning

- Computer Science, 2021 55th Asilomar Conference on Signals, Systems, and Computers
- 2021

Almost sure asymptotic convergence of the beliefs of non-faulty agents around the optimal hypothesis are shown and numerical evidence for the communication efficiency and robustness of the proposed algorithm is provided.

Robust Federated Best-Arm Identification in Multi-Armed Bandits

- Computer Science
- 2021

This work proposes Fed-SEL, a simple communication-efficient algorithm that builds on successive elimination techniques and involves local sampling steps at the clients, and introduces a notion of arm-heterogeneity that captures the level of dissimilarity between distributions of arms corresponding to different clients.

Exploiting Heterogeneity in Robust Federated Best-Arm Identification

- Computer Science, ArXiv
- 2021

This work proposes Fed-SEL, a simple communication-efficient algorithm that builds on successive elimination techniques and involves local sampling steps at the clients, and introduces a notion of arm-heterogeneity that captures the level of dissimilarity between distributions of arms corresponding to different clients.

Communication-Efficient Distributed Cooperative Learning with Compressed Beliefs

- Computer Science, ArXiv
- 2021

This work proves the almost sure asymptotic exponential convergence of beliefs around the set of optimal hypotheses, and shows a nonasymptotic, explicit, and linear concentration rate in probability of the beliefs on the optimal hypothesis set.

## References

Showing 1-10 of 36 references.

Social learning and distributed hypothesis testing

- Computer Science, 2014 IEEE International Symposium on Information Theory
- 2014

Under mild assumptions, the belief of any agent in any incorrect parameter converges to zero exponentially fast, and the exponential rate of learning is characterized by the network structure and the divergences between the observations' distributions.

A New Approach to Distributed Hypothesis Testing and Non-Bayesian Learning: Improved Learning Rate and Byzantine Resilience

- Computer Science, IEEE Transactions on Automatic Control
- 2021

A distributed learning rule is proposed that differs fundamentally from existing approaches, in that it does not employ any form of "belief-averaging", and agents update their beliefs based on a min-rule.

Switching to learn

- Computer Science, 2015 American Control Conference (ACC)
- 2015

In this model, agents exchange information only when their private signals are not informative enough; by switching between the two regimes, agents efficiently learn the truth using only a few rounds of communication, preserving learnability while incurring a lower communication cost.

Non-Bayesian social learning

- Economics, Games Econ. Behav.
- 2012

It is shown that, as long as individuals take their personal signals into account in a Bayesian way, repeated interactions lead them to successfully aggregate information and learn the true parameter.

A New Approach for Distributed Hypothesis Testing with Extensions to Byzantine-Resilience

- Computer Science, 2019 American Control Conference (ACC)
- 2019

It is proved that each non-adversarial agent can asymptotically learn the true state of the world almost surely, under appropriate conditions on the observation model and the network topology.

Communication Constrained Learning with Uncertain Models

- Computer Science, Economics, ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2020

This work proposes an event-triggered communication protocol that only transmits a belief for a hypothesis if new information has been incorporated since the previous communication time, and shows that the proposed solution allows the agents to achieve beliefs within the neighborhood of a full communication network, while significantly reducing the amount of transmissions.
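A generic trigger of this kind, transmitting only when the belief has moved enough since the last broadcast, can be sketched as follows. This is a hypothetical illustration; the function name and threshold are ours, not the cited protocol:

```python
def should_transmit(current, last_sent, threshold=0.05):
    """Event trigger (sketch): broadcast only when some component of the
    belief vector has changed by more than `threshold` since the last
    transmission. `threshold` is an illustrative design parameter."""
    return max(abs(c - l) for c, l in zip(current, last_sent)) > threshold
```

Between triggers, neighbors reuse the last transmitted belief, which is what saves communication.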

Event-Triggered Distributed Inference

- Computer Science, 2020 59th IEEE Conference on Decision and Control (CDC)
- 2020

This work proposes an event-triggered distributed learning algorithm based on the principle of diffusing low beliefs on each false hypothesis, and designs a trigger condition under which an agent broadcasts only those components of its belief vector that have adequate innovation, to only those neighbors that require such information.

Non-Bayesian Social Learning with Uncertain Models over Time-Varying Directed Graphs

- Computer Science, 2019 IEEE 58th Conference on Decision and Control (CDC)
- 2019

This work proposes a new algorithm to iteratively construct a set of beliefs that indicate whether a certain hypothesis is supported by the empirical evidence, and can be implemented over time-varying directed graphs, with non-doubly stochastic weights.

A Communication-Efficient Algorithm for Exponentially Fast Non-Bayesian Learning in Networks

- Computer Science, 2019 IEEE 58th Conference on Decision and Control (CDC)
- 2019

A novel distributed learning rule is proposed wherein agents aggregate neighboring beliefs based on a min-protocol, and the inter-communication intervals grow geometrically at a rate a ≥ 1, to achieve communication-efficient non-Bayesian learning over a network.
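A schedule with inter-communication intervals growing geometrically at rate a >= 1 can be illustrated as follows (a sketch under our own rounding convention; the cited paper's exact schedule may differ):

```python
def communication_times(a, horizon):
    """Return the communication instants up to `horizon` when the gap
    between consecutive broadcasts grows geometrically at rate a >= 1.
    With a == 1 the agents communicate at every step."""
    times, t, gap = [], 0, 1
    while t <= horizon:
        times.append(t)
        t += gap
        gap = max(1, round(gap * a))  # grow the gap, keep it an integer
    return times
```

With a = 2 the agents broadcast at times 0, 1, 3, 7, 15, ..., so the number of transmissions up to time T is only logarithmic in T.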

Fast Convergence Rates of Distributed Subgradient Methods With Adaptive Quantization

- Computer Science, IEEE Transactions on Automatic Control
- 2021

This article introduces a novel quantization method and shows that if the objective functions are convex or strongly convex, adaptive quantization does not affect the convergence rate of distributed subgradient methods with quantized communication, except through a constant that depends on the quantizer's resolution.
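The idea behind adaptive quantization (shrink the quantizer's range as the iterates converge, so a fixed bit budget yields ever-finer resolution) can be sketched as follows; this is an illustrative uniform quantizer with names of our choosing, not the article's exact scheme:

```python
import numpy as np

def adaptive_quantize(x, center, radius, bits=4):
    """Uniformly quantize x within [center - radius, center + radius]
    using 2**bits levels. "Adaptive" means the caller shrinks `radius`
    over iterations as the iterates converge (illustrative sketch)."""
    levels = 2 ** bits
    step = 2 * radius / (levels - 1)
    # Clip into the quantizer's range, then snap to the nearest level.
    clipped = np.clip(x, center - radius, center + radius)
    idx = np.round((clipped - (center - radius)) / step)
    return (center - radius) + idx * step
```

The quantization error is bounded by half the step size, 2 * radius / (2 ** bits - 1) / 2, so halving the radius at each round halves the error without increasing the bit budget.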