Exact Asymptotics for Learning Tree-Structured Graphical Models With Side Information: Noiseless and Noisy Samples

@article{Tandon2020ExactAF,
  title={Exact Asymptotics for Learning Tree-Structured Graphical Models With Side Information: Noiseless and Noisy Samples},
  author={Anshoo Tandon and Vincent Yan Fu Tan and Shiyao Zhu},
  journal={IEEE Journal on Selected Areas in Information Theory},
  year={2020},
  volume={1},
  pages={760-776}
}
Given side information that an Ising tree-structured graphical model is homogeneous and has no external field, we derive the exact asymptotics of learning its structure from independently drawn samples. Our results, which leverage probabilistic tools from the theory of strong large deviations, refine the large-deviation (error exponent) results of Tan et al. (2011) and strictly improve those of Bresler and Karzand (2020). In addition, we extend our results to the scenario in which…
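For context, the estimator whose error asymptotics this paper pins down is the Chow-Liu algorithm: a maximum-weight spanning tree over pairwise empirical statistics. The sketch below assumes the paper's setting of a zero-field homogeneous Ising tree, where mutual information is a monotone function of the absolute empirical correlation, so the correlation magnitude can serve directly as the edge weight; the function name and the use of networkx are illustrative, not taken from the paper.

```python
import numpy as np
import networkx as nx

def chow_liu_ising_tree(samples: np.ndarray) -> list:
    """Estimate tree edges from an (n, p) array of +/-1 spin samples."""
    n, p = samples.shape
    corr = samples.T @ samples / n  # empirical correlations E[X_i X_j]
    g = nx.Graph()
    for i in range(p):
        for j in range(i + 1, p):
            # With zero external field, mutual information increases with
            # |correlation|, so |corr| is a valid Chow-Liu edge weight.
            g.add_edge(i, j, weight=abs(corr[i, j]))
    return sorted(nx.maximum_spanning_tree(g).edges())
```

An error event in the paper's analysis is exactly this estimator returning a tree different from the true one, and the exact asymptotics quantify how fast the probability of that event decays with the sample size n.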

Citations

SGA: A Robust Algorithm for Partial Recovery of Tree-Structured Graphical Models with Noisy Samples
TLDR: This paper presents a novel impossibility result by deriving a bound on the number of samples necessary for the partial recovery setting of Katiyar et al. (2020), and proposes Symmetrized Geometric Averaging (SGA), a more statistically robust algorithm for partial tree recovery.
Decentralized Learning of Tree-Structured Gaussian Graphical Models from Noisy Data
TLDR: This paper investigates the effects of three common types of noisy channels (Gaussian, erasure, and binary symmetric) and proposes the Algorithmic Bound, which achieves markedly better performance at small sample sizes compared with formulaic bounds.
Recoverability Landscape of Tree Structured Markov Random Fields under Symmetric Noise
TLDR: A polynomial-time, sample-efficient algorithm is presented that recovers the exact tree when this is possible, and otherwise recovers it up to the unidentifiability promised by the characterization.
Near-optimal learning of tree-structured distributions by Chow-Liu
TLDR: The upper bound is based on a new conditional independence tester that addresses an open problem posed by Canonne, Diakonikolas, Kane, and Stewart (STOC, 2018): it is proved that for three random variables X, Y, Z, each over an alphabet Σ, testing whether I(X; Y | Z) is 0 or ≥ ε is possible with O(|Σ|³/ε) samples.
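To make the tested quantity concrete, here is a naive plug-in estimate of I(X; Y | Z) for finite alphabets; the tester in the cited paper is more sample-efficient than this, and the names below are illustrative.

```python
from collections import Counter
from math import log

def plugin_conditional_mi(xs, ys, zs) -> float:
    """Plug-in estimate of I(X; Y | Z) from three equal-length sequences."""
    n = len(xs)
    pxyz = Counter(zip(xs, ys, zs))
    pxz = Counter(zip(xs, zs))
    pyz = Counter(zip(ys, zs))
    pz = Counter(zs)
    # I(X; Y | Z) = sum_{x,y,z} p(x,y,z) log[ p(x,y,z) p(z) / (p(x,z) p(y,z)) ]
    return sum((c / n) * log(c * pz[z] / (pxz[(x, z)] * pyz[(y, z)]))
               for (x, y, z), c in pxyz.items())
```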
Identifiability in robust estimation of tree structured models
Marta Casanellas, Marina Garrote-López and Piotr Zwiernik
Robust Estimation of Tree Structured Markov Random Fields
TLDR: This work provides a precise characterization of recoverability by deriving a necessary and sufficient condition for the recoverability of a leaf cluster, and gives an algorithm that recovers the tree when this condition is satisfied, and otherwise recovers it up to the leaf clusters that fail the condition.

References

SHOWING 1-10 OF 35 REFERENCES
Predictive Learning on Hidden Tree-Structured Ising Models
TLDR: This paper quantifies how noise in the hidden model impacts the sample complexity of structure learning and marginal-distribution estimation by proving upper and lower bounds on the sample complexity.
Learning Tree Structures from Noisy Data
TLDR: The impact of measurement noise on the task of learning the underlying tree structure via the well-known Chow-Liu algorithm is studied, and formal sample complexity guarantees for exact recovery are provided.
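A standard fact behind such noisy-sample guarantees (shown here as a quick numerical check, with illustrative parameter names) is that passing ±1 samples through independent binary symmetric channels with crossover probability q shrinks every pairwise correlation by the factor (1 − 2q)²; for homogeneous models this preserves the ordering of Chow-Liu edge weights while weakening them.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, q = 200_000, 0.6, 0.1

x = rng.choice([-1, 1], size=n)
y = np.where(rng.random(n) < (1 - rho) / 2, -x, x)  # E[XY] = rho

bsc = lambda v: np.where(rng.random(n) < q, -v, v)  # flip each bit w.p. q
x_noisy, y_noisy = bsc(x), bsc(y)

print(np.mean(x * y))              # approx. rho = 0.6
print(np.mean(x_noisy * y_noisy))  # approx. (1 - 2q)**2 * rho = 0.384
```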
Learning High-Dimensional Markov Forest Distributions: Analysis of Error Rates
TLDR: It is shown that this algorithm is both structurally consistent and risk consistent, that the error probability of structure learning decays faster than any polynomial in the number of samples under a fixed model size, and that the independent tree model is the hardest to learn with the proposed algorithm in terms of error rates for structure learning.
High-dimensional structure estimation in Ising models: Local separation criterion
TLDR: A novel criterion for tractable graph families, for which this method is efficient, is introduced; it is based on the presence of sparse local separators between node pairs in the underlying graph.
Learning Gaussian Tree Models: Analysis of Error Exponents and Extremal Structures
TLDR: It is shown that, for any fixed set of correlation coefficients on the edges of the tree, the extremal tree structure that minimizes the error exponent is the star while the Markov chain maximizes it, so the star and the chain respectively represent the hardest and the easiest structures to learn in the class of tree-structured Gaussian graphical models.
Lower Bounds on Active Learning for Graphical Model Selection
TLDR: This work considers the problem of estimating the underlying graph of a Markov random field, with the added twist that the decoding algorithm can iteratively choose which subsets of nodes to sample based on previous samples, resulting in an active learning setting, and provides algorithm-independent lower bounds for high-probability recovery within the class of degree-bounded graphs.
Learning a Tree-Structured Ising Model in Order to Make Predictions
TLDR: One of the main messages of this paper is that far fewer samples are needed for making accurate predictions than for recovering the underlying tree, so accurate predictions are possible even using the wrong tree.
On the Information Theoretic Limits of Learning Ising Models
TLDR: This work isolates two key graph-structural ingredients that can be used to specify sample-complexity lower bounds, and derives corollaries of this framework that not only recover existing recent results but also provide lower bounds for novel graph classes not considered previously.
Efficiently Learning Ising Models on Arbitrary Graphs
TLDR: A simple greedy procedure learns the structure of an Ising model on an arbitrary bounded-degree graph in time on the order of p², and it is shown that for any node there exists at least one neighbor with which it has high mutual information.
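As a toy illustration of the primitive this summary highlights (not the paper's full greedy procedure; all names are illustrative), one can compute empirical mutual information between binary sample vectors and, for a given node, pick the partner that maximizes it:

```python
import numpy as np

def empirical_mi(a: np.ndarray, b: np.ndarray) -> float:
    """Empirical mutual information (nats) between two +/-1 sample vectors."""
    mi = 0.0
    for va in (-1, 1):
        for vb in (-1, 1):
            pab = np.mean((a == va) & (b == vb))
            pa, pb = np.mean(a == va), np.mean(b == vb)
            if pab > 0:
                mi += pab * np.log(pab / (pa * pb))
    return mi

def highest_mi_neighbor(samples: np.ndarray, i: int) -> int:
    """Index j != i whose column of `samples` has maximal empirical MI with column i."""
    p = samples.shape[1]
    return max((j for j in range(p) if j != i),
               key=lambda j: empirical_mi(samples[:, i], samples[:, j]))
```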
Robust Learning of Fixed-Structure Bayesian Networks
TLDR: This work provides the first computationally efficient robust learning algorithm for this problem with dimension-independent error guarantees; it has near-optimal sample complexity, runs in polynomial time, and achieves error that scales nearly linearly with the fraction of adversarially corrupted samples.