Corpus ID: 240354773

Learning linear non-Gaussian directed acyclic graph with diverging number of nodes

@article{Zhao2021LearningLN,
  title={Learning linear non-Gaussian directed acyclic graph with diverging number of nodes},
  author={Ruixuan Zhao and Xin He and Junhui Wang},
  journal={ArXiv},
  year={2021},
  volume={abs/2111.00740}
}
The acyclic model, often depicted as a directed acyclic graph (DAG), has been widely employed to represent directional causal relations among collected nodes. In this article, we propose an efficient method to learn a linear non-Gaussian DAG in high-dimensional cases, where the noise can follow any continuous non-Gaussian distribution. This is in sharp contrast to most existing DAG learning methods, which assume Gaussian noise with additional variance assumptions to attain exact DAG recovery. The proposed…
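The abstract describes a linear structural equation model with an acyclic coefficient matrix and continuous non-Gaussian noise. Below is a minimal simulation sketch of that model class, not of the paper's estimation method; the dimensions, sparsity pattern, and uniform noise are illustrative assumptions.

```python
# Minimal sketch of the data-generating model: a linear SEM X = B X + e whose
# weighted adjacency matrix B is acyclic and whose noise e is continuous and
# non-Gaussian (uniform here). All choices below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
p, n = 5, 1000                                   # number of nodes, sample size

# A strictly lower-triangular B encodes a DAG under the order 0 -> 1 -> ... -> p-1;
# edges are kept with probability 0.4 and given weights in [0.5, 1.5].
mask = rng.random((p, p)) < 0.4
B = np.tril(rng.uniform(0.5, 1.5, size=(p, p)) * mask, k=-1)

# Continuous non-Gaussian noise; any such distribution is allowed by the model.
E = rng.uniform(-1.0, 1.0, size=(n, p))

# Each sample x satisfies x = B x + e, i.e. x = (I - B)^{-1} e; rows of X are samples.
X = E @ np.linalg.inv(np.eye(p) - B).T
```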

References

Showing 1-10 of 44 references.

Constrained likelihood for reconstructing a directed acyclic Gaussian graph.

A constraint reduction method is developed that constructs a set of active constraints from the super-exponentially many acyclicity constraints; coupled with an alternating direction method of multipliers and a difference-of-convex method, this permits efficient computation for large-graph learning.

DAGs with NO TEARS: Continuous Optimization for Structure Learning

This paper formulates the structure learning problem as a purely continuous optimization problem over real matrices that avoids the combinatorial acyclicity constraint entirely, based on a novel characterization of acyclicity that is not only smooth but also exact.
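For reference, the smooth and exact acyclicity characterization used in the NOTEARS formulation can be written as h(W) = tr(exp(W ∘ W)) − d, which is zero exactly when the weighted adjacency matrix W contains no directed cycles. A minimal sketch of this function follows; the surrounding continuous optimization loop is omitted.

```python
# Minimal sketch of the smooth acyclicity function from the NOTEARS formulation:
# h(W) = tr(exp(W * W)) - d, where * is the elementwise (Hadamard) product.
# h(W) = 0 if and only if W encodes a directed acyclic graph.
import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

W_dag = np.array([[0.0, 1.2],
                  [0.0, 0.0]])        # single edge 0 -> 1: acyclic
W_cyclic = np.array([[0.0, 1.2],
                     [0.7, 0.0]])     # 0 -> 1 and 1 -> 0: a 2-cycle

print(acyclicity(W_dag))      # ~0.0
print(acyclicity(W_cyclic))   # strictly positive
```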

DirectLiNGAM: A Direct Method for Learning a Linear Non-Gaussian Structural Equation Model

This paper proposes a new direct method to estimate a causal ordering and connection strengths based on non-Gaussianity that requires no algorithmic parameters and is guaranteed to converge to the right solution within a small fixed number of steps if the data strictly follows the model.
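An implementation of DirectLiNGAM is distributed in the open-source Python package lingam; the sketch below assumes that package's documented DirectLiNGAM class with its fit method and causal_order_ / adjacency_matrix_ attributes, and uses a small illustrative two-variable data set.

```python
# Usage sketch of DirectLiNGAM via the open-source `lingam` package (pip install lingam),
# assuming the package's documented API. The two-variable data set here is illustrative.
import numpy as np
import lingam

rng = np.random.default_rng(1)
x0 = rng.uniform(-1, 1, 2000)              # exogenous, non-Gaussian
x1 = 1.5 * x0 + rng.uniform(-1, 1, 2000)   # x0 -> x1
X = np.column_stack([x0, x1])

model = lingam.DirectLiNGAM()
model.fit(X)
print(model.causal_order_)       # expected causal ordering: [0, 1]
print(model.adjacency_matrix_)   # estimated connection strengths (weighted DAG)
```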

PenPC: A two‐step approach to estimate the skeletons of high‐dimensional directed acyclic graphs

A novel method named PenPC is proposed to estimate the skeleton of a high-dimensional DAG by a two-step approach; it has higher sensitivity and specificity than the state-of-the-art PC-stable algorithm.

Likelihood Ratio Tests for a Large Directed Acyclic Graph

This article proposes constrained likelihood ratio tests for inference of the connectivity as well as directionality subject to nonconvex acyclicity constraints in a Gaussian directed graphical model and derives the asymptotic distributions of the constrained likelihood ratios in a high-dimensional situation.

Estimating High-Dimensional Directed Acyclic Graphs with the PC-Algorithm

This work proves uniform consistency of the PC-algorithm for very high-dimensional, sparse DAGs where the number of nodes is allowed to grow quickly with the sample size n, as fast as O(n^a) for any 0 < a < ∞.

High-dimensional causal discovery under non-Gaussianity

This work considers graphical models based on a recursive system of linear structural equations and proposes an algorithm that yields consistent estimates of the graph also in high-dimensional settings in which the number of variables may grow at a faster rate than the number of observations, but in which the underlying causal structure features suitable sparsity.

Learning a High-dimensional Linear Structural Equation Model via l1-Regularized Regression

This paper proves that sample sizes n = Ω(d^2 log p) and n = Ω(d^2 p^{2/m}) are sufficient for the proposed algorithm to recover linear SEMs with sub-Gaussian and (4m)-th bounded-moment error distributions, respectively, and shows that the algorithm is statistically consistent and computationally feasible for learning a high-dimensional linear SEM when its moralized graph is sparse.
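As a rough, hedged sketch of the l1-regularized regression ingredient (not the paper's exact two-stage algorithm), the neighborhoods of the moralized graph can be estimated by node-wise Lasso regressions; the helper name, cross-validation choice, and coefficient threshold below are illustrative assumptions.

```python
# Hedged sketch: node-wise l1-regularized (Lasso) regressions to estimate the
# neighborhoods of the moralized graph from an (n, p) data matrix X. The helper
# name, CV choice, and coefficient threshold are illustrative, not the paper's.
import numpy as np
from sklearn.linear_model import LassoCV

def moralized_neighborhoods(X, tol=1e-3):
    n, p = X.shape
    neighbors = {}
    for j in range(p):
        others = np.delete(np.arange(p), j)              # regress node j on all others
        fit = LassoCV(cv=5).fit(X[:, others], X[:, j])   # l1-regularized regression
        neighbors[j] = [int(k) for k, c in zip(others, fit.coef_) if abs(c) > tol]
    return neighbors
```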

Learning linear structural equation models in polynomial time and sample complexity

A new algorithm is developed that recovers the directed acyclic graph (DAG) structure of the SEM under an identifiability condition that is more general than those considered in the literature, and without faithfulness assumptions.

A Linear Non-Gaussian Acyclic Model for Causal Discovery

This work shows how to discover the complete causal structure of continuous-valued data, under the assumptions that (a) the data generating process is linear, (b) there are no unobserved confounders, and (c) disturbance variables have non-Gaussian distributions of non-zero variances.