Estimation of Structural Causal Model via Sparsely Mixing Independent Component Analysis
@article{Harada2020EstimationOS,
  title   = {Estimation of Structural Causal Model via Sparsely Mixing Independent Component Analysis},
  author  = {Kazuharu Harada and Hironori Fujisawa},
  journal = {ArXiv},
  volume  = {abs/2009.03077},
  year    = {2020}
}
We consider the problem of inferring the causal structure from observational data, especially when the structure is sparse. This type of problem is usually formulated as the inference of a directed acyclic graph (DAG) model. The linear non-Gaussian acyclic model (LiNGAM) is one of the most successful DAG models, and various estimation methods have been developed for it. However, existing methods are inefficient in some respects: (i) the sparse structure is not always incorporated in causal order…
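For reference, the LiNGAM model discussed in the abstract can be written as a linear structural causal model whose reduced form is exactly an ICA mixing model; sparsity of the causal structure corresponds to zeros in the coefficient matrix. A sketch in our own notation (not necessarily the paper's):

```latex
% Linear non-Gaussian acyclic model (LiNGAM)
x = Bx + e, \qquad e_j \ \text{mutually independent and non-Gaussian}
% Reduced form: an ICA mixing model with mixing matrix A = (I - B)^{-1}
x = (I - B)^{-1} e = Ae
% A sparse causal structure corresponds to many zero entries in B
```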
References
DirectLiNGAM: A Direct Method for Learning a Linear Non-Gaussian Structural Equation Model
- Computer Science, J. Mach. Learn. Res.
- 2011
This paper proposes a new direct method to estimate a causal ordering and connection strengths based on non-Gaussianity that requires no algorithmic parameters and is guaranteed to converge to the right solution within a small fixed number of steps if the data strictly follows the model.
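To give a feel for the "direct" strategy described above, here is a minimal sketch: repeatedly identify the most exogenous-looking variable, append it to the causal order, and regress it out of the rest. The dependence proxy below is a crude stand-in for the proper independence measure used in DirectLiNGAM.

```python
import numpy as np

def causal_order_sketch(X):
    """Rough sketch of a DirectLiNGAM-style ordering: pick the variable whose
    residuals (after regressing every other variable on it) look most
    independent of it, regress it out, and recurse on the residuals."""
    R = np.asarray(X, dtype=float)
    R = R - R.mean(axis=0)
    remaining = list(range(R.shape[1]))
    order = []
    while remaining:
        scores = []
        for j in remaining:
            xj = R[:, j]
            dep = 0.0
            for k in remaining:
                if k == j:
                    continue
                b = xj @ R[:, k] / (xj @ xj)
                resid = R[:, k] - b * xj
                # crude nonlinear-dependence proxy; the actual method uses a
                # proper (e.g. mutual-information-based) independence measure
                dep += abs(np.corrcoef(np.tanh(xj), resid)[0, 1])
            scores.append((dep, j))
        _, root = min(scores)              # most exogenous-looking variable
        order.append(root)
        remaining.remove(root)
        xr = R[:, root]
        for k in remaining:                # regress the chosen root out
            b = xr @ R[:, k] / (xr @ xr)
            R[:, k] = R[:, k] - b * xr
    return order
```

On data generated from a sparse LiNGAM, the returned list is a candidate causal order; connection strengths would then be estimated by (sparse) regressions along that order.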
A Linear Non-Gaussian Acyclic Model for Causal Discovery
- Computer Science, J. Mach. Learn. Res.
- 2006
This work shows how to discover the complete causal structure of continuous-valued data, under the assumptions that (a) the data generating process is linear, (b) there are no unobserved confounders, and (c) disturbance variables have non-Gaussian distributions of non-zero variances.
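A minimal sketch of the ICA-based estimation route this paper introduced, assuming scikit-learn's FastICA; the final search for a causal order (permuting B toward strict lower-triangularity) and the pruning of small coefficients are omitted:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.decomposition import FastICA

def ica_lingam_sketch(X):
    """Classic ICA-LiNGAM recipe: estimate the unmixing matrix W with ICA,
    permute its rows so the diagonal avoids (near-)zeros, rescale rows to a
    unit diagonal, and read off the connection matrix B = I - W'."""
    d = X.shape[1]
    ica = FastICA(random_state=0)
    ica.fit(X)
    W = ica.components_                            # estimated unmixing matrix
    # pick the row permutation that keeps large entries on the diagonal
    rows, cols = linear_sum_assignment(1.0 / (np.abs(W) + 1e-12))
    W_perm = np.zeros_like(W)
    W_perm[cols] = W[rows]
    W_perm = W_perm / np.diag(W_perm)[:, None]     # unit diagonal
    B = np.eye(d) - W_perm                         # candidate connection matrix
    return B
```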
High-dimensional causal discovery under non-Gaussianity
- Computer Science, Mathematics, Biometrika
- 2019
This work considers graphical models based on a recursive system of linear structural equations and proposes an algorithm that yields consistent estimates of the graph also in high-dimensional settings in which the number of variables may grow at a faster rate than the number of observations, but in which the underlying causal structure features suitable sparsity.
Pairwise likelihood ratios for estimation of non-Gaussian structural equation models
- Computer Science, J. Mach. Learn. Res.
- 2013
Results on simulated fMRI data indicate that the proposed framework is useful in neuroimaging where the number of time points is typically quite small, and it is computationally and conceptually very simple.
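For a flavor of the approach, one pairwise statistic from this line of work (our rendering, up to details of the paper's exact likelihood-ratio derivation) decides the causal direction between two standardized, non-Gaussian variables by its sign, with g a nonlinearity such as tanh:

```latex
\hat{R} = \hat{\rho}\,\hat{E}\{\, x\,g(y) - g(x)\,y \,\},
\qquad \hat{R} > 0 \Rightarrow x \to y, \quad \hat{R} < 0 \Rightarrow y \to x
```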
Causal discovery with continuous additive noise models
- Mathematics, J. Mach. Learn. Res.
- 2014
If the observational distribution follows a structural equation model with an additive noise structure, the directed acyclic graph becomes identifiable from the distribution under mild conditions. This constitutes an interesting alternative to traditional methods that assume faithfulness and identify only the Markov equivalence class of the graph, leaving some edges undirected.
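The model class in question, written in our notation; in the bivariate case one fits regressions in both directions and keeps the direction whose residual is independent of the regressor:

```latex
% Additive noise model (ANM)
x_j = f_j\big(\mathrm{pa}(x_j)\big) + e_j, \qquad e_j \ \text{independent of } \mathrm{pa}(x_j)
% Bivariate comparison of directions
y = f(x) + e_y \quad \text{vs.} \quad x = g(y) + e_x
```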
Identifiability of Gaussian structural equation models with equal error variances
- Mathematics, Computer Science
- 2014
This work proves full identifiability in the case where all noise variables have the same variance: the directed acyclic graph can be recovered from the joint Gaussian distribution.
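The restriction behind this result, in our notation, is a linear Gaussian structural equation model whose error variances are constrained to be equal:

```latex
x = Bx + e, \qquad e \sim \mathcal{N}(0, \sigma^2 I)
% with a common \sigma^2, the DAG underlying B is identifiable from the Gaussian law of x
```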
Estimation of a Structural Vector Autoregression Model Using Non-Gaussianity
- Computer Science, Mathematics, J. Mach. Learn. Res.
- 2010
This work shows how to combine the non-Gaussian instantaneous model with autoregressive models, yielding what is called a structural vector autoregression (SVAR) model, and thereby contributes to the long-standing problem of how to estimate SVARs.
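The combined model has the following form (our notation): instantaneous LiNGAM-type effects B_0 plus lagged autoregressive effects, driven by non-Gaussian, mutually independent shocks:

```latex
x_t = B_0 x_t + \sum_{\tau=1}^{k} B_\tau\, x_{t-\tau} + e_t,
\qquad e_{t,j} \ \text{non-Gaussian and mutually independent}
```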
Statistical Learning with Sparsity: The Lasso and Generalizations
- Computer Science
- 2015
Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data and extract useful and reproducible patterns from big datasets.
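The central estimator, and the kind of L1 penalty that is reused for sparse causal structure, is the lasso:

```latex
\hat{\beta} = \operatorname*{arg\,min}_{\beta}\;
\frac{1}{2n}\,\lVert y - X\beta \rVert_2^2 + \lambda\,\lVert \beta \rVert_1
```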
On the Identifiability of the Post-Nonlinear Causal Model
- Mathematics, UAI
- 2009
It is shown that this post-nonlinear causal model is identifiable in most cases; by enumerating all possible situations in which the model is not identifiable, sufficient conditions for its identifiability are obtained.
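The post-nonlinear model composes a nonlinear inner mechanism with an invertible outer distortion; in our notation:

```latex
x_i = f_{i,2}\big( f_{i,1}(\mathrm{pa}(x_i)) + e_i \big),
\qquad f_{i,2} \ \text{invertible}, \quad e_i \ \text{independent of } \mathrm{pa}(x_i)
```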
DAGs with NO TEARS: Continuous Optimization for Structure Learning
- Computer Science, NeurIPS
- 2018
This paper formulates structure learning as a purely continuous optimization problem over real matrices, avoiding the combinatorial acyclicity constraint entirely, and achieves a novel characterization of acyclicity that is not only smooth but also exact.
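The smooth acyclicity characterization at the heart of NO TEARS is easy to state; the sketch below shows only that function, not the full least-squares objective and augmented-Lagrangian solver the paper pairs it with.

```python
import numpy as np
from scipy.linalg import expm

def notears_acyclicity(W):
    """NO TEARS acyclicity function h(W) = tr(exp(W * W)) - d, where * is the
    elementwise product and d the number of nodes; h(W) = 0 exactly when the
    weighted adjacency matrix W encodes a DAG."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

# tiny usage check on two 2-node graphs
W_dag = np.array([[0.0, 1.5], [0.0, 0.0]])   # single edge 0 -> 1: acyclic
W_cyc = np.array([[0.0, 1.5], [0.7, 0.0]])   # 2-cycle
print(notears_acyclicity(W_dag))             # ~ 0.0
print(notears_acyclicity(W_cyc))             # > 0, penalizes the cycle
```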