Corpus ID: 235421892

DAGs with No Curl: An Efficient DAG Structure Learning Approach

@article{Yu2021DAGsWN,
  title={DAGs with No Curl: An Efficient DAG Structure Learning Approach},
  author={Yue Yu and Tian Gao and Naiyu Yin and Qiang Ji},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.07197}
}
Recently, directed acyclic graph (DAG) structure learning has been formulated as a constrained continuous optimization problem with continuous acyclicity constraints and solved iteratively through subproblem optimization. To further improve efficiency, we propose a novel learning framework to model and learn the weighted adjacency matrices in the DAG space directly. Specifically, we first show that the set of weighted adjacency matrices of DAGs is equivalent to the set of weighted gradients of… 
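
As a hedged illustration of the idea sketched in the truncated abstract (the paper's exact construction may differ in detail, and the function names below are illustrative): a weighted adjacency matrix built from a node potential p via W = U ⊙ ReLU(grad p), with (grad p)[i, j] = p[j] − p[i], is acyclic by construction, because every retained edge points from lower to higher potential.

import numpy as np

def grad_potential(p):
    """Graph gradient of a node potential p: (grad p)[i, j] = p[j] - p[i]."""
    return p[None, :] - p[:, None]

def nocurl_adjacency(p, U):
    """Illustrative parameterization: keep edge i -> j only when p[j] > p[i],
    so every edge points up the potential and no directed cycle can form;
    U carries the free edge weights."""
    return U * np.maximum(grad_potential(p), 0.0)

# Toy usage: 4 nodes with random potentials and weights.
rng = np.random.default_rng(0)
p = rng.normal(size=4)
U = rng.normal(size=(4, 4))
W = nocurl_adjacency(p, U)   # acyclic by construction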


On the Convergence of Continuous Constrained Optimization for Structure Learning

This work reviews the standard convergence result of the augmented Lagrangian method (ALM) and shows that the required conditions are not satisfied by the recent continuous constrained formulation for learning DAGs; it then establishes a convergence guarantee of the quadratic penalty method (QPM) to a DAG solution, under mild conditions, based on a property of the DAG constraint term.

Truncated Matrix Power Iteration for Differentiable DAG Learning

This work finds that large coefficients on higher-order terms are beneficial for DAG learning when the spectral radii of the adjacency matrices are small, and that larger coefficients for higher-order terms approximate the DAG constraint much better than small ones.
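
For context, polynomial acyclicity penalties of this kind are typically built from traces of matrix powers, since tr((W ⊙ W)^k) accumulates the weight of length-k directed cycles. A minimal sketch of a truncated version follows; the coefficients and truncation rule used in the paper itself may differ.

import numpy as np

def truncated_power_penalty(W, K=8):
    """Illustrative truncated polynomial acyclicity penalty:
    sum_{k=1}^{K} tr((W*W)^k). Each trace term counts weighted directed
    cycles of length k, so the penalty vanishes iff the graph has no
    directed cycle of length <= K."""
    A = W * W                      # non-negative surrogate of edge presence
    P = np.eye(W.shape[0])
    total = 0.0
    for _ in range(K):
        P = P @ A
        total += np.trace(P)
    return total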

DAGMA: Learning DAGs via M-matrices and a Log-Determinant Acyclicity Characterization

A new acyclicity characterization based on the log-determinant (log-det) function is proposed; it leverages the nilpotency property of DAGs and achieves large speed-ups and smaller structural Hamming distances compared with state-of-the-art methods.
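
The log-det characterization referred to here has the form h(W) = −log det(sI − W ⊙ W) + d log s, which is nonnegative and equals zero exactly when W ⊙ W is nilpotent (i.e., the weighted graph is acyclic), provided the scalar s exceeds the spectral radius of W ⊙ W. A minimal sketch:

import numpy as np

def logdet_acyclicity(W, s=1.0):
    """Log-determinant acyclicity function:
    h(W) = -log det(s*I - W*W) + d*log(s).
    Assumes s is larger than the spectral radius of W*W; then h(W) >= 0,
    with equality exactly when the weighted graph of W is acyclic."""
    d = W.shape[0]
    _, logabsdet = np.linalg.slogdet(s * np.eye(d) - W * W)
    return -logabsdet + d * np.log(s)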

Differentiable and Transportable Structure Learning

D-Struct is introduced which recovers transportability in the discovered structures through a novel architecture and loss function while remaining fully differentiable, and can be easily adopted in existing differentiable architectures, as was previously done with NOTEARS.

Convergence of Feedback Arc Set-Based Heuristics for Linear Structural Equation Models

This work builds upon previous contributions on such heuristics by first establishing a mathematical convergence analysis, previously lacking, and then showing empirically how convergence can be sped up significantly in practice using simple warm-starting strategies.

Learning Discrete Directed Acyclic Graphs via Backpropagation

DAG-DB is proposed, a framework for learning DAGs by Discrete Backpropagation, based on the architecture of Implicit Maximum Likelihood Estimation, and adopts a probabilistic approach to the problem, sampling binary adjacency matrices from an implicit probability distribution.

Learning DAGs from Data with Few Root Causes

A novel perspective and algorithm for learning directed acyclic graphs (DAGs) from data generated by a linear structural equation model (SEM) is presented and it is proved that the true DAG is the global minimizer of the $L^0$-norm of the vector of root causes.
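
For context, a linear SEM can be written in matrix form as X = C (I − A)^{-1}, where A is the weighted adjacency matrix and each row of C collects the exogenous "root causes" of one sample; the summary above concerns the regime where C is sparse. A minimal sketch under that convention, which may differ in detail from the paper's notation:

import numpy as np

def propagate_root_causes(C, A):
    """Hedged illustration: for a linear SEM X = X A + C, solving for X gives
    X = C (I - A)^{-1}, i.e. sparse root-cause vectors (rows of C) propagated
    through the DAG with weighted adjacency A."""
    d = A.shape[0]
    return C @ np.linalg.inv(np.eye(d) - A)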

Differentiable DAG Sampling

VI-DP-DAG is guaranteed to output a valid DAG at any time during training and does not require any complex augmented Lagrangian optimization scheme in contrast to existing differentiable DAG learning approaches.

FedDAG: Federated DAG Structure Learning

This paper takes the first step in developing a gradient-based learning framework named FedDAG, which can learn the DAG structure without directly touching the local data and also can naturally handle the data heterogeneity.

Structure Learning with Continuous Optimization: A Sober Look and Beyond

This work investigates in which cases continuous optimization for directed acyclic graph (DAG) structure learning performs well or poorly and why, suggests possible directions to make the search procedure more reliable, and argues that future work should adopt the non-equal noise variances formulation to handle more general settings.

DAGs with NO TEARS: Continuous Optimization for Structure Learning

This paper formulates the structure learning problem as a purely continuous optimization problem over real matrices that avoids the combinatorial acyclicity constraint entirely, by means of a novel characterization of acyclicity that is not only smooth but also exact.
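
The exact, smooth characterization introduced by NOTEARS is h(W) = tr(exp(W ⊙ W)) − d, which equals zero if and only if W is the weighted adjacency matrix of a DAG. A minimal sketch:

import numpy as np
from scipy.linalg import expm

def notears_constraint(W):
    """NOTEARS acyclicity function h(W) = tr(exp(W * W)) - d.
    h(W) >= 0, with h(W) = 0 iff the graph of W has no directed cycles;
    because h is smooth, it can be used as an equality constraint in
    continuous optimization."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d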

On the Role of Sparsity and DAG Constraints for Learning Linear DAGs

This paper studies the asymptotic roles of the sparsity and DAG constraints for learning DAG models in the linear Gaussian and non-Gaussian cases, investigates their usefulness in the finite-sample regime, and formulates a likelihood-based score function that leads to an unconstrained optimization problem which is much easier to solve.
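
A hedged sketch of what such a likelihood-based score can look like for a linear Gaussian SEM with equal noise variances (an illustration of the general idea, not necessarily the paper's exact objective): a profiled negative log-likelihood of the residuals plus an l1 sparsity term, optimized without a hard acyclicity constraint.

import numpy as np

def gaussian_likelihood_score(W, X, lam=0.1):
    """Illustrative likelihood-based score for a linear Gaussian SEM with
    equal noise variances (hypothetical form, for illustration only):
    profiled negative log-likelihood of the residuals X - XW, a log-det
    Jacobian term for the SEM transformation, and an l1 sparsity penalty."""
    n, d = X.shape
    resid = X - X @ W
    nll = 0.5 * d * np.log(np.sum(resid ** 2))
    _, logdet = np.linalg.slogdet(np.eye(d) - W)
    return nll - logdet + lam * np.sum(np.abs(W))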

DAG-GNN: DAG Structure Learning with Graph Neural Networks

A deep generative model is proposed, together with a variant of the structural constraint to learn the DAG; it learns more accurate graphs for nonlinearly generated samples, and on benchmark data sets with discrete variables the learned graphs are reasonably close to the global optima.

Learning DAGs without imposing acyclicity

It is empirically shown that solving an $\ell_1$-penalized optimization yields good recovery of the true graph and, in general, almost-DAG graphs.
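
As a hedged sketch of what an $\ell_1$-penalized estimate without an acyclicity constraint can look like (an illustration, not necessarily the estimator used in the paper): regress each variable on all the others with a lasso penalty and collect the coefficients into a candidate weighted adjacency matrix.

import numpy as np
from sklearn.linear_model import Lasso

def l1_structure_estimate(X, alpha=0.1):
    """Illustrative l1-penalized structure estimate with no acyclicity
    constraint: a lasso regression of each variable on all the others,
    with coefficients stored as a candidate weighted adjacency matrix."""
    n, d = X.shape
    W = np.zeros((d, d))
    for j in range(d):
        others = [k for k in range(d) if k != j]
        coef = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
        W[others, j] = coef
    return W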

Learning Bayesian Network Structure using LP Relaxations

This work proposes to solve the combinatorial problem of finding the highest-scoring Bayesian network structure from data by maintaining an outer-bound approximation to the polytope and iteratively tightening it by searching over a new class of valid constraints.

Gradient-Based Neural DAG Learning

A novel score-based approach to learning a directed acyclic graph (DAG) from observational data that outperforms current continuous methods on most tasks, while being competitive with existing greedy search methods on important metrics for causal inference.

Causal Discovery with Reinforcement Learning

This work proposes to use Reinforcement Learning (RL) to search for a Directed Acyclic Graph (DAG) according to a predefined score function and shows that the proposed approach not only has an improved search ability but also allows a flexible score function under the acyclicity constraint.

Learning Sparse Nonparametric DAGs

A completely general framework for learning sparse nonparametric directed acyclic graphs (DAGs) from data is developed that can be applied to general nonlinear models, general differentiable loss functions, and generic black-box optimization routines.

Learning Optimal Bayesian Networks: A Shortest Path Perspective

An A* search algorithm is proposed that learns an optimal Bayesian network structure by searching only the most promising part of the solution space, together with a heuristic function that reduces the amount of relaxation by avoiding directed cycles within some groups of variables.

DAGs with No Fears: A Closer Look at Continuous Optimization for Learning Bayesian Networks

A local search post-processing algorithm is proposed and shown to substantially and universally improve the structural Hamming distance of all tested algorithms, typically by a factor of 2 or more.
...