Corpus ID: 226254168

A Bregman Method for Structure Learning on Sparse Directed Acyclic Graphs

@article{Romain2020ABM,
  title={A Bregman Method for Structure Learning on Sparse Directed Acyclic Graphs},
  author={Manon Romain and Alexandre d'Aspremont},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.02764}
}
We develop a Bregman proximal gradient method for structure learning on linear structural causal models. While the problem is non-convex, has high curvature and is in fact NP-hard, Bregman gradient methods allow us to neutralize at least part of the impact of curvature by measuring smoothness against a highly nonlinear kernel. This allows the method to make longer steps and significantly improves convergence. Each iteration requires solving a Bregman proximal step which is convex and… 
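The abstract describes a Bregman proximal gradient method: the Euclidean distance in the usual gradient step is replaced by a Bregman divergence generated by a nonlinear kernel h, so that smoothness is measured relative to h rather than in the Lipschitz sense. A minimal sketch of one such iteration, using a quartic kernel and a toy quartic objective purely for illustration (the paper's actual kernel and loss for DAG learning are different and more involved):

```python
import numpy as np

# Illustrative kernel h(x) = ||x||^4 / 4 + ||x||^2 / 2 (NOT the paper's kernel).
# Quartic objectives are relatively smooth with respect to this h even though
# their gradients are not globally Lipschitz, which is what lets Bregman
# methods take longer steps.
def grad_h(x):
    return (np.dot(x, x) + 1.0) * x

def inv_grad_h(p):
    # Invert grad_h: x = (s / ||p||) * p, where s solves s^3 + s = ||p||.
    # The left-hand side is strictly increasing, so the positive root is
    # unique; we find it with Newton's method.
    r = np.linalg.norm(p)
    if r == 0.0:
        return np.zeros_like(p)
    s = r
    for _ in range(50):
        s -= (s**3 + s - r) / (3.0 * s**2 + 1.0)
    return (s / r) * p

def bpg(grad_f, x0, step=0.5, iters=200):
    # Bregman proximal gradient (mirror descent) iteration:
    #   grad_h(x_{k+1}) = grad_h(x_k) - step * grad_f(x_k)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = inv_grad_h(grad_h(x) - step * grad_f(x))
    return x

# Toy objective f(x) = (||x||^2 - 1)^2 / 4, minimized on the unit sphere.
grad_f = lambda x: (np.dot(x, x) - 1.0) * x
x_star = bpg(grad_f, np.array([2.0, 0.0]))
# ||x_star|| should be close to 1
```

With the classical Euclidean kernel h(x) = ||x||²/2 this reduces to plain gradient descent; the nonlinear kernel is what compensates for the high curvature of the objective.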

Citations

An inexact Bregman proximal gradient method and its inertial variant
TLDR
This paper develops an inexact version of BPG (denoted iBPG) by employing a novel two-point inexact stopping condition for solving the subproblems, and establishes an iteration complexity of $O(1/k^{\gamma})$, where $\gamma \geq 1$ is a restricted relative-smoothness exponent.
Graph Neural Networks for Asset Management
TLDR
This paper builds portfolios and shows that graph layers act as a stabilizer for classical methods such as LSTMs, reducing transaction costs and filtering out high-frequency signals. It also studies the effect of different graph-based information on the forecasts, observing that in 2021 supply-chain information became much more informative than sectoral or correlation-based graphs.

References

SHOWING 1-10 OF 39 REFERENCES
Masked Gradient-Based Causal Structure Learning
TLDR
A masked gradient-based structure learning method is proposed, built on a binary adjacency matrix that exists for any structural equation model; it can readily incorporate any differentiable score function and model function for learning causal structures.
DAGs with NO TEARS: Continuous Optimization for Structure Learning
TLDR
This paper formulates the structure learning problem as a purely continuous optimization problem over real matrices that avoids the combinatorial acyclicity constraint entirely, via a novel characterization of acyclicity that is not only smooth but also exact.
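The exact, smooth acyclicity characterization this TLDR refers to is h(W) = tr(e^{W∘W}) − d, which equals zero if and only if the weighted adjacency matrix W encodes a DAG. A short sketch (the two example matrices below are made up for illustration):

```python
import numpy as np
from scipy.linalg import expm

def notears_h(W):
    # NOTEARS acyclicity function: h(W) = tr(exp(W ∘ W)) - d.
    # W ∘ W (elementwise square) has nonnegative entries, so the trace of
    # its matrix exponential sums weighted closed walks of every length;
    # it equals d exactly when the graph of W has no directed cycles.
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

W_dag = np.array([[0.0, 1.5],
                  [0.0, 0.0]])   # single edge 0 -> 1: acyclic, h = 0
W_cyc = np.array([[0.0, 1.5],
                  [0.7, 0.0]])   # edges 0 -> 1 and 1 -> 0: a 2-cycle, h > 0
```

Because h is smooth, it can be used as an equality constraint or penalty inside continuous optimization, which is the hook the Bregman method above exploits.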
Gradient-Based Neural DAG Learning
TLDR
A novel score-based approach to learning a directed acyclic graph (DAG) from observational data that outperforms current continuous methods on most tasks, while being competitive with existing greedy search methods on important metrics for causal inference.
On the Role of Sparsity and DAG Constraints for Learning Linear DAGs
TLDR
This paper studies the asymptotic roles of the sparsity and DAG constraints for learning DAG models in the linear Gaussian and non-Gaussian cases, investigates their usefulness in the finite-sample regime, and formulates a likelihood-based score function that leads to an unconstrained optimization problem that is much easier to solve.
A Graph Autoencoder Approach to Causal Structure Learning
TLDR
This work proposes a new gradient-based method to learn causal structures from observational data, extending the approach to a graph autoencoder framework that allows nonlinear structural equation models and is easily applicable to vector-valued variables.
Concave penalized estimation of sparse Gaussian Bayesian networks
TLDR
This work develops a penalized likelihood estimation framework to estimate the structure of Gaussian Bayesian networks from observational data using concave regularization and provides theoretical guarantees which generalize existing asymptotic results when the underlying distribution is Gaussian.
Learning directed acyclic graph models based on sparsest permutations
TLDR
The sparsest permutation (SP) algorithm is proposed, showing that learning Bayesian networks is possible under strictly weaker assumptions than faithfulness, but this comes at a computational price, thereby indicating a statistical‐computational trade‐off for causal inference algorithms.
Learning Directed Acyclic Graphs with Penalized Neighbourhood Regression
TLDR
The main results establish support recovery guarantees and deviation bounds for a family of penalized least-squares estimators under concave regularization without assuming prior knowledge of a variable ordering.
$\ell_0$-penalized maximum likelihood for sparse directed acyclic graphs
TLDR
This work shows that the $\ell_0$-penalized maximum likelihood estimator of a DAG has about the same number of edges as the minimal-edge I-MAP (a DAG with a minimal number of edges representing the distribution), and that it converges in Frobenius norm.
CAM: Causal Additive Models, high-dimensional order search and penalized regression
TLDR
This work substantially simplifies the problem of structure search and estimation for an important class of causal models by establishing consistency of the (restricted) maximum likelihood estimator for low- and high-dimensional scenarios, while allowing for misspecification of the error distribution.
...