Corpus ID: 221970453

CASTLE: Regularization via Auxiliary Causal Graph Discovery

@article{Kyono2020CASTLERV,
  title={CASTLE: Regularization via Auxiliary Causal Graph Discovery},
  author={Trent Kyono and Yao Zhang and Mihaela van der Schaar},
  journal={ArXiv},
  year={2020},
  volume={abs/2009.13180}
}
Regularization improves generalization of supervised models to out-of-sample data. Prior works have shown that prediction in the causal direction (effect from cause) results in lower testing error than the anti-causal direction. However, existing regularization methods are agnostic of causality. We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables. CASTLE learns the causal… 
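
The core mechanism in the abstract can be illustrated compactly. The following is a minimal sketch, not the authors' implementation, under simplifying assumptions: a linear model in which a single weight matrix W is used both to reconstruct every feature from the others (the auxiliary task) and as a weighted adjacency matrix pushed toward a DAG by the NOTEARS acyclicity penalty; all names and hyperparameters here are illustrative.

import torch

def acyclicity(W):
    # NOTEARS-style constraint: h(W) = tr(exp(W * W)) - d, zero exactly
    # when W is the weighted adjacency matrix of a DAG.
    d = W.shape[0]
    return torch.trace(torch.matrix_exp(W * W)) - d

def castle_style_loss(X, y, W, w_y, lam=1.0, rho=1.0, beta=0.1):
    # X: (n, d) features; y: (n,) target; W: (d, d) adjacency; w_y: (d,) head.
    d = X.shape[1]
    W = W * (1.0 - torch.eye(d))              # forbid self-loops
    sup = torch.mean((X @ w_y - y) ** 2)      # supervised prediction loss
    rec = torch.mean((X @ W - X) ** 2)        # auxiliary reconstruction loss
    return sup + lam * rec + rho * acyclicity(W) + beta * W.abs().sum()

# Illustrative usage: optimize W and w_y jointly with any gradient method.
X, y = torch.randn(64, 5), torch.randn(64)
W = torch.randn(5, 5, requires_grad=True)
w_y = torch.randn(5, requires_grad=True)
castle_style_loss(X, y, W, w_y).backward()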

GFlowCausal: Generative Flow Networks for Causal Discovery

This work proposes GFlowCausal, a novel approach to learning a DAG from observational data that converts the graph search problem into a generation problem in which directed edges are added gradually, and proposes a plug-and-play module based on transitive closure to ensure efficient sampling.
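
The transitive-closure module mentioned above can be made concrete. Below is our own illustration, not GFlowCausal's code: a reachability matrix is maintained so that directed edges can be added one at a time while any edge that would close a cycle is rejected with a single lookup; the class and method names are hypothetical.

import numpy as np

class DagBuilder:
    def __init__(self, d):
        self.adj = np.zeros((d, d), dtype=bool)
        self.reach = np.eye(d, dtype=bool)   # reach[i, j]: a path i -> j exists

    def can_add(self, u, v):
        # Adding u -> v creates a cycle iff v already reaches u.
        return not self.reach[v, u]

    def add_edge(self, u, v):
        assert self.can_add(u, v)
        self.adj[u, v] = True
        # Every ancestor of u (including u) now reaches every
        # descendant of v (including v): update the closure in one step.
        self.reach |= np.outer(self.reach[:, u], self.reach[v, :])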

Invariant Structure Learning for Better Generalization and Causal Explainability

This work proposes a novel framework, Invariant Structure Learning (ISL), designed to improve causal structure discovery by using generalization performance as a guiding signal, and extends ISL to a self-supervised learning setting where accurate causal structure discovery does not rely on any labels.

DARING: Differentiable Causal Discovery with Residual Independence

A novel differentiable method, DARING, is proposed that imposes an explicit residual-independence constraint in an adversarial manner, significantly improving causal discovery performance in various scientific and industrial scenarios.

Causal Regularization Using Domain Priors

This work proposes a causal regularization method that incorporates causal domain priors into the network, supports both direct and total causal effects, and is shown to generalize to various kinds of specifications of causal priors.

Deep Causal Learning: Representation, Discovery and Inference

It is argued that deep causal learning is important both for extending the theory of causal science and for broadening its applications, and that it is an indispensable part of general artificial intelligence.

On the Generalization and Adaption Performance of Causal Models

This work systematically studies the generalization and adaptation performance of modular neural causal models by comparing them to monolithic models and to structured models in which the set of predictors is not constrained to the causal parents.

Matching Learned Causal Effects of Neural Networks with Domain Priors

This work proposes a regularization method that aligns the learned causal effects of a neural network with domain priors, including both direct and total causal effects, and shows that the method is robust and achieves improved accuracy on noisy inputs.

Incorporating Causal Graphical Prior Knowledge into Predictive Modeling via Simple Data Augmentation

This work proposes a model-agnostic data augmentation method that exploits prior knowledge of the conditional independences (CI) encoded in a causal graph (CG) for supervised machine learning, and experimentally shows that the proposed method improves prediction accuracy, especially in the small-data regime.

Multivariable Causal Discovery with General Nonlinear Relationships

This work shows that causal models resolve the permutation indeterminacy of independent component analysis (ICA) and proves that, under strong identifiability, the inference function's Jacobian captures the sparsity structure of the causal graph.

DAPDAG: Domain Adaptation via Perturbed DAG Reconstruction

This work proposes to learn an auto-encoder that performs inference on population statistics given features, with reconstruction of a directed acyclic graph (DAG) as an auxiliary task, and demonstrates that reconstructing the DAG benefits the approximate inference.

References

Causal Discovery with Reinforcement Learning

This work proposes to use Reinforcement Learning (RL) to search for a Directed Acyclic Graph (DAG) according to a predefined score function, and shows that the proposed approach not only has improved search ability but also allows a flexible score function under the acyclicity constraint.

Triad Constraints for Learning Causal Structure of Latent Variables

This paper designs a form of "pseudo-residual" from three variables, and shows that when causal relations are linear and noise terms are non-Gaussian, the causal direction between the latent variables behind the three observed variables is identifiable by checking a certain kind of independence relationship.

High-dimensional learning of linear causal networks via inverse covariance estimation

It is shown that when the error variances are known or estimated to sufficiently high precision, the true DAG is the unique minimizer of the score computed using the reweighted squared l2 loss.

Learning Directed Acyclic Graphs with Penalized Neighbourhood Regression

The main results establish support recovery guarantees and deviation bounds for a family of penalized least-squares estimators under concave regularization without assuming prior knowledge of a variable ordering.

Invariant Models for Causal Transfer Learning

This work relaxes the usual covariate shift assumption and assumes that it holds true for a subset of predictor variables: the conditional distribution of the target variable given this subset of predictors is invariant over all tasks.

Semi-supervised interpolation in an anticausal learning scenario

It is proved that unlabelled data help with interpolating a monotonically increasing function if and only if certain orthogonality conditions are violated, which the authors expect only in the anticausal direction.

Causal Regularization

A causal regularizer is proposed to steer predictive models towards causally interpretable solutions; its properties are studied theoretically, and it is shown to outperform its L1-regularized counterpart in causal accuracy while remaining competitive in predictive performance.

Learning Sparse Nonparametric DAGs

A completely general framework for learning sparse nonparametric directed acyclic graphs (DAGs) from data is developed that can be applied to general nonlinear models, general differentiable loss functions, and generic black-box optimization routines.
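
For context, the continuous program at the heart of this framework (stated here from the NOTEARS line of work that this paper extends; the nonparametric details are summarized, not quoted) is

\[
\min_{W \in \mathbb{R}^{d \times d}} \; L(W; X) + \lambda \lVert W \rVert_1
\quad \text{s.t.} \quad h(W) = \operatorname{tr}\!\left(e^{W \circ W}\right) - d = 0,
\]

where $W \circ W$ denotes the elementwise square and $h(W) = 0$ holds exactly when $W$ is the weighted adjacency matrix of a DAG. In the nonparametric extension, the entry $W_{ij}$ is replaced by a norm measuring how strongly the function for variable $j$ depends on input $i$, e.g. first-layer weight norms of an MLP.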

Causal inference by using invariant prediction: identification and confidence intervals

This work proposes to exploit invariance of a prediction under a causal model for causal inference: given different experimental settings (e.g., various interventions), the authors collect all models that show invariance in their predictive accuracy across settings and interventions; this yields valid confidence intervals for the causal relationships in quite general scenarios.
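
The model-collection procedure described above can be sketched in a few lines. The following is a simplified illustration of invariant causal prediction, not the authors' method verbatim: for each candidate predictor set it fits one pooled linear regression and accepts the set if the residuals look invariant across environments (tested here only through residual means; the full method also compares variances), then intersects the accepted sets. Function names are hypothetical.

import itertools
import numpy as np
from scipy import stats

def icp_parents(X, y, env, alpha=0.05):
    # X: (n, d) predictors; y: (n,) target; env: (n,) environment labels.
    n, d = X.shape
    accepted = []
    for k in range(d + 1):
        for S in itertools.combinations(range(d), k):
            A = np.hstack([X[:, list(S)], np.ones((n, 1))])  # design + intercept
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            groups = [resid[env == e] for e in np.unique(env)]
            _, p = stats.f_oneway(*groups)   # equal residual means across envs?
            if p > alpha:
                accepted.append(set(S))
    # Variables present in every accepted set are returned as causal parents.
    return set.intersection(*accepted) if accepted else set()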

Domain Adaptation by Using Causal Inference to Predict Invariant Conditional Distributions

This work proposes an approach for solving causal domain adaptation problems that exploits causal inference and does not rely on prior knowledge of the causal graph, the type of interventions, or the intervention targets, and demonstrates a possible implementation on simulated and real-world data.
...