Corpus ID: 210023636

Discovering Nonlinear Relations with Minimum Predictive Information Regularization

@article{Wu2020DiscoveringNR,
  title={Discovering Nonlinear Relations with Minimum Predictive Information Regularization},
  author={Tailin Wu and Thomas Breuel and Michael Skuhersky and Jan Kautz},
  journal={ArXiv},
  year={2020},
  volume={abs/2001.01885}
}
Identifying the underlying directional relations from observational time series with nonlinear interactions and complex relational structures is key to a wide range of applications, yet remains a hard problem. In this work, we introduce a novel minimum predictive information regularization method to infer directional relations from time series, allowing deep learning models to discover nonlinear relations. Our method substantially outperforms other methods for learning nonlinear relations in… 
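As a rough illustration of the core idea, the sketch below injects learnable Gaussian noise into each input series while training a predictor for a target series, and penalizes the predictive information the noised inputs can still carry; inputs whose learned noise amplitude stays small are the ones the predictor cannot afford to corrupt, and are read off as putative causes. This is a hedged reconstruction under assumptions, not the authors' code: the MLP predictor, the log(1 + 1/eta^2) information penalty, and all names (predictor, log_eta, step) are illustrative.

```python
# Minimal sketch of minimum predictive information regularization
# (hedged reconstruction; architecture, penalty form, and names are
# assumptions, not the authors' implementation).
import torch
import torch.nn as nn

N, K, H = 5, 3, 64            # number of series, past lags, hidden width
predictor = nn.Sequential(    # predicts the next value of one target series
    nn.Linear(N * K, H), nn.ReLU(), nn.Linear(H, 1))
log_eta = nn.Parameter(torch.zeros(N))  # learnable noise scale per input series
opt = torch.optim.Adam(list(predictor.parameters()) + [log_eta], lr=1e-3)

def step(x_past, y_next, lam=0.1):
    """x_past: (batch, N, K) input histories; y_next: (batch, 1) target future."""
    eta = log_eta.exp()                               # noise std per series
    noise = torch.randn_like(x_past) * eta.view(1, N, 1)
    y_hat = predictor((x_past + noise).flatten(1))
    mse = ((y_hat - y_next) ** 2).mean()
    # For unit-variance inputs, 0.5*log(1 + 1/eta_i^2) upper-bounds the
    # information that series i can pass through its Gaussian noise channel;
    # minimizing it pushes eta up wherever prediction does not suffer.
    info = 0.5 * torch.log(1.0 + eta.pow(-2)).sum()
    loss = mse + lam * info
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Repeating the procedure with each series as the target yields a matrix of fitted noise scales from which directional relations can be thresholded.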

Citations

Rhino: Deep Causal Temporal Relationship Learning With History-dependent Noise

This paper proposes a novel causal relationship learning framework for time-series data, called Rhino, which combines vector auto-regression, deep learning and variational inference to model non-linear relationships with instantaneous effects while allowing the noise distribution to be modulated by historical observations.

Amortized Causal Discovery: Learning to Infer Causal Graphs from Time-Series Data

This work proposes Amortized Causal Discovery, a novel framework that leverages shared dynamics to learn to infer causal relations from time-series data, and enables a single, amortized model to be trained that infers causal relations across samples with different underlying causal graphs.

Large-scale kernelized GRANGER causality to infer topology of directed graphs with applications to brain networks

The proposed method, large-scale kernelized Granger causality (lsKGC), uses kernel functions to transform data into a low-dimensional feature space and solves the autoregressive problem in the feature space, then finds the pre-images in the input space to infer the topology.

Large-scale Kernelized Granger Causality (lsKGC) for inferring topology of directed graphs in brain networks

The present paper proposes a novel topology inference method for analyzing directed networks with co-evolving nodal processes that solves the ill-posedness problem; it uses kernel functions to transform data into a low-dimensional feature space, solves the autoregressive problem in the feature space, and then finds the pre-images in the input space to infer the topology.

Discovering long term dependencies in noisy time series data using deep learning

A framework is developed for capturing and explaining temporal dependencies in time series data using deep neural networks, and it is tested on various synthetic and real-world datasets.

Learning interaction rules from multi-animal trajectories via augmented behavioral models

This paper proposes a new framework for learning Granger causality from multi-animal trajectories via theory-based behavioral models augmented with interpretable data-driven models, adopting an approach that augments incomplete multi-agent behavioral models, described by time-varying dynamical systems, with neural networks.

Interpretable Models for Granger Causality Using Self-explaining Neural Networks

This paper proposes a novel framework for inferring multivariate Granger causality under nonlinear dynamics based on an extension of self-explaining neural networks that is more interpretable than other neural-network-based techniques for inference.

An Interpretable Neural Network for Parameter Inference

An application to an asset pricing problem demonstrates how the PENN can be used to explore nonlinear risk dynamics in financial markets, and to compare empirical nonlinear effects to behavior posited by financial theory.

Learning Causal Discovery

It is argued that causal discovery should, where possible, take a supervised approach, in which CD procedures are learned from large datasets with known causal relations instead of being designed by a human specialist.

References

Showing 1-10 of 47 references

Neural Granger Causality for Nonlinear Time Series

This work proposes a class of nonlinear methods by applying structured multilayer perceptrons (MLPs) or recurrent neural networks (RNNs) combined with sparsity-inducing penalties on the weights to extract the Granger causal structure.
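As a minimal sketch of the structured-MLP idea (not the paper's implementation, which uses proximal optimization to obtain exact zeros), a group-lasso penalty can be placed on the first-layer weights of a per-target MLP, grouped by input series; series whose weight group shrinks toward zero are deemed Granger non-causal for that target. Shapes and names below are illustrative.

```python
# Sketch: per-target MLP with a group-lasso penalty on first-layer
# weights, grouped by input series. With Adam the groups only shrink
# toward zero; the paper uses proximal steps for exact sparsity.
import torch
import torch.nn as nn

N, K, H = 5, 3, 32            # series, past lags, hidden width
first = nn.Linear(N * K, H)   # weight shape: (H, N*K), series-major columns
rest = nn.Sequential(nn.ReLU(), nn.Linear(H, 1))
opt = torch.optim.Adam(
    list(first.parameters()) + list(rest.parameters()), lr=1e-3)

def step(x_past, y_next, lam=0.05):
    """x_past: (batch, N, K) histories; y_next: (batch, 1) target future."""
    y_hat = rest(first(x_past.flatten(1)))
    mse = ((y_hat - y_next) ** 2).mean()
    W = first.weight.view(H, N, K)                  # (hidden, series, lag)
    group_norms = W.pow(2).sum(dim=(0, 2)).sqrt()   # one norm per input series
    loss = mse + lam * group_norms.sum()            # group lasso over series
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```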

Scalable Matrix-valued Kernel Learning for High-dimensional Nonlinear Multivariate Regression and Granger Causality

It is shown how high-dimensional causal inference tasks can be naturally cast as sparse function estimation problems, leading to novel nonlinear extensions of a class of Graphical Granger Causality techniques.

Radial basis function approach to nonlinear Granger causality of time series.

This work considers an extension of Granger causality to nonlinear bivariate time series modeled by a generalization of radial basis functions and shows its application to a pair of unidirectionally coupled chaotic maps and to physiological examples.
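In the same spirit, a generic bivariate nonlinear Granger test can be sketched by comparing an RBF-kernel regressor's held-out error with and without the candidate driver's past; scikit-learn's KernelRidge stands in for the paper's radial-basis-function model here, and the lag order and penalty are illustrative.

```python
# Sketch of a bivariate nonlinear Granger test: does adding x's past
# improve held-out prediction of y beyond y's own past? KernelRidge
# with an RBF kernel is a stand-in for the paper's estimator.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def lagged(a, k):
    """Rows of k past values [a_{t-1}, ..., a_{t-k}] aligned with a[k:]."""
    return np.column_stack(
        [a[k - i - 1:len(a) - i - 1] for i in range(k)])

def granger_gain(x, y, k=3, alpha=1e-2):
    Yp, Xp, target = lagged(y, k), lagged(x, k), y[k:]
    split = len(target) // 2                  # simple train/test split
    def mse(features):
        m = KernelRidge(kernel="rbf", alpha=alpha)
        m.fit(features[:split], target[:split])
        return np.mean((m.predict(features[split:]) - target[split:]) ** 2)
    restricted = mse(Yp)                      # y's own past only
    full = mse(np.hstack([Yp, Xp]))           # plus x's past
    return (restricted - full) / restricted   # > 0 suggests x drives y
```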

Relational Neural Expectation Maximization: Unsupervised Discovery of Objects and their Interactions

This work presents a novel method that learns to discover objects and model their physical interactions from raw visual images in a purely unsupervised fashion and incorporates prior knowledge about the compositional nature of human perception to factor interactions between object-pairs and learn efficiently.

Regularization and variable selection via the elastic net

It is shown that the elastic net often outperforms the lasso while enjoying a similar sparsity of representation, and an algorithm called LARS-EN is proposed for computing elastic net regularization paths efficiently, much like algorithm LARS does for the lasso.
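For concreteness, a tiny usage example of elastic-net regression with scikit-learn's ElasticNet, where alpha sets the overall penalty strength and l1_ratio the lasso/ridge mix; the data is synthetic and purely illustrative.

```python
# Elastic net on synthetic sparse data: most coefficients should be
# driven to zero while the five true predictors are retained.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
w = np.zeros(50)
w[:5] = 1.0                                   # sparse ground truth
y = X @ w + 0.1 * rng.standard_normal(200)

model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print((np.abs(model.coef_) > 1e-6).sum(), "nonzero coefficients")
```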

MINE: Mutual Information Neural Estimation

This paper presents a Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well as in sample size. MINE is back-propable and is proven to be strongly consistent.
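A minimal sketch of the estimator: a small statistics network T is trained to maximize the Donsker-Varadhan lower bound I(X;Z) >= E_joint[T] - log E_marginal[exp(T)], with marginal samples obtained by shuffling z within the batch. The paper additionally corrects the biased gradient of the log term, which this sketch omits.

```python
# MINE sketch: maximize the Donsker-Varadhan bound with a small
# statistics network. Bias correction for the log-term gradient
# (used in the paper) is omitted for brevity.
import torch
import torch.nn as nn

T = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(T.parameters(), lr=1e-3)

def mine_step(x, z):
    """x, z: (batch, 1) samples drawn jointly."""
    joint = T(torch.cat([x, z], dim=1)).mean()
    z_shuf = z[torch.randperm(z.size(0))]            # break the dependence
    marg = T(torch.cat([x, z_shuf], dim=1)).exp().mean().log()
    bound = joint - marg                             # lower bound on I(X;Z)
    loss = -bound
    opt.zero_grad(); loss.backward(); opt.step()
    return bound.item()
```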

Grouped graphical Granger modeling for gene expression regulatory networks discovery

A novel methodology is proposed that overcomes the limitations of existing methods in computational biology by applying a regression method suited to high-dimensional, large data and by leveraging the group structure among the lagged temporal variables according to the time series they belong to.

Visual Interaction Networks: Learning a Physics Simulator from Video

The Visual Interaction Network is introduced, a general-purpose model for learning the dynamics of a physical system from raw visual observations, consisting of a perceptual front-end based on convolutional neural networks and a dynamics predictor based on interaction networks.

Interaction Networks for Learning about Objects, Relations and Physics

The interaction network is introduced, a model which can reason about how objects in complex systems interact, supporting dynamical predictions, as well as inferences about the abstract properties of the system, and is implemented using deep neural networks.

Kernel-Granger causality and the analysis of dynamical networks.

The proposed method of analysis of dynamical networks, based on a recent measure of Granger causality between time series, is applied to a network of chaotic maps and to a simulated genetic regulatory network; it is shown that the underlying topology of the network can be reconstructed from time series of the nodes' dynamics, provided a sufficient number of samples is available.