# Lower Bounds on Information Requirements for Causal Network Inference

@article{Kang2021LowerBO,
  title={Lower Bounds on Information Requirements for Causal Network Inference},
  author={Xiaohan Kang and Bruce E. Hajek},
  journal={2021 IEEE International Symposium on Information Theory (ISIT)},
  year={2021},
  pages={754-759}
}

Recovery of the causal structure of dynamic networks from noisy measurements has long been a problem of intense interest across many areas of science and engineering. Many algorithms have been proposed, but no existing work compares their performance against converse bounds in a non-asymptotic setting. As a step toward addressing this problem, this paper gives lower bounds on the error probability for causal network support recovery in a linear Gaussian setting. The bounds are based on…

## 20 References

### Causal Network Inference by Optimal Causation Entropy

- Computer Science · SIAM J. Appl. Dyn. Syst.
- 2015

The mathematical theory of causation entropy, an information-theoretic statistic designed for model-free causal inference, is developed, and it is proved that for a given node in the network, its causal parents form the minimal set of nodes that maximizes causation entropy.

### Logarithmic Regret Bound in Partially Observable Linear Dynamical Systems

- Computer Science, Mathematics · NeurIPS
- 2020

Presents the first model estimation method with finite-time guarantees for both open- and closed-loop system identification, together with adaptive control online learning (AdaptOn), an efficient reinforcement learning algorithm that adaptively learns the system dynamics and continuously updates its controller through online learning steps.

### An Introduction to Signal Detection and Estimation

- Computer Science · Springer Texts in Electrical Engineering
- 1994

Covers signal detection in discrete time and signal estimation in continuous time, including the elements of hypothesis testing and of parameter estimation.

### Bounds on the area under the ROC curve

- Computer Science, Physics
- 1999

Upper and lower bounds are derived for the area under the receiver operating characteristic (ROC) curve in binary hypothesis testing, complementing the area-under-the-curve (AUC) approximation and the AUC lower bound recently reported by Barrett et al.

### Learning Sparse Dynamical Systems from a Single Sample Trajectory

- Computer Science, Mathematics · 2019 IEEE 58th Conference on Decision and Control (CDC)
- 2019

A Lasso-like estimator is introduced for the parameters of the LTI system, taking into account their sparse nature, and it is shown that the proposed estimator can correctly identify the sparsity pattern of the system matrices with high probability, provided that the length of the sample trajectory exceeds a threshold.
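As an illustrative sketch (not the paper's own estimator), a Lasso-style fit of a sparse transition matrix from a single trajectory can be written in plain NumPy using iterative soft-thresholding (ISTA); the system, penalty weight, and trajectory length here are assumed toy values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy example: sparse, stable ground-truth transition matrix.
n, T = 5, 400
A_true = np.zeros((n, n))
np.fill_diagonal(A_true, 0.5)
A_true[0, 1] = 0.4
A_true[2, 0] = -0.3

# Simulate a single trajectory x_{t+1} = A x_t + w_t.
X = np.zeros((T + 1, n))
for t in range(T):
    X[t + 1] = A_true @ X[t] + rng.standard_normal(n)

Xt, Xn = X[:-1], X[1:]  # regressors and responses

def lasso_ista(Xt, Xn, lam=0.05, iters=2000):
    """Minimize ||Xn - Xt A^T||_F^2 / (2T) + lam * ||A||_1 via ISTA."""
    T, n = Xt.shape
    A = np.zeros((n, n))
    step = T / (np.linalg.norm(Xt, 2) ** 2)  # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        grad = ((Xt @ A.T - Xn).T @ Xt) / T          # gradient of the quadratic part
        A = A - step * grad
        A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)  # soft-threshold
    return A

A_hat = lasso_ista(Xt, Xn)
print(np.round(A_hat, 2))
```

With a trajectory this long relative to the sparsity, the soft-thresholding drives most zero entries of `A_true` to (near) zero in `A_hat` while retaining the large entries, which is the support-recovery behavior the reference analyzes.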

### Sample Complexity Lower Bounds for Linear System Identification

- Computer Science, Mathematics · 2019 IEEE 58th Conference on Decision and Control (CDC)
- 2019

This paper establishes problem-specific sample complexity lower bounds for linear system identification that capture the identification hardness specific to the system at hand.

### The Divergence and Bhattacharyya Distance Measures in Signal Selection

- Computer Science
- 1967

This partly tutorial paper compares the properties of the often-used divergence measure with the Bhattacharyya distance, a measure that is often easier to evaluate and gives results at least as good as, and often better than, those given by the divergence.
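For two univariate Gaussians the Bhattacharyya distance has a simple closed form, which can be sketched as a small helper (an illustrative function, not code from the reference):

```python
import math

def bhattacharyya_gaussian(m1, s1, m2, s2):
    """Bhattacharyya distance between N(m1, s1^2) and N(m2, s2^2).

    D_B = (m1 - m2)^2 / (4 (s1^2 + s2^2)) + (1/2) ln((s1^2 + s2^2) / (2 s1 s2))
    """
    v1, v2 = s1 * s1, s2 * s2
    return 0.25 * (m1 - m2) ** 2 / (v1 + v2) + 0.5 * math.log((v1 + v2) / (2 * s1 * s2))

# Identical distributions are at distance 0; the distance grows with mean separation.
print(bhattacharyya_gaussian(0.0, 1.0, 0.0, 1.0))  # → 0.0
print(bhattacharyya_gaussian(0.0, 1.0, 2.0, 1.0))  # → 0.5
```

The first term penalizes mean separation and the second penalizes variance mismatch, which is why the measure is often easier to evaluate than the divergence for Gaussian signal-selection problems.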

### Non-asymptotic Identification of LTI Systems from a Single Trajectory

- Mathematics, Computer Science · 2019 American Control Conference (ACC)
- 2019

By proving a stability result for the Ho-Kalman algorithm and combining it with the sample complexity results for Markov parameters, it is shown how much data is needed to learn a balanced realization of the system up to a desired accuracy with high probability.

### Sample Complexity of Sparse System Identification Problem

- Computer Science, Mathematics
- 2018

A sparsity promoting block-regularized estimator to identify the dynamics of the system with only a limited number of input-state data samples is proposed, and it is shown that this estimator results in a small element-wise error, provided that the number of sample trajectories is above a threshold.

### Learning Without Mixing: Towards A Sharp Analysis of Linear System Identification

- Mathematics · COLT
- 2018

It is proved that the ordinary least-squares (OLS) estimator attains nearly minimax optimal performance for the identification of linear dynamical systems from a single observed trajectory, and the technique is generalized to provide bounds for a more general class of linear response time series.
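The single-trajectory OLS estimator analyzed here is short enough to sketch end to end (an assumed toy system, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed stable ground-truth dynamics: x_{t+1} = A x_t + w_t, w_t ~ N(0, I).
A_true = np.array([[0.8, 0.1],
                   [0.0, 0.5]])
n, T = 2, 1000
X = np.zeros((T + 1, n))
for t in range(T):
    X[t + 1] = A_true @ X[t] + rng.standard_normal(n)

# OLS: A_hat = argmin_A sum_t ||x_{t+1} - A x_t||^2.
# lstsq solves X[:-1] @ M ≈ X[1:], i.e. M = A^T, so transpose the solution.
M, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = M.T

print(np.round(A_hat, 2))
```

Notably, the estimator uses only the one observed trajectory: no restarts and no mixing-time argument are needed, which is the point of the sharp analysis in the reference.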