Corpus ID: 220302057

# Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach

@article{Liao2020ProvablyEN,
  title={Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach},
  author={Luofeng Liao and You-Lin Chen and Zhuoran Yang and Bo Dai and Zhaoran Wang and M. Kolar},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.01290}
}
Structural equation models (SEMs) are widely used across the sciences, from economics to psychology, to uncover causal relationships underlying a complex system and to estimate structural parameters of interest. We study estimation in a class of generalized SEMs where the object of interest is defined as the solution to a linear operator equation. We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and…
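The min-max formulation can be illustrated with a minimal toy sketch. The example below is not the paper's algorithm: instead of neural networks, both the model f and the adversarial test function g are linear, and plain gradient descent-ascent solves a regularized moment game for a simple instrumental-variable problem. The data-generating process, step size, and regularizer λ are all illustrative choices.

```python
import numpy as np

# Toy instrumental-variable problem: Y = 2*X + U with confounder U,
# instrument Z satisfying E[U | Z] = 0, so the moment condition is
# E[(Y - theta*X) * g(Z)] = 0 for every test function g.
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)          # instrument
u = rng.normal(size=n)          # unobserved confounder
x = z + u                       # endogenous regressor
y = 2.0 * x + u                 # true structural parameter is 2.0

# Min-max objective with linear players f(x) = theta*x, g(z) = w*z:
#   L(theta, w) = mean(w * z * (y - theta*x)) - 0.5 * lam * w**2
# The modeler minimizes over theta; the adversary maximizes over w.
theta, w, lam, lr = 0.0, 0.0, 1.0, 0.05
for _ in range(5000):
    resid = y - theta * x
    grad_theta = -np.mean(w * z * x)        # dL/dtheta
    grad_w = np.mean(z * resid) - lam * w   # dL/dw
    theta -= lr * grad_theta                # descent step for the modeler
    w += lr * grad_w                        # ascent step for the adversary

print(round(theta, 2))  # close to the true structural parameter 2.0
```

At the equilibrium the adversary's weight w is driven to zero and theta matches the sample IV estimate; ordinary least squares on (x, y) would instead be biased upward by the confounder.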
Scalable Quasi-Bayesian Inference for Instrumental Variable Regression
• Ziyu Wang, Jun Zhu
• Computer Science, Mathematics
• ArXiv
• 2021
This work presents a scalable quasi-Bayesian procedure for IV regression, building upon the recently developed kernelized IV models, and leads to a scalable approximate inference algorithm with time cost comparable to the corresponding point estimation methods.
Adversarial Estimation of Riesz Representers
• Computer Science, Economics
• ArXiv
• 2021
This work provides an adversarial approach to estimating Riesz representers of linear functionals within arbitrary function spaces, including a plethora of recently introduced machine learning techniques, and proves oracle inequalities based on the localized Rademacher complexity of the function space used to approximate the Riesz representer and the approximation error.
Approximate Last Iterate Convergence in Overparameterized GANs
The Neural Tangent Kernel shows that as the width of every layer in a neural network is increased to ∞, the function computed by the network approaches a linear function, which makes it possible to analyze the convergence of neural networks under only the assumption that they are sufficiently wide.
Causal Inference Under Unmeasured Confounding With Negative Controls: A Minimax Learning Approach
• Computer Science, Mathematics
• ArXiv
• 2021
This paper tackles the primary challenge to causal inference using negative controls: the identification and estimation of these bridge functions, and provides a new identification strategy that avoids both uniqueness and completeness.
Deep Proxy Causal Learning and its Application to Confounded Bandit Policy Evaluation
• Computer Science, Mathematics
• ArXiv
• 2021
A novel method is proposed, the deep feature proxy variable method (DFPV), to address the case where the proxies, treatments, and outcomes are high-dimensional and have nonlinear complex relationships, as represented by deep neural network features.
Instrument Space Selection for Kernel Maximum Moment Restriction
• Computer Science
• ArXiv
• 2021
This work presents a systematic way to select the instrument space for parameter estimation based on a principle of the least identifiable instrument space (LIIS) that identifies model parameters with the least space complexity.
Instrumental Variable Value Iteration for Causal Offline Reinforcement Learning
• Computer Science, Mathematics
• ArXiv
• 2021
This work studies a confounded Markov decision process whose transition dynamics admit an additive nonlinear functional form and proposes a provably efficient IV-aided Value Iteration (IVVI) algorithm based on a primal-dual reformulation of the conditional moment restriction (CMR).
Last Iterate Convergence in Overparameterized GANs
One of the new and exciting results from the last few years is the Neural Tangent Kernel (NTK) [1], which shows that as we increase the width of every layer in a neural network to ∞, the function computed by the network approaches a linear function…
Learning Causal Relationships from Conditional Moment Conditions by Importance Weighting
• Masahiro Kato
• Computer Science, Economics
• ArXiv
• 2021
A method is proposed that transforms conditional moment conditions into unconditional moment conditions through importance weighting using the conditional density ratio, successfully approximating the conditional moment conditions.
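The change-of-measure identity underlying such importance weighting can be sketched in isolation. The toy below is a generic illustration, not the paper's estimator: it uses a known density ratio between two Gaussians (both distributions and the target expectation are illustrative assumptions) to re-express an expectation under one distribution using samples drawn from another.

```python
import numpy as np

# Importance weighting via change of measure: for densities p and q,
#   E_q[f(X)] = E_p[f(X) * q(X)/p(X)].
# Here p = N(0,1) and q = N(1,1), so the ratio is exp(x - 0.5) in closed form.
rng = np.random.default_rng(0)
x = rng.normal(size=200_000)    # samples from p = N(0,1)
ratio = np.exp(x - 0.5)         # density ratio q(x)/p(x)

# Target: E_q[X^2] = Var + mean^2 = 1 + 1 = 2, estimated from p-samples only.
est = np.mean(ratio * x**2)
print(round(est, 2))  # close to 2.0
```

In the conditional-moment setting the same identity is applied with an *estimated* conditional density ratio, which is the statistically hard part that the cited method addresses.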
Mathematical Models of Overparameterized Neural Networks
• Computer Science, Mathematics
• Proceedings of the IEEE
• 2021
The analysis focuses on two-layer NNs; the key mathematical models are explained together with their algorithmic implications, and the challenges in understanding deep NNs are discussed.

#### References

Showing 1–10 of 55 references
Linear Inverse Problems in Structural Econometrics Estimation Based on Spectral Decomposition and Regularization
• Mathematics
• 2003
Inverse problems can be described as functional equations where the value of the function is known or easily estimable but the argument is unknown. Many problems in econometrics can be stated in the…
Deep Generalized Method of Moments for Instrumental Variable Analysis
• Mathematics, Computer Science
• NeurIPS
• 2019
This paper proposes the DeepGMM algorithm, a new variational reformulation of GMM with optimal inverse-covariance weighting that allows us to efficiently control very many moment conditions and develops practical techniques for optimization and model selection that make it particularly successful in practice.
Adversarial Generalized Method of Moments
• Computer Science, Economics
• ArXiv
• 2018
An approach for learning deep neural net representations of models described via conditional moment restrictions, similar in nature to Generative Adversarial Networks, though here the modeler is learning a representation of a function that satisfies a continuum of moment conditions and the adversary is identifying violating moments.
Neural tangent kernel: convergence and generalization in neural networks (invited paper)
• Computer Science, Mathematics
• NeurIPS
• 2018
This talk will introduce this formalism and give a number of results on the Neural Tangent Kernel and explain how they give us insight into the dynamics of neural networks during training and into their generalization features.
Deep IV: A Flexible Approach for Counterfactual Prediction
• Computer Science
• ICML
• 2017
This paper provides a recipe for augmenting deep learning methods to accurately characterize causal relationships in the presence of instrumental variables (IVs)—sources of treatment randomization that are conditionally independent from the outcomes.
Nonparametric dynamic panel data models: Kernel estimation and specification testing
• Mathematics
• 2013
Motivated by the first-differencing method for linear panel data models, we propose a class of iterative local polynomial estimators for nonparametric dynamic panel data models with or without…
A Finite-Time Analysis of Q-Learning with Neural Network Function Approximation
• Computer Science, Mathematics
• ICML
• 2020
This paper proves that neural Q-learning finds the optimal policy with an $O(1/\sqrt{T})$ convergence rate if the neural function approximator is sufficiently overparameterized, where $T$ is the number of iterations.
Is Completeness Necessary? Estimation in Nonidentified Linear Models
• 2020
Minimax Estimation of Conditional Moment Models
• Mathematics, Economics
• NeurIPS
• 2020
This work develops an approach for estimating models described via conditional moment restrictions, and introduces a min-max criterion function, under which the estimation problem can be thought of as solving a zero-sum game between a modeler who is optimizing over the hypothesis space of the target model and an adversary who identifies violating moments over a test function space.