Publications
Estimating individual treatment effect: generalization bounds and algorithms
TLDR: A novel, simple, and intuitive generalization-error bound shows that the expected ITE estimation error of a representation is bounded by the sum of the standard generalization error of that representation and the distance between the treated and control distributions induced by the representation.
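The shape of this bound can be sketched numerically. The following is an illustrative stand-in, not the paper's exact bound: function names are hypothetical, a linear-kernel MMD plays the role of the distributional distance, and `alpha` is an unspecified weighting constant.

```python
import numpy as np

def linear_mmd(phi_t, phi_c):
    """Squared MMD with a linear kernel: ||mean(phi_t) - mean(phi_c)||^2."""
    return float(np.sum((phi_t.mean(axis=0) - phi_c.mean(axis=0)) ** 2))

def ite_error_bound(factual_mse, phi, t, alpha=1.0):
    """Illustrative upper bound: factual (standard generalization) error
    plus a penalty on the distance between treated and control representations."""
    dist = linear_mmd(phi[t == 1], phi[t == 0])
    return factual_mse + alpha * dist
```

When the representation makes the two groups indistinguishable, the penalty vanishes and the bound reduces to the factual error alone.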
Learning Representations for Counterfactual Inference
TLDR: A new algorithmic framework for counterfactual inference that brings together ideas from domain adaptation and representation learning and significantly outperforms previous state-of-the-art approaches.
Causal Effect Inference with Deep Latent-Variable Models
TLDR: Builds on recent advances in latent-variable modeling to jointly estimate the unknown latent space summarizing the confounders and the causal effect; the method is significantly more robust than existing methods and matches the state of the art on previous benchmarks for individual treatment effects.
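Once a confounder has been recovered, the causal effect follows by adjusting for it. A minimal sketch of that adjustment step for a discrete confounder (a hypothetical helper illustrating the backdoor adjustment formula, not the paper's variational machinery):

```python
import numpy as np

def adjusted_ate(y, t, z):
    """Backdoor adjustment over a discrete confounder z:
    ATE = sum_z P(z) * (E[y | t=1, z] - E[y | t=0, z]).
    Assumes every (z, t) cell contains at least one sample."""
    ate = 0.0
    for zv in np.unique(z):
        m = z == zv
        ate += m.mean() * (y[m & (t == 1)].mean() - y[m & (t == 0)].mean())
    return ate
```

On confounded data, the naive difference of means is biased while the adjusted estimate recovers the true effect.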
Structured Inference Networks for Nonlinear State Space Models
TLDR: A unified algorithm for efficiently learning a broad class of linear and nonlinear state space models, including variants where the emission and transition distributions are modeled by deep neural networks.
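The generative side of such a model can be sketched in a few lines: a latent state evolves under a neural transition, and observations are produced by a neural emission. Everything below is an illustrative toy (random untrained weights, tiny dimensions), not the paper's architecture or inference procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    """A tiny two-layer network standing in for a learned mean function."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Hypothetical dimensions: 2-d latent state, 3-d observation, 8 hidden units.
dz, dx, dh = 2, 3, 8
trans = [rng.normal(size=(dz, dh)) * 0.3, np.zeros(dh),
         rng.normal(size=(dh, dz)) * 0.3, np.zeros(dz)]
emit = [rng.normal(size=(dz, dh)) * 0.3, np.zeros(dh),
        rng.normal(size=(dh, dx)) * 0.3, np.zeros(dx)]

z = np.zeros(dz)
xs = []
for _ in range(10):
    # z_t ~ N(f_trans(z_{t-1}), sigma^2 I);  x_t ~ N(f_emit(z_t), sigma^2 I)
    z = mlp(z, *trans) + 0.1 * rng.normal(size=dz)
    xs.append(mlp(z, *emit) + 0.1 * rng.normal(size=dx))
xs = np.stack(xs)  # one sampled observation sequence, shape (10, 3)
```

Setting both networks to identity maps recovers an ordinary linear-Gaussian state space model as a special case.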
Large Scale Online Learning of Image Similarity through Ranking
TLDR: OASIS is an online dual approach using the passive-aggressive family of learning algorithms with a large-margin criterion and an efficient hinge-loss cost; it suggests that query-independent similarity can be accurately learned even for large-scale datasets that could not be handled before.
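The core of such a passive-aggressive similarity learner fits in a single update rule on triplets. A minimal sketch (function name hypothetical; this is the standard PA-I style rank-one update on a bilinear similarity, as a stand-in for the full OASIS procedure):

```python
import numpy as np

def oasis_step(W, p, p_pos, p_neg, C=0.1):
    """One passive-aggressive update on a triplet (query, relevant, irrelevant).

    Similarity is bilinear, S_W(a, b) = a^T W b, and the update enforces the
    large-margin constraint S_W(p, p_pos) >= S_W(p, p_neg) + 1 via a hinge loss.
    """
    loss = max(0.0, 1.0 - p @ W @ p_pos + p @ W @ p_neg)
    if loss > 0.0:
        # Negative gradient of the hinge loss w.r.t. W is the rank-1 matrix
        # p (p_pos - p_neg)^T, so the update touches only a rank-1 component.
        V = np.outer(p, p_pos - p_neg)
        tau = min(C, loss / (np.linalg.norm(V) ** 2))  # PA-I step size, capped by C
        W = W + tau * V
    return W
```

Each step either leaves W unchanged (margin already satisfied, "passive") or moves it just far enough to reduce the violation ("aggressive"), which is what makes the method cheap enough for web-scale data.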
Deep Kalman Filters
TLDR: A unified algorithm for efficiently learning a broad spectrum of Kalman filters; investigates the efficacy of temporal generative models for counterfactual inference and introduces the "Healing MNIST" dataset, in which long-term structure, noise, and actions are applied to sequences of digits.
Automated versus Do-It-Yourself Methods for Causal Inference: Lessons Learned from a Data Analysis Competition
TLDR: The causal inference data analysis challenge "Is Your SATT Where It's At?", launched as part of the 2016 Atlantic Causal Inference Conference, sought to advance both the data testing grounds and the submitted methods whose efficacy would be evaluated.
An Online Algorithm for Large Scale Image Similarity Learning
TLDR: The non-metric similarities learned by OASIS can be transformed into metric similarities, achieving higher precision than similarities learned as metrics in the first place; this suggests an approach for learning a metric from data that is orders of magnitude larger than could be handled before.
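One standard way to turn a learned bilinear similarity into a metric one is to project its matrix onto the positive semidefinite cone. A minimal sketch of that projection (symmetrize, then clip negative eigenvalues; this is the nearest PSD matrix in Frobenius norm, offered here as an illustration rather than the paper's exact procedure):

```python
import numpy as np

def project_psd(W):
    """Project a bilinear similarity matrix onto the PSD cone.

    A PSD matrix S induces a (pseudo-)metric via d(a, b)^2 = (a-b)^T S (a-b).
    """
    S = (W + W.T) / 2.0                      # symmetrize
    vals, vecs = np.linalg.eigh(S)
    return (vecs * np.clip(vals, 0.0, None)) @ vecs.T  # clip negative spectrum
```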
Learning Weighted Representations for Generalization Across Designs
Predictive models that generalize well under distributional shift are often desirable and sometimes crucial to machine learning applications. One example is the estimation of treatment effects from …