Estimating individual treatment effect: generalization bounds and algorithms
A novel, simple and intuitive generalization-error bound is given, showing that the expected ITE estimation error of a representation is bounded by the sum of the standard generalization error of that representation and the distance between the treated and control distributions induced by the representation.
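The structure of this bound can be sketched as a training surrogate: a factual prediction error term plus a representation-space distance between the treated and control groups. This is a minimal illustration, not the paper's algorithm; the use of a linear (mean-embedding) MMD as the distributional distance and all function names are illustrative assumptions.

```python
import numpy as np

def linear_mmd(phi_t, phi_c):
    # Squared distance between the group means of the representations --
    # a simple instance of an integral probability metric (linear-kernel MMD).
    return float(np.sum((phi_t.mean(axis=0) - phi_c.mean(axis=0)) ** 2))

def ite_bound_surrogate(phi, y, y_hat, t, alpha=1.0):
    # Factual prediction error: the standard generalization-error term...
    factual = float(np.mean((y - y_hat) ** 2))
    # ...plus the distance between treated and control representation
    # distributions, weighted by a trade-off coefficient alpha.
    imbalance = linear_mmd(phi[t == 1], phi[t == 0])
    return factual + alpha * imbalance
```

In the paper's setting, minimizing a surrogate of this form trades off predictive accuracy against balance of the learned representation across treatment groups.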
Learning Representations for Counterfactual Inference
A new algorithmic framework for counterfactual inference is proposed which brings together ideas from domain adaptation and representation learning and significantly outperforms the previous state-of-the-art approaches.
Causal Effect Inference with Deep Latent-Variable Models
- Christos Louizos, Uri Shalit, J. Mooij, D. Sontag, R. Zemel, M. Welling
- Computer Science · NIPS
- 24 May 2017
This work builds on recent advances in latent variable modeling to simultaneously estimate the unknown latent space summarizing the confounders and the causal effect and shows its method is significantly more robust than existing methods, and matches the state-of-the-art on previous benchmarks focused on individual treatment effects.
Large Scale Online Learning of Image Similarity Through Ranking
OASIS is an online dual approach using the passive-aggressive family of learning algorithms with a large margin criterion and an efficient hinge loss cost, which suggests that query independent similarity could be accurately learned even for large scale data sets that could not be handled before.
Structured Inference Networks for Nonlinear State Space Models
A unified algorithm is introduced to efficiently learn a broad class of linear and non-linear state space models, including variants where the emission and transition distributions are modeled by deep neural networks.
Deep Kalman Filters
A unified algorithm is introduced to efficiently learn a broad spectrum of Kalman filters; the work also investigates the efficacy of temporal generative models for counterfactual inference and introduces the "Healing MNIST" dataset, in which long-term structure, noise and actions are applied to sequences of digits.
Automated versus Do-It-Yourself Methods for Causal Inference: Lessons Learned from a Data Analysis Competition
- V. Dorie, Jennifer L. Hill, Uri Shalit, M. Scott, D. Cervone
- Computer Science · Statistical Science
- 9 July 2017
The causal inference data analysis challenge, "Is Your SATT Where It's At?", launched as part of the 2016 Atlantic Causal Inference Conference, sought to make progress with respect to both the data testing grounds and the researchers submitting methods whose efficacy would be evaluated.
An Online Algorithm for Large Scale Image Similarity Learning
The non-metric similarities learned by OASIS can be transformed into metric similarities, achieving higher precisions than similarities that are learned as metrics in the first place, suggesting an approach for learning a metric from data that is larger by orders of magnitude than was handled before.
Learning Weighted Representations for Generalization Across Designs
This work devises a bound on the generalization error under design shift, based on integral probability metrics and sample re-weighting, and proposes an algorithmic framework inspired by this bound, verifying its effectiveness in causal effect estimation.
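The combination of sample re-weighting with an integral probability metric can be illustrated with a small surrogate objective: a re-weighted factual risk plus a weighted distance between the representation distributions of the two designs. This is a hedged sketch only — the normalized weights and the mean-embedding distance are illustrative stand-ins for the paper's density-ratio weights and IPM term.

```python
import numpy as np

def weighted_risk_bound(phi, y, y_hat, w, g, alpha=1.0):
    # Normalize the sample weights (w plays the role of the design-shift
    # re-weighting in the bound).
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    # Re-weighted factual risk under the observed design...
    risk = float(np.sum(w * (y - y_hat) ** 2))
    # ...plus a weighted mean-embedding distance between the two designs
    # (g in {0, 1}), a simple stand-in for the IPM term in the bound.
    m0 = np.average(phi[g == 0], axis=0, weights=w[g == 0])
    m1 = np.average(phi[g == 1], axis=0, weights=w[g == 1])
    return risk + alpha * float(np.sum((m1 - m0) ** 2))
```

Minimizing an objective of this shape jointly over the representation and the weights is the rough recipe the summary above describes.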