Publications
Estimating individual treatment effect: generalization bounds and algorithms
TLDR
A novel, simple, and intuitive generalization-error bound is given, showing that the expected ITE estimation error of a representation is bounded by the sum of the standard generalization error of that representation and the distance between the treated and control distributions induced by the representation.
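As a rough schematic of the bound's structure described in this TLDR (symbols are illustrative; constants and variance terms from the paper are omitted), the factual prediction errors on the control and treated arms, plus an integral probability metric (IPM) between the representation distributions of the two groups, together bound the ITE error:

\epsilon_{\mathrm{ITE}}(f,\Phi) \;\lesssim\; \underbrace{\epsilon_F^{t=0}(f,\Phi) + \epsilon_F^{t=1}(f,\Phi)}_{\text{standard generalization error}} \;+\; \underbrace{B_\Phi \,\mathrm{IPM}_G\!\left(p_\Phi^{t=0},\, p_\Phi^{t=1}\right)}_{\text{distance between induced treated/control distributions}}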
Learning Representations for Counterfactual Inference
TLDR
A new algorithmic framework for counterfactual inference is proposed, which brings together ideas from domain adaptation and representation learning and significantly outperforms previous state-of-the-art approaches.
Why Is My Classifier Discriminatory?
TLDR
This work argues that the fairness of predictions should be evaluated in the context of the data, and that unfairness induced by inadequate sample sizes or unmeasured predictive variables should be addressed through data collection rather than by constraining the model.
A survey on graph kernels
TLDR
This survey gives a comprehensive overview of techniques for kernel-based graph classification developed over the past 15 years, describing and categorizing graph kernels by properties inherent to their design, such as the nature of the extracted graph features, their method of computation, and their applicability to problems in practice.
Support and Invertibility in Domain-Invariant Representations
TLDR
This work gives generalization bounds for unsupervised domain adaptation that hold for any representation function by acknowledging the cost of non-invertibility, and proposes a bound based on measuring the extent to which the support of the source domain covers that of the target domain.
Learning Weighted Representations for Generalization Across Designs
Predictive models that generalize well under distributional shift are often desirable and sometimes crucial to machine learning applications. One example is the estimation of treatment effects from
Global graph kernels using geometric embeddings
TLDR
Empirical results are given on classification of synthesized graphs with important global properties, as well as on established benchmark graph datasets, showing that the accuracy of the proposed kernels is better than or competitive with that of existing graph kernels.
Evaluating Reinforcement Learning Algorithms in Observational Health Settings
TLDR
The goal is to expose some of the subtleties associated with evaluating RL algorithms in healthcare and thereby provide a conceptual starting point for clinical and computational researchers to ask the right questions when designing and evaluating algorithms for new ways of treating patients.
Entity disambiguation in anonymized graphs using graph kernels
TLDR
This paper measures the similarity between two nodes based on their local neighborhood structure using graph kernels, and solves the resulting classification task with SVMs, showing that, despite using less information, the method is significantly better in terms of speed, accuracy, or both.
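A minimal, self-contained sketch of the general recipe in that TLDR, not the paper's actual kernel or dataset: a simple neighbor-degree histogram stands in for the local-neighborhood graph kernel, a synthetic planted-partition graph stands in for the anonymized entity data, and an SVM with a precomputed kernel solves the resulting classification task.

import numpy as np
import networkx as nx
from sklearn.svm import SVC

def node_features(G, node, max_degree=10):
    # Histogram of neighbor degrees: a crude summary of the node's local neighborhood.
    hist = np.zeros(max_degree + 1)
    for nbr in G.neighbors(node):
        hist[min(G.degree(nbr), max_degree)] += 1
    return hist

def kernel_matrix(G, nodes_a, nodes_b):
    # Linear kernel between the neighborhood histograms of two node sets.
    A = np.array([node_features(G, n) for n in nodes_a])
    B = np.array([node_features(G, n) for n in nodes_b])
    return A @ B.T

# Toy "anonymized" graph: two planted communities stand in for two underlying entities.
# By construction, nodes 0-19 belong to the first community and nodes 20-39 to the second.
G = nx.planted_partition_graph(2, 20, p_in=0.5, p_out=0.05, seed=0)
nodes = list(G.nodes())
labels = np.array([0] * 20 + [1] * 20)

train, test = nodes[::2], nodes[1::2]
y_train, y_test = labels[::2], labels[1::2]

clf = SVC(kernel="precomputed")                      # SVM over a precomputed kernel matrix
clf.fit(kernel_matrix(G, train, train), y_train)
accuracy = clf.score(kernel_matrix(G, test, train), y_test)
print(f"toy disambiguation accuracy: {accuracy:.2f}")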
Guidelines for reinforcement learning in healthcare
TLDR
New guidelines are provided for applying reinforcement learning to decisions about patient treatment, which the authors hope will accelerate the rate at which observational cohorts can inform healthcare practice in a safe, risk-conscious manner.