Corpus ID: 59553567

Causal Simulations for Uplift Modeling

Jeroen Berrevoets and Wouter Verbeke
Uplift modeling requires experimental data, preferably collected in a randomized fashion. This places a logistical and financial burden on any organisation aspiring to build such models. Once deployed, uplift models are also subject to concept drift. Hence, methods are being developed that can learn from newly gained experience and handle drifting environments. As these new methods attempt to eliminate the need for experimental data, another approach to test such methods must be…
Optimising Individual-Treatment-Effect Using Bandits
The uplifted contextual multi-armed bandit (U-CMAB), a novel approach to optimise the ITE by drawing upon bandit literature, is proposed, and experiments indicate that the proposed approach compares favourably against the state-of-the-art.
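As background for the bandit framing, a minimal ε-greedy loop illustrates the explore/exploit trade-off such methods build on. This is a generic sketch, not the U-CMAB algorithm from the paper; the arm reward probabilities are hypothetical.

```python
import random

def epsilon_greedy(n_arms, reward_fn, rounds=1000, eps=0.1):
    """Generic epsilon-greedy bandit: with probability eps pull a random
    arm (explore), otherwise pull the arm with the highest running mean
    reward (exploit)."""
    counts = [0] * n_arms
    values = [0.0] * n_arms  # running mean reward per arm
    for _ in range(rounds):
        if random.random() < eps:
            arm = random.randrange(n_arms)                     # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        r = reward_fn(arm)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]         # incremental mean
    return values

random.seed(0)
# hypothetical Bernoulli arms with success probabilities 0.2 and 0.8
est = epsilon_greedy(2, lambda a: float(random.random() < [0.2, 0.8][a]))
```

A causal variant such as U-CMAB would replace the raw outcome reward with an estimate of the treatment effect, so the agent optimises uplift rather than response.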


Uplift modeling for randomized experiments and observational studies
This work designs an ensemble tree-based algorithm (CTS) for uplift modeling and puts forward an unbiased estimate of the expected response, which makes it possible to evaluate an uplift model with multiple treatments; the authors present this as the first evaluation metric for uplift models in the literature that aligns with the problem objective.
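For orientation, the simplest uplift baseline these tree-based methods improve on is a stratified difference-in-means: estimate the average outcome under treatment and under control within each segment and take the difference. The sketch below is illustrative only (not the CTS algorithm), and the segment data are hypothetical.

```python
from collections import defaultdict

def segment_uplift(rows):
    """rows: iterable of (segment, treated, outcome) triples.
    Returns per-segment uplift = mean(outcome | treated)
                               - mean(outcome | control),
    for segments that have both treated and control observations."""
    acc = defaultdict(lambda: [0.0, 0, 0.0, 0])  # t_sum, t_n, c_sum, c_n
    for seg, treated, y in rows:
        s = acc[seg]
        if treated:
            s[0] += y; s[1] += 1
        else:
            s[2] += y; s[3] += 1
    return {seg: s[0] / s[1] - s[2] / s[3]
            for seg, s in acc.items() if s[1] and s[3]}

data = [("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 0, 0),
        ("B", 1, 0), ("B", 0, 1)]
uplift = segment_uplift(data)
# segment A: treated mean 0.5, control mean 0.0 -> uplift 0.5
```

Tree-based uplift methods can be read as learning the segmentation itself, choosing splits that maximise the treatment-control difference rather than fixing segments in advance.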
A Literature Survey and Experimental Evaluation of the State-of-the-Art in Uplift Modeling: A Stepping Stone Toward the Development of Prescriptive Analytics
It is found that the available evaluation metrics do not provide an intuitively understandable indication of the actual use and performance of a model, and the instability of uplift models is highlighted.
Statistics and causal inference: A review
This paper aims to help empirical researchers benefit from recent advances in causal inference. It stresses the paradigmatic shifts that must be undertaken in moving from traditional…
Decision trees for uplift modeling with single and multiple treatments
This paper presents tree-based classifiers designed for uplift modeling in both the single- and multiple-treatment cases, with new splitting criteria and pruning methods that show significant improvement over previous uplift approaches.
Structural Causal Bandits: Where to Intervene?
This paper builds a new algorithm that takes as input a causal structure and finds a minimal, sound, and complete set of qualified arms that an agent should play to maximize its expected reward and empirically demonstrates that the new strategy learns an optimal policy and leads to orders of magnitude faster convergence rates when compared with its causal-insensitive counterparts.
Contextual Multi-Armed Bandits for Causal Marketing
This work explores the idea of a causal contextual multi-armed bandit approach to automated marketing, where it optimizes on causal treatment effects rather than pure outcome, and incorporates counterfactual generation within data collection.
Bandits with Unobserved Confounders: A Causal Approach
It is shown that to achieve low regret in certain realistic classes of bandit problems (namely, in the face of unobserved confounders), both experimental and observational quantities are required by the rational agent.
Causal Bandits: Learning Good Interventions via Causal Inference
A new algorithm is proposed that exploits the causal feedback and proves a bound on its simple regret that is strictly better (in all quantities) than algorithms that do not use the additional causal information.
Until recently, statistical theory has been restricted to the design and analysis of sampling experiments in which the size and composition of the samples are completely determined…
Contextual Bandits with Latent Confounders: An NMF Approach
An ε-greedy NMF-Bandit algorithm is proposed that designs a sequence of interventions (selecting specific arms) to balance learning this low-dimensional structure against selecting the best arm to minimize regret.