Corpus ID: 236965456

Bandit Algorithms for Precision Medicine

@article{Lu2021BanditAF,
  title={Bandit Algorithms for Precision Medicine},
  author={Yangyi Lu and Ziping Xu and Ambuj Tewari},
  journal={ArXiv},
  year={2021},
  volume={abs/2108.04782}
}
The Oxford English Dictionary defines precision medicine as “medical care designed to optimize efficiency or therapeutic benefit for particular groups of patients, especially by using genetic or molecular profiling.” It is not an entirely new idea: physicians from ancient times have recognized that medical treatment needs to consider individual variations in patient characteristics (Konstantinidou et al., 2017). However, the modern precision medicine movement has been enabled by a confluence of… 

Real-time infection prediction with wearable physiological monitoring and AI to aid military workforce readiness during COVID-19

The feasibility of a real-time risk prediction score to minimize workforce impacts of infection is demonstrated, and barriers to implementation are identified, including adequate data capture and delays in data transmission.

Doubly Robust Interval Estimation for Optimal Policy Evaluation in Online Learning

The probability of exploring non-optimal actions under commonly used bandit algorithms is quantified, and the doubly robust interval estimation (DREAM) method is developed to infer the value under the estimated optimal policy in online learning.
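
For intuition, a standard doubly robust value estimate combines an outcome model with an inverse-propensity correction. The sketch below is a minimal illustration of that general construction, not the paper's exact DREAM procedure; q_hat, policy, and propensities are placeholders supplied by the user.

import numpy as np

def doubly_robust_value(contexts, actions, rewards, propensities, q_hat, policy):
    """Doubly robust estimate of the value of `policy`.

    contexts     : array (n, d) of observed contexts
    actions      : array (n,) of actions actually taken
    rewards      : array (n,) of observed rewards
    propensities : array (n,) of P(logged action | context) under the logging algorithm
    q_hat        : function (context, action) -> estimated mean reward
    policy       : function (context) -> action chosen by the target policy
    """
    n = len(rewards)
    values = np.empty(n)
    for i in range(n):
        a_pi = policy(contexts[i])
        # Model-based term: predicted reward of the policy's action.
        term = q_hat(contexts[i], a_pi)
        # Importance-weighted correction, nonzero only when the logged
        # action agrees with the policy's action.
        if actions[i] == a_pi:
            term += (rewards[i] - q_hat(contexts[i], actions[i])) / propensities[i]
        values[i] = term
    est = values.mean()
    # A rough 95% interval from the empirical standard error; asymptotic normality
    # is assumed here, whereas the paper's interval accounts for online exploration.
    se = values.std(ddof=1) / np.sqrt(n)
    return est, (est - 1.96 * se, est + 1.96 * se)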

Transfer Learning for Contextual Multi-armed Bandits

A data-driven algorithm is developed that achieves near-optimal statistical guarantees while automatically adapting to the unknown parameters over a large collection of parameter spaces under an additional self-similarity assumption.

References

Showing 1–10 of 127 references

Are the Origins of Precision Medicine Found in the Corpus Hippocraticum?

Although the ancient ‘precision medicine’ differs from its modern counterpart, which derives from well-established experimental conclusions, a common conception becomes apparent: achieving more effective healing by focusing on the individual.

A Biologically Plausible Benchmark for Contextual Bandit Algorithms in Precision Oncology Using in vitro Data

A benchmark dataset for evaluating contextual bandit algorithms is proposed, based on real in vitro drug responses of approximately 900 cancer cell lines, and the methods are found to accumulate less regret over a sequence of treatment assignment tasks than a rule-based baseline derived from current clinical practice.
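
As a point of reference, cumulative regret on such a benchmark is simply the running gap between the best available drug response and the response of the assigned treatment. The sketch below assumes a response matrix with one row per cell line; it illustrates the metric only, not the benchmark's own evaluation code.

import numpy as np

def cumulative_regret(response_matrix, chosen_treatments):
    """response_matrix   : array (n_tasks, n_treatments) of held-out drug responses
    chosen_treatments : array (n_tasks,) of treatment indices picked by the algorithm
    Returns the running sum of per-task regret."""
    best = response_matrix.max(axis=1)  # best possible response per cell line
    chosen = response_matrix[np.arange(len(chosen_treatments)), chosen_treatments]
    return np.cumsum(best - chosen)

# Toy usage: a policy that always assigns treatment 0 versus the per-task optimum.
rng = np.random.default_rng(0)
responses = rng.uniform(size=(100, 5))
print(cumulative_regret(responses, np.zeros(100, dtype=int))[-1])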

Clinical use of current polygenic risk scores may exacerbate health disparities

To realize the full and equitable potential of polygenic risk scores, greater diversity must be prioritized in genetic studies, and summary statistics must be publicly disseminated to ensure that health disparities are not increased for those individuals already most underserved.

How Do Tumor Cytogenetics Inform Cancer Treatments? Dynamic Risk Stratification and Precision Medicine Using Multi-armed Bandits

An econometric model, a hidden Markov model, is developed to understand patients’ treatment responses, and treatments are selected sequentially based on contextual information about patients and therapies, with the goal of maximizing overall survival outcomes.
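
A hedged sketch of the general recipe (not the paper's fitted model): track a belief over hidden risk states with a hidden Markov forward update, then pick the treatment with the best expected outcome under that belief. The transition, emission, and reward arrays below are illustrative placeholders, and the greedy choice omits the exploration a bandit approach would add.

import numpy as np

def forward_update(belief, obs, transition, emission):
    """One HMM forward step: propagate the belief over hidden risk states
    through the transition matrix, then reweight by the likelihood of the
    observed cytogenetic/clinical signal."""
    predicted = belief @ transition            # (n_states,)
    posterior = predicted * emission[:, obs]   # elementwise likelihood weighting
    return posterior / posterior.sum()

def select_treatment(belief, expected_reward):
    """Greedy choice: treatment with the best expected outcome averaged over
    the belief. expected_reward has shape (n_states, n_treatments)."""
    return int(np.argmax(belief @ expected_reward))

# Toy example: 2 hidden risk states, 2 observation symbols, 3 treatments.
transition = np.array([[0.9, 0.1], [0.2, 0.8]])
emission = np.array([[0.7, 0.3], [0.2, 0.8]])
expected_reward = np.array([[1.0, 0.4, 0.2], [0.1, 0.5, 0.9]])
belief = forward_update(np.array([0.5, 0.5]), obs=1, transition=transition, emission=emission)
print(select_treatment(belief, expected_reward))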

Reinforcement Learning for Clinical Decision Support in Critical Care: Comprehensive Review

RL has been used to optimize the choice of medications, drug dosing, and the timing of interventions, as well as to target personalized laboratory values, and it holds great potential for enhancing decision making in critical care.

From Ads to Interventions: Contextual Bandits in Mobile Health

The contextual bandits literature is surveyed, with a focus on the modifications needed to adapt existing approaches to the mobile health setting, along with specific challenges in this direction, such as good initialization of the learning algorithm, finding interpretable policies, assessing the usefulness of tailoring variables, computational considerations, robustness to failure of assumptions, and dealing with variables that are costly to acquire or missing.

Online Decision-Making with High-Dimensional Covariates

This work formulates the problem as a multi-armed bandit with high-dimensional covariates and presents a new, efficient bandit algorithm based on the LASSO estimator that outperforms existing bandit methods, as well as physicians, at correctly dosing a majority of patients.
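
To make the idea concrete, the sketch below is a simplified per-arm LASSO policy with epsilon-greedy exploration; the published algorithm instead uses a forced-sampling schedule and separate estimators, so treat this as an assumption-laden approximation rather than the authors' method.

import numpy as np
from sklearn.linear_model import Lasso

class LassoBandit:
    """Per-arm LASSO regression of reward on high-dimensional covariates,
    with epsilon-greedy exploration (a simplification of forced sampling)."""

    def __init__(self, n_arms, alpha=0.1, epsilon=0.05, seed=0):
        self.n_arms = n_arms
        self.alpha = alpha
        self.epsilon = epsilon
        self.rng = np.random.default_rng(seed)
        self.data = {a: ([], []) for a in range(n_arms)}   # arm -> (contexts, rewards)

    def choose(self, x):
        """x: 1-D numpy array of covariates for the current patient."""
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(self.n_arms))
        preds = []
        for a in range(self.n_arms):
            X, y = self.data[a]
            if len(y) < 2:                  # too little data: force this arm once
                return a
            model = Lasso(alpha=self.alpha).fit(np.array(X), np.array(y))
            preds.append(model.predict(x.reshape(1, -1))[0])
        return int(np.argmax(preds))

    def update(self, x, arm, reward):
        self.data[arm][0].append(x)
        self.data[arm][1].append(reward)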

Dissecting racial bias in an algorithm used to manage the health of populations

It is suggested that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.

Predictably unequal: understanding and addressing concerns that algorithmic clinical prediction may increase health disparities

How concepts of algorithmic fairness might apply in healthcare, where predictive algorithms are increasingly used to support decision-making, is discussed, and a provisional framework for the evaluation of clinical prediction models is proposed.

Personalized HeartSteps: A Reinforcement Learning Algorithm for Optimizing Physical Activity

A reinforcement learning (RL) algorithm is developed that continuously learns and improves the treatment policy embedded in the JITAI as data are collected from the user.
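
The core of such an algorithm is typically Thompson sampling with a Bayesian linear reward model. The sketch below shows that generic posterior update and sampled action choice under conjugate-Gaussian assumptions; the hyperparameters and feature construction are illustrative, not those used in HeartSteps.

import numpy as np

class LinearThompsonSampler:
    """Bayesian linear regression of reward on (context, action) features,
    with actions chosen by sampling from the coefficient posterior."""

    def __init__(self, dim, prior_var=1.0, noise_var=1.0, seed=0):
        self.precision = np.eye(dim) / prior_var   # posterior precision matrix
        self.b = np.zeros(dim)                     # precision-weighted mean term
        self.noise_var = noise_var
        self.rng = np.random.default_rng(seed)

    def choose(self, candidate_features):
        """candidate_features: (n_actions, dim) feature vectors, one per action."""
        cov = np.linalg.inv(self.precision)
        mean = cov @ self.b
        theta = self.rng.multivariate_normal(mean, cov)   # posterior draw
        return int(np.argmax(candidate_features @ theta))

    def update(self, features, reward):
        """Standard conjugate Gaussian update after observing a reward."""
        self.precision += np.outer(features, features) / self.noise_var
        self.b += features * reward / self.noise_var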
...