Deep Reinforcement Learning for Multi-class Imbalanced Training

@article{Yang2022DeepRL,
  title={Deep Reinforcement Learning for Multi-class Imbalanced Training},
  author={Jenny Yang and Rasheed El-Bouri and Odhran O'Donoghue and Alexander S Lachapelle and Andrew A. S. Soltan and David A. Clifton},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.12070}
}
With the rapid growth of memory and computing power, datasets are becoming increasingly complex and imbalanced. This is especially severe in the context of clinical data, where there may be one rare event for many cases in the majority class. We introduce an imbalanced classification framework, based on reinforcement learning, for training on extremely imbalanced datasets, and extend it for use in multi-class settings. We combine dueling and double deep Q-learning architectures, and formulate a…
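The abstract frames imbalanced classification as a sequential decision-making problem in which each prediction is an action that earns a reward. A minimal sketch of the kind of class-weighted reward such a formulation typically uses is shown below; the paper's exact multi-class reward is not given here, so the inverse-frequency weighting is an assumption for illustration.

```python
import numpy as np

def step_reward(y_true, y_pred, class_weights):
    """Reward for one classification 'action' in the imbalanced-classification
    MDP: a correct prediction earns +w, an incorrect one -w, where w is larger
    for rarer classes (here, inverse class frequency)."""
    w = class_weights[y_true]
    return w if y_pred == y_true else -w

# Hypothetical 3-class setting with a 100:10:1 imbalance.
counts = np.array([1000, 100, 10])
class_weights = counts.min() / counts  # rarest class weighted up to 1.0

assert step_reward(2, 2, class_weights) == 1.0    # rare class, correct
assert step_reward(0, 1, class_weights) == -0.01  # majority class, wrong
```

Scaling the reward by class rarity makes a mistake on the minority class far more costly than one on the majority class, which is what lets a Q-learning agent attend to rare events.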

References

Deep reinforcement learning for imbalanced classification
TLDR
A general imbalanced classification model based on deep reinforcement learning, in which classification is formulated as a sequential decision-making process and solved by a deep Q-learning network; the agent ultimately learns an optimal classification policy on imbalanced data.
Dueling Network Architectures for Deep Reinforcement Learning
TLDR
This paper presents a new neural network architecture for model-free reinforcement learning that leads to better policy evaluation in the presence of many similar-valued actions and enables the RL agent to outperform the state-of-the-art on the Atari 2600 domain.
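The dueling architecture referenced above splits the network into a state-value stream and an action-advantage stream, then recombines them. A minimal sketch of the standard aggregation (mean-subtracted advantages, as in the cited paper) follows:

```python
import numpy as np

def dueling_q(value, advantages):
    """Combine the two streams of a dueling network:
        Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')
    Subtracting the mean advantage makes the V/A decomposition identifiable,
    since adding a constant to V and subtracting it from A leaves Q unchanged."""
    return value + advantages - advantages.mean()

q = dueling_q(2.0, np.array([1.0, 0.0, -1.0]))
# Mean advantage is 0 here, so Q = V + A = [3.0, 2.0, 1.0].
assert np.allclose(q, [3.0, 2.0, 1.0])
```

The value stream lets the network learn how good a state is without having to learn the effect of every action, which helps precisely when many actions have similar values.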
Deep Reinforcement Learning with Double Q-Learning
TLDR
This paper proposes a specific adaptation to the DQN algorithm and shows that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.
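The adaptation described above decouples action selection from action evaluation: the online network picks the greedy next action, and the target network scores it. A minimal sketch of that update target:

```python
import numpy as np

def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    """Double DQN bootstrap target: the online network *selects* the next
    action, the target network *evaluates* it, reducing the overestimation
    bias of taking a max over noisy Q-estimates."""
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))          # selection (online net)
    return reward + gamma * q_target_next[a_star]   # evaluation (target net)

# The online net prefers action 1; the target net's value for action 1 is used.
t = double_dqn_target(1.0, np.array([0.2, 0.9]), np.array([0.5, 0.3]))
assert np.isclose(t, 1.0 + 0.99 * 0.3)
```

In standard DQN both roles fall to the same (target) network, so the max systematically picks overestimated values; splitting the roles is the entire fix.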
Imbalanced Learning: Foundations, Algorithms, and Applications
TLDR
The first comprehensive look at this new branch of machine learning, this book offers a critical review of the problem of imbalanced learning, covering the state of the art in techniques, principles, and real-world applications.
Sampling Approaches for Imbalanced Data Classification Problem in Machine Learning
TLDR
It has been observed that the adaptive synthetic oversampling approach best improves the imbalance ratio as well as classification results; however, undersampling approaches gave better overall performance across all datasets.
Weighted extreme learning machine for imbalance learning
An overview of classification algorithms for imbalanced datasets
TLDR
A brief review of existing solutions to the class-imbalance problem proposed both at the data and algorithmic levels is presented.
A Systematic Review on Imbalanced Data Challenges in Machine Learning
TLDR
A comparative analysis of contemporary imbalanced-data techniques across the data pre-processing, algorithmic, and hybrid paradigms is presented, together with a comparative study across different data distributions and application areas.
SMOTE: Synthetic Minority Over-sampling Technique
TLDR
This paper shows that a combination of over-sampling the minority (abnormal) class with synthetic examples and under-sampling the majority class can achieve better classifier performance (in ROC space); the method is evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.
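SMOTE's core operation is interpolation: each synthetic minority example lies on the line segment between a minority point and one of its minority-class nearest neighbors. A minimal sketch of that step (neighbor search omitted; the chosen neighbor is passed in):

```python
import numpy as np

def smote_sample(x, neighbor, rng):
    """Create one synthetic minority example by interpolating between a
    minority point and one of its minority-class nearest neighbors:
        x_new = x + u * (neighbor - x),  u ~ Uniform(0, 1)."""
    u = rng.uniform()
    return x + u * (neighbor - x)

rng = np.random.default_rng(0)
x = np.array([0.0, 0.0])
nb = np.array([1.0, 1.0])
s = smote_sample(x, nb, rng)
# The synthetic point lies on the segment between x and its neighbor.
assert np.all(s >= 0.0) and np.all(s <= 1.0) and np.isclose(s[0], s[1])
```

Because the new points fall between existing minority examples rather than duplicating them, the minority decision region broadens instead of merely being re-weighted.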
Overlap versus Imbalance
TLDR
It is demonstrated that these two factors have interdependent effects and that one cannot form a full understanding of their effects by considering them only in isolation.