A Penalty Approach for Normalizing Feature Distributions to Build Confounder-Free Models
@article{Vento2022APA, title={A Penalty Approach for Normalizing Feature Distributions to Build Confounder-Free Models}, author={Anthony Vento and Qingyu Zhao and Robert Paul and Kilian M. Pohl and Ehsan Adeli}, journal={Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention}, year={2022}, volume={13433}, pages={387-397}}
Translating modern machine learning algorithms into clinical applications requires settling challenges related to explainability and the management of nuanced confounding factors. To interpret results suitably, removing or explaining the effect of confounding variables (or metadata) is essential. Confounding variables affect the relationship between input training data and target outputs; accordingly, when a model is trained on such data, confounding variables will bias the…
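The abstract is truncated above, so the exact penalty formulation is not shown here. As a rough, assumed illustration of the general idea (not the authors' method), the sketch below adds a penalty to the task loss that discourages linear association between learned features and a confounder; the `encoder`, `classifier`, and `lambda_penalty` names are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical encoder/classifier; the real architecture is not specified in this excerpt.
encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
classifier = nn.Linear(8, 2)
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
task_loss_fn = nn.CrossEntropyLoss()
lambda_penalty = 1.0  # assumed trade-off weight

def correlation_penalty(features, confounder):
    """Squared Pearson correlation between each feature dimension and the confounder."""
    f = features - features.mean(dim=0, keepdim=True)
    c = confounder - confounder.mean()
    cov = (f * c.unsqueeze(1)).mean(dim=0)
    corr = cov / (f.std(dim=0) * c.std() + 1e-8)
    return (corr ** 2).sum()

# One illustrative training step on random data.
x, y = torch.randn(64, 32), torch.randint(0, 2, (64,))
confounder = torch.randn(64)  # e.g. age, site, or another metadata variable
feats = encoder(x)
loss = task_loss_fn(classifier(feats), y) + lambda_penalty * correlation_penalty(feats, confounder)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```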
One Citation
A Survey of Fairness in Medical Image Analysis: Concepts, Algorithms, Evaluations, and Challenges
- Computer Science, ArXiv
- 2022
This paper gives a comprehensive and precise definition of fairness, introduces techniques currently used to address fairness issues in MedIA, lists public medical image datasets with demographic attributes to facilitate fairness research, and summarizes current algorithms concerning fairness in MedIA.
References
SHOWING 1-10 OF 28 REFERENCES
Training confounder-free deep learning models for medical applications
- Computer Science, Nature Communications
- 2020
This article introduces an end-to-end approach for deriving features invariant to confounding factors while accounting for intrinsic correlations between the confounder(s) and prediction outcome, exploiting concepts from traditional statistical methods and recent fair machine learning schemes.
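As a hedged sketch of this kind of adversarial confounder-removal scheme (a generic setup, not the cited paper's exact loss), an adversary tries to predict the confounder from the learned features while the encoder is trained to make that prediction fail and still solve the main task.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
predictor = nn.Linear(16, 2)   # main task head
adversary = nn.Linear(16, 1)   # tries to recover a continuous confounder
opt_main = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)

x, y = torch.randn(64, 32), torch.randint(0, 2, (64,))
confounder = torch.randn(64, 1)

# Step 1: update the adversary to predict the confounder from frozen features.
with torch.no_grad():
    feats = encoder(x)
adv_loss = nn.functional.mse_loss(adversary(feats), confounder)
opt_adv.zero_grad()
adv_loss.backward()
opt_adv.step()

# Step 2: update encoder + predictor; reward features the adversary cannot exploit.
feats = encoder(x)
task_loss = nn.functional.cross_entropy(predictor(feats), y)
confusion = -nn.functional.mse_loss(adversary(feats), confounder)
opt_main.zero_grad()
(task_loss + confusion).backward()
opt_main.step()
```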
Bridging the Generalization Gap: Training Robust Models on Confounded Biological Data
- Computer Science, ArXiv
- 2018
Methods to control for confounding factors and further improve prediction performance are proposed, introducing OrthoNormal basis construction In cOnfounding factor Normalization (ONION) to remove confounding covariates and using the Domain-Adversarial Neural Network (DANN) to penalize models for encoding confounder information.
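Since the summary mentions DANN, a minimal sketch of its core building block may help: a gradient reversal layer that is the identity on the forward pass and flips the gradient sign on the backward pass, so a confounder/domain head penalizes the encoder for encoding confounder information. The example data are assumptions for illustration.

```python
import torch

class GradReverse(torch.autograd.Function):
    """DANN-style gradient reversal: identity forward, negated (scaled) gradient backward."""
    @staticmethod
    def forward(ctx, x, alpha=1.0):
        ctx.alpha = alpha
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.alpha * grad_output, None

# A confounder head attached through this layer pushes the encoder toward
# features from which the confounder cannot be recovered.
feats = torch.randn(8, 16, requires_grad=True)
reversed_feats = GradReverse.apply(feats, 1.0)
reversed_feats.sum().backward()
print(feats.grad[0, :3])  # gradients flow back with flipped sign
```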
Metadata Normalization
- Computer Science, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
The Metadata Normalization (MDN) layer is introduced, a new batch-level operation which can be used end-to-end within the training framework, to correct the influence of metadata on the feature distribution.
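A simplified, batch-level residualization in the spirit of the MDN layer is sketched below: fit a linear model of the features on the metadata within the batch and keep the residual. This is an assumption-laden simplification; the actual layer also maintains running estimates and treats the variable of interest separately.

```python
import torch

def metadata_residualize(features, metadata):
    """Fit features ~ metadata by least squares within the batch and return
    the part of the features not explained by the metadata (intercept kept)."""
    n = features.shape[0]
    X = torch.cat([torch.ones(n, 1), metadata], dim=1)  # add intercept column
    beta = torch.linalg.lstsq(X, features).solution     # (1 + d_meta, d_feat)
    return features - metadata @ beta[1:]               # remove metadata effect

feats = torch.randn(64, 8)
meta = torch.randn(64, 2)   # e.g. age and sex as confounding metadata
clean = metadata_residualize(feats, meta)
```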
Causality-aware counterfactual confounding adjustment for feature representations learned by deep models.
- Computer Science
- 2020
This work describes how a recently proposed counterfactual approach, developed to deconfound linear structural causal models, can still be used to deconfound the feature representations learned by deep neural network (DNN) models, and validates the proposed methodology using colored versions of the MNIST dataset.
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
- Computer Science, ICML
- 2015
Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
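For reference, the Batch Normalization operation itself is standard: normalize each feature with batch statistics, then apply a learned scale and shift. The sketch below is a minimal training-mode version without the running-statistics bookkeeping.

```python
import torch

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch Normalization for a (batch, features) tensor:
    normalize per feature with batch statistics, then scale and shift."""
    mean = x.mean(dim=0, keepdim=True)
    var = x.var(dim=0, unbiased=False, keepdim=True)
    x_hat = (x - mean) / torch.sqrt(var + eps)
    return gamma * x_hat + beta

x = torch.randn(32, 8)
out = batch_norm(x, gamma=torch.ones(8), beta=torch.zeros(8))
# Matches torch.nn.BatchNorm1d(8) in training mode, up to running-stat updates.
```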
Adam: A Method for Stochastic Optimization
- Computer Science, ICLR
- 2015
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
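The Adam update itself is compact enough to show: exponential moving averages of the gradient and its square, bias correction, then an elementwise-scaled step. The sketch below is a single illustrative update.

```python
import torch

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update using bias-corrected first and second moment estimates."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (torch.sqrt(v_hat) + eps)
    return param, m, v

w = torch.zeros(3)
m, v = torch.zeros(3), torch.zeros(3)
grad = torch.tensor([0.1, -0.2, 0.3])
w, m, v = adam_step(w, grad, m, v, t=1)
```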
Extracting patterns of morphometry distinguishing HIV associated neurodegeneration from mild cognitive impairment via group cardinality constrained classification
- Biology, Psychology, Human Brain Mapping
- 2016
A data-driven, nonparametric model to identify morphometric patterns separating HAND from MCI due to non-HIV conditions in this older age group contributed to distinguishing, with high accuracy, HAND-related impairment from the cognitive impairment found in the HIV-uninfected MCI cohort.
Chained regularization for identifying brain patterns specific to HIV infection
- Medicine, NeuroImage
- 2018
Leveraging Batch Normalization for Vision Transformers
- Computer Science, 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
- 2021
The initial exploration reveals frequent crashes in model training when all LN layers are directly replaced with BN, attributed to the un-normalized feed-forward network (FFN) blocks; it is therefore proposed to add a BN layer between the two linear layers of the FFN block, where stabilized training statistics are observed, resulting in a pure BN-based architecture.
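A minimal sketch of the proposed modification, under assumed ViT-style dimensions, is a feed-forward block with a BatchNorm1d layer between the two linear layers; the transpose handles BatchNorm1d's (batch, channels, length) layout for token sequences.

```python
import torch
import torch.nn as nn

class FFNWithBN(nn.Module):
    """Transformer feed-forward block with BatchNorm inserted between the two
    linear layers, in the spirit of the cited LN-free (BN-only) ViT variant."""
    def __init__(self, dim=384, hidden=1536):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.bn = nn.BatchNorm1d(hidden)
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x):  # x: (batch, tokens, dim)
        h = self.fc1(x)
        h = self.bn(h.transpose(1, 2)).transpose(1, 2)  # BatchNorm1d expects (batch, channels, length)
        return self.fc2(self.act(h))

tokens = torch.randn(4, 197, 384)  # assumed ViT-S/16-like token sequence
out = FFNWithBN()(tokens)
```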
Rényi Fair Inference
- Computer Science
- 2019
This paper uses Rényi correlation as a measure of fairness of machine learning models, develops a general training framework to impose fairness, and proposes a min-max formulation that balances accuracy and fairness when solved to optimality.
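As a simplified stand-in for the idea (not the paper's estimator), the sketch below penalizes the squared Pearson correlation between the model's soft predictions and a sensitive attribute; the actual method estimates Rényi (maximal) correlation via a min-max formulation, and the trade-off weight `lam` here is an assumption.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
lam = 0.5  # assumed fairness/accuracy trade-off

x = torch.randn(128, 10)
y = torch.randint(0, 2, (128,)).float()
s = torch.randint(0, 2, (128,)).float()  # sensitive attribute

pred = torch.sigmoid(model(x).squeeze(1))
task_loss = nn.functional.binary_cross_entropy(pred, y)
# Pearson correlation between predictions and the sensitive attribute (simple surrogate).
corr = ((pred - pred.mean()) * (s - s.mean())).mean() / (pred.std() * s.std() + 1e-8)
loss = task_loss + lam * corr ** 2
opt.zero_grad()
loss.backward()
opt.step()
```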