Visual Analysis of Discrimination in Machine Learning

@article{Wang2021VisualAO,
  title={Visual Analysis of Discrimination in Machine Learning},
  author={Qianwen Wang and Zhen Xu and Zhutian Chen and Yong Wang and Shixia Liu and Huamin Qu},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2021},
  volume={27},
  pages={1470-1480}
}
The growing use of automated decision-making in critical applications, such as crime prediction and college admission, has raised questions about fairness in machine learning. How can we decide whether different treatments are reasonable or discriminatory? In this paper, we investigate discrimination in machine learning from a visual analytics perspective and propose an interactive visualization tool, DiscriLens, to support a more comprehensive analysis. To reveal detailed information on… 
Online Decision Trees with Fairness
TLDR
A framework for fairness-aware online decision trees over data streams with possible distribution drift is proposed, together with two online tree-growth algorithms that fulfill different online fair decision-making requirements.
From Learning to Relearning: A Framework for Diminishing Bias in Social Robot Navigation
TLDR
This work investigates a framework for diminishing bias in social robot navigation models, so that robots can both plan and adapt their paths based on physical and social demands.
IF-City: Intelligible Fair City Planning to Measure, Explain and Mitigate Inequality
TLDR
An interactive visual tool, Intelligible Fair City Planner (IF-City), is proposed to help urban planners perceive inequality across groups, identify and attribute sources of inequality, and mitigate inequality with automatic allocation simulations and constraint-satisfying recommendations.
Visual Identification of Problematic Bias in Large Label Spaces
TLDR
Models and datasets with large label spaces can be systematically and visually analyzed and compared to make informed fairness assessments that tackle problematic bias, and the approach can be integrated into classical model and data pipelines.
DENOUNCER: Detection of Unfairness in Classifiers
TLDR
This work presents an efficient method for detecting groups that are treated unfairly under various fairness definitions, implemented in DENOUNCER, an interactive system that lets users explore different fairness measures of a (trained) classifier on given test data.
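The core check behind such a system can be illustrated with a short, hypothetical sketch: compare a trained classifier's positive-prediction rate within each subgroup of the test data against the overall rate (a demographic-parity-style gap) and flag groups whose gap exceeds a threshold. The function name, column names, and threshold below are illustrative assumptions, not DENOUNCER's actual API.

```python
# Hypothetical sketch of per-group unfairness detection: flag subgroups
# whose positive-prediction rate deviates from the overall rate.
import pandas as pd

def flag_unfair_groups(df: pd.DataFrame, group_col: str,
                       pred_col: str, threshold: float = 0.1):
    """Return groups whose positive-prediction rate deviates from the
    overall rate by more than `threshold` (a demographic-parity gap)."""
    overall_rate = df[pred_col].mean()
    group_rates = df.groupby(group_col)[pred_col].mean()
    gaps = (group_rates - overall_rate).abs()
    return gaps[gaps > threshold].sort_values(ascending=False)

# Toy test data with a sensitive attribute and model predictions.
test = pd.DataFrame({
    "race":  ["a", "a", "b", "b", "b", "c", "c", "c"],
    "y_hat": [1,   1,   0,   0,   1,   1,   1,   1],
})
print(flag_unfair_groups(test, group_col="race", pred_col="y_hat"))
```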

References

Showing 1-10 of 71 references
Achieving Non-Discrimination in Data Release
TLDR
The key to discrimination discovery and prevention is to find meaningful partitions that provide quantitative evidence for judging discrimination; a simple criterion for the claim of non-discrimination is also developed.
Combating discrimination using Bayesian networks
TLDR
This work proposes a discrimination discovery method that models the probability distribution of a class using Bayesian networks, and a classification method that corrects for the discovered discrimination without using protected attributes in the decision process.
FAIRVIS: Visual Analytics for Discovering Intersectional Bias in Machine Learning
TLDR
FAIRVIS is a mixed-initiative visual analytics system that integrates a novel subgroup discovery technique for users to audit the fairness of machine learning models and demonstrates how interactive visualization may help data scientists and the general public understand and create more equitable algorithms.
Achieving non-discrimination in prediction
TLDR
This paper adopts a causal model of the data generation mechanism, formally defines discrimination in a population, in a dataset, and in prediction, and develops a two-phase framework for constructing a discrimination-free classifier with a theoretical guarantee.
A Causal Framework for Discovering and Removing Direct and Indirect Discrimination
TLDR
This paper proposes an effective algorithm for discovering direct and indirect discrimination, as well as an algorithm for precisely removing both types of discrimination while retaining good data utility.
A study of top-k measures for discrimination discovery
TLDR
This paper studies the extent to which the sets of top-k ranked rules agree under any pair of measures, including risk difference, risk ratio, odds ratio, and a few others.
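For concreteness, here is a minimal worked example of three of the measures named above, computed from a 2x2 contingency table. The counts are invented for illustration, and the definitions follow the standard forms used in discrimination discovery (negative-outcome risk in the protected group versus the unprotected group).

```python
# Compute risk difference, risk ratio, and odds ratio from a 2x2 table.
def discrimination_measures(a: int, b: int, c: int, d: int):
    """a/b: protected group with/without the negative outcome,
    c/d: unprotected group with/without the negative outcome."""
    p1 = a / (a + b)  # risk of negative outcome, protected group
    p2 = c / (c + d)  # risk of negative outcome, unprotected group
    return {
        "risk_difference": p1 - p2,
        "risk_ratio": p1 / p2,
        "odds_ratio": (p1 / (1 - p1)) / (p2 / (1 - p2)),
    }

# E.g., 40 of 100 protected applicants denied vs. 20 of 100 unprotected:
print(discrimination_measures(a=40, b=60, c=20, d=80))
# -> risk_difference = 0.20, risk_ratio = 2.0, odds_ratio ~ 2.67
```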
Discrimination-aware data mining
TLDR
This approach leads to a precise formulation of the redlining problem, a formal result relating discriminatory rules to apparently safe ones by means of background knowledge, and an empirical assessment of the results on the German credit dataset.
Data preprocessing techniques for classification without discrimination
TLDR
This paper surveys and extends existing data preprocessing techniques: suppressing the sensitive attribute, massaging the dataset by changing class labels, and reweighing or resampling the data to remove discrimination without relabeling instances. Results of experiments on real-life data are presented.
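The reweighing idea mentioned above can be sketched in a few lines: assign each (sensitive value, class label) combination the weight that would make the sensitive attribute statistically independent of the label, i.e. w(s, y) = P(s)P(y)/P(s, y), estimated from counts. The column names and toy data below are assumptions for illustration, not the paper's own code.

```python
# Sketch of reweighing: weight each (sensitive value, label) combination
# so the sensitive attribute becomes independent of the class label.
import pandas as pd

def reweigh(df: pd.DataFrame, sensitive: str, label: str) -> pd.Series:
    """Weight w(s, y) = P(s) * P(y) / P(s, y), estimated from counts."""
    n = len(df)
    p_s = df[sensitive].value_counts(normalize=True)
    p_y = df[label].value_counts(normalize=True)
    p_sy = df.groupby([sensitive, label]).size() / n
    return df.apply(
        lambda row: p_s[row[sensitive]] * p_y[row[label]]
                    / p_sy[(row[sensitive], row[label])],
        axis=1,
    )

data = pd.DataFrame({
    "sex":   ["f", "f", "f", "m", "m", "m", "m", "m"],
    "hired": [0,   0,   1,   1,   1,   1,   0,   1],
})
data["weight"] = reweigh(data, sensitive="sex", label="hired")
# Under-represented favourable combinations (e.g. hired women here)
# get weights > 1; over-represented ones get weights < 1.
print(data)
```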
Avoiding Discrimination through Causal Reasoning
TLDR
This work crisply articulates why and when observational criteria fail, formalizing what was previously a matter of opinion; it puts forward natural causal non-discrimination criteria and develops algorithms that satisfy them.
Algorithmic Bias: From Discrimination Discovery to Fairness-aware Data Mining
TLDR
The aim of this tutorial is to survey algorithmic bias, presenting its most common variants, with an emphasis on the algorithmic techniques and key ideas developed to derive efficient solutions.