FairNN - Conjoint Learning of Fair Representations for Fair Decisions

Tongxin Hu, Vasileios Iosifidis, Wentong Liao, Hang Zhang, M. Yang, Eirini Ntoutsi, B. Rosenhahn
In this paper, we propose FairNN, a neural network that performs joint feature representation and classification for fairness-aware learning. Our approach optimizes a multi-objective loss function which (a) learns a fair representation by suppressing protected attributes, (b) maintains the information content by minimizing a reconstruction loss, and (c) allows solving a classification task in a fair manner by minimizing the classification error while respecting the equalized odds-based…
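The three-part objective described above can be sketched numerically. This is a minimal illustration, assuming binary labels and a binary protected attribute; the weights `alpha` and `beta` and the function names are hypothetical, and the paper's actual method trains an autoencoder end-to-end rather than evaluating the loss on fixed arrays:

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest difference between groups in TPR (y=1) or FPR (y=0).
    Assumes both groups are present within each true-label slice."""
    gaps = []
    for y in (0, 1):
        rates = [y_pred[(y_true == y) & (group == g)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

def fairnn_style_loss(x, x_rec, y_true, y_prob, group, alpha=1.0, beta=1.0):
    """Sketch of a multi-objective loss combining (a) a fairness penalty,
    (b) a reconstruction loss, and (c) a classification loss."""
    eps = 1e-12
    l_cls = -np.mean(y_true * np.log(y_prob + eps)
                     + (1 - y_true) * np.log(1 - y_prob + eps))  # cross-entropy
    l_rec = np.mean((x - x_rec) ** 2)                            # reconstruction
    l_fair = equalized_odds_gap(y_true, (y_prob > 0.5).astype(float), group)
    return l_cls + alpha * l_rec + beta * l_fair
```

A perfectly group-symmetric predictor drives the fairness term to zero, so the weight `beta` trades classification accuracy against the equalized-odds gap.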
Multi-Fair Pareto Boosting
A new fairness notion is introduced, Multi-Max Mistreatment (MMM), which measures unfairness while considering both the (multi-attribute) protected group and the class membership of instances, and a multi-objective problem formulation is proposed to learn an MMM-fair classifier.
FABBOO - Online Fairness-Aware Learning Under Class Imbalance
FABBOO is an online boosting approach that changes the training distribution in an online fashion based on both stream imbalance and the discriminatory behavior of the model evaluated over the historical stream, and shows that long-term consideration of class imbalance and fairness is beneficial for maintaining models with good predictive and fairness-related performance.
Introduction to The Special Section on Bias and Fairness in AI
This special section includes six articles presenting perspectives on bias and fairness from different angles, aiming to fill the gap and bring together scholars of these disciplines working on fairness.
Online Fairness-Aware Learning with Imbalanced Data Streams
Data-driven learning algorithms are employed in many online applications in which data become available over time, such as network monitoring, stock price prediction, and job applications. …
Teaching Responsible Machine Learning to Engineers
With the increasing application of machine learning in practice, there is a growing need to incorporate ethical considerations in engineering curricula. In this paper, we reflect upon the development…


Learning Fair and Transferable Representations
This work argues that the goal of imposing demographic parity can be substantially facilitated within a multitask learning setting, and derives learning bounds establishing that the learned representation transfers well to novel tasks in terms of both prediction performance and fairness metrics.
FNNC: Achieving Fairness through Neural Networks
An automated solution to achieve fairness in classification, easily extendable to many fairness constraints, is proposed, and experiments show that FNNC performs as well as the state of the art, if not better.
Learning Deep Fair Graph Neural Networks
This work investigates how to impose this constraint in the different layers of a deep graph neural network through the use of two different regularizers, based on a simple convex relaxation and a Wasserstein distance formulation of demographic parity.
A Neural Network Framework for Fair Classifier
An automated solution generalizable over any fairness constraint is proposed: a neural network trained on batches that directly enforces the fairness constraint as the loss function without modifying it further.
Fairness-enhancing interventions in stream classification
This work proposes fairness-enhancing interventions that modify the input data so that the outcome of any stream classifier applied to that data will be fair.
Learning Adversarially Fair and Transferable Representations
This paper presents the first in-depth experimental demonstration of fair transfer learning and demonstrates empirically that the authors' learned representations admit fair predictions on new tasks while maintaining utility, an essential goal of fair representation learning.
FAHT: An Adaptive Fairness-aware Decision Tree Classifier
This paper introduces a learning mechanism to design a fair classifier for online stream-based decision-making: an extension of the well-known Hoeffding Tree algorithm for decision tree induction over streams that also accounts for fairness.
AdaFair: Cumulative Fairness Adaptive Boosting
AdaFair is proposed: a fairness-aware classifier based on AdaBoost that further updates instance weights in each boosting round, taking into account a cumulative notion of fairness over all current ensemble members, while explicitly tackling class imbalance by optimizing the number of ensemble members for balanced classification error.
A Confidence-Based Approach for Balancing Fairness and Accuracy
A new measure of fairness, called resilience to random bias (RRB), is proposed, and it is demonstrated that RRB distinguishes well between the authors' naive and sensible fairness algorithms and, together with bias and accuracy, provides a more complete picture of the fairness of an algorithm.
Adaptive Sensitive Reweighting to Mitigate Bias in Fairness-aware Classification
This work presents a process that iteratively adapts training sample weights with a theoretically grounded model, achieving better or similar trade-offs between accuracy and unfairness mitigation on real-world and synthetic datasets.
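The idea of iteratively adapting sample weights can be sketched with a single illustrative update step. Everything here (the function name, the `lr` step size, the choice of which samples to upweight) is a hypothetical simplification, not the paper's actual theoretically grounded update rule:

```python
import numpy as np

def adaptive_reweighting_step(y_true, y_pred, group, weights, lr=0.5):
    """One illustrative reweighting step: upweight the missed positives of the
    group that currently receives the lower positive-prediction rate, so the
    next training round pays more attention to them."""
    pos_rate = {g: y_pred[group == g].mean() for g in np.unique(group)}
    disadvantaged = min(pos_rate, key=pos_rate.get)  # group with fewer positives
    new_w = weights.astype(float).copy()
    mask = (group == disadvantaged) & (y_true == 1) & (y_pred == 0)
    new_w[mask] *= np.exp(lr)       # boost false negatives of that group
    return new_w / new_w.sum()      # renormalize to a distribution
```

Repeating such a step between training rounds shifts the effective training distribution toward the disadvantaged group until the positive-prediction rates equalize.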