Fairness-Aware Online Meta-learning

@article{Zhao2021FairnessAwareOM,
  title={Fairness-Aware Online Meta-learning},
  author={Chen Zhao and Feng Chen and Bhavani M. Thuraisingham},
  journal={Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery \& Data Mining},
  year={2021}
}
In contrast to offline settings, two research paradigms have been devised for online learning: (1) Online Meta-Learning (OML) [6, 20, 26] learns good priors over model parameters (i.e., learning to learn) in a sequential setting where tasks are revealed one after another. Although it provides a sub-linear regret bound, such techniques completely ignore the importance of learning with fairness, which is a significant hallmark of human intelligence. (2) Online Fairness-Aware Learning [1, 8, 21…
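As a concrete illustration of the first paradigm, below is a minimal Python sketch of a fairness-aware online meta-learning loop: tasks arrive one at a time, a shared prior over model parameters is adapted to each task, and the prior is then updated with a loss that mixes prediction error and a fairness penalty. It is a first-order sketch under assumptions of my own (logistic model, demographic-parity-style penalty, illustrative names such as `lam` and `fair_online_meta_learning`), not the paper's algorithm.

```python
# First-order sketch of fairness-aware online meta-learning (illustrative only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logistic(w, X, y):
    # Gradient of the average logistic loss w.r.t. w.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def grad_fairness(w, X, s):
    # Gradient of a squared demographic-parity-style penalty:
    # (mean score of group s=1 minus mean score of group s=0)^2.
    # Assumes both protected groups appear in the batch.
    p = sigmoid(X @ w)
    gap = p[s == 1].mean() - p[s == 0].mean()
    dp = p * (1 - p)  # derivative of the sigmoid
    d_gap = (X[s == 1] * dp[s == 1][:, None]).mean(0) - \
            (X[s == 0] * dp[s == 0][:, None]).mean(0)
    return 2 * gap * d_gap

def fair_online_meta_learning(task_stream, dim, inner_lr=0.1, meta_lr=0.05,
                              inner_steps=5, lam=1.0):
    """Adapt to each incoming task, then nudge the shared prior toward
    parameters that are both accurate and fair on that task."""
    theta = np.zeros(dim)  # shared prior over model parameters
    for X_sup, y_sup, s_sup, X_qry, y_qry, s_qry in task_stream:
        # Inner loop: task-specific adaptation starting from the current prior.
        w = theta.copy()
        for _ in range(inner_steps):
            w -= inner_lr * (grad_logistic(w, X_sup, y_sup)
                             + lam * grad_fairness(w, X_sup, s_sup))
        # Outer (meta) update: evaluate the adapted parameters on the query
        # split and move the prior in the fair-and-accurate direction
        # (first-order approximation, second derivatives ignored).
        meta_grad = grad_logistic(w, X_qry, y_qry) + lam * grad_fairness(w, X_qry, s_qry)
        theta -= meta_lr * meta_grad
    return theta
```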

Citations

Adaptive Fairness-Aware Online Meta-Learning for Changing Environments
TLDR
A novel adaptive fairness-aware online meta-learning algorithm, FairSAOML, is proposed that adapts to changing environments in both bias control and model precision and significantly outperforms alternatives based on the best prior online learning approaches.
Comprehensive Fair Meta-learned Recommender System
TLDR
A comprehensive fair meta-learning framework, named CLOVER, is proposed for ensuring the fairness of meta-learned recommendation models; it systematically studies three kinds of fairness in recommender systems, namely individual fairness, counterfactual fairness, and group fairness, and satisfies all three via a multi-task adversarial learning scheme.
A Meta-learning Approach to Fair Ranking
TLDR
A meta-learning framework is adopted to explicitly train a meta-learner from an unbiased sampled dataset and, simultaneously, to train a listwise learning-to-rank (LTR) model on the whole (biased) dataset governed by "fair" loss weights.
Bias Mitigation for Machine Learning Classifiers: A Comprehensive Survey
TLDR
This paper provides a comprehensive survey of bias mitigation methods for achieving fairness in machine learning (ML) models and investigates how existing bias mitigation methods are evaluated in the literature.
Layer Adaptive Deep Neural Networks for Out-of-distribution Detection
TLDR
A novel layer-adaptive OOD detection framework for DNNs is proposed that fully utilizes the intermediate layers’ outputs, is robust against OOD inputs of varying complexity, and outperforms state-of-the-art competitors by a large margin on some real-world datasets.

References

Showing 1-10 of 33 references
Fairness warnings and fair-MAML: learning fairly with minimal data
TLDR
Two algorithms are proposed: Fairness Warnings and Fair-MAML, a model-agnostic algorithm that provides interpretable boundary conditions for when a fairly trained model may not behave fairly on similar but slightly different tasks within a given domain.
Learning Fair Representations
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly).
Achieving Fairness in the Stochastic Multi-armed Bandit Problem
TLDR
A fairness-aware regret, called r-Regret, is defined that takes the fairness constraints into account and naturally extends the conventional notion of regret; it holds uniformly over time irrespective of the choice of the learning algorithm.
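To make the constrained-bandit setting concrete, here is a hedged sketch of one simple way to combine UCB exploration with per-arm pull-fraction requirements; the constraint form (each arm i must receive at least a fraction r[i] of the pulls) and the quota-then-UCB rule are illustrative assumptions, not necessarily the paper's algorithm.

```python
# Quota-first UCB sketch for a fairness-constrained stochastic bandit (illustrative only).
import math

def fair_ucb(pull_arm, n_arms, horizon, r):
    """pull_arm(i) -> reward in [0, 1]; r[i] = required pull fraction, sum(r) <= 1."""
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for t in range(1, horizon + 1):
        # Fairness first: find the arm furthest behind its quota, if any.
        deficits = [r[i] * t - counts[i] for i in range(n_arms)]
        worst = max(range(n_arms), key=lambda i: deficits[i])
        if deficits[worst] > 0:
            arm = worst
        else:
            # Otherwise explore/exploit with a standard UCB1 index.
            arm = max(range(n_arms),
                      key=lambda i: means[i] + math.sqrt(2 * math.log(t) / max(counts[i], 1)))
        reward = pull_arm(arm)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return counts, means
```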
Fairness Constraints: Mechanisms for Fair Classification
TLDR
This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows for a fine-grained control on the degree of fairness, often at a small cost in terms of accuracy.
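The decision-boundary (un)fairness measure mentioned above can be illustrated in a few lines: for a linear classifier the signed distance of a point to the boundary is (w·x + b)/||w||, and the measure is the empirical covariance between the protected attribute and that signed distance. The sketch below is my own illustration (function name and normalization are assumptions), not the authors' released code; constraining the measure's absolute value is what gives the fine-grained fairness/accuracy trade-off the TLDR refers to.

```python
# Decision-boundary covariance as a fairness measure for a linear classifier (illustrative only).
import numpy as np

def boundary_covariance(w, b, X, s):
    """Empirical covariance between protected attribute s in {0, 1}
    and the signed distance of each row of X to the boundary (w, b)."""
    d = (X @ w + b) / np.linalg.norm(w)   # signed distance to the hyperplane
    return np.mean((s - s.mean()) * (d - d.mean()))

# Usage sketch: minimize the usual classification loss subject to
# |boundary_covariance(w, b, X, s)| <= c; smaller c enforces a fairer boundary.
```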
Fair Meta-Learning For Few-Shot Classification
TLDR
This work proposes a novel fair, fast-adapted few-shot meta-learning approach that efficiently mitigates biases during meta-training by controlling the decision boundary covariance between the protected variable and the signed distance from the feature vectors to the decision boundary.
Unfairness Discovery and Prevention For Few-Shot Regression
TLDR
It is demonstrated that the proposed unfairness discovery and prevention approaches efficiently detect discrimination and mitigate biases in model output, and generalize both accuracy and fairness to unseen tasks with a limited number of training samples.
Rank-Based Multi-task Learning for Fair Regression
Chen Zhao and Feng Chen. 2019 IEEE International Conference on Data Mining (ICDM), 2019.
In this work, we develop a novel fairness learning approach for multi-task regression models based on a biased training dataset, using a popular rank-based non-parametric independence test, i.e., …
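As an illustration of using a rank-based non-parametric independence test for fairness auditing, the snippet below compares the distribution of model predictions across two protected groups; the specific choice of the Mann-Whitney U test and the helper name `rank_based_fairness_check` are assumptions of mine, not necessarily the test used in the paper.

```python
# Rank-based disparity check between protected groups (illustrative only).
from scipy.stats import mannwhitneyu

def rank_based_fairness_check(y_pred, s, alpha=0.05):
    """Test whether predictions for groups s == 1 and s == 0 come from the
    same distribution; a small p-value flags a potential disparity."""
    stat, p_value = mannwhitneyu(y_pred[s == 1], y_pred[s == 0],
                                 alternative="two-sided")
    return {"U": stat, "p_value": p_value, "flagged": p_value < alpha}
```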
Online Learning: A Comprehensive Survey
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
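The "compatible with any model trained with gradient descent" idea can be shown with a one-task, one-inner-step sketch that differentiates through the adaptation step with autograd. This is an illustration of the MAML objective, not the authors' reference implementation; the linear regression model and names are assumptions.

```python
# One MAML meta-update on a single task, differentiating through the inner step (illustrative only).
import torch

def maml_step(theta, support, query, inner_lr=0.1, meta_lr=0.01):
    """theta: flat parameter tensor of a linear model y = x @ theta.
    support/query: (X, y) pairs for one task."""
    X_s, y_s = support
    X_q, y_q = query
    # Inner step: adapt theta on the support set (keep the graph so the
    # outer gradient can flow through the adaptation).
    inner_loss = torch.mean((X_s @ theta - y_s) ** 2)
    (g,) = torch.autograd.grad(inner_loss, theta, create_graph=True)
    theta_adapted = theta - inner_lr * g
    # Outer step: evaluate the adapted parameters on the query set and
    # update the meta-parameters with the full (second-order) gradient.
    outer_loss = torch.mean((X_q @ theta_adapted - y_q) ** 2)
    (meta_g,) = torch.autograd.grad(outer_loss, theta)
    return (theta - meta_lr * meta_g).detach().requires_grad_(True), outer_loss.item()

# Usage sketch:
# theta = torch.zeros(5, requires_grad=True)
# theta, loss = maml_step(theta, (X_s, y_s), (X_q, y_q))
```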
Deep Online Learning via Meta-Learning: Continual Adaptation for Model-Based RL
TLDR
This work develops a method for continual online learning from an incoming stream of data, using deep neural network models, and demonstrates that MOLe outperforms alternative prior methods, and enables effective continuous adaptation in non-stationary task distributions such as varying terrains, motor failures, and unexpected disturbances.