• Corpus ID: 239998542

Fair Sequential Selection Using Supervised Learning Models

Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan
We consider a selection problem where sequentially arrived applicants apply for a limited number of positions/jobs. At each time step, a decision maker accepts or rejects the given applicant using a pre-trained supervised learning model until all the vacant positions are filled. In this paper, we discuss whether the fairness notions (e.g., equal opportunity, statistical parity, etc.) that are commonly used in classification problems are suitable for the sequential selection problems. In… 
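The fairness notions named in the abstract can be made concrete. Below is a minimal sketch (not the paper's method) of how the statistical-parity and equal-opportunity gaps are typically computed for a binary selection rule; the arrays `decisions`, `labels`, and `groups` are hypothetical example data.

```python
# Hypothetical example: decisions[i] = 1 if applicant i was accepted,
# labels[i] = 1 if applicant i was truly qualified,
# groups[i] in {0, 1} is the applicant's protected-group membership.

def statistical_parity_gap(decisions, groups):
    """|P(accept | group 0) - P(accept | group 1)|: gap in acceptance rates."""
    def rate(g):
        accepted = sum(d for d, grp in zip(decisions, groups) if grp == g)
        total = sum(1 for grp in groups if grp == g)
        return accepted / total
    return abs(rate(0) - rate(1))

def equal_opportunity_gap(decisions, labels, groups):
    """Gap in true-positive rates: acceptance rates among qualified applicants."""
    def tpr(g):
        accepted = sum(d for d, y, grp in zip(decisions, labels, groups)
                       if grp == g and y == 1)
        qualified = sum(1 for y, grp in zip(labels, groups) if grp == g and y == 1)
        return accepted / qualified
    return abs(tpr(0) - tpr(1))

decisions = [1, 0, 1, 1, 0, 1, 0, 0]
labels    = [1, 0, 1, 0, 1, 1, 0, 1]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_gap(decisions, groups))           # 0.5
print(equal_opportunity_gap(decisions, labels, groups))    # 2/3
```

Note that both metrics are defined per decision in a standard classification setting; part of the paper's point is that applying them unchanged to sequential selection with a fixed number of positions is not straightforward.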



Improving Fairness and Privacy in Selection Problems
This work studies the possibility of using a differentially private exponential mechanism as a post-processing step to improve both the fairness and privacy of supervised learning models, and shows that the exponential mechanism can make the decision-making process perfectly fair.
Fairness in Learning: Classic and Contextual Bandits
A tight connection between fairness and the KWIK (Knows What It Knows) learning model is proved, yielding a provably fair algorithm for the linear contextual bandit problem with a polynomial dependence on the dimension, and a worst-case exponential gap in regret between fair and non-fair learning algorithms.
Fairness in Learning-Based Sequential Decision Algorithms: A Survey
This survey reviews existing literature on the fairness of data-driven sequential decision-making and focuses on two types of sequential decisions: (1) past decisions have no impact on the underlying user population and thus no impact on future data; and (2) past decisions have an impact on the underlying user population and therefore on the future data, which can then impact future decisions.
Efficient candidate screening under multiple tests and implications for fairness
This paper characterizes the optimal policy when candidates constitute a single group and addresses the multi-group setting, demonstrating that when noise levels vary across groups, a fundamental impossibility emerges: one cannot administer the same number of tests, subject candidates to the same decision rule, and yet realize the same outcomes in both groups.
Offline Contextual Bandits with High Probability Fairness Guarantees
This work provides a theoretical analysis of RobinHood, an offline contextual bandit algorithm designed to satisfy a broad family of fairness constraints, and proves that it will not return an unfair solution with probability greater than a user-specified threshold.
Algorithmic Decision Making and the Cost of Fairness
This work reformulates algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities; the framework applies both to algorithms and to human decision makers carrying out structured decision rules.
Fairness Constraints: A Flexible Approach for Fair Classification
A flexible constraint-based framework to enable the design of fair margin-based classifiers and a general and intuitive measure of decision boundary unfairness, which serves as a tractable proxy to several of the most popular computational definitions of unfairness from the literature.
Group Fairness for the Allocation of Indivisible Goods
This work considers the problem of fairly dividing a collection of indivisible goods among a set of players and introduces two “up to one good” style relaxations, which imply most existing notions of individual fairness.
Differentially Private Fair Learning
New tradeoffs between fairness, accuracy, and privacy emerge only when requiring all three properties, and it is shown that these tradeoffs can be milder if group membership may be used at test time.
Selection Problems in the Presence of Implicit Bias
A theoretical model for studying the effects of implicit bias on selection decisions, and a way of analyzing possible procedural remedies for implicit bias within this model are proposed.