Maximizing Welfare with Incentive-Aware Evaluation Mechanisms

@article{Haghtalab2020MaximizingWW,
  title={Maximizing Welfare with Incentive-Aware Evaluation Mechanisms},
  author={Nika Haghtalab and Nicole Immorlica and Brendan Lucier and Jack Wang},
  journal={arXiv preprint arXiv:2011.01956},
  year={2020}
}
Motivated by applications such as college admissions and insurance rate determination, we propose an evaluation problem in which the inputs are controlled by strategic individuals who can modify their features at a cost. A learner can only partially observe the features and aims to classify individuals with respect to a quality score. The goal is to design an evaluation mechanism that maximizes the overall quality score, i.e., welfare, in the population, taking any strategic updating into account…
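The setup in the abstract can be illustrated with a minimal toy sketch. This is not the paper's actual model: the functions `best_response` and `welfare`, the linear cost, and the single observable feature are all simplifying assumptions made here for illustration. Agents raise a feature to clear an evaluation threshold whenever the reward outweighs their cost, and welfare is the population's total quality after everyone best-responds.

```python
def best_response(feature, threshold, reward=1.0, cost_per_unit=0.5):
    """Agent raises its feature to the threshold iff the reward covers the cost."""
    gap = max(0.0, threshold - feature)
    return feature + gap if gap * cost_per_unit <= reward else feature

def welfare(features, threshold):
    """Total quality (here: sum of features) after all agents best-respond."""
    return sum(best_response(f, threshold) for f in features)

population = [0.2, 0.5, 0.9, 1.4]
print(welfare(population, threshold=1.0))
```

Even this toy version shows the paper's core tension: the threshold the evaluator picks changes which agents find improvement worthwhile, so the mechanism itself shapes the welfare it is trying to measure.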


Incentive-Aware PAC Learning
TLDR
This work proposes an incentive-aware version of the ERM principle which has asymptotically optimal sample complexity, and gives a sample complexity bound that is, curiously, independent of the hypothesis class for the ERM principle restricted to incentive-compatible classifiers.
Automated Mechanism Design for Classification with Partial Verification
TLDR
This work studies the problem of automated mechanism design with partial verification, where each type can (mis)report only a restricted set of types (rather than any other type), induced by the principal’s limited verification power, and presents a number of algorithmic and structural results.
Linear Models are Robust Optimal Under Strategic Behavior
TLDR
This work studies the problem of robust decision-making under strategic behavior, and explores the computational problem of searching for the robust optimal decision rule and its connection to distributionally robust optimization.
Information Discrepancy in Strategic Learning
TLDR
This work considers a game where a principal deploys a decision rule in an attempt to optimize the whole population’s welfare, and agents strategically adapt to it to receive better scores, and shows that in many natural cases, optimal improvement is guaranteed simultaneously for all subgroups in equilibrium.
Setting Fair Incentives to Maximize Improvement
TLDR
This work provides algorithms for optimal and near-optimal improvement for both social welfare and fairness objectives and shows a placement of target levels exists that is approximately optimal for the social welfare of each group.
On classification of strategic agents who can both game and improve
TLDR
This work provides an algorithm that determines whether there exists a linear classifier that classifies all agents accurately and causes all improvable agents to become qualified, and shows that maximizing the number of true positives subject to no false positive is NP-hard in the full linear model.
Strategic Ranking
TLDR
It is found that randomization in the ranking reward design can mitigate two measures of disparate impact, welfare gap and access, whereas non-randomization may induce a high level of competition that systematically excludes a disadvantaged group.
Automated Mechanism Design with Partial Verification
TLDR
This work studies the problem of automated mechanism design with partial verification, and presents a number of algorithmic and structural results, including an efficient algorithm for finding optimal deterministic truthful mechanisms via a characterization based on the notion of convexity.
Strategic Recourse in Linear Classification
TLDR
This paper explores how to design a classifier that achieves high accuracy while providing recourse to strategic individuals so as to incentivize them to improve their features in non-manipulative ways and provides insights for designing a machine learning model that focuses not only on the static distribution as of now, but also tries to encourage future improvement.
Algorithmic classification and strategic effort
TLDR
Drawing upon principal-agent models in the mechanism design literature, a model of strategic behavior under algorithmic evaluation is constructed and analyzed, showing that simple linear mechanisms are sufficient to incentivize desired behavior.

References

Showing 1–10 of 43 references
Incentive compatible regression learning
Algorithms for strategyproof classification
Strategic Classification from Revealed Preferences
TLDR
For a broad family of agent cost functions, this work gives a computationally efficient learning algorithm that obtains diminishing "Stackelberg regret", a form of policy regret guaranteeing that the learner realizes loss nearly as small as that of the best classifier in hindsight.
Simple versus Optimal Contracts
TLDR
This paper considers the classic principal-agent model of contract theory, and proves that linear contracts are guaranteed to be worst-case optimal, ranging over all reward distributions consistent with the given moments.
Optimum Statistical Estimation with Strategic Data Sources
We propose an optimum mechanism for providing monetary incentives to the data sources of a statistical estimator such as linear regression, so that high-quality data is provided at low cost…
Multitask Principal–Agent Analyses: Incentive Contracts, Asset Ownership, and Job Design
In the standard economic treatment of the principal-agent problem, compensation systems serve the dual function of allocating risks and rewarding productive work. A tension between these two…
Theoretical Computer Science
TLDR
It is shown that the Fully Mixed Nash Equilibrium Conjecture is valid for pure Nash equilibria and that, under a certain condition, the social cost of any Nash equilibrium is within a factor of 6 + ε of that of the fully mixed Nash equilibrium, assuming identical link capacities.
Strategic Classification is Causal Modeling in Disguise
TLDR
This work develops a causal framework for strategic adaptation and proves any procedure for designing classifiers that incentivize improvement must inevitably solve a non-trivial causal inference problem.
How Do Classifiers Induce Agents to Invest Effort Strategically?
TLDR
A model for how strategic agents can invest effort in order to change the outcomes they receive is developed, and a tight characterization of when such agents can be incentivized to invest specified forms of effort into improving their outcomes is given.
Strategyproof Linear Regression in High Dimensions
TLDR
This paper focuses on the ubiquitous problem of linear regression, where strategyproof mechanisms have previously been identified in two dimensions, and finds a family of group strategyproof linear regression mechanisms in any number of dimensions, which are called generalized resistant hyperplane mechanisms.