Corpus ID: 226281759

Performative Prediction in a Stateful World

@article{Brown2020PerformativePI,
  title={Performative Prediction in a Stateful World},
  author={Gavin Brown and Shlomi Hod and Iden Kalemaj},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.03885}
}
Deployed supervised machine learning models make predictions that interact with and influence the world. This phenomenon is called "performative prediction" by Perdomo et al. (2020), who investigated it in a stateless setting. We generalize their results to the case where the response of the population to the deployed classifier depends both on the classifier and the previous distribution of the population. We also demonstrate such a setting empirically, for the scenario of strategic… 
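To make the stateful setting concrete, the following is a minimal sketch of repeated risk minimization when the induced distribution depends on both the deployed classifier and the previous distribution. The population is summarized by its mean mu; the mixing rate delta and the strategic-response scale eps are illustrative assumptions, not the paper's exact model.

import numpy as np

def transition(mu_prev, theta, delta=0.3, eps=0.5):
    # Stateful distribution map: the new population mean mixes the previous
    # state with a classifier-driven shift, so the response depends on both
    # the deployed model and the previous distribution.
    return (1 - delta) * mu_prev + delta * (eps * theta)

def retrain(mu):
    # The risk minimizer of the squared loss (z - theta)^2 under the current
    # distribution is its mean.
    return mu

mu, theta = 1.0, 0.0
for t in range(30):
    mu = transition(mu, theta)   # population reacts to the deployed model
    theta = retrain(mu)          # repeated risk minimization step
print(f"after 30 rounds: theta = {theta:.4f}, mu = {mu:.4f}")

In this toy map the retraining iteration contracts whenever (1 - delta) + delta * eps < 1, i.e. eps < 1, which is the flavor of sensitivity condition under which such retraining schemes are typically analyzed.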

Performative Prediction with Neural Networks

This work assumes that the data distribution is Lipschitz continuous with respect to the model's predictions, a more natural assumption for performative systems, and significantly relaxes the assumptions on the loss function.

How to Learn when Data Gradually Reacts to Your Model

This work proposes a new algorithm, Stateful Performative Gradient Descent (Stateful PerfGD), for minimizing the performative loss even when the data distribution reacts gradually to the deployed model, and provides theoretical guarantees for the convergence of Stateful PerfGD.

State Dependent Performative Prediction with Stochastic Approximation

This paper studies the performative prediction problem, which optimizes a stochastic loss function whose data distribution depends on the decision variable, in a setting where the agents provide samples adapted to both the learner's and the agents' previous states.

Performative Prediction with Bandit Feedback: Learning through Reparameterization

A two-level zeroth-order optimization algorithm is developed, where one level aims to compute the distribution map, and the other level reparameterizes the performative prediction objective as a function of the induced data distribution, which allows for provable regret guarantees.

Performative Reinforcement Learning

This work considers a regularized version of the reinforcement learning problem and shows that repeatedly optimizing this objective converges to a performatively stable policy under reasonable assumptions on the transition dynamics.

How to Learn when Data Reacts to Your Model: Performative Gradient Descent

This work introduces performative gradient descent (PerfGD), the first algorithm that provably converges to the performatively optimal point, and is simple to use.
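A hedged sketch of the idea behind this summary: estimate from the deployment history how the induced distribution moves with the model, and fold that term into the gradient of the performative risk. Everything below is an illustrative assumption (a toy loss l(z; theta) = z*theta + theta^2/2, chosen so that the stable and optimal points differ, and a linear mean map m(theta) = a*theta + b, both unknown to the learner), not the authors' exact algorithm.

import numpy as np

rng = np.random.default_rng(1)
a, b, sigma = 0.5, 1.0, 0.1

def deploy(theta, n=5000):
    # Samples from the distribution induced by deploying theta.
    return rng.normal(a * theta + b, sigma, size=n)

theta, lr = 0.0, 0.2
prev_theta, prev_mean, dm = None, None, 0.0
for t in range(50):
    mean = deploy(theta).mean()
    if prev_theta is not None and abs(theta - prev_theta) > 1e-3:
        # Finite-difference estimate of m'(theta) from consecutive deployments.
        dm = (mean - prev_mean) / (theta - prev_theta)
    # For this toy loss, dPR/dtheta = m(theta) + theta + m'(theta) * theta;
    # dropping the dm term recovers only the performatively stable point.
    grad = mean + theta + dm * theta
    prev_theta, prev_mean = theta, mean
    theta -= lr * grad
print(f"theta = {theta:.3f} "
      f"(optimal -b/(2a+1) = {-b/(2*a+1):.3f}, stable -b/(a+1) = {-b/(a+1):.3f})")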

Approximate Regions of Attraction in Learning with Decision-Dependent Distributions

This work considers the case where there may be multiple local minimizers of performative risk, motivated by situations where the initial conditions may have significant impact on the long-term behavior of the system.

Which Echo Chamber? Regions of Attraction in Learning with Decision-Dependent Distributions

This work considers a company whose current employee demographics affect the applicant pool it interviews: the initial demographics of the company can affect its long-term hiring policies. It introduces the notion of performative alignment, which provides a geometric condition on the convergence of repeated risk minimization to performative risk minimizers.

Data Feedback Loops: Model-driven Amplification of Dataset Biases

This work formalizes a system where interactions with one model are recorded as history and scraped as training data in the future, and proposes an intervention to help calibrate and stabilize unstable feedback systems.

Making Decisions under Outcome Performativity

This work demonstrates that efficient performative omnipredictors exist under a natural restriction of performative prediction called outcome performativity, and introduces a new optimality concept, performative omniprediction, adapted from the supervised (non-performative) learning setting.

Stochastic Optimization for Performative Prediction

Non-asymptotic rates of convergence are proved both for greedily deploying models after each stochastic update and for taking several updates before redeploying, illustrating how, depending on the strength of performative effects, there exists a regime where either approach outperforms the other.
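A hedged sketch of the greedy-vs-lazy trade-off described above: "greedy" redeploys after every stochastic gradient step, while "lazy" takes k steps against a fixed deployment before redeploying. The Gaussian mean-shift map with strength eps is an illustrative assumption, not the paper's model.

import numpy as np

rng = np.random.default_rng(2)
eps = 0.4  # strength of performative effects (assumed)

def sample(deployed_theta):
    # One draw from the distribution induced by the currently deployed model.
    return rng.normal(eps * deployed_theta, 1.0)

def run(k, steps=2000, lr=0.05, theta0=5.0):
    theta = deployed = theta0
    for t in range(steps):
        if t % k == 0:
            deployed = theta              # redeploy every k steps (k=1: greedy)
        z = sample(deployed)
        theta -= lr * 2 * (theta - z)     # SGD step on the squared loss (z - theta)^2
    return theta

print("greedy (k=1): ", run(1))    # both approach the stable point theta = 0;
print("lazy  (k=50):", run(50))    # which one converges faster depends on eps and lr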

Performative Prediction

This work develops a risk minimization framework for performative prediction bringing together concepts from statistics, game theory, and causality, and gives the first sufficient conditions for retraining to overcome strategic feedback effects.
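For reference, the two solution concepts from Perdomo et al. (2020) that the summaries above repeatedly contrast, written with distribution map $\mathcal{D}(\theta)$ and loss $\ell$: a performatively stable point is a fixed point of retraining, while a performatively optimal point minimizes the performative risk:

$$\theta_{\mathrm{PS}} \in \operatorname*{arg\,min}_{\theta}\; \mathbb{E}_{z \sim \mathcal{D}(\theta_{\mathrm{PS}})}\!\left[\ell(z;\theta)\right], \qquad \theta_{\mathrm{PO}} \in \operatorname*{arg\,min}_{\theta}\; \mathbb{E}_{z \sim \mathcal{D}(\theta)}\!\left[\ell(z;\theta)\right].$$

The two coincide only under additional conditions; much of the work listed here studies when repeated retraining reaches the former and what extra machinery is needed to find the latter.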

Who Leads and Who Follows in Strategic Classification?

It is argued that the order of play in strategic classification is fundamentally determined by the relative frequencies at which the decision-maker and the agents adapt to each other's actions, and it is shown that a decision-maker who updates faster than the agents can reverse the order of play.

Outside the Echo Chamber: Optimizing the Performative Risk

This paper identifies a natural set of properties of the loss function and model-induced distribution shift under which the performative risk is convex, a property which does not follow from convexity of the loss alone.

Strategic Classification

This paper formalizes the problem and pursues algorithms for learning classifiers that are robust to gaming, obtaining computationally efficient learning algorithms that are near optimal, achieving a classification error arbitrarily close to the theoretical minimum.
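As a concrete illustration of the gaming referred to above, here is a hedged sketch in which one-dimensional agents may move their feature at quadratic cost to cross a linear threshold; the unit utility, the cost scale c, and the setup are illustrative assumptions, not the paper's exact model.

import numpy as np

def best_response(x, threshold, c=1.0):
    # An agent below the threshold moves to it iff the gain from a positive
    # label (utility 1) exceeds the quadratic movement cost.
    if x >= threshold:
        return x
    gain, cost = 1.0, c * (threshold - x) ** 2
    return threshold if gain > cost else x

rng = np.random.default_rng(3)
xs = rng.normal(0.0, 1.0, size=10)
threshold = 0.5
gamed = [best_response(x, threshold) for x in xs]
print([f"{before:.2f} -> {after:.2f}" for before, after in zip(xs, gamed)])

A classifier robust to such gaming must anticipate this best response, e.g. by shifting its threshold relative to the non-strategic optimum.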

Regret Minimization with Performative Feedback

This work establishes a conceptual approach for leveraging tools from the bandit literature for regret minimization with performative feedback, developing an algorithm that achieves regret bounds scaling only with the complexity of the distribution shifts and not that of the reward function.