A Dynamic Principal-Agent Model with Hidden Information: Sequential Optimality Through Truthful State Revelation

@article{Zhang2008ADP,
  title={A Dynamic Principal-Agent Model with Hidden Information: Sequential Optimality Through Truthful State Revelation},
  author={H. Zhang and Stefanos A. Zenios},
  journal={Operations Research},
  year={2008},
  volume={56},
  pages={681--696}
}
The principal-agent paradigm has been extensively studied in economics and has attracted attention in operations research. A dynamic principal-agent model with an underlying Markov decision process (MDP) is especially useful in operations research because the MDP is one of the most mature modeling tools in this field. In this model, the principal delegates control of the system to the agent, and the agent has access to private information, either about the state of the system or about the agent's own actions…
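As background for the MDP machinery the abstract refers to, the following is a minimal value-iteration sketch for a finite MDP. It is illustrative only: the two-state, two-action transition matrix and rewards are invented for demonstration and are not taken from the paper's model, which layers incentive constraints on top of the MDP.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Solve a finite MDP by value iteration.

    P: (A, S, S) array of transition probabilities P[a, s, s'].
    R: (A, S) array of immediate rewards R[a, s].
    Returns the optimal value function and a greedy policy.
    """
    V = np.zeros(P.shape[1])
    while True:
        # Bellman update: Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] V[s']
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy two-state, two-action example (hypothetical numbers).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.4, 0.6]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
V, policy = value_iteration(P, R)
```

In the paper's setting the principal does not observe the state directly, so the contract must additionally induce the agent to report it truthfully; the sketch above covers only the underlying full-information dynamic program.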


Dynamic Principal-Agent Models

This paper contributes to the theoretical and numerical analysis of discrete time dynamic principal-agent problems with continuous choice sets. We first provide a new and simplified proof for the

Uncertainty, Risk, and the Efficiencies of the Principal and the Agent: A Chance Constrained Data Envelopment Analysis Approach

In a principal-agent relationship, although the principal (e.g., the investor) can control the inputs and decide the motivated payments to the agent (e.g., the top management team), she cannot have

Optimal dynamic information provision

Analysis of a Dynamic Adverse Selection Model with Asymptotic Efficiency

A novel approach is introduced that constructs the continuation payoff frontier exactly, as the fixed point of a functional operator; if the model supports an incentive-compatible first-best (ICFB) contract, the frontier can be constructed efficiently.

Learning in a Hiring Logic and Optimal Contracts

This paper examines a hiring logic problem in which all players involved in this game are exposed to scenarios where they can learn from the changes and these modifications influence their

Gig Economy: A Dynamic Principal-Agent Model

The gig economy, where employees take short-term, project-based jobs, is increasingly spreading all over the world. In this paper, we investigate the employer's and the worker's behavior in the gig

Solving a Dynamic Adverse Selection Model Through Finite Policy Graphs

This paper studies an infinite-horizon adverse selection model with an underlying Markov information process and a risk-neutral agent. It introduces a graphic representation of continuation contracts

Policy teaching through reward function learning

This paper considers the specific objective of inducing a pre-specified desired policy, and examines both the case in which the agent's reward function is known and unknown to the interested party, presenting a linear program for the former and formulating an active, indirect elicitation method for the latter.

Unobservable effort, objective consistency and the efficiencies of the principal and the top management team

Novel models are formulated to overcome these limitations by incorporating DEA and bi-level programming into the principal–agent framework, and objective consistency between the two parties is identified by incorporating the organisational outcomes into the outputs of the TMT.
...

References

Showing 1-10 of 30 references

Markov Decision Processes: Discrete Stochastic Dynamic Programming

  • M. Puterman
  • Wiley Series in Probability and Statistics
  • 1994
Markov Decision Processes covers recent research advances in such areas as countable state space models with the average reward criterion, constrained models, and models with risk-sensitive optimality criteria, and explores several topics that have received little or no attention in other books.

Towards a Theory of Discounted Repeated Games with Imperfect Monitoring

This paper investigates pure strategy sequential equilibria of repeated games with imperfect monitoring. The approach emphasizes the equilibrium value set and the static optimization problems

Dynamic Mechanism Design with Hidden Income and Hidden Actions

This appendix provides the detailed derivations of all recursive formulations presented in the paper, as well as proofs for all propositions.

A Recursive Formulation for Repeated Agency with History Dependence

This paper presents general recursive methods to handle environments where privately observed variables are linked over time and shows that incentive compatible contracts are implemented recursively with a threat keeping constraint in addition to the usual temporary incentive compatibility conditions.

Long-Term Contracting with Markovian Consumers

To study how a firm can capitalize on a long-term customer relationship, we characterize the optimal contract between a monopolist and a consumer whose preferences follow a Markov process. The

Dynamic Games with Hidden Actions and Hidden States

An algorithm is developed that solves for the subset of sequential equilibria in which equilibrium strategies are Markov in the privately observed state.

Moral Hazard and Observability

The role of imperfect information in a principal-agent relationship subject to moral hazard is considered. A necessary and sufficient condition for imperfect information to improve on contracts based

Stochastic Inventory Systems in a Supply Chain with Asymmetric Information: Cycle Stocks, Safety Stocks, and Consignment Stock

It is suggested that consignment stock helps reduce cycle stock by providing the supplier with an additional incentive to decrease batch size, but simultaneously gives the buyer an incentive to increase safety stock by exaggerating backorder costs.

The Economics of Contracts

A contract is an agreement under which two parties make reciprocal commitments in terms of their behavior to coordinate. As this concept has become essential to economics in the last 30 years, three

Regulation and information in a continuing relationship