# Cooperative Inverse Reinforcement Learning

```bibtex
@inproceedings{HadfieldMenell2016CooperativeIR,
  title     = {Cooperative Inverse Reinforcement Learning},
  author    = {Dylan Hadfield-Menell and Stuart J. Russell and P. Abbeel and Anca D. Dragan},
  booktitle = {NIPS},
  year      = {2016}
}
```

For an autonomous system to be helpful to humans and to pose no unwarranted risks, it needs to align its values with those of the humans in its environment in such a way that its actions contribute to the maximization of value for the humans. We propose a formal definition of the value alignment problem as cooperative inverse reinforcement learning (CIRL). A CIRL problem is a cooperative, partial-information game with two agents, human and robot; both are rewarded according to the human's…
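The abstract's game-theoretic setup can be written out explicitly. Following the paper's formulation, a CIRL game is a two-player Markov game with identical payoffs:

```latex
M = \big\langle S,\ \{A^{\mathbf{H}}, A^{\mathbf{R}}\},\ T(\cdot \mid \cdot,\cdot,\cdot),\ \{\Theta,\ R(\cdot,\cdot,\cdot;\theta)\},\ P_0(\cdot,\cdot),\ \gamma \big\rangle
```

where $S$ is the state space, $A^{\mathbf{H}}$ and $A^{\mathbf{R}}$ are the human's and robot's action sets, $T(s' \mid s, a^{\mathbf{H}}, a^{\mathbf{R}})$ is the transition distribution, $\theta \in \Theta$ parameterizes the shared reward $R$, $P_0$ is a prior over the initial state and $\theta$, and $\gamma$ is the discount factor. Crucially, $\theta$ is observed only by the human, so the robot must infer it from the human's behavior while both act.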

## 391 Citations

### An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning

- Computer Science, ICML
- 2018

This work exploits a specific property of CIRL (the human is a full-information agent) to derive an optimality-preserving modification to the standard Bellman update, which reduces the complexity of the problem by an exponential factor and allows CIRL's assumption of human rationality to be relaxed.

### Cooperative Reinforcement Learning

- Computer Science
- 2017

It is demonstrated that the solutions to the CRL problems allow the robot to outperform non-intervention in cases where the human is suboptimal, even when the robot receives no reward signal.

### Interactive Inverse Reinforcement Learning for Cooperative Games

- Computer Science, ICML
- 2022

It is shown that when the learning agent's policies have a significant effect on the transition function, the reward function can be learned efficiently.

### General-Sum Multi-Agent Continuous Inverse Optimal Control

- Computer Science, IEEE Robotics and Automation Letters
- 2021

This work presents a novel inverse reinforcement learning (IRL) algorithm that can infer the reward function in multi-agent interactive scenarios, and demonstrates that the proposed method accurately infers the ground-truth reward function in two-agent interactive experiments.

### Multi-Principal Assistance Games

- Economics, arXiv
- 2020

A social choice method that uses shared control of a system to combine preference inference with social welfare optimization is proposed, and the extent to which the cost of choosing suboptimal arms reduces the incentive to mislead is explored.

### Inverse Learning of Robot Behavior for Collaborative Planning

- Computer Science, 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
- 2018

This work shows how the agent's preferences learned using IRL can be incorporated in a subject robot's decision making and planning, to enable the robot to spontaneously collaborate with the previously observed agent on the task.

### Repeated Inverse Reinforcement Learning for AI Safety

- Computer Science
- 2017

A novel repeated IRL problem is introduced that captures an aspect of AI safety: the agent has to act on behalf of a human in a sequence of tasks and wishes to minimize the number of tasks in which it surprises the human.

### Cooperative Inverse Reinforcement Learning - Cooperation and learning in an asymmetric information setting with a suboptimal teacher

- Computer Science
- 2018

The purpose of this report is to analyze CIRL when the human is not behaving fully optimally and may make mistakes, and to examine the difficulty of differentiating between the actual goal and other possible goals that are similar in some respects.

### SIMILE: Introducing Sequential Information

- Computer Science
- 2018

The core idea is to introduce sequential information so that an agent can refer to both the current state and past state-action pairs when making a decision; the approach is formulated as a recurrent model and instantiated with an LSTM to fuse both long-term and short-term information.

### Non-Cooperative Inverse Reinforcement Learning

- Computer Science, NeurIPS
- 2019

The non-cooperative inverse reinforcement learning (N-CIRL) formalism is introduced and the benefits of this formalism over the existing multi-agent IRL formalism are demonstrated via extensive numerical simulation in a novel cyber security setting.

## References

Showing 1–10 of 34 references

### Apprenticeship learning via inverse reinforcement learning

- Computer Science, ICML
- 2004

This work thinks of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and gives an algorithm for learning the task demonstrated by the expert, based on using "inverse reinforcement learning" to try to recover the unknown reward function.
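Concretely, the cited approach assumes the expert's reward is linear in known features and matches discounted feature expectations (notation follows the cited paper):

```latex
R(s) = w^{\top}\phi(s), \qquad \|w\|_{2} \le 1,
\qquad
\mu(\pi) = \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\,\phi(s_t) \,\middle|\, \pi\right]
```

Any policy whose feature expectations $\mu(\pi)$ are close to the expert's $\mu_E$ then achieves near-expert reward for every admissible weight vector $w$, without ever recovering $w$ exactly.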

### Learning agents for uncertain environments (extended abstract)

- Computer Science, COLT '98
- 1998

A very simple "baseline architecture" for a learning agent that can handle stochastic, partially observable environments is proposed, together with a method for representing temporal processes as graphical models; whether reinforcement learning can provide a good model of animal and human learning is also discussed.

### Computational Rationalization: The Inverse Equilibrium Problem

- Computer Science, ICML
- 2011

Employing the game-theoretic notion of regret and the principle of maximum entropy, this work introduces a technique for predicting and generalizing behavior in competitive and cooperative multi-agent domains.

### Multi-Agent Inverse Reinforcement Learning

- Computer Science, 2010 Ninth International Conference on Machine Learning and Applications
- 2010

This work introduces the problem of multi-agent inverse reinforcement learning, where reward functions of multiple agents are learned by observing their uncoordinated behavior, and shows that the learner is not only able to match but even significantly outperform the expert.

### Bayesian Inverse Reinforcement Learning

- Computer Science, IJCAI
- 2007

This paper shows how to combine prior knowledge and evidence from the expert's actions to derive a probability distribution over the space of reward functions and presents efficient algorithms that find solutions for the reward learning and apprenticeship learning tasks that generalize well over these distributions.
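In this Bayesian formulation, observed expert state-action pairs $O = \{(s_i, a_i)\}$ induce a posterior over reward functions via a Boltzmann-style likelihood (here $\alpha$ is a confidence parameter and $Q^{*}$ the optimal Q-function under $R$; notation assumed from the cited paper):

```latex
P(R \mid O) \;\propto\; P(R)\,\prod_{i} \exp\!\big(\alpha\, Q^{*}(s_i, a_i; R)\big)
```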

### Active Learning for Reward Estimation in Inverse Reinforcement Learning

- Computer Science, ECML/PKDD
- 2009

An algorithm is proposed that allows the agent to query the demonstrator for samples at specific states, instead of relying only on samples provided at "arbitrary" states; it estimates the reward function with accuracy similar to other methods from the literature while reducing the number of policy samples required from the expert.

### A Decision-Theoretic Model of Assistance

- Computer Science, IJCAI
- 2007

The problem of intelligent assistance is formulated in a decision-theoretic framework, and it is shown that in all three domains the framework results in an assistant that substantially reduces user effort with only modest computation.

### Maximum Entropy Inverse Reinforcement Learning

- Computer Science, AAAI
- 2008

A probabilistic approach based on the principle of maximum entropy that provides a well-defined, globally normalized distribution over decision sequences, while providing the same performance guarantees as existing methods is developed.
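The globally normalized distribution referred to above is the maximum-entropy distribution over trajectories $\zeta$, in which trajectory probability decays exponentially with cost (notation assumed from the cited paper):

```latex
P(\zeta \mid \theta) = \frac{1}{Z(\theta)} \exp\!\big(\theta^{\top} \mathbf{f}_{\zeta}\big),
\qquad
\mathbf{f}_{\zeta} = \sum_{s_t \in \zeta} \mathbf{f}(s_t)
```

Global normalization by the partition function $Z(\theta)$ is what resolves the ambiguity of locally normalized (action-level) models, which can prefer trajectories with many low-cost branch points.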

### A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning

- Computer Science, AISTATS
- 2011

This paper proposes a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no-regret algorithm in an online learning setting, and demonstrates that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem.
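The iterative scheme described above (dataset aggregation, as in DAgger) can be sketched in a few lines. The chain environment, function names, and "memorising" learner below are illustrative assumptions for exposition, not the paper's experimental setup:

```python
import random

# Toy 1-D chain with states 0..4; the "expert" always steps right.
N_STATES, HORIZON = 5, 4

def expert_action(state):
    return +1  # expert label: always move toward state 4

def rollout(policy, start=0):
    """Collect the states visited under the *learner's* policy."""
    s, visited = start, []
    for _ in range(HORIZON):
        visited.append(s)
        s = max(0, min(N_STATES - 1, s + policy(s)))
    return visited

def dagger(n_iters=3, seed=0):
    rng = random.Random(seed)
    dataset = {}                              # aggregated state -> expert label
    policy = lambda s: rng.choice([-1, +1])   # initial learner: random
    for _ in range(n_iters):
        for s in rollout(policy):             # states the learner actually reaches
            dataset[s] = expert_action(s)     # query the expert on those states
        # "Training" = memorise labels; unseen states default to -1 (untrained).
        policy = lambda s, d=dict(dataset): d.get(s, -1)
    return policy

pi = dagger()
assert pi(0) == +1  # the start state is always visited, so it gets an expert label
```

The essential point is that rollouts use the learner's own policy, so expert labels are gathered exactly on the states the learner's mistakes lead to; this is what underpins the no-regret guarantee, in contrast to behavioral cloning, which only ever sees the expert's state distribution.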

### Sequential Optimality and Coordination in Multiagent Systems

- Computer Science, IJCAI
- 1999

This work proposes an extension of value iteration in which the system's state space is augmented with the state of the adopted coordination mechanism, allowing agents to reason about the short- and long-term prospects for coordination and the long-term consequences of (mis)coordination, and to decide whether to engage in or avoid coordination problems based on expected value.