Inferring Objectives in Continuous Dynamic Games from Noise-Corrupted Partial State Observations

  title={Inferring Objectives in Continuous Dynamic Games from Noise-Corrupted Partial State Observations},
  author={Lasse Peters and David Fridovich-Keil and Vicenç Rubies-Royo and Claire J. Tomlin and Cyrill Stachniss},
Robots and autonomous systems must interact with one another and their environment to provide high-quality services to their users. Dynamic game theory provides an expressive theoretical framework for modeling scenarios involving multiple agents with differing objectives interacting over time. A core challenge when formulating a dynamic game is designing objectives for each agent that capture desired behavior. In this paper, we propose a method for inferring parametric objective models of… 
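The inference problem described in the abstract can be illustrated with a deliberately simplified, single-agent sketch (this is not the paper's algorithm; the dynamics, the grid-search estimator, and all function names here are illustrative assumptions): forward-simulate candidate cost weights and score each candidate against noisy, partial observations of the resulting trajectory.

```python
import numpy as np

# Toy sketch, NOT the paper's method: infer an unknown state-cost weight q
# for a single agent from noisy observations of only part of its state,
# by forward-simulating candidate weights and keeping the best fit.

def lqr_gains(A, B, Q, R, T):
    """Finite-horizon discrete LQR feedback gains via backward Riccati recursion."""
    P = Q.copy()
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

def rollout(A, B, gains, x0):
    """Simulate the closed-loop trajectory under the time-varying gains."""
    xs = [x0]
    for K in gains:
        u = -K @ xs[-1]
        xs.append(A @ xs[-1] + B @ u)
    return np.array(xs)

def infer_weight(A, B, R, x0, y_obs, C, q_grid, T):
    """Grid search over candidate cost weights, scored on the observed outputs."""
    best_q, best_err = None, np.inf
    for q in q_grid:
        Q = np.diag([q, 0.0])
        xs = rollout(A, B, lqr_gains(A, B, Q, R, T), x0)
        err = np.sum((y_obs - xs @ C.T) ** 2)  # residual on observed components only
        if err < best_err:
            best_q, best_err = q, err
    return best_q

# Double-integrator dynamics; only position is observed (partial state).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
R = np.eye(1)
C = np.array([[1.0, 0.0]])   # observation matrix picks out position
x0 = np.array([2.0, 0.0])
T, true_q = 30, 5.0

rng = np.random.default_rng(0)
xs_true = rollout(A, B, lqr_gains(A, B, np.diag([true_q, 0.0]), R, T), x0)
y_obs = xs_true @ C.T + 0.01 * rng.standard_normal((T + 1, 1))

q_hat = infer_weight(A, B, R, x0, y_obs, C, np.linspace(0.5, 10.0, 20), T)
```

With low observation noise the recovered weight lands on or next to the true value in the grid; the paper's setting replaces this brute-force search with a principled estimator over multi-agent game dynamics.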


Intention Communication and Hypothesis Likelihood in Game-Theoretic Motion Planning

A fault-tolerant receding horizon game-theoretic motion planner that leverages inter-agent communication with intention hypothesis likelihood and can capitalize on alternative intention hypotheses to generate safe trajectories in the presence of faulty transmissions in the communication network is proposed.

Efficient Spatial-Temporal Information Fusion for LiDAR-Based 3D Moving Object Segmentation

This work proposes a novel deep neural network exploiting both spatial-temporal information and different representation modalities of LiDAR scans to improve LiDAR-MOS performance, and outperforms the state-of-the-art methods significantly in terms of LiDAR-MOS IoU.

Deep Interactive Motion Prediction and Planning: Playing Games with Motion Prediction Models

This work presents a module that tightly couples these layers via a game-theoretic Model Predictive Controller (MPC) that uses a novel interactive multi-agent neural network policy as part of its predictive model.

GTP-SLAM: Game-Theoretic Priors for Simultaneous Localization and Mapping in Multi-Agent Scenarios

GTP-SLAM is presented, a novel, iterative best response-based SLAM algorithm that accurately performs state localization and map reconstruction in an uncharted scene, while capturing the inherent game-theoretic interactions among multiple agents in that scene.

Individual-Level Inverse Reinforcement Learning for Mean Field Games

Mean Field IRL (MFIRL), the first dedicated IRL framework for MFGs that can handle both cooperative and non-cooperative environments, is proposed and evaluated, demonstrating that MFIRL excels in reward recovery, sample efficiency and robustness in the face of changing dynamics.

Automatic Labeling to Generate Training Data for Online LiDAR-Based Moving Object Segmentation

This letter proposes an automatic data labeling pipeline for 3D LiDAR data to save the extensive manual labeling effort and to improve the performance of existing learning-based MOS systems by automatically annotating training data.

Learning MPC for Interaction-Aware Autonomous Driving: A Game-Theoretic Approach

This work considers the problem of interaction-aware motion planning for automated vehicles in general traffic situations and proposes a quadratic penalty method to deal with the shared constraints and solve the resulting optimal control problem online using an Augmented Lagrangian method based on PANOC.

Maximum-Entropy Multi-Agent Dynamic Games: Forward and Inverse Solutions

A new notion of stochastic Nash equilibrium for boundedly rational agents, which is called the Entropic Cost Equilibrium (ECE), is defined and it is shown that ECE is a natural extension to multiple agents of Maximum Entropy optimality for single agents.

Cost Inference in Smooth Dynamic Games from Noise-Corrupted Partial State Observations

This paper proposes a method for inferring parametric objective models of multiple agents based on observed interactions and shows that it reliably estimates player objectives from a short sequence of noise-corrupted, partial state observations.



Newton’s Method and Differential Dynamic Programming for Unconstrained Nonlinear Dynamic Games

This paper shows how to extend a recursive Newton algorithm and differential dynamic programming to the case of full-information non-zero-sum dynamic games and shows that the iterates of Newton’s method and DDP are sufficiently close for DDP to inherit the quadratic convergence rate of Newton’s method.

Inverse KKT: Learning cost functions of manipulation tasks from demonstrations

A non-parametric variant of inverse KKT that represents the cost function as a functional in reproducing kernel Hilbert spaces is presented, to push learning from demonstration to more complex manipulation scenarios that include the interaction with objects and therefore the realization of contacts/constraints within the motion.
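The inverse-KKT idea summarized above can be sketched in a deliberately small, finite-dimensional setting (an illustrative assumption, not the paper's RKHS formulation): given a single optimal demonstration of a constrained problem, recover cost weights by driving the KKT stationarity residual to zero.

```python
import numpy as np

# Simplified inverse-KKT sketch (illustrative, not the paper's formulation):
# recover cost weights w from one optimal demonstration x* by making the
# stationarity residual  sum_i w_i * grad f_i(x*) + lam * grad g(x*)  vanish.

# Demonstration: minimizer of  w1*x1^2 + w2*(x2-1)^2  s.t.  x1 + x2 = 2,
# generated with true weights (w1, w2) = (1, 2)  ->  x* = (2/3, 4/3).
x_star = np.array([2.0 / 3.0, 4.0 / 3.0])

grad_f1 = np.array([2.0 * x_star[0], 0.0])          # gradient of x1^2
grad_f2 = np.array([0.0, 2.0 * (x_star[1] - 1.0)])  # gradient of (x2-1)^2
grad_g = np.array([1.0, 1.0])                       # gradient of x1 + x2 - 2

# Fix w1 = 1 to remove the scale ambiguity of the weights, then solve
#   grad_f1 + w2 * grad_f2 + lam * grad_g = 0
# for (w2, lam) by linear least squares.
A = np.column_stack([grad_f2, grad_g])
b = -grad_f1
w2, lam = np.linalg.lstsq(A, b, rcond=None)[0]
```

Because a perfect demonstration makes the residual exactly zero, the least-squares solve recovers the true weight ratio; with noisy demonstrations the same residual becomes the objective of a regression, which is the spirit of the inverse-KKT approach.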

LUCIDGames: Online Unscented Inverse Dynamic Games for Adaptive Trajectory Prediction and Planning

Empirical results demonstrate that LUCIDGames improves the robot's performance over existing game-theoretic and traditional MPC planning approaches, and solves the inverse optimal control problem by recasting it in a recursive parameter-estimation framework.

Continuous Inverse Optimal Control with Locally Optimal Examples

A probabilistic inverse optimal control algorithm is presented that scales gracefully with task dimensionality and is suitable for large, continuous domains where even computing a full policy is impractical.

Accommodating intention uncertainty in general-sum games for human-robot interaction

  • Master’s thesis, Hamburg University of Technology, 2020

The Computation of Approximate Generalized Feedback Nash Equilibria

A Newton-style method for finding game trajectories which satisfy necessary conditions for an equilibrium, which can be checked against sufficiency conditions is proposed, and the effectiveness of the proposed solution approach on a dynamic game arising in an autonomous driving application is demonstrated.

Inverse Dynamic Games Based on Maximum Entropy Inverse Reinforcement Learning

We consider the inverse problem of dynamic games, where cost function parameters are sought which explain observed behavior of interacting players. Maximum entropy inverse reinforcement learning is…

Inverse Differential Games With Mixed Inequality Constraints

This paper presents a methodology for identifying cost functions for interacting agents and identifies costs that lead to open-loop Nash equilibria for nonzero-sum constrained differential games.