Dynamic Inference

Aolin Xu · Published 29 November 2021 · Computer Science · ArXiv · Corpus ID: 244714356
Traditional statistical estimation, or statistical inference in general, is static, in the sense that the estimate of the quantity of interest does not change the future evolution of that quantity. In some sequential estimation problems, however, we encounter situations where the future values of the quantity to be estimated depend on the estimate of its current value. Examples include stock price prediction by big investors, interactive product recommendation, and behavior prediction in multi…
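The feedback loop the abstract describes can be sketched in a toy simulation. Everything here is an illustrative assumption (linear dynamics, an exponential smoother as the estimator, the `feedback` gain), not the paper's model: the estimator publishes an estimate, and the quantity's next value is pulled toward that estimate.

```python
import random

def dynamic_inference_demo(steps=50, feedback=0.5, noise=0.1, seed=0):
    """Toy sequential estimation in which the published estimate feeds
    back into the quantity's evolution (hypothetical linear dynamics,
    not the paper's model)."""
    rng = random.Random(seed)
    x = 1.0          # true quantity of interest
    estimate = 0.0   # estimator's current guess
    history = []
    for _ in range(steps):
        # Static inference would stop here: estimate x from noisy data.
        observation = x + rng.gauss(0.0, noise)
        estimate = 0.9 * estimate + 0.1 * observation  # simple smoother
        # Dynamic twist: the next value of x depends on the estimate
        # itself, e.g. a big investor's price forecast moving the price.
        x = x + feedback * (estimate - x) + rng.gauss(0.0, noise)
        history.append((x, estimate))
    return history
```

Setting `feedback=0.0` recovers the static case, where the estimate has no influence on the trajectory.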

Related Papers

A Bayesian Framework for Reinforcement Learning
It is proposed that the learning process estimate the full posterior distribution over models online; to determine behavior, a hypothesis is sampled from this distribution and the greedy policy with respect to that hypothesis is obtained by dynamic programming.
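The sample-then-act-greedily idea can be illustrated on a Bernoulli bandit. This is a deliberate simplification: the paper's setting maintains posteriors over full MDP models and plans by dynamic programming, while the Beta priors and arm means below are illustrative assumptions.

```python
import random

def posterior_sampling_bandit(true_means, steps=500, seed=0):
    """Minimal posterior-sampling sketch on a Bernoulli bandit (a toy
    stand-in for the full Bayesian RL setting, which uses MDP models
    and dynamic programming)."""
    rng = random.Random(seed)
    n = len(true_means)
    # Beta(1, 1) prior over each arm's success probability.
    alpha = [1] * n
    beta = [1] * n
    pulls = [0] * n
    for _ in range(steps):
        # Sample one hypothesis per arm from the current posterior...
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n)]
        # ...and act greedily with respect to the sampled hypothesis.
        arm = max(range(n), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_means[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls
```

Because whole hypotheses are sampled rather than averaged, exploration falls out naturally: arms with uncertain posteriors are occasionally sampled as best and thus tried.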
Bayesian Reinforcement Learning: A Survey
An in-depth review of the role of Bayesian methods in the reinforcement learning (RL) paradigm, and a comprehensive survey of Bayesian RL algorithms and their theoretical and empirical properties.
An analytic solution to discrete Bayesian reinforcement learning
This work proposes BEETLE, a computationally efficient algorithm for effective online learning that minimizes the amount of exploration; it takes a Bayesian model-based approach, framing RL as a partially observable Markov decision process.
Learning Nonparametric Models for Probabilistic Imitation
A new probabilistic method for inferring imitative actions that takes into account both the observations of the teacher and the imitator's dynamics, and that generalizes to systems with very different dynamics.
Minimum Excess Risk in Bayesian Learning
The definition and analysis of the minimum excess risk (MER) is extended to the setting with multiple parametric model families and the setting with nonparametric models, and some comparisons are drawn between the MER in Bayesian learning and the excess risk in frequentist learning.
Probabilistic model-based imitation learning
This work proposes to learn a probabilistic model of the system, which is exploited for mental rehearsal of the current controller by making predictions about future trajectories, and to learn a robot-specific controller that directly matches robot trajectories with observed ones.
An Algorithmic Perspective on Imitation Learning
This work provides an introduction to imitation learning, dividing the field into directly replicating desired behavior and learning the hidden objectives of that behavior from demonstrations (called inverse optimal control or inverse reinforcement learning [Russell, 1998]).
Behavioral Cloning from Observation
This work proposes a two-phase, autonomous imitation learning technique called behavioral cloning from observation (BCO), which allows the agent to acquire experience in a self-supervised fashion and develop a model, which is then used to learn a particular task by observing an expert perform it, without knowledge of the specific actions taken.
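The two phases can be sketched with a toy discrete inverse-dynamics model. A table lookup stands in for the learned models in the paper, and the transition and state data in the usage example are made up:

```python
def bco_sketch(agent_transitions, expert_states):
    """Two-phase sketch of behavioral cloning from observation (toy
    discrete version; the paper learns these models with function
    approximators rather than lookup tables)."""
    # Phase 1: from the agent's own self-supervised experience, build an
    # inverse dynamics model mapping (state, next_state) -> action.
    inverse_dynamics = {(s, s2): a for s, a, s2 in agent_transitions}
    # Phase 2: label the expert's state-only demonstration with inferred
    # actions, then behavior-clone the resulting (state, action) pairs.
    policy = {}
    for s, s2 in zip(expert_states, expert_states[1:]):
        a = inverse_dynamics.get((s, s2))
        if a is not None:
            policy[s] = a
    return policy
```

For example, `bco_sketch([(0, "right", 1), (1, "right", 2), (2, "left", 1)], [0, 1, 2])` infers the actions `"right"` at states 0 and 1 from the expert's state sequence alone.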
Multi-modal Probabilistic Prediction of Interactive Behavior via an Interpretable Model
This paper presents a multi-modal probabilistic prediction approach based on a generative model that jointly predicts the sequential motions of each pair of interacting agents; the model is interpretable, able to explain the underlying logic and offer greater reliability in real applications.
Imitation learning for agile autonomous driving
This work presents an end-to-end imitation learning system for agile, off-road autonomous driving using only low-cost on-board sensors, and shows that policies trained with online imitation learning overcome well-known challenges related to covariate shift and generalize better than policies trained with batch imitation learning.