Corpus ID: 235417200

Keyframe-Focused Visual Imitation Learning

Chuan Wen, Jierui Lin, Jianing Qian, Yang Gao, Dinesh Jayaraman
Imitation learning trains control policies by mimicking pre-recorded expert demonstrations. In partially observable settings, imitation policies must rely on observation histories, but many seemingly paradoxical results show better performance for policies that access only the most recent observation. Recent solutions ranging from causal graph learning to deep information bottlenecks have shown promising results, but have failed to scale to realistic settings such as visual imitation. We propose a…
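The "paradox" above can be seen in a toy setting. The sketch below is an illustrative assumption, not from the paper: expert actions change slowly over time, so a history-conditioned imitator can get low training error simply by repeating the previous action, even though that shortcut ignores the state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expert demonstrations: actions are strongly temporally correlated,
# so a_{t-1} predicts a_t well even without looking at the state.
T = 1000
state = rng.normal(size=T)
actions = np.empty(T)
actions[0] = 0.0
for t in range(1, T):
    # Expert action depends on the state but changes slowly over time.
    actions[t] = 0.9 * actions[t - 1] + 0.1 * state[t]

# "Copycat" predictor: just repeat the previous expert action.
copycat_mse = np.mean((actions[1:] - actions[:-1]) ** 2)

# State-only predictor: a linear fit on the current state alone.
w = np.polyfit(state[1:], actions[1:], 1)
state_mse = np.mean((np.polyval(w, state[1:]) - actions[1:]) ** 2)

# On the training data, copying the previous action already looks better
# than using the state, which is the shortcut history-conditioned
# policies tend to learn, and which fails at deployment time.
print(copycat_mse < state_mse)
```

The copycat shortcut wins on training error here, yet it would never recover from its own mistakes at test time, which is why policies with access to history can underperform ones restricted to the current observation.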


End-to-End Driving Via Conditional Imitation Learning
This work evaluates different architectures for conditional imitation learning in vision-based driving and conducts experiments in realistic three-dimensional simulations of urban driving and on a 1/5 scale robotic truck that is trained to drive in a residential area.
Deeply AggreVaTeD: Differentiable Imitation Learning for Sequential Prediction
This work presents two gradient procedures that can learn neural network policies for several problems, including a sequential prediction task and high-dimensional robotics control, and provides a comprehensive theoretical study of imitation learning.
ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst
The ChauffeurNet model can handle complex situations in simulation; synthesized trajectory perturbations provide an important training signal for the imitation losses and lead to robustness of the learned model.
Disagreement-Regularized Imitation Learning
The algorithm trains an ensemble of policies on the expert demonstration data and uses the variance of their predictions as a cost, which is minimized with RL alongside a supervised behavioral cloning cost; the resulting fixed reward function is easy to optimize.
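The ensemble-disagreement cost described above can be sketched in a few lines. This is a minimal toy version under illustrative assumptions (linear policies fit to bootstrapped 1-D expert data), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: K linear policies, each fit to a bootstrap
# resample of the expert demonstration data.
K, N = 5, 200
states = rng.uniform(-1, 1, size=N)
expert_actions = 2.0 * states + 0.05 * rng.normal(size=N)

slopes = []
for _ in range(K):
    idx = rng.integers(0, N, size=N)       # bootstrap resample
    s, a = states[idx], expert_actions[idx]
    slopes.append((s @ a) / (s @ s))       # least-squares fit through origin

def disagreement_cost(state):
    """Variance of the ensemble's predicted actions at a state:
    low near the expert's state distribution, high far from it."""
    preds = np.array([m * state for m in slopes])
    return preds.var()

# The cost is larger far outside the expert's training range than
# inside it, so minimizing it with RL discourages drifting away
# from expert-visited states.
print(disagreement_cost(10.0) > disagreement_cost(0.5))
```

Because the ensemble members agree where expert data is dense and disagree elsewhere, this variance acts as a fixed, easily optimized cost that keeps the learner near the demonstrated state distribution.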
Generative Adversarial Imitation Learning
A new general framework is proposed for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning; one instantiation of this framework draws an analogy between imitation learning and generative adversarial networks.
Exponentially Weighted Imitation Learning for Batched Historical Data
A monotonic advantage-reweighted imitation learning strategy that applies to problems with complex nonlinear function approximation, works well with hybrid (discrete and continuous) action spaces, and can learn from data generated by an unknown policy.
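The reweighting idea in the entry above can be sketched as an advantage-weighted behavioral cloning loss: samples with higher estimated advantage get exponentially larger weight. The temperature `beta`, the clipping, and the toy data below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def weighted_bc_loss(pred_actions, data_actions, advantages, beta=1.0):
    """Behavioral cloning loss with exponential advantage weights."""
    weights = np.exp(np.clip(advantages / beta, None, 10.0))  # clip for stability
    sq_err = (pred_actions - data_actions) ** 2
    return float(np.mean(weights * sq_err))

pred = np.array([0.0, 0.0])
data = np.array([1.0, 1.0])

# Identical prediction errors, but high-advantage samples are weighted
# up, so the same errors cost more when the advantages are higher.
hi = weighted_bc_loss(pred, data, np.array([1.0, 1.0]))
lo = weighted_bc_loss(pred, data, np.array([-1.0, -1.0]))
print(hi > lo)
```

Exponential weighting lets the learner imitate the better-than-average transitions in a fixed batch of historical data more strongly, which is what makes learning from an unknown behavior policy feasible.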
Fighting Copycat Agents in Behavioral Cloning from Observation Histories
This work proposes an adversarial approach to learn a feature representation that removes excess information about the nuisance correlate (the previous expert action), while retaining the information necessary to predict the next action.
Causal Confusion in Imitation Learning
Causal misidentification is shown to occur in several benchmark control domains as well as realistic driving settings, and the proposed solution, which uses targeted interventions to determine the correct causal model, is validated.
Monocular Plan View Networks for Autonomous Driving
This work proposes a simple transformation of observations into a bird's eye view, also known as plan view, for end-to-end control, which provides an abstraction of the environment from which a deep network can easily deduce the positions and directions of entities.
Learning by Cheating
This work shows that the vision-based driving problem can be simplified by decomposing it into two stages, and uses the presented approach to train a vision-based autonomous driving system that substantially outperforms the state of the art on the CARLA benchmark and the recent NoCrash benchmark.