Corpus ID: 235436267

Causal Navigation by Continuous-time Neural Networks

@article{Vorbach2021CausalNB,
  title={Causal Navigation by Continuous-time Neural Networks},
  author={Charles Vorbach and Ramin M. Hasani and Alexander Amini and Mathias Lechner and D. Rus},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.08314}
}
Imitation learning enables high-fidelity, vision-based learning of policies within rich, photorealistic environments. However, such techniques often rely on traditional discrete-time neural models and face difficulties in generalizing to domain shifts by failing to account for the causal relationships between the agent and the environment. In this paper, we propose a theoretical and experimental framework for learning causal representations using continuous-time neural networks, specifically…
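
For orientation only, the sketch below shows one generic continuous-time recurrent cell of the kind the abstract points to: the hidden state is defined by an ODE and unrolled with explicit Euler steps between observations. It is an assumption-laden PyTorch illustration, not the authors' architecture; the cell dynamics, time constant tau, step size dt, number of Euler steps, and the 4-dimensional control head are all hypothetical choices.

import torch
import torch.nn as nn

class ContinuousTimeCell(nn.Module):
    # Hidden state follows dh/dt = -h / tau + tanh(W_r h + W_x x),
    # unrolled here with fixed-step explicit Euler between observations.
    def __init__(self, input_size, hidden_size, tau=1.0):
        super().__init__()
        self.tau = tau
        self.input_map = nn.Linear(input_size, hidden_size)
        self.recurrent_map = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, x, h, dt=0.1, steps=6):
        for _ in range(steps):
            dh = -h / self.tau + torch.tanh(self.recurrent_map(h) + self.input_map(x))
            h = h + dt * dh
        return h

# Hypothetical usage: map a (batch, time, features) sequence of visual
# features to a low-dimensional control command.
cell = ContinuousTimeCell(input_size=32, hidden_size=64)
head = nn.Linear(64, 4)            # 4-dim control output is an assumption
x_seq = torch.randn(8, 20, 32)     # stand-in for perception features
h = torch.zeros(8, 64)
for t in range(x_seq.shape[1]):
    h = cell(x_seq[:, t], h)
controls = head(h)                 # shape: (8, 4)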

Citations

Closed-form Continuous-Depth Models
TLDR: This paper presents a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster while exhibiting equally strong modeling abilities compared to their ODE-based counterparts.
Sparse Flows: Pruning Continuous-depth Models
TLDR: This work designs a framework to decipher the internal dynamics of continuous-depth models by pruning their network architectures; empirical results suggest that pruning improves generalization for neural ODEs in generative modeling.

References

Showing 1-10 of 91 references
Conditional Affordance Learning for Driving in Urban Environments
TLDR: This work proposes a direct perception approach which maps video input to intermediate representations suitable for autonomous navigation in complex urban environments given high-level directional inputs, and is the first to handle traffic lights and speed signs by using image-level labels only.
One-Shot Hierarchical Imitation Learning of Compound Visuomotor Tasks
TLDR: This work proposes a method that learns both how to learn primitive behaviors from video demonstrations and how to dynamically compose these behaviors to perform multi-stage tasks by "watching" a human demonstrator.
Deep Imitative Models for Flexible Inference, Planning, and Control
TLDR: This paper proposes Imitative Models, probabilistic predictive models of desirable behavior able to plan interpretable expert-like trajectories to achieve specified goals, and derives families of flexible goal objectives that can be used to successfully direct behavior.
Gershgorin Loss Stabilizes the Recurrent Neural Network Compartment of an End-to-end Robot Learning Scheme
TLDR: A new regularization loss component is introduced together with a learning algorithm that improves the stability of the learned autonomous system, by forcing the eigenvalues of the internal state updates of an LDS to be negative reals.
Learning by Cheating
TLDR: This work shows that this challenging learning problem can be simplified by decomposing it into two stages and uses the presented approach to train a vision-based autonomous driving system that substantially outperforms the state of the art on the CARLA benchmark and the recent NoCrash benchmark.
Learning to Control PDEs with Differentiable Physics
TLDR: By using a differentiable PDE solver in conjunction with a novel predictor-corrector scheme, this work trains neural networks to understand and control complex nonlinear physical systems over long time frames.
Closed-form Continuous-Depth Models
TLDR: This paper presents a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster while exhibiting equally strong modeling abilities compared to their ODE-based counterparts.
Generative Adversarial Imitation Learning
TLDR: A new general framework for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning, is proposed, and a certain instantiation of this framework draws an analogy between imitation learning and generative adversarial networks.
Visual Representations for Semantic Target Driven Navigation
TLDR: This work proposes to use semantic segmentation and detection masks as observations obtained by state-of-the-art computer vision algorithms and use a deep network to learn navigation policies on top of representations that capture spatial layout and semantic contextual cues.
State Aware Imitation Learning
TLDR: This paper introduces State Aware Imitation Learning (SAIL), an imitation learning algorithm that allows an agent to learn how to remain in states where it can confidently take the correct action and how to recover if it is led astray.