# Causal Navigation by Continuous-time Neural Networks

@inproceedings{Vorbach2021CausalNB,
  title     = {Causal Navigation by Continuous-time Neural Networks},
  author    = {Charles J. Vorbach and Ramin M. Hasani and Alexander Amini and Mathias Lechner and Daniela Rus},
  booktitle = {Neural Information Processing Systems},
  year      = {2021}
}

Imitation learning enables high-fidelity, vision-based learning of policies within rich, photorealistic environments. However, such techniques often rely on traditional discrete-time neural models and face difficulties in generalizing to domain shifts by failing to account for the causal relationships between the agent and the environment. In this paper, we propose a theoretical and experimental framework for learning causal representations using continuous-time neural networks, specifically…
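As a rough illustration of the continuous-time neural models the abstract refers to, a liquid time-constant (LTC) cell can be sketched as an ODE whose effective time constant depends on the input and state. The explicit-Euler discretization and all parameter names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ltc_step(x, I, dt, W, b, A, tau):
    """One explicit-Euler step of a liquid time-constant (LTC) style cell.

    Integrates dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A, where f is a
    bounded nonlinearity of the state x and input I, so the effective time
    constant varies with the input. (Illustrative sketch only.)
    """
    f = np.tanh(W @ np.concatenate([x, I]) + b)  # state/input-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A          # input-modulated decay toward A
    return x + dt * dxdt

# toy usage: 3 hidden units driven by a 2-dimensional input
rng = np.random.default_rng(0)
x = np.zeros(3)
W = rng.normal(scale=0.1, size=(3, 5))
b = np.zeros(3)
A = np.ones(3)   # per-neuron bias attractor
for _ in range(10):
    x = ltc_step(x, np.array([1.0, -1.0]), dt=0.1, W=W, b=b, A=A, tau=1.0)
```

Because `tanh` is bounded, the effective decay rate stays within a fixed range, which is one reason such cells remain stable over long rollouts.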

## 15 Citations

### Latent Imagination Facilitates Zero-Shot Transfer in Autonomous Racing

- Computer Science
- 2022 International Conference on Robotics and Automation (ICRA)
- 2022

This paper shows that model-based agents capable of learning in imagination substantially outperform model-free agents with respect to performance, sample efficiency, successful task completion, and generalization in real-world autonomous vehicle control tasks where advanced model-free deep RL algorithms fail.

### Are All Vision Models Created Equal? A Study of the Open-Loop to Closed-Loop Causality Gap

- Computer Science
- 2022

The results imply that the causality gap can be closed in the first setting with the proposed training guideline using any modern network architecture, whereas achieving out-of-distribution generalization requires further investigation, for instance into data diversity rather than model architecture.

### Closed-form continuous-time neural networks

- Computer Science
- Nature Machine Intelligence
- 2022

It is shown that it is possible to closely approximate the interaction between neurons and synapses—the building blocks of natural and artificial neural networks—constructed by liquid time-constant networks efficiently in closed form and obtain models that are between one and five orders of magnitude faster in training and inference compared with differential equation-based counterparts.
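The closed-form approximation this entry describes replaces numerical ODE integration with a direct evaluation of the state at time t. A minimal sketch in the spirit of the published CfC gating equation is given below; the three learned heads `f`, `g`, `h` and their linear parameterization are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfc_state(x, I, t, heads):
    """Closed-form continuous-time (CfC) style state, evaluated directly at
    time t instead of integrating an ODE step by step. A time-dependent
    sigmoidal gate blends two learned trajectories g and h.
    (Illustrative sketch; the parameterization is an assumption.)
    """
    z = np.concatenate([x, I])
    f, g, h = (head(z) for head in heads)
    gate = sigmoid(-f * t)               # gate decays/saturates with time
    return gate * g + (1.0 - gate) * h   # convex blend of the two heads

# toy usage: 3 hidden units, 2 inputs, linear heads
rng = np.random.default_rng(1)
heads = [lambda z, W=rng.normal(scale=0.1, size=(3, 5)): W @ z
         for _ in range(3)]
x0 = np.zeros(3)
x_t = cfc_state(x0, np.array([1.0, -1.0]), t=0.5, heads=heads)
```

Avoiding the ODE solver entirely is what yields the reported one-to-five orders-of-magnitude speedup over differential-equation-based counterparts.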

### Closed-form Continuous-Depth Models

- Computer Science
- arXiv
- 2021

This paper presents a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster while exhibiting equally strong modeling abilities compared to their ODE-based counterparts.

### Interpreting Neural Policies with Disentangled Tree Representations

- Computer Science
- 2022

A new algorithm is designed that programmatically extracts tree representations from compact neural policies, in the form of a set of logic programs grounded by world state, enabling interpretability metrics that measure the disentanglement of learned neural dynamics from decision-concentration, mutual-information, and modularity perspectives.

### Closed-form Continuous-time Neural Models

- Computer Science
- 2021

It is shown that it is possible to closely approximate the interaction between neurons and synapses – the building blocks of natural and artificial neural networks – constructed by liquid time-constant networks (LTCs) efficiently in closed form.

### Sparse Flows: Pruning Continuous-depth Models

- Computer Science
- NeurIPS
- 2021

This work designs a framework to decipher the internal dynamics of these continuous depth models by pruning their network architectures, and empirical results suggest that pruning improves generalization for neural ODEs in generative modeling.

### BarrierNet: A Safety-Guaranteed Layer for Neural Networks

- Computer Science
- arXiv
- 2021

These novel safety layers, termed a BarrierNet, can be used in conjunction with any neural network-based controller, and can be trained by gradient descent, which allows the safety constraints of a neural controller be adaptable to changing environments.

### Social ODE: Multi-agent Trajectory Forecasting with Neural Ordinary Differential Equations

- Computer Science
- ECCV
- 2022

The Social ODE approach compares favorably with state-of-the-art methods and, more importantly, can successfully avoid sudden obstacles and effectively control the motion of the agent, while previous methods often fail in such cases.

### Entangled Residual Mappings

- Computer Science
- arXiv
- 2022

While entangled mappings can preserve the iterative refinement of features across various deep models, they influence the representation learning process in convolutional networks differently than attention-based models and recurrent neural networks.

## References

Showing 1–10 of 104 references

### Learning Exploration Policies for Navigation

- Computer Science
- ICLR
- 2019

This work proposes a learning-based approach and finds that the use of policies with spatial memory that are bootstrapped with imitation learning and finally finetuned with coverage rewards derived purely from on-board sensors can be effective at exploring novel environments.

### Conditional Affordance Learning for Driving in Urban Environments

- Computer Science
- CoRL
- 2018

This work proposes a direct perception approach which maps video input to intermediate representations suitable for autonomous navigation in complex urban environments given high-level directional inputs, and is the first to handle traffic lights and speed signs by using image-level labels only.

### One-Shot Hierarchical Imitation Learning of Compound Visuomotor Tasks

- Computer Science
- arXiv
- 2018

This work proposes a method that learns both how to learn primitive behaviors from video demonstrations and how to dynamically compose these behaviors to perform multi-stage tasks by "watching" a human demonstrator.

### Model-based versus Model-free Deep Reinforcement Learning for Autonomous Racing Cars

- Computer Science
- arXiv
- 2021

It is shown that model-based agents capable of learning in imagination substantially outperform model-free agents with respect to performance, sample efficiency, successful task completion, and generalization, and that the generalization ability of model-based agents strongly depends on the choice of observation model.

### Deep Imitative Models for Flexible Inference, Planning, and Control

- Computer Science
- ICLR
- 2020

This paper proposes Imitative Models, probabilistic predictive models of desirable behavior able to plan interpretable expert-like trajectories to achieve specified goals, and derives families of flexible goal objectives that can be used to successfully direct behavior.

### Gershgorin Loss Stabilizes the Recurrent Neural Network Compartment of an End-to-end Robot Learning Scheme

- Computer Science
- 2020 IEEE International Conference on Robotics and Automation (ICRA)
- 2020

A new regularization loss component is introduced together with a learning algorithm that improves the stability of the learned autonomous system by forcing the eigenvalues of the internal state updates of a linear dynamical system (LDS) to be negative reals.

### Learning by Cheating

- Computer Science
- CoRL
- 2019

This work shows that this challenging learning problem can be simplified by decomposing it into two stages and uses the presented approach to train a vision-based autonomous driving system that substantially outperforms the state of the art on the CARLA benchmark and the recent NoCrash benchmark.

### Learning to Control PDEs with Differentiable Physics

- Computer Science
- ICLR
- 2020

It is shown that by using a differentiable PDE solver in conjunction with a novel predictor-corrector scheme, this work can train neural networks to understand and control complex nonlinear physical systems over long time frames.

### Lipschitz Recurrent Neural Networks

- Computer Science
- ICLR
- 2021

This work proposes a recurrent unit that describes the hidden state's evolution with two parts: a well-understood linear component plus a Lipschitz nonlinearity, which is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.

### Closed-form Continuous-Depth Models

- Computer Science
- arXiv
- 2021

This paper presents a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster while exhibiting equally strong modeling abilities compared to their ODE-based counterparts.