Learning robust perceptive locomotion for quadrupedal robots in the wild

@article{Miki2022LearningRP,
  title={Learning robust perceptive locomotion for quadrupedal robots in the wild},
  author={Takahiro Miki and Joonho Lee and Jemin Hwangbo and Lorenz Wellhausen and Vladlen Koltun and Marco Hutter},
  journal={Science Robotics},
  year={2022},
  volume={7}
}
Legged robots that can operate autonomously in remote and hazardous environments will greatly increase opportunities for exploration into underexplored areas. Exteroceptive perception is crucial for fast and energy-efficient locomotion: Perceiving the terrain before making contact with it enables planning and adaptation of the gait ahead of time to maintain speed and stability. However, using exteroceptive perception robustly for locomotion has remained a grand challenge in robotics. Snow… 
Locomotion Policy Guided Traversability Learning using Volumetric Representations of Complex Environments
TLDR
A sparse convolutional network is trained to predict the simulated traversability cost, which is tailored to the deployed locomotion policy, from an entirely geometric representation of the environment in the form of a 3D voxel-occupancy map.
Egocentric Visual Self-Modeling for Legged Robot Locomotion
TLDR
This work proposed an end-to-end approach that uses high dimension visual observation and action commands to train a visual self-model for legged locomotion, which learns the spatial relationship between the robot body movement and the ground texture changes from image sequences.
PrePARE: Predictive Proprioception for Agile Failure Event Detection in Robotic Exploration of Extreme Terrains
TLDR
This work proposes an approach to learn a model from past robot experience for predictive detection of potential failures and demonstrates that a potential slip event can be predicted up to 720 ms ahead of a potential fall with an average precision greater than 0.95 and an average F-score of 0.82.
Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations
TLDR
The capabilities of autonomous perceptual locomotion that can be achieved by only using sparse visual observations from direct depth measurements, which are easily available from a Lidar or RGB-D sensor, are demonstrated, showing robust ascent and descent over high stairs of 20 cm height, and robustness against noise and unseen terrains.
Marsupial Walking-and-Flying Robotic Deployment for Collaborative Exploration of Unknown Environments
This work contributes a marsupial robotic system-of-systems involving a legged and an aerial robot capable of collaborative mapping and exploration path planning that exploits the heterogeneous…
Learning Semantics-Aware Locomotion Skills from Human Demonstration
TLDR
This work presents a framework that learns semantics-aware locomotion skills from perception for quadrupedal robots, such that the robot can traverse through complex offroad terrains with appropriate speeds and gaits using perception information.
RLOC: Terrain-Aware Legged Locomotion using Reinforcement Learning and Optimal Control
TLDR
A unified model-based and data-driven approach for quadrupedal planning and control to achieve dynamic locomotion over uneven terrain using on-board proprioceptive and exteroceptive feedback and a reinforcement learning policy trained over a wide range of procedurally generated terrains.
Autonomous Teamed Exploration of Subterranean Environments using Legged and Aerial Robots
TLDR
This work is structured around the synergy of an onboard exploration path planner that allows for resilient long-term autonomy and a multi-robot coordination framework that enables navigation in environments with steep slopes and diverse geometries.
WayFAST: Traversability Predictive Navigation for Field Robots
TLDR
This work presents a self-supervised approach for learning to predict traversable paths for wheeled mobile robots that require good traction to navigate, and shows that the training pipeline based on online traction estimates is more data-efficient than other heuristic-based methods.
Learning Torque Control for Quadrupedal Locomotion
Reinforcement learning (RL) is a promising tool for developing controllers for quadrupedal locomotion. The design of most learning-based locomotion controllers adopts the joint position-based
...

References

Showing 1-10 of 81 references
Learning quadrupedal locomotion over challenging terrain
TLDR
The presented work indicates that robust locomotion in natural environments can be achieved by training in simple domains.
Fast and Continuous Foothold Adaptation for Dynamic Locomotion Through CNNs
TLDR
The goal is to react to visual stimuli from the environment, bridging the gap between blind reactive locomotion and purely vision-based planning strategies, and results in an up to 200 times faster computation with respect to the full-blown heuristics.
Robust Rough-Terrain Locomotion with a Quadrupedal Robot
TLDR
A novel pose optimization approach that enables the robot to climb over significant obstacles and experimentally validate the approach with the quadrupedal robot ANYmal by autonomously traversing obstacles such as steps, inclines, and stairs.
Blind Bipedal Stair Traversal via Sim-to-Real Reinforcement Learning
TLDR
This paper shows that sim-to-real reinforcement learning (RL) can achieve robust locomotion over stair-like terrain on the bipedal robot Cassie using only proprioceptive feedback, and only requires modifying an existing flat-terrain training RL framework to include stair-like terrain randomization, without any changes in reward function.
Onboard perception-based trotting and crawling with the Hydraulic Quadruped Robot (HyQ)
This paper presents a framework developed to increase the autonomy and versatility of a large (~75 kg) hydraulically actuated quadrupedal robot. It combines onboard perception with two locomotion…
Perceptive Locomotion in Rough Terrain – Online Foothold Optimization
TLDR
A hierarchical locomotion planner together with a foothold optimizer that finds locally optimal footholds within an elevation map that can cope with stairs and obstacles of heights up to 33% of the robot's leg length is presented.
Stereo vision and terrain modeling for quadruped robots
TLDR
This paper presents an integrated perception and control system for a quadruped robot that allows it to perceive and traverse previously unseen, rugged terrain that includes large, irregular obstacles.
Sim-to-Real: Learning Agile Locomotion For Quadruped Robots
TLDR
This system can learn quadruped locomotion from scratch using simple reward signals and users can provide an open loop reference to guide the learning process when more control over the learned gait is needed.
Adaptive Motion Planning for Autonomous Rough Terrain Traversal with a Walking Robot
TLDR
A new method is contributed that can identify the terrain traversability cost to the benefit of the A* algorithm and a probabilistic regression technique is applied for the traversability assessment with the typical RRT-based motion planner used to explore the space of traversability values.
Vision Aided Dynamic Exploration of Unstructured Terrain with a Small-Scale Quadruped Robot
  • D. Kim, D. Carballo, S. Kim
  • Engineering
    2020 IEEE International Conference on Robotics and Automation (ICRA)
  • 2020
TLDR
This paper integrates two Intel RealSense sensors into the MIT Mini-Cheetah, a 0.3 m tall, 9 kg quadruped robot, and showcases the exploration of highly irregular terrain using dynamic trotting and jumping with the small-scale, fully sensorized Mini-Cheetah quadruped robot.
...