• Corpus ID: 202539999

Masking by Moving: Learning Distraction-Free Radar Odometry from Pose Information

@article{Barnes2019MaskingBM,
  title={Masking by Moving: Learning Distraction-Free Radar Odometry from Pose Information},
  author={Dan Barnes and Rob Weston and Ingmar Posner},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.03752}
}
This paper presents an end-to-end radar odometry system which delivers robust, real-time pose estimates based on a learned embedding space free of sensing artefacts and distractor objects. [...] Key Method: The system is trained in a (self-)supervised way using only previously obtained pose information as a training signal. Using 280 km of urban driving data, we demonstrate that our approach outperforms the previous state of the art in radar odometry, reducing errors by up to 68% whilst running an order of…
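
The core mechanism named in the abstract is a learned mask followed by correlation-based scan matching. The sketch below is a rough illustration only (plain NumPy; the mask is a placeholder and none of the names come from the paper): it recovers the translation between two masked Cartesian radar scans by FFT cross-correlation, whereas the actual system learns the masks and searches over rotation as well.

import numpy as np

def masked_correlation_offset(scan_a, scan_b, mask_a, mask_b):
    # Estimate the (row, col) shift aligning two Cartesian radar scans
    # after masking, via the cross-correlation theorem:
    #   corr = IFFT( FFT(a) * conj(FFT(b)) )
    a, b = scan_a * mask_a, scan_b * mask_b
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap circular-correlation indices into signed offsets
    return [p if p < s // 2 else p - s for p, s in zip(peak, corr.shape)]

rng = np.random.default_rng(0)
scan_a = rng.random((256, 256))
scan_b = np.roll(scan_a, shift=(5, -3), axis=(0, 1))   # known shift
mask = np.ones_like(scan_a)                            # placeholder mask
print(masked_correlation_offset(scan_b, scan_a, mask, mask))  # -> [5, -3]
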
Under the Radar: Learning to Predict Robust Keypoints for Odometry Estimation and Metric Localisation in Radar
  • Dan Barnes, I. Posner
  • Computer Science
    2020 IEEE International Conference on Robotics and Automation (ICRA)
  • 2020
TLDR
A self-supervised framework capable of full mapping and localisation with radar in urban environments; the approach is sensor-agnostic and can be applied to most modalities.
Kidnapped Radar: Topological Radar Localisation using Rotationally-Invariant Metric Learning
TLDR
The utility of the proposed method is analysed via a comprehensive set of metrics which provide insight into its efficacy when used in a realistic system, showing improved performance over the root architecture even in the face of random rotational perturbation.
BFAR: Bounded False Alarm Rate detector for improved radar odometry estimation
TLDR
A new detector for filtering noise from true detections in radar data, which improves the state of the art in radar odometry; the detector is an optimized combination of CFAR and fixed-level thresholding.
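
As a rough illustration of such a combination (an assumption-laden sketch, not the paper's detector): classic cell-averaging CFAR thresholds the cell under test at a multiple a of a local noise estimate Z, and a BFAR-style rule adds a fixed offset b, i.e. threshold = a*Z + b. All parameter values below are illustrative.

import numpy as np

def bfar_detect(power, train=16, guard=4, a=2.0, b=0.1):
    # power: 1D range profile for one azimuth.
    # For each cell under test, estimate the noise level Z from the
    # surrounding training cells (skipping guard cells), then declare a
    # detection if power > a*Z + b: scaled CFAR plus a fixed offset.
    n = len(power)
    hits = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        left = power[i - guard - train : i - guard]
        right = power[i + guard + 1 : i + guard + 1 + train]
        z = np.concatenate([left, right]).mean()
        hits[i] = power[i] > a * z + b
    return hits

rng = np.random.default_rng(1)
profile = rng.exponential(0.02, size=400)    # synthetic noise floor
profile[120] += 1.0                          # one strong synthetic return
print(np.flatnonzero(bfar_detect(profile)))  # expect [120]
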
Look Around You: Sequence-based Radar Place Recognition with Learned Rotational Invariance
This paper details an application which yields significant improvements to the adeptness of place recognition with Frequency-Modulated Continuous-Wave scanning, 360° field-of-view radar…
kRadar++: Coarse-to-Fine FMCW Scanning Radar Localisation
TLDR
It is shown that the recently available, seminal radar place recognition (RPR) and scan matching sub-systems are complementary, in a style reminiscent of the mapping and localisation systems underpinning visual teach-and-repeat (VTR), which have been demonstrated robustly over the last decade.
Do We Need to Compensate for Motion Distortion and Doppler Effects in Radar-Based Navigation?
TLDR
A lightweight estimator is presented that can recover the motion between a pair of radar scans while accounting for both motion distortion and the Doppler effect, with the former more prominent than the latter.
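
For context, the standard FMCW beat-frequency relation (textbook background, not the paper's notation) shows why an uncompensated radial velocity $v_r$ biases the decoded range: a chirp of slope $S$ (Hz/s) and carrier frequency $f_c$ observing a target at range $R$ produces, up to sign conventions,

\[
f_b \;=\; \frac{2S}{c}\,R \;+\; \frac{2 f_c}{c}\, v_r,
\qquad
\hat{R} \;=\; \frac{c\, f_b}{2S} \;=\; R + \frac{f_c}{S}\, v_r .
\]

Motion distortion is the separate effect of the platform moving while the beam is mechanically swept, so each azimuth of a scan is captured from a slightly different pose.
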
Oriented surface points for efficient and accurate radar odometry
TLDR
A radar filter is proposed that keeps only the strongest reflections per azimuth that exceed the expected noise level; the filtered points are used to incrementally estimate odometry by registering the current scan against a nearby keyframe, and a point-to-line metric is found to yield significant improvements when matching sparse sets of surface points.
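
A condensed sketch of this style of per-azimuth filter (the value of k and the noise model are assumptions, not the paper's exact choices):

import numpy as np

def filter_scan(polar_scan, k=12, noise_sigma=3.0):
    # polar_scan: (num_azimuths, num_range_bins) array of power returns.
    # Keep, per azimuth, only the k strongest returns exceeding an
    # estimated noise level; returns a boolean mask of retained points.
    keep = np.zeros(polar_scan.shape, dtype=bool)
    for az, row in enumerate(polar_scan):
        noise_floor = row.mean() + noise_sigma * row.std()
        candidates = np.flatnonzero(row > noise_floor)
        if candidates.size:
            strongest = candidates[np.argsort(row[candidates])[-k:]]
            keep[az, strongest] = True
    return keep
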
RadarLoc: Learning to Relocalize in FMCW Radar
TLDR
This work proposes a novel end-to-end neural network with self-attention, termed RadarLoc, which estimates 6-DoF global poses directly from emerging Frequency-Modulated Continuous-Wave (FMCW) radar scans and outperforms radar-based localization and deep camera relocalization methods by a significant margin.
Tight-coupling of Vision, Radar, and Carrier-phase Differential GNSS for Robust All-weather Positioning
Deployment of automated ground vehicles beyond the confines of sunny and dry climes will require sub-lane-level positioning techniques based on radio waves rather than near-visible-light radiation.
RSL-Net: Localising in Satellite Images From a Radar on the Ground
TLDR
This letter introduces a method that not only naturally deals with the complexity of the signal type but also does so in the context of cross-modal processing.

References

SHOWING 1-10 OF 26 REFERENCES
Precise Ego-Motion Estimation with Millimeter-Wave Radar Under Diverse and Challenging Conditions
  • Sarah H. Cen, P. Newman
  • Computer Science
    2018 IEEE International Conference on Robotics and Automation (ICRA)
  • 2018
TLDR
This paper presents a reliable and accurate radar-only motion estimation algorithm for mobile autonomous systems; it uses a frequency-modulated continuous-wave scanning radar to extract landmarks and performs scan matching by greedily adding point correspondences based on unary descriptors and pairwise compatibility scores.
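
A condensed, illustrative version of that greedy association step (the descriptor scoring and tolerance are assumptions, not the paper's exact formulation): candidate pairs are ranked by unary descriptor distance, then accepted only if they preserve inter-landmark distances with all previously accepted pairs.

import numpy as np

def greedy_match(pts_a, desc_a, pts_b, desc_b, tol=0.5):
    # pts_*: (N, 2) landmark positions; desc_*: (N, D) unary descriptors.
    cost = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    rows, cols = np.unravel_index(np.argsort(cost, axis=None), cost.shape)
    matches = []
    for i, j in zip(rows, cols):
        if any(i == m or j == n for m, n in matches):
            continue  # each landmark may be matched at most once
        # pairwise rigid compatibility: inter-landmark distances agree
        compatible = all(
            abs(np.linalg.norm(pts_a[i] - pts_a[m]) -
                np.linalg.norm(pts_b[j] - pts_b[n])) < tol
            for m, n in matches)
        if compatible:
            matches.append((i, j))
    return matches
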
Distraction suppression for vision-based pose estimation at city scales
TLDR
Two methods are described that combine 3D scene priors with vision sensors to generate background-likelihood images, which act as probability masks for objects that are not part of the scene prior; this results in a system that copes with extreme scene motion, even when most of the image is obscured.
Radar SLAM using visual features
TLDR
This paper presents a navigation framework that requires no additional hardware beyond the already existing naval radar sensor and shows that visual radar features can be used to accurately estimate the vessel trajectory over an extensive data set.
Learning to Localize Using a LiDAR Intensity Map
TLDR
A real-time, calibration-agnostic and effective localization system for self-driving cars that learns to embed the online LiDAR sweeps and intensity map into a joint deep embedding space and conducts efficient convolutional matching between the embeddings.
Driven to Distraction: Self-Supervised Distractor Learning for Robust Monocular Visual Odometry in Urban Environments
TLDR
This work presents a self-supervised approach to ignoring "distractors" in camera images for the purpose of robustly estimating vehicle motion in cluttered urban environments; it yields metric-scale VO using only a single camera and can recover the correct egomotion even when 90% of the image is obscured by dynamic, independently moving objects.
Radar Scan Matching SLAM Using the Fourier-Mellin Transform
TLDR
A trajectory-oriented EKF-SLAM technique using data from a 360-degree field-of-view radar sensor has been developed; the Fourier-Mellin transform provides an accurate and efficient way of computing the rigid transformation between consecutive scans.
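
A compact sketch of the Fourier-Mellin idea for the rotation component (grid sizes are arbitrary choices; a full implementation also recovers translation by phase correlation after de-rotating one scan):

import numpy as np

def rotation_between(img_a, img_b, n_theta=360, n_r=128):
    # The FFT magnitude spectrum is translation-invariant, so resampling
    # it on a polar grid turns image rotation into a cyclic shift along
    # the angle axis, recoverable by 1D circular correlation.
    def polar_profile(img):
        mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
        cy, cx = img.shape[0] / 2.0, img.shape[1] / 2.0
        thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
        radii = np.linspace(1, min(cy, cx) - 1, n_r)
        ys = (cy + radii[None, :] * np.sin(thetas[:, None])).astype(int)
        xs = (cx + radii[None, :] * np.cos(thetas[:, None])).astype(int)
        return mag[ys, xs].sum(axis=1)        # spectral energy per angle
    pa, pb = polar_profile(img_a), polar_profile(img_b)
    corr = np.fft.ifft(np.fft.fft(pa) * np.conj(np.fft.fft(pb))).real
    # note: real-image spectra are point-symmetric, so the estimate has
    # an inherent 180-degree ambiguity to be resolved downstream
    return 360.0 * np.argmax(corr) / n_theta  # rotation in degrees
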
Vehicle localization with low cost radar sensors
TLDR
This work investigates the use of the Iterative Closest Point (ICP) algorithm together with an Extended Kalman Filter (EKF) for localizing a vehicle equipped with automotive-grade radars, and shows that this computationally simpler approach yields sufficiently accurate results on par with more complex methods.
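
A bare-bones sketch of that fusion pattern (the noise matrices and direct-pose measurement model are illustrative assumptions): the EKF predicts with odometry, and ICP registration supplies a pose measurement.

import numpy as np

def ekf_step(x, P, u, z_icp,
             Q=np.diag([0.05, 0.05, 0.01]),    # process noise (assumed)
             R=np.diag([0.20, 0.20, 0.05])):   # ICP noise (assumed)
    # x: (3,) pose [x, y, heading]; P: (3, 3) covariance;
    # u: (v, w, dt) odometry; z_icp: (3,) pose from ICP registration.
    v, w, dt = u
    # predict with a unicycle motion model
    x_pred = x + dt * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
    F = np.eye(3)                              # motion-model Jacobian
    F[0, 2] = -dt * v * np.sin(x[2])
    F[1, 2] = dt * v * np.cos(x[2])
    P_pred = F @ P @ F.T + Q
    # update: ICP observes the full pose directly (H = I);
    # angle wrapping of the innovation is omitted for brevity
    K = P_pred @ np.linalg.inv(P_pred + R)
    x_new = x_pred + K @ (z_icp - x_pred)
    P_new = (np.eye(3) - K) @ P_pred
    return x_new, P_new
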
Leveraging experience for large-scale LIDAR localisation in changing cities
TLDR
This paper proposes an experience-based approach to matching a local 3D swathe built using a push-broom 2D LIDAR to a number of prior 3D maps, each of which has been collected during normal driving in different conditions.
The Oxford Radar RobotCar Dataset: A Radar Extension to the Oxford RobotCar Dataset
TLDR
The target application is autonomous vehicles, where this modality is robust to environmental conditions such as fog, rain, snow, or lens flare, which typically challenge other sensor modalities such as vision and LIDAR.
Are we ready for autonomous driving? The KITTI vision benchmark suite
TLDR
The autonomous driving platform is used to develop novel, challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection, revealing that methods ranking high on established datasets such as Middlebury perform below average when moved outside the laboratory to the real world.