Scalable Scene Flow From Point Clouds in the Real World

@article{Jund2022ScalableSF,
  title={Scalable Scene Flow From Point Clouds in the Real World},
  author={Philipp Jund and Chris Sweeney and Nichola Abdo and Z. Chen and Jonathon Shlens},
  journal={IEEE Robotics and Automation Letters},
  year={2022},
  volume={7},
  pages={1589-1596}
}
Autonomous vehicles operate in highly dynamic environments, necessitating an accurate assessment of which aspects of a scene are moving and where they are moving to. A popular approach to 3D motion estimation, termed scene flow, is to employ 3D point cloud data from consecutive LiDAR scans, although such approaches have been limited by the small size of real-world, annotated LiDAR data. In this work, we introduce a new large-scale dataset for scene flow estimation derived from corresponding…
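The task can be made concrete with a small sketch. The following Python snippet is illustrative only; the array shapes, names, and the end-point-error metric are generic conventions from the scene flow literature, not this paper's code. It shows what per-point scene flow between two consecutive LiDAR scans looks like and how such a prediction is typically scored.

import numpy as np

def endpoint_error(pred_flow, gt_flow):
    # Mean Euclidean distance between predicted and ground-truth
    # per-point 3D flow vectors, both of shape (N, 3).
    return np.linalg.norm(pred_flow - gt_flow, axis=1).mean()

# Toy setup: two consecutive LiDAR scans of the same N points.
# Scene flow assigns each point of the first scan a 3D motion vector
# that carries it to its position at the time of the second scan.
N = 5
scan_t0 = np.random.rand(N, 3)              # points at time t
gt_flow = np.array([[0.5, 0.0, 0.0]] * N)   # e.g. all points moving +0.5 m in x
scan_t1 = scan_t0 + gt_flow                 # points at time t+1

pred_flow = scan_t1 - scan_t0               # a trivially perfect "prediction"
print(endpoint_error(pred_flow, gt_flow))   # 0.0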
Citations

Sequential Point Clouds: A Survey
TLDR
An extensive review of deep learning-based methods for sequential point cloud research is presented, covering dynamic estimation, object detection & tracking, point cloud segmentation, and point cloud forecasting.
Deformation and Correspondence Aware Unsupervised Synthetic-to-Real Scene Flow Estimation for Point Clouds
TLDR
This work develops a point cloud collector and scene flow annotator for the GTA-V engine to automatically obtain diverse, realistic training samples without human intervention, and proposes a mean-teacher-based domain adaptation framework that leverages self-generated pseudo-labels of the target domain.
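As a rough illustration of the mean-teacher idea such a framework builds on, here is a minimal Python sketch of the exponential-moving-average teacher update; the parameter dictionaries, decay value, and training loop are assumptions for illustration, not the authors' implementation.

import numpy as np

def ema_update(teacher_params, student_params, decay=0.99):
    # Mean-teacher rule: the teacher is an exponential moving average of the
    # student's weights; pseudo-labels on unlabeled target-domain data are
    # then taken from the (more stable) teacher.
    return {k: decay * teacher_params[k] + (1.0 - decay) * student_params[k]
            for k in teacher_params}

student = {"w": np.array([1.0, 2.0])}
teacher = {"w": np.array([0.0, 0.0])}
for step in range(3):
    # ... student is trained on teacher-generated pseudo-labels here ...
    teacher = ema_update(teacher, student)
print(teacher["w"])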
Dynamic 3D Scene Analysis by Point Cloud Accumulation
TLDR
This paper explores multi-frame point cloud accumulation as a mid-level representation of 3D scan sequences, and develops a method that exploits inductive biases of outdoor street scenes, including their geometric layout and object-level rigidity.
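A minimal sketch of the preprocessing step such accumulation starts from, assuming known sensor poses: each scan is transformed into a common frame and concatenated, so static geometry aligns while moving objects smear. Function and variable names are illustrative, not the paper's code.

import numpy as np

def accumulate(scans, poses):
    # scans: list of (N_i, 3) point arrays in the sensor frame of each timestep.
    # poses: list of 4x4 sensor-to-world transforms for the same timesteps.
    world_points = []
    for pts, T in zip(scans, poses):
        homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N_i, 4)
        world_points.append((homog @ T.T)[:, :3])             # into common frame
    return np.vstack(world_points)

# Two scans, with the sensor translated 1 m forward between them.
scan0 = np.random.rand(4, 3)
scan1 = scan0.copy()
T0 = np.eye(4)
T1 = np.eye(4); T1[0, 3] = 1.0
print(accumulate([scan0, scan1], [T0, T1]).shape)  # (8, 3)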
Real-Time Optical Flow for Vehicular Perception with Low- and High-Resolution Event Cameras
TLDR
This work formulates a novel dense representation of sparse event data, in the form of an "inverse exponential distance surface", designed for use with proven, state-of-the-art frame-based optical flow computation methods.
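The exact formulation is defined in the paper; a plausible minimal sketch of the idea, assuming a Euclidean distance transform from the event pixels followed by an exponential falloff, might look as follows (function names and the alpha parameter are assumptions).

import numpy as np
from scipy.ndimage import distance_transform_edt

def event_distance_surface(event_mask, alpha=5.0):
    # event_mask: boolean HxW image, True where at least one event fired.
    # distance_transform_edt on the inverted mask gives each pixel its
    # distance to the nearest event pixel.
    dist = distance_transform_edt(~event_mask)
    return np.exp(-dist / alpha)  # dense surface: 1 at events, decaying with distance

mask = np.zeros((8, 8), dtype=bool)
mask[3, 4] = True
surface = event_distance_surface(mask)
print(surface[3, 4], surface[0, 0])  # 1.0 at the event, smaller far away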
FAST3D: Flow-Aware Self-Training for 3D Object Detectors
TLDR
This work proposes a self-training method that enables unsupervised domain adaptation for 3D object detectors on continuous LiDAR point clouds and introduces a multi-target tracker that exploits scene detections through time to get reliable pseudo-labels.

References

Showing 1-10 of 66 references
PointPillars: Fast Encoders for Object Detection From Point Clouds
TLDR
Benchmarks suggest that PointPillars is an appropriate encoding for object detection in point clouds; the work also proposes a lean downstream network.
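As a rough sketch of the pillar idea (not the authors' implementation; grid resolution, ranges, and names are illustrative), points are bucketed into vertical columns on a bird's-eye-view grid before a learned encoder summarizes each bucket.

import numpy as np

def pillarize(points, cell=0.16, x_range=(0.0, 69.12), y_range=(-39.68, 39.68)):
    # points: (N, 3) LiDAR points. Each point is assigned to a vertical
    # "pillar" on a 2D bird's-eye-view grid; the z axis is not discretized.
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    pillars = {}
    for p, key in zip(points, zip(ix, iy)):
        pillars.setdefault(key, []).append(p)   # bucket of points per pillar
    return pillars  # a PointNet-style encoder then turns each bucket into a feature

pts = np.random.rand(100, 3) * [60.0, 70.0, 3.0] + [0.0, -35.0, 0.0]
print(len(pillarize(pts)), "non-empty pillars")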
Scalability in Perception for Autonomous Driving: Waymo Open Dataset
TLDR
This work introduces a new large-scale, high-quality, diverse dataset consisting of well-synchronized and calibrated LiDAR and camera data captured across a range of urban and suburban geographies, and studies the effects of dataset size and generalization across geographies on 3D detection methods.
PillarFlow: End-to-end Birds-eye-view Flow Estimation for Autonomous Driving
TLDR
The proposed end-to-end deep learning framework for LiDAR-based flow estimation in bird's-eye view (BeV) not only estimates 2-D BeV flow accurately but also improves tracking performance for both dynamic and static objects.
PointPWC-Net: A Coarse-to-Fine Network for Supervised and Self-Supervised Scene Flow Estimation on 3D Point Clouds
TLDR
This work proposes a novel end-to-end deep scene flow model, called PointPWC-Net, that operates on 3D point clouds in a coarse-to-fine fashion and shows strong generalization ability on the KITTI Scene Flow 2015 dataset, outperforming all previous methods.
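A toy, non-learned stand-in for the coarse-to-fine scheme, using nearest-neighbor matching in place of the learned cost-volume and refinement modules (all names and the subsampling stride are assumptions for illustration).

import numpy as np

def nn_flow(src, dst):
    # Crude flow estimate: each source point is matched to its nearest target point.
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
    return dst[d.argmin(axis=1)] - src

def coarse_to_fine_flow(src, dst, stride=4):
    # Coarse level: estimate flow on a subsampled cloud; upsample it to every
    # point (copy from the nearest coarse point); refine with a second pass.
    coarse_src = src[::stride]
    coarse_flow = nn_flow(coarse_src, dst[::stride])
    d = np.linalg.norm(src[:, None, :] - coarse_src[None, :, :], axis=-1)
    init = coarse_flow[d.argmin(axis=1)]            # upsampled initial flow
    return init + nn_flow(src + init, dst)          # fine-level residual refinement

src = np.random.rand(256, 3)
dst = src + np.array([0.05, 0.0, 0.0])              # a small rigid shift in x
print(coarse_to_fine_flow(src, dst).mean(axis=0))   # close to [0.05, 0, 0]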
End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds
TLDR
This paper aims to synergize the bird's-eye view and the perspective view and proposes a novel end-to-end multi-view fusion (MVF) algorithm, which can effectively learn to utilize the complementary information from both and significantly improves detection accuracy over the comparable single-view PointPillars baseline.
HPLFlowNet: Hierarchical Permutohedral Lattice FlowNet for Scene Flow Estimation on Large-Scale Point Clouds
TLDR
A novel deep neural network architecture for end-to-end scene flow estimation that directly operates on large-scale 3D point clouds is presented and shows great generalization ability on real-world data and on different point densities without fine-tuning.
FlowNet3D: Learning Scene Flow in 3D Point Clouds
  • Xingyu Liu, C. Qi, L. Guibas
  • Computer Science
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR
This work proposes a novel deep neural network named FlowNet3D that learns scene flow from point clouds in an end-to-end fashion and successfully generalizes to real scans, outperforming various baselines and showing results competitive with the prior art.
A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation
  • N. Mayer, Eddy Ilg, T. Brox
  • Computer Science
    2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2016
TLDR
This paper proposes three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks and presents a convolutional network for real-time disparity estimation that provides state-of-the-art results.
U-Net: Convolutional Networks for Biomedical Image Segmentation
TLDR
It is shown that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
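A minimal two-level PyTorch skeleton of the encoder-decoder-with-skip-connections pattern that U-Net popularized (channel counts and depth are illustrative and far smaller than the original architecture):

import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    # A two-level toy U-Net: one encoder stage, a bottleneck, one decoder stage
    # with a skip connection, and a 1x1 prediction head.
    def __init__(self, in_ch=1, out_ch=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, out_ch, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)                              # full-resolution encoder features
        m = self.mid(self.pool(e))                   # downsample, then process
        d = self.up(m)                               # upsample back to full resolution
        d = self.dec(torch.cat([d, e], dim=1))       # skip connection via concatenation
        return self.head(d)

print(TinyUNet()(torch.randn(1, 1, 64, 64)).shape)   # torch.Size([1, 2, 64, 64])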
Object scene flow for autonomous vehicles
TLDR
A novel model and dataset for 3D scene flow estimation with an application to autonomous driving is proposed, representing each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object.
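A short sketch of how per-object rigid motion parameters induce a dense 3D scene flow over the object's points (a generic rigid-motion identity, not the paper's optimization; names and values are illustrative):

import numpy as np

def rigid_scene_flow(points, R, t):
    # For points belonging to one rigidly moving object, the 3D scene flow of
    # a point p is R @ p + t - p, so a single rotation and translation per
    # object describes the motion of all of its points.
    return points @ R.T + t - points

theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # small yaw
t = np.array([1.0, 0.0, 0.0])                          # 1 m forward
pts = np.random.rand(4, 3)
print(rigid_scene_flow(pts, R, t))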
...