Corpus ID: 245502450

Benchmarking Pedestrian Odometry: The Brown Pedestrian Odometry Dataset (BPOD)

@article{Charatan2021BenchmarkingPO,
  title={Benchmarking Pedestrian Odometry: The Brown Pedestrian Odometry Dataset (BPOD)},
  author={David Charatan and Hongyi Fan and Benjamin B. Kimia},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.13018}
}
We present the Brown Pedestrian Odometry Dataset (BPOD) for benchmarking visual odometry algorithms in head-mounted pedestrian settings. This dataset was captured using synchronized global and rolling shutter stereo cameras in 12 diverse indoor and outdoor locations on Brown University’s campus. Compared to existing datasets, BPOD contains more image blur and self-rotation, which are common in pedestrian odometry but rare elsewhere. Ground-truth trajectories are generated from stick-on markers… 

References

SHOWING 1-10 OF 52 REFERENCES

ADVIO: An authentic dataset for visual-inertial odometry

TLDR
A set of versatile and challenging real-world computer vision benchmark sets for visual-inertial odometry, using a wide range of raw sensor data accessible on almost any modern-day smartphone together with a high-quality ground-truth track.

The UMA-VI dataset: Visual–inertial odometry in low-textured and dynamic illumination environments

TLDR
A trial evaluation of five existing state-of-the-art visual and visual–inertial methods on a subset of the dataset, which contains hardware-synchronized data from a commercial stereo camera, a custom stereo rig, and an inertial measurement unit.

A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM

TLDR
This work introduces the Imperial College London and National University of Ireland Maynooth (ICL-NUIM) dataset and presents a collection of handheld RGB-D camera sequences within synthetically generated environments to provide a method to benchmark the surface reconstruction accuracy.

Vision meets robotics: The KITTI dataset

TLDR
A novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research, using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras and a high-precision GPS/IMU inertial navigation system.

The TUM VI Benchmark for Evaluating Visual-Inertial Odometry

TLDR
The TUM VI benchmark is proposed, a novel dataset with a diverse set of sequences in different scenes for evaluating VI odometry, which provides camera images with 1024×1024 resolution at 20 Hz, high dynamic range, and photometric calibration, and evaluates state-of-the-art VI odometry approaches on this dataset.

A Benchmark for Visual-Inertial Odometry Systems Employing Onboard Illumination

TLDR
A dataset for evaluating the performance of visual-inertial odometry (VIO) systems employing an onboard light source, along with an analysis of several state-of-the-art VO and VIO frameworks.

PennCOSYVIO: A challenging Visual Inertial Odometry benchmark

TLDR
This work presents PennCOSYVIO, a new challenging Visual Inertial Odometry benchmark with synchronized data from a VI-sensor (stereo camera and IMU), two Project Tango hand-held devices, and three GoPro Hero 4 cameras, and demonstrates the accuracy with which ground-truth poses can be obtained via optical localization of fiducial markers.

A Photometrically Calibrated Benchmark For Monocular Visual Odometry

TLDR
A novel, simple approach to non-parametric vignette calibration, which requires minimal set-up and is easy to reproduce, together with a thorough evaluation of two existing methods (ORB-SLAM and DSO) on the dataset.

Are we ready for autonomous driving? The KITTI vision benchmark suite

TLDR
The autonomous driving platform is used to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection, revealing that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world.

D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry

TLDR
D3VO tightly incorporates the predicted depth, pose and uncertainty into a direct visual odometry method to boost both the front-end tracking as well as the back-end non-linear optimization.
...