Corpus ID: 252519672

TTCDist: Fast Distance Estimation From an Active Monocular Camera Using Time-to-Contact

@inproceedings{Burner2022TTCDistFD,
  title={TTCDist: Fast Distance Estimation From an Active Monocular Camera Using Time-to-Contact},
  author={Levi Burner and Nitin J. Sanket and Cornelia Ferm{\"u}ller and Yiannis Aloimonos},
  year={2022}
}
Distance estimation from vision is fundamental for a myriad of robotic applications such as navigation, manipulation, and planning. Inspired by the mammalian visual system, which fixates its gaze on specific objects, we develop two novel constraints relating time-to-contact, acceleration, and distance, which we call the $\tau$-constraint and $\Phi$-constraint. They allow an active (moving) camera to estimate depth efficiently and accurately while using only a small portion of the image. The constraints are…
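The abstract's core idea rests on the classical time-to-contact relation $\tau = Z / (-\dot{Z})$: if the closing speed toward an object is known, distance follows directly from $\tau$. The sketch below illustrates this generic principle only, with the closing speed dead-reckoned from (assumed bias-free) accelerometer readings; it is not the paper's actual $\tau$- or $\Phi$-constraint, and the function name and signature are hypothetical.

```python
import numpy as np

def distance_from_ttc(tau, accel, dt, v0=0.0):
    """Illustrative distance estimate from time-to-contact and acceleration.

    Uses the classical relation tau = Z / (-Zdot): with closing speed
    v = -Zdot known, Z = v * tau. Here v is dead-reckoned by integrating
    acceleration along the optical axis (hypothetical, noise/bias-free).
    This is a generic sketch, not the paper's tau- or Phi-constraint.
    """
    tau = np.asarray(tau, dtype=float)
    accel = np.asarray(accel, dtype=float)
    # Closing speed after each time step, from Euler integration.
    v = v0 + np.cumsum(accel) * dt
    # Distance recovered from time-to-contact at each step.
    return v * tau
```

In practice $\tau$ itself would come from image measurements (e.g. scale change of a tracked patch), and real accelerometer bias and noise are what make the paper's carefully derived constraints necessary.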

References

Showing 1-10 of 36 references

Binary TTC: A Temporal Geofence for Autonomous Navigation

This method is the first to offer TTC information (binary or coarsely quantized) at sufficiently high frame-rates for practical use and predicts with low latency whether the observer will collide with an obstacle within a certain time, which is often more critical than knowing exact, per-pixel TTC.

VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator

This paper presents VINS-Mono, a robust and versatile monocular visual-inertial state estimator applicable to a range of tasks requiring high-accuracy localization, and demonstrates onboard closed-loop autonomous flight on a micro-aerial-vehicle platform.

VINet: Visual-Inertial Odometry as a Sequence-to-Sequence Learning Problem

This paper presents an on-manifold sequence-to-sequence learning approach to motion estimation using visual and inertial sensors that eliminates the need for tedious manual synchronization of the camera and IMU and can be trained to outperform state-of-the-art methods in the presence of calibration and synchronization errors.

Monocular distance estimation with optical flow maneuvers and efference copies: a stability-based strategy

The visual cue of optical flow plays an important role in the navigation of flying insects, and is increasingly studied for use by small flying robots as well. A major problem is that successful optical…

A Tutorial on Quantitative Trajectory Evaluation for Visual(-Inertial) Odometry

  • Zichao Zhang, D. Scaramuzza
  • Computer Science
    2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
  • 2018
In this tutorial, we provide principled methods to quantitatively evaluate the quality of an estimated trajectory from visual(-inertial) odometry (VO/VIO), which is the foundation of benchmarking the…

Tracking facilitates 3-D motion estimation

It is argued that in many cases when an object is moving in an unrestricted manner (translation and rotation) in the 3D world, the authors are just interested in the motion's translational components.

Iterated extended Kalman filter based visual-inertial odometry using direct photometric feedback

Experimental results show that robust localization with high accuracy can be achieved with this filter-based framework, and there is no time-consuming initialization procedure and pose estimates are available starting at the second image frame.

EVReflex: Dense Time-to-Impact Prediction for Event-based Obstacle Avoidance

This work shows that the fusion of events and depth overcomes the failure cases of each individual modality when performing obstacle avoidance, and unifies event camera and lidar streams to estimate metric Time-To-Impact (TTI) without prior knowledge of the scene geometry or obstacles.

Lucas-Kanade 20 Years On: A Unifying Framework

An overview of image alignment is presented, describing most of the algorithms and their extensions in a consistent framework and concentrating on the inverse compositional algorithm, an efficient algorithm that was recently proposed.
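The image-alignment framework surveyed above can be illustrated in one dimension: Lucas-Kanade estimates a warp (here a pure translation) by Gauss-Newton minimization of the residual between a template and a warped image. This is a toy forward-additive sketch, not the inverse compositional algorithm the survey highlights (which precomputes the gradient and Hessian on the template for efficiency); the function name is hypothetical.

```python
import numpy as np

def lk_translation_1d(template, image, n_iters=20):
    """Toy 1-D Lucas-Kanade: estimate shift p such that image(x + p)
    best matches template(x), via forward-additive Gauss-Newton.
    """
    x = np.arange(len(template), dtype=float)
    p = 0.0                                       # current shift estimate
    for _ in range(n_iters):
        warped = np.interp(x + p, x, image)       # I(x + p) via linear interp
        grad = np.gradient(warped)                # image gradient dI/dx
        err = template - warped                   # residual T(x) - I(x + p)
        denom = np.sum(grad * grad)               # 1x1 Gauss-Newton "Hessian"
        if denom < 1e-12:
            break
        p += np.sum(grad * err) / denom           # Gauss-Newton update
    return p
```

The inverse compositional variant swaps the roles of template and image in the update so the gradient and Hessian can be computed once, which is what makes it efficient for tracking.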

Active segmentation for robotics

This paper proposes a novel approach to segmentation based on the operation of fixation by an active observer that integrates monocular cues (color, texture) with binocular cues (stereo disparities and optical flow) and segments the whole scene at once into many areas.