• Corpus ID: 252519672

# TTCDist: Fast Distance Estimation From an Active Monocular Camera Using Time-to-Contact

@inproceedings{Burner2022TTCDistFD,
  title={TTCDist: Fast Distance Estimation From an Active Monocular Camera Using Time-to-Contact},
  author={Levi Burner and Nitin J. Sanket and Cornelia Fermuller and Yiannis Aloimonos},
  year={2022}
}
• Published 14 March 2022
• Computer Science
Distance estimation from vision is fundamental for a myriad of robotic applications such as navigation, manipulation, and planning. Inspired by the mammalian visual system, which gazes at specific objects, we develop two novel constraints relating time-to-contact, acceleration, and distance that we call the $\tau$-constraint and $\Phi$-constraint. They allow an active (moving) camera to estimate depth efficiently and accurately while using only a small portion of the image. The constraints are…
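To illustrate how a constraint of this kind can tie time-to-contact and acceleration to metric distance, here is a minimal sketch. It is not the paper's implementation; the simulated trajectory, variable names, and the specific linear formulation below are assumptions. It uses only the classical TTC definition $\tau = -Z/\dot{Z}$: integrating a known acceleration gives $Z(t) = Z_0 + v_0 t + P(t)$ and $\dot{Z}(t) = v_0 + V(t)$, so the identity $\tau\dot{Z} + Z = 0$ becomes linear in the unknown initial distance $Z_0$ and velocity $v_0$.

```python
import numpy as np

# Hedged sketch: recover metric distance from time-to-contact (TTC)
# measurements plus known acceleration. With V(t), P(t) the single and
# double integrals of acceleration, Z = Z0 + v0*t + P and Zdot = v0 + V,
# and the TTC identity tau*Zdot + Z = 0 rearranges to a linear system:
#   Z0 + (tau + t) * v0 = -(P + tau * V)
dt = 0.01
t = np.arange(0.0, 1.0, dt)
a = 0.5 * np.ones_like(t)            # simulated camera acceleration (assumed)
V = np.cumsum(a) * dt                # velocity change from acceleration
P = np.cumsum(V) * dt                # position change from acceleration
Z0_true, v0_true = 5.0, -1.0         # ground-truth initial distance/velocity
Z = Z0_true + v0_true * t + P        # simulated distance to the object
Zdot = v0_true + V                   # simulated closing rate (stays negative)
tau = -Z / Zdot                      # noise-free TTC "measurements"

# Least-squares solve for (Z0, v0) from the linear constraint.
A = np.stack([np.ones_like(t), tau + t], axis=1)
b = -(P + tau * V)
Z0_est, v0_est = np.linalg.lstsq(A, b, rcond=None)[0]
print(Z0_est, v0_est)
```

With noise-free measurements the least-squares solution recovers the true initial distance and velocity; in practice the TTC measurements would come from image data and the integrals from an IMU, and noise would make the least-squares formulation essential rather than optional.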

## References

SHOWING 1-10 OF 36 REFERENCES

• Computer Science
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2021
This method is the first to offer TTC information (binary or coarsely quantized) at sufficiently high frame-rates for practical use and predicts with low latency whether the observer will collide with an obstacle within a certain time, which is often more critical than knowing exact, per-pixel TTC.
• Computer Science
IEEE Transactions on Robotics
• 2018
This paper presents VINS-Mono, a robust and versatile monocular visual-inertial state estimator applicable to different applications that require high-accuracy localization, and demonstrates onboard closed-loop autonomous flight on a micro-aerial-vehicle platform.
• Computer Science
AAAI
• 2017
This paper presents an on-manifold sequence-to-sequence learning approach to motion estimation using visual and inertial sensors that eliminates the need for tedious manual synchronization of the camera and IMU and can be trained to outperform state-of-the-art methods in the presence of calibration and synchronization errors.
The visual cue of optical flow plays an important role in the navigation of flying insects, and is increasingly studied for use by small flying robots as well. A major problem is that successful optical…
• Computer Science
2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
• 2018
In this tutorial, we provide principled methods to quantitatively evaluate the quality of an estimated trajectory from visual(-inertial) odometry (VO/VIO), which is the foundation of benchmarking the…
• Physics
Biological Cybernetics
• 2004
It is argued that in many cases when an object is moving in an unrestricted manner (translation and rotation) in the 3D world, the authors are just interested in the motion's translational components.
• Computer Science
Int. J. Robotics Res.
• 2017
Experimental results show that robust localization with high accuracy can be achieved with this filter-based framework, and there is no time-consuming initialization procedure and pose estimates are available starting at the second image frame.
• Computer Science
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
• 2021
This work shows that the fusion of events and depth overcomes the failure cases of each individual modality when performing obstacle avoidance, and unifies event camera and lidar streams to estimate metric Time-To-Impact (TTI) without prior knowledge of the scene geometry or obstacles.
• Computer Science
International Journal of Computer Vision
• 2004
An overview of image alignment is presented, describing most of the algorithms and their extensions in a consistent framework and concentrating on the inverse compositional algorithm, an efficient algorithm that was recently proposed.
• Computer Science
2009 IEEE/RSJ International Conference on Intelligent Robots and Systems
• 2009
This paper proposes a novel approach to segmentation based on the operation of fixation by an active observer that integrates monocular cues (color, texture) with binocular cues (stereo disparities and optical flow) and segments the whole scene at once into many areas.