Time-of-Flight Cameras

  • Miles E. Hansard, Seungkyu Lee, Ouk Choi, Radu Horaud
  • SpringerBriefs in Computer Science
Time-of-flight (TOF) cameras provide a depth value at each pixel, from which the 3D structure of the scene can be estimated. This new type of active sensor makes it possible to go beyond traditional 2D image processing, directly to depth-based and 3D scene processing. Many computer vision and graphics applications can benefit from TOF data, including 3D reconstruction, activity and gesture recognition, motion capture and face detection. It is already possible to use multiple TOF cameras, in… 

Cross-calibration of time-of-flight and colour cameras

An overview of depth cameras and range scanners based on time-of-flight technologies

The underlying measurement principles of time-of-flight cameras are described: pulsed-light cameras, which directly measure the time taken for a light pulse to travel from the device to the object and back again, and continuous-wave-modulated light cameras, which measure the phase difference between the emitted and received signals and hence obtain the travel time indirectly.
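The continuous-wave principle described above can be sketched numerically. The following is a minimal illustration (not taken from the book), assuming a single modulation frequency and ignoring multi-frequency phase unwrapping:

```python
import numpy as np

# Continuous-wave ToF: depth is recovered from the phase shift between
# the emitted and received modulated signals:
#
#     d = c * phi / (4 * pi * f_mod)
#
# where c is the speed of light, f_mod the modulation frequency, and
# phi in [0, 2*pi) the measured phase. Depths beyond c / (2 * f_mod)
# wrap around (the phase-ambiguity problem).

C = 299_792_458.0  # speed of light (m/s)

def phase_to_depth(phi, f_mod):
    """Convert a measured phase shift (radians) to depth (metres)."""
    return C * phi / (4.0 * np.pi * f_mod)

def unambiguous_range(f_mod):
    """Maximum depth measurable without phase wrapping (metres)."""
    return C / (2.0 * f_mod)

# Example: at 20 MHz modulation the unambiguous range is about 7.5 m.
assert abs(unambiguous_range(20e6) - 7.4948) < 0.001
```

A half-cycle phase shift (phi = pi) therefore corresponds to half the unambiguous range, which is why commercial cameras often combine several modulation frequencies to extend the measurable depth.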

Computational imaging with multi-camera time-of-flight systems

A reproducible hardware system is developed that allows the exposure times and waveforms of up to three cameras to be synchronized; waveform interference between multiple light sources in ToF applications is analyzed, and simple solutions are proposed.

From stereo video and time-of-flight camera

  • T. Ates, Aydin Alatan
  • Computer Science
    2011 IEEE 19th Signal Processing and Communications Applications Conference (SIU)
  • 2011
It could be concluded that it is possible to realize content production and display stages of a free-viewpoint system in real-time by using only low-cost commodity computing devices.

Line-of-sight-based ToF camera's range image filtering for precise 3D scene reconstruction

A new method that removes jump edges is proposed and compared with the most cited and used alternative, in terms of filtered-image quality, filtering computation time, and the impact on registration of successive scans and reconstruction of the whole scene.
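Jump edges ("flying pixels") arise where a pixel's depth mixes foreground and background along a depth discontinuity. As a simplified stand-in for the paper's line-of-sight angle criterion (which this sketch does not reproduce), the following illustration flags pixels whose depth jumps sharply relative to any 4-neighbour:

```python
import numpy as np

def remove_jump_edges(depth, rel_thresh=0.4):
    """Mask out 'flying pixels' at depth discontinuities.

    A pixel is flagged as a jump edge when its depth difference to any
    4-neighbour exceeds rel_thresh * depth. This is a simplified
    stand-in for a line-of-sight angle test, not the paper's method.
    Returns a copy of the depth map with flagged pixels set to NaN.
    """
    d = depth.astype(float)
    out = d.copy()
    jump = np.zeros(d.shape, dtype=bool)
    for axis, shift in [(0, 1), (0, -1), (1, 1), (1, -1)]:
        nb = np.roll(d, shift, axis=axis)          # shifted neighbour
        diff = np.abs(d - nb)
        valid = np.ones(d.shape, dtype=bool)       # mask wrap-around rows/cols
        if axis == 0:
            valid[0 if shift == 1 else -1, :] = False
        else:
            valid[:, 0 if shift == 1 else -1] = False
        jump |= valid & (diff > rel_thresh * d)
    out[jump] = np.nan
    return out
```

Marking removed pixels as NaN (rather than zero) keeps them distinguishable from valid near-range measurements in later registration steps.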

Model-Based Tracking at 300Hz Using Raw Time-of-Flight Observations

This paper shows how to perform model-based object tracking which allows to reconstruct the object's depth at an order of magnitude higher frame-rate through simple modifications to an off-the-shelf depth camera.

Snapshot Difference Imaging using Time-of-Flight Sensors

This paper introduces a snapshot difference imaging approach that is implemented directly in the sensor hardware of emerging time-of-flight cameras, and demonstrates that the proposed technique is useful for direct-global illumination separation, for direct imaging of spatial and temporal image gradients, for direct depth edge imaging, and more.

3D Shape Acquisition

This section introduces the linear camera model, shows the non-linearity introduced by the optical lens, provides the theoretical background for estimating depth through triangulation, and elaborates on the depth-map-to-point-cloud transformation and on the integration of colour information.
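For a rectified stereo pair, the triangulation background mentioned above reduces to the familiar depth-disparity relation Z = f·B/d. A minimal sketch (the function name is illustrative, not from the text):

```python
def triangulate_depth(disparity_px, focal_px, baseline_m):
    """Depth (metres) from disparity (pixels) for a rectified stereo pair.

    Z = f * B / d, where f is the focal length in pixels and
    B the baseline between the two camera centres in metres.
    """
    return focal_px * baseline_m / disparity_px

# Example: 500 px focal length, 10 cm baseline, 25 px disparity -> 2 m.
assert triangulate_depth(25.0, 500.0, 0.10) == 2.0
```

The inverse relation between depth and disparity is why triangulation-based depth degrades quadratically with distance, one motivation for complementing stereo with ToF sensing.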

Transformation of depth maps produced by ToF cameras

A simple depth map transformation is proposed, based on geometrical relations defined by the pinhole camera model and on independently estimated camera intrinsic parameters, and is compared with the post-processing method implemented by the manufacturer.
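The pinhole back-projection underlying such transformations can be sketched as follows. This is a simplified illustration assuming known intrinsics (fx, fy, cx, cy) and no lens distortion, not the paper's exact method:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres) into a 3-D point cloud.

    Pinhole model: for pixel (u, v) with depth Z,
        X = (u - cx) * Z / fx,   Y = (v - cy) * Z / fy.
    Returns an (H*W, 3) array of [X, Y, Z] points in the camera frame.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.astype(float)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

In practice the lens distortion model has to be applied first (or the depth map undistorted), since the pinhole relation above only holds for ideal image coordinates.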

Compressive time-of-flight imaging

This paper proposes a compressive ToF camera design that reduces the amount of data while keeping high spatial and temporal resolution.


Extrinsic and depth calibration of ToF-cameras

  • S. Fuchs, G. Hirzinger
  • Computer Science
    2008 IEEE Conference on Computer Vision and Pattern Recognition
  • 2008
In this work a calibration process for ToF cameras is described, covering the intrinsic parameters, the depth measurement distortion, and the pose of the camera relative to a robot's end effector.

High-quality scanning using time-of-flight depth superresolution

It is shown that ideas from traditional color image superresolution can be applied to TOF cameras in order to obtain 3D data of higher X-Y resolution and less noise.

Integrating 3D Time-of-Flight Camera Data and High Resolution Images for 3DTV Applications

This work builds improved 3D models by integrating two different modalities through the machine-learning technique of inference in Markov random fields; the authors believe that the design of low-cost, fast, and highly portable 3D scene acquisition systems will be possible in the near future.

Depth Imaging by Combining Time-of-Flight and On-Demand Stereo

A per-frame confidence map extracted from the TOF sensor data is used to improve the disparity estimation in the stereo part, and the resulting depth map initializes the z-buffer so that virtual objects can be occluded by real objects in an augmented reality scenario.

Fusion of Time of Flight Camera Point Clouds

Fusion methods for partially overlapping range images are investigated, aiming to address the issues of lateral field of view extension and occlusion removal by combining depth images with parallel view axes.

Improved 3D Depth Image Estimation Algorithm for Visual Camera

A new 3D depth image estimation algorithm for a visual camera is proposed, in which both the intensity image and the depth image of a Time-of-Flight camera are taken into consideration; the resulting high-resolution depth map for the visual camera is successfully generated and of high quality.

Real-time 3D visual sensor for robust object recognition

A novel 3D measurement system is presented that yields both depth and color information in real time by calibrating a time-of-flight camera and two CCD cameras, together with robust object recognition using this 3D visual sensor.

Measurements with ToF Cameras and Their Necessary Corrections

The most important characteristic of time-of-flight (ToF) cameras is the ability to measure the distance to each image pixel. Thus, for each pixel, information on both its amplitude and distance to…

ToF depth image motion blur detection using 3D blur shape models

A thorough analysis of ToF depth motion blur and a modeling method used to detect motion-blur regions in a depth image are provided, and it is shown that the proposed method correctly detects blur regions using the set of all possible motion artifact models.

MixIn3D: 3D Mixed Reality with ToF-Camera

The key element is a ToF depth camera, accompanied by color cameras and mounted on a pan-tilt head, which makes it possible to compute mutual occlusions between real and virtual objects and to generate correct light and shadows with mutual light interaction.