DEEPFOCAL: A method for direct focal length estimation

@article{Workman2015DEEPFOCALAM,
  title={DEEPFOCAL: A method for direct focal length estimation},
  author={Scott Workman and Connor Greenwell and Menghua Zhai and Ryan Baltenberger and Nathan Jacobs},
  journal={2015 IEEE International Conference on Image Processing (ICIP)},
  year={2015},
  pages={1369-1373}
}
Estimating the focal length of an image is an important preprocessing step for many applications. Despite this, existing methods for single-view focal length estimation are limited in that they require particular geometric calibration objects, such as orthogonal vanishing points, co-planar circles, or a calibration grid, to occur in the field of view. In this work, we explore the application of a deep convolutional neural network, trained on natural images obtained from Internet photo… 
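Networks of this kind commonly regress the horizontal field of view rather than the focal length itself; under a pinhole model with the principal point at the image center, the two quantities are interchangeable. A minimal sketch of that conversion (pure Python, illustrative function names; not code from the paper):

```python
import math

def fov_to_focal_px(h_fov_deg: float, image_width_px: int) -> float:
    """Convert horizontal field of view (degrees) to focal length in
    pixels, assuming a pinhole camera with the principal point at the
    image center: f = (w / 2) / tan(FoV / 2)."""
    return (image_width_px / 2.0) / math.tan(math.radians(h_fov_deg) / 2.0)

def focal_px_to_fov(f_px: float, image_width_px: int) -> float:
    """Inverse mapping: FoV = 2 * atan(w / (2 f)), returned in degrees."""
    return math.degrees(2.0 * math.atan(image_width_px / (2.0 * f_px)))
```

For example, a 90° horizontal field of view on a 1000-pixel-wide image corresponds to a focal length of 500 pixels. Regressing the field of view has the practical advantage of being independent of image resolution.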


Focal length estimation guided with object distribution on FocaLens dataset

Experimental results demonstrate that the proposed model trained on FocaLens not only achieves state-of-the-art results on scenes with distinct geometric cues but also obtains comparable results on scenes without distinct geometric cues.

DeepCalib: a deep learning approach for automatic intrinsic calibration of wide field-of-view cameras

This work builds upon recent developments in deep convolutional neural networks (CNNs) to automatically estimate the intrinsic parameters of a camera from a single input image, using the large number of omnidirectional images available on the Internet to generate a large-scale training dataset.

Camera focal length from distances in a single image

Experimental results show that the proposed approach obtains a more accurate focal length than some state-of-the-art methods in a single-image setting, and demonstrate experimentally that distance information contributes meaningfully to solving for the focal length.

Deep Single Image Camera Calibration With Radial Distortion

This work proposes a parameterization for radial distortion that is better suited for learning than directly predicting the distortion parameters, and proposes a new loss function based on point projections to avoid having to balance heterogeneous loss terms.

Single View Metrology in the Wild

This work presents a novel approach to single view metrology that can recover the absolute scale of a scene, represented by the 3D heights of objects or the camera height above the ground, as well as the camera's orientation and field of view, using just a monocular image acquired under unconstrained conditions.

A Perceptual Measure for Deep Single Image Camera Calibration

A large-scale human perception study is conducted in which users judge the realism of 3D objects composited with and without ground-truth camera calibration; from this, a new perceptual measure for camera calibration is developed, and the deep calibration network is shown to outperform other methods on this measure.

DeepPTZ: Deep Self-Calibration for PTZ Cameras

A deep learning based approach to automatically estimate the focal length and distortion parameters of both images as well as the rotation angles between them is proposed, relying on a dual-Siamese structure, imposing bidirectional constraints.

A Geometric Approach to Obtain a Bird's Eye View From an Image

  • A. Abbas, A. Zisserman
  • Computer Science, Mathematics
    2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
  • 2019
The objective of this paper is to rectify any monocular image by computing a homography matrix that transforms it to a geometrically correct bird's eye (overhead) view, and achieves state-of-the-art results on horizon detection.
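The rectification described rests on mapping image points through a 3x3 homography in homogeneous coordinates. As a sketch of that underlying operation only (pure Python; estimating the homography itself is the paper's contribution and is not shown here):

```python
def apply_homography(H, x, y):
    """Map an image point (x, y) through a 3x3 homography H given as
    row-major nested lists, using homogeneous coordinates: the point is
    lifted to (x, y, 1), multiplied by H, then divided by the last
    component to return to the image plane."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    wh = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / wh, yh / wh
```

Warping every pixel of an image this way (or, in practice, inverse-mapping from the output grid) produces the overhead view once a geometrically correct H has been estimated.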

Learned Intrinsic Auto-Calibration From Fundamental Matrices

This work proposes to solve for the intrinsic calibration parameters using a neural network trained on a synthetic dataset created in Unity; the approach outperforms traditional methods by 2% to 30%, and recent deep learning approaches by a factor of 2 to 4.

Multi-task Learning for Camera Calibration

This study presents a unique method for predicting intrinsic and extrinsic properties from a pair of images by reconstructing the 3D points using a camera model neural network and then using the loss in reconstruction to obtain the camera specifications.
...

References

SHOWING 1-10 OF 39 REFERENCES

Depth Map Prediction from a Single Image using a Multi-Scale Deep Network

This paper employs two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally, and applies a scale-invariant error to help measure depth relations rather than scale.

Automatic Camera Calibration from a Single Manhattan Image

We present a completely automatic method for obtaining the approximate calibration of a camera (alignment to a world frame and focal length) from a single image of an unknown scene, provided only…

Simultaneous Vanishing Point Detection and Camera Calibration from Single Images

This paper presents a novel method to quickly, accurately, and simultaneously estimate three orthogonal vanishing points (TOVPs) and the focal length from single images. It decomposes a 2D Hough parameter space into two cascaded 1D Hough parameter spaces, which makes the method much faster and more robust than previous methods without losing accuracy.
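Once two orthogonal vanishing points are known, the focal length follows from a standard pinhole-model constraint (square pixels, known principal point): the back-projected rays to the two vanishing points must be perpendicular, giving f² = -(v1 - p)·(v2 - p). This is the classical closed form, not the cascaded Hough machinery of the paper above:

```python
import math

def focal_from_orthogonal_vps(v1, v2, pp):
    """Focal length in pixels from two orthogonal vanishing points v1, v2
    (pixel coordinates) and the principal point pp, assuming a pinhole
    camera with square pixels and zero skew: f^2 = -(v1 - pp) . (v2 - pp)."""
    d = (v1[0] - pp[0]) * (v2[0] - pp[0]) + (v1[1] - pp[1]) * (v2[1] - pp[1])
    if d >= 0:
        # The dot product must be negative for directions that are
        # actually orthogonal in 3D; otherwise the input is degenerate.
        raise ValueError("vanishing points inconsistent with orthogonality")
    return math.sqrt(-d)
```

As a sanity check, a camera with f = 800 px and principal point at the origin, viewing the orthogonal 3D directions (1, 0, 1) and (-1, 0, 1), yields vanishing points (800, 0) and (-800, 0), from which the formula recovers f = 800.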

Camera Parameters Estimation from Hand-labelled Sun Positions in Image Sequences

A novel technique to determine camera parameters when the sun is visible in an image sequence is presented, which can be used to successfully recover the camera focal length, as well as its azimuth and zenith angles.

A four-step camera calibration procedure with implicit image correction

  • J. Heikkilä, O. Silvén
  • Physics
    Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition
  • 1997
This paper presents a four-step calibration procedure that is an extension to the two-step method, and a linear method for solving the parameters of the inverse model is presented.

On Sampling Focal Length Values to Solve the Absolute Pose Problem

This paper challenges the notion that using minimal solvers is always optimal and proposes to compute the pose for a camera with unknown focal length by randomly sampling a focal length value and using an efficient pose solver for the now calibrated camera.

Robust Global Translations with 1DSfM

This work proposes a method for removing outliers from problem instances by solving simpler low-dimensional subproblems, which it refers to as 1DSfM problems.

Using vanishing points for camera calibration

Extensive experimentation shows that the precision that can be achieved with the proposed method is sufficient to efficiently perform machine vision tasks that require camera calibration, like depth from stereo and motion from image sequence.

Recovering Surface Layout from an Image

This paper takes a first step toward constructing the surface layout, a labeling of the image into geometric classes, by learning appearance-based models of these classes, which coarsely describe the 3D scene orientation of each image region.

On plane-based camera calibration: A general algorithm, singularities, applications

  • P. Sturm, S. Maybank
  • Mathematics
    Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)
  • 1999
A general algorithm for plane-based calibration that can deal with arbitrary numbers of views and calibration planes and it is easy to incorporate known values of intrinsic parameters is presented.