DEEPFOCAL: A method for direct focal length estimation

@article{Workman2015DEEPFOCALAM,
  title={DEEPFOCAL: A method for direct focal length estimation},
  author={Scott Workman and Connor Greenwell and Menghua Zhai and Ryan Baltenberger and Nathan Jacobs},
  journal={2015 IEEE International Conference on Image Processing (ICIP)},
  year={2015},
  pages={1369-1373}
}
Estimating the focal length of an image is an important preprocessing step for many applications. Despite this, existing methods for single-view focal length estimation are limited in that they require particular geometric calibration objects, such as orthogonal vanishing points, co-planar circles, or a calibration grid, to occur in the field of view. In this work, we explore the application of a deep convolutional neural network, trained on natural images obtained from Internet photo…
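To make the direct-regression idea concrete, here is a minimal sketch (not the authors' architecture or training setup) assuming the network regresses the horizontal field of view, which the standard pinhole relation then converts to a focal length in pixels:

```python
# Minimal sketch of direct field-of-view regression with a CNN (illustrative only;
# not the architecture or training procedure used in DEEPFOCAL).
import math
import torch
import torch.nn as nn

class FovRegressor(nn.Module):
    """Tiny CNN mapping an RGB image to a single horizontal field-of-view value."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 1)  # predicted horizontal FoV in radians

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def fov_to_focal_px(fov_rad, image_width_px):
    """Pinhole relation: f = (w / 2) / tan(FoV / 2), with f in pixels."""
    return (image_width_px / 2.0) / math.tan(fov_rad / 2.0)

model = FovRegressor()
with torch.no_grad():
    _ = model(torch.rand(1, 3, 224, 224))  # untrained: output is a placeholder
# Example of the conversion only: a 60-degree horizontal FoV on a 1024-pixel-wide
# image corresponds to a focal length of roughly 887 pixels.
print(fov_to_focal_px(math.radians(60.0), 1024))
```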
Focal length estimation guided with object distribution on FocaLens dataset
TLDR
Experimental results demonstrate that the proposed model trained on FocaLens can not only achieve state-of-the-art results on scenes with distinct geometric cues but also obtain comparable results on scenes even without distinct geometric cues.
DeepCalib: a deep learning approach for automatic intrinsic calibration of wide field-of-view cameras
TLDR
This work builds upon recent developments in deep Convolutional Neural Networks (CNNs) and automatically estimates the intrinsic parameters of the camera from a single input image, using the large number of omnidirectional images available on the Internet to generate a large-scale dataset.
Deep Single Image Camera Calibration With Radial Distortion
TLDR
This work proposes a parameterization for radial distortion that is better suited for learning than directly predicting the distortion parameters, and proposes a new loss function based on point projections to avoid having to balance heterogeneous loss terms.
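As an illustration of the point-projection idea (the one-parameter polynomial distortion model and the names below are assumptions for the sketch, not the paper's exact parameterization):

```python
# Illustrative point-projection loss: instead of penalising focal length and
# distortion errors separately (heterogeneous units), compare where sampled
# points land in the image under predicted vs. ground-truth parameters.
import torch

def project(points_norm, focal, k1):
    """Apply a one-parameter radial distortion (an assumed model) and focal scaling.
    points_norm: (N, 2) normalised image-plane coordinates."""
    r2 = (points_norm ** 2).sum(dim=-1, keepdim=True)
    return focal * points_norm * (1.0 + k1 * r2)

def point_projection_loss(points_norm, focal_pred, k1_pred, focal_gt, k1_gt):
    """Single homogeneous loss in pixel units, so no per-term balancing is needed."""
    diff = (project(points_norm, focal_pred, k1_pred)
            - project(points_norm, focal_gt, k1_gt))
    return (diff ** 2).sum(dim=-1).mean()

# Example: a grid of sample points and a small parameter perturbation.
pts = torch.stack(torch.meshgrid(torch.linspace(-1, 1, 5),
                                 torch.linspace(-1, 1, 5),
                                 indexing="ij"), dim=-1).reshape(-1, 2)
print(point_projection_loss(pts, focal_pred=580.0, k1_pred=-0.10,
                            focal_gt=600.0, k1_gt=-0.12))
```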
Horizon Lines in the Wild
TLDR
This work introduces a large, realistic evaluation dataset, Horizon Lines in the Wild (HLW), containing natural images with labeled horizon lines, and investigates the application of convolutional neural networks for directly estimating the horizon line.
A Perceptual Measure for Deep Single Image Camera Calibration
TLDR
A large-scale human perception study is conducted where users are asked to judge the realism of 3D objects composited with and without ground-truth camera calibration; a new perceptual measure for camera calibration is developed, and the deep calibration network is demonstrated to outperform other methods on this measure.
DeepPTZ: Deep Self-Calibration for PTZ Cameras
TLDR
A deep learning based approach is proposed to automatically estimate the focal length and distortion parameters of both images, as well as the rotation angles between them, relying on a dual-Siamese structure that imposes bidirectional constraints.
A Geometric Approach to Obtain a Bird's Eye View From an Image
  • A. Abbas, Andrew Zisserman
  • Computer Science
  • 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
  • 2019
TLDR
The objective of this paper is to rectify any monocular image by computing a homography matrix that transforms it to a geometrically correct bird's eye (overhead) view, and achieves state-of-the-art results on horizon detection.
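For background on the geometry (a standard identity, not a summary of this paper's exact pipeline): a pure rotation R of a camera with intrinsic matrix K maps image points by the homography

  x' ~ H x,  with  H = K R K^{-1},

so once K and the horizon line are known, choosing R to rotate the optical axis onto the ground-plane normal yields a rectifying bird's-eye-view homography, up to an arbitrary scale and in-plane rotation.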
Learning to Recover 3D Scene Shape from a Single Image
TLDR
A two-stage framework is proposed that first predicts depth up to an unknown scale and shift from a single monocular image, and then uses 3D point cloud encoders to predict the missing depth shift and focal length, allowing a realistic 3D scene shape to be recovered.
Deep Fundamental Matrix Estimation without Correspondences
TLDR
Novel neural network architectures are proposed to estimate fundamental matrices in an end-to-end manner, achieving competitive performance with traditional methods without the need for extracting point correspondences.
Camera Calibration through Camera Projection Loss
TLDR
This work proposes a novel method to predict extrinsic (baseline, pitch, and translation) and intrinsic (focal length and principal point offset) parameters from an image pair, using a multi-task learning methodology that combines analytical equations with the learning framework for the estimation of camera parameters.

References

SHOWING 1-10 OF 41 REFERENCES
Depth Map Prediction from a Single Image using a Multi-Scale Deep Network
TLDR
This paper employs two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally, and applies a scale-invariant error to help measure depth relations rather than scale.
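For reference, the scale-invariant error referred to here is the standard log-space formulation from that paper (the weighting λ is a design choice: λ = 1 gives the purely scale-invariant metric, while a smaller value is typically used as a training loss):

  D(y, y*) = (1/n) Σ_i d_i^2 − (λ/n^2) (Σ_i d_i)^2,   where d_i = log y_i − log y*_i.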
Automatic Camera Calibration from a Single Manhattan Image
We present a completely automatic method for obtaining the approximate calibration of a camera (alignment to a world frame and focal length) from a single image of an unknown scene, provided only…
Simultaneous Vanishing Point Detection and Camera Calibration from Single Images
TLDR
This paper presents a novel method to quickly, accurately and simultaneously estimate three orthogonal vanishing points (TOVPs) and focal length from single images, which decomposes a 2D Hough parameter space into two cascaded 1D Hough parameter spaces, making the method much faster and more robust than previous methods without losing accuracy.
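The coupling such methods exploit between orthogonal vanishing points and focal length is the standard pinhole relation (assuming square pixels and the principal point at the image centre; this is background, not the paper's Hough decomposition itself): for two orthogonal vanishing points (x1, y1) and (x2, y2) expressed relative to the principal point,

  x1*x2 + y1*y2 + f^2 = 0,  so  f = sqrt(-(x1*x2 + y1*y2)),

which is well defined whenever x1*x2 + y1*y2 < 0; a third orthogonal vanishing point provides additional pairwise constraints of the same form.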
Camera Parameters Estimation from Hand-labelled Sun Positions in Image Sequences
TLDR
A novel technique to determine camera parameters when the sun is visible in an image sequence is presented, which can be used to successfully recover the camera focal length, as well as its azimuth and zenith angles.
A four-step camera calibration procedure with implicit image correction
  • J. Heikkilä, O. Silvén
  • Computer Science, Mathematics
  • Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition
  • 1997
TLDR
This paper presents a four-step calibration procedure that is an extension of the two-step method, together with a linear method for solving the parameters of the inverse model.
On Sampling Focal Length Values to Solve the Absolute Pose Problem
TLDR
This paper challenges the notion that using minimal solvers is always optimal and proposes to compute the pose for a camera with unknown focal length by randomly sampling a focal length value and using an efficient pose solver for the now calibrated camera.
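A rough sketch of the sampling idea (illustrative only: the candidate grid, the inlier-count scoring, and the use of OpenCV's generic PnP-RANSAC as a stand-in for the paper's efficient calibrated solver are all assumptions, not the authors' implementation):

```python
# Illustrative sketch: sample candidate focal lengths, solve calibrated absolute
# pose for each, and keep the hypothesis with the most RANSAC inliers.
import numpy as np
import cv2

def pose_with_unknown_focal(obj_pts, img_pts, width, height, focal_candidates):
    """obj_pts: (N, 3) 3D points, img_pts: (N, 2) pixel coordinates, N >= 4."""
    cx, cy = width / 2.0, height / 2.0
    best = None
    for f in focal_candidates:
        K = np.array([[f, 0.0, cx],
                      [0.0, f, cy],
                      [0.0, 0.0, 1.0]])
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            obj_pts, img_pts, K, None, reprojectionError=4.0)
        score = 0 if inliers is None else len(inliers)
        if ok and (best is None or score > best[0]):
            best = (score, f, rvec, tvec)
    return best  # (inlier count, focal length, rotation vector, translation vector)

# Example candidate grid: focal lengths spanning wide-angle to telephoto,
# expressed as multiples of the image width (an arbitrary illustrative choice):
# focal_candidates = np.geomspace(0.3, 3.0, 20) * width
```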
Robust Global Translations with 1DSfM
TLDR
This work proposes a method for removing outliers from problem instances by solving simpler low-dimensional subproblems, which it refers to as 1DSfM problems.
Using vanishing points for camera calibration
TLDR
Extensive experimentation shows that the precision that can be achieved with the proposed method is sufficient to efficiently perform machine vision tasks that require camera calibration, like depth from stereo and motion from image sequences.
Recovering Surface Layout from an Image
TLDR
This paper takes the first step towards constructing the surface layout, a labeling of the image into geometric classes, by learning appearance-based models of these geometric classes, which coarsely describe the 3D scene orientation of each image region.
On plane-based camera calibration: A general algorithm, singularities, applications
  • P. Sturm, S. Maybank
  • Mathematics, Computer Science
  • Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)
  • 1999
TLDR
A general algorithm for plane-based calibration that can deal with arbitrary numbers of views and calibration planes and into which it is easy to incorporate known values of intrinsic parameters is presented.
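The constraints that plane-based calibration stacks per view are the standard ones on the image of the absolute conic ω = K^{-T} K^{-1} (background shared with Zhang-style calibration, stated here for context rather than as the paper's exact derivation): writing the world-plane-to-image homography as H = [h1 h2 h3], each view contributes

  h1^T ω h2 = 0   and   h1^T ω h1 = h2^T ω h2,

which are linear in the entries of ω; with enough views (or with some intrinsics fixed), ω is recovered and K follows by a Cholesky-type factorization, with the singular configurations analysed in the paper corresponding to plane/camera geometries for which these equations become degenerate.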