Corpus ID: 209140669

Car Pose in Context: Accurate Pose Estimation with Ground Plane Constraints

@article{Li2019CarPI,
  title={Car Pose in Context: Accurate Pose Estimation with Ground Plane Constraints},
  author={Pengfei Li and Weichao Qiu and Michael Peven and Gregory Hager and Alan Loddon Yuille},
  journal={ArXiv},
  year={2019},
  volume={abs/1912.04363}
}
Scene context is a powerful constraint on the geometry of objects within the scene in cases such as surveillance, where the camera geometry is unknown and image quality may be poor. In this paper, we describe a method for estimating the pose of cars in a scene jointly with the ground plane that supports them. We formulate this as a joint optimization that accounts for varying car shape using a statistical atlas, and which simultaneously computes geometry and internal camera parameters. We…
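
To make the setup concrete, the following is a minimal sketch of this kind of joint optimization, not the authors' implementation: it assumes a PCA keypoint atlas (mean plus basis), parameterizes the camera by a focal length, a tilt and a height above the ground plane, parameterizes each car by a yaw, a position on the plane and shape coefficients, and minimizes 2D keypoint reprojection error with SciPy. All names and the parameterization are illustrative assumptions.

    # Illustrative sketch only: a toy joint optimization over camera intrinsics,
    # ground-plane geometry, and per-car pose/shape. The atlas, parameterization,
    # and all names are assumptions, not the paper's code.
    import numpy as np
    from scipy.optimize import least_squares

    def world_to_cam(X_world, tilt, height):
        """World: z up, ground plane z = 0. Camera at `height`, looking along +y
        and pitched down by `tilt` radians (y-down image convention)."""
        R0 = np.array([[1., 0., 0.],
                       [0., 0., -1.],
                       [0., 1., 0.]])                 # world axes -> camera axes
        c, s = np.cos(tilt), np.sin(tilt)
        Rx = np.array([[1., 0., 0.],
                       [0., c, -s],
                       [0., s, c]])                   # pitch about the camera x-axis
        return (X_world - np.array([0., 0., height])) @ (Rx @ R0).T

    def car_keypoints(mean, basis, shape, yaw, tx, ty):
        """Instantiate keypoints from the PCA atlas and place the car on z = 0."""
        X = mean + np.tensordot(shape, basis, axes=1)          # (K, 3)
        c, s = np.cos(yaw), np.sin(yaw)
        R_yaw = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])
        return X @ R_yaw.T + np.array([tx, ty, 0.])

    def residuals(params, kpts_2d, mean, basis, cx, cy):
        """params = [f, tilt, height, yaw, tx, ty, shape...]; reprojection error."""
        f, tilt, height, yaw, tx, ty = params[:6]
        X_cam = world_to_cam(car_keypoints(mean, basis, params[6:], yaw, tx, ty),
                             tilt, height)
        proj = f * X_cam[:, :2] / X_cam[:, 2:3] + np.array([cx, cy])
        return (proj - kpts_2d).ravel()

    # Synthetic check: generate detections from known parameters, then refit.
    rng = np.random.default_rng(0)
    mean = rng.normal(size=(12, 3));  mean[:, 2] += 1.0        # keypoints near z ~ 1 m
    basis = rng.normal(scale=0.1, size=(3, 12, 3))             # 3 shape modes
    truth = np.array([1000., 0.15, 5., 0.4, 1., 20., 0.5, -0.3, 0.2])
    kpts = residuals(truth, np.zeros((12, 2)), mean, basis, 640., 360.).reshape(-1, 2)
    x0 = np.array([800., 0.05, 4., 0., 0., 15., 0., 0., 0.])
    fit = least_squares(residuals, x0, args=(kpts, mean, basis, 640., 360.))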

References

SHOWING 1-10 OF 50 REFERENCES

Resolving 3D Human Pose Ambiguities With 3D Scene Constraints

This work represents human pose with the 3D human body model SMPL-X and extends SMPLify-X to estimate body pose under scene constraints, showing quantitatively that introducing scene constraints significantly reduces 3D joint error and vertex error.
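
The scene constraints enter the pose fitting as extra penalty terms in the objective. Below is a minimal sketch of two such terms, a penetration penalty and a contact penalty against a scene signed-distance function; the planar-ground SDF and all names are illustrative assumptions rather than the SMPLify-X code.

    # Sketch of scene-constraint penalties, assuming the scene is available as a
    # signed-distance function (positive outside geometry, negative inside).
    import numpy as np

    def penetration_penalty(vertices, sdf):
        """Penalize body vertices that end up inside scene geometry."""
        d = sdf(vertices)
        return np.sum(np.minimum(d, 0.0) ** 2)

    def contact_penalty(contact_vertices, sdf):
        """Pull vertices labelled as likely contact points onto nearby surfaces."""
        return np.sum(np.abs(sdf(contact_vertices)))

    # Toy scene: a ground plane at z = 0 (signed distance is just the height).
    ground_sdf = lambda V: V[:, 2]
    body = np.random.randn(100, 3)                    # stand-in for SMPL-X vertices
    loss = penetration_penalty(body, ground_sdf) + 0.1 * contact_penalty(body[:5], ground_sdf)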

GroundNet: Monocular Ground Plane Normal Estimation with Geometric Consistency

This model achieves top-ranked performance on ground plane normal estimation and horizon line detection on the real-world outdoor datasets ApolloScape and KITTI, improving on prior art by up to 17.7% relative.
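
The geometric fact such a network can exploit is that, given the intrinsics K, the ground plane's normal in camera coordinates and the horizon line determine each other via l ∝ K^(-T) n. The snippet below shows only that textbook relation with made-up intrinsics, not GroundNet itself.

    # Standard single-view relation between a plane normal and its vanishing line;
    # the intrinsics K and the normal below are made-up example values.
    import numpy as np

    def horizon_from_normal(n_cam, K):
        """Horizon (vanishing line) of a plane with unit normal n_cam in camera coords:
        l ~ K^{-T} n, returned as (a, b, c) with a*x + b*y + c = 0 in pixels."""
        l = np.linalg.inv(K).T @ n_cam
        return l / np.linalg.norm(l[:2])

    K = np.array([[800., 0., 640.],
                  [0., 800., 360.],
                  [0., 0., 1.]])
    n_up = np.array([0., -1., 0.05])                  # roughly "up" for a y-down camera
    print(horizon_from_normal(n_up / np.linalg.norm(n_up), K))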

Making Deep Heatmaps Robust to Partial Occlusions for 3D Object Pose Estimation

A novel method for robust and accurate 3D object pose estimation from a single color image under large occlusions: heatmaps are predicted from multiple small patches independently, and the 3D pose is then computed from the resulting 2D-3D correspondences using a geometric method.
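
The final geometric step can be sketched in a few lines: once per-keypoint heatmaps have been accumulated over the patches, the peak of each heatmap gives a 2D location that is paired with its known 3D model point and passed to a PnP solver. The sketch below (using OpenCV's solvePnP) covers only that step; the per-patch heatmap predictor is assumed and omitted.

    # Sketch of the heatmap-to-pose step only; the per-patch CNN that produces and
    # accumulates the heatmaps is assumed and not shown.
    import numpy as np
    import cv2

    def pose_from_heatmaps(heatmaps, model_points_3d, K):
        """heatmaps: (num_keypoints, H, W) summed over patches;
        model_points_3d: (num_keypoints, 3) corresponding 3D points on the object."""
        kpts_2d = np.array([np.unravel_index(h.argmax(), h.shape)[::-1]   # (x, y)
                            for h in heatmaps], dtype=np.float64)
        ok, rvec, tvec = cv2.solvePnP(model_points_3d.astype(np.float64), kpts_2d,
                                      K.astype(np.float64), None,
                                      flags=cv2.SOLVEPNP_EPNP)            # needs >= 4 points
        return ok, rvec, tvec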

A General and Simple Method for Camera Pose and Focal Length Determination

This paper revisits the pose determination problem for a partially calibrated camera with unknown focal length using n (n ≥ 4) 3D-to-2D point correspondences, and proposes a truly general method suited both to minimal 4-point RANSAC applications and to large-scale scenarios with thousands of points, irrespective of the 3D point configuration.
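
The estimation problem itself is easy to state even though the paper's solver is algebraic: find the rotation, translation and focal length that minimize reprojection error over the n correspondences. The sketch below is a generic nonlinear refinement of exactly that objective, offered as a stand-in rather than the paper's method.

    # Generic nonlinear refinement of pose + focal length from n >= 4 correspondences.
    # This is NOT the paper's solver, just a sketch of the estimation problem.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def reproj_residuals(params, X, x, cx, cy):
        """params = [f, rotation vector (3), translation (3)];
        X: (N, 3) world points, x: (N, 2) their pixel observations."""
        f = params[0]
        R = Rotation.from_rotvec(params[1:4]).as_matrix()
        Xc = X @ R.T + params[4:7]
        proj = f * Xc[:, :2] / Xc[:, 2:3] + np.array([cx, cy])
        return (proj - x).ravel()

    def pose_and_focal(X, x, cx, cy, f_init=1000.0):
        x0 = np.concatenate([[f_init], np.zeros(3), [0.0, 0.0, 5.0]])
        return least_squares(reproj_residuals, x0, args=(X, x, cx, cy)).x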

Automatic Calibration of Stationary Surveillance Cameras in the Wild

This paper presents a fully automatic camera calibration algorithm for monocular stationary surveillance cameras that is the first to combine several existing calibration components from the literature, and introduces novel pre- and post-processing stages that improve estimation of the horizon line and the vertical vanishing point.
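
One textbook computation that such calibration pipelines rest on is recovering the focal length from the horizon line and the vertical vanishing point through their pole-polar relation with the image of the absolute conic, assuming square pixels, zero skew and a known principal point. The sketch below shows only that step, not the paper's pipeline.

    # Textbook single-view geometry, not the paper's pipeline: with square pixels,
    # zero skew and a known principal point, the vertical vanishing point v and the
    # horizon l satisfy l ~ (vx, vy, f^2) once coordinates are centred on the
    # principal point.
    import numpy as np

    def focal_from_horizon_and_vvp(horizon, vvp, principal_point):
        """horizon: (a, b, c) with a*x + b*y + c = 0 in pixels; vvp: (x, y)."""
        a, b, c = horizon
        cx, cy = principal_point
        c_centred = c + a * cx + b * cy               # shift origin to principal point
        vx, vy = vvp[0] - cx, vvp[1] - cy
        f_sq = c_centred * (a * vx + b * vy) / (a ** 2 + b ** 2)
        if f_sq <= 0:                                 # inconsistent inputs
            raise ValueError("horizon and vertical vanishing point are inconsistent")
        return float(np.sqrt(f_sq))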

A Unified Framework for Multi-View Multi-Class Object Pose Estimation

This work presents a scalable framework for accurately inferring six Degrees-of-Freedom (6-DoF) pose for a large number of object classes from single or multiple views, and shows that the multi-view formulation consistently improves on the performance of the single-view network.

Beyond PASCAL: A benchmark for 3D object detection in the wild

The PASCAL3D+ dataset is contributed: a novel and challenging benchmark for 3D object detection and pose estimation, with more than 3,000 object instances per category on average.

Atlanta world: an expectation maximization framework for simultaneous low-level edge grouping and camera calibration in complex man-made environments

  • Grant Schindler, F. Dellaert
  • Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004)
  • 2004
This work proposes to use the expectation maximization (EM) algorithm to perform the search over all continuous parameters that influence the location of the vanishing points in a scene, and presents experimental results on images of "Atlanta worlds", complex urban scenes with multiple orthogonal edge-groups, that validate the approach.
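
As a toy illustration of the E/M alternation (the paper searches over the continuous camera parameters that determine the vanishing points, rather than over the vanishing points directly), the sketch below soft-assigns edge segments to a fixed set of vanishing points and re-estimates each point from its weighted edges.

    # Toy EM loop for grouping edges to vanishing points; a deliberate simplification
    # of the paper's formulation, purely to show the E-step / M-step alternation.
    import numpy as np

    def edge_lines(segments):
        """Homogeneous line through each segment's endpoints; segments: (N, 2, 2)."""
        ones = np.ones((len(segments), 1))
        p = np.concatenate([segments[:, 0], ones], axis=1)
        q = np.concatenate([segments[:, 1], ones], axis=1)
        l = np.cross(p, q)
        return l / np.linalg.norm(l, axis=1, keepdims=True)

    def em_vanishing_points(segments, vps_init, n_iter=20, sigma=0.02):
        lines = edge_lines(segments)                            # (N, 3)
        vps = [v / np.linalg.norm(v) for v in vps_init]         # homogeneous unit vectors
        for _ in range(n_iter):
            # E-step: soft assignment from the algebraic residual |l_i . v_k|
            resid = np.stack([np.abs(lines @ v) for v in vps], axis=1)
            resp = np.exp(-0.5 * (resid / sigma) ** 2) + 1e-9
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: each VP becomes the null direction of its weighted line scatter
            for k in range(len(vps)):
                S = (lines * resp[:, k:k + 1]).T @ lines
                vps[k] = np.linalg.eigh(S)[1][:, 0]             # smallest-eigenvalue vector
        return vps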

PanoContext: A Whole-Room 3D Context Model for Panoramic Scene Understanding

Experiments show that, based solely on 3D context without any image-region category classifier, the proposed whole-room context model achieves performance comparable to a state-of-the-art object detector, demonstrating that when the FOV is large, context is as powerful as object appearance.