RGBD-Dog: Predicting Canine Pose from RGBD Sensors

@inproceedings{Kearney2020RGBDDogPC,
  title={RGBD-Dog: Predicting Canine Pose from RGBD Sensors},
  author={Sinead Kearney and Wenbin Li and Martin Parsons and Kwang In Kim and Darren P. Cosker},
  booktitle={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
  pages={8333--8342}
}
  • Published 16 April 2020
  • Computer Science, Biology
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
The automatic extraction of animal 3D pose from images without markers is of interest in a range of scientific fields. Most work to date predicts animal pose from RGB images, based on 2D labelling of joint positions. However, due to the difficult nature of obtaining training data, no ground truth dataset of 3D animal motion is available to quantitatively evaluate these approaches. In addition, a lack of 3D animal pose data also makes it difficult to train 3D pose-prediction methods in a similar… 

Markerless Dog Pose Recognition in the Wild Using ResNet Deep Learning Model

A methodology for recognizing dog poses is proposed that avoids training the feature model from scratch and reduces the need for a large dataset; it is implemented as a mobile app that can be used for animal tracking.

SyDog: A Synthetic Dog Dataset for Improved 2D Pose Estimation

SyDog, a synthetic dataset of dogs containing ground-truth pose and bounding-box coordinates generated with the Unity game engine, is introduced; pose estimation models trained on SyDog are shown to outperform models trained purely on real data, significantly reducing the need for labour-intensive image labelling.

The PAIR-R24M Dataset for Multi-animal 3D Pose Estimation

The PAIR-R24M (Paired Acquisition of Interacting oRganisms - Rat) dataset is introduced, the first large multi-animal 3D pose estimation dataset, which contains 24.3 million frames of RGB video and 3D ground-truth motion capture of dyadic interactions in laboratory rats.

Who Left the Dogs Out? 3D Animal Reconstruction with Expectation Maximization in the Loop

An automatic, end-to-end method for recovering the 3D pose and shape of dogs from monocular internet images is introduced, together with a new parameterized model (including limb scaling), SMBLD, which is released alongside the new annotation dataset StanfordExtra to the research community.

BARC: Learning to Regress 3D Dog Shape from Images by Exploiting Breed Information

This work shows that a-priori information about genetic similarity can help to compensate for the lack of 3D training data, significantly improving shape accuracy over a baseline that does not use breed information.

Coarse-to-fine Animal Pose and Shape Estimation

Observing that the global image feature used by existing animal mesh reconstruction works cannot capture the detailed shape information needed for mesh refinement, this work designs a mesh refinement GCN (MRGCN) as an encoder-decoder structure with hierarchical feature representations, overcoming the limited receptive field of traditional GCNs.

Animal pose estimation from video data with a hierarchical von Mises-Fisher-Gaussian model

GIMBAL is presented: a hierarchical von Mises-Fisher-Gaussian model that improves upon deep networks' estimates by leveraging spatiotemporal constraints; the conditional conjugacy of the model permits simple and efficient Bayesian inference algorithms.

T-LEAP: occlusion-robust pose estimation of walking cows using temporal information

OpenApePose: a database of annotated ape photographs for pose estimation

OpenApePose, a new public dataset of 71,868 photographs of six ape species in naturalistic contexts, annotated with 16 body landmarks, is presented; a standard deep network trained on ape photos is shown to track out-of-sample ape photos more reliably than networks trained on monkeys or on humans.

Object Wake-Up: 3D Object Rigging from a Single Image

An automated approach is presented that tackles the entire process of reconstructing, rigging, and animating such generic 3D objects from single images, demonstrating promising results on the related sub-tasks of 3D reconstruction and skeleton prediction.

References

SHOWING 1-10 OF 42 REFERENCES

Monocular 3D Human Pose Estimation in the Wild Using Improved CNN Supervision

We propose a CNN-based approach for 3D human body pose estimation from single RGB images that addresses the issue of limited generalizability of models trained solely on starkly limited publicly available 3D pose data.

MoCap-guided Data Augmentation for 3D Pose Estimation in the Wild

This paper introduces an image-based synthesis engine that artificially augments a dataset of real images with 2D human pose annotations using 3D Motion Capture (MoCap) data to generate a large set of photorealistic synthetic images of humans with 3D pose annotations.

Synthesizing Training Images for Boosting Human 3D Pose Estimation

It is shown that pose-space coverage and texture diversity are the key ingredients for effective synthetic training data, and that CNNs trained with the authors' synthetic images outperform those trained with real photos on 3D pose estimation tasks.

Learning from Synthetic Humans

This work presents SURREAL (Synthetic hUmans foR REAL tasks): a new large-scale dataset with synthetically-generated but realistic images of people rendered from 3D sequences of human motion capture data and shows that CNNs trained on this synthetic dataset allow for accurate human depth estimation and human part segmentation in real RGB images.

2D Human Pose Estimation: New Benchmark and State of the Art Analysis

A novel benchmark "MPII Human Pose" is introduced that makes a significant advance in terms of diversity and difficulty, a contribution that is required for future developments in human body models.

Real-time human pose recognition in parts from single depth images

This work takes an object recognition approach, designing an intermediate body parts representation that maps the difficult pose estimation problem into a simpler per-pixel classification problem, and generates confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes.

Three-D Safari: Learning to Estimate Zebra Pose, Shape, and Texture From Images “In the Wild”

This method, SMALST (SMAL with learned Shape and Texture), goes beyond previous work, which assumed manual keypoints and/or segmentation, to regress directly from pixels to 3D animal shape, pose, and texture.

Mouse Pose Estimation From Depth Images

It is demonstrated in this paper that when a top-mounted depth camera is combined with a bottom-mounted color camera, the final system is capable of delivering full-body pose estimation including four limbs and the paws.

Clustered Pose and Nonlinear Appearance Models for Human Pose Estimation

A new annotated database of challenging consumer images is introduced, an order of magnitude larger than currently available datasets, and over 50% relative improvement in pose estimation accuracy over a state-of-the-art method is demonstrated.

Cross-Domain Adaptation for Animal Pose Estimation

This paper proposes a novel cross-domain adaptation method to transfer animal pose knowledge from labeled to unlabeled animal classes, using a modest animal pose dataset to adapt the learned knowledge to multiple animal species.