Transferring Dense Pose to Proximal Animal Classes

@article{Sanakoyeu2020TransferringDP,
  title={Transferring Dense Pose to Proximal Animal Classes},
  author={Artsiom Sanakoyeu and Vasil Khalidov and Maureen S. McCarthy and Andrea Vedaldi and Natalia Neverova},
  journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
  pages={5232-5241}
}
Recent contributions have demonstrated that it is possible to recognize the pose of humans densely and accurately given a large dataset of poses annotated in detail. In principle, the same approach could be extended to any animal class, but the effort required for collecting new annotations for each case makes this strategy impractical, despite important applications in natural conservation, science and business. We show that, at least for proximal animal classes such as chimpanzees, it is…

Citations

Pose Recognition in the Wild: Animal pose estimation using Agglomerative Clustering and Contrastive Learning
  • Samayan Bhattacharya, Sk Shahnawaz
  • Computer Science
    ArXiv
  • 2021
TLDR
This paper introduces a novel architecture that recognizes the pose of multiple animals from unlabelled data, translating the saying "neurons that fire together, wire together" into "parts that move together, group together" to achieve a more effective classification of the data.
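The grouping principle above ("parts that move together, group together") can be illustrated with a minimal sketch: describe each candidate part by its motion over time and cluster correlated parts agglomeratively. The trajectories, distance measure and cluster count below are illustrative assumptions, and the paper's contrastive-learning component is not reproduced.

```python
# Hedged sketch: agglomeratively cluster body parts whose motion is correlated.
# Trajectories, distance measure and cluster count are illustrative assumptions.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)

# Hypothetical tracked 2D positions of candidate parts: (num_parts, num_frames, 2).
num_parts, num_frames = 12, 200
trajectories = rng.normal(size=(num_parts, num_frames, 2)).cumsum(axis=1)

# Describe each part by its frame-to-frame displacements, flattened to a vector.
motion = np.diff(trajectories, axis=1).reshape(num_parts, -1)

# Distance = 1 - correlation, so parts that move together end up close.
centered = motion - motion.mean(axis=1, keepdims=True)
centered /= np.linalg.norm(centered, axis=1, keepdims=True) + 1e-8
distance = 1.0 - centered @ centered.T

# "metric" is called "affinity" in older scikit-learn releases.
labels = AgglomerativeClustering(
    n_clusters=4, metric="precomputed", linkage="average"
).fit_predict(distance)
print(labels)  # parts sharing a label are treated as one group
```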
Continuous Surface Embeddings
TLDR
This work proposes a new, learnable image-based representation of dense correspondences and demonstrates that the proposed approach performs on par with or better than state-of-the-art methods for dense pose estimation of humans, while being conceptually simpler.
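A minimal sketch of the continuous-surface-embedding idea, assuming a PyTorch-style setup: pixels and mesh vertices are mapped into a shared embedding space, and each pixel is matched to the vertex with the most similar embedding. The embedding dimension, vertex count and backbone features are stand-ins, not the paper's actual configuration.

```python
# Hedged sketch of continuous surface embeddings: a per-pixel embedding head
# and learnable per-vertex embeddings share one space; a pixel is matched to
# the most similar vertex. Sizes and the toy mesh are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

D = 16                # embedding dimension (assumed)
num_vertices = 500    # vertices of a toy template mesh (assumed)

pixel_head = nn.Conv2d(64, D, kernel_size=1)    # on top of some backbone features
vertex_embed = nn.Embedding(num_vertices, D)    # one embedding per mesh vertex

features = torch.randn(1, 64, 32, 32)           # stand-in backbone features
pix = F.normalize(pixel_head(features), dim=1)  # (1, D, H, W)
vert = F.normalize(vertex_embed.weight, dim=1)  # (V, D)

# Similarity of every pixel to every vertex, then soft and hard assignments.
logits = torch.einsum("bdhw,vd->bvhw", pix, vert)
soft_assignment = logits.softmax(dim=1)          # (1, V, H, W)
best_vertex = logits.argmax(dim=1)               # hard correspondence per pixel
print(best_vertex.shape)  # torch.Size([1, 32, 32])
```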
Discovering Relationships between Object Categories via Universal Canonical Maps
TLDR
This paper shows that improved correspondences can be learned automatically as a natural byproduct of learning category-specific dense pose predictors, and obtains state-of-the-art alignment results, outperforming dedicated methods for matching 3D shapes.
The PAIR-R24M Dataset for Multi-animal 3D Pose Estimation
TLDR
The PAIR-R24M (Paired Acquisition of Interacting oRganisms - Rat) dataset is introduced, the first large multi-animal 3D pose estimation dataset, which contains 24.3 million frames of RGB video and 3D ground-truth motion capture of dyadic interactions in laboratory rats.
DensePose 3D: Lifting Canonical Surface Maps of Articulated Objects to the Third Dimension
TLDR
DensePose 3D is contributed, a method that can learn monocular 3D reconstructions in a weakly supervised fashion from 2D image annotations only, in stark contrast with previous deformable reconstruction methods that use parametric models such as SMPL pre-trained on a large dataset of 3D object scans.
Across-Species Pose Estimation in Poultry Based on Images Using Deep Learning
Animal pose-estimation networks enable automated estimation of key body points in images or videos, allowing animal breeders to collect pose information repeatedly for large numbers of animals.
UltraPose: Synthesizing Dense Pose with 1 Billion Points by Human-body Decoupling 3D Model
  • Haonan Yan, Jiaqi Chen, +4 authors Tianxiang Zheng
  • Computer Science
    ArXiv
  • 2021
TLDR
This work introduces a new 3D human-body model with a series of decoupled parameters that can freely control the generation of the body, and constructs an ultra-dense synthetic benchmark, UltraPose, containing around 1.3 billion corresponding points.
Unified 3D Mesh Recovery of Humans and Animals by Learning Animal Exercise
TLDR
This work proposes an end-to-end unified 3D mesh recovery method for humans and quadruped animals, trained in a weakly supervised way, which exploits morphological similarity through semantic correspondences, called sub-keypoints, enabling joint training of the human and animal mesh regression branches.
Making DensePose fast and light
TLDR
This work redesigns the DensePose R-CNN architecture so that the final network retains most of its accuracy while becoming lighter and faster, achieving a 17× model-size reduction and a 2× latency improvement over the baseline model.
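For context on what "lighter and faster" can mean in practice, the sketch below compares the parameter count of a standard convolution against a depthwise-separable replacement. This is a generic compression trick, not the specific recipe used in this paper.

```python
# Hedged sketch: swap a standard convolution for a depthwise-separable one and
# compare parameter counts. A generic compression trick, not this paper's recipe.
import torch.nn as nn

def count_params(module):
    return sum(p.numel() for p in module.parameters())

standard = nn.Conv2d(256, 256, kernel_size=3, padding=1)
separable = nn.Sequential(
    nn.Conv2d(256, 256, kernel_size=3, padding=1, groups=256),  # depthwise
    nn.Conv2d(256, 256, kernel_size=1),                         # pointwise
)
print(count_params(standard), count_params(separable))  # ~590k vs ~68k parameters
```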
LiftPose3D, a deep learning-based approach for transforming 2D to 3D pose in laboratory animals
TLDR
LiftPose3D permits high-quality 3D pose estimation in the absence of complex camera arrays and tedious calibration procedures and despite occluded body parts in freely behaving animals.
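A minimal sketch of the lifting idea, assuming a small fully connected network that maps a vector of 2D keypoints to 3D coordinates; the joint count, layer sizes and toy training step are illustrative, not the published LiftPose3D architecture.

```python
# Hedged sketch of 2D-to-3D pose lifting with a small fully connected network.
# Joint count, layer sizes and the toy training step are assumptions.
import torch
import torch.nn as nn

num_joints = 15  # assumed number of tracked keypoints

lifter = nn.Sequential(
    nn.Linear(num_joints * 2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, num_joints * 3),
)
optimizer = torch.optim.Adam(lifter.parameters(), lr=1e-3)

# Toy supervised step: random 2D poses with matching 3D ground truth.
pose_2d = torch.randn(32, num_joints * 2)
pose_3d = torch.randn(32, num_joints * 3)

optimizer.zero_grad()
loss = nn.functional.mse_loss(lifter(pose_2d), pose_3d)
loss.backward()
optimizer.step()
print(float(loss))
```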

References

Showing 1-10 of 75 references
Cross-Domain Adaptation for Animal Pose Estimation
TLDR
This paper proposes a novel cross-domain adaptation method that transfers animal pose knowledge from labeled to unlabeled animal classes, using a modest animal pose dataset to adapt the learned knowledge to multiple animal species.
DensePose: Dense Human Pose Estimation in the Wild
TLDR
This work establishes dense correspondences between an RGB image and a surface-based representation of the human body, a task referred to as dense human pose estimation, and improves accuracy through cascading, obtaining a system that delivers highly accurate results at multiple frames per second on a single GPU.
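A minimal sketch of a DensePose-style output head: for each pixel, classify which body chart it belongs to and regress continuous (U, V) surface coordinates within that chart. The 24-chart convention follows the published DensePose formulation; the channel counts and stand-in features are assumptions.

```python
# Hedged sketch of a DensePose-style IUV head: per-pixel chart classification
# plus per-chart (U, V) regression. Channel counts and features are stand-ins.
import torch
import torch.nn as nn

NUM_PARTS = 24  # DensePose divides the body surface into 24 charts

class IUVHead(nn.Module):
    def __init__(self, in_channels=256):
        super().__init__()
        self.part_logits = nn.Conv2d(in_channels, NUM_PARTS + 1, 1)  # +1 for background
        self.u_coords = nn.Conv2d(in_channels, NUM_PARTS, 1)
        self.v_coords = nn.Conv2d(in_channels, NUM_PARTS, 1)

    def forward(self, feats):
        return (
            self.part_logits(feats),         # which chart each pixel belongs to
            self.u_coords(feats).sigmoid(),  # U in [0, 1] within each chart
            self.v_coords(feats).sigmoid(),  # V in [0, 1] within each chart
        )

i_logits, u, v = IUVHead()(torch.randn(1, 256, 56, 56))
print(i_logits.shape, u.shape, v.shape)
```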
2D Human Pose Estimation: New Benchmark and State of the Art Analysis
TLDR
A novel benchmark "MPII Human Pose" is introduced that makes a significant advance in terms of diversity and difficulty, a contribution that is required for future developments in human body models.
Slim DensePose: Thrifty Learning From Sparse Annotations and Motion Cues
TLDR
It is demonstrated that if annotations are collected in video frames, their efficacy can be multiplied for free by using motion cues, and that motion cues help much more when they are extracted from videos.
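One simple way to multiply sparse annotations with motion cues, assuming an optical-flow field is available: propagate an annotated point from one frame to the next by following the flow at its location. The synthetic flow and points below are stand-ins, and the paper's actual training losses are not reproduced.

```python
# Hedged sketch: propagate sparse point annotations from frame t to frame t+1
# by following an optical-flow field sampled at each annotated location.
# The flow and points are synthetic stand-ins.
import numpy as np

H, W = 240, 320
flow = np.random.randn(H, W, 2).astype(np.float32)   # assumed (dx, dy) per pixel
points_t = np.array([[50.0, 60.0], [120.0, 200.0], [30.0, 310.0]])  # (y, x) coords

def propagate(points, flow):
    """Move each annotated point along the flow vector at its (rounded) location."""
    moved = []
    for y, x in points:
        yi = int(np.clip(round(y), 0, flow.shape[0] - 1))
        xi = int(np.clip(round(x), 0, flow.shape[1] - 1))
        dx, dy = flow[yi, xi]
        moved.append([y + dy, x + dx])
    return np.array(moved)

points_t1 = propagate(points_t, flow)  # pseudo-annotations for the next frame
print(points_t1)
```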
Learning effective human pose estimation from inaccurate annotation
TLDR
A significant increase in pose estimation accuracy is demonstrated while computational expense is simultaneously reduced by a factor of 10, and a dataset of 10,000 highly articulated poses is contributed.
Three-D Safari: Learning to Estimate Zebra Pose, Shape, and Texture From Images “In the Wild”
TLDR
This method, SMALST (SMAL with learned Shape and Texture), goes beyond previous work, which assumed manual keypoints and/or segmentation, to regress directly from pixels to 3D animal shape, pose and texture.
Convolutional Pose Machines
TLDR
This work designs a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference in structured prediction tasks such as articulated pose estimation.
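A minimal sketch of the stage-wise refinement described above: an initial stage predicts per-joint belief maps from image features, and each later stage takes the features together with the previous beliefs and outputs refined maps. Layer sizes and stage count are illustrative assumptions.

```python
# Hedged sketch of stage-wise belief-map refinement: stage 1 predicts per-joint
# belief maps, later stages refine them from features + previous beliefs.
import torch
import torch.nn as nn

num_joints, feat_ch, num_stages = 14, 32, 3

backbone = nn.Sequential(nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU())
stage1 = nn.Conv2d(feat_ch, num_joints, 3, padding=1)
refine = nn.ModuleList(
    nn.Sequential(
        nn.Conv2d(feat_ch + num_joints, feat_ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(feat_ch, num_joints, 3, padding=1),
    )
    for _ in range(num_stages - 1)
)

image = torch.randn(1, 3, 64, 64)
feats = backbone(image)
beliefs = stage1(feats)                     # initial belief maps
for stage in refine:                        # later stages refine the beliefs
    beliefs = stage(torch.cat([feats, beliefs], dim=1))
print(beliefs.shape)  # torch.Size([1, 14, 64, 64])
```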
Unsupervised Learning of Object Landmarks through Conditional Image Generation
TLDR
This work proposes a method for learning landmark detectors for visual objects (such as the eyes and the nose in a face) without any manual supervision and introduces a tight bottleneck in the geometry-extraction process that selects and distils geometry-related features.
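One common way to realise such a geometry bottleneck, sketched below under the assumption of a heatmap-based detector: each landmark heatmap is collapsed to a single (x, y) coordinate with a soft-argmax, so only positional information can pass to the image generator.

```python
# Hedged sketch of a landmark bottleneck: each heatmap is collapsed to a single
# (x, y) coordinate with a soft-argmax. Random heatmaps stand in for a detector.
import torch

def soft_argmax(heatmaps):
    """heatmaps: (B, K, H, W) -> landmark coordinates (B, K, 2) in [-1, 1]."""
    b, k, h, w = heatmaps.shape
    probs = heatmaps.flatten(2).softmax(dim=-1).view(b, k, h, w)
    ys = torch.linspace(-1, 1, h).view(1, 1, h, 1)
    xs = torch.linspace(-1, 1, w).view(1, 1, 1, w)
    y = (probs * ys).sum(dim=(2, 3))
    x = (probs * xs).sum(dim=(2, 3))
    return torch.stack([x, y], dim=-1)

coords = soft_argmax(torch.randn(2, 10, 32, 32))
print(coords.shape)  # torch.Size([2, 10, 2])
```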
Unsupervised Learning of Object Landmarks by Factorized Spatial Embeddings
TLDR
This paper proposes a novel unsupervised approach that can discover and learn landmarks in object categories, thus characterizing their structure, and shows that the learned landmarks establish meaningful correspondences between different object instances in a category without having to impose this requirement explicitly.
Unsupervised Part-Based Disentangling of Object Shape and Appearance
TLDR
This work presents an unsupervised approach for disentangling appearance and shape by learning parts consistently over all instances of a category by simultaneously exploiting invariance and equivariance constraints between synthetically transformed images.
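A minimal sketch of an equivariance constraint of this kind: the part maps predicted for a transformed image should equal the same transform applied to the part maps of the original image. The single-convolution part predictor and the horizontal flip below are illustrative assumptions; the paper combines this with appearance invariance.

```python
# Hedged sketch of an equivariance constraint: part maps of a flipped image
# should match the flipped part maps of the original image.
import torch
import torch.nn as nn
import torch.nn.functional as F

part_net = nn.Conv2d(3, 8, 3, padding=1)   # stand-in part-map predictor

image = torch.randn(2, 3, 64, 64)
flipped = torch.flip(image, dims=[3])      # a simple spatial transform

parts = part_net(image)
parts_of_flipped = part_net(flipped)

# transform(predict(image)) should equal predict(transform(image)).
equivariance_loss = F.mse_loss(torch.flip(parts, dims=[3]), parts_of_flipped)
equivariance_loss.backward()
print(float(equivariance_loss))
```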