Corpus ID: 208139192

Towards Robust RGB-D Human Mesh Recovery

@article{Li2019TowardsRR,
  title={Towards Robust RGB-D Human Mesh Recovery},
  author={Ren Li and Changjiang Cai and Georgios Georgakis and Srikrishna Karanam and Terrence Chen and Ziyan Wu},
  journal={ArXiv},
  year={2019},
  volume={abs/1911.07383}
}
We consider the problem of human pose estimation. While much recent work has focused on the RGB domain, these techniques are inherently under-constrained since there can be many 3D configurations that explain the same 2D projection. To this end, we propose a new method that uses RGB-D data to estimate a parametric human mesh model. Our key innovations include (a) the design of a new dynamic data fusion module that facilitates learning with a combination of RGB-only and RGB-D datasets, (b) a new…
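The abstract above is truncated, but the central idea of a fusion module that can train on a mixture of RGB-only and RGB-D data can be illustrated with a short sketch. The block below is a hypothetical illustration written for this summary, not the authors' architecture: it assumes a per-sample depth-availability flag and a learned per-pixel gate, so that RGB-only samples fall back to the RGB branch alone.

# Hypothetical sketch (not the paper's module): fuse RGB and depth feature maps
# with a learned per-pixel gate, masked per sample so that RGB-only training
# examples contribute no depth signal.
import torch
import torch.nn as nn

class DynamicRGBDFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat, depth_feat, has_depth):
        # rgb_feat, depth_feat: (B, C, H, W); has_depth: (B,) float mask in {0, 1}.
        mask = has_depth.view(-1, 1, 1, 1)
        depth_feat = depth_feat * mask                      # zero out missing depth
        gate = self.gate(torch.cat([rgb_feat, depth_feat], dim=1)) * mask
        return rgb_feat * (1.0 - gate) + depth_feat * gate  # per-pixel blend

# Mixed batch: samples 0 and 2 have depth, samples 1 and 3 are RGB-only.
fusion = DynamicRGBDFusion(channels=256)
rgb = torch.randn(4, 256, 56, 56)
depth = torch.randn(4, 256, 56, 56)
has_depth = torch.tensor([1.0, 0.0, 1.0, 0.0])
fused = fusion(rgb, depth, has_depth)                       # -> (4, 256, 56, 56)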
Citations

Bilevel Online Adaptation for Out-of-Domain Human Mesh Reconstruction
TLDR: A new training algorithm named Bilevel Online Adaptation (BOA) is proposed, which divides the optimization of the overall multi-objective into two steps, a weight probe and a weight update, within each training iteration, and leads to state-of-the-art results on two human mesh reconstruction benchmarks.
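The "weight probe / weight update" split described in this TLDR can be sketched as a generic bilevel (MAML-style) step. The code below is only a sketch of that general pattern: probe_loss_fn and main_loss_fn are placeholder callables, and BOA's actual objectives, regularizers, and schedules are not reproduced here.

# Generic probe-then-update bilevel step in the spirit of the TLDR above.
# Assumes every model parameter receives a gradient from the probe loss.
import torch
from torch.func import functional_call

def probe_then_update(model, frame, probe_loss_fn, main_loss_fn,
                      inner_lr=1e-5, outer_lr=1e-5):
    params = dict(model.named_parameters())

    # Weight probe: one differentiable gradient step on the probe objective,
    # keeping the graph so the outer update can see through it.
    probe_loss = probe_loss_fn(functional_call(model, params, (frame,)))
    grads = torch.autograd.grad(probe_loss, tuple(params.values()),
                                create_graph=True)
    probed = {name: p - inner_lr * g
              for (name, p), g in zip(params.items(), grads)}

    # Weight update: evaluate the main objective with the probed weights and
    # write its gradient back into the original parameters.
    main_loss = main_loss_fn(functional_call(model, probed, (frame,)))
    outer_grads = torch.autograd.grad(main_loss, tuple(params.values()))
    with torch.no_grad():
        for p, g in zip(params.values(), outer_grads):
            p -= outer_lr * g
    return main_loss.item()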
MeshLifter: Weakly Supervised Approach for 3D Human Mesh Reconstruction from a Single 2D Pose Based on Loop Structure
TLDR: This paper proposes MeshLifter, a network that estimates a 3D human mesh from an input 2D human pose, and a weakly supervised learning method based on a loop structure to train the MeshLifter.
Heuristic Weakly Supervised 3D Human Pose Estimation in Novel Contexts without Any 3D Pose Ground Truth
TLDR: A heuristic weakly supervised solution, called HW-HuP, for estimating 3D human pose in contexts where no 3D ground truth is accessible, even for fine-tuning; the approach can be extended to other input modalities for pose estimation tasks.
Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation, and Diagnosis for COVID-19
  • F. Shi, J. Wang, +6 authors D. Shen
  • Computer Science, Engineering
  • IEEE Reviews in Biomedical Engineering
  • 2021
TLDR: This review paper covers the entire pipeline of medical imaging and analysis techniques involved with COVID-19, including image acquisition, segmentation, diagnosis, and follow-up, and particularly focuses on the integration of AI with X-ray and CT, both of which are widely used in frontline hospitals.
AI-assisted CT imaging analysis for COVID-19 screening: Building and deploying a medical AI system
TLDR: Describes an AI system that automatically analyzes CT images and provides the probability of infection to rapidly detect COVID-19 pneumonia, and reports how a series of challenges in this particular situation were overcome to deploy the system within four weeks.
Artificial intelligence (AI) for medical imaging to combat coronavirus disease (COVID-19): a detailed review with direction for future research
TLDR: A detailed methodological analysis for the evaluation of AI-based methods used in the process of detecting COVID-19 from medical images is presented, suggesting that future research may focus on multi-modality models as well as on how to select the best model architecture, where AI can introduce more intelligence to medical systems.

References

Showing 1-10 of 57 references
End-to-End Recovery of Human Shape and Pose
TLDR: This work introduces an adversary trained to tell whether human body shape and pose parameters are real or not using a large database of 3D human meshes, and produces a richer and more useful mesh representation that is parameterized by shape and 3D joint angles.
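The adversarial prior described in this TLDR is commonly implemented as a small discriminator over SMPL pose and shape parameters. The sketch below is one plausible version: the dimensions follow SMPL (72 axis-angle pose values, 10 shape coefficients), but the network and the least-squares GAN objective are illustrative choices, not necessarily the cited paper's exact design.

# Illustrative sketch of an adversarial prior on SMPL parameters: a discriminator
# judges whether (pose, shape) parameters resemble samples from a mocap database.
import torch
import torch.nn as nn

class SMPLParamDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(72 + 10, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),               # real/fake score
        )

    def forward(self, pose, shape):
        return self.net(torch.cat([pose, shape], dim=-1))

def adversarial_losses(disc, real_pose, real_shape, fake_pose, fake_shape):
    # Discriminator: push mocap samples toward 1 and predicted parameters toward 0.
    d_loss = ((disc(real_pose, real_shape) - 1) ** 2).mean() \
             + (disc(fake_pose.detach(), fake_shape.detach()) ** 2).mean()
    # Generator (the mesh regressor): make its predictions look real.
    g_loss = ((disc(fake_pose, fake_shape) - 1) ** 2).mean()
    return d_loss, g_loss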
Shape-Aware Human Pose and Shape Reconstruction Using Multi-View Images
TLDR: This work proposes a scalable neural network framework to reconstruct the 3D mesh of a human body from multi-view images, in the subspace of the SMPL model, which outperforms existing methods on real-world images, especially on shape estimation.
A Multi-view RGB-D Approach for Human Pose Estimation in Operating Rooms
TLDR: The proposed method permits the joint detection and estimation of the poses without knowing a priori the number of persons present in the scene and demonstrates the benefits of using the additional depth channel for pose refinement beyond its use for the generation of improved features.
VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera
TLDR: This work presents the first real-time method to capture the full global 3D skeletal pose of a human in a stable, temporally consistent manner using a single RGB camera and shows that the approach is more broadly applicable than RGB-D solutions, i.e., it works for outdoor scenes, community videos, and low-quality commodity RGB cameras.
Exploiting Temporal Context for 3D Human Pose Estimation in the Wild
TLDR: A bundle-adjustment-based algorithm for recovering accurate 3D human pose and meshes from monocular videos; retraining a single-frame 3D pose estimator on this data improves accuracy on both real-world and mocap data, as shown by evaluation on the 3DPW and HumanEVA datasets.
Convolutional Mesh Regression for Single-Image Human Shape Reconstruction
TLDR: This paper addresses the problem of 3D human pose and shape estimation from a single image by proposing a graph-based mesh regression, which outperforms comparable baselines relying on model parameter regression and achieves state-of-the-art results among model-based pose estimation approaches.
Real-time Convolutional Networks for Depth-based Human Pose Estimation
TLDR: The hypothesis is that depth images contain less structure and are easier to process than RGB images while retaining the information required for human detection and pose inference, thus allowing the use of simpler networks for the task.
Ordinal Depth Supervision for 3D Human Pose Estimation
TLDR: This work proposes to use a weaker supervision signal provided by the ordinal depths of human joints, which achieves new state-of-the-art performance on the relevant benchmarks and validates the effectiveness of ordinal depth supervision for 3D human pose.
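A generic pairwise formulation of such an ordinal supervision term is sketched below. The exact loss used in the cited paper may differ, and the tensor layout here is an assumption made for illustration.

# Generic sketch of a pairwise ordinal-depth term of the kind the TLDR refers to.
# For an annotated joint pair (i, j): r = +1 if joint i is farther than joint j,
# -1 if it is closer, and 0 if the two joints are at roughly the same depth.
import torch
import torch.nn.functional as F

def ordinal_depth_loss(pred_z, pairs, relations):
    # pred_z: (J,) predicted joint depths; pairs: (P, 2) long tensor of joint
    # indices; relations: (P,) tensor with values in {-1, 0, +1}.
    diff = pred_z[pairs[:, 0]] - pred_z[pairs[:, 1]]
    r = relations.float()
    rank_term = F.softplus(-r * diff)   # penalize ordering violations
    equal_term = diff ** 2              # pull "same depth" pairs together
    return torch.where(relations == 0, equal_term, rank_term).mean()

# Example: three joints, two annotated pairs.
z = torch.tensor([0.3, 1.2, 0.9], requires_grad=True)
pairs = torch.tensor([[1, 0], [2, 1]])
relations = torch.tensor([1, 0])        # joint 1 farther than joint 0; 2 ~ 1
loss = ordinal_depth_loss(z, pairs, relations)
loss.backward()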
MoCap-guided Data Augmentation for 3D Pose Estimation in the Wild
TLDR: This paper introduces an image-based synthesis engine that artificially augments a dataset of real images with 2D human pose annotations using 3D Motion Capture (MoCap) data to generate a large set of photorealistic synthetic images of humans with 3D pose annotations.
Synthesizing Training Images for Boosting Human 3D Pose Estimation
TLDR: It is shown that pose space coverage and texture diversity are the key ingredients for the effectiveness of synthetic training data, and that CNNs trained with the authors' synthetic images outperform those trained with real photos on 3D pose estimation tasks.