3D-Aided Deep Pose-Invariant Face Recognition

@inproceedings{Zhao20183DAidedDP,
  title={3D-Aided Deep Pose-Invariant Face Recognition},
  author={Jian Zhao and Lin Xiong and Yu Cheng and Yi Cheng and Jianshu Li and Li Zhou and Yan Xu and Jayashree Karlekar and Sugiri Pranata and Shengmei Shen and Junliang Xing and Shuicheng Yan and Jiashi Feng},
  booktitle={IJCAI},
  year={2018}
}
Learning from synthetic faces, though appealing for its high data efficiency, may not bring satisfactory performance due to the distribution discrepancy between synthetic and real face images. [...] Specifically, 3D-PIM incorporates a simulator with the aid of a 3D Morphable Model (3D MM) to obtain shape and appearance priors, accelerating face normalization learning and requiring less training data.
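
The key-method sentence above is dense, so a rough sketch may help. The snippet below is a minimal, assumption-heavy illustration (not the authors' code) of what a 3D Morphable Model-aided simulator could look like: fitted identity coefficients and a linear shape basis (random stand-ins here, rather than a real model such as the Basel Face Model) rebuild the 3D face, which is then projected at frontal pose to give a coarse shape/appearance prior for the normalization network.

import numpy as np

def rebuild_shape(mean_shape, shape_basis, coeffs):
    """Linear 3DMM shape: s = s_mean + B @ alpha, reshaped to (num_vertices, 3)."""
    return (mean_shape + shape_basis @ coeffs).reshape(-1, 3)

def project_frontal(vertices, scale=1.0, tx=0.0, ty=0.0):
    """Weak-perspective projection at frontal pose (identity rotation)."""
    return scale * vertices[:, :2] + np.array([tx, ty])

rng = np.random.default_rng(0)
num_vertices, num_coeffs = 500, 40
mean_shape = rng.normal(size=3 * num_vertices)              # stand-in for a real 3DMM mean
shape_basis = 0.01 * rng.normal(size=(3 * num_vertices, num_coeffs))
alpha = rng.normal(size=num_coeffs)                          # identity coefficients from an external fitter

vertices = rebuild_shape(mean_shape, shape_basis, alpha)
frontal_prior = project_frontal(vertices)                    # 2D vertex layout of the frontal prior
# In the full pipeline, per-vertex colours sampled from the input image would be
# rasterized at these positions to render the coarse frontal appearance prior.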

Citations

Learning a High Fidelity Pose Invariant Model for High-resolution Face Frontalization

TLDR
Exhaustive experiments demonstrate that the proposed High Fidelity Pose Invariant Model (HF-PIM) not only boosts the performance of pose-invariant face recognition but also dramatically improves high-resolution frontalization appearances.

Recognizing Profile Faces by Imagining Frontal View

TLDR
Qualitative and quantitative experiments on both controlled and in-the-wild benchmark datasets demonstrate the superiority of the proposed Pose-Invariant Model (PIM) for face recognition in the wild, with three distinct novelties.

Towards High Fidelity Face Frontalization in the Wild

TLDR
Quantitative and qualitative evaluations show that the proposed high-fidelity pose-invariant model (HF-PIM) not only boosts the performance of pose-invariant face recognition but also improves the visual quality of high-resolution frontalization appearances.

Multi-View Face Recognition Via Well-Advised Pose Normalization Network

TLDR
This work designs an end-to-end facial pose normalization network with adaptive weights on different objectives to exploit the potential of various profile-front relationships, and encourages intra-class compactness and inter-class separability of facial features by introducing quality-aware feature fusion.

Heterogeneous Face Frontalization via Domain Agnostic Learning

TLDR
A domain agnostic learning-based generative adversarial network (DAL-GAN) which can synthesize frontal views in the visible domain from thermal faces with pose variations, and can generate better-quality frontal views than other baseline methods.

Look More Into Occlusion: Realistic Face Frontalization and Recognition With BoostGAN

TLDR
A boosting GAN (BoostGAN) is proposed for occluded but profile face frontalization, de-occlusion, and recognition. It has two aspects: first, with the assumption that face occlusion is incomplete and partial, multiple images with patch occlusions are fed into the model for knowledge boosting; second, a new aggregation structure is carefully designed, integrating a deep encoder–decoder network for coarse face synthesis with a boosting network for fine face generation.
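
As a rough illustration of the pipeline described above (an interpretation with illustrative layer sizes, not the published BoostGAN code): several patch-occluded copies of a profile face pass through a shared encoder-decoder for coarse synthesis, their outputs are aggregated (a simple mean here, which is an assumption), and a second boosting network refines the result.

import torch
import torch.nn as nn

class CoarseEncoderDecoder(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, ch, 4, 2, 1), nn.ReLU(),
                                 nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose2d(ch, 3, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        return self.dec(self.enc(x))

class BoostingRefiner(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 32, 3, 1, 1), nn.ReLU(),
                                 nn.Conv2d(32, 3, 3, 1, 1), nn.Tanh())

    def forward(self, x):
        return self.net(x)

coarse = CoarseEncoderDecoder()
refine = BoostingRefiner()
occluded_views = [torch.randn(1, 3, 64, 64) for _ in range(4)]   # patch-occluded copies of one face
coarse_faces = torch.stack([coarse(v) for v in occluded_views])  # coarse syntheses, shape (4, 1, 3, 64, 64)
aggregated = coarse_faces.mean(dim=0)    # simple mean aggregation (an assumption)
fine_face = refine(aggregated)           # fine, de-occluded frontal face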

Detailed Feature Guided Generative Adversarial Pose Reconstruction Network

TLDR
Experimental results show that the proposed Detailed Feature Guided Generative Adversarial Pose Reconstruction Network generates photorealistic front faces and outperforms state-of-the-art methods on M2FPA and CAS-PEAL.

High-Fidelity Face Manipulation With Extreme Poses and Expressions

TLDR
A novel framework is proposed that simplifies face manipulation into two correlated stages, a boundary prediction stage and a disentangled face synthesis stage, which dramatically improves the synthesis quality.

Image-to-Video Generation via 3D Facial Dynamics

TLDR
This paper proposes to “imagine” a face video from a single face image according to the reconstructed 3D face dynamics, aiming to generate a realistic and identity-preserving face video, with precisely predicted pose and facial expression.

Complete Face Recovery GAN: Unsupervised Joint Face Rotation and De-Occlusion from a Single-View Image

TLDR
This work presents a self-supervision strategy called Swap-R&R to overcome the lack of ground-truth in a fully unsupervised manner for joint face rotation and de-occlusion, and shows that this approach can boost the performance of facial recognition and facial expression recognition.

References

Showing 1-10 of 39 references

Dual-Agent GANs for Photorealistic and Identity Preserving Profile Face Synthesis

TLDR
Experimental results show that the proposed Dual-Agent Generative Adversarial Network (DA-GAN) model not only presents compelling perceptual results but also significantly outperforms state-of-the-art methods on the large-scale and challenging NIST IJB-A unconstrained face recognition benchmark.

Disentangled Representation Learning GAN for Pose-Invariant Face Recognition

TLDR
Quantitative and qualitative evaluation on both controlled and in-the-wild databases demonstrate the superiority of DR-GAN over the state of the art.

Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis

TLDR
A Two-Pathway Generative Adversarial Network (TP-GAN) is proposed for photorealistic frontal view synthesis by simultaneously perceiving global structures and local details; it outperforms state-of-the-art results on large-pose face recognition.
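
As a rough illustration of the two-pathway idea (an assumption-laden sketch, not the published TP-GAN architecture), the generator below processes the whole face in one pathway and landmark-centred local patches in another, then fuses the two feature maps before decoding the frontal view.

import torch
import torch.nn as nn

class TwoPathwayGenerator(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.global_path = nn.Sequential(nn.Conv2d(3, ch, 3, 1, 1), nn.ReLU())
        self.local_path = nn.Sequential(nn.Conv2d(3, ch, 3, 1, 1), nn.ReLU())
        self.decoder = nn.Sequential(nn.Conv2d(2 * ch, 3, 3, 1, 1), nn.Tanh())

    def forward(self, face, patches):
        g = self.global_path(face)                       # global structure features
        # Paste each local patch's features onto a zero canvas at its location.
        l = torch.zeros_like(g)
        for patch, (y, x) in patches:
            feat = self.local_path(patch)
            l[..., y:y + feat.shape[-2], x:x + feat.shape[-1]] = feat
        return self.decoder(torch.cat([g, l], dim=1))    # fuse the two pathways and decode

gen = TwoPathwayGenerator()
face = torch.randn(1, 3, 64, 64)
patches = [(torch.randn(1, 3, 16, 16), (10, 10)),        # e.g. left-eye patch at (row, col)
           (torch.randn(1, 3, 16, 16), (10, 38))]        # e.g. right-eye patch
frontal = gen(face, patches)                              # synthesized frontal view, (1, 3, 64, 64)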

Pose-Aware Face Recognition in the Wild

TLDR
A method to push the frontiers of unconstrained face recognition in the wild by using multiple pose-specific models, called Pose-Aware Models (PAMs), together with rendered face images; these models achieve remarkably better performance than commercial products and, surprisingly, also outperform methods that are specifically fine-tuned on the target dataset.

L2-constrained Softmax Loss for Discriminative Face Verification

TLDR
This paper adds an L2-constraint to the feature descriptors which restricts them to lie on a hypersphere of a fixed radius and shows that integrating this simple step in the training pipeline significantly boosts the performance of face verification.
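
The constraint described above is simple enough to show concretely. The sketch below is a minimal illustration (assuming a PyTorch pipeline, with an illustrative radius alpha): the feature descriptor is L2-normalized and rescaled to a fixed radius before the usual softmax classification loss, so all features lie on a hypersphere.

import torch
import torch.nn as nn
import torch.nn.functional as F

class L2ConstrainedSoftmax(nn.Module):
    def __init__(self, feat_dim=512, num_classes=1000, alpha=16.0):
        super().__init__()
        self.alpha = alpha                        # fixed radius of the hypersphere
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, features, labels):
        x = self.alpha * F.normalize(features, p=2, dim=1)   # enforce ||x||_2 = alpha
        logits = self.classifier(x)
        return F.cross_entropy(logits, labels)

head = L2ConstrainedSoftmax()
feats = torch.randn(8, 512)                       # descriptors from any face CNN
labels = torch.randint(0, 1000, (8,))
loss = head(feats, labels)
loss.backward()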

Stacked Progressive Auto-Encoders (SPAE) for Face Recognition Across Poses

TLDR
A method is proposed to learn pose-robust features by modeling the complex non-linear transform from non-frontal face images to frontal ones through a deep network in a progressive way, termed stacked progressive auto-encoders (SPAE).
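
A minimal sketch of the progressive idea, under the assumption that each shallow auto-encoder narrows the pose range a little (e.g. large yaw to moderate yaw to frontal) and the stack is applied in sequence; the layer sizes and three-step schedule are illustrative, not those of the original SPAE.

import torch
import torch.nn as nn

class ShallowAE(nn.Module):
    def __init__(self, dim=32 * 32, hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One auto-encoder per pose step; during training, layer k regresses faces at the
# next, narrower pose range before the whole stack is fine-tuned end to end.
stack = nn.ModuleList([ShallowAE() for _ in range(3)])

x = torch.rand(8, 32 * 32)         # flattened face images in [0, 1], large yaw
hidden = None
for ae in stack:                    # progressive frontalization at inference time
    hidden = ae.encoder(x)
    x = ae.decoder(hidden)
pose_robust_features = hidden       # top hidden activations serve as pose-robust features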

Triplet probabilistic embedding for face verification and clustering

TLDR
This paper proposes an approach that couples a deep CNN-based approach with a low-dimensional discriminative embedding step, learned using triplet probability constraints to address the unconstrained face verification problem.
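
As an illustration of a triplet probability constraint (my reading of the summary above, not the authors' formulation), the sketch below learns a low-dimensional linear projection by gradient ascent on the log-probability that an anchor is more similar to its positive than to its negative, with that probability modelled as a sigmoid of the similarity difference.

import numpy as np

def triplet_prob_step(W, a, p, n, lr=0.1):
    """One gradient-ascent step on log sigmoid((Wa).(Wp) - (Wa).(Wn))."""
    wa, wp, wn = W @ a, W @ p, W @ n
    margin = wa @ wp - wa @ wn
    prob = 1.0 / (1.0 + np.exp(-margin))          # P(anchor closer to positive)
    g = 1.0 - prob                                 # d log(prob) / d margin
    grad_W = g * (np.outer(wp - wn, a) + np.outer(wa, p - n))   # d margin / dW, chained
    return W + lr * grad_W, prob

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(64, 256))          # 256-d CNN features -> 64-d embedding
anchor, positive = rng.normal(size=256), rng.normal(size=256)
negative = rng.normal(size=256)
for _ in range(10):
    W, prob = triplet_prob_step(W, anchor, positive, negative)
print(f"P(anchor closer to positive) = {prob:.3f}")   # rises as the constraint is satisfied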

A 3D Face Model for Pose and Illumination Invariant Face Recognition

TLDR
This paper publishes a generative 3D shape and texture model, the Basel Face Model (BFM), demonstrates its application to several face recognition tasks, and publishes a set of detailed recognition and reconstruction results on standard databases to allow complete algorithm comparisons.

Learning from Simulated and Unsupervised Images through Adversarial Training

TLDR
This work develops a method for Simulated+Unsupervised (S+U) learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors, and makes several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training.
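
The following sketch (illustrative architectures and loss weight, not the original implementation) shows the core of the setup described above: a refiner takes a synthetic image as input rather than a random vector, an adversarial term pushes its output toward the real-image distribution, and an L1 self-regularization term keeps the output close to the synthetic input so its annotations remain valid.

import torch
import torch.nn as nn

refiner = nn.Sequential(nn.Conv2d(1, 32, 3, 1, 1), nn.ReLU(),
                        nn.Conv2d(32, 1, 3, 1, 1), nn.Tanh())
discriminator = nn.Sequential(nn.Conv2d(1, 32, 3, 2, 1), nn.ReLU(),
                              nn.Flatten(), nn.LazyLinear(1))

synthetic = torch.rand(4, 1, 35, 55)              # e.g. rendered grayscale eye images
refined = refiner(synthetic)
adv_loss = nn.functional.binary_cross_entropy_with_logits(
    discriminator(refined), torch.ones(4, 1))     # push refined images to look "real"
self_reg = nn.functional.l1_loss(refined, synthetic)   # stay close to input, preserving annotations
refiner_loss = adv_loss + 0.5 * self_reg           # 0.5 is an illustrative weight
refiner_loss.backward()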

Know You at One Glance: A Compact Vector Representation for Low-Shot Learning

TLDR
Comprehensive evaluations on the MNIST, Labeled Faces in the Wild, and the challenging MS-Celeb-1M Low-Shot Learning Face Recognition benchmark datasets clearly demonstrate the superiority of the Enforced Softmax optimization approach over state-of-the-art methods.