Corpus ID: 233714250

Harnessing Geometric Constraints from Emotion Labels to improve Face Verification

@inproceedings{Ramakrishnan2021HarnessingGC,
  title={Harnessing Geometric Constraints from Emotion Labels to improve Face Verification},
  author={Anand Ramakrishnan and Minh Pham and Jacob Whitehill},
  year={2021}
}
For the task of face verification, we explore the utility of harnessing auxiliary facial emotion labels to impose explicit geometric constraints on the embedding space when training deep embedding models. We introduce several novel loss functions that, in conjunction with a standard Triplet Loss [43] or ArcFace loss [10], provide geometric constraints on the embedding space; the labels for our loss functions can be provided using either manually annotated or automatically detected auxiliary…
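The abstract describes combining a standard Triplet Loss with an auxiliary term that uses emotion labels to constrain the embedding geometry. As a rough, hedged illustration of how such a combination might be wired up, the PyTorch sketch below adds a weighted auxiliary term to a triplet loss; the specific form of the `emotion_constraint` term, the weight `alpha`, and all function names are assumptions made here for illustration and do not reproduce the paper's actual loss functions.

```python
# Hypothetical sketch (NOT the paper's actual loss functions): a standard
# triplet loss combined with an illustrative auxiliary term that uses
# emotion labels to shape the embedding geometry.
import torch
import torch.nn.functional as F


def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on L2-normalized embeddings."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()


def emotion_constraint(anchor, positive, emo_anchor, emo_positive, margin=0.1):
    """Illustrative geometric constraint (an assumption, not the paper's):
    among same-identity pairs, pull same-emotion pairs closer together than
    different-emotion pairs."""
    d = (anchor - positive).pow(2).sum(dim=1)
    same_emotion = (emo_anchor == emo_positive).float()
    # Same emotion -> shrink distance; different emotion -> allow a margin.
    return (same_emotion * d + (1.0 - same_emotion) * F.relu(margin - d)).mean()


def combined_loss(anchor, positive, negative, emo_a, emo_p, alpha=0.5):
    """Triplet loss plus the weighted auxiliary emotion term."""
    return triplet_loss(anchor, positive, negative) + alpha * emotion_constraint(
        anchor, positive, emo_a, emo_p
    )


if __name__ == "__main__":
    # Toy usage with random embeddings and emotion labels.
    a = F.normalize(torch.randn(8, 128), dim=1)
    p = F.normalize(torch.randn(8, 128), dim=1)
    n = F.normalize(torch.randn(8, 128), dim=1)
    emo_a = torch.randint(0, 7, (8,))  # e.g., 7 basic emotion categories
    emo_p = torch.randint(0, 7, (8,))
    print(combined_loss(a, p, n, emo_a, emo_p).item())
```

The emotion labels fed to such a term could come either from manual annotation or from an automatic emotion detector, as the abstract notes.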


References (showing 1-10 of 55)
Compositional Embeddings for Multi-Label One-Shot Learning
A compositional embedding framework that infers not just a single class per input image but a set of classes, in the setting of one-shot learning; the approach has applications to multi-label object recognition for both one-shot and supervised learning.
A Comprehensive Database for Benchmarking Imaging Systems
Introduces the Tufts Face Database, which includes images acquired in various modalities: photographs, thermal images, near-infrared images, recorded video, computerized facial sketches, and 3D images of each volunteer's face.
Mitigating Bias in Face Recognition Using Skewness-Aware Reinforcement Learning
Mei Wang, Weihong Deng. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
A reinforcement-learning-based race balance network (RL-RBN) is proposed that mitigates racial bias and achieves more balanced performance; two ethnicity-aware training datasets are also provided.
AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild
AffectNet is by far the largest in-the-wild database of facial expression, valence, and arousal, enabling research on automated facial expression recognition under two different emotion models; various evaluation metrics show that the deep neural network baselines outperform conventional machine learning methods and off-the-shelf facial expression recognition systems.
ArcFace: Additive Angular Margin Loss for Deep Face Recognition
Presents an extensive experimental evaluation against recent state-of-the-art face recognition methods on ten benchmarks, showing that ArcFace consistently outperforms the state of the art and can be implemented with negligible computational overhead.
Deep Learning for Face Recognition: Pride or Prejudiced?
Argues that a better understanding of state-of-the-art deep learning networks would enable researchers to address the challenge of bias in AI and develop fairer systems.
Deep face recognition using imperfect facial data
Explores face recognition from partial facial data through experiments that test machine learning performance on partial faces and on other manipulations of face images, such as rotation and zooming, used as training and recognition cues.
LaSO: Label-Set Operations Networks for Multi-Label Few-Shot Learning
Proposes a technique for synthesizing samples with multiple labels for the previously unhandled multi-label few-shot classification scenario: pairs of given examples are combined in feature space so that the synthesized feature vectors correspond to examples whose label sets are obtained through set operations on the label sets of the input pairs.
RetinaFace: Single-stage Dense Face Localisation in the Wild
A robust single-stage face detector, RetinaFace, that performs pixel-wise face localisation across face scales by taking advantage of joint extra-supervised and self-supervised multi-task learning.