Corpus ID: 220496317

Loss Function Search for Face Recognition

@article{Wang2020LossFS,
  title={Loss Function Search for Face Recognition},
  author={Xiaobo Wang and Shuo Wang and Cheng Chi and Shifeng Zhang and Tao Mei},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.06542}
}
In face recognition, designing margin-based (e.g., angular, additive, additive angular margin) softmax loss functions plays an important role in learning discriminative features. However, these hand-crafted heuristic methods are sub-optimal because they require much effort to explore the large design space. Recently, an AutoML-based loss function search method, AM-LFS, has been proposed, which leverages reinforcement learning to search for loss functions during the training process. But its search… 
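The three hand-crafted margin families named in the abstract differ only in how they perturb the target-class cosine logit before the softmax. A minimal NumPy sketch of that shared structure (function names, the scale `s`, and the default margin values are illustrative choices, not taken from the paper):

```python
import numpy as np

def margin_logit(cos_theta, m, variant="arc"):
    """Apply a margin to the target-class cosine logit.

    variant: "mul" -> angular margin          cos(m * theta)   (SphereFace-style)
             "add" -> additive margin         cos(theta) - m   (CosFace-style)
             "arc" -> additive angular margin cos(theta + m)   (ArcFace-style)
    """
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if variant == "mul":
        return np.cos(m * theta)
    if variant == "add":
        return cos_theta - m
    if variant == "arc":
        return np.cos(theta + m)
    raise ValueError(f"unknown variant: {variant}")

def margin_softmax_loss(cos_logits, label, s=30.0, m=0.5, variant="arc"):
    """Cross-entropy over scaled cosine logits, with a margin on the target class."""
    logits = cos_logits.astype(float).copy()
    logits[label] = margin_logit(logits[label], m, variant)
    logits *= s
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[label])
```

With `m = 0` every variant reduces to the plain (normalized) softmax loss; a positive margin shrinks the target-class logit and therefore enforces a stricter decision boundary during training.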
ArcFace: Additive Angular Margin Loss for Deep Face Recognition
TLDR
This paper presents arguably the most extensive experimental evaluation against all recent state-of-the-art face recognition methods on ten face recognition benchmarks, and shows that ArcFace consistently outperforms the state of the art and can be easily implemented with negligible computational overhead.
AutoLoss-GMS: Searching Generalized Margin-based Softmax Loss Function for Person Re-identification
TLDR
A novel method, AutoLoss-GMS, which automatically searches for better loss functions in the space of generalized margin-based softmax losses for person re-identification; results demonstrate that the searched loss functions achieve state-of-the-art performance and transfer across different models and datasets in person re-identification.
An Efficient Training Approach for Very Large Scale Face Recognition
TLDR
This work proposes a novel training approach, termed Faster Face Classification (F²C), to reduce training time and cost without sacrificing performance, and shows that it is faster than state-of-the-art FC-based methods while remaining competitive in recognition accuracy and hardware cost.
Prototype Memory for Large-Scale Face Representation Learning
TLDR
A novel face representation learning model called Prototype Memory, which alleviates "prototype obsolescence" and allows training on a dataset of any size and can be used with various loss functions, hard example mining algorithms and encoder architectures.
Teacher Guided Neural Architecture Search for Face Recognition
TLDR
This paper develops a novel teacher-guided neural architecture search method to directly search the student network with flexible channel and layer sizes; the search space is defined as the number of channels/layers, which is sampled from a probability distribution learned by minimizing the search objective of the student network.
SphereFace Revived: Unifying Hyperspherical Face Recognition
TLDR
This paper introduces a unified framework to understand large angular margin in hyperspherical face recognition, and extends the study of SphereFace and proposes an improved variant with substantially better training stability -- SphereFace-R.
Loss Function Discovery for Object Detection via Convergence-Simulation Driven Search
TLDR
This work makes the first attempt to discover new loss functions for the challenging object detection from primitive operation levels and finds the searched losses are insightful.
SeqFace: Learning discriminative features by using face sequences
TLDR
A framework, called SeqFace, for learning discriminative face features is proposed, which achieves very competitive performance on several face recognition benchmarks, including LFW, YTF, CFP, AgeDB, and MegaFace.
Auto Seg-Loss: Searching Metric Surrogates for Semantic Segmentation
TLDR
This paper proposes to automate the design of metric-specific loss functions by searching for differentiable surrogate losses for each metric: the non-differentiable operations in the metrics are substituted with parameterized functions, and a parameter search is conducted to optimize the shape of the loss surfaces.
...

References

Showing 1-10 of 48 references
Support Vector Guided Softmax Loss for Face Recognition
TLDR
A novel loss function, namely the support vector guided softmax loss (SV-Softmax), which adaptively emphasizes the mis-classified points (support vectors) to guide discriminative feature learning, resulting in more discriminative features.
KappaFace: Adaptive Additive Angular Margin Loss for Deep Face Recognition
TLDR
This work introduces a novel adaptive strategy, called KappaFace, to modulate the relative importance based on class difficultness and imbalance, which can intensify the margin’s magnitude for hard learning or low concentration classes while relaxing it for counter classes.
AdaCos: Adaptively Scaling Cosine Logits for Effectively Learning Deep Face Representations
TLDR
A novel cosine-based softmax loss is proposed, AdaCos, which is hyperparameter-free and leverages an adaptive scale parameter to automatically strengthen the training supervision during the training process; it outperforms state-of-the-art softmax losses on all three datasets.
CosFace: Large Margin Cosine Loss for Deep Face Recognition
  • H. Wang, Yitong Wang, Wei Liu
  • Computer Science
    2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
  • 2018
TLDR
This paper reformulates the softmax loss as a cosine loss by L2-normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space; it achieves minimum intra-class variance and maximum inter-class variance by virtue of normalization and cosine decision margin maximization.
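The reformulation described above corresponds to the large margin cosine loss (LMC): with L2-normalized features and weights, a scale s, and a cosine margin m subtracted from the target-class logit, the loss over N samples is

```latex
L_{lmc} = \frac{1}{N}\sum_{i=1}^{N} -\log
  \frac{e^{\,s\,(\cos\theta_{y_i,i}-m)}}
       {e^{\,s\,(\cos\theta_{y_i,i}-m)} + \sum_{j \neq y_i} e^{\,s\cos\theta_{j,i}}}
```

where theta_{j,i} is the angle between the i-th feature and the j-th class weight vector.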
ArcFace: Additive Angular Margin Loss for Deep Face Recognition
TLDR
This paper presents arguably the most extensive experimental evaluation against all recent state-of-the-art face recognition methods on ten face recognition benchmarks, and shows that ArcFace consistently outperforms the state of the art and can be easily implemented with negligible computational overhead.
SphereFace: Deep Hypersphere Embedding for Face Recognition
TLDR
This paper proposes the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features in deep face recognition (FR) problem under open-set protocol.
Mis-classified Vector Guided Softmax Loss for Face Recognition
TLDR
This paper develops a novel loss function, which adaptively emphasizes the mis-classified feature vectors to guide the discriminative feature learning and is the first attempt to inherit the advantages of feature margin and feature mining into a unified loss function.
L2-constrained Softmax Loss for Discriminative Face Verification
TLDR
This paper adds an L2-constraint to the feature descriptors which restricts them to lie on a hypersphere of a fixed radius and shows that integrating this simple step in the training pipeline significantly boosts the performance of face verification.
Ring Loss: Convex Feature Normalization for Face Recognition
TLDR
This work motivates and presents Ring loss, a simple and elegant feature normalization approach for deep networks designed to augment standard loss functions such as Softmax, and applies soft normalization, where it gradually learns to constrain the norm to the scaled unit circle while preserving convexity leading to more robust features.
A Discriminative Feature Learning Approach for Deep Face Recognition
TLDR
This paper proposes a new supervision signal, called center loss, for face recognition task, which simultaneously learns a center for deep features of each class and penalizes the distances between the deep features and their corresponding class centers.
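The center loss described above penalizes the distance of each deep feature x_i to its class center c_{y_i} and is trained jointly with the softmax loss; a sketch of the objective for a mini-batch of size m, with lambda balancing the two terms:

```latex
\mathcal{L}_C = \frac{1}{2}\sum_{i=1}^{m}\big\lVert x_i - c_{y_i}\big\rVert_2^2,
\qquad
\mathcal{L} = \mathcal{L}_S + \lambda\,\mathcal{L}_C
```

The class centers are updated per mini-batch rather than recomputed over the whole training set, which keeps the penalty tractable.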
...