Expression-Invariant Age Estimation


In this paper, we investigate and exploit the influence of facial expressions on automatic age estimation. Unlike existing approaches, our method jointly learns age and expression by introducing a new graphical model with a latent layer between the age/expression labels and the features. This layer learns the relationship between age and expression and captures the facial changes induced by aging and expression appearance, thereby yielding expression-invariant age estimation. Experiments conducted on two age-expression datasets (FACES [4] and Lifespan [10]) show that jointly learning age with expression improves performance over expression-independent age estimation: the age estimation error is reduced by 14.43% and 37.75% on the FACES and Lifespan datasets, respectively. Furthermore, the results obtained by our graphical model, without prior knowledge of the expressions of the tested faces, are better than the best reported ones for both datasets.
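To make the idea of a latent layer coupling age and expression labels to features concrete, the following is a minimal, illustrative sketch (not the authors' actual model): a linear scoring function over an age label, an expression label, and a small latent state, where the age estimate is obtained by maximizing jointly over the expression and latent variables. All sizes, weights, and names here are hypothetical toy values.

```python
import numpy as np

# Toy sizes and randomly initialized potentials -- purely illustrative,
# not the parameters or structure used in the paper.
rng = np.random.default_rng(0)
N_AGE, N_EXPR, N_LATENT, DIM = 5, 3, 4, 8

W_xz = rng.standard_normal((N_LATENT, DIM))    # feature -> latent potential
W_az = rng.standard_normal((N_AGE, N_LATENT))  # age <-> latent potential
W_ez = rng.standard_normal((N_EXPR, N_LATENT)) # expression <-> latent potential

def joint_score(x):
    """Score every (age, expression, latent) triple for a feature vector x.

    score[a, e, z] = W_az[a, z] + W_ez[e, z] + (W_xz @ x)[z]
    """
    zx = W_xz @ x  # (N_LATENT,)
    return W_az[:, None, :] + W_ez[None, :, :] + zx[None, None, :]

def predict_age(x):
    """Expression-invariant age estimate: maximize jointly over e and z,
    so no prior knowledge of the face's expression is needed."""
    s = joint_score(x)                    # shape (N_AGE, N_EXPR, N_LATENT)
    a, _, _ = np.unravel_index(np.argmax(s), s.shape)
    return int(a)

x = rng.standard_normal(DIM)
age = predict_age(x)
```

The point of the sketch is only the inference pattern: because the maximization runs over expression and latent states jointly with age, the age decision does not require the expression to be known in advance.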


Cite this paper

@inproceedings{Alnajar2014ExpressionInvariantAE,
  title     = {Expression-Invariant Age Estimation},
  author    = {Fares Alnajar and Zhongyu Lou and Jos{\'e} Manuel {\'A}lvarez and Theo Gevers},
  booktitle = {BMVC},
  year      = {2014}
}