In this paper, we investigate and exploit the influence of facial expressions on automatic age estimation. Unlike existing approaches, our method jointly learns age and expression by introducing a new graphical model with a latent layer between the age/expression labels and the features. This layer aims to learn the relationship between age and expression and to capture the facial changes induced by aging and expression appearance, thus achieving expression-invariant age estimation. Experiments conducted on two age-expression datasets (FACES and Lifespan) show that jointly learning age with expression improves performance over expression-independent age estimation: the age estimation error is reduced by 14.43% and 37.75% on the FACES and Lifespan datasets, respectively. Furthermore, the results obtained by our graphical model, without prior knowledge of the expressions of the test faces, are better than the best previously reported results on both datasets.