Age estimation performance has been greatly improved by using convolutional neural networks. However, existing methods have an inconsistency between their training objectives and the evaluation metric, so they may be suboptimal. In addition, these methods often adopt image classification or face recognition models with a large number of parameters, which incur expensive computation and storage costs. To alleviate these issues, we design a lightweight network architecture and propose a unified framework that jointly learns age distributions and regresses age. The effectiveness of our approach has been demonstrated on both apparent and real age estimation tasks. Our method achieves new state-of-the-art results using a single model with 36$\times$ fewer parameters and a 2.6$\times$ reduction in inference time. Moreover, our method achieves results comparable to the state of the art even when the model parameters are further reduced to 0.9M~(3.8MB disk storage). We also show that ranking methods implicitly learn label distributions.
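The joint objective described above can be sketched as follows. This is a minimal NumPy illustration, not the released code: the age range, the Gaussian width `sigma` of the ground-truth label distribution, and the loss weight `lam` are illustrative assumptions. The idea is a label-distribution term (KL divergence between a discretized Gaussian target and the predicted distribution) combined with an expectation regression term (L1 loss between the expected age under the predicted distribution and the ground-truth age).

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array of logits."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def joint_age_loss(logits, true_age, ages=np.arange(0, 101),
                   sigma=2.0, lam=1.0):
    """Sketch of a joint label-distribution + expectation-regression loss.
    `ages`, `sigma`, and `lam` are illustrative choices, not the paper's
    exact hyper-parameters. Returns (loss, expected_age)."""
    # Ground-truth label distribution: discretized Gaussian over age bins.
    q = np.exp(-(ages - true_age) ** 2 / (2.0 * sigma ** 2))
    q /= q.sum()
    # Predicted label distribution from the network's logits.
    p = softmax(logits)
    # Label-distribution term: KL(q || p), with epsilons for stability.
    l_ld = np.sum(q * (np.log(q + 1e-12) - np.log(p + 1e-12)))
    # Expectation regression term: L1 between expected and true age.
    expected_age = np.sum(p * ages)
    l_er = np.abs(expected_age - true_age)
    return l_ld + lam * l_er, expected_age
```

When the predicted distribution matches the target (as below, where the logits are proportional to the target Gaussian), both terms vanish and the expected age recovers the ground truth.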


The source code is coming soon.
The pre-trained models and aligned & cropped face images are publicly available (May 1, 2018).

  • Align&Cropped

  • Train&Test list

  • ThinAgeNet
    14.9 MB

  • TinyAgeNet
    3.8 MB

  • Main Results

    Low Error:

    High Efficiency:

    Visual Assessment:

    Real-time Video Demo

    This demo shows real-time apparent age estimation on videos. It runs on a PC (4$\times$ i7-4510U CPU @ 2.00GHz), and its predictions come from our DLDL-v2 (TinyAgeNet) model trained on an apparent age dataset (ChaLearn16: 5,613 training images).

    Image Demo

    The proposed ThinAgeNet (trained on ChaLearn16) on the Trump family photo.

    The proposed ThinAgeNet (trained on ChaLearn16) on Oscars 2017.

    The proposed ThinAgeNet (trained on ChaLearn16) on elementary school students.


    @inproceedings{gao2018age,
    	title={Age Estimation Using Expectation of Label Distribution Learning},
    	author={Gao, Bin-Bin and Zhou, Hong-Yu and Wu, Jianxin and Geng, Xin},
    	booktitle={Proc. The 27th International Joint Conference on Artificial Intelligence (IJCAI 2018)},
    	year={2018}
    }

    Please contact Prof. Jianxin Wu (email) or Bin-Bin Gao (email) with questions about the paper.