The Journal of China Universities of Posts and Telecommunications ›› 2023, Vol. 30 ›› Issue (1): 28-38. doi: 10.19682/j.cnki.1005-8885.2023.2003

• Artificial Intelligence •

Facial expression recognition based on improved ResNet

Wang Xianlun, Wang Guangyu, Cui Yuxia   

  1. College of Electromechanical Engineering, Qingdao University of Science and Technology, Qingdao 266061, China
  • Received: 2021-08-10  Revised: 2022-06-30  Accepted: 2023-02-13  Online: 2023-02-28  Published: 2023-02-28
  • Contact: Wang Guangyu, E-mail: 904887985@qq.com
  • Supported by:
    This work was supported by the National Natural Science Foundation of China (51105213).

Abstract: Facial expression recognition (FER) is a vital application of image processing technology. In this paper, an FER model based on the residual network (ResNet) is proposed. The proposed model borrows the idea of DenseNet: the outputs of the residual blocks are not simply added to their inputs but are concatenated along the channel dimension. In addition, transfer learning is used to reduce training cost and accelerate training. The accuracy and robustness of the proposed FER model were evaluated by K-fold cross-validation. Experimental results show that the proposed method achieves competitive performance on FER2013, FER plus (FERPlus), and the real-world affective faces database (RAF-DB).

Key words: facial expression recognition, convolutional neural networks, ResNet, transfer learning, K-fold cross-validation
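The abstract states that the outputs of the residual blocks are combined DenseNet-style, i.e. concatenated along the channel dimension rather than added to the shortcut. The PyTorch sketch below only illustrates this concatenation idea; the block name, channel sizes, and layer layout are assumptions for illustration and do not reproduce the paper's exact architecture.

import torch
import torch.nn as nn

class ConcatResidualBlock(nn.Module):
    # A residual-style block whose output is linked to the input along the
    # channel dimension (DenseNet-style concatenation) instead of being added.
    def __init__(self, in_channels, growth_channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, growth_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(growth_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(growth_channels, growth_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(growth_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Instead of x + self.body(x), concatenate along the channel axis,
        # so later layers see both the original and the newly computed features.
        return torch.cat([x, self.body(x)], dim=1)

# Usage with a hypothetical 48x48 feature map (FER2013 images are 48x48 grayscale):
block = ConcatResidualBlock(in_channels=64, growth_channels=32)
feat = torch.randn(1, 64, 48, 48)
out = block(feat)   # shape: (1, 96, 48, 48) = 64 input + 32 new channels

Note that, unlike addition, concatenation grows the channel count at every block, so a real network built this way must account for the increasing input width of subsequent layers (as DenseNet does with transition layers).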
