Title: Fusion deep capsule-network based facial expression recognition

Authors: Yilihamu YaErmaimaiti; Guohang Zhuang; Tusongjiang Kari

Addresses: School of Electrical Engineering, Xinjiang University, Urumqi, 830017, China (all authors)

Abstract: Facial expression recognition (FER) plays a crucial role in many automated systems, such as human-computer interaction, multi-modal emotion analysis, and medical assistance, yet recognising facial expressions accurately remains challenging. Existing FER methods are limited in exploring the correlations between features, and most ignore the significance of feature relationships. In this article, we propose a fusion deep capsule network (FDC) for facial expression recognition: a deep capsule neural network that fuses a multi-layer convolutional neural network with a capsule neural network. In the light-layer network we add an attention mechanism based on local binary patterns (LBP), and we construct a loss function for the FDC network. The design aims to: 1) account for the influence of feature relationships on facial expression recognition; 2) capture important local information in facial features through the designed attention mechanism; 3) balance the overall loss. Our method achieves recognition accuracies of 97.93%, 92.86%, and 98.57% on RaFD, KDEF, and CK+, respectively, exceeding most state-of-the-art methods while maintaining real-time performance.
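The abstract does not include the authors' implementation, so the following is only a minimal PyTorch sketch of the general CNN-plus-capsule fusion it describes. All layer sizes, the input resolution (48x48 grey-scale crops), the squash non-linearity, the routing-by-agreement scheme, and the margin loss are assumptions taken from the standard capsule-network literature (Sabour et al., 2017), not the FDC architecture itself; the names FusionCapsNet, squash, and margin_loss are hypothetical, and the LBP-based attention branch is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Standard capsule non-linearity: preserves vector orientation and
    # compresses the length into [0, 1).
    norm2 = (s * s).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)

class FusionCapsNet(nn.Module):
    # Hypothetical fusion of a multi-layer CNN front end with a capsule
    # layer, for 48x48 grey-scale face crops and 7 expression classes.
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(            # multi-layer CNN front end
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 48x48 -> 24x24
        )
        # Primary capsules: 32 types of 8-D pose vectors on an 8x8 grid.
        self.primary = nn.Conv2d(128, 32 * 8, kernel_size=9, stride=2)
        n_primary = 32 * 8 * 8                    # 2048 primary capsules
        # One 16x8 transform per (primary capsule, class capsule) pair.
        self.W = nn.Parameter(0.01 * torch.randn(n_primary, num_classes, 16, 8))

    def forward(self, x, routing_iters=3):
        b = x.size(0)
        u = self.primary(self.features(x))        # [B, 256, 8, 8]
        u = squash(u.permute(0, 2, 3, 1).reshape(b, -1, 8))  # [B, 2048, 8]
        u_hat = torch.einsum('ijdk,bik->bijd', self.W, u)    # [B, 2048, C, 16]
        logits = torch.zeros(u_hat.shape[:3], device=x.device)
        for _ in range(routing_iters):            # dynamic routing by agreement
            c = F.softmax(logits, dim=2)          # couplings over class capsules
            v = squash((c.unsqueeze(-1) * u_hat).sum(dim=1))  # [B, C, 16]
            logits = logits + (u_hat * v.unsqueeze(1)).sum(dim=-1)
        return v.norm(dim=-1)                     # capsule length = class score

def margin_loss(scores, target, m_pos=0.9, m_neg=0.1, lam=0.5):
    # Standard capsule margin loss; the paper's balanced FDC loss is not
    # specified in the abstract.
    t = F.one_hot(target, scores.size(1)).float()
    pos = t * F.relu(m_pos - scores).pow(2)
    neg = lam * (1.0 - t) * F.relu(scores - m_neg).pow(2)
    return (pos + neg).sum(dim=1).mean()

For example, FusionCapsNet()(torch.randn(2, 1, 48, 48)) returns a [2, 7] tensor of class scores, one capsule length per expression class.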

Keywords: facial expression; capsule network; attention mechanism; loss function; facial expression recognition.

DOI: 10.1504/IJICT.2024.137216

International Journal of Information and Communication Technology, 2024 Vol.24 No.2, pp.165 - 180

Received: 01 Dec 2021
Accepted: 24 Dec 2021

Published online: 05 Mar 2024
