Title: Research on facial expression recognition based on multimodal data fusion and neural network
Authors: Yi Han; Xubin Wang; Zhengyu Lu
Addresses: School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei, China; Anyang Institute of Technology, Department of Computer Science and Information Engineering, Anyang, Henan Province, China; School of Computer Science and Information Engineering, Shanghai Institute of Technology, FengXian District, Shanghai, China; Anyang Institute of Technology, Department of Computer Science and Information Engineering, Anyang, Henan Province, China
Abstract: Facial expression recognition is a challenging task when neural networks are applied to pattern recognition. Most current recognition research is based on single-source facial data, which generally suffers from low accuracy and low robustness. In this paper, a neural network algorithm for facial expression recognition based on multimodal data fusion is proposed. The algorithm takes the facial image, the histogram of oriented gradients (HOG) of the image and the facial landmarks as input, and establishes three sub-neural networks to extract data features: a Convolutional Neural Network (CNN) for the facial image, a neural network (LNN) for the facial landmarks and a neural network (HNN) for the HOG descriptor. A multimodal data feature fusion mechanism then combines these features to improve the accuracy of facial expression recognition. Experimental results show that the algorithm achieves considerable improvements in accuracy, robustness and detection speed.
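The abstract describes a three-branch fusion architecture (CNN for the image, LNN for landmarks, HNN for HOG features) whose outputs are fused before classification. The following is a minimal sketch of such a design, not the authors' implementation: the layer sizes, 48x48 input, HOG dimension, 68-point landmarks and 7 expression classes are all illustrative assumptions.

```python
# Minimal sketch of a three-branch multimodal fusion network for facial
# expression recognition. All shapes and layer sizes are assumptions, not
# the architecture reported in the paper.
import torch
import torch.nn as nn

class MultimodalFER(nn.Module):
    def __init__(self, hog_dim=900, landmark_dim=68 * 2, num_classes=7):
        super().__init__()
        # CNN branch: features from the (assumed) 48x48 grayscale face image
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
        )
        # HNN branch: features from the HOG descriptor vector
        self.hnn = nn.Sequential(nn.Linear(hog_dim, 128), nn.ReLU())
        # LNN branch: features from the flattened facial landmark coordinates
        self.lnn = nn.Sequential(nn.Linear(landmark_dim, 64), nn.ReLU())
        # Fusion by concatenation, followed by a linear classifier
        self.classifier = nn.Linear(128 + 128 + 64, num_classes)

    def forward(self, image, hog, landmarks):
        fused = torch.cat(
            [self.cnn(image), self.hnn(hog), self.lnn(landmarks)], dim=1
        )
        return self.classifier(fused)

# Example forward pass with dummy batch data
model = MultimodalFER()
logits = model(
    torch.randn(8, 1, 48, 48),   # face images
    torch.randn(8, 900),         # HOG descriptors
    torch.randn(8, 68 * 2),      # 68 (x, y) landmark points
)
print(logits.shape)  # torch.Size([8, 7])
```

Concatenation is only one possible fusion mechanism; weighted or attention-based fusion of the branch features would fit the same three-branch structure.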
Keywords: multimodal data; deep learning; neural network; facial expression recognition; data fusion.
DOI: 10.1504/IJWMC.2024.139663
International Journal of Wireless and Mobile Computing, 2024 Vol.27 No.1, pp.47 - 55
Accepted: 09 Jun 2023
Published online: 05 Jul 2024