Character emotion recognition algorithm in small sample video based on multimodal feature fusion
Online publication date: Mon, 06-Jan-2025
by Jian Xie; Dan Chu
International Journal of Biometrics (IJBM), Vol. 17, No. 1/2, 2025
Abstract: To overcome the low accuracy and poor precision of traditional character emotion recognition algorithms, this paper proposes a small-sample video character emotion recognition algorithm based on multimodal feature fusion. The algorithm extracts facial scene features and expression features from small-sample videos, extracts text features with GloVe, and obtains character speech features through filter banks. A bidirectional LSTM model then fuses the multimodal features, and emotions are classified with fully connected layers and a softmax function. Experimental results show that the method achieves an emotion recognition accuracy of up to 98.6%, with a recognition rate of 64% for happy emotions and 62% for neutral emotions.
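The fusion-and-classification stage described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the feature dimensions, hidden size, and random parameters are all hypothetical stand-ins, and each per-frame input is assumed to be the concatenation of the four modality feature vectors (scene, expression, text, speech). A forward and a backward LSTM pass are run over the sequence, their final hidden states are concatenated, and a fully connected layer with softmax produces class probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gate pre-activations stacked as [i, f, o, g]."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = 1.0 / (1.0 + np.exp(-z[:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))     # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))   # output gate
    g = np.tanh(z[3*H:])                    # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def run_lstm(seq, W, U, b, H):
    """Run an LSTM over a list of frame features; return final hidden state."""
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
    return h

def bilstm_fuse_classify(seq, params):
    """Fuse a multimodal feature sequence with a BiLSTM and classify via softmax."""
    Wf, Uf, bf, Wb, Ub, bb, Wc, bc = params
    H = Uf.shape[1]
    h_fwd = run_lstm(seq, Wf, Uf, bf, H)        # forward direction
    h_bwd = run_lstm(seq[::-1], Wb, Ub, bb, H)  # backward direction
    h = np.concatenate([h_fwd, h_bwd])          # fused representation
    logits = Wc @ h + bc                        # fully connected layer
    e = np.exp(logits - logits.max())           # numerically stable softmax
    return e / e.sum()

# Hypothetical dimensions for the four modalities (not from the paper)
D_scene, D_expr, D_text, D_speech = 8, 8, 6, 10
D = D_scene + D_expr + D_text + D_speech
H, C, T = 16, 7, 5  # hidden size, emotion classes, frames

def init(shape):
    return rng.normal(0.0, 0.1, shape)

params = (init((4*H, D)), init((4*H, H)), init(4*H),   # forward LSTM
          init((4*H, D)), init((4*H, H)), init(4*H),   # backward LSTM
          init((C, 2*H)), init(C))                     # classifier head

# One fused feature vector per frame: concatenated modality features
seq = [np.concatenate([init(D_scene), init(D_expr),
                       init(D_text), init(D_speech)]) for _ in range(T)]
probs = bilstm_fuse_classify(seq, params)
```

The bidirectional pass lets the final representation reflect both earlier and later frames of the clip; in a trained system the random `init` parameters above would of course be learned.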