Title: Dynamic hand gesture recognition of sign language using geometric features learning

Authors: Saba Joudaki; Amjad Rehman

Addresses: Department of Computer Engineering, Islamic Azad University, Khorramabad Branch, Khorramabad, Iran; Artificial Intelligence and Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia

Abstract: In the sign language alphabet, several hand signs are in use. Automatic recognition of dynamic hand gestures could facilitate several applications, such as helping people with a speech impairment communicate with healthy people. This research presents dynamic hand gesture recognition of the sign language alphabet based on a neural network model with enhanced geometric feature fusion. A 3D depth-based sensor camera captures the user's hand in motion, and the hand is then segmented by extracting depth features. The proposed system is termed depth-based geometrical sign language recognition (DGSLR). The DGSLR adopts a simpler hand segmentation approach, which can also be applied in other segmentation applications. The proposed geometric feature fusion improves recognition accuracy because the features remain invariant to hand orientation and rotation, in contrast to discrete cosine transform (DCT) and moment invariant features. Experimental iterations demonstrated that fusing the extracted features yields a higher accuracy rate. Finally, a trained neural network is employed to enhance recognition accuracy. The proposed framework is effective for sign language recognition from dynamic hand gestures and achieves an accuracy of up to 89.52%.
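To make the pipeline described in the abstract concrete, the sketch below illustrates one possible depth-threshold hand segmentation and geometric feature fusion step in Python. The depth band, feature set, and classifier choice are illustrative assumptions only, not the authors' DGSLR implementation.

```python
# Minimal sketch of a depth-based dynamic gesture pipeline (illustrative only;
# not the published DGSLR method). Assumes each depth frame is a 2D NumPy array
# of distances in millimetres; thresholds and features are hypothetical.
import numpy as np

def segment_hand(depth_frame, near_mm=400, far_mm=900):
    """Keep pixels whose depth falls inside an assumed near/far band."""
    return (depth_frame > near_mm) & (depth_frame < far_mm)

def geometric_features(mask):
    """Simple rotation-tolerant shape descriptors of the segmented hand."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return np.zeros(4)
    cx, cy = xs.mean(), ys.mean()                   # hand centroid
    r = np.hypot(xs - cx, ys - cy)                  # radial distances from centroid
    return np.array([
        float(len(xs)),                             # segmented area (pixel count)
        r.mean(),                                   # mean radius
        r.std(),                                    # radial spread
        r.max() / (r.mean() + 1e-6),                # elongation-like ratio
    ])

def fuse_gesture_features(depth_frames):
    """Fuse per-frame features of a dynamic gesture by averaging over time."""
    feats = [geometric_features(segment_hand(f)) for f in depth_frames]
    return np.mean(feats, axis=0)

# A small feed-forward network (e.g. scikit-learn's MLPClassifier) could then
# map fused feature vectors to alphabet labels:
#   from sklearn.neural_network import MLPClassifier
#   clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
#   clf.fit(X_train, y_train)
```

Averaging per-frame descriptors is only one way to summarise a dynamic gesture; sequence models or temporal histograms are equally plausible under the same feature-fusion idea.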

Keywords: digital learning; deaf community; healthcare; sign language; dynamic hand gesture; best features selection.

DOI: 10.1504/IJCVR.2022.119239

International Journal of Computational Vision and Robotics, 2022 Vol.12 No.1, pp.1 - 16

Received: 17 May 2019
Accepted: 14 Aug 2020

Published online: 30 Nov 2021
