Title: Real-time sign language recognition and speech conversion using VGG16
Authors: Dona Mary Cherian; Jincy J. Fernandez
Addresses: Department of Computer Science and Engineering, Rajagiri School of Engineering and Technology, Kakkanad, Kochi, India; Department of Computer Science and Engineering, Rajagiri School of Engineering and Technology, Kakkanad, Kochi, India
Abstract: Sign language is used by the deaf and mute community to communicate non-verbally. It consists of hand gestures or signs that represent the language. Hand gesture recognition makes human-computer interaction (HCI) more convenient and flexible for society, so it is important to classify each character correctly, without error. Currently, online interpreters are available for translating sign language gestures into the corresponding common language and vice versa, but this requires an expert or intermediary who can translate both ways. Sensor-equipped hand gloves are also used for tracking hand articulation. Thus, communication between the deaf and mute community and the rest of society remains difficult and costly. This paper describes the classification of sign language hand gestures into their corresponding alphabet characters in text form using deep neural networks. After classification, the text is converted to speech, which helps visually challenged people understand the sign. The method classifies real-time images captured using a desktop camera. The accuracy of the model obtained using a convolutional neural network was 97%.
Keywords: American sign language; ASL; convolutional neural network; CNN; visual geometry group 16; VGG16.
DOI: 10.1504/IJCVR.2023.129438
International Journal of Computational Vision and Robotics, 2023 Vol.13 No.2, pp.174 - 185
Received: 25 Aug 2021
Accepted: 07 Jan 2022
Published online: 09 Mar 2023