Title: Unsupervised voice activity detection with improved signal-to-noise ratio in noisy environment
Authors: Shilpa Sharma; Rahul Malhotra; Anurag Sharma; Jeevan Bala; Punam Rattan; Sheveta Vashisht
Addresses: Computer Science and Engineering, CT Group of Institutions, Jalandhar, India; Lovely Professional University, 144411, India; Department of Electronics Communication Engineering, CT Group of Institutions, Jalandhar, 144020, India; Faculty of Engineering, Design and Automation, GNA University, Phagwara, 144401, India; Department of Computer Science and Engineering, Lovely Professional University, Phagwara, 144411, India; Department of Computer Application, Lovely Professional University, Phagwara, 144411, India; Department of Computer Science and Engineering, Lovely Professional University, Phagwara, 144411, India
Abstract: To identify voiced and unvoiced signals, this research presents an extended voice-characteristic detection strategy for noisy settings that uses feature extraction and unvoiced-feature normalisation. The proposed method builds a recognition model by recovering characteristics for the categorisation of voiced and unvoiced signals in a high signal-to-noise ratio (SNR) environment. The novelty of this method is that it uses feature extraction to classify voiced and unvoiced signals at a higher SNR. Furthermore, by combining two classifiers in a hybrid model, the speech features become less affected by noise and identification performance improves. The model was tested for its ability to increase recognition accuracy, and the proposed method produces better results than existing methods, with an accuracy of 99.73% and an SNR of 25.61 dB. The proposed model, LFV-KANN, also handles increases in noise power efficiently through the hybridisation of two classifiers: an artificial neural network (ANN) and K-means clustering.
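The unsupervised clustering half of such a hybrid detector can be illustrated with a minimal sketch: frames are reduced to a log-energy feature and a 2-means clustering assigns each frame a voiced/unvoiced label, with the higher-energy cluster taken as voiced. This is a simplified, hypothetical illustration of the K-means stage only, not the paper's LFV-KANN model (which additionally hybridises the clustering with an ANN classifier); the function name, frame length, and feature choice are assumptions for the example.

```python
import numpy as np

def vad_kmeans(signal, frame_len=256, n_iter=20):
    """Unsupervised voiced/unvoiced labelling via 2-means on frame log-energy.

    Illustrative sketch only -- not the authors' LFV-KANN method.
    Returns an array with 1 for frames judged voiced, 0 otherwise.
    """
    # Split the signal into non-overlapping frames
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    # Log-energy feature per frame (epsilon avoids log(0) on silence)
    feats = np.log(np.sum(frames ** 2, axis=1) + 1e-12)
    # Tiny 1-D k-means with k=2, initialised at the feature extremes
    centroids = np.array([feats.min(), feats.max()])
    for _ in range(n_iter):
        labels = np.abs(feats[:, None] - centroids[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centroids[k] = feats[labels == k].mean()
    # The cluster with the higher mean log-energy is treated as voiced
    return (labels == centroids.argmax()).astype(int)

# Synthetic example: low-level noise ("unvoiced") followed by a louder tone
rng = np.random.default_rng(0)
sig = np.concatenate([0.01 * rng.standard_normal(4096),
                      np.sin(2 * np.pi * 440 * np.arange(4096) / 16000)])
decisions = vad_kmeans(sig)
```

In a hybrid scheme along the paper's lines, these unsupervised labels could then serve as training targets or be combined with an ANN's posterior scores, so the final decision is less sensitive to the noise power than either classifier alone.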
Keywords: TIMIT dataset; support vector machine; voice activity detector; unsupervised learning.
International Journal of Nanotechnology, 2023 Vol.20 No.1/2/3/4, pp.421 - 432
Received: 04 Jan 2022
Accepted: 24 Mar 2022
Published online: 31 May 2023