Improved Speech Emotion Classification Using Deep Neural Network

Published on: 29-07-2023

Abstract

Speech emotion recognition (SER), which has gained increasing attention in recent years, is a key aspect of human–computer interaction. Although a wide range of strategies has been proposed for SER, these approaches still leave room for improved performance. In this study, a deep neural network model for classifying speech emotions is proposed. The approach comprises three stages: feature extraction, normalization, and emotion recognition. During feature extraction, the Librosa Python toolkit is used to obtain MFCC, Mel-spectrogram, chroma, and poly features. SMOTE (synthetic minority oversampling technique) is applied to augment the minority classes, and a Min–Max scaler is used for normalization. The model was evaluated on three widely used languages, German, English, and French, using the Berlin Emotional Speech Database (EMODB), the Surrey Audio-Visual Expressed Emotion (SAVEE) dataset, and the Canadian French Emotional (CaFE) speech dataset. Unweighted accuracies of 95% on EMODB, 90% on SAVEE, and 92% on CaFE are achieved in speaker-dependent experiments. The results show that the proposed method recognizes emotions efficiently and outperforms the comparison approaches across the reported performance metrics.
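The sketch below illustrates the preprocessing pipeline named in the abstract: Librosa feature extraction, SMOTE oversampling, and Min–Max normalization. It is a minimal reconstruction, not the authors' exact code; the time-averaging of frame-level features, the MFCC count, and the placeholder names `wav_paths` and `labels` are assumptions, as the abstract does not give these details.

```python
# Minimal sketch of the three-stage preprocessing described in the abstract.
# Assumptions: frame-level features are averaged over time to form one
# fixed-length vector per utterance; wav_paths and labels are placeholders
# for the dataset's file list and integer emotion labels.
import numpy as np
import librosa
from imblearn.over_sampling import SMOTE
from sklearn.preprocessing import MinMaxScaler

def extract_features(path):
    """Return a fixed-length vector of MFCC, Mel-spectrogram,
    chroma, and poly features for one utterance."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)      # 40 coefficients (assumed)
    mel = librosa.feature.melspectrogram(y=y, sr=sr)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    poly = librosa.feature.poly_features(y=y, sr=sr, order=1)
    # Average each feature across time frames, then concatenate.
    return np.concatenate([f.mean(axis=1) for f in (mfcc, mel, chroma, poly)])

X = np.stack([extract_features(p) for p in wav_paths])       # (n_utterances, n_features)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, labels) # oversample minority classes
X_norm = MinMaxScaler().fit_transform(X_bal)                 # scale each feature to [0, 1]
# X_norm and y_bal would then be fed to the deep neural network classifier.
```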
