Show simple item record

dc.contributor.author	Kumari, HMLS
dc.contributor.author	Kumari, HMNS
dc.contributor.author	Nawarathne, UMMPK
dc.date.accessioned	2025-09-26T04:38:43Z
dc.date.available	2025-09-26T04:38:43Z
dc.date.issued	2024-07
dc.identifier.uri	https://ir.kdu.ac.lk/handle/345/8907
dc.identifier.uri	10.64701/ijrc/345/8907
dc.description.abstract	Emotion recognition and classification using artificial intelligence (AI) techniques play a crucial role in human-computer interaction (HCI), enabling the prediction of human emotions from audio signals with broad applications in psychology, medicine, education, entertainment, etc. This research focused on speech emotion recognition (SER) by employing classification methods and transformer models on the Toronto Emotional Speech Set (TESS). Initially, acoustic features were extracted from the audio dataset using different feature extraction techniques, including chroma, Mel-scaled spectrogram, contrast features, and Mel Frequency Cepstral Coefficients (MFCCs). The study then employed a Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and a hybrid CNN-LSTM model to classify emotions. To compare the performance of these models, image transformer models such as ViT (Vision Transformer) and BEiT (Bidirectional Encoder representation from Image Transformers) were applied to the Mel-spectrograms derived from the same dataset. Evaluation metrics such as accuracy, precision, recall, and F1-score were calculated for each of these models to ensure a comprehensive performance comparison. According to the results, the hybrid model outperformed the other models, achieving an accuracy of 99.01%, while the CNN, LSTM, ViT, and BEiT models demonstrated accuracies of 95.37%, 98.57%, 98%, and 98.3%, respectively. To interpret the output of the hybrid model and provide visual explanations of its predictions, Grad-CAM (Gradient-weighted Class Activation Mapping) visualizations were obtained. This technique reduced the black-box character of the deep models, making them more reliable for use in clinical and other sensitive contexts. In conclusion, the hybrid CNN-LSTM model showed strong performance in audio-based emotion classification.	en_US
dc.language.iso	en	en_US
dc.subject	Convolutional neural network, Grad-CAM, Hybrid model, Image transformers, Long Short-Term Memory, Speech emotion recognition.	en_US
dc.title	Speech Emotion Recognition with Hybrid CNN-LSTM and Transformers Models: Evaluating the Hybrid Model Using Grad-CAM	en_US
dc.type	Journal article	en_US
dc.identifier.journal	IJRC	en_US
dc.identifier.issue	01	en_US
dc.identifier.volume	03	en_US
dc.identifier.pgnos	56-66	en_US
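
The abstract above describes a concrete pipeline: extract chroma, Mel-spectrogram, spectral-contrast, and MFCC features from the TESS audio, then classify the emotions with a hybrid CNN-LSTM. The sketch below shows one way such a pipeline could look, assuming librosa for feature extraction and Keras for the model; the library choices, layer sizes, and training settings are illustrative assumptions and are not taken from the paper.

import numpy as np
import librosa
from tensorflow import keras
from tensorflow.keras import layers

def extract_features(path, sr=22050):
    # Combine the four feature types named in the abstract into one vector per clip
    y, sr = librosa.load(path, sr=sr)
    stft = np.abs(librosa.stft(y))
    mfcc = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40).T, axis=0)           # 40 coefficients
    chroma = np.mean(librosa.feature.chroma_stft(S=stft, sr=sr).T, axis=0)          # 12 chroma bins
    mel = np.mean(librosa.feature.melspectrogram(y=y, sr=sr).T, axis=0)             # 128 Mel bands
    contrast = np.mean(librosa.feature.spectral_contrast(S=stft, sr=sr).T, axis=0)  # 7 contrast bands
    return np.concatenate([mfcc, chroma, mel, contrast])                            # 187 values total

def build_hybrid_cnn_lstm(n_features=187, n_classes=7):
    # 1-D convolutions capture local spectral patterns; the LSTM models the
    # remaining sequential structure. TESS contains seven emotion classes.
    model = keras.Sequential([
        layers.Input(shape=(n_features, 1)),
        layers.Conv1D(64, kernel_size=5, padding="same", activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(128, kernel_size=5, padding="same", activation="relu"),
        layers.MaxPooling1D(2),
        layers.LSTM(128),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# Usage (wav_paths and labels are placeholders):
# X = np.array([extract_features(p) for p in wav_paths])[..., np.newaxis]
# model = build_hybrid_cnn_lstm(n_features=X.shape[1])
# model.fit(X, labels, epochs=50, validation_split=0.2)

The Grad-CAM explanations mentioned in the abstract would then be computed from the gradients of the predicted class score with respect to the activations of the last convolutional layer, highlighting which regions of the input most influenced the prediction.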

