
dc.contributor.author: Sasanka, SSV
dc.contributor.author: Wedasinghe, N
dc.date.accessioned: 2026-03-11T07:15:48Z
dc.date.available: 2026-03-11T07:15:48Z
dc.date.issued: 2026-01
dc.identifier.uri: https://ir.kdu.ac.lk/handle/345/9073
dc.description.abstract: This review examines developments in emotion-aware music recommendation systems, wherein artificial intelligence utilizes emotional data to personalize listening experiences. Based on an analysis of 14 key studies published between 2015 and 2025, selected following PRISMA guidelines, this work synthesizes findings on predominant emotion detection methods. These include facial expression analysis, vocal tone interpretation, brainwave measurement via wearable sensors, and lyrical text processing. A principal finding indicates that multimodal approaches, which integrate multiple data sources, yield significantly more accurate and robust systems compared to unimodal methods. The most advanced implementations employ sophisticated deep learning models to effectively map emotional states to musical selections. Future research directions must address critical challenges including cultural bias mitigation, user privacy protection, and the development of lightweight algorithms for efficient deployment on personal devices. [en_US]
dc.language.iso: en [en_US]
dc.subject: emotion recognition, music recommendation, affective computing, deep learning, multi-modal systems [en_US]
dc.title: Emotions - Based Music Recommendations System [en_US]
dc.type: Article Abstract [en_US]
dc.identifier.faculty: FOC [en_US]
dc.identifier.journal: FOCSS [en_US]
dc.identifier.issue: 6 [en_US]
dc.identifier.pgnos: 42 [en_US]

