Emotion-Based Music Recommendation Systems
Abstract
This review examines developments in emotion-aware music recommendation systems,
in which artificial intelligence uses emotional data to personalize listening experiences.
Based on an analysis of 14 key studies published between 2015 and 2025, selected
following PRISMA guidelines, this work synthesizes findings on predominant emotion
detection methods. These include facial expression analysis, vocal tone interpretation,
brainwave measurement via wearable sensors, and lyrical text processing. A principal
finding indicates that multimodal approaches, which integrate multiple data sources,
yield significantly more accurate and robust systems compared to unimodal methods.
The most advanced implementations employ sophisticated deep learning models to
effectively map emotional states to musical selections. Future research directions must
address critical challenges including cultural bias mitigation, user privacy protection,
and the development of lightweight algorithms for efficient deployment on personal
devices.
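To make the multimodal finding concrete, the sketch below shows one common fusion strategy, weighted late fusion, in which per-modality emotion probabilities (e.g. from a facial-expression model and a vocal-tone model) are combined before selecting a musical mood. The emotion classes, fusion weights, and mood mapping here are illustrative assumptions, not values taken from the reviewed studies.

```python
# Illustrative sketch of weighted late fusion for emotion-aware music
# recommendation. Class names, weights, and the mood mapping are
# hypothetical assumptions, not results from the reviewed studies.

EMOTIONS = ["happy", "sad", "angry", "calm"]

# Toy mapping from a detected emotion to a musical mood tag.
MOOD_MAP = {
    "happy": "upbeat pop",
    "sad": "mellow acoustic",
    "angry": "high-energy rock",
    "calm": "ambient",
}

def fuse(face_probs, voice_probs, w_face=0.6, w_voice=0.4):
    """Weighted late fusion of per-modality emotion probabilities."""
    fused = [w_face * f + w_voice * v for f, v in zip(face_probs, voice_probs)]
    total = sum(fused)
    return [p / total for p in fused]  # renormalize to a distribution

def recommend(face_probs, voice_probs):
    """Return the fused emotion label and its mapped musical mood."""
    fused = fuse(face_probs, voice_probs)
    emotion = EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]
    return emotion, MOOD_MAP[emotion]

if __name__ == "__main__":
    # The face model leans "happy", the voice model leans "calm";
    # fusion arbitrates between the two modalities.
    emotion, mood = recommend([0.7, 0.1, 0.1, 0.1], [0.3, 0.1, 0.1, 0.5])
    print(emotion, "->", mood)  # happy -> upbeat pop
```

In practice, the reviewed deep learning systems learn such fusion weights and the emotion-to-music mapping from data rather than hard-coding them, but the overall pipeline (per-modality inference, fusion, then selection) follows this shape.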
