A PROJECT REPORT
ON
“EMOTION BASED MUSIC RECOMMENDATION”
SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
AWARD OF
DIPLOMA IN
INFORMATION TECHNOLOGY
AFFILIATED TO
MAHARASHTRA STATE BOARD OF TECHNICAL EDUCATION
MUMBAI
SUBMITTED BY
Name of Students                      Enrollment No.
1. Pratik Bajarang More               2211440009
2. Yuvraj Navneet Vaity               2211440030
3. Sameer Santosh Gupta               2211440023
4. Dhananjay Kashinath Karale         23112230441
GUIDED BY
MRS. DIKSHIKA MEHER
G.V. ACHARYA POLYTECHNIC, SHELU
2024-25
MAHARASHTRA STATE BOARD OF TECHNICAL EDUCATION
CERTIFICATE
This is to certify that Mr. Pratik Bajarang More Enrollment No.2211440009 of Sixth
Semester of Diploma in Information Technology at G.V. ACHARYA POLYTECHNIC has
completed the Micro Project satisfactorily in Subject Capstone Project Execution (CPE) in
the academic year 2024-2025 as per the MSBTE prescribed curriculum of I scheme.
Place:- Shelu Enrollment No :-2211440009
Date :- / / 2025 Seat No:-143176
Project Guide Head of Department Principal
MAHARASHTRA STATE BOARD OF TECHNICAL EDUCATION
CERTIFICATE
This is to certify that Mr. Sameer Santosh Gupta Enrollment No.2211440023 of Sixth
Semester of Diploma in Information Technology at G.V. ACHARYA POLYTECHNIC has
completed the Micro Project satisfactorily in Subject Capstone Project Execution (CPE) in
the academic year 2024-2025 as per the MSBTE prescribed curriculum of I scheme.
Place:- Shelu Enrollment No :-2211440023
Date :- / / 2025 Seat No:-143186
Project Guide Head of Department Principal
MAHARASHTRA STATE BOARD OF TECHNICAL EDUCATION
CERTIFICATE
This is to certify that Mr. Yuvraj Navneet Vaity Enrollment No.2211440030 of Sixth
Semester of Diploma in Information Technology at G.V. ACHARYA POLYTECHNIC has
completed the Micro Project satisfactorily in Subject Capstone Project Execution (CPE) in
the academic year 2024-2025 as per the MSBTE prescribed curriculum of I scheme.
Place:- Shelu Enrollment No :-2211440030
Date :- / / 2025 Seat No:-143192
Project Guide Head of Department Principal
MAHARASHTRA STATE BOARD OF TECHNICAL EDUCATION
CERTIFICATE
This is to certify that Mr. Dhananjay Kashinath Karale Enrollment No. 23112230441 of
Sixth Semester of Diploma in Information Technology at G.V. ACHARYA
POLYTECHNIC has completed the Micro Project satisfactorily in Subject Capstone Project
Execution (CPE) in the academic year 2024-2025 as per the MSBTE prescribed curriculum of
I scheme.
Place:- Shelu Enrollment No :- 23112230441
Date :- / / 2025 Seat No:-143195
Project Guide Head of Department Principal
CANDIDATE’S DECLARATION
This is to certify that the project work titled “Emotion Based Music Recommendation” has been
carried out by the students of the Diploma in Information Technology as a part of the curriculum
prescribed by MSBTE.
We hereby declare that, to the best of our knowledge, this project work has not previously formed
the basis for the award of any Diploma, Associateship, Fellowship or any other similar title.
Signature of Student
1.
2.
3.
4.
ACKNOWLEDGEMENT
This project work, titled “Emotion Based Music Recommendation”, is a part of the curriculum
prescribed by MSBTE. We are sincerely thankful to our Principal Mrs. Madhura Mahindrakar
and our HOD Mrs. Madhura Mahindrakar, Information Technology Department, G.V.
Acharya Polytechnic, Shelu, for her invaluable guidance and assistance, without which the
accomplishment of this task would never have been possible. We are also thankful to our guide,
Mrs. Dikshika Meher, for giving us this opportunity to explore the real world and understand
how the parts of a project interrelate, without which a project can never progress. We are also
thankful to our parents, friends and all the staff of the Information Technology Department for
providing us with relevant information, necessary clarifications and great support.
G.V. ACHARYA POLYTECHNIC, SHELU
CERTIFICATE
This is to certify that the project report entitled “Emotion Based Music Recommendation” was
successfully completed by the following students of the Sixth Semester Diploma in Information Technology:
1) Pratik Bajarang More
2) Sameer Santosh Gupta
3) Yuvraj Navneet Vaity
4) Dhananjay Kashinath Karale
in partial fulfillment of the requirements for the award of the Diploma in Information
Technology, and submitted to the Department of Information Technology of G.V. Acharya
Polytechnic, Shelu. The matter embodied is the actual work done by the students during the
academic year 2024-25 under the supervision of the guide.
Name of Guide Name of HOD
Mrs. Dikshika Meher Mrs. Madhura Mahindrakar
Internal Examiner External Examiner
Mrs. Madhura Mahindrakar
ABSTRACT
The Emotion-Based Music Recommendation System aims to enhance user experience by
suggesting songs that align with the user's current emotional state. This project utilizes facial
expression recognition or sentiment analysis from text input to detect the user's mood and
recommends suitable music tracks accordingly. Leveraging Python and libraries such as
OpenCV, TensorFlow/Keras for emotion detection, and pandas for handling song datasets, the
system builds a bridge between human emotion and musical response. By combining machine
learning techniques with content-based filtering, this system offers a personalized and
emotionally intuitive way of discovering music, making listening more engaging and relevant.
LIST OF FIGURES
Figure No Title Page No
3.1 Gantt Chart 14
3.2 Context Diagram (DFD) 15
3.3 Use Case Diagram 17
3.4 Activity Diagram 18
3.5 Sequence Diagram 19
LIST OF TABLES
Table No Title Page No
2.1 Software Required 8
2.2 Hardware Required 8
3.3 Project Plan Table 14
LIST OF SYMBOLS AND ABBREVIATIONS
List of Symbols :-
• ↑ – Increase
• ↓ – Decrease
• → – Flow or transition
• ← – Reverse flow or input
• µ – Micro (e.g., microseconds µs)
List of Abbreviations :-
• AI – Artificial Intelligence
• ML – Machine Learning
• DL – Deep Learning
• CNN – Convolutional Neural Network (used for facial expression detection)
• UI – User Interface (for front-end interaction)
• API – Application Programming Interface (used for integrating music platforms or
backend)
• DB – Database (for storing user data, song lists, etc.)
• Acc – Accuracy (model performance metric)
TABLE OF CONTENT
SR.NO CONTENTS PG.NO
CERTIFICATES
CANDIDATE’S DECLARATION
ACKNOWLEDGEMENT
ABSTRACT
LIST OF FIGURES
LIST OF TABLES
LIST OF SYMBOLS AND ABBREVIATIONS
1 INTRODUCTION 1-4
1.1 Introduction of Project 2
1.2 Relevance 3
1.3 Organization of Report 4
2 LITERATURE REVIEW 5-8
2.1 Literature Survey 6
2.2 Problem Statement 7
2.3 Software Requirement 8
3 PROPOSED WORK 9-27
3.1 Aim of Project 10
3.2 Needs and Objectives 11
3.3 Methodologies 12-13
3.4 Design/ System Structure/ Coding 14-24
3.5 Conclusion 25
3.6 Future Scope 26
3.7 List of Reference 27
CHAPTER 1
INTRODUCTION
1
1.1 INTRODUCTION OF PROJECT
In today’s digital age, music has become more than just entertainment — it’s a powerful
emotional outlet and companion in our daily lives. With the rapid advancement of artificial
intelligence and machine learning, music recommendation systems have evolved from genre-
based suggestions to more personalized and context-aware recommendations.
This project, "Emotion-Based Music Recommendation Using Python," aims to take
personalization one step further by recommending music based on the user’s emotional state.
The idea is simple but powerful: when you're happy, you might want upbeat tracks; when you're
sad, maybe something calm or soothing fits better.
By leveraging technologies such as facial expression analysis, natural language processing,
or voice tone detection, this system can identify a user's emotion and dynamically suggest a
playlist that aligns with or helps alter their mood. Python’s rich ecosystem of libraries — like
OpenCV, TensorFlow/Keras, Librosa, Scikit-learn, and Spotipy (for Spotify integration)
— provides the perfect toolkit to bring this concept to life.
The project not only explores emotion detection but also delves into music feature extraction,
machine learning for emotion classification, and intelligent mapping between emotions and
music genres or individual tracks.
Ultimately, the goal is to enhance user experience by creating a more empathetic and intuitive
music recommendation system — one that doesn't just know what you like, but how you feel.
2
1.2 RELEVANCE
As the volume of digital music continues to grow, users are often overwhelmed with choices,
making intelligent recommendation systems more essential than ever. Traditional
recommendation methods, which rely on listening history or collaborative filtering, often miss
the mark when it comes to capturing the listener’s current emotional context. This is where
emotion-based recommendations can offer a significant edge.
Understanding and responding to a user's emotional state can create a deeper, more meaningful
connection between technology and the user. Music is closely tied to human emotions, making
it an ideal medium for exploring affective computing. By recommending music that resonates
with a user’s mood, such systems can improve mental well-being, enhance user satisfaction,
and even provide emotional support in real time.
Furthermore, with the increasing integration of AI into everyday life — from smartphones to
virtual assistants — emotion-aware applications are becoming not just relevant, but necessary
for more empathetic and user-centric design. This project demonstrates how Python and AI can
be used to create a more intuitive and emotionally intelligent music experience, aligning with
ongoing trends in personalized digital media and mental health tech.
3
1.3 ORGANIZATION OF REPORT
This report is organized into several key sections to provide a comprehensive overview of the
Emotion-Based Music Recommendation System:
1. Introduction
This section outlines the background, motivation, and objectives of the project. It
highlights the importance of personalized music recommendations based on
emotional states.
2. Literature Review
A review of existing systems and research related to music recommendation, emotion
detection, and affective computing. It also compares traditional recommendation
approaches with emotion-aware methods.
3. System Design and Architecture
This section presents the overall architecture of the system, detailing the main
components such as emotion detection, music mapping, and recommendation engine.
4. Methodology
Describes the technologies, tools, and algorithms used — including facial expression
or text-based emotion detection, feature extraction, and music classification
techniques.
5. Implementation
Covers the actual development process, including code structure, data sources, model
training, and integration with music APIs (e.g., Spotify).
6. Results and Evaluation
Provides analysis of the system’s performance, accuracy of emotion detection, and
effectiveness of music recommendations based on user feedback or metrics.
7. Conclusion and Future Work
Summarizes the key findings of the project, discusses limitations, and suggests
potential enhancements such as real-time emotion tracking, multi-modal input, or
larger-scale deployment.
8. References
Lists all the research papers, tools, APIs, and libraries referenced throughout the
project.
4
CHAPTER 2
LITERATURE SURVEY
5
2.1 LITERATURE SURVEY
Emotion-based music recommendation lies at the intersection of affective computing, music
information retrieval (MIR), and machine learning. Over the years, several studies and
systems have attempted to improve the personalization of music recommendations by
incorporating emotional context.
1. Traditional Music Recommendation Systems
Traditional systems typically use collaborative filtering, content-based filtering, or hybrid
approaches. Spotify, YouTube, and Apple Music recommend music based on user history,
likes, and playlist behavior. However, these methods lack real-time emotional adaptability and
often ignore the user’s current mood.
2. Emotion Recognition from Facial Expressions
Techniques for facial emotion recognition often use deep learning models like Convolutional
Neural Networks (CNNs). The FER-2013 dataset and tools like OpenCV, Dlib, and Keras
have been widely used for training emotion classification models. Research by Mollahosseini
et al. (2016) demonstrated the effectiveness of deep neural networks in recognizing emotions
from facial cues with high accuracy.
3. Music Emotion Recognition (MER)
Music can be classified emotionally using models that analyze audio features such as tempo,
rhythm, mode (major/minor), and spectral properties. Datasets like DEAM (Database for
Emotion Analysis using Music) and Million Song Dataset have been used in research to
correlate these features with emotional tags like happy, sad, relaxed, or angry.
4. Previous Works and Applications
Projects like Emotify, Moodify, and various research prototypes have demonstrated emotion-
aware music selection using machine learning models. These systems typically map user
emotion to music metadata (genre, mood tags, lyrics) or directly to audio features for selecting
suitable tracks.
6
2.2 PROBLEM STATEMENT
In today’s digital landscape, music streaming platforms offer an overwhelming amount of
content, yet most recommendation systems rely heavily on user history, playlists, or popular
trends, often ignoring the user's current emotional state. This leads to a mismatch between
the user’s mood and the music being recommended, reducing the effectiveness and satisfaction
of the listening experience.
The core problem is the lack of emotion-awareness in traditional music recommendation
systems. Users may want music that reflects or helps manage their emotions — whether it's
calming tracks during stress, energetic beats when feeling happy, or soothing melodies when
feeling down.
This project aims to address this gap by developing a Python-based system that can detect a
user’s emotion using techniques such as facial expression recognition or text sentiment
analysis, and recommend music that aligns with or positively influences that emotional state.
7
2.3 SOFTWARE AND HARDWARE REQUIREMENTS
Software :-
Sr. No   Name of Resources        Specifications
1        Operating System         Windows 11, macOS
2        Programming Language     HTML, CSS, Python
3        IDE / Code Editor        Visual Studio Code
Table 2.1 Software Required
Hardware :-
Sr. No   Name of Resources        Specifications
1        Processor                Intel i3
2        Storage                  256 GB
3        RAM                      8 GB
Table 2.2 Hardware Required
8
CHAPTER 3
PROPOSED WORK
9
3.1 AIM OF PROJECT
The primary aim of this project is to design and implement an emotion-based music
recommendation system using Python that can understand and respond to the emotional state
of the user in real time. Unlike traditional recommendation systems that rely on user
preferences, listening history, or popularity trends, this system focuses on making music
suggestions that are emotionally intelligent and context-aware.
The goal is to create a more personalized and empathetic music experience by detecting
emotions through facial expressions, textual input, or voice signals, and then mapping those
emotions to appropriate music tracks or genres. By doing so, the system can recommend songs
that either complement the user's current mood or help shift it in a desired direction — such as
calming music for stress relief, or upbeat music for motivation.
This project combines machine learning, computer vision, natural language processing,
and music information retrieval, all implemented using Python, to demonstrate the potential
of AI in enhancing emotional well-being and digital user experience.
10
3.2 NEED AND OBJECTIVES
Needs
• Traditional music recommendation systems fail to consider the real-time emotional state
of the user.
• Music has a significant role in influencing and reflecting human emotions.
• Users often seek music based on their current emotional state, not just listening history or
preferences.
• Existing recommendation systems lack emotional intelligence, which can lead to irrelevant
or generalized suggestions.
• There is an increasing demand for AI-powered, emotion-aware systems to improve
personalization and emotional engagement.
• Emotion-based music recommendations can be applied in areas like mental health,
relaxation, motivation, and therapeutic environments.
Objectives
• To detect and classify emotions from the user using facial expressions, text sentiment, or
voice tone.
• To categorize emotions into different states such as happy, sad, angry, relaxed, etc.
• To recommend music tracks that align with the user's emotional state.
• To use Python libraries and machine learning models for emotion detection and music
recommendation.
• To integrate external music APIs (like Spotify) to fetch real-time music suggestions.
• To analyze audio features such as tempo, rhythm, and mood for more accurate emotion-to-
music mapping.
• To develop a user-friendly interface for interaction with the system (image upload, text
input, or live emotion detection).
• To evaluate the accuracy and effectiveness of the emotion detection and music
recommendations.
• To explore how emotion-based music recommendations can positively impact the user
experience and mood.
11
3.3 METHODOLOGY
1. Emotion Detection
The first step is detecting the user’s emotional state using one or more of the following methods:
• Facial Expression Recognition: Using libraries like OpenCV and Keras, the system detects
emotions through facial expressions via a webcam, classifying emotions such as happy, sad,
angry, or neutral (a code sketch of this path follows this list).
• Text Sentiment Analysis: NLP tools like VADER and TextBlob analyze user-provided text to
determine the sentiment behind it.
• Voice Tone Detection: Speech input is analyzed using Librosa to extract features such as pitch
and tone, mapping them to emotions.
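A minimal sketch of the facial-expression path is given below. It assumes a pre-trained Keras
CNN saved as emotion_model.h5 (trained on 48x48 grayscale FER-2013 images) and uses the
Haar-cascade face detector bundled with OpenCV; the model file name and label order are
assumptions for illustration, not the project's actual assets.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

# Haar cascade shipped with OpenCV locates faces in each frame.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("emotion_model.h5")  # assumed pre-trained emotion CNN

def detect_emotion(frame):
    """Return the emotion label for the first face found in a BGR frame, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).astype("float32") / 255.0
        roi = np.expand_dims(roi, axis=(0, -1))      # shape (1, 48, 48, 1)
        scores = model.predict(roi, verbose=0)[0]
        return EMOTIONS[int(np.argmax(scores))]
    return None

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)            # default webcam
    ok, frame = cap.read()
    cap.release()
    if ok:
        print("Detected emotion:", detect_emotion(frame))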
2. Music Feature Extraction
• Audio Features: Extract musical characteristics such as tempo, key, and rhythm using Librosa
or pydub to classify songs based on mood (see the sketch after this list).
• Mood Tags: Songs are tagged with moods like happy, sad, or energetic, which help match the
music to the user's emotional state.
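A rough sketch of tagging a track's mood from its audio features with Librosa is shown below;
the thresholds are illustrative assumptions rather than tuned values.
import numpy as np
import librosa

def mood_tag(audio_path):
    """Assign a coarse mood tag to a song from tempo, energy and brightness."""
    y, sr = librosa.load(audio_path, duration=60)          # first 60 s is enough
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    tempo = float(np.atleast_1d(tempo)[0])                 # scalar BPM across librosa versions
    energy = float(np.mean(librosa.feature.rms(y=y)))      # loudness proxy
    brightness = float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr)))

    # Very coarse feature-to-mood rules (assumed thresholds).
    if tempo > 120 and energy > 0.1:
        return "energetic"
    if tempo > 100 and brightness > 2000:
        return "happy"
    if tempo < 80 and energy < 0.05:
        return "sad"
    return "relaxed"

# Example: print(mood_tag("songs/track01.mp3"))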
3. Emotion-Music Mapping
The detected emotions are mapped to corresponding music features (a lookup-table sketch follows the examples):
• Example Mapping:
o Happy → Upbeat tempo, major key.
o Sad → Slow tempo, minor key.
o Angry → Fast tempo, intense beats.
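The example mapping above can be expressed as a simple lookup table; the tag names and tempo
bounds below are assumptions chosen to match the mood tags used during feature extraction.
# Assumed emotion-to-music profile table; values are illustrative.
EMOTION_TO_MUSIC = {
    "Happy":   {"mood": "happy",     "min_tempo": 100, "mode": "major"},
    "Sad":     {"mood": "sad",       "max_tempo": 80,  "mode": "minor"},
    "Angry":   {"mood": "energetic", "min_tempo": 120, "mode": "minor"},
    "Neutral": {"mood": "relaxed"},
}

def music_profile(emotion):
    """Return the target music profile for a detected emotion (default: relaxed)."""
    return EMOTION_TO_MUSIC.get(emotion, EMOTION_TO_MUSIC["Neutral"])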
4. Music Recommendation Engine
• API Integration: The system connects to music platforms like Spotify using Spotipy to fetch
tracks that align with the detected emotion (a sketch follows this list).
• Algorithm: An algorithm selects songs based on the emotion-music mapping, either
recommending a single song or creating a playlist.
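A hedged sketch of fetching tracks with Spotipy is given below; the client ID and secret are
placeholders that must come from a Spotify developer account, and the simple keyword search
stands in for whatever selection algorithm the final system uses.
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# Placeholder credentials from a Spotify developer account (assumption).
sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(
    client_id="YOUR_CLIENT_ID", client_secret="YOUR_CLIENT_SECRET"))

def recommend_tracks(mood, limit=10):
    """Search Spotify for tracks matching the mood tag; return (name, artist) pairs."""
    results = sp.search(q=f"{mood} songs", type="track", limit=limit)
    return [(t["name"], t["artists"][0]["name"])
            for t in results["tracks"]["items"]]

# Example: for name, artist in recommend_tracks("happy"): print(name, "-", artist)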
5. User Interface
A user-friendly interface allows interaction with the system:
• Emotion Input: Users can input their emotions through facial expression, text, or voice.
12
• Music Playback: The system displays the emotion detected and plays the recommended
music.
6. Evaluation and Feedback
• Testing Emotion Detection: The accuracy of emotion detection is evaluated using standard
datasets like FER-2013 (a short evaluation sketch follows this list).
• User Feedback: Users provide feedback on the music recommendations, helping assess the
system’s effectiveness.
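A minimal sketch of the accuracy check on a held-out FER-2013 test split is shown below;
X_test, y_test (integer labels) and the trained model are assumed to be prepared elsewhere.
import numpy as np
from sklearn.metrics import accuracy_score, classification_report

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def evaluate(model, X_test, y_test):
    """Report overall accuracy and per-emotion precision/recall on the test split."""
    preds = np.argmax(model.predict(X_test, verbose=0), axis=1)
    print("Accuracy:", accuracy_score(y_test, preds))
    print(classification_report(y_test, preds, target_names=EMOTIONS))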
13
3.4 DESIGN / SYSTEM STRUCTURE / CODING
Gantt Chart :-
Fig 3.1 Gantt Chart
Project Plan Table :-
Task                                   Duration   Start Date          End Date
Topic Selection                        5 days     15 July 2024        20 July 2024
Literature Survey                      11 days    21 July 2024        31 July 2024
Collecting Data Required               10 days    1 August 2024       10 August 2024
Defining System Design                 10 days    11 August 2024      20 August 2024
Data Preprocessing                     11 days    21 August 2024      31 August 2024
Model Selection & Implementation       15 days    1 September 2024    15 September 2024
Model Training & Evaluation            15 days    16 September 2024   30 September 2024
Developing Recommendation System       10 days    1 October 2024      10 October 2024
Web Interface Design                   10 days    11 October 2024     20 October 2024
Backend Development & API Integration  11 days    21 October 2024     31 October 2024
System Integration                     5 days     1 November 2024     5 November 2024
Optimization & Debugging               7 days     6 November 2024     12 November 2024
Security & Scalability Testing         7 days     13 November 2024    19 November 2024
Testing                                15 days    1 January 2025      15 January 2025
Final Review & Submission              24 days    16 January 2025     9 March 2025
Table 3.3 Project Plan
14
Context Diagram (DFD) :-
Fig 3.2 Context Diagram (DFD)
Level 0 DFD (Context Diagram)
The Level 0 DFD provides a high-level overview of the system:
• External Entities:
o User: Interacts with the system to get emotion-based music recommendations.
• Processes:
15
o Emotion Detection: Captures and analyzes the user's facial expressions.
o Music Recommendation: Fetches and displays song recommendations based on
detected emotions.
o User Management: Handles user data and authentication.
• Data Stores:
o User Data Store: Stores user information.
o Emotion Data Store: Stores emotion detection results.
o Song Data Store: Stores song details and recommendations.
In the Level 0 DFD:
1) User Interaction:
• The user accesses the system and provides input (video feed).
• The system processes the input to detect emotions.
2) Emotion Detection:
• The system captures the video feed and analyzes it.
• The detected emotion is stored in the Emotion Data Store.
3) Music Recommendation:
• Based on the detected emotion, the system fetches relevant playlists.
• Song recommendations are displayed to the user and stored in the Song Data
Store.
16
Use Case Diagram
A Use Case Diagram represents the interactions between users (actors) and the system. It
identifies the primary use cases and shows the flow of actions.
Use Case Diagram for Emotion Music Recommender System
1. Actors:
• User
• Admin (for system management)
• Emotion Detection System
• Spotify API
2. Use Cases:
• Register
• Login
• Capture Emotion
• Detect Emotion
• Get Music Recommendations
• Update Recommendations
• Manage Users (Admin)
• System Maintenance (Admin)
Fig 3.3 Use Case Diagram
17
Activity Diagram
Fig 3.4 Activity Diagram
Activities :
1. User starts video feed.
2. System captures video feed.
3. System analyzes video for emotion.
4. System detects emotion.
5. System queries Spotify for recommendations.
6. Spotify returns recommendations.
7. System displays recommendations.
18
Sequence Diagram :
Fig 3.5 Sequence Diagram
Participants:
• User
• UI
• Emotion Detection System
• Music Recommendation System
• Spotify API
Sequence :
1) User starts video feed.
2) UI captures video.
3) Emotion Detection System analyzes video.
4) Detected emotion sent to Music Recommendation System.
5) Music Recommendation System queries Spotify API.
6) Spotify API returns relevant playlists.
7) Music Recommendation System displays recommendations on the UI.
19
Code :
Main File :- app.py
from flask import Flask, render_template, Response, jsonify
import gunicorn  # used only when serving the app in production
from camera import *  # provides VideoCamera and music_rec() (a sketch of this module follows the listing)

app = Flask(__name__)

# Column headings for the recommendation table shown on the page.
headings = ("Name", "Album", "Artist")

# Initial recommendations (top 15 songs) before any emotion is detected.
df1 = music_rec()
df1 = df1.head(15)


@app.route('/')
def index():
    """Render the main page with the current recommendation table."""
    print(df1.to_json(orient='records'))
    return render_template('index.html', headings=headings, data=df1)


def gen(camera):
    """Stream MJPEG frames from the webcam and refresh the recommendations."""
    global df1
    while True:
        frame, df1 = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')
20
@app.route('/video_feed')
def video_feed():
    """Serve the live webcam feed as a multipart MJPEG stream."""
    return Response(gen(VideoCamera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')


@app.route('/t')
def gen_table():
    """Return the latest recommendation table as JSON for the front end."""
    return df1.to_json(orient='records')


if __name__ == '__main__':
    app.debug = True
    app.run()
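The camera module imported by app.py is not reproduced in the report. The sketch below is an
assumption of what it might contain: a Haar-cascade face detector paired with a pre-trained
Keras CNN (assumed to be saved as emotion_model.h5) and one CSV of songs per emotion
(e.g. songs/Happy.csv with Name, Album and Artist columns). The file names and CSV layout
are illustrative, not the actual project assets.
camera.py (illustrative sketch) :-
import cv2
import numpy as np
import pandas as pd
from tensorflow.keras.models import load_model

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("emotion_model.h5")          # assumed pre-trained CNN
current_emotion = "Neutral"                     # shared with music_rec()

def music_rec():
    """Return the song list (Name, Album, Artist) for the last detected emotion."""
    return pd.read_csv(f"songs/{current_emotion}.csv")

class VideoCamera:
    def __init__(self):
        self.video = cv2.VideoCapture(0)        # default webcam

    def __del__(self):
        self.video.release()

    def get_frame(self):
        """Grab a frame, detect the emotion, and return (JPEG bytes, recommendations)."""
        global current_emotion
        ok, frame = self.video.read()
        if not ok:
            return b"", music_rec()
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).astype("float32") / 255.0
            roi = np.expand_dims(roi, axis=(0, -1))
            current_emotion = EMOTIONS[int(np.argmax(model.predict(roi, verbose=0)))]
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, current_emotion, (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
        _, jpeg = cv2.imencode(".jpg", frame)
        return jpeg.tobytes(), music_rec()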
21
Output:-
22
23
24
3.5 CONCLUSION
In this project, we developed an emotion-based music recommendation system that
leverages emotion detection techniques, such as facial expression recognition, text sentiment
analysis, and voice tone detection, to provide personalized music suggestions. By integrating
these technologies with a music recommendation engine, the system enhances the user
experience by offering music that aligns with or positively impacts the user's emotional state.
Through the use of Python libraries like OpenCV, Librosa, and Spotipy, the project
successfully maps detected emotions to appropriate musical features and tracks, demonstrating
the potential of AI in creating more empathetic, personalized digital experiences.
The system not only offers a more intuitive music recommendation process but also paves the
way for future applications in mental health, well-being, and entertainment. Further
improvements, such as real-time emotion tracking and multi-modal emotion detection, could
enhance the accuracy and responsiveness of the system, offering even more personalized
experiences for users.
25
3.6 FUTURE SCOPE
While this project successfully demonstrates the potential of emotion-based music
recommendation, there are several areas where it can be expanded and improved:
1. Real-Time Emotion Detection:
o Integrating real-time emotion detection from video feeds or live voice input
will make the system more interactive and dynamic. Currently, emotion
detection relies on static inputs (images or text), but live feedback could allow
for more accurate and immediate recommendations.
2. Multimodal Emotion Detection:
o Combining multiple sources of input, such as facial expressions, speech tone,
and textual sentiment, can improve the accuracy of emotion detection. A
multimodal system would be more robust and adaptable to different user
environments.
3. Improved Music Recommendation Algorithm:
o Enhancing the recommendation engine to account for user preferences or
historical listening habits could make suggestions even more personalized.
This could be achieved through hybrid models that combine emotion with user-
based collaborative filtering.
4. Integration with Wearables:
o Future versions of the system could integrate with wearable devices (such as
heart rate monitors or smartwatches) to track physiological indicators of
emotion, providing a more holistic and accurate view of the user's emotional
state.
5. Mood-Influencing Features:
o Exploring how music can actively influence emotional states (e.g., using
calming music to reduce anxiety or energetic beats to improve focus) can open
new possibilities in therapeutic applications, such as for mental health or stress
relief.
26
3.7 List of References
• www.youtube.com
• www.github.com
• www.wikipedia.com
• www.google.com
• FER-2013 Dataset – Kaggle
• CK+ Dataset – Cohn-Kanade
• Million Song Dataset – Columbia University
• OpenCV – https://opencv.org/
• TensorFlow & Keras – https://www.tensorflow.org/
• Librosa – https://librosa.org/
• Flask – https://flask.palletsprojects.com/
27