
Commit 93778f3

Added a project
1 parent 3b008dd commit 93778f3

14 files changed, +197 −12 lines changed
@@ -0,0 +1,17 @@
<h2>Emotion Based Music Player</h2>

### Goal 🎯

The objective of the emotion-based music player project is to create an intelligent system that detects and analyzes users' emotions in real time through techniques such as facial recognition, voice analysis, or biosensors. Based on the detected emotional state, the player automatically curates and adjusts music playlists to enhance the user's mood and provide a personalized listening experience. The system aims to reduce the burden of manual song selection, adapt dynamically to emotional changes, and offer privacy-conscious and culturally relevant music suggestions, while giving users the flexibility to override or customize the music based on their preferences.

### Model(s) used for the Web App 🧮

The models and technologies used in the emotion-based music player project include:

1. Pretrained Keras model (model.h5): a deep learning model, likely a Convolutional Neural Network (CNN), loaded to predict emotions from processed facial landmarks and hand movements.

2. Mediapipe library: used to extract facial and hand landmarks, which serve as the input features for emotion recognition. It captures key points from the user's face and hands.

3. Streamlit and WebRTC: used for the web interface and real-time video streaming, capturing the user's face through a web camera for emotion recognition.

4. Overall pipeline: the project combines deep learning (Keras) and computer vision (Mediapipe) to extract facial and hand landmark data, predicts the emotion with the model, and uses that prediction to drive the music recommendation; a minimal sketch of the prediction step follows this list.
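For concreteness, here is a minimal sketch of that prediction step, based on the application code included later in this commit. The feature vector is the flattened list of face and hand landmark offsets that the app builds with Mediapipe; `predict_emotion` is only an illustrative helper name.

```python
import numpy as np
from keras.models import load_model

# Pretrained emotion classifier and its class labels, both shipped with the project.
model = load_model("model.h5")
labels = np.load("labels.npy")

def predict_emotion(features):
    # `features` is the flattened landmark vector: face offsets followed by
    # left- and right-hand offsets (zeros when a hand is not visible).
    features = np.array(features).reshape(1, -1)   # single-sample batch
    probs = model.predict(features)
    return labels[np.argmax(probs)]
```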
Binary file not shown.
Binary file not shown.
@@ -0,0 +1,11 @@
<h1>Emotion Based Music Player</h1>

<h1>Description</h1>
Welcome to this emotion-based music player project, built using Mediapipe and Keras. OpenCV and Streamlit are used to create the web app, and the streamlit-webrtc module captures the webcam in the browser; a minimal sketch of that capture step is shown below.
<br><br>
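A minimal sketch of the browser webcam capture with streamlit-webrtc (the full app in this commit plugs the emotion detection into the same `recv` callback; `FrameProcessor` is only an illustrative name):

```python
import av
import cv2
import streamlit as st
from streamlit_webrtc import webrtc_streamer

class FrameProcessor:
    def recv(self, frame):
        # Called for every video frame streamed from the browser.
        img = frame.to_ndarray(format="bgr24")
        img = cv2.flip(img, 1)  # mirror the webcam image
        # ...landmark extraction / emotion prediction would go here...
        return av.VideoFrame.from_ndarray(img, format="bgr24")

st.header("Webcam capture demo")
webrtc_streamer(key="demo", video_processor_factory=FrameProcessor)
```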
<h1>Connect with me</h1>
If you have any queries regarding any of these topics, feel free to reach out to me using the links below:<br>

github : https://github.com/shanmukhi-developer<br>
linkedin : https://www.linkedin.com/in/nadipudi-shanmukhi-satya-6904a0242/<br>
Binary file not shown.
@@ -1,13 +1,13 @@
*Requirements for Running the Project*

Python 3.x
Python libraries:
1. streamlit
2. streamlit-webrtc
3. opencv-python
4. mediapipe
5. keras
6. numpy

-->A pre-trained Keras model (model.h5) and a NumPy labels file (labels.npy), both included in the project.
-->A webcam to capture live video input.
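All of these libraries are available from PyPI and can be installed in one step, e.g. `pip install streamlit streamlit-webrtc opencv-python mediapipe keras numpy` (a typical invocation; no version pins are specified in this commit).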
@@ -0,0 +1,47 @@
*Emotion Based Music Player*<br>
*🎯 Goal*<br>
An emotion-based music player addresses key challenges by using emotion detection techniques, such as facial recognition or voice analysis, to automatically curate playlists that match the user's mood. It adapts to real-time emotional changes, providing a dynamic music experience tailored to the user's feelings.<br>

*🧵 Dataset*<br>
The data files used in this project are emotion.npy and labels.npy, both NumPy array files: labels.npy stores the emotion class names the pretrained model predicts, while emotion.npy is used by the app to hold the most recently detected emotion. Their usage is sketched just below.<br>
<br>
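Concretely, the app included in this commit reads and writes these files with plain NumPy calls; a minimal sketch (the value "happy" is only illustrative):

```python
import numpy as np

# labels.npy holds the emotion class names the pretrained model can predict.
labels = np.load("labels.npy")

# emotion.npy acts as a small scratch file: the video callback saves the latest
# predicted emotion and the recommendation step reads it back.
np.save("emotion.npy", np.array(["happy"]))   # "happy" is only an illustrative value
current_emotion = np.load("emotion.npy")[0]
```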
*🧾 Description*<br>
This project uses Python libraries such as Streamlit, Keras, and NumPy to build the model, take input from the user, and produce the required output.<br>
<br>
*🚀 Models Implemented*<br>
1. Pretrained Deep Learning Model (model.h5):

-->Likely a Convolutional Neural Network (CNN) used for emotion recognition, in particular for processing facial and hand landmarks to detect the user's emotion.
-->The model is loaded with Keras' load_model function, indicating it is a neural network trained on emotion-labeled data.<br>
2. Mediapipe's Holistic and Hands Models:

-->Mediapipe Holistic: used for detecting key facial and body landmarks (see the sketch after this list).
-->Mediapipe Hands: used for detecting hand landmarks to infer gestures that may also contribute to emotion recognition.
<br>
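A minimal sketch of the Mediapipe landmark-extraction step (the offsets relative to a reference landmark mirror the application code in this commit; `face_features` is only an illustrative helper name):

```python
import cv2
import mediapipe as mp

holis = mp.solutions.holistic.Holistic()

def face_features(bgr_frame):
    # Return per-landmark (x, y) offsets relative to a reference face landmark.
    res = holis.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
    feats = []
    if res.face_landmarks:
        ref = res.face_landmarks.landmark[1]   # reference point, as in the app code
        for lm in res.face_landmarks.landmark:
            feats.extend([lm.x - ref.x, lm.y - ref.y])
    return feats
```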
*📚 Libraries Needed*<br>
1. Streamlit
2. Streamlit-webrtc
3. Opencv-python
4. Mediapipe
5. Keras
6. Numpy

<br>
*📢 Conclusion*<br>
The emotion-based music player successfully integrates deep learning and computer vision techniques to create a personalized, emotion-driven music experience. By leveraging facial expression and hand gesture recognition through Mediapipe, combined with a pretrained deep learning model, the system can detect the user's emotional state in real time. This allows for dynamic music recommendations that adapt to the user's mood, enhancing the listening experience.<br>

The project demonstrates how artificial intelligence can transform user interaction with media, making it more intuitive, personalized, and engaging. With future improvements, such as more advanced emotion recognition and enhanced music recommendations, this system could revolutionize how users interact with digital content, making it more emotionally responsive and contextually aware.<br>

<br>

✒️ Your Signature<br>
Nadipudi Shanmukhi Satya<br>
@@ -0,0 +1,102 @@
import streamlit as st
from streamlit_webrtc import webrtc_streamer
import av
import cv2
import numpy as np
import mediapipe as mp
from keras.models import load_model
import webbrowser

# Pretrained emotion classifier and the emotion class labels it predicts.
model = load_model("model.h5")
label = np.load("labels.npy")

# Mediapipe solutions: Holistic for face/hand landmarks, Hands for drawing hand connections.
holistic = mp.solutions.holistic
hands = mp.solutions.hands
holis = holistic.Holistic()
drawing = mp.solutions.drawing_utils

st.header("Emotion Based Music Recommender")

# "run" tracks whether the webcam stream should be active (stored as the strings "true"/"false").
if "run" not in st.session_state:
    st.session_state["run"] = "true"

# emotion.npy is a small scratch file holding the most recently detected emotion.
try:
    emotion = np.load("emotion.npy")[0]
except Exception:
    emotion = ""

if not emotion:
    st.session_state["run"] = "true"
else:
    st.session_state["run"] = "false"


class EmotionProcessor:
    def recv(self, frame):
        frm = frame.to_ndarray(format="bgr24")

        ##############################
        frm = cv2.flip(frm, 1)  # mirror the webcam image

        res = holis.process(cv2.cvtColor(frm, cv2.COLOR_BGR2RGB))

        lst = []

        # Only build features and predict when a face is detected; otherwise the
        # feature vector would not match the model's expected input size.
        if res.face_landmarks:
            # Face landmark offsets relative to a reference face landmark.
            for i in res.face_landmarks.landmark:
                lst.append(i.x - res.face_landmarks.landmark[1].x)
                lst.append(i.y - res.face_landmarks.landmark[1].y)

            # Left-hand landmark offsets (zeros when the hand is not visible).
            if res.left_hand_landmarks:
                for i in res.left_hand_landmarks.landmark:
                    lst.append(i.x - res.left_hand_landmarks.landmark[8].x)
                    lst.append(i.y - res.left_hand_landmarks.landmark[8].y)
            else:
                for i in range(42):
                    lst.append(0.0)

            # Right-hand landmark offsets (zeros when the hand is not visible).
            if res.right_hand_landmarks:
                for i in res.right_hand_landmarks.landmark:
                    lst.append(i.x - res.right_hand_landmarks.landmark[8].x)
                    lst.append(i.y - res.right_hand_landmarks.landmark[8].y)
            else:
                for i in range(42):
                    lst.append(0.0)

            lst = np.array(lst).reshape(1, -1)

            # Predict the emotion, overlay it on the frame, and persist it for the recommender.
            pred = label[np.argmax(model.predict(lst))]

            print(pred)
            cv2.putText(frm, pred, (50, 50), cv2.FONT_ITALIC, 1, (255, 0, 0), 2)

            np.save("emotion.npy", np.array([pred]))

        # Draw the detected landmarks on the outgoing frame.
        drawing.draw_landmarks(frm, res.face_landmarks, holistic.FACEMESH_TESSELATION,
                               landmark_drawing_spec=drawing.DrawingSpec(color=(0, 0, 255), thickness=-1, circle_radius=1),
                               connection_drawing_spec=drawing.DrawingSpec(thickness=1))
        drawing.draw_landmarks(frm, res.left_hand_landmarks, hands.HAND_CONNECTIONS)
        drawing.draw_landmarks(frm, res.right_hand_landmarks, hands.HAND_CONNECTIONS)
        ##############################

        return av.VideoFrame.from_ndarray(frm, format="bgr24")


lang = st.text_input("Language")
singer = st.text_input("singer")
choose = st.text_input("Select")

# Start the webcam stream only while no emotion has been captured yet.
if lang and singer and st.session_state["run"] != "false":
    webrtc_streamer(key="key", desired_playing_state=True,
                    video_processor_factory=EmotionProcessor)

btn = st.button("Recommend me ")

if btn:
    if not emotion:
        st.warning("Please let me capture your emotion first")
        st.session_state["run"] = "true"
    elif choose == "youtube":
        webbrowser.open(f"https://www.youtube.com/results?search_query={lang}+{emotion}+song+{singer}")
    else:
        webbrowser.open(f"https://open.spotify.com/search/{lang}%20{emotion}%20songs%20{singer}")
    np.save("emotion.npy", np.array([""]))
    st.session_state["run"] = "false"
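To run the app locally, the usual Streamlit entry point applies, e.g. `streamlit run music.py` (the actual filename is not shown in this diff view, so music.py is only an assumed name); model.h5 and labels.npy must be present in the working directory.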
@@ -0,0 +1,8 @@
import streamlit as st
from streamlit_webrtc import webrtc_streamer
import av
import cv2
import numpy as np
import mediapipe as mp
from keras.models import load_model
import webbrowser
