The objective of the emotion-based music player project is to create an intelligent system that detects and analyzes users' emotions in real-time through techniques like facial recognition, voice analysis, or biosensors. Based on the detected emotional state, the player automatically curates and adjusts music playlists to enhance the user's mood and provide a personalized listening experience. The system aims to reduce the burden of manual song selection, adapt to emotional changes dynamically, and offer privacy-conscious and culturally relevant music suggestions, while giving users the flexibility to override or customize the music based on their preferences.
### Model(s) used for the Web App 🧮
The models and technologies used in the emotion-based music player project include:
1. Pretrained Keras Model (model.h5): A deep learning model, likely a Convolutional Neural Network (CNN), is loaded to predict emotions based on processed facial landmarks and hand movements.
2. Mediapipe Library: Mediapipe is used to extract facial and hand landmarks, the key points from the user's face and hands that serve as the input features for emotion recognition.
3. Streamlit and WebRTC: Used for the web interface and real-time video streaming, capturing the user's face through the webcam for emotion recognition.
4. The project leverages deep learning (Keras) and computer vision (Mediapipe) to detect emotions from facial and hand landmark data; the predicted emotion then drives the music recommendation, as in the sketch below.
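As a rough illustration of the prediction step, here is a minimal sketch (not the exact repository code) that loads model.h5 and predicts an emotion from one flattened landmark vector. The feature length and the class-name list are assumptions for illustration only.

```python
import numpy as np
from keras.models import load_model

model = load_model("model.h5")                 # pretrained emotion classifier

# Hypothetical class order -- replace with the labels used at training time.
emotions = ["angry", "happy", "neutral", "sad"]

# One flattened row of face + hand landmark coordinates (placeholder values;
# the length 1020 is an assumed feature size, not taken from the repository).
landmarks = np.zeros((1, 1020), dtype="float32")

pred = model.predict(landmarks)                # class probabilities
print("Detected emotion:", emotions[int(np.argmax(pred))])
```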
Welcome to the emotion-based music project, built using Mediapipe and Keras. OpenCV and Streamlit are used to create the web app, and the streamlit-webrtc module captures the webcam feed in the browser.
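Below is a minimal sketch of how the browser webcam can be captured with streamlit-webrtc (using the callback-style API of recent versions) and handed to OpenCV frame by frame. It only marks where the landmark extraction and prediction would run, and is not the repository's exact app code.

```python
import av
import streamlit as st
from streamlit_webrtc import webrtc_streamer

st.title("Emotion Based Music Player")

def video_frame_callback(frame: av.VideoFrame) -> av.VideoFrame:
    img = frame.to_ndarray(format="bgr24")     # webcam frame as an OpenCV BGR array
    # ... run Mediapipe landmark extraction and the Keras model on `img` here ...
    return av.VideoFrame.from_ndarray(img, format="bgr24")

webrtc_streamer(key="emotion", video_frame_callback=video_frame_callback)
```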
<br><br>
<h1>Connect with me</h1>
If you have any queries regarding any of the topics, feel free to reach out to me using the links below:<br>
An emotion-based music player solves key challenges by using emotion detection techniques, such as facial recognition or voice analysis, to automatically curate playlists that match the user’s mood. It adapts to real-time emotional changes, providing a dynamic music experience tailored to the user's feelings.<br>
*🧵 Dataset*<br>
The dataset used in this project consists of emotion.npy and label.npy, NumPy array files that store the numerical feature data and the emotion labels used to train the model.<br>
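For reference, a quick sketch of loading and inspecting the two files; that emotion.npy holds one feature row per sample and label.npy the matching labels is an assumed layout for illustration.

```python
import numpy as np

X = np.load("emotion.npy")   # landmark feature vectors, one row per sample (assumed layout)
y = np.load("label.npy")     # matching emotion label for each row (assumed layout)

print(X.shape, y.shape)      # e.g. (n_samples, n_features) and (n_samples,)
```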
<br>
*🧾 Description*<br>
This project utilizes various Python libraries such as Streamlit, Keras, and NumPy to create the model, take input from the user, and produce the required output.<br>
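The snippet below is a minimal training sketch, assuming emotion.npy and label.npy hold feature rows and label names as described above; the dense architecture and epoch count are illustrative and not the repository's actual network.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

X = np.load("emotion.npy")                    # feature rows (assumed layout)
names = np.load("label.npy")                  # emotion name per row (assumed layout)

classes = sorted(set(names.tolist()))
y = to_categorical([classes.index(n) for n in names.tolist()])

model = Sequential([
    Dense(512, activation="relu", input_shape=(X.shape[1],)),
    Dense(256, activation="relu"),
    Dense(len(classes), activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=50)
model.save("model.h5")
```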
<br>
<br>
*🚀 Models Implemented*<br>
1. Pretrained Deep Learning Model (model.h5):
-->Likely a Convolutional Neural Network (CNN) used for emotion recognition from the processed facial and hand landmarks.
-->The model is loaded using Keras' load_model function, indicating it's a neural network trained on emotion-labeled data.<br>
2. Mediapipe's Holistic and Hands Models:
-->Mediapipe Holistic: Used for detecting key facial and body landmarks.
-->Mediapipe Hands: Used for detecting hand landmarks to infer gestures that may also be used for emotion recognition.
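
A minimal sketch of the landmark-extraction step with Mediapipe Holistic (face and hands in one pass) is shown below; making the coordinates relative to a reference landmark and zero-padding missing hands mirror common preprocessing for such models, but are assumptions here rather than the repository's exact code.

```python
import cv2
import numpy as np
import mediapipe as mp

holistic = mp.solutions.holistic.Holistic()

def extract_features(bgr_frame):
    """Return a (1, n_features) landmark vector for one OpenCV BGR frame."""
    rgb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB)
    res = holistic.process(rgb)

    feats = []
    if res.face_landmarks:
        base = res.face_landmarks.landmark[1]          # reference face landmark
        for lm in res.face_landmarks.landmark:
            feats.extend([lm.x - base.x, lm.y - base.y])

    for hand in (res.left_hand_landmarks, res.right_hand_landmarks):
        if hand:
            base = hand.landmark[8]                    # index-finger tip as reference
            for lm in hand.landmark:
                feats.extend([lm.x - base.x, lm.y - base.y])
        else:
            feats.extend([0.0] * 42)                   # 21 landmarks x 2 coords, zero-padded

    return np.array(feats).reshape(1, -1)
```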
<br>
*📚 Libraries Needed*<br>
1. Streamlit
2. Streamlit-webrtc
3. Opencv-python
4. Mediapipe
5. Keras
6. Numpy
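
These can typically be installed with pip using the package names above (Keras additionally needs a backend such as TensorFlow; pin versions as needed for compatibility):

```bash
pip install streamlit streamlit-webrtc opencv-python mediapipe keras numpy
```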
<br>
*📢 Conclusion*<br>
The emotion-based music player successfully integrates deep learning and computer vision techniques to create a personalized, emotion-driven music experience. By leveraging facial expression and hand gesture recognition through Mediapipe, combined with a pretrained deep learning model, the system can detect the user's emotional state in real-time. This allows for dynamic music recommendations that adapt to the user's mood, enhancing the listening experience.<br>
The project demonstrates how artificial intelligence can transform user interaction with media, making it more intuitive, personalized, and engaging. With future improvements, such as more advanced emotion recognition and enhanced music recommendations, this system could revolutionize how users interact with digital content, making it more emotionally responsive and contextually aware.<br>