Our project focuses on designing and developing a system that recognizes emotions by integrating OpenCV and AI. OpenCV will allow us to efficiently capture facial features from images and video streams. These features will then be translated into emotions such as happiness, sadness, anger, and excitement by a deep learning model trained on a comprehensive emotion-labeled dataset. Specifically, we will use a Convolutional Neural Network (CNN), an architecture well suited to learning the fine-grained spatial patterns in facial images, which should yield more accurate classification results.
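To make the classification step concrete, the sketch below illustrates with NumPy alone what a CNN layer and classifier head compute on a face crop: a convolution extracts a feature map, and a softmax over class scores produces emotion probabilities. This is a toy stand-in, not the project's actual model; the real pipeline would detect faces with OpenCV (e.g. `cv2.CascadeClassifier`) and classify them with a trained CNN, and the label set, input size, and weights here are illustrative assumptions.

```python
import numpy as np

# Hypothetical label set; the real dataset defines the actual classes.
EMOTIONS = ["happy", "sad", "angry", "excited", "neutral"]

def conv2d(image, kernel):
    """Valid-mode 2D convolution: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(x):
    """Turn raw class scores into a probability distribution."""
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Stand-in for a 48x48 grayscale face crop (a common input size
# for facial-expression datasets).
face = rng.random((48, 48))

# One fixed edge-detection kernel stands in for the CNN's learned filters.
edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=float)
feature_map = conv2d(face, edge_kernel)  # shape: (46, 46)

# A random linear head stands in for the trained classification layers.
weights = rng.standard_normal((feature_map.size, len(EMOTIONS)))
probs = softmax(feature_map.flatten() @ weights)
predicted = EMOTIONS[int(np.argmax(probs))]
print(predicted)
```

In the trained system, the kernel values and classifier weights would be learned from the emotion-labeled dataset rather than fixed or random, and several convolution layers with pooling would be stacked before the final softmax.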
Our project addresses the challenge of understanding and interpreting human emotions from facial images and expressions. In a world where remote interaction and communication have become commonplace, technology that is more empathetic and responsive to human emotions would be highly beneficial. In this project, AI techniques, particularly deep learning, will be leveraged to build a model that can accurately classify and interpret a range of emotions from facial images. By the end of the project, we hope to enhance the quality of virtual interactions by making them more context-aware and empathetic, which would in turn improve the user experience across a range of virtual applications. The ability to automatically recognize and classify emotions from facial features has numerous applications in areas such as human-computer interaction (entertainment and gaming), customer sentiment analysis, and mental health monitoring. Our project aligns with AI topics discussed in class through its use of computer vision, deep learning, and pattern recognition.