This project combines sentiment analysis with image captioning using the Salesforce BLIP model and Hugging Face's Transformers library. The chatbot analyzes the sentiment of text input and generates captions for uploaded images.
Python: Backend development and machine learning.
Flask: Web framework for handling backend requests.
Streamlit: Tool for deploying and serving the web application.
Hugging Face Transformers: Library for NLP and image captioning.
HTML, CSS, JavaScript: Frontend development for user interaction.
Install Dependencies:
pip install transformers torch torchvision flask streamlit pillow
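The same dependencies can also be kept in requirements.txt and installed with pip install -r requirements.txt. A minimal version, containing only the packages from the command above (no version pins), could look like this:

transformers
torch
torchvision
flask
streamlit
pillow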
Backend:
python app.py

Frontend (Streamlit):
streamlit run frontend.py

Accessing the Application:
Open your browser and navigate to http://localhost:8501 to interact with the chatbot and image captioning interface.
app.py: Flask backend for processing requests.
frontend.py: Streamlit frontend for the user interface.
requirements.txt: List of Python dependencies.
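As an illustration, a minimal sketch of app.py could look like the following. The route names (/sentiment, /caption), the request field names, and the BLIP checkpoint are assumptions made for this example, not the repository's actual code:

# app.py - minimal Flask backend sketch (route names, field names, and checkpoint are assumptions)
import io
from flask import Flask, request, jsonify
from PIL import Image
from transformers import pipeline, BlipProcessor, BlipForConditionalGeneration

app = Flask(__name__)

# Sentiment analysis pipeline (default Hugging Face sentiment model)
sentiment = pipeline("sentiment-analysis")

# BLIP image-captioning processor and model
blip_processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
blip_model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

@app.route("/sentiment", methods=["POST"])
def analyze_sentiment():
    text = request.json.get("text", "")
    result = sentiment(text)[0]  # e.g. {"label": "POSITIVE", "score": 0.99}
    return jsonify(result)

@app.route("/caption", methods=["POST"])
def caption_image():
    image = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    inputs = blip_processor(images=image, return_tensors="pt")
    output_ids = blip_model.generate(**inputs, max_new_tokens=30)
    caption = blip_processor.decode(output_ids[0], skip_special_tokens=True)
    return jsonify({"caption": caption})

if __name__ == "__main__":
    app.run(port=5000, debug=True)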
Sentiment Analysis:
Enter text in the chat interface and receive sentiment analysis results.
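For illustration, the chat side of frontend.py could look like the sketch below, assuming the backend sketch above is running on port 5000 and exposes the hypothetical /sentiment route (the requests library is an extra dependency not listed in the install command):

# frontend.py - sentiment part of the Streamlit UI (backend URL and route are assumptions)
import requests
import streamlit as st

st.title("Sentiment Analysis Chatbot")

text = st.text_input("Enter your message")
if st.button("Analyze") and text:
    # Send the text to the Flask backend and display the predicted sentiment
    response = requests.post("http://localhost:5000/sentiment", json={"text": text})
    result = response.json()
    st.write(f"Sentiment: {result['label']} (score: {result['score']:.2f})")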
Image Captioning:
Upload an image to get an automated caption generated by the BLIP model. Images can be added through the webcam or selected from the system, and the generated caption is shown in the result window.
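A sketch of how the image side of frontend.py could support both webcam capture and system-image upload in Streamlit, again assuming the hypothetical /caption route from the backend sketch:

# frontend.py - image captioning part of the Streamlit UI (backend URL and route are assumptions)
import requests
import streamlit as st

st.header("Image Captioning")

# Let the user either upload a system image or capture one from the webcam
uploaded = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"])
camera = st.camera_input("Or take a picture")
image_file = uploaded or camera

if image_file is not None:
    st.image(image_file)  # preview the chosen image
    response = requests.post(
        "http://localhost:5000/caption",
        files={"image": image_file.getvalue()},
    )
    st.write("Caption:", response.json()["caption"])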
Deployed using Gradio for easy web deployment and sharing (Streamlit, Flask, or other cloud platforms can also be used). Please find the deployed link of the web application in the description for testing/implementation purposes.
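For reference, a minimal Gradio version of the captioning interface could look like the sketch below (gradio is an extra dependency, and the checkpoint is the same assumed one as above); launch(share=True) generates a temporary public link similar to the deployed link mentioned here:

# gradio_app.py - minimal Gradio sketch for BLIP captioning (file name and checkpoint are assumptions)
import gradio as gr
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def caption(image):
    # Gradio passes the uploaded image as a PIL.Image when type="pil"
    inputs = processor(images=image, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(output_ids[0], skip_special_tokens=True)

demo = gr.Interface(fn=caption, inputs=gr.Image(type="pil"), outputs="text", title="BLIP Image Captioning")
demo.launch(share=True)  # share=True creates a temporary public URL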