VidTOMCQ is an AI-powered project that lets users upload a video and automatically generates 5 multiple-choice questions (MCQs) based on the video's content. The project combines speech recognition, video processing, and generative AI into a tool for learning and assessment.
- Extracts audio from user-uploaded video input.
- Transcribes audio content using advanced speech recognition technology.
- Utilizes AI to generate 5 relevant MCQs based on the transcribed content.
- Simple and intuitive web interface for easy video upload and question display.
- Flask: Backend web framework for routing and request handling.
- MoviePy: For processing and extracting audio from video.
- SpeechRecognition: To transcribe video audio into text.
- Google Generative AI: To create meaningful and context-based MCQs from the transcribed text.
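A minimal sketch of how the Flask layer might tie these pieces together. The route names, template names, form field name, and the `generate_mcqs` helper are assumptions made for illustration, not identifiers taken from the actual project:

```python
# Minimal sketch of the Flask upload flow. Route, template, and form field
# names below are assumptions, not the project's actual identifiers.
import os

from flask import Flask, request, render_template

app = Flask(__name__)


def generate_mcqs(video_path):
    # Hypothetical processing helper; one possible implementation is
    # sketched further below under the processing steps.
    ...


@app.route("/")
def index():
    # Serve the upload form.
    return render_template("index.html")


@app.route("/upload", methods=["POST"])
def upload():
    # Save the uploaded video, then hand it to the processing pipeline.
    video = request.files["video"]
    video_path = os.path.join("uploads", video.filename)
    video.save(video_path)
    questions = generate_mcqs(video_path)
    return render_template("result.html", questions=questions)
```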
- The user uploads a video file via the web interface.
- The system extracts the audio from the video.
- Speech recognition transcribes the audio into text.
- Google Generative AI analyzes the text and generates 5 MCQs.
- The questions are displayed on the results page.
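The steps above could translate into roughly the sketch below. File paths, the Gemini model name, the API-key environment variable, and the prompt wording are assumptions for illustration, and the MoviePy import path differs between v1 (`moviepy.editor`) and v2 (`moviepy`):

```python
# Rough sketch of the processing pipeline; paths, model name, and prompt
# wording are illustrative assumptions, not the project's actual values.
import os

import google.generativeai as genai
import speech_recognition as sr
from moviepy.editor import VideoFileClip  # MoviePy >= 2.0: from moviepy import VideoFileClip


def generate_mcqs(video_path: str) -> str:
    # 1. Extract the audio track from the uploaded video.
    audio_path = "audio.wav"
    with VideoFileClip(video_path) as clip:
        clip.audio.write_audiofile(audio_path)

    # 2. Transcribe the audio into text with SpeechRecognition.
    recognizer = sr.Recognizer()
    with sr.AudioFile(audio_path) as source:
        transcript = recognizer.recognize_google(recognizer.record(source))

    # 3. Ask the generative model for 5 MCQs based on the transcript.
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])  # env var name is an assumption
    model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption
    prompt = (
        "Generate 5 multiple-choice questions, each with 4 options and the "
        "correct answer marked, based on this transcript:\n" + transcript
    )
    return model.generate_content(prompt).text
```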
- Clone the repository:
  git clone https://github.com/abhirajadhikary06/VidToMCQ.git
- Navigate to the project directory:
  cd VidToMCQ
- Install the required dependencies (a sample requirements.txt is sketched below):
  pip install -r requirements.txt
- Run the Flask application:
  flask run
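The repository's actual requirements.txt is not reproduced here; based on the tech stack above, it would presumably list at least the following packages (version pins omitted, as they are not specified in this document):

```
Flask
moviepy
SpeechRecognition
google-generativeai
```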
- Access the web interface by navigating to http://localhost:5000 in your web browser.
- Upload a video file (supported formats: .mp4, .avi, etc.).
- The system processes the video and generates 5 MCQs.
- View the generated questions on the result page.
After uploading a video, the system generates questions such as:
- What was the main topic discussed in the video?
- Which key point was emphasized in the video?
- What is the significance of [concept] in the video?
- How did the speaker define [term]?
- What solution was proposed for [problem]?
Screenshots: upload file page, video file processing, and generated questions page.
---
Developed with ❤️ by the VidTOMCQ team.