This is a simple Streamlit UI for OpenAI's Whisper speech-to-text model. It lets you download and transcribe media from YouTube videos, playlists, or local files. You can then browse, filter, and search through your saved audio files. Feel free to raise an issue for bugs or feature requests, or send a PR.
whisper-ui-update-demo.mp4
This was built & tested on Python 3.11, but it should also work on Python 3.9+ (as with the original Whisper repo).
You'll need to install `ffmpeg` on your system. Then, install the requirements with `pip`:
```bash
sudo apt install ffmpeg
pip install -r requirements.txt
```
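If you want to sanity-check the setup before launching the UI, the `openai-whisper` package also ships a `whisper` CLI you can try directly; a minimal sketch (the audio filename below is just a placeholder):

```bash
# confirm both dependencies are available on PATH
ffmpeg -version
whisper --help

# optionally transcribe a local file straight from the CLI
# ("audio.mp3" is a placeholder for any media file you have on hand)
whisper audio.mp3 --model base
```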
If you're using conda, you can create a new environment with the following command:
```bash
conda env create -f environment.yml
```
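After creating the environment, remember to activate it. The environment name is defined in `environment.yml`; it's shown below as `whisper-ui`, which is an assumption, so check the file for the actual name:

```bash
# activate the environment created from environment.yml
# (the name "whisper-ui" is an assumption; use the name declared in the file)
conda activate whisper-ui
```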
Note: If you're using a CPU-only machine, your runtime can be sped up by using the quantization implemented by @MiscellaneousStuff. To do this, swap out `pip install openai-whisper` from `requirements.txt` and replace it with their fork: `pip install git+https://github.com/MiscellaneousStuff/whisper.git` (see the related discussion in hayabhay#20).
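If you already have an environment set up with the stock package, one way to apply the swap in place is sketched below (the fork URL is the one from the note above):

```bash
# remove the stock package and install the quantized fork in its place
pip uninstall -y openai-whisper
pip install git+https://github.com/MiscellaneousStuff/whisper.git
```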
The `mic_support` branch adds microphone input. Because browsers only allow microphone access over a secure connection, you'll need HTTPS: either terminate SSL in a reverse proxy or load balancer, or use Streamlit's built-in HTTPS support (available in Streamlit 1.20.0 or later). Mic input is treated the same as uploaded audio.
The easiest option is to create a self-signed certificate:
```bash
cd ssl
./create-self-signed-cert.sh
```
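If you'd rather not use the helper script (or want to see roughly what it does), a self-signed certificate can also be generated directly with `openssl`; this is a sketch, not necessarily what `create-self-signed-cert.sh` actually runs:

```bash
# generate a self-signed key/cert pair valid for one year
# (filenames match the streamlit command below; adjust as needed)
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout server.key -out server.crt -subj "/CN=localhost"
```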
Once you're set up, you can run the app with:
```bash
streamlit run app/01_🏠_Home.py --server.sslCertFile 'ssl/server.crt' --server.sslKeyFile 'ssl/server.key'
```
This will open a new tab in your browser with the app. You can then select a YouTube URL or local file & click "Run Whisper" to run the model on the selected media.
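If you don't want to pass the SSL flags on every run, the same settings can be kept in Streamlit's config file instead; a sketch, assuming you launch the app from the repo root (`server.sslCertFile`/`server.sslKeyFile` require Streamlit 1.20.0+):

```bash
# write the SSL settings into the project's Streamlit config
mkdir -p .streamlit
cat > .streamlit/config.toml <<'EOF'
[server]
sslCertFile = "ssl/server.crt"
sslKeyFile = "ssl/server.key"
EOF
```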
Alternatively, you can run the app containerized with Docker via the included docker-compose.yml. Simply run:
```bash
docker compose up
```
Then open up a new tab and navigate to https://localhost:8501/
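If you later change the requirements or the app code, you'll typically want to rebuild the image rather than reuse the cached one:

```bash
# rebuild the image before starting the containers
docker compose up --build
```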
NOTE: For existing users, this will break the database since absolute file paths are saved. A fix is planned.
All notable changes to this project, along with a potential feature roadmap, will be documented in this file.
Whisper is licensed under MIT while Streamlit is licensed under Apache 2.0. Everything else is licensed under MIT.