# How to create high-quality offline video transcriptions and subtitles using Whisper and Python
I always thought that 'doing things with AI' was equivalent to smoking data centers, overheated servers, and massive cloud computing power.

As I work with ChatGPT regularly, I had heard of [Whisper, OpenAI’s speech-to-text model](https://github.com/openai/whisper).

After some research to see if this could suit my ASR (Automatic Speech Recognition) needs, I found out that [this model excels in Dutch](https://github.com/openai/whisper?tab=readme-ov-file#available-models-and-languages). But it also performs well in English.

OK, that already sounds promising. But Whisper doesn’t have a user-friendly front end (as far as I know), so I had to work with the API and Python. Fortunately, I found [this short blog post](https://nicobytes.com/blog/en/how-to-use-whisper/) to help me get started, and, combined with the [documentation](https://platform.openai.com/docs/guides/speech-to-text), it was straightforward to set up.

Further in this article, you’ll read about what I ultimately created with it and find ready-to-use Python code to try it out yourself.

## FFmpeg is needed

If you’re using the Whisper API with Python, you’ll need [FFmpeg](https://www.ffmpeg.org/) installed on your system.

Here’s what it looks like on my home laptop. I followed the guide mentioned above.

*(Screenshot: FFmpeg installed on my home laptop.)*
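Whisper shells out to FFmpeg to decode audio, so it must be findable on your PATH. A quick sanity check from Python (a sketch of mine, not part of the guide above) could look like this:

```python
# Sanity check: Whisper calls FFmpeg under the hood to decode audio,
# so it must be installed and on your PATH.
import shutil
import subprocess

if shutil.which("ffmpeg") is None:
    raise SystemExit("FFmpeg not found. Install it and make sure it is on your PATH.")

# Print the first line of the version banner, e.g. "ffmpeg version 6.1 ..."
banner = subprocess.run(["ffmpeg", "-version"], capture_output=True, text=True).stdout
print(banner.splitlines()[0])
```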

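To give an idea of what the Python side looks like, here is a minimal sketch using the open-source `whisper` package (the file name is a placeholder; my full, ready-to-use code follows later in this article):

```python
# Minimal Whisper transcription sketch.
# Install the package first: pip install openai-whisper
import whisper

# Load the model; on the first run it is downloaded and cached locally.
model = whisper.load_model("large")

# Transcribe any audio/video file FFmpeg can decode; "workshop.webm" is a placeholder name.
# Setting the language explicitly ("nl" for Dutch) skips auto-detection.
result = model.transcribe("workshop.webm", language="nl")

print(result["text"])
```

Note that `transcribe` accepts anything FFmpeg can decode, including .webm video files, which is why the FFmpeg install above matters.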
## Offline use, so privacy friendly

When you run this piece of Python code for the first time, the ‘large’ model is downloaded to your machine once. (See here for [the available models](https://github.com/openai/whisper?tab=readme-ov-file#available-models-and-languages).)

To my great surprise, it’s just a 3GB file that handles all speech-to-text tasks without needing any further internet connection. So no smoking data centers, overheated servers, or massive cloud computing power—just a file on your own computer that you can use offline. And it’s great for privacy since everything happens offline on your own machine.

Here’s the model on my home laptop. What happens inside that .pt file is pure magic!

*(Screenshot: the downloaded .pt model file on my home laptop.)*
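Where does that file live? Assuming the default cache location used by the `openai-whisper` package (you can override it when loading the model), you can inspect it like this:

```python
# Inspect Whisper's local model cache; ~/.cache/whisper is the package default.
from pathlib import Path

cache_dir = Path.home() / ".cache" / "whisper"

# One .pt file per downloaded model, e.g. '.../large-v2.pt' (about 3GB).
print(sorted(cache_dir.glob("*.pt")))
```

Once the file is there, `whisper.load_model` reads it straight from disk and no network access is needed at all.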
## Speed

Does transcription go reasonably fast? The ‘large-v2’ model I use operates at about […]

## And such quality! With subtitles! Even with poor input!

Beyond offline use, I am utterly amazed by the quality of the generated text. I’ll show this best through this (rather dull and quite lengthy) test video where I used myself as the test subject:

https://commons.wikimedia.org/wiki/File:Wikidata_Workshop_-_Theoretical_part_-_Maastricht_University_-_15_October_2024.webm
The unformatted block of text shown in the file description was fully created using Whisper, with only minimal post-correction. Pay particular attention to how well it generates all named entities and technical terms, including proper capitalization, etc. WOW!
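As for the subtitles mentioned in the heading: Whisper returns timestamped segments alongside the plain text, so turning a transcription into an .srt subtitle file takes only a small helper. Here is a sketch (file name and language are placeholders; the package also ships its own SRT writer in `whisper.utils`):

```python
# Sketch: write Whisper's timestamped segments out as an .srt subtitle file.
import whisper

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:01:02,345."""
    ms = int(round(seconds * 1000))
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    secs, ms = divmod(ms, 1_000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

model = whisper.load_model("large-v2")
result = model.transcribe("workshop.webm", language="en")  # placeholder file name and language

with open("workshop.srt", "w", encoding="utf-8") as srt:
    for index, segment in enumerate(result["segments"], start=1):
        srt.write(f"{index}\n")
        srt.write(f"{srt_timestamp(segment['start'])} --> {srt_timestamp(segment['end'])}\n")
        srt.write(f"{segment['text'].strip()}\n\n")
```

Each segment becomes one numbered subtitle block, with start and end times in the HH:MM:SS,mmm format that .srt players expect.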