Replies: 1 comment
-
Hi @popov-ig ! It is a good question.
You can choose any model; I recommend a Large one if your hardware is fast enough. The main consideration on my side was response time. I recently tested different ways to run Whisper, and the simple conclusion is: if you run it on a machine with a powerful GPU, local inference is the fastest option; if not, the OpenAI API is the fastest. I usually run the app from my laptop or on HF Spaces (without a GPU), so my default choice is the OpenAI API.
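To make the trade-off concrete, the decision above can be sketched as a small helper (a hypothetical illustration, not code from the project; the function name and return values are my own):

```python
def choose_whisper_backend(has_gpu: bool, has_openai_key: bool) -> str:
    """Pick the fastest Whisper backend, following the reasoning above:
    a powerful local GPU beats the API; otherwise the API beats CPU inference."""
    if has_gpu:
        return "local_gpu"      # fastest: run Whisper locally on the GPU
    if has_openai_key:
        return "openai_api"     # no GPU: the hosted API is faster than local CPU
    return "local_cpu"          # fallback: slow, but works fully offline


# e.g. a laptop or an HF Space without a GPU, but with an API key:
print(choose_whisper_backend(has_gpu=False, has_openai_key=True))  # openai_api
```

The actual transcription call would then dispatch to either a locally loaded model or the API client, but the speed ranking is the whole of the logic.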
-
Hi @IliaLarchenko!
Thanks for creating your AI Interviewer. It looks really interesting!
I noticed you're using the OpenAI Whisper API instead of running the model locally.
Could you share the reasoning behind this decision? I'm curious about the trade-offs you considered.
Specifically, I saw this line in the audio.py file.
Would you mind elaborating on why you chose to go with the API approach?
I'm looking forward to learning more about your project!