The request takes a very long time, even when I give it a tiny audio file and use the tiny.en model, and eventually it always ends in an internal server error.
Could it be that the micro-service only accepts audio files in some subset of file formats / codecs? If yes, which ones?
I've tried .mp3 and .wav so far.
I was hoping that running the tiny.en model locally in my Docker environment would return a result faster than using OpenAI's Whisper API, but that does not seem to be the case.
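In case the format is the problem, here's a minimal sketch of how I'd normalize the input before uploading it, assuming the service prefers 16 kHz mono 16-bit PCM WAV (the format Whisper resamples to internally) and that ffmpeg is installed; the filenames are placeholders:

```python
import subprocess

# Convert an arbitrary input file to 16 kHz mono 16-bit PCM WAV with ffmpeg.
# "input.mp3" and "output.wav" are placeholder names for my test files.
subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", "input.mp3",      # source file (.mp3 or .wav)
        "-ar", "16000",         # resample to 16 kHz
        "-ac", "1",             # downmix to mono
        "-c:a", "pcm_s16le",    # 16-bit PCM codec
        "output.wav",
    ],
    check=True,
)
```

Running both test files through this first would at least rule out codec support as the cause.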
I've tried two audio files, one of about 5 seconds and one of about 40 seconds. In both mp3 and wav format they were no more than a couple of kilobytes.
Can you show me the internal server error?
I don't understand how that would help you. It doesn't give any information beyond "internal server error". Are you saying I should be able to get a more detailed error with a stack trace from somewhere? If yes, where should I be able to find that?
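For reference, here's a minimal sketch of how I'm calling the service and capturing the full response, assuming a POST endpoint at /transcribe on localhost:8000 (the path and port are guesses on my part, not confirmed anywhere):

```python
import requests

# Upload the converted test file to the (assumed) transcription endpoint.
with open("output.wav", "rb") as f:
    resp = requests.post(
        "http://localhost:8000/transcribe",  # placeholder URL
        files={"file": ("output.wav", f, "audio/wav")},
        timeout=300,
    )

# Print the status code and raw body; some frameworks put a traceback or
# error detail in the body beyond the generic "internal server error".
print(resp.status_code)
print(resp.text)
```

If there's more detail than that in the container's own logs, let me know where to look.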