Not yet, but Whisper doesn't take up much memory at inference, so with batching supported, a single 4090 running whisper-small should handle around 25-30 concurrent clients.
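For anyone wanting to sanity-check that figure, here is a rough back-of-envelope VRAM estimate. The per-client and runtime overhead numbers are assumptions for illustration, not measured values from this framework:

```python
# Back-of-envelope estimate of concurrent whisper-small streams on one GPU.
# Overhead figures below are assumed for illustration, not measured.

GPU_VRAM_GB = 24.0          # RTX 4090 total VRAM
MODEL_WEIGHTS_GB = 0.5      # whisper-small (~244M params) in fp16, approx.
RUNTIME_OVERHEAD_GB = 1.5   # CUDA context, framework buffers (assumed)
PER_CLIENT_GB = 0.8         # per-stream activations / audio buffers (assumed)

usable = GPU_VRAM_GB - MODEL_WEIGHTS_GB - RUNTIME_OVERHEAD_GB
max_clients = int(usable // PER_CLIENT_GB)
print(f"Estimated concurrent clients: {max_clients}")  # ~27 with these assumptions
```

With those assumed numbers the estimate lands in the 25-30 range mentioned above; in practice the limit will also depend on batching efficiency and CPU-side audio preprocessing, not just VRAM.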
I am asking about the scalability of this framework: can it be used in production with many users sending streams at the same time?