Supporting multiple GPU models #10
Another question that comes up if we add multi-GPU support: what should the default number of GPUs be? Just 1, or the maximum number of GPUs available?
I think supporting multiple GPUs would be nice, but it's definitely not critical for the MVP. If people have access to multiple GPUs, they can always parallelize over the data to make use of all of them, without the need to explicitly support multi-GPU inference. Also, would this require saving additional model files, since Keras multi-GPU model files are stored differently on disk?
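To make the "parallelize over the data" alternative concrete, here is a minimal sketch: one worker process per GPU, each pinned to a single device via `CUDA_VISIBLE_DEVICES`, each loading the same ordinary single-GPU model and embedding its shard. The model path and the `predict()` call are hypothetical placeholders, not part of this project's API.

```python
# Sketch: data parallelism across GPUs without any multi-GPU model support.
import os
import multiprocessing as mp

import numpy as np


def embed_shard(gpu_id, shard, model_path, queue):
    # Pin this process to one GPU before the framework initializes CUDA.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    from keras.models import load_model  # import after setting visibility

    model = load_model(model_path)  # plain single-GPU model file
    queue.put((gpu_id, model.predict(shard)))


def embed_on_all_gpus(data, model_path, n_gpus):
    shards = np.array_split(data, n_gpus)
    queue = mp.Queue()
    workers = [
        mp.Process(target=embed_shard, args=(i, shard, model_path, queue))
        for i, shard in enumerate(shards)
    ]
    for w in workers:
        w.start()
    # Drain the queue before joining to avoid blocking on large results.
    results = dict(queue.get() for _ in workers)
    for w in workers:
        w.join()
    # Reassemble shard outputs in shard order to preserve input order.
    return np.concatenate([results[i] for i in range(n_gpus)])
```

The upside of this approach is that nothing about the saved model or the loading code changes; the downside is that each process pays the full model-loading cost.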
Fair enough, let's not prioritize this for the MVP. Supporting multiple GPUs wouldn't require saving additional model files; we can wrap each model for multi-GPU inference after loading it.
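For reference, a sketch of what "wrap after loading" could look like with the Keras 2.x `keras.utils.multi_gpu_model` helper (since removed from newer Keras/TF releases). The file on disk stays a plain single-GPU model; the replica wrapper is applied only in memory at inference time. The model path and input batch here are placeholders.

```python
import numpy as np
from keras.models import load_model
from keras.utils import multi_gpu_model

model = load_model("embedding_model.h5")  # ordinary single-GPU weights

try:
    # Replicates the model on each GPU and splits every input batch
    # across the replicas at predict() time.
    parallel_model = multi_gpu_model(model, gpus=2)
except ValueError:
    # Fewer than two visible GPUs: fall back to the original model.
    parallel_model = model

batch = np.random.rand(64, 128).astype("float32")  # placeholder input
embeddings = parallel_model.predict(batch)
```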
I imagine we could support it via an …
Should support for running the embedding models on multiple GPUs be prioritized? Here are the pros and cons as I see them (not necessarily equally weighted in importance):
Pros
Cons
All in all, I think that if we believe that using multiple GPUs will be a common use case, then we should include it. But if it's something that will be rarely used, if at all, we shouldn't prioritize it (at least for an MVP).