How to limit the number of cores used for training #6
I'm training a model on a supercomputer, and since `train()` is using all cores, I would like to limit the number of cores used. I tried setting `num_cores = 20`, but that doesn't change anything and all cores are still used. Is there another way to limit the number of cores when running `train()`?

Comments

If you set …

I set …

It could be that your HPC is treating these commands differently, because HPCs run a little differently than standard computers. Are you using a GPU? If so, how many?

Hi @mikeyEcology …

The issue here is the release of tensorflow; @hannaboe this will probably help your problem as well. The installation of tensorflow is different if you are using a GPU, so when you install it, you should use …
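The thread does not show the final resolution, but a common way to cap TensorFlow's CPU usage is to limit its thread pools before TensorFlow is imported. A minimal sketch, assuming a TensorFlow backend and a CPU-only run (the `20` matches the `num_cores` value tried above; the variable names are standard TensorFlow/OpenMP settings, not something from this thread):

```python
import os

# Hypothetical sketch, not the fix confirmed in this thread.
# TensorFlow reads these settings when it starts, so they must be set
# BEFORE `import tensorflow` (or before launching the training script).
n_cores = "20"

# Thread count for OpenMP, used by MKL builds of TensorFlow.
os.environ["OMP_NUM_THREADS"] = n_cores
# TensorFlow's own intra-op (per-operation) and inter-op (between
# operations) thread pools.
os.environ["TF_NUM_INTRAOP_THREADS"] = n_cores
os.environ["TF_NUM_INTEROP_THREADS"] = n_cores

print(os.environ["OMP_NUM_THREADS"])
```

On an HPC cluster these variables can also be exported in the job script (e.g. `export OMP_NUM_THREADS=20` before the `Rscript`/`python` call), which avoids editing the training code. TensorFlow 2.x additionally exposes the same limits programmatically via `tf.config.threading.set_intra_op_parallelism_threads()` and `set_inter_op_parallelism_threads()`, which must likewise be called before any ops run.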