multiprocessing is slow and doesn't use much GPU #19
Comments
Are you sure?
Yeah, I am running the multiprocessing training and it only uses about 35% of my GPU, and running on CPU is the same speed. I am also using the Binance API to get 1-minute candles to train, which will of course take 60x longer across the same time range, but there is still no difference in time per training iteration between GPU multiprocessing and CPU.
Hey, this is a tutorial, not a release; I wasn't working on it for maximum efficiency... :)
Yeah, thanks for replying. First, I must say thanks for the tutorials, they are amazing!! I was wondering if maybe I was doing something different. The agents come in sequentially, i.e. 0, 1, 2... Should multiprocessing work to run the agents simultaneously?
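For context, this is roughly what "running the agents simultaneously" would look like with Python's `multiprocessing` module. It is a minimal sketch, not the tutorial's actual code; `train_agent` and `num_agents` are hypothetical placeholders.

```python
# Minimal sketch: launching several agents in parallel with multiprocessing.
# `train_agent` and `num_agents` are placeholders, not the tutorial's code.
from multiprocessing import Process

def train_agent(agent_id):
    # Placeholder for one agent's training loop.
    print(f"Agent {agent_id} training...")

if __name__ == "__main__":
    num_agents = 4
    processes = [Process(target=train_agent, args=(i,)) for i in range(num_agents)]
    for p in processes:
        p.start()   # all agents run at the same time, not 0, 1, 2... in sequence
    for p in processes:
        p.join()    # wait for every agent to finish
```

If the agents print their IDs strictly in order (0, 1, 2...), they are most likely being run one after another in the main process rather than in separate processes like this.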
I get that it's a tutorial about RL, but I had hoped to learn about multiprocessing too :)
Yeah, I wasn't good enough with multiprocessing/multithreading at the point when I was writing this tutorial, and right now I don't have time to continue developing it. Looking at it now, I see that the model is not written in its most efficient way, which means the models spend a lot of time on the CPU, and that prevents them from using more of the GPU's power.
Thank you for the tutorials, you're an awesome dude! Maybe I'll learn more about it in my own time, put together my own tutorial, and see if you would pull it back in as a #8 tutorial.
One thing I figured out is that the learning speed increases with learning rate decay. I tried to implement exponential learning rate decay in the shared model.
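For anyone curious, exponential learning rate decay can be wired up in Keras roughly like this. This is a minimal sketch, assuming a TensorFlow/Keras model; the initial rate, decay steps, and decay rate below are made-up values, not the tutorial's settings.

```python
# Minimal sketch of exponential learning rate decay with Keras.
# All numeric values are illustrative assumptions.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,  # starting learning rate (assumed)
    decay_steps=10_000,          # apply decay every 10,000 training steps (assumed)
    decay_rate=0.96,             # multiply the rate by 0.96 at each decay step (assumed)
)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
# model.compile(optimizer=optimizer, loss=...)  # plug into the shared model as usual
```

Passing the schedule object as `learning_rate` lets the optimizer lower the rate automatically as training progresses, instead of keeping it fixed.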
Is there any way to speed up training and get the program to use more GPU?