I have added:

```python
del job.process
del job
gc.collect()
torch.cuda.empty_cache()
```

This reduces the memory usage, but I still have a memory leak.
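A frequent cause of this kind of "leak" is that `torch.cuda.empty_cache()` only returns *cached* allocator blocks to the driver; any tensor still referenced somewhere in Python keeps its GPU memory alive. A minimal sketch of the cleanup pattern (the `Job` class and `release` helper here are hypothetical stand-ins for the trainer's job object, not the project's actual API):

```python
import gc


class Job:
    """Hypothetical stand-in for the training job holding model/optimizer state."""

    def __init__(self):
        self.process = [object() for _ in range(3)]  # placeholder for GPU-backed state


def release(job):
    """Drop references so the allocator can actually reclaim the memory."""
    del job.process          # remove the last references to the heavy state
    gc.collect()             # break any reference cycles keeping it alive
    try:
        import torch
        if torch.cuda.is_available():
            # Only returns *cached* blocks to the driver; tensors still
            # referenced anywhere in Python are NOT freed by this call.
            torch.cuda.empty_cache()
    except ImportError:
        pass  # sketch also runs without torch installed


job = Job()
release(job)
del job        # the caller's reference must go too, or nothing is freed
gc.collect()
```

If memory still grows after this, something (a logger, a closure, an exception traceback) is usually still holding a reference to a tensor or to the job itself.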
Another memory problem: when I run two trainings in parallel on two GPUs, the one using `cuda:1` always allocates some memory on `cuda:0` when executing the line:
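This usually happens because the first CUDA call in a process initializes a context on the default device (`cuda:0`) even when the work runs on `cuda:1`. One way to avoid it, assuming each training runs in its own process (the script name below is hypothetical), is to restrict device visibility with `CUDA_VISIBLE_DEVICES` *before* `torch` is imported, so the visible card becomes that process's `cuda:0`:

```python
import os

# Must be set before the first `import torch` in this process;
# once CUDA is initialized, the device mapping is fixed.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # physical GPU 1 is the only visible card

try:
    import torch  # imported after the env var on purpose

    if torch.cuda.is_available():
        # Inside this process, "cuda:0" now maps to physical GPU 1,
        # and no context (hence no memory) is ever created on GPU 0.
        device = torch.device("cuda:0")
except ImportError:
    pass  # torch not installed in this sketch environment
```

Equivalently, launch each process as `CUDA_VISIBLE_DEVICES=1 python <script>` from the shell, so the restriction is in place before the interpreter even starts.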
**This is for bugs only**

Did you already ask in the discord?

Yes/No

You verified that this is a bug and not a feature request or question by asking in the discord?

Yes/No

**Describe the bug**