Thank you for pointing out that empty_cache() is missing. The underlying library does keep a high-watermark of allocated GPU memory, so even when you dispose of tensors, the overall allocation won't necessarily go down. I'll see how I can get empty_cache() implemented. Please file a bug!
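For readers coming from PyTorch, the high-watermark behavior described here can be observed directly. This is a small sketch, not part of either library's docs; the tensor size is arbitrary, and it only does anything on a machine with a CUDA device:

```python
import torch

def report(tag: str) -> None:
    # memory_allocated: bytes held by live tensors
    # memory_reserved:  bytes the caching allocator keeps from the driver
    print(f"{tag}: allocated={torch.cuda.memory_allocated():,} "
          f"reserved={torch.cuda.memory_reserved():,}")

if torch.cuda.is_available():
    t = torch.randn(4096, 4096, device="cuda")  # ~64 MB of float32
    report("after allocation")
    del t                         # the tensor's bytes are released ...
    report("after del")           # ... but reserved stays at its high watermark
    torch.cuda.empty_cache()      # return cached blocks to the driver
    report("after empty_cache")
else:
    print("No CUDA device available; nothing to demonstrate.")
```

The gap between `memory_allocated` and `memory_reserved` is exactly the cache that `empty_cache()` releases; external monitors like nvidia-smi report the reserved figure.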
---
Hi,
I'm hoping to get some guidance on releasing GPU memory. When I run the code sample below (in a Jupyter notebook), my GPU monitoring shows that the memory allocated during execution is still in use, even though I've done everything I know to do to dispose of the intermediate tensors:
```fsharp
#r "nuget:TorchSharp-cuda-linux"
#r "nuget:TorchSharp"

open TorchSharp

let test () =
    use d = torch.NewDisposeScope()
    use tt = torch.randn(50000, 50000, device = torch.device("cuda:7"))
    tt.MoveToOuterDisposeScope()

let test2 () =
    use d2 = torch.NewDisposeScope()
    use ttt = test ()
    ()

let empty_result = test2 ()
```
I get a similar result when I run the same experiment in Python with PyTorch:
```python
import torch

def test():
    tt = torch.randn(50000, 50000, device=torch.device('cuda:7'))

def test2():
    ttt = test()
    return 0

empty_result = test2()
```
but there I can free the memory by calling torch.cuda.empty_cache(). I hope I'm not missing something obvious, but I have not found an analogous function in TorchSharp. Can anyone let me know if something like that exists, or if there's a better way to go about things so that GPU memory is released after execution?