Release Buffer Command? #8
Comments
@bamonroe Under normal circumstances the regular memory management rules apply: as soon as the R reference is freed, so is the corresponding OpenCL object. Maybe the only important part here is that R doesn't know the size of the objects represented on the GPU, so it doesn't count them toward the thresholds it uses to decide when to trigger garbage collection; you should call `gc()` explicitly.
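In practice that means the pattern below (a minimal sketch, assuming `res` holds a `clBuffer` as in the report further down):

```r
dat <- as.numeric(res)   # copy the results off the GPU into an R vector
rm(res)                  # drop the last R reference to the buffer
gc()                     # force a collection so the finalizer releases the
                         # underlying OpenCL buffer and frees the VRAM
```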
I've had the same issue in the past. It would be nice to have R handle this automatically, though. Any idea whether we can tell the garbage collector about externally allocated memory, and perhaps even about the (usually lower) memory limit? Not sure if OpenCL tells us that.
@aaronpuchert Unfortunately, no. R only tracks memory/objects it allocates itself, and it has no concept of "foreign" memory. Supporting that would require custom code deep in R's internals, so that allocation would also consult some external measure, beyond R's own allocator, when deciding to trigger garbage collection. For it to be actually useful, it would have to track each source separately, since collecting "real" memory won't help the GPU and vice versa. However, the one thing we could do internally in OpenCL is track the allocations ourselves and trigger an R garbage collection if we see high GPU RAM pressure. Since all allocations go through our code, we'd just have to add the tracking there.
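An R-level caricature of that idea (in reality this would live in the package's C code; `gpu_used`, `gpu_limit`, and the wrapper below are made up for illustration):

```r
gpu_used  <- 0                      # running estimate of GPU bytes in use
gpu_limit <- 2 * 1024^3             # e.g. a 2 GB device

clBufferTracked <- function(context, length, mode = "numeric") {
  bytes <- length * 8               # assuming double precision
  if (gpu_used + bytes > gpu_limit) {
    gc()                            # run finalizers so dead buffers release device memory
    gpu_used <<- 0                  # over-optimistic reset; real code would
                                    # decrement the counter from the finalizers
  }
  gpu_used <<- gpu_used + bytes
  clBuffer(context, length, mode)
}
```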
Thanks so much for the quick responses. It looks like explicitly calling garbage collection did the trick. This seems very much like the sensible way to approach this problem. The more I thought about it, having an explicit "clDeleteBuffer" function could run the risk of dangling pointers. It would be great to mention this in the documentation for the `clBuffer` function, something like: "When all references to the pointer created by `clBuffer` have been deleted from R, the buffer is deleted on the corresponding device, freeing the memory allocated to the buffer." I guess "deleted from R" is not a good way to say it; maybe "gone out of scope"?
@bamonroe @aaronpuchert I have added support for memory tracking and automatic garbage collection.
Nice, that seems like a good start.
We could query `clGetDeviceInfo` with `CL_DEVICE_GLOBAL_MEM_SIZE` to find out how much memory the device has. We could allow setting a limit per context, though it probably makes more sense to track per device, with the default taken to be the global memory size.
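Sketched from the R side (purely hypothetical: `oclDeviceMemSize()` and `oclMemLimit()` are made-up names standing in for whatever interface the package ends up exposing; the actual query would go through `clGetDeviceInfo` in C):

```r
## Hypothetical interface sketch: per-device memory budget that
## defaults to the device's global memory size.
set_device_limit <- function(device, limit = NULL) {
  if (is.null(limit))
    limit <- oclDeviceMemSize(device)  # made-up accessor for illustration
  oclMemLimit(device, limit)           # made-up setter for illustration
}
```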
The original report from @bamonroe:

My machine:

- OS: Arch Linux
- GPU: NVIDIA GeForce GTX 750 Ti (2 GB)
- Native NVIDIA OpenCL drivers, version 495.46

R info:

- R version: 4.1.2
- Package version: CRAN OpenCL_0.2-2
I'm doing some pretty heavy simulations which tend to max out the VRAM on my GPU. This is fine because I can simply call `as.numeric` on the result of the `oclRun` command to retrieve the stored simulations off my GPU. I do something like this:
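Schematically, something like the following (an illustrative sketch; the kernel body is only a placeholder for the real simulation, and the file name is made up):

```r
library(OpenCL)

p   <- oclPlatforms()[[1]]
d   <- oclDevices(p)[[1]]
ctx <- oclContext(d)

k <- oclSimpleKernel(ctx, "sim",
 "__kernel void sim(__global double *output, const unsigned int count) {
    unsigned int i = get_global_id(0);
    if (i < count) output[i] = i * 2.0;  /* placeholder computation */
  }")

res <- oclRun(k, 1e6)            # res is a clBuffer; the results stay on the GPU
dat <- as.numeric(res)           # copy the ~1e6 doubles back into an R vector
save(dat, file = "sims.RData")   # write them off to disk
```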
I can see that my GPU's memory is still full while I'm saving the data, using `nvtop` in the terminal. The output vector is what's eating the RAM, because there are about a million results being stored. If I do `rm(res)`, the pointer to the `clBuffer` on the GPU is lost to the R session, but there isn't a corresponding call to `clReleaseMemObject`, or something like that, to free up the VRAM. Can a method for `rm` be added to make this call? Or can some other way of releasing the buffer memory on the GPU be added? I'd like to put that res-dat-save block into a loop, but currently I have to let R quit for the GPU memory to be released.

Thanks for all the work with this package, I'm finding it very useful.