We're interested in some form of GPU checkpointing - is this something the gVisor team plans to support at any point?
Generally, existing GPU checkpointing implementations, such as those described in the Singularity and Cricket papers, intercept CUDA calls via `LD_PRELOAD`. Prior to a checkpoint, they record stateful calls in a log, which is stored at checkpoint time along with the contents of GPU memory. At restore time, GPU memory is reloaded and the log is replayed. Both frameworks also have to virtualize device pointers to some degree, since replayed allocations generally land at different device addresses.
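To make the interposition half concrete, here's a minimal sketch of such an `LD_PRELOAD` shim in C. It is not taken from either paper; the log path and format are invented for illustration, and a real interposer would also have to handle the versioned driver symbols (e.g. `cuMemAlloc_v2`) that `cuda.h` maps calls to:

```c
/* Minimal sketch of an LD_PRELOAD interposer that records a stateful CUDA
 * driver-API call before forwarding it to the real libcuda implementation.
 * Build: gcc -shared -fPIC -o shim.so shim.c -ldl
 * Run:   LD_PRELOAD=./shim.so ./cuda_app
 * The log path/format here is hypothetical; real systems (Singularity,
 * Cricket) record enough state to replay the call stream at restore time. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

typedef int CUresult;                   /* mirrors the driver API's enum */
typedef unsigned long long CUdeviceptr; /* 64-bit device pointer         */

static FILE *call_log(void) {
    static FILE *f;
    if (!f) f = fopen("/tmp/cuda_call_log", "a");
    return f;
}

/* Interpose cuMemAlloc: log the size and the returned device pointer so the
 * allocation can be replayed (and the pointer re-mapped) at restore time. */
CUresult cuMemAlloc(CUdeviceptr *dptr, size_t bytesize) {
    CUresult (*real)(CUdeviceptr *, size_t) = dlsym(RTLD_NEXT, "cuMemAlloc");
    CUresult res = real(dptr, bytesize);
    if (res == 0 && call_log()) /* 0 == CUDA_SUCCESS */
        fprintf(call_log(), "cuMemAlloc size=%zu dptr=0x%llx\n",
                bytesize, *dptr);
    return res;
}
```

At restore time the log is walked and each recorded call is re-issued, with a translation table mapping old device pointers to the newly returned ones.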
It seems (perhaps naively) that a similar scheme might be possible within nvproxy, which already intercepts calls to the GPU driver. In theory, nvproxy could record a subset of the calls made to the GPU driver and replay them at restore time, virtualizing file descriptors and device pointers as needed; separately, it could support copying the contents of GPU memory off the device to a file and back.
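For the memory half, here is a rough sketch of the checkpoint/restore data path using the public CUDA driver API (link with `-lcuda`; a current CUDA context and some allocation-tracking bookkeeping are assumed). Note that nvproxy sits at the ioctl layer rather than the driver-API layer, so this is purely illustrative of the data flow:

```c
/* Illustrative checkpoint/restore of a single GPU allocation via the CUDA
 * driver API. Assumes a current CUDA context; error handling is minimal.
 * nvproxy itself would have to do the equivalent at the ioctl level. */
#include <cuda.h>
#include <stdio.h>
#include <stdlib.h>

/* Checkpoint: stage device memory through a host buffer into a file. */
static int save_allocation(CUdeviceptr dptr, size_t size, FILE *out) {
    void *host = malloc(size);
    if (!host) return -1;
    int ok = cuMemcpyDtoH(host, dptr, size) == CUDA_SUCCESS &&
             fwrite(host, 1, size, out) == size;
    free(host);
    return ok ? 0 : -1;
}

/* Restore: re-allocate (likely at a different device address, which is why
 * device-pointer virtualization is needed) and copy the contents back. */
static int restore_allocation(CUdeviceptr *new_dptr, size_t size, FILE *in) {
    void *host = malloc(size);
    if (!host) return -1;
    int ok = fread(host, 1, size, in) == size &&
             cuMemAlloc(new_dptr, size) == CUDA_SUCCESS &&
             cuMemcpyHtoD(*new_dptr, host, size) == CUDA_SUCCESS;
    free(host);
    return ok ? 0 : -1;
}
```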
This is clearly complex. I'm curious whether you believe it to be viable, and whether you plan to explore the scheme described above, or a different one, at any point?
Is this feature related to a specific bug?
No response
Do you have a specific solution in mind?
No response
Have you looked at #10478 (which I believe was filed by one of your colleagues :))?
I believe cuda-checkpoint should work well within gVisor now that NVIDIA has fixed the issue described in that bug, and it should allow GPU checkpointing without the complexity of recording and replaying CUDA calls.
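For reference, cuda-checkpoint toggles the CUDA state of a single process between running and suspended (per the NVIDIA/cuda-checkpoint README, something like `cuda-checkpoint --toggle --pid <pid>`), and is intended to be paired with a CPU-side checkpointer such as CRIU; in gVisor's case, runsc's own checkpoint/restore would play that role.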
Interesting, I assumed that NVIDIA hadn't fixed the issue since NVIDIA/cuda-checkpoint#4 is still open, but honestly I haven't tried running cuda-checkpoint on PyTorch within gVisor recently. I will do that.