Using cached memory allocation on GPU
Bei Wang edited this page Dec 19, 2019
We rely on the cached GPU memory allocator developed as part of the Patatrack project in CMSSW for GPU memory allocation. This has significantly reduced the memory allocation and deallocation overhead when processing multiple events. The cached GPU allocator in Patatrack is based on the caching device allocator in the CUB library. The allocator is thread-safe and stream-safe, and is capable of managing cached device allocations on multiple devices.