So on occasion, a regular RAJA view associated with a CHAI working array may not be the most convenient option, as we might also want to use that data in a different memory/execution space for some purpose (debugging info, logging info, or a calculation that can only be done on the host, for example). I'm contemplating a more generic class that can hold RAJA views for both the host and the device and then swap to the appropriate one automatically based on the RAJA plugins. I imagine this working in a manner similar to how CHAI's ManagedArrays automatically migrate data, and it shouldn't be too complicated to implement. Mainly I see this being useful in codes that make use of SNLS's device forall and memory manager abstraction layers. I also see some areas where it could be useful with the batch work in terms of debugging, or in the crazy edge case that someone decides to swap the execution strategy from GPU to CPU or CPU to GPU after the batch solver object has been created...