We need to offer a more consistent way to define which type of hardware acceleration devices happypose runs on. Here are the current and desired behaviors I have in mind so far:
Choose whether torch runs on CPU or GPU
current: use CUDA if available, use CPU otherwise
desired: the user should be able to define the preferred acceleration device (e.g. through an environment variable)
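A minimal sketch of the desired behavior: resolve the device from an environment variable, falling back to the current default (CUDA if available, else CPU) when it is unset. The variable name `HAPPYPOSE_DEVICE` and the helper name are hypothetical, not part of the existing codebase; the resulting string would be passed to `torch.device(...)`.

```python
import os

# Hypothetical environment variable name; the actual name is up for discussion.
DEVICE_ENV_VAR = "HAPPYPOSE_DEVICE"


def resolve_device(cuda_available: bool) -> str:
    """Return the device string torch should use.

    Preference order: explicit user choice via the environment
    variable, then the current default (cuda if available, else cpu).
    """
    requested = os.environ.get(DEVICE_ENV_VAR)
    if requested:
        return requested
    return "cuda" if cuda_available else "cpu"


# Example: an explicit user choice overrides the auto-detected default.
os.environ[DEVICE_ENV_VAR] = "cpu"
print(resolve_device(cuda_available=True))  # prints "cpu"
```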
Choose whether to run the renderers on gpu or cpu.
current: for pybullet, the choice is possible. For panda3d, the GPU is used if CUDA is available.
desired: configurable for both
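One way the "configurable for both" part could look: a small config object populated from environment variables, defaulting to the current behavior when nothing is set. All names here (`RendererConfig`, `HAPPYPOSE_RENDERER`, `HAPPYPOSE_RENDERER_DEVICE`) are illustrative assumptions, not existing happypose APIs.

```python
import os
from dataclasses import dataclass


@dataclass
class RendererConfig:
    """Hypothetical renderer settings; field names are illustrative only."""

    backend: str = "panda3d"  # "panda3d" or "pybullet"
    device: str = "cpu"       # "cpu" or "gpu"


def renderer_config_from_env() -> RendererConfig:
    """Build the renderer config from (hypothetical) environment
    variables, keeping CPU rendering as the default when unset."""
    backend = os.environ.get("HAPPYPOSE_RENDERER", "panda3d")
    device = os.environ.get("HAPPYPOSE_RENDERER_DEVICE", "cpu")
    return RendererConfig(backend=backend, device=device)
```

With this shape, both renderers read the same setting instead of each having its own ad hoc device logic.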
AMD ROCm support: Some computing clusters (e.g. LUMI) require the use of AMD GPUs. Hardware acceleration through AMD ROCm (roughly the equivalent of Nvidia CUDA) should be supported by the latest torch versions. The refactoring should take this into account.
EDIT: ROCm support seems to "masquerade" as CUDA, so no adaptation should be needed in this regard.
See this gist