running pytorch models without CUDA/cuDNN #52
igorshmukler started this conversation in General
Replies: 0 comments
In addition to the speed improvements achieved when running under TF, one obvious benefit of moving to TensorFlow would be running training and inference with `tensorflow-macos` and `tensorflow-metal` rather than `torch.backends.cuda` and/or `torch.backends.cudnn`.

Apple hardware, as well as other non-nVidia devices, simply will not run cuDNN under torch. Therefore, currently, almost any meaningful work with torch requires either nVidia hardware locally or a remote rig in the cloud with CUDA cards. It would be nice to be able to use a laptop for development/debugging. Unfortunately, Apple silicon is different, supported by `tensorflow-metal`, and I believe there are no nVidia drivers for newer macOS versions.

Any ideas on the subject?

[Not to mention that some essential packages, like `triton`, only work on Linux.]