GitAuto: Error when running PyTorch (RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.) #173
Resolves #68
Why the bug occurs
The application attempts to deserialize a PyTorch model onto a CUDA device when CUDA is not available. This results in a RuntimeError, because the model was saved on a GPU while the current environment has no CUDA support.

How to reproduce

On a CPU-only machine, load the trained model with torch.load without specifying the map_location parameter; the RuntimeError quoted in the title is raised. A sketch of the failing call is shown below.
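A minimal sketch of the failing call, assuming a checkpoint file named trained_model.pth that was saved during a GPU training run (the file name is an assumption, not the actual path used by the project):

```python
import torch

# On a CPU-only machine (torch.cuda.is_available() is False), loading a
# checkpoint whose tensors were saved on a GPU, without map_location, raises:
#   RuntimeError: Attempting to deserialize object on a CUDA device ...
state_dict = torch.load("trained_model.pth")  # fails: tensors are restored to CUDA
```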
How to fix
Update the torch.load call in game_manager/machine_learning/block_controller_train_sample.py to include the map_location parameter. This ensures that the model is loaded onto the CPU when CUDA is not available.

Changes to be made:
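A minimal sketch of the intended change, assuming the file currently passes the checkpoint path straight to torch.load (the variable and file names here are placeholders, not the exact code in block_controller_train_sample.py):

```python
import torch

# Pick whichever device is actually available at runtime.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# map_location remaps the GPU-saved tensors onto the selected device,
# so the same checkpoint also loads on CPU-only machines.
state_dict = torch.load("trained_model.pth", map_location=device)
```

Passing map_location=torch.device('cpu') unconditionally, as the error message suggests, would also work; selecting the device at runtime simply keeps GPU loading unchanged when CUDA is available.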
This change ensures compatibility across environments with or without CUDA support.