Fix edge case in PyTorchPredictor.deserialize
#2994
Merged
Description of changes: Since #2965 (not sure about earlier versions), if a `PyTorchPredictor` object was serialized from CPU memory, then `deserialize`-ing it with `device="cuda"` would not actually work. The following would happen:

1. The `torch.nn.Module` model would be allocated on CPU (since that was the `device` the predictor was serialized with).
2. Given `device="cuda"`, `torch.load` with `map_location="cuda"` would put the `state_dict` values on GPU, as expected.
3. `torch.nn.Module.load_state_dict` would however copy the parameters back to CPU.
4. Since the predictor would have its `device` attribute set to `"cpu"`, prediction would not complain (data & model on the same device) but would be really slow.
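A minimal plain-PyTorch sketch of why step 3 behaves this way (illustrative only, assuming a CUDA device is available): `load_state_dict` copies values into the module's existing parameter tensors, so the module's current device wins.

```python
import torch

# Illustrative only: load_state_dict copies values into the module's existing
# parameter tensors, so the module's current device (CPU here) wins.
module = torch.nn.Linear(4, 4)           # parameters allocated on CPU
state = {k: v.to("cuda") for k, v in module.state_dict().items()}
module.load_state_dict(state)            # GPU values copied into CPU tensors
print(next(module.parameters()).device)  # -> cpu
```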
This PR makes sure that the predictor object being created is moved `.to(device)` after step 1, so that step 3 actually keeps the parameters on GPU.
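A generic sketch of this pattern (not the actual GluonTS diff; `build_module`, `weights_path`, and `device` are hypothetical placeholders): moving the freshly constructed module to the target device before loading the weights means the copy performed by `load_state_dict` lands on that device.

```python
import torch

def restore_module(build_module, weights_path: str, device: str) -> torch.nn.Module:
    module = build_module()        # freshly constructed, typically on CPU
    module.to(device)              # the fix: move to the target device first
    state = torch.load(weights_path, map_location=device)
    module.load_state_dict(state)  # parameters now stay on `device`
    return module
```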
The same issue happens with CPU and GPU inverted, as in the following example.
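A sketch of such an example (illustrative only; it assumes an existing `PyTorchPredictor` named `predictor` whose network lives on `"cuda"`, and that the wrapped module is reachable via the `prediction_net` attribute):

```python
from pathlib import Path
from tempfile import TemporaryDirectory

from gluonts.torch.model.predictor import PyTorchPredictor

# Assumption: `predictor` is an existing PyTorchPredictor allocated on "cuda".
with TemporaryDirectory() as directory:
    predictor.serialize(Path(directory))
    restored = PyTorchPredictor.deserialize(Path(directory), device="cpu")
    # Devices of the restored network's parameters; expected to be {cpu}.
    print({p.device for p in restored.prediction_net.parameters()})
```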
Expected to have the resulting model on CPU. Output before the PR:
Output after the PR:
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Please tag this PR with at least one of these labels to make our release process faster: BREAKING, new feature, bug fix, other change, dev setup