Add support for Adapt which enables converting tensor networks to GPU #187
With this we can now do:
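(A minimal sketch; the constructor names `named_grid`, `siteinds`, and `random_tensornetwork` are assumptions based on the NamedGraphs.jl / ITensorNetworks.jl APIs and may differ between versions.)

```julia
using ITensors
using ITensorNetworks
using CUDA
using NamedGraphs: named_grid  # import path may vary by NamedGraphs.jl version

# Build a small random tensor network on a 2x2 grid on CPU
# (constructor names are assumptions and may differ by version).
g = named_grid((2, 2))
s = siteinds("S=1/2", g)
tn = random_tensornetwork(s)

# Convert the whole network to GPU storage using CUDA's `cu`,
# which goes through the Adapt support added here.
tn_gpu = cu(tn)
```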
and we can see the tensors have been moved to GPU:
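(Continuing the sketch above: printing each tensor shows its storage type, which should now reference a GPU array type such as `CuArray` rather than a CPU `Array`.)

```julia
using Graphs: vertices

# The storage type printed for each tensor reveals where its data lives.
for v in vertices(tn_gpu)
  println(tn_gpu[v])
end
```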
@JoeyT1994 with this you should then be able to follow the same instructions shown here: https://itensor.github.io/ITensors.jl/dev/RunningOnGPUs.html to run calculations on GPU. For example, for gate application, before calling `apply`, load the relevant GPU package and transfer the gates and the tensor network to GPU using the corresponding conversion function, i.e. `cu`, `mtl`, `roc`, etc. There may be parts of the library code that aren't generic enough for GPU, for example making implicit assumptions about the element type, constructing intermediate tensors on CPU instead of on the GPU device of the tensor network, etc. We had to go through a process of stamping out those kinds of issues in ITensors.jl and ITensorMPS.jl, but they aren't hard to fix using `Adapt.adapt`.
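For example, a hedged sketch of that workflow on a CUDA device; the network/gate construction reuses the assumed names from the sketch above, and the exact `apply` methods and keyword arguments may differ by ITensorNetworks.jl version:

```julia
using ITensors
using ITensorNetworks
using CUDA
using NamedGraphs: named_grid  # import path may vary by NamedGraphs.jl version

# Small CPU network and a one-site gate, just for illustration
# (constructor names are assumptions and may differ by version).
g = named_grid((2, 2))
s = siteinds("S=1/2", g)
tn = random_tensornetwork(s)
gate = op("Z", only(s[(1, 1)]))

# Transfer both the gate and the network to GPU, then apply as usual;
# the gate application then runs on the GPU.
gate_gpu = cu(gate)
tn_gpu = cu(tn)
tn_gpu = apply(gate_gpu, tn_gpu)
```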
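As an illustration of that last point, the basic `Adapt.adapt` pattern for moving a CPU-constructed intermediate onto the device of the surrounding data (the array here is just a stand-in for whatever intermediate quantity the library builds):

```julia
using Adapt
using CUDA

# A naive intermediate quantity constructed on CPU...
intermediate = zeros(Float32, 8)

# ...moved to the GPU array type with Adapt, so downstream operations
# stay on the same device as the rest of the tensor network data.
intermediate_gpu = adapt(CuArray, intermediate)
@show typeof(intermediate_gpu)
```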