
[ITensorGPU] [BUG] DMRG with quantum number conservation not working #2

Closed
MasonProtter opened this issue May 12, 2022 · 2 comments
Labels: bug (Something isn't working)

Comments

@MasonProtter commented on May 12, 2022

Description of bug

I tried to run the code from https://itensor.github.io/ITensors.jl/dev/tutorials/QN_DMRG.html on a GPU using ITensorGPU.jl and hit an error where the dimensions of the tensors in the Hamiltonian did not match the lengths of the data arrays stored inside them.

Minimal runnable code

julia> using ITensors, ITensorGPU

julia> let
         N = 100
         sites = siteinds("S=1",N;conserve_qns=true)

         ampo = OpSum()
         for j=1:N-1
           ampo += "Sz",j,"Sz",j+1
           ampo += 1/2,"S+",j,"S-",j+1
           ampo += 1/2,"S-",j,"S+",j+1
         end
         H = MPO(ampo,sites)

         state = [isodd(n) ? "Up" : "Dn" for n=1:N]
         psi0 = productMPS(sites,state)

         sweeps = Sweeps(5)
         setmaxdim!(sweeps, 10,20,100,100,200)
         setcutoff!(sweeps, 1E-10)

         energy, psi = dmrg(cu(H),cu(psi0), sweeps) # Note the use of cu here

         return energy
         end

Expected output or behavior

I expected this to produce the same answer as the CPU version.
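
For reference, the CPU run from the linked tutorial is the identical code without the cu conversions:

julia> energy, psi = dmrg(H, psi0, sweeps)  # plain CPU tensors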

Actual output or behavior

It errored.

Output of minimal runnable code

flux(psi0) = QN("Sz",0)
ERROR: DimensionMismatch("new dimensions (5, 3, 3) must be consistent with array size (13,)")
Stacktrace:
  [1] reshape(a::CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}, dims::Tuple{Int64, Int64, Int64})
    @ CUDA ~/.julia/packages/CUDA/5jdFl/src/array.jl:665
  [2] reshape
    @ ./reshapedarray.jl:116 [inlined]
  [3] permute!(B::NDTensors.DenseTensor{Float64, 3, Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}}, NDTensors.Dense{Float64, CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}}, A::NDTensors.DenseTensor{Float64, 3, Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}}, NDTensors.Dense{Float64, CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}})
    @ ITensorGPU ~/.julia/dev/ITensors/ITensorGPU/src/tensor/cudense.jl:532
  [4] #1
    @ ~/.julia/dev/ITensors/ITensorGPU/src/tensor/cudense.jl:61 [inlined]
  [5] permutedims!!
    @ ~/.julia/dev/ITensors/ITensorGPU/src/tensor/cudense.jl:65 [inlined]
  [6] permutedims!!
    @ ~/.julia/dev/ITensors/ITensorGPU/src/tensor/cudense.jl:63 [inlined]
  [7] permutedims
    @ ~/.julia/dev/ITensors/NDTensors/src/dense.jl:438 [inlined]
  [8] permutedims
    @ ~/.julia/dev/ITensors/src/itensor.jl:1686 [inlined]
  [9] _permute(as::NDTensors.NeverAlias, T::NDTensors.DenseTensor{Float64, 3, Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}}, NDTensors.Dense{Float64, CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}}, new_inds::Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}})
    @ ITensors ~/.julia/dev/ITensors/src/itensor.jl:1691
 [10] permute(as::NDTensors.NeverAlias, T::ITensor, new_inds::Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}})
    @ ITensors ~/.julia/dev/ITensors/src/itensor.jl:1695
 [11] #permute#219
    @ ~/.julia/dev/ITensors/src/itensor.jl:1676 [inlined]
 [12] permute(T::ITensor, new_inds::Tuple{Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}, Index{Vector{Pair{QN, Int64}}}})
    @ ITensors ~/.julia/dev/ITensors/src/itensor.jl:1661
 [13] permute(M::MPO, #unused#::Tuple{typeof(linkind), typeof(siteinds), typeof(linkind)})
    @ ITensors ~/.julia/dev/ITensors/src/mps/dmrg.jl:14
 [14] #dmrg#954
    @ ~/.julia/dev/ITensors/src/mps/dmrg.jl:45 [inlined]
 [15] dmrg(H::MPO, psi0::MPS, sweeps::Sweeps)
    @ ITensors ~/.julia/dev/ITensors/src/mps/dmrg.jl:41
 [16] top-level scope
    @ REPL[3]:21
 [17] top-level scope
    @ ~/.julia/packages/CUDA/5jdFl/src/initialization.jl:52
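
(The numbers in this error are consistent with block sparse data hitting a dense code path: a dense 5×3×3 tensor holds 5·3·3 = 45 elements, while the QN-conserving tensor stores only the 13 elements of its allowed blocks, so the flat GPU array cannot be reshaped to the dense dimensions.)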

Version information

  • Output from versioninfo():
julia> versioninfo()
Julia Version 1.7.0-rc2
Commit f23fc0d27a (2021-10-20 12:45 UTC)
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: AMD Ryzen Threadripper 2990WX 32-Core Processor
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, znver1)
Environment:
  JULIA_NUM_THREADS = 26
  • Output from using Pkg; Pkg.status("ITensors"):
(@v1.7) pkg> st ITensors
      Status `~/.julia/environments/v1.7/Project.toml`
  [9136182c] ITensors v0.3.10 `~/.julia/dev/ITensors`

(@v1.7) pkg> st ITensorGPU
      Status `~/.julia/environments/v1.7/Project.toml`
  [d89171c1] ITensorGPU v0.0.5 `~/.julia/dev/ITensors/ITensorGPU`

(@v1.7) pkg> st NDTensors
      Status `~/.julia/environments/v1.7/Project.toml`
  [23ae76d9] NDTensors v0.1.37 `~/.julia/dev/ITensors/NDTensors`

@mtfishman (Member) commented:

Block sparse operations are not defined on GPU right now; I'll leave this open for others to find.
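
In the meantime, a possible workaround is to drop QN conservation so all tensors stay dense. A minimal sketch based on the reporter's code (an untested assumption that the dense path works in ITensorGPU v0.0.5; it is not confirmed in this thread):

julia> using ITensors, ITensorGPU

julia> let
         N = 100
         # Dense (non-QN) site indices: block sparse storage is what the GPU
         # path cannot handle, so simply do not conserve quantum numbers.
         sites = siteinds("S=1", N)

         ampo = OpSum()
         for j = 1:(N - 1)
           ampo += "Sz", j, "Sz", j + 1
           ampo += 1 / 2, "S+", j, "S-", j + 1
           ampo += 1 / 2, "S-", j, "S+", j + 1
         end
         H = MPO(ampo, sites)

         state = [isodd(n) ? "Up" : "Dn" for n = 1:N]
         psi0 = productMPS(sites, state)

         sweeps = Sweeps(5)
         setmaxdim!(sweeps, 10, 20, 100, 100, 200)
         setcutoff!(sweeps, 1E-10)

         # Same dmrg call as before, now with dense CuArray storage throughout
         energy, psi = dmrg(cu(H), cu(psi0), sweeps)
         energy
       end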

mtfishman transferred this issue from ITensor/ITensors.jl on May 7, 2024
@mtfishman (Member) commented:

This should be working now, though we are still testing out the performance, so I'll close this in favor of more specific issues. Future issues should go in the issue tracker of the https://github.com/ITensor/ITensors.jl repository: most of the GPU implementation lives in package extensions of the NDTensors.jl subdirectory package of that repository, and ITensorGPU is now just a shell package kept for backwards compatibility.
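
For anyone landing here later, a sketch of the newer usage pattern (assuming the CUDA package extension of NDTensors.jl, which loads when CUDA.jl is installed alongside ITensors.jl, and the keyword-based dmrg interface that replaces the old Sweeps object):

julia> using ITensors, CUDA

julia> let
         N = 100
         sites = siteinds("S=1", N; conserve_qns=true)

         os = OpSum()
         for j = 1:(N - 1)
           os += "Sz", j, "Sz", j + 1
           os += 1 / 2, "S+", j, "S-", j + 1
           os += 1 / 2, "S-", j, "S+", j + 1
         end

         # cu transfers the block sparse MPO/MPS storage to the GPU
         H = cu(MPO(os, sites))
         state = [isodd(n) ? "Up" : "Dn" for n = 1:N]
         psi0 = cu(MPS(sites, state))

         energy, psi = dmrg(H, psi0; nsweeps=5, maxdim=[10, 20, 100, 100, 200], cutoff=1e-10)
         energy
       end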
