
Add MPO version of VUMPS/TDVP #39

Open
mtfishman opened this issue Jan 28, 2022 · 19 comments


@mtfishman
Member

mtfishman commented Jan 28, 2022

Currently, the implementation of VUMPS/TDVP only supports Hamiltonians represented explicitly in terms of sums of local terms (with extensions to long-range interactions in #31).

Ideally, we would have a version that works for an MPO. I attempted to implement one initially but there was a bug in it that I couldn't track down.

EDIT: The old, broken implementation is here: https://github.com/ITensor/ITensorInfiniteMPS.jl/blob/main/src/broken/vumps_mpo.jl. There may be useful functionality in there that could be reused for a new version, such as transforming an MPO into a set of operator/ITensor-valued matrices.

@LHerviou expressed interest in possibly looking into this. Additionally, an alternative that was discussed was to extend #31 (EDIT: and #32) to make use of sums of local MPOs with finite support, which would be an intermediate solution (ultimately, we would still want a single MPO version).

Please comment on this issue if you are interested in looking into implementing this.

@LHerviou
Contributor

LHerviou commented Feb 2, 2022

I have started having a look at it, but if anyone wants to contribute, let me know.

As of now, I am actually struggling with something simple. It should be possible to automatically generate an infinite MPO from the InfiniteTensorSum we have in models. If the 2-local Hamiltonian is of the form $T_1 \otimes T_2$, then we necessarily have

$$W_\infty = \begin{pmatrix} \mathbb{1} & 0 & 0\\ T_1 & 0 & 0\\ 0 & T_2 & \mathbb{1} \end{pmatrix}$$

It is not necessarily the optimal form, but it should not be too bad (and can be improved).
It turns out I am struggling to implement the required Index fusion, in particular when the tensors are sparse (which they pretty much always are/should be). Am I missing something? The combiner function does a tensor product, which is not what I am looking for.
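For concreteness, one way to write down that operator-valued matrix with ITensors (a sketch, using σˣ as a stand-in for both T1 and T2; this mirrors the Matrix{ITensor} representation that comes up further below):

using ITensors

# Sketch: the lower-triangular operator-valued matrix above, stored as a
# Matrix{ITensor}, with σˣ standing in for T1 and T2.
s = siteind("S=1/2")
Id = op("Id", s)
T1 = op("Sx", s)
T2 = op("Sx", s)
W = fill(ITensor(s', dag(s)), 3, 3)  # start from empty operator blocks
W[1, 1] = Id
W[2, 1] = T1
W[3, 2] = T2
W[3, 3] = Id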

Another alternative would be to directly work with the InfiniteTensorSum (it is just a question of relabeling the index then).

@mtfishman
Member Author

Just so I understand the issue fully, you are talking about implementing VUMPS in terms of a single MPO, not a sum of local MPOs, correct?

Which kind of Index fusion are you trying to do? You form the MPO using the construction you describe above, but then want to fuse the quantum number sectors of the link indices of the MPO to form larger QN blocks? If that is the case, combiners work on single indices as well, in which case they fuse quantum number blocks automatically.
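For example, a minimal illustration of the single-index case (not from the thread):

using ITensors

# A combiner applied to a single QN index fuses blocks with equal QNs
# into larger sectors.
i = Index([QN("Sz", 1) => 1, QN("Sz", 1) => 2, QN("Sz", -1) => 1])
C = combiner(i)
ci = combinedind(C)  # merged blocks: QN("Sz", 1) => 3, QN("Sz", -1) => 1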

However, I'll note that we've actually found that for DMRG, it can in fact be better for performance not to fuse the quantum number blocks of the link indices of the MPO, since the MPO often becomes more sparse if you keep the blocks split. Because of this, I introduced a function splitblocks that takes an MPO, splits the link indices into smaller QN blocks, and drops the zero blocks that get introduced, to make it more sparse:

https://github.com/ITensor/ITensors.jl/blob/921e2116beb74d21028d2dde05a26170d6d5f622/examples/dmrg/1d_heisenberg_conserve_spin.jl#L30
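In essence, the linked example does the following (a sketch of that usage):

using ITensors, ITensorMPS

# Build a QN-conserving Heisenberg MPO, then split the QN blocks of its
# link indices to make the MPO tensors more sparse.
N = 10
sites = siteinds("S=1/2", N; conserve_qns=true)
os = OpSum()
for j in 1:(N - 1)
  os += 0.5, "S+", j, "S-", j + 1
  os += 0.5, "S-", j, "S+", j + 1
  os += "Sz", j, "Sz", j + 1
end
H = MPO(os, sites)
H = splitblocks(linkinds, H)  # split link QN blocks and drop zero blocks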

(Credit goes to Johannes Hauschild, lead developer of TenPy, for pointing this out to us!)

Also, fusing the QN blocks may destroy the upper/lower block diagonal form that is useful for implementing the algorithm for obtaining the quasi-fixed point environment of the MPO transfer matrix, so it may be best to just leave the MPO as-is.

@LHerviou
Contributor

LHerviou commented Feb 2, 2022

Yes, as a single translation invariant MPO.

I am really (for now) just trying to create the object $W_\infty$. I am trying to write it automatically for arbitrary (potentially large) MPOs, and I am struggling a bit with making sure the indices work out correctly. I guess I can just keep the correct T1 and T2 without building an explicit MPO.

I agree it is probably best not to fuse the QN blocks.

@LHerviou
Contributor

LHerviou commented Feb 2, 2022

Never mind, I think I fixed that problem: would taking InfiniteMPO.data to be CelledVector{Matrix{ITensor}} be a satisfactory solution?

@mtfishman
Member Author

Oh I see, you are actually trying to write general code for converting some sort of operator representation into a uniform MPO?

Getting the indices consistent with the correct quantum numbers can be tricky. Indeed, if it is easier for now, we could just use the operator-valued matrix representation, like what is output by:

https://github.com/ITensor/ITensorInfiniteMPS.jl/blob/main/src/broken/vumps_mpo.jl#L51-L79

I think I found it was easier to write the VUMPS code in terms of that form anyway, and we could always just convert to that form once we have code that actually makes an infinite MPO explicitly.

I don't think we should define the InfiniteMPO type in that way directly since ultimately we want that to store order-4 ITensors, but we could just make a new type for the operator-valued matrix representation, like InfiniteOpMatrix (I'm sure there is a better name).
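A minimal sketch of what such a type could look like (purely illustrative; InfiniteOpMatrix is just the placeholder name from above):

using ITensors
using ITensorInfiniteMPS

# Illustrative sketch only: a thin wrapper holding one operator-valued
# matrix per site of the unit cell; data[n][a, b] is the ITensor block
# W[a, b] of the MPO tensor on site n.
struct InfiniteOpMatrix
  data::CelledVector{Matrix{ITensor}}
end

Base.getindex(H::InfiniteOpMatrix, n::Integer) = H.data[n]
Base.length(H::InfiniteOpMatrix) = length(H.data)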

To give you some context, in finite DMRG in ITensor, this is handled by the OpSum/AutoMPO system. The OpSum is the representation of the Hamiltonian in terms of string representation of operators and which sites they act on, and the AutoMPO system converts that representation into an MPO:

https://github.com/ITensor/ITensors.jl/blob/main/src/physics/autompo.jl
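For reference, the finite-size workflow is (standard ITensors usage, here for a transverse field Ising chain):

using ITensors, ITensorMPS

# Specify the Hamiltonian as an OpSum and let the AutoMPO system
# convert it to an MPO.
N = 20
sites = siteinds("S=1/2", N)
os = OpSum()
for j in 1:(N - 1)
  os += -1.0, "X", j, "X", j + 1
end
for j in 1:N
  os += -1.0, "Z", j
end
H = MPO(os, sites)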

One goal was to extend that code to form infinite MPOs, also starting from an OpSum representation.

There was a PR for that to the C++ version of ITensor: ITensor/ITensor#161, but we haven't had time to properly review it and port it to the Julia code. The Julia code for finite OpSum to MPO conversion was refactored and simplified a lot from the C++ version of the code by Miles, so likely the extension of converting an OpSum to an InfiniteMPO could be simplified as well.

@LHerviou
Contributor

LHerviou commented Feb 2, 2022

I will start like that then, it should be easy to make the basic infrastructure work.

Thanks for the additional context.

@LHerviou
Contributor

LHerviou commented Feb 8, 2022

For info, I managed to write a full MPO treatment of VUMPS. The subspace expansion could be improved, but I would like to talk to several people in Benasque before looking into it.

I am not completely sure it fits my (selfish) requirements.

@mtfishman
Member Author

Amazing, thanks!

My understanding is that the subspace expansion could be implemented by applying the 2-site effective Hamiltonian (the same one you would use in a 2-site VUMPS update) and then projecting into the nullspace. So I think it would be very similar to the current implementation. Or do you mean that there is a particular issue with the FQH Hamiltonian?
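In plain dense linear algebra, that nullspace-projection idea reads roughly as follows (a toy sketch that ignores all ITensor index bookkeeping, just to fix the reasoning):

using LinearAlgebra

# Toy dense sketch of the expansion step. A is the (d*χ) × χ isometry built
# from the left-canonical tensor (A' * A = I); Hϕ is the matrix form of the
# 2-site effective Hamiltonian applied to the two-site wavefunction.
function expansion_directions(A::Matrix, Hϕ::Matrix, χadd::Int)
  NL = nullspace(A')     # orthonormal basis of the complement of range(A)
  B = NL' * Hϕ           # component of H|ϕ⟩ outside the kept subspace
  U, S, V = svd(B)
  k = min(χadd, size(U, 2))
  return NL * U[:, 1:k]  # leading new directions to adjoin to A
end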

@LHerviou
Contributor

LHerviou commented Feb 8, 2022

Well, the original proposal for the subspace expansion is indeed just to look at the two-site effective Hamiltonian. That is what I am currently using in the code.

On the other hand, there are more recent papers discussing a better way to do it:
https://scipost.org/SciPostPhysCore.4.1.004/pdf
https://scipost.org/SciPostPhysLectNotes.7/pdf (Sec. 3.3)

I discussed this with some coworkers and we are not sure:

  1. how important this improvement is
  2. whether it helps in the FQHE case, where symmetries are blocking (as you discussed with Mike Zaletel) and we therefore really need some mixing in order to explore the symmetry-resolved Hilbert space.

@mtfishman
Member Author

My understanding is that https://scipost.org/SciPostPhysCore.4.1.004/pdf is focused on the problem of truncating, not expanding, the bond dimension of an MPS. From what I can tell, the algorithm assumes you already have an MPS manifold to project into. Perhaps some variant could be used for bond dimension expansion, but it also has the disadvantage that it is an iterative algorithm, as opposed to the direct method currently used.

I guess I would hope the current method or an extension would work (such as extending to a 3-site variation, if needed), though improved bond dimension expansion techniques are definitely a worthy area of exploration in general.

@LHerviou
Contributor

LHerviou commented Feb 9, 2022

The method can in principle also be used to find the best way to extend the system. In fact, the Ghent team uses a variation of that proposal to do the subspace expansion in their VUMPS code for 2D classical systems (some of my coworkers work with them).

I do agree it is not completely obvious from the papers themselves --- I had to discuss with my coworkers to clarify how it worked.
Additionally, they do it without symmetries in their code.

@mtfishman
Member Author

Yes, it is much clearer how it could work without symmetries, since you can just select a bond dimension larger than the one you have. You could just apply the Hamiltonian or exponential of the Hamiltonian to your MPS and then compress back to an MPS with the bond dimension you request. Symmetries are always the subtle part with these subspace expansion procedures.

@mtfishman
Member Author

mtfishman commented Feb 9, 2022

I guess you could mix the two strategies, by first expanding the bond dimension using the subspace expansion we are using, and then variationally optimize within the expanded tensor space using the compression algorithm in https://scipost.org/SciPostPhysCore.4.1.004/pdf.

Essentially that is like alternating steps of VUMPS and iTEBD, where iTEBD is implemented with the algorithm in https://scipost.org/SciPostPhysCore.4.1.004/pdf.

@LHerviou
Contributor

LHerviou commented Feb 9, 2022

Yes, pretty much what I had in mind.
But then I am not sure it is significantly better than "just applying VUMPS" after the subspace expansion.
Hence I need to talk with them in 10 days.

@jtschneider

Hi guys,
I wanted to ping this issue and ask about the status of a general MPO as an input for the VUMPS algorithm.
Is there working code that can take an upper or lower triangular MPO (which convention is it in ITensor again?) and produce the appropriate projections onto the subspaces and the energy minimisation?

@LHerviou
Contributor

Hi,

As far as I know, currently you cannot use a general MPO. On the other hand, you can convert it to an InfiniteBlockMPO fairly trivially. The problem is that you will not really take advantage of the (triangular) structure of the MPO.
If it is defined by a model, it will be simpler to directly call InfiniteBlockMPO(model, ...), as shown in the relevant test (see the sketch below).
The current code can struggle a bit with very long-range models though (for compression).

As far as I remember, the convention is currently lower triangular.
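Following the package tests, the model-based route looks roughly like this (a sketch; it assumes a registered model name such as "heisenberg", as in the test suite):

using ITensors, ITensorInfiniteMPS

# Sketch of the model-based construction mentioned above: build the
# InfiniteBlockMPO directly from a registered model, as in the package tests.
initstate(n) = isodd(n) ? "↑" : "↓"
s = infsiteinds("S=1/2", 2; initstate)
model = Model("heisenberg")
H = InfiniteBlockMPO(model, s)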

@jtschneider

jtschneider commented Jan 13, 2025

Hi Loïc,
Thanks for the quick response!
I am interested in proper long-range models, i.e. those where the support of the interaction operator is extensive with the system size, which is not true for next-to-nearest-neighbour interactions (or any k-nearest-neighbour interaction): for example, exponentially decaying interactions, which can be realised with an MPO like this:

$$W_{MPO} = \begin{pmatrix} \mathbb{1} & 0 & 0\\ J\sigma^x & \lambda \mathbb{1} & 0\\ h_z \sigma^z & \sigma^x & \mathbb{1} \end{pmatrix}$$
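For reference, multiplying out a product of these W matrices and taking its (3,1) element reproduces the intended terms: a σˣ entering at site n, the λ𝟙 diagonal block on each intermediate site, and the Jσˣ exit at site m combine as

$$\left(W^{[n]} W^{[n+1]} \cdots W^{[m]}\right)_{3,1} \ni \sigma^x_n \otimes \lambda\mathbb{1} \otimes \cdots \otimes \lambda\mathbb{1} \otimes J\sigma^x_m = J\,\lambda^{m-n-1}\,\sigma^x_n\,\sigma^x_m\,.$$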

So I gave it a try and constructed my own MPO as an InfiniteBlockMPO type to employ the VUMPS or TDVP algorithm on it.
I coded an example of an InfiniteBlockMPO for the exponentially interacting Ising model, based on the example of the InfiniteBlockMPO for the regular Ising model:

"""
build the transverse field Ising model with exponential interactions in the thermodynamic limit 𝑁 → ∞
use convention H = -J ∑_{n<m} σˣₙ σˣₘ ⋅ λ^(|n-m|-1) - hz ∑_{n=−∞}^∞ σᶻₙ
with |λ| < 1 so that the interaction decays
"""
function InfiniteExpHTFI( 
  sites::CelledVector{<:Index},
  λ, # exponential decay base
  J, # Ising (σˣσˣ) coupling strength
  hz; # transverse field strength
  kwargs...
)
  if abs(λ) > 1.0
    throw(ArgumentError("cannot implement exponential decay with base larger than 1, λ = $(λ)!"))
  end

  link_dimension = 3

  N = length(sites)

  EType = eltype(union(λ, J, hz))

  linkindices = CelledVector(
    hasqns(sites.data) ?
    [Index([QN("SzParity",1,2) => 1], "Link,c=1,n=$n") for n in 1:N] : [Index(1, "Link,c=1,n=$(n)") for n in 1:N]
  )
  
  mpos = [Matrix{ITensor}(undef, link_dimension, link_dimension) for i in 1:N]
  for n in 1:N
    # define local matrix Hmat with empty tensors as local operators
    Hmat = fill(
      ITensor(EType, dag(sites[n]), prime(sites[n])), link_dimension, link_dimension
    )
    # left link index ll with daggered QN conserving direction (if applicable)
    ll = dag(linkindices[n-1])
    # right link index rl
    rl = linkindices[n]

    # add both identities as neutral elements in the MPO
    # replace all known tensors from empty to known interactions
    Hmat[1,1] = op("Id", sites[n])
    Hmat[3,3] = op("Id", sites[n])
    # local nearest neighbour and exp. decaying interaction terms
    Hmat[2,1] = op("X", sites[n])
    if !iszero(λ)
      Hmat[2,2] = op("Id", sites[n]) * λ  # λ Id,  on the diagonal
    end
    Hmat[3,2] = op("X", sites[n]) * -J # -J σˣ
    if !iszero(hz)
      Hmat[3,1] = op("Z", sites[n]) * -hz # -hz σᶻ
    end

    # add all missing links that connect the interaction
    # operators in the unit cell
    Hmat[2,1] = setelt(ll[1]) * Hmat[2,1]
    Hmat[1,2] = Hmat[1,2] * setelt(rl[1])
    Hmat[2,2] = setelt(ll[1]) * Hmat[2,2] * setelt(rl[1])
    Hmat[3,2] = setelt(rl[1]) * Hmat[3,2]
    Hmat[2,3] = setelt(ll[1]) * Hmat[2,3]
    mpos[n] = Hmat
  end

  return InfiniteBlockMPO(mpos, sites.translator)
end

However, this throws an error with the following example code (run from within the ITensorInfiniteMPS folder at examples/vumps).

In my example file, you can also see a sanity check for my "style" of creating the MPO by not including the exponential interaction. The VUMPS algorithm then runs without errors and converges to the analytical result (at least at the critical point):

`vumps_ising_exp.jl`

using ITensors, ITensorMPS
using ITensorInfiniteMPS

base_path = joinpath(pkgdir(ITensorInfiniteMPS), "examples", "vumps", "src")
src_files = ["vumps_subspace_expansion.jl", "entropy.jl"]
for f in src_files
  include(joinpath(base_path, f))
end


"""
build the transverse field Ising model with exponential interactions in the thermodynamic limit 𝑁 → ∞
use convention H = -J ∑_{n<m} σˣₙ σˣₘ ⋅ λ^(|n-m|-1) - hz ∑_{n=−∞}^∞ σᶻₙ
with |λ| < 1 so that the interaction decays
"""
function InfiniteExpHTFI( 
  sites::CelledVector{<:Index},
  λ, # exponential decay base
  J, # Ising (σˣσˣ) coupling strength
  hz; # transverse field strength
  kwargs...
)
  if abs(λ) > 1.0
    throw(ArgumentError("cannot implement exponential decay with base larger than 1, λ = $(λ)!"))
  end

  link_dimension = 3

  N = length(sites)

  EType = eltype(union(λ, J, hz))

  linkindices = CelledVector(
    hasqns(sites.data) ?
    [Index([QN("SzParity",1,2) => 1], "Link,c=1,n=$n") for n in 1:N] : [Index(1, "Link,c=1,n=$(n)") for n in 1:N]
  )
  
  mpos = [Matrix{ITensor}(undef, link_dimension, link_dimension) for i in 1:N]
  for n in 1:N
    # define local matrix Hmat with empty tensors as local operators
    Hmat = fill(
      ITensor(EType, dag(sites[n]), prime(sites[n])), link_dimension, link_dimension
    )
    # left link index ll with daggered QN conserving direction (if applicable)
    ll = dag(linkindices[n-1])
    # right link index rl
    rl = linkindices[n]

    # add both identities as neutral elements in the MPO
    # replace all known tensors from empty to known interactions
    Hmat[1,1] = op("Id", sites[n])
    Hmat[3,3] = op("Id", sites[n])
    # local nearest neighbour and exp. decaying interaction terms
    Hmat[2,1] = op("X", sites[n])
    if !iszero(λ)
      Hmat[2,2] = op("Id", sites[n]) * λ  # λ Id,  on the diagonal
    end
    Hmat[3,2] = op("X", sites[n]) * -J # -J σˣ
    if !iszero(hz)
      Hmat[3,1] = op("Z", sites[n]) * -hz # -hz σᶻ
    end

    # add all missing links that connect the 
    # interaction operators in the unit cell
    Hmat[2,1] = setelt(ll[1]) * Hmat[2,1]
    Hmat[1,2] = Hmat[1,2] * setelt(rl[1])
    Hmat[2,2] = setelt(ll[1]) * Hmat[2,2] * setelt(rl[1])
    Hmat[3,2] = setelt(rl[1]) * Hmat[3,2]
    Hmat[2,3] = setelt(ll[1]) * Hmat[2,3]
    mpos[n] = Hmat
  end
  return InfiniteBlockMPO(mpos, sites.translator)
end


# sanity check if I can construct the MPO for the normal TFI correctly
"""
build the transverse field Ising model with nearest-neighbour interactions in the thermodynamic limit 𝑁 → ∞
use convention H = -J ∑_{n=−∞}^∞ σˣₙ σˣₙ₊₁ - hz ∑_{n=−∞}^∞ σᶻₙ
"""
function InfiniteHTFI(
  sites::CelledVector{<:Index},
  J, # Ising (σˣσˣ) coupling strength
  hz; # transverse field strength
  kwargs...
)
  link_dimension = 3

  N = length(sites)

  EType = eltype(union(J, hz))

  linkindices = CelledVector(
    hasqns(sites.data) ?
    [Index([QN("SzParity",1,2) => 1], "Link,c=1,n=$n") for n in 1:N] : [Index(1, "Link,c=1,n=$(n)") for n in 1:N]
  )
  
  mpos = [Matrix{ITensor}(undef, link_dimension, link_dimension) for i in 1:N]
  for n in 1:N
    # define local matrix Hmat with empty tensors as local operators
    Hmat = fill(
      ITensor(EType, dag(sites[n]), prime(sites[n])), link_dimension, link_dimension
    )
    # left link index ll with daggered QN conserving direction (if applicable)
    ll = dag(linkindices[n-1])
    # right link index rl
    rl = linkindices[n]

    # add both identities as neutral elements in the MPO
    # replace all known tensors from empty to known interactions
    Hmat[1,1] = op("Id", sites[n])
    Hmat[3,3] = op("Id", sites[n])
    # local nearest-neighbour interaction terms
    Hmat[2,1] = op("X", sites[n])
    Hmat[3,2] = op("X", sites[n]) * -J # -J σˣ
    if !iszero(hz)
      Hmat[3,1] = op("Z", sites[n]) * -hz # -hz σᶻ
    end

    # add all missing links that connect the 
    # interaction operators in the unit cell
    Hmat[2,1] = setelt(ll[1]) * Hmat[2,1]
    Hmat[1,2] = Hmat[1,2] * setelt(rl[1])
    Hmat[2,2] = setelt(ll[1]) * Hmat[2,2] * setelt(rl[1])
    Hmat[3,2] = setelt(rl[1]) * Hmat[3,2]
    Hmat[2,3] = setelt(ll[1]) * Hmat[2,3]
    mpos[n] = Hmat
  end
  return InfiniteBlockMPO(mpos, sites.translator)
end


##############################################################################
# VUMPS/TDVP parameters
#

maxdim = 20 # Maximum bond dimension
cutoff = 1e-6 # Singular value cutoff when increasing the bond dimension
max_vumps_iters = 10 # Maximum number of iterations of the VUMPS/TDVP algorithm at a fixed bond dimension
tol = 1e-5 # Precision error tolerance for outer loop of VUMPS or TDVP
outer_iters = 5 # Number of times to increase the bond dimension
time_step = -Inf # -Inf corresponds to VUMPS, finite time_step corresponds to TDVP
solver_tol = (x -> x / 100) # Tolerance for the local solver (eigsolve in VUMPS and exponentiate in TDVP)
multisite_update_alg = "parallel" # Choose between ["sequential", "parallel"]. Only parallel works with TDVP.
conserve_qns = true # Whether or not to conserve spin parity


nsite = 2 # Number of sites in the unit cell
initstate(n) = "↑"
s = infsiteinds("S=1/2", nsite; initstate, conserve_szparity=conserve_qns)

ψ = InfMPS(s, initstate)

@show norm(contract(ψ.AL[1:nsite]..., ψ.C[nsite]) - contract(ψ.C[0], ψ.AR[1:nsite]...))

J  = 1.0
hz = 1.0
λ  = 0.6

H_test = InfiniteHTFI(s,J,hz)
H_test_exp = InfiniteExpHTFI(s,λ,J,hz)

vumps_kwargs = (
  tol=tol,
  maxiter=max_vumps_iters,
  solver_tol=solver_tol,
  multisite_update_alg=multisite_update_alg,
)
subspace_expansion_kwargs = (cutoff=cutoff, maxdim=maxdim)

ψ_NN  = vumps_subspace_expansion(H_test, ψ; outer_iters, subspace_expansion_kwargs, vumps_kwargs)
# Check translational invariance
@show norm(contract(ψ_NN.AL[1:nsite]..., ψ_NN.C[nsite]) - contract(ψ_NN.C[0], ψ_NN.AR[1:nsite]...))

ψ_exp = vumps_subspace_expansion(H_test_exp, InfMPS(s, initstate); outer_iters, subspace_expansion_kwargs, vumps_kwargs)

The error message produced is the following:

error message

ERROR: BoundsError: attempt to access Dim 1: (dim=1|id=647|"Link,c=0,n=2") <In>
 1: QN("SzParity",1,2) => 1
Dim 2: (dim=2|id=254|"S=1/2,Site,c=1,n=1")' <Out>
 1: QN("SzParity",1,2) => 1
 2: QN("SzParity",0,2) => 1
Dim 3: (dim=2|id=254|"S=1/2,Site,c=1,n=1") <In>
 1: QN("SzParity",1,2) => 1
 2: QN("SzParity",0,2) => 1
Dim 4: (dim=1|id=932|"Link,c=1,n=1") <Out>
 1: QN("SzParity",1,2) => 1
NDTensors.BlockSparse{Float64, Vector{Float64}, 4}
 1×2×2×1
 at index [1, 1]
Stacktrace:
  [1] throw_boundserror(A::NDTensors.BlockSparseTensor{…}, I::Tuple{…})
    @ Base ./essentials.jl:14
  [2] checkbounds
    @ ./abstractarray.jl:699 [inlined]
  [3] _getindex
    @ ./abstractarray.jl:1352 [inlined]
  [4] getindex(::NDTensors.BlockSparseTensor{Float64, 4, NTuple{…}, NDTensors.BlockSparse{…}}, ::Int64, ::Int64)
    @ Base ./abstractarray.jl:1312
  [5] getindex(::ITensor, ::Int64, ::Int64)
    @ ITensors ~/.julia/packages/ITensors/KvHvB/src/itensor.jl:1085
  [6] left_environment(H::InfiniteBlockMPO, ψ::InfiniteCanonicalMPS; tol::Float64)
    @ ITensorInfiniteMPS ~/.julia/dev/ITensorInfiniteMPS/src/vumps_mpo.jl:129
  [7] left_environment
    @ ~/.julia/dev/ITensorInfiniteMPS/src/vumps_mpo.jl:92 [inlined]
  [8] generate_twobody_nullspace(ψ::InfiniteCanonicalMPS, H::InfiniteBlockMPO, b::Tuple{Int64, Int64}; atol::Float64)
    @ ITensorInfiniteMPS ~/.julia/dev/ITensorInfiniteMPS/src/subspace_expansion.jl:124
  [9] generate_twobody_nullspace
    @ ~/.julia/dev/ITensorInfiniteMPS/src/subspace_expansion.jl:120 [inlined]
 [10] subspace_expansion(ψ::InfiniteCanonicalMPS, H::InfiniteBlockMPO, b::Tuple{…}; maxdim::Int64, cutoff::Float64, atol::Float64, kwargs::@Kwargs{})
    @ ITensorInfiniteMPS ~/.julia/dev/ITensorInfiniteMPS/src/subspace_expansion.jl:197
 [11] subspace_expansion
    @ ~/.julia/dev/ITensorInfiniteMPS/src/subspace_expansion.jl:166 [inlined]
 [12] subspace_expansion(ψ::InfiniteCanonicalMPS, H::InfiniteBlockMPO; kwargs::@Kwargs{cutoff::Float64, maxdim::Int64})
    @ ITensorInfiniteMPS ~/.julia/dev/ITensorInfiniteMPS/src/subspace_expansion.jl:323
 [13] subspace_expansion
    @ ~/.julia/dev/ITensorInfiniteMPS/src/subspace_expansion.jl:315 [inlined]
 [14] macro expansion
    @ ./timing.jl:581 [inlined]
 [15] macro expansion
    @ ~/.julia/dev/ITensorInfiniteMPS/examples/vumps/src/vumps_subspace_expansion.jl:16 [inlined]
 [16] macro expansion
    @ ./timing.jl:581 [inlined]
 [17] tdvp_subspace_expansion(H::InfiniteBlockMPO, ψ::InfiniteCanonicalMPS; time_step::Float64, outer_iters::Int64, subspace_expansion_kwargs::@NamedTuple{}, vumps_kwargs::@NamedTuple{})
    @ Main ~/.julia/dev/ITensorInfiniteMPS/examples/vumps/src/vumps_subspace_expansion.jl:9
 [18] tdvp_subspace_expansion
    @ ~/.julia/dev/ITensorInfiniteMPS/examples/vumps/src/vumps_subspace_expansion.jl:6 [inlined]
 [19] #vumps_subspace_expansion#12
    @ ~/.julia/dev/ITensorInfiniteMPS/examples/vumps/src/vumps_subspace_expansion.jl:27 [inlined]
 [20] top-level scope
    @ ~/.julia/dev/ITensorInfiniteMPS/examples/vumps/vumps_ising_exp.jl:187
Some type information was truncated. Use `show(err)` to see complete types.

The most important part of the above error message (I believe) is: ERROR: BoundsError: attempt to access [...] 1×2×2×1 at index [1, 1], which originates at this location in the VUMPS code:
which originates in the VUMPS code in this location:

λ = H[1][n, n][1, 1]

within the wider loop around it.

I reckon that the exponential interaction is not anticipated by the code: it expects a local operator on the diagonal, like the identities at locations [1,1] and [3,3] of the Matrix of ITensors representing the MPO in this example, and it does not treat an identity that connects two distant $\sigma^x$ operators in a contribution like

$$\begin{align} \sigma^x_n \otimes \lambda\mathbb{1}_{n+1} \otimes \lambda\mathbb{1}_{n+2} \otimes \cdots \otimes \sigma^x_m = \lambda^{m-n-1}\, \sigma^x_n \otimes \mathbb{1}_{n+1} \otimes \mathbb{1}_{n+2} \otimes \cdots \otimes \sigma^x_m \ . \end{align}$$

One can get away with not equipping the terminal identity operators with link indices for a contribution to the Hamiltonian like the following:

$$\begin{align} \mathbb{1}_{n-2} \otimes \mathbb{1}_{n-1} \otimes \sigma^x_n \otimes \sigma^x_{n+1} \otimes \mathbb{1}_{n+2} \otimes \mathbb{1}_{n+3} \ , \end{align}$$

because I believe they won't carry any internal interactions, as they terminate the contribution of that interaction term in the automaton way of thinking (am I making myself clear?).

However, my take would be that the diagonal elements should indeed be equipped with a link index daggered from the left and one from the right, since this very identity operator connects two spin-flip operators. Therefore, this operation creates a local current of a (possibly) conserved charge which the link has to keep track of.
This is done by the line Hmat[2,2] = setelt(ll[1]) * Hmat[2,2] * setelt(rl[1]).
This, however, produces another error message, and beyond that I currently believe it shouldn't be correct anyway...

@LHerviou
Contributor

Indeed you are right, it seems I did not implement it correctly here (it is also possible that the interface changed in two years and I did not realize it). Let me go into more detail.

  • Your construction of the InfiniteBlockMPO seems perfectly correct to me.

  • You are indeed correct that a leg is missing. Basically, since we assumed that block would be a constant times the identity, we did not take care of that leg properly. This part of the code needs a rewrite, if you are up to it; there is actually quite a bit to fix. Here is the pseudocode of what we would want (a more concrete sketch follows below):

if !isempty(H[1][n, n])
  step 1 -> check that the largest eigenvalue of H[1][n, n] is smaller than 1
            (we can limit ourselves to the identity matrix if you want)
  step 2 -> implement Eq. C.19 in https://arxiv.org/pdf/1701.07035 through linsolve,
            or Eq. C.25 depending on λ.
            Both equations do not care about the extra leg if implemented correctly.
end
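A rough Julia sketch of what step 2 could look like (not the package implementation; the transfer-matrix action and the index handling, in particular the extra MPO leg, are schematic assumptions):

using ITensors
using KrylovKit: linsolve

# Schematic: for a diagonal block H[1][n, n] = λ·𝟙, the corresponding row
# Lₐ of the left environment solves (1 - λ·T) Lₐ = Yₐ, where T is the MPS
# transfer matrix and Yₐ collects the already-computed rows below.
function solve_diagonal_row(Y::ITensor, λ::Number, AL::Vector{ITensor})
  function transfer(L::ITensor)
    # Left action of the unit-cell transfer matrix (index handling schematic).
    for A in AL
      L = L * A * dag(prime(A, "Link"))
    end
    return L
  end
  # GMRES-style solve of (1 - λ·T) L = Y, with Y as the initial guess.
  L, info = linsolve(v -> v - λ * transfer(v), Y, Y; tol=1e-10)
  info.converged == 1 || @warn "left-environment linsolve did not converge"
  return L
end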

I can try to do that in the next few days, but if you have the time, it will probably be faster.

@mtfishman
Member Author

Just chiming in to say that I don't have time to work on this right now, but this would be great to have working, thanks @LHerviou and @jtschneider for looking into this.
