Add MPO version of VUMPS/TDVP #39
I have started having a look at it, but if anyone wants to contribute, let me know. As of now, I am actually struggling with something simple: it should be possible to automatically generate an IMPO from the InfiniteTensorSum we have in models. If the 2-local Hamiltonian is of the form T1 - T2, then we necessarily have [...]. Another alternative would be to directly work with the InfiniteTensorSum (it is then just a question of relabeling the indices).
Just so I understand the issue fully: you are talking about implementing VUMPS in terms of a single MPO, not a sum of local MPOs, correct? And which kind of Index fusion are you trying to do? You form the MPO using the construction you describe above, but then want to fuse the quantum number sectors of the link indices of the MPO to form larger QN blocks? If that is the case, combiners work on single indices as well, in which case they fuse quantum number blocks automatically. However, I'll note that we've actually found that for DMRG it can in fact be better for performance not to fuse the quantum number blocks of the link indices of the MPO, since the MPO often becomes more sparse if you keep the blocks split. Because of this, I introduced a function [...] (credit goes to Johannes Hauschild, lead developer of TenPy, for pointing this out to us!). Also, fusing the QN blocks may destroy the upper/lower block triangular form that is useful for implementing the algorithm for obtaining the quasi-fixed point environment of the MPO transfer matrix, so it may be best to just leave the MPO as-is.
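A minimal sketch of the single-index combiner behavior described above (not code from this thread; the index sectors and tags are made up for illustration):

```julia
using ITensors

# A link index with repeated QN sectors, as might appear on an unfused MPO link:
l = Index([QN("Sz", 0) => 2, QN("Sz", 2) => 1, QN("Sz", 0) => 1], "Link")
A = randomITensor(l, dag(l'))

# A combiner on the single index `l` fuses its quantum number blocks,
# merging the two QN("Sz", 0) sectors into one larger block.
C = combiner(l; tags="Link,fused")
Afused = A * C
@show inds(Afused)
```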
Yes, as a single translation-invariant MPO. For now I am really just trying to create the object W_inf. I am trying to write it automatically for arbitrary (potentially large) MPOs, and I am just struggling a bit with making sure the indices work out correctly. I guess I can just keep the correct T1, T2 without really having an explicit MPO built. I agree it is probably best not to fuse the QN blocks.
Never mind, I think I fixed that problem: would taking InfiniteMPO.data to be a CelledVector{Matrix{ITensor}} be a satisfactory solution?
Oh I see, you are actually trying to write general code for converting some sort of operator representation into a uniform MPO? Getting the indices consistent with the correct quantum numbers can be tricky. Indeed, if it is easier for now, we could just use the operator-valued matrix representation, like what is output by: https://github.com/ITensor/ITensorInfiniteMPS.jl/blob/main/src/broken/vumps_mpo.jl#L51-L79. I think I found it was easier to write the VUMPS code in terms of that form anyway, and we could always just convert to that form once we have code that actually makes an infinite MPO explicitly. I don't think we should define the [...]. To give you some context, in finite DMRG in ITensor this is handled by the OpSum/AutoMPO system. The OpSum is the representation of the Hamiltonian in terms of string representations of operators and the sites they act on, and the AutoMPO system converts that representation into an MPO: https://github.com/ITensor/ITensors.jl/blob/main/src/physics/autompo.jl. One goal was to extend that code to form infinite MPOs, also starting from an OpSum representation. There was a PR for that to the C++ version of ITensor: ITensor/ITensor#161, but we haven't had time to properly review it and port it to the Julia code. The Julia code for finite OpSum to MPO conversion was refactored and simplified a lot from the C++ version of the code by Miles, so the extension to converting an OpSum to an InfiniteMPO could likely be simplified as well.
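For reference, a minimal sketch of the finite OpSum → MPO workflow mentioned above (the transverse-field Ising couplings here are only illustrative):

```julia
using ITensors, ITensorMPS

# Build a finite transverse-field Ising Hamiltonian from an OpSum and
# convert it to an MPO (the AutoMPO-style construction referenced above).
N = 10
sites = siteinds("S=1/2", N)
os = OpSum()
for n in 1:(N - 1)
  os += -1.0, "X", n, "X", n + 1  # -J σˣₙ σˣₙ₊₁ with J = 1
end
for n in 1:N
  os += -1.0, "Z", n              # -hz σᶻₙ with hz = 1
end
H = MPO(os, sites)
@show maxlinkdim(H)
```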
I will start like that then; it should be easy to make the basic infrastructure work. Thanks for the additional context.
For info, I managed to write a full MPO treatment of VUMPS. The subspace expansion could be improved, but I would like to talk to several people in Benasque before looking into it. I am not completely sure it fits my (selfish) requirements.
Amazing, thanks! My understanding is that the subspace expansion could be implemented by applying the 2-site effective Hamiltonian (the same one you would use in a 2-site VUMPS update) and then projecting into the nullspace. So I think it would be very similar to the current implementation. Or do you mean that there is a particular issue with the FQH Hamiltonian?
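Roughly, the nullspace-projection step described here amounts to the following (a toy dense-linear-algebra sketch with made-up dimensions, not the actual ITensorInfiniteMPS implementation):

```julia
using LinearAlgebra

# Toy illustration of projecting a candidate expansion term into the
# nullspace of the current left isometry (dense matrices, small dimensions).
d, χ = 2, 4                        # physical and bond dimensions
AL = Matrix(qr(randn(d * χ, χ)).Q) # a left isometry: AL' * AL ≈ I
NL = nullspace(AL')                # basis of the orthogonal complement of AL
B = randn(d * χ, χ)                # e.g. the 2-site effective Hamiltonian applied to the MPS tensor
B_perp = NL * (NL' * B)            # keep only the genuinely new directions
@show norm(AL' * B_perp)           # ≈ 0 up to floating point error
```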
Well, the original proposal for the subspace expansion is indeed just looking at the two-site effective Hamiltonian. That is what I am currently using in the code. On the other hand, there were more recent papers discussing a better way to do it. I discussed with some coworkers and we are not sure [...]
My understanding is that https://scipost.org/SciPostPhysCore.4.1.004/pdf is focused on the problem of truncating, not expanding, the bond dimension of an MPS. From what I can tell, the algorithm assumes you already have an MPS manifold to project into. Perhaps some variant could be used for bond dimension expansion, but it also has the disadvantage that it is an iterative algorithm, as opposed to the direct method currently used. I would hope the current method or an extension of it would work (such as extending to a 3-site variation, if needed), though improved bond dimension expansion techniques are definitely a worthy area of exploration in general.
The method can in principle also be used to find the best way to extend the system. In fact, the Ghent team uses a variation of that proposal to do the subspace expansion in their VUMPS code for 2D classical systems (some of my coworkers work with them). I do agree it is not completely obvious from the papers themselves --- I had to discuss with my coworkers to clarify how it worked.
Yes, it is much clearer how it could work without symmetries, since you can just select a bond dimension larger than the one you have. You could just apply the Hamiltonian or the exponential of the Hamiltonian to your MPS and then compress back to an MPS with the bond dimension you request. Symmetries are always the subtle part of these subspace expansion procedures.
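In the finite-chain setting, that apply-then-compress idea would look roughly like the following sketch (the model, bond dimensions, and cutoff are only placeholders):

```julia
using ITensors, ITensorMPS

# Finite-chain illustration of "apply the Hamiltonian to the MPS, then
# compress back to the requested bond dimension".
N = 10
sites = siteinds("S=1/2", N)
os = OpSum()
for n in 1:(N - 1)
  os += "Sz", n, "Sz", n + 1
end
H = MPO(os, sites)
ψ = randomMPS(sites; linkdims=4)
ϕ = apply(H, ψ; maxdim=8, cutoff=1e-10)  # expand by applying H, then truncate to maxdim
@show maxlinkdim(ϕ)
```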
I guess you could mix the two strategies, by first expanding the bond dimension using the subspace expansion we are using, and then variationally optimizing within the expanded tensor space using the compression algorithm in https://scipost.org/SciPostPhysCore.4.1.004/pdf. Essentially that is like alternating steps of VUMPS and iTEBD, where iTEBD is implemented with the algorithm in https://scipost.org/SciPostPhysCore.4.1.004/pdf.
Yes, pretty much what I had in mind.
Hi guys, [...]
Hi, as far as I know you currently cannot use a general MPO directly. On the other hand, you can convert it to an InfiniteBlockMPO fairly trivially. The problem is that you will not really take advantage of the (triangular) structure of the MPO. As far as I remember, the convention is currently lower triangular.
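For concreteness, here is a sketch of the lower-triangular operator-valued matrix for the exponentially decaying transverse-field Ising model that comes up below, written in plain Julia with dense 2×2 matrices standing in for the local operators (the couplings are just placeholders):

```julia
# 3×3 lower-triangular operator-valued matrix for one site of an exponentially
# decaying TFI chain; rows index the left link, columns the right link.
Id = [1.0 0.0; 0.0 1.0]
X  = [0.0 1.0; 1.0 0.0]
Z  = [1.0 0.0; 0.0 -1.0]
J, hz, λ = 1.0, 1.0, 0.6   # placeholder couplings

W = fill(zeros(2, 2), 3, 3)
W[1, 1] = Id               # identity block before/after the interaction string
W[3, 3] = Id
W[2, 1] = X                # X at one end of the string
W[2, 2] = λ * Id           # each intermediate site on the diagonal contributes a factor λ
W[3, 2] = -J * X           # -J X at the other end of the string
W[3, 1] = -hz * Z          # purely local transverse-field term
# A path that passes through the [2, 2] block k times picks up λ^k, which
# reproduces the λ^(|n-m|-1) tail of the X-X interaction.
```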
Hi Loïc, so I gave it a try and constructed my own MPO as an `InfiniteBlockMPO`:

"""
build the transverse field Ising model with exponential interactions in the thermodynamic limit 𝑁 → ∞
use convention H = -J ∑_{n<m} σˣₙ σˣₘ ⋅ λ^(|n-m|-1) - hz ∑_{n=−∞}^∞ σᶻₙ
with
"""
function InfiniteExpHTFI(
sites::CelledVector{<:Index},
λ, # exponential decay base
J, # interaction kinetic (tunneling) coupling
hz; # interaction strength of the transverse field
kwargs...
)
if abs(λ) > 1.0
throw(ArgumentError("cannot implement exponential decay with base larger than 1, λ = $(λ)!"))
end
link_dimension = 3
N = length(sites)
EType = eltype(union(λ, J, hz))
linkindices = CelledVector(
hasqns(sites.data) ?
[Index([QN("SzParity",1,2) => 1], "Link,c=1,n=$n") for n in 1:N] : [Index(1, "Link,c=1,n=$(n)") for n in 1:N]
)
mpos = [Matrix{ITensor}(undef, link_dimension, link_dimension) for i in 1:N]
for n in 1:N
# define local matrix Hmat with empty tensors as local operators
Hmat = fill(
ITensor(EType, dag(sites[n]), prime(sites[n])), link_dimension, link_dimension
)
# left link index ll with daggered QN conserving direction (if applicable)
ll = dag(linkindices[n-1])
# right link index rl
rl = linkindices[n]
# add both identities as neutral elements in the MPO
# replace all known tensors from empty to known interactions
Hmat[1,1] = op("Id", sites[n])
Hmat[3,3] = op("Id", sites[n])
# local nearest neighbour and exp. decaying interaction terms
Hmat[2,1] = op("X", sites[n])
if !iszero(λ)
Hmat[2,2] = op("Id", sites[n]) * λ # λ Id, on the diagonal
end
Hmat[3,2] = op("X", sites[n]) * -J # Jxx σˣ
if !iszero(hz)
Hmat[3,1] = op("Z", sites[n]) * -hz # hz σᶻ
end
# add all missing links that connect the interaction
# operators in the unit cell
Hmat[2,1] = setelt(ll[1]) * Hmat[2,1]
Hmat[1,2] = Hmat[1,2] * setelt(rl[1])
Hmat[2,2] = setelt(ll[1]) * Hmat[2,2] * setelt(rl[1])
Hmat[3,2] = setelt(rl[1]) * Hmat[3,2]
Hmat[2,3] = setelt(ll[1]) * Hmat[2,3]
mpos[n] = Hmat
end
return InfiniteBlockMPO(mpos, sites.translator)
end However, this throws an error with the following example code (run from within the ITensorIfniniteMPS foulder at In my example file, you can also see a sanity check for my "style" of creating the MPO by not including the exponential interaction. The VUMPS algorithm then runs without errors and converged to the analytical result (at least at the critical point): `vumps_ising_exp.jl`
using ITensors, ITensorMPS
using ITensorInfiniteMPS
base_path = joinpath(pkgdir(ITensorInfiniteMPS), "examples", "vumps", "src")
src_files = ["vumps_subspace_expansion.jl", "entropy.jl"]
for f in src_files
include(joinpath(base_path, f))
end
"""
build the transverse field Ising model with exponential interactions in the thermodynamic limit 𝑁 → ∞
use convention H = -J ∑_{n<m} σˣₙ σˣₘ ⋅ λ^(|n-m|-1) - hz ∑_{n=−∞}^∞ σᶻₙ
with
"""
function InfiniteExpHTFI(
sites::CelledVector{<:Index},
λ, # exponential decay base
J, # interaction kinetic (tunneling) coupling
hz; # interaction strength of the transverse field
kwargs...
)
if abs(λ) > 1.0
throw(ArgumentError("cannot implement exponential decay with base larger than 1, λ = $(λ)!"))
end
link_dimension = 3
N = length(sites)
EType = eltype(union(λ, J, hz))
linkindices = CelledVector(
hasqns(sites.data) ?
[Index([QN("SzParity",1,2) => 1], "Link,c=1,n=$n") for n in 1:N] : [Index(1, "Link,c=1,n=$(n)") for n in 1:N]
)
mpos = [Matrix{ITensor}(undef, link_dimension, link_dimension) for i in 1:N]
for n in 1:N
# define local matrix Hmat with empty tensors as local operators
Hmat = fill(
ITensor(EType, dag(sites[n]), prime(sites[n])), link_dimension, link_dimension
)
# left link index ll with daggered QN conserving direction (if applicable)
ll = dag(linkindices[n-1])
# right link index rl
rl = linkindices[n]
# add both identities as neutral elements in the MPO
# replace all known tensors from empty to known interactions
Hmat[1,1] = op("Id", sites[n])
Hmat[3,3] = op("Id", sites[n])
# local nearest neighbour and exp. decaying interaction terms
Hmat[2,1] = op("X", sites[n])
if !iszero(λ)
Hmat[2,2] = op("Id", sites[n]) * λ # λ Id, on the diagonal
end
Hmat[3,2] = op("X", sites[n]) * -J # Jxx σˣ
if !iszero(hz)
Hmat[3,1] = op("Z", sites[n]) * -hz # hz σᶻ
end
# add all missing links that connect the
# interaction operators in the unit cell
Hmat[2,1] = setelt(ll[1]) * Hmat[2,1]
Hmat[1,2] = Hmat[1,2] * setelt(rl[1])
Hmat[2,2] = setelt(ll[1]) * Hmat[2,2] * setelt(rl[1])
Hmat[3,2] = setelt(rl[1]) * Hmat[3,2]
Hmat[2,3] = setelt(ll[1]) * Hmat[2,3]
mpos[n] = Hmat
end
return InfiniteBlockMPO(mpos, sites.translator)
end
# sanity check if I can construct the MPO for the normal TFI correctly
"""
build the transverse field Ising model with nearest neighbour interactions in the thermodynamic limit 𝑁 → ∞
use convention H = -J ∑_{n=−∞}^∞ σˣₙ σˣₙ₊₁ - hz ∑_{n=−∞}^∞ σᶻₙ
with
"""
function InfiniteHTFI(
sites::CelledVector{<:Index},
J, # interaction kinetic (tunneling) coupling
hz; # strength of the transverse field
kwargs...
)
link_dimension = 3
N = length(sites)
EType = eltype(union(J, hz))
linkindices = CelledVector(
hasqns(sites.data) ?
[Index([QN("SzParity",1,2) => 1], "Link,c=1,n=$n") for n in 1:N] : [Index(1, "Link,c=1,n=$(n)") for n in 1:N]
)
mpos = [Matrix{ITensor}(undef, link_dimension, link_dimension) for i in 1:N]
for n in 1:N
# define local matrix Hmat with empty tensors as local operators
Hmat = fill(
ITensor(EType, dag(sites[n]), prime(sites[n])), link_dimension, link_dimension
)
# left link index ll with daggered QN conserving direction (if applicable)
ll = dag(linkindices[n-1])
# right link index rl
rl = linkindices[n]
# add both identities as neutral elements in the MPO
# replace all known tensors from empty to known interactions
Hmat[1,1] = op("Id", sites[n])
Hmat[3,3] = op("Id", sites[n])
# local nearest neighbour interaction terms
Hmat[2,1] = op("X", sites[n])
Hmat[3,2] = op("X", sites[n]) * -J # Jxx σˣ
if !iszero(hz)
Hmat[3,1] = op("Z", sites[n]) * -hz # hz σᶻ
end
# add all missing links that connect the
# interaction operators in the unit cell
Hmat[2,1] = setelt(ll[1]) * Hmat[2,1]
Hmat[1,2] = Hmat[1,2] * setelt(rl[1])
Hmat[2,2] = setelt(ll[1]) * Hmat[2,2] * setelt(rl[1])
Hmat[3,2] = setelt(rl[1]) * Hmat[3,2]
Hmat[2,3] = setelt(ll[1]) * Hmat[2,3]
mpos[n] = Hmat
end
return InfiniteBlockMPO(mpos, sites.translator)
end
##############################################################################
# VUMPS/TDVP parameters
#
maxdim = 20 # Maximum bond dimension
cutoff = 1e-6 # Singular value cutoff when increasing the bond dimension
max_vumps_iters = 10 # Maximum number of iterations of the VUMPS/TDVP algorithm at a fixed bond dimension
tol = 1e-5 # Precision error tolerance for outer loop of VUMPS or TDVP
outer_iters = 5 # Number of times to increase the bond dimension
time_step = -Inf # -Inf corresponds to VUMPS, finite time_step corresponds to TDVP
solver_tol = (x -> x / 100) # Tolerance for the local solver (eigsolve in VUMPS and exponentiate in TDVP)
multisite_update_alg = "parallel" # Choose between ["sequential", "parallel"]. Only parallel works with TDVP.
conserve_qns = true # Whether or not to conserve spin parity
nsite = 2 # Number of sites in the unit cell
initstate(n) = "↑"
s = infsiteinds("S=1/2", nsite; initstate, conserve_szparity=conserve_qns)
ψ = InfMPS(s, initstate)
@show norm(contract(ψ.AL[1:nsite]..., ψ.C[nsite]) - contract(ψ.C[0], ψ.AR[1:nsite]...))
J = 1.0
hz = 1.0
λ = 0.6
H_test = InfiniteHTFI(s,J,hz)
H_test_exp = InfiniteExpHTFI(s,λ,J,hz)
vumps_kwargs = (
tol=tol,
maxiter=max_vumps_iters,
solver_tol=solver_tol,
multisite_update_alg=multisite_update_alg,
)
subspace_expansion_kwargs = (cutoff=cutoff, maxdim=maxdim)
ψ_NN = vumps_subspace_expansion(H_test, ψ; outer_iters, subspace_expansion_kwargs, vumps_kwargs)
# Check translational invariance
@show norm(contract(ψ_NN.AL[1:nsite]..., ψ_NN.C[nsite]) - contract(ψ_NN.C[0], ψ_NN.AR[1:nsite]...))
ψ_exp = vumps_subspace_expansion(H_test_exp, InfMPS(s, initstate); outer_iters, subspace_expansion_kwargs, vumps_kwargs)

The error message produced is the following:
ERROR: BoundsError: attempt to access Dim 1: (dim=1|id=647|"Link,c=0,n=2") <In>
1: QN("SzParity",1,2) => 1
Dim 2: (dim=2|id=254|"S=1/2,Site,c=1,n=1")' <Out>
1: QN("SzParity",1,2) => 1
2: QN("SzParity",0,2) => 1
Dim 3: (dim=2|id=254|"S=1/2,Site,c=1,n=1") <In>
1: QN("SzParity",1,2) => 1
2: QN("SzParity",0,2) => 1
Dim 4: (dim=1|id=932|"Link,c=1,n=1") <Out>
1: QN("SzParity",1,2) => 1
NDTensors.BlockSparse{Float64, Vector{Float64}, 4}
1×2×2×1
at index [1, 1]
Stacktrace:
[1] throw_boundserror(A::NDTensors.BlockSparseTensor{…}, I::Tuple{…})
@ Base ./essentials.jl:14
[2] checkbounds
@ ./abstractarray.jl:699 [inlined]
[3] _getindex
@ ./abstractarray.jl:1352 [inlined]
[4] getindex(::NDTensors.BlockSparseTensor{Float64, 4, NTuple{…}, NDTensors.BlockSparse{…}}, ::Int64, ::Int64)
@ Base ./abstractarray.jl:1312
[5] getindex(::ITensor, ::Int64, ::Int64)
@ ITensors ~/.julia/packages/ITensors/KvHvB/src/itensor.jl:1085
[6] left_environment(H::InfiniteBlockMPO, ψ::InfiniteCanonicalMPS; tol::Float64)
@ ITensorInfiniteMPS ~/.julia/dev/ITensorInfiniteMPS/src/vumps_mpo.jl:129
[7] left_environment
@ ~/.julia/dev/ITensorInfiniteMPS/src/vumps_mpo.jl:92 [inlined]
[8] generate_twobody_nullspace(ψ::InfiniteCanonicalMPS, H::InfiniteBlockMPO, b::Tuple{Int64, Int64}; atol::Float64)
@ ITensorInfiniteMPS ~/.julia/dev/ITensorInfiniteMPS/src/subspace_expansion.jl:124
[9] generate_twobody_nullspace
@ ~/.julia/dev/ITensorInfiniteMPS/src/subspace_expansion.jl:120 [inlined]
[10] subspace_expansion(ψ::InfiniteCanonicalMPS, H::InfiniteBlockMPO, b::Tuple{…}; maxdim::Int64, cutoff::Float64, atol::Float64, kwargs::@Kwargs{})
@ ITensorInfiniteMPS ~/.julia/dev/ITensorInfiniteMPS/src/subspace_expansion.jl:197
[11] subspace_expansion
@ ~/.julia/dev/ITensorInfiniteMPS/src/subspace_expansion.jl:166 [inlined]
[12] subspace_expansion(ψ::InfiniteCanonicalMPS, H::InfiniteBlockMPO; kwargs::@Kwargs{cutoff::Float64, maxdim::Int64})
@ ITensorInfiniteMPS ~/.julia/dev/ITensorInfiniteMPS/src/subspace_expansion.jl:323
[13] subspace_expansion
@ ~/.julia/dev/ITensorInfiniteMPS/src/subspace_expansion.jl:315 [inlined]
[14] macro expansion
@ ./timing.jl:581 [inlined]
[15] macro expansion
@ ~/.julia/dev/ITensorInfiniteMPS/examples/vumps/src/vumps_subspace_expansion.jl:16 [inlined]
[16] macro expansion
@ ./timing.jl:581 [inlined]
[17] tdvp_subspace_expansion(H::InfiniteBlockMPO, ψ::InfiniteCanonicalMPS; time_step::Float64, outer_iters::Int64, subspace_expansion_kwargs::@NamedTuple{…}, vumps_kwargs::@NamedTuple{…})
@ Main ~/.julia/dev/ITensorInfiniteMPS/examples/vumps/src/vumps_subspace_expansion.jl:9
[18] tdvp_subspace_expansion
@ ~/.julia/dev/ITensorInfiniteMPS/examples/vumps/src/vumps_subspace_expansion.jl:6 [inlined]
[19] #vumps_subspace_expansion#12
@ ~/.julia/dev/ITensorInfiniteMPS/examples/vumps/src/vumps_subspace_expansion.jl:27 [inlined]
[20] top-level scope
@ ~/.julia/dev/ITensorInfiniteMPS/examples/vumps/vumps_ising_exp.jl:187
Some type information was truncated. Use `show(err)` to see complete types.

The most important part of the above error message (I believe) is the call at ITensorInfiniteMPS.jl/src/vumps_mpo.jl, line 129 (commit d0efafc),
within the wider loop around it. I reckon that the exponential interaction is not anticipated by the code: it expects a local operator on the diagonal, like the identities at locations [1,1] and [3,3] in this example of the Matrix of ITensors representing the MPO, and it does not treat an identity which should connect two distant [...]. One can get away with not equipping the terminal identity operators with link indices for a contribution to the Hamiltonian like such [...], because I believe they won't carry any internal interactions, as they terminate the contribution of that interaction term in the automaton style of thinking (am I making myself clear??). However, my take would be that the diagonal elements should indeed be equipped with a link index, one daggered from the left and one from the right, as this very identity operator connects two spin-flip operators. Therefore, this operation creates a local current of a (possibly) conserved charge which the link has to keep track of.
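As a small sanity check of the charge-bookkeeping argument above (a sketch, not code from this thread), one can look at the QN flux of the local operators for a parity-conserving S=1/2 site:

```julia
using ITensors

# σˣ changes the Sz parity, so the link threading through every block of the
# MPO between the two X operators (including the λ·Id block on the diagonal)
# must carry that charge; σᶻ is parity neutral.
s = siteind("S=1/2"; conserve_szparity=true)
@show flux(op("X", s))  # expected: QN("SzParity", 1, 2)
@show flux(op("Z", s))  # expected: QN("SzParity", 0, 2)
```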
Indeed you are right, it seems I did not implement it correctly here (it is also possible that the interface changed in 2 years and I did not realize it). Now I will go into more details.
I can try to do that in the next few days, but if you have the time, it will probably be faster.
Just chiming in to say that I don't have time to work on this right now, but this would be great to have working. Thanks @LHerviou and @jtschneider for looking into this.
Currently, the implementation of VUMPS/TDVP only supports Hamiltonians represented explicitly in terms of sums of local terms (with extensions to long range interactions in #31).
Ideally, we would have a version that works for an MPO. I attempted to implement one initially but there was a bug in it that I couldn't track down.
EDIT: The old, broken implementation is here: https://github.com/ITensor/ITensorInfiniteMPS.jl/blob/main/src/broken/vumps_mpo.jl. There may be useful functionality in there that could be reused for a new version, such as transforming an MPO into a set of operator/ITensor-valued matrices.
@LHerviou expressed interest in possibly looking into this. Additionally, an alternative that was discussed was to extend #31 (EDIT: and #32) to make use of sums of local MPOs with finite support, which would be an intermediate solution (ultimately, we would still want a single MPO version).
Please comment on this issue if you are interested in looking into implementing this.