
Manually defining MPOs #254

Closed
benjamints opened this issue Feb 20, 2025 · 3 comments · Fixed by #257

Comments

@benjamints

I am interested in Hamiltonians that include all-to-all interactions, such as

$$ H = \sum_{1\le m < n \le N} n S_m \cdot S_n . $$

This can be implemented as the MPO

$$ A_n = \begin{pmatrix} 1& S^a_n & \\ & 1 & nS^a_n \\ && 1 \end{pmatrix}. $$

As far as I can tell, FiniteMPOHamiltonian(lattice::AbstractArray{<:VectorSpace}, local_operators) generates a very inefficient MPO for $H$, so I want to enter the $A_n$ manually.
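As a quick sanity check of this construction (a pure-Julia toy, with the spin operators $S^a_n$ replaced by random scalars $s_n$), the top-right entry of the ordered product of these upper-triangular matrices accumulates exactly $\sum_{m<n} n\, s_m s_n$:

```julia
# Toy check with scalars s_n in place of S^a_n: the (1,3) entry of the
# ordered product A_1 A_2 ⋯ A_N collects every path that moves 1→2 at
# site m (weight s_m) and 2→3 at some later site n (weight n*s_n),
# i.e. sum_{m<n} n * s_m * s_n.
N = 6
s = rand(N)
A(n) = [1.0 s[n] 0.0; 0.0 1.0 n * s[n]; 0.0 0.0 1.0]

product = reduce(*, (A(n) for n in 1:N))
direct = sum(n * s[m] * s[n] for n in 2:N for m in 1:n-1)
@assert product[1, 3] ≈ direct
```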

In v0.12, MPOHamiltonian(x::AbstractArray{<:Any,3}) was removed, which seems to make manual input much more involved. What is the recommended way of entering the above example?

My attempt at mimicking the behavior of MPOHamiltonian(x::AbstractArray{<:Any,3}):

using MPSKit, TensorKit, BlockTensorKit

# Utility function to generate SparseBlockTensorMap
function tosparse(mpolist)
    T = MPSKit._find_tensortype(mpolist)
    E = scalartype(T)
    S = spacetype(T)
    Otype = MPSKit.jordanmpotensortype(S, E)

    return map(mpolist) do mpo
        # default every virtual space to the trivial (singlet) space
        VL, VR = (SumSpace(fill(oneunit(S), size(mpo, i))) for i in 1:2)
        Vphy = physicalspace(mpo[findfirst(x -> x isa AbstractTensorMap, mpo)])
        data = Dict{CartesianIndex{4},T}()
        for i in axes(mpo, 1), j in axes(mpo, 2)
            if mpo[i, j] isa AbstractTensorMap
                data[CartesianIndex((i, 1, 1, j))] = mpo[i, j]
                # overwrite the defaults with the actual virtual spaces
                VL[i] = left_virtualspace(mpo[i, j])
                VR[j] = right_virtualspace(mpo[i, j])
            end
        end
        return Otype(data, VL ⊗ Vphy ← Vphy ⊗ VR)
    end
end

# Spaces
Vsing = SU₂Space(0 => 1)
Vsite = SU₂Space(1 / 2 => 1)
Vadj = SU₂Space(1 => 1)

# Define operators
SL = -3 / 4 * TensorMap(ones, Float64, Vsing ⊗ Vsite ← Vsite ⊗ Vadj)
SR = TensorMap(ones, Float64, Vadj ⊗ Vsite ← Vsite ⊗ Vsing)
Is = BraidingTensor(Vsite, Vsing)
Ia = BraidingTensor(Vsite, Vadj)

N = 6
A = Array{Any,1}(missing, N)

# Input MPO
A[1] = [Is SL 0;]
for n = 2:N-1
    A[n] = [Is SL 0; 0 Ia n*SR; 0 0 Is]
end
A[N] = [0; N * SR; Is]

# Solve for eigenvalues
H = FiniteMPOHamiltonian(tosparse(A))
(val, states, _) = exact_diagonalization(H; num=10, sector=Irrep[SU₂](1))
println("ED completed:")
println(val)
@lkdvos
Member

lkdvos commented Feb 20, 2025

Thanks for opening an issue; I didn't realize people were using this to construct MPOs manually. I'm definitely not against having a better interface for this, so I'll gladly add the feature back. Your implementation looks close to what I would do as well.

If I may ask, my main question is what should be supported, and what you would consider a convenient interface. I'll gladly help with the implementation, but figuring out what is useful tends to be a hard problem 😬 Here are some thoughts:

If you have access to the virtualspaces in advance, everything is somewhat easy:

S = spacetype(t)  # t is any of the operators
T = scalartype(t) # desired element type
A = MPSKit.jordanmpotensortype(S, T)(undef, V_left * P, P * V_right)

# fill tensors as wanted:
A[i, 1, 1, j] = O_ij

Automatically determining the virtual spaces for a single tensor can also be supported, but the simple approach only works when there is at least one tensor in every row and every column; otherwise it needs to assume some default. (Typically you don't want empty rows/columns, so that might not be a problem?)
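The rule above can be sketched in plain Julia, independent of MPSKit (the function name, the `(Vleft, Vright)` pair encoding, and the `:singlet` default are all made up for illustration): each row/column takes its space from any tensor it contains, and empty rows/columns fall back to the trivial default.

```julia
# Toy inference: entries of W are either `missing` or a pair (Vleft, Vright)
# standing in for a tensor's virtual spaces (symbols here, not real spaces).
function infer_virtualspaces(W::AbstractMatrix; trivial = :singlet)
    VL = fill(trivial, size(W, 1))  # left space per row
    VR = fill(trivial, size(W, 2))  # right space per column
    for i in axes(W, 1), j in axes(W, 2)
        W[i, j] === missing && continue
        vl, vr = W[i, j]
        # a second tensor in the same row/column must agree with the first
        VL[i] == trivial || VL[i] == vl || error("inconsistent row space at ($i,$j)")
        VR[j] == trivial || VR[j] == vr || error("inconsistent column space at ($i,$j)")
        VL[i], VR[j] = vl, vr
    end
    return VL, VR
end
```

An empty row or column never triggers the loop body, which is exactly the case where a default has to be assumed.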

Automatically determining the virtualspaces for an entire MPO has slightly more information, and should be able to deduce more of the virtualspaces.

If I recall correctly, the previous versions also supported having missing or ::Number as elements. Is this something you would want?

What would be a convenient interface for this? Would you want to enter a Vector{Matrix{Union{Missing,Number,AbstractTensorMap}}} to support different sizes on different sites, or rather an Array{Union{...},3}?

Alternatively, would it be sufficient to simply provide a function that automatically attempts to deduce spaces? Something like SparseBlockTensorMap(array_of_tensors)?

@benjamints
Author

Thanks for getting back to me so quickly! Entering a Vector{Matrix{Union{Missing,Number,AbstractTensorMap}}} without having to specify the virtual spaces would be a nice interface. Supporting multiples of the identity as ::Number would be especially convenient, but then the virtual spaces are harder to infer, since you would have to trace them through the MPO. (This is why I opted for explicitly defining identities via BraidingTensor in my implementation.) If support for ::Number is too complicated, I would also be happy with a simple implementation like SparseBlockTensorMap(array_of_tensors) that infers the virtual spaces from the entries.
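To illustrate the tracing that ::Number entries would require, here is a toy fixed-point propagation in plain Julia (no MPSKit; all names hypothetical): a scalar at position $(i,j)$ acts as a multiple of the identity, so it ties the row-$i$ space to the column-$j$ space, and known spaces can be pushed through those constraints until nothing changes.

```julia
# Toy propagation: entries of W are `missing`, a Number (identity times a
# scalar, forcing row space == column space), or a pair (Vleft, Vright).
# Spaces start unknown (`nothing`) and are propagated to a fixed point.
function trace_spaces(W::AbstractMatrix)
    VL = Vector{Any}(nothing, size(W, 1))
    VR = Vector{Any}(nothing, size(W, 2))
    # seed from explicit tensors
    for i in axes(W, 1), j in axes(W, 2)
        if W[i, j] isa Tuple
            VL[i], VR[j] = W[i, j]
        end
    end
    # scalar entries force VL[i] == VR[j]; iterate until nothing changes
    changed = true
    while changed
        changed = false
        for i in axes(W, 1), j in axes(W, 2)
            W[i, j] isa Number || continue
            if VL[i] === nothing && VR[j] !== nothing
                VL[i] = VR[j]; changed = true
            elseif VR[j] === nothing && VL[i] !== nothing
                VR[j] = VL[i]; changed = true
            end
        end
    end
    return VL, VR
end
```

Any space still `nothing` at the fixed point is genuinely underdetermined and would need a default.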

Also, if a combination of row and column has no tensor, I think you can safely assume it is a singlet; otherwise the MPO would be building zero out of a non-trivial representation.

@lkdvos
Member

lkdvos commented Feb 21, 2025

I have an initial implementation that I think is working (linked PR). It would be great if you could test it and check that nothing breaks, and even better if you could provide some more demanding examples that I could use as tests.
Note that currently I am strictly enforcing the Matrix{Union{Missing,T,O}} where {T<:Number,O<:MPOTensor} type; if everything works and I find the time, I will of course try to accept more generic matrix types.
