
[NDTensors] Allow general objects as storage backends for Tensor #1206

Merged
mtfishman merged 7 commits into main on Sep 28, 2023

Conversation

mtfishman
Member

@mtfishman mtfishman commented Sep 28, 2023

Description

This relaxes the type constraint on the storage field of Tensor to allow any object as a storage backend.
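
Schematically, the change drops the supertype constraint on the storage parameter. This is an illustrative sketch; the field and parameter names are assumptions, not copied from the PR source:

```julia
# Before (schematic): the storage parameter was constrained to TensorStorage.
# struct Tensor{ElT,N,StoreT<:TensorStorage,IndsT} <: AbstractArray{ElT,N}
#   storage::StoreT
#   inds::IndsT
# end

# After (schematic): StoreT is unconstrained, so any object can serve as storage.
struct Tensor{ElT,N,StoreT,IndsT} <: AbstractArray{ElT,N}
  storage::StoreT
  inds::IndsT
end
```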

The idea is that the storage should adhere to a minimal interface: in general it would be an AbstractArray (though that isn't strictly required) and would define some contraction and permutation functionality, as well as factorizations like SVD and QR. (If we provide a general TTGT contraction backend, we could in principle require only permutations and matrix multiplications.)
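
To illustrate that last point, here is a minimal self-contained sketch (not code from this PR) of a TTGT-style contraction, which asks nothing of the storage beyond permutedims, reshape, and matrix multiplication:

```julia
# Hypothetical TTGT (transpose-transpose-GEMM-transpose) sketch: contract
# A[i,j,k] * B[k,j,l] -> C[i,l] using only permutedims, reshape, and matmul.
function ttgt_contract(A::AbstractArray{<:Number,3}, B::AbstractArray{<:Number,3})
  i, j, k = size(A)
  k2, j2, l = size(B)
  @assert (k, j) == (k2, j2)
  # Permute A so the contracted indices (k, j) are adjacent, then reshape
  # both tensors into matrices over a combined (k, j) index.
  Amat = reshape(permutedims(A, (1, 3, 2)), i, k * j)  # i x (k*j)
  Bmat = reshape(B, k * j, l)                          # (k*j) x l
  return Amat * Bmat                                   # i x l
end

C = ttgt_contract(randn(2, 3, 4), randn(4, 3, 5))  # 2x5 result
```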

You can see the interface requirements demonstrated in arraytensor/array.jl. Some of that can be made more generic, and therefore won't need to be defined for every storage backend.
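
For example, a pattern like the following (hypothetical function names, sketching the idea rather than the file's actual contents) lets a generic fallback cover most backends, while a specific backend overrides only what it can do better:

```julia
# Generic fallback: works for any AbstractArray storage backend.
storage_permutedims(storage::AbstractArray, perm) = permutedims(storage, perm)

# A hypothetical backend can specialize when it has a cheaper (here, lazy)
# option, without every backend needing its own definition.
struct LazyArray{T,N} <: AbstractArray{T,N}
  parent::Array{T,N}
end
Base.size(a::LazyArray) = size(a.parent)
Base.getindex(a::LazyArray, I...) = getindex(a.parent, I...)
storage_permutedims(a::LazyArray, perm) = PermutedDimsArray(a, perm)
```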

For now, the design is that non-TensorStorage storage backends are "registered" in the type union ArrayStorage. This union exists only to keep dispatch from getting confused between the new functionality we write for generic storage backends and the current TensorStorage types, which have particular requirements, so that existing TensorStorage code doesn't need to change. Once we replace the current TensorStorage types with AbstractArray backends (Array, BlockSparseArray, DiagonalArray, etc.), we can remove the ArrayStorage type union (and perhaps replace it with AbstractArray where appropriate).
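
Schematically, the registration pattern looks something like the following (the union members and parameterization are illustrative, not copied from the PR source):

```julia
# Backends are "registered" by adding them to the union; new generic-storage
# methods dispatch on ArrayStorage, leaving TensorStorage code untouched.
const ArrayStorage{T,N} = Union{
  Array{T,N},
  # future AbstractArray backends (BlockSparseArray{T,N}, DiagonalArray{T,N},
  # ...) would be registered here as they are developed
}

# e.g. a hypothetical method signature for the new code path:
# Base.permutedims(t::Tensor{<:Any,<:Any,<:ArrayStorage}, perm) = ...
```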

@kmp5VT your PR #1161 aligns with this new design, since you are getting rid of the EmptyStorage type (already a step toward removing the TensorStorage types), and the UnallocatedZeros type will still be relevant here: it will either be used directly as Tensor storage or as a data backend for one of the other storage types like BlockSparseArray or DiagonalArray. Additionally, I don't anticipate getting rid of our contract or permutedims code, just making sure they both work for the new storage types, so the optimizations you are doing in #1194 are definitely still important.

A strategy for removing the TensorStorage types is to "peel them off" one by one. For example, we can start by making sure that using an Array as a storage backend for Tensor has all of the same functionality as Dense (which I already started in this PR; as you can see, it already works for a number of operations like addition, permutation, and contraction). Then we can change the default ITensor constructor to use Array (or UnallocatedZeros) storage instead of Dense storage when the indices are dense. Once we develop external BlockSparseArray and DiagonalArray AbstractArrays (BlockSparseArray was already started in #1205) and make them feature-complete with BlockSparse and Diag, we can use them as defaults as well and then remove BlockSparse and Diag.
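
As a rough usage sketch of where this is headed (the constructor and call names here are assumptions for illustration, not necessarily this PR's exact API):

```julia
using NDTensors

# Hypothetical: a plain Array used directly as Tensor storage, in place of Dense.
a = randn(2, 3)
b = randn(2, 3)
ta = tensor(a, (2, 3))
tb = tensor(b, (2, 3))

tc = ta + tb                  # addition
tp = permutedims(ta, (2, 1))  # permutation
# contraction, factorizations, etc. would go through the same generic code paths
```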

The Combiner is a slightly awkward corner case, but I think we can basically leave it as-is for now; hopefully it gets replaced by a more principled fusion tree type in the future.

@mtfishman
Member Author

@emstoudenmire I think as you design the FusionTensor type (or whatever it ends up being called), you should add it to the ArrayStorage type union instead of making it a TensorStorage subtype, and design it as a standalone array-like object, independent of the Tensor type at first. Whether or not it is an AbstractArray shouldn't really matter, though I could see an argument for making it an AbstractArray while disallowing getindex and setindex!.
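
A sketch of that suggestion (FusionTensor is a future, hypothetical type; nothing here is settled):

```julia
# Standalone array-like object, designed independently of Tensor; it would be
# registered in the ArrayStorage union rather than subtyping TensorStorage.
struct FusionTensor{T,N} <: AbstractArray{T,N}
  # fusion tree data, sector labels, dense blocks, etc. would live here
end

Base.size(::FusionTensor) = error("not yet designed")

# One option: keep the AbstractArray supertype but disallow scalar indexing,
# since individual elements may not be meaningful for a symmetric tensor.
Base.getindex(::FusionTensor, I...) = error("scalar indexing is disabled for FusionTensor")
Base.setindex!(::FusionTensor, v, I...) = error("scalar indexing is disabled for FusionTensor")
```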

@codecov-commenter

codecov-commenter commented Sep 28, 2023

Codecov Report

All modified lines are covered by tests ✅

Comparison is base (f6c575b) 85.41% compared to head (e406418) 67.37%.
Report is 5 commits behind head on main.

❗ Current head e406418 differs from pull request most recent head fefae9d. Consider uploading reports for the commit fefae9d to get more accurate results


Additional details and impacted files
@@             Coverage Diff             @@
##             main    #1206       +/-   ##
===========================================
- Coverage   85.41%   67.37%   -18.05%     
===========================================
  Files          88       87        -1     
  Lines        8426     8388       -38     
===========================================
- Hits         7197     5651     -1546     
- Misses       1229     2737     +1508     
Files Coverage Δ
src/itensor.jl 80.96% <100.00%> (-1.30%) ⬇️

... and 35 files with indirect coverage changes


@mtfishman mtfishman marked this pull request as ready for review September 28, 2023 17:56
@mtfishman
Member Author

@emstoudenmire @kmp5VT I'll merge this so we can build on top of it going forward.

@mtfishman mtfishman merged commit d56b4a7 into main Sep 28, 2023
@mtfishman mtfishman deleted the NDTensors_arraytensor branch September 28, 2023 17:57
@emstoudenmire
Collaborator

Looks really good and thanks for the guidance about the fusion storage type. That's what I needed to know.
