[NDTensors] Allow general objects as storage backends for Tensor #1206
Conversation
Codecov Report: All modified lines are covered by tests ✅

@@ Coverage Diff @@
##             main    #1206       +/-   ##
===========================================
- Coverage   85.41%   67.37%   -18.05%
===========================================
  Files          88       87        -1
  Lines        8426     8388       -38
===========================================
- Hits         7197     5651     -1546
- Misses       1229     2737     +1508
@emstoudenmire @kmp5VT I'll merge this so we can build on top of it going forward.
Looks really good and thanks for the guidance about the fusion storage type. That's what I needed to know.
Description
This relaxes the type constraint on the `storage` field of `Tensor` to allow any object as a storage backend. The idea would be that the `storage` should adhere to a minimal interface requirement, for example in general that it is an `AbstractArray` (though that isn't required) and defines some contraction and permutation functionality (though if we provide a general TTGT backend, we could in general only require permutations and matrix multiplications), as well as factorizations like SVD or QR.

You can see the interface requirements demonstrated in `arraytensor/array.jl`. Some of that can be made more generic, and therefore won't need to be defined for every storage backend.

For now, the design is that non-`TensorStorage` storage backends are "registered" in the type union `ArrayStorage`, only for the purpose of ensuring that dispatch doesn't get confused between new functionality we write for generic storage backends and the current `TensorStorage` type, which has particular requirements, so that we don't have to change current `TensorStorage` code. Once we replace the current `TensorStorage` types in favor of `AbstractArray` backends (`Array`, `BlockSparseArray`, `DiagonalArray`, etc.), we can just remove the `ArrayStorage` type union (and maybe replace it with `AbstractArray` where appropriate).

@kmp5VT, your PR #1161 aligns with this new design since you are getting rid of the `EmptyStorage` type (already in the right direction of removing `TensorStorage` types), and the `UnallocatedZeros` type will still be relevant here since it will either be used directly as a `Tensor` storage or used as a data backend for one of the other storage types like `BlockSparseArray` or `DiagonalArray`. Additionally, I don't anticipate getting rid of our `contract` or `permutedims` code, just making sure they both work for the new storage types, so the optimizations you are doing in #1194 are definitely still important.

A strategy for how to proceed with the goal of removing `TensorStorage` types is to "peel off" the storage types one by one. For example, we can start by making sure that using an `Array` as a storage backend for `Tensor` has all of the same functionality as `Dense` (which I already started to do in this PR; as you can see, it already works for a number of operations like addition, permutation, contraction, etc.). Then we can just change the default `ITensor` constructor to use `Array` (or `UnallocatedZeros`) storage instead of `Dense` storage when there are dense indices. Then, once we develop external `BlockSparseArray` and `DiagonalArray` `AbstractArray` types (`BlockSparseArray` already started in #1205) and make them feature-complete with `BlockSparse` and `Diag`, we can use them as defaults as well, and then just remove `BlockSparse` and `Diag`.

The `Combiner` is a slightly awkward corner case, but I think we can basically just leave it as-is for now; hopefully it gets replaced by a more principled fusion tree type in the future.
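To make the relaxed constraint concrete, here is a minimal sketch in Julia with hypothetical names (`MyTensor` stands in for NDTensors' `Tensor`; this is not the actual API): the `storage` field accepts any object, and a backend opts in by implementing a few generic functions.

```julia
# Hypothetical sketch: the `storage` field has no `S <: TensorStorage`
# constraint, so any object (here a plain `Array`) can act as the backend.
struct MyTensor{S,N}
    storage::S           # any object, not just a `TensorStorage`
    dims::NTuple{N,Int}
end

# Minimal interface the backend is expected to satisfy (array-like access):
Base.size(t::MyTensor) = t.dims
Base.getindex(t::MyTensor, I...) = getindex(t.storage, I...)

# A plain `Matrix` used directly as storage:
t = MyTensor(reshape(collect(1.0:6.0), 2, 3), (2, 3))
```

With this in place, `t[2, 3]` forwards to the underlying `Array`; other backends (diagonal, block sparse, lazily allocated) would only need to implement the same handful of functions.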
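The "registration" idea — a type union that keeps dispatch for new generic storage backends separate from methods written for the legacy types — can be sketched as follows. The names here are illustrative: `ArrayStorageLike` plays the role of the PR's `ArrayStorage` union, and `LegacyDense` stands in for a legacy `TensorStorage` type.

```julia
using LinearAlgebra: Diagonal

# Stand-in for a legacy storage type with its own specialized methods.
struct LegacyDense{T}
    data::Vector{T}
end

# New backends are "registered" in a union so that generic code paths
# dispatch on them without accidentally capturing the legacy types.
const ArrayStorageLike = Union{Array,Diagonal}

codepath(::ArrayStorageLike) = :generic   # new generic functionality
codepath(::LegacyDense) = :legacy         # existing specialized code
```

Once the legacy types are gone, the union can simply be deleted (or widened to `AbstractArray` where appropriate) without touching the generic methods.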
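The TTGT (transpose-transpose-GEMM-transpose) point — that a general backend could get away with providing only permutations and matrix multiplication — can be illustrated with a hypothetical helper (not NDTensors' actual `contract`) that contracts `A[i,j,k]` with `B[k,j,l]` over `(j, k)`:

```julia
# Hypothetical TTGT-style contraction: permute so the contracted indices
# line up, reshape to matrices, and multiply. Only `permutedims`,
# `reshape`, and `*` are required of the storage backend.
function ttgt_contract(A::Array{T,3}, B::Array{T,3}) where {T}
    i, j, k = size(A)
    k2, j2, l = size(B)
    @assert (j, k) == (j2, k2)
    Amat = reshape(A, i, j * k)                          # columns labeled by (j, k)
    Bmat = reshape(permutedims(B, (2, 1, 3)), j * k, l)  # rows labeled by (j, k)
    return Amat * Bmat                                   # C[i, l]
end
```

This would only be a fallback path; specialized contraction code (and optimizations like those in #1194) would still apply where available.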