[NDTensors] In the effort to replace EmptyStorage with an empty DataType #1161

Closed (wants to merge 289 commits)

Changes from 39 commits
523d9d7
Clean up emptyITensor
kmp5VT Aug 3, 2023
b908996
format
kmp5VT Aug 3, 2023
68cab5e
ElT -> elt
kmp5VT Aug 4, 2023
a7f4bc8
eltype to elt
kmp5VT Aug 4, 2023
70b052f
eltype -> elt
kmp5VT Aug 4, 2023
fe08b9e
Remove where
kmp5VT Aug 4, 2023
b90b31e
Itensor(DataT, indices) operator
kmp5VT Aug 4, 2023
e15eddd
format
kmp5VT Aug 4, 2023
2030915
Update per commit 32cc44fc907e7f8ff249dd6b51907b0d11aa9d30
kmp5VT Aug 7, 2023
787b436
Merge branch 'kmp5/refactor/fillarrays_redo' of github.com:kmp5VT/ITe…
kmp5VT Aug 7, 2023
311a5fe
Fix some naming
kmp5VT Aug 7, 2023
4d13fb8
Another name change
kmp5VT Aug 7, 2023
9dc72c8
Comment out emptystorage from NDTensorsCUDAExt
kmp5VT Aug 7, 2023
990c50a
Add axes to zeros and iszero -> is_unallocated_zeros
kmp5VT Aug 7, 2023
62d052e
Flatten indices before providing to zeros
kmp5VT Aug 7, 2023
3c90dd1
isemptystorage -> iszerodata
kmp5VT Aug 7, 2023
8b1f05e
deprecate emptyITensor
kmp5VT Aug 7, 2023
961fd3c
Merge branch 'main' into kmp5/refactor/fillarrays_redo
kmp5VT Aug 7, 2023
75de10d
format
kmp5VT Aug 7, 2023
024ca5d
Remove emptyITensors, it is covered in deprecated
kmp5VT Aug 7, 2023
d9f6c7b
remove emptyitensor.jl from include
kmp5VT Aug 7, 2023
6e293a5
Add axes to `FillArrays.Zeros` definition
kmp5VT Aug 7, 2023
6a6dfc2
update comment
kmp5VT Aug 7, 2023
3238705
format
kmp5VT Aug 7, 2023
068d6a2
Update Zeros to have type information for alloc.
kmp5VT Aug 7, 2023
f783de0
format
kmp5VT Aug 7, 2023
887e116
Fix constructors
kmp5VT Aug 7, 2023
5d69331
Restructure randomITensor constructors
kmp5VT Aug 7, 2023
1c0ce67
Move empty randomITensors to oneitensor.jl
kmp5VT Aug 7, 2023
93e5cda
format
kmp5VT Aug 7, 2023
f2b738a
Update set_parameters since Zeros now has 4 parameters
kmp5VT Aug 7, 2023
1f22a49
Rename zeros functions
kmp5VT Aug 7, 2023
884f89a
Don't use Index(0), fix constructors to allow inds of `()`
kmp5VT Aug 8, 2023
5bb0cd6
format
kmp5VT Aug 8, 2023
978b760
Assume zeros has all its parameters always
kmp5VT Aug 8, 2023
f1271e2
format
kmp5VT Aug 8, 2023
7606225
datatype->alloctype
kmp5VT Aug 8, 2023
c8ff2e7
Add back accidentally forced out code
kmp5VT Aug 8, 2023
2c6812b
Add back removed changes
kmp5VT Aug 8, 2023
5515051
Merge branch 'kmp5/refactor/fillarrays_redo' of github.com:kmp5VT/ITe…
kmp5VT Aug 9, 2023
eb5107f
Make ITensor default_datatype Zeros and default_eltype bool
kmp5VT Aug 10, 2023
4850b2d
format
kmp5VT Aug 10, 2023
4322269
Merge branch 'main' into kmp5/refactor/fillarrays_redo
kmp5VT Aug 10, 2023
afc44d6
Merge branch 'main' into kmp5/refactor/fillarrays_redo
kmp5VT Aug 11, 2023
aaa56e2
Create Zeros similar functions
kmp5VT Aug 11, 2023
09ee97b
Fix some naming conventions and make new constructor
kmp5VT Aug 11, 2023
af10dc5
create ITensors default_eltype (don't overwrite NDTensors)
kmp5VT Aug 11, 2023
02b4604
Fix diagITensor definition
kmp5VT Aug 11, 2023
c688e96
format
kmp5VT Aug 11, 2023
17232fb
Merge commit 'afc44d618cd745e4780e1645780fc162f45afe80' into kmp5/ref…
kmp5VT Aug 11, 2023
866da11
if A or B unallocated return empty tensor with correct inds
kmp5VT Aug 11, 2023
84409c8
Create some necessary functions for zeros
kmp5VT Aug 11, 2023
4b59a8b
Fix some issues with zeros similar
kmp5VT Aug 11, 2023
831a54c
format
kmp5VT Aug 11, 2023
eb0d5e5
Datatype -> alloctype
kmp5VT Aug 11, 2023
57ff8fa
If trying to setindex!!, exchange Zeros with real datatype
kmp5VT Aug 11, 2023
50eda9a
Use NDTensors.is_unallocated_zeros
kmp5VT Aug 11, 2023
c4986d5
Fix combine contract by converting lazy zero of output.
kmp5VT Aug 11, 2023
9f49d84
Add vector def for default_storagetype
kmp5VT Aug 11, 2023
8384411
Force elt to be a number so indices aren't passed on accident
kmp5VT Aug 11, 2023
5ba72db
NDTensors needs to have a vector for the data right now
kmp5VT Aug 11, 2023
32876d7
format
kmp5VT Aug 11, 2023
13bd8a3
iszerodata ->is_unallocated_zeros
kmp5VT Aug 14, 2023
af82a88
Flatten higher order data in ITensors, if provided.
kmp5VT Aug 14, 2023
17d804d
Split up QN itensor constructors to get them working with general ITe…
kmp5VT Aug 15, 2023
f72cb41
format
kmp5VT Aug 15, 2023
0937ee5
Merge commit '2d2a81f776db409d54004c4c52b9eb52e256b14d' into kmp5/ref…
kmp5VT Aug 21, 2023
9e30a1e
Reorder project.toml put QN's first because they could (should?) live…
kmp5VT Aug 21, 2023
e95d9c0
Need to define an abstractVector oneITensor constructor to match that…
kmp5VT Aug 21, 2023
d1e160c
Fix QN itensor calls when blocks not provided
kmp5VT Aug 21, 2023
34697fd
use ITensors.default_eltype()
kmp5VT Aug 21, 2023
336ea44
Merge branch 'main' into kmp5/refactor/fillarrays_redo
kmp5VT Aug 22, 2023
c37dad9
format
kmp5VT Aug 24, 2023
4beefd2
Merge branch 'kmp5/refactor/fillarrays_redo' of github.com:kmp5VT/ITe…
kmp5VT Aug 24, 2023
8bd5747
emptynumber -> UnspecifiedZeros
kmp5VT Aug 24, 2023
d05a099
Change emptynumber -> UnspecifiedZero
kmp5VT Aug 24, 2023
061241a
Make () version of NDTensors.Zeros
kmp5VT Aug 24, 2023
4f13a71
format
kmp5VT Aug 24, 2023
4007761
Use AbstractFloat instead of number to prevent errors
kmp5VT Aug 25, 2023
ed3327d
create data_isa function for ITensors and NDTensors
kmp5VT Aug 25, 2023
fa840fd
Add todo in fill.jl use NDTensors.Zeros in generic_zeros when no data…
kmp5VT Aug 27, 2023
9b5ae82
use new definition of `is_unallocated_zeros` everywhere
kmp5VT Aug 27, 2023
d40beea
create convert for `UnspecifiedZero` -> `complex{UnspecifiedZero}`
kmp5VT Aug 27, 2023
fb80cd7
Add `data_isa` for number comparison
kmp5VT Aug 27, 2023
d61e04d
format
kmp5VT Aug 27, 2023
08eb312
Remove allocation from dense tensor; start working on a general alloc…
kmp5VT Aug 28, 2023
ce89679
Make some allocate functions
kmp5VT Aug 28, 2023
db0ce75
Add allocate to `setindex!!` in NDTensors
kmp5VT Aug 28, 2023
d71e953
Make complex functions for `UnspecifiedZero` to make more number like
kmp5VT Aug 28, 2023
4018613
complex and alloc functions in `zeros.jl`
kmp5VT Aug 28, 2023
2042d69
format
kmp5VT Aug 28, 2023
f51f9d2
Use `iszero(z)`
kmp5VT Aug 28, 2023
1751a6c
format
kmp5VT Aug 28, 2023
174989d
Remove show
kmp5VT Aug 28, 2023
f9695cc
Move unallocated_zeros check earlier in contract stack
kmp5VT Aug 28, 2023
9caea0c
No longer using emptystorage so start removing it and check if unallo…
kmp5VT Aug 28, 2023
e6127a4
format
kmp5VT Aug 28, 2023
9dd5609
Use default_datatype instead of vector
kmp5VT Aug 29, 2023
b57277e
Start trying to create a constructor where you provide datatype and b…
kmp5VT Aug 29, 2023
6cb3fff
default_datatype should return datatype. But also make function which…
kmp5VT Aug 29, 2023
2d2177f
Some fixes to oneitensor
kmp5VT Aug 29, 2023
0d84932
julia> Since flux will always be set now, need to check if unallocate…
kmp5VT Aug 29, 2023
a03ec3f
format
kmp5VT Aug 29, 2023
341272f
Zeros -> UnallocatedZeros
kmp5VT Aug 31, 2023
d6c5660
format
kmp5VT Aug 31, 2023
d25c37a
remove is_zerodata function
kmp5VT Aug 31, 2023
89f5b6f
use allocate function
kmp5VT Aug 31, 2023
e532856
`zeros.jl` -> `unallocated_zeros.jl`
kmp5VT Sep 1, 2023
84bf349
fix constructor in diagITensor
kmp5VT Sep 1, 2023
cbc4ff2
If `tensor1` and `tensor2` are allocated, make sure to allocate outpu…
kmp5VT Sep 1, 2023
51e162c
Allocate before filling ITensor
kmp5VT Sep 1, 2023
c3586a3
Update/fix test
kmp5VT Sep 1, 2023
bb9df2b
Fix allocate function
kmp5VT Sep 1, 2023
88ed0cd
Update test_itensor, some tests broken because not mismatched index i…
kmp5VT Sep 1, 2023
f4cf44e
format
kmp5VT Sep 1, 2023
f8b029d
Organization
kmp5VT Sep 6, 2023
15cd3be
Re-organize blocksparse constructors
kmp5VT Sep 6, 2023
f0bf64d
Move allocate to its own file
kmp5VT Sep 6, 2023
d8e27be
remove allocate from unallocated_zeros.jl
kmp5VT Sep 6, 2023
d59e04c
Update BlockSparse to construct with UnallocatedZeros
kmp5VT Sep 6, 2023
4a4a7ce
Add default constructor for unallocatedzeros
kmp5VT Sep 6, 2023
2bfce58
format
kmp5VT Sep 6, 2023
1194c95
`set_eltype_if_unspecified`-> `specify_eltype`
kmp5VT Sep 6, 2023
acff8d5
Create specify_eltype for UnspecifiedZero arrays
kmp5VT Sep 6, 2023
7dd034b
`set_parameter_if_unspecified` -> specify_parameters
kmp5VT Sep 6, 2023
09a5a3b
Fix set_parameter 1 for UnallocatedZeros
kmp5VT Sep 6, 2023
3307ec0
In specify parameters, always try to specify eltype just in case usin…
kmp5VT Sep 6, 2023
23bdb02
format
kmp5VT Sep 6, 2023
577da17
Add comment about Number based `data_isa` function
kmp5VT Sep 6, 2023
3a19696
Ensure only indices are moving into the variadic ITensor constructor
kmp5VT Sep 6, 2023
e31ae3f
Don't specify_eltype because this is wrong
kmp5VT Sep 6, 2023
7cc6af2
Working on allocate function for tensors/tensor storage types
kmp5VT Sep 6, 2023
8c801c3
Remove NDTensors.
kmp5VT Sep 7, 2023
ef03e3c
Temporarily move allocate to Blocksparse (will replace with Adapt)
kmp5VT Sep 7, 2023
cb8e6a1
AbstractArray -> AbstractFill
kmp5VT Sep 7, 2023
ae79e15
ITensor import UnspecifiedZero and UnallocatedZeros
kmp5VT Sep 7, 2023
c540c53
Simplify and improve allocate
kmp5VT Sep 7, 2023
7d62776
Remove allocate from blocksparse
kmp5VT Sep 7, 2023
48cbda2
To make sure these functions work when allocated define alloctype for…
kmp5VT Sep 7, 2023
9b4e9b4
revert ITensorGPU
kmp5VT Sep 7, 2023
3799808
Remove EmptyStorage
kmp5VT Sep 7, 2023
ad323cd
Change comment
kmp5VT Sep 7, 2023
5fe6c6d
Add comment about do we need isempty
kmp5VT Sep 7, 2023
e398173
Make a specify_eltype function for tensors
kmp5VT Sep 7, 2023
d21fa47
Don't use unallocatedZeros in blocksparse
kmp5VT Sep 7, 2023
316a38e
Create another specify_eltype with UnspecifiedZero type
kmp5VT Sep 7, 2023
0737360
Have code use specify_eltype
kmp5VT Sep 7, 2023
da38c5d
format
kmp5VT Sep 7, 2023
07cf784
Merge branch 'main' into kmp5/refactor/fillarrays_redo
kmp5VT Sep 7, 2023
86b9173
Create generic_zeros function for unallocatedzeros type
kmp5VT Sep 7, 2023
41e292d
use generic_zeros in blocksparse
kmp5VT Sep 7, 2023
0de7b94
format
kmp5VT Sep 7, 2023
a65d601
need tuple(dim)
kmp5VT Sep 7, 2023
3a5a5db
make specify_eltype for complex unspecifiedZero
kmp5VT Sep 8, 2023
7181a7a
add alloctype for a number for
kmp5VT Sep 11, 2023
fe7b7ca
Fix bug, need to match storage and tensor eltypes
kmp5VT Sep 11, 2023
42c661a
Make functions which allocate types, must provide inds
kmp5VT Sep 11, 2023
72a73f9
Remove commented code
kmp5VT Sep 11, 2023
b88ca48
Fix generic zeros code
kmp5VT Sep 11, 2023
147cfe9
Use set_eltype instead of similartype
kmp5VT Sep 11, 2023
1ec008e
Create alloctype for array type (instead of instance)
kmp5VT Sep 11, 2023
e2512b5
add `if is_unallocated_zeros` around allocate temporarily
kmp5VT Sep 11, 2023
975ad4b
Make allocate function which just wraps adapt.
kmp5VT Sep 11, 2023
131d786
Format
kmp5VT Sep 11, 2023
ec33cdf
typeof -> eltype in contract!!
kmp5VT Sep 12, 2023
6c4ce67
\alpha -> x
kmp5VT Sep 12, 2023
77c59d1
Just adapt the alloc/storage
kmp5VT Sep 12, 2023
154973e
remove `setindex` from unallocatedzeros
kmp5VT Sep 12, 2023
0a1f4ea
Add `alloctype` for `type{<:Number}`
kmp5VT Sep 12, 2023
ba1aa3d
Add RealOrComplex for specify_eltype
kmp5VT Sep 12, 2023
20741b0
vec shouldn't convert `UnallocatedZero` to `
kmp5VT Sep 12, 2023
e878b7c
First set ndims of tensortype, then set the storagetype then construc…
kmp5VT Sep 12, 2023
4a14c61
Accidentally deleted important implementation
kmp5VT Sep 12, 2023
4d99d2e
typeof -> eltype
kmp5VT Sep 12, 2023
8aec050
use `Itensors.default_eltype()`
kmp5VT Sep 12, 2023
751b1f1
format
kmp5VT Sep 12, 2023
4efb969
No data permutation needed just return R
kmp5VT Sep 12, 2023
745fc86
Adding some todo comments
kmp5VT Sep 12, 2023
141f0eb
With no flux, no memory is constructed
kmp5VT Sep 12, 2023
0a1147a
get index returns zero if unallocated, don't look for offset
kmp5VT Sep 12, 2023
3d206eb
remove show
kmp5VT Sep 12, 2023
4c62115
format
kmp5VT Sep 12, 2023
d6270c3
Start adding new zero tests in NDTensors
kmp5VT Sep 13, 2023
ef29678
Add more tests
kmp5VT Sep 13, 2023
0ec277d
format
kmp5VT Sep 13, 2023
86c3876
Make some fixes to length calls
kmp5VT Sep 13, 2023
e9d1f57
More fixes to zero
kmp5VT Sep 13, 2023
43ce955
Add more unit tests
kmp5VT Sep 13, 2023
678a17a
format
kmp5VT Sep 13, 2023
ebb9c6c
remove fill arrays
kmp5VT Sep 13, 2023
36d2633
Fix the zero test
kmp5VT Sep 13, 2023
bf9853b
Fix similar implementation
kmp5VT Sep 13, 2023
e7021b0
fix emptytensor tests
kmp5VT Sep 13, 2023
e902cee
Need old similar too
kmp5VT Sep 13, 2023
4e55066
format
kmp5VT Sep 13, 2023
eb977ac
Fix test again
kmp5VT Sep 13, 2023
91f3a83
format
kmp5VT Sep 13, 2023
497faa9
Add FillArrays to project.toml
kmp5VT Sep 13, 2023
29848ea
If there are no blocks create a blocksparse ITensor with no blocks. I…
kmp5VT Sep 13, 2023
e2e7cea
Blockoffsets is wrong for dense index types
kmp5VT Sep 13, 2023
4f13eb6
Using 1 tries to set itensor eltype to int64
kmp5VT Sep 13, 2023
5d9e2a0
format
kmp5VT Sep 13, 2023
3b45be6
use NDTensors.ndim so it doesn't fail in julia 1.6
kmp5VT Sep 13, 2023
ba463f0
Create two other variadic inputs to try and allow almost any kind of …
kmp5VT Sep 14, 2023
23baf1d
2 tests are broken in dense
kmp5VT Sep 14, 2023
bee51fd
Code didn't work properly with aliasstyle. Fix that and reshape to flat
kmp5VT Sep 14, 2023
5efc6a3
Comment out broken tests
kmp5VT Sep 14, 2023
ec99135
Remove broken blocksparse tests
kmp5VT Sep 14, 2023
5e4fb05
Convert Int types to Float
kmp5VT Sep 14, 2023
8c0cc21
Allocate the directsum projectors in case they are unallocatedzeros
kmp5VT Sep 14, 2023
be980d3
format
kmp5VT Sep 14, 2023
5fcdc5d
Merge branch 'main' into kmp5/refactor/fillarrays_redo
kmp5VT Sep 14, 2023
97f3b9e
Update unspecified promote_type
kmp5VT Sep 15, 2023
7276181
format
kmp5VT Sep 15, 2023
ccfd2f3
Fix promote_type
kmp5VT Sep 15, 2023
5043695
Complex definition was breaking promote_type
kmp5VT Sep 15, 2023
7cabff9
Read wrote broken for type UnspecifiedZero
kmp5VT Sep 15, 2023
ff3ad67
Make typewise alloc for TensorStorage as well as tensor
kmp5VT Sep 15, 2023
4eb3374
Allocate the data if writing to HDF5
kmp5VT Sep 15, 2023
a93b008
Experimental QN itensor constructor, having issues with this
kmp5VT Sep 15, 2023
114b4dc
To deal with the `default_datatype` not working for BlockSparse tenso…
kmp5VT Sep 15, 2023
5268395
Create a way to drop zeros for a NDTensor
kmp5VT Sep 15, 2023
3931c3a
Drop zero blocks after setting an element
kmp5VT Sep 15, 2023
f1ff431
format
kmp5VT Sep 15, 2023
7ead7f7
Merge branch 'main' into kmp5/refactor/fillarrays_redo
kmp5VT Sep 18, 2023
8842b1c
Forgot to return the updated blocksparse tensor.
kmp5VT Sep 18, 2023
32e7f91
Force allocate on construction because `allocate` changes the memory …
kmp5VT Sep 18, 2023
ffcb548
Remove unnecessary comment
kmp5VT Sep 18, 2023
e91043e
If the block already exists, don't do anything
kmp5VT Sep 18, 2023
455ae28
Allocate the tensor when trying to insert a block
kmp5VT Sep 18, 2023
7cefc1b
Update empty test
kmp5VT Sep 18, 2023
364e219
format
kmp5VT Sep 18, 2023
c53744d
emptyITensor to ITensor
kmp5VT Sep 18, 2023
057c69b
Commenting out the flux checking code, Need to verify the logic
kmp5VT Sep 18, 2023
4fd7b6e
use NDTensors dropzeros
kmp5VT Sep 19, 2023
ca49f98
remove comment code
kmp5VT Sep 19, 2023
dcedd0c
Better way to check the indices for QNs
kmp5VT Sep 19, 2023
28d07ad
remove @show
kmp5VT Sep 19, 2023
169dee9
format
kmp5VT Sep 19, 2023
49aea9a
emptyITensor -> ITensor
kmp5VT Sep 19, 2023
e057ac2
Add hasqns for Vector{<:Integer} for unit test
kmp5VT Sep 20, 2023
db9b06c
pass kwargs to QNItensor constructor
kmp5VT Sep 20, 2023
2194bf8
Potentially need to drop zero blocks
kmp5VT Sep 20, 2023
3af4658
allocate T in randn!!
kmp5VT Sep 20, 2023
fd7a622
Add an element type variable so `UnspecifiedZero` isn't used here
kmp5VT Sep 20, 2023
2f38cef
format
kmp5VT Sep 20, 2023
ca75862
Fix adapt functions for UnallocatedZeros
kmp5VT Sep 20, 2023
5597fe7
format
kmp5VT Sep 20, 2023
ada189b
fix the adapt function for CUDA
kmp5VT Sep 20, 2023
6536ee5
format
kmp5VT Sep 20, 2023
3 changes: 2 additions & 1 deletion NDTensors/Project.toml
Original file line number Diff line number Diff line change
@@ -8,6 +8,7 @@ Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
Compat = "34da2185-b29b-5c13-b0c7-acf172513d20"
Dictionaries = "85a47980-9c8c-11e8-2b9f-f7ca1fa99fb4"
FLoops = "cc61a311-1640-44b5-9fba-1b764f453329"
FillArrays = "1a297f60-69ca-5386-bcde-b61e274b549b"
Folds = "41a02a25-b8f0-4f67-bc48-60067656b558"
Functors = "d9f16b24-f501-4c13-a1f2-28368ffc5196"
HDF5 = "f67ccb44-e63f-5c2f-98bd-6dc0ccc4ba2f"
@@ -41,8 +42,8 @@ julia = "1.6"
[weakdeps]
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
Metal = "dde4c033-4e86-420c-a63e-0dd931031962"
TBLIS = "48530278-0828-4a49-9772-0f3830dfa1e9"
Octavian = "6fd5a793-0b7e-452c-907f-f8bfe9c57db4"
TBLIS = "48530278-0828-4a49-9772-0f3830dfa1e9"

[extensions]
NDTensorsCUDAExt = "CUDA"
9 changes: 4 additions & 5 deletions NDTensors/src/NDTensors.jl
@@ -4,6 +4,7 @@ using Adapt
using Base.Threads
using Compat
using Dictionaries
using FillArrays
using FLoops
using Folds
using Random
@@ -101,12 +102,10 @@ include("blocksparse/combiner.jl")
include("blocksparse/linearalgebra.jl")

#####################################
# Empty
# Zeros
#
include("empty/empty.jl")
include("empty/EmptyTensor.jl")
include("empty/tensoralgebra/contract.jl")
include("empty/adapt.jl")
include("zeros/zeros.jl")
include("zeros/set_types.jl")

#####################################
# Deprecations
5 changes: 0 additions & 5 deletions NDTensors/src/exports.jl
@@ -59,11 +59,6 @@ export
Diag,
DiagTensor,

# empty.jl
EmptyStorage,
EmptyTensor,
EmptyBlockSparseTensor,

# tensorstorage.jl
data,
TensorStorage,
6 changes: 6 additions & 0 deletions NDTensors/src/tensor/tensor.jl
@@ -107,6 +107,10 @@ function Tensor(datatype::Type{<:AbstractArray}, inds::Tuple)
return Tensor(generic_zeros(datatype, dim(inds)), inds)
end

function Tensor()
return Tensor(Zeros{default_eltype(),1,default_datatype(default_eltype())}(()), ())
end

## End Tensor constructors

## Random Tensor
@@ -184,6 +188,8 @@ setstorage(T, nstore) = tensor(nstore, inds(T))

setinds(T, ninds) = tensor(storage(T), ninds)

iszerodata(t::Tensor) = data(t) isa NDTensors.Zeros

#
# Generic Tensor functions
#
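The `iszerodata` and `Base.iszero` definitions added in this hunk reduce the "is this tensor still empty?" question to a type check on the storage, so no element values ever need to be read. A minimal standalone sketch of that idea (the `FakeZeros` type and `iszerodata_sketch` name are illustrative stand-ins, not the PR's actual API):

```julia
# Hypothetical stand-in for the lazy NDTensors.Zeros storage type.
struct FakeZeros <: AbstractVector{Float64}
  len::Int
end
Base.size(z::FakeZeros) = (z.len,)
Base.getindex(::FakeZeros, ::Int) = 0.0  # every element is implicitly zero

# Emptiness is a property of the storage *type*, not of the stored values.
iszerodata_sketch(data::AbstractVector) = data isa FakeZeros
```

Because the check is purely a type test, it stays O(1) no matter how large the tensor's dimensions are.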
42 changes: 42 additions & 0 deletions NDTensors/src/zeros/set_types.jl
@@ -0,0 +1,42 @@
import .SetParameters: set_parameter, nparameters, default_parameter

# `SetParameters.jl` overloads.
get_parameter(::Type{<:Zeros{P1}}, ::Position{1}) where {P1} = P1
get_parameter(::Type{<:Zeros{<:Any,P2}}, ::Position{2}) where {P2} = P2
get_parameter(::Type{<:Zeros{<:Any,<:Any,P3}}, ::Position{3}) where {P3} = P3

# Set parameter 1
set_parameter(::Type{<:Zeros}, ::Position{1}, P1) = Zeros{P1}
set_parameter(::Type{<:Zeros{<:Any,P2}}, ::Position{1}, P1) where {P2} = Zeros{P1,P2}
function set_parameter(::Type{<:Zeros{<:Any,<:Any,P3}}, ::Position{1}, P1) where {P3}
return Zeros{P1,<:Any,P3}
end
function set_parameter(::Type{<:Zeros{<:Any,P2,P3}}, ::Position{1}, P1) where {P2,P3}
return Zeros{P1,P2,P3}
end

# Set parameter 2
set_parameter(::Type{<:Zeros}, ::Position{2}, P2) = Zeros{<:Any,P2}
set_parameter(::Type{<:Zeros{P1}}, ::Position{2}, P2) where {P1} = Zeros{P1,P2}
function set_parameter(::Type{<:Zeros{<:Any,<:Any,P3}}, ::Position{2}, P2) where {P3}
P1 = eltype(P3)
return Zeros{P1,P2,P3}
end
function set_parameter(::Type{<:Zeros{P1,<:Any,P3}}, ::Position{2}, P2) where {P1,P3}
return Zeros{P1,P2,P3}
end

# Set parameter 3
set_parameter(::Type{<:Zeros}, ::Position{3}, P3) = Zeros{<:Any,<:Any,P3}
set_parameter(::Type{<:Zeros{P1}}, ::Position{3}, P3) where {P1} = Zeros{P1,<:Any,P3}
function set_parameter(::Type{<:Zeros{<:Any,P2}}, ::Position{3}, P3) where {P2}
P1 = eltype(P3)
return Zeros{P1,P2,P3}
end
set_parameter(::Type{<:Zeros{P1,P2}}, ::Position{3}, P3) where {P1,P2} = Zeros{P1,P2,P3}

default_parameter(::Type{<:Zeros}, ::Position{1}) = Float64
default_parameter(::Type{<:Zeros}, ::Position{2}) = 1
default_parameter(::Type{<:Zeros}, ::Position{3}) = Vector{Float64}

nparameters(::Type{<:Zeros}) = Val(3)
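The overloads above follow the `SetParameters.jl` convention of addressing type parameters by position. The core trick, rebuilding a parametric type with one parameter swapped while the others are captured by dispatch, can be sketched in isolation (`SketchZeros` and `set_eltype_sketch` are hypothetical names used only for illustration):

```julia
# Toy type mirroring the Zeros{ElT,N,DataT} parameter layout.
struct SketchZeros{ElT,N,DataT} end

# Replace parameter 1 (the element type); N and DataT are captured from the
# input type by dispatch, as in the set_parameter(..., Position{1}, ...) methods.
function set_eltype_sketch(::Type{<:SketchZeros{<:Any,N,DataT}}, ElT::Type) where {N,DataT}
  return SketchZeros{ElT,N,DataT}
end
```

For example, `set_eltype_sketch(SketchZeros{Float64,2,Matrix{Float64}}, ComplexF64)` keeps `N` and `DataT` and swaps only the element type.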
46 changes: 46 additions & 0 deletions NDTensors/src/zeros/zeros.jl
@@ -0,0 +1,46 @@
struct Zeros{ElT,N,DataT} <: AbstractArray{ElT,N}
z::FillArrays.Zeros
is::Tuple
function NDTensors.Zeros{ElT,N,DataT}(inds::Tuple) where {ElT,N,DataT}
@assert eltype(DataT) == ElT
@assert ndims(DataT) == N
z = FillArrays.Zeros(ElT, dim(inds))
return new{ElT,N,DataT}(z, inds)
end
end

Base.ndims(::NDTensors.Zeros{ElT,N}) where {ElT,N} = N
ndims(::NDTensors.Zeros{ElT,N}) where {ElT,N} = N
Base.eltype(::Zeros{ElT}) where {ElT} = ElT
datatype(::NDTensors.Zeros{ElT,N,DataT}) where {ElT,N,DataT} = DataT
datatype(::Type{<:NDTensors.Zeros{ElT,N,DataT}}) where {ElT,N,DataT} = DataT

Base.size(zero::Zeros) = Base.size(zero.z)

Base.print_array(io::IO, X::Zeros) = Base.print_array(io, X.z)

data(zero::Zeros) = zero.z
getindex(zero::Zeros) = getindex(zero.z)

array(zero::Zeros) = datatype(zero)(zero.z)
Array(zero::Zeros) = array(zero)
dims(z::Zeros) = z.is
copy(z::Zeros) = Zeros{eltype(z),1,datatype(z)}(dims(z))

Base.convert(x::Type{T}, z::NDTensors.Zeros) where {T<:Array} = Base.convert(x, z.z)

Base.getindex(a::Zeros, i) = Base.getindex(a.z, i)
Base.sum(z::Zeros) = sum(z.z)
LinearAlgebra.norm(z::Zeros) = norm(z.z)
setindex!(A::NDTensors.Zeros, v, I) = setindex!(A.z, v, I)

Base.iszero(t::Tensor) = iszero(storage(t))
Base.iszero(st::TensorStorage) = data(st) isa Zeros

function (arraytype::Type{<:Zeros})(::AllowAlias, A::Zeros)
return A
end

function (arraytype::Type{<:Zeros})(::NeverAlias, A::Zeros)
return copy(A)
end
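The `Zeros` wrapper above delays allocation by carrying only an element type, a data type, and the index dimensions. A pure-Base sketch of the same pattern, with a hypothetical `allocate` that materializes the concrete data type on demand (names here are illustrative; the PR's type additionally wraps a `FillArrays.Zeros` instance):

```julia
# Lazy zero vector: stores only its length; DataT records what a concrete
# allocation should produce (e.g. Vector{Float64}, or a GPU array type).
struct LazyZeros{ElT,DataT<:AbstractVector{ElT}} <: AbstractVector{ElT}
  len::Int
end

Base.size(z::LazyZeros) = (z.len,)
Base.getindex(::LazyZeros{ElT}, ::Int) where {ElT} = zero(ElT)
Base.iszero(::LazyZeros) = true  # no data to inspect

# Materialize: turn the placeholder into its concrete data type.
allocate(z::LazyZeros{ElT,DataT}) where {ElT,DataT} = convert(DataT, zeros(ElT, z.len))
```

Until `allocate` is called, the object costs a single `Int` of storage, which is the point of replacing `EmptyStorage` with a lazy zeros type.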
2 changes: 0 additions & 2 deletions NDTensors/test/runtests.jl
@@ -25,8 +25,6 @@ end
"blocksparse.jl",
"diagblocksparse.jl",
"diag.jl",
"emptynumber.jl",
"emptystorage.jl",
"combiner.jl",
]
println("Running $filename")
11 changes: 8 additions & 3 deletions src/ITensors.jl
@@ -129,8 +129,13 @@ include("indexset.jl")
#####################################
# ITensor
#
include("itensor.jl")
include("oneitensor.jl")
include("itensor/itensor.jl")
include("itensor/indexops.jl")
include("itensor/oneitensor.jl")
include("itensor/emptyitensor.jl")
include("itensor/diagitensor.jl")
include("itensor/randomitensor.jl")
include("itensor/specialitensors.jl")
include("tensor_operations/tensor_algebra.jl")
include("tensor_operations/matrix_algebra.jl")
include("tensor_operations/permutations.jl")
@@ -152,7 +157,7 @@ include("qn/flux.jl")
include("qn/qn.jl")
include("qn/qnindex.jl")
include("qn/qnindexset.jl")
include("qn/qnitensor.jl")
include("itensor/qnitensor.jl")
include("nullspace.jl")

#####################################
1 change: 0 additions & 1 deletion src/imports.jl
@@ -114,7 +114,6 @@ import LinearAlgebra:
using ITensors.NDTensors:
Algorithm,
@Algorithm_str,
EmptyNumber,
_Tuple,
_NTuple,
blas_get_num_threads,
107 changes: 107 additions & 0 deletions src/itensor/diagitensor.jl
@@ -0,0 +1,107 @@
#
# Diag ITensor constructors
#

"""
diagITensor([ElT::Type, ]v::Vector, inds...)
diagitensor([ElT::Type, ]v::Vector, inds...)

Make a sparse ITensor with non-zero elements only along the diagonal.
In general, the diagonal elements will be those stored in `v` and
the ITensor will have element type `eltype(v)`, unless specified explicitly
by `ElT`. The storage will have `NDTensors.Diag` type.

In the case when `eltype(v) isa Union{Int, Complex{Int}}`, by default it will
be converted to `float(v)`. Note that this behavior is subject to change
in the future.

The version `diagITensor` will never output an ITensor whose storage data
is an alias of the input vector data.

The version `diagitensor` might output an ITensor whose storage data
is an alias of the input vector data in order to minimize operations.
"""
function diagITensor(
as::AliasStyle, ElT::Type{<:Number}, v::AbstractVector{<:Number}, is::Indices
)
length(v) ≠ mindim(is) && error(
"Length of vector for diagonal must equal minimum of the dimension of the input indices",
)
data = set_eltype(typeof(v), ElT)(as, v)
return itensor(Diag(data), is)
end

function diagITensor(as::AliasStyle, ElT::Type{<:Number}, v::AbstractVector{<:Number}, is...)
return diagITensor(as, ElT, v, indices(is...))
end

function diagITensor(as::AliasStyle, v::AbstractVector, is...)
return diagITensor(as, eltype(v), v, is...)
end

function diagITensor(as::AliasStyle, v::AbstractVector{<:RealOrComplex{Int}}, is...)
return diagITensor(as, float(eltype(v)), v, is...)
end

diagITensor(v::AbstractVector{<:Number}, is...) = diagITensor(NeverAlias(), v, is...)
function diagITensor(ElT::Type{<:Number}, v::AbstractVector{<:Number}, is...)
return diagITensor(NeverAlias(), ElT, v, is...)
end

diagitensor(args...; kwargs...) = diagITensor(AllowAlias(), args...; kwargs...)

# XXX TODO: explain conversion from Int
# XXX TODO: proper conversion
"""
diagITensor([ElT::Type, ]x::Number, inds...)
diagitensor([ElT::Type, ]x::Number, inds...)

Make a sparse ITensor with non-zero elements only along the diagonal.
In general, the diagonal elements will be set to the value `x` and
the ITensor will have element type `eltype(x)`, unless specified explicitly
by `ElT`. The storage will have `NDTensors.Diag` type.

In the case when `x isa Union{Int, Complex{Int}}`, by default it will
be converted to `float(x)`. Note that this behavior is subject to change
in the future.
"""
function diagITensor(::AliasStyle, ElT::Type{<:Number}, x::Number, is::Indices)
return diagITensor(AllowAlias(), ElT, fill(eltype(x), mindim(is)), is...)
end

function diagITensor(as::AliasStyle, ElT::Type{<:Number}, x::Number, is...)
return diagITensor(as, ElT, x, indices(is...))
end

function diagITensor(as::AliasStyle, x::Number, is...)
return diagITensor(as, typeof(x), x, is...)
end

function diagITensor(as::AliasStyle, x::RealOrComplex{Int}, is...)
return diagITensor(as, float(typeof(x)), x, is...)
end

function diagITensor(ElT::Type{<:Number}, x::Number, is...)
return diagITensor(NeverAlias(), ElT, x, is...)
end

"""
diagITensor([::Type{ElT} = Float64, ]inds)
diagITensor([::Type{ElT} = Float64, ]inds::Index...)

Make a sparse ITensor of element type `ElT` with only elements
along the diagonal stored. Defaults to having `zero(T)` along
the diagonal.

The storage will have `NDTensors.Diag` type.
"""
function diagITensor(::Type{ElT}, is::Indices) where {ElT<:Number}
return diagITensor(NeverAlias(), ElT, 0, is)
end

diagITensor(::Type{ElT}, is...) where {ElT<:Number} = diagITensor(ElT, indices(is...))

diagITensor(is::Indices) = diagITensor(Float64, is)
diagITensor(is...) = diagITensor(indices(is...))

diagITensor(x::Number, is...) = diagITensor(NeverAlias(), x, is...)
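These constructors thread an `AliasStyle` token through dispatch, so the docstring's aliasing contract (`diagITensor` never aliases the input data, `diagitensor` may) is decided by types rather than runtime flags. A self-contained sketch of the mechanism, using stand-in definitions rather than the actual NDTensors types:

```julia
# Stand-ins for the alias-style tokens the constructors dispatch on.
abstract type AliasStyle end
struct AllowAlias <: AliasStyle end
struct NeverAlias <: AliasStyle end

wrap(::AllowAlias, v::Vector) = v        # may alias: reuse the caller's data
wrap(::NeverAlias, v::Vector) = copy(v)  # never alias: defensive copy

v = [1.0, 2.0]
a = wrap(AllowAlias(), v)  # a === v, mutations are shared
b = wrap(NeverAlias(), v)  # b is an independent copy
```

Pushing the choice into dispatch lets each public entry point (`diagITensor` vs. `diagitensor`) simply pass its token and share the rest of the constructor code.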
26 changes: 26 additions & 0 deletions src/itensor/emptyitensor.jl
@@ -0,0 +1,26 @@
#
# EmptyStorage ITensor constructors
#

# TODO: Deprecated!
"""
emptyITensor([::Type{ElT} = NDTensors.default_eltype(), ]inds)
emptyITensor([::Type{ElT} = NDTensors.default_eltype(), ]inds::Index...)

Construct an ITensor with storage type `NDTensors.EmptyStorage`, indices `inds`, and element type `ElT`. If the element type is not specified, it defaults to `NDTensors.default_eltype()`, which represents a number type that can take on any value (for example, the type of the first value it is set to).
"""
function emptyITensor(::Type{ElT}, is::Indices) where {ElT<:Number}
return itensor(NDTensors.Zeros{ElT,1,NDTensors.default_datatype(ElT)}(is), is)
end

function emptyITensor(::Type{ElT}, is...) where {ElT<:Number}
return emptyITensor(ElT, indices(is...))
end

emptyITensor(is::Indices) = emptyITensor(NDTensors.default_eltype(), is)

emptyITensor(is...) = emptyITensor(NDTensors.default_eltype(), indices(is...))

function emptyITensor(::Type{ElT}=NDTensors.default_eltype()) where {ElT<:Number}
return itensor(NDTensors.Zeros{ElT,1,NDTensors.default_datatype(ElT)}(()), ())
end