BlockTensorMap #199

Merged
merged 151 commits into from
Dec 12, 2024

Conversation

@lkdvos (Member) commented Dec 6, 2024

This PR contains a rather large number of breaking changes, introduced by factoring out a substantial amount of code into a separate package. In particular, we now rely on BlockTensorKit to handle the logic of (sparse) block MPO tensors and environments, which simplifies the code here quite a bit.

Additionally, many parts of the code have been cleaned up and renamed, so this change should be considered very breaking.

TODO

  • port over last changes from current master
  • release last fixes for current version, since we will have to release a new minor version
  • verify if there are additional checks that can be done to ensure everything is still behaving as required

@lkdvos (Member, Author) commented Dec 10, 2024

I'm sorry, there's not really a shortcut here. This is a rather large software project, and if you want to understand the full stack, there's not much to do besides reading through it. One thing that might be useful: while there are quite a lot of changes, many of them are inconsequential, just slight syntax changes and conventions. Just scrolling through the changes in this PR, you can gloss over these and mark them as "unchanged for all intents and purposes". That way, you can distill where the actual changes are happening.
To set you on your way: in this case the biggest changes are in the operator/ implementations, as the SparseMPOSlice and SparseMPO distinction no longer has to exist; sparse and dense MPO tensors are all treated on an equal footing. As a result, the structure of the environments also changes, from requiring a vector of tensors for the different "levels of the MPO" to a single tensor per site, which might be blocked. Otherwise, there is not much change in functionality.
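The environment restructuring described above can be sketched in a few lines. This is a hypothetical plain-Python illustration of the data-layout change, not the actual MPSKit/BlockTensorKit API: the old layout keeps a vector of dense environment tensors indexed by "MPO level", while the new layout keeps a single block-sparse object per site that stores only the nonzero blocks.

```python
# Illustrative sketch only (hypothetical, not MPSKit/BlockTensorKit code):
# old layout -> one entry per MPO level, with None marking absent levels;
# new layout -> a single blocked object per site, storing only nonzero blocks.

# Old-style: a vector of per-level environments.
old_env = [[1.0, 2.0], None, [3.0]]  # level -> dense data (or nothing)

class BlockTensor:
    """Toy block-sparse tensor: absent blocks act as structural zeros."""
    def __init__(self):
        self.blocks = {}                 # block index -> dense data

    def __setitem__(self, idx, data):
        self.blocks[idx] = data

    def __getitem__(self, idx):
        return self.blocks.get(idx)      # None for a structural zero block

# New-style: one blocked environment per site; sparsity is implicit,
# since only the nonzero blocks are ever stored.
new_env = BlockTensor()
for level, data in enumerate(old_env):
    if data is not None:
        new_env[level] = data

print(sorted(new_env.blocks))  # → [0, 2]
print(new_env[1])              # → None (structural zero)
```

The point of the single-object layout is that code consuming the environment no longer needs to loop over "levels" explicitly; dense tensors are just the special case with one block.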

@Gertian (Collaborator) commented Dec 10, 2024

Thanks, that's exactly what I was after :)
I started looking and saw that most changes were very minor, so I wanted to know where the relevant changes were.
I should have worded that better.

Do you want me to mark things that I find unclear?

@lkdvos (Member, Author) commented Dec 10, 2024

It depends a bit: if it's user-side things that are unclear, yes. If it's implementation-side things, maybe, but probably not. While I value clarity and readability, I don't have the time to make sure the implementations are always as clear as possible, since MPSKit is not really meant to be educational...

@lkdvos (Member, Author) commented Dec 10, 2024

Unless there are strong opinions against this, I will merge this tomorrow and keep track of the remaining todos before release in the discussions. The issue is that this PR is a monster, and it's hard to keep an overview of what's going on. There are a few concrete things left to do, which will be easier to track in separate PRs.

@lkdvos lkdvos enabled auto-merge (squash) December 12, 2024 00:38
@lkdvos lkdvos disabled auto-merge December 12, 2024 00:39
@lkdvos lkdvos merged commit 19fae27 into master Dec 12, 2024
27 of 28 checks passed
@lkdvos lkdvos deleted the blocktensor2 branch December 12, 2024 00:41
@lkdvos lkdvos restored the blocktensor2 branch December 12, 2024 00:41

Successfully merging this pull request may close these issues.

@show for sparse_MPO does not work. GradientGrassmann and LazySum performance issue