chore: increase the data commitment blocks limit in the API #1268
Conversation
This is done to support 4-hour batches of attestations.
I don't have sufficient context to form a strong opinion about this change. @rach-id, given your greater familiarity with the blobstream module, I trust your judgment and approve it.
This seems like a breaking change because `DataCommitmentBlocksLimit` is exported by celestia-core and used by celestia-app here. If this is a breaking change, it needs a `!` in the PR title.
pkg/consts/consts.go (Outdated)

```diff
@@ -41,5 +41,5 @@ var (
 	NewBaseHashFunc = sha256.New

 	// DataCommitmentBlocksLimit is the limit to the number of blocks we can generate a data commitment for.
-	DataCommitmentBlocksLimit = 1000
+	DataCommitmentBlocksLimit = 2048
```
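For context, here is a minimal sketch of how an API consumer (e.g. a relayer) might respect this limit when requesting commitments over a large block range. The helper and its names are hypothetical illustrations, not code from celestia-core:

```go
package main

import "fmt"

// dataCommitmentBlocksLimit mirrors the new value in pkg/consts; local copy for illustration.
const dataCommitmentBlocksLimit = 2048

// batchRanges splits the half-open range [start, end) into sub-ranges of at
// most dataCommitmentBlocksLimit blocks each, so every sub-range fits in a
// single data commitment query.
func batchRanges(start, end uint64) [][2]uint64 {
	var out [][2]uint64
	for start < end {
		stop := start + dataCommitmentBlocksLimit
		if stop > end {
			stop = end
		}
		out = append(out, [2]uint64{start, stop})
		start = stop
	}
	return out
}

func main() {
	fmt.Println(batchRanges(1, 5001)) // [[1 2049] [2049 4097] [4097 5001]]
}
```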
How was `2048` derived? I would expect `1200` instead of `2048` because:
4 (hours) * 60 (minutes) * 60 (seconds) = 14,400 seconds in 4 hours
14,400 / 12 (slot time on Ethereum) = 1,200
14,400 / 12 (block time on Celestia) = 1,200
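A quick sketch of the arithmetic above, assuming a ~12-second block time:

```go
package main

import "fmt"

func main() {
	const windowSeconds = 4 * 60 * 60 // 4 hours = 14,400 seconds
	const blockTimeSeconds = 12       // assumed average block time (Celestia block / Ethereum slot)
	fmt.Println(windowSeconds / blockTimeSeconds) // 1200
}
```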
To have some extra wiggle room. I am currently discussing bumping it to 10000, but we're still investigating the impacts with Succinct.
2048 * 12 = 24,576 seconds, and 24,576 / 60 / 60 ≈ 6.8 hours, so almost 7 hours. Given this isn't close to 4 hours, can the PR description be updated?
Also, can the rationale for the chosen value be documented in a code comment next to that line, if it's not immediately obvious?
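For illustration, one way such a rationale could be captured next to the constant (suggested wording only, not the repo's actual comment):

```go
package consts // sketch of the suggested comment, not the actual file

var (
	// DataCommitmentBlocksLimit is the limit to the number of blocks we can
	// generate a data commitment for. A 4-hour attestation batch at ~12s
	// block time is ~1,200 blocks; the limit is set higher to leave headroom
	// for larger batches.
	DataCommitmentBlocksLimit = 2048
)
```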
Increasing this value is not breaking, but decreasing it would require version management etc. That's why I didn't add the `!`.
Can you please expand on why this change is non-breaking? It's an exported constant from celestia-core, so it still seems breaking to me.
The way I see it: if a data commitment window could previously be set up without error, then it can still be set up without error using the new value.
Even if the API changed and the response is exported, existing downstream implementations don't necessarily need to be updated to match the new API. Similarly for this batch size increase: if anyone is using that constant, they don't need to change their existing logic for it to keep working with the new release. This is my reasoning, let me know if I missed something so that I add the `!`. Making sure whether a change is breaking or not can be troublesome sometimes :D
I disagree. This constant will be used to validate governance param changes, since it is referenced by celestia-app for that purpose.

Since the test fails, we can't adopt this change in a minor release of celestia-app, because then some consensus nodes could consider a param change valid while others consider it invalid.

Even if it's only enforced at genesis, I think it's breaking. Assume we mark this as non-breaking here and cut the next celestia-core release with it. Then assume we bump the celestia-core dep in celestia-app. The next celestia-app release (v1.8.0) would be incompatible with the current celestia-app release (v1.7.0) because the two versions would produce different results when validating the same genesis file.

[tangent] This constant also seems to be used here, so if it is changed, I think it violates the comment at the top of that file. If we modify this constant, we should either update that comment or move the constant out of that file.
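A hedged sketch of the concern (the names and the check itself are illustrative, not celestia-app's actual validation code): two binaries compiled against different values of the limit can disagree on whether the same param value is valid.

```go
package main

import (
	"errors"
	"fmt"
)

// validateWindow mimics a genesis/param check that caps the data commitment
// window at a compiled-in limit. This is an illustration, not real code.
func validateWindow(window, limit uint64) error {
	if window > limit {
		return errors.New("data commitment window exceeds limit")
	}
	return nil
}

func main() {
	const window = 1500 // hypothetical param value between the old and new limits

	// A node built with the old limit rejects it, a node built with the new
	// one accepts it, so the two binaries disagree on the same genesis file.
	fmt.Println(validateWindow(window, 1000)) // data commitment window exceeds limit
	fmt.Println(validateWindow(window, 2048)) // <nil>
}
```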
IMO this is still breaking. See the last comment.
6761c7e
@rootulp Good catch 👍 switched to using a local limit in the API.
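A minimal sketch of what "a local limit in the API" could look like, assuming an unexported constant scoped to the API package; the package name, function, and value are illustrative (the value discussed above was 10_000), not the actual commit:

```go
package blobstreamapi // hypothetical package name

import "fmt"

// dataCommitmentBlocksLimit bounds how many blocks a single data commitment
// query may cover. Because it is unexported and local to the API, changing it
// does not break downstream consumers of the exported pkg/consts constant.
const dataCommitmentBlocksLimit = 10_000

// validateRange is an illustrative check an API handler might perform before
// generating a data commitment for [start, end).
func validateRange(start, end uint64) error {
	if end <= start {
		return fmt.Errorf("end (%d) must be greater than start (%d)", end, start)
	}
	if end-start > dataCommitmentBlocksLimit {
		return fmt.Errorf("range of %d blocks exceeds limit %d", end-start, dataCommitmentBlocksLimit)
	}
	return nil
}
```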
On second thought, I'm not going to block on adding a comment explaining `10_000`, but I think it would be a nice addition.
LGTM!
Description

This is done to support 4-hour batches of attestations in the API without having to make a breaking change.

PR checklist
- [ ] Tests written/updated
- [ ] Changelog entry added in `.changelog` (we use [unclog](https://github.com/informalsystems/unclog) to manage our changelog)
- [ ] Updated relevant documentation (`docs/` or `spec/`) and code comments

(cherry picked from commit 1d70a8e)