
chore: increase the data commitment blocks limit in the API #1268

Merged · 6 commits into main · Mar 27, 2024

Conversation

@rach-id (Member) commented Mar 13, 2024

Description

This is done to support 4-hour batches of attestations in the API without having to make a breaking change.

PR checklist

  • Tests written/updated
  • Changelog entry added in .changelog (we use
    unclog to manage our changelog)
  • Updated relevant documentation (docs/ or spec/) and code comments

ninabarbakadze previously approved these changes Mar 13, 2024
staheri14 previously approved these changes Mar 14, 2024
@staheri14 (Collaborator) left a comment

I don't have sufficient context to form a strong opinion about this change. @rach-id, given your greater familiarity with the blobstream module, I trust your judgment and approve it.

@rootulp (Collaborator) left a comment

This seems like a breaking change because `DataCommitmentBlocksLimit` is exported by celestia-core and used by celestia-app here. If this is a breaking change, it needs a `!` in the PR title.

```diff
@@ -41,5 +41,5 @@ var (
 	NewBaseHashFunc = sha256.New

 	// DataCommitmentBlocksLimit is the limit to the number of blocks we can generate a data commitment for.
-	DataCommitmentBlocksLimit = 1000
+	DataCommitmentBlocksLimit = 2048
```
Collaborator commented on this diff:

How was 2048 derived? I would expect 1200 instead of 2048 because:

4 (hours) * 60 (minutes) * 60 (seconds) = 14,400 seconds in 4 hours
14,400 / 12 (slot time on Ethereum) = 1,200
14,400 / 12 (block time on Celestia) = 1,200
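
For concreteness, here is a small Go sketch of the arithmetic in this comment; the 12-second figure is the Ethereum slot time / Celestia block time cited above, not a value defined in this PR:

```go
package main

import "fmt"

func main() {
	const (
		windowHours      = 4
		secondsPerHour   = 3600
		blockTimeSeconds = 12 // ~12 s Ethereum slot time / Celestia block time, as cited above
	)

	// 4 hours of blocks at ~12 s per block.
	seconds := windowHours * secondsPerHour // 14,400 seconds
	blocks := seconds / blockTimeSeconds    // 1,200 blocks
	fmt.Printf("%d seconds -> %d blocks\n", seconds, blocks)

	// Conversely, the proposed limit of 2048 blocks covers about 6.8 hours.
	hours := float64(2048*blockTimeSeconds) / secondsPerHour
	fmt.Printf("2048 blocks -> %.1f hours\n", hours)
}
```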

@rach-id (Member, Author) replied:

To have a bit of extra wiggle room. I am currently discussing bumping it to 10000, but we're still investigating the impacts with Succinct.

Collaborator replied:

2048 * 12 = 24,576 seconds, and 24,576 / 60 / 60 ≈ 6.8 hours, so almost 7 hours. Given this isn't close to 4 hours, can the PR description be updated?

Also, can the rationale for the chosen value be documented in a code comment next to the line, if it's not immediately obvious?
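
A sketch of the kind of rationale comment being requested, using the numbers discussed in this thread; the package name and wording are illustrative, not taken from the actual change:

```go
// Package consts is a hypothetical stand-in for the file that holds the constant.
package consts

var (
	// DataCommitmentBlocksLimit is the limit to the number of blocks we can
	// generate a data commitment for. 2048 blocks at ~12 s per block covers
	// roughly 6.8 hours of attestations, leaving headroom above the 4-hour
	// batch target this PR aims to support.
	DataCommitmentBlocksLimit = 2048
)
```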

rach-id marked this pull request as draft March 14, 2024 10:41
@rach-id (Member, Author) commented Mar 14, 2024

> This seems like a breaking change because `DataCommitmentBlocksLimit` is exported by celestia-core and used by celestia-app here. If this is a breaking change, it needs a `!` in the PR title.

Increasing this value is not breaking, but decreasing it would require version management, etc. That's why I didn't add the `!`: the change is backwards compatible.

@rootulp (Collaborator) commented Mar 14, 2024

Can you please expand on why this change is non-breaking? It's an exported constant from celestia-core so it still seems breaking to me.

@rach-id (Member, Author) commented Mar 14, 2024

> Can you please expand on why this change is non-breaking? It's an exported constant from celestia-core so it still seems breaking to me.

The way I see it: if a data commitment window could previously be set up without error, that would still be the case with the new value.
It's like when you have an API that returns, for example:

```go
type Response struct {
    Resp1 string
}
```

And add a new field:

```go
type Response struct {
    Resp1 string
    Extra string
}
```

Even though the API changed and the response type is exported, existing downstream implementations don't necessarily need to be updated to match the new API. Similarly for this batch size increase: anyone using that constant doesn't need to update their existing logic for it to keep working with the new release.
Furthermore, this was only enforced at genesis, so if a first block was generated from a genesis that uses a window < 1000, it will still pass with a limit of 2048 or any higher value.

This is my reasoning; let me know if I missed something so that I can add the `!` :D

Figuring out whether a change is breaking or not can be tricky sometimes :D

evan-forbes previously approved these changes Mar 15, 2024
@rootulp (Collaborator) commented Mar 15, 2024

> Furthermore, this was only enforced at genesis

I disagree. This constant will be used to validate governance param changes because `DataCommitmentWindow` is considered governance modifiable (see specs). See this branch and this test output:

```
inside validate data commitment window
...
key: DataCommitmentWindow, value: "10000", err: invalid parameter value: data commitment window 10000 must be <= data commitment blocks limit 2048: invalid data commitment window: failed to set parameter
```

Since the test fails, we can't adopt this change in a minor release of celestia-app, because some consensus nodes could consider a param change to `DataCommitmentWindow = 2000` valid while others would consider it invalid. This definitely seems consensus breaking to me, so IMO we should mark this change as breaking. We should also document that we can't backport it to celestia-app v1.x and that it must be introduced in v2.x.

Even if it's only enforced at genesis, I think it's breaking. Assume we mark this as non-breaking here and cut the next celestia-core release with it, then bump the celestia-core dep in celestia-app. The next celestia-app release (v1.8.0) would be incompatible with the current release (v1.7.0) because the two versions would produce different results when validating a genesis file with `DataCommitmentWindow = 2000`: v1.7.0 would consider the genesis file invalid, while v1.8.0 would consider it valid.

[tangent] This constant seems to be used here, so if it is changed, I think it violates this comment at the top of the file. If we modify this constant, we should either update that comment or move the constant out of that file.
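
A minimal sketch of the disagreement, reusing the error message from the test output above; the function shape is illustrative, not the actual validation code in either repo:

```go
package main

import "fmt"

// validateDataCommitmentWindow mirrors the check quoted in the test output:
// a window is accepted only if it does not exceed the blocks limit the node
// was compiled with.
func validateDataCommitmentWindow(window, limit uint64) error {
	if window > limit {
		return fmt.Errorf("data commitment window %d must be <= data commitment blocks limit %d", window, limit)
	}
	return nil
}

func main() {
	// Any window that passed under the old limit of 1000 still passes under
	// 2048, which is the sense in which the author calls the bump backwards
	// compatible.
	fmt.Println(validateDataCommitmentWindow(800, 1000), validateDataCommitmentWindow(800, 2048))

	// But a window of 2000 is rejected by a node built with the old constant
	// and accepted by a node built with the new one, so the two versions
	// disagree on validity: the consensus-breaking scenario described above.
	fmt.Println(validateDataCommitmentWindow(2000, 1000)) // error
	fmt.Println(validateDataCommitmentWindow(2000, 2048)) // <nil>
}
```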

@rootulp (Collaborator) left a comment

IMO this is still breaking. See the last comment.

rach-id dismissed stale reviews from evan-forbes, staheri14, and ninabarbakadze via 6761c7e March 16, 2024 00:57
@rach-id (Member, Author) commented Mar 16, 2024

@rootulp Good catch 👍 I switched to using a local limit in the API.
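
A hedged sketch of what "a local limit in the API" could look like; the names (`apiBlocksLimit`, `validateDataCommitmentRange`) are invented for illustration and the 10_000 value comes from the earlier discussion, so this is not the actual celestia-core code:

```go
package main

import "fmt"

// DataCommitmentBlocksLimit stands in for the exported consensus constant;
// the idea is that it no longer needs to change.
const DataCommitmentBlocksLimit = 1000

// apiBlocksLimit is a hypothetical limit enforced only by the read-only
// query API, allowing larger attestation batches without a breaking change.
const apiBlocksLimit = 10_000

// validateDataCommitmentRange sketches the API-side check on a requested
// block range [begin, end).
func validateDataCommitmentRange(begin, end uint64) error {
	if end <= begin {
		return fmt.Errorf("end block %d must be greater than begin block %d", end, begin)
	}
	if end-begin > apiBlocksLimit {
		return fmt.Errorf("range of %d blocks exceeds the API blocks limit %d", end-begin, apiBlocksLimit)
	}
	return nil
}

func main() {
	fmt.Println(validateDataCommitmentRange(1, 2_001))  // within the API limit
	fmt.Println(validateDataCommitmentRange(1, 20_001)) // rejected by the API limit
}
```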

rach-id changed the title from "chore: increase the data commitment blocks limit" to "chore: increase the data commitment blocks limit in the API" Mar 18, 2024
rach-id marked this pull request as ready for review March 25, 2024 14:04
rach-id requested a review from rootulp March 25, 2024 14:04
evan-forbes previously approved these changes Mar 26, 2024
rootulp previously approved these changes Mar 26, 2024
@rootulp (Collaborator) left a comment

On second thought, I'm not going to block on the comment explaining 10_000, but I think it would be a nice addition.

rach-id dismissed stale reviews from rootulp and evan-forbes via 2193ac5 March 26, 2024 16:19
@staheri14 (Collaborator) left a comment

LGTM!

rach-id enabled auto-merge (squash) March 27, 2024 01:27
rach-id merged commit 1d70a8e into main Mar 27, 2024
18 checks passed
rach-id deleted the increase-data-commitment-blocks-limit branch March 27, 2024 01:32
rach-id added a commit that referenced this pull request May 9, 2024
## Description

This is done to support 4-hour batches of attestations in the API without having to make a breaking change.

#### PR checklist

- [ ] Tests written/updated
- [ ] Changelog entry added in `.changelog` (we use
[unclog](https://github.com/informalsystems/unclog) to manage our
changelog)
- [ ] Updated relevant documentation (`docs/` or `spec/`) and code
comments

(cherry picked from commit 1d70a8e)
rach-id added a commit that referenced this pull request May 9, 2024
…1268) (#1353)

## Description

This is done to support 33-hour batches of attestations in the API without having to make a breaking change.