Bundle submission rework #1966
Conversation
I think it looks good in principle.
Left a few minor comments.
@@ -12,6 +12,9 @@ interface Structs {

    struct RollupStorage {
        mapping(bytes32=>MetaRollup) byHash;
        mapping(uint256=>bytes32) byOrder;
I'm wondering if this can be optimised a bit for gas usage.
Also, we need to think about how to clean data that is no longer necessary from the mapping (old rollups).
Not without saving things in the db
LGTM.
But let's keep an eye on the performance
Why this change is needed
The old system does not account for forks and rollbacks; this one does. Furthermore, instead of publishing an unbounded amount of data, it publishes per rollup (if anything can be found within it).
The publishing system for the bundle needs to account for a rollup being republished. This could mean that a different set of batches is active for the L1 blocks, so we need to be able to resubmit bundles, potentially with different hashes, bound to the other L1 blocks.
What changes were made as part of this PR
The management contract is extended to create unique IDs for rollups, based on the previous block hash. Since a rollup cannot contain batches pointing to the block it is mined in, its uniqueness can be derived entirely from the previous block, as that is the last one where a batch might be applicable. Rollups published on two separate blocks that share the same parent are thus considered identical and produce the same unique fork ID.
A new system is introduced where a state machine tracks the management contract. Rollups are ordered sequentially, and whenever one is inserted a unique fork ID is recorded at its sequence number. This means that if a rollup transaction is re-mined on a different fork, the fork ID will differ, but the rollup will still sit at, for example, height 55 both times.
The state machine tracks whether there are new fork IDs and publishes bundles for the rollups behind them. Whenever it attempts to synchronize, it checks that the last fork ID it knows about is still present in the smart contract. If it has disappeared, it backtracks to the last known common ancestor and moves the "current rollup needle" to it. This in turn makes the publisher go back through all the rollups missing for the new fork.
In ideal conditions this mechanism will almost never be used, and when it is, it will rarely have to go back through many rollups.
On startup the system scans from the beginning.
PR checks pre-merging
Please indicate below by ticking the checkbox that you have read and performed the required PR checks.