Guardian now produces rollup based on interval time and available batches size #1536
Conversation
Logic looks good.
A couple of small cleanup things.
@BedrockSquirrel, could you have a look as well?
Approach seems good to me, just a couple of ideas and a couple of questions that came to mind that I wasn't sure about.
go/host/enclave/guardian.go (Outdated)
@@ -608,3 +646,40 @@ func (g *Guardian) streamEnclaveData() {
		}
	}
}

func (g *Guardian) getBatchesAfterSize(seqNo uint64) (uint64, error) {
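For context on what this helper is doing, here is a rough, purely illustrative sketch of a method with this signature (imports and surrounding type omitted); g.storage, FetchBatchBySeqNo, EncryptedTxBlob and the not-found sentinel are assumed names, not the actual host API:

```go
// Illustrative sketch only, not the real implementation: walk the batches
// produced after seqNo and sum the size of their payloads, so the caller can
// compare the accumulated size against maxRollupSize.
func (g *Guardian) getBatchesAfterSize(seqNo uint64) (uint64, error) {
	var total uint64
	for next := seqNo + 1; ; next++ {
		batch, err := g.storage.FetchBatchBySeqNo(next) // assumed storage accessor
		if err != nil {
			if errors.Is(err, errutil.ErrNotFound) { // assumed "no more batches" sentinel
				return total, nil
			}
			return 0, fmt.Errorf("could not fetch batch %d: %w", next, err)
		}
		total += uint64(len(batch.EncryptedTxBlob)) // assumed payload field
	}
}
```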
I find this method name a bit confusing. Maybe it could be something like getSizeOfBatchesAfter() (or maybe since instead of after).
Yeah, I found that confusing as well
Third here, hate the name. WDYT of calculateUnrolledupBatchesSize?
Minor thing: I'd put in a minimum-time check to make sure it doesn't spam duplicate rollups.
	if !g.state.IsUpToDate() {
		// if we're behind the L1, we don't want to produce rollups
		g.logger.Debug("skipping rollup production because L1 is not up to date", "state", g.state)
		continue
	}

-	fromBatch, err := g.getLastestBatchNo()
+	fromBatch, err := g.getLatestBatchNo()
I think we should have another skip here, something like:

	if time.Since(lastSuccessfulRollup) < minRollupInterval {
		continue // only just produced a rollup, allow time for enclave to consume
	}

where minRollupInterval can probably just be something like 1*blocktime (it's mostly just in case the timer fires soon after the previous run; not sure how the timer works and if that's possible, but I assume it is).
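A sketch of how that skip could sit in the guardian's periodic rollup check; lastSuccessfulRollup, rollupTicker, g.cfg.L1BlockTime and produceAndPublishRollup are placeholder names for illustration, not the existing code:

```go
// Illustrative only: remember when the last rollup was successfully published
// and skip the periodic check if the ticker fires again too soon afterwards.
var lastSuccessfulRollup time.Time
minRollupInterval := g.cfg.L1BlockTime // assumption: ~1 block time is enough spacing

for range rollupTicker.C {
	if time.Since(lastSuccessfulRollup) < minRollupInterval {
		continue // only just produced a rollup, allow time for the enclave/L1 to consume it
	}
	if err := g.produceAndPublishRollup(); err != nil { // assumed helper
		g.logger.Error("rollup production failed", "err", err)
		continue
	}
	lastSuccessfulRollup = time.Now()
}
```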
Yeah, I'd like to have a minRollupInterval as a performance improvement, perhaps at a later stage? Mainly because it's not clear what the minRollupInterval should be.
It should be OK to not have this check for now because:
- This is an awaited method. Once the rollup is created, it will wait for the receipt, which guarantees one block of spacing.
- This whole process is then called after the 1*l1blocktime duration, which is AFTER the successful tx mint. So after awaiting the receipt (step above), it waits another block before checking whether it should roll up.
Would it make sense to add a few timers/logs so that it's possible to monitor this behaviour (for future improvements)?
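On the timers/logs idea, a minimal sketch of the kind of instrumentation that could be added around the publish call; produceAndPublishRollup and lastSuccessfulRollup are the same placeholder names as in the sketch above:

```go
// Illustrative only: record how long the awaited rollup publish takes and the
// gap since the previous rollup, so the real spacing can be observed before
// committing to a minRollupInterval value.
start := time.Now()
err := g.produceAndPublishRollup() // assumed helper: creates the rollup and awaits the L1 receipt
if err != nil {
	g.logger.Error("rollup production failed", "err", err, "duration", time.Since(start))
} else {
	g.logger.Info("rollup published", "duration", time.Since(start),
		"sinceLastRollup", time.Since(lastSuccessfulRollup))
	lastSuccessfulRollup = time.Now()
}
```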
Could be slightly confusing when the timer fires straight away, but shouldn't be a problem; like you say, probably just an optimisation at worst.
Yeah, as discussed on Discord, the race condition I was worried about isn't an issue anymore because you're using the L1 data directly to decide how far we have rolled up. Disregard this!
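For reference, the kind of pattern being described (deriving the rolled-up point from L1 rather than from local host state); FetchLatestRollupHeader, LastBatchSeqNo and maxRollupSize are assumed names used only to illustrate the idea:

```go
// Illustrative only: ask the L1 for the latest published rollup and start the
// next rollup from the first batch after it, so stale local state or a racing
// timer can't cause a double-publish.
lastRollup, err := g.sl.L1Publisher().FetchLatestRollupHeader() // assumed accessor
if err != nil {
	return fmt.Errorf("could not fetch latest rollup from L1: %w", err)
}
fromSeqNo := lastRollup.LastBatchSeqNo + 1 // first batch not yet included in a rollup

availableBatchesSize, err := g.getBatchesAfterSize(fromSeqNo)
if err != nil {
	return err
}
if availableBatchesSize < maxRollupSize { // assumed flag-configured threshold
	return nil // not enough batch data to justify a rollup yet
}
```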
lgtm
lgtm
Why this change is needed
https://github.com/obscuronet/obscuro-internal/issues/2153
The guardian now produces a rollup when the availableBatchesSize of batches after the last rolled-up seqNo is >= maxRollupSize, with maxRollupSize configurable as a flag.

What changes were made as part of this PR
Please provide a high level list of the changes made
PR checks pre-merging
Please indicate below by ticking the checkbox that you have read and performed the required PR checks