
Guardian now produces rollup based on interval time and available batches size #1536

Merged: 9 commits into main, Sep 25, 2023

Conversation

otherview (Contributor)

Why this change is needed

https://github.com/obscuronet/obscuro-internal/issues/2153

  • Guardian now produces rollups if availableBatchesSize >= maxRollupSize
  • When the Guardian produces a rollup, it resets both the (existing) IntervalTimeTicker and the SizeCheckTicker
  • Enclave now has a method to fetch the total size of batches from a given seqNo
  • Host now receives maxRollupSize as a flag
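The trigger described above can be sketched as follows. This is a minimal illustration, not the real guardian code: `shouldProduceRollup`, the ticker intervals, and the batch-size value are all hypothetical names and numbers standing in for the PR's actual fields.

```go
package main

import (
	"fmt"
	"time"
)

// shouldProduceRollup mirrors the PR's trigger condition (illustrative name):
// produce a rollup once the accumulated batch data reaches maxRollupSize.
func shouldProduceRollup(availableBatchesSize, maxRollupSize uint64) bool {
	return availableBatchesSize >= maxRollupSize
}

func main() {
	maxRollupSize := uint64(1024)
	intervalTicker := time.NewTicker(5 * time.Second) // stands in for IntervalTimeTicker
	sizeCheckTicker := time.NewTicker(1 * time.Second) // stands in for SizeCheckTicker
	defer intervalTicker.Stop()
	defer sizeCheckTicker.Stop()

	availableBatchesSize := uint64(2048)
	if shouldProduceRollup(availableBatchesSize, maxRollupSize) {
		// after producing a rollup, reset both tickers so neither fires
		// immediately off stale elapsed time
		intervalTicker.Reset(5 * time.Second)
		sizeCheckTicker.Reset(1 * time.Second)
		fmt.Println("producing rollup")
	}
}
```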

What changes were made as part of this PR

Please provide a high level list of the changes made

PR checks pre-merging

Please indicate below by ticking the checkbox that you have read and performed the required
PR checks

  • PR checks reviewed and performed

@tudor-malene (Collaborator) left a comment

logic looks good.

A couple of small cleanup things.

@BedrockSquirrel , could you have a look as well?

Review threads:
  • go/common/rpc/generated/enclave_grpc.pb.go (outdated, resolved)
  • go/host/enclave/guardian.go (resolved)
  • go/host/enclave/guardian.go (outdated, resolved)
  • go/host/enclave/guardian.go (outdated, resolved)
@BedrockSquirrel (Collaborator) left a comment

Approach seems good to me, just a couple of ideas and a couple of questions that came to mind that I wasn't sure about.

Review threads:
  • go/host/enclave/guardian.go (outdated, resolved)
  • go/host/enclave/guardian.go (outdated, resolved)
@@ -608,3 +646,40 @@ func (g *Guardian) streamEnclaveData() {
}
}
}

func (g *Guardian) getBatchesAfterSize(seqNo uint64) (uint64, error) {
Collaborator:

Find this method name a bit confusing. Maybe could be something like getSizeOfBatchesAfter() (maybe since instead of after)

Collaborator:

Yeah, I found that confusing as well

Contributor (author):

Third here, hate the name. WDYT of calculateUnrolledupBatchesSize?
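Whatever name wins, the helper being discussed boils down to summing batch sizes past a sequence number. A rough sketch under the `getSizeOfBatchesAfter` naming (the `Batch` struct and its fields here are simplified stand-ins, not the real enclave types):

```go
package main

import "fmt"

// Batch is a simplified stand-in for the enclave's batch type.
type Batch struct {
	SeqNo uint64
	Data  []byte
}

// getSizeOfBatchesAfter sums the serialized size of every batch with a
// sequence number strictly greater than fromSeqNo. The real enclave API
// reads from storage rather than a slice.
func getSizeOfBatchesAfter(batches []Batch, fromSeqNo uint64) uint64 {
	var total uint64
	for _, b := range batches {
		if b.SeqNo > fromSeqNo {
			total += uint64(len(b.Data))
		}
	}
	return total
}

func main() {
	batches := []Batch{
		{SeqNo: 1, Data: make([]byte, 100)},
		{SeqNo: 2, Data: make([]byte, 200)},
		{SeqNo: 3, Data: make([]byte, 300)},
	}
	fmt.Println(getSizeOfBatchesAfter(batches, 1)) // batches 2 and 3 → 500
}
```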

Review threads:
  • go/host/enclave/guardian.go (resolved)
  • go/config/host_config.go (resolved)
@BedrockSquirrel (Collaborator) left a comment

Minor thing: I'd put in the minimum time check to make sure it doesn't spam dup rollups

if !g.state.IsUpToDate() {
// if we're behind the L1, we don't want to produce rollups
g.logger.Debug("skipping rollup production because L1 is not up to date", "state", g.state)
continue
}

- fromBatch, err := g.getLastestBatchNo()
+ fromBatch, err := g.getLatestBatchNo()
Collaborator:

I think we should have another skip here that's like:

if time.Since(lastSuccessfulRollup) < minRollupInterval {
   continue // only just produced a rollup, allow time for enclave to consume
}

where minRollupInterval can probably just be ~1*blocktime (it's mostly just in case the timer fires soon after the previous run; not sure how the timer works and if that's possible, but I assume it is)

Contributor (author):

Yeah, I'd like to have a minRollupInterval as a performance improvement, perhaps at a later stage?
Mainly because it's not clear what the minRollupInterval should be.

Should be ok to not have this check for now because:

  • This is an awaited method. Once the rollup is created, it waits for the receipt, which guarantees one block of spacing.
  • This whole process is then invoked after a 1*l1blocktime duration, which is AFTER the successful tx mint. So after awaiting the receipt (step above) it waits another block before checking whether it should roll up.

Would it make sense to add a few timers/logs so that it's possible to monitor this behavior? (for future improvements?)

Collaborator:

Could be slightly confusing when the timer fires straightaway but shouldn't be a problem, like you say probably just optimisation at worst.

Collaborator:

Yeah as discussed on discord the race condition I was worried about isn't an issue any more because you're using the L1 data directly to decide how far we have rolled up. Disregard this!

@BedrockSquirrel (Collaborator) left a comment

lgtm

@tudor-malene (Collaborator) left a comment

lgtm

@otherview otherview merged commit 4b2d0e9 into main Sep 25, 2023
2 checks passed
@otherview otherview deleted the pedro/rollup_production_based_on_size branch September 25, 2023 13:48
3 participants