optimize mutexes and increase batch size #2215

Merged · 5 commits · Dec 19, 2024
Changes from 2 commits
4 changes: 2 additions & 2 deletions go/config/defaults/0-base-config.yaml
@@ -8,11 +8,11 @@ network:
batch:
interval: 1s
maxInterval: 1s # if this is greater than batch.interval then we make batches more slowly when there are no transactions
maxSize: 45000 # around 45kb - around 200 transactions / batch
Contributor:
There was a reason we picked the batch size to be lower, but I can't recall it. The only hard limit I remember is 64kb for accepting eth transactions in the geth code.

Contributor:
Actually, rollups carry more metadata than batches, so some headroom needs to stay free. If the batch maxSize matches the rollup size we might have issues.

maxSize: 131072 # 128kb - the size of the rollup
rollup:
interval: 5s
maxInterval: 10m # rollups will be produced after this time even if the data blob is not full
maxSize: 131072 # 128kb
Contributor:
Do blobs have any overhead on top of rollup metadata? @badgersrus

maxSize: 131072 # 128kb - the size of a blob
gas: # todo: ask stefan about these fields
baseFee: 1000000000 # using geth's initial base fee for EIP-1559 blocks.
minGasPrice: 1000000000 # using geth's initial base fee for EIP-1559 blocks.
21 changes: 12 additions & 9 deletions go/enclave/enclave_admin_service.go
@@ -35,7 +35,8 @@ import (

type enclaveAdminService struct {
config *enclaveconfig.EnclaveConfig
mainMutex sync.Mutex // serialises all data ingestion or creation to avoid weird races
mainMutex sync.Mutex // locks the admin operations
dataInMutex sync.RWMutex // controls access to data ingestion
logger gethlog.Logger
l1BlockProcessor components.L1BlockProcessor
validatorService nodetype.Validator
@@ -92,6 +93,7 @@ func NewEnclaveAdminAPI(config *enclaveconfig.EnclaveConfig, storage storage.Sto
eas := &enclaveAdminService{
config: config,
mainMutex: sync.Mutex{},
dataInMutex: sync.RWMutex{},
logger: logger,
l1BlockProcessor: blockProcessor,
service: validatorService,
@@ -176,8 +178,8 @@ func (e *enclaveAdminService) MakeActive() common.SystemError {

// SubmitL1Block is used to update the enclave with an additional L1 block.
func (e *enclaveAdminService) SubmitL1Block(ctx context.Context, blockHeader *types.Header, receipts []*common.TxAndReceiptAndBlobs) (*common.BlockSubmissionResponse, common.SystemError) {
e.mainMutex.Lock()
defer e.mainMutex.Unlock()
e.dataInMutex.Lock()
defer e.dataInMutex.Unlock()

e.logger.Info("SubmitL1Block", log.BlockHeightKey, blockHeader.Number, log.BlockHashKey, blockHeader.Hash())

@@ -237,8 +239,8 @@ func (e *enclaveAdminService) SubmitBatch(ctx context.Context, extBatch *common.
return err
}

e.mainMutex.Lock()
defer e.mainMutex.Unlock()
e.dataInMutex.Lock()
defer e.dataInMutex.Unlock()

// if the signature is valid, then store the batch together with the converted hash
err = e.storage.StoreBatch(ctx, batch, convertedHeader.Hash())
@@ -261,8 +263,8 @@ func (e *enclaveAdminService) CreateBatch(ctx context.Context, skipBatchIfEmpty

defer core.LogMethodDuration(e.logger, measure.NewStopwatch(), "CreateBatch call ended")

e.mainMutex.Lock()
defer e.mainMutex.Unlock()
e.dataInMutex.RLock()
defer e.dataInMutex.RUnlock()

err := e.sequencer().CreateBatch(ctx, skipBatchIfEmpty)
if err != nil {
Expand All @@ -278,8 +280,9 @@ func (e *enclaveAdminService) CreateRollup(ctx context.Context, fromSeqNo uint64
}
defer core.LogMethodDuration(e.logger, measure.NewStopwatch(), "CreateRollup call ended")

e.mainMutex.Lock()
defer e.mainMutex.Unlock()
// allow the simultaneous production of rollups and batches
e.dataInMutex.RLock()
defer e.dataInMutex.RUnlock()

if e.registry.HeadBatchSeq() == nil {
return nil, responses.ToInternalError(fmt.Errorf("not initialised yet"))