feat: follower node sync from DA rebased to syncUpstream/active #1013
Conversation
Conflicts:
  cmd/geth/main.go
  core/state_processor_test.go
  core/txpool/legacypool/legacypool.go
  eth/backend.go
  eth/ethconfig/config.go
  eth/gasprice/gasprice_test.go
  eth/handler.go
  eth/protocols/eth/broadcast.go
  eth/protocols/eth/handlers.go
  go.mod
  go.sum
  miner/miner.go
  miner/miner_test.go
  miner/scroll_worker.go
  miner/scroll_worker_test.go
  params/config.go
  params/version.go
  rollup/rollup_sync_service/rollup_sync_service_test.go
Semgrep found 6 findings.
Risk: Affected versions of golang.org/x/net, golang.org/x/net/http2, and net/http are vulnerable to Uncontrolled Resource Consumption. An attacker may cause an HTTP/2 endpoint to read arbitrary amounts of header data by sending an excessive number of CONTINUATION frames.
Fix: Upgrade this library to at least version 0.23.0 at go-ethereum/go.mod:144.
References: GHSA-4v7x-pqxf-cx7m, CVE-2023-45288
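A sketch of the suggested remediation with standard Go module tooling:

```bash
# Bump golang.org/x/net to the patched release and tidy the module files.
go get golang.org/x/net@v0.23.0
go mod tidy
```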
we can upgrade the da-codec to 41c6486 now
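Assuming the module path `github.com/scroll-tech/da-codec`, pinning that commit could look like:

```bash
# Pin da-codec to commit 41c6486; `go get` resolves it to a pseudo-version.
go get github.com/scroll-tech/da-codec@41c6486
go mod tidy
```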
…ync-directly-from-da-rebased
Conflicts:
  eth/backend.go
  go.mod
  go.sum
  miner/scroll_worker.go
  rollup/rollup_sync_service/rollup_sync_service.go
…ync-directly-from-da-rebased
1. Purpose or design rationale of this PR
This PR (originally implemented in #631, but moved here to the `syncUpstream/active` branch) implements a "follower node from DA/L1" mode, which reproduces the L2 state solely from L1 events and data loaded from DA (calldata is retrieved directly from the L1 RPC; historical blobs are loaded via a beacon node or blob APIs and verified via `versionedHash`).
On a high level, it works as follows: the L2 functionality of the node is disabled; instead, it connects only to the configured L1 RPC, beacon node or blob APIs, and retrieves all rollup events (commit batch, revert, finalize), L1 messages, and batch data (i.e. calldata, or blobs since Bernoulli). Once an event is finalized on L1, the resulting state (meaning L2 state and blocks) is derived and verified from this data.
The derivation process works as a pipeline with the following steps (see the Go sketch after this list):
- `DAQueue`: uses `DataSource` to retrieve events and corresponding data (calldata or blob).
- `BatchQueue`: sorts different `DATypes` and returns committed, finalized batches in order.
- `BlockQueue`: converts batches to `PartialBlocks` that can be used to create the L2 state.
- `DASyncer`: executes each `PartialBlock` and inserts it into the chain.
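A minimal Go sketch of how these four stages chain together, assuming simplified stand-in types (the real l2geth implementation uses richer `DataSource`, batch and `PartialBlock` abstractions):

```go
// Illustrative sketch only: stand-in types for the DA derivation pipeline.
package main

import "fmt"

type daEntry struct{ batchIndex uint64 }   // rollup event plus its calldata/blob payload
type batch struct{ index uint64 }          // a committed and finalized batch, in order
type partialBlock struct{ number uint64 }  // block payload derived from a batch

// daQueue plays the role of DAQueue: it uses the data source to emit
// events and their corresponding data.
func daQueue(out chan<- daEntry) {
	defer close(out)
	for i := uint64(0); i < 3; i++ {
		out <- daEntry{batchIndex: i}
	}
}

// batchQueue plays the role of BatchQueue: it orders entries and emits
// committed, finalized batches.
func batchQueue(in <-chan daEntry, out chan<- batch) {
	defer close(out)
	for e := range in {
		out <- batch{index: e.batchIndex}
	}
}

// blockQueue plays the role of BlockQueue: it converts each batch into
// PartialBlocks that can be executed against the L2 state.
func blockQueue(in <-chan batch, out chan<- partialBlock) {
	defer close(out)
	for b := range in {
		out <- partialBlock{number: b.index}
	}
}

// main plays the role of DASyncer: it executes each PartialBlock and
// inserts the result into the chain.
func main() {
	entries := make(chan daEntry)
	batches := make(chan batch)
	blocks := make(chan partialBlock)
	go daQueue(entries)
	go batchQueue(entries, batches)
	go blockQueue(batches, blocks)
	for blk := range blocks {
		fmt.Printf("executing and inserting derived block %d\n", blk.number)
	}
}
```

Channels stand in for the hand-off between queue stages; the actual pipeline additionally handles revert events, retries, and blob-provider rotation.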
How to run?
Run `l2geth` with the `--da.sync` flag. Provide blob APIs and a beacon node with:
- `--da.blob.beaconnode "<L1 beacon node>"` (recommended, if the beacon node supports historical blobs)
- `--da.blob.blobscan "https://api.blobscan.com/blobs/"` and `--da.blob.blocknative "https://api.ethernow.xyz/v1/blob/"` for mainnet
- `--da.blob.blobscan "https://api.sepolia.blobscan.com/blobs/"` for Sepolia

Strictly speaking, only one of the blob providers is necessary, but during testing blobscan and blocknative were not fully reliable. That's why using a beacon node with historical blob data is recommended (it can be used in addition to blobscan and blocknative). The pipeline rotates the blob providers and retries if one of them fails. A sample invocation is sketched below.
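For illustration, a possible mainnet invocation; the `--da.*` flags are from this PR, while the binary path, the `--scroll` network selector, and the `--l1.endpoint` L1 RPC flag are assumptions that may differ in your build:

```bash
# Illustrative sketch: follower-node sync from DA on mainnet.
./build/bin/geth \
  --scroll \
  --da.sync \
  --l1.endpoint "https://my-l1-rpc:8545" \
  --da.blob.beaconnode "https://my-l1-beacon-node:5052" \
  --da.blob.blobscan "https://api.blobscan.com/blobs/" \
  --da.blob.blocknative "https://api.ethernow.xyz/v1/blob/"
```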
Mainnet
A full sync will take about 2 weeks, depending on the speed of the RPC node, the beacon node, and the local machine. Progress is reported as follows for every 1000 blocks applied:
INFO [08-01|16:44:42.173] L1 sync progress blockhain height=87000 block hash=608eec..880ebd root=218215..9a58a2
Sepolia
A full sync will take about 2-3 days, depending on the speed of the RPC node, the beacon node, and the local machine. Progress is reported as follows for every 1000 blocks applied:
INFO [08-01|16:44:42.173] L1 sync progress blockhain height=87000 block hash=608eec..880ebd root=218215..9a58a2
Troubleshooting
You should see something like this shortly after starting:
L1 sync progress [...]
It might take a while for the first L1 sync progress [...] line to appear, as the L1 blocks are more sparse at the beginning.
Temporary errors
Especially at the beginning, temporary errors might appear in the console. This is expected: the pipeline relies on L1 messages, and if they are not yet synced far enough, such an error can pop up. Syncing continues once the L1 messages are available.
Limitations
The `state root` of a block can be reproduced when using this mode of syncing, but currently not the `block hash`. This is because the header fields `difficulty` and `extraData` are not yet stored on DA, while both are used by the Clique consensus that the Scroll protocol employs. This will be fixed in a future upgrade; the main implementation on l2geth is already done: #903 #913.
To verify the locally created `state root` against mainnet (https://sepolia-rpc.scroll.io/ for Sepolia), we can do the following:
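For instance (a sketch, not from the PR itself), both headers can be fetched with the standard `eth_getBlockByNumber` JSON-RPC method and compared field by field; the local port and the mainnet URL are assumptions:

```bash
# Block 11000 is 0x2af8 in hex. Fetch the header from the local follower
# node (the local RPC port is an assumption)...
curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x2af8",false],"id":1}' \
  http://localhost:8545

# ...and the same block from the remote RPC (the mainnet URL is an
# assumption; use https://sepolia-rpc.scroll.io/ for Sepolia).
curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x2af8",false],"id":1}' \
  https://rpc.scroll.io
```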
By comparing the headers we can most importantly see that `state root`, `receiptsRoot`, and everything that has to do with the state matches. However, the following fields will be different:
- `difficulty`, and therefore `totalDifficulty`
- `extraData`
- `size`, due to differences in header size
- `hash`, and therefore `parentHash`
Example local output for block 11000:
Example remote output:
2. PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
3. Deployment tag versioning
Has the version in `params/version.go` been updated?
4. Breaking change label
Does this PR have the `breaking-change` label?