feat(bridge-history): support codecv7 #1609
Conversation
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI or the configuration file. You can disable this status message via the review-status option in the CodeRabbit configuration.

Walkthrough

The update introduces versioned event signatures and structures for revert operations, refines error handling when instantiating fetcher components, and adds new configuration options. It integrates blob client functionality in both event parsing and fetcher logic, updates utility functions to return an event version with block ranges, and bumps dependency and software version tags. Additionally, new tests are added to verify the event signature hashes.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App
    participant L1Fetcher as NewL1MessageFetcher
    participant BlobClient as blob_client.NewBlobClients
    participant API_Check as Endpoint Checker
    participant Log
    App->>L1Fetcher: Invoke NewL1MessageFetcher(ctx, cfg, db, client)
    L1Fetcher->>BlobClient: Initialize blob clients
    BlobClient-->>L1Fetcher: Return blobClient instance
    L1Fetcher->>API_Check: Check BeaconNode, BlobScan, BlockNative endpoints
    API_Check-->>L1Fetcher: Return configured clients or warnings
    alt No blob client configured
        L1Fetcher->>Log: Log critical error ("no blob client configured")
        L1Fetcher-->>App: Return nil, error
    else Valid configuration
        L1Fetcher->>L1Fetcher: Instantiate L1FetcherLogic with blobClient
        L1Fetcher-->>App: Return instance, nil error
    end
```
```mermaid
sequenceDiagram
    participant Parser as L1EventParser
    participant LogEvent as Event Log
    participant BlobRange as getBatchBlockRangeFromBlob
    LogEvent->>Parser: Send event log (V0 or V7 signature)
    alt Event is V7
        Parser->>BlobRange: Retrieve block range from blob using version and hash
        BlobRange-->>Parser: Return block range or error
    else V0 event processing
        Parser->>Parser: Process event normally (BatchIndex, BatchHash)
    end
    Parser-->>LogEvent: Return parsed event details
```
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

```
@@           Coverage Diff            @@
##   feat/use-codec-v6   #1609  +/-  ##
====================================================
  Coverage          ?   41.46%
====================================================
  Files             ?      222
  Lines             ?    18098
  Branches          ?        0
====================================================
  Hits              ?     7504
  Misses            ?     9862
  Partials          ?      732
====================================================
```

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
Actionable comments posted: 0
🧹 Nitpick comments (1)

bridge-history-api/internal/logic/l1_event_parser.go (1)

285-285: Potential performance consideration with header retrieval

The code retrieves a new block header for each batch in the same transaction. Consider caching the header per block hash to avoid redundant network calls when processing multiple batches in the same block.

```diff
- header, err := client.HeaderByHash(ctx, vlog.BlockHash)
- if err != nil {
-     return nil, fmt.Errorf("failed to get L1 block header for blob context, blockHash: %s, err: %w", vlog.BlockHash.Hex(), err)
- }
+ // Use a map to cache headers by block hash
+ // This would need to be added as a local variable at the beginning of the method
+ header, exists := blockHeaderCache[vlog.BlockHash]
+ if !exists {
+     var err error
+     header, err = client.HeaderByHash(ctx, vlog.BlockHash)
+     if err != nil {
+         return nil, fmt.Errorf("failed to get L1 block header for blob context, blockHash: %s, err: %w", vlog.BlockHash.Hex(), err)
+     }
+     blockHeaderCache[vlog.BlockHash] = header
+ }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- bridge-history-api/internal/logic/l1_event_parser.go (7 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: check
- GitHub Check: test
- GitHub Check: tests
🔇 Additional comments (6)
bridge-history-api/internal/logic/l1_event_parser.go (6)
14-14: Correct integration of blob client support

The addition of `blobClient` to the `L1EventParser` struct and its constructor is well implemented. This enables the parser to interact with blobs for retrieving batch data.

Also applies to: 27-27, 31-31, 35-35
241-253: Well-documented code with clear explanations

The comments explaining the handling of multiple CommitBatch events per transaction in codecV7 are thorough and helpful. The explanation of the `txBlobIndexMap`'s purpose and functionality provides excellent context for understanding the implementation.
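To make the described mechanism concrete, here is a minimal, hypothetical Go sketch of per-transaction blob index tracking. The map name follows the review's description, while the surrounding types and log shapes are illustrative assumptions rather than the actual parser code:

```go
package main

import (
	"fmt"

	"github.com/scroll-tech/go-ethereum/common"
	"github.com/scroll-tech/go-ethereum/core/types"
)

// nextBlobIndex returns the sidecar position of the blob belonging to the
// current CommitBatch event and advances the per-transaction counter, so
// several CommitBatch events in one transaction map to consecutive blobs.
func nextBlobIndex(txBlobIndexMap map[common.Hash]int, txHash common.Hash) int {
	idx := txBlobIndexMap[txHash]
	txBlobIndexMap[txHash] = idx + 1
	return idx
}

func main() {
	txBlobIndexMap := make(map[common.Hash]int)
	logs := []types.Log{
		{TxHash: common.HexToHash("0x01")},
		{TxHash: common.HexToHash("0x01")}, // second CommitBatch in the same tx
		{TxHash: common.HexToHash("0x02")},
	}
	for _, vlog := range logs {
		fmt.Printf("tx %s -> blob index %d\n", vlog.TxHash.Hex(), nextBlobIndex(txBlobIndexMap, vlog.TxHash))
	}
}
```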
273-298: Effective implementation of versioned batch handling

The code correctly handles batches with version >= 7 by maintaining transaction-specific blob indices and extracting block ranges from blobs. Error handling is comprehensive with detailed error messages.
325-331: Robust handling of batch range reversion

The implementation properly handles reverting multiple batches at once by iterating through the range from `StartBatchIndex` to `FinishBatchIndex` and creating individual batch events for each.
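A small sketch of the iteration pattern this comment praises; the field names are taken from the review text, and the rest is an assumption for illustration:

```go
package main

import "fmt"

// RevertBatchV7Event carries an inclusive range of reverted batch indices.
type RevertBatchV7Event struct {
	StartBatchIndex  uint64
	FinishBatchIndex uint64
}

// BatchEvent is one per-batch record derived from the range event.
type BatchEvent struct {
	BatchIndex uint64
	Status     string
}

// expandRevertRange emits one reverted-batch record per index in
// [StartBatchIndex, FinishBatchIndex].
func expandRevertRange(ev RevertBatchV7Event) []BatchEvent {
	events := make([]BatchEvent, 0, ev.FinishBatchIndex-ev.StartBatchIndex+1)
	for i := ev.StartBatchIndex; i <= ev.FinishBatchIndex; i++ {
		events = append(events, BatchEvent{BatchIndex: i, Status: "reverted"})
	}
	return events
}

func main() {
	for _, e := range expandRevertRange(RevertBatchV7Event{StartBatchIndex: 10, FinishBatchIndex: 12}) {
		fmt.Printf("batch %d -> %s\n", e.BatchIndex, e.Status)
	}
}
```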
450-477: Well-structured blob retrieval and decoding method

The `getBatchBlockRangeFromBlob` method is thoroughly implemented with:

- Proper error handling for all possible failure scenarios
- Detailed error messages with context
- Appropriate logging of successful operations
- Clean separation of concerns (fetching, decoding, validation)

This implementation ensures reliable extraction of block ranges from blobs.
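For readers unfamiliar with the flow, a simplified sketch of the fetch → decode → validate sequence described above; the interfaces here are assumptions standing in for the real blob client and da-codec APIs:

```go
package parser

import (
	"context"
	"errors"
	"fmt"
)

// blobFetcher is a simplified stand-in for the real blob client interface.
type blobFetcher interface {
	GetBlobByVersionedHash(ctx context.Context, versionedHash string) ([]byte, error)
}

type decodedBlock struct{ Number uint64 }

// getBatchBlockRangeFromBlob fetches a blob, decodes its block list, and
// validates that it is non-empty before returning the first and last block
// numbers, wrapping every failure with context.
func getBatchBlockRangeFromBlob(ctx context.Context, fetcher blobFetcher, versionedHash string, decode func([]byte) ([]decodedBlock, error)) (uint64, uint64, error) {
	blob, err := fetcher.GetBlobByVersionedHash(ctx, versionedHash)
	if err != nil {
		return 0, 0, fmt.Errorf("failed to fetch blob %s: %w", versionedHash, err)
	}
	blocks, err := decode(blob)
	if err != nil {
		return 0, 0, fmt.Errorf("failed to decode blob %s: %w", versionedHash, err)
	}
	if len(blocks) == 0 {
		return 0, 0, errors.New("decoded blob contains no blocks")
	}
	return blocks[0].Number, blocks[len(blocks)-1].Number, nil
}
```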
321-321: ✅ Verification successful

Check event name in UnpackLog call

There appears to be a naming inconsistency: the event signature is `L1RevertBatchV7EventSig`, but the unpacking uses "RevertBatch0" as the event name.

🏁 Script executed:

```bash
#!/bin/bash
# Verify the event name consistency between signature and unpacking

# Check the event signature definition in the backend ABI
grep -A 1 "L1RevertBatchV7EventSig" --include="*.go" -r .

# Check the event name in the ABI definition
grep -A 10 "RevertBatch0" --include="*.go" -r .
```

Length of output: 3151

Event Name Consistency Confirmed – No Changes Needed

After a thorough verification of the ABI definitions and their usage, we confirmed that:

- The event signature `L1RevertBatchV7EventSig` is defined as `IScrollChainABI.Events["RevertBatch0"].ID` in the ABI file.
- The `UnpackLog` call in `l1_event_parser.go` correctly uses "RevertBatch0", which matches the ABI mapping.
- Test cases also corroborate the expected event signature.

The suggested naming inconsistency does not exist. Please disregard this comment.
Force-pushed from `0c4016c` to `1e1c3b8` (Compare)
Actionable comments posted: 0
♻️ Duplicate comments (1)
bridge-history-api/internal/logic/l1_event_parser.go (1)
321-321: ⚠️ Potential issue

Event name mismatch in UnpackLog.

The event name "RevertBatch0" doesn't match the signature name `L1RevertBatchV7EventSig`. This could lead to unpacking errors.

```diff
- if err := utils.UnpackLog(backendabi.IScrollChainABI, &event, "RevertBatch0", vlog); err != nil {
+ if err := utils.UnpackLog(backendabi.IScrollChainABI, &event, "RevertBatch", vlog); err != nil {
```

Note: This matches a previous reviewer's suggestion.
🧹 Nitpick comments (8)
bridge-history-api/internal/controller/fetcher/l1_fetcher.go (1)

40-58: Consider clarifying error handling and partial endpoint failures.

The approach of continuing after a failed endpoint client creation suggests that endpoints are optional. If all endpoints fail or none are configured, you correctly return an error. However, you may want to document or log more clearly that it is expected to proceed after partial failures.

Additionally, the doc comment on line 39 still implies a simple constructor but does not mention returning an error. You may wish to update that comment to reflect the new behavior:

```diff
-// NewL1MessageFetcher creates a new L1MessageFetcher instance.
+// NewL1MessageFetcher creates a new L1MessageFetcher instance. Returns an error if no valid blob client is configured.
```

bridge-history-api/internal/utils/utils_test.go (1)

23-27: Rename the test function to match the updated method.

You're now testing `GetBatchVersionAndBlockRangeFromCalldata` within a function still named `TestGetBatchRangeFromCalldata`. Consider renaming it for clarity:

```diff
-func TestGetBatchRangeFromCalldata(t *testing.T) {
+func TestGetBatchVersionAndBlockRangeFromCalldata(t *testing.T) {
```

bridge-history-api/internal/utils/utils.go (4)
69-70: Prominent function signature update.

Renaming and returning an extra `version` parameter is clear. Consider adding a descriptive docstring to clarify usage and expected return values:

```diff
+// GetBatchVersionAndBlockRangeFromCalldata extracts the batch version, start block, and finish block from calldata.
+// Returns 0s if these values do not apply for certain methods.
 func GetBatchVersionAndBlockRangeFromCalldata(txData []byte) (uint8, uint64, uint64, error) {
```

73-83: Include method name in error messages.

Currently, errors like "insufficient arguments for commitBatches" provide minimal context. For improved debugging, consider referencing the method name:

```diff
-return 0, 0, 0, fmt.Errorf("insufficient arguments for commitBatches")
+return 0, 0, 0, fmt.Errorf("insufficient arguments for %s: need at least one argument", method.Name)
```

140-140: Improve clarity of the "invalid chunks" error.

To assist debugging, specify which method or chunk index is invalid. A small detail helps future maintainers:

```diff
-return 0, 0, 0, errors.New("invalid chunks")
+return 0, 0, 0, fmt.Errorf("invalid chunks for method %s; chunk array cannot be empty", method.Name)
```

151-151: Return nil instead of err when there's no error assigned.

By this point, `err` is always nil if the code path hasn't exited. Simplify:

```diff
-return version, start, finish, err
+return version, start, finish, nil
```

bridge-history-api/abi/backend_abi.go (1)
280-284: New revert batch version struct.

`L1RevertBatchV7Event` clarifies the extended revert batch info. Optionally, add inline documentation to highlight usage differences compared to `V0`.

bridge-history-api/internal/logic/l1_event_parser.go (1)

273-298: Potential performance optimization for header retrieval.

The code retrieves a block header for each blob in a transaction at line 281, which could lead to duplicate header retrievals for the same block. Consider caching the header by blockHash to avoid redundant RPC calls.

```diff
- header, err := client.HeaderByHash(ctx, vlog.BlockHash)
- if err != nil {
-     return nil, fmt.Errorf("failed to get L1 block header for blob context, blockHash: %s, err: %w", vlog.BlockHash.Hex(), err)
- }
+ // Cache headers by blockHash to avoid redundant RPC calls
+ var header *types.Header
+ var err error
+ if blockHeaderCache == nil {
+     blockHeaderCache = make(map[common.Hash]*types.Header)
+ }
+ if cachedHeader, ok := blockHeaderCache[vlog.BlockHash]; ok {
+     header = cachedHeader
+ } else {
+     header, err = client.HeaderByHash(ctx, vlog.BlockHash)
+     if err != nil {
+         return nil, fmt.Errorf("failed to get L1 block header for blob context, blockHash: %s, err: %w", vlog.BlockHash.Hex(), err)
+     }
+     blockHeaderCache[vlog.BlockHash] = header
+ }
```

This optimization would require adding a cache map as a parameter or method variable.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `bridge-history-api/go.sum` is excluded by `!**/*.sum`
- `go.work.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (13)
- bridge-history-api/abi/backend_abi.go (4 hunks)
- bridge-history-api/abi/backend_abi_test.go (1 hunks)
- bridge-history-api/cmd/fetcher/app/app.go (1 hunks)
- bridge-history-api/conf/config.json (1 hunks)
- bridge-history-api/go.mod (1 hunks)
- bridge-history-api/internal/config/config.go (1 hunks)
- bridge-history-api/internal/controller/fetcher/l1_fetcher.go (4 hunks)
- bridge-history-api/internal/logic/l1_event_parser.go (7 hunks)
- bridge-history-api/internal/logic/l1_fetcher.go (6 hunks)
- bridge-history-api/internal/orm/batch_event.go (1 hunks)
- bridge-history-api/internal/utils/utils.go (5 hunks)
- bridge-history-api/internal/utils/utils_test.go (1 hunks)
- common/version/version.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (8)
- bridge-history-api/abi/backend_abi_test.go
- bridge-history-api/cmd/fetcher/app/app.go
- bridge-history-api/conf/config.json
- common/version/version.go
- bridge-history-api/internal/config/config.go
- bridge-history-api/internal/logic/l1_fetcher.go
- bridge-history-api/internal/orm/batch_event.go
- bridge-history-api/go.mod
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: test
- GitHub Check: tests
🔇 Additional comments (16)
bridge-history-api/internal/controller/fetcher/l1_fetcher.go (4)
5-5: Importing "fmt" for error construction looks appropriate.

No concerns here; this import is necessary for creating and returning meaningful error messages.

14-14: Adding the blob_client import is aligned with the new logic.

This import cleanly integrates the blob client package for multi-endpoint usage.

65-65: Integrating blobClient into NewL1FetcherLogic is straightforward and consistent.

No issues found; passing the newly instantiated blobClient ensures the fetcher logic has access to the correct endpoints.

82-82: Return statement correctly propagates the new error-signature approach.

Leaving the constructor with `(*L1MessageFetcher, error)` is consistent with the earlier checks. Looks good.

bridge-history-api/internal/utils/utils_test.go (4)
30-34: Good coverage for multiple chunk scenario.

This second scenario thoroughly tests multi-chunk data decoding. The assertions look correct.

37-41: Confirmation of genesis batch handling.

Verifying the genesis batch with version `0x0` and blocks at 0 is appropriate. All checks are consistent.

44-48: Correct test for blob-proof batch scenario.

Testing version `0x3` along with the expected block range confirms correct blob-proof processing. Nicely done.

50-60: Validate handling of version=7 with zeroed block range.

It's correct to assert `start = 0` and `finish = 0` if `commitBatches` logic doesn't populate them. If desired, confirm this behavior matches actual usage in production or add more scenarios.

bridge-history-api/abi/backend_abi.go (3)
58-59: Versioned event signatures introduced.

Declaring `L1RevertBatchV0EventSig` and `L1RevertBatchV7EventSig` alongside the old signature fosters clarity on which version is processed. This is a good approach to handle event signature evolution.

167-167: Verify overlapping "RevertBatch" event definitions.

Two "RevertBatch" events with different parameters is valid ABI, but this can cause confusion in some tooling or logs. Please confirm watchers correctly distinguish them by their unique signatures (see the sketch at the end of this section).

275-277: Retaining the legacy revert batch structure.

`L1RevertBatchV0Event` preserves backward compatibility. Looks good.
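To illustrate why overloaded events remain distinguishable, a small sketch computing the two topic hashes; the parameter lists here are assumptions for illustration, not necessarily the exact ABI:

```go
package main

import (
	"fmt"

	"github.com/scroll-tech/go-ethereum/crypto"
)

func main() {
	// Overloaded events share a name but differ in their canonical
	// signatures, so their topic hashes (keccak256 of the signature)
	// never collide.
	sigV0 := "RevertBatch(uint256,bytes32)" // assumed V0 parameter list
	sigV7 := "RevertBatch(uint256,uint256)" // assumed V7 range-style parameter list

	fmt.Println("V0 topic:", crypto.Keccak256Hash([]byte(sigV0)).Hex())
	fmt.Println("V7 topic:", crypto.Keccak256Hash([]byte(sigV7)).Hex())
}
```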
bridge-history-api/internal/logic/l1_event_parser.go (5)

27-27: Good addition of blobClient field.

The addition of the blobClient field to the L1EventParser struct is a necessary change to support the new functionality for handling blob data in codecv7.

30-36: Constructor correctly updated to include blobClient parameter.

The NewL1EventParser function has been properly updated to accept and initialize the new blobClient field, ensuring the parser can interact with blob data.

241-253: Well-documented design for handling multiple CommitBatch events.

The added comments provide excellent documentation explaining how multiple CommitBatch events per transaction are handled using the txBlobIndexMap. This makes the code more maintainable and helps other developers understand the implementation.

325-331: Proper handling of batch range in RevertBatchV7 event.

The implementation correctly processes all batch indices within the specified range, ensuring that all affected batches are marked as reverted. This is an improvement over the previous implementation that only handled a single batch index.

450-477: Well-structured getBatchBlockRangeFromBlob method with good error handling.

This new method includes comprehensive error handling for various failure scenarios (blob not found, unsupported codec version, decoding errors, empty blocks). The error messages are detailed and provide context for easier debugging. The method is also well-structured, following a clear flow of retrieving, decoding, and validating blob data.
Force-pushed from `1e1c3b8` to `a39f89d` (Compare)
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.
Actionable comments posted: 16
🔭 Outside diff range comments (1)

zkvm-prover/Makefile (1)

1-49: 💡 Verification agent

🧩 Analysis chain

Well-structured Makefile with clear targets.

The Makefile provides good version derivation logic and appropriate targets for building, testing, and linting. The conditional logic for different make versions and GPU availability shows thoughtful handling of different environments.

🏁 Script executed:

```bash
#!/bin/bash
# Verify that the required shell scripts exist
if [ ! -f ./print_high_zkevm_version.sh ] || [ ! -f ./print_halo2gpu_version.sh ]; then
  echo "❌ Missing one or more required shell scripts:"
  [ ! -f ./print_high_zkevm_version.sh ] && echo "- print_high_zkevm_version.sh is missing"
  [ ! -f ./print_halo2gpu_version.sh ] && echo "- print_halo2gpu_version.sh is missing"
  exit 1
else
  echo "✅ Required shell scripts exist"
fi

# Check that common/version/version.go exists and contains the expected tag variable
if [ ! -f ../common/version/version.go ]; then
  echo "❌ Missing ../common/version/version.go file"
  exit 1
else
  if ! grep -q "var tag = " ../common/version/version.go; then
    echo "❌ ../common/version/version.go does not contain 'var tag = '"
    exit 1
  else
    echo "✅ ../common/version/version.go exists and contains tag variable"
  fi
fi
```

Length of output: 426

Action Required: Missing Helper Scripts

The Makefile itself is well-structured with clear targets and thoughtful conditional logic. However, verification revealed that the following critical helper scripts are missing from the repository:

- print_high_zkevm_version.sh
- print_halo2gpu_version.sh

These scripts are essential for the version derivation logic to work correctly. Please add them to the repository or update the Makefile to appropriately handle their absence to avoid build failures.
🧹 Nitpick comments (57)
common/libzkp/interface/libzkp.h (1)

11-12: Add documentation and consider return code for error handling.

Currently, `dump_vk` does not have any accompanying docstring or indication of how it handles failures. To improve maintainability and clarity, add a short docstring explaining the usage of `fork_name` and `file`, and consider returning a status code or an indicative signal for error scenarios, rather than silently failing.

common/libzkp/impl/src/verifier.rs (1)
1-2: Evaluate concurrency implications of allowing `static_mut_refs`.

Using `#![allow(static_mut_refs)]` can mask potential concurrency or memory safety hazards. If a mutable static is really needed, ensure robust synchronization. Consider using concurrency-safe primitives (e.g., `sync::OnceLock`) or an alternative design to guard against data races or undefined behavior.

common/libzkp/impl/src/verifier/euclid.rs (2)
10-14: Consider adding documentation comments for the struct.

Adding brief Rust doc comments (///) explaining the purpose and usage context of `EuclidVerifier` will improve code clarity and make the code more approachable for new contributors or future maintenance.

16-31: Replace `.expect(...)` with proper error handling.

Panicking (`.expect(...)`) upon setup failures might be undesirable in certain environments, especially for a library. Instead, consider propagating errors to the caller via `Result<Self>` to allow callers to handle failures gracefully.

```diff
-pub fn new(assets_dir: &str) -> Self {
+pub fn new(assets_dir: &str) -> Result<Self> {
     let verifier_bin = Path::new(assets_dir).join("verifier.bin");
     let config = Path::new(assets_dir).join("root-verifier-vm-config");
     let exe = Path::new(assets_dir).join("root-verifier-committed-exe");
-    Self {
-        chunk_verifier: ChunkVerifier::setup(&config, &exe, &verifier_bin)
-            .expect("Setting up chunk verifier"),
-        batch_verifier: BatchVerifier::setup(&config, &exe, &verifier_bin)
-            .expect("Setting up batch verifier"),
-        bundle_verifier: BundleVerifier::setup(&config, &exe, &verifier_bin)
-            .expect("Setting up bundle verifier"),
-    }
+    Ok(Self {
+        chunk_verifier: ChunkVerifier::setup(&config, &exe, &verifier_bin)?,
+        batch_verifier: BatchVerifier::setup(&config, &exe, &verifier_bin)?,
+        bundle_verifier: BundleVerifier::setup(&config, &exe, &verifier_bin)?,
+    })
 }
```

rollup/internal/config/l2.go (1)
50-50: Consider using uint64 for consistency with other similar fields.

The `MaxChunksPerBatch` field uses `int` while other max/limit fields in this struct and related structs use `uint64`. For consistency, consider using `uint64` unless there's a specific reason to use `int`.

```diff
- MaxChunksPerBatch int `json:"max_chunks_per_batch"`
+ MaxChunksPerBatch uint64 `json:"max_chunks_per_batch"`
```

zkvm-prover/print_high_zkevm_version.sh (1)
1-10: Script looks good but could be improved with modern shell practices.

The script effectively extracts the highest version of zkevm-circuits from Cargo.lock, but consider these improvements:

- Use `$(command)` instead of backticks for command substitution
- Add error handling if no matching lines are found
- Add a check for file existence

```diff
 #!/bin/bash
-set -ue
+set -euo pipefail
+
+if [ ! -f ./Cargo.lock ]; then
+  echo "Error: Cargo.lock file not found" >&2
+  exit 1
+fi

-higher_zkevm_item=`grep "zkevm-circuits.git" ./Cargo.lock | sort | uniq | awk -F "[#=]" '{print $3" "$4}' | sort -k 1 | tail -n 1`
+higher_zkevm_item=$(grep "zkevm-circuits.git" ./Cargo.lock | sort | uniq | awk -F "[#=]" '{print $3" "$4}' | sort -k 1 | tail -n 1)
+
+if [ -z "$higher_zkevm_item" ]; then
+  echo "Error: No zkevm-circuits.git entry found in Cargo.lock" >&2
+  exit 1
+fi

-higher_version=`echo $higher_zkevm_item | awk '{print $1}'`
+higher_version=$(echo "$higher_zkevm_item" | awk '{print $1}')

-higher_commit=`echo $higher_zkevm_item | cut -d ' ' -f2 | cut -c-7`
+higher_commit=$(echo "$higher_zkevm_item" | cut -d ' ' -f2 | cut -c-7)

 echo "$higher_version $higher_commit"
```

zkvm-prover/print_halo2gpu_version.sh (1)
1-22: Add error handling to improve script reliability

This script looks mostly good, but it's missing error handling in a few critical places:

- The grep command in line 13 could fail if the pattern isn't found
- The pushd/popd commands don't check for failures
- The git command assumes the directory is a git repository

Here's an improved version with better error handling:

```diff
 #!/bin/bash
+set -e # Exit immediately if a command exits with a non-zero status

 config_file="$HOME/.cargo/config"

 if [ ! -e "$config_file" ]; then
   exit 0
 fi

 if [[ $(head -n 1 "$config_file") == "#"* ]]; then
   exit 0
 fi

-halo2gpu_path=$(grep -Po '(?<=paths = \[")([^"]*)' $config_file)
+halo2gpu_path=$(grep -Po '(?<=paths = \[")([^"]*)' $config_file || echo "")
+if [ -z "$halo2gpu_path" ]; then
+  echo "Error: Could not find halo2gpu path in $config_file" >&2
+  exit 1
+fi

-pushd $halo2gpu_path
+pushd "$halo2gpu_path" || { echo "Error: Failed to change directory to $halo2gpu_path" >&2; exit 1; }

-commit_hash=$(git log --pretty=format:%h -n 1)
-echo "${commit_hash:0:7}"
+if git rev-parse --is-inside-work-tree > /dev/null 2>&1; then
+  commit_hash=$(git log --pretty=format:%h -n 1)
+  echo "${commit_hash:0:7}"
+else
+  echo "Error: $halo2gpu_path is not a git repository" >&2
+  popd || true
+  exit 1
+fi

-popd
+popd || { echo "Error: Failed to return to original directory" >&2; exit 1; }
```

🧰 Tools
🪛 Shellcheck (0.10.0)

- [warning] 15-15: Use 'pushd ... || exit' or 'pushd ... || return' in case pushd fails. (SC2164)
- [warning] 20-20: Use 'popd ... || exit' or 'popd ... || return' in case popd fails. (SC2164)
database/migrate/migrations/00025_add_l1_message_queue_hash.sql (1)
1-26: Consider specifying column length for VARCHAR fields

The migration looks good overall, but I noticed that the VARCHAR columns are created without an explicit length constraint. It's generally a good practice to specify the maximum length for VARCHAR columns to help with database planning and consistency. Consider updating the column definitions:

```diff
 ALTER TABLE chunk
-ADD COLUMN prev_l1_message_queue_hash VARCHAR DEFAULT '',
-ADD COLUMN post_l1_message_queue_hash VARCHAR DEFAULT '';
+ADD COLUMN prev_l1_message_queue_hash VARCHAR(66) DEFAULT '',
+ADD COLUMN post_l1_message_queue_hash VARCHAR(66) DEFAULT '';

 ALTER TABLE batch
-ADD COLUMN prev_l1_message_queue_hash VARCHAR DEFAULT '',
-ADD COLUMN post_l1_message_queue_hash VARCHAR DEFAULT '';
+ADD COLUMN prev_l1_message_queue_hash VARCHAR(66) DEFAULT '',
+ADD COLUMN post_l1_message_queue_hash VARCHAR(66) DEFAULT '';
```

If these columns are intended to store Ethereum hashes, a length of 66 characters would be appropriate (0x prefix + 64 hex chars).
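As a quick check on the 66-character figure, a hex-encoded 32-byte hash is 64 characters plus the 0x prefix:

```go
package main

import (
	"fmt"

	"github.com/scroll-tech/go-ethereum/common"
)

func main() {
	// common.Hash is a 32-byte array; Hex() renders "0x" + 64 hex chars.
	h := common.HexToHash("0x01")
	fmt.Println(len(h.Hex())) // 66
}
```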
rollup/abi/bridge_abi_test.go (1)
80-95: Consider enhancing test utility function with assertions

The `TestPrintABISignatures` function is helpful for debugging but doesn't include any assertions to validate correctness. In a testing context, it's beneficial to include verifications. Consider adding assertions to verify that important method signatures are present with the expected values. This would make this function more useful as an actual test:

```diff
 func TestPrintABISignatures(t *testing.T) {
+    assert := assert.New(t)
     // print all error signatures of ABI
     abi, err := ScrollChainMetaData.GetAbi()
-    if err != nil {
-        t.Fatal(err)
-    }
+    assert.NoError(err)
+
+    // Map to collect method IDs for verification
+    methodIDs := make(map[string]bool)

     for _, method := range abi.Methods {
         fmt.Println(hexutil.Encode(method.ID[:4]), method.Sig, method.Name)
+        methodIDs[hexutil.Encode(method.ID[:4])] = true
     }

+    // Verify critical methods exist
+    assert.True(methodIDs["0xfa1a1b25"], "commitBatch method should exist")
+    assert.True(methodIDs["0x6f12b0c9"], "finalizeBatchWithProof method should exist")
+
     fmt.Println("------------------------------")
+
+    // Map to collect error IDs for verification
+    errorIDs := make(map[string]bool)
+
     for _, errors := range abi.Errors {
         fmt.Println(hexutil.Encode(errors.ID[:4]), errors.Sig, errors.Name)
+        errorIDs[hexutil.Encode(errors.ID[:4])] = true
     }
+
+    // Verify critical errors exist if applicable
 }
```

This enhancement makes the test both informative (printing signatures) and useful for catching regressions (verifying critical methods exist).

build/dockerfiles/prover.Dockerfile (2)
build/dockerfiles/prover.Dockerfile (2)
15-15
: Missing WORKDIR and destination for COPY command.The COPY command doesn't specify a destination directory, so it will copy to the current working directory. It's better to explicitly set a WORKDIR before COPY operations.
-COPY ./zkvm-prover . +WORKDIR /build +COPY ./zkvm-prover ./
19-23
: Consider adding a non-root user for the runtime container.Running containers as root is a security risk. Consider adding a non-root user for the runtime container.
FROM ubuntu:24.04 AS runtime COPY --from=builder /target/release/prover /usr/local/bin/ +RUN useradd -u 1000 -m prover +USER prover ENTRYPOINT ["prover"]Makefile (1)
11-16: Removed `-u` flag from go get commands.

The `-u` flag has been removed from all `go get` commands, which means dependencies will no longer be updated to their latest minor or patch version. This gives more control over dependency versions but requires manual updates for security patches. Consider adding a comment explaining why the `-u` flag was removed, particularly if this is important for version stability or compatibility.

coordinator/cmd/tool/tool.go (1)
67-67: Hardcoded fork name could be problematic.

The hardcoded "darwinV2" string passed to `NewBatchProof` might be brittle if there are other possible values or if this value needs to change in the future. Consider extracting this as a configurable parameter or constant:

```diff
-proof := message.NewBatchProof("darwinV2")
+const defaultHardForkName = "darwinV2"
+proof := message.NewBatchProof(defaultHardForkName)
```

Or better yet, derive it from a configuration or environment variable:

```diff
-proof := message.NewBatchProof("darwinV2")
+proof := message.NewBatchProof(cfg.HardForkName)
```

rollup/internal/orm/chunk.go (1)
113-113: Error message inconsistency with function name

The error message still refers to the old function name "getLatestChunk error" even though the function has been renamed to "GetLatestChunk".

```diff
- return nil, fmt.Errorf("Chunk.getLatestChunk error: %w", err)
+ return nil, fmt.Errorf("Chunk.GetLatestChunk error: %w", err)
```

zkvm-prover/Makefile (1)
45-48: Consider adding linting check for integration tests.

The lint target covers all-features and all-targets, but you might want to add specific checks for integration tests if they exist.

```diff
 lint:
 	cargo check --all-features
 	cargo clippy --all-features --all-targets -- -D warnings
 	cargo fmt --all
+	# If you have integration tests
+	cargo clippy --tests -- -D warnings
```

bridge-history-api/internal/utils/utils_test.go (2)
23-27: Consider adding negative test coverage.

These tests for single-chunk scenarios look correct. However, testing invalid or malformed calldata (e.g., truncated or random bytes) would improve reliability; see the sketch after this comment.

30-34: Looks good for multi-chunk coverage.

Having a test case that checks for error conditions (like corrupted chunk data) could further strengthen the scenario coverage.
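Building on the suggestion, a hypothetical negative test; the function name follows the review's description of the renamed utility, and the byte payload is arbitrary garbage:

```go
package utils

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

// TestGetBatchRangeFromMalformedCalldata asserts that random bytes do not
// decode into a batch version and block range.
func TestGetBatchRangeFromMalformedCalldata(t *testing.T) {
	_, _, _, err := GetBatchVersionAndBlockRangeFromCalldata([]byte{0xde, 0xad, 0xbe, 0xef})
	assert.Error(t, err, "malformed calldata should be rejected")
}
```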
1-47
: Potential refinement for version flag handling.
Usingstd::process::exit(0)
inside an async context can abruptly halt the runtime. Consider returning early instead of callingexit()
, allowing any asynchronous tasks to finish gracefully.rollup/internal/controller/sender/sender.go (2)
174-253
: Enhance test coverage for multiple blobs.
Your logic to handle multiple blobs and check pending transactions is sound. Adding dedicated tests—particularly covering edge cases such as empty, partial, or large blobs slices—would bolster confidence.I can help write those test cases if needed. Would you like me to open a new issue for this?
684-720
: Robust sidecar creation for multiple blobs.
This function carefully checks for empty or nil blobs and computes commitments and proofs. If performance becomes a concern with large arrays, consider parallelizing the blob processing.zkvm-prover/src/types.rs (4)
5-12
: Use more descriptive documentation comments.While the
Task
struct is straightforward, consider adding///
doc comments to explain howtask_type
,task_data
, andhard_fork_name
are used, especially if they're critical to proof construction or process control.
14-21
: Consider usingOption<String>
forerror
field inProofDetail
.When there's no error, storing an empty string consumes unnecessary space. An
Option<String>
could convey missing errors more explicitly.- pub error: String, + pub error: Option<String>,
23-28
: Evaluate whether more specific error variants would be beneficial.
Panic
vs.NoPanic
is a start, but you may benefit from more detailed error categories (e.g., compile-time failure, runtime error, etc.) to handle distinct proof failure modes.
69-74
: Add clarifying doc comments onProofStatus
.Indicate under what conditions
Ok
is used versusError
. This can help external crates or consumers of the API to better interpret proof results.coordinator/internal/logic/submitproof/proof_receiver.go (2)
180-184
: Add logging for batch proofs before verification.To aid debugging, consider logging at least the length or partial content of the deserialized
batchProof
before callingVerifyBatchProof
.
298-304
: Avoid returning generic verification errors for previously verified tasks.Returning
ErrValidatorFailureTaskHaveVerifiedSuccess
is logical, but consider including details about the task or context, so logs and error handlers can distinguish previously verified tasks from newly failing ones.rollup/internal/controller/watcher/chunk_proposer.go (1)
11-11
: Importedcommon
package might be unused or partially utilized.Double-check if this import is essential. If not actively used, removing it could reduce clutter.
bridge-history-api/internal/utils/utils.go (1)
151-151
: Consider returning nil explicitly instead of err.
Sinceerr
isn’t re-assigned after prior checks, returningerr
here is always returning nil in successful cases. For clarity, consider just returning(version, startBlock, finishBlock, nil)
.coordinator/internal/logic/provertask/bundle_prover_task.go (1)
147-154
: Reintroducing hard fork check.
The reinstated conditional properly prevents assigning an incompatible task to the prover. Consider adding a specialized error type or code for clarity, or logging more context about the mismatch to aid debugging.rollup/internal/controller/relayer/l2_relayer_metrics.go (1)
11-11
: Consider a Counter metric instead of a Gauge.
If "batches processed per transaction" is cumulative or always increasing, a Counter or a Summation-based metric might be more appropriate than a Gauge. However, if the intention is a snapshot or ratio, then a Gauge is correct.Also applies to: 43-46
rollup/internal/controller/sender/sender_test.go (2)
298-299
: Streamline transaction hash tracking in tests.
Storing intermediate transaction hashes in a slice and waiting for any to confirm is correct. If speed is a concern, reduce the 30-second timeout or parallelize checks to shorten test duration.Also applies to: 307-307, 311-317
833-835
: Test stub returns a generic error message.
Consider using a descriptive error message to clarify scenarios in logs or debugging output.rollup/internal/controller/watcher/bundle_proposer.go (2)
164-168
: Double-check the forced single-batch rule for CodecV5.Limiting
maxBatchesThisBundle
to 1 whencodecVersion == encoding.CodecV5
may be surprising to callers used to multi-batch behavior. Ensure all code paths are aware of this condition, and confirm there's no confusion when a different codec version is used within the same system.
204-208
: Repetitive calls toallBatchesCommittedInSameTXIncluded
could affect performance.Both the maximum-batch logic and the timeout logic invoke the same method. If these calls happen frequently in large-scale scenarios, consider caching or deferring repeated database lookups to reduce overhead.
bridge-history-api/abi/backend_abi.go (1)
58-59
: Clarify naming convention for revert event signatures.Renaming
L1RevertBatchEventSig
toL1RevertBatchV0EventSig
and introducingL1RevertBatchV7EventSig
is a clear versioning strategy. However, consider surfacing the version consistently (e.g., “RevertBatchV...” for all references) to avoid confusion.rollup/internal/orm/orm_test.go (1)
487-488
: Incorporate additional negative tests.After updating the proof with “new test proof,” consider adding a negative test that attempts to retrieve a proof from a non-existent bundle hash to ensure robust error handling in production scenarios.
coordinator/internal/logic/verifier/verifier.go (2)
60-65
: Consider adding a doc comment or explanation.
This newrustVkDump
struct is descriptive, but a short doc comment explaining its purpose could aid maintainability.
232-264
: Potential concurrency and security considerations with temp files
The function writes a VK dump to a temporary file and removes it afterward. While this approach is valid, consider the following:
- Concurrency: If multiple invocations occur in parallel, the same temp file name might trigger conflicts.
- Security: Using
os.TempDir()
can lead to potential collisions or malicious interference.Switching to
os.CreateTemp
or generating a unique file name can mitigate these risks.zkvm-prover/src/zk_circuits_handler/euclid.rs (2)
12-19
:EuclidHandler
struct design
This struct neatly holds separate provers for chunk, batch, and bundle proofs. Ensure concurrency usage is safe if multiple methods are invoked simultaneously on these provers.
48-94
: Async trait implementation
get_vk
returns the verification key from each prover. Usingtry_lock().unwrap()
can panic if the lock is already held elsewhere. Consider a more graceful fallback.get_proof_data
properly deserializes input, filters by proof type, and generates the proof. Error handling for unknown proof types is partially handled, but keep it consistent across the application.zkvm-prover/src/prover.rs (5)
1-13
: Consider structured logging and context tracing for debug & error analysis.These imports and crate usages are a good start, but consider adding structured logging or context-based error tracking (e.g.,
tracing
crate) to help debug issues in asynchronous calls, especially since this code will likely run in production.
57-76
: Document the rationale for enumerating all proof types.In
get_vks
, you iterate over all known circuits and gather verification keys for each requested proof type. This is correct if the code expects to handle many proof types for each circuit, but consider clarifying in docs or comments how large this can get in production and if any caching or filtering is needed for performance.
78-90
: Clarify error message and finalize the default fields.When
do_prove
fails, the error message is "failed to request proof". The message might be improved to clarify context, e.g., "Proof generation failed" or adding thereq.proof_type
for easier troubleshooting. Also confirm that default fields are correct when returning a failed response.
133-141
: Consolidate initialization logic for modularity.
LocalProver::new
directly constructs its fields, but if you anticipate adding more setup steps (e.g., preloading circuit data), you could break them out into dedicated helper methods for clarity.
143-168
: Check performance implications of blocking indo_prove
.Using
spawn_blocking
intokio
is acceptable for CPU-bound tasks, but repeatedly spawning blocking tasks can degrade performance if not carefully managed. Ensure you have enough worker threads in the runtime for heavy-lift tasks or consider a custom thread pool if usage is high.bridge-history-api/go.mod (1)
13-14
: Double-check pinned version approach.
github.com/scroll-tech/da-codec
andgithub.com/scroll-tech/go-ethereum
are pinned to specific commits. This is useful for stability but can lead to missing security patches if not revisited periodically. Consider adding a schedule for reviewing/updating pinned commits.coordinator/go.mod (1)
14-15
: Keep pinned commits updated.
github.com/scroll-tech/da-codec
andgithub.com/scroll-tech/go-ethereum
rely on pinned commits here as well. Periodically review or automate checks for updated stable releases to avoid drift.rollup/internal/controller/relayer/l2_relayer.go (6)
396-449
: Validate chunk and batch codec version alignment carefully.This block immediately returns upon discovering a mismatch. Confirm that entirely aborting the process is desired instead of partially committing unaffected batches. If parallel multi-version support is envisioned, partial commits might be necessary.
450-541
: Potential concurrency conflict with force submission.If
ProcessPendingBatches
is invoked concurrently, multiple goroutines may attempt force submission, possibly resulting in double commits. Consider enforcing mutual exclusion or a single-threaded execution model to avoid race conditions.
543-551
: Potential collision in context ID strings.Constructing context IDs by concatenating all batch hashes could lead to very long strings or collisions in extreme cases. Consider a hashing approach or other scheme to reduce the size and chance of collisions.
561-565
: Struct naming clarity.
dbBatchWithChunksAndParent
could be more descriptive, for exampleBatchSubmissionContext
, to clarify its purpose.
712-729
: Consider partial re-try instead of returning.Returning immediately on errors (e.g., chunk retrieval failures) prevents later items from being processed. This could hide feasible work and stall subsequent processing.
1190-1247
: Consider extracting shared finalize logic.Adding specialized logic for each codec version can lead to code duplication. Factor out common behavior for improved maintainability and readability.
zkvm-prover/Cargo.toml (2)
11-14
: Check inconsistent double slash in Git URLs.
The URLs contain//
(double slash) which might be a typo. For example:https://github.com/scroll-tech//revm
Ensure this is intentional or correct it to avoid possible fetch errors or confusion.
9-17
: Prefer pinned commit references over tracking branches.
Using"branch"
references can cause unexpected breakage if the branch changes. To enhance build reproducibility, consider pinning to a specific commit revision or a verified tag.common/types/message/message.go (1)
82-95
: Consider adding robust consistency checks toChunkProof
.
UnlikeBatchProof
andBundleProof
,ChunkProof
lacks aSanityCheck
method. If chunk proofs require validation (similar to batches and bundles), implement a method to guard against malformed proofs.rollup/abi/bridge_abi.go (1)
27-27: Consider renaming the duplicated "RevertBatch" events.

The ABI lists two `RevertBatch` events with different input signatures. While Solidity supports overloaded events, it can cause confusion in log parsing or indexing if event filters rely solely on the event name. Consider giving them distinct names to improve clarity.
🛑 Comments failed to post (16)
common/libzkp/impl/src/lib.rs (1)
69-82: 🛠️ Refactor suggestion

Handle or log errors when retrieving verifiers.

The `_dump_vk` function silently does nothing if `get_verifier` fails (i.e., if the `Ok` branch isn't taken). Consider logging an error, returning a code, or handling the situation more explicitly to inform callers that no verifier was found. Otherwise, debugging potential misconfigurations can be more difficult.

common/libzkp/impl/src/verifier.rs (1)
50-50: 🛠️ Refactor suggestion

Double check the omission of `low_version_circuit`.

Here, only `high_version_circuit` is referenced. If `low_version_circuit` is still relevant, consider initializing it or ensuring there's clarity around its removal. Otherwise, callers may expect both circuits to be set.

Also applies to: 54-54
common/libzkp/impl/src/verifier/euclid.rs (1)
55-64: 🛠️ Refactor suggestion

Return a `Result<()>` from `dump_vk`.

Relying on `.expect(...)` for file I/O and JSON serialization can cause abrupt termination if an error occurs. By returning `Result<()>`, you can propagate errors and handle them appropriately upstream.

```diff
-fn dump_vk(&self, file: &Path) {
-    let f = File::create(file).expect("Failed to open file to dump VK");
+fn dump_vk(&self, file: &Path) -> Result<()> {
+    let f = File::create(file).map_err(|e| anyhow!("Failed to open file: {}", e))?;
     let dump = VKDump {
         chunk_vk: base64::encode(self.chunk_verifier.get_app_vk()),
         batch_vk: base64::encode(self.batch_verifier.get_app_vk()),
         bundle_vk: base64::encode(self.bundle_verifier.get_app_vk()),
     };
-    serde_json::to_writer(f, &dump).expect("Failed to dump VK");
+    serde_json::to_writer(f, &dump)
+        .map_err(|e| anyhow!("Failed to dump VK: {}", e))?;
+
+    Ok(())
 }
```
common/libzkp/impl/Cargo.toml (2)
20-41: 💡 Verification agent
🧩 Analysis chain
Added patches for revm and reth dependencies
Comprehensive patches have been added for revm and reth components to ensure compatibility with the scroll-zkvm implementation.
🏁 Script executed:
#!/bin/bash # Check if the repository URLs with double slashes are valid echo "URLs with double slashes might cause issues with cargo. Checking the pattern:" grep -n "scroll-tech//" common/libzkp/impl/Cargo.tomlLength of output: 2345
Attention: Verify and Fix Extra Slash in Git URLs
The patches for the
revm
andreth
dependencies have been added to support the scroll-zkvm implementation. However, the verification check confirms that all Git URLs incommon/libzkp/impl/Cargo.toml
include an extra slash (e.g.,"https://github.com/scroll-tech//revm"
). While Cargo might normalize these URLs, it's best to verify if the extra slash is intentional. If it's not required, please update the URLs to use a single slash (for example,"https://github.com/scroll-tech/revm"
) to prevent any potential issues.
- Action Required:
- Confirm whether the extra slash is accepted by Cargo.
- If not, update all occurrences in both the
revm*
andreth*
patches accordingly.
11-18: 💡 Verification agent
🧩 Analysis chain
Updated core dependencies with custom forks
The dependencies have been updated to use custom forks from the scroll-tech repositories. This ensures compatibility with project-specific requirements.
🏁 Script executed:
#!/bin/bash # Check for any duplicate or conflicting dependencies grep -n "branch" common/libzkp/impl/Cargo.toml | sort echo "" echo "Checking for double slashes in repository URLs that might cause issues:" grep -n "//" common/libzkp/impl/Cargo.tomlLength of output: 5589
Action Required: Clean Up Dependency Declarations
- The custom fork updates for core dependencies are in place to meet our project requirements.
- However, the Cargo.toml currently shows duplicate declarations. For instance, dependencies like
revm
,revm-interpreter
,revm-precompile
, andrevm-primitives
appear both in the main dependency block (lines 12–18) and again under a[patch."https://github.com/scroll-tech/revm.git"]
section (lines 21–24). Please verify if this duplication is intentional or if leftover entries should be removed.- Additionally, several Git URLs (e.g.,
"https://github.com/scroll-tech//revm"
,"https://github.com/scroll-tech//reth"
) contain extraneous double slashes. Although these might be normalized by Cargo, it’s best to correct them for clarity and consistency.Please address these issues to ensure the manifest is clear and free from potential ambiguities.
coordinator/internal/types/prover.go (1)
40-41: 🛠️ Refactor suggestion

Update String() method to handle the new ProverType.

The new ProverTypeOpenVM constant is added but not handled in the String() method (lines 22-31), which will cause it to fall into the default case and return "illegal prover type" when stringified.

```diff
 func (r ProverType) String() string {
 	switch r {
 	case ProverTypeChunk:
 		return "prover type chunk"
 	case ProverTypeBatch:
 		return "prover type batch"
+	case ProverTypeOpenVM:
+		return "prover type openvm"
 	default:
 		return fmt.Sprintf("illegal prover type: %d", r)
 	}
 }
```

Additionally, you should consider updating the MakeProverType function to handle this new prover type if needed.
rollup/internal/controller/watcher/chunk_proposer.go (1)
426-453: 🛠️ Refactor suggestion

Confirm the choice of `encoding.CodecV5` for blocks flagged as Euclid.

If the code is truly entering the Euclid fork, consider using a version >= `CodecV7`. Using `CodecV5` might introduce confusion or inconsistent handling of proof data.

```diff
- codecVersion := encoding.CodecV5
+ codecVersion := encoding.CodecV7 // or higher, depending on intended version
```
bridge-history-api/internal/utils/utils.go (1)
140-140: ⚠️ Potential issue

Check chunk size requirements to avoid out-of-bounds.

The code references slices `[1:61]` and `[1+lastBlockIndex*60 : 1+lastBlockIndex*60+60]` without verifying the chunk length is ≥61 bytes. This might lead to runtime panics if the data is malformed.

```diff
+ if len(chunks[0]) < 61 {
+     return 0, 0, 0, errors.New("invalid chunk length for the first block")
+ }
+ // similarly check for the last chunk before parsing
```
rollup/internal/controller/watcher/bundle_proposer.go (1)
169-174: ⚠️ Potential issue

Use the correct batch index for error reporting.

The error message refers to `batches[0].Index` and `batches[0].Hash` instead of the batch that actually has the empty `CommitTxHash`. Consider switching to `batches[i].Index` and `batches[i].Hash` to precisely identify the problematic batch.

```diff
 if len(batches[i].CommitTxHash) == 0 {
-    return fmt.Errorf("commit tx hash is empty for batch %v %s", batches[0].Index, batches[0].Hash)
+    return fmt.Errorf("commit tx hash is empty for batch %v %s", batches[i].Index, batches[i].Hash)
 }
```
zkvm-prover/src/prover.rs (3)
50-55:
⚠️ Potential issueEnsure thread safety when reusing
LocalProver
across async tasks.
current_task
andactive_handler
are stored as fields without any synchronization. It's unclear whether multiple concurrent calls to this service are expected. If so, consider wrapping these fields in a mutex or using atomic references to prevent potential race conditions in multi-threaded scenarios.
170-177:
⚠️ Potential issueConsider concurrency in
set_active_handler
.
set_active_handler
changes the active handler without synchronization. In scenarios where multiple proofs might be requested for different circuits concurrently, the active handler might get swapped unexpectedly. Confirm single-thread usage or add synchronization around this.
179-188: 🛠️ Refactor suggestion
Handle unknown hard forks more gracefully.
Currently
new_handler
usesunwrap()
and thenunreachable!()
for anything other than"euclid"
. If new hard forks are introduced or misconfigured requests come in, the code will panic. Consider returning a descriptive error instead, or handle additional circuit types gracefully.rollup/internal/controller/relayer/l2_relayer.go (1)
839-840: ⚠️ Potential issue

Overriding 'withProof' with 'false' breaks finalization.

This override renders the parameter ineffective and bypasses the proof logic. It appears to be a bug. Apply this diff to remove the forced override:

```diff
 var aggProof message.BundleProof
-withProof = false
```
common/types/message/message.go (3)
281-288: 🛠️ Refactor suggestion

Repeat the same fix for `OpenVMBatchProof`.

Likewise, replace panics with proper error handling during the JSON marshalling process.
250-257: 🛠️ Refactor suggestion

Avoid panics in library code when marshalling JSON.

Panic can crash your application if `json.Marshal` fails (e.g., invalid UTF-8 in `VmProof`). As a safer approach, consider returning an error to the caller or storing the error in the proof object.

```diff
 func (p *OpenVMChunkProof) Proof() ([]byte, error) {
-    proofJson, err := json.Marshal(p.VmProof)
-    if err != nil {
-        panic(fmt.Sprint("marshaling error", err))
-    }
-    return proofJson
+    // Return the error to the caller instead of panicking
+    data, err := json.Marshal(p.VmProof)
+    if err != nil {
+        return nil, fmt.Errorf("failed to marshal chunk proof: %w", err)
+    }
+    return data, nil
 }
```
354-357: ⚠️ Potential issue

Prevent out-of-range indexing in `OpenVMBundleProof.Proof()`.

`p.EvmProof.Instances[:384]` can panic if `len(p.EvmProof.Instances) < 384`. Consider adding a boundary check:

```diff
 func (p *OpenVMBundleProof) Proof() []byte {
+    if len(p.EvmProof.Instances) < 384 {
+        panic(fmt.Sprintf("instances length (%d) < 384, cannot slice safely", len(p.EvmProof.Instances)))
+    }
     proofBytes := make([]byte, 0, 384+len(p.EvmProof.Proof))
     proofBytes = append(proofBytes, p.EvmProof.Instances[:384]...)
     return append(proofBytes, p.EvmProof.Proof...)
 }
```
Force-pushed from `a39f89d` to `3c6d1f3` (Compare)
Purpose or design rationale of this PR
- Update the Pectra-compatible header hotfix. Same as this PR: #1607
- Monitor new RevertBatch events; update batch statuses in range.
- Monitor message queue v2.
- Support getting the block range from `commitBatches` and `commitAndFinalizeBatch`.

PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
Deployment tag versioning

Has `tag` in `common/version.go` been updated, or have you added the `bump-version` label to this PR?

Breaking change label

Does this PR have the `breaking-change` label?

Summary by CodeRabbit
New Features

- `LocalProver` for handling proof generation tasks in a local context.

Enhancements

Bug Fixes

Chores