
feat(bridge-history): support codecv7 #1609


Merged

merged 6 commits on Mar 8, 2025

Conversation

colinlyguo
Member

@colinlyguo colinlyguo commented Mar 6, 2025

Purpose or design rationale of this PR

  • Update the Pectra-compatible header hotfix. Same as this PR: #1607
  • Monitor new RevertBatch events and update batch statuses in range.
  • Monitor message queue v2.
  • Support getting the block range from commitBatches and commitAndFinalizeBatch (see the sketch below).
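
As a rough sketch of the last point (not the exact repo code; tx and the surrounding fetcher plumbing are assumed), the renamed utility reviewed below is used roughly like this:

version, startBlock, finishBlock, err := utils.GetBatchVersionAndBlockRangeFromCalldata(tx.Data())
if err != nil {
    return fmt.Errorf("failed to parse batch calldata, tx: %s, err: %w", tx.Hash().Hex(), err)
}
if version >= 7 {
    // codecv7 commitBatches calldata carries no block range; the parser
    // recovers it from the blob instead (see getBatchBlockRangeFromBlob below),
    // so startBlock and finishBlock are zero here.
}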

PR title

Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:

  • feat: A new feature

Deployment tag versioning

Has tag in common/version.go been updated or have you added bump-version label to this PR?

  • Yes

Breaking change label

Does this PR have the breaking-change label?

  • No, this PR is not a breaking change

Summary by CodeRabbit

  • New Features

    • Introduced versioned event processing with enhanced differentiation for revert events.
    • Added new configuration options for improved connectivity with Message Queue V2 and Blob Scan services.
    • Enhanced batch submission configurations, including minimum and maximum batch limits and timeout settings.
    • Implemented a new LocalProver for handling proof generation tasks in a local context.
  • Enhancements

    • Improved error handling for message fetching and event parsing, ensuring more reliable operations.
    • Integrated advanced blob data processing for precise batch event handling.
    • Enhanced logging and metrics for batch processing and transaction handling.
  • Bug Fixes

    • Adjusted database query logic for handling reverted batch events.
    • Fixed issues related to proof data retrieval and verification key management.
  • Chores

    • Updated the Ethereum dependency.
    • Upgraded the software version to v4.4.91.
    • Updated various dependencies to their latest versions, ensuring compatibility and improved functionality.


coderabbitai bot commented Mar 6, 2025

Important

Review skipped: auto reviews are disabled on base/target branches other than the default branch. To trigger a single review, invoke the @coderabbitai review command.

Walkthrough

The update introduces versioned event signatures and structures for revert operations, refines error handling when instantiating fetcher components, and adds new configuration options. It integrates blob client functionality in both event parsing and fetcher logic, updates utility functions to return an event version with block ranges, and bumps dependency and software version tags. Additionally, new tests are added to verify the event signature hashes.

Changes

File(s) and change summary:

  • bridge-history-api/abi/backend_abi.go, bridge-history-api/abi/backend_abi_test.go: Replaces L1RevertBatchEventSig with L1RevertBatchV0EventSig and L1RevertBatchV7EventSig, renames the event struct to L1RevertBatchV0Event, adds a new L1RevertBatchV7Event, and introduces tests for the event signature hashes.
  • bridge-history-api/cmd/fetcher/app/app.go: Adds error handling for L1MessageFetcher instantiation, with critical error logging on creation failure.
  • bridge-history-api/conf/config.json, bridge-history-api/internal/config/config.go: Adds new configuration entries: MessageQueueV2Addr, BlobScanAPIEndpoint, BeaconNodeAPIEndpoint, and BlockNativeAPIEndpoint.
  • bridge-history-api/go.mod, common/version/version.go: Updates the go-ethereum dependency version and bumps the software version tag from v4.4.90 to v4.4.91.
  • bridge-history-api/internal/controller/fetcher/l1_fetcher.go, bridge-history-api/internal/logic/l1_fetcher.go: Modifies the instantiation of fetcher components by adding a blobClient parameter and handling additional API endpoints.
  • bridge-history-api/internal/logic/l1_event_parser.go: Enhances the event parser by integrating a blobClient, adding new methods to extract the batch block range from blob data, and updating handling for versioned revert events.
  • bridge-history-api/internal/orm/batch_event.go: Alters the database query logic for reverted batch events by using a status condition instead of a specific hash match.
  • bridge-history-api/internal/utils/utils.go, bridge-history-api/internal/utils/utils_test.go: Renames and updates the utility function to return a version along with block ranges, and refines error handling and tests accordingly.
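
To make the versioned-event change concrete, below is a condensed sketch of the dispatch on the two signatures. The identifiers come from the change summary above and the review threads below; the surrounding log loop and return plumbing are assumed:

switch vlog.Topics[0] {
case backendabi.L1RevertBatchV0EventSig:
    var event backendabi.L1RevertBatchV0Event
    // "RevertBatch" is the V0 entry in IScrollChainABI.
    if err := utils.UnpackLog(backendabi.IScrollChainABI, &event, "RevertBatch", vlog); err != nil {
        return nil, fmt.Errorf("failed to unpack RevertBatch (v0) event: %w", err)
    }
    // V0 reverts a single batch: event.BatchIndex and event.BatchHash.
case backendabi.L1RevertBatchV7EventSig:
    var event backendabi.L1RevertBatchV7Event
    // "RevertBatch0" is abigen's name for the overloaded V7 event, as
    // verified in the review thread below.
    if err := utils.UnpackLog(backendabi.IScrollChainABI, &event, "RevertBatch0", vlog); err != nil {
        return nil, fmt.Errorf("failed to unpack RevertBatch0 (v7) event: %w", err)
    }
    // V7 reverts a contiguous range: event.StartBatchIndex through event.FinishBatchIndex.
}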

Sequence Diagram(s)

sequenceDiagram
    participant App
    participant L1Fetcher as NewL1MessageFetcher
    participant BlobClient as blob_client.NewBlobClients
    participant API_Check as Endpoint Checker
    participant Log
    
    App->>L1Fetcher: Invoke NewL1MessageFetcher(ctx, cfg, db, client)
    L1Fetcher->>BlobClient: Initialize blob clients
    BlobClient-->>L1Fetcher: Return blobClient instance
    L1Fetcher->>API_Check: Check BeaconNode, BlobScan, BlockNative endpoints
    API_Check-->>L1Fetcher: Return configured clients or warnings
    alt No blob client configured
        L1Fetcher->>Log: Log critical error ("no blob client configured")
        L1Fetcher-->>App: Return nil, error
    else Valid configuration
        L1Fetcher->>L1Fetcher: Instantiate L1FetcherLogic with blobClient
        L1Fetcher-->>App: Return instance, nil error
    end
sequenceDiagram
    participant Parser as L1EventParser
    participant LogEvent as Event Log
    participant BlobRange as getBatchBlockRangeFromBlob
    
    LogEvent->>Parser: Send event log (V0 or V7 signature)
    alt Event is V7
        Parser->>BlobRange: Retrieve block range from blob using version and hash
        BlobRange-->>Parser: Return block range or error
    else V0 event processing
        Parser->>Parser: Process event normally (BatchIndex, BatchHash)
    end
    Parser-->>LogEvent: Return parsed event details

Suggested labels

bump-version

Suggested reviewers

  • georgehao
  • Thegaram

Poem

I'm a little rabbit, hopping through code so bright,
New events and blob handling make my day just right.
With version bumps and tests in place,
I nibble on bugs without a trace.
In fields of config and logic so grand,
I celebrate these changes with a joyful hop and a twitch of my hand!
🐰💻



@codecov-commenter

codecov-commenter commented Mar 6, 2025

Codecov Report

Attention: Patch coverage is 9.84456% with 174 lines in your changes missing coverage. Please review.

Please upload a report for BASE (feat/use-codec-v6@03c63a6); this comparison is missing a BASE report.

Files with missing lines Patch % Lines
...idge-history-api/internal/logic/l1_event_parser.go 0.00% 96 Missing ⚠️
bridge-history-api/internal/utils/utils.go 37.77% 21 Missing and 7 partials ⚠️
bridge-history-api/internal/logic/l1_fetcher.go 0.00% 25 Missing ⚠️
...tory-api/internal/controller/fetcher/l1_fetcher.go 0.00% 20 Missing ⚠️
bridge-history-api/cmd/fetcher/app/app.go 0.00% 4 Missing ⚠️
bridge-history-api/internal/orm/batch_event.go 0.00% 1 Missing ⚠️
Additional details and impacted files
@@                 Coverage Diff                  @@
##             feat/use-codec-v6    #1609   +/-   ##
====================================================
  Coverage                     ?   41.46%           
====================================================
  Files                        ?      222           
  Lines                        ?    18098           
  Branches                     ?        0           
====================================================
  Hits                         ?     7504           
  Misses                       ?     9862           
  Partials                     ?      732           
Flag Coverage Δ
bridge-history-api 8.15% <9.84%> (?)

Flags with carried forward coverage won't be shown.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
bridge-history-api/internal/logic/l1_event_parser.go (1)

285-285: Potential performance consideration with header retrieval

The code retrieves a new block header for each batch in the same transaction. Consider caching the header per block hash to avoid redundant network calls when processing multiple batches in the same block.

-				header, err := client.HeaderByHash(ctx, vlog.BlockHash)
-				if err != nil {
-					return nil, fmt.Errorf("failed to get L1 block header for blob context, blockHash: %s, err: %w", vlog.BlockHash.Hex(), err)
-				}
+				// Use a map to cache headers by block hash
+				// This would need to be added as a local variable at the beginning of the method
+				header, exists := blockHeaderCache[vlog.BlockHash]
+				if !exists {
+					var err error
+					header, err = client.HeaderByHash(ctx, vlog.BlockHash)
+					if err != nil {
+						return nil, fmt.Errorf("failed to get L1 block header for blob context, blockHash: %s, err: %w", vlog.BlockHash.Hex(), err)
+					}
+					blockHeaderCache[vlog.BlockHash] = header
+				}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 73cdbfb and 37bae91.

📒 Files selected for processing (1)
  • bridge-history-api/internal/logic/l1_event_parser.go (7 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: check
  • GitHub Check: test
  • GitHub Check: tests
🔇 Additional comments (6)
bridge-history-api/internal/logic/l1_event_parser.go (6)

14-14: Correct integration of blob client support

The addition of blobClient to the L1EventParser struct and its constructor is well implemented. This enables the parser to interact with blobs for retrieving batch data.

Also applies to: 27-27, 31-31, 35-35


241-253: Well-documented code with clear explanations

The comments explaining the handling of multiple CommitBatch events per transaction in codecV7 are thorough and helpful. The explanation of the txBlobIndexMap purpose and functionality provides excellent context for understanding the implementation.


273-298: Effective implementation of versioned batch handling

The code correctly handles batches with version >= 7 by maintaining transaction-specific blob indices and extracting block ranges from blobs. Error handling is comprehensive with detailed error messages.


325-331: Robust handling of batch range reversion

The implementation properly handles reverting multiple batches at once by iterating through the range from StartBatchIndex to FinishBatchIndex and creating individual batch events for each.
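
A schematic of that expansion, assuming abigen-style big.Int index fields; the ORM field and status constant names are illustrative rather than the repo's exact definitions:

for i := event.StartBatchIndex.Uint64(); i <= event.FinishBatchIndex.Uint64(); i++ {
    batchEvents = append(batchEvents, &orm.BatchEvent{
        BatchStatus:   int(btypes.BatchStatusTypeReverted), // assumed constant name
        BatchIndex:    i,
        L1BlockNumber: vlog.BlockNumber,
    })
}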


450-477: Well-structured blob retrieval and decoding method

The getBatchBlockRangeFromBlob method is thoroughly implemented with:

  1. Proper error handling for all possible failure scenarios
  2. Detailed error messages with context
  3. Appropriate logging of successful operations
  4. Clean separation of concerns (fetching, decoding, validation)

This implementation ensures reliable extraction of block ranges from blobs.
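
A minimal sketch of that fetch, decode, and validate flow; BlobSource and decodeBlockRange are hypothetical stand-ins for the repository's blob client and da-codec calls, so only the overall shape follows the review:

// BlobSource is a hypothetical interface standing in for the configured blob client.
type BlobSource interface {
    GetBlobByVersionedHash(ctx context.Context, versionedHash common.Hash) (*kzg4844.Blob, error)
}

// decodeBlockRange stands in for the da-codec decoding step (assumed helper).
func decodeBlockRange(version uint8, blob *kzg4844.Blob) (start, finish uint64, err error) {
    return 0, 0, errors.New("decode step elided in this sketch")
}

func getBatchBlockRangeFromBlob(ctx context.Context, src BlobSource, version uint8, versionedHash common.Hash) (uint64, uint64, error) {
    blob, err := src.GetBlobByVersionedHash(ctx, versionedHash) // fetch
    if err != nil {
        return 0, 0, fmt.Errorf("failed to fetch blob %s: %w", versionedHash.Hex(), err)
    }
    start, finish, err := decodeBlockRange(version, blob) // decode
    if err != nil {
        return 0, 0, fmt.Errorf("failed to decode blob %s: %w", versionedHash.Hex(), err)
    }
    if finish < start { // validate: an inverted range is an error
        return 0, 0, fmt.Errorf("invalid block range [%d, %d] in blob %s", start, finish, versionedHash.Hex())
    }
    return start, finish, nil
}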


321-321:

✅ Verification successful

Check event name in UnpackLog call

There appears to be a naming inconsistency. The event signature is L1RevertBatchV7EventSig but the unpacking uses "RevertBatch0" as the event name.


🏁 Script executed:

#!/bin/bash
# Verify the event name consistency between signature and unpacking

# Check the event signature definition in the backend ABI
grep -A 1 "L1RevertBatchV7EventSig" --include="*.go" -r .

# Check the event name in the ABI definition
grep -A 10 "RevertBatch0" --include="*.go" -r .

Length of output: 3151


Event Name Consistency Confirmed – No Changes Needed

After a thorough verification of the ABI definitions and their usage, we confirmed that:

  • The event signature L1RevertBatchV7EventSig is defined as IScrollChainABI.Events["RevertBatch0"].ID in the ABI file.
  • The UnpackLog call in l1_event_parser.go correctly uses "RevertBatch0", which matches the ABI mapping.
  • Test cases also corroborate the expected event signature.

The suggested naming inconsistency does not exist. Please disregard this comment.

@colinlyguo force-pushed the feat-bridge-history-support-codecv7 branch from 0c4016c to 1e1c3b8 on March 7, 2025 at 13:59

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
bridge-history-api/internal/logic/l1_event_parser.go (1)

321-321: ⚠️ Potential issue

Event name mismatch in UnpackLog.

The event name "RevertBatch0" doesn't match the signature name L1RevertBatchV7EventSig. This could lead to unpacking errors.

-			if err := utils.UnpackLog(backendabi.IScrollChainABI, &event, "RevertBatch0", vlog); err != nil {
+			if err := utils.UnpackLog(backendabi.IScrollChainABI, &event, "RevertBatch", vlog); err != nil {

Note: This matches a previous reviewer's suggestion.

🧹 Nitpick comments (8)
bridge-history-api/internal/controller/fetcher/l1_fetcher.go (1)

40-58: Consider clarifying error handling and partial endpoint failures.
The approach of continuing after a failed endpoint client creation suggests that endpoints are optional. If all endpoints fail or none are configured, you correctly return an error. However, you may want to document or log more clearly that it is expected to proceed after partial failures.

Additionally, the doc comment on line 39 still implies a simple constructor but does not mention returning an error. You may wish to update that comment to reflect the new behavior:

-// NewL1MessageFetcher creates a new L1MessageFetcher instance.
+// NewL1MessageFetcher creates a new L1MessageFetcher instance. Returns an error if no valid blob client is configured.
bridge-history-api/internal/utils/utils_test.go (1)

23-27: Rename the test function to match the updated method.

You're now testing GetBatchVersionAndBlockRangeFromCalldata within a function still named TestGetBatchRangeFromCalldata. Consider renaming it for clarity:

-func TestGetBatchRangeFromCalldata(t *testing.T) {
+func TestGetBatchVersionAndBlockRangeFromCalldata(t *testing.T) {
bridge-history-api/internal/utils/utils.go (4)

69-70: Prominent function signature update.

Renaming and returning an extra version parameter is clear. Consider adding a descriptive docstring to clarify usage and expected return values:

+// GetBatchVersionAndBlockRangeFromCalldata extracts the batch version, start block, and finish block from calldata.
+// Returns 0s if these values do not apply for certain methods.
 func GetBatchVersionAndBlockRangeFromCalldata(txData []byte) (uint8, uint64, uint64, error) {

73-83: Include method name in error messages.

Currently, errors like "insufficient arguments for commitBatches" provide minimal context. For improved debugging, consider referencing the method name:

-return 0, 0, 0, fmt.Errorf("insufficient arguments for commitBatches")
+return 0, 0, 0, fmt.Errorf("insufficient arguments for %s: need at least one argument", method.Name)

140-140: Improve clarity of the “invalid chunks” error.

To assist debugging, specify which method or chunk index is invalid. A small detail helps future maintainers:

-return 0, 0, 0, errors.New("invalid chunks")
+return 0, 0, 0, fmt.Errorf("invalid chunks for method %s; chunk array cannot be empty", method.Name)

151-151: Return nil instead of err when there’s no error assigned.

By this point, err is always nil if the code path hasn’t exited. Simplify:

-return version, start, finish, err
+return version, start, finish, nil
bridge-history-api/abi/backend_abi.go (1)

280-284: New revert batch version struct.

L1RevertBatchV7Event clarifies the extended revert batch info. Optionally, add inline documentation to highlight usage differences compared to V0.

bridge-history-api/internal/logic/l1_event_parser.go (1)

273-298: Potential performance optimization for header retrieval.

The code retrieves a block header for each blob in a transaction at line 281, which could lead to duplicate header retrievals for the same block. Consider caching the header by blockHash to avoid redundant RPC calls.

-				header, err := client.HeaderByHash(ctx, vlog.BlockHash)
-				if err != nil {
-					return nil, fmt.Errorf("failed to get L1 block header for blob context, blockHash: %s, err: %w", vlog.BlockHash.Hex(), err)
-				}
+				// Cache headers by blockHash to avoid redundant RPC calls
+				var header *types.Header
+				var err error
+				if blockHeaderCache == nil {
+					blockHeaderCache = make(map[common.Hash]*types.Header)
+				}
+				if cachedHeader, ok := blockHeaderCache[vlog.BlockHash]; ok {
+					header = cachedHeader
+				} else {
+					header, err = client.HeaderByHash(ctx, vlog.BlockHash)
+					if err != nil {
+						return nil, fmt.Errorf("failed to get L1 block header for blob context, blockHash: %s, err: %w", vlog.BlockHash.Hex(), err)
+					}
+					blockHeaderCache[vlog.BlockHash] = header
+				}

This optimization would require adding a cache map as a parameter or method variable.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0c4016c and 1e1c3b8.

⛔ Files ignored due to path filters (2)
  • bridge-history-api/go.sum is excluded by !**/*.sum
  • go.work.sum is excluded by !**/*.sum
📒 Files selected for processing (13)
  • bridge-history-api/abi/backend_abi.go (4 hunks)
  • bridge-history-api/abi/backend_abi_test.go (1 hunks)
  • bridge-history-api/cmd/fetcher/app/app.go (1 hunks)
  • bridge-history-api/conf/config.json (1 hunks)
  • bridge-history-api/go.mod (1 hunks)
  • bridge-history-api/internal/config/config.go (1 hunks)
  • bridge-history-api/internal/controller/fetcher/l1_fetcher.go (4 hunks)
  • bridge-history-api/internal/logic/l1_event_parser.go (7 hunks)
  • bridge-history-api/internal/logic/l1_fetcher.go (6 hunks)
  • bridge-history-api/internal/orm/batch_event.go (1 hunks)
  • bridge-history-api/internal/utils/utils.go (5 hunks)
  • bridge-history-api/internal/utils/utils_test.go (1 hunks)
  • common/version/version.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (8)
  • bridge-history-api/abi/backend_abi_test.go
  • bridge-history-api/cmd/fetcher/app/app.go
  • bridge-history-api/conf/config.json
  • common/version/version.go
  • bridge-history-api/internal/config/config.go
  • bridge-history-api/internal/logic/l1_fetcher.go
  • bridge-history-api/internal/orm/batch_event.go
  • bridge-history-api/go.mod
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: test
  • GitHub Check: tests
🔇 Additional comments (16)
bridge-history-api/internal/controller/fetcher/l1_fetcher.go (4)

5-5: Importing "fmt" for error construction looks appropriate.
No concerns here—this import is necessary for creating and returning meaningful error messages.


14-14: Adding the blob_client import is aligned with the new logic.
This import cleanly integrates the blob client package for multi-endpoint usage.


65-65: Integrating blobClient into NewL1FetcherLogic is straightforward and consistent.
No issues found—passing the newly instantiated blobClient ensures the fetcher logic has access to the correct endpoints.


82-82: Return statement correctly propagates the new error-signature approach.
Leaving the constructor with (*L1MessageFetcher, error) is consistent with the earlier checks. Looks good.

bridge-history-api/internal/utils/utils_test.go (4)

30-34: Good coverage for multiple chunk scenario.

This second scenario thoroughly tests multi-chunk data decoding. The assertions look correct.


37-41: Confirmation of genesis batch handling.

Verifying the genesis batch with version 0x0 and blocks at 0 is appropriate. All checks are consistent.


44-48: Correct test for blob-proof batch scenario.

Testing version 0x3 along with the expected block range confirms correct blob-proof processing. Nicely done.


50-60: Validate handling of version=7 with zeroed block range.

It's correct to assert start = 0 and finish = 0 if commitBatches logic doesn't populate them. If desired, confirm this behavior matches actual usage in production or add more scenarios.

bridge-history-api/abi/backend_abi.go (3)

58-59: Versioned event signatures introduced.

Declaring L1RevertBatchV0EventSig and L1RevertBatchV7EventSig alongside the old signature fosters clarity on which version is processed. This is a good approach to handle event signature evolution.


167-167: Verify overlapping “RevertBatch” event definitions.

Having two "RevertBatch" events with different parameters is valid ABI, but it can cause confusion in some tooling or logs. Please confirm watchers correctly distinguish them by their unique signatures.
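
One quick way to confirm watchers can tell the overloads apart is to compute both topic IDs explicitly. The parameter lists below are inferred from the V0 (index + hash) and V7 (index range) structs and are assumptions, not taken from the contract source:

v0Sig := crypto.Keccak256Hash([]byte("RevertBatch(uint256,bytes32)")) // assumed V0 parameter list
v7Sig := crypto.Keccak256Hash([]byte("RevertBatch(uint256,uint256)")) // assumed V7 parameter list
fmt.Println(v0Sig.Hex(), v7Sig.Hex()) // distinct topic IDs; filter on these, not the shared name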


275-277: Retaining the legacy revert batch structure.

L1RevertBatchV0Event preserves backward compatibility. Looks good.

bridge-history-api/internal/logic/l1_event_parser.go (5)

27-27: Good addition of blobClient field.

The addition of the blobClient field to the L1EventParser struct is a necessary change to support the new functionality for handling blob data in codecv7.


30-36: Constructor correctly updated to include blobClient parameter.

The NewL1EventParser function has been properly updated to accept and initialize the new blobClient field, ensuring the parser can interact with blob data.


241-253: Well-documented design for handling multiple CommitBatch events.

The added comments provide excellent documentation explaining how multiple CommitBatch events per transaction are handled using the txBlobIndexMap. This makes the code more maintainable and helps other developers understand the implementation.


325-331: Proper handling of batch range in RevertBatchV7 event.

The implementation correctly processes all batch indices within the specified range, ensuring that all affected batches are marked as reverted. This is an improvement over the previous implementation that only handled a single batch index.


450-477: Well-structured getBatchBlockRangeFromBlob method with good error handling.

This new method includes comprehensive error handling for various failure scenarios (blob not found, unsupported codec version, decoding errors, empty blocks). The error messages are detailed and provide context for easier debugging. The method is also well-structured, following a clear flow of retrieving, decoding, and validating blob data.

@colinlyguo force-pushed the feat-bridge-history-support-codecv7 branch from 1e1c3b8 to a39f89d on March 7, 2025 at 14:08
@colinlyguo changed the base branch from develop to feat/use-codec-v6 on March 7, 2025 at 14:09

@coderabbitai coderabbitai bot left a comment


Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.

Actionable comments posted: 16

🔭 Outside diff range comments (1)
zkvm-prover/Makefile (1)

1-49: 💡 Verification agent

🧩 Analysis chain

Well-structured Makefile with clear targets.

The Makefile provides good version derivation logic and appropriate targets for building, testing, and linting. The conditional logic for different make versions and GPU availability shows thoughtful handling of different environments.


🏁 Script executed:

#!/bin/bash
# Verify that the required shell scripts exist
if [ ! -f ./print_high_zkevm_version.sh ] || [ ! -f ./print_halo2gpu_version.sh ]; then
  echo "❌ Missing one or more required shell scripts:"
  [ ! -f ./print_high_zkevm_version.sh ] && echo "- print_high_zkevm_version.sh is missing"
  [ ! -f ./print_halo2gpu_version.sh ] && echo "- print_halo2gpu_version.sh is missing"
  exit 1
else
  echo "✅ Required shell scripts exist"
fi

# Check that common/version/version.go exists and contains the expected tag variable
if [ ! -f ../common/version/version.go ]; then
  echo "❌ Missing ../common/version/version.go file"
  exit 1
else
  if ! grep -q "var tag = " ../common/version/version.go; then
    echo "❌ ../common/version/version.go does not contain 'var tag = '"
    exit 1
  else
    echo "✅ ../common/version/version.go exists and contains tag variable"
  fi
fi

Length of output: 426


Action Required: Missing Helper Scripts

The Makefile itself is well-structured with clear targets and thoughtful conditional logic. However, verification revealed that the following critical helper scripts are missing from the repository:

  • print_high_zkevm_version.sh
  • print_halo2gpu_version.sh

These scripts are essential for the version derivation logic to work correctly. Please add them to the repository or update the Makefile to appropriately handle their absence to avoid build failures.

🧹 Nitpick comments (57)
common/libzkp/interface/libzkp.h (1)

11-12: Add documentation and consider return code for error handling.

Currently, dump_vk does not have any accompanying docstring or indication of how it handles failures. To improve maintainability and clarity, add a short docstring explaining the usage of fork_name and file, and consider returning a status code or an indicative signal for error scenarios, rather than silently failing.

common/libzkp/impl/src/verifier.rs (1)

1-2: Evaluate concurrency implications of allowing static_mut_refs.

Using #![allow(static_mut_refs)] can mask potential concurrency or memory safety hazards. If a mutable static is really needed, ensure robust synchronization. Consider using concurrency-safe primitives (e.g., sync::OnceLock) or an alternative design to guard against data races or undefined behavior.

common/libzkp/impl/src/verifier/euclid.rs (2)

10-14: Consider adding documentation comments for the struct.

Adding brief Rust doc comments (///) explaining the purpose and usage context of EuclidVerifier will improve code clarity and make the code more approachable for new contributors or future maintenance.


16-31: Replace .expect(...) with proper error handling.

Panicking (.expect(...)) upon setup failures might be undesirable in certain environments, especially for a library. Instead, consider propagating errors to the caller via Result<Self> to allow callers to handle failures gracefully.

-pub fn new(assets_dir: &str) -> Self {
+pub fn new(assets_dir: &str) -> Result<Self> {
     let verifier_bin = Path::new(assets_dir).join("verifier.bin");
     let config = Path::new(assets_dir).join("root-verifier-vm-config");
     let exe = Path::new(assets_dir).join("root-verifier-committed-exe");

     Ok(Self {
-        chunk_verifier: ChunkVerifier::setup(&config, &exe, &verifier_bin)
-            .expect("Setting up chunk verifier"),
-        batch_verifier: BatchVerifier::setup(&config, &exe, &verifier_bin)
-            .expect("Setting up batch verifier"),
-        bundle_verifier: BundleVerifier::setup(&config, &exe, &verifier_bin)
-            .expect("Setting up bundle verifier"),
+        chunk_verifier: ChunkVerifier::setup(&config, &exe, &verifier_bin)?,
+        batch_verifier: BatchVerifier::setup(&config, &exe, &verifier_bin)?,
+        bundle_verifier: BundleVerifier::setup(&config, &exe, &verifier_bin)?,
     })
 }
rollup/internal/config/l2.go (1)

50-50: Consider using uint64 for consistency with other similar fields.

The MaxChunksPerBatch field uses int type while other max/limit fields in this struct and related structs use uint64. For consistency, consider using uint64 unless there's a specific reason to use int.

-	MaxChunksPerBatch               int     `json:"max_chunks_per_batch"`
+	MaxChunksPerBatch               uint64  `json:"max_chunks_per_batch"`
zkvm-prover/print_high_zkevm_version.sh (1)

1-10: Script looks good but could be improved with modern shell practices.

The script effectively extracts the highest version of zkevm-circuits from Cargo.lock, but consider these improvements:

  1. Use $(command) instead of backticks for command substitution
  2. Add error handling if no matching lines are found
  3. Add a check for file existence
#!/bin/bash
-set -ue
+set -euo pipefail
+
+if [ ! -f ./Cargo.lock ]; then
+  echo "Error: Cargo.lock file not found" >&2
+  exit 1
+fi
-higher_zkevm_item=`grep "zkevm-circuits.git" ./Cargo.lock | sort | uniq | awk -F "[#=]" '{print $3" "$4}' | sort -k 1 | tail -n 1`
+higher_zkevm_item=$(grep "zkevm-circuits.git" ./Cargo.lock | sort | uniq | awk -F "[#=]" '{print $3" "$4}' | sort -k 1 | tail -n 1)
+
+if [ -z "$higher_zkevm_item" ]; then
+  echo "Error: No zkevm-circuits.git entry found in Cargo.lock" >&2
+  exit 1
+fi
-higher_version=`echo $higher_zkevm_item | awk '{print $1}'`
+higher_version=$(echo "$higher_zkevm_item" | awk '{print $1}')
-higher_commit=`echo $higher_zkevm_item | cut -d ' ' -f2 | cut -c-7`
+higher_commit=$(echo "$higher_zkevm_item" | cut -d ' ' -f2 | cut -c-7)

echo "$higher_version $higher_commit"
zkvm-prover/print_halo2gpu_version.sh (1)

1-22: Add error handling to improve script reliability

This script looks mostly good, but it's missing error handling in a few critical places:

  1. The grep command in line 13 could fail if the pattern isn't found
  2. The pushd/popd commands don't check for failures
  3. The git command assumes the directory is a git repository

Here's an improved version with better error handling:

 #!/bin/bash
+set -e  # Exit immediately if a command exits with a non-zero status
 
 config_file="$HOME/.cargo/config"
 
 if [ ! -e "$config_file" ]; then
     exit 0
 fi
 
 if [[ $(head -n 1 "$config_file") == "#"* ]]; then
   exit 0
 fi
 
-halo2gpu_path=$(grep -Po '(?<=paths = \[")([^"]*)' $config_file)
+halo2gpu_path=$(grep -Po '(?<=paths = \[")([^"]*)' $config_file || echo "")
+if [ -z "$halo2gpu_path" ]; then
+    echo "Error: Could not find halo2gpu path in $config_file" >&2
+    exit 1
+fi
 
-pushd $halo2gpu_path
+pushd "$halo2gpu_path" || { echo "Error: Failed to change directory to $halo2gpu_path" >&2; exit 1; }
 
-commit_hash=$(git log --pretty=format:%h -n 1)
-echo "${commit_hash:0:7}"
+if git rev-parse --is-inside-work-tree > /dev/null 2>&1; then
+    commit_hash=$(git log --pretty=format:%h -n 1)
+    echo "${commit_hash:0:7}"
+else
+    echo "Error: $halo2gpu_path is not a git repository" >&2
+    popd || true
+    exit 1
+fi
 
-popd
+popd || { echo "Error: Failed to return to original directory" >&2; exit 1; }
🧰 Tools
🪛 Shellcheck (0.10.0)

[warning] 15-15: Use 'pushd ... || exit' or 'pushd ... || return' in case pushd fails.

(SC2164)


[warning] 20-20: Use 'popd ... || exit' or 'popd ... || return' in case popd fails.

(SC2164)

database/migrate/migrations/00025_add_l1_message_queue_hash.sql (1)

1-26: Consider specifying column length for VARCHAR fields

The migration looks good overall, but I noticed that the VARCHAR columns are created without an explicit length constraint.

It's generally a good practice to specify the maximum length for VARCHAR columns to help with database planning and consistency. Consider updating the column definitions:

 ALTER TABLE chunk
-ADD COLUMN prev_l1_message_queue_hash VARCHAR DEFAULT '',
-ADD COLUMN post_l1_message_queue_hash VARCHAR DEFAULT '';
+ADD COLUMN prev_l1_message_queue_hash VARCHAR(66) DEFAULT '',
+ADD COLUMN post_l1_message_queue_hash VARCHAR(66) DEFAULT '';

 ALTER TABLE batch
-ADD COLUMN prev_l1_message_queue_hash VARCHAR DEFAULT '',
-ADD COLUMN post_l1_message_queue_hash VARCHAR DEFAULT '';
+ADD COLUMN prev_l1_message_queue_hash VARCHAR(66) DEFAULT '',
+ADD COLUMN post_l1_message_queue_hash VARCHAR(66) DEFAULT '';

If these columns are intended to store Ethereum hashes, a length of 66 characters would be appropriate (0x prefix + 64 hex chars).

rollup/abi/bridge_abi_test.go (1)

80-95: Consider enhancing test utility function with assertions

The TestPrintABISignatures function is helpful for debugging but doesn't include any assertions to validate correctness. In a testing context, it's beneficial to include verifications.

Consider adding assertions to verify that important method signatures are present with the expected values. This would make this function more useful as an actual test:

 func TestPrintABISignatures(t *testing.T) {
+	assert := assert.New(t)
 	// print all error signatures of ABI
 	abi, err := ScrollChainMetaData.GetAbi()
-	if err != nil {
-		t.Fatal(err)
-	}
+	assert.NoError(err)
+
+	// Map to collect method IDs for verification
+	methodIDs := make(map[string]bool)
 
 	for _, method := range abi.Methods {
 		fmt.Println(hexutil.Encode(method.ID[:4]), method.Sig, method.Name)
+		methodIDs[hexutil.Encode(method.ID[:4])] = true
 	}
 
+	// Verify critical methods exist
+	assert.True(methodIDs["0xfa1a1b25"], "commitBatch method should exist")
+	assert.True(methodIDs["0x6f12b0c9"], "finalizeBatchWithProof method should exist")
+
 	fmt.Println("------------------------------")
+
+	// Map to collect error IDs for verification
+	errorIDs := make(map[string]bool)
+
 	for _, errors := range abi.Errors {
 		fmt.Println(hexutil.Encode(errors.ID[:4]), errors.Sig, errors.Name)
+		errorIDs[hexutil.Encode(errors.ID[:4])] = true
 	}
+
+	// Verify critical errors exist if applicable
 }

This enhancement makes the test both informative (printing signatures) and useful for catching regressions (verifying critical methods exist).

build/dockerfiles/prover.Dockerfile (2)

15-15: Missing WORKDIR and destination for COPY command.

The COPY command doesn't specify a destination directory, so it will copy to the current working directory. It's better to explicitly set a WORKDIR before COPY operations.

-COPY ./zkvm-prover .
+WORKDIR /build
+COPY ./zkvm-prover ./

19-23: Consider adding a non-root user for the runtime container.

Running containers as root is a security risk. Consider adding a non-root user for the runtime container.

FROM ubuntu:24.04 AS runtime

COPY --from=builder /target/release/prover /usr/local/bin/

+RUN useradd -u 1000 -m prover
+USER prover

ENTRYPOINT ["prover"]
Makefile (1)

11-16: Removed -u flag from go get commands.

The -u flag has been removed from all go get commands, which means dependencies will no longer be updated to their latest minor or patch version. This gives more control over dependency versions but requires manual updates for security patches.

Consider adding a comment explaining why the -u flag was removed, particularly if this is important for version stability or compatibility.

coordinator/cmd/tool/tool.go (1)

67-67: Hardcoded fork name could be problematic.

The hardcoded "darwinV2" string passed to NewBatchProof might be brittle if there are other possible values or if this value needs to change in the future.

Consider extracting this as a configurable parameter or constant:

-proof := message.NewBatchProof("darwinV2")
+const defaultHardForkName = "darwinV2"
+proof := message.NewBatchProof(defaultHardForkName)

Or better yet, derive it from a configuration or environment variable:

-proof := message.NewBatchProof("darwinV2")
+proof := message.NewBatchProof(cfg.HardForkName)
rollup/internal/orm/chunk.go (1)

113-113: Error message inconsistency with function name

The error message still refers to the old function name "getLatestChunk error" even though the function has been renamed to "GetLatestChunk".

-		return nil, fmt.Errorf("Chunk.getLatestChunk error: %w", err)
+		return nil, fmt.Errorf("Chunk.GetLatestChunk error: %w", err)
zkvm-prover/Makefile (1)

45-48: Consider adding linting check for integration tests.

The lint target covers all-features and all-targets, but you might want to add specific checks for integration tests if they exist.

lint:
	cargo check --all-features
	cargo clippy --all-features --all-targets -- -D warnings
	cargo fmt --all
+	# If you have integration tests
+	cargo clippy --tests -- -D warnings
bridge-history-api/internal/utils/utils_test.go (2)

23-27: Consider adding negative test coverage.
These tests for single-chunk scenarios look correct. However, testing invalid or malformed calldata (e.g., truncated or random bytes) would improve reliability.


30-34: Looks good for multi-chunk coverage.
Having a test case that checks for error conditions (like corrupted chunk data) could further strengthen the scenario coverage.

zkvm-prover/src/main.rs (1)

1-47: Potential refinement for version flag handling.
Using std::process::exit(0) inside an async context can abruptly halt the runtime. Consider returning early instead of calling exit(), allowing any asynchronous tasks to finish gracefully.

rollup/internal/controller/sender/sender.go (2)

174-253: Enhance test coverage for multiple blobs.
Your logic to handle multiple blobs and check pending transactions is sound. Adding dedicated tests—particularly covering edge cases such as empty, partial, or large blob slices—would bolster confidence.

I can help write those test cases if needed. Would you like me to open a new issue for this?


684-720: Robust sidecar creation for multiple blobs.
This function carefully checks for empty or nil blobs and computes commitments and proofs. If performance becomes a concern with large arrays, consider parallelizing the blob processing.
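
Should parallelization become worthwhile, a minimal errgroup sketch could look like the following; computeCommitment is a hypothetical stand-in for the kzg4844 call the sender actually makes:

g, _ := errgroup.WithContext(ctx)
commitments := make([]kzg4844.Commitment, len(blobs))
for i := range blobs {
    i := i // capture the loop variable for the goroutine
    g.Go(func() error {
        c, err := computeCommitment(blobs[i]) // hypothetical helper wrapping kzg4844
        if err != nil {
            return fmt.Errorf("blob %d: %w", i, err)
        }
        commitments[i] = c
        return nil
    })
}
if err := g.Wait(); err != nil {
    return nil, err
}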

zkvm-prover/src/types.rs (4)

5-12: Use more descriptive documentation comments.

While the Task struct is straightforward, consider adding /// doc comments to explain how task_type, task_data, and hard_fork_name are used, especially if they're critical to proof construction or process control.


14-21: Consider using Option<String> for error field in ProofDetail.

When there's no error, storing an empty string consumes unnecessary space. An Option<String> could convey missing errors more explicitly.

-    pub error: String,
+    pub error: Option<String>,

23-28: Evaluate whether more specific error variants would be beneficial.

Panic vs. NoPanic is a start, but you may benefit from more detailed error categories (e.g., compile-time failure, runtime error, etc.) to handle distinct proof failure modes.


69-74: Add clarifying doc comments on ProofStatus.

Indicate under what conditions Ok is used versus Error. This can help external crates or consumers of the API to better interpret proof results.

coordinator/internal/logic/submitproof/proof_receiver.go (2)

180-184: Add logging for batch proofs before verification.

To aid debugging, consider logging at least the length or partial content of the deserialized batchProof before calling VerifyBatchProof.


298-304: Avoid returning generic verification errors for previously verified tasks.

Returning ErrValidatorFailureTaskHaveVerifiedSuccess is logical, but consider including details about the task or context, so logs and error handlers can distinguish previously verified tasks from newly failing ones.

rollup/internal/controller/watcher/chunk_proposer.go (1)

11-11: Imported common package might be unused or partially utilized.

Double-check if this import is essential. If not actively used, removing it could reduce clutter.

bridge-history-api/internal/utils/utils.go (1)

151-151: Consider returning nil explicitly instead of err.
Since err isn't reassigned after the prior checks, returning err here always returns nil on the success path. For clarity, consider returning (version, startBlock, finishBlock, nil) explicitly.

coordinator/internal/logic/provertask/bundle_prover_task.go (1)

147-154: Reintroducing hard fork check.
The reinstated conditional properly prevents assigning an incompatible task to the prover. Consider adding a specialized error type or code for clarity, or logging more context about the mismatch to aid debugging.

rollup/internal/controller/relayer/l2_relayer_metrics.go (1)

11-11: Consider a Counter metric instead of a Gauge.
If "batches processed per transaction" is cumulative or always increasing, a Counter or a Summation-based metric might be more appropriate than a Gauge. However, if the intention is a snapshot or ratio, then a Gauge is correct.

Also applies to: 43-46

rollup/internal/controller/sender/sender_test.go (2)

298-299: Streamline transaction hash tracking in tests.
Storing intermediate transaction hashes in a slice and waiting for any to confirm is correct. If speed is a concern, reduce the 30-second timeout or parallelize checks to shorten test duration.

Also applies to: 307-307, 311-317


833-835: Test stub returns a generic error message.
Consider using a descriptive error message to clarify scenarios in logs or debugging output.

rollup/internal/controller/watcher/bundle_proposer.go (2)

164-168: Double-check the forced single-batch rule for CodecV5.

Limiting maxBatchesThisBundle to 1 when codecVersion == encoding.CodecV5 may be surprising to callers used to multi-batch behavior. Ensure all code paths are aware of this condition, and confirm there's no confusion when a different codec version is used within the same system.


204-208: Repetitive calls to allBatchesCommittedInSameTXIncluded could affect performance.

Both the maximum-batch logic and the timeout logic invoke the same method. If these calls happen frequently in large-scale scenarios, consider caching or deferring repeated database lookups to reduce overhead.

bridge-history-api/abi/backend_abi.go (1)

58-59: Clarify naming convention for revert event signatures.

Renaming L1RevertBatchEventSig to L1RevertBatchV0EventSig and introducing L1RevertBatchV7EventSig is a clear versioning strategy. However, consider surfacing the version consistently (e.g., “RevertBatchV...” for all references) to avoid confusion.

rollup/internal/orm/orm_test.go (1)

487-488: Incorporate additional negative tests.

After updating the proof with “new test proof,” consider adding a negative test that attempts to retrieve a proof from a non-existent bundle hash to ensure robust error handling in production scenarios.

coordinator/internal/logic/verifier/verifier.go (2)

60-65: Consider adding a doc comment or explanation.
This new rustVkDump struct is descriptive, but a short doc comment explaining its purpose could aid maintainability.


232-264: Potential concurrency and security considerations with temp files
The function writes a VK dump to a temporary file and removes it afterward. While this approach is valid, consider the following:

  • Concurrency: If multiple invocations occur in parallel, the same temp file name might trigger conflicts.
  • Security: Using os.TempDir() can lead to potential collisions or malicious interference.

Switching to os.CreateTemp or generating a unique file name can mitigate these risks.
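
For reference, a minimal sketch of the os.CreateTemp variant; the * in the pattern is replaced with a random string, which removes the fixed-name collision:

f, err := os.CreateTemp("", "vk-dump-*.json") // unique name under the default temp dir
if err != nil {
    return err
}
defer os.Remove(f.Name()) // best-effort cleanup of the uniquely named file
defer f.Close()
// ... pass f.Name() to the dump call and read the JSON back as before ...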

zkvm-prover/src/zk_circuits_handler/euclid.rs (2)

12-19: EuclidHandler struct design
This struct neatly holds separate provers for chunk, batch, and bundle proofs. Ensure concurrency usage is safe if multiple methods are invoked simultaneously on these provers.


48-94: Async trait implementation

  • get_vk returns the verification key from each prover. Using try_lock().unwrap() can panic if the lock is already held elsewhere. Consider a more graceful fallback.
  • get_proof_data properly deserializes input, filters by proof type, and generates the proof. Error handling for unknown proof types is partially handled, but keep it consistent across the application.
zkvm-prover/src/prover.rs (5)

1-13: Consider structured logging and context tracing for debug & error analysis.

These imports and crate usages are a good start, but consider adding structured logging or context-based error tracking (e.g., tracing crate) to help debug issues in asynchronous calls, especially since this code will likely run in production.


57-76: Document the rationale for enumerating all proof types.

In get_vks, you iterate over all known circuits and gather verification keys for each requested proof type. This is correct if the code expects to handle many proof types for each circuit, but consider clarifying in docs or comments how large this can get in production and if any caching or filtering is needed for performance.


78-90: Clarify error message and finalize the default fields.

When do_prove fails, the error message is "failed to request proof". The message might be improved to clarify context, e.g., "Proof generation failed" or adding the req.proof_type for easier troubleshooting. Also confirm that default fields are correct when returning a failed response.


133-141: Consolidate initialization logic for modularity.

LocalProver::new directly constructs its fields, but if you anticipate adding more setup steps (e.g., preloading circuit data), you could break them out into dedicated helper methods for clarity.


143-168: Check performance implications of blocking in do_prove.

Using spawn_blocking in tokio is acceptable for CPU-bound tasks, but repeatedly spawning blocking tasks can degrade performance if not carefully managed. Ensure you have enough worker threads in the runtime for heavy-lift tasks or consider a custom thread pool if usage is high.

bridge-history-api/go.mod (1)

13-14: Double-check pinned version approach.

github.com/scroll-tech/da-codec and github.com/scroll-tech/go-ethereum are pinned to specific commits. This is useful for stability but can lead to missing security patches if not revisited periodically. Consider adding a schedule for reviewing/updating pinned commits.

coordinator/go.mod (1)

14-15: Keep pinned commits updated.

github.com/scroll-tech/da-codec and github.com/scroll-tech/go-ethereum rely on pinned commits here as well. Periodically review or automate checks for updated stable releases to avoid drift.

rollup/internal/controller/relayer/l2_relayer.go (6)

396-449: Validate chunk and batch codec version alignment carefully.

This block immediately returns upon discovering a mismatch. Confirm that entirely aborting the process is desired instead of partially committing unaffected batches. If parallel multi-version support is envisioned, partial commits might be necessary.


450-541: Potential concurrency conflict with force submission.

If ProcessPendingBatches is invoked concurrently, multiple goroutines may attempt force submission, possibly resulting in double commits. Consider enforcing mutual exclusion or a single-threaded execution model to avoid race conditions.


543-551: Potential collision in context ID strings.

Constructing context IDs by concatenating all batch hashes could lead to very long strings or collisions in extreme cases. Consider a hashing approach or other scheme to reduce the size and chance of collisions.
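
A sketch of the suggested hashing scheme; the "finalize-" prefix and exact format are assumptions rather than the relayer's current convention:

var buf []byte
for _, h := range batchHashes { // batchHashes: the hashes previously concatenated verbatim
    buf = append(buf, h.Bytes()...)
}
contextID := "finalize-" + crypto.Keccak256Hash(buf).Hex() // fixed length, collision-resistant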


561-565: Struct naming clarity.

dbBatchWithChunksAndParent could be more descriptive, for example BatchSubmissionContext, to clarify its purpose.


712-729: Consider partial re-try instead of returning.

Returning immediately on errors (e.g., chunk retrieval failures) prevents later items from being processed. This could hide feasible work and stall subsequent processing.


1190-1247: Consider extracting shared finalize logic.

Adding specialized logic for each codec version can lead to code duplication. Factor out common behavior for improved maintainability and readability.

zkvm-prover/Cargo.toml (2)

11-14: Check inconsistent double slash in Git URLs.
The URLs contain // (double slash) which might be a typo. For example:

https://github.com/scroll-tech//revm

Ensure this is intentional or correct it to avoid possible fetch errors or confusion.


9-17: Prefer pinned commit references over tracking branches.
Using "branch" references can cause unexpected breakage if the branch changes. To enhance build reproducibility, consider pinning to a specific commit revision or a verified tag.

common/types/message/message.go (1)

82-95: Consider adding robust consistency checks to ChunkProof.
Unlike BatchProof and BundleProof, ChunkProof lacks a SanityCheck method. If chunk proofs require validation (similar to batches and bundles), implement a method to guard against malformed proofs.
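
A hypothetical shape for such a method, mirroring the batch/bundle checks; the Proof field name is illustrative and not the actual struct layout:

func (p *ChunkProof) SanityCheck() error {
    if p == nil {
        return errors.New("nil chunk proof")
    }
    if len(p.Proof) == 0 { // field name is an assumption for illustration
        return errors.New("empty chunk proof")
    }
    return nil
}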

rollup/abi/bridge_abi.go (1)

27-27: Consider renaming the duplicated “RevertBatch” events.

The ABI lists two RevertBatch events with different input signatures. While Solidity supports overloaded events, it can cause confusion in log parsing or indexing if event filters rely solely on the event name. Consider giving them distinct names to improve clarity.

🛑 Comments failed to post (16)
common/libzkp/impl/src/lib.rs (1)

69-82: 🛠️ Refactor suggestion

Handle or log errors when retrieving verifiers.

The _dump_vk function silently does nothing if get_verifier fails (i.e., if the Ok branch isn't taken). Consider logging an error, returning a code, or handling the situation more explicitly to inform callers that no verifier was found. Otherwise, debugging potential misconfigurations can be more difficult.

common/libzkp/impl/src/verifier.rs (1)

50-50: 🛠️ Refactor suggestion

Double check the omission of low_version_circuit.

Here, only high_version_circuit is referenced. If low_version_circuit is still relevant, consider initializing it or ensuring there's clarity around its removal. Otherwise, callers may expect both circuits to be set.

Also applies to: 54-54

common/libzkp/impl/src/verifier/euclid.rs (1)

55-64: 🛠️ Refactor suggestion

Return a Result<()> from dump_vk.

Relying on .expect(...) for file I/O and JSON serialization can cause abrupt termination if an error occurs. By returning Result<()>, you can propagate errors and handle them appropriately upstream.

-fn dump_vk(&self, file: &Path) {
-    let f = File::create(file).expect("Failed to open file to dump VK");
+fn dump_vk(&self, file: &Path) -> Result<()> {
+    let f = File::create(file).map_err(|e| anyhow!("Failed to open file: {}", e))?;

     let dump = VKDump {
         chunk_vk: base64::encode(self.chunk_verifier.get_app_vk()),
         batch_vk: base64::encode(self.batch_verifier.get_app_vk()),
         bundle_vk: base64::encode(self.bundle_verifier.get_app_vk()),
     };
-    serde_json::to_writer(f, &dump).expect("Failed to dump VK");
+    serde_json::to_writer(f, &dump)
+        .map_err(|e| anyhow!("Failed to dump VK: {}", e))?;
+
+    Ok(())
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    fn dump_vk(&self, file: &Path) -> Result<()> {
        let f = File::create(file).map_err(|e| anyhow!("Failed to open file: {}", e))?;

        let dump = VKDump {
            chunk_vk: base64::encode(self.chunk_verifier.get_app_vk()),
            batch_vk: base64::encode(self.batch_verifier.get_app_vk()),
            bundle_vk: base64::encode(self.bundle_verifier.get_app_vk()),
        };
        serde_json::to_writer(f, &dump)
            .map_err(|e| anyhow!("Failed to dump VK: {}", e))?;

        Ok(())
    }
common/libzkp/impl/Cargo.toml (2)

20-41: 💡 Verification agent

🧩 Analysis chain

Added patches for revm and reth dependencies

Comprehensive patches have been added for revm and reth components to ensure compatibility with the scroll-zkvm implementation.


🏁 Script executed:

#!/bin/bash
# Check if the repository URLs with double slashes are valid
echo "URLs with double slashes might cause issues with cargo. Checking the pattern:"
grep -n "scroll-tech//" common/libzkp/impl/Cargo.toml

Length of output: 2345


Attention: Verify and Fix Extra Slash in Git URLs

The patches for the revm and reth dependencies have been added to support the scroll-zkvm implementation. However, the verification check confirms that all Git URLs in common/libzkp/impl/Cargo.toml include an extra slash (e.g., "https://github.com/scroll-tech//revm"). While Cargo might normalize these URLs, it's best to verify if the extra slash is intentional. If it's not required, please update the URLs to use a single slash (for example, "https://github.com/scroll-tech/revm") to prevent any potential issues.

  • Action Required:
    • Confirm whether the extra slash is accepted by Cargo.
    • If not, update all occurrences in both the revm* and reth* patches accordingly.

11-18: 💡 Verification agent

🧩 Analysis chain

Updated core dependencies with custom forks

The dependencies have been updated to use custom forks from the scroll-tech repositories. This ensures compatibility with project-specific requirements.


🏁 Script executed:

#!/bin/bash
# Check for any duplicate or conflicting dependencies
grep -n "branch" common/libzkp/impl/Cargo.toml | sort
echo ""
echo "Checking for double slashes in repository URLs that might cause issues:"
grep -n "//" common/libzkp/impl/Cargo.toml

Length of output: 5589


Action Required: Clean Up Dependency Declarations

  • The custom fork updates for core dependencies are in place to meet our project requirements.
  • However, the Cargo.toml currently shows duplicate declarations. For instance, dependencies like revm, revm-interpreter, revm-precompile, and revm-primitives appear both in the main dependency block (lines 12–18) and again under a [patch."https://github.com/scroll-tech/revm.git"] section (lines 21–24). Please verify if this duplication is intentional or if leftover entries should be removed.
  • Additionally, several Git URLs (e.g., "https://github.com/scroll-tech//revm", "https://github.com/scroll-tech//reth") contain extraneous double slashes. Although these might be normalized by Cargo, it’s best to correct them for clarity and consistency.

Please address these issues to ensure the manifest is clear and free from potential ambiguities.

coordinator/internal/types/prover.go (1)

40-41: 🛠️ Refactor suggestion

Update String() method to handle the new ProverType.

The new ProverTypeOpenVM constant is added but not handled in the String() method (lines 22-31), which will cause it to fall into the default case and return "illegal prover type" when stringified.

func (r ProverType) String() string {
	switch r {
	case ProverTypeChunk:
		return "prover type chunk"
	case ProverTypeBatch:
		return "prover type batch"
+	case ProverTypeOpenVM:
+		return "prover type openvm"
	default:
		return fmt.Sprintf("illegal prover type: %d", r)
	}
}

Additionally, you should consider updating the MakeProverType function to handle this new prover type if needed; an illustrative sketch follows the committable suggestion below.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

func (r ProverType) String() string {
	switch r {
	case ProverTypeChunk:
		return "prover type chunk"
	case ProverTypeBatch:
		return "prover type batch"
	case ProverTypeOpenVM:
		return "prover type openvm"
	default:
		return fmt.Sprintf("illegal prover type: %d", r)
	}
}
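
For completeness, a minimal sketch of how MakeProverType could be extended. This is illustrative only: the local ProofType selector defined here is an assumption standing in for whatever value the coordinator actually switches on, and is not taken from this PR.

// Hypothetical sketch: ProofType and its constants are illustrative
// stand-ins; only the ProverType* constants come from prover.go.
type ProofType int

const (
	ProofTypeChunk ProofType = iota + 1
	ProofTypeBatch
	ProofTypeOpenVM
)

func MakeProverType(proofType ProofType) ProverType {
	switch proofType {
	case ProofTypeChunk:
		return ProverTypeChunk
	case ProofTypeBatch:
		return ProverTypeBatch
	case ProofTypeOpenVM:
		return ProverTypeOpenVM
	default:
		return ProverType(0) // treated as illegal by String()
	}
}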
rollup/internal/controller/watcher/chunk_proposer.go (1)

426-453: 🛠️ Refactor suggestion

Confirm the choice of encoding.CodecV5 for blocks flagged as Euclid.

If the code is truly entering the Euclid fork, consider using a version >= CodecV7. Using CodecV5 might introduce confusion or inconsistent handling of proof data.

-    codecVersion := encoding.CodecV5
+    codecVersion := encoding.CodecV7 // or higher, depending on intended version
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

func (p *ChunkProposer) tryProposeEuclidTransitionChunk(blocks []*encoding.Block) (bool, error) {
	if !p.chainCfg.IsEuclid(blocks[0].Header.Time) {
		return false, nil
	}

	prevBlocks, err := p.l2BlockOrm.GetL2BlocksGEHeight(p.ctx, blocks[0].Header.Number.Uint64()-1, 1)
	if err != nil || len(prevBlocks) == 0 || prevBlocks[0].Header.Hash() != blocks[0].Header.ParentHash {
		return false, fmt.Errorf("failed to get parent block: %w", err)
	}

	if p.chainCfg.IsEuclid(prevBlocks[0].Header.Time) {
		// Parent is still Euclid, transition happened already
		return false, nil
	}

	// blocks[0] is Euclid, but parent is not, propose a chunk with only blocks[0]
	chunk := encoding.Chunk{Blocks: blocks[:1]}
	codecVersion := encoding.CodecV7 // or higher, depending on intended version
	metrics, calcErr := utils.CalculateChunkMetrics(&chunk, codecVersion)
	if calcErr != nil {
		return false, fmt.Errorf("failed to calculate chunk metrics: %w", calcErr)
	}
	p.recordTimerChunkMetrics(metrics)
	if err := p.updateDBChunkInfo(&chunk, codecVersion, metrics); err != nil {
		return false, err
	}
	return true, nil
}
bridge-history-api/internal/utils/utils.go (1)

140-140: ⚠️ Potential issue

Check chunk size requirements to avoid out-of-bounds access.
The code slices [1:61] and [1+lastBlockIndex*60 : 1+lastBlockIndex*60+60] without verifying that the chunks are long enough (at least 61 bytes for the first slice, and at least 1+lastBlockIndex*60+60 bytes for the last). Malformed data could therefore trigger a runtime panic. A self-contained sketch of the bounds-checked parsing pattern follows the committable suggestion.

+ if len(chunks[0]) < 61 {
+   return 0, 0, 0, errors.New("invalid chunk length for the first block")
+ }
+ if len(chunks[len(chunks)-1]) < 1+lastBlockIndex*60+60 {
+   return 0, 0, 0, errors.New("invalid chunk length for the last block")
+ }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

		if len(chunks[0]) < 61 {
			return 0, 0, 0, errors.New("invalid chunk length for the first block")
		}
		if len(chunks[len(chunks)-1]) < 1+lastBlockIndex*60+60 {
			return 0, 0, 0, errors.New("invalid chunk length for the last block")
		}
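
For context, a self-contained sketch of the bounds-checked parsing pattern, assuming the layout the comment describes (a one-byte block count followed by 60-byte block contexts whose first 8 bytes hold the block number). blockRangeFromChunk and the exact layout are illustrative assumptions, not the actual utils.go function:

package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// blockRangeFromChunk is a hypothetical helper showing the bounds-checked
// parsing pattern: validate the chunk length against the declared block
// count before slicing into it.
func blockRangeFromChunk(chunk []byte) (first, last uint64, err error) {
	if len(chunk) < 1 {
		return 0, 0, errors.New("empty chunk")
	}
	numBlocks := int(chunk[0])
	if numBlocks == 0 {
		return 0, 0, errors.New("chunk contains no blocks")
	}
	if len(chunk) < 1+numBlocks*60 {
		return 0, 0, fmt.Errorf("chunk too short: have %d bytes, need %d", len(chunk), 1+numBlocks*60)
	}
	first = binary.BigEndian.Uint64(chunk[1:9])
	lastOffset := 1 + (numBlocks-1)*60
	last = binary.BigEndian.Uint64(chunk[lastOffset : lastOffset+8])
	return first, last, nil
}

func main() {
	chunk := make([]byte, 1+2*60) // two 60-byte block contexts
	chunk[0] = 2
	binary.BigEndian.PutUint64(chunk[1:9], 100)   // first block number
	binary.BigEndian.PutUint64(chunk[61:69], 101) // last block number
	first, last, err := blockRangeFromChunk(chunk)
	fmt.Println(first, last, err) // 100 101 <nil>
}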
rollup/internal/controller/watcher/bundle_proposer.go (1)

169-174: ⚠️ Potential issue

Use the correct batch index for error reporting.

The error message refers to batches[0].Index and batches[0].Hash instead of the batch that actually has the empty CommitTxHash. Consider switching to batches[i].Index and batches[i].Hash to precisely identify the problematic batch.

-if len(batches[i].CommitTxHash) == 0 {
-  return fmt.Errorf("commit tx hash is empty for batch %v %s", batches[0].Index, batches[0].Hash)
+if len(batches[i].CommitTxHash) == 0 {
+  return fmt.Errorf("commit tx hash is empty for batch %v %s", batches[i].Index, batches[i].Hash)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

	for i := 1; i < len(batches); i++ {
		// Make sure that all batches have been committed.
		if len(batches[i].CommitTxHash) == 0 {
			return fmt.Errorf("commit tx hash is empty for batch %v %s", batches[i].Index, batches[i].Hash)
		}
	}
zkvm-prover/src/prover.rs (3)

50-55: ⚠️ Potential issue

Ensure thread safety when reusing LocalProver across async tasks.

current_task and active_handler are stored as fields without any synchronization. It's unclear whether multiple concurrent calls to this service are expected. If so, consider wrapping these fields in a mutex or using atomic references to prevent potential race conditions in multi-threaded scenarios.


170-177: ⚠️ Potential issue

Consider concurrency in set_active_handler.

set_active_handler changes the active handler without synchronization. In scenarios where multiple proofs might be requested for different circuits concurrently, the active handler might get swapped unexpectedly. Confirm single-thread usage or add synchronization around this.


179-188: 🛠️ Refactor suggestion

Handle unknown hard forks more gracefully.

Currently new_handler uses unwrap() and then unreachable!() for anything other than "euclid". If new hard forks are introduced or misconfigured requests come in, the code will panic. Consider returning a descriptive error instead, or handling additional circuit types explicitly.

rollup/internal/controller/relayer/l2_relayer.go (1)

839-840: ⚠️ Potential issue

Overriding 'withProof' with 'false' breaks finalization.

This override renders the parameter ineffective and bypasses the proof logic. It appears to be a bug; a sketch of the intended gating follows the committable suggestion.

Apply this diff to remove the forced override:

 var aggProof message.BundleProof
-withProof = false
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

	var aggProof message.BundleProof
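
A sketch of the intended gating, assuming the proof should be fetched only when withProof is true; loadBundleProof and bundleHash are hypothetical names, not the relayer's actual API:

	var aggProof message.BundleProof
	if withProof {
		// Hypothetical gating: fetch the verified bundle proof only when
		// the caller requested a proved finalization.
		proof, err := loadBundleProof(bundleHash) // illustrative helper
		if err != nil {
			return fmt.Errorf("failed to load bundle proof: %w", err)
		}
		aggProof = proof
	}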
common/types/message/message.go (3)

281-288: 🛠️ Refactor suggestion

Repeat the same fix for OpenVMBatchProof.
Likewise, replace panics with proper error handling during the JSON marshalling process.

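A minimal sketch of the mirrored fix for the batch proof, assuming OpenVMBatchProof also stores its payload in a VmProof field (the field name is carried over from the chunk variant in the next comment, not verified here):

func (p *OpenVMBatchProof) Proof() ([]byte, error) {
	// Mirror the chunk-proof fix: surface marshalling failures to the caller.
	data, err := json.Marshal(p.VmProof)
	if err != nil {
		return nil, fmt.Errorf("failed to marshal batch proof: %w", err)
	}
	return data, nil
}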

250-257: 🛠️ Refactor suggestion

Avoid panics in library code when marshalling JSON.
Panic can crash the application if json.Marshal fails (e.g., invalid UTF-8 in VmProof). As a safer approach, return the error to the caller or store it in the proof object. Note that changing the signature to ([]byte, error) also requires updating every call site (and any interface the method implements).

-func (p *OpenVMChunkProof) Proof() []byte {
-   proofJson, err := json.Marshal(p.VmProof)
-   if err != nil {
-       panic(fmt.Sprint("marshaling error", err))
-   }
-   return proofJson
-}
+func (p *OpenVMChunkProof) Proof() ([]byte, error) {
+   // Return the error to the caller instead of panicking
+   data, err := json.Marshal(p.VmProof)
+   if err != nil {
+       return nil, fmt.Errorf("failed to marshal chunk proof: %w", err)
+   }
+   return data, nil
+}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

func (p *OpenVMChunkProof) Proof() ([]byte, error) {
	// Return the error to the caller instead of panicking
	data, err := json.Marshal(p.VmProof)
	if err != nil {
		return nil, fmt.Errorf("failed to marshal chunk proof: %w", err)
	}
	return data, nil
}

354-357: ⚠️ Potential issue

Prevent out-of-range indexing in OpenVMBundleProof.Proof().
p.EvmProof.Instances[:384] can panic with an opaque slice-bounds error if len(p.EvmProof.Instances) < 384. Consider adding an explicit boundary check. Note that the suggestion below still panics, but with a descriptive message, because the method's signature leaves no way to return an error; an error-returning variant is sketched after the committable suggestion:

 func (p *OpenVMBundleProof) Proof() []byte {
+   if len(p.EvmProof.Instances) < 384 {
+       panic(fmt.Sprintf("instances length (%d) < 384, cannot slice safely", len(p.EvmProof.Instances)))
+   }
    proofBytes := make([]byte, 0, 384+len(p.EvmProof.Proof))
    proofBytes = append(proofBytes, p.EvmProof.Instances[:384]...)
    return append(proofBytes, p.EvmProof.Proof...)
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

func (p *OpenVMBundleProof) Proof() []byte {
    if len(p.EvmProof.Instances) < 384 {
        panic(fmt.Sprintf("instances length (%d) < 384, cannot slice safely", len(p.EvmProof.Instances)))
    }
    proofBytes := make([]byte, 0, 384+len(p.EvmProof.Proof))
    proofBytes = append(proofBytes, p.EvmProof.Instances[:384]...)
    return append(proofBytes, p.EvmProof.Proof...)
}
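
If the surrounding interface can change, an error-returning variant removes the panic entirely. A sketch under that assumption — ProofChecked is an illustrative name, not an existing method:

func (p *OpenVMBundleProof) ProofChecked() ([]byte, error) {
	// Report short instance data as an error instead of panicking.
	if len(p.EvmProof.Instances) < 384 {
		return nil, fmt.Errorf("instances length (%d) < 384", len(p.EvmProof.Instances))
	}
	proofBytes := make([]byte, 0, 384+len(p.EvmProof.Proof))
	proofBytes = append(proofBytes, p.EvmProof.Instances[:384]...)
	return append(proofBytes, p.EvmProof.Proof...), nil
}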

@colinlyguo colinlyguo force-pushed the feat-bridge-history-support-codecv7 branch from a39f89d to 3c6d1f3 on March 7, 2025 14:38
@jonastheis jonastheis merged commit 94bee19 into feat/use-codec-v6 Mar 8, 2025
5 checks passed
@jonastheis jonastheis deleted the feat-bridge-history-support-codecv7 branch March 8, 2025 08:35