feat: cache on jsonrpc #85

Merged
merged 4 commits into main from feat/jsonrpc-cache on Oct 26, 2024

Conversation


@beer-1 beer-1 commented Oct 26, 2024

Description

Closes: #XXXX


Author Checklist

All items are required. Please add a note to an item if it is not applicable, and
add links to any relevant follow-up issues.

I have...

  • included the correct type prefix in the PR title; examples of the prefixes can be found below
  • confirmed ! in the type prefix for API- or client-breaking changes
  • targeted the correct branch
  • provided a link to the relevant issue or specification
  • reviewed "Files changed" and left comments if necessary
  • included the necessary unit and integration tests
  • updated the relevant documentation or specification, including comments for documenting Go code
  • confirmed all CI checks have passed

Reviewers Checklist

All items are required. Please add a note if an item is not applicable, and add
your handle next to the items you reviewed if you only reviewed selected items.

I have...

  • confirmed the correct type prefix in the PR title
  • confirmed all author checklist items have been addressed
  • reviewed state machine logic, API design and naming, documentation accuracy, and tests and test coverage

Summary by CodeRabbit

Release Notes

  • New Features

    • Enhanced caching mechanisms for block and transaction data, improving retrieval efficiency.
    • Introduced new methods for better transaction and receipt handling, including caching for improved performance.
  • Bug Fixes

    • Corrected method name in fee history processing for accurate block receipt retrieval.
  • Improvements

    • Streamlined log retrieval process with caching to reduce redundant queries.
    • Simplified error handling across various transaction methods for clearer feedback.
  • Configuration Updates

    • Added new configuration options for log cache size, enhancing server management capabilities.

@beer-1 beer-1 requested a review from a team as a code owner October 26, 2024 16:02
@beer-1 beer-1 self-assigned this Oct 26, 2024

coderabbitai bot commented Oct 26, 2024

Walkthrough

The pull request introduces significant changes to the EVMIndexer interface and its implementation, notably removing the TxByBlockAndIndex and BlockHeaderByHash methods while adding TxHashByBlockAndIndex. The modifications streamline transaction retrieval by focusing on hashes. Additionally, the JSONRPCBackend struct has been enhanced with new caching mechanisms and methods for better management of block and transaction data. Other files see improvements in error handling and performance optimizations, particularly in log retrieval and transaction processing.

Changes

indexer/indexer.go: Removed the TxByBlockAndIndex and BlockHeaderByHash methods; added the TxHashByBlockAndIndex method.
indexer/reader.go: Removed the BlockHeaderByHash method; renamed TxByBlockAndIndex to TxHashByBlockAndIndex with a return type change.
jsonrpc/backend/backend.go: Changed the historyCache type to lru.Cache; added multiple new cache fields (e.g., headerCache, blockTxsCache); modified initialization in NewJSONRPCBackend.
jsonrpc/backend/block.go: Added the blockNumberByHash method; enhanced caching in the GetHeaderByNumber and GetHeaderByHash methods.
jsonrpc/backend/feehistory.go: Fixed a method name from getBLockReceipts to getBlockReceipts.
jsonrpc/backend/filters.go: Enhanced the GetLogsByHeight method to check for cached logs before querying; improved error handling and caching for logs.
jsonrpc/backend/tx.go: Added new methods for retrieving block receipts, transactions, and receipts with caching; enhanced error handling and transaction retrieval logic in several methods.
jsonrpc/config/config.go: Removed the DefaultBlockRangeCap constant; added the DefaultLogCacheSize constant and LogCacheSize field in JSONRPCConfig.
jsonrpc/namespaces/eth/filters/filter.go: Replaced the custom includes function with slices.Contains from the slices package; simplified error handling in the Logs method.

Possibly related PRs

  • fix: audit #59: Changes in indexer/indexer.go and indexer/reader.go regarding the EVMIndexer interface relate to modifications in ListenFinalizeBlock in indexer/abci.go, which also deals with transaction and block handling.
  • fix: async filter & subscription system #82: Introduction of the blockEvents struct and blockEventsEmitter method in indexer/indexer.go aligns with changes in ListenFinalizeBlock in indexer/abci.go, enhancing event emission capabilities related to block processing.

🐇 In the land of code where rabbits play,
New methods hop in, brightening the day.
With logs and caches, swift as a breeze,
Transactions dance with the greatest of ease.
So let’s celebrate this code we adore,
Hopping along, we’ll code even more! 🐇✨



codecov bot commented Oct 26, 2024

Codecov Report

Attention: Patch coverage is 0% with 166 lines in your changes missing coverage. Please review.

Project coverage is 27.77%. Comparing base (63688ba) to head (c763ca6).
Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
jsonrpc/backend/tx.go 0.00% 94 Missing ⚠️
jsonrpc/backend/block.go 0.00% 28 Missing ⚠️
jsonrpc/backend/backend.go 0.00% 18 Missing ⚠️
jsonrpc/backend/filters.go 0.00% 15 Missing ⚠️
jsonrpc/config/config.go 0.00% 4 Missing ⚠️
indexer/reader.go 0.00% 3 Missing ⚠️
jsonrpc/namespaces/eth/filters/filter.go 0.00% 3 Missing ⚠️
jsonrpc/backend/feehistory.go 0.00% 1 Missing ⚠️
Additional details and impacted files

Impacted file tree graph

@@            Coverage Diff             @@
##             main      #85      +/-   ##
==========================================
- Coverage   27.85%   27.77%   -0.09%     
==========================================
  Files         124      124              
  Lines       13777    13818      +41     
==========================================
  Hits         3838     3838              
- Misses       9378     9419      +41     
  Partials      561      561              
Files with missing lines Coverage Δ
indexer/indexer.go 0.00% <ø> (ø)
jsonrpc/backend/feehistory.go 0.00% <0.00%> (ø)
indexer/reader.go 0.00% <0.00%> (ø)
jsonrpc/namespaces/eth/filters/filter.go 0.00% <0.00%> (ø)
jsonrpc/config/config.go 0.00% <0.00%> (ø)
jsonrpc/backend/filters.go 0.00% <0.00%> (ø)
jsonrpc/backend/backend.go 0.00% <0.00%> (ø)
jsonrpc/backend/block.go 0.00% <0.00%> (ø)
jsonrpc/backend/tx.go 0.00% <0.00%> (ø)

@beer-1 beer-1 merged commit a2bc6ca into main Oct 26, 2024
7 of 8 checks passed
@beer-1 beer-1 deleted the feat/jsonrpc-cache branch October 26, 2024 16:15

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Outside diff range and nitpick comments (16)
indexer/reader.go (1)

22-30: Consider implementing a tiered caching strategy.

The transition to hash-based lookups enables efficient caching. Consider implementing:

  1. L1 Cache: In-memory cache for frequently accessed hashes
  2. L2 Cache: Persistent cache for transaction data
  3. Cache invalidation strategy based on block heights

This would further improve JSON-RPC performance while maintaining memory efficiency.
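
For illustration only, a minimal sketch of what such a tiered lookup could look like in Go; the tieredTxCache type, the l2Lookup and keysAtOrAbove helpers, and the key layout are all assumptions, not code from this PR:

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/lru"
)

type txKey struct {
	height uint64
	index  uint64
}

// tieredTxCache is a hypothetical wrapper: a small in-memory LRU (L1) in front
// of a persistent lookup (L2), with height-based invalidation for reorgs or pruning.
type tieredTxCache struct {
	l1            *lru.Cache[txKey, common.Hash]
	l2Lookup      func(txKey) (common.Hash, bool) // assumed persistent-store helper
	keysAtOrAbove func(uint64) []txKey            // assumed index of cached keys by height
}

func (c *tieredTxCache) lookupTxHash(height, index uint64) (common.Hash, bool) {
	key := txKey{height: height, index: index}
	if h, ok := c.l1.Get(key); ok { // L1 hit: hot, in-memory
		return h, true
	}
	h, ok := c.l2Lookup(key) // L2: persistent cache or indexer
	if ok {
		c.l1.Add(key, h) // promote to L1 for subsequent queries
	}
	return h, ok
}

// invalidateFrom drops L1 entries at or above the given block height.
func (c *tieredTxCache) invalidateFrom(height uint64) {
	for _, key := range c.keysAtOrAbove(height) {
		c.l1.Remove(key)
	}
}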

jsonrpc/backend/block.go (3)

61-63: Consider cache management strategy

While the caching implementation is good, consider the following improvements:

  1. Handle cache errors instead of silently ignoring them with _
  2. Implement a cache expiration strategy to prevent memory growth
  3. Add metrics for cache hit/miss rates

Consider implementing a cache configuration struct:

type CacheConfig struct {
    MaxSize int
    TTL time.Duration
    EnableMetrics bool
}

Also applies to: 78-79


84-92: Enhance error logging specificity

The error logging could be more informative by including the block hash in the error message.

-b.logger.Error("failed to get block number by hash", "err", err)
+b.logger.Error("failed to get block number by hash", "hash", hash, "err", err)

101-106: LGTM: Improved error handling

The addition of explicit nil checks improves robustness. Consider adding block number to the error log for better debugging.

-b.logger.Error("failed to get block header by number", "err", err)
+b.logger.Error("failed to get block header by number", "blockNumber", blockNumber, "err", err)
indexer/indexer.go (1)

Line range hint 1-180: Consider enhancing the caching strategy.

Since this PR focuses on JSON-RPC caching, consider these architectural improvements:

  1. Add cache hit/miss metrics for monitoring
  2. Consider implementing a TTL-based eviction policy for block headers
  3. Add documentation about the caching behavior and configuration

Would you like me to provide an example implementation of these enhancements?
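
As a rough illustration of the first point, hit/miss counters could be layered over the existing caches; the meteredCache wrapper below is hypothetical and not part of the current code:

import (
	"sync/atomic"

	"github.com/ethereum/go-ethereum/common/lru"
)

// meteredCache is a hypothetical wrapper that counts hits and misses so cache
// effectiveness can be exported as metrics or logged periodically.
type meteredCache[K comparable, V any] struct {
	inner  *lru.Cache[K, V]
	hits   atomic.Uint64
	misses atomic.Uint64
}

func (c *meteredCache[K, V]) Get(key K) (V, bool) {
	v, ok := c.inner.Get(key)
	if ok {
		c.hits.Add(1)
	} else {
		c.misses.Add(1)
	}
	return v, ok
}

func (c *meteredCache[K, V]) Add(key K, value V) bool {
	return c.inner.Add(key, value)
}

// HitRate reports the fraction of lookups served from the cache.
func (c *meteredCache[K, V]) HitRate() float64 {
	h, m := c.hits.Load(), c.misses.Load()
	if h+m == 0 {
		return 0
	}
	return float64(h) / float64(h+m)
}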

jsonrpc/namespaces/eth/filters/filter.go (1)

8-8: Consider adding package-level documentation.

While the code is well-structured, adding package-level documentation would help users understand:

  • The purpose and scope of the filters package
  • How it integrates with the JSON-RPC backend
  • The relationship between Filter, bloomFilter, and log filtering

Add this documentation at the package level:

 package filters

+// Package filters implements Ethereum-compatible log filtering functionality.
+// It provides mechanisms to filter logs based on block ranges, addresses, and topics,
+// using bloom filters for efficient filtering and supporting both synchronous and
+// asynchronous log retrieval.
jsonrpc/config/config.go (3)

47-48: Document the rationale for DefaultLogCacheSize value.

The choice of 32 blocks as the default cache size seems arbitrary. Please add a comment explaining why this value was chosen, considering factors like memory usage, typical query patterns, and performance implications.
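
For example, the documentation could spell out the trade-off directly; the figures in the comment below are illustrative assumptions, not measurements from this codebase:

// DefaultLogCacheSize caps how many recent blocks' logs are kept in memory for
// log-filter queries. 32 blocks is assumed to keep memory in the low-megabyte
// range for typical blocks while still covering the short trailing window that
// polling clients (dashboards, indexers) query most often.
const DefaultLogCacheSize = 32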


160-160: Enhance the flag description.

Consider providing more context in the flag description about the performance implications and memory usage of the cache size setting.

Suggested enhancement:

-startCmd.Flags().Int(flagJSONRPCLogCacheSize, DefaultLogCacheSize, "Maximum number of cached blocks for the log filter")
+startCmd.Flags().Int(flagJSONRPCLogCacheSize, DefaultLogCacheSize, "Maximum number of cached blocks for the log filter. Higher values improve query performance but increase memory usage")

240-242: Enhance the config template documentation.

Consider adding more context about the cache configuration in the template comments.

Suggested enhancement:

-# LogCacheSize is the maximum number of cached blocks for the log filter.
+# LogCacheSize is the maximum number of cached blocks for the log filter.
+# Higher values improve query performance for frequently accessed blocks
+# but increase memory usage. Set to 0 to disable caching.
log-cache-size = {{ .JSONRPCConfig.LogCacheSize }}
jsonrpc/backend/filters.go (4)

29-31: Include block height in the error message for clarity.

Providing the block height in the error message can help with debugging when the number of transactions and receipts do not match.

Apply this diff to enhance the error message:

 if len(txs) != len(receipts) {
-    return nil, NewInternalError("number of transactions and receipts do not match")
+    return nil, NewInternalError(fmt.Sprintf("number of transactions (%d) and receipts (%d) do not match at block height %d", len(txs), len(receipts), height))
 }

Line range hint 34-45: Avoid modifying shared Log instances directly; create copies instead.

Modifying the Log instances retrieved from receipt.Logs can lead to unintended side effects if these logs are used elsewhere in the codebase. It's safer to create copies of the logs before modification.

Apply this diff to create copies of the logs:

 for i, tx := range txs {
     receipt := receipts[i]
-    logs := receipt.Logs
+    logs := make([]*coretypes.Log, len(receipt.Logs))
+    for j, originalLog := range receipt.Logs {
+        logCopy := *originalLog // Create a copy of the log
+        logs[j] = &logCopy
+    }
     for idx, log := range logs {
         log.BlockHash = blockHeader.Hash()
         log.BlockNumber = height
         log.TxHash = tx.Hash
         log.Index = uint(idx)
         log.TxIndex = receipt.TransactionIndex
     }
     blockLogs = append(blockLogs, logs...)
 }

Line range hint 38-42: Correct the computation of log.Index to reflect the log's position within the block.

The log.Index should represent the log's index within the entire block, not just within the transaction. The current implementation resets idx for each transaction, which can result in duplicate log indices across different transactions.

Apply this diff to compute the correct log.Index:

+    logIndex := uint(0)
     for i, tx := range txs {
         receipt := receipts[i]
         logs := receipt.Logs
         for _, log := range logs {
             log.BlockHash = blockHeader.Hash()
             log.BlockNumber = height
             log.TxHash = tx.Hash
             log.TxIndex = receipt.TransactionIndex
-            log.Index = uint(idx)
+            log.Index = logIndex
+            logIndex++
         }
         blockLogs = append(blockLogs, logs...)
     }

48-48: Handle the return value of logsCache.Add.

Currently, the return value of b.logsCache.Add(height, blockLogs) is ignored. If Add can fail or return an error, consider handling it appropriately.

If the cache's Add method does not return an error or if errors can be safely ignored, this can be left as is. Otherwise, handle the error as needed.
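
For reference, if logsCache is go-ethereum's common/lru cache (as the backend initialization elsewhere in this PR suggests), Add returns an eviction flag rather than an error, so acknowledging it explicitly might look like this sketch:

// Add reports whether an older entry was evicted to make room; there is no
// error to propagate, so at most this is worth a debug log.
if evicted := b.logsCache.Add(height, blockLogs); evicted {
	b.logger.Debug("logsCache evicted an entry while caching block logs", "height", height)
}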

jsonrpc/backend/backend.go (1)

Line range hint 71-93: Resolve Inconsistent Cache Initializations

In the NewJSONRPCBackend function:

  • queuedTxs is initialized using lrucache.New from github.com/hashicorp/golang-lru/v2.
  • Other caches are initialized using lru.NewCache from github.com/ethereum/go-ethereum/common/lru.

This inconsistency may cause unexpected behavior due to differences in implementation details between the two libraries.

Recommendation:

Standardize the cache initialization using a single library.

If selecting hashicorp/golang-lru:

 func NewJSONRPCBackend(
     app *app.MinitiaApp,
     logger log.Logger,
     svrCtx *server.Context,
     clientCtx client.Context,
     cfg config.JSONRPCConfig,
 ) (*JSONRPCBackend, error) {
     if cfg.QueuedTransactionCap == 0 {
         cfg.QueuedTransactionCap = config.DefaultQueuedTransactionCap
     }
     if cfg.LogCacheSize == 0 {
         cfg.LogCacheSize = config.DefaultLogCacheSize
     }

-    queuedTxs, err := lrucache.New[string, []byte](cfg.QueuedTransactionCap)
+    queuedTxs, err := lru.New[string, []byte](cfg.QueuedTransactionCap)
     if err != nil {
         return nil, err
     }

+    historyCache, err := lru.New[cacheKey, processedFees](feeHistoryCacheSize)
+    if err != nil {
+        return nil, err
+    }
+
+    // per block caches
+    headerCache, err := lru.New[uint64, *coretypes.Header](blockCacheLimit)
+    if err != nil {
+        return nil, err
+    }
+    blockTxsCache, err := lru.New[uint64, []*rpctypes.RPCTransaction](blockCacheLimit)
+    if err != nil {
+        return nil, err
+    }
+    blockReceiptsCache, err := lru.New[uint64, []*coretypes.Receipt](blockCacheLimit)
+    if err != nil {
+        return nil, err
+    }
+    blockHashCache, err := lru.New[common.Hash, uint64](blockCacheLimit)
+    if err != nil {
+        return nil, err
+    }
+    logsCache, err := lru.New[uint64, []*coretypes.Log](cfg.LogCacheSize)
+    if err != nil {
+        return nil, err
+    }
+
+    // per tx caches
+    txLookupCache, err := lru.New[common.Hash, *rpctypes.RPCTransaction](txLookupCacheLimit)
+    if err != nil {
+        return nil, err
+    }
+    receiptCache, err := lru.New[common.Hash, *coretypes.Receipt](txLookupCacheLimit)
+    if err != nil {
+        return nil, err
+    }
+
     return &JSONRPCBackend{
         ...
-        historyCache: lru.NewCache[cacheKey, processedFees](feeHistoryCacheSize),
+        historyCache: historyCache,

         // per block caches
-        headerCache:        lru.NewCache[uint64, *coretypes.Header](blockCacheLimit),
-        blockTxsCache:      lru.NewCache[uint64, []*rpctypes.RPCTransaction](blockCacheLimit),
-        blockReceiptsCache: lru.NewCache[uint64, []*coretypes.Receipt](blockCacheLimit),
-        blockHashCache:     lru.NewCache[common.Hash, uint64](blockCacheLimit),
-        logsCache:          lru.NewCache[uint64, []*coretypes.Log](cfg.LogCacheSize),
+        headerCache:        headerCache,
+        blockTxsCache:      blockTxsCache,
+        blockReceiptsCache: blockReceiptsCache,
+        blockHashCache:     blockHashCache,
+        logsCache:          logsCache,

         // per tx caches
-        txLookupCache: lru.NewCache[common.Hash, *rpctypes.RPCTransaction](txLookupCacheLimit),
-        receiptCache:  lru.NewCache[common.Hash, *coretypes.Receipt](txLookupCacheLimit),
+        txLookupCache: txLookupCache,
+        receiptCache:  receiptCache,
     }, nil
 }

Note: The hashicorp/golang-lru library's New function returns a cache instance and an error, which is why each cache above is constructed and its error checked before the struct literal rather than initialized inline.

jsonrpc/backend/tx.go (2)

297-297: Consider Handling Error Returned by txLookupCache.Add

The Add method may return an error, which is currently being ignored. If caching failures could impact the application, it's advisable to handle or log the error.

If the error can be safely ignored, consider adding a comment to explain why:

// Errors from cache addition can be safely ignored as it does not affect critical functionality.
_ = b.txLookupCache.Add(hash, rpcTx)

230-232: Handle Missing Block Transactions in Cache

In GetTransactionByBlockNumberAndIndex, when retrieving transactions from blockTxsCache, consider handling the case where blockNumber is present but the transaction slice is nil or empty.

Add a check to handle empty transaction slices:

 if txs, ok := b.blockTxsCache.Get(blockNumber); ok {
+    if len(txs) == 0 {
+        return nil, nil
+    }
     // Existing bounds check here
 }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed between the base of the PR (63688ba) and head (c763ca6).

📒 Files selected for processing (9)
  • indexer/indexer.go (1 hunks)
  • indexer/reader.go (1 hunks)
  • jsonrpc/backend/backend.go (5 hunks)
  • jsonrpc/backend/block.go (5 hunks)
  • jsonrpc/backend/feehistory.go (1 hunks)
  • jsonrpc/backend/filters.go (2 hunks)
  • jsonrpc/backend/tx.go (5 hunks)
  • jsonrpc/config/config.go (7 hunks)
  • jsonrpc/namespaces/eth/filters/filter.go (4 hunks)
✅ Files skipped from review due to trivial changes (1)
  • jsonrpc/backend/feehistory.go
🔇 Additional comments (16)
indexer/reader.go (1)

22-30: LGTM! Efficient implementation of transaction hash retrieval.

The change from returning full transaction objects to just hashes is a good optimization that:

  • Reduces memory usage and data transfer overhead
  • Provides better caching potential
  • Allows callers to fetch full transaction details only when needed
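
A minimal sketch of the hash-first flow this enables; the exact TxHashByBlockAndIndex signature, the backend fields, and the getTransaction helper are assumed from the summaries above rather than copied from the diff:

// Sketch: resolve the transaction hash first, then load the full transaction
// only if it is not already in the per-tx cache.
func (b *JSONRPCBackend) getTxByBlockAndIndexSketch(height, index uint64) (*rpctypes.RPCTransaction, error) {
	hash, err := b.app.EVMIndexer().TxHashByBlockAndIndex(b.ctx, height, index) // assumed signature
	if err != nil {
		return nil, err
	}
	if tx, ok := b.txLookupCache.Get(hash); ok { // cached full transaction
		return tx, nil
	}
	return b.getTransaction(hash) // assumed helper that queries the indexer and fills the cache
}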

Let's verify that all consumers are updated to handle the new return type:

✅ Verification successful

Let me gather more information about the interface changes and their impact:


Let me check one more aspect to ensure all consumers are properly updated:


Based on the gathered evidence, I can now provide the final response:

The interface change from TxByBlockAndIndex to TxHashByBlockAndIndex is properly implemented across the codebase

The verification shows:

  • The interface change is properly defined in indexer/indexer.go
  • The JSON-RPC backend (jsonrpc/backend/tx.go) correctly uses TxHashByBlockAndIndex
  • No references to the old method name TxByBlockAndIndex exist in the codebase
  • All transaction-related RPC endpoints are properly implemented to handle the hash-first approach
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for any remaining calls expecting the old return type
ast-grep --pattern 'TxByBlockAndIndex($$$)'

# Search for new method usage patterns
rg -A 3 'TxHashByBlockAndIndex'

Length of output: 1041


Script:

#!/bin/bash
# Check for any references to the old method name in the entire codebase
rg "TxByBlockAndIndex"

# Look for the interface definition history
rg -B 3 -A 3 "type EVMIndexer interface"

# Check for any tests that might need updates
fd -e test.go -x rg -l "TxByBlockAndIndex|TxHashByBlockAndIndex" {}

Length of output: 398


Script:

#!/bin/bash
# Check the implementation and usage in JSON-RPC handlers
rg -B 5 -A 5 "func.*GetTransactionBy(Block|Hash|Number)"

# Look for any potential mock implementations that need updates
fd "mock" -t f -x rg -l "TxByBlockAndIndex|TxHashByBlockAndIndex" {}

# Check test files specifically in the indexer and jsonrpc directories
fd -e test.go -t f . indexer jsonrpc -x cat {}

Length of output: 4509

jsonrpc/backend/block.go (3)

26-26: LGTM: Improved block resolution with caching

The change improves code organization by delegating block number resolution to a dedicated caching method.


119-127: LGTM: Well-structured code reuse

Good refactoring to reuse resolveBlockNrOrHash and maintain consistent error handling patterns.


129-145: Verify thread safety and improve error handling

The new caching implementation needs attention in two areas:

  1. Thread safety: Ensure the cache operations are safe for concurrent access
  2. Error handling: Consider handling cache errors instead of ignoring them

Consider using sync.Map or adding mutex protection:

type JSONRPCBackend struct {
    cacheMu sync.RWMutex
    // ... other fields
}

func (b *JSONRPCBackend) blockNumberByHash(hash common.Hash) (uint64, error) {
    b.cacheMu.RLock()
    if number, ok := b.blockHashCache.Get(hash); ok {
        b.cacheMu.RUnlock()
        return number, nil
    }
    b.cacheMu.RUnlock()
    
    // ... get from indexer ...
    
    b.cacheMu.Lock()
    defer b.cacheMu.Unlock()
    // Add reports eviction (a bool), not an error, so there is nothing to handle here.
    b.blockHashCache.Add(hash, number)
    return number, nil
}
indexer/indexer.go (1)

33-33: Interface change aligns well with caching objectives.

The addition of TxHashByBlockAndIndex replacing TxByBlockAndIndex is a good architectural decision as it:

  • Enables more efficient two-step lookups (hash first, then full tx if needed)
  • Reduces memory usage by not loading full transactions unnecessarily
  • Supports the PR's caching objectives by allowing fine-grained cache control

Let's verify that all implementers have been updated:

✅ Verification successful

Interface change has been properly implemented across the codebase

The verification shows that:

  • The new TxHashByBlockAndIndex method is properly implemented in EVMIndexerImpl (indexer/reader.go)
  • The method is being used correctly in the backend (jsonrpc/backend/tx.go)
  • No traces of the old TxByBlockAndIndex method remain in the codebase
  • All interface implementers have been updated to use the new method
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for any implementations of EVMIndexer interface
ast-grep --pattern 'type $_ struct {
  $$$
}

func ($_ $_) TxByBlockAndIndex($$$) {
  $$$
}'

# Search for any calls to the old method
rg "TxByBlockAndIndex"

Length of output: 117


Script:

#!/bin/bash
# Let's try a different approach to find implementations and usages

# 1. Find all files that might implement the interface
rg -l "EVMIndexer"

# 2. Search for struct types that might implement the interface
ast-grep --pattern 'type $name struct'

# 3. Look for both old and new method implementations
rg "TxByBlockAndIndex|TxHashByBlockAndIndex" -A 3

# 4. Look specifically in test files for any usage
fd -e test.go --exec rg "TxByBlockAndIndex|TxHashByBlockAndIndex" {}

Length of output: 1357

jsonrpc/namespaces/eth/filters/filter.go (2)

213-214: Great improvement: Using standard library's slices.Contains.

Replacing the custom includes function with slices.Contains from the standard library is a good improvement that:

  • Reduces maintenance burden
  • Leverages well-tested standard library code
  • Improves code readability

Also applies to: 224-225


130-130: Verify the impact of error handling change.

The error handling has been modified to return immediately on error, which could be a breaking change if clients were previously relying on partial results even when errors occurred.

Let's verify the error handling behavior in client code:

✅ Verification successful

Let me gather more context about the error handling changes by examining the previous implementation and current changes.


Based on the git history, I can now provide a final response about the error handling change:

Error handling change is safe and more consistent

The modification to error handling is actually an improvement in code clarity without being a breaking change. The git history shows that both the old and new implementations return partial logs with errors - the only difference is that the old code had an unnecessary if err != nil check. The behavior remains the same:

  • Both versions collect logs in the channel-based loop until an error occurs
  • Both versions return any collected logs along with the error when one is encountered

The change simply removes redundant code while maintaining the same functionality, making it more maintainable without introducing any breaking changes.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for direct calls to Filter.Logs() to identify potentially affected clients
ast-grep --pattern 'filter.Logs($$$)'

# Search for error handling patterns around Logs() calls
rg -B 3 -A 3 'if.*err.*:=.*Logs\('

Length of output: 506


Script:

#!/bin/bash
# Get git history of the file to see the previous error handling
git log -p jsonrpc/namespaces/eth/filters/filter.go | grep -A 10 -B 10 "return logs, err"

# Look for any tests that might verify the error handling behavior
fd "filter.*test.*go" --exec cat {}

Length of output: 1510

jsonrpc/config/config.go (4)

110-111: LGTM!

The new field is well-documented and follows the existing struct field patterns.


138-139: LGTM!

The initialization is consistent with other fields and uses the defined default constant.


181-181: LGTM!

The config loading implementation follows the established pattern correctly.


47-48: Verify memory usage implications.

The unbounded cache size could potentially lead to excessive memory usage. Let's verify if there are any existing memory-related issues or discussions in the codebase.

Also applies to: 110-111

jsonrpc/backend/backend.go (1)

27-38: ⚠️ Potential issue

Standardize Cache Implementations in JSONRPCBackend Struct

Within the JSONRPCBackend struct, there is inconsistent usage of cache types:

  • queuedTxs uses *lrucache.Cache from github.com/hashicorp/golang-lru/v2.
  • Other caches like historyCache, headerCache, etc., use *lru.Cache from github.com/ethereum/go-ethereum/common/lru.

This inconsistency can lead to confusion and make the code harder to maintain.

Recommendation:

Align all cache fields to use the same cache library.

If choosing hashicorp/golang-lru:

 type JSONRPCBackend struct {
     ...
-   queuedTxs    *lrucache.Cache[string, []byte]
+   queuedTxs    *lru.Cache[string, []byte]
     historyCache *lru.Cache[cacheKey, processedFees]

     // per block caches
     headerCache        *lru.Cache[uint64, *coretypes.Header]
     blockTxsCache      *lru.Cache[uint64, []*rpctypes.RPCTransaction]
     blockReceiptsCache *lru.Cache[uint64, []*coretypes.Receipt]
     blockHashCache     *lru.Cache[common.Hash, uint64]
     logsCache          *lru.Cache[uint64, []*coretypes.Log]

     // per tx caches
     txLookupCache *lru.Cache[common.Hash, *rpctypes.RPCTransaction]
     receiptCache  *lru.Cache[common.Hash, *coretypes.Receipt]
 }

(With the import pointed at a single library, only the queuedTxs declaration and the import block need to change; the remaining fields keep the same lru.Cache spelling.)

Ensure that all methods and initializations are updated to reflect this change.

Likely invalid or redundant comment.

jsonrpc/backend/tx.go (4)

357-385: Efficient Retrieval and Caching in GetBlockReceipts

The implementation of GetBlockReceipts effectively retrieves transactions and receipts for a block, with appropriate caching mechanisms. This enhances performance by minimizing redundant data fetches.


388-407: Consistent Caching in getTransaction Method

The getTransaction method properly checks the cache before querying the indexer and adds the result to the cache. This approach improves efficiency and maintains consistency across transaction retrievals.


410-430: Proper Error Handling and Caching in getReceipt Method

The getReceipt function includes appropriate error handling for not found errors and caches receipts upon retrieval, enhancing performance for subsequent access.


279-281: ⚠️ Potential issue

Validate Transaction Before Caching

Before adding the transaction to the cache in GetRawTransactionByHash, ensure that rpcTx is not nil to avoid potential nil pointer dereferences.

Apply this diff to add a nil check:

 if rpcTx == nil {
     return nil, nil
 }
+// Ensure rpcTx is valid before caching
 _ = b.txLookupCache.Add(hash, rpcTx)

Likely invalid or redundant comment.

Comment on lines +10 to +12
if blockLogs, ok := b.logsCache.Get(height); ok {
return blockLogs, nil
}

⚠️ Potential issue

Ensure thread-safe access to the logsCache.

The current implementation accesses b.logsCache without any synchronization mechanisms. If JSONRPCBackend is accessed concurrently, consider using a thread-safe cache or adding synchronization to prevent data races.

Comment on lines +14 to +18
blockHeader, err := b.GetHeaderByNumber(rpc.BlockNumber(height))
if err != nil {
return nil, err
} else if blockHeader == nil {
return nil, nil

⚠️ Potential issue

Handle the case when GetHeaderByNumber returns nil.

Currently, when blockHeader is nil, the function returns (nil, nil). This could lead to a nil slice being returned to the caller, which might not be expected. Consider returning an empty slice instead to avoid potential nil dereference errors in the calling code.

Apply this diff to return an empty slice:

 } else if blockHeader == nil {
-    return nil, nil
+    return []*coretypes.Log{}, nil
 }

if err != nil {
return nil, err
}
blockLogs := []*coretypes.Log{}

🛠️ Refactor suggestion

Preallocate blockLogs slice for performance improvement.

If the total number of logs can be estimated, preallocating the blockLogs slice can improve performance by reducing allocations.

Apply this diff to preallocate blockLogs:

-blockLogs := []*coretypes.Log{}
+estimatedLogs := 0
+for _, receipt := range receipts {
+    estimatedLogs += len(receipt.Logs)
+}
+blockLogs := make([]*coretypes.Log, 0, estimatedLogs)

Committable suggestion was skipped due to low confidence.

Comment on lines +9 to +10
"github.com/ethereum/go-ethereum/common/lru"
coretypes "github.com/ethereum/go-ethereum/core/types"

⚠️ Potential issue

Use Consistent LRU Cache Libraries for Clarity and Maintainability

The code imports two different LRU cache libraries:

  • "github.com/ethereum/go-ethereum/common/lru" (imported as lru)
  • "github.com/hashicorp/golang-lru/v2" (imported as lrucache)

This mix can lead to confusion and potential bugs, as both libraries provide similar functionality but may have different APIs or behaviors.

Recommendation:

Choose one LRU cache library and standardize its use throughout the codebase to improve clarity and reduce maintenance overhead.

If opting for hashicorp/golang-lru:

 import (
     ...
-   "github.com/ethereum/go-ethereum/common/lru"
+   lru "github.com/hashicorp/golang-lru/v2"
     ...
 )

With the import aliased to lru, the field and variable types keep the same lru.Cache spelling; remove the old lrucache alias, and ensure all cache initializations and method calls are adjusted to the hashicorp API.

Committable suggestion was skipped due to low confidence.

Comment on lines +279 to +280
return tx.ToTransaction().MarshalBinary()
}

⚠️ Potential issue

Handle Potential Errors from MarshalBinary Method

The MarshalBinary method can return an error, but the current implementation does not handle it. Ignoring the error may lead to unexpected behavior or panics if serialization fails.

Apply this diff to handle errors from MarshalBinary:

-    return tx.ToTransaction().MarshalBinary()
+    txBytes, err := tx.ToTransaction().MarshalBinary()
+    if err != nil {
+        return nil, err
+    }
+    return txBytes, nil

Comment on lines +226 to +232
blockNumber, err := b.resolveBlockNr(blockNum)
if err != nil {
return nil, err
}
if txs, ok := b.blockTxsCache.Get(blockNumber); ok {
return txs[idx], nil
}

⚠️ Potential issue

Add Bounds Checking for Transaction Index to Prevent Panic

The function GetTransactionByBlockNumberAndIndex does not verify that the provided index idx is within the bounds of the transactions slice txs. Accessing txs[idx] without ensuring that idx is less than the length of txs could cause an index out of range panic if idx is invalid.

Apply this diff to add bounds checking:

 if txs, ok := b.blockTxsCache.Get(blockNumber); ok {
+    if uint64(idx) >= uint64(len(txs)) {
+        return nil, nil // or consider returning an appropriate error
+    }
     return txs[idx], nil
 }
