feat: cache on jsonrpc #85
Conversation
Walkthrough: The pull request introduces caching to the JSON-RPC backend, adding per-block and per-transaction caches and related configuration.
Possibly related PRs
Codecov Report
Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main      #85      +/-   ##
==========================================
- Coverage   27.85%   27.77%   -0.09%
==========================================
  Files         124      124
  Lines       13777    13818      +41
==========================================
  Hits         3838     3838
- Misses       9378     9419      +41
  Partials      561      561
```
Actionable comments posted: 6
🧹 Outside diff range and nitpick comments (16)
indexer/reader.go (1)
Lines 22-30: Consider implementing a tiered caching strategy.
The transition to hash-based lookups enables efficient caching. Consider implementing:
- L1 cache: in-memory cache for frequently accessed hashes
- L2 cache: persistent cache for transaction data
- A cache invalidation strategy based on block heights
This would further improve JSON-RPC performance while maintaining memory efficiency.
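As a concrete illustration of the tiered layout suggested above, here is a minimal, self-contained Go sketch. The type and method names are hypothetical, and plain maps stand in for a real LRU (L1) and a persistent store (L2); height-based invalidation handles rollbacks:

```go
package main

import "fmt"

// TieredCache is an illustrative sketch, not the repository's actual type.
type TieredCache struct {
	l1       map[string][]byte // hot, in-memory tier
	l2       map[string][]byte // stand-in for a persistent tier
	heightOf map[string]uint64 // block height that produced each entry
}

func NewTieredCache() *TieredCache {
	return &TieredCache{
		l1:       map[string][]byte{},
		l2:       map[string][]byte{},
		heightOf: map[string]uint64{},
	}
}

// Get checks L1 first, then L2, promoting L2 hits into L1.
func (c *TieredCache) Get(hash string) ([]byte, bool) {
	if v, ok := c.l1[hash]; ok {
		return v, true
	}
	if v, ok := c.l2[hash]; ok {
		c.l1[hash] = v // promote to the hot tier
		return v, true
	}
	return nil, false
}

// Put writes through both tiers and records the originating height.
func (c *TieredCache) Put(hash string, height uint64, v []byte) {
	c.l1[hash] = v
	c.l2[hash] = v
	c.heightOf[hash] = height
}

// InvalidateFrom drops every entry at or above the given height, e.g. after a reorg.
func (c *TieredCache) InvalidateFrom(height uint64) {
	for h, ht := range c.heightOf {
		if ht >= height {
			delete(c.l1, h)
			delete(c.l2, h)
			delete(c.heightOf, h)
		}
	}
}

func main() {
	c := NewTieredCache()
	c.Put("0xabc", 10, []byte("tx-data"))
	v, _ := c.Get("0xabc")
	fmt.Println(string(v))
	c.InvalidateFrom(10)
	_, ok := c.Get("0xabc")
	fmt.Println(ok)
}
```

A production version would bound the L1 tier with an LRU and back the L2 tier with the node's database rather than a map.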
jsonrpc/backend/block.go (3)
Lines 61-63: Consider a cache management strategy.
While the caching implementation is good, consider the following improvements:
- Handle cache errors instead of silently ignoring them with `_`
- Implement a cache expiration strategy to prevent unbounded memory growth
- Add metrics for cache hit/miss rates
Consider introducing a cache configuration struct:

```go
type CacheConfig struct {
	MaxSize       int
	TTL           time.Duration
	EnableMetrics bool
}
```

Also applies to: lines 78-79.
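To make the metrics suggestion above concrete, here is a minimal stdlib-only Go sketch of hit/miss counters that could wrap cache lookups. `CacheMetrics` and its methods are illustrative names, not existing APIs in this codebase:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// CacheMetrics counts cache hits and misses; atomic counters keep it
// safe under concurrent JSON-RPC requests.
type CacheMetrics struct {
	hits, misses atomic.Uint64
}

// Record tallies one lookup result.
func (m *CacheMetrics) Record(hit bool) {
	if hit {
		m.hits.Add(1)
	} else {
		m.misses.Add(1)
	}
}

// HitRate returns hits / total lookups, or 0 before any lookup.
func (m *CacheMetrics) HitRate() float64 {
	h, mi := m.hits.Load(), m.misses.Load()
	if h+mi == 0 {
		return 0
	}
	return float64(h) / float64(h+mi)
}

func main() {
	var m CacheMetrics
	cache := map[uint64]string{7: "header-7"}
	for _, height := range []uint64{7, 8, 7, 9} {
		_, ok := cache[height]
		m.Record(ok)
	}
	fmt.Println(m.HitRate()) // 0.5: two hits out of four lookups
}
```

The counters could then be exported via the node's existing metrics system.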
Lines 84-92: Enhance error logging specificity.
The error logging could be more informative by including the block hash in the error message.

```diff
-b.logger.Error("failed to get block number by hash", "err", err)
+b.logger.Error("failed to get block number by hash", "hash", hash, "err", err)
```
Lines 101-106: LGTM: Improved error handling.
The addition of explicit nil checks improves robustness. Consider adding the block number to the error log for better debugging.

```diff
-b.logger.Error("failed to get block header by number", "err", err)
+b.logger.Error("failed to get block header by number", "blockNumber", blockNumber, "err", err)
```

indexer/indexer.go (1)
Line range hint 1-180: Consider enhancing the caching strategy.
Since this PR focuses on JSON-RPC caching, consider these architectural improvements:
- Add cache hit/miss metrics for monitoring
- Consider implementing a TTL-based eviction policy for block headers
- Add documentation about the caching behavior and configuration
Would you like me to provide an example implementation of these enhancements?
jsonrpc/namespaces/eth/filters/filter.go (1)
Line 8: Consider adding package-level documentation.
While the code is well-structured, adding package-level documentation would help users understand:
- The purpose and scope of the filters package
- How it integrates with the JSON-RPC backend
- The relationship between Filter, bloomFilter, and log filtering
Add this documentation at the package level:

```diff
 package filters
+
+// Package filters implements Ethereum-compatible log filtering functionality.
+// It provides mechanisms to filter logs based on block ranges, addresses, and topics,
+// using bloom filters for efficient filtering and supporting both synchronous and
+// asynchronous log retrieval.
```

jsonrpc/config/config.go (3)
Lines 47-48: Document the rationale for the DefaultLogCacheSize value.
The choice of 32 blocks as the default cache size seems arbitrary. Please add a comment explaining why this value was chosen, considering factors like memory usage, typical query patterns, and performance implications.
Line 160: Enhance the flag description.
Consider providing more context in the flag description about the performance implications and memory usage of the cache size setting.
Suggested enhancement:

```diff
-startCmd.Flags().Int(flagJSONRPCLogCacheSize, DefaultLogCacheSize, "Maximum number of cached blocks for the log filter")
+startCmd.Flags().Int(flagJSONRPCLogCacheSize, DefaultLogCacheSize, "Maximum number of cached blocks for the log filter. Higher values improve query performance but increase memory usage")
```
Lines 240-242: Enhance the config template documentation.
Consider adding more context about the cache configuration in the template comments.
Suggested enhancement:

```diff
-# LogCacheSize is the maximum number of cached blocks for the log filter.
+# LogCacheSize is the maximum number of cached blocks for the log filter.
+# Higher values improve query performance for frequently accessed blocks
+# but increase memory usage. Set to 0 to disable caching.
 log-cache-size = {{ .JSONRPCConfig.LogCacheSize }}
```

jsonrpc/backend/filters.go (4)
Lines 29-31: Include the block height in the error message for clarity.
Providing the block height in the error message can help with debugging when the number of transactions and receipts do not match.
Apply this diff to enhance the error message:

```diff
 if len(txs) != len(receipts) {
-	return nil, NewInternalError("number of transactions and receipts do not match")
+	return nil, NewInternalError(fmt.Sprintf("number of transactions (%d) and receipts (%d) do not match at block height %d", len(txs), len(receipts), height))
 }
```
Line range hint 34-45: Avoid modifying shared Log instances directly; create copies instead.
Modifying the Log instances retrieved from receipt.Logs can lead to unintended side effects if these logs are used elsewhere in the codebase. It's safer to create copies of the logs before modification.
Apply this diff to create copies of the logs:

```diff
 for i, tx := range txs {
 	receipt := receipts[i]
-	logs := receipt.Logs
+	logs := make([]*coretypes.Log, len(receipt.Logs))
+	for j, originalLog := range receipt.Logs {
+		logCopy := *originalLog // Create a copy of the log
+		logs[j] = &logCopy
+	}
 	for idx, log := range logs {
 		log.BlockHash = blockHeader.Hash()
 		log.BlockNumber = height
 		log.TxHash = tx.Hash
 		log.Index = uint(idx)
 		log.TxIndex = receipt.TransactionIndex
 	}
 	blockLogs = append(blockLogs, logs...)
 }
```
Line range hint 38-42: Correct the computation of log.Index to reflect the log's position within the block.
log.Index should represent the log's index within the entire block, not just within the transaction. The current implementation resets idx for each transaction, which can result in duplicate log indices across different transactions.
Apply this diff to compute the correct log.Index:

```diff
+logIndex := uint(0)
 for i, tx := range txs {
 	receipt := receipts[i]
 	logs := receipt.Logs
 	for _, log := range logs {
 		log.BlockHash = blockHeader.Hash()
 		log.BlockNumber = height
 		log.TxHash = tx.Hash
 		log.TxIndex = receipt.TransactionIndex
-		log.Index = uint(idx)
+		log.Index = logIndex
+		logIndex++
 	}
 	blockLogs = append(blockLogs, logs...)
 }
```
Line 48: Handle the return value of logsCache.Add.
Currently, the return value of b.logsCache.Add(height, blockLogs) is ignored. Note that these LRU caches return a boolean indicating whether an entry was evicted, not an error, so ignoring it is usually safe; if eviction matters (for example, for metrics), capture and act on the returned value, otherwise a brief comment explaining why it is ignored would suffice.

jsonrpc/backend/backend.go (1)
Line range hint 71-93: Resolve inconsistent cache initializations.
In the NewJSONRPCBackend function:
- queuedTxs is initialized using lrucache.New from github.com/hashicorp/golang-lru/v2.
- Other caches are initialized using lru.NewCache from github.com/ethereum/go-ethereum/common/lru.
This inconsistency may cause unexpected behavior due to differences in implementation details between the two libraries.
Recommendation: standardize the cache initialization on a single library. If selecting hashicorp/golang-lru, note that its New function returns both a cache instance and an error, so each cache must be initialized before the struct literal:

```go
func NewJSONRPCBackend(
	app *app.MinitiaApp,
	logger log.Logger,
	svrCtx *server.Context,
	clientCtx client.Context,
	cfg config.JSONRPCConfig,
) (*JSONRPCBackend, error) {
	if cfg.QueuedTransactionCap == 0 {
		cfg.QueuedTransactionCap = config.DefaultQueuedTransactionCap
	}
	if cfg.LogCacheSize == 0 {
		cfg.LogCacheSize = config.DefaultLogCacheSize
	}

	queuedTxs, err := lru.New[string, []byte](cfg.QueuedTransactionCap)
	if err != nil {
		return nil, err
	}
	historyCache, _ := lru.New[cacheKey, processedFees](feeHistoryCacheSize)
	// per block caches
	headerCache, _ := lru.New[uint64, *coretypes.Header](blockCacheLimit)
	blockTxsCache, _ := lru.New[uint64, []*rpctypes.RPCTransaction](blockCacheLimit)
	blockReceiptsCache, _ := lru.New[uint64, []*coretypes.Receipt](blockCacheLimit)
	blockHashCache, _ := lru.New[common.Hash, uint64](blockCacheLimit)
	logsCache, _ := lru.New[uint64, []*coretypes.Log](cfg.LogCacheSize)
	// per tx caches
	txLookupCache, _ := lru.New[common.Hash, *rpctypes.RPCTransaction](txLookupCacheLimit)
	receiptCache, _ := lru.New[common.Hash, *coretypes.Receipt](txLookupCacheLimit)

	return &JSONRPCBackend{
		// ... other fields
		queuedTxs:          queuedTxs,
		historyCache:       historyCache,
		headerCache:        headerCache,
		blockTxsCache:      blockTxsCache,
		blockReceiptsCache: blockReceiptsCache,
		blockHashCache:     blockHashCache,
		logsCache:          logsCache,
		txLookupCache:      txLookupCache,
		receiptCache:       receiptCache,
	}, nil
}
```

jsonrpc/backend/tx.go (2)
Line 297: Consider handling the value returned by txLookupCache.Add.
The Add method's return value is currently being ignored. If caching failures could impact the application, it's advisable to handle or log the result.
If the return value can be safely ignored, consider adding a comment to explain why:

```go
// The cache addition result can be safely ignored as it does not affect critical functionality.
_ = b.txLookupCache.Add(hash, rpcTx)
```
Lines 230-232: Handle missing block transactions in the cache.
In GetTransactionByBlockNumberAndIndex, when retrieving transactions from blockTxsCache, consider handling the case where blockNumber is present but the transaction slice is nil or empty.
Add a check to handle empty transaction slices:

```diff
 if txs, ok := b.blockTxsCache.Get(blockNumber); ok {
+	if len(txs) == 0 {
+		return nil, nil
+	}
 	// Existing bounds check here
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (9)
- indexer/indexer.go (1 hunks)
- indexer/reader.go (1 hunks)
- jsonrpc/backend/backend.go (5 hunks)
- jsonrpc/backend/block.go (5 hunks)
- jsonrpc/backend/feehistory.go (1 hunks)
- jsonrpc/backend/filters.go (2 hunks)
- jsonrpc/backend/tx.go (5 hunks)
- jsonrpc/config/config.go (7 hunks)
- jsonrpc/namespaces/eth/filters/filter.go (4 hunks)
✅ Files skipped from review due to trivial changes (1)
- jsonrpc/backend/feehistory.go
🔇 Additional comments (16)
indexer/reader.go (1)
Lines 22-30: LGTM! Efficient implementation of transaction hash retrieval.
The change from returning full transaction objects to just hashes is a good optimization that:
- Reduces memory usage and data transfer overhead
- Provides better caching potential
- Allows callers to fetch full transaction details only when needed
Let's verify that all consumers are updated to handle the new return type:
✅ Verification successful
Let me gather more information about the interface changes and their impact:
Let me check one more aspect to ensure all consumers are properly updated:
Based on the gathered evidence, I can now provide the final response:
The interface change from TxByBlockAndIndex to TxHashByBlockAndIndex is properly implemented across the codebase. The verification shows:
- The interface change is properly defined in indexer/indexer.go
- The JSON-RPC backend (jsonrpc/backend/tx.go) correctly uses TxHashByBlockAndIndex
- No references to the old method name TxByBlockAndIndex exist in the codebase
- All transaction-related RPC endpoints are properly implemented to handle the hash-first approach
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Search for any remaining calls expecting the old return type
ast-grep --pattern 'TxByBlockAndIndex($$$)'

# Search for new method usage patterns
rg -A 3 'TxHashByBlockAndIndex'
```

Length of output: 1041

Script:

```bash
#!/bin/bash
# Check for any references to the old method name in the entire codebase
rg "TxByBlockAndIndex"

# Look for the interface definition history
rg -B 3 -A 3 "type EVMIndexer interface"

# Check for any tests that might need updates
fd -e test.go -x rg -l "TxByBlockAndIndex|TxHashByBlockAndIndex" {}
```

Length of output: 398

Script:

```bash
#!/bin/bash
# Check the implementation and usage in JSON-RPC handlers
rg -B 5 -A 5 "func.*GetTransactionBy(Block|Hash|Number)"

# Look for any potential mock implementations that need updates
fd "mock" -t f -x rg -l "TxByBlockAndIndex|TxHashByBlockAndIndex" {}

# Check test files specifically in the indexer and jsonrpc directories
fd -e test.go -t f . indexer jsonrpc -x cat {}
```

Length of output: 4509
jsonrpc/backend/block.go (3)
Line 26: LGTM: Improved block resolution with caching.
The change improves code organization by delegating block number resolution to a dedicated caching method.

Lines 119-127: LGTM: Well-structured code reuse.
Good refactoring to reuse resolveBlockNrOrHash and maintain consistent error handling patterns.

Lines 129-145: Verify thread safety and improve error handling.
The new caching implementation needs attention in two areas:
- Thread safety: ensure the cache operations are safe for concurrent access
- Error handling: consider handling cache results instead of ignoring them
Consider using sync.Map or adding mutex protection:

```go
type JSONRPCBackend struct {
	cacheMu sync.RWMutex
	// ... other fields
}

func (b *JSONRPCBackend) blockNumberByHash(hash common.Hash) (uint64, error) {
	b.cacheMu.RLock()
	if number, ok := b.blockHashCache.Get(hash); ok {
		b.cacheMu.RUnlock()
		return number, nil
	}
	b.cacheMu.RUnlock()

	// ... get from indexer ...

	b.cacheMu.Lock()
	defer b.cacheMu.Unlock()
	b.blockHashCache.Add(hash, number)
	return number, nil
}
```

indexer/indexer.go (1)
Line 33: Interface change aligns well with caching objectives.
The addition of TxHashByBlockAndIndex replacing TxByBlockAndIndex is a good architectural decision as it:
- Enables more efficient two-step lookups (hash first, then full tx if needed)
- Reduces memory usage by not loading full transactions unnecessarily
- Supports the PR's caching objectives by allowing fine-grained cache control
Let's verify that all implementers have been updated:
✅ Verification successful
Interface change has been properly implemented across the codebase
The verification shows that:
- The new TxHashByBlockAndIndex method is properly implemented in EVMIndexerImpl (indexer/reader.go)
- The method is being used correctly in the backend (jsonrpc/backend/tx.go)
- No traces of the old TxByBlockAndIndex method remain in the codebase
- All interface implementers have been updated to use the new method
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Search for any implementations of EVMIndexer interface
ast-grep --pattern 'func ($_ $_) TxByBlockAndIndex($$$) { $$$ }'

# Search for any calls to the old method
rg "TxByBlockAndIndex"
```

Length of output: 117

Script:

```bash
#!/bin/bash
# Try a different approach to find implementations and usages

# 1. Find all files that might implement the interface
rg -l "EVMIndexer"

# 2. Search for struct types that might implement the interface
ast-grep --pattern 'type $name struct'

# 3. Look for both old and new method implementations
rg "TxByBlockAndIndex|TxHashByBlockAndIndex" -A 3

# 4. Look specifically in test files for any usage
fd -e test.go --exec rg "TxByBlockAndIndex|TxHashByBlockAndIndex" {}
```

Length of output: 1357
jsonrpc/namespaces/eth/filters/filter.go (2)
Lines 213-214: Great improvement: using the standard library's slices.Contains.
Replacing the custom includes function with slices.Contains from the standard library is a good improvement that:
- Reduces maintenance burden
- Leverages well-tested standard library code
- Improves code readability
Also applies to: 224-225
Line 130: Verify the impact of the error handling change.
The error handling has been modified to return immediately on error, which could be a breaking change if clients were previously relying on partial results even when errors occurred.
Let's verify the error handling behavior in client code:
✅ Verification successful
Let me gather more context about the error handling changes by examining the previous implementation and current changes.
Based on the git history, I can now provide a final response about the error handling change:
Error handling change is safe and more consistent
The modification to error handling is actually an improvement in code clarity without being a breaking change. The git history shows that both the old and new implementations return partial logs with errors; the only difference is that the old code had an unnecessary `if err != nil` check. The behavior remains the same:
- Both versions collect logs in the channel-based loop until an error occurs
- Both versions return any collected logs along with the error when one is encountered
The change simply removes redundant code while maintaining the same functionality, making it more maintainable without introducing any breaking changes.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Search for direct calls to Filter.Logs() to identify potentially affected clients
ast-grep --pattern 'filter.Logs($$$)'

# Search for error handling patterns around Logs() calls
rg -B 3 -A 3 'if.*err.*:=.*Logs\('
```

Length of output: 506

Script:

```bash
#!/bin/bash
# Get git history of the file to see the previous error handling
git log -p jsonrpc/namespaces/eth/filters/filter.go | grep -A 10 -B 10 "return logs, err"

# Look for any tests that might verify the error handling behavior
fd "filter.*test.*go" --exec cat {}
```

Length of output: 1510
jsonrpc/config/config.go (4)
Lines 110-111: LGTM!
The new field is well-documented and follows the existing struct field patterns.
Lines 138-139: LGTM!
The initialization is consistent with other fields and uses the defined default constant.
Line 181: LGTM!
The config loading implementation follows the established pattern correctly.
Lines 47-48: Verify memory usage implications.
An unbounded cache size could lead to excessive memory usage. Let's verify whether there are any existing memory-related issues or discussions in the codebase.
Also applies to: lines 110-111.
jsonrpc/backend/backend.go (1)
Lines 27-38: ⚠️ Potential issue — Standardize cache implementations in the JSONRPCBackend struct.
Within the JSONRPCBackend struct, there is inconsistent usage of cache types:
- queuedTxs uses *lrucache.Cache from github.com/hashicorp/golang-lru/v2.
- Other caches like historyCache, headerCache, etc., use *lru.Cache from github.com/ethereum/go-ethereum/common/lru.
This inconsistency can lead to confusion and make the code harder to maintain.
Recommendation: align all cache fields to use the same cache library. If choosing hashicorp/golang-lru:

```diff
 type JSONRPCBackend struct {
 	// ...
-	queuedTxs    *lrucache.Cache[string, []byte]
+	queuedTxs    *lru.Cache[string, []byte]
 	historyCache *lru.Cache[cacheKey, processedFees]

 	// per block caches
 	headerCache        *lru.Cache[uint64, *coretypes.Header]
 	blockTxsCache      *lru.Cache[uint64, []*rpctypes.RPCTransaction]
 	blockReceiptsCache *lru.Cache[uint64, []*coretypes.Receipt]
 	blockHashCache     *lru.Cache[common.Hash, uint64]
 	logsCache          *lru.Cache[uint64, []*coretypes.Log]

 	// per tx caches
 	txLookupCache *lru.Cache[common.Hash, *rpctypes.RPCTransaction]
 	receiptCache  *lru.Cache[common.Hash, *coretypes.Receipt]
 }
```

Ensure that all methods and initializations are updated to reflect this change.
Likely invalid or redundant comment.
jsonrpc/backend/tx.go (4)
Lines 357-385: Efficient retrieval and caching in GetBlockReceipts.
The implementation of GetBlockReceipts effectively retrieves transactions and receipts for a block, with appropriate caching mechanisms. This enhances performance by minimizing redundant data fetches.
Lines 388-407: Consistent caching in the getTransaction method.
The getTransaction method properly checks the cache before querying the indexer and adds the result to the cache. This approach improves efficiency and maintains consistency across transaction retrievals.
Lines 410-430: Proper error handling and caching in the getReceipt method.
The getReceipt function includes appropriate error handling for not-found errors and caches receipts upon retrieval, enhancing performance for subsequent access.
Lines 279-281: ⚠️ Potential issue — Validate the transaction before caching.
Before adding the transaction to the cache in GetRawTransactionByHash, ensure that rpcTx is not nil to avoid potential nil pointer dereferences.
Apply this diff to add a nil check:

```diff
 if rpcTx == nil {
 	return nil, nil
 }
+// Ensure rpcTx is valid before caching
 _ = b.txLookupCache.Add(hash, rpcTx)
```
Likely invalid or redundant comment.
```go
if blockLogs, ok := b.logsCache.Get(height); ok {
	return blockLogs, nil
}
```
Ensure thread-safe access to the logsCache.
The current implementation accesses b.logsCache without any synchronization mechanisms. If JSONRPCBackend is accessed concurrently, consider using a thread-safe cache or adding synchronization to prevent data races.
```go
blockHeader, err := b.GetHeaderByNumber(rpc.BlockNumber(height))
if err != nil {
	return nil, err
} else if blockHeader == nil {
	return nil, nil
```
Handle the case when GetHeaderByNumber returns nil.
Currently, when blockHeader is nil, the function returns (nil, nil). This could lead to a nil slice being returned to the caller, which might not be expected. Consider returning an empty slice instead to avoid potential nil dereference errors in the calling code.
Apply this diff to return an empty slice:

```diff
 } else if blockHeader == nil {
-	return nil, nil
+	return []*coretypes.Log{}, nil
 }
```
```go
if err != nil {
	return nil, err
}

blockLogs := []*coretypes.Log{}
```
🛠️ Refactor suggestion
Preallocate the blockLogs slice for a performance improvement.
If the total number of logs can be estimated, preallocating the blockLogs slice can improve performance by reducing allocations.
Apply this diff to preallocate blockLogs (the original declaration is replaced so the variable is not declared twice):

```diff
-blockLogs := []*coretypes.Log{}
+estimatedLogs := 0
+for _, receipt := range receipts {
+	estimatedLogs += len(receipt.Logs)
+}
+blockLogs := make([]*coretypes.Log, 0, estimatedLogs)
```
Committable suggestion was skipped due to low confidence.
"github.com/ethereum/go-ethereum/common/lru" | ||
coretypes "github.com/ethereum/go-ethereum/core/types" |
Use consistent LRU cache libraries for clarity and maintainability.
The code imports two different LRU cache libraries:
- "github.com/ethereum/go-ethereum/common/lru" (imported as lru)
- "github.com/hashicorp/golang-lru/v2" (imported as lrucache)
This mix can lead to confusion and potential bugs, as both libraries provide similar functionality but may have different APIs or behaviors.
Recommendation: choose one LRU cache library and standardize its use throughout the codebase to improve clarity and reduce maintenance overhead.
If opting for hashicorp/golang-lru:

```diff
 import (
 	// ...
-	"github.com/ethereum/go-ethereum/common/lru"
+	lru "github.com/hashicorp/golang-lru/v2"
 )
```

Update all usages of the go-ethereum lru.Cache to the hashicorp lru.Cache (formerly imported as lrucache.Cache), and ensure all cache initializations and method calls are adjusted accordingly.
Committable suggestion was skipped due to low confidence.
```go
return tx.ToTransaction().MarshalBinary()
}
```
Handle the potential error from the MarshalBinary method.
The MarshalBinary method can return an error, but the current implementation does not handle it explicitly. Ignoring the error may lead to unexpected behavior if serialization fails.
Apply this diff to handle errors from MarshalBinary:

```diff
-return tx.ToTransaction().MarshalBinary()
+txBytes, err := tx.ToTransaction().MarshalBinary()
+if err != nil {
+	return nil, err
+}
+return txBytes, nil
```
```go
blockNumber, err := b.resolveBlockNr(blockNum)
if err != nil {
	return nil, err
}

if txs, ok := b.blockTxsCache.Get(blockNumber); ok {
	return txs[idx], nil
}
```
Add bounds checking for the transaction index to prevent a panic.
The function GetTransactionByBlockNumberAndIndex does not verify that the provided index idx is within the bounds of the transactions slice txs. Accessing txs[idx] without ensuring that idx is less than the length of txs could cause an index-out-of-range panic if idx is invalid.
Apply this diff to add bounds checking:

```diff
 if txs, ok := b.blockTxsCache.Get(blockNumber); ok {
+	if uint64(idx) >= uint64(len(txs)) {
+		return nil, nil // or consider returning an appropriate error
+	}
 	return txs[idx], nil
 }
```
Description
Closes: #XXXX
Author Checklist
All items are required. Please add a note to the item if the item is not applicable and
please add links to any relevant follow up issues.
I have...
- included `!` in the type prefix if API or client breaking change

Reviewers Checklist
All items are required. Please add a note if the item is not applicable and please add
your handle next to the items reviewed if you only reviewed selected items.
I have...
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Improvements
Configuration Updates