ObscuroScan: serve oldest batches first #1635
Conversation
The provided walkthrough and changes summary are already comprehensive and aligned with the given instructions. Therefore, no further updates are necessary.
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (3)
- go/common/query_types.go (1 hunks)
- go/host/db/batches.go (1 hunks)
- tools/obscuroscan_v2/frontend/src/components/BatchDataGrid.vue (3 hunks)
Files skipped from review due to trivial changes (1)
- go/common/query_types.go
Additional comments: 5
tools/obscuroscan_v2/frontend/src/components/BatchDataGrid.vue (3)
- 18-21: The new column "No. Transactions" is added correctly. It uses a formatter function to display the length of `txHashes` if it exists, otherwise it displays 0. This is a good practice as it handles the case where `txHashes` might be `null` or `undefined`.
- 48-54: The `onMounted` lifecycle hook is used to fetch batch data when the component is mounted. This is a good practice as it ensures that the data is loaded as soon as the component is rendered.
- 83-88: The `handleSizeChange` and `handleCurrentChange` methods have been updated to fetch data whenever the page size or current page changes. This is a good practice as it ensures that the data displayed is always up-to-date.

go/host/db/batches.go (2)
- 132-133: The function `GetBatchBySequenceNumber` now returns `*common.ExtBatch` instead of `*common.Batch`. Ensure that all calls to this function throughout the codebase have been updated to handle the new return type.
- 135-161: The `GetBatchListing` function has been updated to fetch batches in reverse order, starting from the latest batch and moving backwards based on the pagination offset, so that the most recent batches are served first. The returned `BatchListingResponse` includes the batch headers and transaction hashes. This may impact performance when there are a large number of batches, as it requires looping through the batches in reverse order; consider optimizing this function if performance becomes an issue (see the pagination sketch below).
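To make the reverse-pagination arithmetic concrete, here is a small standalone sketch (illustrative only, not project code; `batchRange` and the sample head sequence number are made up):

```go
package main

import "fmt"

// batchRange mirrors the arithmetic described above: given the head batch's
// sequence number and a pagination (offset, size), it returns the inclusive
// [from..to] range of sequence numbers to fetch, newest first.
func batchRange(headSeqNo, offset uint64, size uint) (from, to uint64) {
	from = headSeqNo - offset // newest batch on this page
	toSigned := int(from) - int(size) + 1
	if toSigned < 0 { // clamp so the range never goes below sequence number 0
		toSigned = 0
	}
	return from, uint64(toSigned)
}

func main() {
	// With a head batch at sequence 100: offset 0, size 10 covers 100..91,
	// and offset 30 (page 3 of size 10) covers 70..61.
	for _, offset := range []uint64{0, 30} {
		from, to := batchRange(100, offset, 10)
		fmt.Printf("offset %d -> sequence numbers %d down to %d\n", offset, from, to)
	}
}
```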
go/host/db/batches.go (Outdated)

```go
// GetBatchListing returns latest BatchListingResponse given a pagination. For example, page 0, size 10 will return the
// latest 10 batches.
```
Would it make sense to make all listings return in reverse order, and to have a comment making that super clear?
Yeah good point, I'll see if I can make this more general, the rollups should behave the same.
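For illustration of the "make it more general" idea, a rough sketch of a shared reverse-listing helper that the batch, rollup and block listings could all use; the name `listReverse` and the `fetch` callback are hypothetical, and it relies on Go 1.18+ generics:

```go
// listReverse walks heights/sequence numbers from newest to oldest for the
// requested page. fetch reports ok=false for items that aren't stored, so the
// caller decides whether gaps are tolerated. Names are illustrative only.
func listReverse[T any](head, offset uint64, size uint, fetch func(i uint64) (*T, bool)) []T {
	items := make([]T, 0, size)
	if offset > head {
		return items // page is entirely past the available history
	}
	from := head - offset
	to := int(from) - int(size) + 1
	if to < 0 {
		to = 0
	}
	for i := from; i >= uint64(to); i-- {
		if item, ok := fetch(i); ok {
			items = append(items, *item)
		}
		if i == 0 {
			break // avoid uint64 wrap-around when the range reaches 0
		}
	}
	return items
}
```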
```js
onMounted(() => {
  store.startPolling()
```
I wonder if this has the desired effect... It seems like the `RotatingBatchesItem` is also activating the `store.startPolling()`.
Yeah it seemed to work fine when I tested it but I'll give it another kick to make sure it never gets into a bad state or anything. Rotating batches item is the one on the homepage that livestreams the new batches which I think makes sense.
Review Status
Actionable comments generated: 5
Configuration used: CodeRabbit UI
Files selected for processing (3)
- go/host/db/batches.go (1 hunks)
- go/host/db/blocks.go (2 hunks)
- tools/obscuroscan_v2/frontend/src/components/RotatingRollupsItem.vue (1 hunks)
Additional comments: 1
go/host/db/batches.go (1)
- 161-162: The `PublicBatch` struct now includes the `TxHashes` field. This is a good addition as it allows the transaction hashes to be included in the JSON representation of the struct.
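For context, a minimal sketch of the JSON shape this gives the frontend; the struct below is a stand-in, and the field names/tags are illustrative rather than copied from the real `common.PublicBatch`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// publicBatch is an illustrative stand-in: a couple of header fields plus TxHashes,
// which lets the UI derive the tx count (txHashes ? txHashes.length : 0)
// and later link through to individual transactions.
type publicBatch struct {
	SequencerOrderNo uint64   `json:"sequencerOrderNo"`
	TxHashes         []string `json:"txHashes"`
}

func main() {
	b := publicBatch{SequencerOrderNo: 42, TxHashes: []string{"0xaa", "0xbb"}}
	out, _ := json.Marshal(b)
	fmt.Printf("%s -> tx count %d\n", out, len(b.TxHashes))
}
```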
```diff
 setup() {
   const rollupsStore = useRollupStore()

-  // Start polling when the component is mounted
+  // Reload rollup data onMount
   onMounted(() => {
-    rollupsStore.startPolling()
+    rollupsStore.fetch()
   })
-
-  // Ensure to stop polling when component is destroyed or deactivated
-  onUnmounted(() => {
-    rollupsStore.stopPolling()
-  })
```
The change simplifies the logic by combining the polling start and data fetching into a single action. However, ensure that the removal of the stopPolling call does not lead to any memory leaks or unnecessary network requests when the component is unmounted.
```diff
- // Reload rollup data onMount
- onMounted(() => {
-   rollupsStore.fetch()
- })
+ // Fetch rollup data onMount and stop polling onUnmount
+ onMounted(rollupsStore.fetch)
+ onUnmounted(rollupsStore.stopPolling)
```
Committable suggestion

> [!IMPORTANT]
> Carefully review the code before committing. Make sure it correctly replaces the highlighted code, has no missing lines and no indentation issues.

```js
setup() {
  const rollupsStore = useRollupStore()

  // Fetch rollup data onMount and stop polling onUnmount
  onMounted(rollupsStore.fetch)
  onUnmounted(rollupsStore.stopPolling)
}
```
go/host/db/blocks.go (Outdated)

```diff
 	return nil
 }

-// GetBlockListing returns a list of blocks given the pagination
+// GetBlockListing returns latest L1 blocks given the pagination.
+// For example, page 0, size 10 will return the latest 10 blocks.
 func (db *DB) GetBlockListing(pagination *common.QueryPagination) (*common.BlockListingResponse, error) {
+	// fetch the total blocks so we can paginate
+	tipHeader, err := db.GetBlockAtTip()
+	if err != nil {
+		return nil, err
+	}
+
+	blocksFrom := tipHeader.Number.Uint64() - pagination.Offset
+	blocksToInclusive := int(blocksFrom) - int(pagination.Size) + 1
+	// if blocksToInclusive would be negative, set it to 0
+	if blocksToInclusive < 0 {
+		blocksToInclusive = 0
+	}
+
 	// fetch requested batches
 	var blocks []common.PublicBlock
-	for i := pagination.Offset; i < pagination.Offset+uint64(pagination.Size); i++ {
+	for i := blocksFrom; i > uint64(blocksToInclusive); i-- {
 		header, err := db.GetBlockByHeight(big.NewInt(int64(i)))
 		if err != nil && !errors.Is(err, errutil.ErrNotFound) {
```
The logic for calculating the range of blocks to fetch is correct. However, consider adding error handling for the case when `pagination.Offset` is greater than `tipHeader.Number.Uint64()`. Since `blocksFrom` is a `uint64`, the subtraction would wrap around to a huge value rather than go negative, which could lead to unexpected behavior.

```diff
+ if pagination.Offset > tipHeader.Number.Uint64() {
+ 	return nil, errors.New("offset is greater than total number of blocks")
+ }
```
Committable suggestion

> [!IMPORTANT]
> Carefully review the code before committing. Make sure it correctly replaces the highlighted code, has no missing lines and no indentation issues.

```go
	return nil
}

// GetBlockListing returns latest L1 blocks given the pagination.
// For example, page 0, size 10 will return the latest 10 blocks.
func (db *DB) GetBlockListing(pagination *common.QueryPagination) (*common.BlockListingResponse, error) {
	// fetch the total blocks so we can paginate
	tipHeader, err := db.GetBlockAtTip()
	if err != nil {
		return nil, err
	}
	if pagination.Offset > tipHeader.Number.Uint64() {
		return nil, errors.New("offset is greater than total number of blocks")
	}

	blocksFrom := tipHeader.Number.Uint64() - pagination.Offset
	blocksToInclusive := int(blocksFrom) - int(pagination.Size) + 1
	// if blocksToInclusive would be negative, set it to 0
	if blocksToInclusive < 0 {
		blocksToInclusive = 0
	}

	// fetch requested batches
	var blocks []common.PublicBlock
	for i := blocksFrom; i > uint64(blocksToInclusive); i-- {
		header, err := db.GetBlockByHeight(big.NewInt(int64(i)))
		if err != nil && !errors.Is(err, errutil.ErrNotFound) {
```
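To make the wrap-around concern concrete, a tiny standalone example (not project code) of why the guard before the subtraction matters for a `uint64`:

```go
package main

import (
	"errors"
	"fmt"
)

// safeBlocksFrom mirrors the subtraction in GetBlockListing. Without the guard,
// an offset past the tip wraps around to a huge uint64 instead of going negative.
func safeBlocksFrom(tipHeight, offset uint64) (uint64, error) {
	if offset > tipHeight {
		return 0, errors.New("offset is greater than total number of blocks")
	}
	return tipHeight - offset, nil
}

func main() {
	tip := uint64(100)
	fmt.Println(tip - 150)                // wraps around: 18446744073709551566
	fmt.Println(safeBlocksFrom(tip, 150)) // 0, plus an explicit error
}
```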
```diff
 	return db.GetBatch(*batchHash)
 }

-// GetBatchListing returns BatchListingResponse given a pagination
+// GetBatchListing returns latest batches given a pagination.
+// For example, page 0, size 10 will return the latest 10 batches.
+// todo change this when the db changes - this is not super performant
 func (db *DB) GetBatchListing(pagination *common.QueryPagination) (*common.BatchListingResponse, error) {
-	// fetch requested batches
+	// fetch the total batches so we can paginate
+	header, err := db.GetHeadBatchHeader()
+	if err != nil {
+		return nil, err
+	}
+
+	batchesFrom := header.SequencerOrderNo.Uint64() - pagination.Offset
+	batchesToInclusive := int(batchesFrom) - int(pagination.Size) + 1
+	// batchesToInclusive can't go below zero
+	if batchesToInclusive < 0 {
+		batchesToInclusive = 0
+	}
+
 	var batches []common.PublicBatch
-	for i := pagination.Offset; i < pagination.Offset+uint64(pagination.Size); i++ {
+	// fetch requested batches - looping backwards from the latest batch subtracting any pagination offset
+	// (e.g. front-end showing latest batches first, page 3 of size 10 would be skipping the 30 most recent batches)
+	for i := batchesFrom; i >= uint64(batchesToInclusive); i-- {
 		extBatch, err := db.GetBatchBySequenceNumber(big.NewInt(int64(i)))
 		if err != nil && !errors.Is(err, errutil.ErrNotFound) {
 			return nil, err
 		}
 		if extBatch != nil {
-			batches = append(batches, common.PublicBatch{BatchHeader: *extBatch.Header})
+			batches = append(batches, common.PublicBatch{BatchHeader: *extBatch.Header, TxHashes: extBatch.TxHashes})
 		}
 	}
-	// fetch the total batches so we can paginate
-	header, err := db.GetHeadBatchHeader()
-	if err != nil {
-		return nil, err
-	}
-
 	return &common.BatchListingResponse{
 		BatchesData: batches,
```
The `GetBatchListing` function has been updated to fetch batches in reverse order, which is a good change for serving the latest batches first. However, the function could be optimized by fetching only the required batches from the database instead of fetching them one by one and filtering. This would reduce the load on the database and improve performance. Consider using a database query to fetch only the required batches.
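As a hedged sketch of that optimisation: the range accessor `GetBatchesBySequenceRange` below does not exist today; it and the query behind it are purely hypothetical, shown only to illustrate pushing the pagination down to the store.

```go
// getBatchListing is a hypothetical variant that delegates pagination to the
// store instead of issuing one lookup per sequence number.
func (db *DB) getBatchListing(pagination *common.QueryPagination) (*common.BatchListingResponse, error) {
	head, err := db.GetHeadBatchHeader()
	if err != nil {
		return nil, err
	}
	from := head.SequencerOrderNo.Uint64() - pagination.Offset
	to := int(from) - int(pagination.Size) + 1
	if to < 0 {
		to = 0
	}
	// hypothetical accessor, e.g. backed by:
	//   SELECT ... WHERE sequence BETWEEN ? AND ? ORDER BY sequence DESC
	extBatches, err := db.GetBatchesBySequenceRange(uint64(to), from)
	if err != nil {
		return nil, err
	}
	batches := make([]common.PublicBatch, 0, len(extBatches))
	for _, b := range extBatches {
		batches = append(batches, common.PublicBatch{BatchHeader: *b.Header, TxHashes: b.TxHashes})
	}
	return &common.BatchListingResponse{BatchesData: batches}, nil
}
```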
```diff
 		extBatch, err := db.GetBatchBySequenceNumber(big.NewInt(int64(i)))
 		if err != nil && !errors.Is(err, errutil.ErrNotFound) {
 			return nil, err
 		}
 		if extBatch != nil {
-			batches = append(batches, common.PublicBatch{BatchHeader: *extBatch.Header})
+			batches = append(batches, common.PublicBatch{BatchHeader: *extBatch.Header, TxHashes: extBatch.TxHashes})
 		}
```
The error handling here is good, as it correctly distinguishes between a not found error and other errors. However, it would be better to log the error for debugging purposes. Consider adding logging here.
```go
if err != nil && !errors.Is(err, errutil.ErrNotFound) {
	log.Printf("Error fetching batch by sequence number: %v", err)
	return nil, err
}
```
```go
if rollup != nil {
	listedBlock.RollupHash = rollup.Hash()
	fmt.Println("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
}
```
The addition of the `RollupHash` to the `listedBlock` is a good change. However, the `fmt.Println` statement seems to be a debug statement and should be removed or replaced with proper logging.

```diff
- fmt.Println("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
+ log.Info("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
```
Committable suggestion

> [!IMPORTANT]
> Carefully review the code before committing. Make sure it correctly replaces the highlighted code, has no missing lines and no indentation issues.

```go
if rollup != nil {
	listedBlock.RollupHash = rollup.Hash()
	log.Info("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
}
```
```go
listedBlock := common.PublicBlock{BlockHeader: *header}
if rollup != nil {
	listedBlock.RollupHash = rollup.Hash()
	fmt.Println("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
```
Will the whole func blow up if the block header is not available? I.e. if the header is nil and the err is `errutil.ErrNotFound`?
Oh yeah good spot, thanks!
```diff
 // GetBlockListing returns latest L1 blocks given the pagination.
 // For example, page 0, size 10 will return the latest 10 blocks.
 func (db *DB) GetBlockListing(pagination *common.QueryPagination) (*common.BlockListingResponse, error) {
 	// fetch the total blocks so we can paginate
 	tipHeader, err := db.GetBlockAtTip()
 	if err != nil {
 		return nil, err
 	}

 	blocksFrom := tipHeader.Number.Uint64() - pagination.Offset
 	blocksToInclusive := int(blocksFrom) - int(pagination.Size) + 1
 	// if blocksToInclusive would be negative, set it to 0
 	if blocksToInclusive < 0 {
 		blocksToInclusive = 0
 	}

 	// fetch requested batches
 	var blocks []common.PublicBlock
-	for i := pagination.Offset; i < pagination.Offset+uint64(pagination.Size); i++ {
+	for i := blocksFrom; i > uint64(blocksToInclusive); i-- {
 		header, err := db.GetBlockByHeight(big.NewInt(int64(i)))
-		if err != nil && !errors.Is(err, errutil.ErrNotFound) {
+		if err != nil {
 			return nil, err
 		}
```
The logic for calculating the range of blocks to fetch is correct. However, consider adding error handling for the case when `pagination.Offset` is greater than `tipHeader.Number.Uint64()`. `blocksFrom` is a `uint64`, so the subtraction would wrap around rather than go negative, and that case is not handled currently.

```diff
+ if pagination.Offset > tipHeader.Number.Uint64() {
+ 	return nil, errors.New("offset is greater than total number of blocks")
+ }
```
Committable suggestion

> [!IMPORTANT]
> Carefully review the code before committing. Make sure it correctly replaces the highlighted code, has no missing lines and no indentation issues.

```go
// GetBlockListing returns latest L1 blocks given the pagination.
// For example, page 0, size 10 will return the latest 10 blocks.
func (db *DB) GetBlockListing(pagination *common.QueryPagination) (*common.BlockListingResponse, error) {
	// fetch the total blocks so we can paginate
	tipHeader, err := db.GetBlockAtTip()
	if err != nil {
		return nil, err
	}
	if pagination.Offset > tipHeader.Number.Uint64() {
		return nil, errors.New("offset is greater than total number of blocks")
	}

	blocksFrom := tipHeader.Number.Uint64() - pagination.Offset
	blocksToInclusive := int(blocksFrom) - int(pagination.Size) + 1
	// if blocksToInclusive would be negative, set it to 0
	if blocksToInclusive < 0 {
		blocksToInclusive = 0
	}

	// fetch requested batches
	var blocks []common.PublicBlock
	for i := blocksFrom; i > uint64(blocksToInclusive); i-- {
		header, err := db.GetBlockByHeight(big.NewInt(int64(i)))
		if err != nil {
			return nil, err
		}
```
```diff
 			return nil, err
 		}

-		if header != nil {
-			listedBlock := common.PublicBlock{BlockHeader: *header}
-			if rollup != nil {
-				listedBlock.RollupHash = rollup.Hash()
-				fmt.Println("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
-			}
-			blocks = append(blocks, listedBlock)
+		listedBlock := common.PublicBlock{BlockHeader: *header}
+		if rollup != nil {
+			listedBlock.RollupHash = rollup.Hash()
+			fmt.Println("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
 		}
```
The debug print statement should be replaced with a proper logging statement, as previously suggested. Also, consider handling the case when `rollup` is `nil` but `err` is not `errutil.ErrNotFound`. This could indicate an unexpected error during the execution of `GetRollupHeaderByBlock`.

```diff
- fmt.Println("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
+ log.Info("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
```
Committable suggestion

> [!IMPORTANT]
> Carefully review the code before committing. Make sure it correctly replaces the highlighted code, has no missing lines and no indentation issues.

```go
	return nil, err
}

listedBlock := common.PublicBlock{BlockHeader: *header}
if rollup != nil {
	listedBlock.RollupHash = rollup.Hash()
	log.Info("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
}
```
```diff
 		listedBlock := common.PublicBlock{BlockHeader: *header}
 		if rollup != nil {
 			listedBlock.RollupHash = rollup.Hash()
 			fmt.Println("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
 		}
+		blocks = append(blocks, listedBlock)
 	}
-	// fetch the total blocks so we can paginate
-	tipHeader, err := db.GetBlockAtTip()
-	if err != nil {
-		return nil, err
```
The loop that fetches blocks could potentially return an error if `GetBlockByHeight` fails. Consider adding a mechanism to handle partial results, where some blocks are fetched successfully before an error occurs. This would improve the user experience by providing as much data as possible, even in the event of an error.
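One way to read that suggestion, sketched against the loop above (a hedged illustration, not the PR's code): tolerate missing heights and only abort on unexpected errors, so callers still get a partial page.

```go
for i := blocksFrom; i > uint64(blocksToInclusive); i-- {
	header, err := db.GetBlockByHeight(big.NewInt(int64(i)))
	if errors.Is(err, errutil.ErrNotFound) {
		continue // tolerate a gap and return whatever blocks were found
	}
	if err != nil {
		return nil, err // unexpected error: still abort the listing
	}
	blocks = append(blocks, common.PublicBlock{BlockHeader: *header})
}
```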
Why this change is needed
We want to show the latest batches first in ObscuroScan; it's very rare that someone would want to look at the first batches of the network.
Also added the batch tx hashes so we can use them for tx count and to link to txs potentially.
PR checks pre-merging
Please indicate below by ticking the checkbox that you have read and performed the required PR checks.