
ObscuroScan: serve oldest batches first #1635

Merged
BedrockSquirrel merged 3 commits into main from matt/latest-batches-first on Nov 7, 2023

Conversation

BedrockSquirrel (Collaborator)

Why this change is needed

We want to show the latest batches first in ObscuroScan; it's very rare that someone would want to look at the first batches of the network.

Also added the batch tx hashes, so we can use them for the tx count and potentially to link to txs.

PR checks pre-merging

Please indicate below, by ticking the checkbox, that you have read and performed the required PR checks.

  • PR checks reviewed and performed


coderabbitai bot commented Nov 4, 2023

The provided walkthrough and changes summary are already comprehensive and aligned with the given instructions. Therefore, no further updates are necessary.


Tips

Chat with CodeRabbit Bot (@coderabbitai)

  • If you reply to a review comment from CodeRabbit, the bot will automatically respond.
  • To engage with CodeRabbit bot directly around the specific lines of code in the PR, mention @coderabbitai in your review comment
  • Note: Review comments are made on code diffs or files, not on the PR overview.
  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger a review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai help to get help.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.

Note: For conversation with the bot, please use the review comments on code diffs or files.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • The JSON schema for the configuration file is available here.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/coderabbit-overrides.v2.json

BedrockSquirrel changed the title from "ObscuroScan: server oldest batches first" to "ObscuroScan: serve oldest batches first" on Nov 4, 2023

coderabbitai bot left a comment


Review Status

Actionable comments generated: 0

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR and between 5c3fc3c and 93e94e8.
Files selected for processing (3)
  • go/common/query_types.go (1 hunks)
  • go/host/db/batches.go (1 hunks)
  • tools/obscuroscan_v2/frontend/src/components/BatchDataGrid.vue (3 hunks)
Files skipped from review due to trivial changes (1)
  • go/common/query_types.go
Additional comments: 5
tools/obscuroscan_v2/frontend/src/components/BatchDataGrid.vue (3)
  • 18-21: The new column "No. Transactions" is added correctly. It uses a formatter function to display the length of txHashes if it exists, otherwise it displays 0. This is a good practice as it handles the case where txHashes might be null or undefined.

  • 48-54: The onMounted lifecycle hook is used to fetch batch data when the component is mounted. This is a good practice as it ensures that the data is loaded as soon as the component is rendered.

  • 83-88: The handleSizeChange and handleCurrentChange methods have been updated to fetch data whenever the page size or current page changes. This is a good practice as it ensures that the data displayed is always up-to-date.

go/host/db/batches.go (2)
  • 132-133: The function GetBatchBySequenceNumber now returns *common.ExtBatch instead of *common.Batch. Ensure that all calls to this function throughout the codebase have been updated to handle the new return type.

  • 135-161: The GetBatchListing function has been updated to fetch batches in reverse order, starting from the latest batch and moving backwards based on the pagination offset, so that the newest batches are served first. The returned BatchListingResponse includes the batch headers and transaction hashes. This may impact performance when there are a large number of batches, since each batch is still looked up individually while looping backwards; consider optimizing this function if performance becomes an issue. (A minimal sketch of the reverse-order loop follows below.)
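
For orientation, here is a minimal sketch of the reverse-order listing approach described above, written against the helper names that appear in this PR (GetHeadBatchHeader, GetBatchBySequenceNumber, common.PublicBatch) and assuming the package's existing imports (big, errors, fmt, errutil). It illustrates the technique only and is not the exact merged code:

    // Sketch: list up to `size` batches, newest first, skipping `offset` of the most recent ones.
    func listBatchesNewestFirst(db *DB, offset uint64, size uint) ([]common.PublicBatch, error) {
        head, err := db.GetHeadBatchHeader()
        if err != nil {
            return nil, err
        }
        if offset > head.SequencerOrderNo.Uint64() {
            // the requested page starts beyond the oldest batch, so avoid a uint64 underflow
            return nil, fmt.Errorf("offset %d is beyond the head batch", offset)
        }
        from := head.SequencerOrderNo.Uint64() - offset // highest sequence number to include
        toInclusive := int64(from) - int64(size) + 1    // lowest sequence number to include
        if toInclusive < 0 {
            toInclusive = 0
        }
        batches := make([]common.PublicBatch, 0, size)
        for i := int64(from); i >= toInclusive; i-- {
            extBatch, err := db.GetBatchBySequenceNumber(big.NewInt(i))
            if err != nil && !errors.Is(err, errutil.ErrNotFound) {
                return nil, err
            }
            if extBatch != nil {
                batches = append(batches, common.PublicBatch{BatchHeader: *extBatch.Header, TxHashes: extBatch.TxHashes})
            }
        }
        return batches, nil
    }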

Comment on lines 135 to 136
// GetBatchListing returns latest BatchListingResponse given a pagination. For example, page 0, size 10 will return the
// latest 10 batches.
Contributor


Would it make sense to make all listings return in reverse order, with a comment making that super clear?

Collaborator Author


Yeah, good point. I'll see if I can make this more general; the rollups should behave the same.

onMounted(() => {
store.startPolling()
Contributor


I wonder if this has the desired effect... it seems like the RotatingBatchesItem is also activating store.startPolling().

Collaborator Author


Yeah, it seemed to work fine when I tested it, but I'll give it another kick to make sure it never gets into a bad state or anything. The rotating batches item is the one on the homepage that live-streams the new batches, which I think makes sense.


coderabbitai bot left a comment


Review Status

Actionable comments generated: 5

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR and between 93e94e8 and 2d29b13.
Files selected for processing (3)
  • go/host/db/batches.go (1 hunks)
  • go/host/db/blocks.go (2 hunks)
  • tools/obscuroscan_v2/frontend/src/components/RotatingRollupsItem.vue (1 hunks)
Additional comments: 1
go/host/db/batches.go (1)
  • 161-162: The PublicBatch struct now includes the TxHashes field. This is a good addition, as it allows the transaction hashes to be included in the JSON representation of the struct. (A rough sketch of the implied shape follows below.)
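
A rough sketch of the listing types this implies, taken from how they are constructed in the diff; the JSON tags and the TxHash element type are assumptions rather than code from this PR:

    // Sketch only: shape implied by the diff; JSON tags and the TxHash type are assumed.
    type PublicBatch struct {
        BatchHeader BatchHeader `json:"header"`
        TxHashes    []TxHash    `json:"txHashes"` // hashes of the txs in the batch, usable for counts and links
    }

    type BatchListingResponse struct {
        BatchesData []PublicBatch `json:"batchesData"`
    }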

Comment on lines 26 to 32
  setup() {
    const rollupsStore = useRollupStore()

-   // Start polling when the component is mounted
+   // Reload rollup data onMount
    onMounted(() => {
-     rollupsStore.startPolling()
-   })
-
-   // Ensure to stop polling when component is destroyed or deactivated
-   onUnmounted(() => {
-     rollupsStore.stopPolling()
+     rollupsStore.fetch()
    })

The change simplifies the logic by combining the polling start and data fetching into a single action. However, ensure that the removal of the stopPolling call does not lead to any memory leaks or unnecessary network requests when the component is unmounted.

-    // Reload rollup data onMount
-    onMounted(() => {
-      rollupsStore.fetch()
-    })
+    // Fetch rollup data onMount and stop polling onUnmount
+    onMounted(rollupsStore.fetch)
+    onUnmounted(rollupsStore.stopPolling)

Committable suggestion

[!IMPORTANT]
Carefully review the code before committing. Make sure it correctly replaces the highlighted code, with no missing lines or indentation issues.

Suggested change:
  setup() {
    const rollupsStore = useRollupStore()
    // Fetch rollup data onMount and stop polling onUnmount
    onMounted(rollupsStore.fetch)
    onUnmounted(rollupsStore.stopPolling)
  }

Comment on lines 58 to 81
 	return nil
 }

-// GetBlockListing returns a list of blocks given the pagination
+// GetBlockListing returns latest L1 blocks given the pagination.
+// For example, page 0, size 10 will return the latest 10 blocks.
 func (db *DB) GetBlockListing(pagination *common.QueryPagination) (*common.BlockListingResponse, error) {
+	// fetch the total blocks so we can paginate
+	tipHeader, err := db.GetBlockAtTip()
+	if err != nil {
+		return nil, err
+	}
+
+	blocksFrom := tipHeader.Number.Uint64() - pagination.Offset
+	blocksToInclusive := int(blocksFrom) - int(pagination.Size) + 1
+	// if blocksToInclusive would be negative, set it to 0
+	if blocksToInclusive < 0 {
+		blocksToInclusive = 0
+	}
+
 	// fetch requested batches
 	var blocks []common.PublicBlock
-	for i := pagination.Offset; i < pagination.Offset+uint64(pagination.Size); i++ {
+	for i := blocksFrom; i > uint64(blocksToInclusive); i-- {
 		header, err := db.GetBlockByHeight(big.NewInt(int64(i)))
 		if err != nil && !errors.Is(err, errutil.ErrNotFound) {

The logic for calculating the range of blocks to fetch is correct. However, consider adding error handling for the case when pagination.Offset is greater than tipHeader.Number.Uint64(). Since blocksFrom is a uint64, the subtraction would underflow and wrap around to a huge value, which could lead to unexpected behavior.

+ if pagination.Offset > tipHeader.Number.Uint64() {
+   return nil, errors.New("offset is greater than total number of blocks")
+ }

Committable suggestion

[!IMPORTANT]
Carefully review the code before committing. Make sure it correctly replaces the highlighted code, with no missing lines or indentation issues.

Suggested change:
	return nil
}

// GetBlockListing returns latest L1 blocks given the pagination.
// For example, page 0, size 10 will return the latest 10 blocks.
func (db *DB) GetBlockListing(pagination *common.QueryPagination) (*common.BlockListingResponse, error) {
	// fetch the total blocks so we can paginate
	tipHeader, err := db.GetBlockAtTip()
	if err != nil {
		return nil, err
	}
	if pagination.Offset > tipHeader.Number.Uint64() {
		return nil, errors.New("offset is greater than total number of blocks")
	}

	blocksFrom := tipHeader.Number.Uint64() - pagination.Offset
	blocksToInclusive := int(blocksFrom) - int(pagination.Size) + 1
	// if blocksToInclusive would be negative, set it to 0
	if blocksToInclusive < 0 {
		blocksToInclusive = 0
	}

	// fetch requested batches
	var blocks []common.PublicBlock
	for i := blocksFrom; i > uint64(blocksToInclusive); i-- {
		header, err := db.GetBlockByHeight(big.NewInt(int64(i)))
		if err != nil && !errors.Is(err, errutil.ErrNotFound) {

Comment on lines 132 to 166
 	return db.GetBatch(*batchHash)
 }

-// GetBatchListing returns BatchListingResponse given a pagination
+// GetBatchListing returns latest batches given a pagination.
+// For example, page 0, size 10 will return the latest 10 batches.
+// todo change this when the db changes - this is not super performant
 func (db *DB) GetBatchListing(pagination *common.QueryPagination) (*common.BatchListingResponse, error) {
-	// fetch requested batches
+	// fetch the total batches so we can paginate
+	header, err := db.GetHeadBatchHeader()
+	if err != nil {
+		return nil, err
+	}
+
+	batchesFrom := header.SequencerOrderNo.Uint64() - pagination.Offset
+	batchesToInclusive := int(batchesFrom) - int(pagination.Size) + 1
+	// batchesToInclusive can't go below zero
+	if batchesToInclusive < 0 {
+		batchesToInclusive = 0
+	}
+
 	var batches []common.PublicBatch
-	for i := pagination.Offset; i < pagination.Offset+uint64(pagination.Size); i++ {
+	// fetch requested batches - looping backwards from the latest batch subtracting any pagination offset
+	// (e.g. front-end showing latest batches first, page 3 of size 10 would be skipping the 30 most recent batches)
+	for i := batchesFrom; i >= uint64(batchesToInclusive); i-- {
 		extBatch, err := db.GetBatchBySequenceNumber(big.NewInt(int64(i)))
 		if err != nil && !errors.Is(err, errutil.ErrNotFound) {
 			return nil, err
 		}
 		if extBatch != nil {
-			batches = append(batches, common.PublicBatch{BatchHeader: *extBatch.Header})
+			batches = append(batches, common.PublicBatch{BatchHeader: *extBatch.Header, TxHashes: extBatch.TxHashes})
 		}
 	}
-	// fetch the total batches so we can paginate
-	header, err := db.GetHeadBatchHeader()
-	if err != nil {
-		return nil, err
-	}

 	return &common.BatchListingResponse{
 		BatchesData: batches,

The GetBatchListing function has been updated to fetch batches in reverse order, which is a good change for serving the oldest batches first. However, the function could be optimized by fetching only the required batches from the database instead of fetching all and then filtering. This would reduce the load on the database and improve performance. Consider using a database query to fetch only the required batches.
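
As a purely illustrative sketch of that suggestion: if the host DB layer ever exposes a range read keyed by sequence number, the listing could become a single bounded fetch instead of one lookup per batch. The readBatchRangeDescending helper below does not exist in this codebase; it only stands in for such a query (see the existing "todo change this when the db changes" comment):

    // Hypothetical optimisation sketch - readBatchRangeDescending is an assumed
    // helper, not part of the current host DB API.
    func (db *DB) getBatchListingOptimised(pagination *common.QueryPagination) (*common.BatchListingResponse, error) {
        head, err := db.GetHeadBatchHeader()
        if err != nil {
            return nil, err
        }
        tip := head.SequencerOrderNo.Uint64()
        if pagination.Offset > tip {
            return &common.BatchListingResponse{}, nil // page starts beyond the oldest batch
        }
        from := tip - pagination.Offset
        // One bounded read of at most `Size` batches, newest first, pushed down to storage.
        batches, err := db.readBatchRangeDescending(from, uint64(pagination.Size))
        if err != nil {
            return nil, err
        }
        return &common.BatchListingResponse{BatchesData: batches}, nil
    }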

Comment on lines 156 to 162
 		extBatch, err := db.GetBatchBySequenceNumber(big.NewInt(int64(i)))
 		if err != nil && !errors.Is(err, errutil.ErrNotFound) {
 			return nil, err
 		}
 		if extBatch != nil {
-			batches = append(batches, common.PublicBatch{BatchHeader: *extBatch.Header})
+			batches = append(batches, common.PublicBatch{BatchHeader: *extBatch.Header, TxHashes: extBatch.TxHashes})
 		}

The error handling here is good, as it correctly distinguishes between a not found error and other errors. However, it would be better to log the error for debugging purposes. Consider adding logging here.

if err != nil && !errors.Is(err, errutil.ErrNotFound) {
    log.Printf("Error fetching batch by sequence number: %v", err)
    return nil, err
}

Comment on lines +92 to 95
 			if rollup != nil {
 				listedBlock.RollupHash = rollup.Hash()
 				fmt.Println("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
 			}

The addition of the RollupHash to the listedBlock is a good change. However, the fmt.Println statement seems to be a debug statement and should be removed or replaced with proper logging.

- fmt.Println("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
+ log.Info("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)

Committable suggestion

[!IMPORTANT]
Carefully review the code before committing. Make sure it correctly replaces the highlighted code, with no missing lines or indentation issues.

Suggested change:
			if rollup != nil {
				listedBlock.RollupHash = rollup.Hash()
				log.Info("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
			}

Comment on lines +91 to +94
 			listedBlock := common.PublicBlock{BlockHeader: *header}
 			if rollup != nil {
 				listedBlock.RollupHash = rollup.Hash()
 				fmt.Println("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
Contributor


Will the whole func blow up if the block header is not available, i.e. if the header is nil and the err is errutil.ErrNotFound?

Collaborator Author


Oh yeah good spot, thanks!
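
For context, a fix along the lines being discussed could look like the sketch below, based on the loop shown in the diffs in this PR. Skipping missing blocks is one option; the later diff in this thread instead simply returns the error once ErrNotFound is no longer tolerated:

    for i := blocksFrom; i > uint64(blocksToInclusive); i-- {
        header, err := db.GetBlockByHeight(big.NewInt(int64(i)))
        if err != nil {
            if errors.Is(err, errutil.ErrNotFound) {
                continue // block missing from the host DB: skip it instead of dereferencing a nil header
            }
            return nil, err
        }
        listedBlock := common.PublicBlock{BlockHeader: *header} // safe: header is non-nil here
        blocks = append(blocks, listedBlock)
    }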


coderabbitai bot left a comment


Review Status

Actionable comments generated: 3

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR and between 2d29b13 and 4f5fea5.
Files selected for processing (1)
  • go/host/db/blocks.go (2 hunks)

Comment on lines +61 to 84
+// GetBlockListing returns latest L1 blocks given the pagination.
+// For example, page 0, size 10 will return the latest 10 blocks.
 func (db *DB) GetBlockListing(pagination *common.QueryPagination) (*common.BlockListingResponse, error) {
+	// fetch the total blocks so we can paginate
+	tipHeader, err := db.GetBlockAtTip()
+	if err != nil {
+		return nil, err
+	}
+
+	blocksFrom := tipHeader.Number.Uint64() - pagination.Offset
+	blocksToInclusive := int(blocksFrom) - int(pagination.Size) + 1
+	// if blocksToInclusive would be negative, set it to 0
+	if blocksToInclusive < 0 {
+		blocksToInclusive = 0
+	}
+
 	// fetch requested batches
 	var blocks []common.PublicBlock
-	for i := pagination.Offset; i < pagination.Offset+uint64(pagination.Size); i++ {
+	for i := blocksFrom; i > uint64(blocksToInclusive); i-- {
 		header, err := db.GetBlockByHeight(big.NewInt(int64(i)))
-		if err != nil && !errors.Is(err, errutil.ErrNotFound) {
+		if err != nil {
 			return nil, err
 		}


The logic for calculating the range of blocks to fetch is correct. However, consider adding error handling for the case when pagination.Offset is greater than tipHeader.Number.Uint64(). Since blocksFrom is a uint64, the subtraction would underflow, which is not handled currently.

+ if pagination.Offset > tipHeader.Number.Uint64() {
+     return nil, errors.New("offset is greater than total number of blocks")
+ }

Committable suggestion

[!IMPORTANT]
Carefully review the code before committing. Make sure it correctly replaces the highlighted code, with no missing lines or indentation issues.

Suggested change:
// GetBlockListing returns latest L1 blocks given the pagination.
// For example, page 0, size 10 will return the latest 10 blocks.
func (db *DB) GetBlockListing(pagination *common.QueryPagination) (*common.BlockListingResponse, error) {
	// fetch the total blocks so we can paginate
	tipHeader, err := db.GetBlockAtTip()
	if err != nil {
		return nil, err
	}
	if pagination.Offset > tipHeader.Number.Uint64() {
		return nil, errors.New("offset is greater than total number of blocks")
	}

	blocksFrom := tipHeader.Number.Uint64() - pagination.Offset
	blocksToInclusive := int(blocksFrom) - int(pagination.Size) + 1
	// if blocksToInclusive would be negative, set it to 0
	if blocksToInclusive < 0 {
		blocksToInclusive = 0
	}

	// fetch requested batches
	var blocks []common.PublicBlock
	for i := blocksFrom; i > uint64(blocksToInclusive); i-- {
		header, err := db.GetBlockByHeight(big.NewInt(int64(i)))
		if err != nil {
			return nil, err
		}

Comment on lines 88 to 95
 			return nil, err
 		}

-		if header != nil {
-			listedBlock := common.PublicBlock{BlockHeader: *header}
-			if rollup != nil {
-				listedBlock.RollupHash = rollup.Hash()
-				fmt.Println("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
-			}
-			blocks = append(blocks, listedBlock)
+		listedBlock := common.PublicBlock{BlockHeader: *header}
+		if rollup != nil {
+			listedBlock.RollupHash = rollup.Hash()
+			fmt.Println("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
+		}

The debug print statement should be replaced with a proper logging statement, as previously suggested. Also, consider handling the case when rollup is nil but err is not errutil.ErrNotFound. This could indicate an unexpected error during the execution of GetRollupHeaderByBlock.

- fmt.Println("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
+ log.Info("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)

Committable suggestion

[!IMPORTANT]
Carefully review the code before committing. Make sure it correctly replaces the highlighted code, with no missing lines or indentation issues.

Suggested change:
			return nil, err
		}

		listedBlock := common.PublicBlock{BlockHeader: *header}
		if rollup != nil {
			listedBlock.RollupHash = rollup.Hash()
			log.Info("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
		}

Comment on lines +91 to 97
+		listedBlock := common.PublicBlock{BlockHeader: *header}
+		if rollup != nil {
+			listedBlock.RollupHash = rollup.Hash()
+			fmt.Println("added at block: ", header.Number.Int64(), " - ", listedBlock.RollupHash)
+		}
-	}
-	// fetch the total blocks so we can paginate
-	tipHeader, err := db.GetBlockAtTip()
-	if err != nil {
-		return nil, err
+		blocks = append(blocks, listedBlock)
 	}

The loop that fetches blocks could potentially return an error if GetBlockByHeight fails. Consider adding a mechanism to handle partial results, where some blocks are fetched successfully before an error occurs. This would improve the user experience by providing as much data as possible, even in the event of an error.
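
A sketch of what that partial-result handling could look like, reusing the loop bounds from this PR and omitting the rollup lookup for brevity. Whether to tolerate per-block failures like this is a judgement call, so treat it as an illustration only:

    var (
        blocks []common.PublicBlock
        failed int
    )
    for i := blocksFrom; i > uint64(blocksToInclusive); i-- {
        header, err := db.GetBlockByHeight(big.NewInt(int64(i)))
        if err != nil {
            failed++ // keep going: the caller still gets the blocks that could be read
            continue
        }
        blocks = append(blocks, common.PublicBlock{BlockHeader: *header})
    }
    if failed > 0 && len(blocks) == 0 {
        return nil, fmt.Errorf("failed to load any of the %d requested blocks", failed)
    }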

BedrockSquirrel merged commit 47bb9e6 into main on Nov 7, 2023
BedrockSquirrel deleted the matt/latest-batches-first branch on November 7, 2023 at 10:34