
Refactor code #11

Merged
merged 9 commits into from
Aug 8, 2024
Conversation

0x6e616d
Member

@0x6e616d 0x6e616d commented Aug 1, 2024

  • Add block keeper for syncing the old blocks
  • Handle re-orged blocks
  • Refactor code

I tested the deposit and withdraw transactions successfully on my local environment.
(Screenshot: 2024-08-06 at 14:42:38)

Explorer URL: https://sepolia.etherscan.io/tx/0x9ee609a4919f6d9ac3ce673cac0f2f14776acb29c3eabe5fd169f458b9724f20

How to test my PR?

  • Before you start, please run docker-compose first to start the Redis instance (Redis is used for storing the block-sync metadata)
docker-compose up -d
  • After that, create your .env file
## thanos notification env
export NETWORK=sepolia

L1_WS_RPC=wss://ethereum-sepolia-rpc.publicnode.com
L2_WS_RPC=wss://rpc.thanos-sepolia.tokamak.network

L1_STANDARD_BRIDGE=0x385076516318551d566CAaE5EC59c23fe09cbF65
L2_STANDARD_BRIDGE=0x4200000000000000000000000000000000000010

L1_USDC_BRIDGE=0xE390EE020Afb7F8e4A2Dc44a71088db2acd72CF3
L2_USDC_BRIDGE=0x4200000000000000000000000000000000000775

SLACK_URL=<your_slack_hook_url>

L1_EXPLORER_URL=https://sepolia.etherscan.io
L2_EXPLORER_URL=https://explorer.thanos-sepolia.tokamak.network

L1_TOKEN_ADDRESSES=0xa30fe40285B8f5c0457DbC3B7C8A280373c40044,0xFF3Ef745D9878AfE5934Ff0b130868AFDDbc58e8,0x42d3b260c761cD5da022dB56Fe2F89c4A909b04A,0x3D752DE8Ec40B655F21D11b02Fa7438AE18687Fc,0x8c4C0FC89382F96E435527D39C9Ec69dDed34E77,0xf8474C2a90B9035e0b431e1789fE76F54d4ce708,0xfFf9976782d46CC05630D1f6eBAb18b2324d6B14
L2_TOKEN_ADDRESSES=0x4200000000000000000000000000000000000486,0xDeadDeAddeAddEAddeadDEaDDEAdDeaDDeAD0000

REDIS_ADDRESS=localhost:6379
  • Run the application
go run cmd/app/main.go listener 
  • Enjoy making some deposit/withdrawal transactions with @tokamak-network/thanos-sdk

@0x6e616d 0x6e616d changed the title Refactor code WIP: Refactor code Aug 1, 2024
@0x6e616d 0x6e616d changed the title WIP: Refactor code Refactor code Aug 6, 2024
@0x6e616d 0x6e616d requested a review from a team August 6, 2024 07:41
@0x6e616d 0x6e616d self-assigned this Aug 6, 2024
@nguyenzung
Member

It is an impressive implementation!

I think there are some things we need to check further. Correct me if I am wrong:

  • There is a time gap between syncOldBlock and subscribeNewHead, so it is possible that some blocks are missed in this period.
  • In Enqueue(), it would be better not to remove the oldest blocks; we should remove only the blocks that need to be removed when a reorg occurs. The current implementation can lead to a case where a parent hash cannot be found in the line "if bk.q.Contains(block.Hash().String())".
  • I remember that each topic has a parameter called Removed. Maybe we can use this parameter to detect logs removed due to a reorg.

@0x6e616d 0x6e616d closed this Aug 6, 2024
@0x6e616d 0x6e616d reopened this Aug 6, 2024
@0x6e616d
Member Author

0x6e616d commented Aug 6, 2024

@nguyenzung

  1. There is a gap if the time to fetch the old blocks is too long (over one block time). For example, the last old block is 1, but the current block from newHead is 3.
    -> Block 2 could be missing. But we check for reorged blocks every time we receive a block, so we can treat this case like a reorg: in the handleReorgBlocks function, the parentHash of block 3 differs from the current head (block 1), so the code fetches block 2's information. It works out.

You can test this case by putting a time.Sleep on the first line of the subscribeNewHead function.

  2. A circular queue is a data structure that limits the size of the queue. When a new element is inserted into a full queue, the first one is removed so the size doesn't grow.
    (https://en.wikipedia.org/wiki/Circular_buffer)

In our case, the block queue used in the block keeper is a circular queue that stores the 64 latest blocks. We assume that after 64 blocks, the chain is immutable.
-> In the case of a reorg, we find the reorged blocks, then loop over them and override the new head.
For example:

1. Current circular queue:
1-2-3-4-5

(The correct chain is 1-2-3-4'-5')
2. We receive block 6' -> its parent hash isn't the block hash of 5 -> a reorg happened.
3. We loop to find the canonical chain -> fetch block 5', whose hash is the parentHash of 6'.
4. We continue until the parentHash of 4' is the hash of 3. We finish.
5. Order them oldest-first.
6. Loop 4' -> 5' -> 6'; for each block we set the head of the circular queue, giving 1-2-3-4'-5'-6'.
7. We receive block 7' with the parentHash of 7' equal to the hash of 6' -> we continue this process.

-> Once we finish the process of finding the reorged blocks, we have the correct queue.

  3. Yes, I missed it; let me update it. We can check whether log.Removed is true or false to know if this event has been removed or added.

@0x6e616d
Member Author

0x6e616d commented Aug 6, 2024

@nguyenzung Oh, I got your point 2. Let me think about it. You're correct: we should remove the reorged block from the queue, not the first element. Good point.

Member Author

@0x6e616d 0x6e616d left a comment

Many thanks @nguyenzung for your helpful comments.

  1. The gap between the time we run the syncOldBlocks function and subscribeNewHead is handled by the handleReorgedBlocks function, which checks whether the parentHash exists in the queue or not.

  2. I updated the queue by adding a new remove function that removes an element by value. Its purpose is to remove the reorged block hashes, if they exist, rather than the first element in the queue.

Also, I initialized the block keeper to fetch the latest 64 blocks (2 epochs) when starting the event listener service, so we can check for reorgs whether the event listener starts from scratch or continues from a previous starting point.

  3. I updated the code to ignore a log when Removed is true.

Comment on lines +279 to +281
if l.Removed {
    continue
}
Member Author

Ref to 3.

Comment on lines +43 to +46
func (q *CircularQueue[T]) RemoveAndEnqueue(value T, removed T) {
    q.Remove(removed)
    q.Enqueue(value)
}
Member Author

Ref to 2.

Comment on lines +85 to +103
for i := uint64(blockNo) - TwoEpochBlocks + 1; i < blockNo; i = i + batchBlocksSize {
    from := i
    to := i + batchBlocksSize - 1
    if to > blockNo-1 {
        to = blockNo - 1
    }

    blocks, err := bcSource.GetBlocks(ctx, false, from, to)
    if err != nil {
        return nil, err
    }

    for _, block := range blocks {
        keeper.enqueue(block.Header.Hash(), block.Header.Number.Uint64())
    }
}

keeper.enqueue(common.HexToHash(currentBlockHash), blockNo)

Member Author

Ref to 2.

@nguyenzung
Member

It looks great, Nam!

I think maybe we should open another ticket for unit tests to make sure everything works well.

@nguyenzung nguyenzung self-requested a review August 7, 2024 11:00
@0x6e616d 0x6e616d merged commit e87ec07 into main Aug 8, 2024
3 checks passed