
Add Gnosis #925

Open
kaladinlight wants to merge 1 commit into master from add-gnosis
Conversation

kaladinlight
Contributor

  • No geth support; the execution node is Nethermind, which is written in C#. This meant hand-rolling the Header type and its associated marshal/RLP handlers (a rough sketch of the idea follows this list). It also made me wonder whether it is even necessary to use the Hash() function on the Header to get the block hash, rather than reading the hash directly off the Block. Was that an intentional decision for some edge case I am not thinking about?
  • Start the consensus node first so it creates the JWT secret, then start the execution node.
  • Nethermind is super quick to sync a full node for testing (~10 hours); an archive node still takes the better part of a week, I believe.
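For context on the hand-rolled header, here is a minimal sketch of the idea (my own illustration under assumed names, not the PR's actual code): the canonical block hash is keccak256 of the RLP-encoded header, which is what go-ethereum's Header.Hash() computes.

// Hypothetical sketch only -- not the PR's implementation.
package gnosis

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/rlp"
	"golang.org/x/crypto/sha3"
)

// GnosisHeader is a trimmed-down placeholder; a real type must list every
// consensus field in the exact order the chain RLP-encodes them.
type GnosisHeader struct {
	ParentHash common.Hash
	// ... remaining consensus fields ...
}

// Hash returns keccak256(rlp(header)), which is how execution clients
// derive the canonical block hash from a header.
func (h *GnosisHeader) Hash() common.Hash {
	hasher := sha3.NewLegacyKeccak256()
	_ = rlp.Encode(hasher, h) // error handling omitted in this sketch
	var out common.Hash
	hasher.Sum(out[:0])
	return out
}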

@kaladinlight
Contributor Author

@martinboehm ready for review whenever you have free cycles :)

@matyushkins
Contributor

matyushkins commented Jul 17, 2023

@kaladinlight Hi. On an archive node I am getting websocket: read limit exceeded:

E0717 10:45:03.174752       1 ethrpc.go:652] debug_traceBlockByHash block 0xdc31a7341d59c76c89e2a20bccfa726b15ef165cf2cb57cf67af92cd46417133, error websocket: read limit exceeded
E0717 10:45:03.174769       1 ethrpc.go:652] debug_traceBlockByHash block 0xcdc64212c941d29f0a9b9905a858222de498b0377524cc42a90b76836e0910dc, error websocket: read limit exceeded
E0717 10:45:03.174774       1 ethrpc.go:652] debug_traceBlockByHash block 0x5dfc70726803b8b6a2db378c9e1d1686e8fb2f2a07a7f10dac398478b4f6b5b6, error websocket: read limit exceeded
E0717 10:45:03.174759       1 ethrpc.go:652] debug_traceBlockByHash block 0xc64f390b6556bf77708b5d11a8161c5ce15f3dbe2c066a02dde1f4e0b0490d95, error websocket: read limit exceeded

@kaladinlight
Contributor Author

@kaladinlight Hi. Archive node. websocket: read limit exceeded (quoting the error report above)

Did you build Blockbook using the Makefile? There is a hack in place that manually increases the websocket message size limit, which should fix this issue unless the Gnosis block traces are actually larger than the current 80 MB setting.

prepare-sources:
	@ [ -n "`ls /src 2> /dev/null`" ] || (echo "/src doesn't exist or is empty" 1>&2 && exit 1)
	rm -rf $(BLOCKBOOK_SRC)
	mkdir -p $(BLOCKBOOK_BASE)
	cp -r /src $(BLOCKBOOK_SRC)
	cd $(BLOCKBOOK_SRC) && go mod download
	sed -i 's/wsMessageSizeLimit\ =\ 15\ \*\ 1024\ \*\ 1024/wsMessageSizeLimit = 80 * 1024 * 1024/g' $(GOPATH)/pkg/mod/github.com/ethereum/go-ethereum*/rpc/websocket.go
	sed -i 's/wsMessageSizeLimit\ =\ 15\ \*\ 1024\ \*\ 1024/wsMessageSizeLimit = 80 * 1024 * 1024/g' $(GOPATH)/pkg/mod/github.com/ava-labs/coreth*/rpc/websocket.go

@matyushkins
Contributor

matyushkins commented Jul 17, 2023

(quoting the Makefile reply above)

Yes, I ran the make command with BASE_IMAGE=ubuntu:22.04:
make BASE_IMAGE=ubuntu:22.04

@kaladinlight
Contributor Author

I can push a change to bump that limit if you want to rebuild the image to test. I don't have an archive node to point at currently to test locally...

@matyushkins
Contributor

{
       "jsonrpc": "2.0",
       "method": "debug_traceBlockByHash",
       "params": ["0x9bccfe5644bc698235f904c76375e3ed882d037451b5dc8bd66a09c592b3902d", {"tracer": "callTracer"}],
       "id": 1
}

The resulting JSON response is 205 MB.

@kaladinlight
Contributor Author

(quoting the 205 MB trace request above)

Good lord... I will bump it to 256 MB as a temporary fix. Longer term, I would like to look into adding a logic fork that leverages the parity trace module instead of the geth debug namespace, to reduce these payload sizes and improve performance.
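For reference, a parity-style request that could stand in for debug_traceBlockByHash looks roughly like this; note that trace_block takes a block number or tag rather than a hash, and the block number below is just a placeholder (whether the PR ends up using trace_block or another trace_* method is my assumption, not a statement about the final code).

{
       "jsonrpc": "2.0",
       "method": "trace_block",
       "params": ["0x1a2b3c"],
       "id": 1
}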

@matyushkins
Contributor

matyushkins commented Jul 17, 2023

@kaladinlight I raised the limit to 2048 MB because the response size is 1800 MB, and the RPC timeout to 300 seconds.

{
       "jsonrpc": "2.0",
       "method": "debug_traceBlockByHash",
       "params": ["0xcd3cd8cf7540380c376d61799adc5990a707ac02c34fa9f4c802bf891de4679b", {"tracer": "callTracer"}],
       "id": 1
}

5 million blocks synced so far; everything is running smoothly.

@matyushkins
Contributor

Another one, ~2200 MB:

{
       "jsonrpc": "2.0",
       "method": "debug_traceBlockByHash",
       "params": ["0x7eb96747d3b5356abfef48cd3729e69394140315e6945254e3256e902ec1db72", {"tracer": "callTracer"}],
       "id": 1
}

@kaladinlight
Contributor Author

(quoting the ~2200 MB trace request above)

That is crazy... At this point I would say raise it as you need to for your purposes, and when I have time I will work on supporting the trace module for Gnosis, which should hopefully help.

@kaladinlight
Contributor Author

@matyushkins Not sure if you are still running this, but I finally got around to adding the parity trace debug for Gnosis if you want to give it a shot. It should be significantly lighter to run with this change. Let me know!

Parser *EthereumParser
PushHandler func(bchain.NotificationType)
OpenRPC func(string) (bchain.EVMRPCClient, bchain.EVMClient, error)
GetInternalDataForBlock func(blockHash string, blockHeight uint32, transactions []bchain.RpcTransaction) ([]bchain.EthereumInternalData, []bchain.ContractInfo, error)
@kaladinlight
Contributor Author

Support an overridable GetInternalDataForBlock so the parity trace module can be used for debug. This is only used by Gnosis at this time, but I suppose I could also add that logic directly to ethrpc.go and leverage a DebugType flag if you would like that better.
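A hedged sketch of how such an override might be wired up in a Gnosis constructor; the constructor shape and most names below are assumptions for illustration, not necessarily the PR's code.

// Hypothetical wiring only -- names are assumptions, not the PR's exact code.
package gnosis

import (
	"encoding/json"

	"github.com/trezor/blockbook/bchain"
	"github.com/trezor/blockbook/bchain/coins/eth"
)

type GnosisRPC struct {
	*eth.EthereumRPC
}

func NewGnosisRPC(config json.RawMessage, pushHandler func(bchain.NotificationType)) (bchain.BlockChain, error) {
	c, err := eth.NewEthereumRPC(config, pushHandler)
	if err != nil {
		return nil, err
	}
	g := &GnosisRPC{EthereumRPC: c.(*eth.EthereumRPC)}
	// Redirect internal-data extraction to the parity-trace implementation
	// (getInternalDataForBlock, shown in a later hunk) instead of the
	// default geth debug_traceBlockByHash path.
	g.GetInternalDataForBlock = g.getInternalDataForBlock
	return g, nil
}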

@@ -592,7 +595,8 @@ type rpcTraceResult struct {
Result rpcCallTrace `json:"result"`
}

func (b *EthereumRPC) getCreationContractInfo(contract string, height uint32) *bchain.ContractInfo {
// GetCreationContractInfo retrieves the info for a contract address and sets the creation block height
func (b *EthereumRPC) GetCreationContractInfo(contract string, height uint32) *bchain.ContractInfo {
@kaladinlight
Contributor Author

expose for use in GnosisRPC

}

// getInternalDataForBlock extracts internal transfers and creation or destruction of contracts using the parity trace module
func (b *GnosisRPC) getInternalDataForBlock(blockHash string, blockHeight uint32, transactions []bchain.RpcTransaction) ([]bchain.EthereumInternalData, []bchain.ContractInfo, error) {
@kaladinlight
Contributor Author

parity trace logic for internal data
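For readers unfamiliar with the trace module, the entries returned by a parity-style block trace look roughly like the following Go sketch; the JSON field names follow the OpenEthereum/Nethermind trace format, but the PR's own type names may differ.

// Hedged sketch of a single trace entry from the parity trace module;
// only the fields relevant to internal transfers and contract
// creation/destruction are shown.
type traceAction struct {
	CallType string `json:"callType"` // "call", "delegatecall", ...
	From     string `json:"from"`
	To       string `json:"to"`
	Value    string `json:"value"`
}

type traceResult struct {
	Address string `json:"address"` // populated for contract creations
}

type traceEntry struct {
	Action          traceAction `json:"action"`
	Result          traceResult `json:"result"`
	Type            string      `json:"type"` // "call", "create", "suicide"
	TransactionHash string      `json:"transactionHash"`
	TraceAddress    []int       `json:"traceAddress"`
}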

@kaladinlight
Contributor Author

@martinboehm Another one that would be nice to get upstreamed if you have any spare cycles. Thanks!


option go_package = "github.com/trezor/blockbook/bchain/coins/eth";

message ProtoAddrContracts {
@kaladinlight
Contributor Author

Update addr contracts to a protobuf type to fix the performance impact of the manual pack/unpack caused by the huge multi-address-contract entries on chain.

@kaladinlight
Contributor Author

Seemed to help a bit, but sync has since crawled to a halt...

blockbook.go (outdated review thread, resolved)
@kaladinlight kaladinlight force-pushed the add-gnosis branch 3 times, most recently from 55f1bba to f22919d Compare May 7, 2024 20:39
@kaladinlight kaladinlight marked this pull request as ready for review May 7, 2024 20:40
@kaladinlight
Contributor Author

@martinboehm Happy to talk through this one as well, but a quick TL;DR:

  • Due to the massive number of NFTs on Gnosis, the sync speed crawled to a halt.
  • After profiling, the bottleneck was pack/unpack -> read/write to the DB for address contracts.
  • I upgraded the address contracts to protobufs for more efficient encode/decode, along with a configurable address cache size, in an effort to improve performance.
  • There did seem to be improvements, but a fresh sync is slowing down drastically again, so my performance improvement attempts are not enough.

Would love to brainstorm with you if you have the time and are up for it, but at this point Gnosis is not really production ready in the current state of Blockbook :(
