diff --git a/Dockerfile b/Dockerfile index 27cf45c9..e5a52882 100644 --- a/Dockerfile +++ b/Dockerfile @@ -1,4 +1,4 @@ -# stage 1 Build bstream binary +# stage 1 Build blobstream binary FROM --platform=$BUILDPLATFORM docker.io/golang:1.21.2-alpine3.18 as builder RUN apk update && apk --no-cache add make gcc musl-dev git bash @@ -27,7 +27,7 @@ RUN apk update && apk add --no-cache \ -s /sbin/nologin \ -u ${UID} -COPY --from=builder /orchestrator-relayer/build/bstream /bin/bstream +COPY --from=builder /orchestrator-relayer/build/blobstream /bin/blobstream COPY --chown=${USER_NAME}:${USER_NAME} docker/entrypoint.sh /opt/entrypoint.sh USER ${USER_NAME} diff --git a/Makefile b/Makefile index 661554e2..6c360e5b 100644 --- a/Makefile +++ b/Makefile @@ -7,8 +7,8 @@ DOCKER := $(shell which docker) all: install install: go.sum - @echo "--> Installing bstream" - @go install -mod=readonly ./cmd/bstream + @echo "--> Installing blobstream" + @go install -mod=readonly ./cmd/blobstream go.sum: mod @echo "--> Verifying dependencies have expected content" @@ -24,7 +24,7 @@ pre-build: build: mod @mkdir -p build/ - @go build -o build ./cmd/bstream + @go build -o build ./cmd/blobstream build-docker: @echo "--> Building Docker image" diff --git a/README.md b/README.md index cf23ad8b..71def239 100644 --- a/README.md +++ b/README.md @@ -1,16 +1,16 @@ # orchestrator-relayer -Contains the implementation of the Blobstream orchestrator and relayer. +Contains the implementation of the Blobstream orchestrator and relayer. -The orchestrator is the software that signs the Blobstream attestations, and the relayer is the one that relays them to the target EVM chain. +The orchestrator is the software that signs the Blobstream attestations, and the relayer is the one that relays them to the target EVM chain. -For a high-level overview of how the Blobstream works, check [here](https://github.com/celestiaorg/quantum-gravity-bridge/tree/76efeca0be1a17d32ef633c0fdbd3c8f5e4cc53f#how-it-works) and [here](https://blog.celestia.org/celestiums/). +For a high-level overview of how the Blobstream works, check [here](https://github.com/celestiaorg/quantum-gravity-bridge/tree/76efeca0be1a17d32ef633c0fdbd3c8f5e4cc53f#how-it-works) and [here](https://blog.celestia.org/celestiums/). ## Install 1. [Install Go](https://go.dev/doc/install) 1.21.1 2. Clone this repo -3. Install the Blobstream CLI +3. Install the Blobstream CLI ```shell make install @@ -20,16 +20,16 @@ make install ```sh # Print help -bstream --help +blobstream --help ``` ## How to run If you are a Celestia-app validator, all you need to do is run the orchestrator. Check [here](https://github.com/celestiaorg/orchestrator-relayer/blob/main/docs/orchestrator.md) for more details. -If you want to post commitments on an EVM chain, you will need to deploy a new Blobstream contract and run a relayer. Check [here](https://github.com/celestiaorg/orchestrator-relayer/blob/main/docs/relayer.md) for relayer docs and [here](https://github.com/celestiaorg/orchestrator-relayer/blob/main/docs/deploy.md) for how to deploy a new Blobstream contract. +If you want to post commitments on an EVM chain, you will need to deploy a new Blobstream contract and run a relayer. Check [here](https://github.com/celestiaorg/orchestrator-relayer/blob/main/docs/relayer.md) for relayer docs and [here](https://github.com/celestiaorg/orchestrator-relayer/blob/main/docs/deploy.md) for how to deploy a new Blobstream contract.
-Note: the Blobstream P2P network is a separate network than the consensus or the data availability one. Thus, you will need its specific bootstrappers to be able to connect to it. +Note: the Blobstream P2P network is a separate network than the consensus or the data availability one. Thus, you will need its specific bootstrappers to be able to connect to it. ## Contributing @@ -41,7 +41,7 @@ Note: the Blobstream P2P network is a separate network than the consensus or the ### Helpful Commands ```sh -# Build a new orchestrator-relayer binary and output to build/bstream +# Build a new orchestrator-relayer binary and output to build/blobstream make build # Run tests @@ -53,10 +53,10 @@ make fmt ## Useful links -The smart contract implementation is in [blobstream-contracts](https://github.com/celestiaorg/blobstream-contracts). +The smart contract implementation is in [blobstream-contracts](https://github.com/celestiaorg/blobstream-contracts). -The state machine implementation is in [x/blobstream](https://github.com/celestiaorg/celestia-app/tree/main/x/blobstream). +The state machine implementation is in [x/blobstream](https://github.com/celestiaorg/celestia-app/tree/main/x/blobstream). -Blobstream ADRs are in the [docs](https://github.com/celestiaorg/celestia-app/tree/main/docs/architecture). +Blobstream ADRs are in the [docs](https://github.com/celestiaorg/celestia-app/tree/main/docs/architecture). -Blobstream design explained in this [blog](https://blog.celestia.org/celestiums). +Blobstream design explained in this [blog](https://blog.celestia.org/celestiums). diff --git a/cmd/bstream/base/config.go b/cmd/bstream/base/config.go index 4c09783a..7f340ef3 100644 --- a/cmd/bstream/base/config.go +++ b/cmd/bstream/base/config.go @@ -24,7 +24,7 @@ type Config struct { EVMPassphrase string } -// DefaultServicePath constructs the default Blobstream store path for +// DefaultServicePath constructs the default Blobstream store path for // the provided service. // It tries to get the home directory from an environment variable // called `_HOME`. If not set, then reverts to using diff --git a/cmd/bstream/bootstrapper/cmd.go b/cmd/bstream/bootstrapper/cmd.go index e7afff9f..4956cbde 100644 --- a/cmd/bstream/bootstrapper/cmd.go +++ b/cmd/bstream/bootstrapper/cmd.go @@ -6,7 +6,7 @@ import ( "strings" "time" - p2pcmd "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/keys/p2p" + p2pcmd "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/keys/p2p" "github.com/celestiaorg/orchestrator-relayer/helpers" "github.com/celestiaorg/orchestrator-relayer/p2p" "github.com/celestiaorg/orchestrator-relayer/store" @@ -21,7 +21,7 @@ func Command() *cobra.Command { bsCmd := &cobra.Command{ Use: "bootstrapper", Aliases: []string{"bs"}, - Short: "Blobstream P2P network bootstrapper command", + Short: "Blobstream P2P network bootstrapper command", SilenceUsage: true, } @@ -110,7 +110,7 @@ func Start() *cobra.Command { } // creating the dht - dht, err := p2p.NewBlobstreamDHT(ctx, h, dataStore, aIBootstrappers, logger) + dht, err := p2p.NewBlobstreamDHT(ctx, h, dataStore, aIBootstrappers, logger) if err != nil { return err } @@ -137,7 +137,7 @@ func Start() *cobra.Command { func Init() *cobra.Command { cmd := cobra.Command{ Use: "init", - Short: "Initialize the Blobstream bootstrapper store. Passed flags have persisted effect.", + Short: "Initialize the Blobstream bootstrapper store.
Passed flags have persisted effect.", RunE: func(cmd *cobra.Command, args []string) error { config, err := parseInitFlags(cmd) if err != nil { diff --git a/cmd/bstream/bootstrapper/config.go b/cmd/bstream/bootstrapper/config.go index 3eace905..bea7245d 100644 --- a/cmd/bstream/bootstrapper/config.go +++ b/cmd/bstream/bootstrapper/config.go @@ -1,7 +1,7 @@ package bootstrapper import ( - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/base" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/base" "github.com/spf13/cobra" ) @@ -14,7 +14,7 @@ func addStartFlags(cmd *cobra.Command) *cobra.Command { if err != nil { panic(err) } - cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream bootstrappers home directory") + cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream bootstrappers home directory") base.AddP2PNicknameFlag(cmd) base.AddP2PListenAddressFlag(cmd) base.AddBootstrappersFlag(cmd) @@ -65,7 +65,7 @@ func addInitFlags(cmd *cobra.Command) *cobra.Command { if err != nil { panic(err) } - cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream bootstrappers home directory") + cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream bootstrappers home directory") return cmd } diff --git a/cmd/bstream/common/helpers.go b/cmd/bstream/common/helpers.go index a827518c..e64d0a4d 100644 --- a/cmd/bstream/common/helpers.go +++ b/cmd/bstream/common/helpers.go @@ -12,7 +12,7 @@ import ( "github.com/celestiaorg/celestia-app/app" "github.com/celestiaorg/celestia-app/app/encoding" - common2 "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/keys/p2p" + common2 "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/keys/p2p" "github.com/celestiaorg/orchestrator-relayer/helpers" "github.com/celestiaorg/orchestrator-relayer/p2p" "github.com/celestiaorg/orchestrator-relayer/rpc" @@ -60,7 +60,7 @@ func NewTmAndAppQuerier(logger tmlog.Logger, tendermintRPC string, celesGRPC str return tmQuerier, appQuerier, stopFuncs, nil } -// CreateDHTAndWaitForPeers helper function that creates a new Blobstream DHT and waits for some peers to connect to it. +// CreateDHTAndWaitForPeers helper function that creates a new Blobstream DHT and waits for some peers to connect to it.
func CreateDHTAndWaitForPeers( ctx context.Context, logger tmlog.Logger, @@ -69,7 +69,7 @@ func CreateDHTAndWaitForPeers( p2pListenAddr string, bootstrappers string, dataStore ds.Batching, -) (*p2p.BlobstreamDHT, error) { +) (*p2p.BlobstreamDHT, error) { // get the p2p private key or generate a new one privKey, err := common2.GetP2PKeyOrGenerateNewOne(p2pKeyStore, p2pNickname) if err != nil { @@ -98,7 +98,7 @@ func CreateDHTAndWaitForPeers( } // creating the dht - dht, err := p2p.NewBlobstreamDHT(ctx, h, dataStore, aIBootstrappers, logger) + dht, err := p2p.NewBlobstreamDHT(ctx, h, dataStore, aIBootstrappers, logger) if err != nil { return nil, err } diff --git a/cmd/bstream/deploy/cmd.go b/cmd/bstream/deploy/cmd.go index 54b74dd2..b2827975 100644 --- a/cmd/bstream/deploy/cmd.go +++ b/cmd/bstream/deploy/cmd.go @@ -5,15 +5,15 @@ import ( "os" "strconv" - evm2 "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/keys/evm" + evm2 "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/keys/evm" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/keys" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/keys" "github.com/ethereum/go-ethereum/accounts/keystore" "github.com/ethereum/go-ethereum/common" "github.com/celestiaorg/celestia-app/app" "github.com/celestiaorg/celestia-app/app/encoding" - "github.com/celestiaorg/celestia-app/x/blobstream/types" + "github.com/celestiaorg/celestia-app/x/blobstream/types" "github.com/celestiaorg/orchestrator-relayer/evm" "github.com/celestiaorg/orchestrator-relayer/rpc" "github.com/celestiaorg/orchestrator-relayer/store" @@ -25,7 +25,7 @@ import ( func Command() *cobra.Command { command := &cobra.Command{ Use: "deploy ", - Short: "Deploys the Blobstream contract and initializes it using the provided Celestia chain", + Short: "Deploys the Blobstream contract and initializes it using the provided Celestia chain", RunE: func(cmd *cobra.Command, args []string) error { config, err := parseDeployFlags(cmd) if err != nil { @@ -37,7 +37,7 @@ func Command() *cobra.Command { // checking if the provided home is already initiated isInit := store.IsInit(logger, config.Home, store.InitOptions{NeedEVMKeyStore: true}) if !isInit { - logger.Info("please initialize the EVM keystore using the `bstream deploy keys add/import` command") + logger.Info("please initialize the EVM keystore using the `blobstream deploy keys add/import` command") return store.ErrNotInited } @@ -59,7 +59,7 @@ func Command() *cobra.Command { if err != nil { return errors.Wrap( err, - "cannot initialize the Blobstream contract without having a valset request: %s", + "cannot initialize the Blobstream contract without having a valset request: %s", ) } @@ -110,15 +110,15 @@ func Command() *cobra.Command { } defer backend.Close() - address, tx, _, err := evmClient.DeployBlobstreamContract(txOpts, backend, *vs, vs.Nonce, false) + address, tx, _, err := evmClient.DeployBlobstreamContract(txOpts, backend, *vs, vs.Nonce, false) if err != nil { - logger.Error("failed to deploy Blobstream contract") + logger.Error("failed to deploy Blobstream contract") return err } receipt, err := evmClient.WaitForTransaction(cmd.Context(), backend, tx) if err == nil && receipt != nil && receipt.Status == 1 { - logger.Info("deployed Blobstream contract", "proxy_address", address.Hex(), "tx_hash", tx.Hash().String()) + logger.Info("deployed Blobstream contract", "proxy_address", address.Hex(), "tx_hash", tx.Hash().String()) } return nil diff --git a/cmd/bstream/deploy/config.go
b/cmd/bstream/deploy/config.go index 10d3309b..4aad056d 100644 --- a/cmd/bstream/deploy/config.go +++ b/cmd/bstream/deploy/config.go @@ -4,7 +4,7 @@ import ( "errors" "fmt" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/base" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/base" "github.com/celestiaorg/orchestrator-relayer/evm" "github.com/spf13/cobra" @@ -30,7 +30,7 @@ func addDeployFlags(cmd *cobra.Command) *cobra.Command { cmd.Flags().String( FlagStartingNonce, "latest", - "Specify the nonce to start the Blobstream contract from. "+ + "Specify the nonce to start the Blobstream contract from. "+ "\"earliest\": for genesis, "+ "\"latest\": for latest valset nonce, "+ "\"nonce\": for the latest valset before the provided nonce, provided nonce included.", @@ -40,7 +40,7 @@ func addDeployFlags(cmd *cobra.Command) *cobra.Command { if err != nil { panic(err) } - cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream deployer home directory") + cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream deployer home directory") cmd.Flags().String(base.FlagEVMPassphrase, "", "the evm account passphrase (if not specified as a flag, it will be asked interactively)") return cmd diff --git a/cmd/bstream/keys/evm/config.go b/cmd/bstream/keys/evm/config.go index d909201d..d46a287c 100644 --- a/cmd/bstream/keys/evm/config.go +++ b/cmd/bstream/keys/evm/config.go @@ -1,7 +1,7 @@ package evm import ( - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/base" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/base" "github.com/cosmos/cosmos-sdk/client/flags" "github.com/spf13/cobra" ) @@ -15,7 +15,7 @@ func keysConfigFlags(cmd *cobra.Command, service string) *cobra.Command { if err != nil { panic(err) } - cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream evm keys home directory") + cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream evm keys home directory") cmd.Flags().String(base.FlagEVMPassphrase, "", "the evm account passphrase (if not specified as a flag, it will be asked interactively)") return cmd } @@ -54,7 +54,7 @@ func keysNewPassphraseConfigFlags(cmd *cobra.Command, service string) *cobra.Com if err != nil { panic(err) } - cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream evm keys home directory") + cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream evm keys home directory") cmd.Flags().String(base.FlagEVMPassphrase, "", "the evm account passphrase (if not specified as a flag, it will be asked interactively)") cmd.Flags().String(FlagNewEVMPassphrase, "", "the evm account new passphrase (if not specified as a flag, it will be asked interactively)") return cmd diff --git a/cmd/bstream/keys/evm/evm.go b/cmd/bstream/keys/evm/evm.go index d6656d49..ca28edce 100644 --- a/cmd/bstream/keys/evm/evm.go +++ b/cmd/bstream/keys/evm/evm.go @@ -8,7 +8,7 @@ import ( "github.com/ethereum/go-ethereum/accounts/keystore" - common2 "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/keys/common" + common2 "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/keys/common" "github.com/celestiaorg/orchestrator-relayer/store" "github.com/ethereum/go-ethereum/accounts" "github.com/ethereum/go-ethereum/common" @@ -21,7 +21,7 @@ import ( func Root(serviceName string) *cobra.Command { evmCmd := &cobra.Command{ Use: "evm", - Short: "Blobstream EVM keys manager", + Short: "Blobstream EVM keys manager", SilenceUsage: true, } diff --git a/cmd/bstream/keys/keys.go b/cmd/bstream/keys/keys.go index 4aa3b993..3be2d52a 100644
--- a/cmd/bstream/keys/keys.go +++ b/cmd/bstream/keys/keys.go @@ -1,15 +1,15 @@ package keys import ( - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/keys/evm" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/keys/p2p" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/keys/evm" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/keys/p2p" "github.com/spf13/cobra" ) func Command(serviceName string) *cobra.Command { keysCmd := &cobra.Command{ Use: "keys", - Short: "Blobstream keys manager", + Short: "Blobstream keys manager", SilenceUsage: true, } diff --git a/cmd/bstream/keys/p2p/config.go b/cmd/bstream/keys/p2p/config.go index df478948..141ebe03 100644 --- a/cmd/bstream/keys/p2p/config.go +++ b/cmd/bstream/keys/p2p/config.go @@ -1,7 +1,7 @@ package p2p import ( - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/base" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/base" "github.com/cosmos/cosmos-sdk/client/flags" "github.com/spf13/cobra" ) @@ -11,7 +11,7 @@ func keysConfigFlags(cmd *cobra.Command, service string) *cobra.Command { if err != nil { panic(err) } - cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream p2p keys home directory") + cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream p2p keys home directory") return cmd } diff --git a/cmd/bstream/keys/p2p/p2p.go b/cmd/bstream/keys/p2p/p2p.go index e422f79d..e70c7a23 100644 --- a/cmd/bstream/keys/p2p/p2p.go +++ b/cmd/bstream/keys/p2p/p2p.go @@ -7,7 +7,7 @@ import ( "github.com/ipfs/boxo/keystore" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/keys/common" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/keys/common" "github.com/celestiaorg/orchestrator-relayer/store" util "github.com/ipfs/boxo/util" "github.com/libp2p/go-libp2p/core/crypto" @@ -18,7 +18,7 @@ import ( func Root(serviceName string) *cobra.Command { p2pCmd := &cobra.Command{ Use: "p2p", - Short: "Blobstream p2p keys manager", + Short: "Blobstream p2p keys manager", SilenceUsage: true, } diff --git a/cmd/bstream/keys/p2p/p2p_test.go b/cmd/bstream/keys/p2p/p2p_test.go index 9b7b016b..3d0c47b2 100644 --- a/cmd/bstream/keys/p2p/p2p_test.go +++ b/cmd/bstream/keys/p2p/p2p_test.go @@ -3,7 +3,7 @@ package p2p_test import ( "testing" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/keys/p2p" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/keys/p2p" "github.com/ipfs/boxo/keystore" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" diff --git a/cmd/bstream/main.go b/cmd/bstream/main.go index 53b92b18..b09e1205 100644 --- a/cmd/bstream/main.go +++ b/cmd/bstream/main.go @@ -4,7 +4,7 @@ import ( "context" "os" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/root" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/root" ) func main() { diff --git a/cmd/bstream/orchestrator/cmd.go b/cmd/bstream/orchestrator/cmd.go index 8c192440..eaf675e4 100644 --- a/cmd/bstream/orchestrator/cmd.go +++ b/cmd/bstream/orchestrator/cmd.go @@ -5,12 +5,12 @@ import ( "os" "time" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/common" - evm2 "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/keys/evm" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/common" + evm2 "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/keys/evm" "github.com/celestiaorg/orchestrator-relayer/p2p" dssync "github.com/ipfs/go-datastore/sync" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/keys" +
"github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/keys" "github.com/celestiaorg/orchestrator-relayer/store" "github.com/celestiaorg/orchestrator-relayer/helpers" @@ -23,7 +23,7 @@ func Command() *cobra.Command { orchCmd := &cobra.Command{ Use: "orchestrator", Aliases: []string{"orch"}, - Short: "Blobstream orchestrator that signs attestations", + Short: "Bloblobstream orchestrator that signs attestations", SilenceUsage: true, } @@ -42,7 +42,7 @@ func Command() *cobra.Command { func Start() *cobra.Command { command := &cobra.Command{ Use: "start ", - Short: "Starts the Blobstream orchestrator to sign attestations", + Short: "Starts the Bloblobstream orchestrator to sign attestations", RunE: func(cmd *cobra.Command, args []string) error { config, err := parseOrchestratorFlags(cmd) if err != nil { @@ -106,7 +106,7 @@ func Start() *cobra.Command { }() // creating the broadcaster - broadcaster := orchestrator.NewBroadcaster(p2pQuerier.BlobstreamDHT) + broadcaster := orchestrator.NewBroadcaster(p2pQuerier.BloblobstreamDHT) if err != nil { return err } @@ -144,7 +144,7 @@ func Start() *cobra.Command { func Init() *cobra.Command { cmd := cobra.Command{ Use: "init", - Short: "Initialize the Blobstream orchestrator store. Passed flags have persisted effect.", + Short: "Initialize the Bloblobstream orchestrator store. Passed flags have persisted effect.", RunE: func(cmd *cobra.Command, args []string) error { config, err := parseInitFlags(cmd) if err != nil { diff --git a/cmd/bstream/orchestrator/config.go b/cmd/bstream/orchestrator/config.go index 66fb3c4a..49708a97 100644 --- a/cmd/bstream/orchestrator/config.go +++ b/cmd/bstream/orchestrator/config.go @@ -4,7 +4,7 @@ import ( "errors" "fmt" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/base" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/base" "github.com/cosmos/cosmos-sdk/client/flags" "github.com/spf13/cobra" ) @@ -32,7 +32,7 @@ func addOrchestratorFlags(cmd *cobra.Command) *cobra.Command { if err != nil { panic(err) } - cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream orchestrator home directory") + cmd.Flags().String(base.FlagHome, homeDir, "The Bloblobstream orchestrator home directory") cmd.Flags().String(base.FlagEVMPassphrase, "", "the evm account passphrase (if not specified as a flag, it will be asked interactively)") base.AddP2PNicknameFlag(cmd) base.AddP2PListenAddressFlag(cmd) @@ -119,7 +119,7 @@ func addInitFlags(cmd *cobra.Command) *cobra.Command { if err != nil { panic(err) } - cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream orchestrator home directory") + cmd.Flags().String(base.FlagHome, homeDir, "The Bloblobstream orchestrator home directory") return cmd } diff --git a/cmd/bstream/query/cmd.go b/cmd/bstream/query/cmd.go index 7aca0661..e58666d5 100644 --- a/cmd/bstream/query/cmd.go +++ b/cmd/bstream/query/cmd.go @@ -11,8 +11,8 @@ import ( common2 "github.com/ethereum/go-ethereum/common" - celestiatypes "github.com/celestiaorg/celestia-app/x/blobstream/types" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/common" + celestiatypes "github.com/celestiaorg/celestia-app/x/bloblobstream/types" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/common" "github.com/celestiaorg/orchestrator-relayer/p2p" "github.com/celestiaorg/orchestrator-relayer/rpc" "github.com/celestiaorg/orchestrator-relayer/types" @@ -29,7 +29,7 @@ func Command() *cobra.Command { queryCmd := &cobra.Command{ Use: "query", Aliases: []string{"q"}, - Short: "Query relevant information from 
a running Blobstream", + Short: "Query relevant information from a running Bloblobstream", SilenceUsage: true, } @@ -47,8 +47,8 @@ func Signers() *cobra.Command { command := &cobra.Command{ Use: "signers ", Args: cobra.ExactArgs(1), - Short: "Queries the Blobstream for attestations signers", - Long: "Queries the Blobstream for attestations signers. The nonce is the attestation nonce that the command" + + Short: "Queries the Bloblobstream for attestations signers", + Long: "Queries the Bloblobstream for attestations signers. The nonce is the attestation nonce that the command" + " will query signatures for. It should be either a specific nonce starting from 2 and on." + " Or, use 'latest' as argument to check the latest attestation nonce", RunE: func(cmd *cobra.Command, args []string) error { @@ -107,7 +107,7 @@ func Signers() *cobra.Command { dataStore := dssync.MutexWrap(ds.NewMapDatastore()) // creating the dht - dht, err := p2p.NewBlobstreamDHT(cmd.Context(), h, dataStore, []peer.AddrInfo{}, logger) + dht, err := p2p.NewBloblobstreamDHT(cmd.Context(), h, dataStore, []peer.AddrInfo{}, logger) if err != nil { return err } @@ -390,7 +390,7 @@ func Signature() *cobra.Command { dataStore := dssync.MutexWrap(ds.NewMapDatastore()) // creating the dht - dht, err := p2p.NewBlobstreamDHT(cmd.Context(), h, dataStore, []peer.AddrInfo{}, logger) + dht, err := p2p.NewBloblobstreamDHT(cmd.Context(), h, dataStore, []peer.AddrInfo{}, logger) if err != nil { return err } diff --git a/cmd/bstream/query/config.go b/cmd/bstream/query/config.go index b2b05f33..22e6c050 100644 --- a/cmd/bstream/query/config.go +++ b/cmd/bstream/query/config.go @@ -3,7 +3,7 @@ package query import ( "fmt" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/relayer" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/relayer" "github.com/spf13/cobra" ) diff --git a/cmd/bstream/relayer/cmd.go b/cmd/bstream/relayer/cmd.go index a1d35988..7a7e1091 100644 --- a/cmd/bstream/relayer/cmd.go +++ b/cmd/bstream/relayer/cmd.go @@ -5,14 +5,14 @@ import ( "os" "time" - blobstreamwrapper "github.com/celestiaorg/quantum-gravity-bridge/v2/wrappers/QuantumGravityBridge.sol" + bloblobstreamwrapper "github.com/celestiaorg/quantum-gravity-bridge/v2/wrappers/QuantumGravityBridge.sol" - evm2 "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/keys/evm" + evm2 "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/keys/evm" "github.com/celestiaorg/orchestrator-relayer/p2p" dssync "github.com/ipfs/go-datastore/sync" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/common" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/keys" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/common" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/keys" "github.com/celestiaorg/orchestrator-relayer/evm" "github.com/celestiaorg/orchestrator-relayer/helpers" "github.com/celestiaorg/orchestrator-relayer/store" @@ -27,7 +27,7 @@ func Command() *cobra.Command { relCmd := &cobra.Command{ Use: "relayer", Aliases: []string{"rel"}, - Short: "Blobstream relayer that relays signatures to the target EVM chain", + Short: "Bloblobstream relayer that relays signatures to the target EVM chain", SilenceUsage: true, } @@ -46,7 +46,7 @@ func Command() *cobra.Command { func Init() *cobra.Command { cmd := cobra.Command{ Use: "init", - Short: "Initialize the Blobstream relayer store. Passed flags have persisted effect.", + Short: "Initialize the Bloblobstream relayer store. 
Passed flags have persisted effect.", RunE: func(cmd *cobra.Command, args []string) error { config, err := parseInitFlags(cmd) if err != nil { @@ -81,7 +81,7 @@ func Init() *cobra.Command { func Start() *cobra.Command { command := &cobra.Command{ Use: "start ", - Short: "Runs the Blobstream relayer to submit attestations to the target EVM chain", + Short: "Runs the Blobstream relayer to submit attestations to the target EVM chain", RunE: func(cmd *cobra.Command, args []string) error { config, err := parseRelayerStartFlags(cmd) if err != nil { @@ -145,13 +145,13 @@ func Start() *cobra.Command { } }() - // connecting to a Blobstream contract + // connecting to a Blobstream contract ethClient, err := ethclient.Dial(config.evmRPC) if err != nil { return err } defer ethClient.Close() - blobStreamWrapper, err := blobstreamwrapper.NewWrappers(config.contractAddr, ethClient) + blobStreamWrapper, err := blobstreamwrapper.NewWrappers(config.contractAddr, ethClient) if err != nil { return err } diff --git a/cmd/bstream/relayer/config.go b/cmd/bstream/relayer/config.go index b1cbee1c..05384f46 100644 --- a/cmd/bstream/relayer/config.go +++ b/cmd/bstream/relayer/config.go @@ -6,7 +6,7 @@ import ( "github.com/cosmos/cosmos-sdk/client/flags" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/base" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/base" "github.com/celestiaorg/orchestrator-relayer/evm" "github.com/spf13/cobra" @@ -35,13 +35,13 @@ func addRelayerStartFlags(cmd *cobra.Command) *cobra.Command { cmd.Flags().String(FlagCoreRPCHost, "localhost", "Specify the rest rpc address host") cmd.Flags().Uint(FlagCoreRPCPort, 26657, "Specify the rest rpc address port") cmd.Flags().String(FlagEVMRPC, "http://localhost:8545", "Specify the ethereum rpc address") - cmd.Flags().String(FlagContractAddress, "", "Specify the contract at which the Blobstream is deployed") + cmd.Flags().String(FlagContractAddress, "", "Specify the contract at which the Blobstream is deployed") cmd.Flags().Uint64(FlagEVMGasLimit, evm.DefaultEVMGasLimit, "Specify the evm gas limit") homeDir, err := base.DefaultServicePath(ServiceNameRelayer) if err != nil { panic(err) } - cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream relayer home directory") + cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream relayer home directory") cmd.Flags().String(base.FlagEVMPassphrase, "", "the evm account passphrase (if not specified as a flag, it will be asked interactively)") base.AddP2PNicknameFlag(cmd) base.AddP2PListenAddressFlag(cmd) @@ -159,7 +159,7 @@ func addInitFlags(cmd *cobra.Command) *cobra.Command { if err != nil { panic(err) } - cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream relayer home directory") + cmd.Flags().String(base.FlagHome, homeDir, "The Blobstream relayer home directory") return cmd } diff --git a/cmd/bstream/root/cmd.go b/cmd/bstream/root/cmd.go index fc1eb85c..d6d42c2d 100644 --- a/cmd/bstream/root/cmd.go +++ b/cmd/bstream/root/cmd.go @@ -1,24 +1,24 @@ package root import ( - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/bootstrapper" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/generate" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/query" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/bootstrapper" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/generate" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/query" - "github.com/celestiaorg/celestia-app/x/blobstream/client" -
"github.com/celestiaorg/orchestrator-relayer/cmd/bstream/deploy" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/orchestrator" - "github.com/celestiaorg/orchestrator-relayer/cmd/bstream/relayer" + "github.com/celestiaorg/celestia-app/x/bloblobstream/client" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/deploy" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/orchestrator" + "github.com/celestiaorg/orchestrator-relayer/cmd/blobstream/relayer" "github.com/spf13/cobra" ) -// Cmd creates a new root command for the Blobstream CLI. It is called once in the +// Cmd creates a new root command for the Bloblobstream CLI. It is called once in the // main function. func Cmd() *cobra.Command { rootCmd := &cobra.Command{ - Use: "bstream", - Short: "The Blobstream CLI", + Use: "blobstream", + Short: "The Bloblobstream CLI", SilenceUsage: true, } diff --git a/docker/entrypoint.sh b/docker/entrypoint.sh index 7db3ac1b..323c8bdc 100644 --- a/docker/entrypoint.sh +++ b/docker/entrypoint.sh @@ -2,7 +2,7 @@ set -e -echo "Starting Celestia Blobstream with command:" +echo "Starting Celestia Bloblobstream with command:" echo "$@" echo "" diff --git a/docs/bootstrapper.md b/docs/bootstrapper.md index bf404006..c86e17e1 100644 --- a/docs/bootstrapper.md +++ b/docs/bootstrapper.md @@ -1,19 +1,19 @@ -# Blobstream bootstrapper +# Bloblobstream bootstrapper -To bootstrap the Blobstream P2P network, we use the bootstrapper Blobstream node type to accept connections from freshly created orchestrators/relayers and share its peer table with them. +To bootstrap the Bloblobstream P2P network, we use the bootstrapper Bloblobstream node type to accept connections from freshly created orchestrators/relayers and share its peer table with them. ## How to run -### Install the Blobstream binary +### Install the Bloblobstream binary -Make sure to have the Blobstream binary installed. Check [the Blobstream binary page](https://docs.celestia.org/nodes/blobstream-binary) for more details. +Make sure to have the Bloblobstream binary installed. Check [the Bloblobstream binary page](https://docs.celestia.org/nodes/bloblobstream-binary) for more details. ### Init the store Before starting the bootstrapper, we will need to init the store: ```ssh -bstream bootstrapper init +blobstream bootstrapper init ``` By default, the store will be created un `~/.bootstrapper`. However, if you want to specify a custom location, you can use the `--home` flag. Or, you can use the following environment variable: @@ -29,7 +29,7 @@ The P2P private key is optional, and a new one will be generated automatically o The `p2p` sub-command will help you set up this key if you want to use a specific one: ```ssh -bstream bootstrapper p2p --help +blobstream bootstrapper p2p --help ``` ### Start the bootstrapper @@ -37,12 +37,12 @@ bstream bootstrapper p2p --help Now that we have the store initialized, we can start the bootstrapper: ```shell -bstream bootstrapper +blobstream bootstrapper -Blobstream P2P network bootstrapper command +Bloblobstream P2P network bootstrapper command Usage: - bstream bootstrapper [command] + blobstream bootstrapper [command] Aliases: bootstrapper, bs @@ -50,7 +50,7 @@ Aliases: Flags: -h, --help help for bootstrapper -Use "bstream bootstrapper [command] --help" for more information about a command. +Use "blobstream bootstrapper [command] --help" for more information about a command. 
``` ### Open the P2P port diff --git a/docs/deploy.md b/docs/deploy.md index 4cd541b7..daa21a7b 100644 --- a/docs/deploy.md +++ b/docs/deploy.md @@ -1,45 +1,45 @@ --- -sidebar_label: Deploy the Blobstream contract -description: Learn how to deploy the Blobstream smart contract. +sidebar_label: Deploy the Blobstream contract +description: Learn how to deploy the Blobstream smart contract. --- -# Deploy the Blobstream contract +# Deploy the Blobstream contract -The `deploy` is a helper command that allows deploying the Blobstream smart contract to a new EVM chain: +The `deploy` is a helper command that allows deploying the Blobstream smart contract to a new EVM chain: ```ssh -bstream deploy --help +blobstream deploy --help -Deploys the Blobstream contract and initializes it using the provided Celestia chain +Deploys the Blobstream contract and initializes it using the provided Celestia chain Usage: - bstream deploy [flags] - bstream deploy [command] + blobstream deploy [flags] + blobstream deploy [command] Available Commands: - keys Blobstream keys manager + keys Blobstream keys manager ``` ## How to run -### Install the Blobstream binary -Make sure to have the Blobstream binary installed. Check [the Blobstream binary page](https://docs.celestia.org/nodes/blobstream-binary) for more details. +### Install the Blobstream binary +Make sure to have the Blobstream binary installed. Check [the Blobstream binary page](https://docs.celestia.org/nodes/blobstream-binary) for more details. ### Add keys -In order to deploy a Blobstream smart contract, you will need a funded EVM address and its private key. The `keys` command will help you set up this key: +In order to deploy a Blobstream smart contract, you will need a funded EVM address and its private key. The `keys` command will help you set up this key: ```ssh -bstream deploy keys --help +blobstream deploy keys --help ``` To import your EVM private key, there is the `import` subcommand to assist you with that: ```ssh -bstream deploy keys evm import --help +blobstream deploy keys evm import --help ``` This subcommand allows you to either import a raw ECDSA private key provided as plaintext, or import it from a file. The files are JSON keystore files encrypted using a passphrase like in [this example](https://geth.ethereum.org/docs/developers/dapp-developer/native-accounts). @@ -47,17 +47,17 @@ This subcommand allows you to either import a raw ECDSA private key provided as After adding the key, you can check that it's added via running: ```ssh -bstream deploy keys evm list +blobstream deploy keys evm list ``` -For more information about the `keys` command, check [the `keys` documentation](https://docs.celestia.org/nodes/blobstream-keys). +For more information about the `keys` command, check [the `keys` documentation](https://docs.celestia.org/nodes/blobstream-keys). ### Deploy the contract -Now, we can deploy the Blobstream contract to a new EVM chain: +Now, we can deploy the Blobstream contract to a new EVM chain: ```ssh -blobstream deploy \ +blobstream deploy \ --evm.chain-id 4 \ --evm.contract-address 0x27a1F8CE94187E4b043f4D57548EF2348Ed556c7 \ --core.grpc.host localhost \ @@ -68,8 +68,8 @@ blobstream deploy \ The `latest` can be replaced by the following: -- `latest`: to deploy the Blobstream contract starting from the latest validator set. -- `earliest`: to deploy the Blobstream contract starting from genesis. -- `nonce`: you can provide a custom nonce on where you want the Blobstream to start.
If the provided nonce is not a `Valset` attestation, then the one before it will be used to deploy the Blobstream smart contract. +- `latest`: to deploy the Blobstream contract starting from the latest validator set. +- `earliest`: to deploy the Blobstream contract starting from genesis. +- `nonce`: you can provide a custom nonce on where you want the Blobstream to start. If the provided nonce is not a `Valset` attestation, then the one before it will be used to deploy the Blobstream smart contract. -And, now you will see the Blobstream smart contract address in the logs along with the transaction hash. +And, now you will see the Blobstream smart contract address in the logs along with the transaction hash. diff --git a/docs/keys.md b/docs/keys.md index aa44bcdc..3af26503 100644 --- a/docs/keys.md +++ b/docs/keys.md @@ -7,11 +7,11 @@ description: Learn how to manage EVM private keys and P2P identities. -The Blobstream `keys` command allows managing EVM private keys and P2P identities. It is defined as a subcommand for multiple commands with the only difference being the directory where the keys are stored. For the remaining functionality, it is the same for all the commands. +The Blobstream `keys` command allows managing EVM private keys and P2P identities. It is defined as a subcommand for multiple commands with the only difference being the directory where the keys are stored. For the remaining functionality, it is the same for all the commands. ## Orchestrator command -The `bstream orchestrator keys` command manages keys for the orchestrator. By default, it uses the orchestrator default home directory to store the keys: `~/.orchestrator/keystore`. However, the default home can be changed either by specifying a different directory using the `--home` flag or setting the following environment variable: +The `blobstream orchestrator keys` command manages keys for the orchestrator. By default, it uses the orchestrator default home directory to store the keys: `~/.orchestrator/keystore`. However, the default home can be changed either by specifying a different directory using the `--home` flag or setting the following environment variable: | Variable | Explanation | Default value | Required | |---------------------|---------------------------------------|-------------------|----------| @@ -19,7 +19,7 @@ The `bstream orchestrator keys` command manages keys for the orchestrator. By de ## Relayer command -The `bstream relayer keys` command manages keys for the relayer. By default, it uses the relayer default home directory to store the keys: `~/.relayer/keystore`. However, the default home can be changed either by specifying a different directory using the `--home` flag or setting the following environment variable: +The `blobstream relayer keys` command manages keys for the relayer. By default, it uses the relayer default home directory to store the keys: `~/.relayer/keystore`. However, the default home can be changed either by specifying a different directory using the `--home` flag or setting the following environment variable: | Variable | Explanation | Default value | Required | |---------------------|---------------------------------------|-------------------|----------| @@ -27,7 +27,7 @@ The `bstream relayer keys` command manages keys for the relayer. By default, it ## Deploy command -The `bstream deploy keys` command manages keys for the deployer. By default, it uses the deployer default home directory to store the keys: `~/.deployer/keystore`.
However, the default home can be changed either by specifying a different directory using the `--home` flag or setting the following environment variable: +The `blobstream deploy keys` command manages keys for the deployer. By default, it uses the deployer default home directory to store the keys: `~/.deployer/keystore`. However, the default home can be changed either by specifying a different directory using the `--home` flag or setting the following environment variable: | Variable | Explanation | Default value | Required | |---------------------|---------------------------------------|-------------------|----------| @@ -48,21 +48,21 @@ As specified above, aside from the difference in the default home directory, the The examples will use the orchestrator command to access the keys. However, the same behaviour applies to the other commands as well. ```ssh -bstream orchestrator keys --help +blobstream orchestrator keys --help -Blobstream keys manager +Blobstream keys manager Usage: - bstream orchestrator keys [command] + blobstream orchestrator keys [command] Available Commands: - evm Blobstream EVM keys manager - p2p Blobstream p2p keys manager + evm Blobstream EVM keys manager + p2p Blobstream p2p keys manager Flags: -h, --help help for keys -Use "bstream orchestrator keys [command] --help" for more information about a command. +Use "blobstream orchestrator keys [command] --help" for more information about a command. ``` ### EVM keystore @@ -72,12 +72,12 @@ The first subcommand of the `keys` command is `evm`. This latter allows managing The EVM keys are `ECDSA` keys using the `secp256k1` curve. The implementation uses `geth` file system keystore [implementation](https://geth.ethereum.org/docs/developers/dapp-developer/native-accounts). Thus, keys can be used interchangeably with any compatible software. ```ssh -bstream orchestrator keys evm --help +blobstream orchestrator keys evm --help -Blobstream EVM keys manager +Blobstream EVM keys manager Usage: - bstream orchestrator keys evm [command] + blobstream orchestrator keys evm [command] Available Commands: add create a new EVM address @@ -89,7 +89,7 @@ Available Commands: Flags: -h, --help help for evm -Use "bstream orchestrator keys evm [command] --help" for more information about a command. +Use "blobstream orchestrator keys evm [command] --help" for more information about a command. ``` The store also uses the `accounts.StandardScryptN` and `accounts.StandardScryptP` for the `Scrypt` password-based key derivation algorithm to improve its resistance to parallel hardware attacks: @@ -103,12 +103,12 @@ evmKs = keystore.NewKeyStore(evmKeyStorePath(path), keystore.StandardScryptN, ke The `add` subcommand allows creating a new EVM private key and storing it in the keystore: ```ssh -bstream orchestrator keys evm add --help +blobstream orchestrator keys evm add --help create a new EVM address Usage: - bstream orchestrator keys evm add [flags] + blobstream orchestrator keys evm add [flags] ``` The passphrase of the key encryption can be passed as a flag.
But it is advised After creating a new key, you will see its corresponding address printed: ```ssh -bstream orchestrator keys evm add +blobstream orchestrator keys evm add I[2023-04-13|14:16:11.387] successfully opened store path=/home/midnight/.orchestrator I[2023-04-13|14:16:11.387] please provide a passphrase for your account @@ -129,12 +129,12 @@ I[2023-04-13|14:16:30.534] successfully closed store path=/ho The `delete` subcommand allows deleting an EVM private key from store via providing its corresponding address: ```ssh -bstream orchestrator keys evm delete --help +blobstream orchestrator keys evm delete --help delete an EVM addresses from the key store Usage: - bstream orchestrator keys evm delete [flags] + blobstream orchestrator keys evm delete [flags] ``` The provided address should be a `0x` prefixed EVM address. @@ -144,7 +144,7 @@ After running the command, you will be prompted to enter the passphrase for the Then, you will be prompted to confirm that you want to delete that private key. Make sure to verify if you're deleting the right one because once deleted, it can no longer be recovered! ```ssh -bstream orchestrator keys evm delete 0x27a1F8CE94187E4b043f4D57548EF2348Ed556c7 +blobstream orchestrator keys evm delete 0x27a1F8CE94187E4b043f4D57548EF2348Ed556c7 I[2023-04-13|15:01:41.308] successfully opened store path=/home/midnight/.orchestrator I[2023-04-13|15:01:41.309] deleting account address=0x27a1F8CE94187E4b043f4D57548EF2348Ed556c7 @@ -160,7 +160,7 @@ I[2023-04-13|15:01:45.534] successfully closed store path=/ho The `list` subcommand allows listing the existing keys in the keystore: ```ssh -bstream orchestrator keys evm list +blobstream orchestrator keys evm list I[2023-04-13|16:08:45.084] successfully opened store path=/home/midnight/.orchestrator I[2023-04-13|16:08:45.084] listing accounts available in store @@ -175,18 +175,18 @@ You could specify a different home using the `--home` flag to list the keys in a The `update` subcommand allows changing the EVM private key passphrase to a new one. It takes as argument the `0x` prefixed EVM address corresponding to the private key we want to edit. ```ssh -bstream orchestrator evm update --help +blobstream orchestrator evm update --help update an EVM account passphrase Usage: - bstream orchestrator keys evm update [flags] + blobstream orchestrator keys evm update [flags] ``` Example: ```ssh -bstream orchestrator evm update 0x7Dd8F9CAfe6D25165249A454F2d0b72FD149Bbba +blobstream orchestrator evm update 0x7Dd8F9CAfe6D25165249A454F2d0b72FD149Bbba I[2023-04-13|16:21:17.139] successfully opened store path=/home/midnight/.orchestrator I[2023-04-13|16:21:17.140] updating account address=0x7Dd8F9CAfe6D25165249A454F2d0b72FD149Bbba @@ -205,12 +205,12 @@ The `--home` can be specified if the store is not in the default directory. The `import` subcommand allows importing existing private keys into the keystore. It has two subcommands: `ecdsa` and `file`. The first allows importing a private key in plaintext, while the other allows importing a private key from a JSON file secured with a passphrase. 
```ssh -bstream orchestrator keys evm import --help +blobstream orchestrator keys evm import --help import evm keys to the keystore Usage: - bstream orchestrator keys evm import [command] + blobstream orchestrator keys evm import [command] Available Commands: ecdsa import an EVM address from an ECDSA private key @@ -219,7 +219,7 @@ Available Commands: Flags: -h, --help help for import -Use "bstream orchestrator keys evm import [command] --help" for more information about a command. +Use "blobstream orchestrator keys evm import [command] --help" for more information about a command. ``` #### EVM: Import ECDSA @@ -229,7 +229,7 @@ For the first one, it takes as argument the private key in plaintext. Then, it p Example: ```ssh -bstream orchestrator keys evm import ecdsa da6ed55cb2894ac2c9c10209c09de8e8b9d109b910338d5bf3d747a7e1fc9eb7 +blobstream orchestrator keys evm import ecdsa da6ed55cb2894ac2c9c10209c09de8e8b9d109b910338d5bf3d747a7e1fc9eb7 I[2023-04-13|17:00:48.617] successfully opened store path=/home/midnight/.orchestrator I[2023-04-13|17:00:48.617] importing account @@ -243,18 +243,18 @@ For the second, it takes a JSON key file, as defined in [@ethereum/eth-keyfile](https://github.com/ethereum/eth-keyfile), and imports it to your keystore, so it can be used for signatures. ```ssh -bstream orchestrator keys evm import file --help +blobstream orchestrator keys evm import file --help import an EVM address from a file Usage: - bstream orchestrator keys evm import file [flags] + blobstream orchestrator keys evm import file [flags] ``` For example, if we have a file in the current directory containing a private key, we could run the following: ```ssh -bstream orchestrator keys evm import file UTC--2023-04-13T15-00-50.302148204Z--966e6f22781ef6a6a82bbb4db3df8e225dfd9488 +blobstream orchestrator keys evm import file UTC--2023-04-13T15-00-50.302148204Z--966e6f22781ef6a6a82bbb4db3df8e225dfd9488 I[2023-04-13|17:31:53.307] successfully opened store path=/home/midnight/.orchestrator I[2023-04-13|17:31:53.307] importing account @@ -264,7 +264,7 @@ I[2023-04-13|17:31:58.436] successfully imported file address= I[2023-04-13|17:31:58.437] successfully closed store path=/home/midnight/.orchestrator ``` -with the `passphrase` being the current file passphrase, and the `new passphrase` being the new passphrase that will be used to encrypt the private key in the Blobstream store. +with the `passphrase` being the current file passphrase, and the `new passphrase` being the new passphrase that will be used to encrypt the private key in the Blobstream store. ### P2P keystore @@ -273,12 +273,12 @@ Similar to the above EVM keystore, the P2P store has similar subcommands for han To access the P2P keystore, run the following: ```ssh -bstream orchestrator keys p2p +blobstream orchestrator keys p2p -Blobstream p2p keys manager +Blobstream p2p keys manager Usage: - bstream orchestrator keys p2p [command] + blobstream orchestrator keys p2p [command] Available Commands: add create a new Ed25519 P2P address @@ -289,7 +289,7 @@ Available Commands: Flags: -h, --help help for p2p -Use "bstream orchestrator keys p2p [command] --help" for more information about a command. +Use "blobstream orchestrator keys p2p [command] --help" for more information about a command. ``` The `orchestrator` could be replaced by `relayer` and the only difference would be the default home directory.
Aside from that, all the methods defined for the orchestrator will also work with the relayer. @@ -299,18 +299,18 @@ The `orchestrator` could be replaced by `relayer` and the only difference would The `add` subcommand creates a new p2p key to the p2p store: ```ssh -bstream orchestrator keys p2p add --help +blobstream orchestrator keys p2p add --help create a new Ed25519 P2P address Usage: - bstream orchestrator keys p2p add [flags] + blobstream orchestrator keys p2p add [flags] ``` It takes as argument an optional `` which would be the name that we can use to reference that private key. If not specified, an incremental nickname will be assigned. ```ssh -bstream orchestrator keys p2p add +blobstream orchestrator keys p2p add I[2023-04-13|17:38:17.289] successfully opened store path=/home/midnight/.orchestrator I[2023-04-13|17:38:17.290] generating a new Ed25519 private key nickname=1 @@ -327,12 +327,12 @@ The nickname will be needed in case the orchestrator needs to use a specific pri The `delete` subcommand will delete a P2P private key from store referenced by its nickname: ```ssh -bstream orchestrator keys p2p delete --help +blobstream orchestrator keys p2p delete --help delete an Ed25519 P2P private key from store Usage: - bstream orchestrator keys p2p delete [flags] + blobstream orchestrator keys p2p delete [flags] ``` #### P2P: Import subcommand @@ -340,12 +340,12 @@ The `import` subcommand will import an existing Ed25519 private key to the store. It takes as argument the nickname that we wish to save the private key under, and the actual private key in hex format without `0x`: ```ssh -bstream orchestrator keys p2p import --help +blobstream orchestrator keys p2p import --help import an existing p2p private key Usage: - bstream orchestrator keys p2p import [flags] + blobstream orchestrator keys p2p import [flags] ``` #### P2P: List subcommand @@ -353,10 +353,10 @@ The `list` subcommand lists the existing P2P private keys in the store: ```ssh -bstream orchestrator keys p2p list --help +blobstream orchestrator keys p2p list --help list existing p2p addresses Usage: - bstream orchestrator keys p2p list [flags] + blobstream orchestrator keys p2p list [flags] ``` diff --git a/docs/orchestrator.md b/docs/orchestrator.md index 0b999b01..9147f364 100644 --- a/docs/orchestrator.md +++ b/docs/orchestrator.md @@ -1,22 +1,22 @@ --- -sidebar_label: Blobstream Orchestrator -description: Learn about the Blobstream Orchestrator. +sidebar_label: Blobstream Orchestrator +description: Learn about the Blobstream Orchestrator. --- -# Blobstream Orchestrator +# Blobstream Orchestrator -The role of the orchestrator is to sign attestations using its corresponding validator EVM private key. These attestations are generated within the Blobstream module of the Celestia-app state machine. To learn more about what attestations are, you can refer to [the Blobstream overview](https://github.com/celestiaorg/celestia-app/tree/main/x/blobstream). +The role of the orchestrator is to sign attestations using its corresponding validator EVM private key. These attestations are generated within the Blobstream module of the Celestia-app state machine. To learn more about what attestations are, you can refer to [the Blobstream overview](https://github.com/celestiaorg/celestia-app/tree/main/x/blobstream). ## How it works The orchestrator does the following: 1. Connect to a Celestia-app full node or validator node via RPC and gRPC and wait for new attestations -2.
Once an attestation is created inside the Blobstream state machine, the orchestrator queries it. -3. After getting the attestation, the orchestrator signs it using the provided EVM private key. The private key should correspond to the EVM address provided when creating the validator. Read [more about Blobstream keys](https://docs.celestia.org/nodes/blobstream-keys/). +2. Once an attestation is created inside the Blobstream state machine, the orchestrator queries it. +3. After getting the attestation, the orchestrator signs it using the provided EVM private key. The private key should correspond to the EVM address provided when creating the validator. Read [more about Blobstream keys](https://docs.celestia.org/nodes/blobstream-keys/). 4. Then, the orchestrator pushes its signature to the P2P network it is connected to, via adding it as a DHT value. 5. Listen for new attestations and go back to step 2. @@ -46,16 +46,16 @@ To run an orchestrator, you will need to have access to the following: * *A list of bootstrappers for the P2P network. These will be shared by the team for every network we plan on supporting. * *Access to your consensus node RPC and gRPC ports. -### Install the Blobstream binary -Make sure to have the Blobstream binary installed. Check [the Blobstream binary page](https://docs.celestia.org/nodes/blobstream-binary) for more details. +### Install the Blobstream binary +Make sure to have the Blobstream binary installed. Check [the Blobstream binary page](https://docs.celestia.org/nodes/blobstream-binary) for more details. ### Init the store Before starting the orchestrator, we will need to init the store: ```ssh -blobstream orchestrator init +blobstream orchestrator init ``` By default, the store will be created under `~/.orchestrator`. However, if you want to specify a custom location, you can use the `--home` flag. Or, you can use the following environment variable: @@ -78,7 +78,7 @@ The P2P private key is optional, and a new one will be generated automatically o The `keys` command will help you set up these keys: ```ssh -bstream orchestrator keys --help +blobstream orchestrator keys --help ``` To add an EVM private key, check the next section. @@ -92,7 +92,7 @@ To register an EVM address for your validator, check the section [Register EVM A To import your EVM private key, there is the `import` subcommand to assist you with that: ```ssh -bstream orchestrator keys evm import --help +blobstream orchestrator keys evm import --help ``` This subcommand allows you to either import a raw ECDSA private key provided as plaintext, or import it from a file. The files are JSON keystore files encrypted using a passphrase like in [this example](https://geth.ethereum.org/docs/developers/dapp-developer/native-accounts). @@ -100,10 +100,10 @@ This subcommand allows you to either import a raw ECDSA private key provided as After adding the key, you can check that it's added via running: ```ssh -bstream orchestrator keys evm list +blobstream orchestrator keys evm list ``` -For more information about the `keys` command, check [the `keys` documentation](https://docs.celestia.org/nodes/blobstream-keys). +For more information about the `keys` command, check [the `keys` documentation](https://docs.celestia.org/nodes/blobstream-keys). ### Start the orchestrator @@ -112,18 +112,18 @@ Now that we have the store initialized, we can start the orchestrator.
Make sure The orchestrator accepts the following flags: ```ssh -bstream orchestrator start --help +blobstream orchestrator start --help -Starts the Blobstream orchestrator to sign attestations +Starts the Bloblobstream orchestrator to sign attestations Usage: - bstream orchestrator start [flags] + blobstream orchestrator start [flags] ``` To start the orchestrator in the default home directory, run the following: ```ssh -bstream orchestrator start \ +blobstream orchestrator start \ --core.grpc.host localhost \ --core.grpc.port 9090 \ --core.rpc.host localhost \ @@ -145,7 +145,7 @@ If not, then the signatures may not be available to the network and relayers wil #### Register EVM Address -When creating a validator, a random EVM address corresponding to its operator is set in the Blobstream state. This latter will be used by the orchestrator to sign attestations. And since validators will generally not have access to its corresponding private key, that address needs to be edited with one whose private key is known to the validator operator. +When creating a validator, a random EVM address corresponding to its operator is set in the Bloblobstream state. This latter will be used by the orchestrator to sign attestations. And since validators will generally not have access to its corresponding private key, that address needs to be edited with one whose private key is known to the validator operator. To edit an EVM address for a certain validator, its corresponding account needs to send a `RegisterEVMAddress` transaction with the new address. @@ -160,13 +160,13 @@ This assumes that you're using the default home directory, the default keystore To check which EVM address is registered for your `valoper` address, run the following: ```ssh -celestia-appd query blobstream evm +celestia-appd query bloblobstream evm ``` Then, to proceed with the edit, run the following command: ```shell -celestia-appd tx blobstream register \ +celestia-appd tx bloblobstream register \ \ \ --fees 30000utia \ @@ -244,11 +244,11 @@ logs: - events: - attributes: - key: action - value: /celestia.blobstream.v1.MsgRegisterEVMAddress + value: /celestia.bloblobstream.v1.MsgRegisterEVMAddress type: message log: "" msg_index: 0 -raw_log: '[{"msg_index":0,"events":[{"type":"message","attributes":[{"key":"action","value":"/celestia.blobstream.v1.MsgRegisterEVMAddress"}]}]}]' +raw_log: '[{"msg_index":0,"events":[{"type":"message","attributes":[{"key":"action","value":"/celestia.bloblobstream.v1.MsgRegisterEVMAddress"}]}]}]' timestamp: "" tx: null txhash: 4199EA959A2CFEFCD4726D8D8F7B536458A46A27318D3483A4E9614F560606BC @@ -257,7 +257,7 @@ txhash: 4199EA959A2CFEFCD4726D8D8F7B536458A46A27318D3483A4E9614F560606BC Now, you can verify that the EVM address has been updated using the following command: ```ssh -celestia-appd query blobstream evm +celestia-appd query bloblobstream evm ``` Now, you can restart the orchestrator, and it should start signing. 
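Before sending the `register` transaction in the first place, it can help to sanity-check the EVM address format, since a mistyped address cannot be signed for by the key you imported. The snippet below is only an illustrative sketch and is not part of this repository; it uses go-ethereum's address helpers (already a dependency here), and the example address is the one used in the e2e scripts.

```go
package main

import (
	"fmt"

	gethcommon "github.com/ethereum/go-ethereum/common"
)

func main() {
	// Example address taken from the e2e scripts in this repository;
	// replace it with the address whose private key you imported.
	addr := "0x966e6f22781EF6a6A82BBB4DB3df8E225DfD9488"

	// IsHexAddress only checks the format (0x prefix plus 20 hex bytes);
	// it does not prove that you control the corresponding private key.
	if !gethcommon.IsHexAddress(addr) {
		fmt.Println("not a valid EVM address, do not register it")
		return
	}
	fmt.Println("address format looks valid:", gethcommon.HexToAddress(addr).Hex())
}
```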
@@ -273,12 +273,12 @@ If you want to start the orchestrator as a `systemd` service, you could use the ```text [Unit] Description=Blobstream orchestrator service After=network.target [Service] Type=simple -ExecStart= orchestrator start --evm.account --evm.passphrase --core.grpc.host --core.grpc.port --core.rpc.host --core.rpc.port --p2p.bootstrappers +ExecStart= orchestrator start --evm.account --evm.passphrase --core.grpc.host --core.grpc.port --core.rpc.host --core.rpc.port --p2p.bootstrappers LimitNOFILE=infinity LimitCORE=infinity Restart=always
diff --git a/docs/relayer.md b/docs/relayer.md index bda66970..9abcb1d9 100644 --- a/docs/relayer.md +++ b/docs/relayer.md @@ -1,27 +1,27 @@ --- sidebar_label: Blobstream Relayer description: Learn about the Blobstream Relayer. --- # Blobstream Relayer The role of the relayer is to gather attestations' signatures from the orchestrators and submit them to a target EVM chain. The attestations are generated within the Blobstream module of the Celestia-app state machine. To learn more about what attestations are, you can refer to [the Blobstream overview](https://github.com/celestiaorg/celestia-app/tree/main/x/blobstream). Also, while every validator in the Celestia validator set needs to run an orchestrator, only one entity needs to run the relayer, and it can be anyone. Thus, if you're a validator, you most likely want to read [the orchestrator documentation](https://docs.celestia.org/nodes/blobstream-orchestrator/). -Every relayer needs to target a Blobstream smart contract. This latter can be deployed, if not already, using the `bstream deploy` command. More details in the [Deploy the Blobstream contract guide](https://docs.celestia.org/nodes/blobstream-deploy/). +Every relayer needs to target a Blobstream smart contract. The contract can be deployed, if it does not exist already, using the `blobstream deploy` command. More details are in the [Deploy the Blobstream contract guide](https://docs.celestia.org/nodes/blobstream-deploy/). ## How it works The relayer works as follows: 1. Connect to a Celestia-app full node or validator node via RPC and gRPC and wait for attestations. 2. Once an attestation is created inside the Blobstream state machine, the relayer queries it. 3. After getting the attestation, the relayer checks whether the target Blobstream smart contract's nonce is lower than the attestation's. 4. If so, the relayer queries the P2P network for signatures from the orchestrators. 5. Once the relayer finds more than 2/3 of the signatures, it submits them to the target Blobstream smart contract where they get validated. 6. Listen for new attestations and go back to step 2. The relayer connects to a P2P network separate from the consensus and data availability ones. So, we will provide bootstrappers for that network.
@@ -36,16 +36,16 @@ I[2023-04-26|00:04:28.175] waiting for routing table to populate targetnu ## How to run ### Install the Blobstream binary Make sure to have the Blobstream binary installed. Check out the [Install the Blobstream binary page](https://docs.celestia.org/nodes/blobstream-binary) for more details. ### Init the store Before starting the relayer, we will need to init the store: ```ssh -bstream relayer init +blobstream relayer init ``` By default, the store will be created under `~/.relayer`. However, if you want to specify a custom location, you can use the `--home` flag. Or, you can use the following environment variable: @@ -68,7 +68,7 @@ The P2P private key is optional, and a new one will be generated automatically o The `keys` command will help you set up these keys: ```ssh -bstream relayer keys --help +blobstream relayer keys --help ``` To add an EVM private key, check the next section. @@ -80,7 +80,7 @@ Because EVM keys are important, we provide a keystore that will help manage them To import your EVM private key, there is the `import` subcommand to assist you with that: ```ssh -bstream relayer keys evm import --help +blobstream relayer keys evm import --help ``` This subcommand allows you to either import a raw ECDSA private key provided as plaintext, or import it from a file. The files are JSON keystore files encrypted using a passphrase like [in this example](https://geth.ethereum.org/docs/developers/dapp-developer/native-accounts). @@ -88,30 +88,30 @@ This subcommand allows you to either import a raw ECDSA private key provided as After adding the key, you can check that it's added via running: ```ssh -bstream relayer keys evm list +blobstream relayer keys evm list ``` For more information about the `keys` command, check [the `keys` documentation](https://docs.celestia.org/nodes/blobstream-keys). ### Start the relayer Now that we have the store initialized and a target Blobstream smart contract address, we can start the relayer. Make sure your Celestia-app node RPC and gRPC are accessible and that you can connect to the P2P network bootstrappers; a quick connectivity check is sketched below.
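The check below is a minimal, hypothetical sketch and not part of this repository: it dials the consensus node's gRPC endpoint with the same gRPC options the e2e helpers use, and fails fast if the endpoint is unreachable. The address is an assumption; substitute the host and port you pass via `--core.grpc.host` and `--core.grpc.port`.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Assumed endpoint: the same host/port you will pass to the relayer flags.
	const grpcAddr = "localhost:9090"

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// WithBlock makes DialContext wait until the connection is actually up,
	// so a timeout here means the endpoint is not reachable.
	conn, err := grpc.DialContext(ctx, grpcAddr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock(),
	)
	if err != nil {
		fmt.Println("gRPC endpoint not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("gRPC endpoint reachable:", grpcAddr)
}
```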
The relayer accepts the following flags: ```ssh -bstream relayer start --help +blobstream relayer start --help -Runs the Blobstream relayer to submit attestations to the target EVM chain +Runs the Bloblobstream relayer to submit attestations to the target EVM chain Usage: - bstream relayer start [flags] + blobstream relayer start [flags] ``` To start the relayer using the default home directory, run the following: ```ssh -/bin/bstream relayer start \ +/bin/blobstream relayer start \ --evm.contract-address=0x27a1F8CE94187E4b043f4D57548EF2348Ed556c7 \ --core.rpc.host=localhost \ --core.rpc.port=26657 \ @@ -124,4 +124,4 @@ To start the relayer using the default home directory, run the following: --p2p.listen-addr=/ip4/0.0.0.0/tcp/30001 ``` -And, you will be prompted to enter your EVM key passphrase for the EVM address passed using the `-d` flag, so that the relayer can use it to send transactions to the target Blobstream smart contract. Make sure that it's funded. +And, you will be prompted to enter your EVM key passphrase for the EVM address passed using the `-d` flag, so that the relayer can use it to send transactions to the target Bloblobstream smart contract. Make sure that it's funded. diff --git a/e2e/Dockerfile_e2e b/e2e/Dockerfile_e2e index 10bfe59a..c65f4773 100644 --- a/e2e/Dockerfile_e2e +++ b/e2e/Dockerfile_e2e @@ -1,4 +1,4 @@ -# stage 1 Build bstream binary +# stage 1 Build blobstream binary FROM golang:1.21.1-alpine as builder RUN apk update && apk --no-cache add make gcc musl-dev git COPY . /orchestrator-relayer @@ -13,9 +13,9 @@ USER root # hadolint ignore=DL3018 RUN apk update && apk --no-cache add bash jq coreutils curl -COPY --from=builder /orchestrator-relayer/build/bstream /bin/bstream +COPY --from=builder /orchestrator-relayer/build/blobstream /bin/blobstream # p2p port EXPOSE 9090 26657 30000 -CMD [ "/bin/bstream" ] +CMD [ "/bin/blobstream" ] diff --git a/e2e/README.md b/e2e/README.md index cda669b8..208706f1 100644 --- a/e2e/README.md +++ b/e2e/README.md @@ -1,16 +1,16 @@ -# Blobstream end to end integration test +# Bloblobstream end to end integration test -This directory contains the Blobstream e2e integration tests. It serves as a way to fully test the Blobstream orchestrator and relayer in real network scenarios +This directory contains the Bloblobstream e2e integration tests. It serves as a way to fully test the Bloblobstream orchestrator and relayer in real network scenarios ## Topology -as discussed under [#398](https://github.com/celestiaorg/celestia-app/issues/398) The e2e network defined under `blobstream_network.go` has the following components: +as discussed under [#398](https://github.com/celestiaorg/celestia-app/issues/398) The e2e network defined under `bloblobstream_network.go` has the following components: - 4 Celestia-app nodes that can be validators - 4 Orchestrator nodes that will each run aside of a celestia-app - 1 Ethereum node. Probably Ganache as it is easier to set up - 1 Relayer node that will listen to Celestia chain and relay attestations -- 1 Deployer node that can deploy a new Blobstream contract when needed. +- 1 Deployer node that can deploy a new Bloblobstream contract when needed. For more information on the environment variables required to run these tests, please check the `docker-compose.yml` file and the shell scripts defined under `celestia-app` directory. @@ -22,7 +22,7 @@ In some test scenarios, we only care about running a single orchestrator node. 
T // create dht for querying bootstrapper, err := helpers.ParseAddrInfos(network.Logger, BOOTSTRAPPERS) HandleNetworkError(t, network, err, false) -_, _, dht := blobstreamtesting.NewTestDHT(ctx, bootstrapper) +_, _, dht := bloblobstreamtesting.NewTestDHT(ctx, bootstrapper) defer dht.Close() ``` diff --git a/e2e/celestia-app/config.toml b/e2e/celestia-app/config.toml index d1fe4eec..86e09e3d 100644 --- a/e2e/celestia-app/config.toml +++ b/e2e/celestia-app/config.toml @@ -15,7 +15,7 @@ proxy_app = "tcp://127.0.0.1:26658" # A custom human readable name for this node -moniker = "blobstream-e2e" +moniker = "bloblobstream-e2e" # If this node is many blocks behind the tip of the chain, FastSync # allows them to catchup quickly by downloading blocks in parallel diff --git a/e2e/celestia-app/genesis.json b/e2e/celestia-app/genesis.json index a21c7e3e..8cfff8a7 100644 --- a/e2e/celestia-app/genesis.json +++ b/e2e/celestia-app/genesis.json @@ -1,6 +1,6 @@ { "genesis_time": "2023-08-19T12:26:21.927143572Z", - "chain_id": "blobstream-e2e", + "chain_id": "bloblobstream-e2e", "initial_height": "1", "consensus_params": { "block": { @@ -188,7 +188,7 @@ { "@type": "/cosmos.staking.v1beta1.MsgCreateValidator", "description": { - "moniker": "blobstream-e2e", + "moniker": "bloblobstream-e2e", "identity": "", "website": "", "security_contact": "", diff --git a/e2e/celestia-app/genesis_template.json b/e2e/celestia-app/genesis_template.json index 83035d45..443b0cfb 100644 --- a/e2e/celestia-app/genesis_template.json +++ b/e2e/celestia-app/genesis_template.json @@ -1,6 +1,6 @@ { "genesis_time": "", - "chain_id": "blobstream-e2e", + "chain_id": "bloblobstream-e2e", "initial_height": "1", "consensus_params": { "block": { @@ -188,7 +188,7 @@ { "@type": "/cosmos.staking.v1beta1.MsgCreateValidator", "description": { - "moniker": "blobstream-e2e", + "moniker": "bloblobstream-e2e", "identity": "", "website": "", "security_contact": "", diff --git a/e2e/deployer_test.go b/e2e/deployer_test.go index f29ca46e..712dd7a3 100644 --- a/e2e/deployer_test.go +++ b/e2e/deployer_test.go @@ -14,10 +14,10 @@ import ( func TestDeployer(t *testing.T) { if os.Getenv("BLOBSTREAM_INTEGRATION_TEST") != TRUE { - t.Skip("Skipping Blobstream integration tests") + t.Skip("Skipping Bloblobstream integration tests") } - network, err := NewBlobstreamNetwork() + network, err := NewBloblobstreamNetwork() HandleNetworkError(t, network, err, false) // to release resources after tests @@ -31,13 +31,13 @@ func TestDeployer(t *testing.T) { err = network.WaitForBlock(ctx, 2) HandleNetworkError(t, network, err, false) - _, err = network.GetLatestDeployedBlobstreamContractWithCustomTimeout(ctx, 15*time.Second) + _, err = network.GetLatestDeployedBloblobstreamContractWithCustomTimeout(ctx, 15*time.Second) HandleNetworkError(t, network, err, true) - err = network.DeployBlobstreamContract() + err = network.DeployBloblobstreamContract() HandleNetworkError(t, network, err, false) - bridge, err := network.GetLatestDeployedBlobstreamContract(ctx) + bridge, err := network.GetLatestDeployedBloblobstreamContract(ctx) HandleNetworkError(t, network, err, false) evmClient := evm.NewClient(nil, bridge, nil, nil, network.EVMRPC, evm.DefaultEVMGasLimit) diff --git a/e2e/docker-compose.yml b/e2e/docker-compose.yml index 7a91bf99..4eb94a42 100644 --- a/e2e/docker-compose.yml +++ b/e2e/docker-compose.yml @@ -259,7 +259,7 @@ services: environment: # By default, we don't want to run the deploy on each run. 
- DEPLOY_NEW_CONTRACT=false - - EVM_CHAIN_ID=blobstream-e2e + - EVM_CHAIN_ID=bloblobstream-e2e - EVM_ACCOUNT=0x95359c3348e189ef7781546e6E13c80230fC9fB5 - PRIVATE_KEY=0e9688e585562e828dcbd4f402d5eddf686f947fb6bf75894a85bf008b017401 - CORE_RPC_HOST=core0 @@ -273,10 +273,10 @@ services: "/bin/bash" ] command: [ - "/opt/deploy_blobstream_contract.sh" + "/opt/deploy_bloblobstream_contract.sh" ] volumes: - - ${PWD}/scripts/deploy_blobstream_contract.sh:/opt/deploy_blobstream_contract.sh:ro + - ${PWD}/scripts/deploy_bloblobstream_contract.sh:/opt/deploy_bloblobstream_contract.sh:ro relayer: container_name: relayer @@ -289,7 +289,7 @@ services: ports: - "30004:30000" environment: - - EVM_CHAIN_ID=blobstream-e2e + - EVM_CHAIN_ID=bloblobstream-e2e - EVM_ACCOUNT=0x95359c3348e189ef7781546e6E13c80230fC9fB5 - PRIVATE_KEY=0e9688e585562e828dcbd4f402d5eddf686f947fb6bf75894a85bf008b017401 - CORE_RPC_HOST=core0 @@ -302,7 +302,7 @@ services: - P2P_BOOTSTRAPPERS=/dns/core0-orch/tcp/30000/p2p/12D3KooWBSMasWzRSRKXREhediFUwABNZwzJbkZcYz5rYr9Zdmfn - P2P_LISTEN=/ip4/0.0.0.0/tcp/30000 # set the following environment variable to some value -# if you want to relay to an existing Blobstream contract +# if you want to relay to an existing Bloblobstream contract # - BLOBSTREAM_CONTRACT=0x123 entrypoint: [ "/bin/bash" @@ -312,4 +312,4 @@ services: ] volumes: - ${PWD}/scripts/start_relayer.sh:/opt/start_relayer.sh:ro - - ${PWD}/scripts/deploy_blobstream_contract.sh:/opt/deploy_blobstream_contract.sh:ro + - ${PWD}/scripts/deploy_bloblobstream_contract.sh:/opt/deploy_bloblobstream_contract.sh:ro diff --git a/e2e/orchestrator_test.go b/e2e/orchestrator_test.go index abda2480..e4a112e7 100644 --- a/e2e/orchestrator_test.go +++ b/e2e/orchestrator_test.go @@ -8,7 +8,7 @@ import ( "github.com/celestiaorg/orchestrator-relayer/helpers" - blobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" + bloblobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" @@ -16,10 +16,10 @@ import ( func TestOrchestratorWithOneValidator(t *testing.T) { if os.Getenv("BLOBSTREAM_INTEGRATION_TEST") != TRUE { - t.Skip("Skipping Blobstream integration tests") + t.Skip("Skipping Bloblobstream integration tests") } - network, err := NewBlobstreamNetwork() + network, err := NewBloblobstreamNetwork() HandleNetworkError(t, network, err, false) // to release resources after tests @@ -42,7 +42,7 @@ func TestOrchestratorWithOneValidator(t *testing.T) { // create dht for querying bootstrapper, err := helpers.ParseAddrInfos(network.Logger, BOOTSTRAPPERS) HandleNetworkError(t, network, err, false) - host, _, dht := blobstreamtesting.NewTestDHT(ctx, bootstrapper) + host, _, dht := bloblobstreamtesting.NewTestDHT(ctx, bootstrapper) defer dht.Close() // force the connection to the DHT to start the orchestrator @@ -68,10 +68,10 @@ func TestOrchestratorWithOneValidator(t *testing.T) { func TestOrchestratorWithTwoValidators(t *testing.T) { if os.Getenv("BLOBSTREAM_INTEGRATION_TEST") != TRUE { - t.Skip("Skipping Blobstream integration tests") + t.Skip("Skipping Bloblobstream integration tests") } - network, err := NewBlobstreamNetwork() + network, err := NewBloblobstreamNetwork() HandleNetworkError(t, network, err, false) // to release resources after tests @@ -103,7 +103,7 @@ func TestOrchestratorWithTwoValidators(t *testing.T) { // create dht for querying bootstrapper, err := helpers.ParseAddrInfos(network.Logger, BOOTSTRAPPERS) HandleNetworkError(t, network, err, 
false) - host, _, dht := blobstreamtesting.NewTestDHT(ctx, bootstrapper) + host, _, dht := bloblobstreamtesting.NewTestDHT(ctx, bootstrapper) defer dht.Close() // force the connection to the DHT to start the orchestrator @@ -157,10 +157,10 @@ func TestOrchestratorWithTwoValidators(t *testing.T) { func TestOrchestratorWithMultipleValidators(t *testing.T) { if os.Getenv("BLOBSTREAM_INTEGRATION_TEST") != TRUE { - t.Skip("Skipping Blobstream integration tests") + t.Skip("Skipping Bloblobstream integration tests") } - network, err := NewBlobstreamNetwork() + network, err := NewBloblobstreamNetwork() assert.NoError(t, err) // to release resources after tests @@ -179,7 +179,7 @@ func TestOrchestratorWithMultipleValidators(t *testing.T) { // create dht for querying bootstrapper, err := helpers.ParseAddrInfos(network.Logger, BOOTSTRAPPERS) HandleNetworkError(t, network, err, false) - host, _, dht := blobstreamtesting.NewTestDHT(ctx, bootstrapper) + host, _, dht := bloblobstreamtesting.NewTestDHT(ctx, bootstrapper) defer dht.Close() // force the connection to the DHT to start the orchestrator @@ -246,10 +246,10 @@ func TestOrchestratorWithMultipleValidators(t *testing.T) { func TestOrchestratorReplayOld(t *testing.T) { if os.Getenv("BLOBSTREAM_INTEGRATION_TEST") != TRUE { - t.Skip("Skipping Blobstream integration tests") + t.Skip("Skipping Bloblobstream integration tests") } - network, err := NewBlobstreamNetwork() + network, err := NewBloblobstreamNetwork() HandleNetworkError(t, network, err, false) // to release resources after tests @@ -284,7 +284,7 @@ func TestOrchestratorReplayOld(t *testing.T) { // create dht for querying bootstrapper, err := helpers.ParseAddrInfos(network.Logger, BOOTSTRAPPERS) HandleNetworkError(t, network, err, false) - host, _, dht := blobstreamtesting.NewTestDHT(ctx, bootstrapper) + host, _, dht := bloblobstreamtesting.NewTestDHT(ctx, bootstrapper) defer dht.Close() // force the connection to the DHT to start the orchestrator diff --git a/e2e/qgb_network.go b/e2e/qgb_network.go index 140b892d..6f97a3fe 100644 --- a/e2e/qgb_network.go +++ b/e2e/qgb_network.go @@ -19,14 +19,14 @@ import ( "google.golang.org/grpc" "google.golang.org/grpc/credentials/insecure" - blobstreamwrapper "github.com/celestiaorg/quantum-gravity-bridge/v2/wrappers/QuantumGravityBridge.sol" + bloblobstreamwrapper "github.com/celestiaorg/quantum-gravity-bridge/v2/wrappers/QuantumGravityBridge.sol" "github.com/celestiaorg/celestia-app/app" "github.com/celestiaorg/celestia-app/app/encoding" - "github.com/celestiaorg/celestia-app/x/blobstream/types" + "github.com/celestiaorg/celestia-app/x/bloblobstream/types" "github.com/celestiaorg/orchestrator-relayer/p2p" "github.com/celestiaorg/orchestrator-relayer/rpc" - blobstreamtypes "github.com/celestiaorg/orchestrator-relayer/types" + bloblobstreamtypes "github.com/celestiaorg/orchestrator-relayer/types" "github.com/ethereum/go-ethereum/accounts/abi/bind" ethcommon "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/ethclient" @@ -36,7 +36,7 @@ import ( testcontainers "github.com/testcontainers/testcontainers-go/modules/compose" ) -type BlobstreamNetwork struct { +type BloblobstreamNetwork struct { ComposePaths []string Identifier string Instance *testcontainers.LocalDockerCompose @@ -53,7 +53,7 @@ type BlobstreamNetwork struct { toStopChan chan<- struct{} } -func NewBlobstreamNetwork() (*BlobstreamNetwork, error) { +func NewBloblobstreamNetwork() (*BloblobstreamNetwork, error) { id := strings.ToLower(uuid.New().String()) paths := 
[]string{"./docker-compose.yml"} instance := testcontainers.NewLocalDockerCompose(paths, id) //nolint:staticcheck @@ -61,7 +61,7 @@ func NewBlobstreamNetwork() (*BlobstreamNetwork, error) { // given an initial capacity to avoid blocking in case multiple services failed // and wanted to notify the moderator. toStopChan := make(chan struct{}, 10) - network := &BlobstreamNetwork{ + network := &BloblobstreamNetwork{ Identifier: id, ComposePaths: paths, Instance: instance, @@ -96,7 +96,7 @@ func registerModerator(stopChan chan<- struct{}, toStopChan <-chan struct{}) { // it is not calling `DeleteAll()` here as it is being called inside the tests. No need to call it two times. // this comes from the fact that we're sticking with unit tests style tests to be able to run individual tests // https://github.com/celestiaorg/celestia-app/issues/428 -func registerGracefulExit(network *BlobstreamNetwork) { +func registerGracefulExit(network *BloblobstreamNetwork) { c := make(chan os.Signal, 1) signal.Notify(c, os.Interrupt) go func() { @@ -117,9 +117,9 @@ func forceExitIfNeeded(exitCode int) { } } -// StartAll starts the whole Blobstream cluster with multiple validators, orchestrators and a relayer +// StartAll starts the whole Bloblobstream cluster with multiple validators, orchestrators and a relayer // Make sure to release the resources after finishing by calling the `StopAll()` method. -func (network BlobstreamNetwork) StartAll() error { +func (network BloblobstreamNetwork) StartAll() error { // the reason for building before executing `up` is to avoid rebuilding all the images // if some container accidentally changed some files when running. // This to speed up a bit the execution. @@ -141,7 +141,7 @@ func (network BlobstreamNetwork) StartAll() error { // StopAll stops the network and leaves the containers created. This allows to resume // execution from the point where they stopped. -func (network BlobstreamNetwork) StopAll() error { +func (network BloblobstreamNetwork) StopAll() error { err := network.Instance. WithCommand([]string{"stop"}). Invoke() @@ -152,7 +152,7 @@ func (network BlobstreamNetwork) StopAll() error { } // DeleteAll deletes the containers, network and everything related to the cluster. -func (network BlobstreamNetwork) DeleteAll() error { +func (network BloblobstreamNetwork) DeleteAll() error { err := network.Instance. WithCommand([]string{"down"}). Invoke() @@ -163,7 +163,7 @@ func (network BlobstreamNetwork) DeleteAll() error { } // KillAll kills all the containers. -func (network BlobstreamNetwork) KillAll() error { +func (network BloblobstreamNetwork) KillAll() error { err := network.Instance. WithCommand([]string{"kill"}). Invoke() @@ -175,7 +175,7 @@ func (network BlobstreamNetwork) KillAll() error { // Start starts a service from the `Service` enum. Make sure to call `Stop`, in the // end, to release the resources. -func (network BlobstreamNetwork) Start(service Service) error { +func (network BloblobstreamNetwork) Start(service Service) error { serviceName, err := service.toString() if err != nil { return err @@ -196,10 +196,10 @@ func (network BlobstreamNetwork) Start(service Service) error { return nil } -// DeployBlobstreamContract uses the Deployer service to deploy a new Blobstream contract +// DeployBloblobstreamContract uses the Deployer service to deploy a new Bloblobstream contract // based on the existing running network. If no Celestia-app nor ganache is // started, it creates them automatically. 
-func (network BlobstreamNetwork) DeployBlobstreamContract() error { +func (network BloblobstreamNetwork) DeployBloblobstreamContract() error { fmt.Println("building images...") err := network.Instance. WithCommand([]string{"build", "--quiet", DEPLOYER}). @@ -218,7 +218,7 @@ func (network BlobstreamNetwork) DeployBlobstreamContract() error { // StartMultiple start multiple services. Make sure to call `Stop`, in the // end, to release the resources. -func (network BlobstreamNetwork) StartMultiple(services ...Service) error { +func (network BloblobstreamNetwork) StartMultiple(services ...Service) error { if len(services) == 0 { return fmt.Errorf("empty list of services provided") } @@ -246,7 +246,7 @@ func (network BlobstreamNetwork) StartMultiple(services ...Service) error { return nil } -func (network BlobstreamNetwork) Stop(service Service) error { +func (network BloblobstreamNetwork) Stop(service Service) error { serviceName, err := service.toString() if err != nil { return err @@ -262,7 +262,7 @@ func (network BlobstreamNetwork) Stop(service Service) error { // StopMultiple start multiple services. Make sure to call `Stop` or `StopMultiple`, in the // end, to release the resources. -func (network BlobstreamNetwork) StopMultiple(services ...Service) error { +func (network BloblobstreamNetwork) StopMultiple(services ...Service) error { if len(services) == 0 { return fmt.Errorf("empty list of services provided") } @@ -283,7 +283,7 @@ func (network BlobstreamNetwork) StopMultiple(services ...Service) error { return nil } -func (network BlobstreamNetwork) ExecCommand(service Service, command []string) error { +func (network BloblobstreamNetwork) ExecCommand(service Service, command []string) error { serviceName, err := service.toString() if err != nil { return err @@ -299,7 +299,7 @@ func (network BlobstreamNetwork) ExecCommand(service Service, command []string) // StartMinimal starts a network containing: 1 validator, 1 orchestrator, 1 relayer // and a ganache instance. -func (network BlobstreamNetwork) StartMinimal() error { +func (network BloblobstreamNetwork) StartMinimal() error { fmt.Println("building images...") err := network.Instance. WithCommand([]string{"build", "--quiet", "core0", "core0-orch", "relayer", "ganache"}). @@ -319,7 +319,7 @@ func (network BlobstreamNetwork) StartMinimal() error { // StartBase starts the very minimal component to have a network. // It consists of starting `core0` as it is the genesis validator, and the docker network // will be created along with it, allowing more containers to join it. -func (network BlobstreamNetwork) StartBase() error { +func (network BloblobstreamNetwork) StartBase() error { fmt.Println("building images...") err := network.Instance. WithCommand([]string{"build", "--quiet", "core0"}). 
@@ -336,7 +336,7 @@ func (network BlobstreamNetwork) StartBase() error { return nil } -func (network BlobstreamNetwork) WaitForNodeToStart(_ctx context.Context, rpcAddr string) error { +func (network BloblobstreamNetwork) WaitForNodeToStart(_ctx context.Context, rpcAddr string) error { ctx, cancel := context.WithTimeout(_ctx, 5*time.Minute) for { select { @@ -362,11 +362,11 @@ func (network BlobstreamNetwork) WaitForNodeToStart(_ctx context.Context, rpcAdd } } -func (network BlobstreamNetwork) WaitForBlock(_ctx context.Context, height int64) error { +func (network BloblobstreamNetwork) WaitForBlock(_ctx context.Context, height int64) error { return network.WaitForBlockWithCustomTimeout(_ctx, height, 5*time.Minute) } -func (network BlobstreamNetwork) WaitForBlockWithCustomTimeout( +func (network BloblobstreamNetwork) WaitForBlockWithCustomTimeout( _ctx context.Context, height int64, timeout time.Duration, @@ -415,7 +415,7 @@ func (network BlobstreamNetwork) WaitForBlockWithCustomTimeout( // and for any nonce, but would require adding a new method to the querier. Don't think it is worth it now as // the number of valsets that will be signed is trivial and reaching 0 would be in no time). // Returns the height and the nonce of some attestation that the orchestrator signed. -func (network BlobstreamNetwork) WaitForOrchestratorToStart(_ctx context.Context, dht *p2p.BlobstreamDHT, evmAddr string) (uint64, uint64, error) { +func (network BloblobstreamNetwork) WaitForOrchestratorToStart(_ctx context.Context, dht *p2p.BloblobstreamDHT, evmAddr string) (uint64, uint64, error) { // create p2p querier p2pQuerier := p2p.NewQuerier(dht, network.Logger) @@ -473,7 +473,7 @@ func (network BlobstreamNetwork) WaitForOrchestratorToStart(_ctx context.Context if err != nil { continue } - dataRootTupleRoot := blobstreamtypes.DataCommitmentTupleRootSignBytes(big.NewInt(int64(castedAtt.Nonce)), commitment) + dataRootTupleRoot := bloblobstreamtypes.DataCommitmentTupleRootSignBytes(big.NewInt(int64(castedAtt.Nonce)), commitment) dcConfirm, err := p2pQuerier.QueryDataCommitmentConfirmByEVMAddress(ctx, lastNonce-i, evmAddr, dataRootTupleRoot.Hex()) if err == nil && dcConfirm != nil { cancel() @@ -489,7 +489,7 @@ func (network BlobstreamNetwork) WaitForOrchestratorToStart(_ctx context.Context // GetValsetContainingVals Gets the last valset that contains a certain number of validator. // This is used after enabling orchestrators not to sign unless they belong to some valset. // Thus, any nonce after the returned valset should be signed by all orchestrators. -func (network BlobstreamNetwork) GetValsetContainingVals(_ctx context.Context, number int) (*types.Valset, error) { +func (network BloblobstreamNetwork) GetValsetContainingVals(_ctx context.Context, number int) (*types.Valset, error) { appQuerier := rpc.NewAppQuerier(network.Logger, network.CelestiaGRPC, network.EncCfg) err := appQuerier.Start() if err != nil { @@ -530,12 +530,12 @@ func (network BlobstreamNetwork) GetValsetContainingVals(_ctx context.Context, n // GetValsetConfirm Returns the valset confirm for nonce `nonce` // signed by orchestrator whose EVM address is `evmAddr`. 
-func (network BlobstreamNetwork) GetValsetConfirm( +func (network BloblobstreamNetwork) GetValsetConfirm( _ctx context.Context, - dht *p2p.BlobstreamDHT, + dht *p2p.BloblobstreamDHT, nonce uint64, evmAddr string, -) (*blobstreamtypes.ValsetConfirm, error) { +) (*bloblobstreamtypes.ValsetConfirm, error) { p2pQuerier := p2p.NewQuerier(dht, network.Logger) // create app querier appQuerier := rpc.NewAppQuerier(network.Logger, network.CelestiaGRPC, network.EncCfg) @@ -583,12 +583,12 @@ func (network BlobstreamNetwork) GetValsetConfirm( // GetDataCommitmentConfirm Returns the data commitment confirm for nonce `nonce` // signed by orchestrator whose EVM address is `evmAddr`. -func (network BlobstreamNetwork) GetDataCommitmentConfirm( +func (network BloblobstreamNetwork) GetDataCommitmentConfirm( _ctx context.Context, - dht *p2p.BlobstreamDHT, + dht *p2p.BloblobstreamDHT, nonce uint64, evmAddr string, -) (*blobstreamtypes.DataCommitmentConfirm, error) { +) (*bloblobstreamtypes.DataCommitmentConfirm, error) { // create p2p querier p2pQuerier := p2p.NewQuerier(dht, network.Logger) @@ -629,7 +629,7 @@ func (network BlobstreamNetwork) GetDataCommitmentConfirm( if err != nil { continue } - dataRootTupleRoot := blobstreamtypes.DataCommitmentTupleRootSignBytes(big.NewInt(int64(nonce)), commitment) + dataRootTupleRoot := bloblobstreamtypes.DataCommitmentTupleRootSignBytes(big.NewInt(int64(nonce)), commitment) resp, err := p2pQuerier.QueryDataCommitmentConfirmByEVMAddress(ctx, nonce, evmAddr, dataRootTupleRoot.Hex()) if err == nil && resp != nil { cancel() @@ -643,12 +643,12 @@ func (network BlobstreamNetwork) GetDataCommitmentConfirm( // GetDataCommitmentConfirmByHeight Returns the data commitment confirm that commits // to height `height` signed by orchestrator whose EVM address is `evmAddr`. -func (network BlobstreamNetwork) GetDataCommitmentConfirmByHeight( +func (network BloblobstreamNetwork) GetDataCommitmentConfirmByHeight( _ctx context.Context, - dht *p2p.BlobstreamDHT, + dht *p2p.BloblobstreamDHT, height uint64, evmAddr string, -) (*blobstreamtypes.DataCommitmentConfirm, error) { +) (*bloblobstreamtypes.DataCommitmentConfirm, error) { // create app querier appQuerier := rpc.NewAppQuerier(network.Logger, network.CelestiaGRPC, network.EncCfg) err := appQuerier.Start() @@ -669,7 +669,7 @@ func (network BlobstreamNetwork) GetDataCommitmentConfirmByHeight( } // GetLatestAttestationNonce Returns the latest attestation nonce. -func (network BlobstreamNetwork) GetLatestAttestationNonce(_ctx context.Context) (uint64, error) { +func (network BloblobstreamNetwork) GetLatestAttestationNonce(_ctx context.Context) (uint64, error) { // create app querier appQuerier := rpc.NewAppQuerier(network.Logger, network.CelestiaGRPC, network.EncCfg) err := appQuerier.Start() @@ -686,9 +686,9 @@ func (network BlobstreamNetwork) GetLatestAttestationNonce(_ctx context.Context) } // WasAttestationSigned Returns true if the attestation confirm exist. 
-func (network BlobstreamNetwork) WasAttestationSigned( +func (network BloblobstreamNetwork) WasAttestationSigned( _ctx context.Context, - dht *p2p.BlobstreamDHT, + dht *p2p.BloblobstreamDHT, nonce uint64, evmAddress string, ) (bool, error) { @@ -745,7 +745,7 @@ func (network BlobstreamNetwork) WasAttestationSigned( if err != nil { continue } - dataRootTupleRoot := blobstreamtypes.DataCommitmentTupleRootSignBytes(big.NewInt(int64(castedAtt.Nonce)), commitment) + dataRootTupleRoot := bloblobstreamtypes.DataCommitmentTupleRootSignBytes(big.NewInt(int64(castedAtt.Nonce)), commitment) resp, err := p2pQuerier.QueryDataCommitmentConfirmByEVMAddress( ctx, castedAtt.Nonce, @@ -763,14 +763,14 @@ func (network BlobstreamNetwork) WasAttestationSigned( } } -func (network BlobstreamNetwork) GetLatestDeployedBlobstreamContract(_ctx context.Context) (*blobstreamwrapper.Wrappers, error) { - return network.GetLatestDeployedBlobstreamContractWithCustomTimeout(_ctx, 5*time.Minute) +func (network BloblobstreamNetwork) GetLatestDeployedBloblobstreamContract(_ctx context.Context) (*bloblobstreamwrapper.Wrappers, error) { + return network.GetLatestDeployedBloblobstreamContractWithCustomTimeout(_ctx, 5*time.Minute) } -func (network BlobstreamNetwork) GetLatestDeployedBlobstreamContractWithCustomTimeout( +func (network BloblobstreamNetwork) GetLatestDeployedBloblobstreamContractWithCustomTimeout( _ctx context.Context, timeout time.Duration, -) (*blobstreamwrapper.Wrappers, error) { +) (*bloblobstreamwrapper.Wrappers, error) { client, err := ethclient.Dial(network.EVMRPC) if err != nil { return nil, err @@ -786,7 +786,7 @@ func (network BlobstreamNetwork) GetLatestDeployedBlobstreamContractWithCustomTi case <-ctx.Done(): cancel() if errors.Is(ctx.Err(), context.DeadlineExceeded) { - return nil, fmt.Errorf("timeout. couldn't find deployed blobstream contract") + return nil, fmt.Errorf("timeout. 
couldn't find deployed bloblobstream contract") } return nil, ctx.Err() default: @@ -820,8 +820,8 @@ func (network BlobstreamNetwork) GetLatestDeployedBlobstreamContractWithCustomTi if receipt.ContractAddress == (ethcommon.Address{}) { continue } - // If the bridge is loaded, then it's the latest-deployed proxy Blobstream contract - bridge, err := blobstreamwrapper.NewWrappers(receipt.ContractAddress, client) + // If the bridge is loaded, then it's the latest-deployed proxy Bloblobstream contract + bridge, err := bloblobstreamwrapper.NewWrappers(receipt.ContractAddress, client) if err != nil { continue } @@ -837,7 +837,7 @@ func (network BlobstreamNetwork) GetLatestDeployedBlobstreamContractWithCustomTi } } -func (network BlobstreamNetwork) WaitForRelayerToStart(_ctx context.Context, bridge *blobstreamwrapper.Wrappers) error { +func (network BloblobstreamNetwork) WaitForRelayerToStart(_ctx context.Context, bridge *bloblobstreamwrapper.Wrappers) error { ctx, cancel := context.WithTimeout(_ctx, 2*time.Minute) for { select { @@ -862,7 +862,7 @@ func (network BlobstreamNetwork) WaitForRelayerToStart(_ctx context.Context, bri } } -func (network BlobstreamNetwork) WaitForEventNonce(ctx context.Context, bridge *blobstreamwrapper.Wrappers, n uint64) error { +func (network BloblobstreamNetwork) WaitForEventNonce(ctx context.Context, bridge *bloblobstreamwrapper.Wrappers, n uint64) error { ctx, cancel := context.WithTimeout(ctx, 5*time.Minute) for { select { @@ -889,10 +889,10 @@ func (network BlobstreamNetwork) WaitForEventNonce(ctx context.Context, bridge * } } -func (network BlobstreamNetwork) UpdateDataCommitmentWindow(ctx context.Context, newWindow uint64) error { +func (network BloblobstreamNetwork) UpdateDataCommitmentWindow(ctx context.Context, newWindow uint64) error { fmt.Printf("updating data commitment window %d\n", newWindow) kr, err := keyring.New( - "blobstream-tests", + "bloblobstream-tests", "test", "celestia-app/core0", nil, @@ -983,13 +983,13 @@ func (network BlobstreamNetwork) UpdateDataCommitmentWindow(ctx context.Context, return nil } -func (network BlobstreamNetwork) PrintLogs() { +func (network BloblobstreamNetwork) PrintLogs() { _ = network.Instance. WithCommand([]string{"logs"}). 
Invoke() } -func (network BlobstreamNetwork) GetLatestValset(ctx context.Context) (*types.Valset, error) { +func (network BloblobstreamNetwork) GetLatestValset(ctx context.Context) (*types.Valset, error) { // create app querier appQuerier := rpc.NewAppQuerier(network.Logger, network.CelestiaGRPC, network.EncCfg) err := appQuerier.Start() @@ -1005,7 +1005,7 @@ func (network BlobstreamNetwork) GetLatestValset(ctx context.Context) (*types.Va return valset, nil } -func (network BlobstreamNetwork) GetCurrentDataCommitmentWindow(ctx context.Context) (uint64, error) { +func (network BloblobstreamNetwork) GetCurrentDataCommitmentWindow(ctx context.Context) (uint64, error) { var window uint64 queryFun := func() error { blobStreamGRPC, err := grpc.Dial("localhost:9090", grpc.WithTransportCredentials(insecure.NewCredentials())) diff --git a/e2e/relayer_test.go b/e2e/relayer_test.go index 408621b7..abe5a274 100644 --- a/e2e/relayer_test.go +++ b/e2e/relayer_test.go @@ -12,17 +12,17 @@ import ( "github.com/celestiaorg/orchestrator-relayer/evm" "github.com/celestiaorg/orchestrator-relayer/rpc" - blobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" + bloblobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" "github.com/ethereum/go-ethereum/accounts/abi/bind" "github.com/stretchr/testify/assert" ) func TestRelayerWithOneValidator(t *testing.T) { if os.Getenv("BLOBSTREAM_INTEGRATION_TEST") != TRUE { - t.Skip("Skipping Blobstream integration tests") + t.Skip("Skipping Bloblobstream integration tests") } - network, err := NewBlobstreamNetwork() + network, err := NewBloblobstreamNetwork() HandleNetworkError(t, network, err, false) // to release resources after tests @@ -40,7 +40,7 @@ func TestRelayerWithOneValidator(t *testing.T) { // create dht for querying bootstrapper, err := helpers.ParseAddrInfos(network.Logger, BOOTSTRAPPERS) HandleNetworkError(t, network, err, false) - host, _, dht := blobstreamtesting.NewTestDHT(ctx, bootstrapper) + host, _, dht := bloblobstreamtesting.NewTestDHT(ctx, bootstrapper) defer dht.Close() // force the connection to the DHT to start the orchestrator @@ -50,7 +50,7 @@ func TestRelayerWithOneValidator(t *testing.T) { _, _, err = network.WaitForOrchestratorToStart(ctx, dht, CORE0EVMADDRESS) HandleNetworkError(t, network, err, false) - bridge, err := network.GetLatestDeployedBlobstreamContract(ctx) + bridge, err := network.GetLatestDeployedBloblobstreamContract(ctx) HandleNetworkError(t, network, err, false) latestNonce, err := network.GetLatestAttestationNonce(ctx) @@ -70,10 +70,10 @@ func TestRelayerWithOneValidator(t *testing.T) { func TestRelayerWithTwoValidators(t *testing.T) { if os.Getenv("BLOBSTREAM_INTEGRATION_TEST") != TRUE { - t.Skip("Skipping Blobstream integration tests") + t.Skip("Skipping Bloblobstream integration tests") } - network, err := NewBlobstreamNetwork() + network, err := NewBloblobstreamNetwork() HandleNetworkError(t, network, err, false) // to release resources after tests @@ -100,7 +100,7 @@ func TestRelayerWithTwoValidators(t *testing.T) { // create dht for querying bootstrapper, err := helpers.ParseAddrInfos(network.Logger, BOOTSTRAPPERS) HandleNetworkError(t, network, err, false) - host, _, dht := blobstreamtesting.NewTestDHT(ctx, bootstrapper) + host, _, dht := bloblobstreamtesting.NewTestDHT(ctx, bootstrapper) defer dht.Close() // force the connection to the DHT to start the orchestrator @@ -116,7 +116,7 @@ func TestRelayerWithTwoValidators(t *testing.T) { // give the orchestrators some time to catchup 
time.Sleep(time.Second) - bridge, err := network.GetLatestDeployedBlobstreamContract(ctx) + bridge, err := network.GetLatestDeployedBloblobstreamContract(ctx) HandleNetworkError(t, network, err, false) err = network.WaitForRelayerToStart(ctx, bridge) @@ -136,10 +136,10 @@ func TestRelayerWithTwoValidators(t *testing.T) { func TestRelayerWithMultipleValidators(t *testing.T) { if os.Getenv("BLOBSTREAM_INTEGRATION_TEST") != TRUE { - t.Skip("Skipping Blobstream integration tests") + t.Skip("Skipping Bloblobstream integration tests") } - network, err := NewBlobstreamNetwork() + network, err := NewBloblobstreamNetwork() HandleNetworkError(t, network, err, false) // to release resources after tests @@ -158,7 +158,7 @@ func TestRelayerWithMultipleValidators(t *testing.T) { // create dht for querying bootstrapper, err := helpers.ParseAddrInfos(network.Logger, BOOTSTRAPPERS) HandleNetworkError(t, network, err, false) - host, _, dht := blobstreamtesting.NewTestDHT(ctx, bootstrapper) + host, _, dht := bloblobstreamtesting.NewTestDHT(ctx, bootstrapper) defer dht.Close() // force the connection to the DHT to start the orchestrator @@ -191,7 +191,7 @@ func TestRelayerWithMultipleValidators(t *testing.T) { assert.NoError(t, err) assert.Equal(t, 4, len(latestValset.Members)) - bridge, err := network.GetLatestDeployedBlobstreamContract(ctx) + bridge, err := network.GetLatestDeployedBloblobstreamContract(ctx) HandleNetworkError(t, network, err, false) err = network.WaitForRelayerToStart(ctx, bridge) @@ -208,10 +208,10 @@ func TestRelayerWithMultipleValidators(t *testing.T) { func TestUpdatingTheDataCommitmentWindow(t *testing.T) { if os.Getenv("BLOBSTREAM_INTEGRATION_TEST") != TRUE { - t.Skip("Skipping Blobstream integration tests") + t.Skip("Skipping Bloblobstream integration tests") } - network, err := NewBlobstreamNetwork() + network, err := NewBloblobstreamNetwork() HandleNetworkError(t, network, err, false) // to release resources after tests @@ -245,7 +245,7 @@ func TestUpdatingTheDataCommitmentWindow(t *testing.T) { // create dht for querying bootstrapper, err := helpers.ParseAddrInfos(network.Logger, BOOTSTRAPPERS) HandleNetworkError(t, network, err, false) - host, _, dht := blobstreamtesting.NewTestDHT(ctx, bootstrapper) + host, _, dht := bloblobstreamtesting.NewTestDHT(ctx, bootstrapper) defer dht.Close() // force the connection to the DHT to start the orchestrator @@ -278,7 +278,7 @@ func TestUpdatingTheDataCommitmentWindow(t *testing.T) { assert.NoError(t, err) assert.Equal(t, 4, len(latestValset.Members)) - bridge, err := network.GetLatestDeployedBlobstreamContract(ctx) + bridge, err := network.GetLatestDeployedBloblobstreamContract(ctx) HandleNetworkError(t, network, err, false) err = network.WaitForRelayerToStart(ctx, bridge) diff --git a/e2e/scripts/deploy_qgb_contract.sh b/e2e/scripts/deploy_qgb_contract.sh index 881e7376..f770fdce 100644 --- a/e2e/scripts/deploy_qgb_contract.sh +++ b/e2e/scripts/deploy_qgb_contract.sh @@ -1,11 +1,11 @@ #!/bin/bash -# This script deploys the Blobstream contract and outputs the address to stdout. +# This script deploys the Bloblobstream contract and outputs the address to stdout. # check whether to deploy a new contract or no need if [[ "${DEPLOY_NEW_CONTRACT}" != "true" ]] then - echo "no need to deploy a new Blobstream contract. exiting..." + echo "no need to deploy a new Bloblobstream contract. exiting..." 
exit 0 fi @@ -59,11 +59,11 @@ do done # import keys to deployer -/bin/bstream deploy keys evm import ecdsa "${PRIVATE_KEY}" --evm.passphrase=123 +/bin/blobstream deploy keys evm import ecdsa "${PRIVATE_KEY}" --evm.passphrase=123 -echo "deploying Blobstream contract..." +echo "deploying Bloblobstream contract..." -/bin/bstream deploy \ +/bin/blobstream deploy \ --evm.chain-id "${EVM_CHAIN_ID}" \ --evm.account "${EVM_ACCOUNT}" \ --core.grpc.host "${CORE_GRPC_HOST}" \ @@ -74,4 +74,4 @@ echo "deploying Blobstream contract..." echo $(cat /opt/output) -cat /opt/output | grep "deployed" | awk '{ print $5 }' | cut -f 2 -d = > /opt/blobstream_address.txt +cat /opt/output | grep "deployed" | awk '{ print $5 }' | cut -f 2 -d = > /opt/bloblobstream_address.txt diff --git a/e2e/scripts/start_core0.sh b/e2e/scripts/start_core0.sh index 5e688ec9..747be8b4 100644 --- a/e2e/scripts/start_core0.sh +++ b/e2e/scripts/start_core0.sh @@ -33,7 +33,7 @@ fi VAL_ADDRESS=$(celestia-appd keys show core0 --keyring-backend test --bech=val --home /opt -a) # Register the validator EVM address - celestia-appd tx blobstream register \ + celestia-appd tx bloblobstream register \ "${VAL_ADDRESS}" \ 0x966e6f22781EF6a6A82BBB4DB3df8E225DfD9488 \ --from core0 \ diff --git a/e2e/scripts/start_node_and_create_validator.sh b/e2e/scripts/start_node_and_create_validator.sh index 9533413f..71b81301 100644 --- a/e2e/scripts/start_node_and_create_validator.sh +++ b/e2e/scripts/start_node_and_create_validator.sh @@ -64,7 +64,7 @@ fi done # Register the validator EVM address - celestia-appd tx blobstream register \ + celestia-appd tx bloblobstream register \ "${VAL_ADDRESS}" \ "${EVM_ACCOUNT}" \ --from "${MONIKER}" \ diff --git a/e2e/scripts/start_orchestrator_after_validator_created.sh b/e2e/scripts/start_orchestrator_after_validator_created.sh index 9b840207..fbbf199c 100644 --- a/e2e/scripts/start_orchestrator_after_validator_created.sh +++ b/e2e/scripts/start_orchestrator_after_validator_created.sh @@ -36,18 +36,18 @@ do done # initialize orchestrator -/bin/bstream orch init +/bin/blobstream orch init # add keys to keystore -/bin/bstream orch keys evm import ecdsa "${PRIVATE_KEY}" --evm.passphrase 123 +/bin/blobstream orch keys evm import ecdsa "${PRIVATE_KEY}" --evm.passphrase 123 # start orchestrator if [[ -z "${P2P_BOOTSTRAPPERS}" ]] then # import the p2p key to use - /bin/bstream orchestrator keys p2p import key "${P2P_IDENTITY}" + /bin/blobstream orchestrator keys p2p import key "${P2P_IDENTITY}" - /bin/bstream orchestrator start \ + /bin/blobstream orchestrator start \ --evm.account="${EVM_ACCOUNT}" \ --core.rpc.host="${CORE_RPC_HOST}" \ --core.rpc.port="${CORE_RPC_PORT}" \ @@ -60,7 +60,7 @@ else # to give time for the bootstrappers to be up sleep 5s - /bin/bstream orchestrator start \ + /bin/blobstream orchestrator start \ --evm.account="${EVM_ACCOUNT}" \ --core.rpc.host="${CORE_RPC_HOST}" \ --core.rpc.port="${CORE_RPC_PORT}" \ diff --git a/e2e/scripts/start_relayer.sh b/e2e/scripts/start_relayer.sh index f03a7590..ce258262 100644 --- a/e2e/scripts/start_relayer.sh +++ b/e2e/scripts/start_relayer.sh @@ -1,6 +1,6 @@ #!/bin/bash -# This script runs the Blobstream relayer with the ability to deploy a new Blobstream contract or +# This script runs the Bloblobstream relayer with the ability to deploy a new Bloblobstream contract or # pass one as an environment variable BLOBSTREAM_CONTRACT # check if environment variables are set @@ -39,21 +39,21 @@ then export DEPLOY_NEW_CONTRACT=true export STARTING_NONCE=latest # expects the 
script to be mounted to this directory - /bin/bash /opt/deploy_blobstream_contract.sh + /bin/bash /opt/deploy_bloblobstream_contract.sh fi -# get the address from the `blobstream_address.txt` file -BLOBSTREAM_CONTRACT=$(cat /opt/blobstream_address.txt) +# get the address from the `bloblobstream_address.txt` file +BLOBSTREAM_CONTRACT=$(cat /opt/bloblobstream_address.txt) # init the relayer -/bin/bstream relayer init +/bin/blobstream relayer init # import keys to relayer -/bin/bstream relayer keys evm import ecdsa "${PRIVATE_KEY}" --evm.passphrase 123 +/bin/blobstream relayer keys evm import ecdsa "${PRIVATE_KEY}" --evm.passphrase 123 # to give time for the bootstrappers to be up sleep 5s -/bin/bstream relayer start \ +/bin/blobstream relayer start \ --evm.account="${EVM_ACCOUNT}" \ --core.rpc.host="${CORE_RPC_HOST}" \ --core.rpc.port="${CORE_RPC_PORT}" \ diff --git a/e2e/test_commons.go b/e2e/test_commons.go index 866279a7..12185f56 100644 --- a/e2e/test_commons.go +++ b/e2e/test_commons.go @@ -15,7 +15,7 @@ import ( const TRUE = "true" -func HandleNetworkError(t *testing.T, network *BlobstreamNetwork, err error, expectError bool) { +func HandleNetworkError(t *testing.T, network *BloblobstreamNetwork, err error, expectError bool) { if expectError && err == nil { network.PrintLogs() assert.Error(t, err) @@ -31,7 +31,7 @@ func HandleNetworkError(t *testing.T, network *BlobstreamNetwork, err error, exp } } -func ConnectToDHT(ctx context.Context, h host.Host, dht *p2p.BlobstreamDHT, target peer.AddrInfo) error { +func ConnectToDHT(ctx context.Context, h host.Host, dht *p2p.BloblobstreamDHT, target peer.AddrInfo) error { timeout := time.NewTimer(time.Minute) for { select { diff --git a/evm/ethereum_signature_test.go b/evm/ethereum_signature_test.go index 9ddfcde4..94d4d51c 100644 --- a/evm/ethereum_signature_test.go +++ b/evm/ethereum_signature_test.go @@ -14,7 +14,7 @@ import ( "github.com/stretchr/testify/require" ) -// The signatures in these tests are generated using the foundry setup in the blobstream-contracts repository. +// The signatures in these tests are generated using the foundry setup in the bloblobstream-contracts repository. func TestNewEthereumSignature(t *testing.T) { digest, err := hexutil.Decode("0x078c42ff72a01b355f9d76bfeecd2132a0d3f1aad9380870026c56e23e6d00e5") diff --git a/evm/evm_client.go b/evm/evm_client.go index 1bf41262..597a038a 100644 --- a/evm/evm_client.go +++ b/evm/evm_client.go @@ -15,7 +15,7 @@ import ( "github.com/celestiaorg/celestia-app/x/qgb/types" proxywrapper "github.com/celestiaorg/quantum-gravity-bridge/v2/wrappers/ERC1967Proxy.sol" - blobstreamwrapper "github.com/celestiaorg/quantum-gravity-bridge/v2/wrappers/QuantumGravityBridge.sol" + bloblobstreamwrapper "github.com/celestiaorg/quantum-gravity-bridge/v2/wrappers/QuantumGravityBridge.sol" "github.com/ethereum/go-ethereum/accounts/abi/bind" ) @@ -24,19 +24,19 @@ const DefaultEVMGasLimit = uint64(2500000) type Client struct { logger tmlog.Logger - Wrapper *blobstreamwrapper.Wrappers + Wrapper *bloblobstreamwrapper.Wrappers Ks *keystore.KeyStore Acc *accounts.Account EvmRPC string GasLimit uint64 } -// NewClient Creates a new EVM Client that can be used to deploy the Blobstream contract and +// NewClient Creates a new EVM Client that can be used to deploy the Bloblobstream contract and // interact with it. // The wrapper parameter can be nil when creating the client for contract deployment. 
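// Illustrative usage (a sketch, not code from this repository):
//
//	// interaction-only client, as constructed in e2e/deployer_test.go:
//	readClient := evm.NewClient(nil, bridge, nil, nil, evmRPC, evm.DefaultEVMGasLimit)
//
//	// deployment client: the wrapper is nil and a keystore plus unlocked account are provided;
//	// the RPC URL below is an assumption, point it at your EVM node.
//	deployClient := evm.NewClient(logger, nil, ks, &acc, "http://localhost:8545", evm.DefaultEVMGasLimit)
//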
func NewClient( logger tmlog.Logger, - wrapper *blobstreamwrapper.Wrappers, + wrapper *bloblobstreamwrapper.Wrappers, ks *keystore.KeyStore, acc *accounts.Account, evmRPC string, @@ -62,21 +62,21 @@ func (ec *Client) NewEthClient() (*ethclient.Client, error) { return ethClient, nil } -// DeployBlobstreamContract Deploys the Blobstream contract and initializes it with the provided valset. +// DeployBloblobstreamContract Deploys the Bloblobstream contract and initializes it with the provided valset. // The waitToBeMined, when set to true, will wait for the transaction to be included in a block, // and log relevant information. // The initBridge, when set to true, will assign the newly deployed bridge to the wrapper. This // can be used later for further interactions with the new contract. -// Multiple calls to DeployBlobstreamContract with the initBridge flag set to true will overwrite everytime +// Multiple calls to DeployBloblobstreamContract with the initBridge flag set to true will overwrite everytime // the bridge contract. -func (ec *Client) DeployBlobstreamContract( +func (ec *Client) DeployBloblobstreamContract( opts *bind.TransactOpts, contractBackend bind.ContractBackend, contractInitValset types.Valset, contractInitNonce uint64, initBridge bool, -) (gethcommon.Address, *coregethtypes.Transaction, *blobstreamwrapper.Wrappers, error) { - // deploy the Blobstream implementation contract +) (gethcommon.Address, *coregethtypes.Transaction, *bloblobstreamwrapper.Wrappers, error) { + // deploy the Bloblobstream implementation contract impAddr, impTx, _, err := ec.DeployImplementation(opts, contractBackend) if err != nil { return gethcommon.Address{}, nil, nil, err @@ -84,12 +84,12 @@ func (ec *Client) DeployBlobstreamContract( ec.logger.Info("deploying QGB implementation contract...", "address", impAddr.Hex(), "tx_hash", impTx.Hash().Hex()) - // encode the Blobstream contract initialization data using the chain parameters + // encode the Bloblobstream contract initialization data using the chain parameters ethVsHash, err := contractInitValset.Hash() if err != nil { return gethcommon.Address{}, nil, nil, err } - blobStreamABI, err := blobstreamwrapper.WrappersMetaData.GetAbi() + blobStreamABI, err := bloblobstreamwrapper.WrappersMetaData.GetAbi() if err != nil { return gethcommon.Address{}, nil, nil, err } @@ -103,7 +103,7 @@ func (ec *Client) DeployBlobstreamContract( opts.Nonce.Add(opts.Nonce, big.NewInt(1)) } - // deploy the ERC1967 proxy, link it to the Blobstream implementation contract, and initialize it + // deploy the ERC1967 proxy, link it to the Bloblobstream implementation contract, and initialize it proxyAddr, tx, _, err := ec.DeployERC1867Proxy(opts, contractBackend, impAddr, initData) if err != nil { return gethcommon.Address{}, nil, nil, err @@ -111,7 +111,7 @@ func (ec *Client) DeployBlobstreamContract( ec.logger.Info("deploying QGB proxy contract...", "address", proxyAddr, "tx_hash", tx.Hash().Hex()) - bridge, err := blobstreamwrapper.NewWrappers(proxyAddr, contractBackend) + bridge, err := bloblobstreamwrapper.NewWrappers(proxyAddr, contractBackend) if err != nil { return gethcommon.Address{}, nil, nil, err } @@ -128,7 +128,7 @@ func (ec *Client) UpdateValidatorSet( opts *bind.TransactOpts, newNonce, newThreshHold uint64, currentValset, newValset types.Valset, - sigs []blobstreamwrapper.Signature, + sigs []bloblobstreamwrapper.Signature, ) (*coregethtypes.Transaction, error) { // TODO in addition to the nonce, log more interesting information ec.logger.Info("relaying 
valset", "nonce", newNonce) @@ -171,7 +171,7 @@ func (ec *Client) SubmitDataRootTupleRoot( tupleRoot gethcommon.Hash, newNonce uint64, currentValset types.Valset, - sigs []blobstreamwrapper.Signature, + sigs []blobstreamwrapper.Signature, ) (*coregethtypes.Transaction, error) { ethVals, err := ethValset(currentValset) if err != nil { @@ -236,10 +236,10 @@ func (ec *Client) WaitForTransaction( func (ec *Client) DeployImplementation(opts *bind.TransactOpts, backend bind.ContractBackend) ( gethcommon.Address, *coregethtypes.Transaction, - *blobstreamwrapper.Wrappers, + *blobstreamwrapper.Wrappers, error, ) { - return blobstreamwrapper.DeployWrappers( + return blobstreamwrapper.DeployWrappers( opts, backend, ) @@ -259,14 +259,14 @@ func (ec *Client) DeployERC1867Proxy( ) } -func ethValset(valset types.Valset) ([]blobstreamwrapper.Validator, error) { - ethVals := make([]blobstreamwrapper.Validator, len(valset.Members)) +func ethValset(valset types.Valset) ([]blobstreamwrapper.Validator, error) { + ethVals := make([]blobstreamwrapper.Validator, len(valset.Members)) for i, v := range valset.Members { if ok := gethcommon.IsHexAddress(v.EvmAddress); !ok { return nil, errors.New("invalid ethereum address found in validator set") } addr := gethcommon.HexToAddress(v.EvmAddress) - ethVals[i] = blobstreamwrapper.Validator{ + ethVals[i] = blobstreamwrapper.Validator{ Addr: addr, Power: big.NewInt(int64(v.Power)), } diff --git a/evm/evm_client_test.go b/evm/evm_client_test.go index 854bb054..89093335 100644 --- a/evm/evm_client_test.go +++ b/evm/evm_client_test.go @@ -15,7 +15,7 @@ import ( func (s *EVMTestSuite) TestSubmitDataCommitment() { // deploy a new bridge contract - _, _, _, err := s.Client.DeployBlobstreamContract(s.Chain.Auth, s.Chain.Backend, *s.InitVs, 1, true) + _, _, _, err := s.Client.DeployBlobstreamContract(s.Chain.Auth, s.Chain.Backend, *s.InitVs, 1, true) s.NoError(err) // we just need something to sign over, it doesn't matter what @@ -72,7 +72,7 @@ func (s *EVMTestSuite) TestSubmitDataCommitment() { func (s *EVMTestSuite) TestUpdateValset() { // deploy a new bridge contract - _, _, _, err := s.Client.DeployBlobstreamContract(s.Chain.Auth, s.Chain.Backend, *s.InitVs, 1, true) + _, _, _, err := s.Client.DeployBlobstreamContract(s.Chain.Auth, s.Chain.Backend, *s.InitVs, 1, true) s.NoError(err) updatedValset := celestiatypes.Valset{ diff --git a/evm/suite_test.go b/evm/suite_test.go index 411e4d9c..ac07c15b 100644 --- a/evm/suite_test.go +++ b/evm/suite_test.go @@ -9,7 +9,7 @@ import ( celestiatypes "github.com/celestiaorg/celestia-app/x/qgb/types" "github.com/celestiaorg/orchestrator-relayer/evm" - blobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" + blobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" ethcmn "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/crypto" "github.com/stretchr/testify/require" @@ -18,7 +18,7 @@ import ( type EVMTestSuite struct { suite.Suite - Chain *blobstreamtesting.EVMChain + Chain *blobstreamtesting.EVMChain Client *evm.Client InitVs *celestiatypes.Valset VsPrivateKey *ecdsa.PrivateKey @@ -29,7 +29,7 @@ func (s *EVMTestSuite) SetupTest() { testPrivateKey, err := crypto.HexToECDSA("64a1d6f0e760a8d62b4afdde4096f16f51b401eaaecc915740f71770ea76a8ad") s.VsPrivateKey = testPrivateKey require.NoError(t, err) - s.Chain = blobstreamtesting.NewEVMChain(testPrivateKey) + s.Chain = blobstreamtesting.NewEVMChain(testPrivateKey) ks := keystore.NewKeyStore(t.TempDir(), 
keystore.LightScryptN, keystore.LightScryptP) acc, err := ks.ImportECDSA(testPrivateKey, "123") @@ -37,7 +37,7 @@ func (s *EVMTestSuite) SetupTest() { err = ks.Unlock(acc, "123") require.NoError(t, err) - s.Client = blobstreamtesting.NewEVMClient(ks, &acc) + s.Client = blobstreamtesting.NewEVMClient(ks, &acc) s.InitVs, err = celestiatypes.NewValset( 1, 10, diff --git a/go.sum b/go.sum index ec883f78..33be1838 100644 --- a/go.sum +++ b/go.sum @@ -1187,7 +1187,7 @@ github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 h1:Z9n2FFNU github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51/go.mod h1:CzGEWj7cYgsdH8dAjBGEr58BoE7ScuLd+fwFZ44+/x8= github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= -github.com/kkdai/bstream v0.0.0-20161212061736-f391b8402d23/go.mod h1:J+Gs4SYgM6CZQHDETBtE9HaSEkGmuNXF86RwHhHUvq4= +github.com/kkdai/bstream v0.0.0-20161212061736-f391b8402d23/go.mod h1:J+Gs4SYgM6CZQHDETBtE9HaSEkGmuNXF86RwHhHUvq4= github.com/klauspost/compress v1.4.0/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A= github.com/klauspost/compress v1.8.2/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A= github.com/klauspost/compress v1.9.7/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A= diff --git a/orchestrator/broadcaster.go b/orchestrator/broadcaster.go index f6d874c1..a211584f 100644 --- a/orchestrator/broadcaster.go +++ b/orchestrator/broadcaster.go @@ -9,23 +9,23 @@ import ( ) type Broadcaster struct { - BlobstreamDHT *p2p.BlobstreamDHT + BlobstreamDHT *p2p.BlobstreamDHT } -func NewBroadcaster(blobStreamDHT *p2p.BlobstreamDHT) *Broadcaster { - return &Broadcaster{BlobstreamDHT: blobStreamDHT} +func NewBroadcaster(blobStreamDHT *p2p.BlobstreamDHT) *Broadcaster { + return &Broadcaster{BlobstreamDHT: blobStreamDHT} } func (b Broadcaster) ProvideDataCommitmentConfirm(ctx context.Context, nonce uint64, confirm types.DataCommitmentConfirm, dataRootTupleRoot string) error { - if len(b.BlobstreamDHT.RoutingTable().ListPeers()) == 0 { + if len(b.BlobstreamDHT.RoutingTable().ListPeers()) == 0 { return ErrEmptyPeersTable } - return b.BlobstreamDHT.PutDataCommitmentConfirm(ctx, p2p.GetDataCommitmentConfirmKey(nonce, confirm.EthAddress, dataRootTupleRoot), confirm) + return b.BlobstreamDHT.PutDataCommitmentConfirm(ctx, p2p.GetDataCommitmentConfirmKey(nonce, confirm.EthAddress, dataRootTupleRoot), confirm) } func (b Broadcaster) ProvideValsetConfirm(ctx context.Context, nonce uint64, confirm types.ValsetConfirm, signBytes string) error { - if len(b.BlobstreamDHT.RoutingTable().ListPeers()) == 0 { + if len(b.BlobstreamDHT.RoutingTable().ListPeers()) == 0 { return ErrEmptyPeersTable } - return b.BlobstreamDHT.PutValsetConfirm(ctx, p2p.GetValsetConfirmKey(nonce, confirm.EthAddress, signBytes), confirm) + return b.BlobstreamDHT.PutValsetConfirm(ctx, p2p.GetValsetConfirmKey(nonce, confirm.EthAddress, signBytes), confirm) } diff --git a/orchestrator/broadcaster_test.go b/orchestrator/broadcaster_test.go index 519cd694..f1952647 100644 --- a/orchestrator/broadcaster_test.go +++ b/orchestrator/broadcaster_test.go @@ -17,7 +17,7 @@ import ( "github.com/celestiaorg/orchestrator-relayer/orchestrator" "github.com/celestiaorg/orchestrator-relayer/p2p" - blobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" + blobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" 
"github.com/celestiaorg/orchestrator-relayer/types" "github.com/stretchr/testify/assert" ) @@ -28,7 +28,7 @@ var ( ) func TestBroadcastDataCommitmentConfirm(t *testing.T) { - network := blobstreamtesting.NewDHTNetwork(context.Background(), 4) + network := blobstreamtesting.NewDHTNetwork(context.Background(), 4) defer network.Stop() ks := keystore.NewKeyStore(t.TempDir(), keystore.LightScryptN, keystore.LightScryptP) @@ -64,7 +64,7 @@ func TestBroadcastDataCommitmentConfirm(t *testing.T) { } func TestBroadcastValsetConfirm(t *testing.T) { - network := blobstreamtesting.NewDHTNetwork(context.Background(), 4) + network := blobstreamtesting.NewDHTNetwork(context.Background(), 4) defer network.Stop() ks := keystore.NewKeyStore(t.TempDir(), keystore.LightScryptN, keystore.LightScryptP) @@ -100,8 +100,8 @@ func TestBroadcastValsetConfirm(t *testing.T) { // TestEmptyPeersTable tests that values are not broadcasted if the DHT peers // table is empty. func TestEmptyPeersTable(t *testing.T) { - _, _, dht := blobstreamtesting.NewTestDHT(context.Background(), nil) - defer func(dht *p2p.BlobstreamDHT) { + _, _, dht := blobstreamtesting.NewTestDHT(context.Background(), nil) + defer func(dht *p2p.BlobstreamDHT) { err := dht.Close() if err != nil { require.NoError(t, err) diff --git a/orchestrator/orchestrator.go b/orchestrator/orchestrator.go index dfd1164c..b3ea0f34 100644 --- a/orchestrator/orchestrator.go +++ b/orchestrator/orchestrator.go @@ -306,7 +306,7 @@ func (orch Orchestrator) Process(ctx context.Context, nonce uint64) error { // if nonce == 1, then, the current valset should sign the confirm. // In fact, the first nonce should never be signed. Because, the first attestation, in the case // where the `earliest` flag is specified when deploying the contract, will be relayed as part of - // the deployment of the Blobstream contract. + // the deployment of the Blobstream contract. // It will be signed temporarily for now. 
previousValset, err = orch.AppQuerier.QueryValsetByNonce(ctx, att.GetNonce()) if err != nil { diff --git a/orchestrator/suite_test.go b/orchestrator/suite_test.go index 01f0cbd0..54a36d4b 100644 --- a/orchestrator/suite_test.go +++ b/orchestrator/suite_test.go @@ -9,13 +9,13 @@ import ( "github.com/celestiaorg/celestia-app/test/util/testnode" "github.com/celestiaorg/celestia-app/x/qgb/types" "github.com/celestiaorg/orchestrator-relayer/orchestrator" - blobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" + blobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" "github.com/stretchr/testify/suite" ) type OrchestratorTestSuite struct { suite.Suite - Node *blobstreamtesting.TestNode + Node *blobstreamtesting.TestNode Orchestrator *orchestrator.Orchestrator } @@ -23,14 +23,14 @@ func (s *OrchestratorTestSuite) SetupSuite() { t := s.T() ctx := context.Background() codec := encoding.MakeConfig(app.ModuleEncodingRegisters...).Codec - s.Node = blobstreamtesting.NewTestNode( + s.Node = blobstreamtesting.NewTestNode( ctx, t, testnode.ImmediateProposals(codec), - blobstreamtesting.SetDataCommitmentWindowParams(codec, types.Params{DataCommitmentWindow: 101}), - // blobstreamtesting.SetVotingParams(codec, v1beta1.VotingParams{VotingPeriod: 100 * time.Hour}), + blobstreamtesting.SetDataCommitmentWindowParams(codec, types.Params{DataCommitmentWindow: 101}), + // blobstreamtesting.SetVotingParams(codec, v1beta1.VotingParams{VotingPeriod: 100 * time.Hour}), ) - s.Orchestrator = blobstreamtesting.NewOrchestrator(t, s.Node) + s.Orchestrator = blobstreamtesting.NewOrchestrator(t, s.Node) } func (s *OrchestratorTestSuite) TearDownSuite() { diff --git a/p2p/dht.go b/p2p/dht.go index b9e4aeea..4b34b4b4 100644 --- a/p2p/dht.go +++ b/p2p/dht.go @@ -14,21 +14,21 @@ import ( ) const ( - ProtocolPrefix = "/blobstream/0.1.0" // TODO "/blobstream/" ? + ProtocolPrefix = "/blobstream/0.1.0" // TODO "/blobstream/" ? DataCommitmentConfirmNamespace = "dcc" ValsetConfirmNamespace = "vc" ) -// BlobstreamDHT wrapper around the `IpfsDHT` implementation. +// BlobstreamDHT wrapper around the `IpfsDHT` implementation. // Used to add helper methods to easily handle the DHT. -type BlobstreamDHT struct { +type BlobstreamDHT struct { *dht.IpfsDHT logger tmlog.Logger } -// NewBlobstreamDHT create a new IPFS DHT using a suitable configuration for the Blobstream. +// NewBlobstreamDHT create a new IPFS DHT using a suitable configuration for the Blobstream. // If nil is passed for bootstrappers, the DHT will not try to connect to any existing peer. -func NewBlobstreamDHT(ctx context.Context, h host.Host, store ds.Batching, bootstrappers []peer.AddrInfo, logger tmlog.Logger) (*BlobstreamDHT, error) { +func NewBlobstreamDHT(ctx context.Context, h host.Host, store ds.Batching, bootstrappers []peer.AddrInfo, logger tmlog.Logger) (*BlobstreamDHT, error) { // this value is set to 23 days, which is the unbonding period. // we want to have the signatures available for this whole period. providers.ProvideValidity = time.Hour * 24 * 23 @@ -48,7 +48,7 @@ func NewBlobstreamDHT(ctx context.Context, h host.Host, store ds.Batching, boots return nil, err } - return &BlobstreamDHT{ + return &BlobstreamDHT{ IpfsDHT: router, logger: logger, }, nil @@ -57,7 +57,7 @@ func NewBlobstreamDHT(ctx context.Context, h host.Host, store ds.Batching, boots // WaitForPeers waits for peers to be connected to the DHT. 
// Returns nil if the context is done or the peers list has more peers than the specified peersThreshold. // Returns error if it times out. -func (q BlobstreamDHT) WaitForPeers(ctx context.Context, timeout time.Duration, rate time.Duration, peersThreshold int) error { +func (q BlobstreamDHT) WaitForPeers(ctx context.Context, timeout time.Duration, rate time.Duration, peersThreshold int) error { if peersThreshold < 1 { return ErrPeersThresholdCannotBeNegative } @@ -101,7 +101,7 @@ func (q BlobstreamDHT) WaitForPeers(ctx context.Context, timeout time.Duration, // PutDataCommitmentConfirm encodes a data commitment confirm then puts its value to the DHT. // The key can be generated using the `GetDataCommitmentConfirmKey` method. // Returns an error if it fails to do so. -func (q BlobstreamDHT) PutDataCommitmentConfirm(ctx context.Context, key string, dcc types.DataCommitmentConfirm) error { +func (q BlobstreamDHT) PutDataCommitmentConfirm(ctx context.Context, key string, dcc types.DataCommitmentConfirm) error { encodedData, err := types.MarshalDataCommitmentConfirm(dcc) if err != nil { return err @@ -116,7 +116,7 @@ func (q BlobstreamDHT) PutDataCommitmentConfirm(ctx context.Context, key string, // GetDataCommitmentConfirm looks for a data commitment confirm referenced by its key in the DHT. // The key can be generated using the `GetDataCommitmentConfirmKey` method. // Returns an error if it fails to get the confirm. -func (q BlobstreamDHT) GetDataCommitmentConfirm(ctx context.Context, key string) (types.DataCommitmentConfirm, error) { +func (q BlobstreamDHT) GetDataCommitmentConfirm(ctx context.Context, key string) (types.DataCommitmentConfirm, error) { encodedConfirm, err := q.GetValue(ctx, key) // this is a blocking call, we should probably use timeout and channel if err != nil { return types.DataCommitmentConfirm{}, err @@ -131,7 +131,7 @@ func (q BlobstreamDHT) GetDataCommitmentConfirm(ctx context.Context, key string) // PutValsetConfirm encodes a valset confirm then puts its value to the DHT. // The key can be generated using the `GetValsetConfirmKey` method. // Returns an error if it fails to do so. -func (q BlobstreamDHT) PutValsetConfirm(ctx context.Context, key string, vc types.ValsetConfirm) error { +func (q BlobstreamDHT) PutValsetConfirm(ctx context.Context, key string, vc types.ValsetConfirm) error { encodedData, err := types.MarshalValsetConfirm(vc) if err != nil { return err @@ -146,7 +146,7 @@ func (q BlobstreamDHT) PutValsetConfirm(ctx context.Context, key string, vc type // GetValsetConfirm looks for a valset confirm referenced by its key in the DHT. // The key can be generated using the `GetValsetConfirmKey` method. // Returns an error if it fails to get the confirm. 
-func (q BlobstreamDHT) GetValsetConfirm(ctx context.Context, key string) (types.ValsetConfirm, error) { +func (q BlobstreamDHT) GetValsetConfirm(ctx context.Context, key string) (types.ValsetConfirm, error) { encodedConfirm, err := q.GetValue(ctx, key) // this is a blocking call, we should probably use timeout and channel if err != nil { return types.ValsetConfirm{}, err diff --git a/p2p/dht_test.go b/p2p/dht_test.go index 9ebfcecc..93f02baf 100644 --- a/p2p/dht_test.go +++ b/p2p/dht_test.go @@ -15,7 +15,7 @@ import ( ethcrypto "github.com/ethereum/go-ethereum/crypto" "github.com/celestiaorg/orchestrator-relayer/p2p" - blobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" + blobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" "github.com/celestiaorg/orchestrator-relayer/types" "github.com/libp2p/go-libp2p/core/peer" "github.com/stretchr/testify/assert" @@ -30,11 +30,11 @@ var ( ) func TestDHTBootstrappers(t *testing.T) { ctx := context.Background() // create first dht - h1, _, dht1 := blobstreamtesting.NewTestDHT(ctx, nil) + h1, _, dht1 := blobstreamtesting.NewTestDHT(ctx, nil) defer dht1.Close() // create second dht with dht1 being a bootstrapper - h2, _, dht2 := blobstreamtesting.NewTestDHT( + h2, _, dht2 := blobstreamtesting.NewTestDHT( ctx, []peer.AddrInfo{{ ID: h1.ID(), @@ -58,7 +58,7 @@ func TestDHTBootstrappers(t *testing.T) { } func TestPutDataCommitmentConfirm(t *testing.T) { - network := blobstreamtesting.NewDHTNetwork(context.Background(), 2) + network := blobstreamtesting.NewDHTNetwork(context.Background(), 2) defer network.Stop() ks := keystore.NewKeyStore(t.TempDir(), keystore.LightScryptN, keystore.LightScryptP) @@ -96,7 +96,7 @@ func TestPutDataCommitmentConfirm(t *testing.T) { } func TestNetworkPutDataCommitmentConfirm(t *testing.T) { - network := blobstreamtesting.NewDHTNetwork(context.Background(), 10) + network := blobstreamtesting.NewDHTNetwork(context.Background(), 10) defer network.Stop() ks := keystore.NewKeyStore(t.TempDir(), keystore.LightScryptN, keystore.LightScryptP) @@ -134,7 +134,7 @@ func TestNetworkPutDataCommitmentConfirm(t *testing.T) { } func TestNetworkGetNonExistentDataCommitmentConfirm(t *testing.T) { - network := blobstreamtesting.NewDHTNetwork(context.Background(), 10) + network := blobstreamtesting.NewDHTNetwork(context.Background(), 10) defer network.Stop() // generate a test key for the DataCommitmentConfirm @@ -147,7 +147,7 @@ func TestNetworkGetNonExistentDataCommitmentConfirm(t *testing.T) { } func TestPutValsetConfirm(t *testing.T) { - network := blobstreamtesting.NewDHTNetwork(context.Background(), 2) + network := blobstreamtesting.NewDHTNetwork(context.Background(), 2) defer network.Stop() ks := keystore.NewKeyStore(t.TempDir(), keystore.LightScryptN, keystore.LightScryptP) @@ -182,7 +182,7 @@ func TestPutValsetConfirm(t *testing.T) { } func TestNetworkPutValsetConfirm(t *testing.T) { - network := blobstreamtesting.NewDHTNetwork(context.Background(), 10) + network := blobstreamtesting.NewDHTNetwork(context.Background(), 10) defer network.Stop() ks := keystore.NewKeyStore(t.TempDir(), keystore.LightScryptN, keystore.LightScryptP) @@ -217,7 +217,7 @@ func TestNetworkPutValsetConfirm(t *testing.T) { } func TestNetworkGetNonExistentValsetConfirm(t *testing.T) { - network := blobstreamtesting.NewDHTNetwork(context.Background(), 10) + network := blobstreamtesting.NewDHTNetwork(context.Background(), 10) defer network.Stop() // generate a test key for the ValsetConfirm @@ -232,7 
+232,7 @@ func TestNetworkGetNonExistentValsetConfirm(t *testing.T) { func TestWaitForPeers(t *testing.T) { ctx := context.Background() // create first dht - h1, _, dht1 := blobstreamtesting.NewTestDHT(ctx, nil) + h1, _, dht1 := blobstreamtesting.NewTestDHT(ctx, nil) defer dht1.Close() // wait for peers @@ -241,7 +241,7 @@ func TestWaitForPeers(t *testing.T) { assert.Error(t, err) // create second dht - h2, _, dht2 := blobstreamtesting.NewTestDHT(ctx, nil) + h2, _, dht2 := blobstreamtesting.NewTestDHT(ctx, nil) defer dht2.Close() // connect to first dht err = h2.Connect(ctx, peer.AddrInfo{ diff --git a/p2p/keys.go b/p2p/keys.go index 6b7132c2..829f1e09 100644 --- a/p2p/keys.go +++ b/p2p/keys.go @@ -11,7 +11,7 @@ import ( // - nonce: in hex format // - evm address: the 0x prefixed orchestrator EVM address in hex format // - data root tuple root: is the digest, in a 0x prefixed hex format, that is signed over for a -// data commitment and whose signature is relayed to the Blobstream smart contract. +// data commitment and whose signature is relayed to the Blobstream smart contract. // Expects the EVM address to be a correct address. func GetDataCommitmentConfirmKey(nonce uint64, evmAddr string, dataRootTupleRoot string) string { return "/" + DataCommitmentConfirmNamespace + "/" + @@ -24,7 +24,7 @@ func GetDataCommitmentConfirmKey(nonce uint64, evmAddr string, dataRootTupleRoot // - nonce: in hex format // - evm address: the orchestrator EVM address in hex format // - sign bytes: is the digest, in a 0x prefixed hex format, that is signed over for a valset and -// whose signature is relayed to the Blobstream smart contract. +// whose signature is relayed to the Blobstream smart contract. // Expects the EVM address to be a correct address. func GetValsetConfirmKey(nonce uint64, evmAddr string, signBytes string) string { return "/" + ValsetConfirmNamespace + "/" + diff --git a/p2p/querier.go b/p2p/querier.go index 3be409ee..d2947f00 100644 --- a/p2p/querier.go +++ b/p2p/querier.go @@ -16,13 +16,13 @@ import ( // Querier used to query the DHT for confirms. type Querier struct { - BlobstreamDHT *BlobstreamDHT + BlobstreamDHT *BlobstreamDHT logger tmlog.Logger } -func NewQuerier(blobStreamDht *BlobstreamDHT, logger tmlog.Logger) *Querier { +func NewQuerier(blobStreamDht *BlobstreamDHT, logger tmlog.Logger) *Querier { return &Querier{ - BlobstreamDHT: blobStreamDht, + BlobstreamDHT: blobStreamDht, logger: logger, } } @@ -246,7 +246,7 @@ func (q Querier) QueryValsetConfirmByEVMAddress( address string, signBytes string, ) (*types.ValsetConfirm, error) { - confirm, err := q.BlobstreamDHT.GetValsetConfirm( + confirm, err := q.BlobstreamDHT.GetValsetConfirm( ctx, GetValsetConfirmKey(nonce, address, signBytes), ) @@ -264,7 +264,7 @@ func (q Querier) QueryValsetConfirmByEVMAddress( // and signed by the orchestrator whose EVM address is `address`. 
// Returns (nil, nil) if the confirm is not found func (q Querier) QueryDataCommitmentConfirmByEVMAddress(ctx context.Context, nonce uint64, address string, dataRootTupleRoot string) (*types.DataCommitmentConfirm, error) { - confirm, err := q.BlobstreamDHT.GetDataCommitmentConfirm( + confirm, err := q.BlobstreamDHT.GetDataCommitmentConfirm( ctx, GetDataCommitmentConfirmKey(nonce, address, dataRootTupleRoot), ) @@ -283,7 +283,7 @@ func (q Querier) QueryDataCommitmentConfirmByEVMAddress(ctx context.Context, non func (q Querier) QueryDataCommitmentConfirms(ctx context.Context, valset celestiatypes.Valset, nonce uint64, dataRootTupleRoot string) ([]types.DataCommitmentConfirm, error) { confirms := make([]types.DataCommitmentConfirm, 0) for _, member := range valset.Members { - confirm, err := q.BlobstreamDHT.GetDataCommitmentConfirm( + confirm, err := q.BlobstreamDHT.GetDataCommitmentConfirm( ctx, GetDataCommitmentConfirmKey(nonce, member.EvmAddress, dataRootTupleRoot), ) @@ -304,7 +304,7 @@ func (q Querier) QueryDataCommitmentConfirms(ctx context.Context, valset celesti func (q Querier) QueryValsetConfirms(ctx context.Context, nonce uint64, valset celestiatypes.Valset, signBytes string) ([]types.ValsetConfirm, error) { confirms := make([]types.ValsetConfirm, 0) for _, member := range valset.Members { - confirm, err := q.BlobstreamDHT.GetValsetConfirm( + confirm, err := q.BlobstreamDHT.GetValsetConfirm( ctx, GetValsetConfirmKey(nonce, member.EvmAddress, signBytes), ) diff --git a/p2p/querier_test.go b/p2p/querier_test.go index fa2af3c9..00fa4d6b 100644 --- a/p2p/querier_test.go +++ b/p2p/querier_test.go @@ -14,7 +14,7 @@ import ( celestiatypes "github.com/celestiaorg/celestia-app/x/qgb/types" "github.com/celestiaorg/orchestrator-relayer/p2p" - blobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" + blobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" "github.com/celestiaorg/orchestrator-relayer/types" "github.com/ethereum/go-ethereum/common" "github.com/stretchr/testify/assert" @@ -33,7 +33,7 @@ var ( func TestQueryTwoThirdsDataCommitmentConfirms(t *testing.T) { ctx := context.Background() - network := blobstreamtesting.NewDHTNetwork(ctx, 2) + network := blobstreamtesting.NewDHTNetwork(ctx, 2) defer network.Stop() vsNonce := uint64(2) @@ -152,7 +152,7 @@ func TestQueryTwoThirdsDataCommitmentConfirms(t *testing.T) { func TestQueryTwoThirdsValsetConfirms(t *testing.T) { ctx := context.Background() - network := blobstreamtesting.NewDHTNetwork(ctx, 2) + network := blobstreamtesting.NewDHTNetwork(ctx, 2) defer network.Stop() vsNonce := uint64(2) @@ -279,7 +279,7 @@ func TestQueryTwoThirdsValsetConfirms(t *testing.T) { func TestQueryValsetConfirmByEVMAddress(t *testing.T) { ctx := context.Background() - network := blobstreamtesting.NewDHTNetwork(ctx, 2) + network := blobstreamtesting.NewDHTNetwork(ctx, 2) defer network.Stop() vsNonce := uint64(10) @@ -320,7 +320,7 @@ func TestQueryValsetConfirmByEVMAddress(t *testing.T) { func TestQueryDataCommitmentConfirmByEVMAddress(t *testing.T) { ctx := context.Background() - network := blobstreamtesting.NewDHTNetwork(ctx, 2) + network := blobstreamtesting.NewDHTNetwork(ctx, 2) defer network.Stop() dcNonce := uint64(10) @@ -362,7 +362,7 @@ func TestQueryDataCommitmentConfirmByEVMAddress(t *testing.T) { func TestQueryValsetConfirms(t *testing.T) { ctx := context.Background() - network := blobstreamtesting.NewDHTNetwork(ctx, 2) + network := blobstreamtesting.NewDHTNetwork(ctx, 2) defer network.Stop() 
vsNonce := uint64(2) @@ -444,7 +444,7 @@ func TestQueryValsetConfirms(t *testing.T) { func TestQueryDataCommitmentConfirms(t *testing.T) { ctx := context.Background() - network := blobstreamtesting.NewDHTNetwork(ctx, 2) + network := blobstreamtesting.NewDHTNetwork(ctx, 2) defer network.Stop() dcNonce := uint64(2) diff --git a/relayer/relayer.go b/relayer/relayer.go index 0bfb491d..1bbf06e8 100644 --- a/relayer/relayer.go +++ b/relayer/relayer.go @@ -327,7 +327,7 @@ func (r *Relayer) SaveDataCommitmentSignaturesToStore(ctx context.Context, att c } // matchAttestationConfirmSigs matches and sorts the confirm signatures with the valset -// members as expected by the Blobstream contract. +// members as expected by the Blobstream contract. // Also, it leaves the non provided signatures as nil in the `sigs` slice: // https://github.com/celestiaorg/celestia-app/issues/628 func matchAttestationConfirmSigs( @@ -335,7 +335,7 @@ func matchAttestationConfirmSigs( currentValset celestiatypes.Valset, ) ([]wrapper.Signature, error) { sigs := make([]wrapper.Signature, len(currentValset.Members)) - // the Blobstream contract expects the signatures to be ordered by validators in valset + // the Blobstream contract expects the signatures to be ordered by validators in valset for i, val := range currentValset.Members { sig, has := signatures[val.EvmAddress] if !has { diff --git a/relayer/relayer_test.go b/relayer/relayer_test.go index 1b35a4fa..a9df4480 100644 --- a/relayer/relayer_test.go +++ b/relayer/relayer_test.go @@ -8,7 +8,7 @@ import ( "github.com/celestiaorg/orchestrator-relayer/p2p" "github.com/ipfs/go-datastore" - blobstreamtypes "github.com/celestiaorg/orchestrator-relayer/types" + blobstreamtypes "github.com/celestiaorg/orchestrator-relayer/types" "github.com/stretchr/testify/assert" @@ -27,7 +27,7 @@ func (s *RelayerTestSuite) TestProcessAttestation() { att := types.NewDataCommitment(latestValset.Nonce+1, 10, 100, time.Now()) commitment, err := s.Orchestrator.TmQuerier.QueryCommitment(ctx, att.BeginBlock, att.EndBlock) require.NoError(t, err) - dataRootTupleRoot := blobstreamtypes.DataCommitmentTupleRootSignBytes(big.NewInt(int64(att.Nonce)), commitment) + dataRootTupleRoot := blobstreamtypes.DataCommitmentTupleRootSignBytes(big.NewInt(int64(att.Nonce)), commitment) err = s.Orchestrator.ProcessDataCommitmentEvent(ctx, *att, dataRootTupleRoot) require.NoError(t, err) diff --git a/relayer/suite_test.go b/relayer/suite_test.go index 5f72ab25..3c56da15 100644 --- a/relayer/suite_test.go +++ b/relayer/suite_test.go @@ -8,14 +8,14 @@ import ( "github.com/celestiaorg/orchestrator-relayer/orchestrator" "github.com/celestiaorg/orchestrator-relayer/relayer" - blobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" + blobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" "github.com/stretchr/testify/require" "github.com/stretchr/testify/suite" ) type RelayerTestSuite struct { suite.Suite - Node *blobstreamtesting.TestNode + Node *blobstreamtesting.TestNode Orchestrator *orchestrator.Orchestrator Relayer *relayer.Relayer } @@ -26,15 +26,15 @@ func (s *RelayerTestSuite) SetupSuite() { t.Skip("skipping relayer tests in short mode.") } ctx := context.Background() - s.Node = blobstreamtesting.NewTestNode(ctx, t) + s.Node = blobstreamtesting.NewTestNode(ctx, t) _, err := s.Node.CelestiaNetwork.WaitForHeight(2) require.NoError(t, err) - s.Orchestrator = blobstreamtesting.NewOrchestrator(t, s.Node) - s.Relayer = blobstreamtesting.NewRelayer(t, s.Node) + 
s.Orchestrator = blobstreamtesting.NewOrchestrator(t, s.Node) + s.Relayer = blobstreamtesting.NewRelayer(t, s.Node) go s.Node.EVMChain.PeriodicCommit(ctx, time.Millisecond) initVs, err := s.Relayer.AppQuerier.QueryLatestValset(s.Node.Context) require.NoError(t, err) - _, _, _, err = s.Relayer.EVMClient.DeployBlobstreamContract(s.Node.EVMChain.Auth, s.Node.EVMChain.Backend, *initVs, initVs.Nonce, true) + _, _, _, err = s.Relayer.EVMClient.DeployBlobstreamContract(s.Node.EVMChain.Auth, s.Node.EVMChain.Backend, *initVs, initVs.Nonce, true) require.NoError(t, err) } diff --git a/rpc/app_querier.go b/rpc/app_querier.go index a6397c5d..027e0ce3 100644 --- a/rpc/app_querier.go +++ b/rpc/app_querier.go @@ -106,7 +106,7 @@ func (aq *AppQuerier) QueryDataCommitmentForHeight(ctx context.Context, height u return resp.DataCommitment, nil } -// QueryLatestDataCommitment query the latest data commitment in Blobstream state machine. +// QueryLatestDataCommitment query the latest data commitment in Blobstream state machine. func (aq *AppQuerier) QueryLatestDataCommitment(ctx context.Context) (*celestiatypes.DataCommitment, error) { queryClient := celestiatypes.NewQueryClient(aq.clientConn) resp, err := queryClient.LatestDataCommitment(ctx, &celestiatypes.QueryLatestDataCommitmentRequest{}) diff --git a/rpc/suite_test.go b/rpc/suite_test.go index 9fbc055c..cea3b4b8 100644 --- a/rpc/suite_test.go +++ b/rpc/suite_test.go @@ -11,13 +11,13 @@ import ( "github.com/stretchr/testify/require" - blobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" + blobstreamtesting "github.com/celestiaorg/orchestrator-relayer/testing" "github.com/stretchr/testify/suite" ) type QuerierTestSuite struct { suite.Suite - Network *blobstreamtesting.CelestiaNetwork + Network *blobstreamtesting.CelestiaNetwork EncConf encoding.Config Logger tmlog.Logger } @@ -25,7 +25,7 @@ type QuerierTestSuite struct { func (s *QuerierTestSuite) SetupSuite() { t := s.T() ctx := context.Background() - s.Network = blobstreamtesting.NewCelestiaNetwork(ctx, t) + s.Network = blobstreamtesting.NewCelestiaNetwork(ctx, t) _, err := s.Network.WaitForHeightWithTimeout(400, 30*time.Second) s.EncConf = encoding.MakeConfig(app.ModuleEncodingRegisters...) s.Logger = tmlog.NewNopLogger() diff --git a/store/init.go b/store/init.go index afe36b23..9c1962f6 100644 --- a/store/init.go +++ b/store/init.go @@ -35,7 +35,7 @@ type InitOptions struct { NeedP2PKeyStore bool } -// Init initializes the Blobstream file system in the directory under +// Init initializes the Blobstream file system in the directory under // 'path'. // It also creates a lock under that directory, so it can't be used // by multiple processes. diff --git a/store/store.go b/store/store.go index 6d4d10d6..6e5c9ee1 100644 --- a/store/store.go +++ b/store/store.go @@ -15,7 +15,7 @@ import ( tmlog "github.com/tendermint/tendermint/libs/log" ) -// Store contains relevant information about the Blobstream store. +// Store contains relevant information about the Blobstream store. type Store struct { // DataStore provides a Datastore - a KV store for dht p2p data to be stored on disk. DataStore datastore.Batching @@ -29,7 +29,7 @@ type Store struct { // P2PKeyStore provides a keystore for P2P private keys. P2PKeyStore *keystore2.FSKeystore - // Path the path to the Blobstream storage root. + // Path the path to the Blobstream storage root. Path string // storeLock protects directory when the data store is open. 
diff --git a/testing/dht_network.go b/testing/dht_network.go index 63c5c8ad..6ae58a21 100644 --- a/testing/dht_network.go +++ b/testing/dht_network.go @@ -19,7 +19,7 @@ type DHTNetwork struct { Context context.Context Hosts []host.Host Stores []ds.Batching - DHTs []*p2p.BlobstreamDHT + DHTs []*p2p.BlobstreamDHT } // NewDHTNetwork creates a new DHT test network running in-memory. @@ -34,7 +34,7 @@ func NewDHTNetwork(ctx context.Context, count int) *DHTNetwork { } hosts := make([]host.Host, count) stores := make([]ds.Batching, count) - dhts := make([]*p2p.BlobstreamDHT, count) + dhts := make([]*p2p.BlobstreamDHT, count) for i := 0; i < count; i++ { if i == 0 { hosts[i], stores[i], dhts[i] = NewTestDHT(ctx, nil) @@ -59,13 +59,13 @@ func NewDHTNetwork(ctx context.Context, count int) *DHTNetwork { } // NewTestDHT creates a test DHT not connected to any peers. -func NewTestDHT(ctx context.Context, bootstrappers []peer.AddrInfo) (host.Host, ds.Batching, *p2p.BlobstreamDHT) { +func NewTestDHT(ctx context.Context, bootstrappers []peer.AddrInfo) (host.Host, ds.Batching, *p2p.BlobstreamDHT) { h, err := libp2p.New() if err != nil { panic(err) } dataStore := dssync.MutexWrap(ds.NewMapDatastore()) - dht, err := p2p.NewBlobstreamDHT(ctx, h, dataStore, bootstrappers, tmlog.NewNopLogger()) + dht, err := p2p.NewBlobstreamDHT(ctx, h, dataStore, bootstrappers, tmlog.NewNopLogger()) if err != nil { panic(err) } @@ -73,7 +73,7 @@ func NewTestDHT(ctx context.Context, bootstrappers []peer.AddrInfo) (host.Host, } // WaitForPeerTableToUpdate waits for nodes to have updated their peers list -func WaitForPeerTableToUpdate(ctx context.Context, dhts []*p2p.BlobstreamDHT, timeout time.Duration) error { +func WaitForPeerTableToUpdate(ctx context.Context, dhts []*p2p.BlobstreamDHT, timeout time.Duration) error { withTimeout, cancel := context.WithTimeout(ctx, timeout) defer cancel() ticker := time.NewTicker(time.Millisecond) diff --git a/types/data_commitment_confirm.go b/types/data_commitment_confirm.go index ad391a0e..8ced63c0 100644 --- a/types/data_commitment_confirm.go +++ b/types/data_commitment_confirm.go @@ -57,7 +57,7 @@ func IsEmptyMsgDataCommitmentConfirm(dcc DataCommitmentConfirm) bool { } // DataCommitmentTupleRootSignBytes EncodeDomainSeparatedDataCommitment takes the required input data and -// produces the required signature to confirm a validator set update on the Blobstream Ethereum contract. +// produces the required signature to confirm a validator set update on the Blobstream Ethereum contract. // This value will then be signed before being submitted to Cosmos, verified, and then relayed to Ethereum. func DataCommitmentTupleRootSignBytes(nonce *big.Int, commitment []byte) ethcmn.Hash { var dataCommitment [32]uint8 diff --git a/types/valset_confirm.go b/types/valset_confirm.go index 33c66782..e9350f8f 100644 --- a/types/valset_confirm.go +++ b/types/valset_confirm.go @@ -14,7 +14,7 @@ import ( // // If a sufficient number of validators (66% of voting power) submit ValsetConfirm // messages with their signatures, it is then possible for anyone to query them from -// the Blobstream P2P network and submit them to Ethereum to update the validator set. +// the Blobstream P2P network and submit them to Ethereum to update the validator set. type ValsetConfirm struct { // Ethereum address, associated to the orchestrator, used to sign the `ValSet` // message.