Commit
Merge branch 'main' into nina/fix-tx-size-cap
ninabarbakadze authored Dec 9, 2024
2 parents f0fca5b + 818026a commit cbd099a
Showing 7 changed files with 36 additions and 6 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/test.yml
@@ -41,7 +41,7 @@ jobs:
run: make test-coverage

- name: Upload coverage.txt
-uses: codecov/codecov-action@v5.0.7
+uses: codecov/codecov-action@v5.1.1
with:
file: ./coverage.txt

15 changes: 15 additions & 0 deletions cmd/celestia-appd/cmd/start.go
Expand Up @@ -14,6 +14,7 @@ import (
"strings"
"time"

"github.com/celestiaorg/celestia-app/v3/pkg/appconsts"
"github.com/cosmos/cosmos-sdk/client"
"github.com/cosmos/cosmos-sdk/client/flags"
"github.com/cosmos/cosmos-sdk/codec"
@@ -117,6 +118,20 @@ is performed. Note, when enabled, gRPC will also be automatically enabled.
return err
}

switch clientCtx.ChainID {
case appconsts.ArabicaChainID:
serverCtx.Logger.Info(fmt.Sprintf("Since the chainID is %v, configuring the default v2 upgrade height to %v", appconsts.ArabicaChainID, appconsts.ArabicaUpgradeHeightV2))
serverCtx.Viper.SetDefault(UpgradeHeightFlag, appconsts.ArabicaUpgradeHeightV2)
case appconsts.MochaChainID:
serverCtx.Logger.Info(fmt.Sprintf("Since the chainID is %v, configuring the default v2 upgrade height to %v", appconsts.MochaChainID, appconsts.MochaUpgradeHeightV2))
serverCtx.Viper.SetDefault(UpgradeHeightFlag, appconsts.MochaUpgradeHeightV2)
case appconsts.MainnetChainID:
serverCtx.Logger.Info(fmt.Sprintf("Since the chainID is %v, configuring the default v2 upgrade height to %v", appconsts.MainnetChainID, appconsts.MainnetUpgradeHeightV2))
serverCtx.Viper.SetDefault(UpgradeHeightFlag, appconsts.MainnetUpgradeHeightV2)
default:
serverCtx.Logger.Info(fmt.Sprintf("No default value exists for the v2 upgrade height when the chainID is %v", clientCtx.ChainID))
}

withTM, _ := cmd.Flags().GetBool(flagWithTendermint)
if !withTM {
serverCtx.Logger.Info("starting ABCI without Tendermint")
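The switch added to start.go above registers per-chain defaults through `serverCtx.Viper.SetDefault`, so an upgrade height passed explicitly via the flag still wins over the baked-in value. A minimal sketch of that resolution order, assuming a hypothetical helper (the map and function names are illustrative, not the actual celestia-app implementation; the heights are the constants from this diff):

```go
package main

import "fmt"

// Default v2 upgrade heights per chain ID, taken from the diff above.
var defaultUpgradeHeightV2 = map[string]int64{
	"arabica-11": 1751707, // appconsts.ArabicaUpgradeHeightV2
	"mocha-4":    2585031, // appconsts.MochaUpgradeHeightV2
	"celestia":   2371495, // appconsts.MainnetUpgradeHeightV2
}

// resolveUpgradeHeight mimics Viper's SetDefault precedence: an explicitly
// set value wins, otherwise the per-chain default applies, otherwise zero
// (no default exists for unknown chain IDs, matching the default case above).
func resolveUpgradeHeight(chainID string, explicit int64, explicitSet bool) int64 {
	if explicitSet {
		return explicit
	}
	if h, ok := defaultUpgradeHeightV2[chainID]; ok {
		return h
	}
	return 0
}

func main() {
	fmt.Println(resolveUpgradeHeight("mocha-4", 0, false))   // default applies: 2585031
	fmt.Println(resolveUpgradeHeight("mocha-4", 42, true))   // explicit flag wins: 42
	fmt.Println(resolveUpgradeHeight("private-1", 0, false)) // unknown chain: 0
}
```

Using `SetDefault` (rather than overwriting the value) is what keeps operator-supplied flags authoritative while giving known public networks a sensible out-of-the-box height.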
2 changes: 2 additions & 0 deletions pkg/appconsts/chain_ids.go
@@ -2,4 +2,6 @@ package appconsts

const (
ArabicaChainID = "arabica-11"
MochaChainID = "mocha-4"
MainnetChainID = "celestia"
)
13 changes: 13 additions & 0 deletions pkg/appconsts/upgrade_heights.go
@@ -0,0 +1,13 @@
package appconsts

const (
// ArabicaUpgradeHeightV2 is the block height at which the arabica-11
// upgraded from app version 1 to 2.
ArabicaUpgradeHeightV2 = 1751707
// MochaUpgradeHeightV2 is the block height at which the mocha-4 upgraded
// from app version 1 to 2.
MochaUpgradeHeightV2 = 2585031
// MainnetUpgradeHeightV2 is the block height at which the celestia upgraded
// from app version 1 to 2.
MainnetUpgradeHeightV2 = 2371495
)
4 changes: 2 additions & 2 deletions specs/src/cat_pool.md
@@ -41,7 +41,7 @@ Both `SeenTx` and `WantTx` contain the sha256 hash of the raw transaction bytes.
Both messages are sent across a new channel with the ID: `byte(0x31)`. This enables cross compatibility as discussed in greater detail below.

> **Note:**
-> The term `SeenTx` is used over the more common `HasTx` because the transaction pool contains sophisticated eviction logic. TTL's, higher priority transactions and reCheckTx may mean that a transaction pool *had* a transaction but does not have it any more. Semantically it's more appropriate to use `SeenTx` to imply not the presence of a transaction but that the node has seen it and dealt with it accordingly.
+> The term `SeenTx` is used over the more common `HasTx` because the transaction pool contains sophisticated eviction logic. TTLs, higher priority transactions and reCheckTx may mean that a transaction pool *had* a transaction but does not have it any more. Semantically it's more appropriate to use `SeenTx` to imply not the presence of a transaction but that the node has seen it and dealt with it accordingly.
## Outbound logic

@@ -88,7 +88,7 @@ Upon receiving a `SeenTx` message:
- If the node does not have the transaction but recently evicted it, it MAY choose to rerequest the transaction if it has adequate resources now to process it.
- If the node has not seen the transaction or does not have any pending requests for that transaction, it can do one of two things:
- It MAY immediately request the tx from the peer with a `WantTx`.
-- If the node is connected to the peer specified in `FROM`, it is likely, from a non-byzantine peer, that the node will also shortly receive the transaction from the peer. It MAY wait for a `Txs` message for a bounded amount of time but MUST eventually send a `WantMsg` message to either the original peer or any other peer that *has* the specified transaction.
+- If the node is connected to the peer specified in `FROM`, it is likely, from a non-byzantine peer, that the node will also shortly receive the transaction from the peer. It MAY wait for a `Txs` message for a bounded amount of time but MUST eventually send a `WantTx` message to either the original peer or any other peer that *has* the specified transaction.
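The inbound `SeenTx` rules above amount to a small decision procedure. A hedged sketch under illustrative names (this is not the actual CAT pool code, which also tracks per-peer seen state and timers):

```go
package main

import "fmt"

// reaction enumerates the possible responses to an incoming SeenTx,
// following the inbound rules in the spec text above.
type reaction int

const (
	noop           reaction = iota // already have the tx, or a request is already pending
	maybeRerequest                 // recently evicted: MAY rerequest if resources now allow
	sendWantTx                     // MAY request immediately with WantTx
	waitThenWantTx                 // connected to FROM: MAY wait for Txs, MUST eventually send WantTx
)

// onSeenTx applies the rules in order; the boolean inputs stand in for the
// pool's actual state lookups.
func onSeenTx(haveTx, recentlyEvicted, pendingRequest, connectedToFrom bool) reaction {
	switch {
	case haveTx:
		return noop
	case recentlyEvicted:
		return maybeRerequest
	case pendingRequest:
		return noop
	case connectedToFrom:
		return waitThenWantTx
	default:
		return sendWantTx
	}
}

func main() {
	fmt.Println(onSeenTx(false, false, false, true) == waitThenWantTx) // true
	fmt.Println(onSeenTx(false, false, false, false) == sendWantTx)    // true
}
```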

Upon receiving a `WantTx` message:

2 changes: 1 addition & 1 deletion specs/src/data_square_layout.md
@@ -39,7 +39,7 @@ The order of blobs in a namespace is dictated by the priority of the PFBs that p

Transactions can pay fees for a blob to be included in the same block as the transaction itself. It may seem natural to bundle the `MsgPayForBlobs` transaction that pays for a number of blobs with these blobs (which is the case in other blockchains with native execution, e.g. calldata in Ethereum transactions or OP_RETURN data in Bitcoin transactions), however this would mean that processes validating the state of the Celestia network would need to download all blob data. PayForBlob transactions must therefore only include a commitment to (i.e. some hash of) the blob they pay fees for. If implemented naively (e.g. with a simple hash of the blob, or a simple binary Merkle tree root of the blob), this can lead to a data availability problem, as there are no guarantees that the data behind these commitments is actually part of the block data.

-To that end, we impose some additional rules onto _blobs only_: blobs must be placed is a way such that both the transaction sender and the block producer can be held accountable—a necessary property for e.g. fee burning. Accountable in this context means that
+To that end, we impose some additional rules onto _blobs only_: blobs must be placed in a way such that both the transaction sender and the block producer can be held accountable—a necessary property for e.g. fee burning. Accountable in this context means that

1. The transaction sender must pay sufficient fees for blob inclusion.
1. The block proposer cannot claim that a blob was included when it was not (which implies that a transaction and the blob it pays for must be included in the same block). In addition all blobs must be accompanied by a PayForBlob transaction.
4 changes: 2 additions & 2 deletions test/e2e/readme.md
@@ -30,7 +30,7 @@ make test-e2e E2ESimple

**Optional parameters**:

-- `KNUUU_TIMEOUT` can be used to override the default timeout of 60 minutes for the tests.
+- `KNUU_TIMEOUT` can be used to override the default timeout of 60 minutes for the tests.

## Observation

@@ -56,7 +56,7 @@ This will back up your default kubernetes configuration. If you use a different

### Install minikube

-Minikube is required to be installed on your machine. If you have a linux machine, follow the [minikube docs](https://kubernetes.io/fr/docs/tasks/tools/install-minikube/). If you're on macOS ARM, this [tutorial](https://devopscube.com/minikube-mac/) can be helpful to run it using qemu.
+Minikube is required to be installed on your machine. If you have a linux machine, follow the [minikube docs](https://kubernetes.io/docs/tasks/tools/install-minikube/). If you're on macOS ARM, this [tutorial](https://devopscube.com/minikube-mac/) can be helpful to run it using qemu.

### Create namespace

