Merge branch 'celestiaorg:main' into feat-fixed-celestiaorg#3078
abhirajprasad authored Jan 2, 2025
2 parents d89e99c + 39a0ab2 commit 8e9aaa0
Showing 21 changed files with 30 additions and 30 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -151,7 +151,7 @@ If you import celestia-app as a Go module, you may need to add some Go module `r
If you are running celestia-app in tests, you may want to override the `timeout_commit` to produce blocks faster. By default, a celestia-app chain with app version >= 3 will produce blocks every ~6 seconds. To produce blocks faster, you can override the `timeout_commit` with the `--timeout-commit` flag.

```shell
- # Start celestia-appd with a one second timeout commit.
+ # Start celestia-appd with a one-second timeout commit.
celestia-appd start --timeout-commit 1s
```

2 changes: 1 addition & 1 deletion app/genesis.go
@@ -7,7 +7,7 @@ import (
)

// The genesis state of the blockchain is represented here as a map of raw json
- // messages key'd by a identifier string.
+ // messages key'd by an identifier string.
// The identifier is used to determine which module genesis information belongs
// to so it may be appropriately routed during init chain.
// Within this application default genesis information is retrieved from
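
The comment above describes the standard Cosmos SDK genesis layout; as a hedged illustration (the type below follows the usual SDK convention and is a sketch, not a quote of celestia-app's file):

```go
package app

import "encoding/json"

// GenesisState maps a module identifier (e.g. "bank", "staking") to that
// module's raw JSON genesis blob, so InitChain can route each blob to the
// module that owns it.
type GenesisState map[string]json.RawMessage
```
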
2 changes: 1 addition & 1 deletion docs/architecture/adr-001-abci++-adoption.md
@@ -328,7 +328,7 @@ func (sqwr *shareSplitter) writeMalleatedTx(
}
```

- Lastly, the data availability header is used to create the `DataHash` in the `Header` in the application instead of in tendermint. This is done by modifying the protobuf version of the block data to retain the cached hash and setting it during `ProcessProposal`. Later, in `ProcessProposal` other full nodes check that the `DataHash` matches the block data by recomputing it. Previously, this extra check was performed inside the `ValidateBasic` method of `types.Data`, where is was computed each time it was decoded. Not only is this more efficient as it saves significant computational resources and keeps `ValidateBasic` light, it is also much more explicit. This approach does not however dramatically change any existing code in tendermint, as the code to compute the hash of the block data remains there. Ideally, we would move all of the code that computes erasure encoding to the app. This approach allows us to keep the intuitiveness of the `Hash` method for `types.Data`, along with not forcing us to change many tests in tendermint, which rely on this functionality.
+ Lastly, the data availability header is used to create the `DataHash` in the `Header` in the application instead of in tendermint. This is done by modifying the protobuf version of the block data to retain the cached hash and setting it during `ProcessProposal`. Later, in `ProcessProposal` other full nodes check that the `DataHash` matches the block data by recomputing it. Previously, this extra check was performed inside the `ValidateBasic` method of `types.Data`, where it was computed each time it was decoded. Not only is this more efficient as it saves significant computational resources and keeps `ValidateBasic` light, it is also much more explicit. This approach does not however dramatically change any existing code in tendermint, as the code to compute the hash of the block data remains there. Ideally, we would move all of the code that computes erasure encoding to the app. This approach allows us to keep the intuitiveness of the `Hash` method for `types.Data`, along with not forcing us to change many tests in tendermint, which rely on this functionality.
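
As a rough, self-contained sketch of the recompute-and-compare check described above (the real code erasure-encodes the block data and uses the data availability header root; the plain hash and all names below are illustrative assumptions):

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// header stands in for the block header's DataHash field.
type header struct {
	DataHash []byte
}

// dataHash is a placeholder for the real DAH root: it simply hashes the
// concatenated shares so the sketch stays self-contained.
func dataHash(shares [][]byte) []byte {
	h := sha256.New()
	for _, s := range shares {
		h.Write(s)
	}
	return h.Sum(nil)
}

// processProposal mirrors the check: full nodes recompute the hash of the
// block data and compare it against the DataHash set by the proposer.
func processProposal(hdr header, shares [][]byte) bool {
	return bytes.Equal(hdr.DataHash, dataHash(shares))
}

func main() {
	shares := [][]byte{[]byte("share-0"), []byte("share-1")}
	hdr := header{DataHash: dataHash(shares)}
	fmt.Println("proposal accepted:", processProposal(hdr, shares))
}
```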

### ProcessProposal [#214](https://github.com/celestiaorg/celestia-app/pull/214), [#216](https://github.com/celestiaorg/celestia-app/pull/216), and [#224](https://github.com/celestiaorg/celestia-app/pull/224)

2 changes: 1 addition & 1 deletion docs/architecture/adr-002-qgb-valset.md
@@ -37,7 +37,7 @@ Finally, if there are no validator set updates for the unbonding window, the bri

### Message types

- We added the following messages types:
+ We added the following message types:

#### Bridge Validator

2 changes: 1 addition & 1 deletion docs/architecture/adr-006-non-interactive-defaults.md
@@ -21,7 +21,7 @@ While this functions as a message inclusion check, the light client has to assum
The main issue with that requirement is that users must know the relevant subtree roots before they sign, which is problematic considering that if the block is not organized perfectly, the subtree roots will include data unknown to the user at the time of signing.

- To fix this, the spec outlines the non-interactive default rules. These involve a few additional **default but optional** message layout rules that enables the user to follow the above block validity rule, while also not interacting with a block producer. Commitments to messages can consist entirely of sub-tree roots of the data hash, and those sub-tree roots are to be generated only from the message itself (so that the user can sign something non-interactively).
+ To fix this, the spec outlines the "non-interactive default rules". These involve a few additional **default but optional** message layout rules that enables the user to follow the above block validity rule, while also not interacting with a block producer. Commitments to messages can consist entirely of sub-tree roots of the data hash, and those sub-tree roots are to be generated only from the message itself (so that the user can sign something "non-interactively").

> **Messages must begin at a location aligned with the largest power of 2 that is not larger than the message length or k.**
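
The quoted rule is an alignment computation; a small illustrative sketch in Go (an assumed helper, not celestia-app's actual layout code, which also handles shares, namespaces, and padding):

```go
package main

import "fmt"

// nextAlignedStart returns the first index >= cursor that satisfies the rule
// above: the start must be a multiple of the largest power of two that is not
// larger than the message length or k.
func nextAlignedStart(cursor, msgLen, k int) int {
	align := 1
	for align*2 <= msgLen && align*2 <= k {
		align *= 2
	}
	if r := cursor % align; r != 0 {
		cursor += align - r
	}
	return cursor
}

func main() {
	// A 5-share message in a k=8 square with the cursor at index 3 starts at
	// index 4, since 4 is the largest power of two not larger than 5.
	fmt.Println(nextAlignedStart(3, 5, 8)) // 4
}
```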
@@ -152,7 +152,7 @@ Worst case commitment inclusion proof size over 2-4 share PFB
<!--- This does not need a fraud proof as it could be a validation rule that even light clients can check. This would require the light clients to know the sequencer set and whose turn it was. (not sure about this)
--->

- The fraud proof for this would be to prove that the commitment of the PFB transaction does not equal the predicted commitment in the header. Therefore this is equivalent to a PFB transaction inclusion proof. This fraud proof would be optimistic as we would assume that the PFB commitment is correct. But realistically if the commitment over the PFB transaction is wrong then the PFB commitment is most likely wrong as well. Therefore the fraud poof would be a PFB Fraud Proof as described at the top.
+ The fraud proof for this would be to prove that the commitment of the PFB transaction does not equal the predicted commitment in the header. Therefore this is equivalent to a PFB transaction inclusion proof. This fraud proof would be optimistic as we would assume that the PFB commitment is correct. But realistically if the commitment over the PFB transaction is wrong then the PFB commitment is most likely wrong as well. Therefore the fraud proof would be a PFB Fraud Proof as described at the top.
If we do not have a PFB transaction that can be predicted, we also need to slash double signing of 2 valid PFB transactions in Celestia. This is required so we don't create a valid fraud proof over a valid commitment over the PFB transaction.

The third optimization could be to SNARK the PFB Inclusion Proof to reduce the size even more.?
2 changes: 1 addition & 1 deletion go.mod
@@ -34,7 +34,7 @@ require (
golang.org/x/exp v0.0.0-20240904232852-e7e105dedf7e
google.golang.org/genproto/googleapis/api v0.0.0-20241015192408-796eee8c2d53
google.golang.org/grpc v1.69.2
- google.golang.org/protobuf v1.36.0
+ google.golang.org/protobuf v1.36.1
gopkg.in/yaml.v2 v2.4.0
k8s.io/apimachinery v0.32.0
)
4 changes: 2 additions & 2 deletions go.sum
@@ -2013,8 +2013,8 @@ google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQ
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
- google.golang.org/protobuf v1.36.0 h1:mjIs9gYtt56AzC4ZaffQuh88TZurBGhIJMBZGSxNerQ=
- google.golang.org/protobuf v1.36.0/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
+ google.golang.org/protobuf v1.36.1 h1:yBPeRvTftaleIgM3PZ/WBIZ7XM/eEYAaEyCwvyjq/gk=
+ google.golang.org/protobuf v1.36.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
4 changes: 2 additions & 2 deletions local_devnet/celestia-app/config.toml
@@ -398,11 +398,11 @@ wal_file = "data/cs.wal/wal"
timeout_propose = "10s"
# How much timeout_propose increases with each round
timeout_propose_delta = "500ms"
- # How long we wait after receiving +2/3 prevotes for anything (ie. not a single block or nil)
+ # How long we wait after receiving +2/3 prevotes for "anything" (ie. not a single block or nil)
timeout_prevote = "1s"
# How much the timeout_prevote increases with each round
timeout_prevote_delta = "500ms"
- # How long we wait after receiving +2/3 precommits for anything (ie. not a single block or nil)
+ # How long we wait after receiving +2/3 precommits for "anything" (ie. not a single block or nil)
timeout_precommit = "1s"
# How much the timeout_precommit increases with each round
timeout_precommit_delta = "500ms"
2 changes: 1 addition & 1 deletion pkg/appconsts/global_consts.go
@@ -11,7 +11,7 @@ import (
//
// They cannot change throughout the lifetime of a network.
const (
- // DefaultShareVersion is the defacto share version. Use this if you are
+ // DefaultShareVersion is the de facto share version. Use this if you are
// unsure of which version to use.
DefaultShareVersion = share.ShareVersionZero

4 changes: 2 additions & 2 deletions pkg/appconsts/initial_consts.go
@@ -7,7 +7,7 @@ import (
)

// The following defaults correspond to initial parameters of the network that can be changed, not via app versions
- // but other means such as on-chain governance, or the nodes local config
+ // but other means such as on-chain governance, or the node's local config
const (
// DefaultGovMaxSquareSize is the default value for the governance modifiable
// max square size.
@@ -19,7 +19,7 @@ const (

// DefaultMinGasPrice is the default min gas price that gets set in the app.toml file.
// The min gas price acts as a filter. Transactions below that limit will not pass
- // a nodes `CheckTx` and thus not be proposed by that node.
+ // a node's `CheckTx` and thus not be proposed by that node.
DefaultMinGasPrice = 0.002 // utia

// DefaultUnbondingTime is the default time a validator must wait
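
As a hedged sketch of the CheckTx-time filter described in the comment above (simplified to plain floats; the real ante handler works with SDK decimal and coin types):

```go
package main

import "fmt"

// passesMinGasPrice admits a transaction only if the fee it offers per unit of
// gas meets the node's configured minimum (0.002 utia by default).
func passesMinGasPrice(feeUtia float64, gasLimit uint64, minGasPrice float64) bool {
	if gasLimit == 0 {
		return false
	}
	return feeUtia/float64(gasLimit) >= minGasPrice
}

func main() {
	// A 100_000-gas transaction needs at least 200 utia in fees at 0.002 utia/gas.
	fmt.Println(passesMinGasPrice(200, 100_000, 0.002)) // true
	fmt.Println(passesMinGasPrice(150, 100_000, 0.002)) // false
}
```
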
2 changes: 1 addition & 1 deletion pkg/da/data_availability_header_test.go
@@ -187,7 +187,7 @@ func Test_DAHValidateBasic(t *testing.T) {
errStr: "minimum valid DataAvailabilityHeader has at least",
},
{
- name: "bash hash",
+ name: "bad hash",
dah: badHashDah,
expectErr: true,
errStr: "wrong hash",
2 changes: 1 addition & 1 deletion pkg/proof/row_proof_test.go
@@ -70,7 +70,7 @@ var root = []byte{0x82, 0x37, 0x91, 0xd2, 0x5d, 0x77, 0x7, 0x67, 0x35, 0x3, 0x90
var incorrectRoot = bytes.Repeat([]byte{0}, 32)

// validRowProof returns a row proof for one row. This test data copied from
- // ceelestia-app's pkg/proof/proof_test.go TestNewShareInclusionProof: "1
+ // celestia-app's pkg/proof/proof_test.go TestNewShareInclusionProof: "1
// transaction share"
func validRowProof() RowProof {
return RowProof{
2 changes: 1 addition & 1 deletion pkg/user/tx_client.go
@@ -129,7 +129,7 @@ func WithDefaultAccount(name string) Option {

// TxClient is an abstraction for building, signing, and broadcasting Celestia transactions
// It supports multiple accounts. If none is specified, it will
- // try use the default account.
+ // try to use the default account.
// TxClient is thread-safe.
type TxClient struct {
mtx sync.Mutex
2 changes: 1 addition & 1 deletion test/e2e/benchmark/throughput.go
@@ -33,7 +33,7 @@ var bigBlockManifest = Manifest{
CelestiaAppVersion: "pr-3261",
TxClientVersion: "pr-3261",
EnableLatency: false,
- LatencyParams: LatencyParams{70, 0}, // in milliseconds
+ LatencyParams: LatencyParams{70, 0}, // in milliseconds
BlobSequences: 60,
BlobsPerSeq: 6,
BlobSizes: "200000",
4 changes: 2 additions & 2 deletions test/e2e/minor_version_compatibility.go
@@ -127,11 +127,11 @@ func MinorVersionCompatibility(logger *log.Logger) error {
}

logger.Println("checking that all nodes are at the same height")
- const maxPermissableDiff = 2
+ const maxPermissibleDiff = 2
for i := 0; i < len(heights); i++ {
for j := i + 1; j < len(heights); j++ {
diff := heights[i] - heights[j]
- if diff > maxPermissableDiff {
+ if diff > maxPermissibleDiff {
logger.Fatalf("node %d is behind node %d by %d blocks", j, i, diff)
}
}
6 changes: 3 additions & 3 deletions test/pfm/pfm_test.go
@@ -76,10 +76,10 @@ func TestPacketForwardMiddlewareTransfer(t *testing.T) {
coordinator.Setup(path2)

celestiaApp := celestia.App.(*app.App)
- originalCelestiaBalalance := celestiaApp.BankKeeper.GetBalance(celestia.GetContext(), celestia.SenderAccount.GetAddress(), sdk.DefaultBondDenom)
+ originalCelestiaBalance := celestiaApp.BankKeeper.GetBalance(celestia.GetContext(), celestia.SenderAccount.GetAddress(), sdk.DefaultBondDenom)

// Take half of the original balance
- transferAmount := originalCelestiaBalalance.Amount.QuoRaw(2)
+ transferAmount := originalCelestiaBalance.Amount.QuoRaw(2)
timeoutHeight := clienttypes.NewHeight(1, 300)
coinToSendToB := sdk.NewCoin(sdk.DefaultBondDenom, transferAmount)

@@ -120,7 +120,7 @@ func TestPacketForwardMiddlewareTransfer(t *testing.T) {
require.NoError(t, err)

sourceBalanceAfter := celestiaApp.BankKeeper.GetBalance(celestia.GetContext(), celestia.SenderAccount.GetAddress(), sdk.DefaultBondDenom)
- require.Equal(t, originalCelestiaBalalance.Amount.Sub(transferAmount), sourceBalanceAfter.Amount)
+ require.Equal(t, originalCelestiaBalance.Amount.Sub(transferAmount), sourceBalanceAfter.Amount)

ibcDenomTrace := types.ParseDenomTrace(types.GetPrefixedDenom(packet.GetDestPort(), packet.GetDestChannel(), sdk.DefaultBondDenom))
destinationBalanceAfter := chainB.App.(*SimApp).BankKeeper.GetBalance(chainB.GetContext(), chainB.SenderAccount.GetAddress(), ibcDenomTrace.IBCDenom())
2 changes: 1 addition & 1 deletion test/pfm/simapp.go
@@ -130,7 +130,7 @@ type App interface {
// Loads the app at a given height.
LoadHeight(height int64) error

- // All the registered module account addreses.
+ // All the registered module account addresses.
ModuleAccountAddrs() map[string]bool

// Helper for the simulation framework.
2 changes: 1 addition & 1 deletion test/txsim/run_test.go
@@ -172,7 +172,7 @@ func TestTxSimUpgrade(t *testing.T) {

require.NoError(t, cctx.WaitForNextBlock())

- // updrade to v3 at height 20
+ // upgrade to v3 at height 20
sequences := []txsim.Sequence{
txsim.NewUpgradeSequence(v3.Version, 20),
}
6 changes: 3 additions & 3 deletions test/txsim/stake.go
@@ -18,15 +18,15 @@ var _ Sequence = &StakeSequence{}
// to a single validator at a time. TODO: Allow for multiple delegations
type StakeSequence struct {
initialStake int
- redelegatePropability int
+ redelegateProbability int
delegatedTo string
account types.AccAddress
}

func NewStakeSequence(initialStake int) *StakeSequence {
return &StakeSequence{
initialStake: initialStake,
- redelegatePropability: 10, // 1 in every 10
+ redelegateProbability: 10, // 1 in every 10
}
}

@@ -68,7 +68,7 @@ func (s *StakeSequence) Next(ctx context.Context, querier grpc.ClientConn, rand
}

// occasionally redelegate the initial stake to another validator at random
- if rand.Intn(s.redelegatePropability) == 0 {
+ if rand.Intn(s.redelegateProbability) == 0 {
val, err := getRandomValidator(ctx, querier, rand)
if err != nil {
return Operation{}, err
4 changes: 2 additions & 2 deletions test/util/malicious/app.go
@@ -60,8 +60,8 @@ func New(
badApp := &App{App: goodApp}

// set the malicious prepare proposal handler if it is set in the app options
- if malHanderName := appOpts.Get(BehaviorConfigKey); malHanderName != nil {
- badApp.SetMaliciousBehavior(malHanderName.(BehaviorConfig))
+ if malHandlerName := appOpts.Get(BehaviorConfigKey); malHandlerName != nil {
+ badApp.SetMaliciousBehavior(malHandlerName.(BehaviorConfig))
}

return badApp
