chore: fix all typos on the repository (#2926)
This PR fixes typos that exist in the current repository (variable names will stay untouched).

## Checklist


- [ ] New and updated code has appropriate documentation
- [ ] New and updated code has new and/or updated testing
- [ ] Required CI checks are passing
- [ ] Visual proof for any user facing features like CLI or
documentation updates
- [ ] Linked issues closed with keywords
- [x] I reviewed file changes myself.

---------

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
hoangdv2429 and coderabbitai[bot] authored Dec 12, 2023
1 parent 2df365b commit 098bcb4
Showing 33 changed files with 45 additions and 45 deletions.
2 changes: 1 addition & 1 deletion app/ante/ante.go
@@ -59,7 +59,7 @@ func NewAnteHandler(
blobante.NewMinGasPFBDecorator(blobKeeper),
// Ensure that the tx's total blob size is <= the max blob size.
blobante.NewMaxBlobSizeDecorator(blobKeeper),
- // Ensure that tx's with a MsgSubmitProposal have atleast one proposal
+ // Ensure that tx's with a MsgSubmitProposal have at least one proposal
// message.
NewGovProposalDecorator(),
// Side effect: increment the nonce for all tx signers.
2 changes: 1 addition & 1 deletion app/process_proposal.go
@@ -77,7 +77,7 @@ func (app *App) ProcessProposal(req abci.RequestProcessProposal) (resp abci.Resp

// we need to increment the sequence for every transaction so that
// the signature check below is accurate. this error only gets hit
- // if the account in question doens't exist.
+ // if the account in question doesn't exist.
sdkCtx, err = handler(sdkCtx, sdkTx, false)
if err != nil {
logInvalidPropBlockError(app.Logger(), req.Header, "failure to increment sequence", err)
2 changes: 1 addition & 1 deletion app/test/fuzz_abci_test.go
@@ -40,7 +40,7 @@ func TestPrepareProposalConsistency(t *testing.T) {
iterations int
}
tests := []test{
- // running these tests more than once in CI will sometimes timout, so we
+ // running these tests more than once in CI will sometimes timeout, so we
// have to run them each once per square size. However, we can run these
// more locally by increasing the iterations.
{"many small single share single blob transactions", 1000, 1, 400, 1},
2 changes: 1 addition & 1 deletion docs/architecture/adr-006-non-interactive-defaults.md
@@ -289,7 +289,7 @@ func (app *App) PrepareProposal(req abci.RequestPrepareProposal) abci.ResponsePr
The first major change is that we are making use of an intermediate data structure. It contains fields that are progressively and optionally used during the malleation process. This makes it easier to keep track of malleated transactions and their messages, prune transactions in the case that we go over the max square size, cache the decoded transactions avoiding excessive deserialization, and add metadata to malleated transactions after we malleate them. All while preserving the original ordering (from the prioritized mempool) of the transactions.

```go
- // parsedTx is an interanl struct that keeps track of potentially valid txs and
+ // parsedTx is an internal struct that keeps track of potentially valid txs and
// their wire messages if they have any.
type parsedTx struct {
// the original raw bytes of the tx
	// ...
}
```
6 changes: 3 additions & 3 deletions docs/architecture/adr-012-sequence-length-encoding.md
@@ -71,7 +71,7 @@ Cons

## Option E: Extend protobuf and introduce a fixed16 type

- Big endian uint32 seems equivalant to protobuf fixed32 but there is no fixed16. This option adds a fixed16 type to protobuf so that we can encode the sequence length as a fixed32 and the reserved bytes as a fixed16.
+ Big endian uint32 seems equivalent to protobuf fixed32 but there is no fixed16. This option adds a fixed16 type to protobuf so that we can encode the sequence length as a fixed32 and the reserved bytes as a fixed16.

## Table

@@ -81,7 +81,7 @@ Big endian uint32 seems equivalant to protobuf fixed32.
| Option B | 4 byte padded varint | 2 byte padded varint |
| Option C | 4 byte big endian uint32 | 2 byte padded varint |
| Option D | 4 byte big endian uint32 | 4 byte big endian uint32 |
- | Option E | 4 byte big endian uint32 (equivalant to protobuf fixed32) | 2 byte protobuf fixed16 (doesn't exist) |
+ | Option E | 4 byte big endian uint32 (equivalent to protobuf fixed32) | 2 byte protobuf fixed16 (doesn't exist) |
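To make the comparison concrete, here is a minimal Go sketch (illustrative, not part of this PR) contrasting the fixed-width big endian encoding of Options C-E with the varint encoding of Options A-B:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	seqLen := uint32(512) // example sequence length

	// Options C-E: fixed-width 4 byte big endian encoding.
	fixed := make([]byte, 4)
	binary.BigEndian.PutUint32(fixed, seqLen)

	// Options A-B: variable-width varint encoding (padded to a fixed
	// width on write under those options).
	varint := binary.AppendUvarint(nil, uint64(seqLen))

	fmt.Printf("big endian: %x (%d bytes)\n", fixed, len(fixed))   // 00000200 (4 bytes)
	fmt.Printf("varint:     %x (%d bytes)\n", varint, len(varint)) // 8004 (2 bytes)
}
```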

## Decision

@@ -96,7 +96,7 @@ Option D
### Neutral

- All options retain the need for other language implementations to parse varints because the length delimiter that is prefixed to units in a compact share (e.g. a transaction) is still a varint.
- - This document assumes that an encoded big endian uint32 is equivalant to a protobuf fixed32
+ - This document assumes that an encoded big endian uint32 is equivalent to a protobuf fixed32

## References

4 changes: 2 additions & 2 deletions docs/architecture/adr-014-versioned-namespaces.md
@@ -51,7 +51,7 @@ An approach that addresses these issues is to prefix the namespace ID with versi
| Namespace Version | 1 | the version of the namespace ID |
| Namespace ID | 8 if Namespace Version=0, 32 if Namespace Version=1 | namespace ID of the share |

- For example, consider the scenario where at mainnet launch blobs are layed out according to the existing non-interactive default rules. In this scenario, blobs always start at an index aligned with the `BlobMinSquareSize`. The only supported namespace ID is `0`. At some point in the future, if we introduce new non-interactive default rules (e.g. [celestia-app#1161](https://github.com/celestiaorg/celestia-app/pull/1161)), we may also expand the range of available namespaces to include namespaces that start with a leading `0` or `1` byte. Users may opt in to using the new non-interactive default rules by submitting PFB transactions with a namespace ID version of `1`.
+ For example, consider the scenario where at mainnet launch blobs are laid out according to the existing non-interactive default rules. In this scenario, blobs always start at an index aligned with the `BlobMinSquareSize`. The only supported namespace ID is `0`. At some point in the future, if we introduce new non-interactive default rules (e.g. [celestia-app#1161](https://github.com/celestiaorg/celestia-app/pull/1161)), we may also expand the range of available namespaces to include namespaces that start with a leading `0` or `1` byte. Users may opt in to using the new non-interactive default rules by submitting PFB transactions with a namespace ID version of `1`.

- When the namespace starts with `0`, all blobs in the namespace conform to the previous set of non-interactive default rules.
- When a namespace starts with `1`, all blobs in the namespace conform to the new set of non-interactive default rules.
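As a rough sketch of the layout in the table above (identifiers are assumptions for illustration, not the celestia-app API), a versioned namespace is simply the version byte concatenated with the namespace ID:

```go
package main

import (
	"bytes"
	"fmt"
)

// Sizes follow the table above; names are illustrative only.
const (
	namespaceVersionSize = 1
	namespaceIDSizeV0    = 8 // 32 bytes if Namespace Version = 1
)

// versionedNamespace prefixes the namespace ID with its version byte,
// the way a PFB would encode it.
func versionedNamespace(version byte, id []byte) []byte {
	return append([]byte{version}, id...)
}

func main() {
	id := bytes.Repeat([]byte{0x01}, namespaceIDSizeV0)
	ns := versionedNamespace(0, id)
	fmt.Printf("versioned namespace (%d bytes): %x\n", len(ns), ns)
}
```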
@@ -159,7 +159,7 @@ When a user creates a PFB, concatenate the namespace version with the namespace
1. Option 1: when there are changes to the universal share prefix
2. Option 2: when there are changes to any part of the remaining data in a share
3. When do we expect to increment the namespace version?
- 1. During a backwards incompatable non-interactive default rule change
+ 1. During a backwards incompatible non-interactive default rule change
2. If we change the format of a padding share (e.g. a namespace padding share) so that, instead of `0` bytes, it pads with something else. We may need to preserve backwards compatibility for padding shares that use old namespaces. Note this scenario likely implies a namespace version and share version increase.
3. Change the format of PFB tx serialization. This scenario likely implies duplicating the PFB txs in a data square, one with the old namespace version and one with the new namespace version.
4. Inspired by [type-length-value](https://en.wikipedia.org/wiki/Type%E2%80%93length%E2%80%93value), should we consider prefixing optional fields (sequence length and reserved bytes) with a type and a length? This would enable us to modify those fields without introducing new share versions.
6 changes: 3 additions & 3 deletions docs/architecture/adr-015-namespace-id-size.md
@@ -46,7 +46,7 @@ Users will specify a version (1 byte) and a ID (28 bytes) in their PFB. Addition
## Desirable criteria

1. A user should be able to randomly generate a namespace that hasn't been used before[^1]
- 2. There should exist a large enough namespace ID space for all rollups that may exist in the forseeable future (e.g. 100 years)
+ 2. There should exist a large enough namespace ID space for all rollups that may exist in the foreseeable future (e.g. 100 years)
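Criterion 1 amounts to drawing namespace IDs from a space large enough that random generation is collision-safe in practice. A hedged Go sketch (assuming the 28 byte ID size proposed above):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

func main() {
	// Draw a fresh 28 byte namespace ID uniformly at random.
	id := make([]byte, 28)
	if _, err := rand.Read(id); err != nil {
		panic(err)
	}
	fmt.Printf("random namespace ID: %x\n", id)
}
```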

### Criteria 1

@@ -70,7 +70,7 @@ Namespace ID size (bytes) | 1 billion (10^9) | 1 trillion (10^12) | 1 quadrillio

> As a rule of thumb, a hash function with range of size N can hash on the order of sqrt(N) values before running into collisions.[^4]
- Namespace ID size (bytes) | Hash funciton range | Can hash this many items before running into collision
+ Namespace ID size (bytes) | Hash function range | Can hash this many items before running into collision
--------------------------|---------------------|-------------------------------------------------------
8 | 2^64 | 2^32 = ~4 billion items
16 | 2^128 | 2^64 = ~1 quintillion items
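The rule of thumb above is the birthday bound: a range of size N yields a likely collision after roughly sqrt(N) draws. A small Go sketch (illustrative only) reproduces the table's numbers:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	for _, idBytes := range []int{8, 16, 32} {
		bits := float64(idBytes * 8)
		// sqrt(2^bits) = 2^(bits/2) random draws before a likely collision.
		draws := math.Pow(2, bits/2)
		fmt.Printf("%2d-byte ID: ~2^%.0f (about %.2g) items before a likely collision\n",
			idBytes, bits/2, draws)
	}
}
```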
@@ -141,7 +141,7 @@ Another tradeoff to consider is the size of the namespace in the share. Since a

### Maximum blob size

- If the namespace size is increased, the maximum possible blob will decrease. Given the maximum possible blob is bounded by the number of bytes available for blob space in a data square, if a 32 byte namespace size is adopted, the maxmimum blob size will decrease by an upper bound of `appconsts.MaxSquareSize * appconsts.MaxSquareSize * (32-8)`. Note this is an upper bound because not all shares in the data square can be used for blob data (i.e. at least one share must contain the associated PayForBlob transaction).
+ If the namespace size is increased, the maximum possible blob will decrease. Given the maximum possible blob is bounded by the number of bytes available for blob space in a data square, if a 32 byte namespace size is adopted, the maximum blob size will decrease by an upper bound of `appconsts.MaxSquareSize * appconsts.MaxSquareSize * (32-8)`. Note this is an upper bound because not all shares in the data square can be used for blob data (i.e. at least one share must contain the associated PayForBlob transaction).
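To put a number on that bound (a sketch only; 128 is an assumed value for `appconsts.MaxSquareSize`, not taken from this PR):

```go
package main

import "fmt"

func main() {
	const maxSquareSize = 128         // assumed value for appconsts.MaxSquareSize
	const extraBytesPerShare = 32 - 8 // namespace growth per share
	bound := maxSquareSize * maxSquareSize * extraBytesPerShare
	fmt.Printf("upper bound on blob size decrease: %d bytes (%d KiB)\n", bound, bound/1024)
}
```

With these assumptions the upper bound works out to 393,216 bytes (384 KiB).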

### SHA256 performance

2 changes: 1 addition & 1 deletion docs/architecture/adr-018-network-upgrades.md
@@ -47,7 +47,7 @@ Given this, a node can at any time spin up a v2 binary which will immediately be
The height of the upgrades will initially be hard coded into the binary. This will consist of a mapping from chain ID to app version to a range of heights that will be loaded by the application into working memory whenever the node begins and supplied directly to the `upgrades` module which will be responsible for scheduling. The chainID is required as we expect the same binary to be used across testnets and mainnet. There are a few considerations that shape how this system will work:

- Upgrading needs to support state migrations. These must happen to all nodes at the same moment between heights. Ideally all migrations that affect state would correspond at the height of the new app version i.e. after `Commit` and before processing of the transactions at that height. `BeginBlock` seems like an ideal area to perform these upgrades however these might affect the way that `PrepareProposal` and `ProcessProposal` is conducted thus they must be performed even prior to these ABCI calls. A simpler implementation would have been for the proposer to immediately propose a block with the next version i.e. v2. However that would require the proposer to first migrate state (taking an unknown length of time) and for the validators receiving that proposal to first migrate before validating and given that the upgrade is not certain, there would need to be a mechanism to migrate back to v1 (NOTE: this remains the case if we wish to support downgrading which is discussed later). To overcome these requirements, the proposer must signal in the prior height the intention to upgrade to a new version. This is done with a new message type, `MsgVersionChange`, which must be put as the first transaction in the block. Validators read this and if they are in agreement to supporting the version change they vote on the block accordingly. If the block reaches consensus then all validators will update the app version at `EndBlock`. CometBFT will then propose the next block using that version. Nodes that have not upgraded and don't support the binary will error and exit. Given that the previous block was approved by more than 2/3 of the network we have a strong guarantee that this block will be accepted by the network. However, it's worth noting that given a security model that must withstand 1/3 byzantine nodes, even a single byzantine node that voted for the upgrade yet doesn't vote for the following block can stall the network until > 2/3 nodes upgrade and vote on the following block.
- - Given uncertainty in scheduling, the system must be able to handle changes to the upgrade height that most commonly would come in the form of delays. Embedding the upgrade schedule in the binary is convenient for node operators and avoids the possibility for user errors. However, binaries are static. If the community wished to push back the upgrade by two weeks there is the possibility that some nodes would not rerun the new binary thus we'd get a split between nodes running the old schedule and nodes running the new schedule. To overcome this, proposers will only propose a version change in the first round of each height, thus allowing transactions to still be committed even under circumstances where there is no consensus on upgrading. Secondly, we define a range in which nodes will attempt to upgrade the app version and failing this will continue to run the current version. Lastly, the binary will have the ability to manually specify the app version height mapping and overide the built-in values either through a flag or in the `app.toml` config. This is expected to be used in testing and in emergency situations only. Another example to keep in mind is if a quorum outright rejects an upgrade. If some of the validators are for the change they should have some way to continue participating in the network. Therefore we employ a range that nodes will attempt to upgrade and afterwards will continue on normally with the new binary however running the older version.
+ - Given uncertainty in scheduling, the system must be able to handle changes to the upgrade height that most commonly would come in the form of delays. Embedding the upgrade schedule in the binary is convenient for node operators and avoids the possibility for user errors. However, binaries are static. If the community wished to push back the upgrade by two weeks there is the possibility that some nodes would not rerun the new binary thus we'd get a split between nodes running the old schedule and nodes running the new schedule. To overcome this, proposers will only propose a version change in the first round of each height, thus allowing transactions to still be committed even under circumstances where there is no consensus on upgrading. Secondly, we define a range in which nodes will attempt to upgrade the app version and failing this will continue to run the current version. Lastly, the binary will have the ability to manually specify the app version height mapping and override the built-in values either through a flag or in the `app.toml` config. This is expected to be used in testing and in emergency situations only. Another example to keep in mind is if a quorum outright rejects an upgrade. If some of the validators are for the change they should have some way to continue participating in the network. Therefore we employ a range that nodes will attempt to upgrade and afterwards will continue on normally with the new binary however running the older version.
- The system needs to be tolerant of unexpected faults in the upgrade process. This can be:
- The community/contributors realise there is a bug in the new version after the binary has been released. Node operators will need to downgrade back to the previous version and restart their node.
- There is a halting bug in the migration or in processing of the first transactions. This most likely would be in the form of an apphash mismatch. This becomes more problematic with delayed execution as the block (with v2 transactions) has already been committed. Immediate execution has the advantage of the apphash mismatch being realised before the data is committed. It's still however feasible to over come this but it involves nodes rolling back the previous state and re-exectuing the transactions using the v1 state machine (which will skip over the v2 transactions). This means node operators should be able to manually override the app version that the proposer will propose with. Lastly, if state migrations occurred between v2 and v1, a reverse migration would need to be performed which would make things especially difficult. If we are unable to fallback to the previous version and continue then the other option is to remain halted until the bug is patched and the network can update and continue
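A minimal sketch of how the hard-coded schedule described above might be shaped: a mapping from chain ID to app version to a height range (all identifiers here are assumptions for illustration, not the actual celestia-app types):

```go
package main

import "fmt"

// heightRange is the window in which nodes will attempt the upgrade.
type heightRange struct {
	Start int64 // first height at which the upgrade may occur
	End   int64 // after this height, nodes continue on the current version
}

// upgradeSchedule: chain ID -> app version -> height range.
var upgradeSchedule = map[string]map[uint64]heightRange{
	"example-chain-id": {
		2: {Start: 100_000, End: 110_000},
	},
}

func main() {
	r := upgradeSchedule["example-chain-id"][2]
	fmt.Printf("attempt v2 upgrade between heights %d and %d\n", r.Start, r.End)
}
```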
2 changes: 1 addition & 1 deletion docs/architecture/adr-019-strict-inflation-schedule.md
@@ -23,7 +23,7 @@ In contrast to a flexible inflation rate, Celestia intends on having a predictab
| Target inflation | 1.50 |

When the target inflation is reached, it remains at that rate.
- The table below depicts the inflation rate for the forseeable future:
+ The table below depicts the inflation rate for the foreseeable future:

| Year | Inflation (%) |
|------|-------------------|
@@ -13,7 +13,7 @@ Implemented in <https://github.com/celestiaorg/celestia-app/pull/1690>

The current protocol around the construction of an original data square (ODS) is based around a set of constraints that are enforced during consensus through validation (See `ProcessProposal`). Block proposers are at liberty to choose not only what transactions are included and in what order but can effectively decide on the amount of padding (i.e. where each blob is located in the square) and the size of the square. This degree of control leaks needless complexity to users with little upside and allows for adverse behaviour.

- Earlier designs were incorporated around the notion of interaction between the block proposer and the transaction submitter. A user that wanted to submit a PFB would go to a potential block proposer, provide them with the transaction, the proposer would then reserve a position in the square for the transaction and finally the transaction submitter would sign the transaction with the provided share index. However, Celestia may have 100 potential block proposers which are often hidden from the network. Furthermore, tranasctions often reach a block proposer through a gossip network, severing the ability for the block proposer to directly communicate with the transaction submitter. Lastly, new transactions with greater fees might arrive causing the block proposer to want to shuffle the transactions around in the square. The response to these problems was to come up with "non-interactive defaults" (first mentioned in [ADR006](./adr-006-non-interactive-defaults.md)).
+ Earlier designs were incorporated around the notion of interaction between the block proposer and the transaction submitter. A user that wanted to submit a PFB would go to a potential block proposer, provide them with the transaction, the proposer would then reserve a position in the square for the transaction and finally the transaction submitter would sign the transaction with the provided share index. However, Celestia may have 100 potential block proposers which are often hidden from the network. Furthermore, transactions often reach a block proposer through a gossip network, severing the ability for the block proposer to directly communicate with the transaction submitter. Lastly, new transactions with greater fees might arrive causing the block proposer to want to shuffle the transactions around in the square. The response to these problems was to come up with "non-interactive defaults" (first mentioned in [ADR006](./adr-006-non-interactive-defaults.md)).

## Decision

@@ -25,7 +25,7 @@ Square construction is thus to be reduced to the simple deterministic function:

```go
func ConstructSquare(txs []Tx) []Share
```

- and it's couterpart
+ and it's counterpart

```go
func DeconstructSquare(shares []Share) []Tx
```
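Because construction is a pure function of the ordered transactions, it round-trips with deconstruction. A toy Go sketch of that property (stand-in types and a one-tx-per-share layout that ignores the real padding rules):

```go
package main

import (
	"fmt"
	"reflect"
)

// Stand-ins for the real celestia-app types.
type (
	Tx    = []byte
	Share = []byte
)

// Toy deterministic construction: one share per tx.
func ConstructSquare(txs []Tx) []Share {
	shares := make([]Share, len(txs))
	for i, tx := range txs {
		shares[i] = append(Share{}, tx...)
	}
	return shares
}

func DeconstructSquare(shares []Share) []Tx {
	txs := make([]Tx, len(shares))
	for i, s := range shares {
		txs[i] = append(Tx{}, s...)
	}
	return txs
}

func main() {
	txs := []Tx{[]byte("tx1"), []byte("tx2")}
	fmt.Println("round trips:", reflect.DeepEqual(DeconstructSquare(ConstructSquare(txs)), txs))
}
```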