Commit
per comments
wacban authored Nov 1, 2023
1 parent 99a3845 commit 26ec5a5
Showing 1 changed file with 13 additions and 8 deletions.
21 changes: 13 additions & 8 deletions neps/nep-0508.md
@@ -41,7 +41,6 @@ Currently, NEAR protocol has four shards. With more partners onboarding, we star
* ~~Resharding should not require additional hardware from nodes.~~
  * This needs to be assessed during testing.
* Resharding should be fault tolerant
* Chain must not stall in case of resharding failure. TODO - this seems impossible under current assumptions because the shard layout for an epoch is committed to the chain before resharding is finished
* A validator should be able to recover in case they go offline during resharding.
* For now, our aim is at least allowing a validator to join back after resharding is finished.
* No transaction or receipt should be lost during resharding.
@@ -71,15 +70,15 @@ A new protocol version will be introduced specifying the new shard layout.

TBD. e.g. additional/updated data a node has to maintain

* For the duration of the resharding the node will need to maintain a snapshot of the flat state and related columns.
* For the duration of the epoch before the new shard layout takes effect, the node will need to maintain the state and flat state of shards in the old and new layout at the same time.
* For the duration of the resharding, the node will need to maintain a snapshot of the flat state and related columns. As the main database and the snapshot diverge, this will cause some storage overhead.
* For the duration of the epoch before the new shard layout takes effect, the node will need to maintain the state and flat state of shards in the old and new layout at the same time. The State and FlatState columns will grow up to 2x. The processing overhead should be minimal as the chunks will still be executed only on the parent shards. There will be increased load on the database while applying changes to both the parent and the children shards. A rough sketch of the resulting footprint is given below.
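
As a back-of-envelope illustration of the storage impact described above, here is a minimal sketch; the doubling factor and the treatment of the snapshot are assumptions for illustration, not measured figures.

```rust
/// Illustrative only: rough temporary footprint for the shard being split
/// during the transition epoch. The node keeps the parent's State/FlatState
/// entries, the children's copies of roughly the same total size, plus the
/// flat state snapshot taken for the duration of the resharding.
fn estimated_transition_footprint_gb(parent_state_gb: f64, snapshot_gb: f64) -> f64 {
    // parent copy + children copies (~same total size) + snapshot divergence
    2.0 * parent_state_gb + snapshot_gb
}
```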

### Resharding flow

TBD. how resharding happens at the high level

* The new shard layout will be agreed on offline by the protocol team and hardcoded in the neard reference implementation.
* In epoch T the protocol version upgrade date will pass and nodes will vote to switch to the new protocol version. The new protocol version will contain the new shard layout.
* In epoch T, past the protocol version upgrade date, nodes will vote to switch to the new protocol version. The new protocol version will contain the new shard layout.
* In epoch T, in the last block of the epoch, the EpochConfig for epoch T+2 will be set. The EpochConfig for epoch T+2 will have the new shard layout.
* In epoch T + 1, all nodes will perform the state split. The child shards will be kept up to date with the blockchain up until the epoch end.
* In epoch T + 2, the chain will switch to the new shard layout. A minimal sketch of this timeline is given below.
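
The following is a minimal sketch of the epoch timeline above, not the actual nearcore implementation; the function name and the phase descriptions are illustrative only.

```rust
/// Illustrative only: what a node is doing in a given epoch, relative to the
/// epoch T in which the new protocol version (carrying the new shard layout)
/// is voted in.
fn resharding_phase(current_epoch: u64, upgrade_epoch_t: u64) -> &'static str {
    match current_epoch.checked_sub(upgrade_epoch_t) {
        None => "before T: old protocol version, old shard layout",
        Some(0) => "T: vote on the new protocol version; EpochConfig for T+2 gets the new shard layout",
        Some(1) => "T+1: perform the state split and keep the children up to date with the chain",
        Some(_) => "T+2 and later: the chain runs on the new shard layout",
    }
}
```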
@@ -88,6 +87,12 @@ TBD. how resharding happens at the high level

The implementation heavily re-uses the implementation from [NEP-40](https://github.com/near/NEPs/blob/master/specs/Proposals/0040-split-states.md). Below are listed only the major differences and additions.

### Code pointers to the proposed implementation

* [new shard layout](https://github.com/near/nearcore/blob/bc0eb3c6607f4ae865526a25b80706ec4e081fdc/core/primitives/src/shard_layout.rs#L161)
* [the main logic for splitting states](https://github.com/near/nearcore/blob/bc0eb3c6607f4ae865526a25b80706ec4e081fdc/chain/chain/src/resharding.rs#L248)
* [the main logic for applying chunks to split states](https://github.com/near/nearcore/blob/bc0eb3c6607f4ae865526a25b80706ec4e081fdc/chain/chain/src/chain.rs#L5180)

### Flat Storage

The old implementation of resharding relied on iterating over the full state of the parent shard in order to build the state for the children shards. This implementation was suitable at the time, but the state has since grown considerably and it is now too slow to fit within a single epoch. The new implementation relies on the flat storage in order to build the children shards more quickly. Based on benchmarks, splitting one shard by using flat storage can take up to 15 minutes. A sketch of the split is shown below.
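
The following is a minimal sketch of a flat-storage based split; the types and helper (`FlatStateEntry`, `account_id_from_key`) are illustrative and are not the actual nearcore API (see the code pointers above for the real implementation).

```rust
/// Illustrative only: iterate the parent shard's flat state once and route
/// every entry to the child shard that owns the account.
struct FlatStateEntry {
    key: Vec<u8>,
    value: Vec<u8>,
}

fn split_parent_shard(
    parent_entries: impl Iterator<Item = FlatStateEntry>,
    boundary_account: &str,
    account_id_from_key: impl Fn(&[u8]) -> String,
) -> (Vec<FlatStateEntry>, Vec<FlatStateEntry>) {
    let (mut left, mut right) = (Vec::new(), Vec::new());
    for entry in parent_entries {
        // Accounts strictly below the boundary go to the left child,
        // the rest go to the right child.
        if account_id_from_key(&entry.key).as_str() < boundary_account {
            left.push(entry);
        } else {
            right.push(entry);
        }
    }
    (left, right)
}
```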
@@ -102,19 +107,19 @@ When resharding, extra care should be taken when handling receipts in order to e

### New shard layout

A new shard layout will be determined and will be scheduled and executed in the production networks. The new shard layout will maintain the same boundaries for shards 0, 1 and 2. The heaviest shard today - Shard 3 - will be split by introducing a new boundary account. The new boundary account will be determined by analysing the storage and gas usage within the shard and selecting a point that will divide the shard roughly in half in accordance to the mentioned metrics. Other metrics can also be used.
The first release of resharding v2 will contain a new shard layout where one of the existing shards will be split into two smaller shards. Furthermore, additional reshardings can be scheduled in subsequent neard releases without additional NEPs, unless the need for one arises. A new shard layout will be determined and will be scheduled and executed in the production networks. Resharding will typically happen by splitting one of the existing shards into two smaller shards. The new shard layout will be created by adding a new boundary account. The new boundary account will be determined by analysing the storage and gas usage within the shard and selecting a point that divides the shard roughly in half according to those metrics. Other metrics can also be used.
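
As a rough illustration of how such a boundary account could be chosen, here is a minimal sketch; the per-account weighting (storage bytes or gas used) and the selection rule are assumptions for illustration, not the actual procedure.

```rust
/// Illustrative only: given the accounts of the shard in lexicographic order
/// together with a weight (e.g. storage bytes or gas used), pick the first
/// account at which the running total crosses half of the shard's total
/// weight. That account becomes the new boundary account splitting the shard
/// roughly in half.
fn pick_boundary_account(accounts_sorted: &[(String, u64)]) -> Option<&str> {
    let total: u64 = accounts_sorted.iter().map(|(_, w)| w).sum();
    let mut running = 0u64;
    for (account, weight) in accounts_sorted {
        running += weight;
        if running * 2 >= total {
            return Some(account.as_str());
        }
    }
    None
}
```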

### Fixed shards

Fixed shards is a feature of the protocol that allows assigning specific accounts, and all of their recursive sub-accounts, to a predetermined shard. This feature was only ever used for testing; it was never used in production and there is no need for it in production. Unfortunately, it breaks the contiguity of shards: a sub-account of a fixed shard account can fall in the middle of the account range that belongs to a different shard. This property of fixed shards makes efficient resharding particularly hard to reason about and implement. A small illustration is given below.

This was implemented ahead of this NEP.
This was implemented ahead of this NEP and the fixed shards feature was **removed**.
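
To make the contiguity problem concrete, here is a minimal illustration with hypothetical account names and a single boundary account; it is not how account-to-shard assignment is actually implemented.

```rust
/// Hypothetical illustration: with boundary account "kkk", accounts split
/// into two contiguous ranges. Pinning "fixed" and all of its sub-accounts to
/// shard 0 punches a hole into shard 1's range, because e.g. "zzz.fixed"
/// sorts above "kkk" but is forced into shard 0.
fn shard_for_account(account: &str, boundary: &str, fixed_root: Option<&str>) -> u32 {
    if let Some(root) = fixed_root {
        let suffix = format!(".{}", root);
        if account == root || account.ends_with(&suffix) {
            return 0; // fixed shard assignment overrides the account ranges
        }
    }
    if account < boundary { 0 } else { 1 }
}
```

For example, `shard_for_account("zzz.fixed", "kkk", Some("fixed"))` returns 0 even though every other account above "kkk" maps to shard 1, which is exactly the non-contiguity that complicates resharding.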

### Transaction pool

The transaction pool is sharded e.i. it groups transactions by the shard where each should be converted to a receipt. The transaction pool was previously sharded by the ShardId. Unfortunately ShardId is insufficient to correctly identify a shard across a resharding event as ShardIds change domain. The transaction pool was migrated to group transactions by ShardUId instead and a transaction pool resharding was implemented to reassign transaction from parent shard to children shards right before the new shard layout takes effect.
The transaction pool is sharded, i.e. it groups transactions by the shard where each should be converted to a receipt. The transaction pool was previously sharded by ShardId. Unfortunately, ShardId is insufficient to correctly identify a shard across a resharding event, because the same ShardId can refer to different shards in different shard layouts. The transaction pool was migrated to group transactions by ShardUId instead, and a transaction pool resharding was implemented to reassign transactions from the parent shard to the children shards right before the new shard layout takes effect. The ShardUId contains the version of the shard layout, which allows differentiating between shards in different shard layouts. A minimal sketch is given below.

This was implemented ahead of this NEP.
This was implemented ahead of this NEP and the transaction pool is now fully **migrated** to ShardUId.
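
Here is a minimal sketch of a pool keyed by a ShardUId-like identifier and of the pool resharding step; the type definitions and the `route` closure are illustrative and do not mirror nearcore's actual types.

```rust
use std::collections::HashMap;

/// Illustrative only: the version field disambiguates shards across a
/// resharding, which a bare ShardId cannot do.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct ShardUId {
    version: u32,  // shard layout version
    shard_id: u32, // shard index within that layout
}

struct Transaction; // placeholder

#[derive(Default)]
struct ShardedTxPool {
    pools: HashMap<ShardUId, Vec<Transaction>>,
}

impl ShardedTxPool {
    /// Right before the new shard layout takes effect, move the parent's
    /// transactions into the child pools of the next layout version.
    fn reshard(
        &mut self,
        parent: ShardUId,
        children: &[ShardUId],
        route: impl Fn(&Transaction) -> usize,
    ) {
        let txs = self.pools.remove(&parent).unwrap_or_default();
        for tx in txs {
            let child = children[route(&tx)];
            self.pools.entry(child).or_default().push(tx);
        }
    }
}
```

Keying by (version, shard_id) means transactions grouped for the parent shard under layout version N cannot be confused with the children under version N + 1, which is the property a bare ShardId lacks.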

## Security Implications

