From 9d4bc586c1278e6d53a22a2fee7609cdb2321975 Mon Sep 17 00:00:00 2001 From: Lyudmil Ivanov <55487633+flmel@users.noreply.github.com> Date: Tue, 1 Oct 2024 15:56:18 +0300 Subject: [PATCH 1/4] rename nep-519 file --- neps/{nep-519-yield-execution.md => nep-0519.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename neps/{nep-519-yield-execution.md => nep-0519.md} (100%) diff --git a/neps/nep-519-yield-execution.md b/neps/nep-0519.md similarity index 100% rename from neps/nep-519-yield-execution.md rename to neps/nep-0519.md From 60fc0a0a57527145fe794d69b91461b52ea324bb Mon Sep 17 00:00:00 2001 From: Lyudmil Ivanov <55487633+flmel@users.noreply.github.com> Date: Tue, 1 Oct 2024 16:03:33 +0300 Subject: [PATCH 2/4] fix: lint consistency --- neps/nep-0393.md | 4 +--- neps/nep-0399.md | 4 ++-- neps/nep-0418.md | 4 ++-- neps/nep-0491.md | 9 ++++----- neps/nep-0514.md | 1 - 5 files changed, 9 insertions(+), 13 deletions(-) diff --git a/neps/nep-0393.md b/neps/nep-0393.md index c414405ba..c045a18d9 100644 --- a/neps/nep-0393.md +++ b/neps/nep-0393.md @@ -10,8 +10,6 @@ Created: 12-Sep-2022 Requires: --- -# NEP: Soulbound Token - ## Summary Soulbound Token (SBT) is a form of a non-fungible token which represents an aspect of an account: _soul_. [Transferability](#transferability) is limited only to a case of recoverability or a _soul transfer_. The latter must coordinate with a registry to transfer all SBTs from one account to another, and _banning_ the source account. @@ -300,7 +298,7 @@ pub struct ClassMetadata { pub symbol: Option, /// An URL to an Icon. To protect fellow developers from unintentionally triggering any /// SSRF vulnerabilities with URL parsers, we don't allow to set an image bytes here. - /// If it doesn't start with a scheme (eg: https://) then `IssuerMetadata::base_uri` + /// If it doesn't start with a scheme (eg: https://) then `IssuerMetadata::base_uri` /// should be prepended. pub icon: Option, /// JSON or an URL to a JSON file with more info. 
If it doesn't start with a scheme diff --git a/neps/nep-0399.md b/neps/nep-0399.md index 77256b0e8..779c9b472 100644 --- a/neps/nep-0399.md +++ b/neps/nep-0399.md @@ -1,5 +1,5 @@ --- -NEP: 0399 +NEP: 399 Title: Flat Storage Author: Aleksandr Logunov Min Zhang DiscussionsTo: https://github.com/nearprotocol/neps/pull/0399 @@ -124,7 +124,7 @@ will never be finalized. As a result, if we use the last final block as the flat FlatStorage needs to process is a descendant of the flat head. To support key value lookups for other blocks that are not the flat head, FlatStorage will -store key value changes(deltas) per block for these blocks. +store key value changes(deltas) per block for these blocks. We call these deltas FlatStorageDelta (FSD). Let’s say the flat storage head is at block h, and we are applying transactions based on block h’. Since h is the last final block, h is an ancestor of h'. To access the state at block h', we need FSDs of all blocks between h and h'. diff --git a/neps/nep-0418.md b/neps/nep-0418.md index 9c6f9ed96..f1c703b90 100644 --- a/neps/nep-0418.md +++ b/neps/nep-0418.md @@ -1,5 +1,5 @@ --- -NEP: 0418 +NEP: 418 Title: Remove attached_deposit view panic Author: Austin Abell DiscussionsTo: https://github.com/nearprotocol/neps/pull/418 @@ -25,7 +25,7 @@ Initial discussion: https://near.zulipchat.com/#narrow/stream/295306-pagoda.2Fco ## Rationale and alternatives -The rationale for assigning `0u128` to the pointer (`u64`) passed into `attached_deposit` is that it's the least breaking change. +The rationale for assigning `0u128` to the pointer (`u64`) passed into `attached_deposit` is that it's the least breaking change. The alternative of returning some special value, say `u128::MAX`, is that it would cause some unintended side effects for view calls using the `attached_deposit`. 
For example, if `attached_deposit` is called within a function, older versions of a contract that do not check the special value will return a result assuming that the attached deposit is `u128::MAX`. This is not a large concern since it would just be a view call, but it might be a bad UX in some edge cases, where returning 0 wouldn't be an issue. diff --git a/neps/nep-0491.md b/neps/nep-0491.md index d5a9be90d..b84b18726 100644 --- a/neps/nep-0491.md +++ b/neps/nep-0491.md @@ -10,7 +10,6 @@ Created: 2023-07-24 LastUpdated: 2023-07-26 --- - ## Summary Non-refundable storage allows to create accounts with arbitrary state for users, @@ -122,7 +121,7 @@ On the protocol side, we need to add new action: ```rust enum Action { - CreateAccount(CreateAccountAction), + CreateAccount(CreateAccountAction), DeployContract(DeployContractAction), FunctionCall(FunctionCallAction), Transfer(TransferAction), @@ -194,7 +193,7 @@ writer.serialize(nonrefundable)?; Note that we are not migrating old accounts. Accounts created as version 1 will remain at version 1. -A proof of concept implementation for nearcore is available in this PR: +A proof of concept implementation for nearcore is available in this PR: https://github.com/near/nearcore/pull/9346 @@ -279,8 +278,8 @@ problem. As suggested by [@mfornet](https://github.com/near/NEPs/pull/491#discussion_r1349496234) another alternative is using a proxy account approach where the business creates -an account with a deployed contract that has Regular (user has full access key) -and Restricted mode (user doesn't have full access key and cannot delete +an account with a deployed contract that has Regular (user has full access key) +and Restricted mode (user doesn't have full access key and cannot delete account). 
In restricted mode, the user has a `FunctionCallKey` which allows the user to diff --git a/neps/nep-0514.md b/neps/nep-0514.md index 10a3bae80..8711e6b65 100644 --- a/neps/nep-0514.md +++ b/neps/nep-0514.md @@ -10,7 +10,6 @@ Created: 2023-10-25 LastUpdated: 2023-10-25 --- - ## Summary This proposal aims to adjust the number of block producer seats on `testnet` in From 19ac45d44ce7195a4d0eb1185de66f3883817db7 Mon Sep 17 00:00:00 2001 From: Lyudmil Ivanov <55487633+flmel@users.noreply.github.com> Date: Tue, 1 Oct 2024 16:43:44 +0300 Subject: [PATCH 3/4] chore: update Readme.md --- README.md | 58 ++++++++++++++++++++++++++++++++----------------------- 1 file changed, 34 insertions(+), 24 deletions(-) diff --git a/README.md b/README.md index 94af24912..dfb1ff58b 100644 --- a/README.md +++ b/README.md @@ -10,30 +10,40 @@ Changes to the protocol specification and standards are called NEAR Enhancement ## NEPs -| NEP # | Title | Author | Status | -| ----------------------------------------------------------------- | ---------------------------------------- | -------------------------------------------- | ---------- | -| [0001](https://github.com/near/NEPs/blob/master/neps/nep-0001.md) | NEP Purpose and Guidelines | @jlogelin | Living | -| [0021](https://github.com/near/NEPs/blob/master/neps/nep-0021.md) | Fungible Token Standard (Deprecated) | @evgenykuzyakov | Deprecated | -| [0141](https://github.com/near/NEPs/blob/master/neps/nep-0141.md) | Fungible Token Standard | @evgenykuzyakov @oysterpack, @robert-zaremba | Final | -| [0145](https://github.com/near/NEPs/blob/master/neps/nep-0145.md) | Storage Management | @evgenykuzyakov | Final | -| [0148](https://github.com/near/NEPs/blob/master/neps/nep-0148.md) | Fungible Token Metadata | @robert-zaremba @evgenykuzyakov @oysterpack | Final | -| [0171](https://github.com/near/NEPs/blob/master/neps/nep-0171.md) | Non Fungible Token Standard | @mikedotexe @evgenykuzyakov @oysterpack | Final | -| 
[0177](https://github.com/near/NEPs/blob/master/neps/nep-0177.md) | Non Fungible Token Metadata | @chadoh @mikedotexe | Final | -| [0178](https://github.com/near/NEPs/blob/master/neps/nep-0178.md) | Non Fungible Token Approval Management | @chadoh @thor314 | Final | -| [0181](https://github.com/near/NEPs/blob/master/neps/nep-0181.md) | Non Fungible Token Enumeration | @chadoh @thor314 | Final | -| [0199](https://github.com/near/NEPs/blob/master/neps/nep-0199.md) | Non Fungible Token Royalties and Payouts | @thor314 @mattlockyer | Final | -| [0245](https://github.com/near/NEPs/blob/master/neps/nep-0245.md) | Multi Token Standard | @zcstarr @riqi @jriemann @marcos.sun | Review | -| [0264](https://github.com/near/NEPs/blob/master/neps/nep-0264.md) | Promise Gas Weights | @austinabell | Final | -| [0297](https://github.com/near/NEPs/blob/master/neps/nep-0297.md) | Events Standard | @telezhnaya | Final | -| [0330](https://github.com/near/NEPs/blob/master/neps/nep-0330.md) | Source Metadata | @BenKurrek | Review | -| [0366](https://github.com/near/NEPs/blob/master/neps/nep-0366.md) | Meta Transactions | @ilblackdragon @e-uleyskiy @fadeevab | Final | -| [0393](https://github.com/near/NEPs/blob/master/neps/nep-0393.md) | Sould Bound Token (SBT) | @robert-zaremba | Final | -| [0399](https://github.com/near/NEPs/blob/master/neps/nep-0399.md) | Flat Storage | @Longarithm @mzhangmzz | Final | -| [0448](https://github.com/near/NEPs/blob/master/neps/nep-0448.md) | Zero-balance Accounts | @bowenwang1996 | Final | -| [0452](https://github.com/near/NEPs/blob/master/neps/nep-0452.md) | Linkdrop Standard | @benkurrek @miyachi | Final | -| [0455](https://github.com/near/NEPs/blob/master/neps/nep-0455.md) | Parameter Compute Costs | @akashin @jakmeier | Final | -| [0491](https://github.com/near/NEPs/blob/master/neps/nep-0491.md) | Non-Refundable Storage Staking | @jakmeier | Review | -| [0514](https://github.com/near/NEPs/blob/master/neps/nep-0514.md) | Fewer Block Producer Seats in 
`testnet` | @nikurt | Final |
+| NEP # | Title | Author | Status |
+| ----------------------------------------------------------------- | ----------------------------------------------------------------- | ------------------------------------------------- | ---------- |
+| [0001](https://github.com/near/NEPs/blob/master/neps/nep-0001.md) | NEP Purpose and Guidelines | @jlogelin | Living |
+| [0021](https://github.com/near/NEPs/blob/master/neps/nep-0021.md) | Fungible Token Standard (Deprecated) | @evgenykuzyakov | Deprecated |
+| [0141](https://github.com/near/NEPs/blob/master/neps/nep-0141.md) | Fungible Token Standard | @evgenykuzyakov @oysterpack, @robert-zaremba | Final |
+| [0145](https://github.com/near/NEPs/blob/master/neps/nep-0145.md) | Storage Management | @evgenykuzyakov | Final |
+| [0148](https://github.com/near/NEPs/blob/master/neps/nep-0148.md) | Fungible Token Metadata | @robert-zaremba @evgenykuzyakov @oysterpack | Final |
+| [0171](https://github.com/near/NEPs/blob/master/neps/nep-0171.md) | Non Fungible Token Standard | @mikedotexe @evgenykuzyakov @oysterpack | Final |
+| [0177](https://github.com/near/NEPs/blob/master/neps/nep-0177.md) | Non Fungible Token Metadata | @chadoh @mikedotexe | Final |
+| [0178](https://github.com/near/NEPs/blob/master/neps/nep-0178.md) | Non Fungible Token Approval Management | @chadoh @thor314 | Final |
+| [0181](https://github.com/near/NEPs/blob/master/neps/nep-0181.md) | Non Fungible Token Enumeration | @chadoh @thor314 | Final |
+| [0199](https://github.com/near/NEPs/blob/master/neps/nep-0199.md) | Non Fungible Token Royalties and Payouts | @thor314 @mattlockyer | Final |
+| [0245](https://github.com/near/NEPs/blob/master/neps/nep-0245.md) | Multi Token Standard | @zcstarr @riqi @jriemann @marcos.sun | Final |
+| [0264](https://github.com/near/NEPs/blob/master/neps/nep-0264.md) | Promise Gas Weights | @austinabell | Final |
+| [0297](https://github.com/near/NEPs/blob/master/neps/nep-0297.md) | Events Standard | @telezhnaya | Final |
+| [0330](https://github.com/near/NEPs/blob/master/neps/nep-0330.md) | Source Metadata | @BenKurrek | Final |
+| [0364](https://github.com/near/NEPs/blob/master/neps/nep-0364.md) | Efficient signature verification and hashing precompile functions | @blasrodri | Final |
+| [0366](https://github.com/near/NEPs/blob/master/neps/nep-0366.md) | Meta Transactions | @ilblackdragon @e-uleyskiy @fadeevab | Final |
+| [0393](https://github.com/near/NEPs/blob/master/neps/nep-0393.md) | Soulbound Token (SBT) | @robert-zaremba | Final |
+| [0399](https://github.com/near/NEPs/blob/master/neps/nep-0399.md) | Flat Storage | @Longarithm @mzhangmzz | Final |
+| [0413](https://github.com/near/NEPs/blob/master/neps/nep-0413.md) | Near Wallet API - support for signMessage method | @gagdiez @gutsyphilip | Final |
+| [0418](https://github.com/near/NEPs/blob/master/neps/nep-0418.md) | Remove attached_deposit view panic | @austinabell | Final |
+| [0448](https://github.com/near/NEPs/blob/master/neps/nep-0448.md) | Zero-balance Accounts | @bowenwang1996 | Final |
+| [0452](https://github.com/near/NEPs/blob/master/neps/nep-0452.md) | Linkdrop Standard | @benkurrek @miyachi | Final |
+| [0455](https://github.com/near/NEPs/blob/master/neps/nep-0455.md) | Parameter Compute Costs | @akashin @jakmeier | Final |
+| [0488](https://github.com/near/NEPs/blob/master/neps/nep-0488.md) | Host Functions for BLS12-381 Curve Operations | @olga24912 | Final |
+| [0491](https://github.com/near/NEPs/blob/master/neps/nep-0491.md) | Non-Refundable Storage Staking | @jakmeier | Final |
+| [0492](https://github.com/near/NEPs/blob/master/neps/nep-0492.md) | Restrict creation of Ethereum Addresses | @bowenwang1996 | Final |
+| [0508](https://github.com/near/NEPs/blob/master/neps/nep-0508.md) | Resharding v2 | @wacban @shreyan-gupta @walnut-the-cat | Final |
+| [0509](https://github.com/near/NEPs/blob/master/neps/nep-0509.md) | Stateless validation Stage 0 | @robin-near @pugachAG @Longarithm
@walnut-the-cat | Final | +| [0514](https://github.com/near/NEPs/blob/master/neps/nep-0514.md) | Fewer Block Producer Seats in `testnet` | @nikurt | Final | +| [0519](https://github.com/near/NEPs/blob/master/neps/nep-0519.md) | Yield Execution | @akhi3030 @saketh-are | Final | +| [0536](https://github.com/near/NEPs/blob/master/neps/nep-0536.md) | Reduce the number of gas refunds | @evgenykuzyakov @bowenwang1996 | Final | +| [0539](https://github.com/near/NEPs/blob/master/neps/nep-0539.md) | Cross-Shard Congestion Control | @wacban @jakmeier | Final | ## Specification From f4ec06a8f58d409ae7c5eca60df72f54cab39e66 Mon Sep 17 00:00:00 2001 From: Lyudmil Ivanov <55487633+flmel@users.noreply.github.com> Date: Tue, 1 Oct 2024 17:46:52 +0300 Subject: [PATCH 4/4] fix: headers consistency, update NEP status --- neps/nep-0021.md | 2 +- neps/nep-0141.md | 2 +- neps/nep-0145.md | 2 +- neps/nep-0148.md | 2 +- neps/nep-0171.md | 6 +- neps/nep-0177.md | 2 +- neps/nep-0178.md | 2 +- neps/nep-0181.md | 4 +- neps/nep-0199.md | 4 +- neps/nep-0245.md | 110 ++++++++++++------------- neps/nep-0264.md | 12 +-- neps/nep-0297.md | 4 +- neps/nep-0330.md | 14 ++-- neps/nep-0364.md | 2 +- neps/nep-0366.md | 2 +- neps/nep-0393.md | 2 +- neps/nep-0399.md | 2 +- neps/nep-0413.md | 2 +- neps/nep-0418.md | 2 +- neps/nep-0448.md | 2 +- neps/nep-0452.md | 2 +- neps/nep-0455.md | 4 +- neps/nep-0488.md | 206 +++++++++++++++++++++++------------------------ neps/nep-0491.md | 2 +- neps/nep-0508.md | 62 +++++++------- neps/nep-0509.md | 80 +++++++++--------- neps/nep-0514.md | 2 +- neps/nep-0519.md | 2 +- neps/nep-0536.md | 2 +- neps/nep-0539.md | 66 +++++++-------- 30 files changed, 304 insertions(+), 304 deletions(-) diff --git a/neps/nep-0021.md b/neps/nep-0021.md index ab45c49c4..d7a1bb29d 100644 --- a/neps/nep-0021.md +++ b/neps/nep-0021.md @@ -2,8 +2,8 @@ NEP: 21 Title: Fungible Token Standard Author: Evgeny Kuzyakov -DiscussionsTo: https://github.com/near/NEPs/pull/21 Status: Final +DiscussionsTo: 
https://github.com/near/NEPs/pull/21
 Type: Standards Track
 Category: Contract
 Created: 29-Oct-2019
diff --git a/neps/nep-0141.md b/neps/nep-0141.md
index bdc5d35fb..03e1c570f 100644
--- a/neps/nep-0141.md
+++ b/neps/nep-0141.md
@@ -2,8 +2,8 @@
 NEP: 141
 Title: Fungible Token Standard
 Author: Evgeny Kuzyakov , Robert Zaremba <@robert-zaremba>, @oysterpack
-DiscussionsTo: https://github.com/near/NEPs/issues/141
 Status: Final
+DiscussionsTo: https://github.com/near/NEPs/issues/141
 Type: Standards Track
 Category: Contract
 Created: 03-Mar-2022
diff --git a/neps/nep-0145.md b/neps/nep-0145.md
index c6fd70f74..821c7fda2 100644
--- a/neps/nep-0145.md
+++ b/neps/nep-0145.md
@@ -2,8 +2,8 @@
 NEP: 145
 Title: Storage Management
 Author: Evgeny Kuzyakov , @oysterpack
-DiscussionsTo: https://github.com/near/NEPs/discussions/145
 Status: Final
+DiscussionsTo: https://github.com/near/NEPs/discussions/145
 Type: Standards Track
 Category: Contract
 Created: 03-Mar-2022
diff --git a/neps/nep-0148.md b/neps/nep-0148.md
index 1bcc59692..1534ae1b6 100644
--- a/neps/nep-0148.md
+++ b/neps/nep-0148.md
@@ -2,8 +2,8 @@
 NEP: 148
 Title: Fungible Token Metadata
 Author: Robert Zaremba , Evgeny Kuzyakov , @oysterpack
-DiscussionsTo: https://github.com/near/NEPs/discussions/148
 Status: Final
+DiscussionsTo: https://github.com/near/NEPs/discussions/148
 Type: Standards Track
 Category: Contract
 Created: 03-Mar-2022
diff --git a/neps/nep-0171.md b/neps/nep-0171.md
index 8b0990319..31573ac9c 100644
--- a/neps/nep-0171.md
+++ b/neps/nep-0171.md
@@ -2,8 +2,8 @@
 NEP: 171
 Title: Non Fungible Token Standard
 Author: Mike Purvis , Evgeny Kuzyakov , @oysterpack
-DiscussionsTo: https://github.com/near/NEPs/discussions/171
 Status: Final
+DiscussionsTo:
An event that has a relevant trigger function [NFT Core Standard](https://nomicon.io/Standards/NonFungibleToken/Core.html#nft-interface) (`nft_transfer`) This event standard also applies beyond the events highlighted here, where future events follow the same convention of as the second type. For instance, if an NFT contract uses the [approval management standard](https://nomicon.io/Standards/NonFungibleToken/ApprovalManagement.html), it may emit an event for `nft_approve` if that's deemed as important by the developer community. - + Please feel free to open pull requests for extending the events standard detailed here as needs arise. @@ -441,7 +441,7 @@ The extension NEP-0469 that added Token Metadata Update Event Kind to this NEP-0 #### Concerns | # | Concern | Resolution | Status | -| - | - | - | - | +| - | - | - | - | | 1 | Ecosystem will be split where legacy contracts won't emit these new events, so legacy support will still be needed | In the future, there will be fewer legacy contracts and eventually apps will have support for this type of event | Resolved | | 2 | `nft_update` event name is ambiguous | It was decided to use `nft_metadata_update` name, instead | Resolved | diff --git a/neps/nep-0177.md b/neps/nep-0177.md index 2c3969ff0..66ea8b436 100644 --- a/neps/nep-0177.md +++ b/neps/nep-0177.md @@ -2,8 +2,8 @@ NEP: 177 Title: Non Fungible Token Metadata Author: Chad Ostrowski <@chadoh>, Mike Purvis -DiscussionsTo: https://github.com/near/NEPs/discussions/177 Status: Final +DiscussionsTo: https://github.com/near/NEPs/discussions/177 Type: Standards Track Category: Contract Created: 03-Mar-2022 diff --git a/neps/nep-0178.md b/neps/nep-0178.md index cfbc49398..7021c8f23 100644 --- a/neps/nep-0178.md +++ b/neps/nep-0178.md @@ -2,8 +2,8 @@ NEP: 178 Title: Non Fungible Token Approval Management Author: Chad Ostrowski <@chadoh>, Thor <@thor314> -DiscussionsTo: https://github.com/near/NEPs/discussions/178 Status: Final +DiscussionsTo: 
https://github.com/near/NEPs/discussions/178 Type: Standards Track Category: Contract Created: 03-Mar-2022 diff --git a/neps/nep-0181.md b/neps/nep-0181.md index 716792500..89e44243f 100644 --- a/neps/nep-0181.md +++ b/neps/nep-0181.md @@ -2,8 +2,8 @@ NEP: 181 Title: Non Fungible Token Enumeration Author: Chad Ostrowski <@chadoh>, Thor <@thor314> -DiscussionsTo: https://github.com/near/NEPs/discussions/181 Status: Final +DiscussionsTo: https://github.com/near/NEPs/discussions/181 Type: Standards Track Category: Contract Created: 03-Mar-2022 @@ -78,7 +78,7 @@ function nft_tokens_for_owner( ## Notes -At the time of this writing, the specialized collections in the `near-sdk` Rust crate are iterable, but not all of them have implemented an `iter_from` solution. There may be efficiency gains for large collections and contract developers are encouraged to test their data structures with a large amount of entries. +At the time of this writing, the specialized collections in the `near-sdk` Rust crate are iterable, but not all of them have implemented an `iter_from` solution. There may be efficiency gains for large collections and contract developers are encouraged to test their data structures with a large amount of entries. ## Reference Implementation diff --git a/neps/nep-0199.md b/neps/nep-0199.md index 2e1322d7a..4d7cdeb4f 100644 --- a/neps/nep-0199.md +++ b/neps/nep-0199.md @@ -2,8 +2,8 @@ NEP: 199 Title: Non Fungible Token Royalties and Payouts Author: Thor <@thor314>, Matt Lockyer <@mattlockyer> -DiscussionsTo: https://github.com/near/NEPs/discussions/199 Status: Final +DiscussionsTo: https://github.com/near/NEPs/discussions/199 Type: Standards Track Category: Contract Created: 03-Mar-2022 @@ -116,7 +116,7 @@ NFT and financial contracts vary in implementation. This means that some extra C ## Drawbacks -There is an introduction of trust that the contract calling `nft_transfer_payout` will indeed pay out to all intended parties. 
However, since the calling contract will typically be something like a marketplace used by end users, malicious actors might be found out more easily and might have less incentive. +There is an introduction of trust that the contract calling `nft_transfer_payout` will indeed pay out to all intended parties. However, since the calling contract will typically be something like a marketplace used by end users, malicious actors might be found out more easily and might have less incentive. There is an assumption that NFT contracts will understand the limits of gas and not allow for a number of payouts that cannot be achieved. ## Future possibilities diff --git a/neps/nep-0245.md b/neps/nep-0245.md index 82c11c5ae..82d1d3d48 100644 --- a/neps/nep-0245.md +++ b/neps/nep-0245.md @@ -1,9 +1,9 @@ --- -NEP: 245 +NEP: 245 Title: Multi Token Standard Author: Zane Starr , @riqi, @jriemann, @marcos.sun +Status: Final DiscussionsTo: https://github.com/near/NEPs/discussions/246 -Status: Review Type: Standards Track Category: Contract Created: 03-Mar-2022 @@ -17,11 +17,11 @@ A standard interface for a multi token standard that supports fungible, semi-fun ## Motivation -In the three years since [ERC-1155] was ratified by the Ethereum Community, Multi Token based contracts have proven themselves valuable assets. Many blockchain projects emulate this standard for representing multiple token assets classes in a single contract. The ability to reduce transaction overhead for marketplaces, video games, DAOs, and exchanges is appealing to the blockchain ecosystem and simplifies transactions for developers. +In the three years since [ERC-1155] was ratified by the Ethereum Community, Multi Token based contracts have proven themselves valuable assets. Many blockchain projects emulate this standard for representing multiple token assets classes in a single contract. 
The ability to reduce transaction overhead for marketplaces, video games, DAOs, and exchanges is appealing to the blockchain ecosystem and simplifies transactions for developers. Having a single contract represent NFTs, FTs, and tokens that sit inbetween greatly improves efficiency. The standard also introduced the ability to make batch requests with multiple asset classes reducing complexity. This standard allows operations that currently require _many_ transactions to be completed in a single transaction that can transfer not only NFTs and FTs, but any tokens that are a part of same token contract. -With this standard, we have sought to take advantage of the ability of the NEAR blockchain to scale. Its sharded runtime, and [storage staking] model that decouples [gas] fees from storage demand, enables ultra low transaction fees and greater on chain storage (see [Metadata][MT Metadata] extension). +With this standard, we have sought to take advantage of the ability of the NEAR blockchain to scale. Its sharded runtime, and [storage staking] model that decouples [gas] fees from storage demand, enables ultra low transaction fees and greater on chain storage (see [Metadata][MT Metadata] extension). With the aforementioned, it is noteworthy to mention that like the [NFT] standard the Multi Token standard, implements `mt_transfer_call`, which allows, a user to attach many tokens to a call to a separate contract. Additionally, this standard includes an optional [Approval Management] extension. The extension allows marketplaces to trade on behalf of a user, providing additional flexibility for dApps. @@ -39,24 +39,24 @@ Why have another standard, aren't fungible and non-fungible tokens enough? The The standard here introduces a few concepts that evolve the original [ERC-1155] standard to have more utility, while maintaining the original flexibility of the standard. So keeping that in mind, we are defining this as a new token type. 
It combines two main features of FT and NFT. It allows us to represent many token types in a single contract, and it's possible to store the amount for each token. -The decision to not use FT and NFT as explicit token types was taken to allow the community to define their own standards and meanings through metadata. As standards evolve on other networks, this specification allows the standard to be able to represent tokens across networks accurately, without necessarily restricting the behavior to any preset definition. +The decision to not use FT and NFT as explicit token types was taken to allow the community to define their own standards and meanings through metadata. As standards evolve on other networks, this specification allows the standard to be able to represent tokens across networks accurately, without necessarily restricting the behavior to any preset definition. -The issues with this in general is a problem with defining what metadata means and how is that interpreted. We have chosen to follow the pattern that is currently in use on Ethereum in the [ERC-1155] standard. That pattern relies on people to make extensions or to make signals as to how they want the metadata to be represented for their use case. +The issues with this in general is a problem with defining what metadata means and how is that interpreted. We have chosen to follow the pattern that is currently in use on Ethereum in the [ERC-1155] standard. That pattern relies on people to make extensions or to make signals as to how they want the metadata to be represented for their use case. One of the areas that has broad sweeping implications from the [ERC-1155] standard is the lack of direct access to metadata. With Near's sharding we are able to have a [Metadata Extension][MT Metadata] for the standard that exists on chain. So developers and users are not required to use an indexer to understand, how to interact or interpret tokens, via token identifiers that they receive. 
-Another extension that we made was to provide an explicit ability for developers and users to group or link together series of NFTs/FTs or any combination of tokens. This provides additional flexibility that the [ERC-1155] standard only has loose guidelines on. This was chosen to make it easy for consumers to understand the relationship between tokens within the contract. +Another extension that we made was to provide an explicit ability for developers and users to group or link together series of NFTs/FTs or any combination of tokens. This provides additional flexibility that the [ERC-1155] standard only has loose guidelines on. This was chosen to make it easy for consumers to understand the relationship between tokens within the contract. -To recap, we choose to create this standard, to improve interoperability, developer ease of use, and to extend token representability beyond what was available directly in the FT or NFT standards. We believe this to be another tool in the developer's toolkit. It makes it possible to represent many types of tokens and to enable exchanges of many tokens within a single `transaction`. +To recap, we choose to create this standard, to improve interoperability, developer ease of use, and to extend token representability beyond what was available directly in the FT or NFT standards. We believe this to be another tool in the developer's toolkit. It makes it possible to represent many types of tokens and to enable exchanges of many tokens within a single `transaction`. -## Specification +## Specification **NOTES**: - All amounts, balances and allowance are limited by `U128` (max value `2**128 - 1`). - Token standard uses JSON for serialization of arguments and results. - Amounts in arguments and results are serialized as Base-10 strings, e.g. `"100"`. This is done to avoid JSON limitation of max integer value of `2**53`. -- The contract must track the change in storage when adding to and removing from collections. 
This is not included in this core multi token standard but instead in the [Storage Standard][Storage Management]. +- The contract must track the change in storage when adding to and removing from collections. This is not included in this core multi token standard but instead in the [Storage Standard][Storage Management]. - To prevent the deployed contract from being modified or deleted, it should not have any access keys on its account. ### MT Interface @@ -93,11 +93,11 @@ type Token = { // * `amount`: the number of tokens to transfer, wrapped in quotes and treated // like a string, although the number will be stored as an unsigned integer // with 128 bits. -// * `approval` (optional): is a tuple of [`owner_id`,`approval_id`]. -// `owner_id` is the valid Near account that owns the tokens. +// * `approval` (optional): is a tuple of [`owner_id`,`approval_id`]. +// `owner_id` is the valid Near account that owns the tokens. // `approval_id` is the expected approval ID. A number smaller than // 2^53, and therefore representable as JSON. See Approval Management -// standard for full explanation. +// standard for full explanation. // * `memo` (optional): for use cases that may benefit from indexing or // providing information for a transfer @@ -131,14 +131,14 @@ function mt_transfer( // * `amounts`: the number of tokens to transfer, wrapped in quotes and treated // like an array of strings, although the numbers will be stored as an array of unsigned integer // with 128 bits. -// * `approvals` (optional): is an array of expected `approval` per `token_ids`. -// If a `token_id` does not have a corresponding `approval` then the entry in the array +// * `approvals` (optional): is an array of expected `approval` per `token_ids`. +// If a `token_id` does not have a corresponding `approval` then the entry in the array // must be marked null. -// `approval` is a tuple of [`owner_id`,`approval_id`]. -// `owner_id` is the valid Near account that owns the tokens. 
+// `approval` is a tuple of [`owner_id`,`approval_id`]. +// `owner_id` is the valid Near account that owns the tokens. // `approval_id` is the expected approval ID. A number smaller than // 2^53, and therefore representable as JSON. See Approval Management -// standard for full explanation. +// standard for full explanation. // * `memo` (optional): for use cases that may benefit from indexing or // providing information for a transfer @@ -182,8 +182,8 @@ function mt_batch_transfer( // like a string, although the number will be stored as an unsigned integer // with 128 bits. // * `owner_id`: the valid NEAR account that owns the token -// * `approval` (optional): is a tuple of [`owner_id`,`approval_id`]. -// `owner_id` is the valid Near account that owns the tokens. +// * `approval` (optional): is a tuple of [`owner_id`,`approval_id`]. +// `owner_id` is the valid Near account that owns the tokens. // `approval_id` is the expected approval ID. A number smaller than // 2^53, and therefore representable as JSON. See Approval Management // * `memo` (optional): for use cases that may benefit from indexing or @@ -234,15 +234,15 @@ function mt_transfer_call( // * `token_ids`: the tokens to transfer // * `amounts`: the number of tokens to transfer, wrapped in quotes and treated // like an array of string, although the numbers will be stored as an array of -// unsigned integer with 128 bits. -// * `approvals` (optional): is an array of expected `approval` per `token_ids`. -// If a `token_id` does not have a corresponding `approval` then the entry in the array +// unsigned integer with 128 bits. +// * `approvals` (optional): is an array of expected `approval` per `token_ids`. +// If a `token_id` does not have a corresponding `approval` then the entry in the array // must be marked null. -// `approval` is a tuple of [`owner_id`,`approval_id`]. -// `owner_id` is the valid Near account that owns the tokens. +// `approval` is a tuple of [`owner_id`,`approval_id`]. 
+// `owner_id` is the valid Near account that owns the tokens. // `approval_id` is the expected approval ID. A number smaller than // 2^53, and therefore representable as JSON. See Approval Management -// standard for full explanation. +// standard for full explanation. // * `memo` (optional): for use cases that may benefit from indexing or // providing information for a transfer. // * `msg`: specifies information needed by the receiving contract in @@ -267,16 +267,16 @@ function mt_batch_transfer_call( // Returns the tokens with the given `token_ids` or `null` if no such token. function mt_token(token_ids: string[]) (Token | null)[] -// Returns the balance of an account for the given `token_id`. -// The balance though wrapped in quotes and treated like a string, +// Returns the balance of an account for the given `token_id`. +// The balance though wrapped in quotes and treated like a string, // the number will be stored as an unsigned integer with 128 bits. // Arguments: // * `account_id`: the NEAR account that owns the token. // * `token_id`: the token to retrieve the balance from function mt_balance_of(account_id: string, token_id: string): string -// Returns the balances of an account for the given `token_ids`. -// The balances though wrapped in quotes and treated like strings, +// Returns the balances of an account for the given `token_ids`. +// The balances though wrapped in quotes and treated like strings, // the numbers will be stored as an unsigned integer with 128 bits. // Arguments: // * `account_id`: the NEAR account that owns the tokens. @@ -284,12 +284,12 @@ function mt_balance_of(account_id: string, token_id: string): string function mt_batch_balance_of(account_id: string, token_ids: string[]): string[] // Returns the token supply with the given `token_id` or `null` if no such token exists. 
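The argument conventions above (approval tuples, u128 amounts carried as decimal strings) can be sketched from the client side as follows. This is an illustrative sketch only — the type and function names are hypothetical, not normative parts of the standard:

```typescript
// Hypothetical client-side types mirroring the argument conventions above;
// names are illustrative, not part of the MT standard itself.
type AccountId = string;

// `approval` is a tuple of [`owner_id`, `approval_id`]; the approval ID is a
// number below 2^53 so it remains representable in JSON.
type Approval = [ownerId: AccountId, approvalId: number];

// Balances and amounts travel as decimal strings but denote u128 values,
// so clients should handle them as BigInt, never as JS numbers.
function parseAmount(amount: string): bigint {
  const value = BigInt(amount);
  if (value < 0n || value >= 1n << 128n) {
    throw new RangeError("amount out of u128 range");
  }
  return value;
}

console.log(parseAmount("340282366920938463463374607431768211455")); // u128 max
```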
-// The supply though wrapped in quotes and treated like a string, the number will be stored +// The supply though wrapped in quotes and treated like a string, the number will be stored // as an unsigned integer with 128 bits. -function mt_supply(token_id: string): string | null +function mt_supply(token_id: string): string | null -// Returns the token supplies with the given `token_ids`, a string value is returned or `null` -// if no such token exists. The supplies though wrapped in quotes and treated like strings, +// Returns the token supplies with the given `token_ids`, a string value is returned or `null` +// if no such token exists. The supplies though wrapped in quotes and treated like strings, // the numbers will be stored as an unsigned integer with 128 bits. function mt_batch_supply(token_ids: string[]): (string | null)[] ``` @@ -297,7 +297,7 @@ function mt_batch_supply(token_ids: string[]): (string | null)[] The following behavior is required, but contract authors may name this function something other than the conventional `mt_resolve_transfer` used here. ```ts -// Finalize an `mt_transfer_call` or `mt_batch_transfer_call` chain of cross-contract calls. Generically +// Finalize an `mt_transfer_call` or `mt_batch_transfer_call` chain of cross-contract calls. Generically // referred to as `mt_transfer_call` as it applies to `mt_batch_transfer_call` as well. // // The `mt_transfer_call` process: @@ -323,19 +323,19 @@ The following behavior is required, but contract authors may name this function // * `approvals (optional)`: if using Approval Management, contract MUST provide // set of original approvals in this argument, and restore the // approved accounts in case of revert. -// `approvals` is an array of expected `approval_list` per `token_ids`. -// If a `token_id` does not have a corresponding `approvals_list` then the entry in the +// `approvals` is an array of expected `approval_list` per `token_ids`. 
+// If a `token_id` does not have a corresponding `approvals_list` then the entry in the // array must be marked null. -// `approvals_list` is an array of triplets of [`owner_id`,`approval_id`,`amount`]. -// `owner_id` is the valid Near account that owns the tokens. +// `approvals_list` is an array of triplets of [`owner_id`,`approval_id`,`amount`]. +// `owner_id` is the valid Near account that owns the tokens. // `approval_id` is the expected approval ID. A number smaller than // 2^53, and therefore representable as JSON. See Approval Management -// standard for full explanation. +// standard for full explanation. // `amount`: the number of tokens to transfer, wrapped in quotes and treated // like a string, although the number will be stored as an unsigned integer // with 128 bits. -// -// +// +// // // Returns total amount spent by the `receiver_id`, corresponding to the `token_id`. // The amounts returned, though wrapped in quotes and treated like strings, @@ -361,7 +361,7 @@ Contracts which want to make use of `mt_transfer_call` and `mt_batch_transfer_ca // Take some action after receiving a multi token // // Requirements: -// * Contract MUST restrict calls to this function to a set of whitelisted +// * Contract MUST restrict calls to this function to a set of whitelisted // contracts // * Contract MUST panic if `token_ids` length does not equals `amounts` // length @@ -379,8 +379,8 @@ Contracts which want to make use of `mt_transfer_call` and `mt_batch_transfer_ca // request. This may include method names and/or arguments. // // Returns the number of unused tokens in string form. For instance, if `amounts` -// is `["10"]` but only 9 are needed, it will return `["1"]`. The amounts returned, -// though wrapped in quotes and treated like strings, the numbers will be stored as +// is `["10"]` but only 9 are needed, it will return `["1"]`. 
The amounts returned, +// though wrapped in quotes and treated like strings, the numbers will be stored as // an unsigned integer with 128 bits. @@ -393,7 +393,7 @@ function mt_on_transfer( ): Promise; ``` -## Events +## Events NEAR and third-party applications need to track `mint`, `burn`, `transfer` events for all MT-driven apps consistently. This exension addresses that. @@ -418,11 +418,11 @@ interface MtEventLogData { ``` ```ts -// Minting event log. Emitted when a token is minted/created. +// Minting event log. Emitted when a token is minted/created. // Requirements // * Contract MUST emit event when minting a token -// Fields -// * Contract token_ids and amounts MUST be the same length +// Fields +// * Contract token_ids and amounts MUST be the same length // * `owner_id`: the account receiving the minted token // * `token_ids`: the tokens minted // * `amounts`: the number of tokens minted, wrapped in quotes and treated @@ -436,11 +436,11 @@ interface MtMintLog { memo?: string } -// Burning event log. Emitted when a token is burned. +// Burning event log. Emitted when a token is burned. // Requirements // * Contract MUST emit event when minting a token -// Fields -// * Contract token_ids and amounts MUST be the same length +// Fields +// * Contract token_ids and amounts MUST be the same length // * `owner_id`: the account whose token(s) are being burned // * `authorized_id`: approved account_id to burn, if applicable // * `token_ids`: the tokens being burned @@ -456,14 +456,14 @@ interface MtBurnLog { memo?: string } -// Transfer event log. Emitted when a token is transferred. +// Transfer event log. Emitted when a token is transferred. 
// Requirements // * Contract MUST emit event when transferring a token -// Fields +// Fields // * `authorized_id`: approved account_id to transfer // * `old_owner_id`: the account sending the tokens "sender.near" // * `new_owner_id`: the account receiving the tokens "receiver.near" -// * `token_ids`: the tokens to transfer +// * `token_ids`: the tokens to transfer // * `amounts`: the number of tokens to transfer, wrapped in quotes and treated // like a string, although the numbers will be stored as an unsigned integer // array with 128 bits. diff --git a/neps/nep-0264.md b/neps/nep-0264.md index e239a4b41..cbde6521c 100644 --- a/neps/nep-0264.md +++ b/neps/nep-0264.md @@ -2,8 +2,8 @@ NEP: 264 Title: Utilization of unspent gas for promise function calls Authors: Austin Abell +Status: Final DiscussionsTo: https://github.com/near/NEPs/pull/264 -Status: Approved Type: Protocol Version: 1.0.0 Created: 2021-09-30 @@ -20,7 +20,7 @@ We are proposing this to be able to utilize gas more efficiently but also to imp # Guide-level explanation -This host function is similar to [`promise_batch_action_function_call`](https://github.com/near/nearcore/blob/7d15bbc996282c8ae8f15b8f49d110fc901b84d8/runtime/near-vm-logic/src/logic.rs#L1526), except with an additional parameter that lets you specify how much of the excess gas should be attached to the function call. This parameter is a weight value that determines how much of the excess gas is attached to each function. +This host function is similar to [`promise_batch_action_function_call`](https://github.com/near/nearcore/blob/7d15bbc996282c8ae8f15b8f49d110fc901b84d8/runtime/near-vm-logic/src/logic.rs#L1526), except with an additional parameter that lets you specify how much of the excess gas should be attached to the function call. This parameter is a weight value that determines how much of the excess gas is attached to each function. 
So, for example, if there is 40 gas leftover and three function calls that select weights of 1, 5, and 2, the runtime will add 5, 25, and 10 gas to each function call. A developer can specify whether they want to attach a fixed amount of gas, a weight of remaining gas, or both. If at least one function call uses a weight of remaining gas, then all excess gas will be attached to future calls. This proposal allows developers the ability to utilize prepaid gas more efficiently than currently possible. @@ -71,7 +71,7 @@ This host function definition would look like this (as a Rust consumer): The only difference from the existing API is `gas_weight` added as another parameter, as an unsigned 64-bit integer. -As for calculations, the remaining gas at the end of the transaction can be floor divided by the sum of all the weights tracked. Then, after getting this value, just attach that value multiplied by the weight gas to each function call action. +As for calculations, the remaining gas at the end of the transaction can be floor divided by the sum of all the weights tracked. Then, after getting this value, just attach that value multiplied by the weight gas to each function call action. For example, if there are three weights, `a`, `b`, `c`: @@ -80,7 +80,7 @@ weight_sum = a + b + c a_gas += remaining_gas * a / weight_sum b_gas += remaining_gas * b / weight_sum c_gas += remaining_gas * c / weight_sum -``` +``` Any remaining gas that is not allocated to any of these function calls will be attached to the last function call scheduled. 
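The floor-division scheme described above can be sketched as follows. This is a rough illustration only — `distributeGas` is a hypothetical helper name, and the real accounting happens inside the runtime, not in contract code:

```typescript
// Sketch: distribute remaining gas by weight using floor division, with the
// unallocated remainder attached to the last scheduled function call,
// as the NEP describes.
function distributeGas(remaining: bigint, weights: bigint[]): bigint[] {
  const weightSum = weights.reduce((a, b) => a + b, 0n);
  // BigInt division floors toward zero for non-negative operands.
  const shares = weights.map((w) => (remaining * w) / weightSum);
  const allocated = shares.reduce((a, b) => a + b, 0n);
  if (shares.length > 0) {
    shares[shares.length - 1] += remaining - allocated; // precision leftover
  }
  return shares;
}

// The NEP's example: 40 gas left over, weights 1, 5, 2 → 5, 25, 10.
console.log(distributeGas(40n, [1n, 5n, 2n])); // [5n, 25n, 10n]
```

Note how the remainder rule keeps the total exact: with 10 gas and equal weights 1, 1, 1, floor division gives 3 each and the last call receives the leftover unit.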
@@ -127,7 +127,7 @@ The primary alternative is using a numerator and denominator to represent a frac Pros: -- Can under-utilize the gas for the current transaction to limit gas allowed for certain functions +- Can under-utilize the gas for the current transaction to limit gas allowed for certain functions - This could take responsibility away from DApp users because they would not have to worry less about attaching too much prepaid gas - Thinking in terms of fractions may be more intuitive for some developers - Might future proof better if we ever need this ability in the future, want to minimize the number of host functions created at all costs @@ -158,7 +158,7 @@ Cons: # Unresolved questions -What needs to be addressed before this gets merged: +What needs to be addressed before this gets merged: ~~- How much refactoring exactly is needed to handle this pattern?~~ ~~- Can we keep a queue of receipt and action indices with their respective weights and update their gas values after the current method is executed? Is there a cleaner way to handle this while keeping order?~~ ~~- Do we want to attach the gas lost due to precision on division to any function?~~ diff --git a/neps/nep-0297.md b/neps/nep-0297.md index b4de6cd37..6b58a42e4 100644 --- a/neps/nep-0297.md +++ b/neps/nep-0297.md @@ -2,8 +2,8 @@ NEP: 297 Title: Events Author: Olga Telezhnaya -DiscussionsTo: https://github.com/near/NEPs/issues/297 Status: Final +DiscussionsTo: https://github.com/near/NEPs/issues/297 Type: Standards Track Category: Contract Created: 03-Mar-2022 @@ -37,7 +37,7 @@ Many apps use different interfaces that represent the same action. This interface standardizes that process by introducing event logs. Events use the standard logs capability of NEAR. -Events are log entries that start with the `EVENT_JSON:` prefix followed by a single valid JSON string. +Events are log entries that start with the `EVENT_JSON:` prefix followed by a single valid JSON string. 
JSON string may have any number of space characters in the beginning, the middle, or the end of the string. It's guaranteed that space characters do not break its parsing. All the examples below are pretty-formatted for better readability. diff --git a/neps/nep-0330.md b/neps/nep-0330.md index 6615165c4..ffb11993b 100644 --- a/neps/nep-0330.md +++ b/neps/nep-0330.md @@ -2,8 +2,8 @@ NEP: 330 Title: Source Metadata Author: Ben Kurrek , Osman Abdelnasir , Andrey Gruzdev <@canvi>, Alexey Zenin <@alexthebuildr> +Status: Final DiscussionsTo: https://github.com/near/NEPs/discussions/329 -Status: Approved Type: Standards Track Category: Contract Version: 1.2.0 @@ -58,7 +58,7 @@ type Standard { type BuildInfo { build_environment: string, // reference to a reproducible build environment docker image, e.g., "docker.io/sourcescan/cargo-near@sha256:bf488476d9c4e49e36862bbdef2c595f88d34a295fd551cc65dc291553849471" or something else pointing to the build environment. source_code_snapshot: string, // reference to the source code snapshot that was used to build the contract, e.g., "git+https://github.com/near/cargo-near-new-project-template.git#9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420" or "ipfs://". - contract_path: string|null, // relative path to contract crate within the source code, e.g., "contracts/contract-one". Often, it is the root of the repository, so can be omitted. + contract_path: string|null, // relative path to contract crate within the source code, e.g., "contracts/contract-one". Often, it is the root of the repository, so can be omitted. build_command: string[], // the exact command that was used to build the contract, with all the flags, e.g., ["cargo", "near", "build", "--no-abi"]. 
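The `EVENT_JSON:` convention from NEP-297 above can be consumed by tooling roughly as follows. This is an indexer-side sketch under stated assumptions — `parseEventLog` and the `EventLog` shape are illustrative names, not part of any NEP API:

```typescript
// Hypothetical sketch: recognize a NEP-297 event log entry and parse its
// JSON payload; everything after the prefix must be a single valid JSON value.
interface EventLog {
  standard: string;
  version: string;
  event: string;
  data?: unknown;
}

const EVENT_PREFIX = "EVENT_JSON:";

function parseEventLog(log: string): EventLog | null {
  if (!log.startsWith(EVENT_PREFIX)) return null; // ordinary log entry
  // JSON.parse tolerates surrounding whitespace, matching the NEP's guarantee
  // that space characters do not break parsing.
  return JSON.parse(log.slice(EVENT_PREFIX.length)) as EventLog;
}

const parsed = parseEventLog(
  'EVENT_JSON: {"standard":"nep245","version":"1.0.0","event":"mt_mint"}'
);
console.log(parsed?.event); // "mt_mint"
```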
} ``` @@ -118,7 +118,7 @@ Calling the view function `contract_source_metadata`, the contract would return: link: "https://github.com/near/cargo-near-new-project-template/tree/9c16aaff3c0fe5bda4d8ffb418c4bb2b535eb420", standards: [ { - standard: "nep330", + standard: "nep330", version: "1.1.0" } ], @@ -149,11 +149,11 @@ pub struct Standard { } /// BuildInfo structure -pub struct BuildInfo { +pub struct BuildInfo { pub build_environment: String, pub source_code_snapshot: String, - pub contract_path: Option, - pub build_command: Vec, + pub contract_path: Option, + pub build_command: Vec, } /// Contract metadata structure @@ -201,7 +201,7 @@ The extension NEP-351 that added Contract Metadata to this NEP-330 was approved #### Concerns | # | Concern | Resolution | Status | -| - | - | - | - | +| - | - | - | - | | 1 | Integer field as a standard reference is limiting as third-party projects may want to introduce their own standards without pushing it through the NEP process | Author accepted the proposed string-value standard reference (e.g. “nep123” instead of just 123, and allow “xyz001” as previously it was not possible to express it) | Resolved | | 2 | NEP-330 and NEP-351 should be included in the list of the supported NEPs | There seems to be a general agreement that it is a good default, so NEP was updated | Resolved | | 3 | JSON Event could be beneficial, so tooling can react to the changes in the supported standards | It is outside the scope of this NEP. 
Also, list of supported standards only changes with contract re-deployment, so tooling can track DEPLOY_CODE events and check the list of supported standards when new code is deployed | Won’t fix | diff --git a/neps/nep-0364.md b/neps/nep-0364.md index 3863cb38f..52e4c5779 100644 --- a/neps/nep-0364.md +++ b/neps/nep-0364.md @@ -2,8 +2,8 @@ NEP: 364 Title: Efficient signature verification and hashing precompile functions Author: Blas Rodriguez Irizar +Status: Final DiscussionsTo: https://github.com/nearprotocol/neps/pull/364 -Status: Draft Type: Runtime Spec Category: Contract Created: 15-Jun-2022 diff --git a/neps/nep-0366.md b/neps/nep-0366.md index 5aada7b0a..c5a203c22 100644 --- a/neps/nep-0366.md +++ b/neps/nep-0366.md @@ -2,8 +2,8 @@ NEP: 366 Title: Meta Transactions Author: Illia Polosukhin , Egor Uleyskiy (egor.ulieiskii@gmail.com), Alexander Fadeev (fadeevab.com@gmail.com) +Status: Final DiscussionsTo: https://github.com/nearprotocol/neps/pull/366 -Status: Approved Type: Protocol Track Category: Runtime Version: 1.1.0 diff --git a/neps/nep-0393.md b/neps/nep-0393.md index c045a18d9..82f73ffe7 100644 --- a/neps/nep-0393.md +++ b/neps/nep-0393.md @@ -2,8 +2,8 @@ NEP: 393 Title: Soulbound Token Authors: Robert Zaremba <@robert-zaremba> +Status: Final DiscussionsTo: -Status: Approved Type: Standards Track Category: Contract Created: 12-Sep-2022 diff --git a/neps/nep-0399.md b/neps/nep-0399.md index 779c9b472..6dc21b25b 100644 --- a/neps/nep-0399.md +++ b/neps/nep-0399.md @@ -2,8 +2,8 @@ NEP: 399 Title: Flat Storage Author: Aleksandr Logunov Min Zhang +Status: Final DiscussionsTo: https://github.com/nearprotocol/neps/pull/0399 -Status: Draft Type: Protocol Track Category: Storage Created: 30-Sep-2022 diff --git a/neps/nep-0413.md b/neps/nep-0413.md index ef6230fc5..810bc3a89 100644 --- a/neps/nep-0413.md +++ b/neps/nep-0413.md @@ -2,8 +2,8 @@ NEP: 413 Title: Near Wallet API - support for signMessage method Author: Philip Obosi , Guillermo Gallardo +Status: 
Final # DiscussionsTo: -Status: Approved Type: Standards Track Category: Wallet Created: 25-Oct-2022 diff --git a/neps/nep-0418.md b/neps/nep-0418.md index f1c703b90..602ee3495 100644 --- a/neps/nep-0418.md +++ b/neps/nep-0418.md @@ -2,8 +2,8 @@ NEP: 418 Title: Remove attached_deposit view panic Author: Austin Abell +Status: Final DiscussionsTo: https://github.com/nearprotocol/neps/pull/418 -Status: Approved Type: Standards Track Category: Tools Version: 1.0.0 diff --git a/neps/nep-0448.md b/neps/nep-0448.md index d7d2856fd..fb7781487 100644 --- a/neps/nep-0448.md +++ b/neps/nep-0448.md @@ -2,8 +2,8 @@ NEP: 448 Title: Zero-balance Accounts Author: Bowen Wang -DiscussionsTo: https://github.com/nearprotocol/neps/pull/448 Status: Final +DiscussionsTo: https://github.com/nearprotocol/neps/pull/448 Type: Protocol Track Created: 10-Jan-2023 --- diff --git a/neps/nep-0452.md b/neps/nep-0452.md index c5df9f258..7df946b7b 100644 --- a/neps/nep-0452.md +++ b/neps/nep-0452.md @@ -2,8 +2,8 @@ NEP: 452 Title: Linkdrop Standard Author: Ben Kurrek , Ken Miyachi -DiscussionsTo: https://gov.near.org/t/official-linkdrop-standard/32463/1 Status: Final +DiscussionsTo: https://gov.near.org/t/official-linkdrop-standard/32463/1 Type: Standards Track Category: Contract Version: 1.0.0 diff --git a/neps/nep-0455.md b/neps/nep-0455.md index 5cd9bddfc..893c68a75 100644 --- a/neps/nep-0455.md +++ b/neps/nep-0455.md @@ -2,8 +2,8 @@ NEP: 455 Title: Parameter Compute Costs Author: Andrei Kashin , Jakob Meier -DiscussionsTo: https://github.com/nearprotocol/neps/pull/455 Status: Final +DiscussionsTo: https://github.com/nearprotocol/neps/pull/455 Type: Protocol Track Category: Runtime Created: 26-Jan-2023 @@ -259,7 +259,7 @@ Progress on this work is tracked here: https://github.com/near/nearcore/issues/8 #### Benefits - Among the alternatives, this is the easiest to implement. -- It allows us to able to publicly discuss undercharging issues before they are fixed. 
+- It allows us to able to publicly discuss undercharging issues before they are fixed. #### Concerns diff --git a/neps/nep-0488.md b/neps/nep-0488.md index 71c079923..16ea66184 100644 --- a/neps/nep-0488.md +++ b/neps/nep-0488.md @@ -2,7 +2,7 @@ NEP: 488 Title: Host Functions for BLS12-381 Curve Operations Authors: Olga Kuniavskaia -Status: Draft +Status: Final DiscussionsTo: https://github.com/nearprotocol/neps/pull/488 Type: Runtime Spec Version: 0.0.1 @@ -18,33 +18,33 @@ This NEP introduces host functions to perform operations on the BLS12-381 ellipt The primary aim of this NEP is to enable fast and efficient verification of BLS signatures and zkSNARKs based on the BLS12-381[^1],[^11],[^52] elliptic curve on NEAR. -To efficiently verify zkSNARKs[^19], host functions for operations on the BN254 -elliptic curve (also known as Alt-BN128)[^9], [^12] have already been implemented on NEAR[^10]. -For instance, the Zeropool[^20] project utilizes these host functions for verifying zkSNARKs on NEAR. -However, recent research shows that the BN254 security level is lower than 100-bit[^13] and it is not recommended for use. -BLS12-381, on the other hand, offers over 120 bits of security[^8] and is widely used[^2],[^3],[^4],[^5],[^6],[^7] as a robust alternative. +To efficiently verify zkSNARKs[^19], host functions for operations on the BN254 +elliptic curve (also known as Alt-BN128)[^9], [^12] have already been implemented on NEAR[^10]. +For instance, the Zeropool[^20] project utilizes these host functions for verifying zkSNARKs on NEAR. +However, recent research shows that the BN254 security level is lower than 100-bit[^13] and it is not recommended for use. +BLS12-381, on the other hand, offers over 120 bits of security[^8] and is widely used[^2],[^3],[^4],[^5],[^6],[^7] as a robust alternative. Supporting operations for BLS12-381 elliptic curve will significantly enhance the security of projects similar to Zeropool. 
-Another crucial objective is the verification of BLS signatures. -Initially, host functions for BN254 on NEAR were designed for zkSNARK verification and +Another crucial objective is the verification of BLS signatures. +Initially, host functions for BN254 on NEAR were designed for zkSNARK verification and are insufficient for BLS signature verification. However, even if these host functions were sufficient for BLS signature verification on the BN254 elliptic curve, this would not be enough for compatibility with other projects. -In particular, projects such as ZCash[^2], Ethereum[^3], Tezos[^5], and Filecoin[^6] incorporate BLS12-381 specifically within their protocols. -If we aim for compatibility with these projects, we must also utilize this elliptic curve. -For instance, to create a trustless bridge[^17] between Ethereum and NEAR, +In particular, projects such as ZCash[^2], Ethereum[^3], Tezos[^5], and Filecoin[^6] incorporate BLS12-381 specifically within their protocols. +If we aim for compatibility with these projects, we must also utilize this elliptic curve. +For instance, to create a trustless bridge[^17] between Ethereum and NEAR, we must efficiently verify BLS signatures based on BLS12-381, as these are the signatures employed within Ethereum's protocol. In this NEP, we propose to add the following host functions: - ***bls12381_p1_sum —*** computes the sum of signed points from $E(F_p)$ elliptic curve. This function is useful for aggregating public keys or signatures in the BLS signature scheme. It can be employed for simple addition in $E(F_p)$. It is kept separate from the `multiexp` function due to gas cost considerations. -- ***bls12381_p2_sum —*** computes the sum of signed points from $E'(F_{p^2})$ elliptic curve. This function is useful for aggregating signatures or public keys in the BLS signature scheme. -- ***bls12381_g1_multiexp —*** calculates $\sum p_i s_i$ for points $p_i \in G_1 \subset E(F_p)$ and scalars $s_i$. 
This operation can be used to multiply a group element by a scalar. -- ***bls12381_g2_multiexp —*** calculates $\sum p_i s_i$ for points $p_i \in G_2 \subset E'(F_{p^2})$ and scalars $s_i$. +- ***bls12381_p2_sum —*** computes the sum of signed points from $E'(F_{p^2})$ elliptic curve. This function is useful for aggregating signatures or public keys in the BLS signature scheme. +- ***bls12381_g1_multiexp —*** calculates $\sum p_i s_i$ for points $p_i \in G_1 \subset E(F_p)$ and scalars $s_i$. This operation can be used to multiply a group element by a scalar. +- ***bls12381_g2_multiexp —*** calculates $\sum p_i s_i$ for points $p_i \in G_2 \subset E'(F_{p^2})$ and scalars $s_i$. - ***bls12381_map_fp_to_g1 —*** maps base field elements into $G_1$ points. It does not perform the mapping of byte strings into field elements. - ***bls12381_map_fp2_to_g2 —*** maps extension field elements into $G_2$ points. This function does not perform the mapping of byte strings into extension field elements, which would be needed to efficiently map a message into a group element. We are not implementing the `hash_to_field`[^60] function because the latter can be executed within a contract and various hashing algorithms can be used within this function. - ***bls12381_p1_decompress —*** decompresses points from $E(F_p)$ provided in a compressed form. Certain protocols offer points on the curve in a compressed form (e.g., the light client updates in Ethereum 2.0), and decompression is a time-consuming operation. All the other functions in this NEP only accept decompressed points for simplicity and optimized gas consumption. - ***bls12381_p2_decompress —*** decompresses points from $E'(F_{p^2})$ provided in a compressed form. -- ***bls12381_pairing_check —*** verifies that $\prod e(p_i, q_i) = 1$, where $e$ is a pairing operation and $p_i \in G_1 \land q_i \in G_2$. This function is used to verify BLS signatures or zkSNARKs. 
+- ***bls12381_pairing_check —*** verifies that $\prod e(p_i, q_i) = 1$, where $e$ is a pairing operation and $p_i \in G_1 \land q_i \in G_2$. This function is used to verify BLS signatures or zkSNARKs. Functions required for verifying BLS signatures[^59]: @@ -61,12 +61,12 @@ Functions required for verifying zkSNARKs: - bls12381_g1_multiexp - bls12381_pairing_check -Both zkSNARKs and BLS signatures can be implemented alternatively by swapping $G_1$ and $G_2$. +Both zkSNARKs and BLS signatures can be implemented alternatively by swapping $G_1$ and $G_2$. Therefore, all functions have been implemented for both $G_1$ and $G_2$. -An analogous proposal, EIP-2537[^15], exists in Ethereum. -The functions here have been designed with compatibility -with that Ethereum's proposal in mind. This design approach aims +An analogous proposal, EIP-2537[^15], exists in Ethereum. +The functions here have been designed with compatibility +with that Ethereum's proposal in mind. This design approach aims to ensure future ease in supporting corresponding precompiles for Aurora[^24]. ## Specification @@ -75,10 +75,10 @@ to ensure future ease in supporting corresponding precompiles for Aurora[^24]. #### Elliptic Curve -**The field $F_p$** for some *prime* $p$ is a set of integer -elements $\textbraceleft 0, 1, \ldots, p - 1 \textbraceright$ with two +**The field $F_p$** for some *prime* $p$ is a set of integer +elements $\textbraceleft 0, 1, \ldots, p - 1 \textbraceright$ with two operations: multiplication $\cdot$ and addition $+$. -These operations involve standard integer multiplication and addition, +These operations involve standard integer multiplication and addition, followed by computing the remainder modulo $p$. **The elliptic curve $E(F_p)$** is the set of all pairs $(x, y)$ with coordinates in $F_p$ satisfying: @@ -131,8 +131,8 @@ Notation: |G| or #G, where G represents the group. 
For some technical reason (related to the `pairing` operation which we will define later), we will not operate over the entire $E(F_p)$, but only over the two subgroups $G_1$ and $G_2$ -having the same **order** $r$. -$G_1$ is a subset of $E(F_p)$, +having the same **order** $r$. +$G_1$ is a subset of $E(F_p)$, while $G_2$ is a subgroup of another group that we will define later. The value of $r$ should be a prime number and $G_1 \ne G_2$ @@ -163,7 +163,7 @@ c_i = (a_i + b_i) \mod p $$ -The multiplication $\cdot$ is defined as regular polynomial multiplication modulo $M(x)$, +The multiplication $\cdot$ is defined as regular polynomial multiplication modulo $M(x)$, where $M(x)$ is an irreducible polynomial of degree $k$ with coefficients from $F_p$. $$ @@ -172,12 +172,12 @@ $$ Notation: $F_{p^k} = F_{p}[x] / M(x)$ -In BLS12-381, we will require $F_{p^{12}}$. -We'll construct this field not directly as an extension from $F_p$, -but rather through a stepwise process. First, we'll build $F_{p^2}$ -as a quadratic extension of the field $F_p$. -Second, we'll establish $F_{p^6}$ as a cubic extension of $F_{p^2}$. -Finally, we'll create $F_{p^{12}}$ as a quadratic extension of the +In BLS12-381, we will require $F_{p^{12}}$. +We'll construct this field not directly as an extension from $F_p$, +but rather through a stepwise process. First, we'll build $F_{p^2}$ +as a quadratic extension of the field $F_p$. +Second, we'll establish $F_{p^6}$ as a cubic extension of $F_{p^2}$. +Finally, we'll create $F_{p^{12}}$ as a quadratic extension of the field $F_{p^6}$. To define these fields, we'll need to set up three irreducible polynomials[^51]: @@ -186,15 +186,15 @@ To define these fields, we'll need to set up three irreducible polynomials[^51]: - $F_{p^6} = F_{p^2}[v] / (v^3 - u - 1)$ - $F_{p^{12}} = F_{p^6}[w] / (w^2 - v)$ -The second subgroup we'll utilize has order r and -resides within the same elliptic curve but with elements from $F_{p^{12}}$. 
+The second subgroup we'll utilize has order r and +resides within the same elliptic curve but with elements from $F_{p^{12}}$. Specifically, $G_2 \subset E(F_{p^{12}})$, where $E: y^2 = x^3 + 4$ #### Twist -Storing elements from $E(F_{p^{12}})$ consumes a significant amount of memory. -The twist operation transforms the original curve $E(F_{p^{12}})$ into another curve within a different space, -denoted as $E'(F_{p^2})$. It is crucial that this new curve also includes a $G'_2$ subgroup with order 'r' +Storing elements from $E(F_{p^{12}})$ consumes a significant amount of memory. +The twist operation transforms the original curve $E(F_{p^{12}})$ into another curve within a different space, +denoted as $E'(F_{p^2})$. It is crucial that this new curve also includes a $G'_2$ subgroup with order 'r' so that we can easily transform it back to the original $G_2$. We want to have $\psi \colon E'(F_{p^2}) \rightarrow E(F_{p^{12}})$, such as @@ -260,7 +260,7 @@ The main properties of the pairing operation are: - $e(P + S, R) = e(P, R)\cdot e(S, R)$ To compute this function, we utilize an algorithm called Miller Loop. -For an affective implementation of this algorithm, +For an affective implementation of this algorithm, we require a key parameter for the BLS curve, denoted as $x$: $$ x = -\mathtt{0xd201000000010000}$$ @@ -326,12 +326,12 @@ All parameters were sourced from [^15], [^51], and [^14], and they remain consis ### Map to curve specification -This section delineates the functionality of the `bls12381_map_fp_to_g1` and `bls12381_map_fp2_to_g2` functions, +This section delineates the functionality of the `bls12381_map_fp_to_g1` and `bls12381_map_fp2_to_g2` functions, operating in accordance with the RFC9380 specification "Hashing to Elliptic Curves"[^62]. -These functions map field elements in $F_p$ or $F_{p^2}$ -to their corresponding subgroups: $G_1 \subset E(F_p)$ or $G_2 \subset E'(F_{p^2})$. 
-`bls12381_map_fp_to_g1`/`bls12381_map_fp2_to_g2` combine the functionalities +These functions map field elements in $F_p$ or $F_{p^2}$ +to their corresponding subgroups: $G_1 \subset E(F_p)$ or $G_2 \subset E'(F_{p^2})$. +`bls12381_map_fp_to_g1`/`bls12381_map_fp2_to_g2` combine the functionalities of `map_to_curve` and `clear_cofactor` from RFC9380[^63]. ```text @@ -340,21 +340,21 @@ fn bls12381_map_fp_to_g1(u): return clear_cofactor(Q); ``` -We choose not to implement the `hash_to_field` function as a host function due to potential changes in hashing methods. +We choose not to implement the `hash_to_field` function as a host function due to potential changes in hashing methods. Additionally, executing this function within the contract consumes approximately 2 TGas, which is acceptable for our goals. -Specific implementation parameters for `bls12381_map_fp_to_g1` and `bls12381_map_fp2_to_g2` can be found in RFC9380 +Specific implementation parameters for `bls12381_map_fp_to_g1` and `bls12381_map_fp2_to_g2` can be found in RFC9380 under sections 8.8.1[^64] and 8.8.2[^65], respectively. ### Curve points encoding #### General comments -The encoding rules for curve points and field elements align with the standards established in zkcrypto[^53] and +The encoding rules for curve points and field elements align with the standards established in zkcrypto[^53] and the implementation in the milagro lib[^29]. -For elements from $F_p$ the first three bits will always be $0$, because the first byte of $p$ equals $1$. As a result, -we can use these bits to encode extra information: the encoding format, the point at infinity, and the points' sign. +For elements from $F_p$ the first three bits will always be $0$, because the first byte of $p$ equals $1$. As a result, +we can use these bits to encode extra information: the encoding format, the point at infinity, and the points' sign. Read more in sections: Uncompressed/compressed points on curve $E(F_p)$ / $E'(F_{p^2})$. 
#### Sign @@ -371,13 +371,13 @@ Values from $F_p$ are encoded as big-endian [u8; 48]. Only values less than p ar #### Extension fields elements $F_{p^2}$ -An element $q \in F_{p^{2}}$ can be expressed as $q = c_0 + c_1 v$, where $c_0, c_1 \in F_p$. +An element $q \in F_{p^{2}}$ can be expressed as $q = c_0 + c_1 v$, where $c_0, c_1 \in F_p$. An element from $F_{p^2}$ is encoded in [u8; 96] as the byte concatenation of $c_1$ and $c_0$. The encoding for $c_1$ and $c_0$ follows the rule described in the previous section. #### Uncompressed points on curve $E(F_p)$ -Points on the curve are represented by affine coordinates: $(x: F_p, y: F_p)$. -Elements from $E(F_p)$ are encoded in `[u8; 96]` as the byte concatenation of the x and y point coordinates, where $x, y \in F_p$. +Points on the curve are represented by affine coordinates: $(x: F_p, y: F_p)$. +Elements from $E(F_p)$ are encoded in `[u8; 96]` as the byte concatenation of the x and y point coordinates, where $x, y \in F_p$. The encoding follows the rules outlined in the section “Fields elements $F_p$”. *The second-highest bit* within the encoding serves to signify a point at infinity. @@ -393,9 +393,9 @@ x[0] = x[0] | 0x40; #### Compressed points on curve $E(F_p)$ -The points on the curve are represented by affine coordinates: $(x: F_p, y: F_p)$. -Elements from $E(F_p)$ in compressed form are encoded as `[u8; 48]`, -with big-endian encoded $x \in F_p$. +The points on the curve are represented by affine coordinates: $(x: F_p, y: F_p)$. +Elements from $E(F_p)$ in compressed form are encoded as `[u8; 48]`, +with big-endian encoded $x \in F_p$. The $y$ coordinate is determined by the formula: $y = \pm \sqrt{x^3 + 4}$. - The highest bit indicates that the point is encoded in compressed form and thus must always be set to 1. @@ -424,12 +424,12 @@ x[0] = x[0] | 0x40; #### Uncompressed points on the twisted curve $E'(F_{p^2})$ -The points on the curve are represented by affine coordinates: $(x: F_{p^2}, y: F_{p^2})$. 
-Elements from $E'(F_{p^2})$ are encoded in [u8; 192] as a concatenation of bytes representing x and y coordinates, where $x, y \in F_{p^2}$. +The points on the curve are represented by affine coordinates: $(x: F_{p^2}, y: F_{p^2})$. +Elements from $E'(F_{p^2})$ are encoded in [u8; 192] as a concatenation of bytes representing x and y coordinates, where $x, y \in F_{p^2}$. The encoding for $x$ and $y$ follows the rules detailed in the "Extension Fields Elements $F_{p^2}$" section. -*The second-highest bit* within the encoding serves to signify a point at infinity. -When this bit is set to 1, it designates an infinity point. +*The second-highest bit* within the encoding serves to signify a point at infinity. +When this bit is set to 1, it designates an infinity point. In this case, all other bits should be set to 0. Encoding the point at infinity: @@ -441,9 +441,9 @@ x[0] = x[0] | 0x40; #### Compressed points on twisted curve $E'(F_{p^2})$ -The points on the curve are represented by affine coordinates: $(x: F_{p^2}, y: F_{p^2})$. -Elements from $E'(F_{p^2})$ in compressed form are encoded as [u8; 96], -with big-endian encoded $x \in F_{p^2}$. +The points on the curve are represented by affine coordinates: $(x: F_{p^2}, y: F_{p^2})$. +Elements from $E'(F_{p^2})$ in compressed form are encoded as [u8; 96], +with big-endian encoded $x \in F_{p^2}$. The $y$ coordinate is determined using the formula: $y = \pm \sqrt{x^3 + 4(u + 1)}$. - The highest bit indicates if the point is encoded in compressed form and should be set to 1. @@ -473,10 +473,10 @@ x[0] = x[0] | 0x40; #### ERROR_CODE Validating the input for the host functions within the contract can consume significant gas. -For instance, verifying if a point belongs to the subgroup is gas-consuming. -If an error is returned by the near host function, the entire execution is reverted. 
-To mitigate this, when the input verification is complex, the host function -will successfully complete its work but return an ERROR_CODE. +For instance, verifying if a point belongs to the subgroup is gas-consuming. +If an error is returned by the near host function, the entire execution is reverted. +To mitigate this, when the input verification is complex, the host function +will successfully complete its work but return an ERROR_CODE. This enables users to handle error cases independently. It's important to note that host functions might terminate with an error if it's straightforward to avoid it (e.g., incorrect input size). @@ -484,10 +484,10 @@ The ERROR_CODE is an u64 and can hold the following values: - 0: No error, execution was successful. For `bls12381_pairing_check` function, the pairing result equals the multiplicative identity. - 1: Execution finished with error due to: - - Incorrect encoding (e.g., incorrectly set compression/decompression bit, coordinate >= p, etc.). - - A point not on the curve (where applicable). + - Incorrect encoding (e.g., incorrectly set compression/decompression bit, coordinate >= p, etc.). + - A point not on the curve (where applicable). - A point not in the expected subgroup (where applicable). -- 2: Can be returned only in `bls12381_pairing_check`. No error, execution was successful, but the pairing result doesn't equal the multiplicative identity. +- 2: Can be returned only in `bls12381_pairing_check`. No error, execution was successful, but the pairing result doesn't equal the multiplicative identity. ### Host functions @@ -538,9 +538,9 @@ Note: This function accepts points from the entire curve and is not restricted t The sequence of pairs $(s_i, p_i)$, where $p_i \in E(F_p)$ represents a point and $s_i \in {0, 1}$ denotes the sign. Each point is encoded in decompressed form as $(x\colon F_p, y\colon F_p)$, and the sign is encoded in one byte, taking only two allowed values: 0 or 1. 
Expect 97*k bytes as input, which are interpreted as byte concatenation of k slices, with each slice representing the point sign and the uncompressed point from $E(F_p)$. Further details are available in the Curve Points Encoding section.

-***Output:*** 

-The ERROR_CODE is returned. 
+***Output:***

+The ERROR_CODE is returned.

- ERROR_CODE = 0: the input is correct
  - Output: 96 bytes represent one point $\in E(F_p)$ in its decompressed form. In case of an empty input, it outputs a point on infinity (refer to the Curve Points Encoding section for more details).

@@ -614,9 +614,9 @@ Edge cases:

***Annotation:***

```rust
-pub fn bls12381_p1_sum(&mut self, 
-                       value_len: u64, 
-                       value_ptr: u64, 
+pub fn bls12381_p1_sum(&mut self,
+                       value_len: u64,
+                       value_ptr: u64,
                        register_id: u64) -> Result;
```

@@ -630,11 +630,11 @@ The $E'(F_{p^2})$ curve, the points on the curve, the multiplication by -1, and

Note: The function accepts any points on the curve and is not limited to points in $G_2$.

-***Input:*** 

-The sequence of pairs $(s_i, p_i)$, where $p_i \in E'(F_{p^2})$ is point and $s_i \in \textbraceleft 0, 1 \textbraceright$ represents a sign. 
-Each point is encoded in decompressed form as $(x: F_{p^2}, y: F_{p^2})$, and the sign is encoded in one byte. The expected input size is 193*k bytes, interpreted as a byte concatenation of k slices, 
-each slice representing the point sign alongside the uncompressed point from $E'(F_{p^2})$. 
+The sequence of pairs $(s_i, p_i)$, where $p_i \in E'(F_{p^2})$ is a point and $s_i \in \textbraceleft 0, 1 \textbraceright$ represents a sign.
+Each point is encoded in decompressed form as $(x: F_{p^2}, y: F_{p^2})$, and the sign is encoded in one byte. The expected input size is 193*k bytes, interpreted as a byte concatenation of k slices,
+each slice representing the point sign alongside the uncompressed point from $E'(F_{p^2})$.
More details are available in the Curve Points Encoding section.
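Purely as an illustration (a hypothetical contract-side helper, not part of the host interface), packing the `(sign, point)` slices for the sum functions could look like this, with `N = 96` for `bls12381_p1_sum` (97*k input bytes) and `N = 192` for `bls12381_p2_sum` (193*k input bytes):

```rust
// Hypothetical helper: build the input buffer for the sum host functions.
// Each slice is one sign byte (0 or 1) followed by an N-byte uncompressed point.
fn pack_sum_input<const N: usize>(items: &[(u8, [u8; N])]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(items.len() * (N + 1));
    for (sign, point) in items {
        assert!(*sign <= 1, "the sign byte may only take the values 0 or 1");
        buf.push(*sign);
        buf.extend_from_slice(point);
    }
    buf
}
```

An empty `items` slice yields an empty buffer, which per the output description above makes the host function return the point at infinity.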
***Output:*** @@ -654,9 +654,9 @@ The test cases are identical to those of `bls12381_p1_sum`, with the only altera ***Annotation:*** ```rust -pub fn bls12381_p2_sum(&mut self, - value_len: u64, - value_ptr: u64, +pub fn bls12381_p2_sum(&mut self, + value_len: u64, + value_ptr: u64, register_id: u64) -> Result; ``` @@ -737,7 +737,7 @@ These are identical test cases to those in the `bls12381_p1_sum` section, but on Tests for error cases -- The same test cases as those in the `bls12381_p1_sum` section. +- The same test cases as those in the `bls12381_p1_sum` section. - Points not from $G_1$. ***Annotation:*** @@ -771,11 +771,11 @@ Please note: - The scalar is an arbitrary unsigned integer and can exceed the group order. - To enhance gas efficiency, the Pippenger’s algorithm[^25] can be utilized. -***Input:*** the sequence of pairs $(p_i, s_i)$, where $p_i \in G_2 \subset E'(F_{p^2})$ is a point on the curve and $s_i \in \mathbb{N}_0$ is a scalar. +***Input:*** the sequence of pairs $(p_i, s_i)$, where $p_i \in G_2 \subset E'(F_{p^2})$ is a point on the curve and $s_i \in \mathbb{N}_0$ is a scalar. The expected input size is `224*k` bytes, interpreted as the byte concatenation of `k` slices. Each slice is the concatenation of an uncompressed point from $G_2 \subset E'(F_{p^2})$ — `192` bytes and a scalar — `32` bytes. More details are in the Curve Points Encoding section. -***Output:*** +***Output:*** The ERROR_CODE is returned. @@ -805,16 +805,16 @@ pub fn bls12381_g2_multiexp( ***Description:*** -This function takes as input a list of field elements $a_i \in F_p$ and maps them to $G_1 \subset E(F_p)$. -You can find the specification of this mapping function in the section titled 'Map to curve specification.' +This function takes as input a list of field elements $a_i \in F_p$ and maps them to $G_1 \subset E(F_p)$. +You can find the specification of this mapping function in the section titled 'Map to curve specification.' 
Importantly, this function does NOT perform the mapping of the byte string into $F_p$. The implementation of the mapping to $F_p$ may vary and can be effectively executed within the contract. -***Input:*** +***Input:*** The function expects `48*k` bytes as input, representing a list of element from $F_p$ (unsigned integer $< p$). Additional information is available in the Curve Points Encoding section. -***Output:*** +***Output:*** The ERROR_CODE is returned. @@ -845,7 +845,7 @@ Edge cases: Tests for error cases -- Input length is not divisible by 48: +- Input length is not divisible by 48: - Input is beyond memory bounds. - $a = p$ - Random number $\ge p$ @@ -871,7 +871,7 @@ The implementation of the mapping to $F_{p^2}$ may vary and can be effectively e ***Input:*** the function takes as input `96*k` bytes — the elements from $F_{p^2}$ (two unsigned integers $< p$). Additional details can be found in the Curve Points Encoding section. -***Output:*** +***Output:*** The ERROR_CODE is returned. @@ -924,7 +924,7 @@ pub fn bls12381_map_fp2_to_g2( ***Description:*** -The pairing function is a bilinear function $e\colon G_1 \times G_2 \rightarrow G_T$, where $G_T \subset F_{q^{12}}$, +The pairing function is a bilinear function $e\colon G_1 \times G_2 \rightarrow G_T$, where $G_T \subset F_{q^{12}}$, which is used to verify BLS signatures/zkSNARKs. This function takes as input the sequence of pairs $(p_i, q_i)$, where $p_i \in G_1 \subset E(F_{p})$ and $q_i \in G_2 \subset E'(F_{p^2})$ and validates: @@ -954,7 +954,7 @@ The ERROR_CODE is returned. - Generate a random point $P \in G_1$: verify $e(P, \mathcal{O}) = 1$ - Generate a random point $Q \in G_2$: verify $e(\mathcal{O}, Q) = 1$ -- Generate random points $P \ne \mathcal{O} \in G_1$ and $Q \ne \mathcal{O} \in G_2$: verify $e(P, Q) \ne 1$ +- Generate random points $P \ne \mathcal{O} \in G_1$ and $Q \ne \mathcal{O} \in G_2$: verify $e(P, Q) \ne 1$ Tests for two pairs @@ -995,8 +995,8 @@ The ERROR_CODE is returned. 
***Annotation:*** ```rust -pub fn bls12381_pairing_check(&mut self, - value_len: u64, +pub fn bls12381_pairing_check(&mut self, + value_len: u64, value_ptr: u64) -> Result; ``` @@ -1037,18 +1037,18 @@ The ERROR_CODE is returned. Tests for error cases - The input length is not divisible by 48. -- The input is beyond memory bounds. -- Point is not on the curve. -- Incorrect decompression bit. +- The input is beyond memory bounds. +- Point is not on the curve. +- Incorrect decompression bit. - Incorrectly encoded point at infinity. - Point with a coordinate larger than 'p'. ***Annotation:*** ```rust -pub fn bls12381_p1_decompress(&mut self, - value_len: u64, - value_ptr: u64, +pub fn bls12381_p1_decompress(&mut self, + value_len: u64, + value_ptr: u64, register_id: u64) -> Result; ``` @@ -1065,7 +1065,7 @@ The ERROR_CODE is returned. - ERROR_CODE = 0: the input is correct - Output: the sequence of point $p_i \in E'(F_{p^2})$, with each point encoded in decompressed form. The expected output is 192*k bytes, interpreted as the byte concatenation of k slices. `k` corresponds to the value specified in the input section. Each slice represents the decompressed point from $E'(F_{p^2})$. For more details, refer to the Curve Points Encoding section. - ERROR_CODE = 1: - - Points are incorrectly encoded (refer to Curve points encoded section). + - Points are incorrectly encoded (refer to Curve points encoded section). - Point is not on the curve. ***Test cases:*** @@ -1075,9 +1075,9 @@ The same test cases as `bls12381_p1_decompress`, but with points from $G_2$, and ***Annotation:*** ```rust -pub fn bls12381_p2_decompress(&mut self, - value_len: u64, - value_ptr: u64, +pub fn bls12381_p2_decompress(&mut self, + value_len: u64, + value_ptr: u64, register_id: u64) -> Result; ``` @@ -1104,8 +1104,8 @@ In addition, there are implementations in other languages that are less relevant 5. Go, ***Matter Labs Go EIP-1962 implementation***[^41] 6. 
C++, ***Matter Labs C++ EIP-1962 implementation***[^42] -One of the possible libraries to use is the blst library[^30]. -This library exhibits good performance[^45] and has undergone several audits[^55]. +One of the possible libraries to use is the blst library[^30]. +This library exhibits good performance[^45] and has undergone several audits[^55]. You can find a draft implementation in nearcore, which is based on this library, through this link[^54]. ## Security Implications diff --git a/neps/nep-0491.md b/neps/nep-0491.md index b84b18726..828afdd8d 100644 --- a/neps/nep-0491.md +++ b/neps/nep-0491.md @@ -2,7 +2,7 @@ NEP: 491 Title: Non-Refundable Storage Staking Authors: Jakob Meier -Status: Draft +Status: Final DiscussionsTo: https://gov.near.org/t/proposal-locking-account-storage-refunds-to-avoid-faucet-draining-attacks/34155 Type: Protocol Track Version: 1.0.0 diff --git a/neps/nep-0508.md b/neps/nep-0508.md index 36fffe5ce..3230691b1 100644 --- a/neps/nep-0508.md +++ b/neps/nep-0508.md @@ -2,7 +2,7 @@ NEP: 508 Title: Resharding v2 Authors: Waclaw Banasik, Shreyan Gupta, Yoon Hong -Status: Draft +Status: Final DiscussionsTo: https://github.com/near/nearcore/issues/8992 Type: Protocol Version: 1.0.0 @@ -12,13 +12,13 @@ LastUpdated: 2023-11-14 ## Summary -This proposal introduces a new implementation for resharding and a new shard layout for the production networks. +This proposal introduces a new implementation for resharding and a new shard layout for the production networks. -In essence, this NEP is an extension of [NEP-40](https://github.com/near/NEPs/blob/master/specs/Proposals/0040-split-states.md), which was focused on splitting one shard into multiple shards. +In essence, this NEP is an extension of [NEP-40](https://github.com/near/NEPs/blob/master/specs/Proposals/0040-split-states.md), which was focused on splitting one shard into multiple shards. 
-We are introducing resharding v2, which supports one shard splitting into two within one epoch at a pre-determined split boundary. The NEP includes performance improvement to make resharding feasible under the current state as well as actual resharding in mainnet and testnet (To be specific, splitting the largest shard into two). 
+We are introducing resharding v2, which supports one shard splitting into two within one epoch at a pre-determined split boundary. The NEP includes performance improvements to make resharding feasible under the current state as well as actual resharding in mainnet and testnet (to be specific, splitting the largest shard into two).

-While the new approach addresses critical limitations left unsolved in NEP-40 and is expected to remain valid for foreseeable future, it does not serve all use cases, such as dynamic resharding. 
+While the new approach addresses critical limitations left unsolved in NEP-40 and is expected to remain valid for the foreseeable future, it does not serve all use cases, such as dynamic resharding.

## Motivation

@@ -58,26 +58,26 @@ A new protocol version will be introduced specifying the new shard layout which

### Required state changes

* For the duration of the resharding the node will need to maintain a snapshot of the flat state and related columns. As the main database and the snapshot diverge this will cause some extent of storage overhead.
-* For the duration of the epoch before the new shard layout takes effect, the node will need to maintain the state and flat state of shards in the old and new layout at the same time. The State and FlatState columns will grow up to approx 2x the size. The processing overhead should be minimal as the chunks will still be executed only on the parent shards. There will be increased load on the database while applying changes to both the parent and the children shards.
-* The total storage overhead is estimated to be on the order of 100GB for mainnet RPC nodes and 2TB for mainnet archival nodes. For testnet the overhead is expected to be much smaller. +* For the duration of the epoch before the new shard layout takes effect, the node will need to maintain the state and flat state of shards in the old and new layout at the same time. The State and FlatState columns will grow up to approx 2x the size. The processing overhead should be minimal as the chunks will still be executed only on the parent shards. There will be increased load on the database while applying changes to both the parent and the children shards. +* The total storage overhead is estimated to be on the order of 100GB for mainnet RPC nodes and 2TB for mainnet archival nodes. For testnet the overhead is expected to be much smaller. ### Resharding flow * The new shard layout will be agreed on offline by the protocol team and hardcoded in the reference implementation. * The first resharding will be scheduled soon after this NEP is merged. The new shard layout boundary accounts will be: ```["aurora", "aurora-0", "kkuuue2akv_1630967379.near", "tge-lockup.sweat"]```. * Subsequent reshardings will be scheduled as needed, without further NEPs, unless significant changes are introduced. -* In epoch T, past the protocol version upgrade date, nodes will vote to switch to the new protocol version. The new protocol version will contain the new shard layout. +* In epoch T, past the protocol version upgrade date, nodes will vote to switch to the new protocol version. The new protocol version will contain the new shard layout. * In epoch T, in the last block of the epoch, the EpochConfig for epoch T+2 will be set. The EpochConfig for epoch T+2 will have the new shard layout. * In epoch T + 1, all nodes will perform the state split. 
The child shards will be kept up to date with the blockchain up until the epoch end first via catchup, and later as part of block postprocessing state application. -* In epoch T + 2, the chain will switch to the new shard layout. +* In epoch T + 2, the chain will switch to the new shard layout. ## Reference Implementation -The implementation heavily re-uses the implementation from [NEP-40](https://github.com/near/NEPs/blob/master/specs/Proposals/0040-split-states.md). Below are listed the major differences and additions. +The implementation heavily re-uses the implementation from [NEP-40](https://github.com/near/NEPs/blob/master/specs/Proposals/0040-split-states.md). Below are listed the major differences and additions. -### Code pointers to the proposed implementation +### Code pointers to the proposed implementation -* [new shard layout](https://github.com/near/nearcore/blob/c9836ab5b05c229da933d451fe8198d781f40509/core/primitives/src/shard_layout.rs#L161) +* [new shard layout](https://github.com/near/nearcore/blob/c9836ab5b05c229da933d451fe8198d781f40509/core/primitives/src/shard_layout.rs#L161) * [the main logic for splitting states](https://github.com/near/nearcore/blob/c9836ab5b05c229da933d451fe8198d781f40509/chain/chain/src/resharding.rs#L280) * [the main logic for applying chunks to split states](https://github.com/near/nearcore/blob/c9836ab5b05c229da933d451fe8198d781f40509/chain/chain/src/update_shard.rs#L315) * [the main logic for garbage collecting state from parent shard](https://github.com/near/nearcore/blob/c9836ab5b05c229da933d451fe8198d781f40509/chain/chain/src/store.rs#L2335) @@ -92,7 +92,7 @@ In order to ensure consistent view of the flat storage while splitting the state ### Handling receipts, gas burnt and balance burnt -When resharding, extra care should be taken when handling receipts in order to ensure that no receipts are lost or duplicated. The gas burnt and balance burnt also need to be correctly handled. 
The old resharding implementation for handling receipts, gas burnt and balance burnt relied on the fact in the first resharding there was only a single parent shard to begin with. The new implementation will provide a more generic and robust way of reassigning the receipts to the child shards, gas burnt, and balance burnt, that works for arbitrary splitting of shards, regardless of the previous shard layout.
+When resharding, extra care should be taken when handling receipts in order to ensure that no receipts are lost or duplicated. The gas burnt and balance burnt also need to be correctly handled. The old resharding implementation for handling receipts, gas burnt and balance burnt relied on the fact that in the first resharding there was only a single parent shard to begin with. The new implementation will provide a more generic and robust way of reassigning the receipts, gas burnt, and balance burnt to the child shards that works for arbitrary splitting of shards, regardless of the previous shard layout.

### New shard layout

@@ -100,7 +100,7 @@ The first release of the resharding v2 will contain a new shard layout where one

### Removal of Fixed shards

-Fixed shards was a feature of the protocol that allowed for assigning specific accounts and all of their recursive sub accounts to a predetermined shard. This feature was only used for testing and was never used in production.
Fixed shards feature unfortunately breaks the contiguity of shards and is not compatible with the new resharding flow. A sub account of a fixed shard account can fall in the middle of account range that belongs to a different shard. This property of fixed shards made it particularly hard to reason about and implement efficient resharding.

For example in a shard layout with boundary accounts [`b`, `d`] the account space is cleanly divided into three shards, each spanning a contiguous range and account ids:

@@ -108,11 +108,11 @@ For example in a shard layout with boundary accounts [`b`, `d`] the account spac

* 1 - `b:d`
* 2 - `d:`

-Now if we add a fixed shard `f` to the same shard layout, then any we'll have 4 shards but neither is contiguous. Accounts such as `aaa.f`, `ccc.f`, `eee.f` that would otherwise belong to shards 0, 1 and 2 respectively are now all assigned to the fixed shard and create holes in the shard account ranges. 
+Now if we add a fixed shard `f` to the same shard layout, we'll have 4 shards, none of which is contiguous. Accounts such as `aaa.f`, `ccc.f`, `eee.f` that would otherwise belong to shards 0, 1 and 2 respectively are now all assigned to the fixed shard and create holes in the shard account ranges.

-It's also worth noting that there is no benefit to having accounts colocated in the same shard. Any transaction or receipt is treated the same way regardless of crossing shard boundary. 
+It's also worth noting that there is no benefit to having accounts colocated in the same shard. Any transaction or receipt is treated the same way regardless of whether it crosses a shard boundary.

-This was implemented ahead of this NEP and the fixed shards feature was **removed**. 
+This was implemented ahead of this NEP and the fixed shards feature was **removed**.

### Garbage collection

@@ -120,15 +120,15 @@ In epoch T+2 once resharding is completed, we can delete the trie state and the

### Transaction pool

-The transaction pool is sharded i.e.
it groups transactions by the shard where each transaction should be converted to a receipt. The transaction pool was previously sharded by the ShardId. Unfortunately ShardId is insufficient to correctly identify a shard across a resharding event as ShardIds change domain. The transaction pool was migrated to group transactions by ShardUId instead, and a transaction pool resharding was implemented to reassign transaction from parent shard to children shards right before the new shard layout takes effect. The ShardUId contains the version of the shard layout which allows differentiating between shards in different shard layouts.
+The transaction pool is sharded, i.e. it groups transactions by the shard where each transaction should be converted to a receipt. The transaction pool was previously sharded by the ShardId. Unfortunately ShardId is insufficient to correctly identify a shard across a resharding event as ShardIds change domain. The transaction pool was migrated to group transactions by ShardUId instead, and a transaction pool resharding was implemented to reassign transactions from the parent shard to the children shards right before the new shard layout takes effect. The ShardUId contains the version of the shard layout which allows differentiating between shards in different shard layouts.

-This was implemented ahead of this NEP and the transaction pool is now fully **migrated** to ShardUId. 
+This was implemented ahead of this NEP and the transaction pool is now fully **migrated** to ShardUId.

## Alternatives

### Why is this design the best in the space of possible designs?

-This design is simple, robust, safe, and meets all requirements. 
+This design is simple, robust, safe, and meets all requirements.

### What other designs have been considered and what is the rationale for not choosing them?

@@ -138,17 +138,17 @@ This design is simple, robust, safe, and meets all requirements.

* Changing the trie structure to have the account id first and type of record later.
This change would allow for much faster resharding by only iterating over the nodes on the boundary. This approach has two major drawbacks without providing too many benefits over the previous approach of splitting by each trie record type. 1) It would require a massive migration of trie. 2) We would need to maintain the old and the new trie structure forever.
-* Changing the storage structure by having the storage key to have the format of `account_id.node_hash`. This structure would make it much easier to split the trie on storage level because the children shards are simple sub-ranges of the parent shard. Unfortunately we found that the migration would not be feasible. 
-* Changing the storage structure by having the key format as only node_hash and dropping the ShardUId prefix. This is a feasible approach but it adds complexity to the garbage collection and data deletion, specially when nodes would start tracking only one shard. We opted in for the much simpler one by using the existing scheme of prefixing storage entries by shard uid. 
+* Changing the storage structure by having the storage key in the format of `account_id.node_hash`. This structure would make it much easier to split the trie on the storage level because the children shards are simple sub-ranges of the parent shard. Unfortunately we found that the migration would not be feasible.
+* Changing the storage structure by having the key format as only node_hash and dropping the ShardUId prefix. This is a feasible approach but it adds complexity to the garbage collection and data deletion, especially when nodes would start tracking only one shard. We opted for the much simpler option of using the existing scheme of prefixing storage entries by shard uid.

#### Other considerations

* Dynamic Resharding - we have decided to not implement the full dynamic resharding at this time. Instead we hardcode the shard layout and schedule it manually.
The reasons are as follows:
- * We prefer incremental process of introducing resharding to make sure that it is robust and reliable, as well as give the community the time to adjust. 
- * Each resharding increases the potential total load on the system. We don't want to allow it to grow until full sharding is in place and we can handle that increase. 
+ * We prefer an incremental process of introducing resharding to make sure that it is robust and reliable, as well as give the community the time to adjust.
+ * Each resharding increases the potential total load on the system. We don't want to allow it to grow until full sharding is in place and we can handle that increase.
* Extended shard layout adjustments - we have decided to only implement shard splitting and not implement any other operations. The reasons are as follows:
 * In this iteration we only want to perform splitting.
- * The extended adjustments are currently not justified. Both merging and boundary moving may be useful in the future when the traffic patterns change and some shard become underutilized. In the nearest future we only predict needing to reduce the size of the heaviest shards. 
+ * The extended adjustments are currently not justified. Both merging and boundary moving may be useful in the future when the traffic patterns change and some shards become underutilized. In the near future we only predict needing to reduce the size of the heaviest shards.

### What is the impact of not doing this?

@@ -159,7 +159,7 @@ We need resharding in order to scale up the system. Without resharding eventuall

There are two known issues in the integration of resharding and state sync:

* When syncing the state for the first epoch where the new shard layout is used. In this case the node would need to apply the last block of the previous epoch. It cannot be done on the children shard as on chain the block was applied on the parent shards and the trie related gas costs would be different.
-* When generating proofs for incoming receipts. The proof for each of the children shards contains only the receipts of the shard but it's generated on the parent shard layout and so may not be verified. +* When generating proofs for incoming receipts. The proof for each of the children shards contains only the receipts of the shard but it's generated on the parent shard layout and so may not be verified. In this NEP we propose that resharding should be rolled out first, before any real dependency on state sync is added. We can then safely roll out the resharding logic and solve the above mentioned issues separately. We believe at least some of the issues can be mitigated by the implementation of new pre-state root and chunk execution design. @@ -167,7 +167,7 @@ In this NEP we propose that resharding should be rolled out first, before any re The Stateless Validation requires that chunk producers provide proof of correctness of the transition function from one state root to another. That proof for the first block after the new shard layout takes place will need to prove that the entire state split was correct as well as the state transition. -In this NEP we propose that resharding should be rolled out first, before stateless validation. We can then safely roll out the resharding logic and solve the above mentioned issues separately. This issue was discussed with the stateless validation experts and we are cautiously optimistic that the integration will be possible. The most concerning part is the proof size and we believe that it should be small enough thanks to the resharding touching relatively small number of trie nodes - on the order of the depth of the trie. +In this NEP we propose that resharding should be rolled out first, before stateless validation. We can then safely roll out the resharding logic and solve the above mentioned issues separately. 
This issue was discussed with the stateless validation experts and we are cautiously optimistic that the integration will be possible. The most concerning part is the proof size, and we believe that it should be small enough thanks to resharding touching a relatively small number of trie nodes - on the order of the depth of the trie.

## Future fast-followups

@@ -183,7 +183,7 @@ As mentioned above under 'Integration with Stateless Validation' section, the in

### Further reshardings

-This NEP introduces both an implementation of resharding and an actual resharding to be done in the production networks. Further reshardings can also be performed in the future by adding a new shard layout and setting the shard layout for the desired protocol version in the `AllEpochConfig`. 
+This NEP introduces both an implementation of resharding and an actual resharding to be done in the production networks. Further reshardings can also be performed in the future by adding a new shard layout and setting the shard layout for the desired protocol version in the `AllEpochConfig`.

### Dynamic resharding

@@ -194,7 +194,7 @@ As noted above, dynamic resharding is out of scope for this NEP and should be im

### Extended shard layout adjustments

-In this NEP we only propose supporting splitting shards. This operation should be more than sufficient for the near future but eventually we may want to add support for more sophisticated adjustments such as: 
+In this NEP we only propose supporting splitting shards.
This operation should be more than sufficient for the near future, but eventually we may want to add support for more sophisticated adjustments such as:

* Merging shards together
* Moving the boundary account between two shards

@@ -208,7 +208,7 @@ In the future, we would like to potentially change the schema in a way such that

### Other useful features

* Removal of shard uids and introducing globally unique shard ids
-* Account colocation for low latency across account call - In case we start considering synchronous execution environment, colocating associated accounts (e.g. cross contract call between them) in the same shard can increase the efficiency 
+* Account colocation for low latency across account calls - In case we start considering a synchronous execution environment, colocating associated accounts (e.g. ones making cross contract calls between them) in the same shard can increase efficiency
* Shard purchase/reservation - When someone wants to secure the entire limit of a single shard (e.g. the state size limit), they can 'purchase/reserve' a shard so it can be dedicated to them (similar to how Aurora is set up)

## Consequences

@@ -233,7 +233,7 @@ In the future, we would like to potentially change the schema in a way such that

### Backwards Compatibility

-Any light clients, tooling or frameworks external to nearcore that have the current shard layout or the current number of shards hardcoded may break and will need to be adjusted in advance. The recommended way for fixing it is querying an RPC node for the shard layout of the relevant epoch and using that information in place of the previously hardcoded shard layout or number of shards. The shard layout can be queried by using the `EXPERIMENTAL_protocol_config` rpc endpoint and reading the `shard_layout` field from the result. A dedicated endpoint may be added in the future as well.
+Any light clients, tooling or frameworks external to nearcore that have the current shard layout or the current number of shards hardcoded may break and will need to be adjusted in advance. The recommended way to fix this is to query an RPC node for the shard layout of the relevant epoch and use that information in place of the previously hardcoded shard layout or number of shards. The shard layout can be queried by using the `EXPERIMENTAL_protocol_config` rpc endpoint and reading the `shard_layout` field from the result. A dedicated endpoint may be added in the future as well.

Within nearcore we do not expect anything to break with this change. Yet, shard splitting can introduce additional complexity in replayability. For instance, as the target shard of a receipt and the shard an account belongs to can change with shard splitting, shard splitting must be replayed along with transactions at the exact epoch boundary.

diff --git a/neps/nep-0509.md b/neps/nep-0509.md
index 4b10254cc..f32539d0c 100644
--- a/neps/nep-0509.md
+++ b/neps/nep-0509.md
@@ -2,7 +2,7 @@
NEP: 509
Title: Stateless validation Stage 0
Authors: Robin Cheng, Anton Puhach, Alex Logunov, Yoon Hong
-Status: Draft
+Status: Final
DiscussionsTo: https://docs.google.com/document/d/1C-w4FNeXl8ZMd_Z_YxOf30XA1JM6eMDp5Nf3N-zzNWU/edit?usp=sharing, https://docs.google.com/document/d/1TzMENFGYjwc2g5A3Yf4zilvBwuYJufsUQJwRjXGb9Xc/edit?usp=sharing
Type: Protocol
Version: 1.0.0
@@ -14,10 +14,10 @@ LastUpdated: 2023-09-19

The NEP proposes a solution to achieve phase 2 of sharding (where none of the validators needs to track all shards), with stateless validation, instead of the traditionally proposed approach of fraud proofs and state rollback.

-The fundamental idea is that validators do not need to have state locally to validate chunks. 
+The fundamental idea is that validators do not need to have state locally to validate chunks.
* Under stateless validation, the responsibility of a chunk producer extends to packaging transactions and receipts and annotating them with state witnesses. This extended role will be called "chunk proposer".
-* The state witness of a chunk is defined to be a subset of the trie state, alongside its proof of inclusion in the trie, that is needed to execute a chunk. A state witness allows anyone to execute the chunk without having the state of its shard locally. 
+* The state witness of a chunk is defined to be a subset of the trie state, alongside its proof of inclusion in the trie, that is needed to execute a chunk. A state witness allows anyone to execute the chunk without having the state of its shard locally.
* Then, at each block height, validators will be randomly assigned to a shard, to validate the state witness for that shard. Once a validator receives both a chunk and its state witness, it verifies the state transition of the chunk, signs a chunk endorsement and sends it to the block producer. This is similar to, but separate from, block approvals and consensus.
* The block producer waits for sufficient chunk endorsements before including a chunk into the block it produces, or omits the chunk if not enough endorsements arrive in time.

@@ -62,7 +62,7 @@ And the flow goes on for heights H+1, H+2, etc. The "induction base" is at genes

One can observe that there is no "chunk validation" step here. In fact, the validity of chunks is implicitly guaranteed by the **requirement for all block producers to track all shards**. To achieve phase 2 of sharding, we want to drop this requirement. For that, we propose the following changes to the flow:
- 
+

### Design after NEP-509

* The chunk producer, in addition to producing a chunk, produces a new `ChunkStateWitness` message. The `ChunkStateWitness` contains data which is enough to prove the validity of the chunk's header that is being produced.
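The inclusion rule above - a block producer only includes a chunk once enough endorsing stake has arrived - can be sketched with a small hypothetical helper. The 2/3 threshold mirrors the usual BFT stake assumption; the function name is illustrative, not nearcore's API:

```python
def has_enough_endorsements(endorsing_stakes, total_stake):
    # A chunk can be included once validators holding more than
    # 2/3 of the total chunk-validator stake have endorsed it.
    return 3 * sum(endorsing_stakes) > 2 * total_stake

# Four equal-stake validators; three endorsements clear the threshold.
assert has_enough_endorsements([25, 25, 25], 100)
# Two endorsements (50 of 100) do not, so the chunk would be omitted.
assert not has_enough_endorsements([25, 25], 100)
```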
@@ -131,7 +131,7 @@ With stateless validation, this structure must change for several reasons:

* Chunk production is the most resource consuming activity.
* *Only* chunk production needs state in memory while other responsibilities can be completed via acquiring state witness
-* Chunk production does not have to be performed by all validators. 
+* Chunk production does not have to be performed by all validators.

Hence, to make the transition seamless, we change the role of nodes outside the top 100 to only validate chunks:

@@ -142,7 +142,7 @@ if index(validator) < 100:
    roles(validator).append("chunk validator")
```

-The more stake validator has, the more **heavy** work it will get assigned. We expect that validators with higher stakes have more powerful hardware. 
+The more stake a validator has, the **heavier** the work it will be assigned. We expect that validators with higher stakes have more powerful hardware.
With stateless validation, the relative heaviness of the work changes. Compared to the current order "block production" > "chunk production", the new order is "chunk production" > "block production" > "chunk validation".

Shards are equally split among chunk producers: as on Mainnet on 12 Jun 2024 we have 6 shards, each shard would have ~16 chunk producers assigned.

@@ -168,10 +168,10 @@ Reward for each validator is defined as `total_epoch_reward * validator_relative

So, the actual reward never exceeds the total reward, and when everyone does perfect work, they are equal.

For the context of the NEP, it is enough to assume that `work_quality_ratio = avg_{role}({role}_quality_ratio)`. So, if a node is both a block and chunk producer, we compute the quality for each role separately and then take the average of them.
- 
+
When an epoch is finalized, all block headers in it uniquely determine who was expected to produce each block and chunk. Thus, if we define the quality ratio for a block producer as `produced_blocks/expected_blocks`, everyone is able to compute it.
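The reward rule described above - reward scales with the validator's relative stake and with `work_quality_ratio`, the average of the per-role quality ratios - can be illustrated with a hypothetical sketch (the numbers and the helper name are made up for illustration):

```python
def validator_reward(total_epoch_reward, stake, total_stake, role_quality_ratios):
    # work_quality_ratio is the average of the per-role quality ratios,
    # e.g. produced_blocks/expected_blocks for the block producer role.
    work_quality_ratio = sum(role_quality_ratios) / len(role_quality_ratios)
    return total_epoch_reward * (stake / total_stake) * work_quality_ratio

# A node with 10% of the stake, perfect block production (1.0)
# and 80% chunk inclusion (0.8): average quality is 0.9.
reward = validator_reward(1000, 10, 100, [1.0, 0.8])
assert abs(reward - 90.0) < 1e-9
```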
-Similarly, `produced_chunks/expected_chunks` is a quality for chunk producer. 
+Similarly, `produced_chunks/expected_chunks` is the quality ratio for a chunk producer.
It is more accurate to say `included_chunks/expected_chunks`, because the inclusion of a chunk in a block is the final decision of a block producer, which defines success here.

Ideally, we could compute the quality for a chunk validator as `produced_endorsements/expected_endorsements`. Unfortunately, we won't do it in Stage 0 because:

@@ -182,11 +182,11 @@ Ideally, we could compute quality for chunk validator as `produced_endorsements/

So for now we decided to compute the quality for a chunk validator as the ratio `included_chunks/expected_chunks`, where we iterate over the chunks which the node was expected to validate. It has clear drawbacks though:

-* chunk validators are not incentivized to validate the chunks, given they will be rewarded the same in either case; 
+* chunk validators are not incentivized to validate the chunks, given they will be rewarded the same in either case;
* if chunks are not produced at all, chunk validators will also be impacted.

We plan to address them in future releases.
- 
+

#### Kickouts

In addition to that, if a node's performance is too poor, we want a mechanism to kick it out of the validator list, to ensure healthy protocol performance and validator rotation.

@@ -203,7 +203,7 @@ if validator is chunk producer and chunk_producer_quality_ratio < 0.8:

For a chunk validator, we apply exactly the same formula. However, because:

-* the formula doesn't count endorsements explicitly 
+* the formula doesn't count endorsements explicitly
* for chunk producers it effectively just makes the chunk production condition stronger without adding value

we apply it to nodes which **only validate chunks**. So, we add this line:

@@ -215,12 +215,12 @@ if validator is only chunk validator and chunk_validator_quality_ratio < 0.8:

As we pointed out above, the current formula for `chunk_validator_quality_ratio` is problematic.
Here it causes an even bigger issue: if chunk producers don't produce chunks, chunk validators will be kicked out as well, which impacts network stability.

-This is another reason to come up with the better formula. 
+This is another reason to come up with a better formula.

#### Shard assignment

-As chunk producer becomes the most important role, we need to ensure that every epoch has significant amount of healthy chunk producers. 
-This is a **significant difference** with current logic, where chunk-only producers generally have low stake and their performance doesn't impact overall performance. 
+As the chunk producer becomes the most important role, we need to ensure that every epoch has a significant amount of healthy chunk producers.
+This is a **significant difference** from the current logic, where chunk-only producers generally have low stake and their performance doesn't impact overall performance.

The most challenging part of becoming a chunk producer for a shard is downloading the most recent shard state within the previous epoch. This is called "state sync".

Unfortunately, as of now, state sync is centralised on published snapshots, which is a major point of failure until we have decentralised state sync.

@@ -229,12 +229,12 @@ Because of that, we make additional change: if node was a chunk producer for som

This way, we minimise the number of required state syncs at each epoch. The exact algorithm needs a thorough description to satisfy different edge cases, so we will just leave a link to the full explanation: https://github.com/near/nearcore/issues/11213#issuecomment-2111234940.

- 
-### ChunkStateWitness 
+
+### ChunkStateWitness

The full structure is described [here](https://github.com/near/nearcore/blob/b8f08d9ded5b7cbae9d73883785902b76e4626fc/core/primitives/src/stateless_validation.rs#L247). Let's construct it sequentially, explaining why every field is needed.
Start from simple data:

- 
+
```rust
pub struct ChunkStateWitness {
    pub chunk_producer: AccountId,
@@ -246,10 +246,10 @@ pub struct ChunkStateWitness {

What is needed to prove `ShardChunkHeader`?

-The key function we have in codebase is [validate_chunk_with_chunk_extra_and_receipts_root](https://github.com/near/nearcore/blob/c2d80742187d9b8fc1bb672f16e3d5c144722742/chain/chain/src/validate.rs#L141). 
+The key function we have in the codebase is [validate_chunk_with_chunk_extra_and_receipts_root](https://github.com/near/nearcore/blob/c2d80742187d9b8fc1bb672f16e3d5c144722742/chain/chain/src/validate.rs#L141).
The main arguments there are `prev_chunk_extra: &ChunkExtra`, which stands for the execution result of the previous chunk, and `chunk_header`.

The most important field of `ShardChunkHeader` is `prev_state_root` - consider the latest implementation, `ShardChunkHeaderInnerV3`. It stands for the state root resulting from updating the shard for the previous block, which means applying the previous chunk if there are no missing chunks.

-So, chunk validator needs some way to run transactions and receipts from the previous chunk. Let's call it a "main state transition" and add two more fields to state witness: 
+So, a chunk validator needs some way to run the transactions and receipts from the previous chunk. Let's call it a "main state transition" and add two more fields to the state witness:
If there are no missing chunks, then it's enough to consider chunks from the previous block.
-So we add another field: 
+So we add another field:

```rust
    /// Non-strict superset of the receipts that must be applied, along with
-    /// information that allows these receipts to be verifiable against the 
+    /// information that allows these receipts to be verifiable against the
    /// blockchain history.
    pub source_receipt_proofs: HashMap,
```

@@ -304,7 +304,7 @@ Unfortunately, production and inclusion of any chunk **cannot be guaranteed**:
* chunk validators may not generate 2/3 endorsements;
* block producer may not receive enough information to include chunk.

-Let's handle this case as well. 
+Let's handle this case as well.
First, each chunk producer needs not just to prove the main state transition, but also all state transitions for the latest missing chunks:

```rust
@@ -318,23 +318,23 @@ First, each chunk producer needs not just to prove main state transition, but al
    pub implicit_transitions: Vec,
```

-Then, while our shard was missing chunks, other shards could still produce chunks, which could generate receipts targeting our shards. So, we need to extend `source_receipt_proofs`. 
+Then, while our shard was missing chunks, other shards could still produce chunks, which could generate receipts targeting our shard. So, we need to extend `source_receipt_proofs`.
The field structure doesn't change, but we need to carefully pick the set of source chunks, so that the different subsets cover all source receipts without intersection.

-Let's say B2 is the block that contains the last new chunk of shard S before chunk which state transition we execute, and B1 is the block that contains the last new chunk of shard S before B2. 
+Let's say B2 is the block that contains the last new chunk of shard S before the chunk whose state transition we execute, and B1 is the block that contains the last new chunk of shard S before B2.
Then, we will define the set of blocks B as the contiguous subsequence of blocks from B1 (EXCLUSIVE) to B2 (inclusive) in this chunk's chain (i.e. the linear chain that this chunk's parent block is on). Lastly, the source chunks are all chunks included in blocks from B.

- 
+
The last caveat is **new** transactions introduced by the chunk with `chunk_header`. As the chunk header introduces `tx_root` for them, we need to check the validity of this field as well. If we don't do it, a malicious chunk producer can include an invalid transaction, and if it gets its chunk endorsed, nodes which track the shard must either accept the invalid transaction or refuse to process the chunk, but the latter means that the shard will get stuck.

To validate the new `tx_root`, we also need a Merkle partial state to validate senders' balances, access keys, nonces, etc., which leads to the last two fields to be added:
- 
+
```rust
    pub new_transactions: Vec,
    pub new_transactions_validation_state: PartialState,
```

-The logic to produce `ChunkStateWitness` is [here](https://github.com/near/nearcore/blob/b8f08d9ded5b7cbae9d73883785902b76e4626fc/chain/client/src/stateless_validation/state_witness_producer.rs#L79). 
+The logic to produce `ChunkStateWitness` is [here](https://github.com/near/nearcore/blob/b8f08d9ded5b7cbae9d73883785902b76e4626fc/chain/client/src/stateless_validation/state_witness_producer.rs#L79).
It requires some minor changes to the logic of applying chunks, related to generating `ChunkStateTransition::base_state`. It is controlled by [this line](https://github.com/near/nearcore/blob/dc03a34101f77a17210873c4b5be28ef23443864/chain/chain/src/runtime/mod.rs#L977), which causes all nodes read while applying the chunk to be put inside a `TrieRecorder`. After applying the chunk, its contents are saved to `StateTransitionData`.
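The recording mechanism described above can be sketched as follows. This is a deliberately simplified stand-in (a plain dict instead of a trie, and illustrative names rather than nearcore's API): every node read while applying a chunk is captured, and that recorded subset is what later serves as the witness's base state:

```python
class RecordingState:
    """Simplified stand-in for a TrieRecorder: wraps state reads and
    records every key/value touched while applying a chunk."""

    def __init__(self, state):
        self.state = state
        self.recorded = {}

    def get(self, key):
        value = self.state.get(key)
        if value is not None:
            # Everything read here becomes part of the partial state
            # that is saved after applying the chunk.
            self.recorded[key] = value
        return value

state = {"alice.near": 100, "bob.near": 50, "carol.near": 7}
recorder = RecordingState(state)
recorder.get("alice.near")  # the chunk only touches alice's account

# Only the nodes actually read end up in the recorded base state.
assert recorder.recorded == {"alice.near": 100}
```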
@@ -377,9 +377,9 @@ Based on target number of mandates and total chunk validators stake, [here](http All the mandates are stored in new version of `EpochInfo` `EpochInfoV4` in [validator_mandates](https://github.com/near/nearcore/blob/164b7a367623eb651914eeaf1cbf3579c107c22d/core/primitives/src/epoch_manager.rs#L775) field. After that, for each height in the epoch, [EpochInfo::sample_chunk_validators](https://github.com/near/nearcore/blob/164b7a367623eb651914eeaf1cbf3579c107c22d/core/primitives/src/epoch_manager.rs#L1224) is called to return `ChunkValidatorStakeAssignment`. It is `Vec>` where s-th element corresponds to s-th shard in the epoch, contains ids of all chunk validator for that height and shard, alongside with its total mandate stake assigned to that shard. -`sample_chunk_validators` basically just shuffles `validator_mandates` among shards using height-specific seed. If there are no more than 1/3 malicious validators, then by Chernoff bound the probability that at least one shard is corrupted is small enough. **This is a reason why we can split validators among shards and still rely on basic consensus assumption**. +`sample_chunk_validators` basically just shuffles `validator_mandates` among shards using height-specific seed. If there are no more than 1/3 malicious validators, then by Chernoff bound the probability that at least one shard is corrupted is small enough. **This is a reason why we can split validators among shards and still rely on basic consensus assumption**. -This way, everyone tracking block headers can compute chunk validator assignment for each height and shard. +This way, everyone tracking block headers can compute chunk validator assignment for each height and shard. ### Size limits @@ -527,35 +527,35 @@ It is also worth mentioning that large state witness size makes witness distribu ## Alternatives -The only real alternative that was considered is the original nightshade proposal. 
The full overview of the differences can be found in the revised nightshade whitepaper at https://near.org/papers/nightshade. +The only real alternative that was considered is the original nightshade proposal. The full overview of the differences can be found in the revised nightshade whitepaper at https://near.org/papers/nightshade. ## Future possibilities * Integration with ZK allowing to get rid of large state witness distribution. If we treat state witness as a proof and ZK-ify it, anyone can validate that state witness indeed proves the new chunk header with much lower effort. Complexity of actual proof generation and computation indeed increases, but it can be distributed among chunk producers, and we can have separate concept of finality while allowing generic users to query optimistic chunks. -* Integration with resharding to further increase the number of shards and the total throughput. -* The sharding of non-validating nodes and services. There are a number of services that may benefit from tracking only a subset of shards. Some examples include the RPC, archival and read-RPC nodes. +* Integration with resharding to further increase the number of shards and the total throughput. +* The sharding of non-validating nodes and services. There are a number of services that may benefit from tracking only a subset of shards. Some examples include the RPC, archival and read-RPC nodes. ## Consequences ### Positive -* The validator nodes will need to track at most one shard. +* The validator nodes will need to track at most one shard. * The state will be held in memory making the chunk application much faster. -* The disk space hardware requirement will decrease. The top 100 nodes will need to store at most 2 shards at a time and the remaining nodes will not need to store any shards. -* Thanks to the above, in the future, it will be possible to reduce the gas costs and by doing so increase the throughput of the system. 
+* The disk space hardware requirement will decrease. The top 100 nodes will need to store at most 2 shards at a time and the remaining nodes will not need to store any shards. +* Thanks to the above, in the future, it will be possible to reduce the gas costs and by doing so increase the throughput of the system. ### Neutral -* The current approach to resharding will need to be revised to support generating state witness. -* The security assumptions will change. The responsibility will be moved from block producers to chunk validators and the security will become probabilistic. +* The current approach to resharding will need to be revised to support generating state witness. +* The security assumptions will change. The responsibility will be moved from block producers to chunk validators and the security will become probabilistic. ### Negative * The network bandwidth and memory hardware requirements will increase. - * The top 100 validators will need to store up to 2 shards in memory and participate in state witness distribution. - * The remaining validators will need to participate in state witness distribution. + * The top 100 validators will need to store up to 2 shards in memory and participate in state witness distribution. + * The remaining validators will need to participate in state witness distribution. * Additional limits will be put on the size of transactions, receipts and, more generally, cross shard communication. -* The dependency on cloud state sync will increase the centralization of the blockchain. This will be resolved separately by the decentralized state sync. +* The dependency on cloud state sync will increase the centralization of the blockchain. This will be resolved separately by the decentralized state sync. 
### Backwards Compatibility diff --git a/neps/nep-0514.md b/neps/nep-0514.md index 8711e6b65..a5a0fee48 100644 --- a/neps/nep-0514.md +++ b/neps/nep-0514.md @@ -2,7 +2,7 @@ NEP: 514 Title: Reducing the number of Block Producer Seats in `testnet` Authors: Nikolay Kurtov -Status: New +Status: Final DiscussionsTo: https://github.com/nearprotocol/neps/pull/514 Type: Protocol Version: 1.0.0 diff --git a/neps/nep-0519.md b/neps/nep-0519.md index e636d0df3..6afc9edd4 100644 --- a/neps/nep-0519.md +++ b/neps/nep-0519.md @@ -2,7 +2,7 @@ NEP: 519 Title: Yield Execution Authors: Akhi Singhania ; Saketh Are -Status: Draft +Status: Final DiscussionsTo: https://github.com/near/NEPs/pull/519 Type: Protocol Version: 0.0.0 diff --git a/neps/nep-0536.md b/neps/nep-0536.md index 6f94d751a..5bc849a0c 100644 --- a/neps/nep-0536.md +++ b/neps/nep-0536.md @@ -2,7 +2,7 @@ NEP: 536 Title: Reduce the number of gas refunds Authors: Evgeny Kuzyakov , Bowen Wang -Status: New +Status: Final DiscussionsTo: https://github.com/near/NEPs/pull/536 Type: Protocol Version: 1.0.0 diff --git a/neps/nep-0539.md b/neps/nep-0539.md index ca2dd0e4d..3328c253e 100644 --- a/neps/nep-0539.md +++ b/neps/nep-0539.md @@ -2,7 +2,7 @@ NEP: 539 Title: Cross-Shard Congestion Control Authors: Waclaw Banasik , Jakob Meier -Status: New +Status: Final DiscussionsTo: https://github.com/nearprotocol/neps/pull/539 Type: Protocol Version: 1.0.0 @@ -172,9 +172,9 @@ In the pseudo code above, we borrow the [`mix`](https://docs.gl/sl4/mix) function from GLSL for linear interpolation. > `mix(x, y, a)` -> +> > `mix` performs a linear interpolation between x and y using a to weight between -> them. The return value is computed as $x \times (1 - a) + y \times a$. +> them. The return value is computed as $x \times (1 - a) + y \times a$. More importantly, we add a more targeted rule to reject all transactions *targeting* a shard with a congestion level above a certain threshold. 
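The `mix` helper quoted above is easy to reproduce. The sketch below pairs it with a hypothetical interpolation of an outgoing gas limit between an assumed 30 Pgas maximum and the 1 Pgas minimum applied at full congestion; the 30 Pgas figure is an illustration, not a protocol constant:

```python
def mix(x, y, a):
    # GLSL-style linear interpolation: x * (1 - a) + y * a.
    return x * (1 - a) + y * a

# Hypothetical outgoing gas limit, shrinking linearly from an assumed
# 30 Pgas (no congestion) down to 1 Pgas (full congestion).
MAX_OUTGOING_PGAS = 30.0
MIN_OUTGOING_PGAS = 1.0

def outgoing_gas_limit(congestion_level):
    return mix(MAX_OUTGOING_PGAS, MIN_OUTGOING_PGAS, congestion_level)

assert mix(0.0, 10.0, 0.25) == 2.5
assert outgoing_gas_limit(0.0) == 30.0
assert outgoing_gas_limit(1.0) == 1.0
```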
@@ -252,9 +252,9 @@ The new chunk execution then follows this order. if congestion >= 1.0: # Maximum congestion: reduce to minimum speed if current_shard == allowed_shard[receiver]: - outgoing_gas_limit[receiver] = 1 Pgas + outgoing_gas_limit[receiver] = 1 Pgas else: - outgoing_gas_limit[receiver] = 0 + outgoing_gas_limit[receiver] = 0 else: # Green or Amber # linear interpolation based on congestion level @@ -344,7 +344,7 @@ pub struct ShardBufferedReceiptIndices { } ``` -The `BUFFERED_RECEIPT` column stores receipts keyed by +The `BUFFERED_RECEIPT` column stores receipts keyed by `TrieKey::BufferedReceipt{ receiving_shard: ShardId, index: u64 }`. The `BufferedReceiptIndices` map defines which queues exist, which changes @@ -369,7 +369,7 @@ utilization as long as the ratio between burnt and attached gas in receipts is above 1 to 20. A shorter delayed queue would result in lower delays but in our model -simulations, we saw reduced utilization even in simple and balanced workloads. +simulations, we saw reduced utilization even in simple and balanced workloads. The 1 GB of memory is a target value for the control algorithm to try and stay below. With receipts in the normal range of sizes seen in today's traffic, we @@ -439,11 +439,11 @@ specification above but are defined in the reference implementation. The congestion information is computed based on the gas and size of the incoming queue and the outgoing buffers. A naive implementation would just iterate over all of the receipts in the queue and buffers and sum up the relevant metrics. This -approach is slow and, in the context of stateless validation, would add too much -to the state witness size. In order to prevent those issues we consider two +approach is slow and, in the context of stateless validation, would add too much +to the state witness size. In order to prevent those issues we consider two alternative optimizations. 
Both use the same principle of caching the previously -calculated metrics and updating them based on the changes to the incoming queue -and outgoing buffers. +calculated metrics and updating them based on the changes to the incoming queue +and outgoing buffers. After applying a chunk, we store detailed information of the shard in the chunk extra. Unlike the shard header, this is only stored on the shard and not shared @@ -469,7 +469,7 @@ pub struct CongestionInfo { This implementation allows to efficiently update the `StoredReceiptsInfo` during chunk application by starting with the information of the previous chunk and -applying only the changes. +applying only the changes. Regarding integer sizes, `delayed_receipts_gas` and `buffered_receipts_gas` use 128-bit unsigned integers because 64-bit would not always be enough. `u64::MAX` @@ -550,55 +550,55 @@ information](#efficiently-computing-the-congestion-information) creates a dependence on the previous block when processing a block. For a fully synced node this requirement is always fulfilled because we keep at least 3 epochs of blocks. However in state sync we start processing from an arbitrary place in -chain without access to full history. +chain without access to full history. In order to integrate the congestion control and state sync features we will -add extra steps in state sync to download the blocks that may be needed in -order to finalize state sync. +add extra steps in state sync to download the blocks that may be needed in +order to finalize state sync. The blocks that are needed are the `sync hash` block, the `previous block` where -state sync creates chunk extra in order to kick off block sync and the `previous -previous block` that is now needed in order to process the `previous block`. On +state sync creates chunk extra in order to kick off block sync and the `previous +previous block` that is now needed in order to process the `previous block`. 
On top of that we may need to download further blocks to ensure that every shard has -at least one new chunk in the blocks leading up to the sync hash block. +at least one new chunk in the blocks leading up to the sync hash block. ## Integration with resharding -Resharding is a process wherin we change the shard layout - the assignment of +Resharding is a process wherin we change the shard layout - the assignment of accound ids to shards. The centerpiece of resharding is moving the trie / state -records from parent shards to children shards. It's important to preserve the +records from parent shards to children shards. It's important to preserve the ability to perform resharding while adding other protocol features such as congestion control. Below is a short description how resharding and congestion control can be -integrated, in particular how to reshard the new trie columns - the outgoing buffers. +integrated, in particular how to reshard the new trie columns - the outgoing buffers. -For simplicity we'll only consider splitting a single parent shard into multiple -children shards which is currently the only supported operation. +For simplicity we'll only consider splitting a single parent shard into multiple +children shards which is currently the only supported operation. -The actual implementation of this integration will be done independently and -outside of this effort. +The actual implementation of this integration will be done independently and +outside of this effort. Importantly the resharding affects both the shard that is being split and all the -other shards. +other shards. #### Changes to the shard under resharding The outgoing buffers of the parent shard can be split among children by iterating -all of the receipts in each buffer and inserting it to appropriate child shard. -The assignment can in theory be arbitrary e.g. all receipts can be reassigned to -a single shard. 
In practice it would make sense to either split the receipts
-equally between children or based on the sender account id of the receipt. 
+all of the receipts in each buffer and inserting each into the appropriate child shard.
+The assignment can in theory be arbitrary, e.g. all receipts can be reassigned to
+a single shard. In practice it would make sense to either split the receipts
+equally between the children or split them based on the sender account id of the receipt.

Special consideration should be given to refund receipts, where the sender
account is "system", which may belong to neither the parent nor the children shards.
-Any assignment of such receipts is fine. 
+Any assignment of such receipts is fine.

#### Changes to the other shards

-The other shards, that is all shards that are not under resharding, have an 
+The other shards, that is all shards that are not under resharding, have an
outgoing buffer to the shard under resharding. This buffer should be split into
one outgoing buffer per child shard. The buffer can be split by iterating the
receipts and reassigning each to either of the child shards. Each receipt can
-be reassigned based on it's receiver account id and the new shard layout. 
+be reassigned based on its receiver account id and the new shard layout.

## Alternatives