diff --git a/.nojekyll b/.nojekyll
new file mode 100644
index 00000000..e69de29b
diff --git a/404.html b/404.html
new file mode 100644
index 00000000..8a0958ce
--- /dev/null
+++ b/404.html
@@ -0,0 +1,429 @@
[page head and navigation: "Polkadot Developer Docs"; MkDocs Material markup elided]
[page body: "404 - Not found"; header, search, navigation, and footer markup elided]
\ No newline at end of file
diff --git a/LICENSE/index.html b/LICENSE/index.html
new file mode 100644
index 00000000..80b6abad
--- /dev/null
+++ b/LICENSE/index.html
@@ -0,0 +1,3698 @@
[page head and navigation: "LICENSE | Polkadot Developer Docs"; MkDocs Material markup elided]
LICENSE

Attribution 4.0 International

=======================================================================

Creative Commons Corporation ("Creative Commons") is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an "as-is" basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.

Using Creative Commons Public Licenses

Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses.

     Considerations for licensors: Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC-licensed material, or material used under an exception or limitation to copyright. More considerations for licensors: wiki.creativecommons.org/Considerations_for_licensors

     Considerations for the public: By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor's permission is not necessary for any reason--for example, because of any applicable exception or limitation to copyright--then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. More considerations for the public: wiki.creativecommons.org/Considerations_for_licensees

=======================================================================

Creative Commons Attribution 4.0 International Public License

By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.

Section 1 -- Definitions.

a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.

b. Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License.

c. Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.

d. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements.

e. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material.

f. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License.

g. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.

h. Licensor means the individual(s) or entity(ies) granting rights under this Public License.

i. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.

j. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.

k. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning.

Section 2 -- Scope.

a. License grant.

   1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:

        a. reproduce and Share the Licensed Material, in whole or in part; and

        b. produce, reproduce, and Share Adapted Material.

   2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.

   3. Term. The term of this Public License is specified in Section 6(a).

   4. Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material.

   5. Downstream recipients.

        a. Offer from the Licensor -- Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License.

        b. No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.

   6. No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i).

b. Other rights.

   1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.

   2. Patent and trademark rights are not licensed under this Public License.

   3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties.

Section 3 -- License Conditions.

Your exercise of the Licensed Rights is expressly made subject to the following conditions.

a. Attribution.

   1. If You Share the Licensed Material (including in modified form), You must:

        a. retain the following if it is supplied by the Licensor with the Licensed Material:

             i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated);

            ii. a copyright notice;

           iii. a notice that refers to this Public License;

            iv. a notice that refers to the disclaimer of warranties;

             v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable;

        b. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and

        c. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License.

   2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.

   3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable.

   4. If You Share Adapted Material You produce, the Adapter's License You apply must not prevent recipients of the Adapted Material from complying with this Public License.

Section 4 -- Sui Generis Database Rights.

Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material:

a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database;

b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material; and

c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database.

For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.

Section 5 -- Disclaimer of Warranties and Limitation of Liability.

a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.

b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.

c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.

Section 6 -- Term and Termination.

a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.

b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates:

   1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or

   2. upon express reinstatement by the Licensor.

   For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License.

c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.

d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.

Section 7 -- Other Terms and Conditions.

a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.

b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.

Section 8 -- Interpretation.

a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.

b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.

c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.

d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.

=======================================================================

Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the "Licensor." The text of the Creative Commons public licenses is dedicated to the public domain under the CC0 Public Domain Dedication. Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark "Creative Commons" or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.

Creative Commons may be contacted at creativecommons.org
[LICENSE page footer and closing MkDocs Material markup elided]
\ No newline at end of file
diff --git a/assets/images/favicon.png b/assets/images/favicon.png
new file mode 100644
index 00000000..09624f03
Binary files /dev/null and b/assets/images/favicon.png differ
diff --git a/assets/images/final-cta-background.png b/assets/images/final-cta-background.png
new file mode 100644
index 00000000..bf955291
Binary files /dev/null and b/assets/images/final-cta-background.png differ
diff --git a/assets/javascripts/bundle.081f42fc.min.js b/assets/javascripts/bundle.081f42fc.min.js
new file mode 100644
index 00000000..32734cd3
--- /dev/null
+++ b/assets/javascripts/bundle.081f42fc.min.js
@@ -0,0 +1,29 @@
[minified Material for MkDocs JavaScript bundle: 29 lines of vendor code elided, including the focus-visible polyfill, clipboard.js v2.0.11 (MIT, Zeno Rocha), escape-html (MIT), the RxJS runtime, and Mermaid theme styles]
line{fill:var(--md-mermaid-sequence-actorman-bg-color);stroke:var(--md-mermaid-sequence-actorman-line-color)}.messageLine0,.messageLine1{stroke:var(--md-mermaid-sequence-message-line-color)}.note{fill:var(--md-mermaid-sequence-note-bg-color);stroke:var(--md-mermaid-sequence-note-border-color)}.loopText,.loopText>tspan,.messageText,.noteText>tspan{stroke:none;font-family:var(--md-mermaid-font-family)!important}.messageText{fill:var(--md-mermaid-sequence-message-fg-color)}.loopText,.loopText>tspan{fill:var(--md-mermaid-sequence-loop-fg-color)}.noteText>tspan{fill:var(--md-mermaid-sequence-note-fg-color)}#arrowhead path{fill:var(--md-mermaid-sequence-message-line-color);stroke:none}.loopLine{fill:var(--md-mermaid-sequence-loop-bg-color);stroke:var(--md-mermaid-sequence-loop-border-color)}.labelBox{fill:var(--md-mermaid-sequence-label-bg-color);stroke:none}.labelText,.labelText>span{fill:var(--md-mermaid-sequence-label-fg-color);font-family:var(--md-mermaid-font-family)}.sequenceNumber{fill:var(--md-mermaid-sequence-number-fg-color)}rect.rect{fill:var(--md-mermaid-sequence-box-bg-color);stroke:none}rect.rect+text.text{fill:var(--md-mermaid-sequence-box-fg-color)}defs #sequencenumber{fill:var(--md-mermaid-sequence-number-bg-color)!important}";var Br,Da=0;function Va(){return typeof mermaid=="undefined"||mermaid instanceof Element?wt("https://unpkg.com/mermaid@10/dist/mermaid.min.js"):I(void 0)}function In(e){return e.classList.remove("mermaid"),Br||(Br=Va().pipe(E(()=>mermaid.initialize({startOnLoad:!1,themeCSS:Rn,sequence:{actorFontSize:"16px",messageFontSize:"16px",noteFontSize:"16px"}})),m(()=>{}),B(1))),Br.subscribe(()=>ao(this,null,function*(){e.classList.add("mermaid");let t=`__mermaid_${Da++}`,r=x("div",{class:"mermaid"}),o=e.textContent,{svg:n,fn:i}=yield mermaid.render(t,o),a=r.attachShadow({mode:"closed"});a.innerHTML=n,e.replaceWith(r),i==null||i(a)})),Br.pipe(m(()=>({ref:e})))}var Fn=x("table");function jn(e){return e.replaceWith(Fn),Fn.replaceWith(On(e)),I({ref:e})}function Na(e){let t=e.find(r=>r.checked)||e[0];return S(...e.map(r=>d(r,"change").pipe(m(()=>P(`label[for="${r.id}"]`))))).pipe(Q(P(`label[for="${t.id}"]`)),m(r=>({active:r})))}function Wn(e,{viewport$:t,target$:r}){let o=P(".tabbed-labels",e),n=$(":scope > input",e),i=Qr("prev");e.append(i);let a=Qr("next");return e.append(a),C(()=>{let s=new g,p=s.pipe(X(),ne(!0));z([s,ge(e)]).pipe(U(p),Le(1,me)).subscribe({next([{active:c},l]){let f=Ue(c),{width:u}=ce(c);e.style.setProperty("--md-indicator-x",`${f.x}px`),e.style.setProperty("--md-indicator-width",`${u}px`);let h=pr(o);(f.xh.x+l.width)&&o.scrollTo({left:Math.max(0,f.x-16),behavior:"smooth"})},complete(){e.style.removeProperty("--md-indicator-x"),e.style.removeProperty("--md-indicator-width")}}),z([De(o),ge(o)]).pipe(U(p)).subscribe(([c,l])=>{let f=Tt(o);i.hidden=c.x<16,a.hidden=c.x>f.width-l.width-16}),S(d(i,"click").pipe(m(()=>-1)),d(a,"click").pipe(m(()=>1))).pipe(U(p)).subscribe(c=>{let{width:l}=ce(o);o.scrollBy({left:l*c,behavior:"smooth"})}),r.pipe(U(p),v(c=>n.includes(c))).subscribe(c=>c.click()),o.classList.add("tabbed-labels--linked");for(let c of n){let l=P(`label[for="${c.id}"]`);l.replaceChildren(x("a",{href:`#${l.htmlFor}`,tabIndex:-1},...Array.from(l.childNodes))),d(l.firstElementChild,"click").pipe(U(p),v(f=>!(f.metaKey||f.ctrlKey)),E(f=>{f.preventDefault(),f.stopPropagation()})).subscribe(()=>{history.replaceState({},"",`#${l.htmlFor}`),l.click()})}return G("content.tabs.link")&&s.pipe(Ce(1),ee(t)).subscribe(([{active:c},{offset:l}])=>{let 
f=c.innerText.trim();if(c.hasAttribute("data-md-switching"))c.removeAttribute("data-md-switching");else{let u=e.offsetTop-l.y;for(let w of $("[data-tabs]"))for(let A of $(":scope > input",w)){let te=P(`label[for="${A.id}"]`);if(te!==c&&te.innerText.trim()===f){te.setAttribute("data-md-switching",""),A.click();break}}window.scrollTo({top:e.offsetTop-u});let h=__md_get("__tabs")||[];__md_set("__tabs",[...new Set([f,...h])])}}),s.pipe(U(p)).subscribe(()=>{for(let c of $("audio, video",e))c.pause()}),tt(e).pipe(b(()=>Na(n)),E(c=>s.next(c)),L(()=>s.complete()),m(c=>R({ref:e},c)))}).pipe(Qe(se))}function Un(e,{viewport$:t,target$:r,print$:o}){return S(...$(".annotate:not(.highlight)",e).map(n=>Cn(n,{target$:r,print$:o})),...$("pre:not(.mermaid) > code",e).map(n=>$n(n,{target$:r,print$:o})),...$("pre.mermaid",e).map(n=>In(n)),...$("table:not([class])",e).map(n=>jn(n)),...$("details",e).map(n=>Pn(n,{target$:r,print$:o})),...$("[data-tabs]",e).map(n=>Wn(n,{viewport$:t,target$:r})),...$("[title]",e).filter(()=>G("content.tooltips")).map(n=>lt(n,{viewport$:t})))}function za(e,{alert$:t}){return t.pipe(b(r=>S(I(!0),I(!1).pipe(Ge(2e3))).pipe(m(o=>({message:r,active:o})))))}function Dn(e,t){let r=P(".md-typeset",e);return C(()=>{let o=new g;return o.subscribe(({message:n,active:i})=>{e.classList.toggle("md-dialog--active",i),r.textContent=n}),za(e,t).pipe(E(n=>o.next(n)),L(()=>o.complete()),m(n=>R({ref:e},n)))})}var qa=0;function Qa(e,t){document.body.append(e);let{width:r}=ce(e);e.style.setProperty("--md-tooltip-width",`${r}px`),e.remove();let o=cr(t),n=typeof o!="undefined"?De(o):I({x:0,y:0}),i=S(et(t),kt(t)).pipe(K());return z([i,n]).pipe(m(([a,s])=>{let{x:p,y:c}=Ue(t),l=ce(t),f=t.closest("table");return f&&t.parentElement&&(p+=f.offsetLeft+t.parentElement.offsetLeft,c+=f.offsetTop+t.parentElement.offsetTop),{active:a,offset:{x:p-s.x+l.width/2-r/2,y:c-s.y+l.height+8}}}))}function Vn(e){let t=e.title;if(!t.length)return M;let r=`__tooltip_${qa++}`,o=Pt(r,"inline"),n=P(".md-typeset",o);return n.innerHTML=t,C(()=>{let i=new g;return i.subscribe({next({offset:a}){o.style.setProperty("--md-tooltip-x",`${a.x}px`),o.style.setProperty("--md-tooltip-y",`${a.y}px`)},complete(){o.style.removeProperty("--md-tooltip-x"),o.style.removeProperty("--md-tooltip-y")}}),S(i.pipe(v(({active:a})=>a)),i.pipe(_e(250),v(({active:a})=>!a))).subscribe({next({active:a}){a?(e.insertAdjacentElement("afterend",o),e.setAttribute("aria-describedby",r),e.removeAttribute("title")):(o.remove(),e.removeAttribute("aria-describedby"),e.setAttribute("title",t))},complete(){o.remove(),e.removeAttribute("aria-describedby"),e.setAttribute("title",t)}}),i.pipe(Le(16,me)).subscribe(({active:a})=>{o.classList.toggle("md-tooltip--active",a)}),i.pipe(ct(125,me),v(()=>!!e.offsetParent),m(()=>e.offsetParent.getBoundingClientRect()),m(({x:a})=>a)).subscribe({next(a){a?o.style.setProperty("--md-tooltip-0",`${-a}px`):o.style.removeProperty("--md-tooltip-0")},complete(){o.style.removeProperty("--md-tooltip-0")}}),Qa(o,e).pipe(E(a=>i.next(a)),L(()=>i.complete()),m(a=>R({ref:e},a)))}).pipe(Qe(se))}function Ka({viewport$:e}){if(!G("header.autohide"))return I(!1);let t=e.pipe(m(({offset:{y:n}})=>n),Ye(2,1),m(([n,i])=>[nMath.abs(i-n.y)>100),m(([,[n]])=>n),K()),o=Ve("search");return z([e,o]).pipe(m(([{offset:n},i])=>n.y>400&&!i),K(),b(n=>n?r:I(!1)),Q(!1))}function Nn(e,t){return C(()=>z([ge(e),Ka(t)])).pipe(m(([{height:r},o])=>({height:r,hidden:o})),K((r,o)=>r.height===o.height&&r.hidden===o.hidden),B(1))}function zn(e,{header$:t,main$:r}){return C(()=>{let 
o=new g,n=o.pipe(X(),ne(!0));o.pipe(Z("active"),We(t)).subscribe(([{active:a},{hidden:s}])=>{e.classList.toggle("md-header--shadow",a&&!s),e.hidden=s});let i=ue($("[title]",e)).pipe(v(()=>G("content.tooltips")),oe(a=>Vn(a)));return r.subscribe(o),t.pipe(U(n),m(a=>R({ref:e},a)),Pe(i.pipe(U(n))))})}function Ya(e,{viewport$:t,header$:r}){return mr(e,{viewport$:t,header$:r}).pipe(m(({offset:{y:o}})=>{let{height:n}=ce(e);return{active:o>=n}}),Z("active"))}function qn(e,t){return C(()=>{let r=new g;r.subscribe({next({active:n}){e.classList.toggle("md-header__title--active",n)},complete(){e.classList.remove("md-header__title--active")}});let o=fe(".md-content h1");return typeof o=="undefined"?M:Ya(o,t).pipe(E(n=>r.next(n)),L(()=>r.complete()),m(n=>R({ref:e},n)))})}function Qn(e,{viewport$:t,header$:r}){let o=r.pipe(m(({height:i})=>i),K()),n=o.pipe(b(()=>ge(e).pipe(m(({height:i})=>({top:e.offsetTop,bottom:e.offsetTop+i})),Z("bottom"))));return z([o,n,t]).pipe(m(([i,{top:a,bottom:s},{offset:{y:p},size:{height:c}}])=>(c=Math.max(0,c-Math.max(0,a-p,i)-Math.max(0,c+p-s)),{offset:a-i,height:c,active:a-i<=p})),K((i,a)=>i.offset===a.offset&&i.height===a.height&&i.active===a.active))}function Ba(e){let t=__md_get("__palette")||{index:e.findIndex(o=>matchMedia(o.getAttribute("data-md-color-media")).matches)},r=Math.max(0,Math.min(t.index,e.length-1));return I(...e).pipe(oe(o=>d(o,"change").pipe(m(()=>o))),Q(e[r]),m(o=>({index:e.indexOf(o),color:{media:o.getAttribute("data-md-color-media"),scheme:o.getAttribute("data-md-color-scheme"),primary:o.getAttribute("data-md-color-primary"),accent:o.getAttribute("data-md-color-accent")}})),B(1))}function Kn(e){let t=$("input",e),r=x("meta",{name:"theme-color"});document.head.appendChild(r);let o=x("meta",{name:"color-scheme"});document.head.appendChild(o);let n=$t("(prefers-color-scheme: light)");return C(()=>{let i=new g;return i.subscribe(a=>{if(document.body.setAttribute("data-md-color-switching",""),a.color.media==="(prefers-color-scheme)"){let s=matchMedia("(prefers-color-scheme: light)"),p=document.querySelector(s.matches?"[data-md-color-media='(prefers-color-scheme: light)']":"[data-md-color-media='(prefers-color-scheme: dark)']");a.color.scheme=p.getAttribute("data-md-color-scheme"),a.color.primary=p.getAttribute("data-md-color-primary"),a.color.accent=p.getAttribute("data-md-color-accent")}for(let[s,p]of Object.entries(a.color))document.body.setAttribute(`data-md-color-${s}`,p);for(let s=0;sa.key==="Enter"),ee(i,(a,s)=>s)).subscribe(({index:a})=>{a=(a+1)%t.length,t[a].click(),t[a].focus()}),i.pipe(m(()=>{let a=Se("header"),s=window.getComputedStyle(a);return o.content=s.colorScheme,s.backgroundColor.match(/\d+/g).map(p=>(+p).toString(16).padStart(2,"0")).join("")})).subscribe(a=>r.content=`#${a}`),i.pipe(be(se)).subscribe(()=>{document.body.removeAttribute("data-md-color-switching")}),Ba(t).pipe(U(n.pipe(Ce(1))),st(),E(a=>i.next(a)),L(()=>i.complete()),m(a=>R({ref:e},a)))})}function Yn(e,{progress$:t}){return C(()=>{let r=new g;return r.subscribe(({value:o})=>{e.style.setProperty("--md-progress-value",`${o}`)}),t.pipe(E(o=>r.next({value:o})),L(()=>r.complete()),m(o=>({ref:e,value:o})))})}var Gr=Vt(Yr());function Ga(e){e.setAttribute("data-md-copying","");let t=e.closest("[data-copy]"),r=t?t.getAttribute("data-copy"):e.innerText;return e.removeAttribute("data-md-copying"),r.trimEnd()}function Bn({alert$:e}){Gr.default.isSupported()&&new F(t=>{new Gr.default("[data-clipboard-target], 
[data-clipboard-text]",{text:r=>r.getAttribute("data-clipboard-text")||Ga(P(r.getAttribute("data-clipboard-target")))}).on("success",r=>t.next(r))}).pipe(E(t=>{t.trigger.focus()}),m(()=>Ee("clipboard.copied"))).subscribe(e)}function Gn(e,t){return e.protocol=t.protocol,e.hostname=t.hostname,e}function Ja(e,t){let r=new Map;for(let o of $("url",e)){let n=P("loc",o),i=[Gn(new URL(n.textContent),t)];r.set(`${i[0]}`,i);for(let a of $("[rel=alternate]",o)){let s=a.getAttribute("href");s!=null&&i.push(Gn(new URL(s),t))}}return r}function ur(e){return mn(new URL("sitemap.xml",e)).pipe(m(t=>Ja(t,new URL(e))),ve(()=>I(new Map)))}function Xa(e,t){if(!(e.target instanceof Element))return M;let r=e.target.closest("a");if(r===null)return M;if(r.target||e.metaKey||e.ctrlKey)return M;let o=new URL(r.href);return o.search=o.hash="",t.has(`${o}`)?(e.preventDefault(),I(new URL(r.href))):M}function Jn(e){let t=new Map;for(let r of $(":scope > *",e.head))t.set(r.outerHTML,r);return t}function Xn(e){for(let t of $("[href], [src]",e))for(let r of["href","src"]){let o=t.getAttribute(r);if(o&&!/^(?:[a-z]+:)?\/\//i.test(o)){t[r]=t[r];break}}return I(e)}function Za(e){for(let o of["[data-md-component=announce]","[data-md-component=container]","[data-md-component=header-topic]","[data-md-component=outdated]","[data-md-component=logo]","[data-md-component=skip]",...G("navigation.tabs.sticky")?["[data-md-component=tabs]"]:[]]){let n=fe(o),i=fe(o,e);typeof n!="undefined"&&typeof i!="undefined"&&n.replaceWith(i)}let t=Jn(document);for(let[o,n]of Jn(e))t.has(o)?t.delete(o):document.head.appendChild(n);for(let o of t.values()){let n=o.getAttribute("name");n!=="theme-color"&&n!=="color-scheme"&&o.remove()}let r=Se("container");return je($("script",r)).pipe(b(o=>{let n=e.createElement("script");if(o.src){for(let i of o.getAttributeNames())n.setAttribute(i,o.getAttribute(i));return o.replaceWith(n),new F(i=>{n.onload=()=>i.complete()})}else return n.textContent=o.textContent,o.replaceWith(n),M}),X(),ne(document))}function Zn({location$:e,viewport$:t,progress$:r}){let o=ye();if(location.protocol==="file:")return M;let n=ur(o.base);I(document).subscribe(Xn);let i=d(document.body,"click").pipe(We(n),b(([p,c])=>Xa(p,c)),pe()),a=d(window,"popstate").pipe(m(xe),pe());i.pipe(ee(t)).subscribe(([p,{offset:c}])=>{history.replaceState(c,""),history.pushState(null,"",p)}),S(i,a).subscribe(e);let s=e.pipe(Z("pathname"),b(p=>ln(p,{progress$:r}).pipe(ve(()=>(pt(p,!0),M)))),b(Xn),b(Za),pe());return S(s.pipe(ee(e,(p,c)=>c)),s.pipe(b(()=>e),Z("pathname"),b(()=>e),Z("hash")),e.pipe(K((p,c)=>p.pathname===c.pathname&&p.hash===c.hash),b(()=>i),E(()=>history.back()))).subscribe(p=>{var c,l;history.state!==null||!p.hash?window.scrollTo(0,(l=(c=history.state)==null?void 0:c.y)!=null?l:0):(history.scrollRestoration="auto",sn(p.hash),history.scrollRestoration="manual")}),e.subscribe(()=>{history.scrollRestoration="manual"}),d(window,"beforeunload").subscribe(()=>{history.scrollRestoration="auto"}),t.pipe(Z("offset"),_e(100)).subscribe(({offset:p})=>{history.replaceState(p,"")}),s}var ri=Vt(ti());function oi(e){let t=e.separator.split("|").map(n=>n.replace(/(\(\?[!=<][^)]+\))/g,"").length===0?"\uFFFD":n).join("|"),r=new RegExp(t,"img"),o=(n,i,a)=>`${i}${a}`;return n=>{n=n.replace(/[\s*+\-:~^]+/g," ").trim();let i=new RegExp(`(^|${e.separator}|)(${n.replace(/[|\\{}()[\]^$+*?.-]/g,"\\$&").replace(r,"|")})`,"img");return a=>(0,ri.default)(a).replace(i,o).replace(/<\/mark>(\s+)]*>/img,"$1")}}function It(e){return e.type===1}function dr(e){return 
e.type===3}function ni(e,t){let r=vn(e);return S(I(location.protocol!=="file:"),Ve("search")).pipe(Ae(o=>o),b(()=>t)).subscribe(({config:o,docs:n})=>r.next({type:0,data:{config:o,docs:n,options:{suggest:G("search.suggest")}}})),r}function ii({document$:e}){let t=ye(),r=Ne(new URL("../versions.json",t.base)).pipe(ve(()=>M)),o=r.pipe(m(n=>{let[,i]=t.base.match(/([^/]+)\/?$/);return n.find(({version:a,aliases:s})=>a===i||s.includes(i))||n[0]}));r.pipe(m(n=>new Map(n.map(i=>[`${new URL(`../${i.version}/`,t.base)}`,i]))),b(n=>d(document.body,"click").pipe(v(i=>!i.metaKey&&!i.ctrlKey),ee(o),b(([i,a])=>{if(i.target instanceof Element){let s=i.target.closest("a");if(s&&!s.target&&n.has(s.href)){let p=s.href;return!i.target.closest(".md-version")&&n.get(p)===a?M:(i.preventDefault(),I(p))}}return M}),b(i=>ur(new URL(i)).pipe(m(a=>{let p=xe().href.replace(t.base,i);return a.has(p.split("#")[0])?new URL(p):new URL(i)})))))).subscribe(n=>pt(n,!0)),z([r,o]).subscribe(([n,i])=>{P(".md-header__topic").appendChild(Mn(n,i))}),e.pipe(b(()=>o)).subscribe(n=>{var a;let i=__md_get("__outdated",sessionStorage);if(i===null){i=!0;let s=((a=t.version)==null?void 0:a.default)||"latest";Array.isArray(s)||(s=[s]);e:for(let p of s)for(let c of n.aliases.concat(n.version))if(new RegExp(p,"i").test(c)){i=!1;break e}__md_set("__outdated",i,sessionStorage)}if(i)for(let s of ae("outdated"))s.hidden=!1})}function ns(e,{worker$:t}){let{searchParams:r}=xe();r.has("q")&&(Je("search",!0),e.value=r.get("q"),e.focus(),Ve("search").pipe(Ae(i=>!i)).subscribe(()=>{let i=xe();i.searchParams.delete("q"),history.replaceState({},"",`${i}`)}));let o=et(e),n=S(t.pipe(Ae(It)),d(e,"keyup"),o).pipe(m(()=>e.value),K());return z([n,o]).pipe(m(([i,a])=>({value:i,focus:a})),B(1))}function ai(e,{worker$:t}){let r=new g,o=r.pipe(X(),ne(!0));z([t.pipe(Ae(It)),r],(i,a)=>a).pipe(Z("value")).subscribe(({value:i})=>t.next({type:2,data:i})),r.pipe(Z("focus")).subscribe(({focus:i})=>{i&&Je("search",i)}),d(e.form,"reset").pipe(U(o)).subscribe(()=>e.focus());let n=P("header [for=__search]");return d(n,"click").subscribe(()=>e.focus()),ns(e,{worker$:t}).pipe(E(i=>r.next(i)),L(()=>r.complete()),m(i=>R({ref:e},i)),B(1))}function si(e,{worker$:t,query$:r}){let o=new g,n=tn(e.parentElement).pipe(v(Boolean)),i=e.parentElement,a=P(":scope > :first-child",e),s=P(":scope > :last-child",e);Ve("search").subscribe(l=>s.setAttribute("role",l?"list":"presentation")),o.pipe(ee(r),Ur(t.pipe(Ae(It)))).subscribe(([{items:l},{value:f}])=>{switch(l.length){case 0:a.textContent=f.length?Ee("search.result.none"):Ee("search.result.placeholder");break;case 1:a.textContent=Ee("search.result.one");break;default:let u=sr(l.length);a.textContent=Ee("search.result.other",u)}});let p=o.pipe(E(()=>s.innerHTML=""),b(({items:l})=>S(I(...l.slice(0,10)),I(...l.slice(10)).pipe(Ye(4),Vr(n),b(([f])=>f)))),m(Tn),pe());return p.subscribe(l=>s.appendChild(l)),p.pipe(oe(l=>{let f=fe("details",l);return typeof f=="undefined"?M:d(f,"toggle").pipe(U(o),m(()=>f))})).subscribe(l=>{l.open===!1&&l.offsetTop<=i.scrollTop&&i.scrollTo({top:l.offsetTop})}),t.pipe(v(dr),m(({data:l})=>l)).pipe(E(l=>o.next(l)),L(()=>o.complete()),m(l=>R({ref:e},l)))}function is(e,{query$:t}){return t.pipe(m(({value:r})=>{let o=xe();return o.hash="",r=r.replace(/\s+/g,"+").replace(/&/g,"%26").replace(/=/g,"%3D"),o.search=`q=${r}`,{url:o}}))}function ci(e,t){let r=new g,o=r.pipe(X(),ne(!0));return 
r.subscribe(({url:n})=>{e.setAttribute("data-clipboard-text",e.href),e.href=`${n}`}),d(e,"click").pipe(U(o)).subscribe(n=>n.preventDefault()),is(e,t).pipe(E(n=>r.next(n)),L(()=>r.complete()),m(n=>R({ref:e},n)))}function pi(e,{worker$:t,keyboard$:r}){let o=new g,n=Se("search-query"),i=S(d(n,"keydown"),d(n,"focus")).pipe(be(se),m(()=>n.value),K());return o.pipe(We(i),m(([{suggest:s},p])=>{let c=p.split(/([\s-]+)/);if(s!=null&&s.length&&c[c.length-1]){let l=s[s.length-1];l.startsWith(c[c.length-1])&&(c[c.length-1]=l)}else c.length=0;return c})).subscribe(s=>e.innerHTML=s.join("").replace(/\s/g," ")),r.pipe(v(({mode:s})=>s==="search")).subscribe(s=>{switch(s.type){case"ArrowRight":e.innerText.length&&n.selectionStart===n.value.length&&(n.value=e.innerText);break}}),t.pipe(v(dr),m(({data:s})=>s)).pipe(E(s=>o.next(s)),L(()=>o.complete()),m(()=>({ref:e})))}function li(e,{index$:t,keyboard$:r}){let o=ye();try{let n=ni(o.search,t),i=Se("search-query",e),a=Se("search-result",e);d(e,"click").pipe(v(({target:p})=>p instanceof Element&&!!p.closest("a"))).subscribe(()=>Je("search",!1)),r.pipe(v(({mode:p})=>p==="search")).subscribe(p=>{let c=Re();switch(p.type){case"Enter":if(c===i){let l=new Map;for(let f of $(":first-child [href]",a)){let u=f.firstElementChild;l.set(f,parseFloat(u.getAttribute("data-md-score")))}if(l.size){let[[f]]=[...l].sort(([,u],[,h])=>h-u);f.click()}p.claim()}break;case"Escape":case"Tab":Je("search",!1),i.blur();break;case"ArrowUp":case"ArrowDown":if(typeof c=="undefined")i.focus();else{let l=[i,...$(":not(details) > [href], summary, details[open] [href]",a)],f=Math.max(0,(Math.max(0,l.indexOf(c))+l.length+(p.type==="ArrowUp"?-1:1))%l.length);l[f].focus()}p.claim();break;default:i!==Re()&&i.focus()}}),r.pipe(v(({mode:p})=>p==="global")).subscribe(p=>{switch(p.type){case"f":case"s":case"/":i.focus(),i.select(),p.claim();break}});let s=ai(i,{worker$:n});return S(s,si(a,{worker$:n,query$:s})).pipe(Pe(...ae("search-share",e).map(p=>ci(p,{query$:s})),...ae("search-suggest",e).map(p=>pi(p,{worker$:n,keyboard$:r}))))}catch(n){return e.hidden=!0,Ke}}function mi(e,{index$:t,location$:r}){return z([t,r.pipe(Q(xe()),v(o=>!!o.searchParams.get("h")))]).pipe(m(([o,n])=>oi(o.config)(n.searchParams.get("h"))),m(o=>{var a;let n=new Map,i=document.createNodeIterator(e,NodeFilter.SHOW_TEXT);for(let s=i.nextNode();s;s=i.nextNode())if((a=s.parentElement)!=null&&a.offsetHeight){let p=s.textContent,c=o(p);c.length>p.length&&n.set(s,c)}for(let[s,p]of n){let{childNodes:c}=x("span",null,p);s.replaceWith(...Array.from(c))}return{ref:e,nodes:n}}))}function as(e,{viewport$:t,main$:r}){let o=e.closest(".md-grid"),n=o.offsetTop-o.parentElement.offsetTop;return z([r,t]).pipe(m(([{offset:i,height:a},{offset:{y:s}}])=>(a=a+Math.min(n,Math.max(0,s-i))-n,{height:a,locked:s>=i+n})),K((i,a)=>i.height===a.height&&i.locked===a.locked))}function Jr(e,o){var n=o,{header$:t}=n,r=io(n,["header$"]);let i=P(".md-sidebar__scrollwrap",e),{y:a}=Ue(i);return C(()=>{let s=new g,p=s.pipe(X(),ne(!0)),c=s.pipe(Le(0,me));return c.pipe(ee(t)).subscribe({next([{height:l},{height:f}]){i.style.height=`${l-2*a}px`,e.style.top=`${f}px`},complete(){i.style.height="",e.style.top=""}}),c.pipe(Ae()).subscribe(()=>{for(let l of $(".md-nav__link--active[href]",e)){if(!l.clientHeight)continue;let f=l.closest(".md-sidebar__scrollwrap");if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:h}=ce(f);f.scrollTo({top:u-h/2})}}}),ue($("label[tabindex]",e)).pipe(oe(l=>d(l,"click").pipe(be(se),m(()=>l),U(p)))).subscribe(l=>{let 
f=P(`[id="${l.htmlFor}"]`);P(`[aria-labelledby="${l.id}"]`).setAttribute("aria-expanded",`${f.checked}`)}),as(e,r).pipe(E(l=>s.next(l)),L(()=>s.complete()),m(l=>R({ref:e},l)))})}function fi(e,t){if(typeof t!="undefined"){let r=`https://api.github.com/repos/${e}/${t}`;return Ct(Ne(`${r}/releases/latest`).pipe(ve(()=>M),m(o=>({version:o.tag_name})),Be({})),Ne(r).pipe(ve(()=>M),m(o=>({stars:o.stargazers_count,forks:o.forks_count})),Be({}))).pipe(m(([o,n])=>R(R({},o),n)))}else{let r=`https://api.github.com/users/${e}`;return Ne(r).pipe(m(o=>({repositories:o.public_repos})),Be({}))}}function ui(e,t){let r=`https://${e}/api/v4/projects/${encodeURIComponent(t)}`;return Ne(r).pipe(ve(()=>M),m(({star_count:o,forks_count:n})=>({stars:o,forks:n})),Be({}))}function di(e){let t=e.match(/^.+github\.com\/([^/]+)\/?([^/]+)?/i);if(t){let[,r,o]=t;return fi(r,o)}if(t=e.match(/^.+?([^/]*gitlab[^/]+)\/(.+?)\/?$/i),t){let[,r,o]=t;return ui(r,o)}return M}var ss;function cs(e){return ss||(ss=C(()=>{let t=__md_get("__source",sessionStorage);if(t)return I(t);if(ae("consent").length){let o=__md_get("__consent");if(!(o&&o.github))return M}return di(e.href).pipe(E(o=>__md_set("__source",o,sessionStorage)))}).pipe(ve(()=>M),v(t=>Object.keys(t).length>0),m(t=>({facts:t})),B(1)))}function hi(e){let t=P(":scope > :last-child",e);return C(()=>{let r=new g;return r.subscribe(({facts:o})=>{t.appendChild(Sn(o)),t.classList.add("md-source__repository--active")}),cs(e).pipe(E(o=>r.next(o)),L(()=>r.complete()),m(o=>R({ref:e},o)))})}function ps(e,{viewport$:t,header$:r}){return ge(document.body).pipe(b(()=>mr(e,{header$:r,viewport$:t})),m(({offset:{y:o}})=>({hidden:o>=10})),Z("hidden"))}function bi(e,t){return C(()=>{let r=new g;return r.subscribe({next({hidden:o}){e.hidden=o},complete(){e.hidden=!1}}),(G("navigation.tabs.sticky")?I({hidden:!1}):ps(e,t)).pipe(E(o=>r.next(o)),L(()=>r.complete()),m(o=>R({ref:e},o)))})}function ls(e,{viewport$:t,header$:r}){let o=new Map,n=$(".md-nav__link",e);for(let s of n){let p=decodeURIComponent(s.hash.substring(1)),c=fe(`[id="${p}"]`);typeof c!="undefined"&&o.set(s,c)}let i=r.pipe(Z("height"),m(({height:s})=>{let p=Se("main"),c=P(":scope > :first-child",p);return s+.8*(c.offsetTop-p.offsetTop)}),pe());return ge(document.body).pipe(Z("height"),b(s=>C(()=>{let p=[];return I([...o].reduce((c,[l,f])=>{for(;p.length&&o.get(p[p.length-1]).tagName>=f.tagName;)p.pop();let u=f.offsetTop;for(;!u&&f.parentElement;)f=f.parentElement,u=f.offsetTop;let h=f.offsetParent;for(;h;h=h.offsetParent)u+=h.offsetTop;return c.set([...p=[...p,l]].reverse(),u)},new Map))}).pipe(m(p=>new Map([...p].sort(([,c],[,l])=>c-l))),We(i),b(([p,c])=>t.pipe(jr(([l,f],{offset:{y:u},size:h})=>{let w=u+h.height>=Math.floor(s.height);for(;f.length;){let[,A]=f[0];if(A-c=u&&!w)f=[l.pop(),...f];else break}return[l,f]},[[],[...p]]),K((l,f)=>l[0]===f[0]&&l[1]===f[1])))))).pipe(m(([s,p])=>({prev:s.map(([c])=>c),next:p.map(([c])=>c)})),Q({prev:[],next:[]}),Ye(2,1),m(([s,p])=>s.prev.length{let i=new g,a=i.pipe(X(),ne(!0));if(i.subscribe(({prev:s,next:p})=>{for(let[c]of p)c.classList.remove("md-nav__link--passed"),c.classList.remove("md-nav__link--active");for(let[c,[l]]of s.entries())l.classList.add("md-nav__link--passed"),l.classList.toggle("md-nav__link--active",c===s.length-1)}),G("toc.follow")){let s=S(t.pipe(_e(1),m(()=>{})),t.pipe(_e(250),m(()=>"smooth")));i.pipe(v(({prev:p})=>p.length>0),We(o.pipe(be(se))),ee(s)).subscribe(([[{prev:p}],c])=>{let[l]=p[p.length-1];if(l.offsetHeight){let f=cr(l);if(typeof f!="undefined"){let 
u=l.offsetTop-f.offsetTop,{height:h}=ce(f);f.scrollTo({top:u-h/2,behavior:c})}}})}return G("navigation.tracking")&&t.pipe(U(a),Z("offset"),_e(250),Ce(1),U(n.pipe(Ce(1))),st({delay:250}),ee(i)).subscribe(([,{prev:s}])=>{let p=xe(),c=s[s.length-1];if(c&&c.length){let[l]=c,{hash:f}=new URL(l.href);p.hash!==f&&(p.hash=f,history.replaceState({},"",`${p}`))}else p.hash="",history.replaceState({},"",`${p}`)}),ls(e,{viewport$:t,header$:r}).pipe(E(s=>i.next(s)),L(()=>i.complete()),m(s=>R({ref:e},s)))})}function ms(e,{viewport$:t,main$:r,target$:o}){let n=t.pipe(m(({offset:{y:a}})=>a),Ye(2,1),m(([a,s])=>a>s&&s>0),K()),i=r.pipe(m(({active:a})=>a));return z([i,n]).pipe(m(([a,s])=>!(a&&s)),K(),U(o.pipe(Ce(1))),ne(!0),st({delay:250}),m(a=>({hidden:a})))}function gi(e,{viewport$:t,header$:r,main$:o,target$:n}){let i=new g,a=i.pipe(X(),ne(!0));return i.subscribe({next({hidden:s}){e.hidden=s,s?(e.setAttribute("tabindex","-1"),e.blur()):e.removeAttribute("tabindex")},complete(){e.style.top="",e.hidden=!0,e.removeAttribute("tabindex")}}),r.pipe(U(a),Z("height")).subscribe(({height:s})=>{e.style.top=`${s+16}px`}),d(e,"click").subscribe(s=>{s.preventDefault(),window.scrollTo({top:0})}),ms(e,{viewport$:t,main$:o,target$:n}).pipe(E(s=>i.next(s)),L(()=>i.complete()),m(s=>R({ref:e},s)))}function xi({document$:e,viewport$:t}){e.pipe(b(()=>$(".md-ellipsis")),oe(r=>tt(r).pipe(U(e.pipe(Ce(1))),v(o=>o),m(()=>r),Te(1))),v(r=>r.offsetWidth{let o=r.innerText,n=r.closest("a")||r;return n.title=o,lt(n,{viewport$:t}).pipe(U(e.pipe(Ce(1))),L(()=>n.removeAttribute("title")))})).subscribe(),e.pipe(b(()=>$(".md-status")),oe(r=>lt(r,{viewport$:t}))).subscribe()}function yi({document$:e,tablet$:t}){e.pipe(b(()=>$(".md-toggle--indeterminate")),E(r=>{r.indeterminate=!0,r.checked=!1}),oe(r=>d(r,"change").pipe(Dr(()=>r.classList.contains("md-toggle--indeterminate")),m(()=>r))),ee(t)).subscribe(([r,o])=>{r.classList.remove("md-toggle--indeterminate"),o&&(r.checked=!1)})}function fs(){return/(iPad|iPhone|iPod)/.test(navigator.userAgent)}function Ei({document$:e}){e.pipe(b(()=>$("[data-md-scrollfix]")),E(t=>t.removeAttribute("data-md-scrollfix")),v(fs),oe(t=>d(t,"touchstart").pipe(m(()=>t)))).subscribe(t=>{let r=t.scrollTop;r===0?t.scrollTop=1:r+t.offsetHeight===t.scrollHeight&&(t.scrollTop=r-1)})}function wi({viewport$:e,tablet$:t}){z([Ve("search"),t]).pipe(m(([r,o])=>r&&!o),b(r=>I(r).pipe(Ge(r?400:100))),ee(e)).subscribe(([r,{offset:{y:o}}])=>{if(r)document.body.setAttribute("data-md-scrolllock",""),document.body.style.top=`-${o}px`;else{let n=-1*parseInt(document.body.style.top,10);document.body.removeAttribute("data-md-scrolllock"),document.body.style.top="",n&&window.scrollTo(0,n)}})}Object.entries||(Object.entries=function(e){let t=[];for(let r of Object.keys(e))t.push([r,e[r]]);return t});Object.values||(Object.values=function(e){let t=[];for(let r of Object.keys(e))t.push(e[r]);return t});typeof Element!="undefined"&&(Element.prototype.scrollTo||(Element.prototype.scrollTo=function(e,t){typeof e=="object"?(this.scrollLeft=e.left,this.scrollTop=e.top):(this.scrollLeft=e,this.scrollTop=t)}),Element.prototype.replaceWith||(Element.prototype.replaceWith=function(...e){let t=this.parentNode;if(t){e.length===0&&t.removeChild(this);for(let r=e.length-1;r>=0;r--){let o=e[r];typeof o=="string"?o=document.createTextNode(o):o.parentNode&&o.parentNode.removeChild(o),r?t.insertBefore(this.previousSibling,o):t.replaceChild(o,this)}}}));function us(){return location.protocol==="file:"?wt(`${new 
URL("search/search_index.js",Xr.base)}`).pipe(m(()=>__index),B(1)):Ne(new URL("search/search_index.json",Xr.base))}document.documentElement.classList.remove("no-js");document.documentElement.classList.add("js");var ot=Yo(),jt=nn(),Ot=cn(jt),Zr=on(),Oe=bn(),hr=$t("(min-width: 960px)"),Si=$t("(min-width: 1220px)"),Oi=pn(),Xr=ye(),Mi=document.forms.namedItem("search")?us():Ke,eo=new g;Bn({alert$:eo});var to=new g;G("navigation.instant")&&Zn({location$:jt,viewport$:Oe,progress$:to}).subscribe(ot);var Ti;((Ti=Xr.version)==null?void 0:Ti.provider)==="mike"&&ii({document$:ot});S(jt,Ot).pipe(Ge(125)).subscribe(()=>{Je("drawer",!1),Je("search",!1)});Zr.pipe(v(({mode:e})=>e==="global")).subscribe(e=>{switch(e.type){case"p":case",":let t=fe("link[rel=prev]");typeof t!="undefined"&&pt(t);break;case"n":case".":let r=fe("link[rel=next]");typeof r!="undefined"&&pt(r);break;case"Enter":let o=Re();o instanceof HTMLLabelElement&&o.click()}});xi({viewport$:Oe,document$:ot});yi({document$:ot,tablet$:hr});Ei({document$:ot});wi({viewport$:Oe,tablet$:hr});var rt=Nn(Se("header"),{viewport$:Oe}),Ft=ot.pipe(m(()=>Se("main")),b(e=>Qn(e,{viewport$:Oe,header$:rt})),B(1)),ds=S(...ae("consent").map(e=>xn(e,{target$:Ot})),...ae("dialog").map(e=>Dn(e,{alert$:eo})),...ae("header").map(e=>zn(e,{viewport$:Oe,header$:rt,main$:Ft})),...ae("palette").map(e=>Kn(e)),...ae("progress").map(e=>Yn(e,{progress$:to})),...ae("search").map(e=>li(e,{index$:Mi,keyboard$:Zr})),...ae("source").map(e=>hi(e))),hs=C(()=>S(...ae("announce").map(e=>gn(e)),...ae("content").map(e=>Un(e,{viewport$:Oe,target$:Ot,print$:Oi})),...ae("content").map(e=>G("search.highlight")?mi(e,{index$:Mi,location$:jt}):M),...ae("header-title").map(e=>qn(e,{viewport$:Oe,header$:rt})),...ae("sidebar").map(e=>e.getAttribute("data-md-type")==="navigation"?Nr(Si,()=>Jr(e,{viewport$:Oe,header$:rt,main$:Ft})):Nr(hr,()=>Jr(e,{viewport$:Oe,header$:rt,main$:Ft}))),...ae("tabs").map(e=>bi(e,{viewport$:Oe,header$:rt})),...ae("toc").map(e=>vi(e,{viewport$:Oe,header$:rt,main$:Ft,target$:Ot})),...ae("top").map(e=>gi(e,{viewport$:Oe,header$:rt,main$:Ft,target$:Ot})))),Li=ot.pipe(b(()=>hs),Pe(ds),B(1));Li.subscribe();window.document$=ot;window.location$=jt;window.target$=Ot;window.keyboard$=Zr;window.viewport$=Oe;window.tablet$=hr;window.screen$=Si;window.print$=Oi;window.alert$=eo;window.progress$=to;window.component$=Li;})(); +//# sourceMappingURL=bundle.081f42fc.min.js.map + diff --git a/assets/javascripts/bundle.081f42fc.min.js.map b/assets/javascripts/bundle.081f42fc.min.js.map new file mode 100644 index 00000000..e055db5a --- /dev/null +++ b/assets/javascripts/bundle.081f42fc.min.js.map @@ -0,0 +1,7 @@ +{ + "version": 3, + "sources": ["node_modules/focus-visible/dist/focus-visible.js", "node_modules/clipboard/dist/clipboard.js", "node_modules/escape-html/index.js", "src/templates/assets/javascripts/bundle.ts", "node_modules/rxjs/node_modules/tslib/tslib.es6.js", "node_modules/rxjs/src/internal/util/isFunction.ts", "node_modules/rxjs/src/internal/util/createErrorClass.ts", "node_modules/rxjs/src/internal/util/UnsubscriptionError.ts", "node_modules/rxjs/src/internal/util/arrRemove.ts", "node_modules/rxjs/src/internal/Subscription.ts", "node_modules/rxjs/src/internal/config.ts", "node_modules/rxjs/src/internal/scheduler/timeoutProvider.ts", "node_modules/rxjs/src/internal/util/reportUnhandledError.ts", "node_modules/rxjs/src/internal/util/noop.ts", "node_modules/rxjs/src/internal/NotificationFactories.ts", "node_modules/rxjs/src/internal/util/errorContext.ts", 
"node_modules/rxjs/src/internal/Subscriber.ts", "node_modules/rxjs/src/internal/symbol/observable.ts", "node_modules/rxjs/src/internal/util/identity.ts", "node_modules/rxjs/src/internal/util/pipe.ts", "node_modules/rxjs/src/internal/Observable.ts", "node_modules/rxjs/src/internal/util/lift.ts", "node_modules/rxjs/src/internal/operators/OperatorSubscriber.ts", "node_modules/rxjs/src/internal/scheduler/animationFrameProvider.ts", "node_modules/rxjs/src/internal/util/ObjectUnsubscribedError.ts", "node_modules/rxjs/src/internal/Subject.ts", "node_modules/rxjs/src/internal/BehaviorSubject.ts", "node_modules/rxjs/src/internal/scheduler/dateTimestampProvider.ts", "node_modules/rxjs/src/internal/ReplaySubject.ts", "node_modules/rxjs/src/internal/scheduler/Action.ts", "node_modules/rxjs/src/internal/scheduler/intervalProvider.ts", "node_modules/rxjs/src/internal/scheduler/AsyncAction.ts", "node_modules/rxjs/src/internal/Scheduler.ts", "node_modules/rxjs/src/internal/scheduler/AsyncScheduler.ts", "node_modules/rxjs/src/internal/scheduler/async.ts", "node_modules/rxjs/src/internal/scheduler/QueueAction.ts", "node_modules/rxjs/src/internal/scheduler/QueueScheduler.ts", "node_modules/rxjs/src/internal/scheduler/queue.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameAction.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameScheduler.ts", "node_modules/rxjs/src/internal/scheduler/animationFrame.ts", "node_modules/rxjs/src/internal/observable/empty.ts", "node_modules/rxjs/src/internal/util/isScheduler.ts", "node_modules/rxjs/src/internal/util/args.ts", "node_modules/rxjs/src/internal/util/isArrayLike.ts", "node_modules/rxjs/src/internal/util/isPromise.ts", "node_modules/rxjs/src/internal/util/isInteropObservable.ts", "node_modules/rxjs/src/internal/util/isAsyncIterable.ts", "node_modules/rxjs/src/internal/util/throwUnobservableError.ts", "node_modules/rxjs/src/internal/symbol/iterator.ts", "node_modules/rxjs/src/internal/util/isIterable.ts", "node_modules/rxjs/src/internal/util/isReadableStreamLike.ts", "node_modules/rxjs/src/internal/observable/innerFrom.ts", "node_modules/rxjs/src/internal/util/executeSchedule.ts", "node_modules/rxjs/src/internal/operators/observeOn.ts", "node_modules/rxjs/src/internal/operators/subscribeOn.ts", "node_modules/rxjs/src/internal/scheduled/scheduleObservable.ts", "node_modules/rxjs/src/internal/scheduled/schedulePromise.ts", "node_modules/rxjs/src/internal/scheduled/scheduleArray.ts", "node_modules/rxjs/src/internal/scheduled/scheduleIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleAsyncIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleReadableStreamLike.ts", "node_modules/rxjs/src/internal/scheduled/scheduled.ts", "node_modules/rxjs/src/internal/observable/from.ts", "node_modules/rxjs/src/internal/observable/of.ts", "node_modules/rxjs/src/internal/observable/throwError.ts", "node_modules/rxjs/src/internal/util/EmptyError.ts", "node_modules/rxjs/src/internal/util/isDate.ts", "node_modules/rxjs/src/internal/operators/map.ts", "node_modules/rxjs/src/internal/util/mapOneOrManyArgs.ts", "node_modules/rxjs/src/internal/util/argsArgArrayOrObject.ts", "node_modules/rxjs/src/internal/util/createObject.ts", "node_modules/rxjs/src/internal/observable/combineLatest.ts", "node_modules/rxjs/src/internal/operators/mergeInternals.ts", "node_modules/rxjs/src/internal/operators/mergeMap.ts", "node_modules/rxjs/src/internal/operators/mergeAll.ts", "node_modules/rxjs/src/internal/operators/concatAll.ts", 
"node_modules/rxjs/src/internal/observable/concat.ts", "node_modules/rxjs/src/internal/observable/defer.ts", "node_modules/rxjs/src/internal/observable/fromEvent.ts", "node_modules/rxjs/src/internal/observable/fromEventPattern.ts", "node_modules/rxjs/src/internal/observable/timer.ts", "node_modules/rxjs/src/internal/observable/merge.ts", "node_modules/rxjs/src/internal/observable/never.ts", "node_modules/rxjs/src/internal/util/argsOrArgArray.ts", "node_modules/rxjs/src/internal/operators/filter.ts", "node_modules/rxjs/src/internal/observable/zip.ts", "node_modules/rxjs/src/internal/operators/audit.ts", "node_modules/rxjs/src/internal/operators/auditTime.ts", "node_modules/rxjs/src/internal/operators/bufferCount.ts", "node_modules/rxjs/src/internal/operators/catchError.ts", "node_modules/rxjs/src/internal/operators/scanInternals.ts", "node_modules/rxjs/src/internal/operators/combineLatest.ts", "node_modules/rxjs/src/internal/operators/combineLatestWith.ts", "node_modules/rxjs/src/internal/operators/debounce.ts", "node_modules/rxjs/src/internal/operators/debounceTime.ts", "node_modules/rxjs/src/internal/operators/defaultIfEmpty.ts", "node_modules/rxjs/src/internal/operators/take.ts", "node_modules/rxjs/src/internal/operators/ignoreElements.ts", "node_modules/rxjs/src/internal/operators/mapTo.ts", "node_modules/rxjs/src/internal/operators/delayWhen.ts", "node_modules/rxjs/src/internal/operators/delay.ts", "node_modules/rxjs/src/internal/operators/distinctUntilChanged.ts", "node_modules/rxjs/src/internal/operators/distinctUntilKeyChanged.ts", "node_modules/rxjs/src/internal/operators/throwIfEmpty.ts", "node_modules/rxjs/src/internal/operators/endWith.ts", "node_modules/rxjs/src/internal/operators/finalize.ts", "node_modules/rxjs/src/internal/operators/first.ts", "node_modules/rxjs/src/internal/operators/takeLast.ts", "node_modules/rxjs/src/internal/operators/merge.ts", "node_modules/rxjs/src/internal/operators/mergeWith.ts", "node_modules/rxjs/src/internal/operators/repeat.ts", "node_modules/rxjs/src/internal/operators/scan.ts", "node_modules/rxjs/src/internal/operators/share.ts", "node_modules/rxjs/src/internal/operators/shareReplay.ts", "node_modules/rxjs/src/internal/operators/skip.ts", "node_modules/rxjs/src/internal/operators/skipUntil.ts", "node_modules/rxjs/src/internal/operators/startWith.ts", "node_modules/rxjs/src/internal/operators/switchMap.ts", "node_modules/rxjs/src/internal/operators/takeUntil.ts", "node_modules/rxjs/src/internal/operators/takeWhile.ts", "node_modules/rxjs/src/internal/operators/tap.ts", "node_modules/rxjs/src/internal/operators/throttle.ts", "node_modules/rxjs/src/internal/operators/throttleTime.ts", "node_modules/rxjs/src/internal/operators/withLatestFrom.ts", "node_modules/rxjs/src/internal/operators/zip.ts", "node_modules/rxjs/src/internal/operators/zipWith.ts", "src/templates/assets/javascripts/browser/document/index.ts", "src/templates/assets/javascripts/browser/element/_/index.ts", "src/templates/assets/javascripts/browser/element/focus/index.ts", "src/templates/assets/javascripts/browser/element/hover/index.ts", "src/templates/assets/javascripts/utilities/h/index.ts", "src/templates/assets/javascripts/utilities/round/index.ts", "src/templates/assets/javascripts/browser/script/index.ts", "src/templates/assets/javascripts/browser/element/size/_/index.ts", "src/templates/assets/javascripts/browser/element/size/content/index.ts", "src/templates/assets/javascripts/browser/element/offset/_/index.ts", 
"src/templates/assets/javascripts/browser/element/offset/content/index.ts", "src/templates/assets/javascripts/browser/element/visibility/index.ts", "src/templates/assets/javascripts/browser/toggle/index.ts", "src/templates/assets/javascripts/browser/keyboard/index.ts", "src/templates/assets/javascripts/browser/location/_/index.ts", "src/templates/assets/javascripts/browser/location/hash/index.ts", "src/templates/assets/javascripts/browser/media/index.ts", "src/templates/assets/javascripts/browser/request/index.ts", "src/templates/assets/javascripts/browser/viewport/offset/index.ts", "src/templates/assets/javascripts/browser/viewport/size/index.ts", "src/templates/assets/javascripts/browser/viewport/_/index.ts", "src/templates/assets/javascripts/browser/viewport/at/index.ts", "src/templates/assets/javascripts/browser/worker/index.ts", "src/templates/assets/javascripts/_/index.ts", "src/templates/assets/javascripts/components/_/index.ts", "src/templates/assets/javascripts/components/announce/index.ts", "src/templates/assets/javascripts/components/consent/index.ts", "src/templates/assets/javascripts/templates/tooltip/index.tsx", "src/templates/assets/javascripts/templates/annotation/index.tsx", "src/templates/assets/javascripts/templates/clipboard/index.tsx", "src/templates/assets/javascripts/templates/search/index.tsx", "src/templates/assets/javascripts/templates/source/index.tsx", "src/templates/assets/javascripts/templates/tabbed/index.tsx", "src/templates/assets/javascripts/templates/table/index.tsx", "src/templates/assets/javascripts/templates/version/index.tsx", "src/templates/assets/javascripts/components/tooltip2/index.ts", "src/templates/assets/javascripts/components/content/annotation/_/index.ts", "src/templates/assets/javascripts/components/content/annotation/list/index.ts", "src/templates/assets/javascripts/components/content/annotation/block/index.ts", "src/templates/assets/javascripts/components/content/code/_/index.ts", "src/templates/assets/javascripts/components/content/details/index.ts", "src/templates/assets/javascripts/components/content/mermaid/index.css", "src/templates/assets/javascripts/components/content/mermaid/index.ts", "src/templates/assets/javascripts/components/content/table/index.ts", "src/templates/assets/javascripts/components/content/tabs/index.ts", "src/templates/assets/javascripts/components/content/_/index.ts", "src/templates/assets/javascripts/components/dialog/index.ts", "src/templates/assets/javascripts/components/tooltip/index.ts", "src/templates/assets/javascripts/components/header/_/index.ts", "src/templates/assets/javascripts/components/header/title/index.ts", "src/templates/assets/javascripts/components/main/index.ts", "src/templates/assets/javascripts/components/palette/index.ts", "src/templates/assets/javascripts/components/progress/index.ts", "src/templates/assets/javascripts/integrations/clipboard/index.ts", "src/templates/assets/javascripts/integrations/sitemap/index.ts", "src/templates/assets/javascripts/integrations/instant/index.ts", "src/templates/assets/javascripts/integrations/search/highlighter/index.ts", "src/templates/assets/javascripts/integrations/search/worker/message/index.ts", "src/templates/assets/javascripts/integrations/search/worker/_/index.ts", "src/templates/assets/javascripts/integrations/version/index.ts", "src/templates/assets/javascripts/components/search/query/index.ts", "src/templates/assets/javascripts/components/search/result/index.ts", "src/templates/assets/javascripts/components/search/share/index.ts", 
"src/templates/assets/javascripts/components/search/suggest/index.ts", "src/templates/assets/javascripts/components/search/_/index.ts", "src/templates/assets/javascripts/components/search/highlight/index.ts", "src/templates/assets/javascripts/components/sidebar/index.ts", "src/templates/assets/javascripts/components/source/facts/github/index.ts", "src/templates/assets/javascripts/components/source/facts/gitlab/index.ts", "src/templates/assets/javascripts/components/source/facts/_/index.ts", "src/templates/assets/javascripts/components/source/_/index.ts", "src/templates/assets/javascripts/components/tabs/index.ts", "src/templates/assets/javascripts/components/toc/index.ts", "src/templates/assets/javascripts/components/top/index.ts", "src/templates/assets/javascripts/patches/ellipsis/index.ts", "src/templates/assets/javascripts/patches/indeterminate/index.ts", "src/templates/assets/javascripts/patches/scrollfix/index.ts", "src/templates/assets/javascripts/patches/scrolllock/index.ts", "src/templates/assets/javascripts/polyfills/index.ts"], + "sourcesContent": ["(function (global, factory) {\n typeof exports === 'object' && typeof module !== 'undefined' ? factory() :\n typeof define === 'function' && define.amd ? define(factory) :\n (factory());\n}(this, (function () { 'use strict';\n\n /**\n * Applies the :focus-visible polyfill at the given scope.\n * A scope in this case is either the top-level Document or a Shadow Root.\n *\n * @param {(Document|ShadowRoot)} scope\n * @see https://github.com/WICG/focus-visible\n */\n function applyFocusVisiblePolyfill(scope) {\n var hadKeyboardEvent = true;\n var hadFocusVisibleRecently = false;\n var hadFocusVisibleRecentlyTimeout = null;\n\n var inputTypesAllowlist = {\n text: true,\n search: true,\n url: true,\n tel: true,\n email: true,\n password: true,\n number: true,\n date: true,\n month: true,\n week: true,\n time: true,\n datetime: true,\n 'datetime-local': true\n };\n\n /**\n * Helper function for legacy browsers and iframes which sometimes focus\n * elements like document, body, and non-interactive SVG.\n * @param {Element} el\n */\n function isValidFocusTarget(el) {\n if (\n el &&\n el !== document &&\n el.nodeName !== 'HTML' &&\n el.nodeName !== 'BODY' &&\n 'classList' in el &&\n 'contains' in el.classList\n ) {\n return true;\n }\n return false;\n }\n\n /**\n * Computes whether the given element should automatically trigger the\n * `focus-visible` class being added, i.e. 
whether it should always match\n * `:focus-visible` when focused.\n * @param {Element} el\n * @return {boolean}\n */\n function focusTriggersKeyboardModality(el) {\n var type = el.type;\n var tagName = el.tagName;\n\n if (tagName === 'INPUT' && inputTypesAllowlist[type] && !el.readOnly) {\n return true;\n }\n\n if (tagName === 'TEXTAREA' && !el.readOnly) {\n return true;\n }\n\n if (el.isContentEditable) {\n return true;\n }\n\n return false;\n }\n\n /**\n * Add the `focus-visible` class to the given element if it was not added by\n * the author.\n * @param {Element} el\n */\n function addFocusVisibleClass(el) {\n if (el.classList.contains('focus-visible')) {\n return;\n }\n el.classList.add('focus-visible');\n el.setAttribute('data-focus-visible-added', '');\n }\n\n /**\n * Remove the `focus-visible` class from the given element if it was not\n * originally added by the author.\n * @param {Element} el\n */\n function removeFocusVisibleClass(el) {\n if (!el.hasAttribute('data-focus-visible-added')) {\n return;\n }\n el.classList.remove('focus-visible');\n el.removeAttribute('data-focus-visible-added');\n }\n\n /**\n * If the most recent user interaction was via the keyboard;\n * and the key press did not include a meta, alt/option, or control key;\n * then the modality is keyboard. Otherwise, the modality is not keyboard.\n * Apply `focus-visible` to any current active element and keep track\n * of our keyboard modality state with `hadKeyboardEvent`.\n * @param {KeyboardEvent} e\n */\n function onKeyDown(e) {\n if (e.metaKey || e.altKey || e.ctrlKey) {\n return;\n }\n\n if (isValidFocusTarget(scope.activeElement)) {\n addFocusVisibleClass(scope.activeElement);\n }\n\n hadKeyboardEvent = true;\n }\n\n /**\n * If at any point a user clicks with a pointing device, ensure that we change\n * the modality away from keyboard.\n * This avoids the situation where a user presses a key on an already focused\n * element, and then clicks on a different element, focusing it with a\n * pointing device, while we still think we're in keyboard modality.\n * @param {Event} e\n */\n function onPointerDown(e) {\n hadKeyboardEvent = false;\n }\n\n /**\n * On `focus`, add the `focus-visible` class to the target if:\n * - the target received focus as a result of keyboard navigation, or\n * - the event target is an element that will likely require interaction\n * via the keyboard (e.g. 
a text box)\n * @param {Event} e\n */\n function onFocus(e) {\n // Prevent IE from focusing the document or HTML element.\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (hadKeyboardEvent || focusTriggersKeyboardModality(e.target)) {\n addFocusVisibleClass(e.target);\n }\n }\n\n /**\n * On `blur`, remove the `focus-visible` class from the target.\n * @param {Event} e\n */\n function onBlur(e) {\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (\n e.target.classList.contains('focus-visible') ||\n e.target.hasAttribute('data-focus-visible-added')\n ) {\n // To detect a tab/window switch, we look for a blur event followed\n // rapidly by a visibility change.\n // If we don't see a visibility change within 100ms, it's probably a\n // regular focus change.\n hadFocusVisibleRecently = true;\n window.clearTimeout(hadFocusVisibleRecentlyTimeout);\n hadFocusVisibleRecentlyTimeout = window.setTimeout(function() {\n hadFocusVisibleRecently = false;\n }, 100);\n removeFocusVisibleClass(e.target);\n }\n }\n\n /**\n * If the user changes tabs, keep track of whether or not the previously\n * focused element had .focus-visible.\n * @param {Event} e\n */\n function onVisibilityChange(e) {\n if (document.visibilityState === 'hidden') {\n // If the tab becomes active again, the browser will handle calling focus\n // on the element (Safari actually calls it twice).\n // If this tab change caused a blur on an element with focus-visible,\n // re-apply the class when the user switches back to the tab.\n if (hadFocusVisibleRecently) {\n hadKeyboardEvent = true;\n }\n addInitialPointerMoveListeners();\n }\n }\n\n /**\n * Add a group of listeners to detect usage of any pointing devices.\n * These listeners will be added when the polyfill first loads, and anytime\n * the window is blurred, so that they are active when the window regains\n * focus.\n */\n function addInitialPointerMoveListeners() {\n document.addEventListener('mousemove', onInitialPointerMove);\n document.addEventListener('mousedown', onInitialPointerMove);\n document.addEventListener('mouseup', onInitialPointerMove);\n document.addEventListener('pointermove', onInitialPointerMove);\n document.addEventListener('pointerdown', onInitialPointerMove);\n document.addEventListener('pointerup', onInitialPointerMove);\n document.addEventListener('touchmove', onInitialPointerMove);\n document.addEventListener('touchstart', onInitialPointerMove);\n document.addEventListener('touchend', onInitialPointerMove);\n }\n\n function removeInitialPointerMoveListeners() {\n document.removeEventListener('mousemove', onInitialPointerMove);\n document.removeEventListener('mousedown', onInitialPointerMove);\n document.removeEventListener('mouseup', onInitialPointerMove);\n document.removeEventListener('pointermove', onInitialPointerMove);\n document.removeEventListener('pointerdown', onInitialPointerMove);\n document.removeEventListener('pointerup', onInitialPointerMove);\n document.removeEventListener('touchmove', onInitialPointerMove);\n document.removeEventListener('touchstart', onInitialPointerMove);\n document.removeEventListener('touchend', onInitialPointerMove);\n }\n\n /**\n * When the polfyill first loads, assume the user is in keyboard modality.\n * If any event is received from a pointing device (e.g. 
mouse, pointer,\n * touch), turn off keyboard modality.\n * This accounts for situations where focus enters the page from the URL bar.\n * @param {Event} e\n */\n function onInitialPointerMove(e) {\n // Work around a Safari quirk that fires a mousemove on whenever the\n // window blurs, even if you're tabbing out of the page. \u00AF\\_(\u30C4)_/\u00AF\n if (e.target.nodeName && e.target.nodeName.toLowerCase() === 'html') {\n return;\n }\n\n hadKeyboardEvent = false;\n removeInitialPointerMoveListeners();\n }\n\n // For some kinds of state, we are interested in changes at the global scope\n // only. For example, global pointer input, global key presses and global\n // visibility change should affect the state at every scope:\n document.addEventListener('keydown', onKeyDown, true);\n document.addEventListener('mousedown', onPointerDown, true);\n document.addEventListener('pointerdown', onPointerDown, true);\n document.addEventListener('touchstart', onPointerDown, true);\n document.addEventListener('visibilitychange', onVisibilityChange, true);\n\n addInitialPointerMoveListeners();\n\n // For focus and blur, we specifically care about state changes in the local\n // scope. This is because focus / blur events that originate from within a\n // shadow root are not re-dispatched from the host element if it was already\n // the active element in its own scope:\n scope.addEventListener('focus', onFocus, true);\n scope.addEventListener('blur', onBlur, true);\n\n // We detect that a node is a ShadowRoot by ensuring that it is a\n // DocumentFragment and also has a host property. This check covers native\n // implementation and polyfill implementation transparently. If we only cared\n // about the native implementation, we could just check if the scope was\n // an instance of a ShadowRoot.\n if (scope.nodeType === Node.DOCUMENT_FRAGMENT_NODE && scope.host) {\n // Since a ShadowRoot is a special kind of DocumentFragment, it does not\n // have a root element to add a class to. So, we add this attribute to the\n // host element instead:\n scope.host.setAttribute('data-js-focus-visible', '');\n } else if (scope.nodeType === Node.DOCUMENT_NODE) {\n document.documentElement.classList.add('js-focus-visible');\n document.documentElement.setAttribute('data-js-focus-visible', '');\n }\n }\n\n // It is important to wrap all references to global window and document in\n // these checks to support server-side rendering use cases\n // @see https://github.com/WICG/focus-visible/issues/199\n if (typeof window !== 'undefined' && typeof document !== 'undefined') {\n // Make the polyfill helper globally available. 
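// A minimal usage sketch of the helper exported below (not part of the
// vendored bundle): once the polyfill dispatches the
// `focus-visible-polyfill-ready` event, a component can apply it to its own
// shadow root. `my-widget` is a hypothetical element name, used only for
// illustration.
//
//   window.addEventListener('focus-visible-polyfill-ready', function () {
//     var host = document.querySelector('my-widget');
//     if (host && host.shadowRoot) {
//       window.applyFocusVisiblePolyfill(host.shadowRoot);
//     }
//   });
//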
This can be used as a signal\n // to interested libraries that wish to coordinate with the polyfill for e.g.,\n // applying the polyfill to a shadow root:\n window.applyFocusVisiblePolyfill = applyFocusVisiblePolyfill;\n\n // Notify interested libraries of the polyfill's presence, in case the\n // polyfill was loaded lazily:\n var event;\n\n try {\n event = new CustomEvent('focus-visible-polyfill-ready');\n } catch (error) {\n // IE11 does not support using CustomEvent as a constructor directly:\n event = document.createEvent('CustomEvent');\n event.initCustomEvent('focus-visible-polyfill-ready', false, false, {});\n }\n\n window.dispatchEvent(event);\n }\n\n if (typeof document !== 'undefined') {\n // Apply the polyfill to the global document, so that no JavaScript\n // coordination is required to use the polyfill in the top-level document:\n applyFocusVisiblePolyfill(document);\n }\n\n})));\n", "/*!\n * clipboard.js v2.0.11\n * https://clipboardjs.com/\n *\n * Licensed MIT \u00A9 Zeno Rocha\n */\n(function webpackUniversalModuleDefinition(root, factory) {\n\tif(typeof exports === 'object' && typeof module === 'object')\n\t\tmodule.exports = factory();\n\telse if(typeof define === 'function' && define.amd)\n\t\tdefine([], factory);\n\telse if(typeof exports === 'object')\n\t\texports[\"ClipboardJS\"] = factory();\n\telse\n\t\troot[\"ClipboardJS\"] = factory();\n})(this, function() {\nreturn /******/ (function() { // webpackBootstrap\n/******/ \tvar __webpack_modules__ = ({\n\n/***/ 686:\n/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n\n// EXPORTS\n__webpack_require__.d(__webpack_exports__, {\n \"default\": function() { return /* binding */ clipboard; }\n});\n\n// EXTERNAL MODULE: ./node_modules/tiny-emitter/index.js\nvar tiny_emitter = __webpack_require__(279);\nvar tiny_emitter_default = /*#__PURE__*/__webpack_require__.n(tiny_emitter);\n// EXTERNAL MODULE: ./node_modules/good-listener/src/listen.js\nvar listen = __webpack_require__(370);\nvar listen_default = /*#__PURE__*/__webpack_require__.n(listen);\n// EXTERNAL MODULE: ./node_modules/select/src/select.js\nvar src_select = __webpack_require__(817);\nvar select_default = /*#__PURE__*/__webpack_require__.n(src_select);\n;// CONCATENATED MODULE: ./src/common/command.js\n/**\n * Executes a given operation type.\n * @param {String} type\n * @return {Boolean}\n */\nfunction command(type) {\n try {\n return document.execCommand(type);\n } catch (err) {\n return false;\n }\n}\n;// CONCATENATED MODULE: ./src/actions/cut.js\n\n\n/**\n * Cut action wrapper.\n * @param {String|HTMLElement} target\n * @return {String}\n */\n\nvar ClipboardActionCut = function ClipboardActionCut(target) {\n var selectedText = select_default()(target);\n command('cut');\n return selectedText;\n};\n\n/* harmony default export */ var actions_cut = (ClipboardActionCut);\n;// CONCATENATED MODULE: ./src/common/create-fake-element.js\n/**\n * Creates a fake textarea element with a value.\n * @param {String} value\n * @return {HTMLElement}\n */\nfunction createFakeElement(value) {\n var isRTL = document.documentElement.getAttribute('dir') === 'rtl';\n var fakeElement = document.createElement('textarea'); // Prevent zooming on iOS\n\n fakeElement.style.fontSize = '12pt'; // Reset box model\n\n fakeElement.style.border = '0';\n fakeElement.style.padding = '0';\n fakeElement.style.margin = '0'; // Move element out of screen horizontally\n\n fakeElement.style.position = 'absolute';\n fakeElement.style[isRTL ? 
'right' : 'left'] = '-9999px'; // Move element to the same position vertically\n\n var yPosition = window.pageYOffset || document.documentElement.scrollTop;\n fakeElement.style.top = \"\".concat(yPosition, \"px\");\n fakeElement.setAttribute('readonly', '');\n fakeElement.value = value;\n return fakeElement;\n}\n;// CONCATENATED MODULE: ./src/actions/copy.js\n\n\n\n/**\n * Create fake copy action wrapper using a fake element.\n * @param {String} target\n * @param {Object} options\n * @return {String}\n */\n\nvar fakeCopyAction = function fakeCopyAction(value, options) {\n var fakeElement = createFakeElement(value);\n options.container.appendChild(fakeElement);\n var selectedText = select_default()(fakeElement);\n command('copy');\n fakeElement.remove();\n return selectedText;\n};\n/**\n * Copy action wrapper.\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @return {String}\n */\n\n\nvar ClipboardActionCopy = function ClipboardActionCopy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n var selectedText = '';\n\n if (typeof target === 'string') {\n selectedText = fakeCopyAction(target, options);\n } else if (target instanceof HTMLInputElement && !['text', 'search', 'url', 'tel', 'password'].includes(target === null || target === void 0 ? void 0 : target.type)) {\n // If input type doesn't support `setSelectionRange`. Simulate it. https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputElement/setSelectionRange\n selectedText = fakeCopyAction(target.value, options);\n } else {\n selectedText = select_default()(target);\n command('copy');\n }\n\n return selectedText;\n};\n\n/* harmony default export */ var actions_copy = (ClipboardActionCopy);\n;// CONCATENATED MODULE: ./src/actions/default.js\nfunction _typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { _typeof = function _typeof(obj) { return typeof obj; }; } else { _typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return _typeof(obj); }\n\n\n\n/**\n * Inner function which performs selection from either `text` or `target`\n * properties and then executes copy or cut operations.\n * @param {Object} options\n */\n\nvar ClipboardActionDefault = function ClipboardActionDefault() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n // Defines base properties passed from constructor.\n var _options$action = options.action,\n action = _options$action === void 0 ? 'copy' : _options$action,\n container = options.container,\n target = options.target,\n text = options.text; // Sets the `action` to be performed which can be either 'copy' or 'cut'.\n\n if (action !== 'copy' && action !== 'cut') {\n throw new Error('Invalid \"action\" value, use either \"copy\" or \"cut\"');\n } // Sets the `target` property using an element that will be have its content copied.\n\n\n if (target !== undefined) {\n if (target && _typeof(target) === 'object' && target.nodeType === 1) {\n if (action === 'copy' && target.hasAttribute('disabled')) {\n throw new Error('Invalid \"target\" attribute. Please use \"readonly\" instead of \"disabled\" attribute');\n }\n\n if (action === 'cut' && (target.hasAttribute('readonly') || target.hasAttribute('disabled'))) {\n throw new Error('Invalid \"target\" attribute. 
You can\\'t cut text from elements with \"readonly\" or \"disabled\" attributes');\n }\n } else {\n throw new Error('Invalid \"target\" value, use a valid Element');\n }\n } // Define selection strategy based on `text` property.\n\n\n if (text) {\n return actions_copy(text, {\n container: container\n });\n } // Defines which selection strategy based on `target` property.\n\n\n if (target) {\n return action === 'cut' ? actions_cut(target) : actions_copy(target, {\n container: container\n });\n }\n};\n\n/* harmony default export */ var actions_default = (ClipboardActionDefault);\n;// CONCATENATED MODULE: ./src/clipboard.js\nfunction clipboard_typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { clipboard_typeof = function _typeof(obj) { return typeof obj; }; } else { clipboard_typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return clipboard_typeof(obj); }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nfunction _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } }\n\nfunction _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); return Constructor; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function\"); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, writable: true, configurable: true } }); if (superClass) _setPrototypeOf(subClass, superClass); }\n\nfunction _setPrototypeOf(o, p) { _setPrototypeOf = Object.setPrototypeOf || function _setPrototypeOf(o, p) { o.__proto__ = p; return o; }; return _setPrototypeOf(o, p); }\n\nfunction _createSuper(Derived) { var hasNativeReflectConstruct = _isNativeReflectConstruct(); return function _createSuperInternal() { var Super = _getPrototypeOf(Derived), result; if (hasNativeReflectConstruct) { var NewTarget = _getPrototypeOf(this).constructor; result = Reflect.construct(Super, arguments, NewTarget); } else { result = Super.apply(this, arguments); } return _possibleConstructorReturn(this, result); }; }\n\nfunction _possibleConstructorReturn(self, call) { if (call && (clipboard_typeof(call) === \"object\" || typeof call === \"function\")) { return call; } return _assertThisInitialized(self); }\n\nfunction _assertThisInitialized(self) { if (self === void 0) { throw new ReferenceError(\"this hasn't been initialised - super() hasn't been called\"); } return self; }\n\nfunction _isNativeReflectConstruct() { if (typeof Reflect === \"undefined\" || !Reflect.construct) return false; if (Reflect.construct.sham) return false; if (typeof Proxy === \"function\") return true; try { Date.prototype.toString.call(Reflect.construct(Date, [], function () {})); return true; } catch (e) { return false; } }\n\nfunction _getPrototypeOf(o) { _getPrototypeOf = Object.setPrototypeOf ? 
Object.getPrototypeOf : function _getPrototypeOf(o) { return o.__proto__ || Object.getPrototypeOf(o); }; return _getPrototypeOf(o); }\n\n\n\n\n\n\n/**\n * Helper function to retrieve attribute value.\n * @param {String} suffix\n * @param {Element} element\n */\n\nfunction getAttributeValue(suffix, element) {\n var attribute = \"data-clipboard-\".concat(suffix);\n\n if (!element.hasAttribute(attribute)) {\n return;\n }\n\n return element.getAttribute(attribute);\n}\n/**\n * Base class which takes one or more elements, adds event listeners to them,\n * and instantiates a new `ClipboardAction` on each click.\n */\n\n\nvar Clipboard = /*#__PURE__*/function (_Emitter) {\n _inherits(Clipboard, _Emitter);\n\n var _super = _createSuper(Clipboard);\n\n /**\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n * @param {Object} options\n */\n function Clipboard(trigger, options) {\n var _this;\n\n _classCallCheck(this, Clipboard);\n\n _this = _super.call(this);\n\n _this.resolveOptions(options);\n\n _this.listenClick(trigger);\n\n return _this;\n }\n /**\n * Defines if attributes would be resolved using internal setter functions\n * or custom functions that were passed in the constructor.\n * @param {Object} options\n */\n\n\n _createClass(Clipboard, [{\n key: \"resolveOptions\",\n value: function resolveOptions() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n this.action = typeof options.action === 'function' ? options.action : this.defaultAction;\n this.target = typeof options.target === 'function' ? options.target : this.defaultTarget;\n this.text = typeof options.text === 'function' ? options.text : this.defaultText;\n this.container = clipboard_typeof(options.container) === 'object' ? options.container : document.body;\n }\n /**\n * Adds a click event listener to the passed trigger.\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n */\n\n }, {\n key: \"listenClick\",\n value: function listenClick(trigger) {\n var _this2 = this;\n\n this.listener = listen_default()(trigger, 'click', function (e) {\n return _this2.onClick(e);\n });\n }\n /**\n * Defines a new `ClipboardAction` on each click event.\n * @param {Event} e\n */\n\n }, {\n key: \"onClick\",\n value: function onClick(e) {\n var trigger = e.delegateTarget || e.currentTarget;\n var action = this.action(trigger) || 'copy';\n var text = actions_default({\n action: action,\n container: this.container,\n target: this.target(trigger),\n text: this.text(trigger)\n }); // Fires an event based on the copy operation result.\n\n this.emit(text ? 
'success' : 'error', {\n action: action,\n text: text,\n trigger: trigger,\n clearSelection: function clearSelection() {\n if (trigger) {\n trigger.focus();\n }\n\n window.getSelection().removeAllRanges();\n }\n });\n }\n /**\n * Default `action` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultAction\",\n value: function defaultAction(trigger) {\n return getAttributeValue('action', trigger);\n }\n /**\n * Default `target` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultTarget\",\n value: function defaultTarget(trigger) {\n var selector = getAttributeValue('target', trigger);\n\n if (selector) {\n return document.querySelector(selector);\n }\n }\n /**\n * Allow fire programmatically a copy action\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @returns Text copied.\n */\n\n }, {\n key: \"defaultText\",\n\n /**\n * Default `text` lookup function.\n * @param {Element} trigger\n */\n value: function defaultText(trigger) {\n return getAttributeValue('text', trigger);\n }\n /**\n * Destroy lifecycle.\n */\n\n }, {\n key: \"destroy\",\n value: function destroy() {\n this.listener.destroy();\n }\n }], [{\n key: \"copy\",\n value: function copy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n return actions_copy(target, options);\n }\n /**\n * Allow fire programmatically a cut action\n * @param {String|HTMLElement} target\n * @returns Text cutted.\n */\n\n }, {\n key: \"cut\",\n value: function cut(target) {\n return actions_cut(target);\n }\n /**\n * Returns the support of the given action, or all actions if no action is\n * given.\n * @param {String} [action]\n */\n\n }, {\n key: \"isSupported\",\n value: function isSupported() {\n var action = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : ['copy', 'cut'];\n var actions = typeof action === 'string' ? 
[action] : action;\n var support = !!document.queryCommandSupported;\n actions.forEach(function (action) {\n support = support && !!document.queryCommandSupported(action);\n });\n return support;\n }\n }]);\n\n return Clipboard;\n}((tiny_emitter_default()));\n\n/* harmony default export */ var clipboard = (Clipboard);\n\n/***/ }),\n\n/***/ 828:\n/***/ (function(module) {\n\nvar DOCUMENT_NODE_TYPE = 9;\n\n/**\n * A polyfill for Element.matches()\n */\nif (typeof Element !== 'undefined' && !Element.prototype.matches) {\n var proto = Element.prototype;\n\n proto.matches = proto.matchesSelector ||\n proto.mozMatchesSelector ||\n proto.msMatchesSelector ||\n proto.oMatchesSelector ||\n proto.webkitMatchesSelector;\n}\n\n/**\n * Finds the closest parent that matches a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @return {Function}\n */\nfunction closest (element, selector) {\n while (element && element.nodeType !== DOCUMENT_NODE_TYPE) {\n if (typeof element.matches === 'function' &&\n element.matches(selector)) {\n return element;\n }\n element = element.parentNode;\n }\n}\n\nmodule.exports = closest;\n\n\n/***/ }),\n\n/***/ 438:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar closest = __webpack_require__(828);\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction _delegate(element, selector, type, callback, useCapture) {\n var listenerFn = listener.apply(this, arguments);\n\n element.addEventListener(type, listenerFn, useCapture);\n\n return {\n destroy: function() {\n element.removeEventListener(type, listenerFn, useCapture);\n }\n }\n}\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element|String|Array} [elements]\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction delegate(elements, selector, type, callback, useCapture) {\n // Handle the regular Element usage\n if (typeof elements.addEventListener === 'function') {\n return _delegate.apply(null, arguments);\n }\n\n // Handle Element-less usage, it defaults to global delegation\n if (typeof type === 'function') {\n // Use `document` as the first parameter, then apply arguments\n // This is a short way to .unshift `arguments` without running into deoptimizations\n return _delegate.bind(null, document).apply(null, arguments);\n }\n\n // Handle Selector-based usage\n if (typeof elements === 'string') {\n elements = document.querySelectorAll(elements);\n }\n\n // Handle Array-like based usage\n return Array.prototype.map.call(elements, function (element) {\n return _delegate(element, selector, type, callback, useCapture);\n });\n}\n\n/**\n * Finds closest match and invokes callback.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Function}\n */\nfunction listener(element, selector, type, callback) {\n return function(e) {\n e.delegateTarget = closest(e.target, selector);\n\n if (e.delegateTarget) {\n callback.call(element, e);\n }\n }\n}\n\nmodule.exports = delegate;\n\n\n/***/ }),\n\n/***/ 879:\n/***/ (function(__unused_webpack_module, exports) {\n\n/**\n * Check if argument is a HTML element.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.node = function(value) {\n return value !== undefined\n && 
value instanceof HTMLElement\n && value.nodeType === 1;\n};\n\n/**\n * Check if argument is a list of HTML elements.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.nodeList = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return value !== undefined\n && (type === '[object NodeList]' || type === '[object HTMLCollection]')\n && ('length' in value)\n && (value.length === 0 || exports.node(value[0]));\n};\n\n/**\n * Check if argument is a string.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.string = function(value) {\n return typeof value === 'string'\n || value instanceof String;\n};\n\n/**\n * Check if argument is a function.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.fn = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return type === '[object Function]';\n};\n\n\n/***/ }),\n\n/***/ 370:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar is = __webpack_require__(879);\nvar delegate = __webpack_require__(438);\n\n/**\n * Validates all params and calls the right\n * listener function based on its target type.\n *\n * @param {String|HTMLElement|HTMLCollection|NodeList} target\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listen(target, type, callback) {\n if (!target && !type && !callback) {\n throw new Error('Missing required arguments');\n }\n\n if (!is.string(type)) {\n throw new TypeError('Second argument must be a String');\n }\n\n if (!is.fn(callback)) {\n throw new TypeError('Third argument must be a Function');\n }\n\n if (is.node(target)) {\n return listenNode(target, type, callback);\n }\n else if (is.nodeList(target)) {\n return listenNodeList(target, type, callback);\n }\n else if (is.string(target)) {\n return listenSelector(target, type, callback);\n }\n else {\n throw new TypeError('First argument must be a String, HTMLElement, HTMLCollection, or NodeList');\n }\n}\n\n/**\n * Adds an event listener to a HTML element\n * and returns a remove listener function.\n *\n * @param {HTMLElement} node\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNode(node, type, callback) {\n node.addEventListener(type, callback);\n\n return {\n destroy: function() {\n node.removeEventListener(type, callback);\n }\n }\n}\n\n/**\n * Add an event listener to a list of HTML elements\n * and returns a remove listener function.\n *\n * @param {NodeList|HTMLCollection} nodeList\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNodeList(nodeList, type, callback) {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.addEventListener(type, callback);\n });\n\n return {\n destroy: function() {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.removeEventListener(type, callback);\n });\n }\n }\n}\n\n/**\n * Add an event listener to a selector\n * and returns a remove listener function.\n *\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenSelector(selector, type, callback) {\n return delegate(document.body, selector, type, callback);\n}\n\nmodule.exports = listen;\n\n\n/***/ }),\n\n/***/ 817:\n/***/ (function(module) {\n\nfunction select(element) {\n var selectedText;\n\n if (element.nodeName === 'SELECT') {\n element.focus();\n\n selectedText = element.value;\n }\n else if (element.nodeName === 'INPUT' || element.nodeName 
=== 'TEXTAREA') {\n var isReadOnly = element.hasAttribute('readonly');\n\n if (!isReadOnly) {\n element.setAttribute('readonly', '');\n }\n\n element.select();\n element.setSelectionRange(0, element.value.length);\n\n if (!isReadOnly) {\n element.removeAttribute('readonly');\n }\n\n selectedText = element.value;\n }\n else {\n if (element.hasAttribute('contenteditable')) {\n element.focus();\n }\n\n var selection = window.getSelection();\n var range = document.createRange();\n\n range.selectNodeContents(element);\n selection.removeAllRanges();\n selection.addRange(range);\n\n selectedText = selection.toString();\n }\n\n return selectedText;\n}\n\nmodule.exports = select;\n\n\n/***/ }),\n\n/***/ 279:\n/***/ (function(module) {\n\nfunction E () {\n // Keep this empty so it's easier to inherit from\n // (via https://github.com/lipsmack from https://github.com/scottcorgan/tiny-emitter/issues/3)\n}\n\nE.prototype = {\n on: function (name, callback, ctx) {\n var e = this.e || (this.e = {});\n\n (e[name] || (e[name] = [])).push({\n fn: callback,\n ctx: ctx\n });\n\n return this;\n },\n\n once: function (name, callback, ctx) {\n var self = this;\n function listener () {\n self.off(name, listener);\n callback.apply(ctx, arguments);\n };\n\n listener._ = callback\n return this.on(name, listener, ctx);\n },\n\n emit: function (name) {\n var data = [].slice.call(arguments, 1);\n var evtArr = ((this.e || (this.e = {}))[name] || []).slice();\n var i = 0;\n var len = evtArr.length;\n\n for (i; i < len; i++) {\n evtArr[i].fn.apply(evtArr[i].ctx, data);\n }\n\n return this;\n },\n\n off: function (name, callback) {\n var e = this.e || (this.e = {});\n var evts = e[name];\n var liveEvents = [];\n\n if (evts && callback) {\n for (var i = 0, len = evts.length; i < len; i++) {\n if (evts[i].fn !== callback && evts[i].fn._ !== callback)\n liveEvents.push(evts[i]);\n }\n }\n\n // Remove event from queue to prevent memory leak\n // Suggested by https://github.com/lazd\n // Ref: https://github.com/scottcorgan/tiny-emitter/commit/c6ebfaa9bc973b33d110a84a307742b7cf94c953#commitcomment-5024910\n\n (liveEvents.length)\n ? 
e[name] = liveEvents\n : delete e[name];\n\n return this;\n }\n};\n\nmodule.exports = E;\nmodule.exports.TinyEmitter = E;\n\n\n/***/ })\n\n/******/ \t});\n/************************************************************************/\n/******/ \t// The module cache\n/******/ \tvar __webpack_module_cache__ = {};\n/******/ \t\n/******/ \t// The require function\n/******/ \tfunction __webpack_require__(moduleId) {\n/******/ \t\t// Check if module is in cache\n/******/ \t\tif(__webpack_module_cache__[moduleId]) {\n/******/ \t\t\treturn __webpack_module_cache__[moduleId].exports;\n/******/ \t\t}\n/******/ \t\t// Create a new module (and put it into the cache)\n/******/ \t\tvar module = __webpack_module_cache__[moduleId] = {\n/******/ \t\t\t// no module.id needed\n/******/ \t\t\t// no module.loaded needed\n/******/ \t\t\texports: {}\n/******/ \t\t};\n/******/ \t\n/******/ \t\t// Execute the module function\n/******/ \t\t__webpack_modules__[moduleId](module, module.exports, __webpack_require__);\n/******/ \t\n/******/ \t\t// Return the exports of the module\n/******/ \t\treturn module.exports;\n/******/ \t}\n/******/ \t\n/************************************************************************/\n/******/ \t/* webpack/runtime/compat get default export */\n/******/ \t!function() {\n/******/ \t\t// getDefaultExport function for compatibility with non-harmony modules\n/******/ \t\t__webpack_require__.n = function(module) {\n/******/ \t\t\tvar getter = module && module.__esModule ?\n/******/ \t\t\t\tfunction() { return module['default']; } :\n/******/ \t\t\t\tfunction() { return module; };\n/******/ \t\t\t__webpack_require__.d(getter, { a: getter });\n/******/ \t\t\treturn getter;\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/define property getters */\n/******/ \t!function() {\n/******/ \t\t// define getter functions for harmony exports\n/******/ \t\t__webpack_require__.d = function(exports, definition) {\n/******/ \t\t\tfor(var key in definition) {\n/******/ \t\t\t\tif(__webpack_require__.o(definition, key) && !__webpack_require__.o(exports, key)) {\n/******/ \t\t\t\t\tObject.defineProperty(exports, key, { enumerable: true, get: definition[key] });\n/******/ \t\t\t\t}\n/******/ \t\t\t}\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/hasOwnProperty shorthand */\n/******/ \t!function() {\n/******/ \t\t__webpack_require__.o = function(obj, prop) { return Object.prototype.hasOwnProperty.call(obj, prop); }\n/******/ \t}();\n/******/ \t\n/************************************************************************/\n/******/ \t// module exports must be returned from runtime so entry inlining is disabled\n/******/ \t// startup\n/******/ \t// Load entry module and return exports\n/******/ \treturn __webpack_require__(686);\n/******/ })()\n.default;\n});", "/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n 
var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? html + str.substring(lastIndex, index)\n : html;\n}\n", "/*\n * Copyright (c) 2016-2024 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport \"focus-visible\"\n\nimport {\n EMPTY,\n NEVER,\n Observable,\n Subject,\n defer,\n delay,\n filter,\n map,\n merge,\n mergeWith,\n shareReplay,\n switchMap\n} from \"rxjs\"\n\nimport { configuration, feature } from \"./_\"\nimport {\n at,\n getActiveElement,\n getOptionalElement,\n requestJSON,\n setLocation,\n setToggle,\n watchDocument,\n watchKeyboard,\n watchLocation,\n watchLocationTarget,\n watchMedia,\n watchPrint,\n watchScript,\n watchViewport\n} from \"./browser\"\nimport {\n getComponentElement,\n getComponentElements,\n mountAnnounce,\n mountBackToTop,\n mountConsent,\n mountContent,\n mountDialog,\n mountHeader,\n mountHeaderTitle,\n mountPalette,\n mountProgress,\n mountSearch,\n mountSearchHiglight,\n mountSidebar,\n mountSource,\n mountTableOfContents,\n mountTabs,\n watchHeader,\n watchMain\n} from \"./components\"\nimport {\n SearchIndex,\n setupClipboardJS,\n setupInstantNavigation,\n setupVersionSelector\n} from \"./integrations\"\nimport {\n patchEllipsis,\n patchIndeterminate,\n patchScrollfix,\n patchScrolllock\n} from \"./patches\"\nimport \"./polyfills\"\n\n/* ----------------------------------------------------------------------------\n * Functions - @todo refactor\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch search index\n *\n * @returns Search index observable\n */\nfunction fetchSearchIndex(): Observable {\n if (location.protocol === \"file:\") {\n return watchScript(\n `${new URL(\"search/search_index.js\", config.base)}`\n )\n .pipe(\n // @ts-ignore - @todo fix typings\n map(() => __index),\n shareReplay(1)\n )\n } else {\n return requestJSON(\n new URL(\"search/search_index.json\", config.base)\n )\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Application\n * 
------------------------------------------------------------------------- */\n\n/* Yay, JavaScript is available */\ndocument.documentElement.classList.remove(\"no-js\")\ndocument.documentElement.classList.add(\"js\")\n\n/* Set up navigation observables and subjects */\nconst document$ = watchDocument()\nconst location$ = watchLocation()\nconst target$ = watchLocationTarget(location$)\nconst keyboard$ = watchKeyboard()\n\n/* Set up media observables */\nconst viewport$ = watchViewport()\nconst tablet$ = watchMedia(\"(min-width: 960px)\")\nconst screen$ = watchMedia(\"(min-width: 1220px)\")\nconst print$ = watchPrint()\n\n/* Retrieve search index, if search is enabled */\nconst config = configuration()\nconst index$ = document.forms.namedItem(\"search\")\n ? fetchSearchIndex()\n : NEVER\n\n/* Set up Clipboard.js integration */\nconst alert$ = new Subject()\nsetupClipboardJS({ alert$ })\n\n/* Set up progress indicator */\nconst progress$ = new Subject()\n\n/* Set up instant navigation, if enabled */\nif (feature(\"navigation.instant\"))\n setupInstantNavigation({ location$, viewport$, progress$ })\n .subscribe(document$)\n\n/* Set up version selector */\nif (config.version?.provider === \"mike\")\n setupVersionSelector({ document$ })\n\n/* Always close drawer and search on navigation */\nmerge(location$, target$)\n .pipe(\n delay(125)\n )\n .subscribe(() => {\n setToggle(\"drawer\", false)\n setToggle(\"search\", false)\n })\n\n/* Set up global keyboard handlers */\nkeyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\")\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Go to previous page */\n case \"p\":\n case \",\":\n const prev = getOptionalElement(\"link[rel=prev]\")\n if (typeof prev !== \"undefined\")\n setLocation(prev)\n break\n\n /* Go to next page */\n case \"n\":\n case \".\":\n const next = getOptionalElement(\"link[rel=next]\")\n if (typeof next !== \"undefined\")\n setLocation(next)\n break\n\n /* Expand navigation, see https://bit.ly/3ZjG5io */\n case \"Enter\":\n const active = getActiveElement()\n if (active instanceof HTMLLabelElement)\n active.click()\n }\n })\n\n/* Set up patches */\npatchEllipsis({ viewport$, document$ })\npatchIndeterminate({ document$, tablet$ })\npatchScrollfix({ document$ })\npatchScrolllock({ viewport$, tablet$ })\n\n/* Set up header and main area observable */\nconst header$ = watchHeader(getComponentElement(\"header\"), { viewport$ })\nconst main$ = document$\n .pipe(\n map(() => getComponentElement(\"main\")),\n switchMap(el => watchMain(el, { viewport$, header$ })),\n shareReplay(1)\n )\n\n/* Set up control component observables */\nconst control$ = merge(\n\n /* Consent */\n ...getComponentElements(\"consent\")\n .map(el => mountConsent(el, { target$ })),\n\n /* Dialog */\n ...getComponentElements(\"dialog\")\n .map(el => mountDialog(el, { alert$ })),\n\n /* Header */\n ...getComponentElements(\"header\")\n .map(el => mountHeader(el, { viewport$, header$, main$ })),\n\n /* Color palette */\n ...getComponentElements(\"palette\")\n .map(el => mountPalette(el)),\n\n /* Progress bar */\n ...getComponentElements(\"progress\")\n .map(el => mountProgress(el, { progress$ })),\n\n /* Search */\n ...getComponentElements(\"search\")\n .map(el => mountSearch(el, { index$, keyboard$ })),\n\n /* Repository information */\n ...getComponentElements(\"source\")\n .map(el => mountSource(el))\n)\n\n/* Set up content component observables */\nconst content$ = defer(() => merge(\n\n /* Announcement bar */\n ...getComponentElements(\"announce\")\n 
.map(el => mountAnnounce(el)),\n\n /* Content */\n ...getComponentElements(\"content\")\n .map(el => mountContent(el, { viewport$, target$, print$ })),\n\n /* Search highlighting */\n ...getComponentElements(\"content\")\n .map(el => feature(\"search.highlight\")\n ? mountSearchHiglight(el, { index$, location$ })\n : EMPTY\n ),\n\n /* Header title */\n ...getComponentElements(\"header-title\")\n .map(el => mountHeaderTitle(el, { viewport$, header$ })),\n\n /* Sidebar */\n ...getComponentElements(\"sidebar\")\n .map(el => el.getAttribute(\"data-md-type\") === \"navigation\"\n ? at(screen$, () => mountSidebar(el, { viewport$, header$, main$ }))\n : at(tablet$, () => mountSidebar(el, { viewport$, header$, main$ }))\n ),\n\n /* Navigation tabs */\n ...getComponentElements(\"tabs\")\n .map(el => mountTabs(el, { viewport$, header$ })),\n\n /* Table of contents */\n ...getComponentElements(\"toc\")\n .map(el => mountTableOfContents(el, {\n viewport$, header$, main$, target$\n })),\n\n /* Back-to-top button */\n ...getComponentElements(\"top\")\n .map(el => mountBackToTop(el, { viewport$, header$, main$, target$ }))\n))\n\n/* Set up component observables */\nconst component$ = document$\n .pipe(\n switchMap(() => content$),\n mergeWith(control$),\n shareReplay(1)\n )\n\n/* Subscribe to all components */\ncomponent$.subscribe()\n\n/* ----------------------------------------------------------------------------\n * Exports\n * ------------------------------------------------------------------------- */\n\nwindow.document$ = document$ /* Document observable */\nwindow.location$ = location$ /* Location subject */\nwindow.target$ = target$ /* Location target observable */\nwindow.keyboard$ = keyboard$ /* Keyboard observable */\nwindow.viewport$ = viewport$ /* Viewport observable */\nwindow.tablet$ = tablet$ /* Media tablet observable */\nwindow.screen$ = screen$ /* Media screen observable */\nwindow.print$ = print$ /* Media print observable */\nwindow.alert$ = alert$ /* Alert subject */\nwindow.progress$ = progress$ /* Progress indicator subject */\nwindow.component$ = component$ /* Component observable */\n", "/*! *****************************************************************************\r\nCopyright (c) Microsoft Corporation.\r\n\r\nPermission to use, copy, modify, and/or distribute this software for any\r\npurpose with or without fee is hereby granted.\r\n\r\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH\r\nREGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY\r\nAND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,\r\nINDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM\r\nLOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR\r\nOTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR\r\nPERFORMANCE OF THIS SOFTWARE.\r\n***************************************************************************** */\r\n/* global Reflect, Promise */\r\n\r\nvar extendStatics = function(d, b) {\r\n extendStatics = Object.setPrototypeOf ||\r\n ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||\r\n function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; };\r\n return extendStatics(d, b);\r\n};\r\n\r\nexport function __extends(d, b) {\r\n if (typeof b !== \"function\" && b !== null)\r\n throw new TypeError(\"Class extends value \" + String(b) + \" is not a constructor or null\");\r\n extendStatics(d, b);\r\n function __() { this.constructor = d; }\r\n d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __());\r\n}\r\n\r\nexport var __assign = function() {\r\n __assign = Object.assign || function __assign(t) {\r\n for (var s, i = 1, n = arguments.length; i < n; i++) {\r\n s = arguments[i];\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p];\r\n }\r\n return t;\r\n }\r\n return __assign.apply(this, arguments);\r\n}\r\n\r\nexport function __rest(s, e) {\r\n var t = {};\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)\r\n t[p] = s[p];\r\n if (s != null && typeof Object.getOwnPropertySymbols === \"function\")\r\n for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {\r\n if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))\r\n t[p[i]] = s[p[i]];\r\n }\r\n return t;\r\n}\r\n\r\nexport function __decorate(decorators, target, key, desc) {\r\n var c = arguments.length, r = c < 3 ? target : desc === null ? desc = Object.getOwnPropertyDescriptor(target, key) : desc, d;\r\n if (typeof Reflect === \"object\" && typeof Reflect.decorate === \"function\") r = Reflect.decorate(decorators, target, key, desc);\r\n else for (var i = decorators.length - 1; i >= 0; i--) if (d = decorators[i]) r = (c < 3 ? d(r) : c > 3 ? d(target, key, r) : d(target, key)) || r;\r\n return c > 3 && r && Object.defineProperty(target, key, r), r;\r\n}\r\n\r\nexport function __param(paramIndex, decorator) {\r\n return function (target, key) { decorator(target, key, paramIndex); }\r\n}\r\n\r\nexport function __metadata(metadataKey, metadataValue) {\r\n if (typeof Reflect === \"object\" && typeof Reflect.metadata === \"function\") return Reflect.metadata(metadataKey, metadataValue);\r\n}\r\n\r\nexport function __awaiter(thisArg, _arguments, P, generator) {\r\n function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }\r\n return new (P || (P = Promise))(function (resolve, reject) {\r\n function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }\r\n function rejected(value) { try { step(generator[\"throw\"](value)); } catch (e) { reject(e); } }\r\n function step(result) { result.done ? 
resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }\r\n step((generator = generator.apply(thisArg, _arguments || [])).next());\r\n });\r\n}\r\n\r\nexport function __generator(thisArg, body) {\r\n var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g;\r\n return g = { next: verb(0), \"throw\": verb(1), \"return\": verb(2) }, typeof Symbol === \"function\" && (g[Symbol.iterator] = function() { return this; }), g;\r\n function verb(n) { return function (v) { return step([n, v]); }; }\r\n function step(op) {\r\n if (f) throw new TypeError(\"Generator is already executing.\");\r\n while (_) try {\r\n if (f = 1, y && (t = op[0] & 2 ? y[\"return\"] : op[0] ? y[\"throw\"] || ((t = y[\"return\"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;\r\n if (y = 0, t) op = [op[0] & 2, t.value];\r\n switch (op[0]) {\r\n case 0: case 1: t = op; break;\r\n case 4: _.label++; return { value: op[1], done: false };\r\n case 5: _.label++; y = op[1]; op = [0]; continue;\r\n case 7: op = _.ops.pop(); _.trys.pop(); continue;\r\n default:\r\n if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }\r\n if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }\r\n if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }\r\n if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }\r\n if (t[2]) _.ops.pop();\r\n _.trys.pop(); continue;\r\n }\r\n op = body.call(thisArg, _);\r\n } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }\r\n if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };\r\n }\r\n}\r\n\r\nexport var __createBinding = Object.create ? (function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } });\r\n}) : (function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n o[k2] = m[k];\r\n});\r\n\r\nexport function __exportStar(m, o) {\r\n for (var p in m) if (p !== \"default\" && !Object.prototype.hasOwnProperty.call(o, p)) __createBinding(o, m, p);\r\n}\r\n\r\nexport function __values(o) {\r\n var s = typeof Symbol === \"function\" && Symbol.iterator, m = s && o[s], i = 0;\r\n if (m) return m.call(o);\r\n if (o && typeof o.length === \"number\") return {\r\n next: function () {\r\n if (o && i >= o.length) o = void 0;\r\n return { value: o && o[i++], done: !o };\r\n }\r\n };\r\n throw new TypeError(s ? 
\"Object is not iterable.\" : \"Symbol.iterator is not defined.\");\r\n}\r\n\r\nexport function __read(o, n) {\r\n var m = typeof Symbol === \"function\" && o[Symbol.iterator];\r\n if (!m) return o;\r\n var i = m.call(o), r, ar = [], e;\r\n try {\r\n while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value);\r\n }\r\n catch (error) { e = { error: error }; }\r\n finally {\r\n try {\r\n if (r && !r.done && (m = i[\"return\"])) m.call(i);\r\n }\r\n finally { if (e) throw e.error; }\r\n }\r\n return ar;\r\n}\r\n\r\n/** @deprecated */\r\nexport function __spread() {\r\n for (var ar = [], i = 0; i < arguments.length; i++)\r\n ar = ar.concat(__read(arguments[i]));\r\n return ar;\r\n}\r\n\r\n/** @deprecated */\r\nexport function __spreadArrays() {\r\n for (var s = 0, i = 0, il = arguments.length; i < il; i++) s += arguments[i].length;\r\n for (var r = Array(s), k = 0, i = 0; i < il; i++)\r\n for (var a = arguments[i], j = 0, jl = a.length; j < jl; j++, k++)\r\n r[k] = a[j];\r\n return r;\r\n}\r\n\r\nexport function __spreadArray(to, from, pack) {\r\n if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) {\r\n if (ar || !(i in from)) {\r\n if (!ar) ar = Array.prototype.slice.call(from, 0, i);\r\n ar[i] = from[i];\r\n }\r\n }\r\n return to.concat(ar || Array.prototype.slice.call(from));\r\n}\r\n\r\nexport function __await(v) {\r\n return this instanceof __await ? (this.v = v, this) : new __await(v);\r\n}\r\n\r\nexport function __asyncGenerator(thisArg, _arguments, generator) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var g = generator.apply(thisArg, _arguments || []), i, q = [];\r\n return i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i;\r\n function verb(n) { if (g[n]) i[n] = function (v) { return new Promise(function (a, b) { q.push([n, v, a, b]) > 1 || resume(n, v); }); }; }\r\n function resume(n, v) { try { step(g[n](v)); } catch (e) { settle(q[0][3], e); } }\r\n function step(r) { r.value instanceof __await ? Promise.resolve(r.value.v).then(fulfill, reject) : settle(q[0][2], r); }\r\n function fulfill(value) { resume(\"next\", value); }\r\n function reject(value) { resume(\"throw\", value); }\r\n function settle(f, v) { if (f(v), q.shift(), q.length) resume(q[0][0], q[0][1]); }\r\n}\r\n\r\nexport function __asyncDelegator(o) {\r\n var i, p;\r\n return i = {}, verb(\"next\"), verb(\"throw\", function (e) { throw e; }), verb(\"return\"), i[Symbol.iterator] = function () { return this; }, i;\r\n function verb(n, f) { i[n] = o[n] ? function (v) { return (p = !p) ? { value: __await(o[n](v)), done: n === \"return\" } : f ? f(v) : v; } : f; }\r\n}\r\n\r\nexport function __asyncValues(o) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var m = o[Symbol.asyncIterator], i;\r\n return m ? m.call(o) : (o = typeof __values === \"function\" ? 
__values(o) : o[Symbol.iterator](), i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i);\r\n function verb(n) { i[n] = o[n] && function (v) { return new Promise(function (resolve, reject) { v = o[n](v), settle(resolve, reject, v.done, v.value); }); }; }\r\n function settle(resolve, reject, d, v) { Promise.resolve(v).then(function(v) { resolve({ value: v, done: d }); }, reject); }\r\n}\r\n\r\nexport function __makeTemplateObject(cooked, raw) {\r\n if (Object.defineProperty) { Object.defineProperty(cooked, \"raw\", { value: raw }); } else { cooked.raw = raw; }\r\n return cooked;\r\n};\r\n\r\nvar __setModuleDefault = Object.create ? (function(o, v) {\r\n Object.defineProperty(o, \"default\", { enumerable: true, value: v });\r\n}) : function(o, v) {\r\n o[\"default\"] = v;\r\n};\r\n\r\nexport function __importStar(mod) {\r\n if (mod && mod.__esModule) return mod;\r\n var result = {};\r\n if (mod != null) for (var k in mod) if (k !== \"default\" && Object.prototype.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);\r\n __setModuleDefault(result, mod);\r\n return result;\r\n}\r\n\r\nexport function __importDefault(mod) {\r\n return (mod && mod.__esModule) ? mod : { default: mod };\r\n}\r\n\r\nexport function __classPrivateFieldGet(receiver, state, kind, f) {\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a getter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot read private member from an object whose class did not declare it\");\r\n return kind === \"m\" ? f : kind === \"a\" ? f.call(receiver) : f ? f.value : state.get(receiver);\r\n}\r\n\r\nexport function __classPrivateFieldSet(receiver, state, value, kind, f) {\r\n if (kind === \"m\") throw new TypeError(\"Private method is not writable\");\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a setter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot write private member to an object whose class did not declare it\");\r\n return (kind === \"a\" ? f.call(receiver, value) : f ? f.value = value : state.set(receiver, value)), value;\r\n}\r\n", "/**\n * Returns true if the object is a function.\n * @param value The value to check\n */\nexport function isFunction(value: any): value is (...args: any[]) => any {\n return typeof value === 'function';\n}\n", "/**\n * Used to create Error subclasses until the community moves away from ES5.\n *\n * This is because compiling from TypeScript down to ES5 has issues with subclassing Errors\n * as well as other built-in types: https://github.com/Microsoft/TypeScript/issues/12123\n *\n * @param createImpl A factory function to create the actual constructor implementation. 
The returned\n * function should be a named function that calls `_super` internally.\n */\nexport function createErrorClass(createImpl: (_super: any) => any): T {\n const _super = (instance: any) => {\n Error.call(instance);\n instance.stack = new Error().stack;\n };\n\n const ctorFunc = createImpl(_super);\n ctorFunc.prototype = Object.create(Error.prototype);\n ctorFunc.prototype.constructor = ctorFunc;\n return ctorFunc;\n}\n", "import { createErrorClass } from './createErrorClass';\n\nexport interface UnsubscriptionError extends Error {\n readonly errors: any[];\n}\n\nexport interface UnsubscriptionErrorCtor {\n /**\n * @deprecated Internal implementation detail. Do not construct error instances.\n * Cannot be tagged as internal: https://github.com/ReactiveX/rxjs/issues/6269\n */\n new (errors: any[]): UnsubscriptionError;\n}\n\n/**\n * An error thrown when one or more errors have occurred during the\n * `unsubscribe` of a {@link Subscription}.\n */\nexport const UnsubscriptionError: UnsubscriptionErrorCtor = createErrorClass(\n (_super) =>\n function UnsubscriptionErrorImpl(this: any, errors: (Error | string)[]) {\n _super(this);\n this.message = errors\n ? `${errors.length} errors occurred during unsubscription:\n${errors.map((err, i) => `${i + 1}) ${err.toString()}`).join('\\n ')}`\n : '';\n this.name = 'UnsubscriptionError';\n this.errors = errors;\n }\n);\n", "/**\n * Removes an item from an array, mutating it.\n * @param arr The array to remove the item from\n * @param item The item to remove\n */\nexport function arrRemove(arr: T[] | undefined | null, item: T) {\n if (arr) {\n const index = arr.indexOf(item);\n 0 <= index && arr.splice(index, 1);\n }\n}\n", "import { isFunction } from './util/isFunction';\nimport { UnsubscriptionError } from './util/UnsubscriptionError';\nimport { SubscriptionLike, TeardownLogic, Unsubscribable } from './types';\nimport { arrRemove } from './util/arrRemove';\n\n/**\n * Represents a disposable resource, such as the execution of an Observable. A\n * Subscription has one important method, `unsubscribe`, that takes no argument\n * and just disposes the resource held by the subscription.\n *\n * Additionally, subscriptions may be grouped together through the `add()`\n * method, which will attach a child Subscription to the current Subscription.\n * When a Subscription is unsubscribed, all its children (and its grandchildren)\n * will be unsubscribed as well.\n *\n * @class Subscription\n */\nexport class Subscription implements SubscriptionLike {\n /** @nocollapse */\n public static EMPTY = (() => {\n const empty = new Subscription();\n empty.closed = true;\n return empty;\n })();\n\n /**\n * A flag to indicate whether this Subscription has already been unsubscribed.\n */\n public closed = false;\n\n private _parentage: Subscription[] | Subscription | null = null;\n\n /**\n * The list of registered finalizers to execute upon unsubscription. Adding and removing from this\n * list occurs in the {@link #add} and {@link #remove} methods.\n */\n private _finalizers: Exclude[] | null = null;\n\n /**\n * @param initialTeardown A function executed first as part of the finalization\n * process that is kicked off when {@link #unsubscribe} is called.\n */\n constructor(private initialTeardown?: () => void) {}\n\n /**\n * Disposes the resources held by the subscription. 
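// A minimal usage sketch of the grouping behavior described above (not part
// of the vendored source): a parent Subscription collects child subscriptions
// and finalizer functions, and a single `unsubscribe()` disposes the whole
// tree. The initial teardown runs first, then finalizers in insertion order.
//
//   const parent = new Subscription(() => console.log('parent finalized'));
//   const child = new Subscription(() => console.log('child finalized'));
//   parent.add(child);
//   parent.add(() => console.log('inline finalizer'));
//   parent.unsubscribe(); // runs all three finalizers; a second call is a no-op
//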
May, for instance, cancel\n * an ongoing Observable execution or cancel any other type of work that\n * started when the Subscription was created.\n * @return {void}\n */\n unsubscribe(): void {\n let errors: any[] | undefined;\n\n if (!this.closed) {\n this.closed = true;\n\n // Remove this from its parents.\n const { _parentage } = this;\n if (_parentage) {\n this._parentage = null;\n if (Array.isArray(_parentage)) {\n for (const parent of _parentage) {\n parent.remove(this);\n }\n } else {\n _parentage.remove(this);\n }\n }\n\n const { initialTeardown: initialFinalizer } = this;\n if (isFunction(initialFinalizer)) {\n try {\n initialFinalizer();\n } catch (e) {\n errors = e instanceof UnsubscriptionError ? e.errors : [e];\n }\n }\n\n const { _finalizers } = this;\n if (_finalizers) {\n this._finalizers = null;\n for (const finalizer of _finalizers) {\n try {\n execFinalizer(finalizer);\n } catch (err) {\n errors = errors ?? [];\n if (err instanceof UnsubscriptionError) {\n errors = [...errors, ...err.errors];\n } else {\n errors.push(err);\n }\n }\n }\n }\n\n if (errors) {\n throw new UnsubscriptionError(errors);\n }\n }\n }\n\n /**\n * Adds a finalizer to this subscription, so that it will be called (or unsubscribed)\n * when this subscription is unsubscribed. If this subscription is already {@link #closed},\n * because it has already been unsubscribed, then whatever finalizer is passed to it\n * will automatically be executed (unless the finalizer itself is also a closed subscription).\n *\n * Closed Subscriptions cannot be added as finalizers to any subscription. Adding a closed\n * subscription to any subscription will result in no operation (a noop).\n *\n * Adding a subscription to itself, or adding `null` or `undefined`, will not perform any\n * operation at all (a noop).\n *\n * `Subscription` instances that are added to this instance will automatically remove themselves\n * if they are unsubscribed. Functions and {@link Unsubscribable} objects that you wish to remove\n * will need to be removed manually with {@link #remove}.\n *\n * @param teardown The finalization logic to add to this subscription.\n */\n add(teardown: TeardownLogic): void {\n // Only add the finalizer if it's not undefined\n // and don't add a subscription to itself.\n if (teardown && teardown !== this) {\n if (this.closed) {\n // If this subscription is already closed,\n // execute whatever finalizer is handed to it automatically.\n execFinalizer(teardown);\n } else {\n if (teardown instanceof Subscription) {\n // We don't add closed subscriptions, and we don't add the same subscription\n // twice. Subscription unsubscribe is idempotent.\n if (teardown.closed || teardown._hasParent(this)) {\n return;\n }\n teardown._addParent(this);\n }\n (this._finalizers = this._finalizers ?? 
[]).push(teardown);\n }\n }\n }\n\n /**\n * Checks to see if this subscription already has a particular parent.\n * This will signal that this subscription has already been added to the parent in question.\n * @param parent the parent to check for\n */\n private _hasParent(parent: Subscription) {\n const { _parentage } = this;\n return _parentage === parent || (Array.isArray(_parentage) && _parentage.includes(parent));\n }\n\n /**\n * Adds a parent to this subscription so it can be removed from the parent if it\n * unsubscribes on its own.\n *\n * NOTE: THIS ASSUMES THAT {@link _hasParent} HAS ALREADY BEEN CHECKED.\n * @param parent The parent subscription to add\n */\n private _addParent(parent: Subscription) {\n const { _parentage } = this;\n this._parentage = Array.isArray(_parentage) ? (_parentage.push(parent), _parentage) : _parentage ? [_parentage, parent] : parent;\n }\n\n /**\n * Called on a child when it is removed via {@link #remove}.\n * @param parent The parent to remove\n */\n private _removeParent(parent: Subscription) {\n const { _parentage } = this;\n if (_parentage === parent) {\n this._parentage = null;\n } else if (Array.isArray(_parentage)) {\n arrRemove(_parentage, parent);\n }\n }\n\n /**\n * Removes a finalizer from this subscription that was previously added with the {@link #add} method.\n *\n * Note that `Subscription` instances, when unsubscribed, will automatically remove themselves\n * from every other `Subscription` they have been added to. This means that using the `remove` method\n * is uncommon and should be done thoughtfully.\n *\n * If you add the same finalizer instance (a function or an unsubscribable object) to a `Subscription` instance\n * more than once, you will need to call `remove` the same number of times to remove all instances.\n *\n * All finalizer instances are removed to free up memory upon unsubscription.\n *\n * @param teardown The finalizer to remove from this subscription\n */\n remove(teardown: Exclude<TeardownLogic, void>): void {\n const { _finalizers } = this;\n _finalizers && arrRemove(_finalizers, teardown);\n\n if (teardown instanceof Subscription) {\n teardown._removeParent(this);\n }\n }\n}\n\nexport const EMPTY_SUBSCRIPTION = Subscription.EMPTY;\n\nexport function isSubscription(value: any): value is Subscription {\n return (\n value instanceof Subscription ||\n (value && 'closed' in value && isFunction(value.remove) && isFunction(value.add) && isFunction(value.unsubscribe))\n );\n}\n\nfunction execFinalizer(finalizer: Unsubscribable | (() => void)) {\n if (isFunction(finalizer)) {\n finalizer();\n } else {\n finalizer.unsubscribe();\n }\n}\n", "import { Subscriber } from './Subscriber';\nimport { ObservableNotification } from './types';\n\n/**\n * The {@link GlobalConfig} object for RxJS. It is used to configure things\n * like how to react to unhandled errors.\n */\nexport const config: GlobalConfig = {\n onUnhandledError: null,\n onStoppedNotification: null,\n Promise: undefined,\n useDeprecatedSynchronousErrorHandling: false,\n useDeprecatedNextContext: false,\n};\n\n/**\n * The global configuration object for RxJS, used to configure things\n * like how to react to unhandled errors. Accessible via the {@link config}\n * object.\n */\nexport interface GlobalConfig {\n /**\n * A registration point for unhandled errors from RxJS. These are errors that\n * were not handled by consuming code in the usual subscription path. 
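The parent and child bookkeeping implemented above is easiest to see end to end. A minimal sketch, assuming only the public `Subscription` export of rxjs v7 (all names are illustrative):

```ts
import { Subscription } from 'rxjs';

// Parent with an initial teardown; a child and a plain finalizer are added to it.
const parent = new Subscription(() => console.log('parent torn down'));
const child = new Subscription(() => console.log('child torn down'));
parent.add(child);
parent.add(() => console.log('finalizer called'));

// Disposes the whole tree, in order:
// 'parent torn down', 'child torn down', 'finalizer called'.
parent.unsubscribe();

// unsubscribe is idempotent, and adding a finalizer to an
// already-closed subscription executes it immediately.
parent.unsubscribe();
parent.add(() => console.log('runs immediately'));
```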
For\n * example, if you have this configured, and you subscribe to an observable without\n * providing an error handler, errors from that subscription will end up here. This\n * will _always_ be called asynchronously on another job in the runtime. This is because\n * we do not want errors thrown in this user-configured handler to interfere with the\n * behavior of the library.\n */\n onUnhandledError: ((err: any) => void) | null;\n\n /**\n * A registration point for notifications that cannot be sent to subscribers because they\n * have completed, errored or have been explicitly unsubscribed. By default, next, complete\n * and error notifications sent to stopped subscribers are noops. However, sometimes callers\n * might want a different behavior. For example, with sources that attempt to report errors\n * to stopped subscribers, a caller can configure RxJS to throw an unhandled error instead.\n * This will _always_ be called asynchronously on another job in the runtime. This is because\n * we do not want errors thrown in this user-configured handler to interfere with the\n * behavior of the library.\n */\n onStoppedNotification: ((notification: ObservableNotification, subscriber: Subscriber) => void) | null;\n\n /**\n * The promise constructor used by default for {@link Observable#toPromise toPromise} and {@link Observable#forEach forEach}\n * methods.\n *\n * @deprecated As of version 8, RxJS will no longer support this sort of injection of a\n * Promise constructor. If you need a Promise implementation other than native promises,\n * please polyfill/patch Promise as you see appropriate. Will be removed in v8.\n */\n Promise?: PromiseConstructorLike;\n\n /**\n * If true, turns on synchronous error rethrowing, which is a deprecated behavior\n * in v6 and higher. This behavior enables bad patterns like wrapping a subscribe\n * call in a try/catch block. It also enables producer interference, a nasty bug\n * where a multicast can be broken for all observers by a downstream consumer with\n * an unhandled error. DO NOT USE THIS FLAG UNLESS IT'S NEEDED TO BUY TIME\n * FOR MIGRATION REASONS.\n *\n * @deprecated As of version 8, RxJS will no longer support synchronous throwing\n * of unhandled errors. All errors will be thrown on a separate call stack to prevent bad\n * behaviors described above. Will be removed in v8.\n */\n useDeprecatedSynchronousErrorHandling: boolean;\n\n /**\n * If true, enables an as-of-yet undocumented feature from v5: The ability to access\n * `unsubscribe()` via `this` context in `next` functions created in observers passed\n * to `subscribe`.\n *\n * This is being removed because the performance was severely problematic, and it could also cause\n * issues when types other than POJOs are passed to subscribe as subscribers, as they will likely have\n * their `this` context overwritten.\n *\n * @deprecated As of version 8, RxJS will no longer support altering the\n * context of next functions provided as part of an observer to Subscribe. Instead,\n * you will have access to a subscription or a signal or token that will allow you to do things like\n * unsubscribe and test closed status. 
Will be removed in v8.\n */\n useDeprecatedNextContext: boolean;\n}\n", "import type { TimerHandle } from './timerHandle';\ntype SetTimeoutFunction = (handler: () => void, timeout?: number, ...args: any[]) => TimerHandle;\ntype ClearTimeoutFunction = (handle: TimerHandle) => void;\n\ninterface TimeoutProvider {\n setTimeout: SetTimeoutFunction;\n clearTimeout: ClearTimeoutFunction;\n delegate:\n | {\n setTimeout: SetTimeoutFunction;\n clearTimeout: ClearTimeoutFunction;\n }\n | undefined;\n}\n\nexport const timeoutProvider: TimeoutProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n setTimeout(handler: () => void, timeout?: number, ...args) {\n const { delegate } = timeoutProvider;\n if (delegate?.setTimeout) {\n return delegate.setTimeout(handler, timeout, ...args);\n }\n return setTimeout(handler, timeout, ...args);\n },\n clearTimeout(handle) {\n const { delegate } = timeoutProvider;\n return (delegate?.clearTimeout || clearTimeout)(handle as any);\n },\n delegate: undefined,\n};\n", "import { config } from '../config';\nimport { timeoutProvider } from '../scheduler/timeoutProvider';\n\n/**\n * Handles an error on another job either with the user-configured {@link onUnhandledError},\n * or by throwing it on that new job so it can be picked up by `window.onerror`, `process.on('error')`, etc.\n *\n * This should be called whenever there is an error that is out-of-band with the subscription\n * or when an error hits a terminal boundary of the subscription and no error handler was provided.\n *\n * @param err the error to report\n */\nexport function reportUnhandledError(err: any) {\n timeoutProvider.setTimeout(() => {\n const { onUnhandledError } = config;\n if (onUnhandledError) {\n // Execute the user-configured error handler.\n onUnhandledError(err);\n } else {\n // Throw so it is picked up by the runtime's uncaught error mechanism.\n throw err;\n }\n });\n}\n", "/* tslint:disable:no-empty */\nexport function noop() { }\n", "import { CompleteNotification, NextNotification, ErrorNotification } from './types';\n\n/**\n * A completion object optimized for memory use and created to be the\n * same \"shape\" as other notifications in v8.\n * @internal\n */\nexport const COMPLETE_NOTIFICATION = (() => createNotification('C', undefined, undefined) as CompleteNotification)();\n\n/**\n * Internal use only. Creates an optimized error notification that is the same \"shape\"\n * as other notifications.\n * @internal\n */\nexport function errorNotification(error: any): ErrorNotification {\n return createNotification('E', undefined, error) as any;\n}\n\n/**\n * Internal use only. Creates an optimized next notification that is the same \"shape\"\n * as other notifications.\n * @internal\n */\nexport function nextNotification(value: T) {\n return createNotification('N', value, undefined) as NextNotification;\n}\n\n/**\n * Ensures that all notifications created internally have the same \"shape\" in v8.\n *\n * TODO: This is only exported to support a crazy legacy test in `groupBy`.\n * @internal\n */\nexport function createNotification(kind: 'N' | 'E' | 'C', value: any, error: any) {\n return {\n kind,\n value,\n error,\n };\n}\n", "import { config } from '../config';\n\nlet context: { errorThrown: boolean; error: any } | null = null;\n\n/**\n * Handles dealing with errors for super-gross mode. 
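To make the two handler hooks concrete, here is one way they might be wired up; a sketch assuming the public `config` export and the default (asynchronous) error behavior:

```ts
import { Observable, config } from 'rxjs';

// Errors that reach a subscriber with no error callback land here,
// always on a separate timeout job.
config.onUnhandledError = (err) => console.warn('unhandled:', err);

// Notifications sent to already-stopped subscribers land here.
config.onStoppedNotification = (notification) =>
  console.warn('stopped notification of kind:', notification.kind);

const source$ = new Observable<number>((subscriber) => {
  subscriber.next(1);
  subscriber.complete();
  subscriber.next(2); // subscriber is stopped: onStoppedNotification fires
});

source$.subscribe((value) => console.log(value)); // logs 1

new Observable(() => {
  throw new Error('boom'); // no error callback: onUnhandledError fires
}).subscribe();
```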
Creates a context, in which\n * any synchronously thrown errors will be passed to {@link captureError}. Which\n * will record the error such that it will be rethrown after the call back is complete.\n * TODO: Remove in v8\n * @param cb An immediately executed function.\n */\nexport function errorContext(cb: () => void) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n const isRoot = !context;\n if (isRoot) {\n context = { errorThrown: false, error: null };\n }\n cb();\n if (isRoot) {\n const { errorThrown, error } = context!;\n context = null;\n if (errorThrown) {\n throw error;\n }\n }\n } else {\n // This is the general non-deprecated path for everyone that\n // isn't crazy enough to use super-gross mode (useDeprecatedSynchronousErrorHandling)\n cb();\n }\n}\n\n/**\n * Captures errors only in super-gross mode.\n * @param err the error to capture\n */\nexport function captureError(err: any) {\n if (config.useDeprecatedSynchronousErrorHandling && context) {\n context.errorThrown = true;\n context.error = err;\n }\n}\n", "import { isFunction } from './util/isFunction';\nimport { Observer, ObservableNotification } from './types';\nimport { isSubscription, Subscription } from './Subscription';\nimport { config } from './config';\nimport { reportUnhandledError } from './util/reportUnhandledError';\nimport { noop } from './util/noop';\nimport { nextNotification, errorNotification, COMPLETE_NOTIFICATION } from './NotificationFactories';\nimport { timeoutProvider } from './scheduler/timeoutProvider';\nimport { captureError } from './util/errorContext';\n\n/**\n * Implements the {@link Observer} interface and extends the\n * {@link Subscription} class. While the {@link Observer} is the public API for\n * consuming the values of an {@link Observable}, all Observers get converted to\n * a Subscriber, in order to provide Subscription-like capabilities such as\n * `unsubscribe`. Subscriber is a common type in RxJS, and crucial for\n * implementing operators, but it is rarely used as a public API.\n *\n * @class Subscriber\n */\nexport class Subscriber extends Subscription implements Observer {\n /**\n * A static factory for a Subscriber, given a (potentially partial) definition\n * of an Observer.\n * @param next The `next` callback of an Observer.\n * @param error The `error` callback of an\n * Observer.\n * @param complete The `complete` callback of an\n * Observer.\n * @return A Subscriber wrapping the (partially defined)\n * Observer represented by the given arguments.\n * @nocollapse\n * @deprecated Do not use. Will be removed in v8. There is no replacement for this\n * method, and there is no reason to be creating instances of `Subscriber` directly.\n * If you have a specific use case, please file an issue.\n */\n static create(next?: (x?: T) => void, error?: (e?: any) => void, complete?: () => void): Subscriber {\n return new SafeSubscriber(next, error, complete);\n }\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n protected isStopped: boolean = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n protected destination: Subscriber | Observer; // this `any` is the escape hatch to erase extra type param (e.g. R)\n\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n * There is no reason to directly create an instance of Subscriber. 
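A short sketch of what the deprecated synchronous path changes in practice (a migration-only flag; by default errors are rethrown on a separate call stack instead):

```ts
import { config, of, map } from 'rxjs';

config.useDeprecatedSynchronousErrorHandling = true;

try {
  // With the flag on, errorContext captures the error thrown during this
  // synchronous subscribe and rethrows it here, at the root call.
  of(1)
    .pipe(map(() => { throw new Error('boom'); }))
    .subscribe();
} catch (err) {
  console.log('caught synchronously:', (err as Error).message);
}
```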
This type is exported for typings reasons.\n */\n constructor(destination?: Subscriber | Observer) {\n super();\n if (destination) {\n this.destination = destination;\n // Automatically chain subscriptions together here.\n // if destination is a Subscription, then it is a Subscriber.\n if (isSubscription(destination)) {\n destination.add(this);\n }\n } else {\n this.destination = EMPTY_OBSERVER;\n }\n }\n\n /**\n * The {@link Observer} callback to receive notifications of type `next` from\n * the Observable, with a value. The Observable may call this method 0 or more\n * times.\n * @param {T} [value] The `next` value.\n * @return {void}\n */\n next(value?: T): void {\n if (this.isStopped) {\n handleStoppedNotification(nextNotification(value), this);\n } else {\n this._next(value!);\n }\n }\n\n /**\n * The {@link Observer} callback to receive notifications of type `error` from\n * the Observable, with an attached `Error`. Notifies the Observer that\n * the Observable has experienced an error condition.\n * @param {any} [err] The `error` exception.\n * @return {void}\n */\n error(err?: any): void {\n if (this.isStopped) {\n handleStoppedNotification(errorNotification(err), this);\n } else {\n this.isStopped = true;\n this._error(err);\n }\n }\n\n /**\n * The {@link Observer} callback to receive a valueless notification of type\n * `complete` from the Observable. Notifies the Observer that the Observable\n * has finished sending push-based notifications.\n * @return {void}\n */\n complete(): void {\n if (this.isStopped) {\n handleStoppedNotification(COMPLETE_NOTIFICATION, this);\n } else {\n this.isStopped = true;\n this._complete();\n }\n }\n\n unsubscribe(): void {\n if (!this.closed) {\n this.isStopped = true;\n super.unsubscribe();\n this.destination = null!;\n }\n }\n\n protected _next(value: T): void {\n this.destination.next(value);\n }\n\n protected _error(err: any): void {\n try {\n this.destination.error(err);\n } finally {\n this.unsubscribe();\n }\n }\n\n protected _complete(): void {\n try {\n this.destination.complete();\n } finally {\n this.unsubscribe();\n }\n }\n}\n\n/**\n * This bind is captured here because we want to be able to have\n * compatibility with monoid libraries that tend to use a method named\n * `bind`. 
In particular, a library called Monio requires this.\n */\nconst _bind = Function.prototype.bind;\n\nfunction bind any>(fn: Fn, thisArg: any): Fn {\n return _bind.call(fn, thisArg);\n}\n\n/**\n * Internal optimization only, DO NOT EXPOSE.\n * @internal\n */\nclass ConsumerObserver implements Observer {\n constructor(private partialObserver: Partial>) {}\n\n next(value: T): void {\n const { partialObserver } = this;\n if (partialObserver.next) {\n try {\n partialObserver.next(value);\n } catch (error) {\n handleUnhandledError(error);\n }\n }\n }\n\n error(err: any): void {\n const { partialObserver } = this;\n if (partialObserver.error) {\n try {\n partialObserver.error(err);\n } catch (error) {\n handleUnhandledError(error);\n }\n } else {\n handleUnhandledError(err);\n }\n }\n\n complete(): void {\n const { partialObserver } = this;\n if (partialObserver.complete) {\n try {\n partialObserver.complete();\n } catch (error) {\n handleUnhandledError(error);\n }\n }\n }\n}\n\nexport class SafeSubscriber extends Subscriber {\n constructor(\n observerOrNext?: Partial> | ((value: T) => void) | null,\n error?: ((e?: any) => void) | null,\n complete?: (() => void) | null\n ) {\n super();\n\n let partialObserver: Partial>;\n if (isFunction(observerOrNext) || !observerOrNext) {\n // The first argument is a function, not an observer. The next\n // two arguments *could* be observers, or they could be empty.\n partialObserver = {\n next: (observerOrNext ?? undefined) as (((value: T) => void) | undefined),\n error: error ?? undefined,\n complete: complete ?? undefined,\n };\n } else {\n // The first argument is a partial observer.\n let context: any;\n if (this && config.useDeprecatedNextContext) {\n // This is a deprecated path that made `this.unsubscribe()` available in\n // next handler functions passed to subscribe. This only exists behind a flag\n // now, as it is *very* slow.\n context = Object.create(observerOrNext);\n context.unsubscribe = () => this.unsubscribe();\n partialObserver = {\n next: observerOrNext.next && bind(observerOrNext.next, context),\n error: observerOrNext.error && bind(observerOrNext.error, context),\n complete: observerOrNext.complete && bind(observerOrNext.complete, context),\n };\n } else {\n // The \"normal\" path. 
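For illustration, the deprecated next-context path described above enables this pattern (a sketch; the flag is off by default and slated for removal):

```ts
import { config, interval } from 'rxjs';

config.useDeprecatedNextContext = true;

interval(1000).subscribe({
  // Must be a `function`, not an arrow, so `this` is the patched context.
  next(this: any, count: number) {
    console.log(count);
    if (count === 2) {
      this.unsubscribe(); // patched onto a copy of the observer
    }
  },
});
```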
Just use the partial observer directly.\n partialObserver = observerOrNext;\n }\n }\n\n // Wrap the partial observer to ensure it's a full observer, and\n // make sure proper error handling is accounted for.\n this.destination = new ConsumerObserver(partialObserver);\n }\n}\n\nfunction handleUnhandledError(error: any) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n captureError(error);\n } else {\n // Ideal path, we report this as an unhandled error,\n // which is thrown on a new call stack.\n reportUnhandledError(error);\n }\n}\n\n/**\n * An error handler used when no error handler was supplied\n * to the SafeSubscriber -- meaning no error handler was supplied\n * to the `subscribe` call on our observable.\n * @param err The error to handle\n */\nfunction defaultErrorHandler(err: any) {\n throw err;\n}\n\n/**\n * A handler for notifications that cannot be sent to a stopped subscriber.\n * @param notification The notification being sent\n * @param subscriber The stopped subscriber\n */\nfunction handleStoppedNotification(notification: ObservableNotification<any>, subscriber: Subscriber<any>) {\n const { onStoppedNotification } = config;\n onStoppedNotification && timeoutProvider.setTimeout(() => onStoppedNotification(notification, subscriber));\n}\n\n/**\n * The observer used as a stub for subscriptions where the user did not\n * pass any arguments to `subscribe`. Comes with the default error handling\n * behavior.\n */\nexport const EMPTY_OBSERVER: Readonly<Observer<any>> & { closed: true } = {\n closed: true,\n next: noop,\n error: defaultErrorHandler,\n complete: noop,\n};\n", "/**\n * Symbol.observable or a string \"@@observable\". Used for interop.\n *\n * @deprecated We will no longer be exporting this symbol in upcoming versions of RxJS.\n * Instead polyfill and use Symbol.observable directly *or* use https://www.npmjs.com/package/symbol-observable\n */\nexport const observable: string | symbol = (() => (typeof Symbol === 'function' && Symbol.observable) || '@@observable')();\n", "/**\n * This function takes one parameter and just returns it. Simply put,\n * this is like `(x: T): T => x`.\n *\n * ## Examples\n *\n * This is useful in some cases when using things like `mergeMap`\n *\n * ```ts\n * import { interval, take, map, range, mergeMap, identity } from 'rxjs';\n *\n * const source$ = interval(1000).pipe(take(5));\n *\n * const result$ = source$.pipe(\n * map(i => range(i)),\n * mergeMap(identity) // same as mergeMap(x => x)\n * );\n *\n * result$.subscribe({\n * next: console.log\n * });\n * ```\n *\n * Or when you want to selectively apply an operator\n *\n * ```ts\n * import { interval, take, identity } from 'rxjs';\n *\n * const shouldLimit = () => Math.random() < 0.5;\n *\n * const source$ = interval(1000);\n *\n * const result$ = source$.pipe(shouldLimit() ? 
take(5) : identity);\n *\n * result$.subscribe({\n * next: console.log\n * });\n * ```\n *\n * @param x Any value that is returned by this function\n * @returns The value passed as the first parameter to this function\n */\nexport function identity(x: T): T {\n return x;\n}\n", "import { identity } from './identity';\nimport { UnaryFunction } from '../types';\n\nexport function pipe(): typeof identity;\nexport function pipe(fn1: UnaryFunction): UnaryFunction;\nexport function pipe(fn1: UnaryFunction, fn2: UnaryFunction): UnaryFunction;\nexport function pipe(fn1: UnaryFunction, fn2: UnaryFunction, fn3: UnaryFunction): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction,\n fn9: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction,\n fn9: UnaryFunction,\n ...fns: UnaryFunction[]\n): UnaryFunction;\n\n/**\n * pipe() can be called on one or more functions, each of which can take one argument (\"UnaryFunction\")\n * and uses it to return a value.\n * It returns a function that takes one argument, passes it to the first UnaryFunction, and then\n * passes the result to the next one, passes that result to the next one, and so on. \n */\nexport function pipe(...fns: Array>): UnaryFunction {\n return pipeFromArray(fns);\n}\n\n/** @internal */\nexport function pipeFromArray(fns: Array>): UnaryFunction {\n if (fns.length === 0) {\n return identity as UnaryFunction;\n }\n\n if (fns.length === 1) {\n return fns[0];\n }\n\n return function piped(input: T): R {\n return fns.reduce((prev: any, fn: UnaryFunction) => fn(prev), input as any);\n };\n}\n", "import { Operator } from './Operator';\nimport { SafeSubscriber, Subscriber } from './Subscriber';\nimport { isSubscription, Subscription } from './Subscription';\nimport { TeardownLogic, OperatorFunction, Subscribable, Observer } from './types';\nimport { observable as Symbol_observable } from './symbol/observable';\nimport { pipeFromArray } from './util/pipe';\nimport { config } from './config';\nimport { isFunction } from './util/isFunction';\nimport { errorContext } from './util/errorContext';\n\n/**\n * A representation of any set of values over any amount of time. This is the most basic building block\n * of RxJS.\n *\n * @class Observable\n */\nexport class Observable implements Subscribable {\n /**\n * @deprecated Internal implementation detail, do not use directly. 
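Since `pipeFromArray` reduces over plain unary functions, `pipe` also works as ordinary function composition; a small sketch:

```ts
import { pipe } from 'rxjs';

const double = (n: number) => n * 2;
const increment = (n: number) => n + 1;

// pipe(f, g) returns x => g(f(x)).
const doubleThenIncrement = pipe(double, increment);
console.log(doubleThenIncrement(5)); // 11

// With no arguments, pipe() is the identity function.
console.log(pipe()(42)); // 42
```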
Will be made internal in v8.\n */\n source: Observable<any> | undefined;\n\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n */\n operator: Operator<any, T> | undefined;\n\n /**\n * @constructor\n * @param {Function} subscribe the function that is called when the Observable is\n * initially subscribed to. This function is given a Subscriber, to which new values\n * can be `next`ed, or an `error` method can be called to raise an error, or\n * `complete` can be called to notify of a successful completion.\n */\n constructor(subscribe?: (this: Observable<T>, subscriber: Subscriber<T>) => TeardownLogic) {\n if (subscribe) {\n this._subscribe = subscribe;\n }\n }\n\n // HACK: Since TypeScript inherits static properties too, we have to\n // fight against TypeScript here so Subject can have a different static create signature\n /**\n * Creates a new Observable by calling the Observable constructor\n * @owner Observable\n * @method create\n * @param {Function} subscribe? the subscriber function to be passed to the Observable constructor\n * @return {Observable} a new observable\n * @nocollapse\n * @deprecated Use `new Observable()` instead. Will be removed in v8.\n */\n static create: (...args: any[]) => any = <T>(subscribe?: (subscriber: Subscriber<T>) => TeardownLogic) => {\n return new Observable<T>(subscribe);\n };\n\n /**\n * Creates a new Observable, with this Observable instance as the source, and the passed\n * operator defined as the new observable's operator.\n * @method lift\n * @param operator the operator defining the operation to take on the observable\n * @return a new observable with the Operator applied\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n * If you have implemented an operator using `lift`, it is recommended that you create an\n * operator by simply returning `new Observable()` directly. See \"Creating new operators from\n * scratch\" section here: https://rxjs.dev/guide/operators\n */\n lift<R>(operator?: Operator<T, R>): Observable<R> {\n const observable = new Observable<R>();\n observable.source = this;\n observable.operator = operator;\n return observable;\n }\n\n subscribe(observerOrNext?: Partial<Observer<T>> | ((value: T) => void)): Subscription;\n /** @deprecated Instead of passing separate callback arguments, use an observer argument. Signatures taking separate callback arguments will be removed in v8. Details: https://rxjs.dev/deprecations/subscribe-arguments */\n subscribe(next?: ((value: T) => void) | null, error?: ((error: any) => void) | null, complete?: (() => void) | null): Subscription;\n /**\n * Invokes an execution of an Observable and registers Observer handlers for notifications it will emit.\n *\n * Use it when you have all these Observables, but still nothing is happening.\n *\n * `subscribe` is not a regular operator, but a method that calls Observable's internal `subscribe` function. It\n * might be for example a function that you passed to Observable's constructor, but most of the time it is\n * a library implementation, which defines what will be emitted by an Observable, and when it will be emitted. This means\n * that calling `subscribe` is actually the moment when Observable starts its work, not when it is created, as is often\n * thought.\n *\n * Apart from starting the execution of an Observable, this method allows you to listen for values\n * that an Observable emits, as well as for when it completes or errors. 
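Before the `subscribe` walkthrough continues, a minimal sketch of the constructor contract just described, where the function handed to `new Observable` receives the subscriber and returns teardown:

```ts
import { Observable } from 'rxjs';

const ticks$ = new Observable<number>((subscriber) => {
  let n = 0;
  const id = setInterval(() => subscriber.next(n++), 1000);
  // Returned teardown runs on unsubscribe, error, or complete.
  return () => clearInterval(id);
});

const subscription = ticks$.subscribe((n) => console.log(n));
setTimeout(() => subscription.unsubscribe(), 3500); // logs 0, 1, 2
```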
You can achieve this in the following two ways.\n *\n * The first way is creating an object that implements the {@link Observer} interface. It should have methods\n * defined by that interface, but note that it should be just a regular JavaScript object, which you can create\n * yourself in any way you want (ES6 class, classic function constructor, object literal etc.). In particular, do\n * not attempt to use any RxJS implementation details to create Observers - you don't need them. Remember also\n * that your object does not have to implement all methods. If you find yourself creating a method that doesn't\n * do anything, you can simply omit it. Note however, if the `error` method is not provided and an error happens,\n * it will be thrown asynchronously. Errors thrown asynchronously cannot be caught using `try`/`catch`. Instead,\n * use the {@link onUnhandledError} configuration option or use a runtime handler (like `window.onerror` or\n * `process.on('error')`) to be notified of unhandled errors. Because of this, it's recommended that you provide\n * an `error` method to avoid missing thrown errors.\n *\n * The second way is to give up on the Observer object altogether and simply provide callback functions in place of its methods.\n * This means you can provide three functions as arguments to `subscribe`, where the first function is the equivalent\n * of a `next` method, the second of an `error` method and the third of a `complete` method. Just as in the case of an Observer,\n * if you do not need to listen for something, you can omit a function by passing `undefined` or `null`,\n * since `subscribe` recognizes these functions by where they were placed in the function call. When it comes\n * to the `error` function, as with an Observer, if not provided, errors emitted by an Observable will be thrown asynchronously.\n *\n * You can, however, subscribe with no parameters at all. This may be the case where you're not interested in terminal events\n * and you also handled emissions internally by using operators (e.g. using `tap`).\n *\n * Whichever style of calling `subscribe` you use, in both cases it returns a Subscription object.\n * This object allows you to call `unsubscribe` on it, which in turn will stop the work that an Observable does and will clean\n * up all resources that an Observable used. Note that cancelling a subscription will not call the `complete` callback\n * provided to the `subscribe` function, which is reserved for a regular completion signal that comes from an Observable.\n *\n * Remember that callbacks provided to `subscribe` are not guaranteed to be called asynchronously.\n * It is the Observable itself that decides when these functions will be called. For example, {@link of}\n * by default emits all its values synchronously. 
Always check documentation for how given Observable\n * will behave when subscribed and if its default behavior can be modified with a `scheduler`.\n *\n * #### Examples\n *\n * Subscribe with an {@link guide/observer Observer}\n *\n * ```ts\n * import { of } from 'rxjs';\n *\n * const sumObserver = {\n * sum: 0,\n * next(value) {\n * console.log('Adding: ' + value);\n * this.sum = this.sum + value;\n * },\n * error() {\n * // We actually could just remove this method,\n * // since we do not really care about errors right now.\n * },\n * complete() {\n * console.log('Sum equals: ' + this.sum);\n * }\n * };\n *\n * of(1, 2, 3) // Synchronously emits 1, 2, 3 and then completes.\n * .subscribe(sumObserver);\n *\n * // Logs:\n * // 'Adding: 1'\n * // 'Adding: 2'\n * // 'Adding: 3'\n * // 'Sum equals: 6'\n * ```\n *\n * Subscribe with functions ({@link deprecations/subscribe-arguments deprecated})\n *\n * ```ts\n * import { of } from 'rxjs'\n *\n * let sum = 0;\n *\n * of(1, 2, 3).subscribe(\n * value => {\n * console.log('Adding: ' + value);\n * sum = sum + value;\n * },\n * undefined,\n * () => console.log('Sum equals: ' + sum)\n * );\n *\n * // Logs:\n * // 'Adding: 1'\n * // 'Adding: 2'\n * // 'Adding: 3'\n * // 'Sum equals: 6'\n * ```\n *\n * Cancel a subscription\n *\n * ```ts\n * import { interval } from 'rxjs';\n *\n * const subscription = interval(1000).subscribe({\n * next(num) {\n * console.log(num)\n * },\n * complete() {\n * // Will not be called, even when cancelling subscription.\n * console.log('completed!');\n * }\n * });\n *\n * setTimeout(() => {\n * subscription.unsubscribe();\n * console.log('unsubscribed!');\n * }, 2500);\n *\n * // Logs:\n * // 0 after 1s\n * // 1 after 2s\n * // 'unsubscribed!' after 2.5s\n * ```\n *\n * @param {Observer|Function} observerOrNext (optional) Either an observer with methods to be called,\n * or the first of three possible handlers, which is the handler for each value emitted from the subscribed\n * Observable.\n * @param {Function} error (optional) A handler for a terminal event resulting from an error. If no error handler is provided,\n * the error will be thrown asynchronously as unhandled.\n * @param {Function} complete (optional) A handler for a terminal event resulting from successful completion.\n * @return {Subscription} a subscription reference to the registered handlers\n * @method subscribe\n */\n subscribe(\n observerOrNext?: Partial> | ((value: T) => void) | null,\n error?: ((error: any) => void) | null,\n complete?: (() => void) | null\n ): Subscription {\n const subscriber = isSubscriber(observerOrNext) ? observerOrNext : new SafeSubscriber(observerOrNext, error, complete);\n\n errorContext(() => {\n const { operator, source } = this;\n subscriber.add(\n operator\n ? // We're dealing with a subscription in the\n // operator chain to one of our lifted operators.\n operator.call(subscriber, source)\n : source\n ? // If `source` has a value, but `operator` does not, something that\n // had intimate knowledge of our API, like our `Subject`, must have\n // set it. 
We're going to just call `_subscribe` directly.\n this._subscribe(subscriber)\n : // In all other cases, we're likely wrapping a user-provided initializer\n // function, so we need to catch errors and handle them appropriately.\n this._trySubscribe(subscriber)\n );\n });\n\n return subscriber;\n }\n\n /** @internal */\n protected _trySubscribe(sink: Subscriber): TeardownLogic {\n try {\n return this._subscribe(sink);\n } catch (err) {\n // We don't need to return anything in this case,\n // because it's just going to try to `add()` to a subscription\n // above.\n sink.error(err);\n }\n }\n\n /**\n * Used as a NON-CANCELLABLE means of subscribing to an observable, for use with\n * APIs that expect promises, like `async/await`. You cannot unsubscribe from this.\n *\n * **WARNING**: Only use this with observables you *know* will complete. If the source\n * observable does not complete, you will end up with a promise that is hung up, and\n * potentially all of the state of an async function hanging out in memory. To avoid\n * this situation, look into adding something like {@link timeout}, {@link take},\n * {@link takeWhile}, or {@link takeUntil} amongst others.\n *\n * #### Example\n *\n * ```ts\n * import { interval, take } from 'rxjs';\n *\n * const source$ = interval(1000).pipe(take(4));\n *\n * async function getTotal() {\n * let total = 0;\n *\n * await source$.forEach(value => {\n * total += value;\n * console.log('observable -> ' + value);\n * });\n *\n * return total;\n * }\n *\n * getTotal().then(\n * total => console.log('Total: ' + total)\n * );\n *\n * // Expected:\n * // 'observable -> 0'\n * // 'observable -> 1'\n * // 'observable -> 2'\n * // 'observable -> 3'\n * // 'Total: 6'\n * ```\n *\n * @param next a handler for each value emitted by the observable\n * @return a promise that either resolves on observable completion or\n * rejects with the handled error\n */\n forEach(next: (value: T) => void): Promise;\n\n /**\n * @param next a handler for each value emitted by the observable\n * @param promiseCtor a constructor function used to instantiate the Promise\n * @return a promise that either resolves on observable completion or\n * rejects with the handled error\n * @deprecated Passing a Promise constructor will no longer be available\n * in upcoming versions of RxJS. This is because it adds weight to the library, for very\n * little benefit. If you need this functionality, it is recommended that you either\n * polyfill Promise, or you create an adapter to convert the returned native promise\n * to whatever promise implementation you wanted. 
Will be removed in v8.\n */\n forEach(next: (value: T) => void, promiseCtor: PromiseConstructorLike): Promise;\n\n forEach(next: (value: T) => void, promiseCtor?: PromiseConstructorLike): Promise {\n promiseCtor = getPromiseCtor(promiseCtor);\n\n return new promiseCtor((resolve, reject) => {\n const subscriber = new SafeSubscriber({\n next: (value) => {\n try {\n next(value);\n } catch (err) {\n reject(err);\n subscriber.unsubscribe();\n }\n },\n error: reject,\n complete: resolve,\n });\n this.subscribe(subscriber);\n }) as Promise;\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): TeardownLogic {\n return this.source?.subscribe(subscriber);\n }\n\n /**\n * An interop point defined by the es7-observable spec https://github.com/zenparsing/es-observable\n * @method Symbol.observable\n * @return {Observable} this instance of the observable\n */\n [Symbol_observable]() {\n return this;\n }\n\n /* tslint:disable:max-line-length */\n pipe(): Observable;\n pipe(op1: OperatorFunction): Observable;\n pipe(op1: OperatorFunction, op2: OperatorFunction): Observable;\n pipe(op1: OperatorFunction, op2: OperatorFunction, op3: OperatorFunction): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction,\n op9: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction,\n op9: OperatorFunction,\n ...operations: OperatorFunction[]\n ): Observable;\n /* tslint:enable:max-line-length */\n\n /**\n * Used to stitch together functional operators into a chain.\n * @method pipe\n * @return {Observable} the Observable result of all of the operators having\n * been called in the order they were passed in.\n *\n * ## Example\n *\n * ```ts\n * import { interval, filter, map, scan } from 'rxjs';\n *\n * interval(1000)\n * .pipe(\n * filter(x => x % 2 === 0),\n * map(x => x + x),\n * scan((acc, x) => acc + x)\n * )\n * .subscribe(x => console.log(x));\n * ```\n */\n pipe(...operations: OperatorFunction[]): Observable {\n return pipeFromArray(operations)(this);\n }\n\n /* tslint:disable:max-line-length */\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(): Promise;\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. 
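As the deprecation notes say, `firstValueFrom` and `lastValueFrom` are the replacements; a sketch of the migration:

```ts
import { firstValueFrom, interval, lastValueFrom, take } from 'rxjs';

const source$ = interval(100).pipe(take(3)); // emits 0, 1, 2

async function main() {
  const first = await firstValueFrom(source$); // 0
  const last = await lastValueFrom(source$); // 2, instead of source$.toPromise()
  console.log(first, last);
}

main();
```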
Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(PromiseCtor: typeof Promise): Promise;\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(PromiseCtor: PromiseConstructorLike): Promise;\n /* tslint:enable:max-line-length */\n\n /**\n * Subscribe to this Observable and get a Promise resolving on\n * `complete` with the last emission (if any).\n *\n * **WARNING**: Only use this with observables you *know* will complete. If the source\n * observable does not complete, you will end up with a promise that is hung up, and\n * potentially all of the state of an async function hanging out in memory. To avoid\n * this situation, look into adding something like {@link timeout}, {@link take},\n * {@link takeWhile}, or {@link takeUntil} amongst others.\n *\n * @method toPromise\n * @param [promiseCtor] a constructor function used to instantiate\n * the Promise\n * @return A Promise that resolves with the last value emit, or\n * rejects on an error. If there were no emissions, Promise\n * resolves with undefined.\n * @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise\n */\n toPromise(promiseCtor?: PromiseConstructorLike): Promise {\n promiseCtor = getPromiseCtor(promiseCtor);\n\n return new promiseCtor((resolve, reject) => {\n let value: T | undefined;\n this.subscribe(\n (x: T) => (value = x),\n (err: any) => reject(err),\n () => resolve(value)\n );\n }) as Promise;\n }\n}\n\n/**\n * Decides between a passed promise constructor from consuming code,\n * A default configured promise constructor, and the native promise\n * constructor and returns it. If nothing can be found, it will throw\n * an error.\n * @param promiseCtor The optional promise constructor to passed by consuming code\n */\nfunction getPromiseCtor(promiseCtor: PromiseConstructorLike | undefined) {\n return promiseCtor ?? config.Promise ?? Promise;\n}\n\nfunction isObserver(value: any): value is Observer {\n return value && isFunction(value.next) && isFunction(value.error) && isFunction(value.complete);\n}\n\nfunction isSubscriber(value: any): value is Subscriber {\n return (value && value instanceof Subscriber) || (isObserver(value) && isSubscription(value));\n}\n", "import { Observable } from '../Observable';\nimport { Subscriber } from '../Subscriber';\nimport { OperatorFunction } from '../types';\nimport { isFunction } from './isFunction';\n\n/**\n * Used to determine if an object is an Observable with a lift function.\n */\nexport function hasLift(source: any): source is { lift: InstanceType['lift'] } {\n return isFunction(source?.lift);\n}\n\n/**\n * Creates an `OperatorFunction`. 
Used to define operators throughout the library in a concise way.\n * @param init The logic to connect the liftedSource to the subscriber at the moment of subscription.\n */\nexport function operate<T, R>(\n init: (liftedSource: Observable<T>, subscriber: Subscriber<R>) => (() => void) | void\n): OperatorFunction<T, R> {\n return (source: Observable<T>) => {\n if (hasLift(source)) {\n return source.lift(function (this: Subscriber<R>, liftedSource: Observable<T>) {\n try {\n return init(liftedSource, this);\n } catch (err) {\n this.error(err);\n }\n });\n }\n throw new TypeError('Unable to lift unknown Observable type');\n };\n}\n", "import { Subscriber } from '../Subscriber';\n\n/**\n * Creates an instance of an `OperatorSubscriber`.\n * @param destination The downstream subscriber.\n * @param onNext Handles next values, only called if this subscriber is not stopped or closed. Any\n * error that occurs in this function is caught and sent to the `error` method of this subscriber.\n * @param onError Handles errors from the subscription, any errors that occur in this handler are caught\n * and sent to the `destination` error handler.\n * @param onComplete Handles completion notification from the subscription. Any errors that occur in\n * this handler are sent to the `destination` error handler.\n * @param onFinalize Additional teardown logic here. This will only be called on teardown if the\n * subscriber itself is not already closed. This is called after all other teardown logic is executed.\n */\nexport function createOperatorSubscriber<T>(\n destination: Subscriber<any>,\n onNext?: (value: T) => void,\n onComplete?: () => void,\n onError?: (err: any) => void,\n onFinalize?: () => void\n): Subscriber<T> {\n return new OperatorSubscriber(destination, onNext, onComplete, onError, onFinalize);\n}\n\n/**\n * A generic helper for allowing operators to be created with a Subscriber and\n * use closures to capture necessary state from the operator function itself.\n */\nexport class OperatorSubscriber<T> extends Subscriber<T> {\n /**\n * Creates an instance of an `OperatorSubscriber`.\n * @param destination The downstream subscriber.\n * @param onNext Handles next values, only called if this subscriber is not stopped or closed. Any\n * error that occurs in this function is caught and sent to the `error` method of this subscriber.\n * @param onError Handles errors from the subscription, any errors that occur in this handler are caught\n * and sent to the `destination` error handler.\n * @param onComplete Handles completion notification from the subscription. Any errors that occur in\n * this handler are sent to the `destination` error handler.\n * @param onFinalize Additional finalization logic here. This will only be called on finalization if the\n * subscriber itself is not already closed. This is called after all other finalization logic is executed.\n * @param shouldUnsubscribe An optional check to see if an unsubscribe call should truly unsubscribe.\n * NOTE: This currently **ONLY** exists to support the strange behavior of {@link groupBy}, where unsubscription\n * of the resulting observable does not actually disconnect from the source if there are active subscriptions\n * to any grouped observable. 
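`operate` and `createOperatorSubscriber` are internal helpers, but the operator shape they produce can be sketched with the public API only, as the `lift` deprecation notice recommends (the `double` operator here is hypothetical):

```ts
import { Observable, OperatorFunction } from 'rxjs';

// A map-like operator in the "return a new Observable" style; `operate`
// does the equivalent wiring through `lift` inside the library.
function double(): OperatorFunction<number, number> {
  return (source) =>
    new Observable<number>((subscriber) =>
      source.subscribe({
        next: (value) => subscriber.next(value * 2),
        error: (err) => subscriber.error(err),
        complete: () => subscriber.complete(),
      })
    );
}

// of(1, 2, 3).pipe(double()) emits 2, 4, 6.
```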
(DO NOT EXPOSE OR USE EXTERNALLY!!!)\n */\n constructor(\n destination: Subscriber,\n onNext?: (value: T) => void,\n onComplete?: () => void,\n onError?: (err: any) => void,\n private onFinalize?: () => void,\n private shouldUnsubscribe?: () => boolean\n ) {\n // It's important - for performance reasons - that all of this class's\n // members are initialized and that they are always initialized in the same\n // order. This will ensure that all OperatorSubscriber instances have the\n // same hidden class in V8. This, in turn, will help keep the number of\n // hidden classes involved in property accesses within the base class as\n // low as possible. If the number of hidden classes involved exceeds four,\n // the property accesses will become megamorphic and performance penalties\n // will be incurred - i.e. inline caches won't be used.\n //\n // The reasons for ensuring all instances have the same hidden class are\n // further discussed in this blog post from Benedikt Meurer:\n // https://benediktmeurer.de/2018/03/23/impact-of-polymorphism-on-component-based-frameworks-like-react/\n super(destination);\n this._next = onNext\n ? function (this: OperatorSubscriber, value: T) {\n try {\n onNext(value);\n } catch (err) {\n destination.error(err);\n }\n }\n : super._next;\n this._error = onError\n ? function (this: OperatorSubscriber, err: any) {\n try {\n onError(err);\n } catch (err) {\n // Send any errors that occur down stream.\n destination.error(err);\n } finally {\n // Ensure finalization.\n this.unsubscribe();\n }\n }\n : super._error;\n this._complete = onComplete\n ? function (this: OperatorSubscriber) {\n try {\n onComplete();\n } catch (err) {\n // Send any errors that occur down stream.\n destination.error(err);\n } finally {\n // Ensure finalization.\n this.unsubscribe();\n }\n }\n : super._complete;\n }\n\n unsubscribe() {\n if (!this.shouldUnsubscribe || this.shouldUnsubscribe()) {\n const { closed } = this;\n super.unsubscribe();\n // Execute additional teardown if we have any and we didn't already do so.\n !closed && this.onFinalize?.();\n }\n }\n}\n", "import { Subscription } from '../Subscription';\n\ninterface AnimationFrameProvider {\n schedule(callback: FrameRequestCallback): Subscription;\n requestAnimationFrame: typeof requestAnimationFrame;\n cancelAnimationFrame: typeof cancelAnimationFrame;\n delegate:\n | {\n requestAnimationFrame: typeof requestAnimationFrame;\n cancelAnimationFrame: typeof cancelAnimationFrame;\n }\n | undefined;\n}\n\nexport const animationFrameProvider: AnimationFrameProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n schedule(callback) {\n let request = requestAnimationFrame;\n let cancel: typeof cancelAnimationFrame | undefined = cancelAnimationFrame;\n const { delegate } = animationFrameProvider;\n if (delegate) {\n request = delegate.requestAnimationFrame;\n cancel = delegate.cancelAnimationFrame;\n }\n const handle = request((timestamp) => {\n // Clear the cancel function. 
The request has been fulfilled, so\n // attempting to cancel the request upon unsubscription would be\n // pointless.\n cancel = undefined;\n callback(timestamp);\n });\n return new Subscription(() => cancel?.(handle));\n },\n requestAnimationFrame(...args) {\n const { delegate } = animationFrameProvider;\n return (delegate?.requestAnimationFrame || requestAnimationFrame)(...args);\n },\n cancelAnimationFrame(...args) {\n const { delegate } = animationFrameProvider;\n return (delegate?.cancelAnimationFrame || cancelAnimationFrame)(...args);\n },\n delegate: undefined,\n};\n", "import { createErrorClass } from './createErrorClass';\n\nexport interface ObjectUnsubscribedError extends Error {}\n\nexport interface ObjectUnsubscribedErrorCtor {\n /**\n * @deprecated Internal implementation detail. Do not construct error instances.\n * Cannot be tagged as internal: https://github.com/ReactiveX/rxjs/issues/6269\n */\n new (): ObjectUnsubscribedError;\n}\n\n/**\n * An error thrown when an action is invalid because the object has been\n * unsubscribed.\n *\n * @see {@link Subject}\n * @see {@link BehaviorSubject}\n *\n * @class ObjectUnsubscribedError\n */\nexport const ObjectUnsubscribedError: ObjectUnsubscribedErrorCtor = createErrorClass(\n (_super) =>\n function ObjectUnsubscribedErrorImpl(this: any) {\n _super(this);\n this.name = 'ObjectUnsubscribedError';\n this.message = 'object unsubscribed';\n }\n);\n", "import { Operator } from './Operator';\nimport { Observable } from './Observable';\nimport { Subscriber } from './Subscriber';\nimport { Subscription, EMPTY_SUBSCRIPTION } from './Subscription';\nimport { Observer, SubscriptionLike, TeardownLogic } from './types';\nimport { ObjectUnsubscribedError } from './util/ObjectUnsubscribedError';\nimport { arrRemove } from './util/arrRemove';\nimport { errorContext } from './util/errorContext';\n\n/**\n * A Subject is a special type of Observable that allows values to be\n * multicasted to many Observers. Subjects are like EventEmitters.\n *\n * Every Subject is an Observable and an Observer. You can subscribe to a\n * Subject, and you can call next to feed values as well as error and complete.\n */\nexport class Subject extends Observable implements SubscriptionLike {\n closed = false;\n\n private currentObservers: Observer[] | null = null;\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n observers: Observer[] = [];\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n isStopped = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n hasError = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n thrownError: any = null;\n\n /**\n * Creates a \"subject\" by basically gluing an observer to an observable.\n *\n * @nocollapse\n * @deprecated Recommended you do not use. Will be removed at some point in the future. Plans for replacement still under discussion.\n */\n static create: (...args: any[]) => any = (destination: Observer, source: Observable): AnonymousSubject => {\n return new AnonymousSubject(destination, source);\n };\n\n constructor() {\n // NOTE: This must be here to obscure Observable's constructor.\n super();\n }\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. 
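A short sketch of the multicast behavior described above:

```ts
import { Subject } from 'rxjs';

const subject = new Subject<number>();
subject.subscribe((v) => console.log('A got', v));
subject.subscribe((v) => console.log('B got', v));

subject.next(1); // both observers receive 1
subject.complete();

// Late subscribers to a stopped Subject are finalized immediately
// (see _checkFinalizedStatuses below).
subject.subscribe({ complete: () => console.log('late complete') });
```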

What are you waiting for?

+

Get hands-on experience with Polkadot's in-depth tutorials. Covering everything from blockchain basics to advanced skills, our tutorials help you build expertise and start creating with confidence.

diff --git a/infrastructure/index.html b/infrastructure/index.html
new file mode 100644
[auto-generated section index page titled "Index | Polkadot Developer Docs"; page chrome omitted]
diff --git a/infrastructure/running-a-node/index.html b/infrastructure/running-a-node/index.html
new file mode 100644
[auto-generated section index page titled "Index | Polkadot Developer Docs"; page chrome omitted]
diff --git a/infrastructure/running-a-node/setup-bootnode/index.html b/infrastructure/running-a-node/setup-bootnode/index.html
new file mode 100644
[new page titled "Set Up a Bootnode | Polkadot Developer Docs"; page chrome omitted, content follows]

Set Up a Bootnode

+

Introduction

+

Bootnodes are essential for helping blockchain nodes discover peers and join the network. When a node starts, it needs to find other nodes, and bootnodes provide an initial point of contact. Once connected, a node can expand its peer connections and play its role in the network, like participating as a validator.

+

This guide will walk you through setting up a Polkadot bootnode, configuring P2P, WebSocket (WS), and secure WebSocket (WSS) connections, and managing network keys. You'll also learn how to test your bootnode to ensure it is running correctly and accessible to other nodes.

+

Prerequisites

+

Before you start, you need to have the following prerequisites:

+
    +
  • Verify a working Polkadot (polkadot) binary is available on your machine
  • +
  • Ensure you have nginx installed. Please refer to the Installation Guide for help with installation if needed
  • +
  • A VPS or other dedicated server setup
  • +
+

Accessing the Bootnode

+

Bootnodes must be accessible through three key channels to connect with other nodes in the network:

+
    +
  • +

    P2P - a direct peer-to-peer connection, set by:

    +
    --listen-addr /ip4/0.0.0.0/tcp/INSERT_PORT
    +
    +
    +

    Note

    +

    This is not enabled by default on non-validator nodes like archive RPC nodes.

    +
    +
  • +
  • +

    P2P/WS - a WebSocket (WS) connection, also configured via --listen-addr

    +
  • +
  • P2P/WSS - a secure WebSocket (WSS) connection using SSL, often required for light clients. An SSL proxy is needed, as the node itself cannot handle certificates
  • +
+

Node Key

+

A node key is the ED25519 key used by libp2p to assign your node an identity or peer ID. Generating a known node key for a bootnode is crucial, as it gives you a consistent key that can be placed in chain specifications as a known, reliable bootnode.

+

Starting a node creates its node key in the chains/INSERT_CHAIN/network/secret_ed25519 file.

+

You can create a node key using:

+
polkadot key generate-node-key
+
+

This key can then be supplied on the startup command line, for example via the node's --node-key or --node-key-file flags.

+

It is imperative that you back up the node key. If it is included in the polkadot binary, it is hardcoded into the binary, and the binary must be recompiled to change the key.

+

Running the Bootnode

+

A bootnode can be run as follows:

+
polkadot --chain polkadot \
+--name dot-bootnode \
+--listen-addr /ip4/0.0.0.0/tcp/30310 \
+--listen-addr /ip4/0.0.0.0/tcp/30311/ws
+
+

This assigns P2P to port 30310 and P2P/WS to port 30311. For the P2P/WSS port, a proxy must be set up with a DNS name and a corresponding certificate. The following example is for the popular nginx server and enables P2P/WSS on port 30312 by proxying to the P2P/WS port 30311:

+
/etc/nginx/sites-enabled/dot-bootnode
server {
+       listen       30312 ssl http2 default_server;
+       server_name  dot-bootnode.stakeworld.io;
+       root         /var/www/html;
+
+       ssl_certificate "INSERT_YOUR_CERT";
+       ssl_certificate_key "INSERT_YOUR_KEY";
+
+       location / {
+         proxy_buffers 16 4k;
+         proxy_buffer_size 2k;
+         proxy_pass http://localhost:30311;
+         proxy_http_version 1.1;
+         proxy_set_header Upgrade $http_upgrade;
+         proxy_set_header Connection "Upgrade";
+         proxy_set_header Host $host;
+   }
+
+}
+
+

Testing Bootnode Connection

+

If the preceding node is running with the DNS name dot-bootnode.stakeworld.io, behind a proxy with a valid certificate, and with the node ID 12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg, then the following commands should produce output reporting syncing 1 peers.

+
+

Tip

+

You can append -lsub-libp2p=trace to the command to enable libp2p trace logging for debugging purposes.

+
+

P2P

+
polkadot --chain polkadot \
+--base-path /tmp/node \
+--name "Bootnode testnode" \
+--reserved-only \
+--reserved-nodes "/dns/dot-bootnode.stakeworld.io/tcp/30310/p2p/12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg" \
+--no-hardware-benchmarks
+
+

P2P/WS

+
polkadot --chain polkadot \
+--base-path /tmp/node \
+--name "Bootnode testnode" \
+--reserved-only \
+--reserved-nodes "/dns/dot-bootnode.stakeworld.io/tcp/30311/ws/p2p/12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg" \
+--no-hardware-benchmarks
+
+

P2P/WSS

+
polkadot --chain polkadot \
+--base-path /tmp/node \
+--name "Bootnode testnode" \
+--reserved-only \
+--reserved-nodes "/dns/dot-bootnode.stakeworld.io/tcp/30312/wss/p2p/12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg" \
+--no-hardware-benchmarks
+
+
diff --git a/infrastructure/running-a-node/setup-full-node/index.html b/infrastructure/running-a-node/setup-full-node/index.html
new file mode 100644
[new page titled "Set Up a Full Node | Polkadot Developer Docs"; only the page title appears in this extraction]
diff --git a/infrastructure/running-a-validator/index.html b/infrastructure/running-a-validator/index.html
new file mode 100644
[auto-generated section landing page titled "Running a Validator | Polkadot Developer Docs"; page chrome omitted]
diff --git a/infrastructure/running-a-validator/onboarding-and-offboarding/index.html b/infrastructure/running-a-validator/onboarding-and-offboarding/index.html
new file mode 100644
[auto-generated section index page titled "Index | Polkadot Developer Docs"; page chrome omitted]
diff --git a/infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/index.html b/infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/index.html
new file mode 100644
[new page titled "Stop Validating | Polkadot Developer Docs"; page chrome omitted, content follows]

Stop Validating

+

Introduction

+

If you're ready to stop validating on Polkadot, there are essential steps to ensure a smooth transition while protecting your funds and account integrity. Whether you're taking a break for maintenance or unbonding entirely, you'll need to chill your validator, purge session keys, and unbond your tokens. This guide explains how to use Polkadot's tools and extrinsics to safely withdraw from validation activities, safeguarding your account's future usability.

+

Pause Versus Stop

+

If you wish to remain a validator or nominator (for example, stopping for planned downtime or server maintenance), submitting the chill extrinsic in the staking pallet should suffice. Additional steps are only needed to unbond funds or reap an account.

+

The following are steps to ensure a smooth stop to validation:

+
    +
  • Chill the validator
  • +
  • Purge validator session keys
  • +
  • Unbond your tokens
  • +
+

Chill Validator

+

When stepping back from validating, the first step is to chill your validator status. This action stops your validator from being considered for the next era without fully unbonding your tokens, which can be useful for temporary pauses like maintenance or planned downtime.

+

Use the staking.chill extrinsic to initiate this. For more guidance on chilling your node, refer to the Pause Validating guide. You may also claim any pending staking rewards at this point.

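If you prefer to script this step rather than use the Apps UI, the same call can be submitted with the @polkadot/api library. The sketch below is illustrative only; the RPC endpoint and the mnemonic are placeholders you must replace:

import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function chill() {
  // Placeholder endpoint; point this at your own node or a trusted RPC provider.
  const api = await ApiPromise.create({ provider: new WsProvider('wss://rpc.polkadot.io') });

  // Sign with the staking proxy (or controller) account that manages the bond.
  const signer = new Keyring({ type: 'sr25519' }).addFromUri('INSERT_YOUR_MNEMONIC');

  // staking.chill takes no arguments; it declares the intent to stop
  // validating (or nominating) starting from the next era.
  const unsub = await api.tx.staking.chill().signAndSend(signer, ({ status }) => {
    if (status.isInBlock) {
      console.log(`chill included in block ${status.asInBlock.toHex()}`);
      unsub();
    }
  });
}

chill().catch(console.error);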
+

Purge Validator Session Keys

+

Purging validator session keys is a critical step in removing the association between your validator account and its session keys, which ensures that your account is fully disassociated from validator activities. The session.purgeKeys extrinsic removes the reference to your session keys from the stash or staking proxy account that originally set them.

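Scripted, the purge looks like the following sketch (same placeholder endpoint and mnemonic as in the chill example; the signer must be the account that originally set the keys):

import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function purgeKeys() {
  const api = await ApiPromise.create({ provider: new WsProvider('wss://rpc.polkadot.io') });

  // Must be the same stash or staking proxy account that called session.setKeys.
  const signer = new Keyring({ type: 'sr25519' }).addFromUri('INSERT_YOUR_MNEMONIC');

  // session.purgeKeys takes no arguments; it removes the on-chain reference
  // to this account's session keys.
  await api.tx.session.purgeKeys().signAndSend(signer);
}

purgeKeys().catch(console.error);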
+

Here are a couple of important things to know about purging keys:

+
    +
  • Account used to purge keys - always purge keys from the same account that you originally used to set them, usually your stash or staking proxy account. Using a different account may leave an unremovable reference to the session keys on the original account, preventing its reaping
  • +
  • Account reaping issue - failing to purge keys will prevent you from reaping (fully deleting) your stash account. If you attempt to transfer tokens without purging, you'll need to rebond, purge the session keys, unbond again, and wait through the unbonding period before any transfer
  • +
+

Unbond Your Tokens

+

After chilling your node and purging session keys, the final step is to unbond your staked tokens. This action removes them from staking and begins the unbonding period (usually 28 days for Polkadot and seven days for Kusama), after which the tokens will be transferable.

+

To unbond tokens, go to Network > Staking > Account Actions on Polkadot.js Apps. Select your stash account, click on the dropdown menu, and choose Unbond Funds. Alternatively, you can use the staking.unbond extrinsic if you handle this via a staking proxy account.

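For a scripted unbond, a sketch along these lines can be used. It assumes the signer is the ledger's controller (on current setups the stash and controller are typically the same account); the endpoint and mnemonic are placeholders:

import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function unbondAll() {
  const api = await ApiPromise.create({ provider: new WsProvider('wss://rpc.polkadot.io') });
  const signer = new Keyring({ type: 'sr25519' }).addFromUri('INSERT_YOUR_MNEMONIC');

  // Read the staking ledger (throws if this account has no active bond) and
  // unbond the full active amount.
  const ledger = (await api.query.staking.ledger(signer.address)).unwrap();
  await api.tx.staking.unbond(ledger.active).signAndSend(signer);

  // After the unbonding period has elapsed, the funds still require an
  // explicit withdrawal before they become transferable:
  // await api.tx.staking.withdrawUnbonded(0).signAndSend(signer);
}

unbondAll().catch(console.error);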
+

Once the unbonding period is complete, your tokens will be available for use in transactions or transfers outside of staking.

+
diff --git a/infrastructure/running-a-validator/operational-tasks/index.html b/infrastructure/running-a-validator/operational-tasks/index.html
new file mode 100644
[auto-generated section index page titled "Index | Polkadot Developer Docs"; page chrome omitted]
diff --git a/infrastructure/running-a-validator/operational-tasks/pause-validating/index.html b/infrastructure/running-a-validator/operational-tasks/pause-validating/index.html
new file mode 100644
[new page titled "Pause Validating | Polkadot Developer Docs"; page chrome omitted, content follows]

Pause Validating

+

Introduction

+

If you need to temporarily stop participating in Polkadot staking activities without fully unbonding your funds, chilling your account allows you to do so efficiently. Chilling removes your node from active validation or nomination in the next era while keeping your funds bonded, making it ideal for planned downtimes or temporary pauses.

+

This guide covers the steps for chilling as a validator or nominator, using the chill and chillOther extrinsics, and how these affect your staking status and nominations.

+

Chilling Your Node

+

If you need to temporarily step back from staking without unbonding your funds, you can "chill" your account. Chilling pauses your active staking participation, setting your account to inactive in the next era while keeping your funds bonded.

+

To chill your account, go to the Network > Staking > Account Actions page on Polkadot.js Apps, and select Stop. Alternatively, you can call the chill extrinsic in the Staking pallet.

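A minimal @polkadot/api sketch of the same operation, with a follow-up storage read to confirm the result (the endpoint and mnemonic are placeholders):

import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function pauseValidating() {
  const api = await ApiPromise.create({ provider: new WsProvider('wss://rpc.polkadot.io') });
  const signer = new Keyring({ type: 'sr25519' }).addFromUri('INSERT_YOUR_MNEMONIC');

  const unsub = await api.tx.staking.chill().signAndSend(signer, async ({ status }) => {
    if (status.isInBlock) {
      unsub();
      // Once chilled, staking.validators(stash) returns the default (empty)
      // preferences, confirming the stash is no longer an active validator choice.
      const prefs = await api.query.staking.validators(signer.address);
      console.log('validator prefs after chill:', prefs.toHuman());
      await api.disconnect();
    }
  });
}

pauseValidating().catch(console.error);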
+

Staking Election Timing Considerations

+

When a node actively participates in staking but then chills, it will continue contributing for the remainder of the current era. However, its eligibility for the next election depends on the chill status at the start of the new era:

+
    +
  • Chilled during previous era - will not participate in the current era election and will remain inactive until reactivated
  • +
  • Chilled during current era - will not be selected for the next era's election
  • +
  • Chilled after current era - may be selected if it was active during the previous era and is now chilled
  • +
+

Chilling as a Nominator

+

When you choose to chill as a nominator, your active nominations are reset. Upon re-entering the nominating process, you must manually reselect validators to support. Depending on your preferences, these can be the same validators as before or a new set. Remember that your previous nominations won’t be saved or automatically reactivated after chilling.

+

While chilled, your nominator account remains bonded, preserving your staked funds without requiring a full unbonding process. When you’re ready to start nominating again, you can issue a new nomination call to activate your bond with a fresh set of validators. This process bypasses the need for re-bonding, allowing you to maintain your stake while adjusting your involvement in active staking.

+

Chilling as a Validator

+

When you chill as a validator, your active validator status is paused. Although your nominators remain bonded to you, the validator bond will no longer appear as an active choice for new or revised nominations until reactivated. Any existing nominators who take no action will still have their stake linked to the validator, meaning they don’t need to reselect the validator upon reactivation. However, if nominators adjust their stakes while the validator is chilled, they will not be able to nominate the chilled validator until it resumes activity.

+

Upon reactivating as a validator, you must also reconfigure your validator preferences, such as commission rate and other parameters. These can be set to match your previous configuration or updated as desired. This step is essential for rejoining the active validator set and regaining eligibility for nominations.

+

Chill Other

+

Historical constraints in the runtime prevented unlimited nominators and validators from being supported. These constraints created a need for checks to keep the size of the staking system manageable. One of these checks is the chillOther extrinsic, which allowed users to chill accounts that no longer met standards, such as minimum staking requirements, set through on-chain governance.

+

This control mechanism included a ChillThreshold, which was structured to define how close to the maximum number of nominators or validators the staking system would be allowed to get before users could start chilling one another. With the passage of Referendum #90, the value for maxNominatorCount on Polkadot was set to None, effectively removing the limit on how many nominators and validators can participate. This means the ChillThreshold will never be met; thus, chillOther no longer has any effect.

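You can verify this state on-chain yourself. The sketch below reads both storage items with @polkadot/api; the item names (staking.maxNominatorsCount, staking.chillThreshold) are as exposed by recent runtimes, and the endpoint is a placeholder:

import { ApiPromise, WsProvider } from '@polkadot/api';

async function checkChillLimits() {
  const api = await ApiPromise.create({ provider: new WsProvider('wss://rpc.polkadot.io') });

  // Both values are Option<_> in storage; None means the limit is not set.
  const maxNominators = await api.query.staking.maxNominatorsCount();
  const threshold = await api.query.staking.chillThreshold();

  console.log('maxNominatorsCount:', maxNominators.isSome ? maxNominators.toString() : 'None');
  console.log('chillThreshold:', threshold.isSome ? threshold.toString() : 'None');

  await api.disconnect();
}

checkChillLimits().catch(console.error);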
+
diff --git a/infrastructure/running-a-validator/operational-tasks/upgrade-your-node/index.html b/infrastructure/running-a-validator/operational-tasks/upgrade-your-node/index.html
new file mode 100644
[new page titled "Upgrade a Validator Node | Polkadot Developer Docs"; page chrome omitted, content follows]

Upgrade a Validator Node

+

Introduction

+

Upgrading a Polkadot validator node is essential for staying current with network updates and maintaining optimal performance. This guide covers routine and extended maintenance scenarios, including software upgrades and major server changes. Following these steps, you can manage session keys and transition smoothly between servers without risking downtime, slashing, or network disruptions. The process requires strategic planning, especially if you need to perform long-lead maintenance, ensuring your validator remains active and compliant.

+

This guide shows validators how to seamlessly substitute an active validator server during maintenance operations. The process can take several hours, so ensure you understand the instructions first and plan accordingly.

+

Prerequisites

+

Before beginning the upgrade process for your validator node, ensure the following:

+
    +
  • You have a fully functional validator setup with all required binaries installed. See Set Up a Validator and Validator Requirements for additional guidance
  • +
  • Your VPS infrastructure has enough capacity to run a secondary validator instance temporarily for the upgrade process
  • +
+

Session Keys

+

Session keys are used to sign validator operations and establish a connection between your validator node and your staking proxy account. These keys are stored in the client, and any change to them requires a waiting period. Specifically, if you modify your session keys, the change will take effect only after the current session is completed and two additional sessions have passed.

+

Remembering this delayed effect when planning upgrades is crucial to ensure that your validator continues to function correctly and avoids interruptions. To learn more about session keys and their importance, visit the Keys section.

+

Keystore

+

Your validator server's keystore folder holds the private keys needed for signing network-level transactions. It is important not to duplicate or transfer this folder between validator instances. Doing so could result in multiple validators signing with the duplicate keys, leading to severe consequences such as equivocation slashing. Instead, always generate new session keys for each validator instance.

+

The default path to the keystore is as follows:

+
/home/polkadot/.local/share/polkadot/chains/<chain>/keystore
+
+

Taking care to manage your keys securely ensures that your validator operates safely and without the risk of slashing penalties.

+

Upgrade Using Backup Validator

+

The following instructions outline how to temporarily switch between two validator nodes. The original active validator is referred to as Validator A and the backup node used for maintenance purposes as Validator B.

+

Session N

+
    +
  1. Start Validator B - launch a secondary node and wait until it is fully synced with the network. Once synced, start it with the --validator flag. This node will now act as Validator B
  2. +
  3. Generate session keys - create new session keys specifically for Validator B
  4. +
  5. Submit the set_keys extrinsic - use your staking proxy account to submit a set_keys extrinsic, linking the session keys for Validator B to your staking setup (see the sketch after this list)
  6. +
  7. Record the session - make a note of the session in which you executed this extrinsic
  8. +
  9. Wait for session changes - allow the current session to end and then wait for two additional full sessions for the new keys to take effect
  10. +
+
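Steps 2 and 3 can also be scripted. The following sketch assumes Validator B's RPC endpoint is reachable locally on port 9944 and uses a placeholder mnemonic for the staking proxy account:

import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function rotateAndSetKeys() {
  // Connect to Validator B's local RPC. rotateKeys must be called on the node
  // whose keystore should hold the new session keys.
  const api = await ApiPromise.create({ provider: new WsProvider('ws://127.0.0.1:9944') });

  // Step 2: generate fresh session keys inside Validator B's keystore.
  const newKeys = await api.rpc.author.rotateKeys();

  // Step 3: link the new keys on-chain from the staking proxy account.
  const signer = new Keyring({ type: 'sr25519' }).addFromUri('INSERT_STAKING_PROXY_MNEMONIC');
  await api.tx.session.setKeys(newKeys, '0x').signAndSend(signer);
}

rotateAndSetKeys().catch(console.error);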
+

Keep Validator A running

+

It is crucial to keep Validator A operational during this entire waiting period. Since set_keys does not take effect immediately, turning off Validator A too early may result in chilling or even slashing.

+
+

Session N+3

+

At this stage, Validator B becomes your active validator. You can now safely perform any maintenance tasks on Validator A.

+

Complete the following steps when you are ready to bring Validator A back online:

+
    +
  1. Start Validator A - launch Validator A, sync the blockchain database, and ensure it is running with the --validator flag
  2. +
  3. Generate new session keys for Validator A - create fresh session keys for Validator A
  4. +
  5. Submit the set_keys extrinsic - using your staking proxy account, submit a set_keys extrinsic with the new Validator A session keys
  6. +
  7. Record the session - again, make a note of the session in which you executed this extrinsic
  8. +
+

Keep Validator B active until the session in which you submitted the set_keys extrinsic has ended and two additional full sessions have passed. Once Validator A has successfully taken over, you can safely stop Validator B. This process helps ensure a smooth handoff between nodes and minimizes the risk of downtime or penalties. Verify the transition by checking for finalized blocks in the new session. The logs should indicate the successful change, similar to the example below:

+
INSERT_COMMAND
2019-10-28 21:44:13 Applying authority set change scheduled at block #450092
2019-10-28 21:44:13 Applying GRANDPA set change to new set with 20 authorities
+
diff --git a/infrastructure/staking-mechanics/index.html b/infrastructure/staking-mechanics/index.html
new file mode 100644
[auto-generated section index page titled "Index | Polkadot Developer Docs"; page chrome omitted]
diff --git a/infrastructure/staking-mechanics/offenses-and-slashes/index.html b/infrastructure/staking-mechanics/offenses-and-slashes/index.html
new file mode 100644
[new page titled "Offenses and Slashes | Polkadot Developer Docs"; page chrome omitted, content follows]

Offenses and Slashes

+

Introduction

+

In Polkadot's Nominated Proof of Stake (NPoS) system, validator misconduct is deterred through a combination of slashing, disabling, and reputation penalties. Validators and nominators who stake tokens face consequences for validator misbehavior, which range from token slashes to restrictions on network participation.

+

This page outlines the types of offenses recognized by Polkadot, including block equivocations and invalid votes, as well as the corresponding penalties. While some parachains may implement additional custom slashing mechanisms, this guide focuses on the offenses tied to staking within the Polkadot ecosystem.

+

Offenses

+

Polkadot is a public permissionless network. As such, it has a mechanism to disincentivize offenses and incentivize good behavior. You can review the parachain protocol to better understand the terminology used to describe offenses. Polkadot validator offenses fall into two categories: invalid votes and equivocations.

+

Invalid Votes

+

A validator will be penalized for inappropriate voting activity during the block inclusion and approval processes. The invalid voting-related offenses are as follows:

+
    +
  • Backing an invalid block - a para-validator backs an invalid block for inclusion in a fork of the relay chain
  • +
  • ForInvalid vote - when acting as a secondary checker, the validator votes in favor of an invalid block
  • +
  • AgainstValid vote - when acting as a secondary checker, the validator votes against a valid block. This type of vote wastes network resources required to resolve the disparate votes and resulting dispute
  • +
+

Equivocations

+

Equivocation occurs when a validator produces statements that conflict with each other when producing blocks or voting. Unintentional equivocations usually occur when duplicate signing keys reside on the validator host. If keys are never duplicated, the probability of an honest equivocation slash decreases to near zero. The equivocation-related offenses are as follows:

+
    +
  • Equivocation - the validator produces two or more of the same block or vote
      +
    • GRANDPA and BEEFY equivocation - the validator signs two or more votes in the same round on different chains
    • +
    • BABE equivocation - the validator produces two or more blocks on the relay chain in the same time slot
    • +
    +
  • +
  • Double seconded equivocation - the validator attempts to second, or back, more than one block in the same round
  • +
  • Seconded and valid equivocation - the validator seconds, or backs, a block and then attempts to hide their role as the responsible backer by later placing a standard validation vote
  • +
+

Penalties

+

On Polkadot, offenses to the network incur different penalties depending on severity. There are three main penalties: slashing, disabling, and reputation changes.

+

Slashing

+

Validators acting maliciously in the network may be subject to slashing if they commit a qualifying offense. When a validator is slashed, they and their nominators lose a percentage of their staked DOT or KSM, from as little as 0.01% up to 100%, based on the severity of the offense. Nominators are evaluated for slashing against their active validations at any given time. Validator nodes are evaluated as discrete entities, meaning an operator can't attempt to mitigate the offense on another node they operate in order to avoid a slash.

+

Any slashed DOT or KSM will be added to the Treasury rather than burned or distributed as rewards. Moving slashed funds to the Treasury allows tokens to be quickly moved away from malicious validators while maintaining the ability to revert faulty slashes when needed.

+
+

Multiple active nominations

+

A nominator with a very large bond may nominate several validators in a single era. In this case, a slash is proportionate to the amount staked to the offending validator. Stake allocation and validator activation are controlled by the Phragmén algorithm.

+
+

A validator slash creates an unapplied state transition. You can view pending slashes on Polkadot.js Apps. The UI will display the slash per validator, the affected nominators, and the slash amounts. The unapplied state includes a 27-day grace period during which a governance proposal can be made to reverse the slash. Once this grace period expires, the slash is applied.

+
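As a hedged sketch of how this could be inspected programmatically, the staking pallet exposes an unappliedSlashes storage item keyed by era. The `api` instance and the era number below are assumptions; run inside an async context:

```js
// Sketch: list pending (unapplied) slashes for a given era.
// Assumes `api` is a connected ApiPromise instance; the era is a placeholder.
const era = 1234;
const pending = await api.query.staking.unappliedSlashes(era);
pending.forEach((slash) => {
  console.log(
    `validator ${slash.validator.toString()}, own slash ${slash.own.toString()},`,
    `nominators affected: ${slash.others.length}`
  );
});
```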

Equivocation Slash

+

The Web3 Foundation's Slashing mechanisms page provides guidelines for evaluating the security threat level of different offenses and determining penalties proportionate to the threat level of the offense. Offenses that require coordination between validators or impose extensive computational costs on the system typically call for harsher penalties than those that are more likely unintentional than malicious. A description of potential offenses for each threat level and the corresponding penalties is as follows:

+
    +
  • Level 1 - honest misconduct such as isolated cases of unresponsiveness
      +
    • Penalty - validator can be kicked out or slashed up to 0.1% of stake in the validator slot
    • +
    +
  • +
  • Level 2 - misconduct that can occur honestly but is a sign of bad practices. Examples include repeated cases of unresponsiveness and isolated cases of equivocation
      +
    • Penalty - slash of up to 1% of stake in the validator slot
    • +
    +
  • +
  • Level 3 - misconduct that is likely intentional but of limited effect on the performance or security of the network. This level will typically include signs of coordination between validators. Examples include repeated cases of equivocation or isolated cases of unjustified voting on GRANDPA
      +
    • Penalty - reduction in networking reputation metrics, slash of up to 10% of stake in the validator slot
    • +
    +
  • +
  • Level 4 - misconduct that poses severe security or monetary risk to the system or mass collusion. Examples include signs of extensive coordination, creating a serious security risk to the system, or forcing the system to use extensive resources to counter the misconduct
      +
    • Penalty - slash of up to 100% of stake in the validator slot
    • +
    +
  • +
+

See the next section to understand how slash amounts for equivocations are calculated. If you want to know more details about slashing, please look at the research page on Slashing mechanisms.

+

Slash Calculation for Equivocation

+

The slashing penalty for GRANDPA, BABE, and BEEFY equivocations is calculated using the formula below, where x represents the number of offenders and n is the total number of validators in the active set:

+
min((3 * x / n)^2, 1)
+
+
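As a quick sanity check, the formula translates directly into code. This plain JavaScript helper is an illustration only; the values match the scenarios below:

```js
// Slash fraction for GRANDPA/BABE/BEEFY equivocations: min((3x/n)^2, 1),
// where x = offenders and n = validators in the active set.
function equivocationSlash(offenders, activeSet) {
  return Math.min(Math.pow((3 * offenders) / activeSet, 2), 1);
}

console.log(equivocationSlash(1, 100));  // 0.0009 -> 0.09% slash
console.log(equivocationSlash(5, 100));  // 0.0225 -> 2.25% slash
console.log(equivocationSlash(20, 100)); // 0.36   -> 36% slash
```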

The following scenarios demonstrate how the slash percentage grows quadratically with the number of offenders relative to the size of the validator pool:

+
    +
  • +

    Minor offense - assume 1 validator out of a 100-validator active set equivocates in a slot. A single validator committing an isolated offense is most likely a mistake rather than a malicious attack on the network. This offense results in a 0.09% slash to the stake in the validator slot

    +
    flowchart LR
    +N["Total Validators = 100"]
    +X["Offenders = 1"]
    +F["min(3 * 1 / 100)^2, 1) = 0.0009"]
    +G["0.09% slash of stake"]
    +
    +N --> F
    +X --> F
    +F --> G
    +
  • +
  • +

    Moderate offense - assume 5 validators out of a 100-validator active set equivocate in a slot. This is a slightly more serious event, as there may be some element of coordination involved. This offense results in a 2.25% slash to the stake in the validator slot

    +
    flowchart LR
    +N["Total Validators = 100"]
    +X["Offenders = 5"]
    +F["min((3 * 5 / 100)^2, 1) = 0.0225"]
    +G["2.25% slash of stake"]
    +
    +N --> F
    +X --> F
    +F --> G
    +
  • +
  • +

    Major offense - assume 20 validators out of a 100-validator active set equivocate in a slot. This is a major security threat, as it possibly represents a coordinated attack on the network. This offense results in a 36% slash, and all slashed validators will also be chilled

    flowchart LR
    +N["Total Validators = 100"]
    +X["Offenders = 20"]
    +F["min((3 * 20 / 100)^2, 1) = 0.36"]
    +G["36% slash of stake"]
    +
    +N --> F
    +X --> F
    +F --> G

    +
  • +
+

The examples above show the risk of nominating or running many validators in the active set. While rewards grow linearly (two validators will earn approximately twice the staking rewards of one), slashing grows quadratically: going from one equivocating validator to two results in a slash four times as large.

+

Validators may run their nodes on multiple machines to ensure they can still perform validation work if one of their nodes goes down. Still, validator operators should be cautious when setting these up. Equivocation is possible if they don't coordinate well in managing signing machines.

+

Best Practices to Avoid Slashing

+

Node operators are advised to follow these practices to ensure they obtain pristine binaries or source code and keep their nodes secure:

+
    +
  • Always download either source files or binaries from the official Parity repository
  • +
  • Verify the hash of downloaded files
  • +
  • Use the W3F secure validator setup or adhere to its principles
  • +
  • Ensure essential security items are covered: use a firewall, manage user access, and use SSH certificates
  • +
  • Avoid using your server as a general-purpose system. Hosting a validator on your workstation or one that hosts other services increases the risk of compromise
  • +
  • Avoid cloning servers (copying all contents) when migrating to new hardware. If an image is needed, create it before generating keys
  • +
  • High Availability (HA) systems are generally not recommended as equivocation may occur if concurrent operations happen—such as when a failed server restarts or two servers are falsely online simultaneously
  • +
  • Copying the keystore folder when moving a database between instances can cause equivocation. Even brief use of duplicated keystores can result in slashing
  • +
+

Below are some examples of small equivocations that happened in the past:

| Network | Era | Event Type | Details | Action Taken |
|---------|-----|------------|---------|--------------|
| Polkadot | 774 | Small Equivocation | The validator migrated servers and cloned the keystore folder. The on-chain event can be viewed on Subscan. | The validator didn't submit a request for the slash to be canceled. |
| Kusama | 3329 | Small Equivocation | The validator operated a test machine with cloned keys. The test machine was online at the same time as the primary, which resulted in a slash. Details can be found on Polkassembly. | The validator requested a slash cancellation, but the council declined. |
| Kusama | 3995 | Small Equivocation | The validator noticed several errors, after which the client crashed, and a slash was applied. The validator recorded all events and opened GitHub issues to allow for technical opinions to be shared. Details can be found on Polkassembly. | The validator requested to cancel the slash. The council approved the request as they believed the error wasn't operator-related. |
+

Slashing Across Eras

+

There are three main difficulties to account for with slashing in NPoS:

+
    +
  • A nominator can nominate multiple validators and be slashed as a result of actions taken by any of them
  • +
  • Until slashed, the stake is reused from era to era
  • +
  • Slashable offenses can be found after the fact and out of order
  • +
+

To balance this, the system applies only the maximum slash a participant can receive in a given time period rather than the sum. This ensures protection from excessive slashing.
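As a minimal illustration of this "max, not sum" rule (plain JavaScript, hypothetical slash fractions):

```js
// Three offenses discovered for the same era, possibly out of order.
const slashesThisEra = [0.01, 0.05, 0.02]; // fractions from separate offenses

// Only the maximum is applied, not the 0.08 sum.
const applied = Math.max(...slashesThisEra);
console.log(applied); // 0.05
```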

+

Disabling

+

The disabling mechanism is triggered when validators commit serious infractions, such as backing invalid blocks or engaging in equivocations. Disabling stops validators from performing specific actions after they have committed an offense. Disabling is further divided into:

+
    +
  • On-chain disabling - lasts for a whole era and stops validators from authoring blocks, backing, and initiating a dispute
  • +
  • Off-chain disabling - lasts for a session, is caused by losing a dispute, and stops validators from initiating a dispute
  • +
+

Off-chain disabling is always a lower priority than on-chain disabling. Off-chain disabling prioritizes disabling first backers and then approval checkers.

+
+

Note

+

The material in this guide reflects the changes introduced in Stage 2. For more details, refer to the State of Disabling issue on GitHub.

+
+

Reputation Changes

+

Some minor offenses, such as spamming, are only punished by networking reputation changes. Validators use a reputation metric when choosing which peers to connect with. The system adds reputation if a peer provides valuable data and behaves appropriately. If they provide faulty or spam data, the system reduces their reputation. If a validator loses enough reputation, their peers will temporarily close their channels to them. This helps in fighting against Denial of Service (DoS) attacks. Performing validator tasks under reduced reputation will be harder, resulting in lower validator rewards.

+

Penalties by Offense

+

Below, you can find a summary of penalties for specific offenses:

| Offense | Slash (%) | On-Chain Disabling | Off-Chain Disabling | Reputational Changes |
|---------|-----------|--------------------|---------------------|----------------------|
| Backing Invalid | 100% | Yes | Yes (High Priority) | No |
| ForInvalid Vote | - | No | Yes (Mid Priority) | No |
| AgainstValid Vote | - | No | Yes (Low Priority) | No |
| GRANDPA / BABE / BEEFY Equivocations | 0.01-100% | Yes | No | No |
| Seconded + Valid Equivocation | - | No | No | No |
| Double Seconded Equivocation | - | No | No | Yes |
+
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/infrastructure/staking-mechanics/rewards-payout/index.html b/infrastructure/staking-mechanics/rewards-payout/index.html new file mode 100644 index 00000000..0e76e5fa --- /dev/null +++ b/infrastructure/staking-mechanics/rewards-payout/index.html @@ -0,0 +1,3695 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Rewards Payout | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + + + + +
+ +
+ + + + +

Rewards Payout

+

Introduction

+

Understanding how rewards are distributed to validators and nominators is essential for network participants. In Polkadot and Kusama, validators earn rewards based on their era points, which are accrued through actions like block production and parachain validation.

+

This guide explains the payout scheme, factors influencing rewards, and how multiple validators affect returns. Validators can also share rewards with nominators, who contribute by staking behind them. By understanding the payout mechanics, validators can optimize their earnings and better engage with their nominators.

+

Era Points

+

The Polkadot ecosystem measures its reward cycles in a unit called an era. Kusama eras are approximately 6 hours long, and Polkadot eras are 24 hours. At the end of each era, validators are paid proportionally to the amount of era points they have collected. Era points are reward points earned for payable actions like:

+
    +
  • Issuing validity statements for parachain blocks
  • +
  • Producing a non-uncle block in the relay chain
  • +
  • Producing a reference to a previously unreferenced uncle block
  • +
  • Producing a referenced uncle block
  • +
+
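As a hedged sketch, era points can be read from the staking pallet's erasRewardPoints storage item. The public RPC endpoint below is an assumption, not a requirement:

```js
// Sketch: read era points for the active era via @polkadot/api.
const { ApiPromise, WsProvider } = require('@polkadot/api');

async function showEraPoints() {
  const api = await ApiPromise.create({ provider: new WsProvider('wss://rpc.polkadot.io') });

  // activeEra is an Option<ActiveEraInfo>; unwrap to get the era index.
  const activeEra = (await api.query.staking.activeEra()).unwrap().index;
  const points = await api.query.staking.erasRewardPoints(activeEra);

  console.log(`era ${activeEra.toString()} total points: ${points.total.toNumber()}`);
  for (const [stash, p] of points.individual.entries()) {
    console.log(`${stash.toString()}: ${p.toNumber()}`);
  }
}

showEraPoints().catch(console.error);
```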
+

Note

+

An uncle block is a relay chain block that is valid in every regard but has failed to become canonical. This can happen when two or more validators are block producers in a single slot, and the block produced by one validator reaches the next block producer before the others. The lagging blocks are called uncle blocks.

+
+

Payments occur at the end of every era.

+

Reward Variance

+

Rewards in Polkadot and Kusama staking systems can fluctuate due to differences in era points earned by para-validators and non-para-validators. Para-validators generally contribute more to the overall reward distribution due to their role in validating parachain blocks, thus influencing the variance in staking rewards.

+

To illustrate this relationship:

+
    +
  • Para-validator era points tend to have a higher impact on the expected value of staking rewards compared to non-para-validator points
  • +
  • The variance in staking rewards increases as the total number of validators grows relative to the number of para-validators
  • +
  • In simpler terms, when more validators are added to the active set without increasing the para-validator pool, the disparity in rewards between validators becomes more pronounced
  • +
+

However, despite this increased variance, rewards tend to even out over time due to the continuous rotation of para-validators across eras. The network's design ensures that over multiple eras, each validator has an equal opportunity to participate in para-validation, eventually leading to a balanced distribution of rewards.

+
+Probability in Staking Rewards +

This should only serve as a high-level overview of the probabilistic nature of staking rewards.

+

Let:

+
    +
  • pe = para-validator era points
  • +
  • ne = non-para-validator era points
  • +
  • EV = expected value of staking rewards
  • +
+

Then, EV(pe) has more influence on the EV than EV(ne).

+

Since EV(pe) carries more weight in the EV, the increase in variance against the EV becomes apparent between the different validator pools (i.e., validators in the active set versus those chosen to para-validate).

+

Also, let:

+
    +
  • v = the variance of staking rewards
  • +
  • p = number of para-validators
  • +
  • w = number of validators in the active set
  • +
  • e = era
  • +
+

Then, v ↑ if w ↑, as this reduces p : w, with respect to e.

+

Increased v is expected, and initially keeping p ↓ using the same para-validator set for all parachains ensures availability and voting. In addition, despite v ↑ on an e to e basis, over time, the amount of rewards each validator receives will equal out based on the continuous selection of para-validators.

+

There are plans to scale the active para-validation set in the future.

+
+

Payout Scheme

+

Validator rewards are distributed equally among all validators in the active set, regardless of the total stake behind each validator. However, individual payouts may differ based on the number of era points a validator has earned. Although factors like network connectivity can affect era points, well-performing validators should accumulate similar totals over time.

+

Validators can also receive tips from users, which incentivize them to include certain transactions in their blocks. Validators retain 100% of these tips.

+

Rewards are paid out in the network's native token (DOT for Polkadot and KSM for Kusama).

+

The following example illustrates a four-member validator set with their names, the amount each has staked, and how the payout of rewards is divided. This scenario assumes all validators earned the same amount of era points and no one received tips:

+
%%Payout, 4 val set, A-D are validators/stakes, E is payout%%
+
+block-beta
+    columns 1
+  block
+    A["Alice (18 DOT)"]
+    B["Bob (9 DOT)"]
+    C["Carol (8 DOT)"]
+    D["Dave (7 DOT)"]
+  end
+    space
+    E["Payout (8 DOT total)"]:1
+    E --"2 DOT"--> A
+    E --"2 DOT"--> B
+    E --"2 DOT"--> C
+    E --"2 DOT"--> D 
+

Note that this is different from most other Proof-of-Stake systems. As long as a validator is in the validator set, it will receive the same block reward as every other validator. Validator Alice, who had 18 DOT staked, received the same 2 DOT reward in this era as Dave, who had only 7 DOT staked.

+

Running Multiple Validators

+

Running multiple validators can offer a more favorable risk/reward ratio compared to running a single one. If you have sufficient DOT or nominators staking on your validators, maintaining multiple validators within the active set can yield higher rewards.

+

In the preceding section, with 18 DOT staked and no nominators, Alice earned 2 DOT in one era. This example uses DOT, but the same principles apply for KSM on the Kusama network. By managing stake across multiple validators, you can potentially increase overall returns. Recall the set of validators from the preceding section:

+
%%Payout, 4 val set, A-D are validators/stakes, E is payout%%
+
+block-beta
+    columns 1
+  block
+    A["Alice (18 DOT)"]
+    B["Bob (9 DOT)"]
+    C["Carol (8 DOT)"]
+    D["Dave (7 DOT)"]
+  end
+    space
+    E["Payout (8 DOT total)"]:1
+    E --"2 DOT"--> A
+    E --"2 DOT"--> B
+    E --"2 DOT"--> C
+    E --"2 DOT"--> D 
+

Now, assume Alice decides to split their stake and run two validators, each with a nine DOT stake. This validator set only has four spots and priority is given to validators with a larger stake. In this example, Dave has the smallest stake and loses his spot in the validator set. Now, Alice will earn two shares of the total payout each era as illustrated below:

+
%%Payout, 4 val set, A-D are validators/stakes, E is payout%%
+
+block-beta
+    columns 1
+  block
+    A["Alice (9 DOT)"]
+    F["Alice (9 DOT)"]
+    B["Bob (9 DOT)"]
+    C["Carol (8 DOT)"]
+  end
+    space
+    E["Payout (8 DOT total)"]:1
+    E --"2 DOT"--> A
+    E --"2 DOT"--> B
+    E --"2 DOT"--> C
+    E --"2 DOT"--> F 
+

With enough stake, you could run more than two validators. However, each validator must have enough stake behind it to maintain a spot in the validator set.

+

Nominators and Validator Payments

+

A nominator's stake allows them to vote for validators and earn a share of the rewards without managing a validator node. Although staking rewards depend on validator activity during an era, validators themselves never control or own nominator rewards. To trigger payouts, anyone can call the staking.payoutStakers or staking.payoutStakersByPage methods, which mint and distribute rewards directly to the recipients. This trustless process ensures nominators receive their earned rewards.
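The snippet below sketches how any account could trigger such a payout with @polkadot/api; the `api` instance, the `signer` keypair, the stash address, and the era number are all assumed placeholders:

```js
// Sketch: anyone can trigger the payout for a validator's era rewards.
// Run inside an async context with a connected `api` and funded `signer`.
const unclaimedEra = 1400; // placeholder era with unclaimed rewards
await api.tx.staking
  .payoutStakers('VALIDATOR_STASH_ADDRESS', unclaimedEra)
  .signAndSend(signer);
```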

+

Validators set a commission rate as a percentage of the block reward, affecting how rewards are shared with nominators. A 0% commission means the validator keeps only rewards from their self-stake, while a 100% commission means they retain all rewards, leaving none for nominators.

+

The following examples model splitting validator payments between nominator and validator using various commission percentages. For simplicity, these examples assume a Polkadot-SDK based relay chain that uses DOT as a native token and a single nominator per validator. Calculations of KSM reward payouts for Kusama follow the same formula.

+

Start with the original validator set from the previous section:

+
block-beta
+    columns 1
+  block:e
+    A["Alice (18 DOT)"]
+    B["Bob (9 DOT)"]
+    C["Carol (8 DOT)"]
+    D["Dave (7 DOT)"]
+  end
+    space
+    E["Payout (8 DOT total)"]:1
+    E --"2 DOT"--> A
+    E --"2 DOT"--> B
+    E --"2 DOT"--> C
+    E --"2 DOT"--> D 
+

The preceding diagram shows each validator receiving a 2 DOT payout, but doesn't account for sharing rewards with nominators. The following diagram shows what nominator payout might look like for validator Alice. Alice has a 20% commission rate and holds 50% of the stake for their validator:

+

+flowchart TD
+    A["Gross Rewards = 2 DOT"]
+    E["Commission = 20%"]
+    F["Alice Validator Payment = 0.4 DOT"]
+    G["Total Stake Rewards = 1.6 DOT"]
+    B["Alice Validator Stake = 18 DOT"]
+    C["9 DOT Alice (50%)"]
+    H["Alice Stake Reward = 0.8 DOT"]
+    I["Total Alice Validator Reward = 1.2 DOT"]
+    D["9 DOT Nominator (50%)"]
+    J["Total Nominator Reward = 0.8 DOT"]
+
+    A --> E
+    E --(2 x 0.20)--> F
+    F --(2 - 0.4)--> G
+    B --> C
+    B --> D
+    C --(1.6 x 0.50)--> H
+    H --(0.4 + 0.8)--> I
+    D --(1.60 x 0.50)--> J
+

Notice the validator commission rate is applied against the gross amount of rewards for the era. The validator commission is subtracted from the total rewards. After the commission is paid to the validator, the remaining amount is split among stake owners according to their percentage of the total stake. A validator's total rewards for an era include their commission plus their piece of the stake rewards.
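This split can be expressed as a small worked function. It is an illustration of the arithmetic above, assuming a single nominator, and reproduces both diagrams:

```js
// Split an era's gross reward between a validator and their nominators.
// commission and validatorStakeShare are fractions in [0, 1].
function splitRewards(gross, commission, validatorStakeShare) {
  const commissionPayment = gross * commission;        // paid to the validator first
  const stakeRewards = gross - commissionPayment;      // remainder split by stake
  const validatorStakeReward = stakeRewards * validatorStakeShare;
  return {
    validatorTotal: commissionPayment + validatorStakeReward,
    nominatorTotal: stakeRewards * (1 - validatorStakeShare),
  };
}

console.log(splitRewards(2, 0.20, 0.5));   // Alice: { validatorTotal: 1.2, nominatorTotal: 0.8 }
console.log(splitRewards(2, 0.40, 1 / 3)); // Bob:   { validatorTotal: 1.2, nominatorTotal: 0.8 }
```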

+

Now, consider a different scenario for validator Bob where the commission rate is 40%, and Bob holds 33% of the stake for their validator:

+

+flowchart TD
+    A["Gross Rewards = 2 DOT"]
+    E["Commission = 40%"]
+    F["Bob Validator Payment = 0.8 DOT"]
+    G["Total Stake Rewards = 1.2 DOT"]
+    B["Bob Validator Stake = 9 DOT"]
+    C["3 DOT Bob (33%)"]
+    H["Bob Stake Reward = 0.4 DOT"]
+    I["Total Bob Validator Reward = 1.2 DOT"]
+    D["6 DOT Nominator (67%)"]
+    J["Total Nominator Reward = 0.8 DOT"]
+
+    A --> E
+    E --(2 x 0.4)--> F
+    F --(2 - 0.8)--> G
+    B --> C
+    B --> D
+    C --(1.2 x 0.33)--> H
+    H --(0.8 + 0.4)--> I
+    D --(1.2 x 0.67)--> J
+

Bob holds a smaller percentage of their node's total stake, making their stake reward smaller than Alice's. In this scenario, Bob makes up the difference by charging a 40% commission rate and ultimately ends up with the same total payment as Alice. Each validator will need to find their ideal balance between the amount of stake and commission rate to attract nominators while still making running a validator worthwhile.

+
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/js/header-scroll.js b/js/header-scroll.js new file mode 100644 index 00000000..c388f2b7 --- /dev/null +++ b/js/header-scroll.js @@ -0,0 +1,10 @@ +// The purpose of this script is to move the header up out of view while +// the user is scrolling down a page +let lastScrollY = window.scrollY; +const header = document.querySelector('.md-header__inner'); + +window.addEventListener('scroll', () => { + // Toggle header visibility based on scroll direction + header.classList.toggle('hidden', window.scrollY > lastScrollY); + lastScrollY = window.scrollY; +}); diff --git a/js/search-bar-results.js b/js/search-bar-results.js new file mode 100644 index 00000000..9c59a949 --- /dev/null +++ b/js/search-bar-results.js @@ -0,0 +1,27 @@ +// The purpose of this script is to modify the default search functionality +// so that the "Type to start searching" text does not render in the search +// results dropdown and so that the dropdown only appears once a user has started +// to type in the input field +document.addEventListener('DOMContentLoaded', () => { + const searchInput = document.querySelector('.md-search__input'); + const searchOutput = document.querySelector('.md-search__output'); + const searchResultMeta = document.querySelector('.md-search-result__meta'); + + if (searchResultMeta.textContent.trim() === 'Initializing search') { + searchResultMeta.style.display = 'none'; + } + + searchInput.addEventListener('input', () => { + // Only show the search results if the user has started to type + // Toggle "visible" class based on input content + searchOutput.classList.toggle('visible', searchInput.value.trim() !== ''); + + // Do not show the search result meta text unless a user has started typing + // a value in the input field + if (searchInput.value.trim() === '' && searchResultMeta) { + searchResultMeta.style.display = 'none'; + } else if (searchInput.value.trim().length > 0) { + searchResultMeta.style.display = 'block'; + } + }); +}); diff --git a/package-lock.json b/package-lock.json new file mode 100644 index 00000000..65b7775a --- /dev/null +++ b/package-lock.json @@ -0,0 +1,43 @@ +{ + "name": "polkadot-docs", + "version": "1.0.0", + "lockfileVersion": 3, + "requires": true, + "packages": { + "": { + "name": "polkadot-docs", + "version": "1.0.0", + "license": "ISC", + "devDependencies": { + "@taplo/cli": "^0.7.0", + "husky": "^8.0.0" + } + }, + "node_modules/@taplo/cli": { + "version": "0.7.0", + "resolved": "https://registry.npmjs.org/@taplo/cli/-/cli-0.7.0.tgz", + "integrity": "sha512-Ck3zFhQhIhi02Hl6T4ZmJsXdnJE+wXcJz5f8klxd4keRYgenMnip3JDPMGDRLbnC/2iGd8P0sBIQqI3KxfVjBg==", + "dev": true, + "license": "MIT", + "bin": { + "taplo": "dist/cli.js" + } + }, + "node_modules/husky": { + "version": "8.0.3", + "resolved": "https://registry.npmjs.org/husky/-/husky-8.0.3.tgz", + "integrity": "sha512-+dQSyqPh4x1hlO1swXBiNb2HzTDN1I2IGLQx1GrBuiqFJfoMrnZWwVmatvSiO+Iz8fBUnf+lekwNo4c2LlXItg==", + "dev": true, + "license": "MIT", + "bin": { + "husky": "lib/bin.js" + }, + "engines": { + "node": ">=14" + }, + "funding": { + "url": "https://github.com/sponsors/typicode" + } + } + } +} diff --git a/package.json b/package.json new file mode 100644 index 00000000..0abdd363 --- /dev/null +++ b/package.json @@ -0,0 +1,17 @@ +{ + "name": "polkadot-docs", + "version": "1.0.0", + "description": "This package contains tools to support the development and maintenance of the polkadot-docs repository.", + "main": "index.js", + "scripts": { + "test": "echo 
\"Error: no test specified\" && exit 1", + "prepare": "husky install" + }, + "keywords": [], + "author": "", + "license": "ISC", + "devDependencies": { + "@taplo/cli": "^0.7.0", + "husky": "^8.0.0" + } +} diff --git a/polkadot-protocol/architecture/index.html b/polkadot-protocol/architecture/index.html new file mode 100644 index 00000000..932eaeed --- /dev/null +++ b/polkadot-protocol/architecture/index.html @@ -0,0 +1,3363 @@ + + + + + + + + + + + + + + + + + + + + + + + + Index | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + +
+
+
+ + + + + +
+
+
+ + +
+ +
+ + + + +

Index

+ + +
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/architecture/parachains/index.html b/polkadot-protocol/architecture/parachains/index.html new file mode 100644 index 00000000..cc5e7e26 --- /dev/null +++ b/polkadot-protocol/architecture/parachains/index.html @@ -0,0 +1,3363 @@ + + + + + + + + + + + + + + + + + + + + + + + + Index | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + +
+
+
+ + + + + +
+
+
+ + +
+ +
+ + + + +

Index

+ + +
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/architecture/polkadot-chain/index.html b/polkadot-protocol/architecture/polkadot-chain/index.html new file mode 100644 index 00000000..818b8557 --- /dev/null +++ b/polkadot-protocol/architecture/polkadot-chain/index.html @@ -0,0 +1,3365 @@ + + + + + + + + + + + + + + + + + + + + + + + + Index | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + +
+
+
+ + + + + +
+
+
+ + +
+ +
+ + + + +

Index

+ + +
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/architecture/polkadot-chain/overview/index.html b/polkadot-protocol/architecture/polkadot-chain/overview/index.html new file mode 100644 index 00000000..bd7705f2 --- /dev/null +++ b/polkadot-protocol/architecture/polkadot-chain/overview/index.html @@ -0,0 +1,3775 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Overview of the Polkadot Relay Chain | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + + + + +
+ +
+ + + + +

Overview

+

Introduction

+

Polkadot is a next-generation blockchain protocol designed to support a multi-chain future by enabling secure communication and interoperability between different blockchains. Built as a Layer-0 protocol, Polkadot introduces innovations like application-specific Layer-1 chains (parachains), shared security through Nominated Proof of Stake (NPoS), and seamless cross-chain interactions via its native Cross-Consensus Messaging Format (XCM).

+

This guide covers key aspects of Polkadot’s architecture, including its high-level protocol structure, runtime upgrades, blockspace commoditization, and the role of its native token, DOT, in governance, staking, and resource allocation.

+

Polkadot 1.0

+

Polkadot 1.0 represents the state of Polkadot as of 2023, coinciding with the release of Polkadot runtime v1.0.0. This section will focus on Polkadot 1.0, along with philosophical insights into network resilience and blockspace.

+

As a Layer-0 blockchain, Polkadot contributes to the multi-chain vision through several key innovations and initiatives, including:

+
    +
  • +

    Application-specific Layer-1 blockchains (parachains) - Polkadot's sharded network allows for parallel transaction processing, with shards that can have unique state transition functions, enabling custom-built L1 chains optimized for specific applications

    +
  • +
  • +

    Shared security and scalability - L1 chains connected to Polkadot benefit from its Nominated-Proof-of-Stake (NPoS) system, providing security out-of-the-box without the need to bootstrap their own

    +
  • +
  • +

    Secure interoperability - Polkadot's native interoperability enables seamless data and value exchange between parachains. This interoperability can also be used outside of the ecosystem for bridging with external networks

    +
  • +
  • +

    Resilient infrastructure - decentralized and scalable, Polkadot ensures ongoing support for development and community initiatives via its on-chain treasury and governance

    +
  • +
  • +

    Rapid L1 development - the Polkadot SDK allows fast, flexible creation and deployment of Layer-1 chains

    +
  • +
  • +

    Cultivating the next generation of Web3 developers - Polkadot supports the growth of Web3 core developers through initiatives such as:

    + +
  • +
+

High-Level Architecture

+

Polkadot features a central chain, the relay chain, that serves as the core component of the system. It is typically depicted as a ring encircled by the parachains connected to it.

+

According to Polkadot's design, any blockchain that can compile to WebAssembly (Wasm) and adheres to the Parachains Protocol becomes a parachain on the Polkadot network.

+

Here’s a high-level overview of the Polkadot protocol architecture:

+

+

Parachains propose blocks to Polkadot validators, who check for availability and validity before finalizing them. With the relay chain providing security, collators—full nodes of parachains—can focus on their tasks without needing strong incentives.

+

The Cross-Consensus Messaging Format (XCM) allows parachains to exchange messages freely, leveraging the chain's security for trust-free communication.

+

In order to interact with chains that want to use their own finalization process (e.g., Bitcoin), Polkadot has bridges that offer two-way compatibility, meaning that transactions can be made between Polkadot and external networks.

+

Polkadot's Additional Functionalities

+

The Polkadot chain previously oversaw crowdloans and auctions: chain cores were leased through auctions for three-month periods, up to a maximum of two years.

+

Crowdloans enabled users to securely lend funds to teams for lease deposits in exchange for pre-sale tokens, which was the only way to access slots on Polkadot 1.0.

+
+

Note

+

Auctions are deprecated in favor of coretime.

+
+

Additionally, the chain handles staking, accounts, balances, and governance.

+

Agile Coretime

+

The new and more efficient way of obtaining a core on Polkadot is to purchase coretime.

+

Agile coretime improves the efficient use of Polkadot's network resources and offers economic flexibility for developers, extending Polkadot's capabilities far beyond the original vision outlined in the whitepaper.

+

It enables parachains to purchase monthly "bulk" allocations of coretime (the time allocated for utilizing a core, measured in Polkadot relay chain blocks), ensuring heavy-duty parachains that can author a block every six seconds with Asynchronous Backing can reliably renew their coretime each month. Although six-second block times are now the default, parachains have the option of producing blocks less frequently.

+

Renewal orders are prioritized over new orders, offering stability against price fluctuations and helping parachains budget more effectively for project costs.

+

Polkadot's Resilience

+

Decentralization is a vital component of blockchain networks, but it comes with trade-offs:

+
    +
  • An overly decentralized network may face challenges in reaching consensus and require significant energy to operate
  • +
  • Also, a network that achieves consensus quickly risks centralization, making it easier to manipulate or attack
  • +
+

A network should be decentralized enough to prevent manipulative or malicious influence. In this sense, decentralization is a tool for achieving resilience.

+

Polkadot 1.0 currently achieves resilience through several strategies:

+
    +
  • +

    Nominated Proof of Stake (NPoS) - this ensures that the stake per validator is maximized and evenly distributed among validators

    +
  • +
  • +

    Decentralized nodes - designed to encourage operators to join the network. This program aims to expand and diversify the validators in the ecosystem who aim to become independent of the program during their term. Feel free to explore more about the program on the official Decentralized Nodes page

    +
  • +
  • +

    On-chain treasury and governance - known as OpenGov, this system allows every decision to be made through public referenda, enabling any token holder to cast a vote

    +
  • +
+

Polkadot's Blockspace

+

Polkadot 1.0’s design allows for the commoditization of blockspace.

+

Blockspace is a blockchain's capacity to finalize and commit operations, encompassing its security, computing, and storage capabilities. Its characteristics can vary across different blockchains, affecting security, flexibility, and availability.

+
    +
  • +

    Security - measures the robustness of blockspace. In Proof of Stake (PoS) networks, it is linked to the stake locked on validator nodes, the variance in stake among validators, and the total number of validators. It also considers social centralization (how many validators are owned by single operators) and physical centralization (how many validators run on the same service provider)

    +
  • +
  • +

    Flexibility - reflects the functionalities and types of data that can be stored, with high-quality data essential to avoid bottlenecks in critical processes

    +
  • +
  • +

    Availability - indicates how easily users can access blockspace. It should be easily accessible, allowing diverse business models to thrive, ideally regulated by a marketplace based on demand and supplemented by options for "second-hand" blockspace

    +
  • +
+

Polkadot is built on core blockspace principles, but there's room for improvement. Tasks like balance transfers, staking, and governance are managed on the relay chain.

+

Delegating these responsibilities to system chains could enhance flexibility and allow the relay chain to concentrate on providing shared security and interoperability.

+
+

Note

+

For more information about blockspace, watch Robert Habermeier’s interview or read his technical blog post.

+
+

DOT Token

+

DOT is the native token of the Polkadot network, much like BTC for Bitcoin and Ether for the Ethereum blockchain. DOT has 10 decimals, uses the Planck base unit, and has a balance type of u128. The same is true for Kusama's KSM token with the exception of having 12 decimals.

+
+Redenomination of DOT +

Polkadot conducted a community poll, which ended on 27 July 2020 at block 888,888, to decide whether to redenominate the DOT token. The stakeholders chose to redenominate the token, changing the value of 1 DOT from 1e12 plancks to 1e10 plancks.

+

Importantly, this did not affect the network's total number of base units (plancks); it only affects how a single DOT is represented.

+

The redenomination became effective 72 hours after transfers were enabled, occurring at block 1,248,328 on 21 August 2020 around 16:50 UTC.

+
+

The Planck Unit

+

The smallest unit of account balance on Substrate-based blockchains (such as Polkadot and Kusama) is called Planck, named after the Planck length, the smallest measurable distance in the physical universe.

+

Similar to how BTC's smallest unit is the Satoshi and ETH's is the Wei, Polkadot's native token DOT equals 1e10 Planck, while Kusama's native token KSM equals 1e12 Planck.
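Since on-chain balances are u128 values denominated in Planck, conversions are simple integer arithmetic. A minimal sketch in JavaScript, using BigInt to stay within integer math:

```js
// DOT and KSM amounts are u128 balances denominated in Planck.
const PLANCK_PER_DOT = 10n ** 10n; // DOT: 10 decimals
const PLANCK_PER_KSM = 10n ** 12n; // KSM: 12 decimals

const fifteenDot = 15n * PLANCK_PER_DOT; // 150000000000n Planck
console.log(fifteenDot.toString());
```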

+

Uses for DOT

+

DOT serves three primary functions within the Polkadot network:

+
    +
  • Governance - it is used to participate in the governance of the network
  • +
  • Staking - DOT is staked to support the network's operation and security
  • +
  • Buying coretime - used to purchase coretime in bulk or on demand and access the chain to benefit from Polkadot's security and interoperability
  • +
+

Additionally, DOT can serve as a transferable token. For example, DOT, held in the treasury, can be allocated to teams developing projects that benefit the Polkadot ecosystem.

+

JAM and the Road Ahead

+

The Join-Accumulate Machine (JAM) represents a transformative redesign of Polkadot's core architecture, envisioned as the successor to the current relay chain. Unlike traditional blockchain architectures, JAM introduces a unique computational model that processes work through two primary functions:

+
    +
  • Join - handles data integration
  • +
  • Accumulate - folds computations into the chain's state
  • +
+

JAM removes many of the opinions and constraints of the current relay chain while maintaining its core security properties. Expected improvements include:

+
    +
  • Permissionless code execution - JAM is designed to be more generic and flexible, allowing for permissionless code execution through services that can be deployed without governance approval
  • +
  • More effective block time utilization - JAM's efficient pipeline processing model places the prior state root in block headers instead of the posterior state root, enabling more effective utilization of block time for computations
  • +
+

This architectural evolution promises to enhance Polkadot's scalability and flexibility while maintaining robust security guarantees. JAM is planned to be rolled out to Polkadot as a single, complete upgrade rather than a stream of smaller updates. This approach seeks to minimize the developer overhead required to address any breaking changes.

+
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/architecture/polkadot-chain/pos-consensus/index.html b/polkadot-protocol/architecture/polkadot-chain/pos-consensus/index.html new file mode 100644 index 00000000..d1998d05 --- /dev/null +++ b/polkadot-protocol/architecture/polkadot-chain/pos-consensus/index.html @@ -0,0 +1,3729 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Proof of Stake Consensus | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + + + + +
+ +
+ + + + +

Proof of Stake Consensus

+

Introduction

+

Polkadot's Proof of Stake consensus model leverages a unique hybrid approach by design to promote decentralized and secure network operations. In traditional Proof of Stake (PoS) systems, a node's ability to validate transactions is tied to its token holdings, which can lead to centralization risks and limited validator participation. Polkadot addresses these concerns through its Nominated Proof of Stake (NPoS) model and a combination of advanced consensus mechanisms to ensure efficient block production and strong finality guarantees. This combination enables the Polkadot network to scale while maintaining security and decentralization.

+

Nominated Proof of Stake

+

Polkadot uses Nominated Proof of Stake (NPoS) to select the validator set and secure the network. This model is designed to maximize decentralization and security by balancing the roles of validators and nominators.

+
    +
  • Validators - play a key role in maintaining the network's integrity. They produce new blocks, validate parachain blocks, and ensure the finality of transactions across the relay chain
  • +
  • Nominators - support the network by selecting validators to back with their stake. This mechanism allows users who don't want to run a validator node to still participate in securing the network and earn rewards based on the validators they support
  • +
+

In Polkadot's NPoS system, nominators can delegate their tokens to trusted validators, giving them voting power in selecting validators while spreading security responsibilities across the network.
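As a hedged sketch of the nominator side of this flow, the staking pallet exposes a nominate extrinsic. The `api` instance, the bonded `nominator` keypair, and the validator addresses are assumed placeholders (the companion bond call is omitted because its signature has changed across runtime versions):

```js
// Sketch: a nominator backs a set of validators with their bonded stake.
// Run inside an async context with a connected `api`.
await api.tx.staking
  .nominate(['VALIDATOR_A_ADDRESS', 'VALIDATOR_B_ADDRESS'])
  .signAndSend(nominator);
```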

+

Hybrid Consensus

+

Polkadot employs a hybrid consensus model that combines two key protocols: a finality gadget called GRANDPA and a block production mechanism known as BABE. This hybrid approach enables the network to benefit from both rapid block production and provable finality, ensuring security and performance.

+

The hybrid consensus model has some key advantages:

+
    +
  • +

    Probabilistic finality - with BABE constantly producing new blocks, Polkadot ensures that the network continues to make progress, even when a final decision has not yet been reached on which chain is the true canonical chain

    +
  • +
  • +

    Provable finality - GRANDPA guarantees that once a block is finalized, it can never be reverted, ensuring that all network participants agree on the finalized chain

    +
  • +
+

By using separate protocols for block production and finality, Polkadot can achieve rapid block creation and strong guarantees of finality while avoiding the typical trade-offs seen in traditional consensus mechanisms.

+

Block Production - BABE

+

Blind Assignment for Blockchain Extension (BABE) is Polkadot's block production mechanism, working with GRANDPA to ensure blocks are produced consistently across the network. As validators participate in BABE, they are assigned block production slots through a randomness-based lottery system. This helps determine which validator is responsible for producing a block at a given time. BABE shares similarities with Ouroboros Praos but differs in key aspects like chain selection rules and slot timing.

+

Key features of BABE include:

+
    +
  • +

    Epochs and slots - BABE operates in phases called epochs, each of which is divided into slots (around 6 seconds per slot). Validators are assigned slots at the beginning of each epoch based on stake and randomness

    +
  • +
  • +

    Randomized block production - validators enter a lottery to determine which will produce a block in a specific slot. This randomness is sourced from the relay chain's randomness cycle

    +
  • +
  • +

    Multiple block producers per slot - in some cases, more than one validator might win the lottery for the same slot, resulting in multiple blocks being produced. These blocks are broadcasted, and the network's fork choice rule helps decide which chain to follow

    +
  • +
  • +

    Handling empty slots - if no validators win the lottery for a slot, a secondary selection algorithm ensures that a block is still produced. Validators selected through this method always produce a block, ensuring no slots are skipped

    +
  • +
+

BABE's combination of randomness and slot allocation creates a secure, decentralized system for consistent block production while also allowing for fork resolution when multiple validators produce blocks for the same slot.
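The epoch and slot parameters described above are exposed as runtime constants, so they can be read directly from a node. A small sketch, assuming a connected `api` instance:

```js
// Sketch: inspect BABE timing constants from a live node.
const slotMs = api.consts.babe.expectedBlockTime.toNumber(); // ms per slot (~6000)
const slotsPerEpoch = api.consts.babe.epochDuration.toNumber();

console.log(`one epoch ≈ ${(slotMs * slotsPerEpoch) / 60_000} minutes`);
```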

+
+Additional Information +
    +
  • Refer to the BABE paper for further technical insights, including cryptographic details and formal proofs
  • +
  • Visit the Block Production Lottery section of the Polkadot Protocol Specification for technical definitions and formulas
  • +
+
+

Validator Participation

+

In BABE, validators participate in a lottery for every slot to determine whether they are responsible for producing a block during that slot. The randomness of this process ensures a decentralized and unpredictable block production mechanism.

+

There are two lottery outcomes for any given slot that initiate additional processes:

+
    +
  • +

    Multiple validators in a slot - due to the randomness, multiple validators can be selected to produce a block for the same slot. When this happens, each validator produces a block and broadcasts it to the network, resulting in a race condition. The network's topology and latency then determine which block reaches the majority of nodes first. BABE allows both chains to continue building until the finalization process resolves which one becomes canonical. The Fork Choice rule is then used to decide which chain the network should follow

    +
  • +
  • +

    No validators in a slot - on occasions when no validator is selected by the lottery, a secondary validator selection algorithm steps in. This backup ensures that a block is still produced, preventing skipped slots. However, if the primary block produced by a verifiable random function (VRF)-selected validator exists for that slot, the secondary block will be ignored. As a result, every slot will have either a primary or a secondary block

    +
  • +
+

This design ensures continuous block production, even in cases of multiple competing validators or an absence of selected validators.

+

Finality Gadget - GRANDPA

+

GRANDPA (GHOST-based Recursive ANcestor Deriving Prefix Agreement) serves as the finality gadget for Polkadot's relay chain. Operating alongside the BABE block production mechanism, it ensures provable finality, giving participants confidence that blocks finalized by GRANDPA cannot be reverted.

+

Key features of GRANDPA include:

+
    +
  • Independent finality service – GRANDPA runs separately from the block production process, operating in parallel to ensure seamless finalization
  • +
  • Chain-based finalization – instead of finalizing one block at a time, GRANDPA finalizes entire chains, speeding up the process significantly
  • +
  • Batch finalization – can finalize multiple blocks in a single round, enhancing efficiency and minimizing delays in the network
  • +
  • Partial synchrony tolerance – GRANDPA works effectively in a partially synchronous network environment, managing both asynchronous and synchronous conditions
  • +
  • Byzantine fault tolerance – can handle up to 1/5 Byzantine (malicious) nodes, ensuring the system remains secure even when faced with adversarial behavior
  • +
+
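GRANDPA's provable finality can be observed directly from a node by streaming finalized headers. A minimal sketch, assuming a connected `api` instance:

```js
// Sketch: subscribe to finalized block headers; anything reported here
// has been finalized by GRANDPA and cannot be reverted.
const unsub = await api.rpc.chain.subscribeFinalizedHeads((header) => {
  console.log(`finalized #${header.number.toNumber()}: ${header.hash.toHex()}`);
});
// Call unsub() later to stop the subscription.
```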
+What is GHOST? +

GHOST (Greedy Heaviest-Observed Subtree) is a consensus protocol used in blockchain networks to select the heaviest branch in a block tree. Unlike traditional longest-chain rules, GHOST can more efficiently handle high block production rates by considering the weight of subtrees rather than just the chain length.

+
+

Probabilistic vs. Provable Finality

+

In traditional Proof-of-Work (PoW) blockchains, finality is probabilistic. As blocks are added to the chain, the probability that a block is final increases, but it can never be guaranteed. Eventual consensus means that over time, all nodes will agree on a single version of the blockchain, but this process can be unpredictable and slow.

+

Conversely, GRANDPA provides provable finality, which means that once a block is finalized, it is irreversible. By using Byzantine fault-tolerant agreements, GRANDPA finalizes blocks more efficiently and securely than probabilistic mechanisms like Nakamoto consensus. Like Ethereum's Casper the Friendly Finality Gadget (FFG), GRANDPA ensures that finalized blocks cannot be reverted, offering stronger guarantees of consensus.

+
+Additional Information +

For more details, including formal proofs and detailed algorithms, see the GRANDPA paper.

+
+

Fork Choice

+

The fork choice of the relay chain combines BABE and GRANDPA:

+
    +
  1. BABE must always build on the chain that GRANDPA has finalized
  2. +
  3. When there are forks after the finalized head, BABE builds on the chain with the most primary blocks to provide probabilistic finality
  4. +
+

Fork choice diagram

+

In the preceding diagram, finalized blocks are black, and non-finalized blocks are yellow. Primary blocks are labeled '1', and secondary blocks are labeled '2.' The topmost chain is the longest chain originating from the last finalized block, but it is not selected because it only has one primary block at the time of evaluation. In comparison, the one below it originates from the last finalized block and has three primary blocks.

+

Bridging - BEEFY

+

Bridge Efficiency Enabling Finality Yielder (BEEFY) is a specialized protocol that extends the finality guarantees provided by GRANDPA. It is specifically designed to facilitate efficient bridging between Polkadot relay chains (such as Polkadot and Kusama) and external blockchains like Ethereum. While GRANDPA is well-suited for finalizing blocks within Polkadot, it has limitations when bridging external chains that weren't built with Polkadot's interoperability features in mind. BEEFY addresses these limitations by ensuring other networks can efficiently verify finality proofs.

+

Key features of BEEFY include:

+
    +
  • Efficient finality proof verification - BEEFY enables external networks to easily verify Polkadot finality proofs, ensuring seamless communication between chains
  • +
  • Merkle Mountain Ranges (MMR) - this data structure is used to efficiently store and transmit proofs between chains, optimizing data storage and reducing transmission overhead
  • +
  • ECDSA signature schemes - BEEFY uses ECDSA signatures, which are widely supported on Ethereum and other EVM-based chains, making integration with these ecosystems smoother
  • +
  • Light client optimization - BEEFY reduces the computational burden on light clients by allowing them to check for a super-majority of validator votes rather than needing to process all validator signatures, improving performance
  • +
+
+Additional Information +

For more details, including technical definitions and formulas, see Bridge design (BEEFY) in the Polkadot Protocol Specification.

+
+

Resources

+ +
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/architecture/system-chains/asset-hub/index.html b/polkadot-protocol/architecture/system-chains/asset-hub/index.html new file mode 100644 index 00000000..cc07e59e --- /dev/null +++ b/polkadot-protocol/architecture/system-chains/asset-hub/index.html @@ -0,0 +1,4000 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Asset Hub | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + + + + +
+ +
+ + + + +

Asset Hub

+

Introduction

+

The Asset Hub is a critical component in the Polkadot ecosystem, enabling the management of fungible and non-fungible assets across the network. Since the relay chain focuses on maintaining security and consensus without direct asset management, Asset Hub provides a streamlined platform for creating, managing, and using on-chain assets in a fee-efficient manner. This guide outlines the core features of Asset Hub, including how it handles asset operations, cross-chain transfers, and asset integration using XCM, as well as essential tools like API Sidecar and TxWrapper for developers working with on-chain assets.

+

Assets Basics

+

In the Polkadot ecosystem, the relay chain does not natively support additional assets beyond its native token (DOT for Polkadot, KSM for Kusama). The Asset Hub parachain on Polkadot and Kusama provides a fungible and non-fungible assets framework. Asset Hub allows developers and users to create, manage, and use assets across the ecosystem.

+

Asset creators can use Asset Hub to track their asset issuance across multiple parachains and manage assets through operations such as minting, burning, and transferring. Projects that need a standardized method of handling on-chain assets will find this particularly useful. The fungible asset interface provided by Asset Hub closely resembles Ethereum's ERC-20 standard but is directly integrated into Polkadot's runtime, making it more efficient in terms of speed and transaction fees.

+

Integrating with Asset Hub offers several key benefits, particularly for infrastructure providers and users:

+
    +
  • Support for non-native on-chain assets - Asset Hub enables seamless asset creation and management, allowing projects to develop tokens or assets that can interact with the broader ecosystem
  • +
  • Lower transaction fees - Asset Hub offers significantly lower transaction costs—approximately one-tenth of the fees on the relay chain, providing cost-efficiency for regular operations
  • +
  • Reduced deposit requirements - depositing assets in Asset Hub is more accessible, with deposit requirements that are around one one-hundredth of those on the relay chain
  • +
  • Payment of transaction fees with non-native assets - users can pay transaction fees in assets other than the native token (DOT or KSM), offering more flexibility for developers and users
  • +
+

Assets created on the Asset Hub are stored as part of a map, where each asset has a unique ID that links to information about the asset, including details like:

+
    +
  • The management team
  • +
  • The total supply
  • +
  • The number of accounts holding the asset
  • +
  • Sufficiency for account existence - whether the asset alone is enough to maintain an account without a native token balance
  • +
  • The metadata of the asset, including its name, symbol, and the number of decimals for representation
  • +
+

Some assets can be regarded as sufficient to maintain an account's existence, meaning that users can create accounts on the network without needing a native token balance (i.e., no existential deposit required). Developers can also set minimum balances for their assets. If an account's balance drops below the minimum, the balance is considered dust and may be cleared.

+

Assets Pallet

+

The Polkadot SDK's Assets pallet is a powerful module designed for creating and managing fungible asset classes. It offers a secure and flexible way to issue, transfer, freeze, and destroy assets. The pallet supports various operations and includes permissioned and non-permissioned functions to cater to simple and advanced use cases.

+

Visit the Assets Pallet Rust docs for more in-depth information.

+

Key Features

+

Key features of the Assets pallet include:

+
    +
  • Asset issuance - allows the creation of a new asset, where the total supply is assigned to the creator's account
  • +
  • Asset transfer - enables transferring assets between accounts while maintaining a balance in both accounts
  • +
  • Asset freezing - prevents transfers of a specific asset from one account, locking it from further transactions
  • +
  • Asset destruction - allows accounts to burn or destroy their holdings, removing those assets from circulation
  • +
  • Non-custodial transfers - a non-custodial mechanism to enable one account to approve a transfer of assets on behalf of another
  • +
+

Main Functions

+

The Assets pallet provides a broad interface for managing fungible assets. Some of the main dispatchable functions include:

+
    +
  • create() - create a new asset class by placing a deposit, applicable when asset creation is permissionless
  • +
  • issue() - mint a fixed supply of a new asset and assign it to the creator's account
  • +
  • transfer() - transfer a specified amount of an asset between two accounts
  • +
  • approve_transfer() - approve a non-custodial transfer, allowing a third party to move assets between accounts
  • +
  • destroy() - destroy an entire asset class, removing it permanently from the chain
  • +
  • freeze() and thaw() - administrators or privileged users can lock or unlock assets from being transferred
  • +
+

For a full list of dispatchable and privileged functions, see the dispatchables Rust docs.
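As a rough illustration, these dispatchables can be submitted through the Polkadot.js API. The sketch below assumes a public Asset Hub WebSocket endpoint, the //Alice dev account, and an arbitrary asset ID of 1234; note that in current runtimes minting is exposed as mint, and dispatchable names can vary between runtime versions.

// Sketch: creating, minting, and transferring an asset with the Assets pallet.
// Endpoint, asset ID, and accounts are placeholders.
const { ApiPromise, WsProvider } = require('@polkadot/api');
const { Keyring } = require('@polkadot/keyring');

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-asset-hub-rpc.polkadot.io'),
  });
  const signer = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');
  const assetId = 1234;

  // create() - place a deposit and register a new asset class (admin + min balance)
  await api.tx.assets
    .create(assetId, signer.address, 1_000_000)
    .signAndSend(signer, { nonce: -1 });

  // Mint an initial supply to the creator's account
  await api.tx.assets
    .mint(assetId, signer.address, 10_000_000)
    .signAndSend(signer, { nonce: -1 });

  // transfer() - move a specified amount of the asset to another account
  await api.tx.assets
    .transfer(assetId, 'INSERT_DEST_ADDRESS', 1_000_000)
    .signAndSend(signer, { nonce: -1 });
}

main().catch(console.error);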

+

Querying Functions

+

The Assets pallet exposes several key querying functions that developers can interact with programmatically. These functions allow you to query asset information and perform operations essential for managing assets across accounts. The two main querying functions are:

+
    +
  • +

    balance(asset_id, account) - retrieves the balance of a given asset for a specified account. Useful for checking the holdings of an asset class across different accounts

    +
  • +
  • +

    total_supply(asset_id) - returns the total supply of the asset identified by asset_id. Allows users to verify how much of the asset exists on-chain

    +
  • +
+

In addition to these basic functions, other utility functions are available for querying asset metadata and performing asset transfers. You can view the complete list of querying functions in the Struct Pallet Rust docs.
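A minimal sketch of these queries with the Polkadot.js API follows; the endpoint is a placeholder, and asset ID 1984 is used purely as an example (it corresponds to USDT on Polkadot Asset Hub at the time of writing). In the on-chain metadata, balance(asset_id, account) surfaces as the assets.account storage item and total_supply(asset_id) as the supply field of assets.asset.

// Sketch: reading asset state with the Polkadot.js API.
const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-asset-hub-rpc.polkadot.io'),
  });
  const assetId = 1984;

  // total_supply(asset_id) - read the asset's details and inspect its supply
  const details = await api.query.assets.asset(assetId);
  if (details.isSome) {
    console.log('Total supply:', details.unwrap().supply.toHuman());
  }

  // balance(asset_id, account) - per-account holdings of the asset
  const holding = await api.query.assets.account(assetId, 'INSERT_ACCOUNT_ADDRESS');
  console.log('Balance:', holding.isSome ? holding.unwrap().balance.toHuman() : '0');
}

main().catch(console.error);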

+

Permission Models and Roles

+

The Assets pallet incorporates a robust permission model, enabling control over who can perform specific operations like minting, transferring, or freezing assets. The key roles within the permission model are:

+
    +
  • Admin - can freeze (preventing transfers) and forcibly transfer assets between accounts. Admins also have the power to reduce the balance of an asset class across arbitrary accounts. They manage the more sensitive and administrative aspects of the asset class
  • +
  • Issuer - responsible for minting new tokens. When new assets are created, the Issuer is the account that controls their distribution to other accounts
  • +
  • Freezer - can lock the transfer of assets from an account, preventing the account holder from moving their balance. This function is useful for freezing accounts involved in disputes or fraud
  • +
  • Owner - has overarching control, including destroying an entire asset class. Owners can also set or update the Issuer, Freezer, and Admin roles
  • +
+

These permissions provide fine-grained control over assets, enabling developers and asset managers to ensure secure, controlled operations. Each of these roles is crucial for managing asset lifecycles and ensuring that assets are used appropriately across the network.

+

Asset Freezing

+

The Assets pallet allows you to freeze assets. This feature prevents transfers or spending from a specific account, effectively locking the balance of an asset class until it is explicitly unfrozen. Asset freezing is beneficial when assets are restricted due to security concerns or disputes.

+

Freezing assets is controlled by the Freezer role, as mentioned earlier. Only the account with the Freezer privilege can perform these operations. Here are the key freezing functions:

+
    +
  • freeze(asset_id, account) - locks the specified asset of the account. While the asset is frozen, no transfers can be made from the frozen account
  • +
  • thaw(asset_id, account) - corresponding function for unfreezing, allowing the asset to be transferred again
  • +
+

This approach enables secure and flexible asset management, providing administrators the tools to control asset movement in special circumstances.
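A short sketch of these calls through the Polkadot.js API follows; the endpoint, asset ID 1234, and the //Alice dev account are placeholders, and the submitting origin must actually hold the Freezer role for the calls to succeed.

// Sketch: Freezer-only calls to lock and unlock an account's holdings of an asset.
const { ApiPromise, WsProvider } = require('@polkadot/api');
const { Keyring } = require('@polkadot/keyring');

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-asset-hub-rpc.polkadot.io'),
  });
  const freezer = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');

  // freeze(asset_id, account) - block transfers of asset 1234 from the target account
  await api.tx.assets
    .freeze(1234, 'INSERT_ACCOUNT_ADDRESS')
    .signAndSend(freezer, { nonce: -1 });

  // thaw(asset_id, account) - re-enable transfers for the same account
  await api.tx.assets
    .thaw(1234, 'INSERT_ACCOUNT_ADDRESS')
    .signAndSend(freezer, { nonce: -1 });
}

main().catch(console.error);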

+

Non-Custodial Transfers (Approval API)

+

The Assets pallet also supports non-custodial transfers through the Approval API. This feature allows one account to approve another account to transfer a specific amount of its assets to a third-party recipient without granting full control over the account's balance. Non-custodial transfers enable secure transactions between parties that do not need to fully trust one another.

+

Here's a brief overview of the key functions for non-custodial asset transfers:

+
    +
  • approve_transfer(asset_id, delegate, amount) - approves a delegate to transfer up to a certain amount of the asset on behalf of the original account holder
  • +
  • cancel_approval(asset_id, delegate) - cancels a previous approval for the delegate. Once canceled, the delegate no longer has permission to transfer the approved amount
  • +
  • transfer_approved(asset_id, owner, recipient, amount) - executes the approved asset transfer from the owner’s account to the recipient. The delegate account can call this function once approval is granted
  • +
+

These delegated operations make it easier to manage multi-step transactions and dApps that require complex asset flows between participants.
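The flow can be sketched with the Polkadot.js API as follows, assuming a placeholder endpoint, asset ID 1234, and the //Alice and //Bob dev accounts; the camelCase method names mirror the snake_case dispatchables above.

// Sketch: approving a delegate and spending the allowance.
const { ApiPromise, WsProvider } = require('@polkadot/api');
const { Keyring } = require('@polkadot/keyring');

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-asset-hub-rpc.polkadot.io'),
  });
  const keyring = new Keyring({ type: 'sr25519' });
  const owner = keyring.addFromUri('//Alice');
  const delegate = keyring.addFromUri('//Bob');

  // approve_transfer - let the delegate move up to 1_000 units of asset 1234
  await api.tx.assets
    .approveTransfer(1234, delegate.address, 1_000)
    .signAndSend(owner, { nonce: -1 });

  // transfer_approved - the delegate spends the allowance to a third-party recipient
  await api.tx.assets
    .transferApproved(1234, owner.address, 'INSERT_RECIPIENT_ADDRESS', 1_000)
    .signAndSend(delegate, { nonce: -1 });
}

main().catch(console.error);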

+

Foreign Assets

+

Foreign assets in Asset Hub refer to assets originating from external blockchains or parachains that are registered in the Asset Hub. These assets are typically native tokens from other parachains within the Polkadot ecosystem or bridged tokens from external blockchains such as Ethereum.

+

Once a foreign asset is registered in the Asset Hub by its originating blockchain's root origin, users are able to send these tokens to the Asset Hub and interact with them as they would any other asset within the Polkadot ecosystem.

+

Handling Foreign Assets

+

The Foreign Assets pallet, an instance of the Assets pallet, manages these assets. Since foreign assets are integrated into the same interface as native assets, developers can use the same functionalities, such as transferring and querying balances. However, there are important distinctions when dealing with foreign assets.

+
    +
  • +

    Asset identifier - unlike native assets, foreign assets are identified using an XCM Multilocation rather than a simple numeric AssetId. This multilocation identifier represents the cross-chain location of the asset and provides a standardized way to reference it across different parachains and relay chains

    +
  • +
  • +

    Transfers - once registered in the Asset Hub, foreign assets can be transferred between accounts, just like native assets. Users can also send these assets back to their originating blockchain if supported by the relevant cross-chain messaging mechanisms

    +
  • +
+
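As a sketch of the difference in identifiers, the following queries a foreign asset keyed by its XCM location instead of a numeric AssetId. The endpoint and location are illustrative, and the exact key shape (for example, whether X1 wraps an array of junctions) depends on the XCM version in the runtime's metadata.

// Sketch: querying a foreign asset by its cross-chain location.
const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-asset-hub-rpc.polkadot.io'),
  });

  // A sibling parachain's native token, identified by location rather than a numeric ID
  const location = { parents: 1, interior: { X1: [{ Parachain: 2004 }] } };

  const details = await api.query.foreignAssets.asset(location);
  console.log(details.toHuman());
}

main().catch(console.error);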

Integration

+

Asset Hub supports a variety of integration tools that make it easy for developers to manage assets and interact with the blockchain in their applications. The tools and libraries provided by Parity Technologies enable streamlined operations, such as querying asset information, building transactions, and monitoring cross-chain asset transfers.

+

Developers can integrate Asset Hub into their projects using these core tools:

+

API Sidecar

+

API Sidecar is a RESTful service that can be deployed alongside Polkadot and Kusama nodes. It provides endpoints to retrieve real-time blockchain data, including asset information. When used with Asset Hub, Sidecar allows querying:

+
    +
  • Asset look-ups - retrieve specific assets using AssetId
  • +
  • Asset balances - view the balance of a particular asset on Asset Hub
  • +
+

Public instances of API Sidecar connected to Asset Hub are available.

+ +

These public instances are primarily for ad-hoc testing and quick checks.

+

TxWrapper

+

TxWrapper is a library that simplifies constructing and signing transactions for Polkadot SDK-based chains, including Polkadot and Kusama. This tool includes support for working with Asset Hub, enabling developers to:

+
    +
  • Construct offline transactions
  • +
  • Leverage asset-specific functions such as minting, burning, and transferring assets
  • +
+

TxWrapper provides the flexibility needed to integrate asset operations into custom applications while maintaining the security and efficiency of Polkadot's transaction model.

+

Asset Transfer API

+

Asset Transfer API is a library focused on simplifying the construction of asset transfers for Polkadot SDK-based chains that involve system parachains like Asset Hub. It exposes a reduced set of methods that facilitate users sending transfers to other parachains or locally. Refer to the cross-chain support table for the current status of cross-chain support development.

+

Key features include:

+
    +
  • Support for cross-chain transfers between parachains
  • +
  • Streamlined transaction construction with support for the necessary parachain metadata
  • +
+

The API supports various asset operations, such as paying transaction fees with non-native tokens and managing asset liquidity.

+

Parachain Node

+

To fully leverage the Asset Hub's functionality, developers will need to run a system parachain node. Setting up an Asset Hub node allows users to interact with the parachain in real time, syncing data and participating in the broader Polkadot ecosystem. Guidelines for setting up an Asset Hub node are available in the Parity documentation.

+

Using these integration tools, developers can manage assets seamlessly and integrate Asset Hub functionality into their applications, leveraging Polkadot's powerful infrastructure.

+

XCM Transfer Monitoring

+

Since Asset Hub facilitates cross-chain asset transfers across the Polkadot ecosystem, XCM transfer monitoring becomes an essential practice for developers and infrastructure providers. This section outlines how to monitor the cross-chain movement of assets between parachains, the relay chain, and other systems.

+

Monitor XCM Deposits

+

As assets move between chains, tracking the cross-chain transfers in real time is crucial. Whether assets are transferred via a teleport from system parachains or through a reserve-backed transfer from any other parachain, each transfer emits a relevant event (such as the balances.minted event).

+

To ensure accurate monitoring of these events:

+
    +
  • Track XCM deposits - query every new block created in the relay chain or Asset Hub, loop through the events array, and filter for any balances.minted events which confirm the asset was successfully transferred to the account
  • +
  • Track event origins - each balances.minted event points to a specific address. By monitoring this, service providers can verify that assets have arrived in the correct account
  • +
+
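A minimal subscription sketch with the Polkadot.js API is shown below; the endpoint is a placeholder, and in the on-chain metadata the event surfaces as balances.Minted.

// Sketch: watching every block's events and filtering for balance mints.
const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-asset-hub-rpc.polkadot.io'),
  });

  // Subscribe to the events of each new block
  await api.query.system.events((events) => {
    events.forEach(({ event }) => {
      if (event.section === 'balances' && event.method === 'Minted') {
        const [who, amount] = event.data;
        console.log(`Minted ${amount.toHuman()} to ${who.toHuman()}`);
      }
    });
  });
}

main().catch(console.error);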

Track XCM Information Back to the Source

+

While the balances.minted event confirms the arrival of assets, there may be instances where you need to trace the origin of the cross-chain message that triggered the event. In such cases, you can:

+
    +
  1. Query the relevant chain at the block where the balances.minted event was emitted
  2. +
  3. Look for a messageQueue(Processed) event within that block's initialization. This event contains a parameter (Id) that identifies the cross-chain message received by the relay chain or Asset Hub. You can use this Id to trace the message back to its origin chain, offering full visibility of the asset transfer's journey
  4. +
+
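Under the same assumptions (placeholder endpoint and block hash), the message Id can be pulled from the events stored at that block:

// Sketch: recovering the cross-chain message Id at a known block.
const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-asset-hub-rpc.polkadot.io'),
  });

  const blockHash = 'INSERT_BLOCK_HASH'; // block where balances.minted was emitted
  const apiAt = await api.at(blockHash);
  const events = await apiAt.query.system.events();

  events.forEach(({ event }) => {
    if (event.section === 'messageQueue' && event.method === 'Processed') {
      // The Id parameter identifies the message; use it to query the origin chain
      console.log('Message Id:', event.data[0].toHex());
    }
  });
}

main().catch(console.error);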

Practical Monitoring Examples

+

The preceding sections outline the process of monitoring XCM deposits to specific accounts and then tracing back the origin of these deposits. The process of tracking an XCM transfer and the specific events to monitor may vary based on the direction of the XCM message. Here are some examples to showcase the slight differences:

+
    +
  • Transfer from parachain to relay chain - track parachainSystem(UpwardMessageSent) on the parachain and messageQueue(Processed) on the relay chain
  • +
  • Transfer from relay chain to parachain - track xcmPallet(Sent) on the relay chain and dmpQueue(ExecutedDownward) on the parachain
  • +
  • Transfer between parachains - track xcmpQueue(XcmpMessageSent) on the origin parachain and xcmpQueue(Success) on the destination parachain
  • +
+

Monitor for Failed XCM Transfers

+

Sometimes, XCM transfers may fail due to liquidity or other errors. Failed transfers emit specific error events, which are key to resolving issues in asset transfers. Monitoring for these failure events helps catch issues before they affect asset balances.

+
    +
  • Relay chain to system parachain - look for the dmpQueue(ExecutedDownward) event on the parachain with an Incomplete outcome and an error type such as UntrustedReserveLocation
  • +
  • Parachain to parachain - monitor for xcmpQueue(Fail) on the destination parachain with error types like TooExpensive
  • +
+

For detailed error management in XCM, see Gavin Wood's blog post on XCM Execution and Error Management.

+
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/architecture/system-chains/bridge-hub/index.html b/polkadot-protocol/architecture/system-chains/bridge-hub/index.html new file mode 100644 index 00000000..b0d5182a --- /dev/null +++ b/polkadot-protocol/architecture/system-chains/bridge-hub/index.html @@ -0,0 +1,3568 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Bridge Hub | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + +
+ +
+ + +
+ +
+ + + + +

Bridge Hub

+ +

Introduction

+

The Bridge Hub system parachain plays a crucial role in facilitating trustless interactions between Polkadot, Kusama, Ethereum, and other blockchain ecosystems. By implementing on-chain light clients and supporting protocols like BEEFY and GRANDPA, Bridge Hub ensures seamless message transmission and state verification across chains. It also provides essential pallets for sending and receiving messages, making it a cornerstone of Polkadot’s interoperability framework. With built-in support for XCM (Cross-Consensus Messaging), Bridge Hub enables secure, efficient communication between diverse blockchain networks.

+

This guide covers the architecture, components, and deployment of the Bridge Hub system. You'll explore its trustless bridging mechanisms, key pallets for various blockchains, and specific implementations like Snowbridge and the Polkadot <> Kusama bridge. By the end, you'll understand how Bridge Hub enhances connectivity within the Polkadot ecosystem and beyond.

+

Trustless Bridging

+

Bridge Hub provides a mode of trustless bridging through its implementation of on-chain light clients and trustless relayers. The target chain and source chain both provide ways of verifying one another's state and actions (such as a transfer) based on the consensus and finality of both chains rather than an external mechanism controlled by a third party.

+

BEEFY (Bridge Efficiency Enabling Finality Yielder) is instrumental in this solution. It provides a more efficient way to verify the consensus on the relay chain. It allows the participants in a network to verify finality proofs, meaning a remote chain like Ethereum can verify the state of Polkadot at a given block height.

+
+

Info

+

In this context, "trustless" refers to the lack of need to trust a human when interacting with various system components. Trustless systems are based instead on trusting mathematics, cryptography, and code.

+
+

Trustless bridges are essentially two one-way bridges, where each chain has a method of verifying the state of the other in a trustless manner through consensus proofs.

+

For example, the Ethereum and Polkadot bridging solution that Snowbridge implements involves two light clients: one which verifies the state of Polkadot and the other which verifies the state of Ethereum. The light client that verifies Ethereum's state (by tracking its beacon chain) is implemented in the Bridge Hub runtime as a pallet, whereas the light client that verifies Polkadot's state is implemented as a smart contract on Ethereum.

+

Bridging Components

+

In any given Bridge Hub implementation (Kusama, Polkadot, or other relay chains), a few primary pallets are utilized to track the finality of the remote chain and to send and receive messages.

+ +

Ethereum-Specific Support

+

Bridge Hub also has a set of components and pallets that support a bridge between Polkadot and Ethereum through Snowbridge.

+

To view the complete list of which pallets are included in Bridge Hub, visit the Subscan Runtime Modules page. Alternatively, the source code for those pallets can be found in the Polkadot SDK Snowbridge Pallets repository.

+

Deployed Bridges

+
    +
  • Snowbridge - a general-purpose, trustless bridge between Polkadot and Ethereum
  • +
  • Hyperbridge - a cross-chain solution built as an interoperability coprocessor, providing state-proof-based interoperability across all blockchains
  • +
  • Polkadot <> Kusama Bridge - a bridge that utilizes relayers to bridge the Polkadot and Kusama relay chains trustlessly
  • +
+

Where to Go Next

+ +
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/architecture/system-chains/coretime/index.html b/polkadot-protocol/architecture/system-chains/coretime/index.html new file mode 100644 index 00000000..c92328c0 --- /dev/null +++ b/polkadot-protocol/architecture/system-chains/coretime/index.html @@ -0,0 +1,3536 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Coretime | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + +
+
+
+ + + + + +
+
+
+ + +
+ +
+ + + + +

Coretime

+ +

Introduction

+

The Coretime system chain facilitates the allocation, procurement, sale, and scheduling of bulk coretime, enabling tasks (such as parachains) to utilize the computation and security provided by Polkadot.

+

The Broker pallet, along with Cross-Consensus Messaging (XCM), enables this functionality to be delegated to the system chain rather than the relay chain. Upward Message Passing (UMP) to the relay chain allows core assignments to take place for a task registered on the relay chain.

+

The Fellowship RFC RFC-1: Agile Coretime contains the specification for the Coretime system chain and coretime as a concept.

+

Besides core management, its responsibilities include:

+
    +
  • The number of cores that should be made available
  • +
  • Which tasks should be running on which cores and in what ratios
  • +
  • Accounting information for the on-demand pool
  • +
+

From the relay chain, it expects the following via Downward Message Passing (DMP):

+
    +
  • The number of cores available to be scheduled
  • +
  • Account information on on-demand scheduling
  • +
+

The details for this interface can be found in RFC-5: Coretime Interface.

+

Bulk Coretime Assignment

+

The Coretime chain allocates coretime before its usage. It also manages the ownership of a core. As cores are made up of regions (by default, one core is a single region), a region is recognized as a non-fungible asset. The Coretime chain exposes Regions over XCM as an NFT. Users can transfer individual regions, partition, interlace, or allocate them to a task. Regions describe how a task may use a core.

+
+

One core can contain more than one region.

+

A core can be considered a logical representation of an active validator set on the relay chain, where these validators commit to verifying the state changes for a particular task running on that core. With partitioning, having more than one region per core is possible, allowing for different computational schemes. Therefore, running more than one task on a single core is possible.

+
+ + +

Regions can be managed in the following manner on the Coretime chain:

  • Assigning regions - regions can be assigned to a task on the relay chain, such as a parachain/rollup, using the assign dispatchable
  • +
  • Transferring regions - regions may be transferred on the Coretime chain, upon which the transfer dispatchable in the Broker pallet assigns a new owner to that specific region
  • +
  • Partitioning regions - using the partition dispatchable, regions may be partitioned into two non-overlapping subregions within the same core. A partition involves specifying a pivot, at which point the new region is defined and becomes available for use
  • +
  • Interlacing regions - using the interlace dispatchable, a core can run alternative-compute strategies. Whereas partitioned regions are mutually exclusive, interlaced regions overlap because multiple tasks may utilize a single core in an alternating manner

Coretime Availability

When bulk coretime is obtained, block production is not immediately available. It becomes available to produce blocks for a task in the next Coretime cycle. To view the status of the current or next Coretime cycle, go to the Subscan Coretime Dashboard.

For more information regarding these mechanisms, visit the coretime page on the Polkadot Wiki: Introduction to Agile Coretime.
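As a rough sketch of these dispatchables via the Polkadot.js API: the endpoint, region values, and task ID below are placeholders, and the RegionId shape (begin timeslice, core index, and part mask) should be verified against the live Broker pallet metadata.

// Sketch: region management calls on the Coretime chain's Broker pallet.
const { ApiPromise, WsProvider } = require('@polkadot/api');
const { Keyring } = require('@polkadot/keyring');

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-coretime-rpc.polkadot.io'),
  });
  const owner = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');

  // A region is keyed by its start timeslice, core index, and part mask
  const regionId = { begin: 100_000, core: 10, mask: '0xffffffffffffffffffff' };

  // assign - dedicate the region to a task (e.g., a parachain ID) with Final finality
  await api.tx.broker.assign(regionId, 2000, 'Final').signAndSend(owner, { nonce: -1 });

  // partition - split the region at a pivot timeslice into two subregions
  // await api.tx.broker.partition(regionId, 101_000).signAndSend(owner, { nonce: -1 });
}

main().catch(console.error);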

+

On Demand Coretime

+

At the time of writing, on-demand coretime is deployed on the relay chain and will eventually be deployed to the Coretime chain. On-demand coretime allows parachains (previously known as parathreads) to utilize available cores per block.

+

The Coretime chain also handles coretime sales, details of which can be found on the Polkadot Wiki: Agile Coretime: Coretime Sales.

+

Where to Go Next

+ +
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/architecture/system-chains/index.html b/polkadot-protocol/architecture/system-chains/index.html new file mode 100644 index 00000000..9afd5818 --- /dev/null +++ b/polkadot-protocol/architecture/system-chains/index.html @@ -0,0 +1,3365 @@ + + + + + + + + + + + + + + + + + + + + + + + + Index | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + +
+
+
+ + + + + +
+
+
+ + +
+ +
+ + + + +

Index

+ + +
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/architecture/system-chains/overview/index.html b/polkadot-protocol/architecture/system-chains/overview/index.html new file mode 100644 index 00000000..e0833aaf --- /dev/null +++ b/polkadot-protocol/architecture/system-chains/overview/index.html @@ -0,0 +1,3657 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Overview of Polkadot's System Chains | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + +
+
+ +
+
+ + +
+ +
+ + + + +

Overview

+ +

Introduction

+

Polkadot's relay chain is designed to secure parachains and facilitate seamless inter-chain communication. However, resource-intensive tasks like governance, asset management, and bridging are more efficiently handled by system parachains. These specialized chains offload functionality from the relay chain, leveraging Polkadot's parallel execution model to improve performance and scalability. By distributing key functionalities across system parachains, Polkadot can maximize its relay chain's blockspace for its core purpose of securing and validating parachains.

+

This guide will explore how system parachains operate within Polkadot and Kusama, detailing their critical roles in network governance, asset management, and bridging. You'll learn about the currently deployed system parachains, their unique functions, and how they enhance Polkadot's decentralized ecosystem.

+

System Chains

+

System parachains contain core Polkadot protocol features, but in parachains rather than the relay chain. Execution cores for system chains are allocated via network governance rather than purchasing coretime on a marketplace.

+

System parachains defer to on-chain governance to manage their upgrades and other sensitive actions as they do not have native tokens or governance systems separate from DOT or KSM. It is not uncommon to see a system parachain implemented specifically to manage network governance.

+
+

Note

+

You may see system parachains called common good parachains in articles and discussions. This nomenclature caused confusion as the network evolved, so system parachains is preferred.

+

For more details on this evolution, review this parachains forum discussion.

+
+

Existing System Chains

+
---
+title: System Parachains at a Glance
+---
+flowchart TB
+    subgraph POLKADOT["Polkadot"]
+        direction LR
+            PAH["Polkadot Asset Hub"]
+            PCOL["Polkadot Collectives"]
+            PBH["Polkadot Bridge Hub"]
+            PPC["Polkadot People Chain"]
+            PCC["Polkadot Coretime Chain"]
+    end
+
+    subgraph KUSAMA["Kusama"]
+        direction LR
+            KAH["Kusama Asset Hub"]
+            KBH["Kusama Bridge Hub"]
+            KPC["Kusama People Chain"]
+            KCC["Kusama Coretime Chain"]
+            E["Encointer"]
+        end
+

All system parachains are on both Polkadot and Kusama with the following exceptions:

  • Collectives - deployed on Polkadot only
  • Encointer - deployed on Kusama only

Asset Hub

+

The Asset Hub is an asset portal for the entire network. It helps asset creators, such as reserve-backed stablecoin issuers, track the total issuance of an asset in the network, including amounts transferred to other parachains. It also serves as the hub where asset creators can perform on-chain operations, such as minting and burning, to manage their assets effectively.

+

This asset management logic is encoded directly in the runtime of the chain rather than in smart contracts. The efficiency of executing logic in a parachain allows for fees of about one-tenth, and deposits of about one one-hundredth, of what is required on the relay chain. These low fees mean that the Asset Hub is well suited for handling the frequent transactions required when managing balances, transfers, and on-chain assets.

+

The Asset Hub also supports non-fungible assets (NFTs) via the Uniques pallet and NFTs pallet. For more information about NFTs, see the Polkadot Wiki section on NFT Pallets.

+

Collectives

+

The Polkadot Collectives parachain was added in Referendum 81 and exists on Polkadot but not on Kusama. The Collectives chain hosts on-chain collectives that serve the Polkadot network, including the following:

+
    +
  • Polkadot Alliance - provides a set of ethics and standards for the community to follow. Includes an on-chain means to call out bad actors
  • +
  • Polkadot Technical Fellowship - a rules-based social organization to support and incentivize highly-skilled developers to contribute to the technical stability, security, and progress of the network
  • +
+

These on-chain collectives will play essential roles in the future of network stewardship and decentralized governance. Networks can use a bridge hub to help them act as collectives and express their legislative voices as single opinions within other networks.

+

Bridge Hub

+

Before parachains, the only way to design a bridge was to put the logic onto the relay chain. Since both networks now support parachains and the isolation they provide, each network can have a parachain dedicated to bridges.

+

The Bridge Hub system parachain runs alongside its relay chain and is responsible for facilitating bridges to the wider Web3 space. It contains the required bridge pallets in its runtime, which enable trustless bridging with other blockchain networks like Polkadot, Kusama, and Ethereum. The Bridge Hub uses the native token of the relay chain.

+

See the Bridge Hub documentation for additional information.

+

People Chain

+

The People Chain provides a naming system that allows users to manage and verify their account identity.

+

Coretime Chain

+

The Coretime system chain lets users buy coretime to access Polkadot's computation. Coretime marketplaces run on top of the Coretime chain.

+

Visit Introduction to Agile Coretime in the Polkadot Wiki for more information.

+

Encointer

+

Encointer is a blockchain platform for self-sovereign ID and a global universal basic income (UBI). Kusama does not use the Collectives system chain; instead, it relies on Encointer, which provides Sybil resistance as a service to the entire Kusama ecosystem. The Encointer protocol uses a novel Proof of Personhood (PoP) system to create unique identities and resist Sybil attacks. PoP is based on the notion that a person can only be in one place at any given time. Encointer offers a framework that allows for any group of real people to create, distribute, and use their own digital community tokens.

+

Participants are requested to attend physical key-signing ceremonies with small groups of random people at randomized locations. These local meetings are part of one global signing ceremony occurring at the same time. Participants use the Encointer wallet app to participate in these ceremonies and manage local community currencies.

+

Several referendums have marked key Encointer adoption milestones.

+ +
+

Tip

+

To learn more about Encointer, check out the official Encointer book or watch an Encointer ceremony in action.

+
+
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/basics/accounts/index.html b/polkadot-protocol/basics/accounts/index.html new file mode 100644 index 00000000..517f77d5 --- /dev/null +++ b/polkadot-protocol/basics/accounts/index.html @@ -0,0 +1,4310 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Polkadot SDK Accounts | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + + + + +
+ +
+ + + + +

Accounts

+

Introduction

+

Accounts are essential for managing identity, transactions, and governance on the network in the Polkadot SDK. Understanding these components is critical for seamless development and operation on the network, whether you're building or interacting with Polkadot-based chains.

+

This page will guide you through the essential aspects of accounts, including their data structure, balance types, reference counters, and address formats. You’ll learn how accounts are managed within the runtime, how balances are categorized, and how addresses are encoded and validated.

+

Account Data Structure

+

Accounts are foundational to any blockchain, and the Polkadot SDK provides a flexible management system. This section explains how the Polkadot SDK defines accounts and manages their lifecycle through data structures within the runtime.

+

Account

+

The Account data type is a storage map within the System pallet that links an account ID to its corresponding data. This structure is fundamental for mapping account-related information within the chain.

+

The code snippet below shows how accounts are defined:

+
 /// The full account information for a particular account ID
+ #[pallet::storage]
+ #[pallet::getter(fn account)]
+ pub type Account<T: Config> = StorageMap<
+   _,
+   Blake2_128Concat,
+   T::AccountId,
+   AccountInfo<T::Nonce, T::AccountData>,
+   ValueQuery,
+ >;
+
+

The preceding code block defines a storage map named Account. The StorageMap is a type of on-chain storage that maps keys to values. In the Account map, the key is an account ID, and the value is the account's information. Here, T represents the generic parameter for the runtime configuration, which is defined by the pallet's configuration trait (Config).

+

The StorageMap consists of the following parameters:

+
    +
  • _ - used in macro expansion and acts as a placeholder for the storage prefix type. Tells the macro to insert the default prefix during expansion
  • +
  • Blake2_128Concat - the hashing function applied to keys in the storage map
  • +
  • T::AccountId - represents the key type, which corresponds to the account’s unique ID
  • +
  • AccountInfo<T::Nonce, T::AccountData> - the value type stored in the map. For each account ID, the map stores an AccountInfo struct containing:
      +
    • T::Nonce - a nonce for the account, which is incremented with each transaction to ensure transaction uniqueness
    • +
    • T::AccountData - custom account data defined by the runtime configuration, which could include balances, locked funds, or other relevant information
    • +
    +
  • +
  • ValueQuery - defines how queries to the storage map behave when no value is found; returns a default value instead of None
  • +
+
+Additional information +

For a detailed explanation of storage maps, refer to the StorageMap Rust docs.

+
+

Account Info

+

The AccountInfo structure is another key element within the System pallet, providing more granular details about each account's state. This structure tracks vital data, such as the number of transactions and the account’s relationships with other modules.

+
#[derive(Clone, Eq, PartialEq, Default, RuntimeDebug, Encode, Decode)]
+pub struct AccountInfo<Nonce, AccountData> {
+  pub nonce: Nonce,
+  pub consumers: RefCount,
+  pub providers: RefCount,
+  pub sufficients: RefCount,
+  pub data: AccountData,
+}
+
+

The AccountInfo structure includes the following components:

+
    +
  • nonce - tracks the number of transactions initiated by the account, which ensures transaction uniqueness and prevents replay attacks
  • +
  • consumers - counts how many other modules or pallets rely on this account’s existence. The account cannot be removed from the chain (reaped) until this count reaches zero
  • +
  • providers - tracks how many modules permit this account’s existence. An account can only be reaped once both providers and sufficients are zero
  • +
  • sufficients - represents the number of modules that allow the account to exist for internal purposes, independent of any other modules
  • +
  • AccountData - a flexible data structure that can be customized in the runtime configuration, usually containing balances or other user-specific data
  • +
+

This structure helps manage an account's state and prevents its premature removal while it is still referenced by other on-chain data or modules. The AccountInfo structure can vary as long as it satisfies the trait bounds defined by the AccountData associated type in the frame-system::pallet::Config trait.
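A small sketch of reading this structure for a live account through the Polkadot.js API follows; the endpoint and address are placeholders.

// Sketch: querying an account's AccountInfo, including its reference counters.
const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://rpc.polkadot.io'),
  });

  // system.account returns the AccountInfo struct described above
  const { nonce, consumers, providers, sufficients, data } =
    await api.query.system.account('INSERT_ACCOUNT_ADDRESS');

  console.log(`nonce=${nonce}, consumers=${consumers}, providers=${providers}, sufficients=${sufficients}`);
  console.log('Free balance:', data.free.toHuman());
}

main().catch(console.error);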

+

Account Reference Counters

+

Polkadot SDK uses reference counters to track an account’s dependencies across different runtime modules. These counters ensure that accounts remain active while data is associated with them.

+

The reference counters include:

+
    +
  • consumers - prevents account removal while other pallets still rely on the account
  • +
  • providers - ensures an account is active before other pallets store data related to it
  • +
  • sufficients - indicates the account’s independence, ensuring it can exist even without a native token balance, such as when holding sufficient alternative assets
  • +
+

Providers Reference Counters

+

The providers counter ensures that an account is ready to be depended upon by other runtime modules. For example, it is incremented when an account has a balance above the existential deposit, which marks the account as active.

+

The system requires this reference counter to be greater than zero for the consumers counter to be incremented, ensuring the account is stable before any dependencies are added.

+

Consumers Reference Counters

+

The consumers counter ensures that the account cannot be reaped until all references to it across the runtime have been removed. This check prevents the accidental deletion of accounts that still have active on-chain data.

+

It is the user’s responsibility to clear out any data from other runtime modules if they wish to remove their account and reclaim their existential deposit.

+

Sufficients Reference Counter

+

The sufficients counter tracks accounts that can exist independently without relying on a native account balance. This is useful for accounts holding other types of assets, like tokens, without needing a minimum balance in the native token.

+

For instance, the Assets pallet, may increment this counter for an account holding sufficient tokens.

+

Account Deactivation

+

In Polkadot SDK-based chains, an account is deactivated when its reference counters (such as providers, consumers, and sufficients) reach zero. These counters ensure the account remains active as long as other runtime modules or pallets reference it.

+

When all dependencies are cleared and the counters drop to zero, the account becomes deactivated and may be removed from the chain (reaped). This is particularly important in Polkadot SDK-based blockchains, where accounts with balances below the existential deposit threshold are pruned from storage to conserve state resources.

+

Each pallet that references an account has cleanup functions that decrement these counters when the pallet no longer depends on the account. Once these counters reach zero, the account is marked for deactivation.

+

Updating Counters

+

The Polkadot SDK provides runtime developers with various methods to manage account lifecycle events, such as deactivation or incrementing reference counters. These methods ensure that accounts cannot be reaped while still in use.

+

The following helper functions manage these counters:

+
    +
  • inc_consumers() - increments the consumer reference counter for an account, signaling that another pallet depends on it
  • +
  • dec_consumers() - decrements the consumer reference counter, signaling that a pallet no longer relies on the account
  • +
  • inc_providers() - increments the provider reference counter, ensuring the account remains active
  • +
  • dec_providers() - decrements the provider reference counter, allowing for account deactivation when no longer in use
  • +
  • inc_sufficients() - increments the sufficient reference counter for accounts that hold sufficient assets
  • +
  • dec_sufficients() - decrements the sufficient reference counter
  • +
+

To ensure proper account cleanup and lifecycle management, a corresponding decrement should be made for each increment action.

+

The System pallet offers three query functions to assist developers in tracking account states:

+
    +
  • can_inc_consumer() - checks if the account can safely increment the consumer reference
  • +
  • can_dec_provider() - ensures that no consumers exist before allowing the decrement of the provider counter
  • +
  • is_provider_required() - verifies whether the account still has any active consumer references
  • +
+

This modular and flexible system of reference counters tightly controls the lifecycle of accounts in Polkadot SDK-based blockchains, preventing the accidental removal or retention of unneeded accounts. You can refer to the System pallet Rust docs for more details.

+

Account Balance Types

+

In the Polkadot ecosystem, account balances are categorized into different types based on how the funds are utilized and their availability. These balance types determine the actions that can be performed, such as transferring tokens, paying transaction fees, or participating in governance activities. Understanding these balance types helps developers manage user accounts and implement balance-dependent logic.

+
+

A more efficient distribution of account balance types is in development

+

Soon, pallets in the Polkadot SDK will implement the fungible trait (see the tracking issue for more details). This update will enable more efficient use of account balances, allowing the free balance to be utilized for on-chain activities such as setting proxies and managing identities.

+
+

Balance Types

+

The five main balance types are:

+
    +
  • Free balance - represents the total tokens available to the account for any on-chain activity, including staking, governance, and voting. However, it may not be fully spendable or transferrable if portions of it are locked or reserved
  • +
  • Locked balance - portions of the free balance that cannot be spent or transferred because they are tied up in specific activities like staking, vesting, or participating in governance. While the tokens remain part of the free balance, they are non-transferable for the duration of the lock
  • +
  • Reserved balance - funds locked by specific system actions, such as setting up an identity, creating proxies, or submitting deposits for governance proposals. These tokens are not part of the free balance and cannot be spent unless they are unreserved
  • +
  • Spendable balance - the portion of the free balance that is available for immediate spending or transfers. It is calculated by subtracting the maximum of locked or reserved amounts from the free balance, ensuring that existential deposit limits are met
  • +
  • Untouchable balance - funds that cannot be directly spent or transferred but may still be utilized for on-chain activities, such as governance participation or staking. These tokens are typically tied to certain actions or locked for a specific period
  • +
+

The spendable balance is calculated as follows:

+
spendable = free - max(locked - reserved, ED)
+
+

Here, free, locked, and reserved are defined above. The ED represents the existential deposit, the minimum balance required to keep an account active and prevent it from being reaped. You may find you can't see all balance types when looking at your account via a wallet. Wallet providers often display only spendable, locked, and reserved balances.
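As a worked example of this formula, with illustrative values and Polkadot's 1 DOT existential deposit:

// Spendable balance from the formula above, in plancks (1 DOT = 10^10 plancks).
const free = 100n * 10n ** 10n;   // 100 DOT free
const locked = 80n * 10n ** 10n;  // 80 DOT locked (e.g., staking)
const reserved = 5n * 10n ** 10n; // 5 DOT reserved (e.g., identity deposit)
const ED = 1n * 10n ** 10n;       // existential deposit (1 DOT on Polkadot)

const max = (a, b) => (a > b ? a : b);
const spendable = free - max(locked - reserved, ED);
console.log(`Spendable: ${spendable} plancks`); // 25 DOT worth of plancks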

+

Locks

+

Locks are applied to an account's free balance, preventing that portion from being spent or transferred. Locks are automatically placed when an account participates in specific on-chain activities, such as staking or governance. Although multiple locks may be applied simultaneously, they do not stack. Instead, the largest lock determines the total amount of locked tokens.

+

Locks follow these basic rules:

+
    +
  • If different locks apply to varying amounts, the largest lock amount takes precedence
  • +
  • If multiple locks apply to the same amount, the lock with the longest duration governs when the balance can be unlocked
  • +
+

Locks Example

+

Consider an example where an account has 80 DOT locked for both staking and governance purposes like so:

+
    +
  • 80 DOT is staked with a 28-day lock period
  • +
  • 24 DOT is locked for governance with a 1x conviction and a 7-day lock period
  • +
  • 4 DOT is locked for governance with a 6x conviction and a 224-day lock period
  • +
+

In this case, the total locked amount is 80 DOT because only the largest lock (80 DOT from staking) governs the locked balance. These 80 DOT will be released at different times based on the lock durations. In this example, the 24 DOT locked for governance will be released first since the shortest lock period is seven days. The 80 DOT stake with a 28-day lock period is released next. Now, all that remains locked is the 4 DOT for governance. After 224 days, all 80 DOT (minus the existential deposit) will be free and transferrable.

+

Illustration of Lock Example

+

Edge Cases for Locks

+

In scenarios where multiple convictions and lock periods are active, the lock duration and amount are determined by the longest period and largest amount. For example, if you delegate with different convictions and attempt to undelegate during an active lock period, the lock may be extended for the full amount of tokens. For a detailed discussion on edge case lock behavior, see this Stack Exchange post.

+

Balance Types on Polkadot.js

+

Polkadot.js provides a user-friendly interface for managing and visualizing various account balances on Polkadot and Kusama networks. When interacting with Polkadot.js, you will encounter multiple balance types that are critical for understanding how your funds are distributed and restricted. This section explains how different balances are displayed in the Polkadot.js UI and what each type represents.

+

+

The most common balance types displayed on Polkadot.js are:

+
    +
  • +

    Total balance - the total number of tokens available in the account. This includes all tokens, whether they are transferable, locked, reserved, or vested. However, the total balance does not always reflect what can be spent immediately. In this example, the total balance is 0.6274 KSM

    +
  • +
  • +

    Transferrable balance - shows how many tokens are immediately available for transfer. It is calculated by subtracting the locked and reserved balances from the total balance. For example, if an account has a total balance of 0.6274 KSM and a transferrable balance of 0.0106 KSM, only the latter amount can be sent or spent freely

    +
  • +
  • +

    Vested balance - tokens that are allocated to the account but released according to a specific schedule. Vested tokens remain locked and cannot be transferred until fully vested. For example, an account with a vested balance of 0.2500 KSM means that this amount is owned but not yet transferable

    +
  • +
  • +

    Locked balance - tokens that are temporarily restricted from being transferred or spent. These locks typically result from participating in staking, governance, or vested transfers. In Polkadot.js, locked balances do not stack—only the largest lock is applied. For instance, if an account has 0.5500 KSM locked for governance and staking, the locked balance would display 0.5500 KSM, not the sum of all locked amounts

    +
  • +
  • +

    Reserved balance - refers to tokens locked for specific on-chain actions, such as setting an identity, creating a proxy, or making governance deposits. Reserved tokens are not part of the free balance, but can be freed by performing certain actions. For example, removing an identity would unreserve those funds

    +
  • +
  • +

    Bonded balance - the tokens locked for staking purposes. Bonded tokens are not transferrable until they are unbonded after the unbonding period

    +
  • +
  • +

    Redeemable balance - the number of tokens that have completed the unbonding period and are ready to be unlocked and transferred again. For example, if an account has a redeemable balance of 0.1000 KSM, those tokens are now available for spending

    +
  • +
  • +

    Democracy balance - reflects the number of tokens locked for governance activities, such as voting on referenda. These tokens are locked for the duration of the governance action and are only released after the lock period ends

    +
  • +
+

By understanding these balance types and their implications, developers and users can better manage their funds and engage with on-chain activities more effectively.

+

Address Formats

+

The SS58 address format is a core component of the Polkadot SDK that enables accounts to be uniquely identified across Polkadot-based networks. This format is a modified version of Bitcoin's Base58Check encoding, specifically designed to accommodate the multi-chain nature of the Polkadot ecosystem. SS58 encoding allows each chain to define its own set of addresses while maintaining compatibility and checksum validation for security.

+

Basic Format

+

SS58 addresses consist of three main components:

+
base58encode(concat(<address-type>, <address>, <checksum>))
+
+
    +
  • Address type - a byte or set of bytes that define the network (or chain) for which the address is intended. This ensures that addresses are unique across different Polkadot SDK-based chains
  • +
  • Address - the public key of the account encoded as bytes
  • +
  • Checksum - a hash-based checksum which ensures that addresses are valid and unaltered. The checksum is derived from the concatenated address type and address components, ensuring integrity
  • +
+

The encoding process transforms the concatenated components into a Base58 string, providing a compact and human-readable format that avoids easily confused characters (e.g., zero '0', capital 'O', lowercase 'l'). This encoding function (encode) is implemented exactly as defined in Bitcoin and IPFS specifications, using the same alphabet as both implementations.
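To see the role of the address-type prefix in practice, the same public key can be re-encoded for different networks using the Polkadot.js keyring helpers (the input address is a placeholder):

// Sketch: re-encoding one public key with different SS58 network prefixes.
const { decodeAddress, encodeAddress } = require('@polkadot/keyring');

// Recover the raw 32-byte public key from any valid SS58 address
const publicKey = decodeAddress('INSERT_SS58_ADDRESS');

console.log('Polkadot:', encodeAddress(publicKey, 0)); // prefix 0
console.log('Kusama:  ', encodeAddress(publicKey, 2)); // prefix 2
console.log('Generic: ', encodeAddress(publicKey, 42)); // prefix 42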

+
+Additional information +

Refer to Ss58Codec for more details on the SS58 address format implementation.

+
+

Address Type

+

The address type defines how an address is interpreted and to which network it belongs. Polkadot SDK uses different prefixes to distinguish between various chains and address formats:

+
    +
  • Address types 0-63 - simple addresses, commonly used for network identifiers
  • +
  • Address types 64-127 - full addresses that support a wider range of network identifiers
  • +
  • Address types 128-255 - reserved for future address format extensions
  • +
+

For example, Polkadot’s main network uses an address type of 0, while Kusama uses 2. This ensures that addresses can be used without confusion between networks.

+

The address type is always encoded as part of the SS58 address, making it easy to quickly identify the network. Refer to the SS58 registry for the canonical listing of all address type identifiers and how they map to Polkadot SDK-based networks.

+

Address Length

+

SS58 addresses can have different lengths depending on the specific format. Address lengths range from as short as 3 to 35 bytes, depending on the complexity of the address and network requirements. This flexibility allows SS58 addresses to adapt to different chains while providing a secure encoding mechanism.

| Total | Type | Raw account | Checksum |
|-------|------|-------------|----------|
| 3     | 1    | 1           | 1        |
| 4     | 1    | 2           | 1        |
| 5     | 1    | 2           | 2        |
| 6     | 1    | 4           | 1        |
| 7     | 1    | 4           | 2        |
| 8     | 1    | 4           | 3        |
| 9     | 1    | 4           | 4        |
| 10    | 1    | 8           | 1        |
| 11    | 1    | 8           | 2        |
| 12    | 1    | 8           | 3        |
| 13    | 1    | 8           | 4        |
| 14    | 1    | 8           | 5        |
| 15    | 1    | 8           | 6        |
| 16    | 1    | 8           | 7        |
| 17    | 1    | 8           | 8        |
| 35    | 1    | 32          | 2        |
+

SS58 addresses also support different payload sizes, allowing a flexible range of account identifiers.

+

Checksum Types

+

A checksum is applied to validate SS58 addresses. Polkadot SDK uses a Blake2b-512 hash function to calculate the checksum, which is appended to the address before encoding. The checksum length can vary depending on the address format (e.g., 1-byte, 2-byte, or longer), providing varying levels of validation strength.

+

The checksum ensures that an address is not modified or corrupted, adding an extra layer of security for account management.

+

Validating Addresses

+

SS58 addresses can be validated using the subkey command-line interface or the Polkadot.js API. These tools help ensure an address is correctly formatted and valid for the intended network. The following sections will provide an overview of how validation works with these tools.

+

Using Subkey

+

Subkey is a CLI tool provided by Polkadot SDK for generating and managing keys. It can inspect and validate SS58 addresses.

+

The inspect command gets a public key and an SS58 address from the provided secret URI. The basic syntax for the subkey inspect command is:

+
subkey inspect [flags] [options] uri
+
+

For the uri command-line argument, you can specify the secret seed phrase, a hex-encoded private key, or an SS58 address. If the input is a valid address, the subkey program displays the corresponding hex-encoded public key, account identifier, and SS58 addresses.

+

For example, to inspect the public keys derived from a secret seed phrase, you can run a command similar to the following:

+
subkey inspect "caution juice atom organ advance problem want pledge someone senior holiday very"
+
+

The command displays output similar to the following:

+
+

subkey inspect "caution juice atom organ advance problem want pledge someone senior holiday very" + Secret phrase caution juice atom organ advance problem want pledge someone senior holiday very is account: + Secret seed: 0xc8fa03532fb22ee1f7f6908b9c02b4e72483f0dbd66e4cd456b8f34c6230b849 + Public key (hex): 0xd6a3105d6768e956e9e5d41050ac29843f98561410d3a47f9dd5b3b227ab8746 + Public key (SS58): 5Gv8YYFu8H1btvmrJy9FjjAWfb99wrhV3uhPFoNEr918utyR + Account ID: 0xd6a3105d6768e956e9e5d41050ac29843f98561410d3a47f9dd5b3b227ab8746 + SS58 Address: 5Gv8YYFu8H1btvmrJy9FjjAWfb99wrhV3uhPFoNEr918utyR

+
+

The subkey program assumes an address is based on a public/private key pair. If you inspect an address, the command returns the 32-byte account identifier.

+

However, not all addresses in Polkadot SDK-based networks are based on keys.

+

Depending on the command-line options you specify and the input you provided, the command output might also display the network for which the address has been encoded. For example:

+
subkey inspect "12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU"
+
+

The command displays output similar to the following:

+
+

subkey inspect "12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU" + Public Key URI 12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU is account: + Network ID/Version: polkadot + Public key (hex): 0x46ebddef8cd9bb167dc30878d7113b7e168e6f0646beffd77d69d39bad76b47a + Account ID: 0x46ebddef8cd9bb167dc30878d7113b7e168e6f0646beffd77d69d39bad76b47a + Public key (SS58): 12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU + SS58 Address: 12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU

+
+

Using Polkadot.js API

+

To verify an address in JavaScript or TypeScript projects, you can use the functions built into the Polkadot.js API. For example:

+
// Import Polkadot.js API dependencies
+const { decodeAddress, encodeAddress } = require('@polkadot/keyring');
+const { hexToU8a, isHex } = require('@polkadot/util');
+
+// Specify an address to test.
+const address = 'INSERT_ADDRESS_TO_TEST';
+
+// Check address
+const isValidSubstrateAddress = () => {
+  try {
+    encodeAddress(isHex(address) ? hexToU8a(address) : decodeAddress(address));
+
+    return true;
+  } catch (error) {
+    return false;
+  }
+};
+
+// Query result
+const isValid = isValidSubstrateAddress();
+console.log(isValid);
+
+

If the function returns true, the specified address is a valid address.

+

Other SS58 Implementations

+

Support for encoding and decoding Polkadot SDK SS58 addresses has been implemented in several other languages and libraries.
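As a minimal sketch of what such an implementation does, the same Polkadot.js keyring used above can re-encode an address between networks. The example below uses the well-known Alice development address purely for illustration and converts it from the generic Substrate prefix (42) to the Polkadot prefix (0):

const { decodeAddress, encodeAddress } = require('@polkadot/keyring');

// The well-known Alice development address (generic Substrate prefix 42).
const genericAddress = '5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY';

// Decode to the raw 32-byte public key, then re-encode with the Polkadot prefix (0).
const publicKey = decodeAddress(genericAddress);
const polkadotAddress = encodeAddress(publicKey, 0);

console.log(polkadotAddress);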


Blocks

+

Introduction

+

In the Polkadot SDK, blocks are fundamental to the functioning of the blockchain, serving as containers for transactions and changes to the chain's state. Blocks consist of headers and an array of transactions, ensuring the integrity and validity of operations on the network. This guide explores the essential components of a block, the process of block production, and how blocks are validated and imported across the network. By understanding these concepts, developers can better grasp how blockchains maintain security, consistency, and performance within the Polkadot ecosystem.

+

What is a Block?

+

In the Polkadot SDK, a block is a fundamental unit that encapsulates both the header and an array of transactions. The block header includes critical metadata to ensure the integrity and sequence of the blockchain. Here's a breakdown of its components:

+
  • Block height - indicates the number of blocks created in the chain so far
  • Parent hash - the hash of the previous block, providing a link to maintain the blockchain's immutability
  • Transaction root - cryptographic digest summarizing all transactions in the block
  • State root - a cryptographic digest representing the post-execution state
  • Digest - additional information that can be attached to a block, such as consensus-related messages

Each transaction is part of a series that is executed according to the runtime's rules. The transaction root is a cryptographic digest of this series, which prevents alterations and enables succinct verification by light clients. This verification process allows light clients to confirm whether a transaction exists in a block with only the block header, avoiding downloading the entire block.
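To see these header fields in practice, the following Polkadot.js sketch (assuming the public wss://rpc.polkadot.io endpoint is reachable) fetches the latest header and prints each component described above:

const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({ provider: new WsProvider('wss://rpc.polkadot.io') });

  // Fetch the latest block header and print the fields described above.
  const header = await api.rpc.chain.getHeader();
  console.log('Block height:', header.number.toNumber());
  console.log('Parent hash:', header.parentHash.toHex());
  console.log('Transaction root:', header.extrinsicsRoot.toHex());
  console.log('State root:', header.stateRoot.toHex());
  console.log('Digest items:', header.digest.logs.length);

  await api.disconnect();
}

main().catch(console.error);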

+

Block Production

+

When an authoring node is authorized to create a new block, it selects transactions from the transaction queue based on priority. This step, known as block production, relies heavily on the executive module to manage the initialization and finalization of blocks. The process is summarized as follows:

+

Initialize Block

+

The block initialization process begins with a series of function calls that prepare the block for transaction execution:

+
  1. Call on_initialize - the executive module calls the on_initialize hook from the system pallet and other runtime pallets to prepare for the block's transactions
  2. Coordinate runtime calls - coordinates function calls in the order defined by the transaction queue
  3. Verify information - once on_initialize functions are executed, the executive module checks the parent hash in the block header and the trie root to verify information is consistent

Finalize Block

+

Once transactions are processed, the block must be finalized before being broadcast to the network. The finalization steps are as follows:

+
  1. Call on_finalize - the executive module calls the on_finalize hooks in each pallet to ensure any remaining state updates or checks are completed before the block is sealed and published
  2. Verify information - the block's digest and storage root in the header are checked against the initialized block to ensure consistency
  3. Call on_idle - the on_idle hook is triggered to process any remaining tasks using the leftover weight from the block

Block Authoring and Import

+

Once finalized, the block is gossiped to other nodes in the network. The full procedure an authoring node follows to produce and publish a block is:

+
  1. Receive transactions - the authoring node collects transactions from the network
  2. Validate - transactions are checked for validity
  3. Queue - valid transactions are placed in the transaction pool for execution
  4. Execute - state changes are made as the transactions are executed
  5. Publish - the finalized block is broadcast to the network

Block Import Queue

+

After a block is published, other nodes on the network can import it into their chain state. The block import queue is part of the outer node in every Polkadot SDK-based node and ensures incoming blocks are valid before adding them to the node's state.

+

In most cases, you don't need to know details about how transactions are gossiped or how other nodes on the network import blocks. The following traits are relevant, however, if you plan to write any custom consensus logic or want a deeper dive into the block import queue:

+
  • ImportQueue - the trait that defines the block import queue
  • Link - the trait that defines the link between the block import queue and the network
  • BasicQueue - a basic implementation of the block import queue
  • Verifier - the trait that defines the block verifier
  • BlockImport - the trait that defines the block import process

These traits govern how blocks are validated and imported across the network, ensuring consistency and security.

+
Additional information

Refer to the Block reference to learn more about the block structure in the Polkadot SDK runtime.

+
+

Transactions Weights and Fees

+

Introduction

+

When transactions are executed, or data is stored on-chain, the activity changes the chain's state and consumes blockchain resources. Because the resources available to a blockchain are limited, managing how operations on-chain consume them is important. In addition to being limited in practical terms, such as storage capacity, blockchain resources represent a potential attack vector for malicious users. For example, a malicious user might attempt to overload the network with messages to stop the network from producing new blocks. To protect blockchain resources from being drained or overloaded, you need to manage how they are made available and how they are consumed. The resources to be aware of include:

+
  • Memory usage
  • Storage input and output
  • Computation
  • Transaction and block size
  • State database size

The Polkadot SDK provides block authors with several ways to manage access to resources and to prevent individual components of the chain from consuming too much of any single resource. Two of the most important mechanisms available to block authors are weights and transaction fees.

+

Weights manage the time it takes to validate a block and characterize the time it takes to execute the calls in the block's body. By controlling the execution time a block can consume, weights set limits on storage input, output, and computation.

+

Some of the weight allowed for a block is consumed as part of the block's initialization and finalization. The weight might also be used to execute mandatory inherent extrinsic calls. To help ensure blocks don’t consume too much execution time and prevent malicious users from overloading the system with unnecessary calls, weights are combined with transaction fees.

+

Transaction fees provide an economic incentive to limit execution time, computation, and the number of calls required to perform operations. Transaction fees are also used to make the blockchain economically sustainable because they are typically applied to transactions initiated by users and deducted before a transaction request is executed.

+

How Fees are Calculated

+

The final fee for a transaction is calculated using the following parameters:

+
  • base fee - this is the minimum amount a user pays for a transaction. It is declared as a base weight in the runtime and converted to a fee using the WeightToFee conversion
  • weight fee - a fee proportional to the execution time (input and output and computation) that a transaction consumes
  • length fee - a fee proportional to the encoded length of the transaction
  • tip - an optional tip to increase the transaction's priority, giving it a higher chance to be included in the transaction queue

The base fee and proportional weight and length fees constitute the inclusion fee. The inclusion fee is the minimum fee that must be available for a transaction to be included in a block.

+
inclusion fee = base fee + weight fee + length fee
+
+

Transaction fees are withdrawn before the transaction is executed. After the transaction is executed, the weight can be adjusted to reflect the resources used. If a transaction uses fewer resources than expected, the transaction fee is corrected, and the adjusted transaction fee is deposited.

+

Using the Transaction Payment Pallet

+

The Transaction Payment pallet provides the basic logic for calculating the inclusion fee. You can also use the Transaction Payment pallet to adjust how fees are calculated through its configuration traits.


You can learn more about these configuration traits in the Transaction Payment documentation.

+

Understanding the Inclusion Fee

+

The formula for calculating the inclusion fee is as follows:

+
inclusion_fee = base_fee + length_fee + [targeted_fee_adjustment * weight_fee]
+
+

And then, for calculating the final fee:

+
final_fee = inclusion_fee + tip
+
+

In the first formula, the targeted_fee_adjustment is a multiplier that can tune the final fee based on the network’s congestion.

+
  • The base_fee derived from the base weight covers inclusion overhead like signature verification
  • The length_fee is a per-byte fee that is multiplied by the length of the encoded extrinsic
  • The weight_fee is calculated using two parameters:
      • The ExtrinsicBaseWeight that is declared in the runtime and applies to all extrinsics
      • The #[pallet::weight] annotation that accounts for an extrinsic's complexity

To convert the weight to Currency, the runtime must define a WeightToFee struct that implements a conversion function, Convert<Weight,Balance>.

+

Note that the extrinsic sender is charged the inclusion fee before the extrinsic is invoked. The fee is deducted from the sender's balance even if the transaction fails upon execution.
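To make the formulas concrete, here is a small worked example; all numbers below are assumed for illustration and are not real runtime constants:

// Hypothetical values, chosen only to make the arithmetic concrete.
const baseFee = 1_000_000n;           // base weight converted through WeightToFee
const lengthFee = 10n * 256n;         // per-byte fee times an assumed 256-byte extrinsic
const weightFee = 500_000n;           // dispatch weight converted through WeightToFee
const targetedFeeAdjustment = 1n;     // congestion multiplier, assumed neutral here
const tip = 500n;

const inclusionFee = baseFee + lengthFee + targetedFeeAdjustment * weightFee;
const finalFee = inclusionFee + tip;

console.log(`inclusion_fee = ${inclusionFee}, final_fee = ${finalFee}`);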

+

Accounts with an Insufficient Balance

+

If an account does not have a sufficient balance to pay the inclusion fee and remain alive—that is, enough to pay the inclusion fee and maintain the minimum existential deposit—then you should ensure the transaction is canceled so that no fee is deducted and the transaction does not begin execution.

+

The Polkadot SDK doesn't enforce this rollback behavior. However, this scenario would be rare because the transaction queue and block-making logic perform checks to prevent it before adding an extrinsic to a block.

+

Fee Multipliers

+

The inclusion fee formula always results in the same fee for the same input. However, weight can be dynamic and—based on how WeightToFee is defined—the final fee can include some degree of variability. +The Transaction Payment pallet provides the FeeMultiplierUpdate configurable parameter to account for this variability.

+

The default update function is inspired by the Polkadot network and implements a targeted adjustment in which a target saturation level of block weight is defined. If the previous block is more saturated than the target, the fees increase slightly. Similarly, if the last block has fewer transactions than the target, fees are decreased by a small amount. For more information about fee multiplier adjustments, see the Web3 Research Page.

+

Transactions with Special Requirements

+

Inclusion fees must be computable before execution and can only represent fixed logic. Some transactions warrant limiting resources with other strategies. For example:

+
  • Bonds are a type of fee that might be returned or slashed after some on-chain event. For example, you might want to require users to place a bond to participate in a vote. The bond might then be returned at the end of the referendum or slashed if the voter attempted malicious behavior
  • Deposits are fees that might be returned later. For example, you might require users to pay a deposit to execute an operation that uses storage. The user's deposit could be returned if a subsequent operation frees up storage
  • Burn operations are used to pay for a transaction based on its internal logic. For example, a transaction might burn funds from the sender if the transaction creates new storage items to pay for the increased state size
  • Limits enable you to enforce constant or configurable limits on specific operations. For example, the default Staking pallet only allows nominators to nominate 16 validators to limit the complexity of the validator election process

It is important to note that if you query the chain for a transaction fee, it only returns the inclusion fee.
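For example, the Polkadot.js API exposes this query as paymentInfo on a constructed extrinsic. The sketch below (a public endpoint and well-known development addresses are assumed purely for illustration) returns only the estimated inclusion fee as partialFee; any tip is excluded:

const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({ provider: new WsProvider('wss://rpc.polkadot.io') });

  // Well-known development addresses, used here only for illustration.
  const sender = '5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY'; // Alice
  const dest = '5FHneW46xGXgs5mUiveU4sbTyGBzmstUspZC92UhjJM694ty';   // Bob

  // Build a transfer and ask the node for its estimated inclusion fee.
  const tx = api.tx.balances.transferAllowDeath(dest, 1_000_000_000_000n);
  const { partialFee } = await tx.paymentInfo(sender);
  console.log('Estimated inclusion fee:', partialFee.toHuman());

  await api.disconnect();
}

main().catch(console.error);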

+

Default Weight Annotations

+

All dispatchable functions in the Polkadot SDK must specify a weight. This is done using an annotation-based system that lets you combine fixed values for database read/write weight and/or fixed values based on benchmarks. The most basic example would look like this:

+
#[pallet::weight(100_000)]
+fn my_dispatchable() {
+    // ...
+}
+
+

Note that the ExtrinsicBaseWeight is automatically added to the declared weight to account for the costs of simply including an empty extrinsic into a block.

+

Weights and Database Read/Write Operations

+

To make weight annotations independent of the deployed database backend, they are defined as a constant and then used in the annotations when expressing database accesses performed by the dispatchable:

+
#[pallet::weight(T::DbWeight::get().reads_writes(1, 2) + 20_000)]
+fn my_dispatchable() {
+    // ...
+}
+
+

This dispatchable performs one database read and two database writes, in addition to other operations that add the additional 20,000 units of weight. A database access is generally counted every time a value declared inside the #[pallet::storage] block is accessed. However, only unique accesses are counted, because after a value is accessed, it is cached, and accessing it again does not result in a database operation. That is:

+
  • Multiple reads of the same value count as one read
  • Multiple writes of the same value count as one write
  • Multiple reads of the same value, followed by a write to that value, count as one read and one write
  • A write followed by a read of the same value counts as only one write

Dispatch Classes

+

Dispatches are broken into three classes:

+
  • Normal
  • Operational
  • Mandatory

If a dispatch is not defined as Operational or Mandatory in the weight annotation, the dispatch is identified as Normal by default. You can specify that the dispatchable uses another class like this:

+
#[pallet::weight((100_000, DispatchClass::Operational))]
+fn my_dispatchable() {
+    // ...
+}
+
+

This tuple notation also allows you to specify a final argument determining whether the user is charged based on the annotated weight. If you don't specify otherwise, Pays::Yes is assumed:

+
#[pallet::weight((100_000, DispatchClass::Normal, Pays::No))]
+fn my_dispatchable() {
+    // ...
+}
+
+

Normal Dispatches

+

Dispatches in this class represent normal user-triggered transactions. These types of dispatches only consume a portion of a block's total weight limit. For information about the maximum portion of a block that can be consumed for normal dispatches, see AvailableBlockRatio. Normal dispatches are sent to the transaction pool.

+

Operational Dispatches

+

Unlike normal dispatches, which represent the usage of network capabilities, operational dispatches are those that provide network capabilities. Operational dispatches can consume the entire weight limit of a block. They are not bound by the AvailableBlockRatio. Dispatches in this class are given maximum priority and are exempt from paying the length_fee.

+

Mandatory Dispatches

+

Mandatory dispatches are included in a block even if they cause the block to surpass its weight limit. You can only use the mandatory dispatch class for inherent transactions that the block author submits. This dispatch class is intended to represent functions in the block validation process. Because these dispatches are always included in a block regardless of the function weight, the validation process must prevent malicious nodes from abusing the function to craft valid but impossibly heavy blocks. You can typically accomplish this by ensuring that:

+
  • The operation performed is always light
  • The operation can only be included in a block once

To make it more difficult for malicious nodes to abuse mandatory dispatches, they cannot be included in blocks that return errors. This dispatch class serves the assumption that it is better to allow an overweight block to be created than not to allow any block to be created at all.

+

Dynamic Weights

+

In addition to purely fixed weights and constants, the weight calculation can consider the input arguments of a dispatchable. The weight should be trivially computable from the input arguments with some basic arithmetic:

+
use frame_support::{
+    dispatch::{DispatchClass, Pays},
+    weights::Weight,
+};
+
+#[pallet::weight(FunctionOf(
+    // Weight scales linearly with the number of users in the argument list.
+    |args: (&Vec<User>,)| args.0.len().saturating_mul(10_000),
+    DispatchClass::Normal,
+    Pays::Yes,
+))]
+fn handle_users(origin, calls: Vec<User>) {
+    // Do something per user
+}
+
+

Post Dispatch Weight Correction

+

Depending on the execution logic, a dispatchable function might consume less weight than was prescribed pre-dispatch. To correct weight, the function declares a different return type and returns its actual weight:

+
#[pallet::weight(10_000 + 500_000_000)]
+fn expensive_or_cheap(input: u64) -> DispatchResultWithPostInfo {
+    let was_heavy = do_calculation(input);
+
+    if (was_heavy) {
+        // None means "no correction" from the weight annotation.
+        Ok(None.into())
+    } else {
+        // Return the actual weight consumed.
+        Ok(Some(10_000).into())
+    }
+}
+
+

Custom Fees

+

You can also define custom fee systems through custom weight functions or inclusion fee functions.

+

Custom Weights

+

Instead of using the default weight annotations, you can create a custom weight calculation type using the weights module. The custom weight calculation type must implement the following traits:

  • WeighData - to determine the weight of the dispatch
  • ClassifyDispatch - to determine the class of the dispatch
  • PaysFee - to determine whether the sender of the dispatch pays fees

The Polkadot SDK then bundles the output information of the three traits into the DispatchInfo struct and provides it by implementing the GetDispatchInfo for all Call variants and opaque extrinsic types. This is used internally by the System and Executive modules.

+

WeighData, ClassifyDispatch, and PaysFee are generic over T, which gets resolved into the tuple of all dispatch arguments except for the origin. The following example illustrates a struct that calculates the weight as m * len(args), where m is a given multiplier and args is the concatenated tuple of all dispatch arguments. In this example, the dispatch class is Operational if the transaction has more than 100 bytes of length in arguments and will pay fees if the encoded length exceeds 10 bytes.

+
struct LenWeight(u32);
+impl<T> WeighData<T> for LenWeight {
+    fn weigh_data(&self, target: T) -> Weight {
+        let multiplier = self.0;
+        let encoded_len = target.encode().len() as u32;
+        multiplier * encoded_len
+    }
+}
+
+impl<T> ClassifyDispatch<T> for LenWeight {
+    fn classify_dispatch(&self, target: T) -> DispatchClass {
+        let encoded_len = target.encode().len() as u32;
+        if encoded_len > 100 {
+            DispatchClass::Operational
+        } else {
+            DispatchClass::Normal
+        }
+    }
+}
+
+impl<T> PaysFee<T> for LenWeight {
+    fn pays_fee(&self, target: T) -> Pays {
+        let encoded_len = target.encode().len() as u32;
+        if encoded_len > 10 {
+            Pays::Yes
+        } else {
+            Pays::No
+        }
+    }
+}
+
+

A weight calculator function can also be coerced to the final type of the argument instead of defining it as a vague type that can be encoded. The code would roughly look like this:

+
struct CustomWeight;
+impl WeighData<(&u32, &u64)> for CustomWeight {
+    fn weigh_data(&self, target: (&u32, &u64)) -> Weight {
+        ...
+    }
+}
+
+// given a dispatch:
+#[pallet::call]
+impl<T: Config<I>, I: 'static> Pallet<T, I> {
+    #[pallet::weight(CustomWeight)]
+    fn foo(a: u32, b: u64) { ... }
+}
+
+

In this example, the CustomWeight can only be used in conjunction with a dispatch with a particular signature (u32, u64), as opposed to LenWeight, which can be used with anything because there aren't any assumptions about <T>.

+

Custom Inclusion Fee

+

The following example illustrates how to customize your inclusion fee. You must configure the appropriate associated types in the respective module.

+
// Assume this is the balance type
+type Balance = u64;
+
+// Assume we want all the weights to have a `100 + 2 * w` conversion to fees
+struct CustomWeightToFee;
+impl WeightToFee<Weight, Balance> for CustomWeightToFee {
+    fn convert(w: Weight) -> Balance {
+        let a = Balance::from(100);
+        let b = Balance::from(2);
+        let w = Balance::from(w);
+        a + b * w
+    }
+}
+
+parameter_types! {
+    pub const ExtrinsicBaseWeight: Weight = 10_000_000;
+}
+
+impl frame_system::Config for Runtime {
+    type ExtrinsicBaseWeight = ExtrinsicBaseWeight;
+}
+
+parameter_types! {
+    pub const TransactionByteFee: Balance = 10;
+}
+
+impl transaction_payment::Config {
+    type TransactionByteFee = TransactionByteFee;
+    type WeightToFee = CustomWeightToFee;
+    type FeeMultiplierUpdate = TargetedFeeAdjustment<TargetBlockFullness>;
+}
+
+struct TargetedFeeAdjustment<T>(sp_std::marker::PhantomData<T>);
+impl<T: Get<Perquintill>> WeightToFee<Fixed128, Fixed128> for TargetedFeeAdjustment<T> {
+    fn convert(multiplier: Fixed128) -> Fixed128 {
+        // Don't change anything. Put any fee update info here.
+        multiplier
+    }
+}
+
+

Further Resources

+

You now know the weight system, how it affects transaction fee computation, and how to specify weights for your dispatchable calls. The next step is determining the correct weight for your dispatchable operations. You can use Substrate benchmarking functions and frame-benchmarking calls to test your functions with different parameters and empirically determine the proper weight in their worst-case scenarios.


Transactions

+

Introduction

+

Transactions are essential components of blockchain networks, enabling state changes and the execution of key operations. In the Polkadot SDK, transactions, often called extrinsics, come in multiple forms, including signed, unsigned, and inherent transactions.

+

This guide walks you through the different transaction types and how they're formatted, validated, and processed within the Polkadot ecosystem. You'll also learn how to customize transaction formats and construct transactions for FRAME-based runtimes, ensuring a complete understanding of how transactions are built and executed in Polkadot SDK-based chains.

+

What Is a Transaction?

+

In the Polkadot SDK, transactions represent operations that modify the chain's state, bundled into blocks for execution. The term extrinsic is often used to refer to any data that originates outside the runtime and is included in the chain. While other blockchain systems typically refer to these operations as "transactions," the Polkadot SDK adopts the broader term "extrinsic" to capture the wide variety of data types that can be added to a block.

+

There are three primary types of transactions (extrinsics) in the Polkadot SDK:

+
  • Signed transactions - signed by the submitting account, often carrying transaction fees
  • Unsigned transactions - submitted without a signature, often requiring custom validation logic
  • Inherent transactions - typically inserted directly into blocks by block authoring nodes, without gossiping between peers

Each type serves a distinct purpose, and understanding when and how to use each is key to efficiently working with the Polkadot SDK.

+

Signed Transactions

+

Signed transactions require an account's signature and typically involve submitting a request to execute a runtime call. The signature serves as a form of cryptographic proof that the sender has authorized the action, using their private key. These transactions often involve a transaction fee to cover the cost of execution and incentivize block producers.

+

Signed transactions are the most common type of transaction and are integral to user-driven actions, such as token transfers. For instance, when you transfer tokens from one account to another, the sending account must sign the transaction to authorize the operation.

+

For example, the pallet_balances::Call::transfer_allow_death extrinsic in the Balances pallet allows you to transfer tokens. Since your account initiates this transaction, your account key is used to sign it. You'll also be responsible for paying the associated transaction fee, with the option to include an additional tip to incentivize faster inclusion in the block.

+

Unsigned Transactions

+

Unsigned transactions do not require a signature or account-specific data from the sender. Unlike signed transactions, they do not come with any form of economic deterrent, such as fees, which makes them susceptible to spam or replay attacks. Custom validation logic must be implemented to mitigate these risks and ensure these transactions are secure.

+

Unsigned transactions typically involve scenarios where including a fee or signature is unnecessary or counterproductive. However, due to the absence of fees, they require careful validation to protect the network. For example, the pallet_im_online::Call::heartbeat extrinsic allows validators to send a heartbeat signal, indicating they are active. Since only validators can make this call, the logic embedded in the transaction ensures that the sender is a validator, making the need for a signature or fee redundant.

+

Unsigned transactions are more resource-intensive than signed ones because custom validation is required, but they play a crucial role in certain operational scenarios, especially when regular user accounts aren't involved.

+

Inherent Transactions

+

Inherent transactions are a specialized type of unsigned transaction that is used primarily for block authoring. Unlike signed or other unsigned transactions, inherent transactions are added directly by block producers and are not broadcasted to the network or stored in the transaction queue. They don't require signatures or the usual validation steps and are generally used to insert system-critical data directly into blocks.

+

A key example of an inherent transaction is inserting a timestamp into each block. The pallet_timestamp::Call::now extrinsic allows block authors to include the current time in the block they are producing. Since the block producer adds this information, there is no need for transaction validation, like signature verification. The validation in this case is done indirectly by the validators, who check whether the timestamp is within an acceptable range before finalizing the block.

+

Another example is the paras_inherent::Call::enter extrinsic, which enables parachain collator nodes to send validation data to the relay chain. This inherent transaction ensures that the necessary parachain data is included in each block without the overhead of gossiped transactions.

+

Inherent transactions serve a critical role in block authoring by allowing important operational data to be added directly to the chain without needing the validation processes required for standard transactions.

+

Transaction Formats

+

Understanding the structure of signed and unsigned transactions is crucial for developers building on Polkadot SDK-based chains. Whether you're optimizing transaction processing, customizing formats, or interacting with the transaction pool, knowing the format of extrinsics, Polkadot's term for transactions, is essential.

+

Types of Transaction Formats

+

In Polkadot SDK-based chains, extrinsics can fall into three main categories:

+
  • Unchecked extrinsics - typically used for signed transactions that require validation. They contain a signature and additional data, such as a nonce and information for fee calculation. Unchecked extrinsics are named as such because they require validation checks before being accepted into the transaction pool
  • Checked extrinsics - typically used for inherent extrinsics (unsigned transactions); these don't require signature verification. Instead, they carry information such as where the extrinsic originates and any additional data required for the block authoring process
  • Opaque extrinsics - used when the format of an extrinsic is not yet fully committed or finalized. They are still decodable, but their structure can be flexible depending on the context

Signed Transaction Data Structure

+

A signed transaction typically includes the following components:

+
  • Signature - verifies the authenticity of the transaction sender
  • Call - the actual function or method call the transaction is requesting (for example, transferring funds)
  • Nonce - tracks the number of prior transactions sent from the account, helping to prevent replay attacks
  • Tip - an optional incentive to prioritize the transaction in block inclusion
  • Additional data - includes details such as spec version, block hash, and genesis hash to ensure the transaction is valid within the correct runtime and chain context

Here's a simplified breakdown of how signed transactions are typically constructed in a Polkadot SDK runtime:

+
<signing account ID> + <signature> + <additional data>
+
+

Each part of the signed transaction has a purpose, ensuring the transaction's authenticity and context within the blockchain.

+

Signed Extensions

+

Polkadot SDK also provides the concept of signed extensions, which allow developers to extend extrinsics with additional data or validation logic before they are included in a block. The SignedExtension set helps enforce custom rules or protections, such as ensuring the transaction's validity or calculating priority.

+

The transaction queue regularly calls signed extensions to verify a transaction's validity before placing it in the ready queue. This safeguard ensures transactions won't fail in a block. Signed extensions are commonly used to enforce validation logic and protect the transaction pool from spam and replay attacks.

+

In FRAME, a signed extension can hold any of the following types by default:

+
  • AccountId - to encode the sender's identity
  • Call - to encode the pallet call to be dispatched. This data is used to calculate transaction fees
  • AdditionalSigned - to handle any additional data that goes into the signed payload, allowing you to attach custom logic prior to dispatching a transaction
  • Pre - to encode the information that can be passed from before a call is dispatched to after it gets dispatched

Signed extensions can enforce checks like:

+
  • CheckSpecVersion - ensures the transaction is compatible with the runtime's current version
  • CheckWeight - calculates the weight (or computational cost) of the transaction, ensuring the block doesn't exceed the maximum allowed weight

These extensions are critical in the transaction lifecycle, ensuring that only valid and prioritized transactions are processed.
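As a quick way to inspect which signed extensions a live chain declares, the Polkadot.js registry exposes their names; the sketch below assumes the public wss://rpc.polkadot.io endpoint is reachable:

const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({ provider: new WsProvider('wss://rpc.polkadot.io') });

  // Prints names such as CheckSpecVersion, CheckNonce, and CheckWeight.
  console.log(api.registry.signedExtensions);

  await api.disconnect();
}

main().catch(console.error);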

+

Transaction Construction

+

Building transactions in the Polkadot SDK involves constructing a payload that can be verified, signed, and submitted for inclusion in a block. Each runtime in the Polkadot SDK has its own rules for validating and executing transactions, but there are common patterns for constructing a signed transaction.

+

Construct a Signed Transaction

+

A signed transaction in the Polkadot SDK includes various pieces of data to ensure security, prevent replay attacks, and prioritize processing. Here's an overview of how to construct one:

+
  1. Construct the unsigned payload - gather the necessary information for the call, including:
      • Pallet index - identifies the pallet where the runtime function resides
      • Function index - specifies the particular function to call in the pallet
      • Parameters - any additional arguments required by the function call
  2. Create a signing payload - once the unsigned payload is ready, additional data must be included:
      • Transaction nonce - unique identifier to prevent replay attacks
      • Era information - defines how long the transaction is valid before it's dropped from the pool
      • Block hash - ensures the transaction doesn't execute on the wrong chain or fork
  3. Sign the payload - using the sender's private key, sign the payload to ensure that the transaction can only be executed by the account holder
  4. Serialize the signed payload - once signed, the transaction must be serialized into a binary format, ensuring the data is compact and easy to transmit over the network
  5. Submit the serialized transaction - finally, submit the serialized transaction to the network, where it will enter the transaction pool and wait for processing by an authoring node

The following is an example of how a signed transaction might look:

+
node_runtime::UncheckedExtrinsic::new_signed(
+    function.clone(),                                      // some call
+    sp_runtime::AccountId32::from(sender.public()).into(), // some sending account
+    node_runtime::Signature::Sr25519(signature.clone()),   // the account's signature
+    extra.clone(),                                         // the signed extensions
+)
+
+
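For comparison, here is a client-side sketch of the same steps using Polkadot.js; a local development node at ws://127.0.0.1:9944 and the //Alice development account are assumptions for illustration. The signAndSend call fills in the nonce, era, and block hash, then signs, serializes, and submits the transaction in one step:

const { ApiPromise, WsProvider, Keyring } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({ provider: new WsProvider('ws://127.0.0.1:9944') });
  const alice = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');

  // Construct the call, then sign and submit it in a single step.
  const hash = await api.tx.balances
    .transferAllowDeath('5FHneW46xGXgs5mUiveU4sbTyGBzmstUspZC92UhjJM694ty', 12345n)
    .signAndSend(alice);

  console.log('Submitted with hash:', hash.toHex());
  await api.disconnect();
}

main().catch(console.error);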

Transaction Encoding

+

Before a transaction is sent to the network, it is serialized and encoded using a structured encoding process that ensures consistency and prevents tampering:

+
  • [1] - compact encoded length in bytes of the entire transaction
  • [2] - a u8 containing 1 byte to indicate whether the transaction is signed or unsigned (1 bit) and the encoded transaction version ID (7 bits)
  • [3] - if signed, this field contains an account ID, an SR25519 signature, and some extra data
  • [4] - encoded call data, including pallet and function indices and any required arguments

This encoded format ensures consistency and efficiency in processing transactions across the network. By adhering to this format, applications can construct valid transactions and pass them to the network for execution.
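As a small illustration of field [1], the compact length prefix can be produced with @polkadot/util; the 300-byte body below is a stand-in, not a real extrinsic:

const { compactToU8a, u8aConcat } = require('@polkadot/util');

// Stand-in bytes for the version byte, signature, and call data.
const body = new Uint8Array(300);

// Field [1]: the SCALE compact-encoded length of everything that follows.
const lengthPrefix = compactToU8a(body.length);
const extrinsic = u8aConcat(lengthPrefix, body);

console.log('Prefix bytes:', lengthPrefix.length, '| Total bytes:', extrinsic.length);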

+
Additional Information

Learn how compact encoding works using SCALE.

+
+

Customize Transaction Construction

+

Although the basic steps for constructing transactions are consistent across Polkadot SDK-based chains, developers can customize transaction formats and validation rules. For example:

+
  • Custom pallets - you can define new pallets with custom function calls, each with its own parameters and validation logic
  • Signed extensions - developers can implement custom extensions that modify how transactions are prioritized, validated, or included in blocks

By leveraging Polkadot SDK's modular design, developers can create highly specialized transaction logic tailored to their chain's needs.

+

Lifecycle of a Transaction

+

In the Polkadot SDK, transactions are often referred to as extrinsics because the data in transactions originates outside of the runtime. These transactions contain data that initiates changes to the chain state. The most common type of extrinsic is a signed transaction, which is cryptographically verified and typically incurs a fee. This section focuses on how signed transactions are processed, validated, and ultimately included in a block.

+

Define Transaction Properties

+

The Polkadot SDK runtime defines key transaction properties, such as:

+
  • Transaction validity - ensures the transaction meets all runtime requirements
  • Signed or unsigned - identifies whether a transaction needs to be signed by an account
  • State changes - determines how the transaction modifies the state of the chain

Pallets, which compose the runtime's logic, define the specific transactions that your chain supports. When a user submits a transaction, such as a token transfer, it becomes a signed transaction, verified by the user's account signature. If the account has enough funds to cover fees, the transaction is executed, and the chain's state is updated accordingly.

+

Process on a Block Authoring Node

+

In Polkadot SDK-based networks, some nodes are authorized to author blocks. These nodes validate and process transactions. When a transaction is sent to a node that can produce blocks, it undergoes a lifecycle that involves several stages, including validation and execution. Non-authoring nodes gossip the transaction across the network until an authoring node receives it. The following diagram illustrates the lifecycle of a transaction that's submitted to a network and processed by an authoring node.

+

Transaction lifecycle diagram

+

Validate and Queue

+

Once a transaction reaches an authoring node, it undergoes an initial validation process to ensure it meets specific conditions defined in the runtime. This validation includes checks for:

+
  • Correct nonce - ensures the transaction is sequentially valid for the account
  • Sufficient funds - confirms the account can cover any associated transaction fees
  • Signature validity - verifies that the sender's signature matches the transaction data

After these checks, valid transactions are placed in the transaction pool, where they are queued for inclusion in a block. The transaction pool regularly re-validates queued transactions to ensure they remain valid before being processed. To reach consensus, two-thirds of the nodes must agree on the order of the transactions executed and the resulting state change. Transactions are validated and queued on the local node in a transaction pool to prepare for consensus.

+

Transaction Pool

+

The transaction pool is responsible for managing valid transactions. It ensures that only transactions that pass initial validity checks are queued. Transactions that fail validation, expire, or become invalid for other reasons are removed from the pool.

+

The transaction pool organizes transactions into two queues:

+
  • Ready queue - transactions that are valid and ready to be included in a block
  • Future queue - transactions that are not yet valid but could be in the future, such as transactions with a nonce too high for the current state

Details on how the transaction pool validates transactions, including fee and signature handling, can be found in the validate_transaction method.

+

Invalid Transactions

+

If a transaction is invalid, for example, due to an invalid signature or insufficient funds, it is rejected and won't be added to the block. Invalid transactions might be rejected for reasons such as:

+
  • The transaction has already been included in a block
  • The transaction's signature does not match the sender
  • The transaction is too large to fit in the current block

Transaction Ordering and Priority

+

When a node is selected as the next block author, it prioritizes transactions based on weight, length, and tip amount. The goal is to fill the block with high-priority transactions without exceeding its maximum size or computational limits. Transactions are ordered as follows:

+
  • Inherents first - inherent transactions, such as block timestamp updates, are always placed first
  • Nonce-based ordering - transactions from the same account are ordered by their nonce
  • Fee-based ordering - among transactions with the same nonce or priority level, those with higher fees are prioritized

Transaction Execution

+

Once a block author selects transactions from the pool, the transactions are executed in priority order. As each transaction is processed, the state changes are written directly to the chain's storage. It's important to note that these changes are not cached, meaning a failed transaction won't revert earlier state changes, which could leave the block in an inconsistent state.

+

Events are also written to storage. Runtime logic should not emit an event before performing the associated actions. If the associated transaction fails after the event was emitted, the event will not revert.

+
Additional Information

Watch Seminar: Lifecycle of a transaction for a video overview of the lifecycle of transactions and the types of transactions that exist.

+
+

Chain Data

+

Introduction

+

Understanding and leveraging on-chain data is a fundamental aspect of blockchain development. Whether you're building frontend applications or backend systems, accessing and decoding runtime metadata is vital to interacting with the blockchain. This guide introduces you to the tools and processes for generating and retrieving metadata, explains its role in application development, and outlines the additional APIs available for interacting with a Polkadot node. By mastering these components, you can ensure seamless communication between your applications and the blockchain.

+

Application Development

+

You might not be directly involved in building frontend applications as a blockchain developer. However, most applications that run on a blockchain require some form of frontend or user-facing client to enable users or other programs to access and modify the data that the blockchain stores. For example, you might develop a browser-based, mobile, or desktop application that allows users to submit transactions, post articles, view their assets, or track previous activity. The backend for that application is configured in the runtime logic for your blockchain, but the frontend client makes the runtime features accessible to your users.

+

For your custom chain to be useful to others, you'll need to provide a client application that allows users to view, interact with, or update information that the blockchain keeps track of. In this article, you'll learn how to expose information about your runtime so that client applications can use it, see examples of the information exposed, and explore tools and libraries that use it.

+

Understand Metadata

+

Polkadot SDK-based blockchain networks are designed to expose their runtime information, allowing developers to learn granular details regarding pallets, RPC calls, and runtime APIs. The metadata also exposes their related documentation. The chain's metadata is SCALE-encoded, allowing for the development of browser-based, mobile, or desktop applications to support the chain's runtime upgrades seamlessly. It is also possible to develop applications compatible with multiple Polkadot SDK-based chains simultaneously.

+

Expose Runtime Information as Metadata

+

To interact with a node or the state of the blockchain, you need to know how to connect to the chain and access the exposed runtime features. This interaction involves a Remote Procedure Call (RPC) through a node endpoint address, commonly through a secure web socket connection.

+

An application developer typically needs to know the contents of the runtime logic, including the following details:

+
  • Version of the runtime the application is connecting to
  • Supported APIs
  • Implemented pallets
  • Defined functions and corresponding type signatures
  • Defined custom types
  • Exposed parameters users can set

As the Polkadot SDK is modular and provides a composable framework for building blockchains, there are limitless opportunities to customize the schema of properties. Each runtime can be configured with its properties, including function calls and types, which can be changed over time with runtime upgrades.

+

The Polkadot SDK enables you to generate the runtime metadata schema to capture information unique to a runtime. The metadata for a runtime describes the pallets in use and types defined for a specific runtime version. The metadata includes information about each pallet's storage items, functions, events, errors, and constants. The metadata also provides type definitions for any custom types included in the runtime.

+

Metadata provides a complete inventory of a chain's runtime. It is key to enabling client applications to interact with the node, parse responses, and correctly format message payloads sent back to that chain.

+

Generate Metadata

+

To efficiently use the blockchain's networking resources and minimize the data transmitted over the network, the metadata schema is encoded using the Parity SCALE Codec. This encoding is done automatically through the scale-info crate.

+

At a high level, generating the metadata involves the following steps:

+
  1. The pallets in the runtime logic expose callable functions, types, parameters, and documentation that need to be encoded in the metadata
  2. The scale-info crate collects type information for the pallets in the runtime and builds a registry of the pallets that exist in a particular runtime, along with the relevant types for each pallet in the registry. The type information is detailed enough to enable encoding and decoding for every type
  3. The frame-metadata crate describes the structure of the runtime based on the registry provided by the scale-info crate
  4. Nodes provide the RPC method state_getMetadata to return a complete description of all the types in the current runtime as a hex-encoded vector of SCALE-encoded bytes

Retrieve Runtime Metadata

+

The type information provided by the metadata enables applications to communicate with nodes using different runtime versions and across chains that expose different calls, events, types, and storage items. The metadata also allows libraries to generate a substantial portion of the code needed to communicate with a given node, enabling libraries like subxt to generate frontend interfaces that are specific to a target chain.

+

Use Polkadot.js

+

Visit the Polkadot.js Portal and select the Developer dropdown in the top banner. Select RPC Calls to make the call to request metadata. Follow these steps to make the RPC call:

+
  1. Select state as the endpoint to call
  2. Select getMetadata(at) as the method to call
  3. Click Submit RPC call to submit the call and return the metadata in JSON format

Use Curl

+

You can fetch the metadata for the network by calling the node's RPC endpoint. This request returns the metadata in bytes rather than human-readable JSON:

+
curl -H "Content-Type: application/json" \
+-d '{"id":1, "jsonrpc":"2.0", "method": "state_getMetadata"}' \
+https://rpc.polkadot.io
+
+

Use Subxt

+

subxt can also be used to fetch the metadata of a chain in a human-readable JSON format:

+
subxt metadata --url wss://rpc.polkadot.io --format json > spec.json
+
+

Another option is to use the subxt explorer web UI.

+

Client Applications and Metadata

+

The metadata exposes the expected way to decode each type, meaning applications can send, retrieve, and process application information without manual encoding and decoding. Client applications must use the SCALE codec library to encode and decode RPC payloads to use the metadata. Client applications use the metadata to interact with the node, parse responses, and format message payloads sent to the node.

+

Metadata Format

+

Although the SCALE-encoded bytes can be decoded using the frame-metadata and parity-scale-codec libraries, there are other tools, such as subxt and the Polkadot-JS API, that can convert the raw data to human-readable JSON format.

+

The types and type definitions included in the metadata returned by the state_getMetadata RPC call depend on the runtime's metadata version.

+

In general, the metadata includes the following information:

+
  • A constant identifying the file as containing metadata
  • The version of the metadata format used in the runtime
  • Type definitions for all types used in the runtime and generated by the scale-info crate
  • Pallet information for the pallets included in the runtime, in the order that they are defined in the construct_runtime macro
+

Metadata formats may vary

+

Depending on the frontend library used (such as the Polkadot API), the metadata may be formatted differently than the raw format shown here.

+
+

The following example illustrates a condensed and annotated section of metadata decoded and converted to JSON:

+
[
+    1635018093,
+    {
+        "V14": {
+            "types": {
+                "types": [{}]
+            },
+            "pallets": [{}],
+            "extrinsic": {
+                "ty": 126,
+                "version": 4,
+                "signed_extensions": [{}]
+            },
+            "ty": 141
+        }
+    }
+]
+
+

The constant 1635018093 is a magic number that identifies the file as a metadata file. The rest of the metadata is divided into the types, pallets, and extrinsic sections:
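A quick check in plain Node.js confirms that this magic number is simply the ASCII string "meta" read as a little-endian 32-bit integer:

const buf = Buffer.alloc(4);
buf.writeUInt32LE(1635018093);
console.log(buf.toString('ascii')); // prints "meta"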

+
  • The types section contains an index of the types and information about each type's type signature
  • The pallets section contains information about each pallet in the runtime
  • The extrinsic section describes the type identifier and transaction format version that the runtime uses

Different extrinsic versions can have varying formats, especially when considering signed transactions.

+

Pallets

+

The following is a condensed and annotated example of metadata for a single element in the pallets array (the sudo pallet):

+
{
+    "name": "Sudo",
+    "storage": {
+        "prefix": "Sudo",
+        "entries": [
+            {
+                "name": "Key",
+                "modifier": "Optional",
+                "ty": {
+                    "Plain": 0
+                },
+                "default": [0],
+                "docs": ["The `AccountId` of the sudo key."]
+            }
+        ]
+    },
+    "calls": {
+        "ty": 117
+    },
+    "event": {
+        "ty": 42
+    },
+    "constants": [],
+    "error": {
+        "ty": 124
+    },
+    "index": 8
+}
+
+

The metadata for each element contains the name of the pallet it represents and information about its storage, calls, events, and errors. You can look up details about the definition of the calls, events, and errors by viewing the type index identifier. The type index identifier is the u32 integer used to access the type information for that item. For example, the type index identifier for calls in the Sudo pallet is 117. If you view the information for that type identifier in the types section of the metadata, it provides information about the available calls, including the documentation for each call.

+

For example, the following is a condensed excerpt of the calls for the Sudo pallet:

+
{
+    "id": 117,
+    "type": {
+        "path": ["pallet_sudo", "pallet", "Call"],
+        "params": [
+            {
+                "name": "T",
+                "type": null
+            }
+        ],
+        "def": {
+            "variant": {
+                "variants": [
+                    {
+                        "name": "sudo",
+                        "fields": [
+                            {
+                                "name": "call",
+                                "type": 114,
+                                "typeName": "Box<<T as Config>::RuntimeCall>"
+                            }
+                        ],
+                        "index": 0,
+                        "docs": [
+                            "Authenticates sudo key, dispatches a function call with `Root` origin"
+                        ]
+                    },
+                    {
+                        "name": "sudo_unchecked_weight",
+                        "fields": [
+                            {
+                                "name": "call",
+                                "type": 114,
+                                "typeName": "Box<<T as Config>::RuntimeCall>"
+                            },
+                            {
+                                "name": "weight",
+                                "type": 8,
+                                "typeName": "Weight"
+                            }
+                        ],
+                        "index": 1,
+                        "docs": [
+                            "Authenticates sudo key, dispatches a function call with `Root` origin"
+                        ]
+                    },
+                    {
+                        "name": "set_key",
+                        "fields": [
+                            {
+                                "name": "new",
+                                "type": 103,
+                                "typeName": "AccountIdLookupOf<T>"
+                            }
+                        ],
+                        "index": 2,
+                        "docs": [
+                            "Authenticates current sudo key, sets the given AccountId (`new`) as the new sudo"
+                        ]
+                    },
+                    {
+                        "name": "sudo_as",
+                        "fields": [
+                            {
+                                "name": "who",
+                                "type": 103,
+                                "typeName": "AccountIdLookupOf<T>"
+                            },
+                            {
+                                "name": "call",
+                                "type": 114,
+                                "typeName": "Box<<T as Config>::RuntimeCall>"
+                            }
+                        ],
+                        "index": 3,
+                        "docs": [
+                            "Authenticates sudo key, dispatches a function call with `Signed` origin from a given account"
+                        ]
+                    }
+                ]
+            }
+        }
+    }
+}
+
+

For each pallet, you can access type information and metadata for the following:

+
  • Storage metadata - provides the information required to enable applications to get information for specific storage items (a storage-key sketch follows this list)
  • Call metadata - includes information about the runtime calls defined by the #[pallet] macro, including call names, arguments, and documentation
  • Event metadata - provides the metadata generated by the #[pallet::event] macro, including the name, arguments, and documentation for each pallet event
  • Constants metadata - provides metadata generated by the #[pallet::constant] macro, including the name, type, and hex-encoded value of the constant
  • Error metadata - provides metadata generated by the #[pallet::error] macro, including the name and documentation for each pallet error
+
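
As a concrete illustration of how storage metadata is used, the prefix and entry name shown above for the Sudo pallet's Key item translate into a storage key. A minimal sketch, assuming the sp-core crate and the standard hashing scheme for plain (non-map) storage values:

use sp_core::hashing::twox_128;

fn main() {
    // Plain storage values live under twox128(pallet prefix) ++ twox128(entry name).
    let mut key = twox_128(b"Sudo").to_vec();
    key.extend_from_slice(&twox_128(b"Key"));
    // `key` is the storage key an application would pass to the
    // state_getStorage RPC call to read the sudo key account.
    println!("{:02x?}", key);
}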
+

Note

+

Type identifiers change from time to time, so you should avoid relying on specific type identifiers in your applications.

+
+

Extrinsic

+

The runtime generates extrinsic metadata, which provides useful information about the transaction format. When decoded, this metadata contains the transaction version and the list of signed extensions.

+

For example:

+
{
+    "extrinsic": {
+        "ty": 126,
+        "version": 4,
+        "signed_extensions": [
+            {
+                "identifier": "CheckNonZeroSender",
+                "ty": 132,
+                "additional_signed": 41
+            },
+            {
+                "identifier": "CheckSpecVersion",
+                "ty": 133,
+                "additional_signed": 4
+            },
+            {
+                "identifier": "CheckTxVersion",
+                "ty": 134,
+                "additional_signed": 4
+            },
+            {
+                "identifier": "CheckGenesis",
+                "ty": 135,
+                "additional_signed": 11
+            },
+            {
+                "identifier": "CheckMortality",
+                "ty": 136,
+                "additional_signed": 11
+            },
+            {
+                "identifier": "CheckNonce",
+                "ty": 138,
+                "additional_signed": 41
+            },
+            {
+                "identifier": "CheckWeight",
+                "ty": 139,
+                "additional_signed": 41
+            },
+            {
+                "identifier": "ChargeTransactionPayment",
+                "ty": 140,
+                "additional_signed": 41
+            }
+        ]
+    },
+    "ty": 141
+}
+
+

The type system is composite, meaning each type identifier contains a reference to a specific type or to another type identifier that provides information about the associated primitive types.

+

For example, you can encode the BitVec<Order, Store> type, but to decode it properly, you must know the types used for the Order and Store types. To find type information for Order and Store, you can use the path in the decoded JSON to locate their type identifiers.

+

Included RPC APIs

+

A standard node comes with the following APIs for interacting with it:

+
  • AuthorApiServer - make calls into a full node, including authoring extrinsics and verifying session keys
  • ChainApiServer - retrieve block header and finality information
  • OffchainApiServer - make RPC calls for off-chain workers
  • StateApiServer - query information about on-chain state such as runtime version, storage items, and proofs
  • SystemApiServer - retrieve information about network state, such as connected peers and node roles
+

Additional Resources

+

The following tools can help you locate and decode metadata:


Cryptography

+

Introduction

+

Cryptography forms the backbone of blockchain technology, providing the mathematical verifiability crucial for consensus systems, data integrity, and user security. While a deep understanding of the underlying mathematical processes isn't necessary for most blockchain developers, grasping the fundamental applications of cryptography is essential. This page provides a comprehensive overview of the cryptographic implementations used across Polkadot SDK-based chains and the broader blockchain ecosystem.

+

Hash Functions

+

Hash functions are fundamental to blockchain technology, creating a unique digital fingerprint for any piece of data, including simple text, images, or any other form of file. They map input data of any size to a fixed-size output (typically 32 bytes) using complex mathematical operations. Hashing is used to verify data integrity, create digital signatures, and provide a secure way to store passwords. Because the input space is larger than the fixed-size output space, distinct inputs can in principle map to the same output (a consequence of the "pigeonhole principle"); in practice, hashing is implemented to efficiently and verifiably identify data from large sets.

+

Key Properties of Hash Functions

+
  • Deterministic - the same input always produces the same output
  • Quick computation - it's easy to calculate the hash value for any given input
  • Pre-image resistance - it's infeasible to recover the input data from its hash
  • Avalanche effect - small changes in the input yield large changes in the output
  • Collision resistance - it's extremely improbable to find two different inputs that produce the same hash
+

Blake2

+

The Polkadot SDK utilizes Blake2, a state-of-the-art hashing method that offers:

+
  • Equal or greater security compared to SHA-2
  • Significantly faster performance than other algorithms
+

These properties make Blake2 ideal for blockchain systems, reducing sync times for new nodes and lowering the resources required for validation.

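As a quick illustration of the determinism and avalanche properties described above, the following sketch uses the blake2_256 helper from the sp-core crate (an assumed dependency):

use sp_core::hashing::blake2_256;

fn main() {
    // Deterministic: the same input always produces the same 32-byte digest.
    assert_eq!(blake2_256(b"hello"), blake2_256(b"hello"));

    // Avalanche effect: a one-character change yields a completely
    // different digest.
    assert_ne!(blake2_256(b"hello"), blake2_256(b"hellp"));
}
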
+
+

Note

+

For detailed technical specifications on Blake2, refer to the official Blake2 paper.

+
+

Types of Cryptography

+

There are two different ways that cryptographic algorithms are implemented: symmetric cryptography and asymmetric cryptography.

+

Symmetric Cryptography

+

Symmetric encryption is a branch of cryptography that isn't based on one-way functions, unlike asymmetric cryptography. It uses the same cryptographic key to encrypt plain text and decrypt the resulting ciphertext.

+

Symmetric cryptography is a form of encryption that has been used throughout history, in systems such as the Enigma cipher and the Caesar cipher. It is still widely used today and can be found in Web2 and Web3 applications alike. There is only a single key, and a recipient must also have access to it to read the encrypted information.

+

Advantages

+
  • Fast and efficient for large amounts of data
  • Requires less computational power
+

Disadvantages

+
  • Key distribution can be challenging
  • Scalability issues in systems with many users
+

Asymmetric Cryptography

+

Asymmetric encryption is a type of cryptography that uses two different keys, known as a keypair: a public key, used to encrypt plain text, and a private counterpart, used to decrypt the ciphertext.

+

The public key encrypts a fixed-length message that can only be decrypted with the recipient's private key and, sometimes, a set password. The public key can be used to cryptographically verify that the corresponding private key was used to create a piece of data without compromising the private key, such as with digital signatures. This has obvious implications for identity, ownership, and properties and is used in many different protocols across Web2 and Web3.

+

Advantages

+
  • Solves the key distribution problem
  • Enables digital signatures and secure key exchange
+

Disadvantages

+
  • Slower than symmetric encryption
  • Requires more computational resources
+

Trade-offs and Compromises

+

Symmetric cryptography is faster and requires fewer bits in the key to achieve the same level of security that asymmetric cryptography provides. However, it requires a shared secret before communication can occur, which poses integrity issues and creates a potential point of compromise. On the other hand, asymmetric cryptography doesn't require the secret to be shared ahead of time, allowing for far better end-user security.

+

Hybrid encryption combines the two approaches to overcome asymmetric cryptography's engineering drawbacks (it is slower and requires more bits in the key to achieve the same level of security): asymmetric encryption is used to exchange a key, and the comparatively lightweight symmetric cipher then does the "heavy lifting" with the message.

+

Digital Signatures

+

Digital signatures are a way of verifying the authenticity of a document or message using asymmetric keypairs. They are used to ensure that a sender or signer's document or message hasn't been tampered with in transit, and for recipients to verify that the data is accurate and from the expected sender.

+

Creating a digital signature requires only a basic understanding of the underlying mathematics and cryptography. As a conceptual example, consider signing a check: it is expected that the check cannot be cashed multiple times. This isn't a feature of the signature system but rather of the check serialization system; the bank checks that the serial number on the check hasn't already been used. Digital signatures essentially combine these two concepts, allowing the signature itself to provide the serialization via a unique cryptographic fingerprint that cannot be reproduced.

+

Unlike pen-and-paper signatures, knowledge of a digital signature cannot be used to create other signatures. Digital signatures are often used in bureaucratic processes, as they are more secure than simply scanning in a signature and pasting it onto a document.

+

Polkadot SDK provides multiple different cryptographic schemes and is generic so that it can support anything that implements the Pair trait.

+

Example of Creating a Digital Signature

+

The process of creating and verifying a digital signature involves several steps:

+
  1. The sender creates a hash of the message
  2. The hash is encrypted using the sender's private key, creating the signature
  3. The message and signature are sent to the recipient
  4. The recipient decrypts the signature using the sender's public key
  5. The recipient hashes the received message and compares it to the decrypted hash
+

If the hashes match, the signature is valid, confirming the message's integrity and the sender's identity.

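In practice, the Pair trait mentioned earlier hides these steps behind a simple sign/verify interface, and schemes such as sr25519 hash the message internally rather than requiring you to hash and "encrypt" manually. A minimal sketch, assuming the sp-core crate:

use sp_core::{sr25519, Pair};

fn main() {
    // Illustrative only: generate a throwaway keypair and sign a message.
    let (pair, _seed) = sr25519::Pair::generate();
    let message = b"some signed payload";
    let signature = pair.sign(message);

    // Anyone holding the public key can verify the signature...
    assert!(sr25519::Pair::verify(&signature, message, &pair.public()));
    // ...and any tampering with the message invalidates it.
    assert!(!sr25519::Pair::verify(&signature, b"tampered", &pair.public()));
}
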
+

Elliptic Curve

+

Blockchain technology requires the ability to have multiple keys creating a signature for block proposal and validation. To this end, Elliptic Curve Digital Signature Algorithm (ECDSA) and Schnorr signatures are two of the most commonly used methods. While ECDSA is a far simpler implementation, Schnorr signatures are more efficient when it comes to multi-signatures.

+

Schnorr signatures bring some noticeable features over the ECDSA/EdDSA schemes:

+
  • It is better for hierarchical deterministic key derivations
  • It allows for native multi-signature through signature aggregation
  • It is generally more resistant to misuse
+

One trade-off when using Schnorr signatures over ECDSA: both signature types require 64 bytes, but only ECDSA signatures allow the signer's public key to be recovered from the signature itself.

+

Various Implementations

+
  • ECDSA - Polkadot SDK provides an ECDSA signature scheme using the secp256k1 curve. This is the same cryptographic algorithm used to secure Bitcoin and Ethereum
  • Ed25519 - an EdDSA signature scheme using Curve25519. It is carefully engineered at several levels of design and implementation to achieve very high speeds without compromising security
  • SR25519 - based on the same underlying curve as Ed25519. However, it uses Schnorr signatures instead of the EdDSA scheme
+

Data Encoding

+

Introduction

+

The Polkadot SDK uses a lightweight and efficient encoding/decoding mechanism to optimize data transmission across the network. This mechanism, known as the SCALE codec, is used for serializing and deserializing data.

+

The SCALE codec enables communication between the runtime and the outer node. This mechanism is designed for high-performance, copy-free data encoding and decoding in resource-constrained environments like the Polkadot SDK Wasm runtime.

+

It is not self-describing, meaning the decoding context must fully know the encoded data types.

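The practical consequence is that the same bytes decode to different values depending on the type the decoder expects. A small sketch, assuming the parity-scale-codec crate:

use parity_scale_codec::{Compact, Decode};

fn main() {
    let bytes: &[u8] = &[0x04];
    // Interpreted as a fixed-width u8, 0x04 is the number 4...
    assert_eq!(u8::decode(&mut &bytes[..]).unwrap(), 4u8);
    // ...but interpreted as a compact integer, the same byte means 1.
    assert_eq!(Compact::<u32>::decode(&mut &bytes[..]).unwrap().0, 1u32);
}
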
+

Parity's libraries utilize the parity-scale-codec crate (a Rust implementation of the SCALE codec) to handle encoding and decoding for interactions between RPCs and the runtime.

+

The codec mechanism is ideal for Polkadot SDK-based chains because:

+
  • It is lightweight compared to generic serialization frameworks like serde, which add unnecessary bulk to binaries
  • It doesn't rely on Rust's libstd, making it compatible with no_std environments like the Wasm runtime
  • It integrates seamlessly with Rust, allowing easy derivation of encoding and decoding logic for new types using #[derive(Encode, Decode)]
+

Defining a custom encoding scheme in the Polkadot SDK-based chains, rather than using an existing Rust codec library, is crucial for enabling cross-platform and multi-language support.

+

SCALE Codec

+

The codec is implemented using the following traits:


Encode

+

The Encode trait handles data encoding into SCALE format and includes the following key functions:

+
  • size_hint(&self) -> usize - estimates the number of bytes required for encoding to prevent multiple memory allocations. This should be inexpensive and avoid complex operations. Optional if the size isn't known
  • encode_to<T: Output>(&self, dest: &mut T) - encodes the data, appending it to a destination buffer
  • encode(&self) -> Vec<u8> - encodes the data and returns it as a byte vector
  • using_encoded<R, F: FnOnce(&[u8]) -> R>(&self, f: F) -> R - encodes the data and passes it to a closure, returning the result
  • encoded_size(&self) -> usize - calculates the encoded size. Should be used when the encoded data isn't required
+
+

Note

+

For best performance, value types should override using_encoded, and allocating types should override encode_to. It's recommended to implement size_hint for all types where possible.

+
+

Decode

+

The Decode trait handles decoding SCALE-encoded data back into the appropriate types:

+
  • fn decode<I: Input>(value: &mut I) -> Result<Self, Error> - decodes data from the SCALE format, returning an error if decoding fails
+
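
A short usage sketch, assuming the parity-scale-codec crate (the input bytes follow the fixed-width encoding shown in the table below):

use parity_scale_codec::Decode;

fn main() {
    // 0x2a00 is the fixed-width little-endian encoding of 42u16.
    let mut input: &[u8] = &[0x2a, 0x00];
    let value = u16::decode(&mut input).expect("valid SCALE encoding");
    assert_eq!(value, 42);
}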

CompactAs

+

The CompactAs trait wraps custom types for compact encoding:

+
  • encode_as(&self) -> &Self::As - encodes the type as a compact type
  • decode_from(_: Self::As) -> Result<Self, Error> - decodes from a compact encoded type
+

HasCompact

+

The HasCompact trait indicates a type supports compact encoding.

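A common pattern in runtime code marks a field for compact encoding with the #[codec(compact)] attribute, which relies on HasCompact. A minimal sketch, assuming the parity-scale-codec crate (the struct and field names are illustrative):

use parity_scale_codec::{Compact, Encode, HasCompact};

// Hypothetical struct: `amount` is stored compactly on the wire.
#[derive(Encode)]
struct Transfer<Balance: HasCompact> {
    #[codec(compact)]
    amount: Balance,
}

fn main() {
    // Compact encoding of 42 is the single byte 0xa8.
    assert_eq!(Compact(42u32).encode(), vec![0xa8]);
    // The attributed field uses the same compact form.
    assert_eq!(Transfer { amount: 42u32 }.encode(), vec![0xa8]);
}
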
+

EncodeLike

+

The EncodeLike trait is used to ensure multiple types that encode similarly are accepted by the same function. When using derive, it is automatically implemented.

+

Data Types

+

The table below outlines how the Rust implementation of the Parity SCALE codec encodes different data types.

| Type | Description | Example SCALE decoded value | SCALE encoded value |
|------|-------------|-----------------------------|---------------------|
| Boolean | Boolean values are encoded using the least significant bit of a single byte. | false / true | 0x00 / 0x01 |
| Compact/general integers | A "compact" or general integer encoding is sufficient for encoding large integers (up to 2^536) and is more efficient at encoding most values than the fixed-width version. | unsigned integer 0 / 1 / 42 / 69 / 65535 / BigInt(100000000000000) | 0x00 / 0x04 / 0xa8 / 0x1501 / 0xfeff0300 / 0x0b00407a10f35a |
| Enumerations (tagged-unions) | A fixed number of variants. | | |
| Fixed-width integers | Basic integers are encoded using a fixed-width little-endian (LE) format. | signed 8-bit integer 69 / unsigned 16-bit integer 42 / unsigned 32-bit integer 16777215 | 0x45 / 0x2a00 / 0xffffff00 |
| Options | One or zero values of a particular type. | Some / None | 0x01 followed by the encoded value / 0x00 |
| Results | Results are commonly used enumerations which indicate whether certain operations were successful or unsuccessful. | Ok(42) / Err(false) | 0x002a / 0x0100 |
| Strings | Strings are vectors of bytes (Vec<u8>) containing a valid UTF-8 sequence. | | |
| Structs | For structures, the values are named, but that is irrelevant for the encoding (names are ignored - only order matters). | SortedVecAsc::from([3, 5, 2, 8]) | [3, 2, 5, 8] |
| Tuples | A fixed-size series of values, each with a possibly different but predetermined and fixed type. This is simply the concatenation of each encoded value. | Tuple of compact unsigned integer and boolean: (3, false) | 0x0c00 |
| Vectors (lists, series, sets) | A collection of same-typed values is encoded, prefixed with a compact encoding of the number of items, followed by each item's encoding concatenated in turn. | Vector of unsigned 16-bit integers: [4, 8, 15, 16, 23, 42] | 0x18040008000f00100017002a00 |
+
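
Several of the values in the table can be reproduced directly with the parity-scale-codec crate (an assumed dependency):

use parity_scale_codec::{Compact, Encode};

fn main() {
    // Fixed-width: unsigned 16-bit integer 42 -> 0x2a00 (little-endian).
    assert_eq!(42u16.encode(), vec![0x2a, 0x00]);
    // Compact: unsigned integer 42 -> 0xa8.
    assert_eq!(Compact(42u32).encode(), vec![0xa8]);
    // Tuple of compact unsigned integer and boolean: (3, false) -> 0x0c00.
    assert_eq!((Compact(3u32), false).encode(), vec![0x0c, 0x00]);
    // Vector of u16 values prefixed with a compact length (6 -> 0x18).
    assert_eq!(
        vec![4u16, 8, 15, 16, 23, 42].encode(),
        vec![0x18, 0x04, 0x00, 0x08, 0x00, 0x0f, 0x00, 0x10, 0x00, 0x17, 0x00, 0x2a, 0x00]
    );
}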

Encode and Decode Rust Trait Implementations

+

Here's how the Encode and Decode traits are implemented:

+
use parity_scale_codec::{Encode, Decode};
+
+#[derive(Debug, PartialEq, Encode, Decode)]
+enum EnumType {
+    #[codec(index = 15)]
+    A,
+    B(u32, u64),
+    C {
+        a: u32,
+        b: u64,
+    },
+}
+
+let a = EnumType::A;
+let b = EnumType::B(1, 2);
+let c = EnumType::C { a: 1, b: 2 };
+
+a.using_encoded(|ref slice| {
+    assert_eq!(slice, &b"\x0f");
+});
+
+b.using_encoded(|ref slice| {
+    assert_eq!(slice, &b"\x01\x01\0\0\0\x02\0\0\0\0\0\0\0");
+});
+
+c.using_encoded(|ref slice| {
+    assert_eq!(slice, &b"\x02\x01\0\0\0\x02\0\0\0\0\0\0\0");
+});
+
+let mut da: &[u8] = b"\x0f";
+assert_eq!(EnumType::decode(&mut da).ok(), Some(a));
+
+let mut db: &[u8] = b"\x01\x01\0\0\0\x02\0\0\0\0\0\0\0";
+assert_eq!(EnumType::decode(&mut db).ok(), Some(b));
+
+let mut dc: &[u8] = b"\x02\x01\0\0\0\x02\0\0\0\0\0\0\0";
+assert_eq!(EnumType::decode(&mut dc).ok(), Some(c));
+
+let mut dz: &[u8] = &[0];
+assert_eq!(EnumType::decode(&mut dz).ok(), None);
+
+

SCALE Codec Libraries

+

Several SCALE codec implementations are available in various languages. Here's a list of them:


Networks

+

Introduction

+

The Polkadot ecosystem is built on a robust set of networks designed to enable secure and scalable development. Whether you are testing new features or deploying to live production, Polkadot offers several layers of networks tailored for each stage of the development process. From local environments to experimental networks like Kusama and community-run TestNets such as Paseo, developers can thoroughly test, iterate, and validate their applications. This guide will introduce you to Polkadot's various networks and explain how they fit into the development workflow.

+

Network Overview

+

Polkadot's development process is structured to ensure new features and upgrades are rigorously tested before being deployed on live production networks. The progression follows a well-defined path, starting from local environments and advancing through TestNets, ultimately reaching the Polkadot MainNet. The diagram below outlines the typical progression of the Polkadot development cycle:

+


+flowchart LR
+    id1[Local] --> id2[Westend] --> id4[Kusama] --> id5[Polkadot]
+    id1[Local] --> id3[Paseo] --> id5[Polkadot]

This flow ensures developers can thoroughly test and iterate without risking real tokens or affecting production networks. Testing tools like Chopsticks and various TestNets make it easier to experiment safely before releasing to production.

+

A typical journey through the Polkadot core protocol development process might look like this:

+
  1. Local development node - development starts in a local environment, where developers can create, test, and iterate on upgrades or new features using a local development node. This stage allows rapid experimentation in an isolated setup without any external dependencies
  2. Westend - after testing locally, upgrades are deployed to Westend, Polkadot's primary TestNet. Westend simulates real-world conditions without using real tokens, making it the ideal place for rigorous feature testing before moving on to production networks
  3. Kusama - once features have passed extensive testing on Westend, they move to Kusama, Polkadot's experimental and fast-moving "canary" network. Kusama operates as a high-fidelity testing ground with actual economic incentives, giving developers insights into how their features will perform in a real-world environment
  4. Polkadot - after passing tests on Westend and Kusama, features are considered ready for deployment to Polkadot, the live production network
+

In addition, parachain developers can leverage local testing tools like Zombienet and deploy upgrades on parachain TestNets.

+
  • Paseo - for parachain and dApp developers, Paseo serves as a community-run TestNet that mirrors Polkadot's runtime. Like Westend for core protocol development, Paseo provides a testing ground for parachain development without affecting live networks
+
+

Note

+

The Rococo TestNet deprecation date was October 14, 2024. Teams should use Westend for Polkadot protocol and feature testing and Paseo for chain development-related testing.

+
+

Polkadot Development Networks

+

Development and testing are crucial to building robust dApps and parachains and performing network upgrades within the Polkadot ecosystem. To achieve this, developers can leverage various networks and tools that provide a risk-free environment for experimentation and validation before deploying features to live networks. These networks help avoid the costs and risks associated with real tokens, enabling testing for functionalities like governance, cross-chain messaging, and runtime upgrades.

+

Kusama Network

+

Kusama is the experimental version of Polkadot, designed for developers who want to move quickly and test their applications in a real-world environment with economic incentives. Kusama serves as a production-grade testing ground where developers can deploy features and upgrades with the pressure of game theory and economics in mind. It mirrors Polkadot but operates as a more flexible space for innovation.

+

The native token for Kusama is KSM. For more information about KSM, visit the Native Assets page.

+

Test Networks

+

The following test networks provide controlled environments for testing upgrades and new features. TestNet tokens are available from the Polkadot faucet.

+

Westend

+

Westend is Polkadot's primary permanent TestNet. Unlike temporary test networks, Westend is not reset to the genesis block, making it an ongoing environment for testing Polkadot core features. Managed by Parity Technologies, Westend ensures that developers can test features in a real-world simulation without using actual tokens.

+

The native token for Westend is WND. More details about WND can be found on the Native Assets page.

+

Paseo

+

Paseo is a community-managed TestNet designed for parachain and dApp developers. It mirrors Polkadot's runtime and is maintained by Polkadot community members. Paseo provides a dedicated space for parachain developers to test their applications in a Polkadot-like environment without the risks associated with live networks.

+

The native token for Paseo is PAS. Additional information on PAS is available on the Native Assets page.

+

Local Test Networks

+

Local test networks are an essential part of the development cycle for blockchain developers using the Polkadot SDK. They allow for fast, iterative testing in controlled, private environments without connecting to public TestNets. Developers can quickly spin up local instances to experiment, debug, and validate their code before deploying to larger TestNets like Westend or Paseo. Two key tools for local network testing are Zombienet and Chopsticks.

+

Zombienet

+

Zombienet is a flexible testing framework for Polkadot SDK-based blockchains. It enables developers to create and manage ephemeral, short-lived networks. This feature makes Zombienet particularly useful for quick iterations, as it allows you to run multiple local networks concurrently, mimicking different runtime conditions. Whether you're developing a parachain or testing your custom blockchain logic, Zombienet gives you the tools to automate local testing.

+

Key features of Zombienet include:

+
  • Creating dynamic, local networks with different configurations
  • Running parachains and relay chains in a simulated environment
  • Efficient testing of network components like cross-chain messaging and governance
+

Zombienet is ideal for developers looking to test quickly and thoroughly before moving to more resource-intensive public TestNets.

+

Chopsticks

+

Chopsticks is a tool designed to create forks of Polkadot SDK-based blockchains, allowing developers to interact with network forks as part of their testing process. This capability makes Chopsticks a powerful option for testing upgrades, runtime changes, or cross-chain applications in a forked network environment.

+

Key features of Chopsticks include:

+
  • Forking live Polkadot SDK-based blockchains for isolated testing
  • Simulating cross-chain messages in a private, controlled setup
  • Debugging network behavior by interacting with the fork in real-time
+

Chopsticks provides a controlled environment for developers to safely explore the effects of runtime changes. It ensures that network behavior is tested and verified before upgrades are deployed to live networks.

+

Randomness

+

Introduction

+

Randomness is crucial in Proof of Stake (PoS) blockchains to ensure a fair and unpredictable distribution of validator duties. However, computers are inherently deterministic, meaning the same input always produces the same output. What we typically refer to as "random" numbers on a computer are actually pseudo-random. These numbers rely on an initial "seed," which can come from external sources like atmospheric noise, heart rates, or even lava lamps. While this may seem random, given the same "seed," the same sequence of numbers will always be generated.

+

In a global blockchain network, relying on real-world entropy for randomness isn’t feasible because these inputs vary by time and location. If nodes use different inputs, blockchains can fork. Hence, real-world randomness isn't suitable for use as a seed in blockchain systems.

+

Currently, two primary methods for generating randomness in blockchains are used: RANDAO and VRF (Verifiable Random Function). Polkadot adopts the VRF approach for its randomness.

+

VRF

+

A Verifiable Random Function (VRF) is a cryptographic function that generates a random number and proof that ensures the submitter produced the number. This proof allows anyone to verify the validity of the random number.

+

Polkadot's VRF is similar to the one used in Ouroboros Praos, which secures randomness for block production in systems like BABE (Polkadot’s block production mechanism).

+

The key difference is that Polkadot's VRF doesn’t rely on a central clock—avoiding the issue of whose clock to trust. Instead, it uses its own past results and slot numbers to simulate time and determine future outcomes.

+

How VRF Works

+

Slots on Polkadot are discrete units of time, each lasting six seconds, and can potentially hold a block. Multiple slots form an epoch, with 2400 slots making up one four-hour epoch.

+

In each slot, validators execute a "die roll" using a VRF. The VRF uses three inputs:

+
  1. A secret key, unique to each validator, used for the die roll
  2. An epoch randomness value, derived from the hash of VRF outputs from blocks two epochs ago (N-2), so past randomness influences the current epoch (N)
  3. The current slot number
+

This process helps maintain fair randomness across the network.

+


The VRF produces two outputs: a result (the random number) and a proof (verifying that the number was generated correctly).

+

The result is checked by the validator against a protocol threshold. If it's below the threshold, the validator becomes a candidate for block production in that slot.

+

The validator then attempts to create a block, submitting it along with the PROOF and RESULT.

+

So, the VRF can be expressed as:

+

(RESULT, PROOF) = VRF(SECRET, EPOCH_RANDOMNESS_VALUE, CURRENT_SLOT_NUMBER)

+

Put simply, performing a "VRF roll" generates a random number along with proof that the number was genuinely produced and not arbitrarily chosen.

+

After executing the VRF, the RESULT is compared to a protocol-defined THRESHOLD. If the RESULT is below the THRESHOLD, the validator becomes a valid candidate to propose a block for that slot. Otherwise, the validator skips the slot.

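The comparison itself is simple; the following toy sketch (plain integers standing in for the real VRF output and BABE threshold, which are not these types in production) models the candidacy check:

fn is_block_candidate(vrf_result: u128, threshold: u128) -> bool {
    // A validator may author a block in this slot only if its VRF
    // output falls below the protocol-defined threshold.
    vrf_result < threshold
}

fn main() {
    let threshold = u128::MAX / 4; // stand-in value, not the real parameter
    assert!(is_block_candidate(threshold - 1, threshold));
    assert!(!is_block_candidate(threshold + 1, threshold));
}
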
+

As a result, there may be multiple validators eligible to propose a block for a slot. In this case, the block accepted by other nodes will prevail, provided it is on the chain with the latest finalized block as determined by the GRANDPA finality gadget. It's also possible for no block producers to be available for a slot, in which case the AURA consensus takes over. AURA is a fallback mechanism that randomly selects a validator to produce a block, running in parallel with BABE and only stepping in when no block producers exist for a slot. Otherwise, it remains inactive.

+

Because validators roll independently, some slots may have no block candidates if every validator's roll is above the threshold.

+
+

Note

+

For how this issue is resolved and how Polkadot keeps block times near-constant, see the PoS Consensus page.

+
+

RANDAO

+

An alternative on-chain randomness method is Ethereum's RANDAO, where validators perform thousands of hashes on a seed and publish the final hash during a round. The collective input from all validators forms the random number, and as long as one honest validator participates, the randomness is secure.

+

To enhance security, RANDAO can optionally be combined with a Verifiable Delay Function (VDF), ensuring that randomness can't be predicted or manipulated during computation.

+
+

Note

+

More information about RANDAO can be found in the ETH documentation.

+
+

VDFs

+

Verifiable Delay Functions (VDFs) are time-bound computations that, even on parallel computers, take a set amount of time to complete.

+

They produce a unique result that can be quickly verified publicly. When combined with RANDAO, feeding RANDAO's output into a VDF introduces a delay that nullifies an attacker's chance to influence the randomness.

+

However, VDFs likely require specialized ASIC devices that run separately from standard nodes.

+
+

Warning

+

While only one such device is needed to secure the system, and they are expected to be open source and inexpensive, running them involves significant costs without direct incentives, adding friction for blockchain users.

+
+

Additional Resources


Glossary

+

Key definitions, concepts, and terminology specific to the Polkadot ecosystem are included here.

+

Additional glossaries from around the ecosystem you might find helpful:


Authority

+

The role in a blockchain that can participate in consensus mechanisms.


Authority sets can be used as a basis for consensus mechanisms such as the Nominated Proof of Stake (NPoS) protocol.

+

Authority Round (Aura)

+

A deterministic consensus protocol where block production is limited to a rotating list of authorities that take turns creating blocks. In authority round (Aura) consensus, most online authorities are assumed to be honest. It is often used in combination with GRANDPA as a hybrid consensus protocol.

+

Learn more by reading the official Aura consensus algorithm wiki article.

+

Blind Assignment of Blockchain Extension (BABE)

+

A block authoring protocol similar to Aura, except authorities win slots based on a Verifiable Random Function (VRF) instead of the round-robin selection method. The winning authority can select a chain and submit a new block.

+

Learn more by reading the official Web3 Foundation BABE research document.

+

Block Author

+

The node responsible for the creation of a block, also called block producers. In a Proof of Work (PoW) blockchain, these nodes are called miners.

+

Byzantine Fault Tolerance (BFT)

+

The ability of a distributed computer network to remain operational if a certain proportion of its nodes or authorities are defective or behaving maliciously.

+
+

Note

+

A distributed network is typically considered Byzantine fault tolerant if it can remain functional, with up to one-third of nodes assumed to be defective, offline, actively malicious, and part of a coordinated attack.

+
+

Byzantine Failure

+

The loss of a network service due to node failures that exceed the proportion of nodes required to reach consensus.

+

Practical Byzantine Fault Tolerance (pBFT)

+

An early approach to Byzantine fault tolerance (BFT), practical Byzantine fault tolerance (pBFT) systems tolerate Byzantine behavior from up to one-third of participants.

+

The communication overhead for such systems is O(n²), where n is the number of nodes (participants) in the system.

+

Call

+

In the context of pallets containing functions to be dispatched to the runtime, Call is an enumeration data type that describes the functions that can be dispatched with one variant per pallet. A Call represents a dispatch data structure object.

+

Chain Specification

+

A chain specification file defines the properties required to run a node in an active or new Polkadot SDK-built network. It often contains the initial genesis runtime code, network properties (such as the network's name), the initial state for some pallets, and the boot node list. The chain specification file makes it easy to use a single Polkadot SDK codebase as the foundation for multiple independently configured chains.

+

Collator

+

An author of a parachain network. They aren't authorities in themselves, as they require a relay chain to coordinate consensus.

+

More details are found on the Polkadot Collator Wiki.

+

Collective

+

Most often used to refer to an instance of the Collective pallet on Polkadot SDK-based networks such as Kusama or Polkadot if the Collective pallet is part of the FRAME-based runtime for the network.

+

Consensus

+

Consensus is the process blockchain nodes use to agree on a chain's canonical fork. It is composed of authorship, finality, and fork-choice rule. In the Polkadot ecosystem, these three components are usually separate and the term consensus often refers specifically to authorship.

+

See also hybrid consensus.

+

Consensus Algorithm

+

Ensures a set of actors—who don't necessarily trust each other—can reach an agreement about the state as the result of some computation. Most consensus algorithms assume that up to one-third of the actors or nodes can be Byzantine, that is, faulty or malicious.

+

Consensus algorithms are generally concerned with ensuring two properties:

+
  • Safety - indicating that all honest nodes eventually agree on the state of the chain
  • Liveness - indicating the ability of the chain to keep progressing
+

Consensus Engine

+

The node subsystem responsible for consensus tasks.

+

For detailed information about the consensus strategies of the Polkadot network, see the Polkadot Consensus blog series.

+

See also hybrid consensus.

+

Coretime

+

The time allocated for utilizing a core, measured in relay chain blocks. There are two types of coretime: on-demand and bulk.

+

On-demand coretime refers to coretime acquired through bidding in near real-time for the validation of a single parachain block on one of the cores reserved specifically for on-demand orders. These cores form the on-demand coretime pool. Cores reserved through bulk coretime can also be made available in the on-demand coretime pool, in part or in their entirety.

+

Bulk coretime is a fixed duration of continuous coretime represented by an NFT that can be split, shared, or resold. It is managed by the Broker pallet.

+

Development Phrase

+

A mnemonic phrase that is intentionally made public.

+

Well-known development accounts, such as Alice, Bob, Charlie, Dave, Eve, and Ferdie, are generated from the same secret phrase:

+
bottom drive obey lake curtain smoke basket hold race lonely fit walk
+
+

Many tools in the Polkadot SDK ecosystem, such as subkey, allow you to implicitly specify an account using a derivation path such as //Alice.

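A minimal sketch of this behavior, assuming the sp-core crate: deriving from the full development phrase and from the bare //Alice path produces the same keypair, because the development phrase is substituted implicitly when a URI starts with a derivation path.

use sp_core::{sr25519, Pair};

fn main() {
    // The development phrase written out in full, with a hard derivation.
    let full = sr25519::Pair::from_string(
        "bottom drive obey lake curtain smoke basket hold race lonely fit walk//Alice",
        None,
    )
    .expect("valid SURI");

    // The shorthand, where the development phrase is implied.
    let short = sr25519::Pair::from_string("//Alice", None).expect("valid SURI");

    assert_eq!(full.public(), short.public());
}
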
+

Digest

+

An extensible field of the block header that encodes information needed by several actors in a blockchain network, including:

+
  • Light clients for chain synchronization
  • Consensus engines for block verification
  • The runtime itself, in the case of pre-runtime digests
+

Dispatchable

+

Function objects that act as the entry points in FRAME pallets. Internal or external entities can call them to interact with the blockchain’s state. They are a core aspect of the runtime logic, handling transactions and other state-changing operations.

+

Events

+

A means of recording that some particular state transition happened.

+

In the context of FRAME, events are composable data types that each pallet can individually define. Events in FRAME are implemented as a set of transient storage items inspected immediately after a block has been executed and reset during block initialization.

+

Executor

+

A means of executing a function call in a given runtime with a set of dependencies. There are two orchestration engines in Polkadot SDK: WebAssembly and native.

+
  • The native executor uses a natively compiled runtime embedded in the node to execute calls. This is a performance optimization available to up-to-date nodes
  • The WebAssembly executor uses a Wasm binary and a Wasm interpreter to execute calls. The binary is guaranteed to be up-to-date regardless of the version of the blockchain node because it is persisted in the state of the Polkadot SDK-based chain
+

Existential Deposit

+

The minimum balance an account is allowed to have in the Balances pallet. Accounts cannot be created with a balance less than the existential deposit amount.

+

If an account balance drops below this amount, the Balances pallet uses a FRAME System API to drop its references to that account.

+

If the Balances pallet reference to an account is dropped, the account can be reaped.

+

Extrinsic

+

A general term for data that originates outside the runtime, is included in a block, and leads to some action. This includes user-initiated transactions and inherent transactions placed into the block by the block builder.

+

It is a SCALE-encoded array typically consisting of a version number, signature, and varying data types indicating the resulting runtime function to be called. Extrinsics can take two forms: inherents and transactions.

+

For more technical details, see the Polkadot spec.

+

Fork Choice Rule/Strategy

+

A fork choice rule or strategy helps determine which chain is valid when reconciling several network forks. A common fork choice rule is the longest chain, in which the chain with the most blocks is selected.

+

FRAME (Framework for Runtime Aggregation of Modularized Entities)

+

Enables developers to create blockchain runtime environments from a modular set of components called pallets. It utilizes a set of procedural macros to construct runtimes.

+

Visit the Polkadot SDK docs for more details on FRAME.

+

Full Node

+

A node that prunes historical states, keeping only recently finalized block states to reduce storage needs. Full nodes provide current chain state access and allow direct submission and validation of extrinsics, maintaining network decentralization.

+

Genesis Configuration

+

A mechanism for specifying the initial state of a blockchain. By convention, this initial state or first block is commonly referred to as the genesis state or genesis block. The genesis configuration for Polkadot SDK-based chains is accomplished by way of a chain specification file.

+

GRANDPA

+

A deterministic finality mechanism for blockchains that is implemented in the Rust programming language.

+

The formal specification is maintained by the Web3 Foundation.

Header

+

A structure that aggregates the information used to summarize a block. Primarily, it consists of cryptographic information used by light clients to get minimally secure but very efficient chain synchronization.

+

Hybrid Consensus

+

A blockchain consensus protocol that consists of independent or loosely coupled mechanisms for block production and finality.

+

Hybrid consensus allows the chain to grow as fast as probabilistic consensus protocols, such as Aura, while maintaining the same level of security as deterministic finality consensus protocols, such as GRANDPA.

+

Inherent Transactions

+

A special type of unsigned transaction, referred to as inherents, that enables a block authoring node to insert information that doesn't require validation directly into a block.

+

Only the block-authoring node that calls the inherent transaction function can insert data into its block. In general, validators assume the data inserted using an inherent transaction is valid and reasonable even if it can't be deterministically verified.

+

JSON-RPC

+

A stateless, lightweight remote procedure call protocol encoded in JavaScript Object Notation (JSON). JSON-RPC provides a standard way to call functions on a remote system by using JSON.

+

For Polkadot SDK, this protocol is implemented through the Parity JSON-RPC crate.

+

Keystore

+

A subsystem for managing keys for the purpose of producing new blocks.

+

Kusama

+

Kusama is a Polkadot SDK-based blockchain that implements a design similar to the Polkadot network.

+

Kusama is a canary network and is referred to as Polkadot's "wild cousin."

+

As a canary network, Kusama is expected to be more stable than a test network like Westend but less stable than a production network like Polkadot. Kusama is controlled by its network participants and is intended to be stable enough to encourage meaningful experimentation.

+

libp2p

+

A peer-to-peer networking stack that allows the use of many transport mechanisms, including WebSockets (usable in a web browser).

+

Polkadot SDK uses the Rust implementation of the libp2p networking stack.

+

Light Client

+

A type of blockchain node that doesn't store the chain state or produce blocks.

+

A light client can verify cryptographic primitives and provides a remote procedure call (RPC) server, enabling blockchain users to interact with the network.

+

Metadata

+

Data that provides information about one or more aspects of a system. The metadata that exposes information about a Polkadot SDK blockchain enables you to interact with that system.

+

Nominated Proof of Stake (NPoS)

+

A method for determining validators or authorities based on a willingness to commit their stake to the proper functioning of one or more block-producing nodes.

+

Oracle

+

An entity that connects a blockchain to a non-blockchain data source. Oracles enable the blockchain to access and act upon information from existing data sources and incorporate data from non-blockchain systems and services.

+

Origin

+

A FRAME primitive that identifies the source of a dispatched function call into the runtime. The FRAME System pallet defines three built-in origins. As a pallet developer, you can also define custom origins, such as those defined by the Collective pallet.

+

Pallet

+

A module that can be used to extend the capabilities of a FRAME-based runtime. Pallets bundle domain-specific logic with runtime primitives like events and storage items.

+

Parachain

+

A parachain is a blockchain that derives shared infrastructure and security from a relay chain. You can learn more about parachains on the Polkadot Wiki.

+

Paseo

+

Paseo TestNet provisions testing on Polkadot's "production" runtime, which means less chance of feature or code mismatch when developing parachain apps. Specifically, after the Polkadot Technical Fellowship proposes a runtime upgrade for Polkadot, this TestNet is updated, giving a period where the TestNet will be ahead of Polkadot to allow for testing.

+

Polkadot

+

The Polkadot network is a blockchain that serves as the central hub of a heterogeneous blockchain network. It serves the role of the relay chain and provides shared infrastructure and security to support parachains.

+

Relay Chain

+

Relay chains are blockchains that provide shared infrastructure and security to the parachains in the network. In addition to providing consensus capabilities, relay chains allow parachains to communicate and exchange digital assets without needing to trust one another.

+

Rococo

+

A parachain test network for the Polkadot network. The Rococo network is a Polkadot SDK-based blockchain with an October 14, 2024 deprecation date. Development teams are encouraged to use the Paseo TestNet instead.

+

Runtime

+

The runtime provides the state transition function for a node. In Polkadot SDK, the runtime is stored as a Wasm binary in the chain state.

+

Slot

+

A fixed, equal interval of time used by consensus engines such as Aura and BABE. In each slot, a subset of authorities is permitted, or obliged, to author a block.

+

Sovereign Account

+

The unique account identifier for each chain in the relay chain ecosystem. It is often used in cross-consensus (XCM) interactions to sign XCM messages sent to the relay chain or other chains in the ecosystem.

+

The sovereign account for each chain is a root-level account that can only be accessed using the Sudo pallet or through governance. The account identifier is calculated by concatenating the Blake2 hash of a specific text string and the registered parachain identifier.

+

SS58 Address Format

+

A public key address based on the Bitcoin Base-58-check encoding. Each Polkadot SDK SS58 address uses a base-58 encoded value to identify a specific account on a specific Polkadot SDK-based chain.

+

The canonical ss58-registry provides additional details about the address format used by different Polkadot SDK-based chains, including the network prefix and website used for different networks.

+

State Transition Function (STF)

+

The logic of a blockchain that determines how the state changes when a block is processed. In Polkadot SDK, the state transition function is effectively equivalent to the runtime.

+

Storage Item

+

FRAME primitives that provide type-safe data persistence capabilities to the runtime. Learn more in the storage items reference document in the Polkadot SDK.

+

Substrate

+

A flexible framework for building modular, efficient, and upgradeable blockchains. Substrate is written in the Rust programming language and is maintained by Parity Technologies.

+

Transaction

+

An extrinsic that includes a signature that can be used to verify the account authorizing it inherently or via signed extensions.

+

Transaction Era

+

A definable period expressed as a range of block numbers during which a transaction can be included in a block. Transaction eras are used to protect against transaction replay attacks if an account is reaped and its replay-protecting nonce is reset to zero.

+

Trie (Patricia Merkle Tree)

+

A data structure used to represent sets of key-value pairs and enables the items in the data set to be stored and retrieved using a cryptographic hash. Because incremental changes to the data set result in a new hash, retrieving data is efficient even if the data set is very large. With this data structure, you can also prove whether the data set includes any particular key-value pair without access to the entire data set.

+

In Polkadot SDK-based blockchains, state is stored in a trie data structure that supports the efficient creation of incremental digests. This trie is exposed to the runtime as a simple key/value map where both keys and values can be arbitrary byte arrays.

+

Validator

+

A validator is a node that participates in the consensus mechanism of the network. Its roles include block production, transaction validation, network integrity, and security maintenance.

+

WebAssembly (Wasm)

+

An execution architecture that allows for the efficient, platform-neutral expression of deterministic, machine-executable logic.

+

Wasm can be compiled from many languages, including the Rust programming language. Polkadot SDK-based chains use a Wasm binary to provide portable runtimes that can be included as part of the chain's state.

+

Weight

+

A convention used in Polkadot SDK-based blockchains to measure and manage the time it takes to validate a block. Polkadot SDK defines one unit of weight as one picosecond of execution time on reference hardware.

+

The maximum block weight should be equivalent to one-third of the target block time with an allocation of one-third each for:

+
  • Block construction
  • Network propagation
  • Import and verification
+

By defining weights, you can trade-off the number of transactions per second and the hardware required to maintain the target block time appropriate for your use case. Weights are defined in the runtime, meaning you can tune them using runtime updates to keep up with hardware and software improvements.

+

Westend

+

Westend is a Parity-maintained, Polkadot SDK-based blockchain that serves as a test network for the Polkadot network.

+
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/index.html b/polkadot-protocol/index.html new file mode 100644 index 00000000..3d9b5422 --- /dev/null +++ b/polkadot-protocol/index.html @@ -0,0 +1,3456 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Learn about Polkadot Protocol | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + + + +
+
+
+ + + + + + +
+
+
+ + + + + +
+
+
+ + + + + +
+
+
+ + +
+ + +
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/onchain-governance/index.html b/polkadot-protocol/onchain-governance/index.html new file mode 100644 index 00000000..1e498e26 --- /dev/null +++ b/polkadot-protocol/onchain-governance/index.html @@ -0,0 +1,3361 @@ + + + + + + + + + + + + + + + + + + + + + + + + Index | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + +
+
+
+ + + + + +
+
+
+ + +
+ +
+ + + + +

Index

+ + +
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/onchain-governance/overview/index.html b/polkadot-protocol/onchain-governance/overview/index.html new file mode 100644 index 00000000..652e7802 --- /dev/null +++ b/polkadot-protocol/onchain-governance/overview/index.html @@ -0,0 +1,3523 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + On-Chain Governance Overview | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

On-Chain Governance

+

Introduction

+

Polkadot’s governance system exemplifies decentralized decision-making, empowering its community of stakeholders to shape the network’s future through active participation. The latest evolution, OpenGov, builds on Polkadot’s foundation by providing a more inclusive and efficient governance model.

+

This guide will explain the principles and structure of OpenGov and walk you through its key components, such as Origins, Tracks, and Delegation. You will learn about improvements over earlier governance systems, including streamlined voting processes and enhanced stakeholder participation.

+

With OpenGov, Polkadot achieves a flexible, scalable, and democratic governance framework that allows multiple proposals to proceed simultaneously, ensuring the network evolves in alignment with its community's needs.

+

Governance Evolution

+

Polkadot’s governance journey began with Governance V1, a system that proved effective in managing treasury funds and protocol upgrades. However, it faced limitations, such as:

+
  • Slow voting cycles, causing delays in decision-making
  • Inflexibility in handling multiple referendums, restricting scalability
+

To address these challenges, Polkadot introduced OpenGov, a governance model designed for greater inclusivity, efficiency, and scalability. OpenGov replaces the centralized structures of Governance V1, such as the Council and Technical Committee, with a fully decentralized and dynamic framework.

+

For a full comparison of the historic and current governance models, visit the Gov1 vs. Polkadot OpenGov section of the Polkadot Wiki.

+

OpenGov Key Features

+

OpenGov transforms Polkadot’s governance into a decentralized, stakeholder-driven model, eliminating centralized decision-making bodies like the Council. Key enhancements include:

+
  • Decentralization - shifts all decision-making power to the public, ensuring a more democratic process
  • Enhanced delegation - allows users to delegate their votes to trusted experts across specific governance tracks
  • Simultaneous referendums - multiple proposals can progress at once, enabling faster decision-making
  • Polkadot Technical Fellowship - a broad, community-driven group replacing the centralized Technical Committee
+

This new system ensures Polkadot governance remains agile and inclusive, even as the ecosystem grows.

+

Origins and Tracks

+

In OpenGov, origins and tracks are central to managing proposals and votes.

+
  • Origin - determines the authority level of a proposal (e.g., Treasury, Root), which in turn decides the track of all referendums submitted from that origin
  • Track - defines the procedural flow of a proposal, such as voting duration, approval thresholds, and enactment timelines
+

Developers must be aware that referendums from different origins and tracks will take varying amounts of time to reach approval and enactment. The Polkadot Technical Fellowship has the option to shorten this timeline by whitelisting a proposal and allowing it to be enacted through the Whitelist Caller origin.

+

Visit Origins and Tracks Info for details on current origins and tracks, associated terminology, and parameters.

+

Referendums

+

In OpenGov, anyone can submit a referendum, fostering an open and participatory system. The timeline for a referendum depends on the privilege level of the origin, with more significant changes offering more time for community voting and participation before enactment.

+

The timeline for an individual referendum includes four distinct periods:

+
  • Lead-in - a minimum amount of time to allow for community participation, available capacity in the chosen track, and payment of the decision deposit. Voting is open during this period
  • Decision - voting continues
  • Confirmation - the referendum must meet approval and support criteria for the entire period to avoid rejection
  • Enactment - changes approved by the referendum are executed
+

Vote on Referendums

+

Voters can vote with their tokens on each referendum. Polkadot uses a voluntary token locking mechanism, called conviction voting, as a way for voters to increase their voting power. A token holder signals they have a stronger preference for approving a proposal based upon their willingness to lock up tokens. Longer voluntary token locks are seen as a signal of continual approval and translate to increased voting weight.
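
For example, with illustrative numbers: voting with 10 DOT at 1x conviction counts as 10 votes and locks the tokens for one lock period, the same 10 DOT at 6x conviction counts as 60 votes in exchange for a much longer lock, and voting with no lock at all reduces the ballot to 0.1x, or 1 vote.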

+

See Voting on a Referendum for a deeper look at conviction voting and related token locks.

+

Delegate Voting Power

+

The OpenGov system also supports multi-role delegations, allowing token holders to assign their voting power on different tracks to entities with expertise in those areas.

+

For example, if a token holder lacks the technical knowledge to evaluate proposals on the Root track, they can delegate their voting power for that track to an expert they trust to vote in the best interest of the network. This ensures informed decision-making across tracks while maintaining flexibility for token holders.

+

Visit Multirole Delegation for more details on delegating voting power.

+

Cancel a Referendum

+

Polkadot OpenGov has two origins for rejecting ongoing referendums:

+
  • Referendum Canceller - cancels an active referendum when non-malicious errors occur and refunds the deposits to the originators
  • Referendum Killer - used for urgent, malicious cases; this origin instantly terminates an active referendum and slashes deposits
+

See Cancelling, Killing, and Blacklisting for additional information on rejecting referendums.

+

Additional Resources

\ No newline at end of file
diff --git a/search/search_index.json b/search/search_index.json
new file mode 100644
index 00000000..dc8177dd
--- /dev/null
+++ b/search/search_index.json
@@ -0,0 +1 @@
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"LICENSE/","title":"LICENSE","text":"

Attribution 4.0 International

=======================================================================

Creative Commons Corporation (\"Creative Commons\") is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an \"as-is\" basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.

Using Creative Commons Public Licenses

Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses.

 Considerations for licensors: Our public licenses are\n intended for use by those authorized to give the public\n permission to use material in ways otherwise restricted by\n copyright and certain other rights. Our licenses are\n irrevocable. Licensors should read and understand the terms\n and conditions of the license they choose before applying it.\n Licensors should also secure all rights necessary before\n applying our licenses so that the public can reuse the\n material as expected. Licensors should clearly mark any\n material not subject to the license. This includes other CC-\n licensed material, or material used under an exception or\n limitation to copyright. More considerations for licensors:\nwiki.creativecommons.org/Considerations_for_licensors\n\n Considerations for the public: By using one of our public\n licenses, a licensor grants the public permission to use the\n licensed material under specified terms and conditions. If\n the licensor's permission is not necessary for any reason--for\n example, because of any applicable exception or limitation to\n copyright--then that use is not regulated by the license. Our\n licenses grant only permissions under copyright and certain\n other rights that a licensor has authority to grant. Use of\n the licensed material may still be restricted for other\n reasons, including because others have copyright or other\n rights in the material. A licensor may make special requests,\n such as asking that all changes be marked or described.\n Although not required by our licenses, you are encouraged to\n respect those requests where reasonable. More_considerations\n for the public:\nwiki.creativecommons.org/Considerations_for_licensees\n

=======================================================================

Creative Commons Attribution 4.0 International Public License

By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution 4.0 International Public License (\"Public License\"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.

Section 1 -- Definitions.

a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.

b. Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License.

c. Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.

d. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements.

e. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material.

f. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License.

g. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.

h. Licensor means the individual(s) or entity(ies) granting rights under this Public License.

i. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.

j. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.

k. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning.

Section 2 -- Scope.

a. License grant.

   1. Subject to the terms and conditions of this Public License,\n      the Licensor hereby grants You a worldwide, royalty-free,\n      non-sublicensable, non-exclusive, irrevocable license to\n      exercise the Licensed Rights in the Licensed Material to:\n\n        a. reproduce and Share the Licensed Material, in whole or\n           in part; and\n\n        b. produce, reproduce, and Share Adapted Material.\n\n   2. Exceptions and Limitations. For the avoidance of doubt, where\n      Exceptions and Limitations apply to Your use, this Public\n      License does not apply, and You do not need to comply with\n      its terms and conditions.\n\n   3. Term. The term of this Public License is specified in Section\n      6(a).\n\n   4. Media and formats; technical modifications allowed. The\n      Licensor authorizes You to exercise the Licensed Rights in\n      all media and formats whether now known or hereafter created,\n      and to make technical modifications necessary to do so. The\n      Licensor waives and/or agrees not to assert any right or\n      authority to forbid You from making technical modifications\n      necessary to exercise the Licensed Rights, including\n      technical modifications necessary to circumvent Effective\n      Technological Measures. For purposes of this Public License,\n      simply making modifications authorized by this Section 2(a)\n      (4) never produces Adapted Material.\n\n   5. Downstream recipients.\n\n        a. Offer from the Licensor -- Licensed Material. Every\n           recipient of the Licensed Material automatically\n           receives an offer from the Licensor to exercise the\n           Licensed Rights under the terms and conditions of this\n           Public License.\n\n        b. No downstream restrictions. You may not offer or impose\n           any additional or different terms or conditions on, or\n           apply any Effective Technological Measures to, the\n           Licensed Material if doing so restricts exercise of the\n           Licensed Rights by any recipient of the Licensed\n           Material.\n\n   6. No endorsement. Nothing in this Public License constitutes or\n      may be construed as permission to assert or imply that You\n      are, or that Your use of the Licensed Material is, connected\n      with, or sponsored, endorsed, or granted official status by,\n      the Licensor or others designated to receive attribution as\n      provided in Section 3(a)(1)(A)(i).\n

b. Other rights.

   1. Moral rights, such as the right of integrity, are not\n      licensed under this Public License, nor are publicity,\n      privacy, and/or other similar personality rights; however, to\n      the extent possible, the Licensor waives and/or agrees not to\n      assert any such rights held by the Licensor to the limited\n      extent necessary to allow You to exercise the Licensed\n      Rights, but not otherwise.\n\n   2. Patent and trademark rights are not licensed under this\n      Public License.\n\n   3. To the extent possible, the Licensor waives any right to\n      collect royalties from You for the exercise of the Licensed\n      Rights, whether directly or through a collecting society\n      under any voluntary or waivable statutory or compulsory\n      licensing scheme. In all other cases the Licensor expressly\n      reserves any right to collect such royalties.\n

Section 3 -- License Conditions.

Your exercise of the Licensed Rights is expressly made subject to the following conditions.

a. Attribution.

   1. If You Share the Licensed Material (including in modified\n      form), You must:\n\n        a. retain the following if it is supplied by the Licensor\n           with the Licensed Material:\n\n             i. identification of the creator(s) of the Licensed\n                Material and any others designated to receive\n                attribution, in any reasonable manner requested by\n                the Licensor (including by pseudonym if\n                designated);\n\n            ii. a copyright notice;\n\n           iii. a notice that refers to this Public License;\n\n            iv. a notice that refers to the disclaimer of\n                warranties;\n\n             v. a URI or hyperlink to the Licensed Material to the\n                extent reasonably practicable;\n\n        b. indicate if You modified the Licensed Material and\n           retain an indication of any previous modifications; and\n\n        c. indicate the Licensed Material is licensed under this\n           Public License, and include the text of, or the URI or\n           hyperlink to, this Public License.\n\n   2. You may satisfy the conditions in Section 3(a)(1) in any\n      reasonable manner based on the medium, means, and context in\n      which You Share the Licensed Material. For example, it may be\n      reasonable to satisfy the conditions by providing a URI or\n      hyperlink to a resource that includes the required\n      information.\n\n   3. If requested by the Licensor, You must remove any of the\n      information required by Section 3(a)(1)(A) to the extent\n      reasonably practicable.\n\n   4. If You Share Adapted Material You produce, the Adapter's\n      License You apply must not prevent recipients of the Adapted\n      Material from complying with this Public License.\n

Section 4 -- Sui Generis Database Rights.

Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material:

a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database;

b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material; and

c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database.

For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.

Section 5 -- Disclaimer of Warranties and Limitation of Liability.

a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.

b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.

c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.

Section 6 -- Term and Termination.

a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.

b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates:

   1. automatically as of the date the violation is cured, provided\n      it is cured within 30 days of Your discovery of the\n      violation; or\n\n   2. upon express reinstatement by the Licensor.\n\n For the avoidance of doubt, this Section 6(b) does not affect any\n right the Licensor may have to seek remedies for Your violations\n of this Public License.\n

c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.

d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.

Section 7 -- Other Terms and Conditions.

a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.

b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.

Section 8 -- Interpretation.

a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.

b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.

c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.

d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.

=======================================================================

Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the \u201cLicensor.\u201d The text of the Creative Commons public licenses is dedicated to the public domain under the CC0 Public Domain Dedication. Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark \"Creative Commons\" or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.

Creative Commons may be contacted at creativecommons.org

"},{"location":"develop/blockchains/custom-blockchains/add-existing-pallets/","title":"Add a Pallet to the Runtime","text":""},{"location":"develop/blockchains/custom-blockchains/add-existing-pallets/#introduction","title":"Introduction","text":"

The Polkadot SDK Solochain Template provides a functional runtime that includes default FRAME development modules (pallets) to help you get started with building a custom blockchain.

Each pallet has specific configuration requirements, such as the parameters and types needed to enable the pallet's functionality. In this guide, you'll learn how to add a pallet to a runtime and configure the settings specific to that pallet.

The purpose of this article is to help you:

  • Learn how to update runtime dependencies to integrate a new pallet
  • Understand how to configure pallet-specific Rust traits to enable the pallet's functionality
  • Grasp the entire workflow of integrating a new pallet into your runtime
"},{"location":"develop/blockchains/custom-blockchains/add-existing-pallets/#configuring-runtime-dependencies","title":"Configuring Runtime Dependencies","text":"

For Rust programs, this configuration is defined in the Cargo.toml file, which specifies the settings and dependencies that control what gets compiled into the final binary. Since the Polkadot SDK runtime compiles to both a native binary (which includes standard Rust library functions) and a Wasm binary (which does not include the standard Rust library), the runtime/Cargo.toml file manages two key aspects:

  • The locations and versions of the pallets that are to be imported as dependencies for the runtime
  • The features in each pallet that should be enabled when compiling the native Rust binary. By enabling the standard (std) feature set from each pallet, you ensure that the runtime includes the functions, types, and primitives necessary for the native build, which are otherwise excluded when compiling the Wasm binary

Note

For information about adding dependencies in Cargo.toml files, see the Dependencies page in the Cargo documentation. For information about enabling and managing features from dependent packages, see the Features section in the Cargo documentation.

"},{"location":"develop/blockchains/custom-blockchains/add-existing-pallets/#dependencies-for-a-new-pallet","title":"Dependencies for a New Pallet","text":"

To add the dependencies for a new pallet to the runtime, you must modify the Cargo.toml file by adding a new line into the [workspace.dependencies] section with the pallet you want to add. This pallet definition might look like:

pallet-example = { version = \"4.0.0-dev\", default-features = false }\n

This line imports the pallet-example crate as a dependency and specifies the following:

  • version - the specific version of the crate to import
  • default-features - determines the behavior for including pallet features when compiling the runtime with standard Rust libraries

Note

If you\u2019re importing a pallet that isn\u2019t available on crates.io, you can specify the pallet's location (either locally or from a remote repository) by using the git or path key. For example:

pallet-example = { \n    version = \"4.0.0-dev\",\n    default-features = false,\n    git = \"INSERT_PALLET_REMOTE_URL\",\n}\n

In this case, replace INSERT_PALLET_REMOTE_URL with the correct repository URL. For local paths, use the path key like so:

pallet-example = { \n    version = \"4.0.0-dev\",\n    default-features = false,\n    path = \"INSERT_PALLET_RELATIVE_PATH\",\n}\n

Ensure that you substitute INSERT_PALLET_RELATIVE_PATH with the appropriate local path to the pallet.

Next, add this dependency to the [dependencies] section of the runtime/Cargo.toml file, so it inherits from the main Cargo.toml file:

pallet-example.workspace = true\n

To enable the std feature of the pallet, add the pallet to the following section:

[features]\ndefault = [\"std\"]\nstd = [\n    ...\n    \"pallet-example/std\",\n    ...\n]\n

This section specifies the default feature set for the runtime, which includes the std features for each pallet. When the runtime is compiled with the std feature set, the standard library features for all listed pallets are enabled. For more details on how the runtime is compiled as both a native binary (using std) and a Wasm binary (using no_std), refer to the Wasm build section in the Polkadot SDK documentation.

Note

If you forget to update the features section in the Cargo.toml file, you might encounter cannot find function errors when compiling the runtime.

To ensure that the new dependencies resolve correctly for the runtime, you can run the following command:

cargo check --release\n
"},{"location":"develop/blockchains/custom-blockchains/add-existing-pallets/#config-trait-for-pallets","title":"Config Trait for Pallets","text":"

Every Polkadot SDK pallet defines a Rust trait called Config. This trait specifies the types and parameters that the pallet needs to integrate with the runtime and perform its functions. The primary purpose of this trait is to act as an interface between this pallet and the runtime in which it is embedded. A type, function, or constant in this trait is essentially left to be configured by the runtime that includes this pallet.

Consequently, a runtime that wants to include this pallet must implement this trait.

You can inspect any pallet\u2019s Config trait by reviewing its Rust documentation or source code. The Config trait ensures the pallet has access to the necessary types (like events, calls, or origins) and integrates smoothly with the rest of the runtime.

At its core, the Config trait typically looks like this:

#[pallet::config]\npub trait Config: frame_system::Config {\n    /// Event type used by the pallet.\n    type RuntimeEvent: From<Event> + IsType<<Self as frame_system::Config>::RuntimeEvent>;\n\n    /// Weight information for controlling extrinsic execution costs.\n    type WeightInfo: WeightInfo;\n}\n

This basic structure shows that every pallet must define certain types, such as RuntimeEvent and WeightInfo, to function within the runtime. The actual implementation can vary depending on the pallet\u2019s specific needs.

Example - Utility Pallet

For instance, in the\u00a0utility pallet, the Config trait is implemented with the following types:

#[pallet::config]\npub trait Config: frame_system::Config {\n    /// The overarching event type.\n    type RuntimeEvent: From<Event> + IsType<<Self as frame_system::Config>::RuntimeEvent>;\n\n    /// The overarching call type.\n    type RuntimeCall: Parameter\n    + Dispatchable<RuntimeOrigin = Self::RuntimeOrigin, PostInfo = PostDispatchInfo>\n    + GetDispatchInfo\n    + From<frame_system::Call<Self>>\n    + UnfilteredDispatchable<RuntimeOrigin = Self::RuntimeOrigin>\n    + IsSubType<Call<Self>>\n    + IsType<<Self as frame_system::Config>::RuntimeCall>;\n\n    /// The caller origin, overarching type of all pallets origins.\n    type PalletsOrigin: Parameter +\n    Into<<Self as frame_system::Config>::RuntimeOrigin> +\n    IsType<<<Self as frame_system::Config>::RuntimeOrigin as frame_support::traits::OriginTrait>::PalletsOrigin>;\n\n    /// Weight information for extrinsics in this pallet.\n    type WeightInfo: WeightInfo;\n}\n

This example shows how the Config trait defines types like RuntimeEvent, RuntimeCall, PalletsOrigin, and WeightInfo, which the pallet will use when interacting with the runtime.

"},{"location":"develop/blockchains/custom-blockchains/add-existing-pallets/#parameter-configuration-for-pallets","title":"Parameter Configuration for Pallets","text":"

Traits in Rust define shared behavior, and within the Polkadot SDK, they allow runtimes to integrate and utilize a pallet's functionality by implementing its associated configuration trait and parameters. Some of these parameters may require constant values, which can be defined using the parameter_types! macro. This macro simplifies development by expanding the constants into the appropriate struct types with functions that the runtime can use to access their types and values in a consistent manner.

For example, the following code snippet shows how the solochain template configures certain parameters through the parameter_types! macro in the runtime/lib.rs file:

parameter_types! {\n    pub const BlockHashCount: BlockNumber = 2400;\n    pub const Version: RuntimeVersion = VERSION;\n    /// We allow for 2 seconds of compute with a 6 second average block time.\n    pub BlockWeights: frame_system::limits::BlockWeights =\n        frame_system::limits::BlockWeights::with_sensible_defaults(\n            Weight::from_parts(2u64 * WEIGHT_REF_TIME_PER_SECOND, u64::MAX),\n            NORMAL_DISPATCH_RATIO,\n        );\n    pub BlockLength: frame_system::limits::BlockLength = frame_system::limits::BlockLength\n        ::max_with_normal_ratio(5 * 1024 * 1024, NORMAL_DISPATCH_RATIO);\n    pub const SS58Prefix: u8 = 42;\n}\n
"},{"location":"develop/blockchains/custom-blockchains/add-existing-pallets/#pallet-config-in-the-runtime","title":"Pallet Config in the Runtime","text":"

To integrate a new pallet into the runtime, you must implement its Config trait in the runtime/lib.rs file. This is done by specifying the necessary types and parameters in Rust, as shown below:

impl pallet_example::Config for Runtime {\n    type RuntimeEvent = RuntimeEvent;\n    type WeightInfo = pallet_template::weights::SubstrateWeight<Runtime>;\n    ...\n}\n

Finally, to compose the runtime, update the list of pallets in the same file by modifying the #[frame_support::runtime] section. This Rust macro constructs the runtime with a specified name and the pallets you want to include. Use the following format when adding your pallet:

#[frame_support::runtime]\nmod runtime {\n    #[runtime::runtime]\n    #[runtime::derive(\n        RuntimeCall,\n        RuntimeEvent,\n        RuntimeError,\n        RuntimeOrigin,\n        RuntimeFreezeReason,\n        RuntimeHoldReason,\n        RuntimeSlashReason,\n        RuntimeLockId,\n        RuntimeTask\n    )]\n    pub struct Runtime;\n\n    #[runtime::pallet_index(0)]\n    pub type System = frame_system;\n\n    #[runtime::pallet_index(1)]\n    pub type Example = pallet_example;\n}\n

Note

The #[frame_support::runtime] macro wraps the runtime's configuration, automatically generating boilerplate code for pallet inclusion.

"},{"location":"develop/blockchains/custom-blockchains/add-existing-pallets/#where-to-go-next","title":"Where to Go Next","text":"

With the pallet successfully added and configured, the runtime is ready to be compiled and used. Following this guide\u2019s steps, you\u2019ve integrated a new pallet into the runtime, set up its dependencies, and ensured proper configuration. You can now proceed to any of the following points:

  • Dive deeper by creating your custom pallet to expand the functionality of your blockchain
  • Ensure robustness with Pallet Testing to verify the accuracy and behavior of your code
"},{"location":"develop/blockchains/custom-blockchains/benchmarking/","title":"Benchmark Testing","text":""},{"location":"develop/blockchains/custom-blockchains/benchmarking/#introduction","title":"Introduction","text":"

Benchmark testing is a critical component of developing efficient and secure blockchain runtimes. In the Polkadot ecosystem, accurately benchmark testing your custom pallets ensures that each extrinsic has a precise weight, representing its computational and storage demands. This process is vital for maintaining the blockchain's performance and preventing potential vulnerabilities, such as Denial of Service (DoS) attacks.

The Polkadot SDK leverages the FRAME benchmarking framework, offering tools to measure and assign weights to extrinsics. These weights help determine the maximum number of transactions or system-level calls processed within a block. This guide covers how to use FRAME's benchmarking framework, from setting up your environment to writing and running benchmarks for your custom pallets. By the end, you'll understand how to generate accurate weights, ensuring your runtime remains performant and secure.

"},{"location":"develop/blockchains/custom-blockchains/benchmarking/#the-case-for-benchmark-testing","title":"The Case for Benchmark Testing","text":"

Benchmark testing helps validate that the required execution time for different functions is within reasonable boundaries to ensure your blockchain runtime can handle transactions efficiently and securely. By accurately measuring the weight of each extrinsic, you can prevent service interruptions caused by computationally intensive calls that exceed block time limits. Without benchmark testing, runtime performance could be vulnerable to DoS attacks, where malicious users exploit functions with unoptimized weights.

Benchmark testing also ensures predictable transaction fees. Weights derived from benchmark tests accurately reflect the resource usage of function calls, allowing fair fee calculation. This approach discourages abuse while maintaining network reliability.

"},{"location":"develop/blockchains/custom-blockchains/benchmarking/#benchmark-testing-and-weight","title":"Benchmark Testing and Weight","text":"

In Polkadot SDK-based chains, weight quantifies the computational effort needed to process transactions. This weight includes factors such as:

  • Computational complexity
  • Storage complexity (proof size)
  • Database reads and writes
  • Hardware specifications

Benchmark testing uses real-world testing to simulate worst-case scenarios for extrinsics. The framework generates a linear model for weight calculation by running multiple iterations with varied parameters. These worst-case weights ensure blocks remain within execution limits, enabling the runtime to maintain throughput under varying loads. Excess fees can be refunded if a call uses fewer resources than expected, offering users a fair cost model.

Because weight is a generic unit of measurement based on computation time for a specific physical machine, the weight of any function can change based on the specifications of hardware used for benchmark testing. By modeling the expected weight of each runtime function, the blockchain can calculate the number of transactions or system-level calls it can execute within a certain period of time.

Within FRAME, each function call that is dispatched must have a #[pallet::weight] annotation that can return the expected weight for the worst-case scenario execution of that function given its inputs:

#[pallet::call_index(0)]\n#[pallet::weight(T::WeightInfo::do_something())]\npub fn do_something(origin: OriginFor<T>) -> DispatchResultWithPostInfo { Ok(().into()) }\n

The WeightInfo file is automatically generated during benchmark testing. Based on these tests, this file provides accurate weights for each extrinsic.
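
The shape of the generated file is roughly as follows (a sketch with illustrative names and values rather than real measurements; imports are trimmed):

pub trait WeightInfo {\n    fn do_something() -> Weight;\n}\n\n/// Weights produced by the benchmarking CLI on reference hardware.\npub struct SubstrateWeight<T>(core::marker::PhantomData<T>);\nimpl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {\n    fn do_something() -> Weight {\n        // Illustrative: ref_time picoseconds plus one database write\n        Weight::from_parts(9_000_000, 0)\n            .saturating_add(T::DbWeight::get().writes(1))\n    }\n}\n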

"},{"location":"develop/blockchains/custom-blockchains/benchmarking/#benchmark-process","title":"Benchmark Process","text":"

Benchmark testing a pallet involves the following steps:

  1. Creating a benchmarking.rs file within your pallet's structure
  2. Writing a benchmarking test for each extrinsic
  3. Executing the benchmarking tool to calculate weights based on performance metrics

The benchmarking tool runs multiple iterations to model worst-case execution times and determine the appropriate weight. By default, the benchmark testing pipeline is deactivated. To activate it, compile your runtime with the runtime-benchmarks feature flag.

"},{"location":"develop/blockchains/custom-blockchains/benchmarking/#prepare-your-environment","title":"Prepare Your Environment","text":"

Before writing benchmark tests, you need to ensure the frame-benchmarking crate is included in your pallet's Cargo.toml similar to the following:

Cargo.toml
frame-benchmarking = { version = \"37.0.0\", default-features = false }\n

You must also ensure that you add the runtime-benchmarks feature flag as follows under the [features] section of your pallet's Cargo.toml:

Cargo.toml
runtime-benchmarks = [\n  \"frame-benchmarking/runtime-benchmarks\",\n  \"frame-support/runtime-benchmarks\",\n  \"frame-system/runtime-benchmarks\",\n  \"sp-runtime/runtime-benchmarks\",\n]\n

Lastly, ensure that frame-benchmarking is included in std = []:

Cargo.toml
std = [\n  # ...\n  \"frame-benchmarking?/std\",\n  # ...\n]\n

Once complete, you have the required dependencies for writing benchmark tests for your pallet.

"},{"location":"develop/blockchains/custom-blockchains/benchmarking/#write-benchmark-tests","title":"Write Benchmark Tests","text":"

Create a benchmarking.rs file in your pallet's src/. Your directory structure should look similar to the following:

my-pallet/\n\u251c\u2500\u2500 src/\n\u2502   \u251c\u2500\u2500 lib.rs          # Main pallet implementation\n\u2502   \u2514\u2500\u2500 benchmarking.rs # Benchmarking\n\u2514\u2500\u2500 Cargo.toml\n

With the directory structure set, you can use the polkadot-sdk-parachain-template to get started as follows:

benchmarking.rs (starter template)
//! Benchmarking setup for pallet-template\n#![cfg(feature = \"runtime-benchmarks\")]\n\nuse super::*;\nuse frame_benchmarking::v2::*;\n\n#[benchmarks]\nmod benchmarks {\n    use super::*;\n    #[cfg(test)]\n    use crate::pallet::Pallet as Template;\n    use frame_system::RawOrigin;\n\n    #[benchmark]\n    fn do_something() {\n        let caller: T::AccountId = whitelisted_caller();\n        #[extrinsic_call]\n        do_something(RawOrigin::Signed(caller), 100);\n\n        assert_eq!(Something::<T>::get().map(|v| v.block_number), Some(100u32.into()));\n    }\n\n    #[benchmark]\n    fn cause_error() {\n        Something::<T>::put(CompositeStruct { block_number: 100u32.into() });\n        let caller: T::AccountId = whitelisted_caller();\n        #[extrinsic_call]\n        cause_error(RawOrigin::Signed(caller));\n\n        assert_eq!(Something::<T>::get().map(|v| v.block_number), Some(101u32.into()));\n    }\n\n    impl_benchmark_test_suite!(Template, crate::mock::new_test_ext(), crate::mock::Test);\n}\n

In your benchmarking tests, employ these best practices:

  • Write custom testing functions - the function do_something in the preceding example is a placeholder. Similar to writing unit tests, you must write custom functions to benchmark test your extrinsics. Access the mock runtime and use functions such as whitelisted_caller() to sign transactions and facilitate testing
  • Use the #[extrinsic_call] macro - this macro is used when calling the extrinsic itself and is a required part of a benchmark testing function. See the extrinsic_call Rust docs for more details
  • Validate extrinsic behavior - the assert_eq expression ensures that the extrinsic is working properly within the benchmark context
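
To capture the linear weight model described earlier, a benchmark function can also take a ranged parameter that the tool sweeps across its declared bounds. The following is a hedged sketch assuming a hypothetical do_many extrinsic that processes x items:

#[benchmark]\nfn do_many(x: Linear<1, 100>) {\n    let caller: T::AccountId = whitelisted_caller();\n    // Hypothetical extrinsic taking the number of items to process;\n    // the tool repeats this across the range of x to fit a per-item weight\n    #[extrinsic_call]\n    do_many(RawOrigin::Signed(caller), x);\n}\n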
"},{"location":"develop/blockchains/custom-blockchains/benchmarking/#add-benchmarks-to-runtime","title":"Add Benchmarks to Runtime","text":"

Before running the benchmarking tool, you must integrate benchmarks with your runtime as follows:

  1. Create a benchmarks.rs file. This file should contain the following macro, which registers all pallets for benchmarking, as well as their respective configurations: benchmarks.rs

    frame_benchmarking::define_benchmarks!(\n    [frame_system, SystemBench::<Runtime>]\n    [pallet_parachain_template, TemplatePallet]\n    [pallet_balances, Balances]\n    [pallet_session, SessionBench::<Runtime>]\n    [pallet_timestamp, Timestamp]\n    [pallet_message_queue, MessageQueue]\n    [pallet_sudo, Sudo]\n    [pallet_collator_selection, CollatorSelection]\n    [cumulus_pallet_parachain_system, ParachainSystem]\n    [cumulus_pallet_xcmp_queue, XcmpQueue]\n);\n
    For example, to register a pallet named pallet_parachain_template for benchmark testing, add it as follows: benchmarks.rs
    frame_benchmarking::define_benchmarks!(\n    [frame_system, SystemBench::<Runtime>]\n    [pallet_parachain_template, TemplatePallet]\n);\n

    Updating define_benchmarks! macro is required

    If the pallet isn't included in the define_benchmarks! macro, the CLI cannot access and benchmark it later.

  2. Navigate to the runtime's lib.rs file and add the import for benchmarks.rs as follows:

    lib.rs
    #[cfg(feature = \"runtime-benchmarks\")]\nmod benchmarks;\n

    The runtime-benchmarks feature gate ensures benchmark tests are isolated from production runtime code.

"},{"location":"develop/blockchains/custom-blockchains/benchmarking/#run-benchmarks","title":"Run Benchmarks","text":"

You can now compile your runtime with the runtime-benchmarks feature flag. This feature flag is crucial as the benchmarking tool will look for this feature being enabled to know when it should run benchmark tests. Follow these steps to compile the runtime with benchmarking enabled:

  1. Run build with the feature flag included

    cargo build --features runtime-benchmarks --release\n
  2. Once compiled, run the benchmarking tool to measure extrinsic weights

    ./target/release/INSERT_NODE_BINARY_NAME benchmark pallet \\\n--runtime INSERT_PATH_TO_WASM_RUNTIME \\\n--pallet INSERT_NAME_OF_PALLET \\\n--extrinsic '*' \\\n--steps 20 \\\n--repeat 10 \\\n--output weights.rs\n

    Flag definitions

    • --runtime - the path to your runtime's Wasm
    • --pallet - the name of the pallet you wish to benchmark. This pallet must be configured in your runtime and defined in define_benchmarks
    • --extrinsic - which extrinsic to test. Using '*' implies all extrinsics will be benchmarked
    • --output - where the output of the auto-generated weights will reside

The generated weights.rs file contains weight annotations for your extrinsics, ready to be added to your pallet. The output should be similar to the following. Some output is omitted for brevity:

./target/release/INSERT_NODE_BINARY_NAME benchmark pallet \\\n--runtime INSERT_PATH_TO_WASM_RUNTIME \\\n--pallet INSERT_PALLET_NAME \\\n--extrinsic '*' \\\n--steps 20 \\\n--repeat 10 \\\n--output weights.rs\n2024-10-28 11:07:25 Loading WASM from ./target/release/wbuild/educhain-runtime/educhain_runtime.wasm\n2024-10-28 11:07:26 Could not find genesis preset 'development'. Falling back to default.\n2024-10-28 11:07:26 assembling new collators for new session 0 at #0\n2024-10-28 11:07:26 assembling new collators for new session 1 at #0\n2024-10-28 11:07:26 Loading WASM from ./target/release/wbuild/educhain-runtime/educhain_runtime.wasm\nPallet: \"pallet_parachain_template\", Extrinsic: \"do_something\", Lowest values: [], Highest values: [], Steps: 20, Repeat: 10\n...\nCreated file: \"weights.rs\"\n2024-10-28 11:07:27 [ 0 % ] Starting benchmark: pallet_parachain_template::do_something\n2024-10-28 11:07:27 [ 50 % ] Starting benchmark: pallet_parachain_template::cause_error"},{"location":"develop/blockchains/custom-blockchains/benchmarking/#add-benchmark-weights-to-pallet","title":"Add Benchmark Weights to Pallet","text":"

Once the weights.rs is generated, you may add the generated weights to your pallet. It is common for weights.rs to become part of your pallet's root in src/:

use crate::weights::WeightInfo;\n\n/// Configure the pallet by specifying the parameters and types on which it depends.\n#[pallet::config]\npub trait Config: frame_system::Config {\n    /// A type representing the weights required by the dispatchables of this pallet.\n    type WeightInfo: WeightInfo;\n}\n

You can then reference these weights in the #[pallet::weight] annotation of each extrinsic via the Config trait:

#[pallet::call_index(0)]\n#[pallet::weight(T::WeightInfo::do_something())]\npub fn do_something(origin: OriginFor<T>) -> DispatchResultWithPostInfo { Ok(().into()) }\n
"},{"location":"develop/blockchains/custom-blockchains/benchmarking/#where-to-go-next","title":"Where to Go Next","text":"
  • View the Rust Docs for a more comprehensive, low-level view of the FRAME V2 Benchmarking Suite
  • Read the FRAME Benchmarking and Weights reference document, a concise guide which details how weights and benchmarking work
"},{"location":"develop/blockchains/custom-blockchains/frame-overview/","title":"FRAME Overview","text":""},{"location":"develop/blockchains/custom-blockchains/frame-overview/#introduction","title":"Introduction","text":"

The runtime is the core component of a Polkadot SDK-based blockchain, encapsulating essential business logic and serving as the state transition function. It is responsible for:

  • Defining storage items that represent the blockchain state
  • Specifying transactions that allow users to modify the state
  • Managing state changes in response to transactions

Polkadot SDK provides a comprehensive toolkit for constructing essential blockchain components, allowing developers to concentrate on crafting the specific runtime logic that defines their blockchain's unique set of use cases and capabilities.

FRAME (Framework for Runtime Aggregation of Modularized Entities) provides a robust collection of tools to facilitate Polkadot SDK-based blockchain development. It offers reusable modules and useful abstractions that streamline developers' development process. It consists of:

  • Pallets - modular components containing specific blockchain logic
  • Support libraries - tools and utilities to facilitate runtime development
"},{"location":"develop/blockchains/custom-blockchains/frame-overview/#frame-runtime-architecture","title":"FRAME Runtime Architecture","text":"

The following diagram illustrates how FRAME components integrate into the runtime:

All transactions sent to the runtime are handled by the frame_executive pallet, which dispatches them to the appropriate pallet for execution. These runtime modules contain the logic for specific blockchain features. The frame_system module provides core functions, while frame_support libraries offer useful tools to simplify pallet development. Together, these components form the backbone of a FRAME-based blockchain's runtime.
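
As an illustration, runtimes commonly wire these pieces together through an Executive type alias similar to the following sketch (the exact generic parameters vary by runtime):

/// Executes blocks, dispatching incoming extrinsics to the right pallet.\npub type Executive = frame_executive::Executive<\n    Runtime,\n    Block,\n    frame_system::ChainContext<Runtime>,\n    Runtime,\n    AllPalletsWithSystem,\n>;\n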

"},{"location":"develop/blockchains/custom-blockchains/frame-overview/#pallets","title":"Pallets","text":"

Pallets are modular components within the FRAME ecosystem that encapsulate specific blockchain functionalities. These modules offer customizable business logic for various use cases and features that can be integrated into a runtime.

Developers have the flexibility to implement any desired behavior in the core logic of the blockchain, such as:

  • Exposing new transactions
  • Storing information
  • Enforcing business rules

Pallets also include necessary wiring code to ensure proper integration and functionality within the runtime. FRAME provides a range of pre-built pallets for standard and common blockchain functionalities, including consensus algorithms, staking mechanisms, governance systems, and more. These pre-existing pallets serve as building blocks or templates, which developers can use as-is, modify, or reference when creating custom functionalities.

"},{"location":"develop/blockchains/custom-blockchains/frame-overview/#pallet-structure","title":"Pallet Structure","text":"

Polkadot SDK heavily utilizes Rust macros, allowing developers to focus on specific functional requirements when writing pallets instead of dealing with technicalities and scaffolding code.

A typical pallet skeleton looks like this:

pub use pallet::*;\n\n#[frame_support::pallet]\npub mod pallet {\n  use frame_support::pallet_prelude::*;\n  use frame_system::pallet_prelude::*;\n\n  #[pallet::pallet]\n  pub struct Pallet<T>(_);\n\n  #[pallet::config]  // snip\n  #[pallet::event]   // snip\n  #[pallet::error]   // snip\n  #[pallet::storage] // snip\n  #[pallet::call]    // snip\n}\n

All pallets, including custom ones, can implement these attribute macros:

  • #[frame_support::pallet] - marks the module as usable in the runtime
  • #[pallet::pallet] - applied to a structure used to retrieve module information easily
  • #[pallet::config] - defines the configuration for the pallets's data types
  • #[pallet::event] - defines events to provide additional information to users
  • #[pallet::error] - lists possible errors in an enum to be returned upon unsuccessful execution
  • #[pallet::storage] - defines elements to be persisted in storage
  • #[pallet::call] - defines functions exposed as transactions, allowing dispatch to the runtime

These macros are applied as attributes to Rust modules, functions, structures, enums, and types. They enable the pallet to be built and added to the runtime, exposing the custom logic to the outer world.

Note

The macros above are the core components of a pallet. For a comprehensive guide on these and additional macros, refer to the pallet_macros section in the Polkadot SDK documentation.

"},{"location":"develop/blockchains/custom-blockchains/frame-overview/#support-libraries","title":"Support Libraries","text":"

In addition to purpose-specific pallets, FRAME offers services and core libraries that facilitate composing and interacting with the runtime:

  • frame_system pallet - provides low-level types, storage, and functions for the runtime
  • frame_executive pallet - orchestrates the execution of incoming function calls to the respective pallets in the runtime
  • frame_support crate - a collection of Rust macros, types, traits, and modules that simplify the development of Substrate pallets
  • frame_benchmarking crate - contains common runtime patterns for benchmarking and testing purposes
"},{"location":"develop/blockchains/custom-blockchains/frame-overview/#compose-a-runtime-with-pallets","title":"Compose a Runtime with Pallets","text":"

The Polkadot SDK allows developers to construct a runtime by combining various pallets, both built-in and custom-made. This modular approach enables the creation of unique blockchain behaviors tailored to specific requirements.

The following diagram illustrates the process of selecting and combining FRAME pallets to compose a runtime:

This modular design allows developers to:

  • Rapidly prototype blockchain systems
  • Easily add or remove features by including or excluding pallets
  • Customize blockchain behavior without rebuilding core components
  • Leverage tested and optimized code from built-in pallets

For more detailed information on implementing this process, refer to the following sections:

  • Add a Pallet to Your Runtime
  • Create a Custom Pallet
"},{"location":"develop/blockchains/custom-blockchains/make-custom-pallet/","title":"Make a Custom Pallet","text":""},{"location":"develop/blockchains/custom-blockchains/make-custom-pallet/#introduction","title":"Introduction","text":"

FRAME provides a powerful set of tools for blockchain development, including a library of pre-built pallets. However, its true strength lies in the ability to create custom pallets tailored to your specific needs. This section will guide you through creating your own custom pallet, allowing you to extend your blockchain's functionality in unique ways.

To get the most out of this guide, ensure you're familiar with FRAME concepts.

Creating custom pallets offers several advantages over relying on pre-built pallets:

  • Flexibility - define runtime behavior that precisely matches your project requirements
  • Modularity - combine pre-built and custom pallets to achieve the desired blockchain functionality
  • Scalability - add or modify features as your project evolves

As you follow this guide to create your custom pallet, you'll work with the following key sections:

  1. Imports and dependencies - bring in necessary FRAME libraries and external modules
  2. Runtime configuration trait - specify the types and constants required for your pallet to interact with the runtime
  3. Runtime events - define events that your pallet can emit to communicate state changes
  4. Runtime errors - define the error types that can be returned from the function calls dispatched to the runtime
  5. Runtime storage - declare on-chain storage items for your pallet's state
  6. Extrinsics (function calls) - create callable functions that allow users to interact with your pallet and execute transactions

For additional macros you can include in a pallet, beyond those covered in this guide, refer to the pallet_macros section of the Polkadot SDK Docs.

"},{"location":"develop/blockchains/custom-blockchains/make-custom-pallet/#initial-setup","title":"Initial Setup","text":"

This section will guide you through the initial steps of creating the foundation for your custom FRAME pallet. You'll create a new Rust library project and set up the necessary dependencies.

  1. Create a new Rust library project using the following cargo command:

    cargo new --lib custom-pallet \\\n&& cd custom-pallet\n

    This command creates a new library project named custom-pallet and navigates into its directory.

  2. Configure the dependencies required for FRAME pallet development in the Cargo.toml file as follows:

    [package]\nname = \"custom-pallet\"\nversion = \"0.1.0\"\nedition = \"2021\"\n\n[dependencies]\nframe-support = { version = \"37.0.0\", default-features = false }\nframe-system = { version = \"37.0.0\", default-features = false }\ncodec = { version = \"3.6.12\", default-features = false, package = \"parity-scale-codec\", features = [\n  \"derive\",\n] }\nscale-info = { version = \"2.11.1\", default-features = false, features = [\n  \"derive\",\n] }\nsp-runtime = { version = \"39.0.0\", default-features = false }\n\n[features]\ndefault = [\"std\"]\nstd = [\n  \"frame-support/std\",\n  \"frame-system/std\",\n  \"codec/std\",\n  \"scale-info/std\",\n  \"sp-runtime/std\",\n]\n

    Note

    Proper version management is crucial for ensuring compatibility and reducing potential conflicts in your project. Carefully select the versions of the packages according to your project's specific requirements:

    • When developing for a specific Polkadot SDK runtime, ensure that your pallet's dependency versions match those of the target runtime
    • If you're creating this pallet within a Polkadot SDK workspace:

      • Define the actual versions in the root Cargo.toml file
      • Use workspace inheritance in your pallet's Cargo.toml to maintain consistency across your project
    • Regularly check for updates to FRAME and Polkadot SDK dependencies to benefit from the latest features, performance improvements, and security patches

    For detailed information on workspace inheritance and how to properly integrate your pallet with the runtime, refer to the Add an Existing Pallet to the Runtime page.

  3. Initialize the pallet structure by replacing the contents of src/lib.rs with the following scaffold code:

    pub use pallet::*;\n\n#[frame_support::pallet]\npub mod pallet {\n    use frame_support::pallet_prelude::*;\n    use frame_system::pallet_prelude::*;\n\n    #[pallet::pallet]\n    pub struct Pallet<T>(_);\n\n    #[pallet::config]  // snip\n    #[pallet::event]   // snip\n    #[pallet::error]   // snip\n    #[pallet::storage] // snip\n    #[pallet::call]    // snip\n}\n

    With this scaffold in place, you're ready to start implementing your custom pallet's specific logic and features. The subsequent sections of this guide will walk you through populating each of these components with the necessary code for your pallet's functionality.

"},{"location":"develop/blockchains/custom-blockchains/make-custom-pallet/#pallet-configuration","title":"Pallet Configuration","text":"

Every pallet includes a Rust trait called\u00a0Config, which exposes configurable options and links your pallet to other parts of the runtime. All types and constants the pallet depends on must be declared within this trait. These types are defined generically and made concrete when the pallet is instantiated in the runtime/src/lib.rs file of your blockchain.

In this step, you'll only configure the common types used by all pallets:

  • RuntimeEvent - since this pallet emits events, the runtime event type is required to handle them. This ensures that events generated by the pallet can be correctly processed and interpreted by the runtime
  • WeightInfo - this type defines the weights associated with the pallet's callable functions (also known as dispatchables). Weights help measure the computational cost of executing these functions. However, the WeightInfo type will be left unconfigured since setting up custom weights is outside the scope of this guide

Replace the line containing the #[pallet::config] macro with the following code block:

#[pallet::config]\npub trait Config: frame_system::Config {\n    /// The overarching runtime event type.\n    type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;\n    /// A type representing the weights required by the dispatchables of this pallet.\n    type WeightInfo;\n}\n
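When the pallet is later instantiated in the runtime, these generic types are made concrete. As a hedged sketch (the runtime and pallet names are illustrative), the implementation in runtime/src/lib.rs typically looks like this:

impl custom_pallet::Config for Runtime {
    // Map the pallet's events to the runtime-wide event enum.
    type RuntimeEvent = RuntimeEvent;
    // The unit type satisfies the unconstrained WeightInfo associated type.
    type WeightInfo = ();
}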
"},{"location":"develop/blockchains/custom-blockchains/make-custom-pallet/#pallet-events","title":"Pallet Events","text":"

After configuring the pallet to emit events, the next step is to define the events that can be triggered by functions within the pallet. Events provide a straightforward way to inform external entities, such as dApps, chain explorers, or users, that a significant change has occurred in the runtime. In a FRAME pallet, the details of each event and its parameters are included in the node\u2019s metadata, making them accessible to external tools and interfaces.

The generate_deposit macro generates a deposit_event function on the Pallet, which converts the pallet\u2019s event type into the RuntimeEvent (as specified in the Config trait) and deposits it using frame_system::Pallet::deposit_event.
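Roughly, the generated function behaves like the following simplified sketch (not the exact macro expansion):

impl<T: Config> Pallet<T> {
    pub(super) fn deposit_event(event: Event<T>) {
        // Convert the pallet's event into the runtime-wide event type...
        let event = <T as Config>::RuntimeEvent::from(event);
        // ...and hand it to frame_system, which records it for the current block.
        frame_system::Pallet::<T>::deposit_event(event);
    }
}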

This step adds an event called SomethingStored, which is triggered when a user successfully stores a value in the pallet. The event records both the value and the account that performed the action.

To define events, replace the #[pallet::event] line with the following code block:

#[pallet::event]\n#[pallet::generate_deposit(pub(super) fn deposit_event)]\npub enum Event<T: Config> {\n    /// A user has successfully set a new value.\n    SomethingStored {\n        /// The new value set.\n        something: u32,\n        /// The account who set the new value.\n        who: T::AccountId,\n    },\n}\n
"},{"location":"develop/blockchains/custom-blockchains/make-custom-pallet/#pallet-errors","title":"Pallet Errors","text":"

While events signal the successful completion of calls, errors indicate when and why a call has failed. It's essential to use informative names for errors to clearly communicate the cause of failure. Like events, error documentation is included in the node's metadata, so providing helpful descriptions is crucial.

Errors are defined as an enum named Error with a generic type. Variants can have fields or be fieldless. Any field type specified in the error must implement the TypeInfo trait, and the encoded size of each field should be as small as possible. Runtime errors can be up to 4 bytes in size, allowing the return of additional information when needed.

This step defines two basic errors: one for handling cases where no value has been set and another for managing arithmetic overflow.

To define errors, replace the #[pallet::error] line with the following code block:

#[pallet::error]\npub enum Error<T> {\n    /// The value retrieved was `None` as no value was previously set.\n    NoneValue,\n    /// There was an attempt to increment the value in storage over `u32::MAX`.\n    StorageOverflow,\n}\n
"},{"location":"develop/blockchains/custom-blockchains/make-custom-pallet/#pallet-storage","title":"Pallet Storage","text":"

To persist and store state/data within the pallet (and subsequently, the blockchain you are building), the #[pallet::storage] macro is used. This macro allows the definition of abstract storage within the runtime and sets metadata for that storage. It can be applied multiple times to define different storage items. Several types are available for defining storage, which you can explore in the Polkadot SDK documentation.

This step adds a simple storage item, Something, which stores a single u32 value in the pallet's runtime storage.

To define storage, replace the #[pallet::storage] line with the following code block:

#[pallet::storage]\npub type Something<T> = StorageValue<_, u32>;\n
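Storage is not limited to single values. As an illustrative sketch (this item is not part of the guide's pallet), a StorageMap can keep a separate entry per account:

/// Illustrative only: stores an independent u32 value for each account.
#[pallet::storage]
pub type SomethingByAccount<T: Config> =
    StorageMap<_, Blake2_128Concat, T::AccountId, u32>;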
"},{"location":"develop/blockchains/custom-blockchains/make-custom-pallet/#pallet-dispatchable-extrinsics","title":"Pallet Dispatchable Extrinsics","text":"

Dispatchable functions enable users to interact with the pallet and trigger state changes. These functions are represented as \"extrinsics,\" which are similar to transactions. They must return a DispatchResult and be annotated with a weight and a call index.

The #[pallet::call_index] macro is used to explicitly define an index for calls in the Call enum. This is useful for maintaining backward compatibility in the event of new dispatchables being introduced, as changing the order of dispatchables would otherwise alter their index.

The #[pallet::weight] macro assigns a weight to each call, determining its execution cost.
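To see why explicit indices matter, the following simplified sketch shows roughly what the macro derives for the two calls defined below (not the exact expansion); each variant keeps its index even if the source order later changes:

// Simplified sketch of the derived Call enum.
pub enum Call<T: Config> {
    // #[codec(index = 0)]
    do_something { something: u32 },
    // #[codec(index = 1)]
    cause_error {},
}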

This section adds two dispatchable functions:

  • do_something - takes a single u32 value, stores it in the pallet's storage, and emits an event
  • cause_error - checks whether a value exists in storage. If a value is found, it is incremented and stored back. If no value is present, or if the increment overflows, a custom error is returned

To implement these calls, replace the #[pallet::call] line with the following code block:

#[pallet::call]\nimpl<T: Config> Pallet<T> {\n    #[pallet::call_index(0)]\n    #[pallet::weight(Weight::default())]\n    pub fn do_something(origin: OriginFor<T>, something: u32) -> DispatchResult {\n        // Check that the extrinsic was signed and get the signer.\n        let who = ensure_signed(origin)?;\n\n        // Update storage.\n        Something::<T>::put(something);\n\n        // Emit an event.\n        Self::deposit_event(Event::SomethingStored { something, who });\n\n        // Return a successful `DispatchResult`\n        Ok(())\n    }\n\n    #[pallet::call_index(1)]\n    #[pallet::weight(Weight::default())]\n    pub fn cause_error(origin: OriginFor<T>) -> DispatchResult {\n        let _who = ensure_signed(origin)?;\n\n        // Read a value from storage.\n        match Something::<T>::get() {\n            // Return an error if the value has not been set.\n            None => Err(Error::<T>::NoneValue.into()),\n            Some(old) => {\n                // Increment the value read from storage. This will cause an error in the event\n                // of overflow.\n                let new = old.checked_add(1).ok_or(Error::<T>::StorageOverflow)?;\n                // Update the value in storage with the incremented result.\n                Something::<T>::put(new);\n                Ok(())\n            },\n        }\n    }\n}\n
"},{"location":"develop/blockchains/custom-blockchains/make-custom-pallet/#pallet-implementation-overview","title":"Pallet Implementation Overview","text":"

After following all the previous steps, the pallet is now fully implemented. Below is the complete code, combining the configuration, events, errors, storage, and dispatchable functions:

Code
pub use pallet::*;\n\n#[frame_support::pallet]\npub mod pallet {\n    use frame_support::pallet_prelude::*;\n    use frame_system::pallet_prelude::*;\n\n    #[pallet::pallet]\n    pub struct Pallet<T>(_);\n\n    #[pallet::config]\n    pub trait Config: frame_system::Config {\n        /// The overarching runtime event type.\n        type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;\n        /// A type representing the weights required by the dispatchables of this pallet.\n        type WeightInfo;\n    }\n\n    #[pallet::event]\n    #[pallet::generate_deposit(pub(super) fn deposit_event)]\n    pub enum Event<T: Config> {\n        /// A user has successfully set a new value.\n        SomethingStored {\n            /// The new value set.\n            something: u32,\n            /// The account who set the new value.\n            who: T::AccountId,\n        },\n    }\n\n    #[pallet::error]\n    pub enum Error<T> {\n        /// The value retrieved was `None` as no value was previously set.\n        NoneValue,\n        /// There was an attempt to increment the value in storage over `u32::MAX`.\n        StorageOverflow,\n    }\n\n    #[pallet::storage]\n    pub type Something<T> = StorageValue<_, u32>;\n\n    #[pallet::call]\n    impl<T: Config> Pallet<T> {\n        #[pallet::call_index(0)]\n        #[pallet::weight(Weight::default())]\n        pub fn do_something(origin: OriginFor<T>, something: u32) -> DispatchResult {\n            // Check that the extrinsic was signed and get the signer.\n            let who = ensure_signed(origin)?;\n\n            // Update storage.\n            Something::<T>::put(something);\n\n            // Emit an event.\n            Self::deposit_event(Event::SomethingStored { something, who });\n\n            // Return a successful `DispatchResult`\n            Ok(())\n        }\n\n        #[pallet::call_index(1)]\n        #[pallet::weight(Weight::default())]\n        pub fn cause_error(origin: OriginFor<T>) -> DispatchResult {\n            let _who = ensure_signed(origin)?;\n\n            // Read a value from storage.\n            match Something::<T>::get() {\n                // Return an error if the value has not been set.\n                None => Err(Error::<T>::NoneValue.into()),\n                Some(old) => {\n                    // Increment the value read from storage. This will cause an error in the event\n                    // of overflow.\n                    let new = old.checked_add(1).ok_or(Error::<T>::StorageOverflow)?;\n                    // Update the value in storage with the incremented result.\n                    Something::<T>::put(new);\n                    Ok(())\n                },\n            }\n        }\n    }\n}\n
"},{"location":"develop/blockchains/custom-blockchains/make-custom-pallet/#where-to-go-next","title":"Where to Go Next","text":"

With the pallet implemented, the next steps involve ensuring its reliability and performance before integrating it into a runtime. Check the following sections:

  • Testing - learn how to effectively test the functionality and reliability of your pallet to ensure it behaves as expected

  • Benchmarking - explore methods to measure the performance and execution cost of your pallet

  • Add a Pallet to the Runtime - follow this guide to include your pallet in a Polkadot SDK-based runtime, making it ready for use in your blockchain

"},{"location":"develop/blockchains/custom-blockchains/overview/","title":"Overview","text":""},{"location":"develop/blockchains/custom-blockchains/overview/#introduction","title":"Introduction","text":"

The runtime is the heart of any Polkadot SDK-based blockchain, handling the essential logic that governs state changes and transaction processing. With Polkadot SDK\u2019s FRAME (Framework for Runtime Aggregation of Modularized Entities), developers gain access to a powerful suite of tools for building custom blockchain runtimes. FRAME offers a modular architecture, featuring reusable pallets and support libraries, to streamline development.

This guide provides an overview of FRAME, its core components like pallets and system libraries, and demonstrates how to compose a runtime tailored to your specific blockchain use case. Whether you\u2019re integrating pre-built modules or designing custom logic, FRAME equips you with the tools to create scalable, feature-rich blockchains.

"},{"location":"develop/blockchains/custom-blockchains/overview/#frame-runtime-architecture","title":"FRAME Runtime Architecture","text":"

The following diagram illustrates how FRAME components integrate into the runtime:

All transactions sent to the runtime are handled by the frame_executive pallet, which dispatches them to the appropriate pallet for execution. These runtime modules contain the logic for specific blockchain features. The frame_system module provides core functions, while frame_support libraries offer useful tools to simplify pallet development. Together, these components form the backbone of a FRAME-based blockchain's runtime.
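As a hedged sketch of this wiring (type names follow common template conventions and may differ per runtime), a runtime typically declares its executive like this:

// Executive dispatches extrinsics to the pallets collected in
// AllPalletsWithSystem, a type generated by construct_runtime!.
pub type Executive = frame_executive::Executive<
    Runtime,
    Block,
    frame_system::ChainContext<Runtime>,
    Runtime,
    AllPalletsWithSystem,
>;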

"},{"location":"develop/blockchains/custom-blockchains/overview/#pallets","title":"Pallets","text":"

Pallets are modular components within the FRAME ecosystem that encapsulate specific blockchain functionalities. These modules offer customizable business logic for various use cases and features that can be integrated into a runtime.

Developers have the flexibility to implement any desired behavior in the core logic of the blockchain, such as:

  • Exposing new transactions
  • Storing information
  • Enforcing business rules

Pallets also include necessary wiring code to ensure proper integration and functionality within the runtime. FRAME provides a range of pre-built pallets for standard and common blockchain functionalities, including consensus algorithms, staking mechanisms, governance systems, and more. These pre-existing pallets serve as building blocks or templates, which developers can use as-is, modify, or reference when creating custom functionalities.

"},{"location":"develop/blockchains/custom-blockchains/overview/#pallet-structure","title":"Pallet Structure","text":"

Polkadot SDK heavily utilizes Rust macros, allowing developers to focus on specific functional requirements when writing pallets instead of dealing with technicalities and scaffolding code.

A typical pallet skeleton looks like this:

pub use pallet::*;\n\n#[frame_support::pallet]\npub mod pallet {\n  use frame_support::pallet_prelude::*;\n  use frame_system::pallet_prelude::*;\n\n  #[pallet::pallet]\n  pub struct Pallet<T>(_);\n\n  #[pallet::config]  // snip\n  #[pallet::event]   // snip\n  #[pallet::error]   // snip\n  #[pallet::storage] // snip\n  #[pallet::call]    // snip\n}\n

All pallets, including custom ones, can implement these attribute macros:

  • #[frame_support::pallet] - marks the module as usable in the runtime
  • #[pallet::pallet] - applied to a structure used to retrieve module information easily
  • #[pallet::config] - defines the configuration for the pallet's data types
  • #[pallet::event] - defines events to provide additional information to users
  • #[pallet::error] - lists possible errors in an enum to be returned upon unsuccessful execution
  • #[pallet::storage] - defines elements to be persisted in storage
  • #[pallet::call] - defines functions exposed as transactions, allowing dispatch to the runtime

These macros are applied as attributes to Rust modules, functions, structures, enums, and types. They enable the pallet to be built and added to the runtime, exposing the custom logic to the outer world.

Note

The macros above are the core components of a pallet. For a comprehensive guide on these and additional macros, refer to the pallet_macros section in the Polkadot SDK documentation.

"},{"location":"develop/blockchains/custom-blockchains/overview/#support-libraries","title":"Support Libraries","text":"

In addition to purpose-specific pallets, FRAME offers services and core libraries that facilitate composing and interacting with the runtime:

  • frame_system pallet - provides low-level types, storage, and functions for the runtime
  • frame_executive pallet - orchestrates the execution of incoming function calls to the respective pallets in the runtime
  • frame_support crate - provides a collection of Rust macros, types, traits, and modules that simplify the development of Substrate pallets
  • frame_benchmarking crate - contains common runtime patterns for benchmarking and testing purposes
"},{"location":"develop/blockchains/custom-blockchains/overview/#compose-a-runtime-with-pallets","title":"Compose a Runtime with Pallets","text":"

The Polkadot SDK allows developers to construct a runtime by combining various pallets, both built-in and custom-made. This modular approach enables the creation of unique blockchain behaviors tailored to specific requirements.

The following diagram illustrates the process of selecting and combining FRAME pallets to compose a runtime:

This modular design allows developers to:

  • Rapidly prototype blockchain systems
  • Easily add or remove features by including or excluding pallets
  • Customize blockchain behavior without rebuilding core components
  • Leverage tested and optimized code from built-in pallets

For more detailed information on implementing this process, refer to the following sections:

  • Add a Pallet to Your Runtime
  • Create a Custom Pallet
"},{"location":"develop/blockchains/custom-blockchains/pallet-testing/","title":"Pallet Testing","text":""},{"location":"develop/blockchains/custom-blockchains/pallet-testing/#introduction","title":"Introduction","text":"

Unit testing in the Polkadot SDK helps ensure that the functions provided by a pallet behave as expected. It also confirms that data and events associated with a pallet are processed correctly during interactions. The Polkadot SDK offers a set of APIs to create a test environment that can simulate a runtime and mock transaction execution for both extrinsics and queries.

This guide will explore how to mock a runtime and test a pallet. For that, Polkadot SDK pallets use the mock.rs and test.rs files as a basis for testing pallet processes. The mock.rs file defines the mock runtime used for testing, and the test.rs file contains the unit test functions that check the functionality of isolated pieces of code within the pallet.

"},{"location":"develop/blockchains/custom-blockchains/pallet-testing/#mocking-the-runtime","title":"Mocking the Runtime","text":"

To test a pallet, a mock runtime is created to simulate the behavior of the blockchain environment where the pallet will be included. This involves defining a minimal runtime configuration that provides only the dependencies required by the tested pallet.

For a complete example of a mocked runtime, check out the mock.rs file in the Solochain Template.

A mock.rs file defines the mock runtime in a typical Polkadot SDK project. It includes the elements described below.

"},{"location":"develop/blockchains/custom-blockchains/pallet-testing/#runtime-composition","title":"Runtime Composition","text":"

This section describes the pallets included for the mocked runtime. For example, the following code snippet shows how to build a mocked runtime called Test that consists of the frame_system pallet and the pallet_template:

frame_support::construct_runtime!(\n    pub enum Test {\n        System: frame_system,\n        TemplateModule: pallet_template,\n    }\n);\n
"},{"location":"develop/blockchains/custom-blockchains/pallet-testing/#pallets-configurations","title":"Pallets Configurations","text":"

This section outlines the types linked to each pallet in the mocked runtime. For testing, many of these types are simple or primitive, replacing more complex, abstract types to streamline the process.

impl frame_system::Config for Test {\n    ...\n    type Block = Block;\n    type Nonce = u64;\n    type Hash = H256;\n    type Hashing = BlakeTwo256;\n    type AccountId = u64;\n    ...\n}\n

A configuration implementation must be provided for each pallet included in the mocked runtime.
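For instance, assuming the Config trait from the custom pallet guide, the template pallet's own configuration in the mock can be as small as:

impl pallet_template::Config for Test {
    type RuntimeEvent = RuntimeEvent;
    type WeightInfo = ();
}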

Note

Types are simplified to streamline the testing process. For example, AccountId is u64, meaning a valid account address can be an unsigned integer:

let alice_account: u64 = 1;\n
"},{"location":"develop/blockchains/custom-blockchains/pallet-testing/#genesis-config-initialization","title":"Genesis Config Initialization","text":"

To initialize the genesis storage according to the mocked runtime, the following function can be used:

pub fn new_test_ext() -> sp_io::TestExternalities {\n    frame_system::GenesisConfig::<Test>::default()\n        .build_storage()\n        .unwrap()\n        .into()\n}\n
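A common extension (a hedged sketch, not required by the template) is to advance past the genesis block inside the helper so that events emitted during tests are recorded:

pub fn new_test_ext() -> sp_io::TestExternalities {
    let mut ext: sp_io::TestExternalities = frame_system::GenesisConfig::<Test>::default()
        .build_storage()
        .unwrap()
        .into();
    // Events are not emitted on block 0, so start tests at block 1.
    ext.execute_with(|| System::set_block_number(1));
    ext
}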
"},{"location":"develop/blockchains/custom-blockchains/pallet-testing/#pallet-unit-testing","title":"Pallet Unit Testing","text":"

Once the mock runtime is in place, the next step is to write unit tests that evaluate the functionality of your pallet. Unit tests allow you to test specific pallet features in isolation, ensuring that each function behaves correctly under various conditions. These tests typically reside in your pallet's test.rs file.

"},{"location":"develop/blockchains/custom-blockchains/pallet-testing/#writing-unit-tests","title":"Writing Unit Tests","text":"

Unit tests in the Polkadot SDK use the Rust testing framework, and the mock runtime you\u2019ve defined earlier will serve as the test environment. Below are the typical steps involved in writing unit tests for a pallet.

"},{"location":"develop/blockchains/custom-blockchains/pallet-testing/#test-initialization","title":"Test Initialization","text":"

Each test starts by initializing the runtime environment, typically using the new_test_ext() function, which sets up the mock storage and environment.

#[test]\nfn test_pallet_functionality() {\n    new_test_ext().execute_with(|| {\n        // Test logic goes here\n    });\n}\n
"},{"location":"develop/blockchains/custom-blockchains/pallet-testing/#function-call-testing","title":"Function Call Testing","text":"

Call the pallet\u2019s extrinsics or functions to simulate user interaction or internal logic. Use the assert_ok! macro to check for successful execution and assert_err! to verify that errors are handled properly.

#[test]\nfn it_works_for_valid_input() {\n    new_test_ext().execute_with(|| {\n        // Call an extrinsic or function\n        assert_ok!(TemplateModule::some_function(RuntimeOrigin::signed(1), valid_param));\n    });\n}\n\n#[test]\nfn it_fails_for_invalid_input() {\n    new_test_ext().execute_with(|| {\n        // Call an extrinsic with invalid input and expect an error\n        assert_err!(\n            TemplateModule::some_function(RuntimeOrigin::signed(1), invalid_param),\n            Error::<Test>::InvalidInput\n        );\n    });\n}\n
"},{"location":"develop/blockchains/custom-blockchains/pallet-testing/#storage-testing","title":"Storage Testing","text":"

After calling a function or extrinsic in your pallet, it's important to verify that the state changes in the pallet's storage match the expected behavior. This ensures that data is updated correctly based on the actions taken.

The following example shows how to test the storage behavior before and after the function call:

#[test]\nfn test_storage_update_on_extrinsic_call() {\n    new_test_ext().execute_with(|| {\n        // Check the initial storage state (before the call)\n        assert_eq!(Something::<Test>::get(), None);\n\n        // Dispatch a signed extrinsic, which modifies storage\n        assert_ok!(TemplateModule::do_something(RuntimeOrigin::signed(1), 42));\n\n        // Validate that the storage has been updated as expected (after the call)\n        assert_eq!(Something::<Test>::get(), Some(42));\n    });\n}\n
"},{"location":"develop/blockchains/custom-blockchains/pallet-testing/#event-testing","title":"Event Testing","text":"

It\u2019s also crucial to test the events that your pallet emits during execution. By default, events generated in a pallet using the #[pallet::generate_deposit] macro are stored under the system's event storage key (system/events) as EventRecord entries. These can be accessed using System::events() or verified with specific helper methods provided by the system pallet, such as assert_has_event and assert_last_event.

Here\u2019s an example of testing events in a mock runtime:

#[test]\nfn it_emits_events_on_success() {\n    new_test_ext().execute_with(|| {\n        // Call an extrinsic or function\n        assert_ok!(TemplateModule::some_function(RuntimeOrigin::signed(1), valid_param));\n\n        // Verify that the expected event was emitted\n        assert!(System::events().iter().any(|record| {\n            record.event == RuntimeEvent::TemplateModule(TemplateEvent::SomeEvent)\n        }));\n    });\n}\n

Some key considerations are:

  • Block number - events are not emitted on the genesis block, so you need to set the block number using System::set_block_number() to ensure events are triggered
  • Converting events - use .into() when instantiating your pallet\u2019s event to convert it into a generic event type, as required by the system\u2019s event storage
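The following sketch combines both considerations, assuming the do_something call and SomethingStored event from the custom pallet guide are in scope:

#[test]
fn it_emits_something_stored() {
    new_test_ext().execute_with(|| {
        // Events are not emitted on the genesis block, so advance first.
        System::set_block_number(1);

        assert_ok!(TemplateModule::do_something(RuntimeOrigin::signed(1), 42));

        // `.into()` converts the pallet event into the runtime event type.
        System::assert_last_event(Event::SomethingStored { something: 42, who: 1 }.into());
    });
}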
"},{"location":"develop/blockchains/custom-blockchains/pallet-testing/#where-to-go-next","title":"Where to Go Next","text":"
  • Dive into the full implementation of the mock.rs and test.rs files in the Solochain Template
  • To further optimize performance, check out the Benchmarking documentation to learn how to evaluate and improve the efficiency of your pallet operations
"},{"location":"develop/blockchains/deployment/build-deterministic-runtime/","title":"Build a Deterministic Runtime","text":""},{"location":"develop/blockchains/deployment/build-deterministic-runtime/#introduction","title":"Introduction","text":"

By default, the Rust compiler produces optimized Wasm binaries. These binaries are suitable for working in an isolated environment, such as local development. However, the Wasm binaries the compiler builds by default aren't guaranteed to be deterministically reproducible. Each time the compiler generates the Wasm runtime, it might produce a slightly different Wasm byte code. This is problematic in a blockchain network where all nodes must use exactly the same raw chain specification file.

Working with builds that aren't guaranteed to be deterministically reproducible can cause other problems, too. For example, when automating the build process for a blockchain, it is ideal that the same source code always produces the same bytecode. Without a deterministic build, compiling the Wasm runtime on every push would produce inconsistent and unpredictable results, making it difficult to integrate with automation and likely to continuously break a CI/CD pipeline. Deterministic builds\u2014code that always compiles to exactly the same bytecode\u2014ensure that the Wasm runtime can be inspected, audited, and independently verified.

"},{"location":"develop/blockchains/deployment/build-deterministic-runtime/#prerequisites","title":"Prerequisites","text":"

Before you begin, ensure you have Docker installed.

"},{"location":"develop/blockchains/deployment/build-deterministic-runtime/#tooling-for-wasm-runtime","title":"Tooling for Wasm Runtime","text":"

To compile the Wasm runtime deterministically, the same tooling that produces the runtime for Polkadot, Kusama, and other Polkadot SDK-based chains can be used. This tooling, referred to collectively as the Substrate Runtime Toolbox or\u00a0srtool, ensures that the same source code consistently compiles to an identical Wasm blob.

The core component of srtool is a Docker container, which runs from the srtool Docker image. The name of the srtool Docker image specifies the version of the Rust compiler used to compile the code included in the image. For example, the image paritytech/srtool:1.62.0 indicates that the code in the image was compiled with version 1.62.0 of the rustc compiler.

"},{"location":"develop/blockchains/deployment/build-deterministic-runtime/#working-with-the-docker-container","title":"Working with the Docker Container","text":"

The srtool-cli package is a command-line utility written in Rust that installs an executable program called srtool. This program simplifies the interactions with the srtool Docker container.

Over time, the tooling around the srtool Docker image has expanded to include the following tools and helper programs:

  • srtool-cli - provides a command-line interface to pull the srtool Docker image, get information about the image and tooling used to interact with it, and build the runtime using the srtool Docker container
  • subwasm - provides command-line options for working with the metadata and Wasm runtime built using srtool. The subwasm program is also used internally to perform tasks in the srtool image
  • srtool-actions - provides GitHub actions to integrate builds produced using the srtool image with your GitHub CI/CD pipelines
  • srtool-app - provides a simple graphical user interface for building the runtime using the srtool Docker image
"},{"location":"develop/blockchains/deployment/build-deterministic-runtime/#prepare-the-environment","title":"Prepare the Environment","text":"

It is recommended to install the srtool-cli program to work with the Docker image using a simple command-line interface.

To prepare the environment:

  1. Verify that Docker is installed by running the following command:

    docker --version\n

    If Docker is installed, the command will display version information:

    docker --version Docker version 20.10.17, build 100c701

  2. Install the srtool command-line interface by running the following command:

    cargo install --git https://github.com/chevdor/srtool-cli\n
  3. View usage information for the srtool command-line interface by running the following command:

    srtool help\n
  4. Download the latest srtool Docker image by running the following command:

    srtool pull\n
"},{"location":"develop/blockchains/deployment/build-deterministic-runtime/#start-a-deterministic-build","title":"Start a Deterministic Build","text":"

After preparing the environment, the Wasm runtime can be compiled using the\u00a0srtool\u00a0Docker image.

To build the runtime, you need to open your Polkadot SDK-based project in a terminal shell and run the following command:

srtool build --app --package INSERT_RUNTIME_PACKAGE_NAME --runtime-dir INSERT_RUNTIME_PATH \n
  • The name specified for the --package should be the name defined in the Cargo.toml file for the runtime
  • The path specified for the --runtime-dir should be the directory containing the Cargo.toml file for the runtime. For example:

    node/\npallets/\nruntime/ # INSERT_RUNTIME_PATH should be the path to this directory\n\u251c\u2500\u2500lib.rs\n\u2514\u2500\u2500Cargo.toml\n...\n
  • If the Cargo.toml file for the runtime is located in a runtime subdirectory, for example, runtime/kusama, the --runtime-dir parameter can be omitted

"},{"location":"develop/blockchains/deployment/build-deterministic-runtime/#use-srtool-in-github-actions","title":"Use srtool in GitHub Actions","text":"

To add a GitHub workflow for building the runtime:

  1. Create a .github/workflows directory in the chain's directory
  2. In the .github/workflows directory, click Add file, then select Create new file
  3. Copy the sample GitHub action from the basic.yml example in the srtool-actions repository and paste it into the file you created in the previous step

    basic.yml
    name: Srtool build\n\non: push\n\njobs:\n  srtool:\n    runs-on: ubuntu-latest\n    strategy:\n      matrix:\n        chain: [\"asset-hub-kusama\", \"asset-hub-westend\"]\n    steps:\n      - uses: actions/checkout@v3\n      - name: Srtool build\n        id: srtool_build\n        uses: chevdor/srtool-actions@v0.8.0\n        with:\n          chain: ${{ matrix.chain }}\n          runtime_dir: polkadot-parachains/${{ matrix.chain }}-runtime\n      - name: Summary\n        run: |\n          echo '${{ steps.srtool_build.outputs.json }}' | jq . > ${{ matrix.chain }}-srtool-digest.json\n          cat ${{ matrix.chain }}-srtool-digest.json\n          echo \"Runtime location: ${{ steps.srtool_build.outputs.wasm }}\"\n
  4. Modify the settings in the sample action

    For example, modify the following settings:

    • The name of the chain
    • The name of the runtime package
    • The location of the runtime
  5. Type a name for the action file and commit

"},{"location":"develop/blockchains/deployment/build-deterministic-runtime/#use-the-srtool-image-via-docker-hub","title":"Use the srtool Image via Docker Hub","text":"

If utilizing srtool-cli or srtool-app isn't an option, the paritytech/srtool container image can be used directly via Docker Hub.

To pull the image from Docker Hub:

  1. Sign in to Docker Hub
  2. Type paritytech/srtool in the search field and press enter
  3. Click paritytech/srtool, then click Tags
  4. Copy the command for the image you want to pull
  5. Open a terminal shell on your local computer
  6. Paste the command you copied from the Docker Hub. For example, you might run a command similar to the following, which downloads and unpacks the image:

    docker pull paritytech/srtool:1.62.0\n
"},{"location":"develop/blockchains/deployment/build-deterministic-runtime/#naming-convention-for-images","title":"Naming Convention for Images","text":"

Keep in mind that there is no latest tag for the srtool image. Ensure that the image selected is compatible with the locally available version of the Rust compiler.

The naming convention for paritytech/srtool Docker images specifies the version of the Rust compiler used to compile the code included in the image. Some images specify both a compiler version and the version of the build script used. For example, an image named paritytech/srtool:1.62.0-0.9.19 was compiled with version 1.62.0 of the rustc compiler and version 0.9.19 of the build script. Images that only specify the compiler version always contain the software's latest version.

"},{"location":"develop/blockchains/deployment/generate-chain-specs/","title":"Generate Chain Specs","text":""},{"location":"develop/blockchains/deployment/generate-chain-specs/#introduction","title":"Introduction","text":"

A chain specification collects information that describes a Polkadot SDK-based network. A chain specification is a crucial parameter when starting a node, providing the genesis configurations, bootnodes, and other parameters relating to that particular network. It identifies the network a blockchain node connects to, the other nodes it initially communicates with, and the initial state that nodes must agree on to produce blocks.

The chain specification is defined using the ChainSpec struct. This struct separates the information required for a chain into two parts:

  • Client specification - contains information the node uses to communicate with network participants and send data to telemetry endpoints. Many of these chain specification settings can be overridden by command-line options when starting a node or can be changed after the blockchain has started

  • Initial genesis state - agreed upon by all nodes in the network. It must be set when the blockchain is first started and cannot be changed after that without starting a whole new blockchain

"},{"location":"develop/blockchains/deployment/generate-chain-specs/#node-settings-customization","title":"Node Settings Customization","text":"

For the node, the chain specification controls information such as:

  • The bootnodes the node will communicate with
  • The server endpoints for the node to send telemetry data to
  • The human and machine-readable names for the network the node will connect to

The chain specification can be customized to include additional information. For example, you can declare known-good blocks at specific heights, which helps prevent long-range attacks when syncing a new node from genesis.

Note that you can customize node settings after genesis. However, nodes only add peers that use the same protocolId.

"},{"location":"develop/blockchains/deployment/generate-chain-specs/#genesis-configuration-customization","title":"Genesis Configuration Customization","text":"

All nodes in the network must agree on the genesis state before they can agree on any subsequent blocks. The information configured in the genesis portion of a chain specification is used to create a genesis block. It takes effect when the first node starts and cannot be overridden with command-line options afterward. You can, however, configure some information in the genesis section of a chain specification. For example, you can customize it to include information such as:

  • Initial account balances
  • Accounts that are initially part of a governance council
  • The account that controls the sudo key
  • Any other genesis state for a pallet

Nodes also require the compiled Wasm to execute the runtime logic on the chain, so the initial runtime must also be supplied in the chain specification. For a more detailed look at customizing the genesis chain specification, be sure to check out the Polkadot SDK Docs.

"},{"location":"develop/blockchains/deployment/generate-chain-specs/#declaring-storage-items-for-a-runtime","title":"Declaring Storage Items for a Runtime","text":"

A runtime usually requires some storage items to be configured at genesis. This includes the initial state for pallets, for example, how much balance\u00a0specific accounts\u00a0have, or which account will have sudo permissions.

These storage values are configured in the genesis portion of the chain specification. You can create a patch file and ingest it using the chain-spec-builder utility, which is explained in the Creating a Custom Chain Specification section.

"},{"location":"develop/blockchains/deployment/generate-chain-specs/#chain-specification-json-format","title":"Chain Specification JSON Format","text":"

Users generally work with the JSON format of the chain specification. Internally, the chain specification is embedded in the GenericChainSpec struct, with specific properties accessible through the ChainSpec struct. The chain specification includes the following keys:

  • name - the human-readable name for the network
  • id - the machine-readable identifier for the network
  • chainType - the type of chain to start (refer to ChainType for more details)
  • bootNodes - a list of multiaddresses belonging to the chain's boot nodes
  • telemetryEndpoints - an optional list of multiaddresses for telemetry endpoints with verbosity levels ranging from 0 to 9 (0 being the lowest verbosity)
  • protocolId - the optional protocol identifier for the network
  • forkId - an optional fork ID that should typically be left empty; it can be used to signal a fork at the network level when two chains share the same genesis hash
  • properties - custom properties provided as a key-value JSON object
  • codeSubstitutes - an optional mapping of block numbers to Wasm code
  • genesis - the genesis configuration for the chain

For example, the following JSON shows a basic chain specification file:

{\n    \"name\": \"chainName\",\n    \"id\": \"chainId\",\n    \"chainType\": \"Local\",\n    \"bootNodes\": [],\n    \"telemetryEndpoints\": null,\n    \"protocolId\": null,\n    \"properties\": null,\n    \"codeSubstitutes\": {},\n    \"genesis\": {\n        \"code\": \"0x...\"\n    }\n}\n
"},{"location":"develop/blockchains/deployment/generate-chain-specs/#creating-a-custom-chain-specification","title":"Creating a Custom Chain Specification","text":"

To create a custom chain specification, you can use the chain-spec-builder tool. This is a CLI tool that is used to generate chain specifications from the runtime of a node. To install the tool, run the following command:

cargo install staging-chain-spec-builder\n

To verify the installation, run the following:

chain-spec-builder --help\n
"},{"location":"develop/blockchains/deployment/generate-chain-specs/#plain-chain-specifications","title":"Plain Chain Specifications","text":"

To create a plain chain specification, you can use the following utility within your project:

chain-spec-builder create -r <INSERT_RUNTIME_WASM_PATH> <INSERT_COMMAND> \n

Note

Before running the command, ensure that the runtime has been compiled and is available at the specified path.

Be sure to replace <INSERT_RUNTIME_WASM_PATH> with the path to the runtime Wasm file and <INSERT_COMMAND> with the command to insert the runtime into the chain specification. The available commands are:

  • patch - overwrites the runtime's default genesis config with the provided patch. You can check the following patch file as a reference
  • full - builds the genesis config for the runtime from the provided JSON file; no defaults are used. As a reference, you can check the following full file
  • default - gets the default genesis config for the runtime and uses it in ChainSpec. Please note that the default genesis config may not be valid. For some runtimes, initial values should be added there (e.g., session keys, BABE epoch)
  • named-preset - uses a named preset provided by the runtime to build the chain spec
"},{"location":"develop/blockchains/deployment/generate-chain-specs/#raw-chain-specifications","title":"Raw Chain Specifications","text":"

Runtime upgrades allow a blockchain's runtime to evolve with newer business logic. Chain specifications contain information structured in a way that the node's runtime can understand. For example, consider this excerpt of a common entry for a chain specification:

\"sudo\": {\n    \"key\": \"5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY\"\n}\n

In the plain chain spec JSON file, the keys and associated values are in a human-readable format, which can be used to initialize the genesis storage. When the chain specification is loaded, the runtime converts these readable values into storage items within the trie. However, for long-lived networks like testnets or production chains, using the raw format for storage initialization is preferred. This avoids the need for conversion by the runtime and ensures that storage items remain consistent, even when runtime upgrades occur.

To enable a node with an upgraded runtime to synchronize with a chain from genesis, the plain chain specification is encoded in a raw format. The raw format allows the distribution of chain specifications that all nodes can use to synchronize the chain even after runtime upgrades.

To convert a plain chain specification to a raw chain specification, you can use the following utility:

chain-spec-builder convert-to-raw chain_spec.json\n

After the conversion to the raw format, the sudo key snippet looks like this:

\"0x50a63a871aced22e88ee6466fe5aa5d9\": \"0xd43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d\",\n

The raw chain specification can be used to initialize the genesis storage for a node.

"},{"location":"develop/blockchains/deployment/generate-chain-specs/#where-to-go-next","title":"Where to Go Next","text":"

After generating a chain specification, you can use it to initialize the genesis storage for a node. Refer to the following guides to learn how to proceed with the deployment of your blockchain:

  • Obtain Coretime - learn how to acquire the coretime your blockchain needs to access the relay chain's validation resources
  • Deployment - explore the steps required to deploy your chain specification, ensuring a smooth launch of your network and proper node operation
  • Maintenance - discover best practices for maintaining your blockchain post-deployment, including how to manage upgrades and monitor network health
"},{"location":"develop/blockchains/get-started/build-custom-blockchains/","title":"Build Custom Blockchains","text":""},{"location":"develop/blockchains/get-started/build-custom-blockchains/#introduction","title":"Introduction","text":"

Building custom blockchains with the Polkadot SDK allows developers to create specialized blockchain solutions tailored to unique requirements. By leveraging Substrate\u2014a Rust-based, modular blockchain development framework\u2014the Polkadot SDK provides powerful tools to construct chains that can either stand alone or connect to Polkadot\u2019s shared security network as parachains. This flexibility empowers projects across various sectors to launch blockchains that meet specific functional, security, and scalability needs.

This guide covers the core steps for building a custom blockchain using the Polkadot SDK, starting from pre-built chain templates. These templates simplify development, providing an efficient starting point that can be further customized, allowing you to focus on implementing the features and modules that set your blockchain apart.

"},{"location":"develop/blockchains/get-started/build-custom-blockchains/#starting-from-templates","title":"Starting from Templates","text":"

Using pre-built templates is an efficient way to begin building a custom blockchain. Templates provide a foundational setup with pre-configured modules, letting developers avoid starting from scratch and instead focus on customization. Depending on your project\u2019s goals\u2014whether you want a simple test chain, a standalone chain, or a parachain that integrates with Polkadot\u2019s relay chains\u2014there are templates designed to suit different levels of complexity and scalability.

Within the Polkadot SDK, the following templates are available to get you started:

  • minimal-template - includes only the essential components necessary for a functioning blockchain. It\u2019s ideal for developers who want to gain familiarity with blockchain basics and test simple customizations before scaling up

  • solochain-template - provides a foundation for creating standalone blockchains with moderate features, including a simple consensus mechanism and several core FRAME pallets. It\u2019s a solid starting point for developers who want a fully functional chain that doesn\u2019t depend on a relay chain

  • parachain-template - designed for connecting to relay chains like Polkadot, Kusama, or Paseo, this template enables a chain to operate as a parachain. For projects aiming to integrate with Polkadot\u2019s ecosystem, this template offers a great starting point

In addition, several external templates offer unique features and can align with specific use cases or developer familiarity:

  • OpenZeppelin - offers two flexible starting points:

    • The generic-runtime-template provides a minimal setup with essential pallets and secure defaults, creating a reliable foundation for custom blockchain development
    • The evm-runtime-template enables EVM compatibility, allowing developers to migrate Solidity contracts and EVM-based dApps. This template is ideal for Ethereum developers looking to leverage Substrate's capabilities
  • Tanssi - provides developers with pre-built templates that can help accelerate the process of creating an appchain

  • Pop Network - designed with user-friendliness in mind, Pop Network offers an approachable starting point for new developers, with a simple CLI interface for creating appchains

Choosing a suitable template depends on your project\u2019s unique requirements, level of customization, and integration needs. Starting from a template speeds up development and lets you focus on implementing your chain\u2019s unique features rather than the foundational blockchain setup.

"},{"location":"develop/blockchains/get-started/build-custom-blockchains/#high-level-steps-to-build-a-custom-chain","title":"High-Level Steps to Build a Custom Chain","text":"

Building a custom blockchain with the Polkadot SDK involves several core steps, from environment setup to deployment. Here\u2019s a breakdown of each stage:

  1. Set up the development environment - install Rust and configure all necessary dependencies to work with the Polkadot SDK (for more information, check the Install Polkadot SDK dependencies page). Ensuring your environment is correctly set up from the start is crucial for avoiding compatibility issues later

  2. Clone the chain template - start by downloading the code for one of the pre-built templates that best aligns with your project needs. Each template offers a different configuration, so select one based on your chain\u2019s intended functionality

  3. Define your chain's custom logic - with your chosen template, check the runtime configuration to customize the chain\u2019s functionality. Polkadot\u2019s modular \u201cpallet\u201d system lets you easily add or modify features like account balances, transaction handling, and staking. Creating custom pallets to implement unique features and combining them with existing ones enables you to define the unique aspects of your chain

  4. Test and debug - testing is essential to ensure your custom chain works as intended. Conduct unit tests for individual pallets and integration tests for interactions between pallets

  5. Compile - after finalizing and testing your custom configurations, compile the blockchain to generate the necessary executable files for running a node. Run the node locally to validate that your customizations work as expected and that your chain is stable and responsive

Each of these steps is designed to build on the last, helping ensure that your custom blockchain is functional, optimized, and ready for deployment within the Polkadot ecosystem or beyond.

"},{"location":"develop/blockchains/get-started/build-custom-blockchains/#where-to-go-next","title":"Where to Go Next","text":"

Once your chain is functional locally, depending on your project\u2019s goals, you can deploy to a TestNet to monitor performance and gather feedback or launch directly on a MainNet. To learn more about this process, check the Deployment section of the documentation.

After deployment, regular monitoring and maintenance are essential to ensure that the chain is functioning as expected. Developers need to be able to monitor the chain's performance, identify issues, and troubleshoot problems. Key activities include tracking network health, node performance, and transaction throughput. It's also essential to test the blockchain\u2019s scalability under high load and perform security audits regularly to prevent vulnerabilities. For more information on monitoring and maintenance, refer to the Maintenance section.

"},{"location":"develop/blockchains/get-started/deploy-blockchain-to-polkadot/","title":"Deployment Overview","text":""},{"location":"develop/blockchains/get-started/deploy-blockchain-to-polkadot/#introduction","title":"Introduction","text":"

Deploying a blockchain with the Polkadot SDK is a critical step in transforming a locally developed network into a secure, fully functioning system for public or private use. It involves more than just launching a runtime; you'll need to prepare the chain specification, ensure ecosystem compatibility, and plan for long-term maintenance and updates.

Whether deploying a test network for development or a mainnet for production, this guide highlights the essential steps to get your blockchain operational. It provides an overview of the deployment process, introducing key concepts, tools, and best practices for a smooth transition from development to production.

"},{"location":"develop/blockchains/get-started/deploy-blockchain-to-polkadot/#deployment-process","title":"Deployment Process","text":"

Taking your Polkadot SDK-based blockchain from a local environment to production involves several steps, ensuring your network is stable, secure, and ready for real-world use. The following diagram outlines the process at a high level:

graph LR\n    subgraph Pre-Deployment\n    A(\"Local Development\\nand\\nTesting\") --> B(\"Runtime \\n Compilation\")\n    B --> C(\"Generate \\n Chain \\n Specifications\")\n    C --> D(\"Prepare \\n Deployment \\n Environment\")\n    D --> E(\"Acquire \\n Coretime\")\n    end\n    subgraph Deployment\n    E --> F(\"Launch \\n and \\n Monitor\")\n    end\n    subgraph Post-Deployment\n    F --> G(\"Maintenance \\n and \\n Upgrades\")\n    end
  • Local development and testing - the process begins with local development and testing. Developers focus on building the runtime by selecting and configuring the necessary pallets while refining network features. In this phase, it's essential to run a local TestNet to verify transactions and ensure the blockchain behaves as expected. Thorough unit and integration testing is also crucial before launch, covering not only individual components but also the interactions between pallets

  • Runtime compilation - Polkadot SDK-based blockchains are built with Wasm, a highly portable and efficient format. Compiling your blockchain's runtime into Wasm ensures it can be executed reliably across various environments, guaranteeing network-wide compatibility and security. The srtool utility is helpful for this purpose, since it allows you to compile deterministic runtimes

  • Generate chain specifications - the chain spec file defines the structure and configuration of your blockchain. It includes initial node identities, session keys, and other parameters. Defining a well thought-out chain specification ensures that your network will operate smoothly and according to your intended design

  • Deployment environment - whether launching a local test network or a production-grade blockchain, selecting the proper infrastructure is vital. For further information about these topics, see the Infrastructure section

  • Acquire coretime - to build on top of the Polkadot network, users need to acquire coretime (either on-demand or in bulk) to access the computational resources of the relay chain. This allows for the secure validation of parachain blocks through a randomized selection of relay chain validators

    Note

    If you\u2019re building a standalone blockchain (solochain) that won\u2019t connect to Polkadot as a parachain, you can skip this step, as there\u2019s no need to acquire coretime or implement Cumulus.

  • Launch and monitor - once everything is configured, you can launch the blockchain, initiating the network with your chain spec and Wasm runtime. Validators or collators will begin producing blocks, and the network will go live. Post-launch, monitoring is vital to ensuring network health\u2014tracking block production, node performance, and overall security

  • Maintenance and upgrade - a blockchain continues to evolve post-deployment. As the network expands and adapts, it may require runtime upgrades, governance updates, coretime renewals, and even modifications to the underlying code. For an in-depth guide on this topic, see the Maintenance section

"},{"location":"develop/blockchains/get-started/deploy-blockchain-to-polkadot/#where-to-go-next","title":"Where to Go Next","text":"

Deploying a Polkadot SDK-based blockchain is a multi-step process that requires careful planning, from generating chain specs and compiling the runtime to managing post-launch updates. By understanding the deployment process and utilizing the right tools, developers can confidently take their blockchain from development to production. For more on this topic, check out the following resources:

  • Generate Chain Specifications - learn how to generate a chain specification for your blockchain
  • Building Deterministic Runtimes - learn how to build deterministic runtimes for your blockchain
  • Infrastructure - learn about the different infrastructure options available for your blockchain
  • Maintenance - discover how to manage updates on your blockchain to ensure smooth operation
"},{"location":"develop/blockchains/get-started/install-polkadot-sdk/","title":"Install Polkadot SDK Dependencies","text":"

This guide provides step-by-step instructions for installing the dependencies you need to work with the Polkadot SDK-based chains on macOS, Linux, and Windows. Follow the appropriate section for your operating system to ensure all necessary tools are installed and configured properly.

"},{"location":"develop/blockchains/get-started/install-polkadot-sdk/#macos","title":"macOS","text":"

You can install Rust and set up a Substrate development environment on Apple macOS computers with Intel or Apple M1 processors.

"},{"location":"develop/blockchains/get-started/install-polkadot-sdk/#before-you-begin","title":"Before You Begin","text":"

Before you install Rust and set up your development environment on macOS, verify that your computer meets the following basic requirements:

  • Operating system version is 10.7 Lion or later
  • Processor speed of at least 2 GHz (3 GHz recommended)
  • Memory of at least 8 GB RAM (16 GB recommended)
  • Storage of at least 10 GB of available space
  • Broadband Internet connection
"},{"location":"develop/blockchains/get-started/install-polkadot-sdk/#install-homebrew","title":"Install Homebrew","text":"

In most cases, you should use Homebrew to install and manage packages on macOS computers. If you don't already have Homebrew installed on your local computer, you should download and install it before continuing.

To install Homebrew:

  1. Open the Terminal application

  2. Download and install Homebrew by running the following command:

    /bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)\"\n
  3. Verify Homebrew has been successfully installed by running the following command:

    brew --version\n

    The command displays output similar to the following:

    brew --version\nHomebrew 4.3.15\n

"},{"location":"develop/blockchains/get-started/install-polkadot-sdk/#support-for-apple-silicon","title":"Support for Apple Silicon","text":"

Protobuf must be installed before the build process can begin. To install it, run the following command:

brew install protobuf\n
"},{"location":"develop/blockchains/get-started/install-polkadot-sdk/#install-required-packages-and-rust","title":"Install Required Packages and Rust","text":"

Because the blockchain requires standard cryptography to support the generation of public/private key pairs and the validation of transaction signatures, you must also have a package that provides cryptography, such as openssl.

To install openssl and the Rust toolchain on macOS:

  1. Open the Terminal application

  2. Ensure you have an updated version of Homebrew by running the following command:

    brew update\n
  3. Install the openssl package by running the following command:

    brew install openssl\n
  4. Download the rustup installation program and use it to install Rust by running the following command:

    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh\n
  5. Follow the prompts displayed to proceed with a default installation

  6. Update your current shell to include Cargo by running the following command:

    source ~/.cargo/env\n
  7. Configure the Rust toolchain to default to the latest stable version by running the following commands:

    rustup default stable\nrustup update\nrustup target add wasm32-unknown-unknown\n
  8. Add the nightly release and the nightly Wasm targets to your development environment by running the following commands:

    rustup update nightly\nrustup target add wasm32-unknown-unknown --toolchain nightly\n
  9. Verify your installation

  10. Install cmake using the following command:

    brew install cmake\n
"},{"location":"develop/blockchains/get-started/install-polkadot-sdk/#linux","title":"Linux","text":"

Rust supports most Linux distributions. Depending on the specific distribution and version of the operating system you use, you might need to add some software dependencies to your environment. In general, your development environment should include a linker or C-compatible compiler, such as clang, and an appropriate integrated development environment (IDE).

"},{"location":"develop/blockchains/get-started/install-polkadot-sdk/#before-you-begin-linux","title":"Before You Begin","text":"

Check the documentation for your operating system for information about the installed packages and how to download and install any additional packages you might need. For example, if you use Ubuntu, you can use the Ubuntu Advanced Packaging Tool (apt) to install the build-essential package:

sudo apt install build-essential\n

At a minimum, you need the following packages before you install Rust:

clang curl git make\n

Because the blockchain requires standard cryptography to support the generation of public/private key pairs and the validation of transaction signatures, you must also have a package that provides cryptography, such as libssl-dev or openssl-devel.

"},{"location":"develop/blockchains/get-started/install-polkadot-sdk/#install-required-packages-and-rust-linux","title":"Install Required Packages and Rust","text":"

To install the Rust toolchain on Linux:

  1. Open a terminal shell

  2. Check the packages you have installed on the local computer by running an appropriate package management command for your Linux distribution

  3. Add any package dependencies you are missing to your local development environment by running the appropriate package management command for your Linux distribution:

    Ubuntu:
    sudo apt install --assume-yes git clang curl libssl-dev protobuf-compiler\n
    Debian:
    sudo apt install --assume-yes git clang curl libssl-dev llvm libudev-dev make protobuf-compiler\n
    Arch:
    pacman -Syu --needed --noconfirm curl git clang make protobuf\n
    Fedora:
    sudo dnf update\nsudo dnf install clang curl git openssl-devel make protobuf-compiler\n
    OpenSUSE:
    sudo zypper install clang curl git openssl-devel llvm-devel libudev-devel make protobuf\n

    Remember that different distributions might use different package managers and bundle packages in different ways. For example, depending on your installation selections, Ubuntu Desktop and Ubuntu Server might have different packages and different requirements. However, the packages listed in the command-line examples are applicable for many common Linux distributions, including Debian, Linux Mint, MX Linux, and Elementary OS.

  4. Download the rustup installation program and use it to install Rust by running the following command:

    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh\n
  5. Follow the prompts displayed to proceed with a default installation

  6. Update your current shell to include Cargo by running the following command:

    source $HOME/.cargo/env\n
  7. Verify your installation by running the following command:

    rustc --version\n
  8. Configure the Rust toolchain to default to the latest stable version by running the following commands:

    rustup default stable\nrustup update\n
  9. Add the nightly release and the nightly Wasm targets to your development environment by running the following commands:

    rustup update nightly\nrustup target add wasm32-unknown-unknown --toolchain nightly\n
  10. Verify your installation

"},{"location":"develop/blockchains/get-started/install-polkadot-sdk/#windows-wsl","title":"Windows (WSL)","text":"

In general, UNIX-based operating systems\u2014like macOS or Linux\u2014provide a better development environment for building Substrate-based blockchains.

However, if your local computer uses Microsoft Windows instead of a UNIX-based operating system, you can configure it with additional software to make it a suitable development environment for building Substrate-based blockchains. To prepare a development environment on a Microsoft Windows computer, you can use Windows Subsystem for Linux (WSL) to emulate a UNIX operating environment.

"},{"location":"develop/blockchains/get-started/install-polkadot-sdk/#before-you-begin-windows","title":"Before You Begin","text":"

Before installing on Microsoft Windows, verify the following basic requirements:

  • You have a computer running a supported Microsoft Windows operating system:
  • For Windows desktop - you must be running Microsoft Windows 10, version 2004 or later, or Microsoft Windows 11 to install WSL
  • For Windows server - you must be running Microsoft Windows Server 2019, or later, to install WSL on a server operating system
  • You have a good internet connection and access to a shell terminal on your local computer
"},{"location":"develop/blockchains/get-started/install-polkadot-sdk/#set-up-windows-subsystem-for-linux","title":"Set Up Windows Subsystem for Linux","text":"

WSL enables you to emulate a Linux environment on a computer that uses the Windows operating system. The primary advantage of this approach for Substrate development is that you can use all of the code and command-line examples as described in the Substrate documentation. For example, you can run common commands\u2014such as ls and ps\u2014unmodified. By using WSL, you can avoid configuring a virtual machine image or a dual-boot operating system.

To prepare a development environment using WSL:

  1. Check your Windows version and build number to see if WSL is enabled by default.

    If you have Microsoft Windows 10, version 2004 (Build 19041 and higher), or Microsoft Windows 11, WSL is available by default and you can continue to the next step.

    If you have an older version of Microsoft Windows installed, see the WSL manual installation steps for older versions. You can download and install WSL 2 on older versions of Microsoft Windows if your computer has Windows 10, version 1903 or higher

  2. Select Windows PowerShell or Command Prompt from the Start menu, right-click, then select Run as administrator

  3. In the PowerShell or Command Prompt terminal, run the following command:

    wsl --install\n

    This command enables the required WSL 2 components that are part of the Windows operating system, downloads the latest Linux kernel, and installs the Ubuntu Linux distribution by default.

    If you want to review the other Linux distributions available, run the following command:

    wsl --list --online\n
  4. After the distribution is downloaded, close the terminal

  5. Click the Start menu, select Shut down or sign out, then click Restart to restart the computer.

    Restarting the computer is required to start the installation of the Linux distribution. It can take a few minutes for the installation to complete after you restart.

    For more information about setting up WSL as a development environment, see the Set up a WSL development environment docs

"},{"location":"develop/blockchains/get-started/install-polkadot-sdk/#install-required-packages-and-rust-windows","title":"Install Required Packages and Rust","text":"

To install the Rust toolchain on WSL:

  1. Click the Start menu, then select Ubuntu

  2. Type a UNIX user name to create a user account

  3. Type a password for your UNIX user, then retype the password to confirm it

  4. Download the latest updates for the Ubuntu distribution using the Ubuntu Advanced Packaging Tool (apt) by running the following command:

    sudo apt update\n
  5. Add the required packages for the Ubuntu distribution by running the following command:

    sudo apt install --assume-yes git clang curl libssl-dev llvm libudev-dev make protobuf-compiler\n
  6. Download the rustup installation program and use it to install Rust for the Ubuntu distribution by running the following command:

    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh\n
  7. Follow the prompts displayed to proceed with a default installation

  8. Update your current shell to include Cargo by running the following command:

    source ~/.cargo/env\n
  9. Verify your installation by running the following command:

    rustc --version\n
  10. Configure the Rust toolchain to use the latest stable version as the default toolchain by running the following commands:

    rustup default stable\nrustup update\n
  11. Add the nightly version of the toolchain and the nightly Wasm target to your development environment by running the following commands:

    rustup update nightly\nrustup target add wasm32-unknown-unknown --toolchain nightly\n
  12. Verify your installation

"},{"location":"develop/blockchains/get-started/install-polkadot-sdk/#verifying-installation","title":"Verifying Installation","text":"

Verify the configuration of your development environment by running the following command:

rustup show\nrustup +nightly show\n

The command displays output similar to the following:

rustup show\n...\nactive toolchain\n----------------\nstable-x86_64-apple-darwin (default)\nrustc 1.81.0 (eeb90cda1 2024-09-04)\n...\nactive toolchain\n----------------\nnightly-x86_64-apple-darwin (overridden by +toolchain on the command line)\nrustc 1.83.0-nightly (6c6d21008 2024-09-22)\n"},{"location":"develop/blockchains/get-started/intro-polkadot-sdk/","title":"Introduction to Polkadot SDK","text":""},{"location":"develop/blockchains/get-started/intro-polkadot-sdk/#introduction","title":"Introduction","text":"

The Polkadot SDK is a powerful and versatile developer kit designed to facilitate building on the Polkadot network. It provides the necessary components for creating custom blockchains, parachains, generalized rollups, and more. Written in the Rust programming language, it puts security and robustness at the forefront of its design.

Whether you're building a standalone chain or deploying a parachain on Polkadot, this SDK equips developers with the libraries and tools needed to manage runtime logic, compile the codebase, and utilize core features like staking, governance, and Cross-Consensus Messaging (XCM). It also provides a means for building generalized peer-to-peer systems beyond blockchains. The Polkadot SDK houses the following overall functionality:

  • Networking and peer-to-peer communication (powered by Libp2p)
  • Consensus protocols, such as BABE, GRANDPA, or Aura
  • Cryptography
  • The ability to create portable Wasm runtimes
  • A selection of pre-built modules, called pallets
  • Benchmarking and testing suites

Note

For an in-depth dive into the monorepo, the Polkadot SDK Rust documentation is highly recommended.

"},{"location":"develop/blockchains/get-started/intro-polkadot-sdk/#polkadot-sdk-overview","title":"Polkadot SDK Overview","text":"

The Polkadot SDK is composed of five major components:

  • Substrate - a set of libraries and primitives for building blockchains
  • FRAME - a blockchain development framework built on top of Substrate
  • Cumulus - a set of libraries and pallets to add parachain capabilities to a Substrate/FRAME runtime
  • XCM (Cross Consensus Messaging) - the primary format for conveying messages between parachains
  • Polkadot - the node implementation for the Polkadot protocol
"},{"location":"develop/blockchains/get-started/intro-polkadot-sdk/#substrate","title":"Substrate","text":"

Substrate is a Software Development Kit (SDK) that uses Rust-based libraries and tools to enable you to build application-specific blockchains from modular and extensible components. Application-specific blockchains built with Substrate can run as standalone services or in parallel with other chains to take advantage of the shared security provided by the Polkadot ecosystem. Substrate includes default implementations of the core components of the blockchain infrastructure to allow you to focus on the application logic.

Every blockchain platform relies on a decentralized network of computers\u2014called nodes\u2014that communicate with each other about transactions and blocks. In general, a node in this context is the software running on the connected devices rather than the physical or virtual machine in the network. As software, Substrate-based nodes consist of two main parts with separate responsibilities:

  • Client - services to handle network and blockchain infrastructure activity
    • Native binary
    • Executes the Wasm runtime
    • Manages components like database, networking, mempool, consensus, and others
    • Also known as \"Host\"
  • Runtime - business logic for state transitions
    • Application logic
    • Compiled to Wasm
    • Stored as a part of the chain state
    • Also known as State Transition Function (STF)
"},{"location":"develop/blockchains/get-started/intro-polkadot-sdk/#frame","title":"FRAME","text":"

FRAME provides the core modular and extensible components that make the Substrate SDK flexible and adaptable to different use cases. FRAME includes Rust-based libraries that simplify the development of application-specific logic. Most of the functionality that FRAME provides takes the form of plug-in modules called pallets that you can add and configure to suit your requirements.
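
To make the pallet structure concrete, the following is a minimal sketch of a FRAME pallet skeleton; the module is empty apart from the required declarations, and everything beyond the macro scaffolding is illustrative:

#[frame_support::pallet]\npub mod pallet {\n    use frame_support::pallet_prelude::*;\n\n    // The pallet struct that the runtime composes into construct_runtime!\n    #[pallet::pallet]\n    pub struct Pallet<T>(_);\n\n    // The configuration trait through which the runtime supplies types and\n    // parameters to the pallet\n    #[pallet::config]\n    pub trait Config: frame_system::Config {}\n}\n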

"},{"location":"develop/blockchains/get-started/intro-polkadot-sdk/#cumulus","title":"Cumulus","text":"

Cumulus provides utilities and libraries to turn FRAME-based runtimes into runtimes that can be a parachain on Polkadot. Cumulus runtimes are still FRAME runtimes but contain the necessary functionality that allows that runtime to become a parachain on a relay chain.

"},{"location":"develop/blockchains/get-started/intro-polkadot-sdk/#why-use-polkadot-sdk","title":"Why Use Polkadot SDK?","text":"

Using the Polkadot SDK, you can build application-specific blockchains without the complexity of building a blockchain from scratch or the limitations of building on a general-purpose blockchain. You can focus on crafting the business logic that makes your chain unique and innovative with the additional benefits of flexibility, upgradeability, open-source licensing, and cross-consensus interoperability.

"},{"location":"develop/blockchains/get-started/intro-polkadot-sdk/#create-a-custom-blockchain-using-the-sdk","title":"Create a Custom Blockchain Using the SDK","text":"

Before starting your blockchain development journey, you'll need to decide whether you want to build a standalone chain or a parachain that connects to the Polkadot network. Each path has its considerations and requirements. Once you've made this decision, follow these development stages:

graph LR\n    A[Install the Polkadot SDK] --> B[Build the Chain]\n    B --> C[Deploy the Chain]
  1. Install the Polkadot SDK - set up your development environment with all necessary dependencies and tools
  2. Build the chain - learn how to create and customize your blockchain's runtime, configure pallets, and implement your chain's unique features
  3. Deploy the chain - follow the steps to launch your blockchain, whether as a standalone network or as a parachain on Polkadot

Each stage is covered in detail in its respective guide, walking you through the process from initial setup to final deployment.

"},{"location":"develop/blockchains/maintenance/runtime-upgrades/","title":"Runtime Upgrades","text":""},{"location":"develop/blockchains/maintenance/runtime-upgrades/#introduction","title":"Introduction","text":"

One of the defining features of Polkadot SDK-based blockchains is the ability to perform forkless runtime upgrades. Unlike traditional blockchains, which require hard forks and node coordination for upgrades, Polkadot networks enable seamless updates without network disruption.

Forkless upgrades are achieved through WebAssembly (Wasm) runtimes stored on-chain, which can be securely swapped and upgraded as part of the blockchain's state. By leveraging decentralized consensus, runtime updates can happen trustlessly, ensuring continuous improvement and evolution without halting operations.

This guide explains how Polkadot's runtime versioning, Wasm deployment, and storage migrations enable these upgrades, ensuring the blockchain evolves smoothly and securely. You'll also learn how different upgrade processes apply to solo chains and parachains, depending on the network setup.

"},{"location":"develop/blockchains/maintenance/runtime-upgrades/#how-runtime-upgrades-work","title":"How Runtime Upgrades Work","text":"

In FRAME, the system pallet uses the set_code extrinsic to update the Wasm code for the runtime. This method allows solo chains to upgrade without disruption.

For parachains, upgrades are more complex. Parachains must first call authorize_upgrade, followed by apply_authorized_upgrade, to ensure the relay chain approves and applies the changes. Additionally, changes to current functionality that impact storage often require a storage migration.
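
As a minimal sketch of the solo-chain path, the set_code call can be dispatched with the Root origin, typically reached via sudo or governance. The runtime type Runtime and the new_wasm blob below are assumptions for illustration, not a complete upgrade procedure:

use frame_system::RawOrigin;\n\n// Minimal sketch: swap in a new Wasm runtime on a solo chain.\n// `set_code` requires the Root origin and replaces the on-chain runtime code.\nfn upgrade_runtime(new_wasm: Vec<u8>) {\n    frame_system::Pallet::<Runtime>::set_code(RawOrigin::Root.into(), new_wasm)\n        .expect(\"Root origin is permitted to set the runtime code\");\n}\n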

"},{"location":"develop/blockchains/maintenance/runtime-upgrades/#runtime-versioning","title":"Runtime Versioning","text":"

The executor is the component that selects the runtime execution environment to communicate with. Although you can override the default execution strategies for custom scenarios, in most cases, the executor selects the appropriate binary to use by evaluating and comparing key parameters from the native and Wasm runtime binaries.

The runtime includes a runtime version struct to provide the needed parameter information to the executor process. A sample runtime version struct might look as follows:

pub const VERSION: RuntimeVersion = RuntimeVersion {\n    spec_name: create_runtime_str!(\"node-template\"),\n    impl_name: create_runtime_str!(\"node-template\"),\n    authoring_version: 1,\n    spec_version: 1,\n    impl_version: 1,\n    apis: RUNTIME_API_VERSIONS,\n    transaction_version: 1,\n};\n

The struct provides the following parameter information to the executor:

  • spec_name - the identifier for the different runtimes
  • impl_name - the name of the implementation of the spec. Serves only to differentiate code from different implementation teams
  • authoring_version - the version of the authorship interface. An authoring node won't attempt to author blocks unless this is equal to its native runtime
  • spec_version - the version of the runtime specification. A full node won't attempt to use its native runtime in substitute for the on-chain Wasm runtime unless the spec_name, spec_version, and authoring_version are all the same between the Wasm and native binaries. Updates to the spec_version can be automated as a CI process, as is done for the Polkadot network. This parameter is typically incremented when there's an update to the transaction_version
  • impl_version - the version of the implementation of the specification. Nodes can ignore this. It is only used to indicate that the code is different. As long as the authoring_version and the spec_version are the same, the code might have changed, but the native and Wasm binaries do the same thing. In general, only non-logic-breaking optimizations would result in a change of the impl_version
  • transaction_version - the version of the interface for handling transactions. This parameter can be useful to synchronize firmware updates for hardware wallets or other signing devices to verify that runtime transactions are valid and safe to sign. This number must be incremented if there is a change in the index of the pallets in the construct_runtime! macro or if there are any changes to dispatchable functions, such as the number of parameters or parameter types. If transaction_version is updated, then the spec_version must also be updated
  • apis - a list of supported runtime APIs along with their versions

The executor follows the same consensus-driven logic for both the native runtime and the Wasm runtime before deciding which to execute. Because runtime versioning is a manual process, there is a risk that the executor could make incorrect decisions if the runtime version is misrepresented or incorrectly defined.

"},{"location":"develop/blockchains/maintenance/runtime-upgrades/#accessing-the-runtime-version","title":"Accessing the Runtime Version","text":"

The runtime version can be accessed through the state.getRuntimeVersion RPC endpoint, which accepts an optional block identifier. It can also be accessed through the runtime metadata to understand the APIs the runtime exposes and how to interact with them.

The runtime metadata should only change when the chain's runtime spec_version changes.
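
Within the runtime itself, the same version information is exposed through the frame_system configuration. A minimal sketch, assuming a concrete runtime type named Runtime:

use frame_support::traits::Get;\nuse sp_version::RuntimeVersion;\n\n// Read the version the runtime reports, e.g. for assertions in tests\nfn current_runtime_version() -> RuntimeVersion {\n    <Runtime as frame_system::Config>::Version::get()\n}\n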

"},{"location":"develop/blockchains/maintenance/runtime-upgrades/#storage-migrations","title":"Storage Migrations","text":"

Storage migrations are custom, one-time functions that allow you to update storage to adapt to changes in the runtime.

For example, if a runtime upgrade changes the data type used to represent user balances from an unsigned integer to a signed integer, the storage migration would read the existing value as an unsigned integer and write back an updated value that has been converted to a signed integer.

If you don't make changes to how data is stored when needed, the runtime can't properly interpret the storage values to include in the runtime state and is likely to lead to undefined behavior.

"},{"location":"develop/blockchains/maintenance/runtime-upgrades/#storage-migrations-with-frame","title":"Storage Migrations with FRAME","text":"

FRAME storage migrations are implemented using the OnRuntimeUpgrade trait. The OnRuntimeUpgrade trait specifies a single function, on_runtime_upgrade, that allows you to specify logic to run immediately after a runtime upgrade but before any on_initialize functions or transactions are executed.

For further details about this process, see the Storage Migrations page.

"},{"location":"develop/blockchains/maintenance/runtime-upgrades/#ordering-migrations","title":"Ordering Migrations","text":"

By default, FRAME orders the execution of on_runtime_upgrade functions based on the order in which the pallets appear in the construct_runtime! macro. During an upgrade, the functions run in reverse order, so the function of the last pallet in the macro executes first. You can impose a custom order if needed.

FRAME storage migrations run in this order:

  1. Custom on_runtime_upgrade functions if using a custom order
  2. System frame_system::on_runtime_upgrade functions
  3. All on_runtime_upgrade functions defined in the runtime starting with the last pallet in the construct_runtime! macro
"},{"location":"develop/blockchains/maintenance/storage-migrations/","title":"Storage Migrations","text":""},{"location":"develop/blockchains/maintenance/storage-migrations/#introduction","title":"Introduction","text":"

Storage migrations are a crucial part of the runtime upgrade process. They allow you to update the storage items of your blockchain, adapting to changes in the runtime. Whenever you change the encoding or data types used to represent data in storage, you'll need to provide a storage migration to ensure the runtime can correctly interpret the existing stored values in the new runtime state.

Storage migrations must be executed precisely during the runtime upgrade process to ensure data consistency and prevent runtime panics. The migration code needs to run as follows:

  • After the new runtime is deployed
  • Before any other code from the new runtime executes
  • Before any on_initialize hooks run
  • Before any transactions are processed

This timing is critical because the new runtime expects data to be in the updated format. Any attempt to decode the old data format without proper migration could result in runtime panics or undefined behavior.

"},{"location":"develop/blockchains/maintenance/storage-migrations/#storage-migration-scenarios","title":"Storage Migration Scenarios","text":"

A storage migration is necessary whenever a runtime upgrade changes the storage layout or the encoding/interpretation of existing data. Even if the underlying data type appears to still \"fit\" the new storage representation, a migration may be required if the interpretation of the stored values has changed.

Storage migrations ensure data consistency and prevent corruption during runtime upgrades. Below are common scenarios categorized by their impact on storage and migration requirements:

  • Migration required:

    • Reordering or mutating fields of an existing data type to change the encoded/decoded data representation
    • Removal of a pallet or storage item warrants cleaning up storage via a migration to avoid state bloat
  • Migration not required:

    • Adding a new storage item would not require any migration since no existing data needs transformation
    • Adding or removing an extrinsic introduces no new interpretation of preexisting data, so no migration is required

The following are some common scenarios where a storage migration is needed:

  • Changing data types - changing the underlying data type requires a migration to convert the existing values

    #[pallet::storage]\npub type FooValue = StorageValue<_, Foo>;\n// old\npub struct Foo(u32)\n// new\npub struct Foo(u64)\n
  • Changing data representation - modifying the representation of the stored data, even if the size appears unchanged, requires a migration to ensure the runtime can correctly interpret the existing values

    #[pallet::storage]\npub type FooValue = StorageValue<_, Foo>;\n// old\npub struct Foo(u32)\n// new\npub struct Foo(i32)\n// or\npub struct Foo(u16, u16)\n
  • Extending an enum - adding new variants to an enum requires a migration if you reorder existing variants, insert new variants between existing ones, or change the data type of existing variants. No migration is required when adding new variants at the end of the enum

    #[pallet::storage]\npub type FooValue = StorageValue<_, Foo>;\n// old\npub enum Foo { A(u32), B(u32) }\n// new (New variant added at the end. No migration required)\npub enum Foo { A(u32), B(u32), C(u128) }\n// new (Reordered variants. Requires migration)\npub enum Foo { A(u32), C(u128), B(u32) }\n
  • Changing the storage key - modifying the storage key, even if the underlying data type remains the same, requires a migration to ensure the runtime can locate the correct stored values (see the sketch after this list)

    #[pallet::storage]\npub type FooValue = StorageValue<_, u32>;\n// new\n#[pallet::storage]\npub type BarValue = StorageValue<_, u32>;\n
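
A hedged sketch of the storage-key scenario above: the old FooValue key is read through a storage_alias and its value is moved to the new BarValue item. The surrounding pallet and item names mirror the example and are illustrative:

use frame_support::{pallet_prelude::*, storage_alias};\n\n// Storage format from before the rename, kept only so the old key can be read\nmod old {\n    use super::*;\n\n    #[storage_alias]\n    pub type FooValue<T: crate::Config> = StorageValue<crate::Pallet<T>, u32>;\n}\n\n// Move the value stored under the old `FooValue` key to the new `BarValue` key\npub fn migrate_renamed_key<T: crate::Config>() {\n    if let Some(value) = old::FooValue::<T>::take() {\n        crate::BarValue::<T>::put(value);\n    }\n}\n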

Warning

In general, any change to the storage layout or data encoding used in your runtime requires careful consideration of the need for a storage migration. Overlooking a necessary migration can lead to undefined behavior or data loss during a runtime upgrade.

"},{"location":"develop/blockchains/maintenance/storage-migrations/#implement-storage-migrations","title":"Implement Storage Migrations","text":"

The OnRuntimeUpgrade trait provides the foundation for implementing storage migrations in your runtime. Here's a detailed look at its essential functions:

pub trait OnRuntimeUpgrade {\n    fn on_runtime_upgrade() -> Weight { ... }\n    fn try_on_runtime_upgrade(checks: bool) -> Result<Weight, TryRuntimeError> { ... }\n    fn pre_upgrade() -> Result<Vec<u8>, TryRuntimeError> { ... }\n    fn post_upgrade(_state: Vec<u8>) -> Result<(), TryRuntimeError> { ... }\n}\n
"},{"location":"develop/blockchains/maintenance/storage-migrations/#core-migration-function","title":"Core Migration Function","text":"

The on_runtime_upgrade function executes when the FRAME Executive pallet detects a runtime upgrade. Important considerations when using this function include:

  • It runs before any pallet's on_initialize hooks
  • Critical storage items (like block_number) may not be set
  • Execution is mandatory and must be completed
  • Careful weight calculation is required to prevent bricking the chain

When implementing the migration logic, your code must handle several vital responsibilities. A migration implementation must do the following to operate correctly:

  • Read existing storage values in their original format
  • Transform data to match the new format
  • Write updated values back to storage
  • Calculate and return consumed weight
"},{"location":"develop/blockchains/maintenance/storage-migrations/#migration-testing-hooks","title":"Migration Testing Hooks","text":"

The OnRuntimeUpgrade trait provides some functions designed specifically for testing migrations. These functions never execute on-chain but are essential for validating migration behavior in test environments. The migration test hooks are as follows:

  • try_on_runtime_upgrade - this function serves as the primary orchestrator for testing the complete migration process. It coordinates the execution flow from pre-upgrade checks through the actual migration to post-upgrade verification. Handling the entire migration sequence ensures that storage modifications occur correctly and in the proper order. Preserving this sequence is particularly valuable when testing multiple dependent migrations, where the execution order matters

  • pre_upgrade - before a runtime upgrade begins, the pre_upgrade function performs preliminary checks and captures the current state. It returns encoded state data that can be used for post-upgrade verification. This function must never modify storage - it should only read and verify the existing state. The data it returns includes critical state values that should remain consistent or transform predictably during migration

  • post_upgrade - after the migration completes, post_upgrade validates its success. It receives the state data captured by pre_upgrade to verify that the migration was executed correctly. This function checks for storage consistency and ensures all data transformations are completed as expected. Like pre_upgrade, it operates exclusively in testing environments and should not modify storage

"},{"location":"develop/blockchains/maintenance/storage-migrations/#migration-structure","title":"Migration Structure","text":"

There are two approaches to implementing storage migrations. The first method involves directly implementing OnRuntimeUpgrade on structs. This approach requires manually checking the on-chain storage version against the new StorageVersion and executing the transformation logic only when the check passes. This version verification prevents multiple executions of the migration during subsequent runtime upgrades.
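
A hedged sketch of this first, manual approach is shown below; the pallet, version numbers, and migration body are illustrative, and crate::Config is assumed to have frame_system::Config as a supertrait:

use frame_support::{\n    traits::{Get, GetStorageVersion, OnRuntimeUpgrade, StorageVersion},\n    weights::Weight,\n};\n\npub struct MigrateV0ToV1<T: crate::Config>(core::marker::PhantomData<T>);\n\nimpl<T: crate::Config> OnRuntimeUpgrade for MigrateV0ToV1<T> {\n    fn on_runtime_upgrade() -> Weight {\n        // Only run the transformation while the on-chain version is still 0\n        if crate::Pallet::<T>::on_chain_storage_version() == 0 {\n            // ... transform storage from the V0 to the V1 layout here ...\n\n            // Bump the on-chain version so the migration never runs twice\n            StorageVersion::new(1).put::<crate::Pallet<T>>();\n            T::DbWeight::get().reads_writes(1, 1)\n        } else {\n            // Nothing to do; only the version check was read\n            T::DbWeight::get().reads(1)\n        }\n    }\n}\n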

The recommended approach is to implement UncheckedOnRuntimeUpgrade and wrap it with VersionedMigration. VersionedMigration implements OnRuntimeUpgrade and handles storage version management automatically, following best practices and reducing potential errors.

VersionedMigration requires five type parameters:

  • From - the source version for the upgrade
  • To - the target version for the upgrade
  • Inner - the UncheckedOnRuntimeUpgrade implementation
  • Pallet - the pallet being upgraded
  • Weight - the runtime's RuntimeDbWeight implementation

Examine the following migration example that transforms a simple StorageValue storing a u32 into a more complex structure that tracks both current and previous values using the CurrentAndPreviousValue struct:

  • Old StorageValue format:

    #[pallet::storage]\npub type Value<T: Config> = StorageValue<_, u32>;\n

  • New StorageValue format:

    /// Example struct holding the most recently set [`u32`] and the\n/// second most recently set [`u32`] (if one existed).\n#[docify::export]\n#[derive(\n    Clone, Eq, PartialEq, Encode, Decode, RuntimeDebug, scale_info::TypeInfo, MaxEncodedLen,\n)]\npub struct CurrentAndPreviousValue {\n    /// The most recently set value.\n    pub current: u32,\n    /// The previous value, if one existed.\n    pub previous: Option<u32>,\n}\n\n#[pallet::storage]\npub type Value<T: Config> = StorageValue<_, CurrentAndPreviousValue>;\n

  • Migration:

    use frame_support::{\n    storage_alias,\n    traits::{Get, UncheckedOnRuntimeUpgrade},\n};\n\n#[cfg(feature = \"try-runtime\")]\nuse alloc::vec::Vec;\n\n/// Collection of storage item formats from the previous storage version.\n///\n/// Required so we can read values in the v0 storage format during the migration.\nmod v0 {\n    use super::*;\n\n    /// V0 type for [`crate::Value`].\n    #[storage_alias]\n    pub type Value<T: crate::Config> = StorageValue<crate::Pallet<T>, u32>;\n}\n\n/// Implements [`UncheckedOnRuntimeUpgrade`], migrating the state of this pallet from V0 to V1.\n///\n/// In V0 of the template [`crate::Value`] is just a `u32`. In V1, it has been upgraded to\n/// contain the struct [`crate::CurrentAndPreviousValue`].\n///\n/// In this migration, update the on-chain storage for the pallet to reflect the new storage\n/// layout.\npub struct InnerMigrateV0ToV1<T: crate::Config>(core::marker::PhantomData<T>);\n\nimpl<T: crate::Config> UncheckedOnRuntimeUpgrade for InnerMigrateV0ToV1<T> {\n    /// Return the existing [`crate::Value`] so we can check that it was correctly set in\n    /// `InnerMigrateV0ToV1::post_upgrade`.\n    #[cfg(feature = \"try-runtime\")]\n    fn pre_upgrade() -> Result<Vec<u8>, sp_runtime::TryRuntimeError> {\n        use codec::Encode;\n\n        // Access the old value using the `storage_alias` type\n        let old_value = v0::Value::<T>::get();\n        // Return it as an encoded `Vec<u8>`\n        Ok(old_value.encode())\n    }\n\n    /// Migrate the storage from V0 to V1.\n    ///\n    /// - If the value doesn't exist, there is nothing to do.\n    /// - If the value exists, it is read and then written back to storage inside a\n    /// [`crate::CurrentAndPreviousValue`].\n    fn on_runtime_upgrade() -> frame_support::weights::Weight {\n        // Read the old value from storage\n        if let Some(old_value) = v0::Value::<T>::take() {\n            // Write the new value to storage\n            let new = crate::CurrentAndPreviousValue { current: old_value, previous: None };\n            crate::Value::<T>::put(new);\n            // One read + write for taking the old value, and one write for setting the new value\n            T::DbWeight::get().reads_writes(1, 2)\n        } else {\n            // No writes since there was no old value, just one read for checking\n            T::DbWeight::get().reads(1)\n        }\n    }\n\n    /// Verifies the storage was migrated correctly.\n    ///\n    /// - If there was no old value, the new value should not be set.\n    /// - If there was an old value, the new value should be a [`crate::CurrentAndPreviousValue`].\n    #[cfg(feature = \"try-runtime\")]\n    fn post_upgrade(state: Vec<u8>) -> Result<(), sp_runtime::TryRuntimeError> {\n        use codec::Decode;\n        use frame_support::ensure;\n\n        let maybe_old_value = Option::<u32>::decode(&mut &state[..]).map_err(|_| {\n            sp_runtime::TryRuntimeError::Other(\"Failed to decode old value from storage\")\n        })?;\n\n        match maybe_old_value {\n            Some(old_value) => {\n                let expected_new_value =\n                    crate::CurrentAndPreviousValue { current: old_value, previous: None };\n                let actual_new_value = crate::Value::<T>::get();\n\n                ensure!(actual_new_value.is_some(), \"New value not set\");\n                ensure!(\n                    actual_new_value == Some(expected_new_value),\n                    \"New value not set correctly\"\n                );\n            },\n       
     None => {\n                ensure!(crate::Value::<T>::get().is_none(), \"New value unexpectedly set\");\n            },\n        };\n        Ok(())\n    }\n}\n\n/// [`UncheckedOnRuntimeUpgrade`] implementation [`InnerMigrateV0ToV1`] wrapped in a\n/// [`VersionedMigration`](frame_support::migrations::VersionedMigration), which ensures that:\n/// - The migration only runs once when the on-chain storage version is 0\n/// - The on-chain storage version is updated to `1` after the migration executes\n/// - Reads/Writes from checking/settings the on-chain storage version are accounted for\npub type MigrateV0ToV1<T> = frame_support::migrations::VersionedMigration<\n    0, // The migration will only execute when the on-chain storage version is 0\n    1, // The on-chain storage version will be set to 1 after the migration is complete\n    InnerMigrateV0ToV1<T>,\n    crate::pallet::Pallet<T>,\n    <T as frame_system::Config>::DbWeight,\n>;\n

"},{"location":"develop/blockchains/maintenance/storage-migrations/#migration-organization","title":"Migration Organization","text":"

Best practices recommend organizing migrations in a separate module within your pallet. Here's the recommended file structure:

my-pallet/\n\u251c\u2500\u2500 src/\n\u2502   \u251c\u2500\u2500 lib.rs       # Main pallet implementation\n\u2502   \u2514\u2500\u2500 migrations/  # All migration-related code\n\u2502       \u251c\u2500\u2500 mod.rs   # Migrations module definition\n\u2502       \u251c\u2500\u2500 v1.rs    # V0 -> V1 migration\n\u2502       \u2514\u2500\u2500 v2.rs    # V1 -> V2 migration\n\u2514\u2500\u2500 Cargo.toml\n
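
With this layout, the migrations/mod.rs file can stay minimal, simply exposing one module per storage version:

// my-pallet/src/migrations/mod.rs\npub mod v1;\npub mod v2;\n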

This structure provides several benefits:

  • Separates migration logic from core pallet functionality
  • Makes migrations easier to test and maintain
  • Provides explicit versioning of storage changes
  • Simplifies the addition of future migrations
"},{"location":"develop/blockchains/maintenance/storage-migrations/#scheduling-migrations","title":"Scheduling Migrations","text":"

To execute migrations during a runtime upgrade, you must configure them in your runtime's Executive pallet. Add your migrations in runtime/src/lib.rs:

/// Tuple of migrations (structs that implement `OnRuntimeUpgrade`)\ntype Migrations = (\n    pallet_my_pallet::migrations::v1::Migration,\n    // More migrations can be added here\n);\npub type Executive = frame_executive::Executive<\n    Runtime,\n    Block,\n    frame_system::ChainContext<Runtime>,\n    Runtime,\n    AllPalletsWithSystem,\n    Migrations, // Include migrations here\n>;\n
"},{"location":"develop/blockchains/maintenance/storage-migrations/#single-block-migrations","title":"Single-Block Migrations","text":"

Single-block migrations execute their logic within one block immediately following a runtime upgrade. They run as part of the runtime upgrade process through the OnRuntimeUpgrade trait implementation and must be completed before any other runtime logic executes.

While single-block migrations are straightforward to implement and provide immediate data transformation, they carry significant risks. The most critical consideration is that they must complete within one block's weight limits. This is especially crucial for parachains, where exceeding block weight limits will brick the chain.

Use single-block migrations only when you can guarantee:

  • The migration has a bounded execution time
  • Weight calculations are thoroughly tested
  • Total weight will never exceed block limits

For a complete implementation example of a single-block migration, refer to the single-block migration example in the Polkadot SDK documentation.

"},{"location":"develop/blockchains/maintenance/storage-migrations/#multi-block-migrations","title":"Multi Block Migrations","text":"

Multi-block migrations distribute the migration workload across multiple blocks, providing a safer approach for production environments. The migration state is tracked in storage, allowing the process to pause and resume across blocks.

This approach is essential for production networks and parachains as the risk of exceeding block weight limits is eliminated. Multi-block migrations can safely handle large storage collections, unbounded data structures, and complex nested data types where weight consumption might be unpredictable.

Multi-block migrations are ideal when dealing with:

  • Large-scale storage migrations
  • Unbounded storage items or collections
  • Complex data structures with uncertain weight costs

The primary trade-off is increased implementation complexity, as you must manage the migration state and handle partial completion scenarios. However, multi-block migrations' significant safety benefits and operational reliability are typically worth the increased complexity.

For a complete implementation example of multi-block migrations, refer to the official example in the Polkadot SDK.

"},{"location":"develop/blockchains/testing/runtime/","title":"Runtime Testing","text":""},{"location":"develop/blockchains/testing/runtime/#introduction","title":"Introduction","text":"

In the Polkadot SDK, it's important to test individual pallets in isolation and how they interact within the runtime. Once unit tests for specific pallets are complete, the next step is integration testing to verify that multiple pallets work together correctly within the blockchain system. This testing ensures that the entire runtime functions as expected under real-world conditions.

This article extends the Testing Setup guide by illustrating how to test interactions between different pallets within the same runtime.

"},{"location":"develop/blockchains/testing/runtime/#testing-pallets-interactions","title":"Testing Pallets Interactions","text":"

Once the test environment is ready, you can write tests to simulate interactions between multiple pallets in the runtime. Below is an example of how to test the interaction between two generic pallets, referred to here as pallet_a and pallet_b. In this scenario, assume that pallet_b depends on pallet_a. The configuration of pallet_b is as follows:

use pallet_a::Config as PalletAConfig;\n\n...\n\n#[pallet::config]\npub trait Config: frame_system::Config + PalletAConfig {\n    type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;\n    type WeightInfo: WeightInfo;\n}\n

And also, pallet_b exposes a call that interacts with pallet_a:

#[pallet::call]\nimpl<T: Config> Pallet<T> {\n    #[pallet::call_index(0)]\n    #[pallet::weight(<T as pallet_b::Config>::WeightInfo::dummy_weight())]\n    pub fn dummy_call_against_pallet_a(_origin: OriginFor<T>, number: u32) -> DispatchResult {\n        pallet_a::DummyCounter::<T>::put(number);\n        Self::deposit_event(Event::Dummy);\n        Ok(())\n    }\n}\n

In this first test, a call to pallet_a is simulated, and the internal state is checked to ensure it updates correctly. The block number is also checked to ensure it advances as expected:

#[test]\nfn testing_runtime_with_pallet_a() {\n    new_test_ext().execute_with(|| {\n        // Block 0: Verify runtime initialization\n        assert_eq!(frame_system::Pallet::<Runtime>::block_number(), 0);\n\n        // Check the initial state of pallet_a\n        assert_eq!(0, pallet_a::Pallet::<Runtime>::get_dummy_counter());\n\n        // Simulate calling a function from pallet_a\n        let dummy_origin = RuntimeOrigin::none();\n        assert_ok!(pallet_a::Pallet::<Runtime>::dummy_call(dummy_origin, 2));\n\n        // Verify that pallet_a's state has been updated\n        assert_eq!(2, pallet_a::Pallet::<Runtime>::get_dummy_counter());\n\n        // Move to the next block\n        frame_system::Pallet::<Runtime>::set_block_number(1);\n\n        // Confirm the block number has advanced\n        assert_eq!(frame_system::Pallet::<Runtime>::block_number(), 1);\n    });\n}\n

Next, a test can be written to verify the interaction between pallet_a and pallet_b:

#[test]\nfn testing_runtime_with_pallet_b() {\n    new_test_ext().execute_with(|| {\n        // Block 0: Check if initialized correctly\n        assert_eq!(frame_system::Pallet::<Runtime>::block_number(), 0);\n\n        // Ensure that pallet_a is initialized correctly\n        assert_eq!(0, pallet_a::Pallet::<Runtime>::get_dummy_counter());\n\n        // Use pallet_b to call a function that interacts with pallet_a\n        let dummy_origin = RuntimeOrigin::none();\n        assert_ok!(pallet_b::Pallet::<Runtime>::dummy_call_against_pallet_a(dummy_origin, 4));\n\n        // Confirm that pallet_a's state was updated by pallet_b\n        assert_eq!(4, pallet_a::Pallet::<Runtime>::get_dummy_counter());\n\n        // Transition to block 1.\n        frame_system::Pallet::<Runtime>::set_block_number(1);\n\n        // Confirm the block number has advanced\n        assert_eq!(frame_system::Pallet::<Runtime>::block_number(), 1);\n    });\n}\n

This test demonstrates how pallet_b can trigger a change in pallet_a's state, verifying that the pallets interact properly during runtime.

For more information about testing more specific elements like storage, errors, and events, see the Pallet Testing article.

Integration Test - Complete Code

The complete code for the integration test is shown below:

pub mod integration_testing {\n    use crate::*;\n    use sp_runtime::BuildStorage;\n    use frame_support::assert_ok;\n\n    // Build genesis storage according to the runtime's configuration.\n    pub fn new_test_ext() -> sp_io::TestExternalities {\n        frame_system::GenesisConfig::<Runtime>::default().build_storage().unwrap().into()\n    }\n\n    #[test]\n    fn testing_runtime_with_pallet_a() {\n        new_test_ext().execute_with(|| {\n            // Block 0: Check if initialized correctly\n            assert_eq!(frame_system::Pallet::<Runtime>::block_number(), 0);\n\n            assert_eq!(0, pallet_a::Pallet::<Runtime>::get_dummy_counter());\n\n            let dummy_origin = RuntimeOrigin::none();\n            assert_ok!(pallet_a::Pallet::<Runtime>::dummy_call(dummy_origin, 2));\n\n            assert_eq!(2, pallet_a::Pallet::<Runtime>::get_dummy_counter());\n\n            // Transition to block 1.\n            frame_system::Pallet::<Runtime>::set_block_number(1);\n\n            // Check if block number is now 1.\n            assert_eq!(frame_system::Pallet::<Runtime>::block_number(), 1);\n        });\n    }\n\n    #[test]\n    fn testing_runtime_with_pallet_b() {\n        new_test_ext().execute_with(|| {\n            // Block 0: Check if initialized correctly\n            assert_eq!(frame_system::Pallet::<Runtime>::block_number(), 0);\n\n            assert_eq!(0, pallet_a::Pallet::<Runtime>::get_dummy_counter());\n            let dummy_origin = RuntimeOrigin::none();\n            assert_ok!(pallet_b::Pallet::<Runtime>::dummy_call_against_pallet_a(dummy_origin, 4));\n            assert_eq!(4, pallet_a::Pallet::<Runtime>::get_dummy_counter());\n\n            // Transition to block 1.\n            frame_system::Pallet::<Runtime>::set_block_number(1);\n\n            // Check if block number is now 1.\n            assert_eq!(frame_system::Pallet::<Runtime>::block_number(), 1);\n        });\n    }\n}\n
"},{"location":"develop/blockchains/testing/runtime/#verifying-pallet-interactions","title":"Verifying Pallet Interactions","text":"

The tests confirm that:

  • Pallets initialize correctly - at the start of each test, the system should initialize with block number 0, and the pallets should be in their default states
  • Pallets modify each other's state - the second test shows how one pallet can trigger changes in another pallet's internal state, confirming proper cross-pallet interactions
  • State transitions between blocks are seamless - by simulating block transitions, the tests validate that the runtime responds correctly to changes in the block number

Testing pallet interactions within the runtime is critical for ensuring the blockchain behaves as expected under real-world conditions. Writing integration tests allows validation of how pallets function together, preventing issues that might arise when the system is fully assembled.

This approach provides a comprehensive view of the runtime's functionality, ensuring the blockchain is stable and reliable.

"},{"location":"develop/blockchains/testing/setup/","title":"Testing Setup","text":""},{"location":"develop/blockchains/testing/setup/#introduction","title":"Introduction","text":"

In Polkadot SDK development, testing is crucial to ensure your blockchain works as expected. While unit testing for individual pallets validates isolated functionality, as discussed in Pallet Testing, it's equally important to test how these pallets function together within the runtime. Runtime testing fills this role by providing a complete simulation of the blockchain system.

This guide will help you set up an environment to test an entire runtime. Runtime testing will enable you to assess how different pallets, their configurations, and system components interact, ensuring your blockchain behaves correctly under real-world conditions.

"},{"location":"develop/blockchains/testing/setup/#runtime-testing","title":"Runtime Testing","text":"

In the context of Polkadot SDK, runtime testing involves creating a simulated environment that mimics actual blockchain conditions. This type of testing goes beyond individual pallet validation, focusing on how multiple components integrate and collaborate across the system. This way, multiple runtimes can be tested if needed.

While unit tests provide confidence that individual pallets function correctly in isolation, runtime tests offer a holistic view. These tests validate pallets' communication and interaction, ensuring a seamless and functional blockchain system. By running integration tests at the runtime level, you can catch issues that only arise when multiple pallets are combined, which is critical for building a stable and reliable blockchain.

"},{"location":"develop/blockchains/testing/setup/#configuring-a-mock-runtime-for-integration-tests","title":"Configuring a Mock Runtime for Integration Tests","text":"

The mock runtime includes all the necessary pallets and configurations needed for testing. To simplify the process, you can create a module that integrates all components, making it easier to assess how pallets and system elements interact.

Here's a simple example of how to create a testing module that simulates these interactions:

pub mod integration_testing {\n    use crate::*;\n    // ...\n}\n

Note

The crate::*; snippet imports all the components from your crate (including runtime configurations, pallet modules, and utility functions) into the integration_testing module. This allows you to write tests without manually importing each piece, making the code more concise and readable.

Once the testing module is set, the next step is configuring the genesis storage\u2014the initial state of your blockchain. Genesis storage sets the starting conditions for the runtime, defining how pallets are configured before any blocks are produced.

In Polkadot SDK, you can create this storage using the BuildStorage trait from the sp_runtime crate. This trait is essential for building the configuration that initializes the blockchain's state.

The function new_test_ext() demonstrates setting up this environment. It uses frame_system::GenesisConfig::<Runtime>::default() to generate a default genesis configuration for the runtime, followed by .build_storage() to create the initial storage state. This storage is then converted into a format usable by the testing framework, sp_io::TestExternalities, allowing tests to be executed in a simulated blockchain environment.

Here's the code that sets up the mock runtime:

pub mod integration_testing {\n    use crate::*;\n    use sp_runtime::BuildStorage;\n\n    pub fn new_test_ext() -> sp_io::TestExternalities {\n        frame_system::GenesisConfig::<Runtime>::default()\n            .build_storage()\n            .unwrap()\n            .into()\n    }\n}\n

You can also customize the genesis storage to set initial values for your runtime pallets. For example, you can set the initial balance for accounts like this:

// Build genesis storage according to the runtime's configuration\npub fn new_test_ext() -> sp_io::TestExternalities {\n    // Define the initial balances for accounts\n    let initial_balances: Vec<(AccountId32, u128)> = vec![\n        (AccountId32::from([0u8; 32]), 1_000_000_000_000),\n        (AccountId32::from([1u8; 32]), 2_000_000_000_000),\n    ];\n\n    let mut t = frame_system::GenesisConfig::<Runtime>::default()\n        .build_storage()\n        .unwrap();\n\n    // Adding balances configuration to the genesis config\n    pallet_balances::GenesisConfig::<Runtime> {\n        balances: initial_balances,\n    }\n    .assimilate_storage(&mut t)\n    .unwrap();\n\n    t.into()\n}\n
"},{"location":"develop/blockchains/testing/setup/#where-to-go-next","title":"Where to Go Next","text":"

With the mock environment in place, you can now write tests to validate how your pallets interact within the runtime. This approach ensures that your blockchain behaves as expected when the entire runtime is assembled.

You can view a complete example of an integration test in the Astar parachain codebase.

For more advanced information on runtime testing, please refer to the Runtime Testing article.

"},{"location":"develop/development-pathways/","title":"Development Pathways","text":""},{"location":"develop/development-pathways/#introduction","title":"Introduction","text":"

Developers can choose from different development pathways to build applications and core blockchain functionality. Each pathway caters to different types of projects and developer skill sets, while complementing one another within the broader network.

The Polkadot ecosystem provides multiple development pathways:

graph TD\n    A[Development Pathways]\n    A --> B[Smart Contract Development]\n    A --> C[Blockchain Development]\n    A --> D[Client-side Development]
"},{"location":"develop/development-pathways/#smart-contract-development","title":"Smart Contract Development","text":"

Smart contracts are sandboxed programs that run within a virtual machine on the blockchain. These deterministic pieces of code are deployed at specific blockchain addresses and execute predefined logic when triggered by transactions. Because they run in an isolated environment, they provide enhanced security and predictable execution. Smart contracts can be deployed permissionlessly, allowing any developer to create and launch applications without requiring special access or permissions. They enable developers to create trustless applications by encoding rules, conditions, and state transitions that leverage the security and transparency of the underlying blockchain.

Some key benefits of developing smart contracts include ease of development, faster time to market, and permissionless deployment. Smart contracts allow developers to quickly build and deploy decentralized applications without complex infrastructure or intermediaries. This accelerates the development lifecycle and enables rapid innovation within the Polkadot ecosystem.

For more information on developing smart contracts in the Polkadot ecosystem, check the Smart Contracts section.

"},{"location":"develop/development-pathways/#blockchain-development","title":"Blockchain Development","text":"

Runtimes are the core building blocks that define the logic and functionality of Polkadot SDK-based blockchains. Developers can customize and extend the features of their blockchain, allowing for tighter integration with critical network tasks such as block production, consensus mechanisms, and governance processes.

Runtimes can be upgraded through forkless runtime updates, enabling seamless evolution of the blockchain without disrupting existing functionality.

Developers can define the parameters, rules, and behaviors that shape their blockchain network. This includes token economics, transaction fees, permissions, and more. Using the Polkadot SDK, teams can iterate on their blockchain designs, experiment with new features, and deploy highly specialized networks tailored to their specific use cases.

For those interested in delving deeper into runtime development, explore the dedicated Custom Blockchains section.

"},{"location":"develop/development-pathways/#client-side-development","title":"Client-Side Development","text":"

The client-side development path is dedicated to building applications that interact with Polkadot SDK-based blockchains and enhance user engagement with the network. While decentralized applications (dApps) are a significant focus, this pathway also includes developing other tools and interfaces that expand users' interactions with blockchain data and services.

Client-side developers can build:

  • Decentralized applications (dApps) - these applications leverage the blockchain's smart contracts or runtimes to offer a wide range of features, from financial services to gaming and social applications, all accessible directly by end-users

  • Command-line interfaces (CLIs) - CLI tools empower developers and technical users to interact with the blockchain programmatically. These tools enable tasks like querying the blockchain, deploying smart contracts, managing wallets, and monitoring network status

  • Data analytics and visualization tools - developers can create tools that aggregate, analyze, and visualize on-chain data to help users and businesses understand trends, track transactions, and gain insights into the network's health and usage

  • Wallets - securely managing accounts and private keys is crucial for blockchain users. Client-side development includes building user-friendly wallets, account management tools, and extensions that integrate seamlessly with the ecosystem

  • Explorers and dashboards - blockchain explorers allow users to view and search on-chain data, including blocks, transactions, and accounts. Dashboards provide a more interactive interface for users to monitor critical metrics, such as staking rewards, governance proposals, and network performance

These applications can leverage the Polkadot blockchain's underlying protocol features to create solutions that let users interact with the ecosystem. The client-side development pathway is ideal for developers interested in enhancing user experiences and building applications that bring the power of decentralized networks to a broader audience.

Check the API Libraries section for essential tools to interact with Polkadot SDK-based blockchain data and protocol features.

"},{"location":"develop/integrations/indexers/","title":"Indexers","text":""},{"location":"develop/integrations/indexers/#the-challenge-of-blockchain-data-access","title":"The Challenge of Blockchain Data Access","text":"

Blockchain data is inherently sequential and distributed, with information stored chronologically across numerous blocks. While retrieving data from a single block through JSON-RPC API calls is straightforward, more complex queries that span multiple blocks present significant challenges:

  • Data is scattered and unorganized across the blockchain
  • Retrieving large datasets can take days or weeks to sync
  • Complex operations (like aggregations, averages, or cross-chain queries) require additional processing
  • Direct blockchain queries can impact dApp performance and responsiveness
"},{"location":"develop/integrations/indexers/#what-is-a-blockchain-indexer","title":"What is a Blockchain Indexer?","text":"

A blockchain indexer is a specialized infrastructure tool that processes, organizes, and stores blockchain data in an optimized format for efficient querying. Think of it as a search engine for blockchain data that:

  • Continuously monitors the blockchain for new blocks and transactions
  • Processes and categorizes this data according to predefined schemas
  • Stores the processed data in an easily queryable database
  • Provides efficient APIs (typically GraphQL) for data retrieval
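
For example, once an indexer exposes a GraphQL endpoint, a client can answer a multi-block question with a single HTTP request. The following is a minimal TypeScript sketch; the endpoint URL and the transfers entity are hypothetical and depend entirely on the schema your indexer project defines:

// Query a hypothetical indexer GraphQL endpoint for recent transfers.
// INDEXER_URL and the `transfers` entity are assumptions; real schemas
// are defined by your indexer project.
const INDEXER_URL = 'https://indexer.example.com/graphql';

const query = `
  query {
    transfers(limit: 5, orderBy: blockNumber_DESC) {
      from
      to
      amount
      blockNumber
    }
  }
`;

async function fetchRecentTransfers(): Promise<void> {
  const response = await fetch(INDEXER_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  });
  const { data } = await response.json();
  console.log(data.transfers);
}

fetchRecentTransfers();

A query like this replaces what would otherwise be a scan over many blocks through direct RPC calls.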
"},{"location":"develop/integrations/indexers/#indexer-implementations","title":"Indexer Implementations","text":"
  • Subsquid

    Subsquid is a data network that allows rapid and cost-efficient retrieval of blockchain data from 100+ chains using Subsquid's decentralized data lake and open-source SDK. In simple terms, Subsquid can be considered an ETL (extract, transform, and load) tool with a GraphQL server included. It enables comprehensive filtering, pagination, and even full-text search capabilities. Subsquid has native and full support for EVM and Substrate data, even within the same project.

    Reference

  • Subquery

    SubQuery is a fast, flexible, and reliable open-source decentralized data infrastructure network that provides both RPC and indexed data to consumers worldwide. It provides custom APIs for your web3 project across multiple supported chains.

    Reference

"},{"location":"develop/integrations/oracles/","title":"Oracles","text":""},{"location":"develop/integrations/oracles/#what-is-a-blockchain-oracle","title":"What is a Blockchain Oracle?","text":"

Oracles enable blockchains to access external data sources. Since blockchains operate as isolated networks, they cannot natively interact with external systems - this limitation is known as the \"blockchain oracle problem.\" Oracles solve this by extracting data from external sources (like APIs, IoT devices, or other blockchains), validating it, and submitting it on-chain.

While simple oracle implementations may rely on a single trusted provider, more sophisticated solutions use decentralized networks where multiple providers stake assets and reach consensus on data validity. Typical applications include DeFi price feeds, weather data for insurance contracts, and cross-chain asset verification.

"},{"location":"develop/integrations/oracles/#oracle-implementations","title":"Oracle Implementations","text":"
  • Acurast

    Acurast is a decentralized, serverless cloud platform that uses a distributed network of mobile devices for oracle services, addressing centralized trust and data ownership issues. In the Polkadot ecosystem, it allows developers to define off-chain data and computation needs, which are processed by these devices acting as decentralized oracle nodes, delivering results to Substrate (Wasm) and EVM environments.

    Reference

  • Chainlink

    Chainlink is a decentralized oracle network that brings external data onto blockchains. It acts as a secure bridge between traditional data sources and blockchain networks, enabling access to real-world information reliably. In the Polkadot ecosystem, Chainlink provides the Chainlink Feed Pallet, a Polkadot SDK-based oracle module that enables access to price reference data across your runtime logic.

    Reference

"},{"location":"develop/integrations/wallets/","title":"Wallets","text":""},{"location":"develop/integrations/wallets/#what-is-a-blockchain-wallet","title":"What is a Blockchain Wallet?","text":"

A wallet serves as your gateway to interacting with blockchain networks. Rather than storing funds, wallets secure your private keys, controlling access to your blockchain assets. Your private key provides complete control over all permitted transactions on your blockchain account, making it essential to keep it secure.

Wallet types fall into two categories based on their connection to the internet:

  • Hot wallets - online storage through websites, browser extensions, or smartphone apps
  • Cold wallets - offline storage using hardware devices or air-gapped systems
"},{"location":"develop/integrations/wallets/#hot-wallets","title":"Hot Wallets","text":"
  • Talisman

    With Talisman, you can securely store assets, manage your portfolio, and interact with Polkadot and Ethereum applications. It supports Web3 apps, asset storage, and account management across over 150 Polkadot SDK-based and EVM networks. Additional features include NFT management, Ledger support, fiat on-ramp, and portfolio tracking

    Reference

  • Subwallet

    A non-custodial Polkadot and Ethereum wallet that lets you track, send, receive, and monitor multi-chain assets on 150+ networks. You can import accounts with a seed phrase, private key, QR code, or JSON file, import tokens and NFTs, and attach read-only accounts. Features include XCM transfers, NFT management, Parity Signer and Ledger support, light client support, EVM dApp support, MetaMask compatibility, custom endpoints, fiat on-ramp, phishing detection, and transaction history.

    Reference

"},{"location":"develop/integrations/wallets/#cold-wallets","title":"Cold Wallets","text":"
  • Ledger

    A hardware wallet that securely stores cryptocurrency private keys offline, protecting them from online threats. Its secure chip, used together with the Ledger Live app, allows safe transactions and asset management while keeping keys secure.

    Reference

  • Polkadot Vault

    This cold storage solution lets you use a phone in airplane mode as an air-gapped wallet, turning any spare phone, tablet, or iOS/Android device into a hardware wallet.

    Reference

"},{"location":"develop/parachain-devs/system-parachains/register-a-foreign-asset/","title":"Register a Foreign Asset on Asset Hub","text":""},{"location":"develop/parachain-devs/system-parachains/register-a-foreign-asset/#introduction","title":"Introduction","text":"

As outlined in the Asset Hub Overview, Asset Hub supports two categories of assets: local and foreign. Local assets are created on the Asset Hub system parachain and are identified by integer IDs. On the other hand, foreign assets, which originate outside of Asset Hub, are recognized by Multilocations.

When registering a foreign asset on Asset Hub, it's essential to note that the process involves communication between two parachains: Asset Hub is the destination of the foreign asset, while the source parachain is the asset's origin. The communication between the two parachains is facilitated by the Cross-Chain Message Passing (XCMP) protocol.

This guide will take you through the process of registering a foreign asset on the Asset Hub parachain.

"},{"location":"develop/parachain-devs/system-parachains/register-a-foreign-asset/#prerequisites","title":"Prerequisites","text":"

The Asset Hub parachain is one of the system parachains on a relay chain, such as Polkadot or Kusama. To interact with these parachains, you can use the Polkadot.js Apps interface for:

  • Polkadot Asset Hub
  • Kusama Asset Hub

For testing purposes, you can also interact with the Asset Hub instance on the following test networks:

  • Paseo Asset Hub

Before you start, ensure that you have:

  • Access to the Polkadot.js Apps interface, and you are connected to the desired chain
  • A parachain that supports the XCMP protocol to interact with the Asset Hub parachain
  • A funded wallet to pay for the transaction fees and subsequent registration of the foreign asset

This guide will use Polkadot, its local Asset Hub instance, and the Astar parachain (ID 2006), as stated in the Test Environment Setup section. However, the process is the same for other relay chains and their respective Asset Hub parachains, regardless of the network you are using or which parachain owns the foreign asset.

"},{"location":"develop/parachain-devs/system-parachains/register-a-foreign-asset/#steps-to-register-a-foreign-asset","title":"Steps to Register a Foreign Asset","text":""},{"location":"develop/parachain-devs/system-parachains/register-a-foreign-asset/#asset-hub","title":"Asset Hub","text":"
  1. Open the Polkadot.js Apps interface and connect to the Asset Hub parachain using the network selector in the top left corner

    • Testing foreign asset registration is recommended on TestNet before proceeding to MainNet. If you haven't set up a local testing environment yet, consult the Environment setup guide. After setting up, connect to the Local Node (Chopsticks) at ws://127.0.0.1:8000
    • For live network operations, connect to the Asset Hub parachain. You can choose either Polkadot or Kusama Asset Hub from the dropdown menu, selecting your preferred RPC provider.
  2. Navigate to the Extrinsics page

    1. Click on the Developer tab from the top navigation bar
    2. Select Extrinsics from the dropdown

  3. Select the Foreign Assets pallet

    1. Select the foreignAssets pallet from the dropdown list
    2. Choose the create extrinsic

  4. Fill out the required fields and click on the copy icon to copy the encoded call data to your clipboard. The fields to be filled are:

    • id - as this is a foreign asset, the ID will be represented by a Multilocation that reflects its origin. In this case, the Multilocation points to the source parachain:

      MultiLocation {parents: 1, interior: X1(Parachain(2006))};\n
    • admin - refers to the account that will be the admin of this asset. This account will be able to manage the asset, including updating its metadata. As the registered asset corresponds to a native asset of the source parachain, the admin account should be the sovereign account of the source parachain

      Obtain the sovereign account

      The sovereign account can be obtained through Substrate Utilities.

      Ensure that Sibling is selected and that the Para ID corresponds to the source parachain. In this case, since the guide follows the test setup stated in the Test Environment Setup section, the Para ID is 2006.

    • minBalance - the minimum balance required to hold this asset

    Encoded call data

    If you want an example of the encoded call data, you can copy the following:

    0x3500010100591f007369626cd6070000000000000000000000000000000000000000000000000000a0860100000000000000000000000000\n
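
If you prefer to generate this encoded call data programmatically rather than through the UI, the following is a minimal sketch using the Polkadot.js API. The endpoint, admin address, and minimum balance are placeholders to adapt to your setup:

import { ApiPromise, WsProvider } from '@polkadot/api';

async function buildCreateCallData(): Promise<void> {
  // Connect to the (local) Asset Hub instance
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:8000'),
  });

  // Multilocation identifying the source parachain (ID 2006).
  // Note: runtimes using XCM v4 or later expect X1 to be an array,
  // e.g. { X1: [{ Parachain: 2006 }] }
  const assetId = { parents: 1, interior: { X1: { Parachain: 2006 } } };

  // Sovereign account of the source parachain (placeholder)
  const admin = 'INSERT_SOVEREIGN_ACCOUNT_ADDRESS';

  // Minimum balance required to hold the asset
  const minBalance = 100_000;

  // Build the extrinsic and print its hex-encoded call data
  const call = api.tx.foreignAssets.create(assetId, admin, minBalance);
  console.log('Encoded call data:', call.method.toHex());

  await api.disconnect();
}

buildCreateCallData();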

"},{"location":"develop/parachain-devs/system-parachains/register-a-foreign-asset/#source-parachain","title":"Source Parachain","text":"
  1. Navigate to the Developer > Extrinsics section
  2. Create the extrinsic to register the foreign asset through XCM

    1. Paste the encoded call data copied in the previous step
    2. Click the Submit Transaction button

    This XCM call withdraws DOT from the parachain's sibling (sovereign) account and uses it to buy execution. The transaction is carried out with Xcm as the origin kind and contains a hex-encoded call that creates a foreign asset on Asset Hub for the specified parachain asset multilocation. Any surplus is refunded, and the remaining funds are deposited back into the sibling account.

    Warning

    Note that the sovereign account on the Asset Hub parachain must have a sufficient balance to cover the XCM BuyExecution instruction. If the account does not have enough balance, the transaction will fail.

    Example of the encoded call data

    If you want to have the whole XCM call ready to be copied, go to the Developer > Extrinsics > Decode section and paste the following hex-encoded call data:

    0x6300330003010100a10f030c000400010000070010a5d4e81300010000070010a5d4e80006030700b4f13501419ce03500010100591f007369626cd607000000000000000000000000000000000000000000000000000000000000000000000000000000000000\n

    Be sure to replace the encoded call data with the one you copied in the previous step.

After the transaction is successfully executed, the foreign asset will be registered on the Asset Hub parachain.

"},{"location":"develop/parachain-devs/system-parachains/register-a-foreign-asset/#asset-registration-verification","title":"Asset Registration Verification","text":"

To confirm that a foreign asset has been successfully accepted and registered on the Asset Hub parachain, you can navigate to the Network > Explorer section of the Polkadot.js Apps interface for Asset Hub and inspect the most recent events.

The success field of the corresponding event indicates whether the asset registration was successful.

"},{"location":"develop/parachain-devs/system-parachains/register-a-foreign-asset/#test-environment-setup","title":"Test Environment Setup","text":"

To test the foreign asset registration process before deploying it on a live network, you can set up a local parachain environment. This guide uses Chopsticks to simulate that process. For more information on using Chopsticks, please refer to the Chopsticks documentation.

To set up a test environment, run the following command:

npx @acala-network/chopsticks xcm \\\n--r polkadot \\\n--p polkadot-asset-hub \\\n--p astar\n

Note

The above command will create a lazy fork of Polkadot as the relay chain, its Asset Hub instance, and the Astar parachain. The xcm parameter enables communication through the XCMP protocol between the relay chain and the parachains, allowing the registration of foreign assets on Asset Hub. For further information on how Chopsticks handles the XCMP protocol, refer to the XCM Testing section of the Chopsticks documentation.

After executing the command, the terminal output will confirm that the Polkadot relay chain, the Polkadot Asset Hub, and the Astar parachain are running locally and connected through XCM. They can be accessed via the Polkadot.js Apps interface:

  • Polkadot Relay Chain
  • Polkadot Asset Hub
  • Astar Parachain
"},{"location":"develop/parachain-devs/system-parachains/register-a-local-asset/","title":"Register a Local Asset on Asset Hub","text":""},{"location":"develop/parachain-devs/system-parachains/register-a-local-asset/#introduction","title":"Introduction","text":"

As detailed in the Asset Hub Overview page, Asset Hub accommodates two types of assets: local and foreign. Local assets are those that were created in Asset Hub and are identifiable by an integer ID. On the other hand, foreign assets originate from a sibling parachain and are identified by a Multilocation.

This guide will take you through the steps of registering a local asset on the Asset Hub parachain.

"},{"location":"develop/parachain-devs/system-parachains/register-a-local-asset/#prerequisites","title":"Prerequisites","text":"

Before you begin, ensure you have access to the Polkadot.js Apps interface and a funded wallet with DOT or KSM.

  • For Polkadot Asset Hub, you would need a deposit of 10 DOT and around 0.201 DOT for the metadata
  • For Kusama Asset Hub, the deposit is 0.1 KSM and around 0.000669 KSM for the metadata

Ensure that your Asset Hub account balance is slightly more than the sum of those two amounts (for example, roughly 10.21 DOT on Polkadot Asset Hub) so that it covers both the required deposits and the transaction fees.

"},{"location":"develop/parachain-devs/system-parachains/register-a-local-asset/#steps-to-register-a-local-asset","title":"Steps to Register a Local Asset","text":"

To register a local asset on the Asset Hub parachain, follow these steps:

  1. Open the Polkadot.js Apps interface and connect to the Asset Hub parachain using the network selector in the top left corner

    • You may prefer to test local asset registration on TestNet before registering the asset on a MainNet hub. If you still need to set up a local testing environment, review the Environment setup section for instructions. Once the local environment is set up, connect to the Local Node (Chopsticks) available on ws://127.0.0.1:8000
    • For the live network, connect to the Asset Hub parachain. Either Polkadot or Kusama Asset Hub can be selected from the dropdown list, choosing the desired RPC provider
  2. Click on the Network tab on the top navigation bar and select Assets from the dropdown list

  3. Now, you need to examine all the registered asset IDs. This step is crucial to ensure that the asset ID you are about to register is unique. Asset IDs are displayed in the assets column

  4. Once you have confirmed that the asset ID is unique, click on the Create button on the top right corner of the page

  5. Fill in the required fields in the Create Asset form:

    1. creator account - the account to be used for creating this asset and setting up the initial metadata
    2. asset name - the descriptive name of the asset you are registering
    3. asset symbol - the symbol that will be used to represent the asset
    4. asset decimals - the number of decimal places for this token, with a maximum of 20 allowed through the user interface
    5. minimum balance - the minimum balance for the asset, specified using the units and decimals defined above
    6. asset ID - the selected ID for the asset. This should not match an already-existing asset ID
    7. Click on the Next button

  6. Choose the accounts for the roles listed below:

    1. admin account - the account designated for continuous administration of the token
    2. issuer account - the account that will be used for issuing this token
    3. freezer account - the account that will be used for performing token freezing operations
    4. Click on the Create button

  7. Click on the Sign and Submit button to complete the asset registration process
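
Alternatively, the same registration can be scripted with the Polkadot.js API instead of the Apps UI. A minimal sketch, assuming a local Asset Hub fork at ws://127.0.0.1:8000 and the //Alice development account; the asset ID, metadata, and minimum balance are placeholders:

import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function registerLocalAsset(): Promise<void> {
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:8000'),
  });

  // Development account used as creator and admin (placeholder)
  const keyring = new Keyring({ type: 'sr25519' });
  const creator = keyring.addFromUri('//Alice');

  const assetId = 1234; // Must not collide with an existing asset ID
  const minBalance = 1_000;

  // Create the asset and set its metadata in a single atomic batch
  const txs = [
    api.tx.assets.create(assetId, creator.address, minBalance),
    api.tx.assets.setMetadata(assetId, 'My Asset', 'MYA', 12),
  ];
  const txHash = await api.tx.utility.batchAll(txs).signAndSend(creator);
  console.log('Submitted with hash:', txHash.toHex());

  await api.disconnect();
}

registerLocalAsset();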

"},{"location":"develop/parachain-devs/system-parachains/register-a-local-asset/#verify-asset-registration","title":"Verify Asset Registration","text":"

After completing these steps, the asset will be successfully registered. You can now view your asset listed on the Assets section of the Polkadot.js Apps interface.

Note

Take into consideration that the Assets section's link may differ depending on the network you are using. For the local environment, the link will be ws://127.0.0.1:8000/#/assets.

In this way, you have successfully registered a local asset on the Asset Hub parachain.

For an in-depth explanation of Asset Hub and its features, please refer to the Polkadot Wiki page on Asset Hub.

"},{"location":"develop/parachain-devs/system-parachains/register-a-local-asset/#test-setup-environment","title":"Test Setup Environment","text":"

You can set up a local parachain environment to test the asset registration process before deploying it on the live network. This guide uses Chopsticks to simulate that process. For further information on Chopsticks usage, refer to the Chopsticks documentation.

To set up a test environment, execute the following command:

npx @acala-network/chopsticks \\\n--config=https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot-asset-hub.yml\n

Note

The above command will spawn a lazy fork of Polkadot Asset Hub with the latest block data from the network. If you need to test Kusama Asset Hub, replace polkadot-asset-hub.yml with kusama-asset-hub.yml in the command.

An Asset Hub instance is now running locally, and you can proceed with the asset registration process. Note that the local registration process does not differ from the live network process. Once you have a successful TestNet transaction, you can use the same steps to register the asset on MainNet.

"},{"location":"develop/toolkit/api-libraries/papi/","title":"Polkadot-API","text":""},{"location":"develop/toolkit/api-libraries/papi/#introduction","title":"Introduction","text":"

Polkadot-API (PAPI) is a set of libraries built to be modular, composable, and grounded in a \"light-client first\" approach. Its primary aim is to equip dApp developers with an extensive toolkit for building fully decentralized applications.

PAPI is optimized for light-client functionality, using the new JSON-RPC spec to support decentralized interactions fully. It provides strong TypeScript support with types and documentation generated directly from on-chain metadata, and it offers seamless access to storage reads, constants, transactions, events, and runtime calls. Developers can connect to multiple chains simultaneously and prepare for runtime updates through multi-descriptor generation and compatibility checks. PAPI is lightweight and performant, leveraging native BigInt, dynamic imports, and modular subpaths to avoid bundling unnecessary assets. It supports promise-based and observable-based APIs, integrates easily with Polkadot.js extensions, and offers signing options through browser extensions or private keys.

"},{"location":"develop/toolkit/api-libraries/papi/#get-started","title":"Get Started","text":""},{"location":"develop/toolkit/api-libraries/papi/#api-instantiation","title":"API Instantiation","text":"

Before instantiating the API, install the polkadot-api package using the following command:

npmpnpmyarn
npm i polkadot-api\n
pnpm add polkadot-api\n
yarn add polkadot-api\n

Then, obtain the latest metadata from the target chain and generate the necessary types:

# Add the target chain\nnpx papi add dot -n polkadot\n

The papi add command initializes the library by generating the corresponding types needed for the chain. It assigns the chain a custom name (dot in this example) and downloads the metadata from the Polkadot chain. You can replace dot with any name you prefer, or point to a different chain if you want to add another one. Once the latest metadata is downloaded, generate the required types:

# Generate the necessary types\nnpx papi\n

You can now set up a PolkadotClient with your chosen provider to begin interacting with the API. Choose from Smoldot via WebWorker, Node.js, or direct usage, or connect through the WSS provider. The examples below show how to configure each option for your setup.

Smoldot (WebWorker)Smoldot (Node.js)SmoldotWSS
// `dot` is the identifier assigned during `npx papi add`\nimport { dot } from '@polkadot-api/descriptors';\nimport { createClient } from 'polkadot-api';\nimport { getSmProvider } from 'polkadot-api/sm-provider';\nimport { chainSpec } from 'polkadot-api/chains/polkadot';\nimport { startFromWorker } from 'polkadot-api/smoldot/from-worker';\nimport SmWorker from 'polkadot-api/smoldot/worker?worker';\n\nconst worker = new SmWorker();\nconst smoldot = startFromWorker(worker);\nconst chain = await smoldot.addChain({ chainSpec });\n\n// Establish connection to the Polkadot relay chain\nconst client = createClient(getSmProvider(chain));\n\n// To interact with the chain, obtain the `TypedApi`, which provides\n// the necessary types for every API call on this chain\nconst dotApi = client.getTypedApi(dot);\n
// `dot` is the alias assigned during `npx papi add`\nimport { dot } from '@polkadot-api/descriptors';\nimport { createClient } from 'polkadot-api';\nimport { getSmProvider } from 'polkadot-api/sm-provider';\nimport { chainSpec } from 'polkadot-api/chains/polkadot';\nimport { startFromWorker } from 'polkadot-api/smoldot/from-node-worker';\nimport { fileURLToPath } from 'url';\nimport { Worker } from 'worker_threads';\n\n// Get the path for the worker file in ESM\nconst workerPath = fileURLToPath(\n  import.meta.resolve('polkadot-api/smoldot/node-worker'),\n);\n\nconst worker = new Worker(workerPath);\nconst smoldot = startFromWorker(worker);\nconst chain = await smoldot.addChain({ chainSpec });\n\n// Set up a client to connect to the Polkadot relay chain\nconst client = createClient(getSmProvider(chain));\n\n// To interact with the chain's API, use `TypedApi` for access to\n// all the necessary types and calls associated with this chain\nconst dotApi = client.getTypedApi(dot);\n
// `dot` is the alias assigned when running `npx papi add`\nimport { dot } from '@polkadot-api/descriptors';\nimport { createClient } from 'polkadot-api';\nimport { getSmProvider } from 'polkadot-api/sm-provider';\nimport { chainSpec } from 'polkadot-api/chains/polkadot';\nimport { start } from 'polkadot-api/smoldot';\n\n// Initialize Smoldot client\nconst smoldot = start();\nconst chain = await smoldot.addChain({ chainSpec });\n\n// Set up a client to connect to the Polkadot relay chain\nconst client = createClient(getSmProvider(chain));\n\n// Access the `TypedApi` to interact with all available chain calls and types\nconst dotApi = client.getTypedApi(dot);\n
// `dot` is the identifier assigned when executing `npx papi add`\nimport { dot } from '@polkadot-api/descriptors';\nimport { createClient } from 'polkadot-api';\n// Use this import for Node.js environments\nimport { getWsProvider } from 'polkadot-api/ws-provider/web';\nimport { withPolkadotSdkCompat } from 'polkadot-api/polkadot-sdk-compat';\n\n// Establish a connection to the Polkadot relay chain\nconst client = createClient(\n  // The Polkadot SDK nodes may have compatibility issues; using this enhancer is recommended.\n  // Refer to the Requirements page for additional details\n  withPolkadotSdkCompat(getWsProvider('wss://dot-rpc.stakeworld.io')),\n);\n\n// To interact with the chain, obtain the `TypedApi`, which provides\n// the types for all available calls in that chain\nconst dotApi = client.getTypedApi(dot);\n

Now that you have set up the client, you can interact with the chain by reading and sending transactions.

"},{"location":"develop/toolkit/api-libraries/papi/#reading-chain-data","title":"Reading Chain Data","text":"

The TypedApi provides a streamlined way to read blockchain data through three main interfaces, each designed for specific data access patterns:

  • Constants - access fixed values or configurations on the blockchain using the constants interface:

    const version = await typedApi.constants.System.Version();\n
  • Storage queries - retrieve stored values by querying the blockchain's storage via the query interface:

    const asset = await typedApi.query.ForeignAssets.Asset.getValue(\n  token.location,\n  { at: 'best' },\n);\n
  • Runtime APIs - interact directly with runtime APIs using the apis interface:

    const metadata = await typedApi.apis.Metadata.metadata();\n

To learn more about the different actions you can perform with the TypedApi, refer to the TypedApi reference.

"},{"location":"develop/toolkit/api-libraries/papi/#sending-transactions","title":"Sending Transactions","text":"

In PAPI, the TypedApi provides the tx and txFromCallData methods to send transactions.

  • The tx method allows you to directly send a transaction with the specified parameters by using the typedApi.tx.Pallet.Call pattern:

    const tx: Transaction = typedApi.tx.Pallet.Call({arg1, arg2, arg3});\n

    For instance, to execute the balances.transferKeepAlive call, you can use the following snippet:

    import { MultiAddress } from '@polkadot-api/descriptors';\n\nconst tx: Transaction = typedApi.tx.Balances.transfer_keep_alive({\n  dest: MultiAddress.Id('INSERT_DESTINATION_ADDRESS'),\n  value: BigInt(INSERT_VALUE),\n});\n

    Ensure you replace INSERT_DESTINATION_ADDRESS and INSERT_VALUE with the actual destination address and value, respectively.

  • The txFromCallData method allows you to send a transaction using the call data. This option accepts binary call data and constructs the transaction from it. It validates the input upon creation and will throw an error if invalid data is provided. The pattern is as follows:

    const callData = Binary.fromHex('0x...');\nconst tx: Transaction = typedApi.txFromCallData(callData);\n

    For instance, to execute a transaction using the call data, you can use the following snippet:

    const callData = Binary.fromHex('0x00002470617065726d6f6f6e');\nconst tx: Transaction = typedApi.txFromCallData(callData);\n
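
Once a Transaction has been created with either method, it still needs to be signed and submitted. Below is a minimal sketch using a signer obtained from a browser extension through PAPI's pjs-signer subpath; it assumes at least one compatible extension is installed and uses its first account:

import {
  connectInjectedExtension,
  getInjectedExtensions,
} from 'polkadot-api/pjs-signer';

// Pick the first available injected extension (assumption)
const extensions = getInjectedExtensions();
const extension = await connectInjectedExtension(extensions[0]);

// Use the first account exposed by the extension (assumption)
const account = extension.getAccounts()[0];

// Sign and submit the `tx` built above; the promise resolves
// once the transaction is finalized
const result = await tx.signAndSubmit(account.polkadotSigner);
console.log('Finalized in block:', result.block.hash);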

For more information about sending transactions, refer to the Transactions page.

"},{"location":"develop/toolkit/api-libraries/papi/#where-to-go-next","title":"Where to Go Next","text":"

For an in-depth guide on how to use PAPI, refer to the official PAPI documentation.

"},{"location":"develop/toolkit/api-libraries/polkadot-js-api/","title":"Polkadot.js API","text":""},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#introduction","title":"Introduction","text":"

The Polkadot.js API uses JavaScript/TypeScript to interact with Polkadot SDK-based chains. It allows you to query nodes, read chain state, and submit transactions through a dynamic, auto-generated API interface.

"},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#dynamic-api-generation","title":"Dynamic API Generation","text":"

Unlike traditional static APIs, the Polkadot.js API generates its interfaces automatically when connecting to a node. Here's what happens when you connect:

  1. The API connects to your node
  2. It retrieves the chain's metadata
  3. Based on this metadata, it creates specific endpoints in this format: api.<type>.<module>.<section>
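
Because the interface is generated from metadata, you can discover what a connected chain exposes by enumerating the decorated objects. A small sketch, assuming an already-connected ApiPromise instance named api:

// List every pallet that exposes transactions on this chain
console.log(Object.keys(api.tx));

// List every call available in the balances pallet
console.log(Object.keys(api.tx.balances));

// List the storage queries exposed by the system pallet
console.log(Object.keys(api.query.system));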
"},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#available-api-categories","title":"Available API Categories","text":"

You can access three main categories of chain interactions:

  • Runtime constants (api.consts)

    • Access runtime constants directly
    • Returns values immediately without function calls
    • Example - api.consts.balances.existentialDeposit
  • State queries (api.query)

    • Read chain state
    • Example - api.query.system.account(accountId)
  • Transactions (api.tx)

    • Submit extrinsics (transactions)
    • Example - api.tx.balances.transferKeepAlive(accountId, value)

The available methods and interfaces will automatically reflect what's possible on your connected chain.

"},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#installation","title":"Installation","text":"

To add the Polkadot.js API to your project:

npmpnpmyarn
npm i @polkadot/api\n
pnpm add @polkadot/api\n
yarn add @polkadot/api\n

This command installs the latest stable release, which supports any Polkadot SDK-based chain.

Note

For more installation details, refer to the Installation section in the official Polkadot.js API documentation.

"},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#get-started","title":"Get Started","text":""},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#creating-an-api-instance","title":"Creating an API Instance","text":"

To interact with a Polkadot SDK-based chain, you must establish a connection through an API instance. The API provides methods for querying chain state, sending transactions, and subscribing to updates.

To create an API connection:

import { ApiPromise, WsProvider } from '@polkadot/api';\n\n// Create a WebSocket provider\nconst wsProvider = new WsProvider('wss://rpc.polkadot.io');\n\n// Initialize the API\nconst api = await ApiPromise.create({ provider: wsProvider });\n\n// Verify the connection by getting the chain's genesis hash\nconsole.log('Genesis Hash:', api.genesisHash.toHex());\n

Note

All await operations must be wrapped in an async function or block since the API uses promises for asynchronous operations.

"},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#reading-chain-data","title":"Reading Chain Data","text":"

The API provides several ways to read data from the chain. You can access:

  • Constants - values that are fixed in the runtime and don't change without a runtime upgrade

    // Get the minimum balance required for a new account\nconst minBalance = api.consts.balances.existentialDeposit.toNumber();\n
  • State - current chain state that updates with each block

    // Example address\nconst address = '5DTestUPts3kjeXSTMyerHihn1uwMfLj8vU8sqF7qYrFabHE';\n\n// Get current timestamp\nconst timestamp = await api.query.timestamp.now();\n\n// Get account information\nconst { nonce, data: balance } = await api.query.system.account(address);\n\nconsole.log(`\n  Timestamp: ${timestamp}\n  Free Balance: ${balance.free}\n  Nonce: ${nonce}\n`);\n
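
State queries can also be used as subscriptions: passing a callback keeps the query open and invokes it on every change. A minimal sketch, reusing the api instance and address from the examples above:

// Subscribe to balance changes; the callback fires on every update
const unsubscribe = await api.query.system.account(
  address,
  ({ data: balance }) => {
    console.log(`Free balance is now: ${balance.free}`);
  },
);

// Call the returned function to stop listening
// unsubscribe();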
"},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#sending-transactions","title":"Sending Transactions","text":"

Transactions (also called extrinsics) modify the chain state. Before sending a transaction, you need:

  • A funded account with sufficient balance to pay transaction fees
  • The account's keypair for signing

To make a transfer:

// Assuming you have an `alice` keypair from the Keyring\nconst recipient = 'INSERT_RECIPIENT_ADDRESS';\nconst amount = 'INSERT_VALUE'; // Amount in the smallest unit (e.g., Planck for DOT)\n\n// Sign and send a transfer\nconst txHash = await api.tx.balances\n  .transferKeepAlive(recipient, amount)\n  .signAndSend(alice);\n\nconsole.log('Transaction Hash:', txHash);\n

Note

The alice keypair in the example comes from a Keyring object. See the Keyring documentation for details on managing keypairs.
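
As a reference, the following is a minimal sketch of creating such a keypair, using the well-known //Alice development account (which is only funded on development chains):

import { Keyring } from '@polkadot/keyring';
import { cryptoWaitReady } from '@polkadot/util-crypto';

// Wait for the WASM crypto interfaces to be ready
await cryptoWaitReady();

// Create a keyring and derive the Alice development account
const keyring = new Keyring({ type: 'sr25519' });
const alice = keyring.addFromUri('//Alice');

console.log('Alice address:', alice.address);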

"},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#where-to-go-next","title":"Where to Go Next","text":"

For more detailed information about the Polkadot.js API, check the official documentation.

"},{"location":"develop/toolkit/api-libraries/py-substrate-interface/","title":"Python Substrate Interface","text":""},{"location":"develop/toolkit/api-libraries/py-substrate-interface/#introduction","title":"Introduction","text":"

The Python Substrate Interface is a powerful library that enables interaction with Polkadot SDK-based chains. It provides essential functionality for:

  • Querying on-chain storage
  • Composing and submitting extrinsics
  • SCALE encoding/decoding
  • Interacting with Substrate runtime metadata
  • Managing blockchain interactions through convenient utility methods
"},{"location":"develop/toolkit/api-libraries/py-substrate-interface/#installation","title":"Installation","text":"

Install the library using pip:

pip install substrate-interface\n

Note

For more installation details, refer to the Installation section in the official Python Substrate Interface documentation.

"},{"location":"develop/toolkit/api-libraries/py-substrate-interface/#get-started","title":"Get Started","text":"

This guide will walk you through the basic operations with the Python Substrate Interface: connecting to a node, reading chain state, and submitting transactions.

"},{"location":"develop/toolkit/api-libraries/py-substrate-interface/#establishing-connection","title":"Establishing Connection","text":"

The first step is to establish a connection to a Polkadot SDK-based node. You can connect to either a local or remote node:

from substrateinterface import SubstrateInterface\n\n# Connect to a node using websocket\nsubstrate = SubstrateInterface(\n    # For local node: \"ws://127.0.0.1:9944\"\n    # For Polkadot: \"wss://rpc.polkadot.io\"\n    # For Kusama: \"wss://kusama-rpc.polkadot.io\"\n    url=\"INSERT_WS_URL\"\n)\n\n# Verify connection\nprint(f\"Connected to chain: {substrate.chain}\")\n
"},{"location":"develop/toolkit/api-libraries/py-substrate-interface/#reading-chain-state","title":"Reading Chain State","text":"

You can query various on-chain storage items. To retrieve data, you need to specify three key pieces of information:

  • Pallet name - module or pallet that contains the storage item you want to access
  • Storage item - specific storage entry you want to query within the pallet
  • Required parameters - any parameters needed to retrieve the desired data

Here's an example of how to check an account's balance and other details:

# ...\n\n# Query account balance and info\naccount_info = substrate.query(\n    module=\"System\",  # The pallet name\n    storage_function=\"Account\",  # The storage item\n    params=[\"INSERT_ADDRESS\"],  # Account address in SS58 format\n)\n\n# Access account details from the result\nfree_balance = account_info.value[\"data\"][\"free\"]\nreserved = account_info.value[\"data\"][\"reserved\"]\nnonce = account_info.value[\"nonce\"]\n\nprint(\n    f\"\"\"\n    Account Details:\n    - Free Balance: {free_balance}\n    - Reserved: {reserved} \n    - Nonce: {nonce}\n    \"\"\"\n)\n
"},{"location":"develop/toolkit/api-libraries/py-substrate-interface/#submitting-transactions","title":"Submitting Transactions","text":"

To modify the chain state, you need to submit transactions (extrinsics). Before proceeding, ensure you have:

  • A funded account with sufficient balance to pay transaction fees
  • Access to the account's keypair

Here's how to create and submit a balance transfer:

#...\n\n# Compose the transfer call\ncall = substrate.compose_call(\n    call_module=\"Balances\",  # The pallet name\n    call_function=\"transfer_keep_alive\",  # The extrinsic function\n    call_params={\n        'dest': 'INSERT_ADDRESS',  # Recipient's address\n        'value': 'INSERT_VALUE'  # Amount in smallest unit (e.g., Planck for DOT)\n    }\n)\n\n# Create a signed extrinsic\nextrinsic = substrate.create_signed_extrinsic(\n    call=call, keypair=keypair  # Your keypair for signing\n)\n\n# Submit and wait for inclusion\nreceipt = substrate.submit_extrinsic(\n    extrinsic, wait_for_inclusion=True  # Wait until the transaction is in a block\n)\n\nif receipt.is_success:\n    print(\n        f\"\"\"\n        Transaction successful:\n        - Extrinsic Hash: {receipt.extrinsic_hash}\n        - Block Hash: {receipt.block_hash}\n        \"\"\"\n    )\nelse:\n    print(f\"Transaction failed: {receipt.error_message}\")\n

Note

The keypair object is essential for signing transactions. See the Keypair documentation for more details.

"},{"location":"develop/toolkit/api-libraries/py-substrate-interface/#where-to-go-next","title":"Where to Go Next","text":"

Now that you understand the basics, you can:

  • Explore more complex queries and transactions
  • Learn about batch transactions and utility functions
  • Discover how to work with custom pallets and types

For comprehensive reference materials and advanced features, visit the py-substrate-interface documentation.

"},{"location":"develop/toolkit/api-libraries/sidecar/","title":"Sidecar API","text":""},{"location":"develop/toolkit/api-libraries/sidecar/#introduction","title":"Introduction","text":"

The Sidecar REST API is a service that provides a REST interface for interacting with Polkadot SDK-based blockchains. With this API, developers can easily access a broad range of endpoints for nodes, accounts, transactions, parachains, and more.

Sidecar functions as a caching layer between your application and a Polkadot SDK-based node, offering standardized REST endpoints that simplify interactions without requiring complex, direct RPC calls. This approach is especially valuable for developers who prefer REST APIs or build applications in languages with limited WebSocket support.

Some of the key features of the Sidecar API include:

  • REST API interface - provides a familiar REST API interface for interacting with Polkadot SDK-based chains
  • Standardized endpoints - offers consistent endpoint formats across different chain implementations
  • Caching layer - acts as a caching layer to improve performance and reduce direct node requests
  • Multiple chain support - works with any Polkadot SDK-based chain, including Polkadot, Kusama, and custom chains
"},{"location":"develop/toolkit/api-libraries/sidecar/#installation","title":"Installation","text":"

To install Substrate API Sidecar, use one of the following commands:

npmpnpmyarn
npm install -g @substrate/api-sidecar\n
pnpm install -g @substrate/api-sidecar\n
yarn global add @substrate/api-sidecar\n

Note

Sidecar API requires Node.js version 18.14 LTS or higher. Verify your Node.js version:

node --version\n

If you need to install or update Node.js, visit the official Node.js website to download and install the latest LTS version.

You can confirm the installation by running:

substrate-api-sidecar --version\n

For more information about the Sidecar API installation, please refer to the official documentation.

"},{"location":"develop/toolkit/api-libraries/sidecar/#usage","title":"Usage","text":"

To use the Sidecar API, you have two options:

  • Local node - run a node locally, which Sidecar will connect to by default, requiring no additional configuration. To start, run:
    substrate-api-sidecar\n
  • Remote node - connect Sidecar to a remote node by specifying the RPC endpoint for that chain. For example, to access the endpoints associated with Polkadot Asset Hub:

    SAS_SUBSTRATE_URL=wss://polkadot-asset-hub-rpc.polkadot.io substrate-api-sidecar\n

    Note

    More configuration details are available in the Configuration section of the Sidecar API documentation.

Once the Sidecar API is running, you'll see output similar to this:

SAS_SUBSTRATE_URL=wss://polkadot-asset-hub-rpc.polkadot.io substrate-api-sidecar

SAS:
  📦 LOG:
    ✅ LEVEL: "info"
    ✅ JSON: false
    ✅ FILTER_RPC: false
    ✅ STRIP_ANSI: false
    ✅ WRITE: false
    ✅ WRITE_PATH: "/opt/homebrew/lib/node_modules/@substrate/api-sidecar/build/src/logs"
    ✅ WRITE_MAX_FILE_SIZE: 5242880
    ✅ WRITE_MAX_FILES: 5
  📦 SUBSTRATE:
    ✅ URL: "wss://polkadot-asset-hub-rpc.polkadot.io"
    ✅ TYPES_BUNDLE: undefined
    ✅ TYPES_CHAIN: undefined
    ✅ TYPES_SPEC: undefined
    ✅ TYPES: undefined
    ✅ CACHE_CAPACITY: undefined
  📦 EXPRESS:
    ✅ BIND_HOST: "127.0.0.1"
    ✅ PORT: 8080
    ✅ KEEP_ALIVE_TIMEOUT: 5000
  📦 METRICS:
    ✅ ENABLED: false
    ✅ PROM_HOST: "127.0.0.1"
    ✅ PROM_PORT: 9100
    ✅ LOKI_HOST: "127.0.0.1"
    ✅ LOKI_PORT: 3100
    ✅ INCLUDE_QUERYPARAMS: false

2024-11-06 08:06:01 info: Version: 19.3.0
2024-11-06 08:06:02 warn: API/INIT: RPC methods not decorated: chainHead_v1_body, chainHead_v1_call, chainHead_v1_continue, chainHead_v1_follow, chainHead_v1_header, chainHead_v1_stopOperation, chainHead_v1_storage, chainHead_v1_unfollow, chainHead_v1_unpin, chainSpec_v1_chainName, chainSpec_v1_genesisHash, chainSpec_v1_properties, transactionWatch_v1_submitAndWatch, transactionWatch_v1_unwatch, transaction_v1_broadcast, transaction_v1_stop
2024-11-06 08:06:02 info: Connected to chain Polkadot Asset Hub on the statemint client at wss://polkadot-asset-hub-rpc.polkadot.io
2024-11-06 08:06:02 info: Listening on http://127.0.0.1:8080/
2024-11-06 08:06:02 info: Check the root endpoint (http://127.0.0.1:8080/) to see the available endpoints for the current node

With Sidecar running, you can access the exposed endpoints via a browser, Postman, curl, or your preferred tool.

"},{"location":"develop/toolkit/api-libraries/sidecar/#endpoints","title":"Endpoints","text":"

Sidecar API provides a set of REST endpoints that allow you to query different aspects of the chain, including blocks, accounts, and transactions. Each endpoint offers specific insights into the chain's state and activities.

For example, to retrieve the version of the node, use the /node/version endpoint:

curl -X 'GET' \\\n  'http://127.0.0.1:8080/node/version' \\\n  -H 'accept: application/json'\n

Note

Alternatively, you can access http://127.0.0.1:8080/node/version directly in a browser since it's a GET request.

In response, you'll see output similar to this (assuming you're connected to Polkadot Asset Hub):

curl -X 'GET' 'http://127.0.0.1:8080/node/version' -H 'accept: application/json'

{
  "clientVersion": "1.16.1-835e0767fe8",
  "clientImplName": "statemint",
  "chain": "Polkadot Asset Hub"
}
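
The same request can be issued from code. A minimal TypeScript sketch using the built-in fetch API (Node.js 18+), assuming Sidecar is listening on its default address:

// Query the node version from a locally running Sidecar instance
const response = await fetch('http://127.0.0.1:8080/node/version', {
  headers: { accept: 'application/json' },
});
const version = await response.json();

console.log(`Connected to ${version.chain} (${version.clientVersion})`);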

For a complete list of available endpoints and their documentation, visit the Sidecar API list endpoints. You can learn about the endpoints and how to use them in your applications.

"},{"location":"develop/toolkit/api-libraries/sidecar/#where-to-go-next","title":"Where to Go Next","text":"

To dive deeper, refer to the official Sidecar documentation. This provides a comprehensive guide to the available configurations and advanced usage.

"},{"location":"develop/toolkit/blockchain/fork-live-chains/","title":"Chopsticks","text":""},{"location":"develop/toolkit/blockchain/fork-live-chains/#introduction","title":"Introduction","text":"

Chopsticks, created and maintained by the Acala Foundation, is a powerful tool designed to enhance the development process for Polkadot SDK-based blockchains. It offers developers a user-friendly method to locally fork existing chains, enabling them to:

  • Experiment with custom blockchain configurations in a local environment
  • Replay blocks and analyze how extrinsics affect state
  • Fork multiple blocks for comprehensive XCM testing

With Chopsticks, developers can simulate and test complex blockchain scenarios without deploying to a live network. This tool significantly reduces the complexity of building blockchain applications on Polkadot SDK, making it more accessible to developers of varying experience levels. Ultimately, Chopsticks aims to accelerate innovation in the Polkadot SDK ecosystem by providing a robust, flexible testing framework.

For additional support and information, please reach out through GitHub Issues.

Note

Chopsticks uses the Smoldot light client, which only supports the native Polkadot SDK API. Consequently, a Chopsticks-based fork doesn't support Ethereum JSON-RPC calls, meaning you cannot use it to fork your chain and connect MetaMask.

"},{"location":"develop/toolkit/blockchain/fork-live-chains/#prerequisites","title":"Prerequisites","text":"

Before you begin, ensure you have the following installed:

  • Node.js
  • A package manager such as npm, which should be installed with Node.js by default, or Yarn
"},{"location":"develop/toolkit/blockchain/fork-live-chains/#install-chopsticks","title":"Install Chopsticks","text":"

You can install Chopsticks globally or locally in your project. Choose the option that best fits your development workflow.

Note

This documentation explains the features of Chopsticks version 0.13.1. Make sure you're using the correct version to match these instructions.

"},{"location":"develop/toolkit/blockchain/fork-live-chains/#global-installation","title":"Global Installation","text":"

To install Chopsticks globally, allowing you to use it across multiple projects, run:

npm i -g @acala-network/chopsticks@0.13.1\n

Now, you should be able to run the chopsticks command from your terminal.

"},{"location":"develop/toolkit/blockchain/fork-live-chains/#local-installation","title":"Local Installation","text":"

To use Chopsticks in a specific project, first create a new directory and initialize a Node.js project:

mkdir my-chopsticks-project\ncd my-chopsticks-project\nnpm init -y\n

Then, install Chopsticks as a local dependency:

npm i @acala-network/chopsticks@0.13.1\n

Finally, you can run Chopsticks using the npx command:

npx @acala-network/chopsticks\n
"},{"location":"develop/toolkit/blockchain/fork-live-chains/#configure-chopsticks","title":"Configure Chopsticks","text":"

To run Chopsticks, you need to configure some parameters. These can be set either through a configuration file or the command-line interface (CLI). The parameters that can be configured are as follows:

  • genesis - the link to a parachain's raw genesis file to build the fork from, instead of an endpoint
  • timestamp - timestamp of the block to fork from
  • endpoint - the endpoint of the parachain to fork
  • block - use to specify at which block hash or number to replay the fork
  • wasm-override - path of the Wasm to use as the parachain runtime, instead of an endpoint's runtime
  • db - path to the name of the file that stores or will store the parachain's database
  • config - path or URL of the config file
  • port - the port to expose an endpoint on
  • build-block-mode - how blocks should be built in the fork: batch, manual, instant
  • import-storage - a pre-defined JSON/YAML storage path to override in the parachain's storage
  • allow-unresolved-imports - whether to allow Wasm unresolved imports when using a Wasm to build the parachain
  • html - include to generate storage diff preview between blocks
  • mock-signature-host - mock the signature host so that any signature that starts with 0xdeadbeef and is filled with 0xcd is considered valid
"},{"location":"develop/toolkit/blockchain/fork-live-chains/#use-a-configuration-file","title":"Use a Configuration File","text":"

The Chopsticks source repository includes a collection of YAML files that can be used to set up various Polkadot SDK chains locally. You can download these configuration files from the repository's configs folder.

An example of a configuration file for Polkadot is as follows:

endpoint:\n  - wss://rpc.ibp.network/polkadot\n  - wss://polkadot-rpc.dwellir.com\nmock-signature-host: true\nblock: ${env.POLKADOT_BLOCK_NUMBER}\ndb: ./db.sqlite\nruntime-log-level: 5\n\nimport-storage:\n  System:\n    Account:\n      - - - 5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY\n        - providers: 1\n          data:\n            free: '10000000000000000000'\n  ParasDisputes:\n    $removePrefix: ['disputes'] # those can makes block building super slow\n

To run Chopsticks using a configuration file, utilize the --config flag. You can use a raw GitHub URL, a path to a local file, or simply the chain's name. For example, the following commands all use Polkadot's configuration in the same way:

GitHub URLLocal File PathChain Name
npx @acala-network/chopsticks \\\n--config=https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot.yml\n
npx @acala-network/chopsticks --config=configs/polkadot.yml\n
npx @acala-network/chopsticks --config=polkadot\n

Regardless of which method you choose from the preceding examples, you'll see an output similar to the following:

npx @acala-network/chopsticks --config=polkadot

[18:38:26.155] INFO: Loading config file https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot.yml
    app: "chopsticks"
chopsticks::executor  TRACE: Calling Metadata_metadata
chopsticks::executor  TRACE: Completed Metadata_metadata
[18:38:28.186] INFO: Polkadot RPC listening on port 8000
    app: "chopsticks"

Note

If using a file path, make sure you've downloaded the Polkadot configuration file, or have created your own.

"},{"location":"develop/toolkit/blockchain/fork-live-chains/#use-the-cli","title":"Use the CLI","text":"

Alternatively, all settings (except for genesis and timestamp) can be configured via command-line flags, providing a comprehensive method to set up the environment. For example, the following command forks Polkadot at block 100.

npx @acala-network/chopsticks \\\n--endpoint wss://polkadot-rpc.dwellir.com \\\n--block 100\n
If the fork is successful, you will see output similar to the following:

npx @acala-network/chopsticks \\ --endpoint wss://polkadot-rpc.dwellir.com \\ --block 100 [19:12:21.023] INFO: Polkadot RPC listening on port 8000 app: \"chopsticks\""},{"location":"develop/toolkit/blockchain/fork-live-chains/#interact-with-a-fork","title":"Interact with a Fork","text":"

When running a fork, it's accessible by default at:

ws://localhost:8000\n

You can interact with the forked chain using various libraries such as Polkadot.js and its user interface, Polkadot.js Apps.

"},{"location":"develop/toolkit/blockchain/fork-live-chains/#use-polkadotjs-apps","title":"Use Polkadot.js Apps","text":"

To interact with Chopsticks via the hosted user interface, visit Polkadot.js Apps and follow these steps:

  1. Select the network icon in the top left corner
  2. Scroll to the bottom and select Development
  3. Choose Custom
  4. Enter ws://localhost:8000 in the input field
  5. Select the Switch button

You should now be connected to your local fork and can interact with it as you would with a real chain.

"},{"location":"develop/toolkit/blockchain/fork-live-chains/#use-polkadotjs-library","title":"Use Polkadot.js Library","text":"

For programmatic interaction, you can use the Polkadot.js library. Here's a basic example:

import { ApiPromise, WsProvider } from '@polkadot/api';\n\nasync function connectToFork() {\n  const wsProvider = new WsProvider('ws://localhost:8000');\n  const api = await ApiPromise.create({ provider: wsProvider });\n  await api.isReady;\n\n  // Now you can use 'api' to interact with your fork\n  console.log(`Connected to chain: ${await api.rpc.system.chain()}`);\n}\n\nconnectToFork();\n
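Once connected, the same api instance can read state from the fork just as it would from the live chain. The sketch below assumes the fork was started with the earlier Polkadot configuration example (so the well-known Alice address is funded via import-storage) and queries the head block and an account balance:

import { ApiPromise, WsProvider } from '@polkadot/api';

async function queryFork() {
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://localhost:8000'),
  });

  // Read the chain head and a balance from the forked state
  const header = await api.rpc.chain.getHeader();
  const { data: balance } = await api.query.system.account(
    '5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY',
  );

  console.log(`Fork head: #${header.number.toNumber()}`);
  console.log(`Free balance: ${balance.free.toString()}`);
}

queryFork();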
"},{"location":"develop/toolkit/blockchain/fork-live-chains/#replay-blocks","title":"Replay Blocks","text":"

Chopsticks allows you to replay specific blocks from a chain, which is useful for debugging and analyzing state changes. You can use the parameters in the Configuration section to set up the chain configuration, and then use the run-block subcommand with additional options:

  • output-path - the path to write the output to
  • html - generate an HTML report with the storage diff
  • open - open the generated HTML

For example, to replay block 1000 from Polkadot and save the output to a JSON file:

npx @acala-network/chopsticks run-block  \\\n--endpoint wss://polkadot-rpc.dwellir.com  \\\n--output-path ./polkadot-output.json  \\\n--block 1000\n
Output file content
{\n    \"Call\": {\n        \"result\": \"0xba754e7478944d07a1f7e914422b4d973b0855abeb6f81138fdca35beb474b44a10f6fc59a4d90c3b78e38fac100fc6adc6f9e69a07565ec8abce6165bd0d24078cc7bf34f450a2cc7faacc1fa1e244b959f0ed65437f44208876e1e5eefbf8dd34c040642414245b501030100000083e2cc0f00000000d889565422338aa58c0fd8ebac32234149c7ce1f22ac2447a02ef059b58d4430ca96ba18fbf27d06fe92ec86d8b348ef42f6d34435c791b952018d0a82cae40decfe5faf56203d88fdedee7b25f04b63f41f23da88c76c876db5c264dad2f70c\",\n        \"storageDiff\": [\n            [\n                \"0x0b76934f4cc08dee01012d059e1b83eebbd108c4899964f707fdaffb82636065\",\n                \"0x00\"\n            ],\n            [\n                \"0x1cb6f36e027abb2091cfb5110ab5087f0323475657e0890fbdbf66fb24b4649e\",\n                null\n            ],\n            [\n                \"0x1cb6f36e027abb2091cfb5110ab5087f06155b3cd9a8c9e5e9a23fd5dc13a5ed\",\n                \"0x83e2cc0f00000000\"\n            ],\n            [\n                \"0x1cb6f36e027abb2091cfb5110ab5087ffa92de910a7ce2bd58e99729c69727c1\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef702a5c1b19ab7a04f536c519aca4983ac\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef70a98fdbe9ce6c55837576c60c7af3850\",\n                \"0x02000000\"\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef734abf5cb34d6244378cddbf18e849d96\",\n                \"0xc03b86ae010000000000000000000000\"\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef780d41e5e16056765bc8461851072c9d7\",\n                \"0x080000000000000080e36a09000000000200000001000000000000ca9a3b00000000020000\"\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef78a42f33323cb5ced3b44dd825fda9fcc\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef799e7f93fc6a98f0874fd057f111c4d2d\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef7a44704b568d21667356a5a050c118746d366e7fe86e06375e7030000\",\n                \"0xba754e7478944d07a1f7e914422b4d973b0855abeb6f81138fdca35beb474b44\"\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef7a86da5a932684f199539836fcb8c886f\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef7b06c3320c6ac196d813442e270868d63\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef7bdc0bd303e9855813aa8a30d4efc5112\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef7df1daeb8986837f21cc5d17596bb78d15153cb1f00942ff401000000\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef7df1daeb8986837f21cc5d17596bb78d1b4def25cfda6ef3a00000000\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef7ff553b5a9862a516939d82b3d3d8661a\",\n                null\n            ],\n            [\n                \"0x2b06af9719ac64d755623cda8ddd9b94b1c371ded9e9c565e89ba783c4d5f5f9b4def25cfda6ef3a000000006f3d6b177c8acbd8dc9974cdb3cebfac4d31333c30865ff66c35c1bf898df5c5dd2924d3280e7201\",\n                \"0x9b000000\"\n            ],\n            [\"0x3a65787472696e7369635f696e646578\", null],\n            [\n                
\"0x3f1467a096bcd71a5b6a0c8155e208103f2edf3bdf381debe331ab7446addfdc\",\n                \"0x550057381efedcffffffffffffffffff\"\n            ],\n            [\n                \"0x3fba98689ebed1138735e0e7a5a790ab0f41321f75df7ea5127be2db4983c8b2\",\n                \"0x00\"\n            ],\n            [\n                \"0x3fba98689ebed1138735e0e7a5a790ab21a5051453bd3ae7ed269190f4653f3b\",\n                \"0x080000\"\n            ],\n            [\n                \"0x3fba98689ebed1138735e0e7a5a790abb984cfb497221deefcefb70073dcaac1\",\n                \"0x00\"\n            ],\n            [\n                \"0x5f3e4907f716ac89b6347d15ececedca80cc6574281671b299c1727d7ac68cabb4def25cfda6ef3a00000000\",\n                \"0x204e0000183887050ecff59f58658b3df63a16d03a00f92890f1517f48c2f6ccd215e5450e380e00005809fd84af6483070acbb92378e3498dbc02fb47f8e97f006bb83f60d7b2b15d980d000082104c22c383925323bf209d771dec6e1388285abe22c22d50de968467e0bb6ce00b000088ee494d719d68a18aade04903839ea37b6be99552ceceb530674b237afa9166480d0000dc9974cdb3cebfac4d31333c30865ff66c35c1bf898df5c5dd2924d3280e72011c0c0000e240d12c7ad07bb0e7785ee6837095ddeebb7aef84d6ed7ea87da197805b343a0c0d0000\"\n            ],\n            [\n                \"0xae394d879ddf7f99595bc0dd36e355b5bbd108c4899964f707fdaffb82636065\",\n                null\n            ],\n            [\n                \"0xbd2a529379475088d3e29a918cd478721a39ec767bd5269111e6492a1675702a\",\n                \"0x4501407565175cfbb5dca18a71e2433f838a3d946ef532c7bff041685db1a7c13d74252fffe343a960ef84b15187ea0276687d8cb3168aeea5202ea6d651cb646517102b81ff629ee6122430db98f2cadf09db7f298b49589b265dae833900f24baa8fb358d87e12f3e9f7986a9bf920c2fb48ce29886199646d2d12c6472952519463e80b411adef7e422a1595f1c1af4b5dd9b30996fba31fa6a30bd94d2022d6b35c8bc5a8a51161d47980bf4873e01d15afc364f8939a6ce5a09454ab7f2dd53bf4ee59f2c418e85aa6eb764ad218d0097fb656900c3bdd859771858f87bf7f06fc9b6db154e65d50d28e8b2374898f4f519517cd0bedc05814e0f5297dc04beb307b296a93cc14d53afb122769dfd402166568d8912a4dff9c2b1d4b6b34d811b40e5f3763e5f3ab5cd1da60d75c0ff3c12bcef3639f5f792a85709a29b752ffd1233c2ccae88ed3364843e2fa92bdb49021ee36b36c7cdc91b3e9ad32b9216082b6a2728fccd191a5cd43896f7e98460859ca59afbf7c7d93cd48da96866f983f5ff8e9ace6f47ee3e6c6edb074f578efbfb0907673ebca82a7e1805bc5c01cd2fa5a563777feeb84181654b7b738847c8e48d4f575c435ad798aec01631e03cf30fe94016752b5f087f05adf1713910767b7b0e6521013be5370776471191641c282fdfe7b7ccf3b2b100a83085cd3af2b0ad4ab3479448e71fc44ff987ec3a26be48161974b507fb3bc8ad23838f2d0c54c9685de67dc6256e71e739e9802d0e6e3b456f6dca75600bc04a19b3cc1605784f46595bfb10d5e077ce9602ae3820436166aa1905a7686b31a32d6809686462bc9591c0bc82d9e49825e5c68352d76f1ac6e527d8ac02db3213815080afad4c2ecb95b0386e3e9ab13d4f538771dac70d3059bd75a33d0b9b581ec33bb16d0e944355d4718daccb35553012adfcdacb1c5200a2aec3756f6ad5a2beffd30018c439c1b0c4c0f86dbf19d0ad59b1c9efb7fe90906febdb9001af1e7e15101089c1ab648b199a40794d30fe387894db25e614b23e833291a604d07eec2ade461b9b139d51f9b7e88475f16d6d23de6fe7831cc1dbba0da5efb22e3b26cd2732f45a2f9a5d52b6d6eaa38782357d9ae374132d647ef60816d5c98e6959f8858cfa674c8b0d340a8f607a68398a91b3a965585cc91e46d600b1310b8f59c65b7c19e9d14864a83c4ad6fa4ba1f75bba754e7478944d07a1f7e914422b4d973b0855abeb6f81138fdca35beb474b44c7736fc3ab2969878810153aa3c93fc08c99c478ed1bb57f647d3eb02f25cee122c70424643f4b106a7643acaa630a5c4ac39364c3cb14453055170c01b44e8b1ef007c7727494411958932ae8b3e0f80d67eec8e94dd2ff7bbe8c9e51ba7e27d50bd9f52cbaf9742edecb6c8af1aaf3e7c31542f7d946b52e0c37d194b3dd13c3fddd39db0749755c7044b3db1143a027ad4283
45d930afcefc0d03c3a0217147900bdea1f5830d826f7e75ecd1c4e2bc8fd7de3b35c6409acae1b2215e9e4fd7e360d6825dc712cbf9d87ae0fd4b349b624d19254e74331d66a39657da81e73d7b13adc1e5efa8efd65aa32c1a0a0315913166a590ae551c395c476116156cf9d872fd863893edb41774f33438161f9b973e3043f819d087ba18a0f1965e189012496b691f342f7618fa9db74e8089d4486c8bd1993efd30ff119976f5cc0558e29b417115f60fd8897e13b6de1a48fbeee38ed812fd267ae25bffea0caa71c09309899b34235676d5573a8c3cf994a3d7f0a5dbd57ab614c6caf2afa2e1a860c6307d6d9341884f1b16ef22945863335bb4af56e5ef5e239a55dbd449a4d4d3555c8a3ec5bd3260f88cabca88385fe57920d2d2dfc5d70812a8934af5691da5b91206e29df60065a94a0a8178d118f1f7baf768d934337f570f5ec68427506391f51ab4802c666cc1749a84b5773b948fcbe460534ed0e8d48a15c149d27d67deb8ea637c4cc28240ee829c386366a0b1d6a275763100da95374e46528a0adefd4510c38c77871e66aeda6b6bfd629d32af9b2fad36d392a1de23a683b7afd13d1e3d45dad97c740106a71ee308d8d0f94f6771164158c6cd3715e72ccfbc49a9cc49f21ead8a3c5795d64e95c15348c6bf8571478650192e52e96dd58f95ec2c0fb4f2ccc05b0ab749197db8d6d1c6de07d6e8cb2620d5c308881d1059b50ffef3947c273eaed7e56c73848e0809c4bd93619edd9fd08c8c5c88d5f230a55d2c6a354e5dd94440e7b5bf99326cf4a112fe843e7efdea56e97af845761d98f40ed2447bd04a424976fcf0fe0a0c72b97619f85cf431fe4c3aa6b3a4f61df8bc1179c11e77783bfedb7d374bd1668d0969333cb518bd20add8329462f2c9a9f04d150d60413fdd27271586405fd85048481fc2ae25b6826cb2c947e4231dc7b9a0d02a9a03f88460bced3fef5d78f732684bd218a1954a4acfc237d79ccf397913ab6864cd8a07e275b82a8a72520624738368d1c5f7e0eaa2b445cf6159f2081d3483618f7fc7b16ec4e6e4d67ab5541bcda0ca1af40efd77ef8653e223191448631a8108c5e50e340cd405767ecf932c1015aa8856b834143dc81fa0e8b9d1d8c32278fca390f2ff08181df0b74e2d13c9b7b1d85543416a0dae3a77530b9cd1366213fcf3cd12a9cd3ae0a006d6b29b5ffc5cdc1ab24343e2ab882abfd719892fca5bf2134731332c5d3bef6c6e4013d84a853cb03d972146b655f0f8541bcd36c3c0c8a775bb606edfe50d07a5047fd0fe01eb125e83673930bc89e91609fd6dfe97132679374d3de4a0b3db8d3f76f31bed53e247da591401d508d65f9ee01d3511ee70e3644f3ab5d333ca7dbf737fe75217b4582d50d98b5d59098ea11627b7ed3e3e6ee3012eadd326cf74ec77192e98619427eb0591e949bf314db0fb932ed8be58258fb4f08e0ccd2cd18b997fb5cf50c90d5df66a9f3bb203bd22061956128b800e0157528d45c7f7208c65d0592ad846a711fa3c5601d81bb318a45cc1313b122d4361a7d7a954645b04667ff3f81d3366109772a41f66ece09eb93130abe04f2a51bb30e767dd37ec6ee6a342a4969b8b342f841193f4f6a9f0fac4611bc31b6cab1d25262feb31db0b8889b6f8d78be23f033994f2d3e18e00f3b0218101e1a7082782aa3680efc8502e1536c30c8c336b06ae936e2bcf9bbfb20dd514ed2867c03d4f44954867c97db35677d30760f37622b85089cc5d182a89e29ab0c6b9ef18138b16ab91d59c2312884172afa4874e6989172014168d3ed8db3d9522d6cbd631d581d166787c93209bec845d112e0cbd825f6df8b64363411270921837cfb2f9e7f2e74cdb9cd0d2b02058e5efd9583e2651239654b887ea36ce9537c392fc5dfca8c5a0facbe95b87dfc4232f229bd12e67937d32b7ffae2e837687d2d292c08ff6194a2256b17254748857c7e3c871c3fff380115e6f7faf435a430edf9f8a589f6711720cfc5cec6c8d0d94886a39bb9ac6c50b2e8ef6cf860415192ca4c1c3aaa97d36394021a62164d5a63975bcd84b8e6d74f361c17101e3808b4d8c31d1ee1a5cf3a2feda1ca2c0fd5a50edc9d95e09fb5158c9f9b0eb5e2c90a47deb0459cea593201ae7597e2e9245aa5848680f546256f3\"\n            ],\n            [\n                \"0xd57bce545fb382c34570e5dfbf338f5e326d21bc67a4b34023d577585d72bfd7\",\n                null\n            ],\n            [\n                \"0xd57bce545fb382c34570e5dfbf338f5ea36180b5cfb9f6541f8849df92a6ec93\",\n                \"0x00\"\n            ],\n            [\n                \"0xd57bce545fb382c34570e5dfbf338f5ebddf84c5eb23e6f53af725880d8ffe90\",\n                null\n            ],\n         
   [\n                \"0xd5c41b52a371aa36c9254ce34324f2a53b996bb988ea8ee15bad3ffd2f68dbda\",\n                \"0x00\"\n            ],\n            [\n                \"0xf0c365c3cf59d671eb72da0e7a4113c49f1f0515f462cdcf84e0f1d6045dfcbb\",\n                \"0x50defc5172010000\"\n            ],\n            [\n                \"0xf0c365c3cf59d671eb72da0e7a4113c4bbd108c4899964f707fdaffb82636065\",\n                null\n            ],\n            [\n                \"0xf68f425cf5645aacb2ae59b51baed90420d49a14a763e1cbc887acd097f92014\",\n                \"0x9501800300008203000082030000840300008503000086030000870300008703000089030000890300008b0300008b0300008d0300008d0300008f0300008f0300009103000092030000920300009403000094030000960300009603000098030000990300009a0300009b0300009b0300009d0300009d0300009f0300009f030000a1030000a2030000a3030000a4030000a5030000a6030000a6030000a8030000a8030000aa030000ab030000ac030000ad030000ae030000af030000b0030000b1030000b1030000b3030000b3030000b5030000b6030000b7030000b8030000b9030000ba030000ba030000bc030000bc030000be030000be030000c0030000c1030000c2030000c2030000c4030000c5030000c5030000c7030000c7030000c9030000c9030000cb030000cc030000cd030000ce030000cf030000d0030000d0030000d2030000d2030000d4030000d4030000d6030000d7030000d8030000d9030000da030000db030000db030000dd030000dd030000df030000e0030000e1030000e2030000e3030000e4030000e4030000\"\n            ],\n            [\n                \"0xf68f425cf5645aacb2ae59b51baed9049b58374218f48eaf5bc23b7b3e7cf08a\",\n                \"0xb3030000\"\n            ],\n            [\n                \"0xf68f425cf5645aacb2ae59b51baed904b97380ce5f4e70fbf9d6b5866eb59527\",\n                \"0x9501800300008203000082030000840300008503000086030000870300008703000089030000890300008b0300008b0300008d0300008d0300008f0300008f0300009103000092030000920300009403000094030000960300009603000098030000990300009a0300009b0300009b0300009d0300009d0300009f0300009f030000a1030000a2030000a3030000a4030000a5030000a6030000a6030000a8030000a8030000aa030000ab030000ac030000ad030000ae030000af030000b0030000b1030000b1030000b3030000b3030000b5030000b6030000b7030000b8030000b9030000ba030000ba030000bc030000bc030000be030000be030000c0030000c1030000c2030000c2030000c4030000c5030000c5030000c7030000c7030000c9030000c9030000cb030000cc030000cd030000ce030000cf030000d0030000d0030000d2030000d2030000d4030000d4030000d6030000d7030000d8030000d9030000da030000db030000db030000dd030000dd030000df030000e0030000e1030000e2030000e3030000e4030000e4030000\"\n            ]\n        ],\n        \"offchainStorageDiff\": [],\n        \"runtimeLogs\": []\n    }\n}\n
"},{"location":"develop/toolkit/blockchain/fork-live-chains/#xcm-testing","title":"XCM Testing","text":"

To test XCM (Cross-Consensus Messaging) messages between networks, you can fork multiple parachains and a relay chain locally using Chopsticks. The xcm subcommand accepts the following options:

  • relaychain - relay chain config file
  • parachain - parachain config file

For example, to fork Moonbeam, Astar, and Polkadot with XCM enabled between them, you can use the following command:

npx @acala-network/chopsticks xcm \\\n--r polkadot \\\n--p moonbeam \\\n--p astar\n

After running it, you should see output similar to the following:

npx @acala-network/chopsticks xcm \\ --r polkadot \\ --p moonbeam \\ --p astar [13:46:07.901] INFO: Loading config file https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/moonbeam.yml app: \"chopsticks\" [13:46:12.631] INFO: Moonbeam RPC listening on port 8000 app: \"chopsticks\" [13:46:12.632] INFO: Loading config file https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/astar.yml app: \"chopsticks\" chopsticks::executor TRACE: Calling Metadata_metadata chopsticks::executor TRACE: Completed Metadata_metadata [13:46:23.669] INFO: Astar RPC listening on port 8001 app: \"chopsticks\" [13:46:25.144] INFO (xcm): Connected parachains [2004,2006] app: \"chopsticks\" [13:46:25.144] INFO: Loading config file https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot.yml app: \"chopsticks\" chopsticks::executor TRACE: Calling Metadata_metadata chopsticks::executor TRACE: Completed Metadata_metadata [13:46:53.320] INFO: Polkadot RPC listening on port 8002 app: \"chopsticks\" [13:46:54.038] INFO (xcm): Connected relaychain 'Polkadot' with parachain 'Moonbeam' app: \"chopsticks\" [13:46:55.028] INFO (xcm): Connected relaychain 'Polkadot' with parachain 'Astar' app: \"chopsticks\"

Now you can interact with your forked chains using the ports specified in the output.
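For example, a minimal sketch (assuming the default ports from the sample output above: Moonbeam on 8000, Astar on 8001, and Polkadot on 8002) that connects to each fork and prints its chain name:

import { ApiPromise, WsProvider } from '@polkadot/api';

async function connectAll() {
  // Ports follow the sample output above; adjust if your run assigns others
  const endpoints = {
    moonbeam: 'ws://localhost:8000',
    astar: 'ws://localhost:8001',
    polkadot: 'ws://localhost:8002',
  };

  for (const [name, url] of Object.entries(endpoints)) {
    const api = await ApiPromise.create({ provider: new WsProvider(url) });
    console.log(`${name}: connected to ${await api.rpc.system.chain()}`);
    await api.disconnect();
  }
}

connectAll();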

"},{"location":"develop/toolkit/blockchain/fork-live-chains/#websocket-commands","title":"WebSocket Commands","text":"

Chopsticks' internal WebSocket server has special endpoints that allow the manipulation of the local Polkadot SDK chain.

These are the methods that can be invoked and their parameters:

dev_newBlock (newBlockParams) \u2014 Generates one or more new blocks. Parameters:
  • newBlockParams NewBlockParams - the parameters to build the new block with. The NewBlockParams interface includes the following properties:
    • count number - the number of blocks to build
    • dmp { msg: string, sentAt: number }[] - the downward messages to include in the block
    • hrmp Record<string | number, { data: string, sentAt: number }[]> - the horizontal messages to include in the block
    • to number - the block number to build to
    • transactions string[] - the transactions to include in the block
    • ump Record<number, string[]> - the upward messages to include in the block
    • unsafeBlockHeight number - build block using a specific block height (unsafe)
import { ApiPromise, WsProvider } from '@polkadot/api';\n\nasync function main() {\n  const wsProvider = new WsProvider('ws://localhost:8000');\n  const api = await ApiPromise.create({ provider: wsProvider });\n  await api.isReady;\n  await api.rpc('dev_newBlock', { count: 1 });\n}\n\nmain();\n
dev_setBlockBuildMode (buildBlockMode) \u2014 Sets the block build mode. Parameter:
  • buildBlockMode BuildBlockMode - the build mode. Can be any of the following modes:
    export enum BuildBlockMode {\n  Batch = 'Batch', /** One block per batch (default) */\n  Instant = 'Instant', /** One block per transaction */\n  Manual = 'Manual', /** Only build when triggered */\n}\n
import { ApiPromise, WsProvider } from '@polkadot/api';\n\nasync function main() {\n  const wsProvider = new WsProvider('ws://localhost:8000');\n  const api = await ApiPromise.create({ provider: wsProvider });\n  await api.isReady;\n  await api.rpc('dev_setBlockBuildMode', 'Instant');\n}\n\nmain();\n
dev_setHead (hashOrNumber) \u2014 Sets the head of the blockchain to a specific hash or number. Parameter:
  • hashOrNumber string | number - the block hash or number to set as head
import { ApiPromise, WsProvider } from '@polkadot/api';\n\nasync function main() {\n  const wsProvider = new WsProvider('ws://localhost:8000');\n  const api = await ApiPromise.create({ provider: wsProvider });\n  await api.isReady;\n  await api.rpc('dev_setHead', 500);\n}\n\nmain();\n
dev_setRuntimeLogLevel (runtimeLogLevel) \u2014 Sets the runtime log level. Parameter:
  • runtimeLogLevel number - the runtime log level to set
import { ApiPromise, WsProvider } from '@polkadot/api';\n\nasync function main() {\n  const wsProvider = new WsProvider('ws://localhost:8000');\n  const api = await ApiPromise.create({ provider: wsProvider });\n  await api.isReady;\n  await api.rpc('dev_setRuntimeLogLevel', 1);\n}\n\nmain();\n
dev_setStorage (values, blockHash) \u2014 Creates or overwrites the value of any storage item. Parameters:
  • values object - JSON object resembling the path to a storage value
  • blockHash string - the block hash at which to set the storage value
import { ApiPromise, WsProvider } from '@polkadot/api';\n\nimport { Keyring } from '@polkadot/keyring';\nasync function main() {\n  const wsProvider = new WsProvider('ws://localhost:8000');\n  const api = await ApiPromise.create({ provider: wsProvider });\n  await api.isReady;\n  const keyring = new Keyring({ type: 'ed25519' });\n  const bob = keyring.addFromUri('//Bob');\n  const storage = {\n    System: {\n      Account: [[[bob.address], { data: { free: 100000 }, nonce: 1 }]],\n    },\n  };\n  await api.rpc('dev_setStorage', storage);\n}\n\nmain();\n
dev_timeTravel (date) \u2014 Sets the timestamp of the block to a specific date. Parameter:
  • date string - timestamp or date string to set. All future blocks will be sequentially created after this point in time
import { ApiPromise, WsProvider } from '@polkadot/api';\n\nasync function main() {\n  const wsProvider = new WsProvider('ws://localhost:8000');\n  const api = await ApiPromise.create({ provider: wsProvider });\n  await api.isReady;\n  await api.rpc('dev_timeTravel', '2030-08-15T00:00:00');\n}\n\nmain();\n
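These dev_* endpoints compose naturally. As a sketch (account and amount are arbitrary), the following overwrites a balance with dev_setStorage and then seals a block with dev_newBlock so the change is reflected in subsequent queries:

import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://localhost:8000'),
  });
  const alice = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');

  // Overwrite Alice's free balance in storage...
  await api.rpc('dev_setStorage', {
    System: {
      Account: [[[alice.address], { data: { free: 1000000000000 }, nonce: 0 }]],
    },
  });

  // ...then build a block so the new state is queryable
  await api.rpc('dev_newBlock', { count: 1 });

  const { data } = await api.query.system.account(alice.address);
  console.log(`Alice free balance: ${data.free.toString()}`);
}

main();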
"},{"location":"develop/toolkit/interoperability/xcm-tools/","title":"XCM Tools","text":""},{"location":"develop/toolkit/interoperability/xcm-tools/#introduction","title":"Introduction","text":"

As described in the Interoperability section, XCM (Cross-Consensus Messaging) is a protocol used in the Polkadot and Kusama ecosystems to enable communication and interaction between chains. It facilitates cross-chain communication, allowing assets, data, and messages to flow seamlessly across the ecosystem.

As XCM is central to enabling communication between blockchains, developers need robust tools to help interact with, build, and test XCM messages. Several XCM tools simplify working with the protocol by providing libraries, frameworks, and utilities that enhance the development process, ensuring that applications built within the Polkadot ecosystem can efficiently use cross-chain functionalities.

"},{"location":"develop/toolkit/interoperability/xcm-tools/#popular-xcm-tools","title":"Popular XCM Tools","text":""},{"location":"develop/toolkit/interoperability/xcm-tools/#moonsong-labs-xcm-tools","title":"Moonsong Labs XCM Tools","text":"

Moonsong Labs XCM Tools provides a collection of scripts for managing and testing XCM operations between Polkadot SDK-based runtimes. These tools support tasks such as asset registration, channel setup, and XCM initialization. Key features include:

  • Asset registration - registers assets, sets units per second (up-front fees), and configures error (revert) codes
  • XCM initializer - initializes XCM, sets default XCM versions, and configures revert codes for XCM-related precompiles
  • HRMP manipulator - manages HRMP channel actions, including opening, accepting, or closing channels
  • XCM-Transactor-Info-Setter - configures transactor information, including extra weight and fee settings
  • Decode XCM - decodes XCM messages on the relay chain or parachains to help interpret cross-chain communication

To get started, clone the repository and install the required dependencies:

git clone https://github.com/Moonsong-Labs/xcm-tools && \ncd xcm-tools &&\nyarn install\n

For a full overview of each script, visit the scripts directory or refer to the official documentation on GitHub.

"},{"location":"develop/toolkit/interoperability/xcm-tools/#paraspell","title":"ParaSpell","text":"

ParaSpell is a collection of open-source XCM tools designed to streamline cross-chain asset transfers and interactions within the Polkadot and Kusama ecosystems. It equips developers with an intuitive interface to manage and optimize XCM-based functionalities. ParaSpell's key components include:

  • XCM SDK - provides a unified layer to incorporate XCM into decentralized applications, simplifying complex cross-chain interactions
  • XCM API - offers an efficient, package-free approach to integrating XCM functionality while offloading heavy computing tasks, minimizing costs and improving application performance
  • XCM router - enables cross-chain asset swaps in a single command, allowing developers to send one asset type (such as DOT on Polkadot) and receive a different asset on another chain (like ASTR on Astar)
  • XCM analyser - decodes and translates complex XCM multilocation data into readable information, supporting easier troubleshooting and debugging
  • XCM visualizator - a tool designed to give developers a clear, interactive view of XCM activity across the Polkadot ecosystem, providing insights into cross-chain communication flow

ParaSpell's tools make it simple for developers to build, test, and deploy cross-chain solutions without needing extensive knowledge of the XCM protocol. With features like message composition, decoding, and practical utility functions for parachain interactions, ParaSpell is especially useful for debugging and optimizing cross-chain communications.

"},{"location":"develop/toolkit/interoperability/xcm-tools/#astar-xcm-tools","title":"Astar XCM Tools","text":"

The Astar parachain offers a crate with a set of utilities for interacting with the XCM protocol. The xcm-tools crate provides a straightforward way for users to locate a sovereign account or calculate an XC20 asset ID. The crate includes commands that allow users to perform the following tasks:

  • Sovereign accounts - obtain the sovereign account address for any parachain, either on the Relay Chain or for sibling parachains, using a simple command
  • XC20 EVM addresses - generate XC20-compatible EVM addresses for assets by entering the asset ID, making it easy to integrate assets across EVM-compatible environments
  • Remote accounts - retrieve remote account addresses needed for multi-location compatibility, using flexible options to specify account types and parachain IDs

To start using these tools, clone the Astar repository and compile the xcm-tools package:

git clone https://github.com/AstarNetwork/Astar &&\ncd Astar &&\ncargo build --release -p xcm-tools\n

After compiling, verify the setup with the following command:

./target/release/xcm-tools --help\n
For more details on using Astar xcm-tools, consult the official documentation.

"},{"location":"develop/toolkit/interoperability/xcm-tools/#chopsticks","title":"Chopsticks","text":"

The Chopsticks library provides XCM functionality for testing XCM messages across networks, enabling you to fork multiple parachains along with a relay chain. For further details, see the Chopsticks documentation about XCM.

"},{"location":"images/","title":"Images","text":"

TODO

"},{"location":"infrastructure/running-a-node/setup-bootnode/","title":"Set Up a Bootnode","text":""},{"location":"infrastructure/running-a-node/setup-bootnode/#introduction","title":"Introduction","text":"

Bootnodes are essential for helping blockchain nodes discover peers and join the network. When a node starts, it needs to find other nodes, and bootnodes provide an initial point of contact. Once connected, a node can expand its peer connections and play its role in the network, like participating as a validator.

This guide will walk you through setting up a Polkadot bootnode, configuring P2P, WebSocket (WS), secure WSS connections, and managing network keys. You'll also learn how to test your bootnode to ensure it is running correctly and accessible to other nodes.

"},{"location":"infrastructure/running-a-node/setup-bootnode/#prerequisites","title":"Prerequisites","text":"

Before you start, you need to have the following prerequisites:

  • Verify a working Polkadot (polkadot) binary is available on your machine
  • Ensure nginx is installed. Refer to the Installation Guide if you need help
  • Set up a VPS or other dedicated server
"},{"location":"infrastructure/running-a-node/setup-bootnode/#accessing-the-bootnode","title":"Accessing the Bootnode","text":"

Bootnodes must be accessible through three key channels to connect with other nodes in the network:

  • P2P - a direct peer-to-peer connection, set by:

    --listen-addr /ip4/0.0.0.0/tcp/INSERT_PORT\n

    Note

    This is not enabled by default on non-validator nodes like archive RPC nodes.

  • P2P/WS - a WebSocket (WS) connection, also configured via --listen-addr

  • P2P/WSS - a secure WebSocket (WSS) connection using SSL, often required for light clients. An SSL proxy is needed, as the node itself cannot handle certificates
"},{"location":"infrastructure/running-a-node/setup-bootnode/#node-key","title":"Node Key","text":"

A node key is the Ed25519 key that libp2p uses to assign your node an identity, or peer ID. Generating a known node key for a bootnode is crucial, as it gives you a consistent key that can be placed in chain specifications as a known, reliable bootnode.

Starting a node creates its node key in the chains/INSERT_CHAIN/network/secret_ed25519 file.

You can create a node key using:

polkadot key generate-node-key\n

This key can be used in the startup command line.

It is imperative that you back up the node key. If it is included in the polkadot binary, it is hardcoded into the binary, and the binary must be recompiled to change the key.
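As a sketch, the key can be written to a file and supplied at startup via the standard Substrate --node-key-file flag (the file path here is illustrative):

polkadot key generate-node-key --file /var/lib/polkadot/node.key
polkadot --chain polkadot --name dot-bootnode --node-key-file /var/lib/polkadot/node.key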

"},{"location":"infrastructure/running-a-node/setup-bootnode/#running-the-bootnode","title":"Running the Bootnode","text":"

A bootnode can be run as follows:

polkadot --chain polkadot \\\n--name dot-bootnode \\\n--listen-addr /ip4/0.0.0.0/tcp/30310 \\\n--listen-addr /ip4/0.0.0.0/tcp/30311/ws\n

This assigns p2p to port 30310 and p2p/ws to port 30311. For the p2p/wss port, a proxy must be set up with a DNS name and a corresponding certificate. The following example is for the popular nginx server and enables p2p/wss on port 30312 by adding a proxy to the p2p/ws port 30311:

/etc/nginx/sites-enabled/dot-bootnode
server {\n       listen       30312 ssl http2 default_server;\n       server_name  dot-bootnode.stakeworld.io;\n       root         /var/www/html;\n\n       ssl_certificate \"INSERT_YOUR_CERT\";\n       ssl_certificate_key \"INSERT_YOUR_KEY\";\n\n       location / {\n         proxy_buffers 16 4k;\n         proxy_buffer_size 2k;\n         proxy_pass http://localhost:30311;\n         proxy_http_version 1.1;\n         proxy_set_header Upgrade $http_upgrade;\n         proxy_set_header Connection \"Upgrade\";\n         proxy_set_header Host $host;\n   }\n\n}\n
"},{"location":"infrastructure/running-a-node/setup-bootnode/#testing-bootnode-connection","title":"Testing Bootnode Connection","text":"

If the preceding node is running with the DNS name dot-bootnode.stakeworld.io, behind a proxy with a valid certificate, and with node ID 12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg, then the following commands should output syncing 1 peers.

Tip

You can add -lsub-libp2p=trace on the end to get libp2p trace logging for debugging purposes.

"},{"location":"infrastructure/running-a-node/setup-bootnode/#p2p","title":"P2P","text":"
polkadot --chain polkadot \\\n--base-path /tmp/node \\\n--name \"Bootnode testnode\" \\\n--reserved-only \\\n--reserved-nodes \"/dns/dot-bootnode.stakeworld.io/tcp/30310/p2p/12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg\" \\\n--no-hardware-benchmarks\n
"},{"location":"infrastructure/running-a-node/setup-bootnode/#p2pws","title":"P2P/WS","text":"
polkadot --chain polkadot \\\n--base-path /tmp/node \\\n--name \"Bootnode testnode\" \\\n--reserved-only \\\n--reserved-nodes \"/dns/dot-bootnode.stakeworld.io/tcp/30311/ws/p2p/12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg\" \\\n--no-hardware-benchmarks\n
"},{"location":"infrastructure/running-a-node/setup-bootnode/#p2pwss","title":"P2P/WSS","text":"
polkadot --chain polkadot \\\n--base-path /tmp/node \\\n--name \"Bootnode testnode\" \\\n--reserved-only \\\n--reserved-nodes \"/dns/dot-bootnode.stakeworld.io/tcp/30312/wss/p2p/12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg\" \\\n--no-hardware-benchmarks\n
"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/","title":"Stop Validating","text":""},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/#introduction","title":"Introduction","text":"

If you're ready to stop validating on Polkadot, there are essential steps to ensure a smooth transition while protecting your funds and account integrity. Whether you're taking a break for maintenance or unbonding entirely, you'll need to chill your validator, purge session keys, and unbond your tokens. This guide explains how to use Polkadot's tools and extrinsics to safely withdraw from validation activities, safeguarding your account's future usability.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/#pause-versus-stop","title":"Pause Versus Stop","text":"

If you wish to remain a validator or nominator (for example, stopping for planned downtime or server maintenance), submitting the chill extrinsic in the staking pallet should suffice. Additional steps are only needed to unbond funds or reap an account.

The following are steps to ensure a smooth stop to validation:

  • Chill the validator
  • Purge validator session keys
  • Unbond your tokens
"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/#chill-validator","title":"Chill Validator","text":"

When stepping back from validating, the first step is to chill your validator status. This action stops your validator from being considered for the next era without fully unbonding your tokens, which can be useful for temporary pauses like maintenance or planned downtime.

Use the staking.chill extrinsic to initiate this. For more guidance on chilling your node, refer to the Pause Validating guide. You may also claim any pending staking rewards at this point.
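As a programmatic alternative to Polkadot.js Apps, a minimal sketch submitting the chill extrinsic (the //Alice dev account stands in for your stash or staking proxy key):

import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function chill() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://rpc.ibp.network/polkadot'),
  });
  const account = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');

  // Removes the validator from consideration for the next era
  const hash = await api.tx.staking.chill().signAndSend(account);
  console.log(`chill submitted: ${hash.toHex()}`);
}

chill();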

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/#purge-validator-session-keys","title":"Purge Validator Session Keys","text":"

Purging validator session keys is a critical step in removing the association between your validator account and its session keys, which ensures that your account is fully disassociated from validator activities. The session.purgeKeys extrinsic removes the reference to your session keys from the stash or staking proxy account that originally set them.

Here are a couple of important things to know about purging keys:

  • Account used to purge keys - always use the same account to purge keys you originally used to set them, usually your stash or staking proxy account. Using a different account may leave an unremovable reference to the session keys on the original account, preventing its reaping
  • Account reaping issue - failing to purge keys will prevent you from reaping (fully deleting) your stash account. If you attempt to transfer tokens without purging, you'll need to rebond, purge the session keys, unbond again, and wait through the unbonding period before any transfer
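Purging is a single extrinsic submitted from the same account that set the keys. A minimal sketch (dev account illustrative):

import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function purge() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://rpc.ibp.network/polkadot'),
  });
  // Must be the same stash or staking proxy account that originally set the keys
  const account = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');

  const hash = await api.tx.session.purgeKeys().signAndSend(account);
  console.log(`purgeKeys submitted: ${hash.toHex()}`);
}

purge();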
"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/#unbond-your-tokens","title":"Unbond Your Tokens","text":"

After chilling your node and purging session keys, the final step is to unbond your staked tokens. This action removes them from staking and begins the unbonding period (usually 28 days for Polkadot and seven days for Kusama), after which the tokens will be transferable.

To unbond tokens, go to Network > Staking > Account Actions on Polkadot.js Apps. Select your stash account, click on the dropdown menu, and choose Unbond Funds. Alternatively, you can use the staking.unbond extrinsic if you handle this via a staking proxy account.

Once the unbonding period is complete, your tokens will be available for use in transactions or transfers outside of staking.
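Programmatically, unbonding and the eventual withdrawal are two separate extrinsics. A minimal sketch (amount and dev account illustrative; withdrawUnbonded only releases funds after the unbonding period has elapsed):

import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function unbond() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://rpc.ibp.network/polkadot'),
  });
  const account = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');

  // Unbond 100 DOT (DOT has 10 decimals, so 100 DOT = 10^12 plancks)
  await api.tx.staking.unbond(1_000_000_000_000n).signAndSend(account);

  // After the unbonding period (~28 days on Polkadot), release the funds:
  // await api.tx.staking.withdrawUnbonded(0).signAndSend(account);
}

unbond();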

"},{"location":"infrastructure/running-a-validator/operational-tasks/pause-validating/","title":"Pause Validating","text":""},{"location":"infrastructure/running-a-validator/operational-tasks/pause-validating/#introduction","title":"Introduction","text":"

If you need to temporarily stop participating in Polkadot staking activities without fully unbonding your funds, chilling your account allows you to do so efficiently. Chilling removes your node from active validation or nomination in the next era while keeping your funds bonded, making it ideal for planned downtimes or temporary pauses.

This guide covers the steps for chilling as a validator or nominator, using the chill and chillOther extrinsics, and how these affect your staking status and nominations.

"},{"location":"infrastructure/running-a-validator/operational-tasks/pause-validating/#chilling-your-node","title":"Chilling Your Node","text":"

If you need to temporarily step back from staking without unbonding your funds, you can \"chill\" your account. Chilling pauses your active staking participation, setting your account to inactive in the next era while keeping your funds bonded.

To chill your account, go to the Network > Staking > Account Actions page on Polkadot.js Apps, and select Stop. Alternatively, you can call the chill extrinsic in the Staking pallet.

"},{"location":"infrastructure/running-a-validator/operational-tasks/pause-validating/#staking-election-timing-considerations","title":"Staking Election Timing Considerations","text":"

When a node actively participates in staking but then chills, it will continue contributing for the remainder of the current era. However, its eligibility for the next election depends on the chill status at the start of the new era:

  • Chilled during previous era - will not participate in the current era election and will remain inactive until reactivated
  • Chilled during current era - will not be selected for the next era's election
  • Chilled after current era - may be selected if it was active during the previous era and is now chilled
"},{"location":"infrastructure/running-a-validator/operational-tasks/pause-validating/#chilling-as-a-nominator","title":"Chilling as a Nominator","text":"

When you choose to chill as a nominator, your active nominations are reset. Upon re-entering the nominating process, you must reselect validators to support manually. Depending on preferences, these can be the same validators as before or a new set. Remember that your previous nominations won\u2019t be saved or automatically reactivated after chilling.

While chilled, your nominator account remains bonded, preserving your staked funds without requiring a full unbonding process. When you\u2019re ready to start nominating again, you can issue a new nomination call to activate your bond with a fresh set of validators. This process bypasses the need for re-bonding, allowing you to maintain your stake while adjusting your involvement in active staking.

"},{"location":"infrastructure/running-a-validator/operational-tasks/pause-validating/#chilling-as-a-validator","title":"Chilling as a Validator","text":"

When you chill as a validator, your active validator status is paused. Although your nominators remain bonded to you, the validator bond will no longer appear as an active choice for new or revised nominations until reactivated. Any existing nominators who take no action will still have their stake linked to the validator, meaning they don\u2019t need to reselect the validator upon reactivation. However, if nominators adjust their stakes while the validator is chilled, they will not be able to nominate the chilled validator until it resumes activity.

Upon reactivating as a validator, you must also reconfigure your validator preferences, such as commission rate and other parameters. These can be set to match your previous configuration or updated as desired. This step is essential for rejoining the active validator set and regaining eligibility for nominations.

"},{"location":"infrastructure/running-a-validator/operational-tasks/pause-validating/#chill-other","title":"Chill Other","text":"

Historically, runtime constraints prevented an unlimited number of nominators and validators from being supported. These constraints created a need for checks to keep the size of the staking system manageable. One of these checks is the chillOther extrinsic, which allowed users to chill accounts that no longer met standards, such as minimum staking requirements set through on-chain governance.

This control mechanism included a ChillThreshold, which was structured to define how close to the maximum number of nominators or validators the staking system would be allowed to get before users could start chilling one another. With the passage of Referendum #90, the value for maxNominatorCount on Polkadot was set to None, effectively removing the limit on how many nominators and validators can participate. This means the ChillThreshold will never be met; thus, chillOther no longer has any effect.

"},{"location":"infrastructure/running-a-validator/operational-tasks/upgrade-your-node/","title":"Upgrade a Validator Node","text":""},{"location":"infrastructure/running-a-validator/operational-tasks/upgrade-your-node/#introduction","title":"Introduction","text":"

Upgrading a Polkadot validator node is essential for staying current with network updates and maintaining optimal performance. This guide covers routine and extended maintenance scenarios, including software upgrades and major server changes. Following these steps, you can manage session keys and transition smoothly between servers without risking downtime, slashing, or network disruptions. The process requires strategic planning, especially if you need to perform long-lead maintenance, ensuring your validator remains active and compliant.

This guide will allow validators to seamlessly substitute an active validator server to allow for maintenance operations. The process can take several hours, so ensure you understand the instructions first and plan accordingly.

"},{"location":"infrastructure/running-a-validator/operational-tasks/upgrade-your-node/#prerequisites","title":"Prerequisites","text":"

Before beginning the upgrade process for your validator node, ensure the following:

  • You have a fully functional validator setup with all required binaries installed. See Set Up a Validator and Validator Requirements for additional guidance
  • Your VPS infrastructure has enough capacity to run a secondary validator instance temporarily for the upgrade process
"},{"location":"infrastructure/running-a-validator/operational-tasks/upgrade-your-node/#session-keys","title":"Session Keys","text":"

Session keys are used to sign validator operations and establish a connection between your validator node and your staking proxy account. These keys are stored in the client, and any change to them requires a waiting period. Specifically, if you modify your session keys, the change will take effect only after the current session is completed and two additional sessions have passed.

Remembering this delayed effect when planning upgrades is crucial to ensure that your validator continues to function correctly and avoids interruptions. To learn more about session keys and their importance, visit the Keys section.

"},{"location":"infrastructure/running-a-validator/operational-tasks/upgrade-your-node/#keystore","title":"Keystore","text":"

Your validator server's keystore folder holds the private keys needed for signing network-level transactions. It is important not to duplicate or transfer this folder between validator instances. Doing so could result in multiple validators signing with the duplicate keys, leading to severe consequences such as equivocation slashing. Instead, always generate new session keys for each validator instance.

The default path to the keystore is as follows:

/home/polkadot/.local/share/polkadot/chains/<chain>/keystore\n

Taking care to manage your keys securely ensures that your validator operates safely and without the risk of slashing penalties.

"},{"location":"infrastructure/running-a-validator/operational-tasks/upgrade-your-node/#upgrade-using-backup-validator","title":"Upgrade Using Backup Validator","text":"

The following instructions outline how to temporarily switch between two validator nodes. The original active validator is referred to as Validator A and the backup node used for maintenance purposes as Validator B.

"},{"location":"infrastructure/running-a-validator/operational-tasks/upgrade-your-node/#session-n","title":"Session N","text":"
  1. Start Validator B - launch a secondary node and wait until it is fully synced with the network. Once synced, start it with the --validator flag. This node will now act as Validator B
  2. Generate session keys - create new session keys specifically for Validator B
  3. Submit the set_key extrinsic - use your staking proxy account to submit a set_key extrinsic, linking the session keys for Validator B to your staking setup (a programmatic sketch appears after the note below)
  4. Record the session - make a note of the session in which you executed this extrinsic
  5. Wait for session changes - allow the current session to end and then wait for two additional full sessions for the new keys to take effect

Keep Validator A running

It is crucial to keep Validator A operational during this entire waiting period. Since set_key does not take effect immediately, turning off Validator A too early may result in chilling or even slashing.
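Steps 2 and 3 can also be performed programmatically: rotate the keys through Validator B's own RPC endpoint, then submit them with the session.setKeys extrinsic (referred to as set_key above) from the staking proxy. A minimal sketch (local endpoint and dev account illustrative; rotateKeys requires unsafe RPCs to be enabled on the node):

import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function rotateAndSet() {
  // Connect to Validator B's own node
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://localhost:9944'),
  });

  // Generates fresh session keys inside Validator B's keystore
  const keys = await api.rpc.author.rotateKeys();

  const proxy = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');
  const hash = await api.tx.session.setKeys(keys, '0x').signAndSend(proxy);
  console.log(`setKeys submitted: ${hash.toHex()}`);
}

rotateAndSet();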

"},{"location":"infrastructure/running-a-validator/operational-tasks/upgrade-your-node/#session-n3","title":"Session N+3","text":"

At this stage, Validator B becomes your active validator. You can now safely perform any maintenance tasks on Validator A.

Complete the following steps when you are ready to bring Validator A back online:

  1. Start Validator A - launch Validator A, sync the blockchain database, and ensure it is running with the --validator flag
  2. Generate new session keys for Validator A - create fresh session keys for Validator A
  3. Submit the set_key extrinsic - using your staking proxy account, submit a set_key extrinsic with the new Validator A session keys
  4. Record the session - again, make a note of the session in which you executed this extrinsic

Keep Validator B active until the session in which you executed the set_key extrinsic has completed and two additional full sessions have passed. Once Validator A has successfully taken over, you can safely stop Validator B. This process helps ensure a smooth handoff between nodes and minimizes the risk of downtime or penalties. Verify the transition by checking for finalized blocks in the new session. The logs should indicate the successful change, similar to the example below:

INSERT_COMMAND 2019-10-28 21:44:13 Applying authority set change scheduled at block #450092 2019-10-28 21:44:13 Applying GRANDPA set change to new set with 20 authorities"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/","title":"Offenses and Slashes","text":""},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#introduction","title":"Introduction","text":"

In Polkadot's Nominated Proof of Stake (NPoS) system, validator misconduct is deterred through a combination of slashing, disabling, and reputation penalties. Validators and nominators who stake tokens face consequences for validator misbehavior, which range from token slashes to restrictions on network participation.

This page outlines the types of offenses recognized by Polkadot, including block equivocations and invalid votes, as well as the corresponding penalties. While some parachains may implement additional custom slashing mechanisms, this guide focuses on the offenses tied to staking within the Polkadot ecosystem.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#offenses","title":"Offenses","text":"

Polkadot is a public, permissionless network. As such, it has a mechanism to disincentivize offenses and incentivize good behavior. You can review the parachain protocol to better understand the terminology used to describe offenses. Polkadot validator offenses fall into two categories: invalid votes and equivocations.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#invalid-votes","title":"Invalid Votes","text":"

A validator will be penalized for inappropriate voting activity during the block inclusion and approval processes. The invalid-vote offenses are as follows:

  • Backing an invalid block - a para-validator backs an invalid block for inclusion in a fork of the relay chain
  • ForInvalid vote - when acting as a secondary checker, the validator votes in favor of an invalid block
  • AgainstValid vote - when acting as a secondary checker, the validator votes against a valid block. This type of vote wastes network resources required to resolve the disparate votes and resulting dispute
"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#equivocations","title":"Equivocations","text":"

Equivocation occurs when a validator produces statements that conflict with each other when producing blocks or voting. Unintentional equivocations usually happen when duplicate signing keys reside on the validator host. If keys are never duplicated, the probability of an honest equivocation slash decreases to near zero. The equivocation-related offenses are as follows:

  • Equivocation - the validator produces two or more of the same block or vote
    • GRANDPA and BEEFY equivocation - the validator signs two or more votes in the same round on different chains
    • BABE equivocation - the validator produces two or more blocks on the relay chain in the same time slot
  • Double seconded equivocation - the validator attempts to second, or back, more than one block in the same round
  • Seconded and valid equivocation - the validator seconds, or backs, a block and then attempts to hide their role as the responsible backer by later placing a standard validation vote
"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#penalties","title":"Penalties","text":"

On Polkadot, offenses to the network incur different penalties depending on severity. There are three main penalties: slashing, disabling, and reputation changes.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#slashing","title":"Slashing","text":"

Validators engaging in malicious behavior in the network may be subject to slashing if they commit a qualifying offense. When a validator is slashed, they and their nominators lose a percentage of their staked DOT or KSM, from as little as 0.01% up to 100%, based on the severity of the offense. Nominators are evaluated for slashing against their active validations at any given time. Validator nodes are evaluated as discrete entities, meaning an operator can't attempt to mitigate the offense on another node they operate in order to avoid a slash.

Any slashed DOT or KSM will be added to the Treasury rather than burned or distributed as rewards. Moving slashed funds to the Treasury allows tokens to be quickly moved away from malicious validators while maintaining the ability to revert faulty slashes when needed.

Multiple active nominations

A nominator with a very large bond may nominate several validators in a single era. In this case, a slash is proportionate to the amount staked to the offending validator. Stake allocation and validator activation are controlled by the Phragm\u00e9n algorithm.

A validator slash creates an unapplied state transition. You can view pending slashes on Polkadot.js Apps. The UI will display the slash per validator, the affected nominators, and the slash amounts. The unapplied state includes a 27-day grace period during which a governance proposal can be made to reverse the slash. Once this grace period expires, the slash is applied.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#equivocation-slash","title":"Equivocation Slash","text":"

The Web3 Foundation's Slashing mechanisms page provides guidelines for evaluating the security threat level of different offenses and determining penalties proportionate to the threat level of the offense. Offenses requiring coordination between validators or extensive computational costs to the system will typically call for harsher penalties than those more likely to be unintentional than malicious. A description of potential offenses for each threat level and the corresponding penalties is as follows:

  • Level 1 - honest misconduct such as isolated cases of unresponsiveness
    • Penalty - validator can be kicked out or slashed up to 0.1% of stake in the validator slot
  • Level 2 - misconduct that can occur honestly but is a sign of bad practices. Examples include repeated cases of unresponsiveness and isolated cases of equivocation
    • Penalty - slash of up to 1% of stake in the validator slot
  • Level 3 - misconduct that is likely intentional but of limited effect on the performance or security of the network. This level will typically include signs of coordination between validators. Examples include repeated cases of equivocation or isolated cases of unjustified voting on GRANDPA
    • Penalty - reduction in networking reputation metrics, slash of up to 10% of stake in the validator slot
  • Level 4 - misconduct that poses severe security or monetary risk to the system or mass collusion. Examples include signs of extensive coordination, creating a serious security risk to the system, or forcing the system to use extensive resources to counter the misconduct
    • Penalty - slash of up to 100% of stake in the validator slot

See the next section to understand how slash amounts for equivocations are calculated. If you want to know more details about slashing, please look at the research page on Slashing mechanisms.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#slash-calculation-for-equivocation","title":"Slash Calculation for Equivocation","text":"

The slashing penalty for GRANDPA, BABE, and BEEFY equivocations is calculated using the formula below, where x represents the number of offenders and n is the total number of validators in the active set:

min((3 * x / n)^2, 1)\n
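As a quick sanity check of the formula, a small sketch computing the slash fraction for a given number of offenders:

// Slash fraction for x offenders out of n validators in the active set
function slashFraction(x, n) {
  return Math.min(((3 * x) / n) ** 2, 1);
}

console.log(slashFraction(1, 100));  // ≈ 0.0009 -> 0.09%
console.log(slashFraction(5, 100));  // ≈ 0.0225 -> 2.25%
console.log(slashFraction(20, 100)); // ≈ 0.36   -> 36%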

The following scenarios demonstrate how, under this formula, slash percentages grow quadratically with the number of offenders relative to the size of the validator pool:

  • Minor offense - assume 1 validator out of a 100-validator active set equivocates in a slot. A single validator committing an isolated offense is most likely a mistake rather than a malicious attack on the network. This offense results in a 0.09% slash of the stake in the validator slot

    flowchart LR\nN[\"Total Validators = 100\"]\nX[\"Offenders = 1\"]\nF[\"min((3 * 1 / 100)^2, 1) = 0.0009\"]\nG[\"0.09% slash of stake\"]\n\nN --> F\nX --> F\nF --> G
  • Moderate offense - assume 5 validators out of a 100-validator active set equivocate in a slot. This is a slightly more serious event, as there may be some element of coordination involved. This offense results in a 2.25% slash of the stake in the validator slot

    flowchart LR\nN[\"Total Validators = 100\"]\nX[\"Offenders = 5\"]\nF[\"min((3 * 5 / 100)^2, 1) = 0.0225\"]\nG[\"2.25% slash of stake\"]\n\nN --> F\nX --> F\nF --> G
  • Major offense - assume 20 validators out of a 100-validator active set equivocate in a slot. This is a major security threat, as it possibly represents a coordinated attack on the network. This offense results in a 36% slash, and all slashed validators will also be chilled

    flowchart LR\nN[\"Total Validators = 100\"]\nX[\"Offenders = 20\"]\nF[\"min((3 * 20 / 100)^2, 1) = 0.36\"]\nG[\"36% slash of stake\"]\n\nN --> F\nX --> F\nF --> G

The examples above show the risk of nominating or running many validators in the active set. While rewards grow linearly (two validators will earn you approximately twice the staking rewards of one), slashing grows quadratically: going from one equivocating validator to two causes a slash four times as large.

Validators may run their nodes on multiple machines to ensure they can still perform validation work if one of their nodes goes down. Validator operators should nonetheless be cautious when setting these up: equivocation is possible if the signing machines are not carefully coordinated.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#best-practices-to-avoid-slashing","title":"Best Practices to Avoid Slashing","text":"

Node operators are advised to follow these practices to ensure they obtain pristine binaries or source code and to keep their nodes secure:

  • Always download either source files or binaries from the official Parity repository
  • Verify the hash of downloaded files
  • Use the W3F secure validator setup or adhere to its principles
  • Ensure essential security measures are in place: use a firewall, manage user access, and use SSH certificates
  • Avoid using your server as a general-purpose system. Hosting a validator on your workstation or on a machine that hosts other services increases the risk of compromise
  • Avoid cloning servers (copying all contents) when migrating to new hardware. If an image is needed, create it before generating keys
  • High Availability (HA) systems are generally not recommended, as equivocation may occur with concurrent operations, such as when a failed server restarts or two servers are mistakenly online at the same time
  • Copying the keystore folder when moving a database between instances can cause equivocation. Even brief use of duplicated keystores can result in slashing

Below are some examples of small equivocations that happened in the past:

| Network | Era | Event Type | Details | Action Taken |
|---|---|---|---|---|
| Polkadot | 774 | Small Equivocation | The validator migrated servers and cloned the keystore folder. The on-chain event can be viewed on Subscan. | The validator didn't submit a request for the slash to be canceled. |
| Kusama | 3329 | Small Equivocation | The validator operated a test machine with cloned keys. The test machine was online at the same time as the primary, which resulted in a slash. Details can be found on Polkassembly. | The validator requested a slash cancellation, but the council declined. |
| Kusama | 3995 | Small Equivocation | The validator noticed several errors, after which the client crashed, and a slash was applied. The validator recorded all events and opened GitHub issues to allow technical opinions to be shared. Details can be found on Polkassembly. | The validator requested to cancel the slash. The council approved the request as they believed the error wasn't operator-related. |

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#slashing-across-eras","title":"Slashing Across Eras","text":"

There are three main difficulties to account for with slashing in NPoS:

  • A nominator can nominate multiple validators and be slashed as a result of actions taken by any of them
  • Until slashed, the stake is reused from era to era
  • Slashable offenses can be found after the fact and out of order

To balance this, the system applies only the maximum slash a participant can receive in a given time period rather than the sum. This ensures protection from excessive slashing.
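As a minimal illustration of this rule, the applied slash is the maximum of the offense fractions rather than their sum:

```rust
/// Illustrative model only: overlapping slashes in the same period are not
/// summed; the largest fraction is applied.
fn effective_slash(fractions: &[f64]) -> f64 {
    fractions.iter().copied().fold(0.0_f64, f64::max).min(1.0)
}

fn main() {
    // A nominator whose validators were slashed 2.25%, 9%, and 0.09% in the
    // same period is slashed 9%, not the 11.34% sum.
    println!("{:.2}%", effective_slash(&[0.0225, 0.09, 0.0009]) * 100.0);
}
```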

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#disabling","title":"Disabling","text":"

The disabling mechanism is triggered when validators commit serious infractions, such as backing invalid blocks or engaging in equivocations. Disabling stops validators from performing specific actions after they have committed an offense. Disabling is further divided into:

  • On-chain disabling - lasts for a whole era and stops validators from authoring blocks, backing, and initiating a dispute
  • Off-chain disabling - lasts for a session, is caused by losing a dispute, and stops validators from initiating a dispute

Off-chain disabling always takes lower priority than on-chain disabling and prioritizes disabling backers first, then approval checkers.

Note

The material in this guide reflects the changes introduced in Stage 2. For more details, refer to the State of Disabling issue on GitHub.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#reputation-changes","title":"Reputation Changes","text":"

Some minor offenses, such as spamming, are only punished by networking reputation changes. Validators use a reputation metric when choosing which peers to connect with. The system adds reputation if a peer provides valuable data and behaves appropriately, and reduces it if the peer provides faulty or spam data. If a validator loses enough reputation, their peers will temporarily close their channels to them. This helps defend against denial-of-service (DoS) attacks. Performing validator tasks with a reduced reputation is harder and results in lower validator rewards.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#penalties-by-offense","title":"Penalties by Offense","text":"

Below, you can find a summary of penalties for specific offenses:

| Offense | Slash (%) | On-Chain Disabling | Off-Chain Disabling | Reputational Changes |
|---|---|---|---|---|
| Backing Invalid | 100% | Yes | Yes (High Priority) | No |
| ForInvalid Vote | - | No | Yes (Mid Priority) | No |
| AgainstValid Vote | - | No | Yes (Low Priority) | No |
| GRANDPA / BABE / BEEFY Equivocations | 0.01-100% | Yes | No | No |
| Seconded + Valid Equivocation | - | No | No | No |
| Double Seconded Equivocation | - | No | No | Yes |

"},{"location":"infrastructure/staking-mechanics/rewards-payout/","title":"Rewards Payout","text":""},{"location":"infrastructure/staking-mechanics/rewards-payout/#introduction","title":"Introduction","text":"

Understanding how rewards are distributed to validators and nominators is essential for network participants. In Polkadot and Kusama, validators earn rewards based on their era points, which are accrued through actions like block production and parachain validation.

This guide explains the payout scheme, factors influencing rewards, and how running multiple validators affects returns. Validators can also share rewards with nominators, who contribute by staking behind them. By understanding the payout mechanics, validators can optimize their earnings and better engage with their nominators.

"},{"location":"infrastructure/staking-mechanics/rewards-payout/#era-points","title":"Era Points","text":"

The Polkadot ecosystem measures its reward cycles in a unit called an era. Kusama eras are approximately 6 hours long, and Polkadot eras are 24 hours. At the end of each era, validators are paid proportionally to the amount of era points they have collected. Era points are reward points earned for payable actions like:

  • Issuing validity statements for parachain blocks
  • Producing a non-uncle block in the relay chain
  • Producing a reference to a previously unreferenced uncle block
  • Producing a referenced uncle block

Note

An uncle block is a relay chain block that is valid in every regard but has failed to become canonical. This can happen when two or more validators are block producers in a single slot, and the block produced by one validator reaches the next block producer before the others. The lagging blocks are called uncle blocks.

Payments occur at the end of every era.

"},{"location":"infrastructure/staking-mechanics/rewards-payout/#reward-variance","title":"Reward Variance","text":"

Rewards in Polkadot and Kusama staking systems can fluctuate due to differences in era points earned by para-validators and non-para-validators. Para-validators generally contribute more to the overall reward distribution due to their role in validating parachain blocks, thus influencing the variance in staking rewards.

To illustrate this relationship:

  • Para-validator era points tend to have a higher impact on the expected value of staking rewards compared to non-para-validator points
  • The variance in staking rewards increases as the total number of validators grows relative to the number of para-validators
  • In simpler terms, when more validators are added to the active set without increasing the para-validator pool, the disparity in rewards between validators becomes more pronounced

However, despite this increased variance, rewards tend to even out over time due to the continuous rotation of para-validators across eras. The network's design ensures that over multiple eras, each validator has an equal opportunity to participate in para-validation, eventually leading to a balanced distribution of rewards.

Probability in Staking Rewards

This should only serve as a high-level overview of the probabilistic nature of staking rewards.

Let:

  • pe = para-validator era points
  • ne = non-para-validator era points
  • EV = expected value of staking rewards

Then, EV(pe) has more influence on the EV than EV(ne).

Since EV(pe) carries more weight in the EV, the increase in variance against the EV becomes apparent between the different validator pools (i.e., validators in the active set versus those chosen to para-validate).

Also, let:

  • v = the variance of staking rewards
  • p = number of para-validators
  • w = number of validators in the active set
  • e = era

Then, v ↑ if w ↑, as this reduces p : w with respect to e.

Increased v is expected, and initially keeping p ↓ by using the same para-validator set for all parachains ensures availability and voting. In addition, despite v ↑ on an era-to-era basis, over time the rewards each validator receives will even out based on the continuous selection of para-validators.

There are plans to scale the active para-validation set in the future.
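As a rough numeric illustration of these relationships (the point values pe = 20 and ne = 1 and the pool sizes are hypothetical), treat para-validation in an era as a draw with probability p/w:

```rust
// Hypothetical numbers: pe and ne era points, p para-validators out of w
// active validators. Selection each era is modeled as a Bernoulli(p/w) draw.
fn per_era_stats(p: f64, w: f64, pe: f64, ne: f64) -> (f64, f64) {
    let q = p / w;
    let mean = q * pe + (1.0 - q) * ne;
    let spread = (q * (1.0 - q)).sqrt() * (pe - ne); // standard deviation
    (mean, spread / mean) // expected points, relative spread per era
}

fn main() {
    // Growing w with p fixed lowers p : w and raises the relative spread,
    // i.e., per-era rewards become more uneven across validators.
    for w in [300.0, 1_000.0] {
        let (mean, rel) = per_era_stats(200.0, w, 20.0, 1.0);
        println!("w = {w}: mean points = {mean:.2}, relative spread = {rel:.2}");
    }
}
```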

"},{"location":"infrastructure/staking-mechanics/rewards-payout/#payout-scheme","title":"Payout Scheme","text":"

Validator rewards are distributed equally among all validators in the active set, regardless of the total stake behind each validator. However, individual payouts may differ based on the number of era points a validator has earned. Although factors like network connectivity can affect era points, well-performing validators should accumulate similar totals over time.

Validators can also receive tips from users, which incentivize them to include certain transactions in their blocks. Validators retain 100% of these tips.

Rewards are paid out in the network's native token (DOT for Polkadot and KSM for Kusama).

The following example illustrates a four member validator set with their names, amount they have staked, and how payout of rewards is divided. This scenario assumes all validators earned the same amount of era points and no one received tips:

%%Payout, 4 val set, A-D are validators/stakes, E is payout%%\n\nblock-beta\n    columns 1\n  block\n    A[\"Alice (18 DOT)\"]\n    B[\"Bob (9 DOT)\"]\n    C[\"Carol (8 DOT)\"]\n    D[\"Dave (7 DOT)\"]\n  end\n    space\n    E[\"Payout (8 DOT total)\"]:1\n    E --\"2 DOT\"--> A\n    E --\"2 DOT\"--> B\n    E --\"2 DOT\"--> C\n    E --\"2 DOT\"--> D 

Note that this is different from most other Proof-of-Stake systems. As long as a validator is in the validator set, it will receive the same block reward as every other validator. Validator Alice, who had 18 DOT staked, received the same 2 DOT reward in this era as Dave, who had only 7 DOT staked.

"},{"location":"infrastructure/staking-mechanics/rewards-payout/#running-multiple-validators","title":"Running Multiple Validators","text":"

Running multiple validators can offer a more favorable risk/reward ratio compared to running a single one. If you have sufficient DOT or nominators staking on your validators, maintaining multiple validators within the active set can yield higher rewards.

In the preceding section, with 18 DOT staked and no nominators, Alice earned 2 DOT in one era. This example uses DOT, but the same principles apply for KSM on the Kusama network. By managing stake across multiple validators, you can potentially increase overall returns. Recall the set of validators from the preceding section:

%%Payout, 4 val set, A-D are validators/stakes, E is payout%%\n\nblock-beta\n    columns 1\n  block\n    A[\"Alice (18 DOT)\"]\n    B[\"Bob (9 DOT)\"]\n    C[\"Carol (8 DOT)\"]\n    D[\"Dave (7 DOT)\"]\n  end\n    space\n    E[\"Payout (8 DOT total)\"]:1\n    E --\"2 DOT\"--> A\n    E --\"2 DOT\"--> B\n    E --\"2 DOT\"--> C\n    E --\"2 DOT\"--> D 

Now, assume Alice decides to split their stake and run two validators, each with a nine DOT stake. This validator set only has four spots and priority is given to validators with a larger stake. In this example, Dave has the smallest stake and loses his spot in the validator set. Now, Alice will earn two shares of the total payout each era as illustrated below:

%%Payout, 4 val set, A-D are validators/stakes, E is payout%%\n\nblock-beta\n    columns 1\n  block\n    A[\"Alice (9 DOT)\"]\n    F[\"Alice (9 DOT)\"]\n    B[\"Bob (9 DOT)\"]\n    C[\"Carol (8 DOT)\"]\n  end\n    space\n    E[\"Payout (8 DOT total)\"]:1\n    E --\"2 DOT\"--> A\n    E --\"2 DOT\"--> B\n    E --\"2 DOT\"--> C\n    E --\"2 DOT\"--> F 

With enough stake, you could run more than two validators. However, each validator must have enough stake behind it to maintain a spot in the validator set.
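In terms of simple arithmetic, era rewards scale with the number of slots you hold. A sketch using the 8 DOT payout and four-slot set from the diagrams above:

```rust
// Equal-share payout from the examples above: each active validator slot
// earns total_payout / slots, regardless of the stake behind it.
fn payout_per_slot(total_payout: f64, slots: u32) -> f64 {
    total_payout / slots as f64
}

fn main() {
    let per_slot = payout_per_slot(8.0, 4); // 2 DOT per slot
    // With one validator Alice earns one share; splitting her stake across
    // two slots doubles her gross payout (before costs and slashing risk).
    println!("one slot: {} DOT, two slots: {} DOT", per_slot, 2.0 * per_slot);
}
```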

"},{"location":"infrastructure/staking-mechanics/rewards-payout/#nominators-and-validator-payments","title":"Nominators and Validator Payments","text":"

A nominator's stake allows them to vote for validators and earn a share of the rewards without managing a validator node. Although staking rewards depend on validator activity during an era, validators themselves never control or own nominator rewards. To trigger payouts, anyone can call the staking.payoutStakers or staking.payoutStakersByPage methods, which mint and distribute rewards directly to the recipients. This trustless process ensures nominators receive their earned rewards.

Validators set a commission rate as a percentage of the block reward, affecting how rewards are shared with nominators. A 0% commission means the validator keeps only rewards from their self-stake, while a 100% commission means they retain all rewards, leaving none for nominators.

The following examples model splitting validator payments between nominator and validator using various commission percentages. For simplicity, these examples assume a Polkadot-SDK based relay chain that uses DOT as a native token and a single nominator per validator. Calculations of KSM reward payouts for Kusama follow the same formula.

Start with the original validator set from the previous section:

block-beta\n    columns 1\n  block:e\n    A[\"Alice (18 DOT)\"]\n    B[\"Bob (9 DOT)\"]\n    C[\"Carol (8 DOT)\"]\n    D[\"Dave (7 DOT)\"]\n  end\n    space\n    E[\"Payout (8 DOT total)\"]:1\n    E --\"2 DOT\"--> A\n    E --\"2 DOT\"--> B\n    E --\"2 DOT\"--> C\n    E --\"2 DOT\"--> D 

The preceding diagram shows each validator receiving a 2 DOT payout, but doesn't account for sharing rewards with nominators. The following diagram shows what nominator payout might look like for validator Alice. Alice has a 20% commission rate and holds 50% of the stake for their validator:

\nflowchart TD\n    A[\"Gross Rewards = 2 DOT\"]\n    E[\"Commission = 20%\"]\n    F[\"Alice Validator Payment = 0.4 DOT\"]\n    G[\"Total Stake Rewards = 1.6 DOT\"]\n    B[\"Alice Validator Stake = 18 DOT\"]\n    C[\"9 DOT Alice (50%)\"]\n    H[\"Alice Stake Reward = 0.8 DOT\"]\n    I[\"Total Alice Validator Reward = 1.2 DOT\"]\n    D[\"9 DOT Nominator (50%)\"]\n    J[\"Total Nominator Reward = 0.8 DOT\"]\n\n    A --> E\n    E --(2 x 0.20)--> F\n    F --(2 - 0.4)--> G\n    B --> C\n    B --> D\n    C --(1.6 x 0.50)--> H\n    H --(0.4 + 0.8)--> I\n    D --(1.60 x 0.50)--> J

Notice the validator commission rate is applied against the gross amount of rewards for the era. The validator commission is subtracted from the total rewards. After the commission is paid to the validator, the remaining amount is split among stake owners according to their percentage of the total stake. A validator's total rewards for an era include their commission plus their piece of the stake rewards.

Now, consider a different scenario for validator Bob where the commission rate is 40%, and Bob holds 33% of the stake for their validator:

\nflowchart TD\n    A[\"Gross Rewards = 2 DOT\"]\n    E[\"Commission = 40%\"]\n    F[\"Bob Validator Payment = 0.8 DOT\"]\n    G[\"Total Stake Rewards = 1.2 DOT\"]\n    B[\"Bob Validator Stake = 9 DOT\"]\n    C[\"3 DOT Bob (33%)\"]\n    H[\"Bob Stake Reward = 0.4 DOT\"]\n    I[\"Total Bob Validator Reward = 1.2 DOT\"]\n    D[\"6 DOT Nominator (67%)\"]\n    J[\"Total Nominator Reward = 0.8 DOT\"]\n\n    A --> E\n    E --(2 x 0.4)--> F\n    F --(2 - 0.8)--> G\n    B --> C\n    B --> D\n    C --(1.2 x 0.33)--> H\n    H --(0.8 + 0.4)--> I\n    D --(1.2 x 0.67)--> J

Bob holds a smaller percentage of their node's total stake, making their stake reward smaller than Alice's. In this scenario, Bob makes up the difference by charging a 40% commission rate and ultimately ends up with the same total payment as Alice. Each validator will need to find their ideal balance between the amount of stake and commission rate to attract nominators while still making running a validator worthwhile.
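The arithmetic in both diagrams can be condensed into a small helper. A sketch using the Alice and Bob figures from above:

```rust
/// Split an era's gross reward between a validator and its nominators:
/// commission comes off the top, then the remainder is shared pro-rata
/// by each party's share of the total stake.
fn split_reward(gross: f64, commission: f64, own_stake_share: f64) -> (f64, f64) {
    let commission_payment = gross * commission;
    let stake_rewards = gross - commission_payment;
    let validator_total = commission_payment + stake_rewards * own_stake_share;
    let nominator_total = stake_rewards * (1.0 - own_stake_share);
    (validator_total, nominator_total)
}

fn main() {
    // Alice: 20% commission, 50% of the stake -> (~1.2 DOT, ~0.8 DOT)
    println!("{:?}", split_reward(2.0, 0.20, 0.50));
    // Bob: 40% commission, one-third of the stake -> (~1.2 DOT, ~0.8 DOT)
    println!("{:?}", split_reward(2.0, 0.40, 1.0 / 3.0));
}
```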

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/","title":"Overview","text":""},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#introduction","title":"Introduction","text":"

Polkadot is a next-generation blockchain protocol designed to support a multi-chain future by enabling secure communication and interoperability between different blockchains. Built as a Layer-0 protocol, Polkadot introduces innovations like application-specific Layer-1 chains (parachains), shared security through Nominated Proof of Stake (NPoS), and seamless cross-chain interactions via its native Cross-Consensus Messaging Format (XCM).

This guide covers key aspects of Polkadot's architecture, including its high-level protocol structure, runtime upgrades, blockspace commoditization, and the role of its native token, DOT, in governance, staking, and resource allocation.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#polkadot-10","title":"Polkadot 1.0","text":"

Polkadot 1.0 represents the state of Polkadot as of 2023, coinciding with the release of Polkadot runtime v1.0.0. This section will focus on Polkadot 1.0, along with philosophical insights into network resilience and blockspace.

As a Layer-0 blockchain, Polkadot contributes to the multi-chain vision through several key innovations and initiatives, including:

  • Application-specific Layer-1 blockchains (parachains) - Polkadot's sharded network allows for parallel transaction processing, with shards that can have unique state transition functions, enabling custom-built L1 chains optimized for specific applications

  • Shared security and scalability - L1 chains connected to Polkadot benefit from its Nominated-Proof-of-Stake (NPoS) system, providing security out-of-the-box without the need to bootstrap their own

  • Secure interoperability - Polkadot's native interoperability enables seamless data and value exchange between parachains. This interoperability can also be used outside of the ecosystem for bridging with external networks

  • Resilient infrastructure - decentralized and scalable, Polkadot ensures ongoing support for development and community initiatives via its on-chain treasury and governance

  • Rapid L1 development - the Polkadot SDK allows fast, flexible creation and deployment of Layer-1 chains

  • Cultivating the next generation of Web3 developers - Polkadot supports the growth of Web3 core developers through initiatives such as:

    • Polkadot Blockchain Academy
    • Polkadot Alpha Program
    • EdX courses
    • Rust and Substrate courses (coming soon)
"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#high-level-architecture","title":"High-Level Architecture","text":"

Polkadot features a central chain, the relay chain, typically depicted as a ring encircled by the parachains connected to it.

According to Polkadot's design, any blockchain that can compile to WebAssembly (Wasm) and adheres to the Parachains Protocol becomes a parachain on the Polkadot network.

Here's a high-level overview of the Polkadot protocol architecture:

Parachains propose blocks to Polkadot validators, who check for availability and validity before finalizing them. With the relay chain providing security, collators (full nodes of parachains) can focus on their tasks without needing strong incentives.

The Cross-Consensus Messaging Format (XCM) allows parachains to exchange messages freely, leveraging the chain's security for trust-free communication.

To interact with chains that use their own finalization process (e.g., Bitcoin), Polkadot has bridges that offer two-way compatibility, meaning that transactions can be made between Polkadot and external networks.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#polkadots-additional-functionalities","title":"Polkadot's Additional Functionalities","text":"

The Polkadot chain oversees crowdloans and auctions. Chain cores were leased through auctions for three-month periods, up to a maximum of two years.

Crowdloans enabled users to securely lend funds to teams for lease deposits in exchange for pre-sale tokens, which was the only way to access slots on Polkadot 1.0.

Note

Auctions are deprecated in favor of coretime.

Additionally, the chain handles staking, accounts, balances, and governance.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#agile-coretime","title":"Agile Coretime","text":"

The new and more efficient way of obtaining a core on Polkadot is through the process of purchasing coretime.

Agile coretime improves the efficient use of Polkadot's network resources and offers economic flexibility for developers, extending Polkadot's capabilities far beyond the original vision outlined in the whitepaper.

It enables parachains to purchase monthly \"bulk\" allocations of coretime (the time allocated for utilizing a core, measured in Polkadot relay chain blocks), ensuring heavy-duty parachains that can author a block every six seconds with Asynchronous Backing can reliably renew their coretime each month. Although six-second block times are now the default, parachains have the option of producing blocks less frequently.

Renewal orders are prioritized over new orders, offering stability against price fluctuations and helping parachains budget more effectively for project costs.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#polkadots-resilience","title":"Polkadot's Resilience","text":"

Decentralization is a vital component of blockchain networks, but it comes with trade-offs:

  • An overly decentralized network may face challenges in reaching consensus and require significant energy to operate
  • Conversely, a network that achieves consensus quickly risks centralization, making it easier to manipulate or attack

A network should be decentralized enough to prevent manipulative or malicious influence. In this sense, decentralization is a tool for achieving resilience.

Polkadot 1.0 currently achieves resilience through several strategies:

  • Nominated Proof of Stake (NPoS) - this ensures that the stake per validator is maximized and evenly distributed among validators

  • Decentralized Nodes program - designed to encourage new operators to join the network. The program aims to expand and diversify the validators in the ecosystem, who aim to become independent of the program during their term. Feel free to explore more about the program on the official Decentralized Nodes page

  • On-chain treasury and governance - known as OpenGov, this system allows every decision to be made through public referenda, enabling any token holder to cast a vote

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#polkadots-blockspace","title":"Polkadot's Blockspace","text":"

Polkadot 1.0's design allows for the commoditization of blockspace.

Blockspace is a blockchain's capacity to finalize and commit operations, encompassing its security, computing, and storage capabilities. Its characteristics can vary across different blockchains, affecting security, flexibility, and availability.

  • Security - measures the robustness of blockspace. In Proof of Stake (PoS) networks, it is linked to the stake locked on validator nodes, the variance in stake among validators, and the total number of validators. It also considers social centralization (how many validators are owned by single operators) and physical centralization (how many validators run on the same service provider)

  • Flexibility - reflects the functionalities and types of data that can be stored, with high-quality data essential to avoid bottlenecks in critical processes

  • Availability - indicates how easily users can access blockspace. It should be easily accessible, allowing diverse business models to thrive, ideally regulated by a marketplace based on demand and supplemented by options for \"second-hand\" blockspace

Polkadot is built on core blockspace principles, but there's room for improvement. Tasks like balance transfers, staking, and governance are managed on the relay chain.

Delegating these responsibilities to system chains could enhance flexibility and allow the relay chain to concentrate on providing shared security and interoperability.

Note

For more information about blockspace, watch Robert Habermeier's interview or read his technical blog post.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#dot-token","title":"DOT Token","text":"

DOT is the native token of the Polkadot network, much like BTC for Bitcoin and Ether for the Ethereum blockchain. DOT has 10 decimals, uses the Planck base unit, and has a balance type of u128. The same is true for Kusama's KSM token with the exception of having 12 decimals.

Redenomination of DOT

Polkadot conducted a community poll, which ended on 27 July 2020 at block 888,888, to decide whether to redenominate the DOT token. The stakeholders chose to redenominate the token, changing the value of 1 DOT from 1e12 plancks to 1e10 plancks.

Importantly, this did not affect the network's total number of base units (plancks); it only affects how a single DOT is represented.

The redenomination became effective 72 hours after transfers were enabled, occurring at block 1,248,328 on 21 August 2020 around 16:50 UTC.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#the-planck-unit","title":"The Planck Unit","text":"

The smallest unit of account balance on Substrate-based blockchains (such as Polkadot and Kusama) is called Planck, named after the Planck length, the smallest measurable distance in the physical universe.

Similar to how BTC's smallest unit is the Satoshi and ETH's is the Wei, Polkadot's native token DOT equals 1e10 Planck, while Kusama's native token KSM equals 1e12 Planck.
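Because balances are u128 counts of plancks, conversions reduce to integer scaling. A small sketch:

```rust
// 1 DOT = 1e10 plancks; 1 KSM = 1e12 plancks. Balances are u128 planck counts.
const PLANCKS_PER_DOT: u128 = 10_000_000_000;
const PLANCKS_PER_KSM: u128 = 1_000_000_000_000;

fn main() {
    // 25.5 DOT expressed in plancks, avoiding floating point entirely.
    let balance = 25 * PLANCKS_PER_DOT + PLANCKS_PER_DOT / 2;
    println!("{balance} plancks"); // 255000000000
    println!("{PLANCKS_PER_KSM} plancks per KSM");
}
```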

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#uses-for-dot","title":"Uses for DOT","text":"

DOT serves three primary functions within the Polkadot network:

  • Governance - it is used to participate in the governance of the network
  • Staking - DOT is staked to support the network's operation and security
  • Buying coretime - used to purchase coretime in bulk or on demand and access the chain to benefit from Polkadot's security and interoperability

Additionally, DOT can serve as a transferable token. For example, DOT, held in the treasury, can be allocated to teams developing projects that benefit the Polkadot ecosystem.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#jam-and-the-road-ahead","title":"JAM and the Road Ahead","text":"

The Join-Accumulate Machine (JAM) represents a transformative redesign of Polkadot's core architecture, envisioned as the successor to the current relay chain. Unlike traditional blockchain architectures, JAM introduces a unique computational model that processes work through two primary functions:

  • Join - handles data integration
  • Accumulate - folds computations into the chain's state

JAM removes many of the opinions and constraints of the current relay chain while maintaining its core security properties. Expected improvements include:

  • Permissionless code execution - JAM is designed to be more generic and flexible, allowing for permissionless code execution through services that can be deployed without governance approval
  • More effective block time utilization - JAM's efficient pipeline processing model places the prior state root in block headers instead of the posterior state root, enabling more effective utilization of block time for computations

This architectural evolution promises to enhance Polkadot's scalability and flexibility while maintaining robust security guarantees. JAM is planned to be rolled out to Polkadot as a single, complete upgrade rather than a stream of smaller updates. This approach seeks to minimize the developer overhead required to address any breaking changes.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/","title":"Proof of Stake Consensus","text":""},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#introduction","title":"Introduction","text":"

Polkadot's Proof of Stake consensus model leverages a unique hybrid approach by design to promote decentralized and secure network operations. In traditional Proof of Stake (PoS) systems, a node's ability to validate transactions is tied to its token holdings, which can lead to centralization risks and limited validator participation. Polkadot addresses these concerns through its Nominated Proof of Stake (NPoS) model and a combination of advanced consensus mechanisms to ensure efficient block production and strong finality guarantees. This combination enables the Polkadot network to scale while maintaining security and decentralization.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#nominated-proof-of-stake","title":"Nominated Proof of Stake","text":"

Polkadot uses Nominated Proof of Stake (NPoS) to select the validator set and secure the network. This model is designed to maximize decentralization and security by balancing the roles of validators and nominators.

  • Validators - play a key role in maintaining the network's integrity. They produce new blocks, validate parachain blocks, and ensure the finality of transactions across the relay chain
  • Nominators - support the network by selecting validators to back with their stake. This mechanism allows users who don't want to run a validator node to still participate in securing the network and earn rewards based on the validators they support

In Polkadot's NPoS system, nominators can delegate their tokens to trusted validators, giving them voting power in selecting validators while spreading security responsibilities across the network.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#hybrid-consensus","title":"Hybrid Consensus","text":"

Polkadot employs a hybrid consensus model that combines two key protocols: a finality gadget called GRANDPA and a block production mechanism known as BABE. This hybrid approach enables the network to benefit from both rapid block production and provable finality, ensuring security and performance.

The hybrid consensus model has some key advantages:

  • Probabilistic finality - with BABE constantly producing new blocks, Polkadot ensures that the network continues to make progress, even when a final decision has not yet been reached on which chain is the true canonical chain

  • Provable finality - GRANDPA guarantees that once a block is finalized, it can never be reverted, ensuring that all network participants agree on the finalized chain

By using separate protocols for block production and finality, Polkadot can achieve rapid block creation and strong guarantees of finality while avoiding the typical trade-offs seen in traditional consensus mechanisms.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#block-production-babe","title":"Block Production - BABE","text":"

Blind Assignment for Blockchain Extension (BABE) is Polkadot's block production mechanism, working with GRANDPA to ensure blocks are produced consistently across the network. As validators participate in BABE, they are assigned block production slots through a randomness-based lottery system. This helps determine which validator is responsible for producing a block at a given time. BABE shares similarities with Ouroboros Praos but differs in key aspects like chain selection rules and slot timing.

Key features of BABE include:

  • Epochs and slots - BABE operates in phases called epochs, each of which is divided into slots (around 6 seconds per slot). Validators are assigned slots at the beginning of each epoch based on stake and randomness

  • Randomized block production - validators enter a lottery to determine which will produce a block in a specific slot. This randomness is sourced from the relay chain's randomness cycle

  • Multiple block producers per slot - in some cases, more than one validator might win the lottery for the same slot, resulting in multiple blocks being produced. These blocks are broadcasted, and the network's fork choice rule helps decide which chain to follow

  • Handling empty slots - if no validators win the lottery for a slot, a secondary selection algorithm ensures that a block is still produced. Validators selected through this method always produce a block, ensuring no slots are skipped

BABE's combination of randomness and slot allocation creates a secure, decentralized system for consistent block production while also allowing for fork resolution when multiple validators produce blocks for the same slot.

Additional Information
  • Refer to the BABE paper for further technical insights, including cryptographic details and formal proofs
  • Visit the Block Production Lottery section of the Polkadot Protocol Specification for technical definitions and formulas
"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#validator-participation","title":"Validator Participation","text":"

In BABE, validators participate in a lottery for every slot to determine whether they are responsible for producing a block during that slot. This process's randomness ensures a decentralized and unpredictable block production mechanism.

There are two lottery outcomes for any given slot that initiate additional processes:

  • Multiple validators in a slot - due to the randomness, multiple validators can be selected to produce a block for the same slot. When this happens, each validator produces a block and broadcasts it to the network, resulting in a race condition. The network's topology and latency then determine which block reaches the majority of nodes first. BABE allows both chains to continue building until the finalization process resolves which one becomes canonical. The Fork Choice rule is then used to decide which chain the network should follow

  • No validators in a slot - on occasions when no validator is selected by the lottery, a secondary validator selection algorithm steps in. This backup ensures that a block is still produced, preventing skipped slots. However, if the primary block produced by a verifiable random function (VRF)-selected validator exists for that slot, the secondary block will be ignored. As a result, every slot will have either a primary or a secondary block

This design ensures continuous block production, even in cases of multiple competing validators or an absence of selected validators.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#finality-gadget-grandpa","title":"Finality Gadget - GRANDPA","text":"

GRANDPA (GHOST-based Recursive ANcestor Deriving Prefix Agreement) serves as the finality gadget for Polkadot's relay chain. Operating alongside the BABE block production mechanism, it ensures provable finality, giving participants confidence that blocks finalized by GRANDPA cannot be reverted.

Key features of GRANDPA include:

  • Independent finality service - GRANDPA runs separately from the block production process, operating in parallel to ensure seamless finalization
  • Chain-based finalization - instead of finalizing one block at a time, GRANDPA finalizes entire chains, speeding up the process significantly
  • Batch finalization - can finalize multiple blocks in a single round, enhancing efficiency and minimizing delays in the network
  • Partial synchrony tolerance - GRANDPA works effectively in a partially synchronous network environment, managing both asynchronous and synchronous conditions
  • Byzantine fault tolerance - can handle up to 1/5 Byzantine (malicious) nodes, ensuring the system remains secure even when faced with adversarial behavior
What is GHOST?

GHOST (Greedy Heaviest-Observed Subtree) is a consensus protocol used in blockchain networks to select the heaviest branch in a block tree. Unlike traditional longest-chain rules, GHOST can more efficiently handle high block production rates by considering the weight of subtrees rather than just the chain length.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#probabilistic-vs-provable-finality","title":"Probabilistic vs. Provable Finality","text":"

In traditional Proof-of-Work (PoW) blockchains, finality is probabilistic. As blocks are added to the chain, the probability that a block is final increases, but it can never be guaranteed. Eventual consensus means that over time, all nodes will agree on a single version of the blockchain, but this process can be unpredictable and slow.

Conversely, GRANDPA provides provable finality, which means that once a block is finalized, it is irreversible. By using Byzantine fault-tolerant agreements, GRANDPA finalizes blocks more efficiently and securely than probabilistic mechanisms like Nakamoto consensus. Like Ethereum's Casper the Friendly Finality Gadget (FFG), GRANDPA ensures that finalized blocks cannot be reverted, offering stronger guarantees of consensus.

Additional Information

For more details, including formal proofs and detailed algorithms, see the GRANDPA paper.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#fork-choice","title":"Fork Choice","text":"

The fork choice of the relay chain combines BABE and GRANDPA:

  1. BABE must always build on the chain that GRANDPA has finalized
  2. When there are forks after the finalized head, BABE builds on the chain with the most primary blocks to provide probabilistic finality

In the preceding diagram, finalized blocks are black, and non-finalized blocks are yellow. Primary blocks are labeled '1', and secondary blocks are labeled '2.' The topmost chain is the longest chain originating from the last finalized block, but it is not selected because it only has one primary block at the time of evaluation. In comparison, the one below it originates from the last finalized block and has three primary blocks.
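A toy model of this rule follows; the fork data is invented for illustration:

```rust
// Toy model of the fork-choice rule: among chains extending the finalized
// head, BABE builds on the one with the most primary blocks, not the longest.
struct Fork {
    name: &'static str,
    length: u32,
    primary_blocks: u32,
}

fn best_fork(forks: &[Fork]) -> &Fork {
    forks
        .iter()
        .max_by_key(|f| f.primary_blocks)
        .expect("at least one fork extends the finalized head")
}

fn main() {
    let forks = [
        Fork { name: "longest", length: 5, primary_blocks: 1 },
        Fork { name: "shorter", length: 4, primary_blocks: 3 },
    ];
    let best = best_fork(&forks);
    // The shorter chain wins because it carries more primary blocks.
    println!("build on {}: length {}, {} primary blocks", best.name, best.length, best.primary_blocks);
}
```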

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#bridging-beefy","title":"Bridging - BEEFY","text":"

Bridge Efficiency Enabling Finality Yielder (BEEFY) is a specialized protocol that extends the finality guarantees provided by GRANDPA. It is specifically designed to facilitate efficient bridging between Polkadot relay chains (such as Polkadot and Kusama) and external blockchains like Ethereum. While GRANDPA is well-suited for finalizing blocks within Polkadot, it has limitations when bridging external chains that weren't built with Polkadot's interoperability features in mind. BEEFY addresses these limitations by ensuring other networks can efficiently verify finality proofs.

Key features of BEEFY include:

  • Efficient finality proof verification - BEEFY enables external networks to easily verify Polkadot finality proofs, ensuring seamless communication between chains
  • Merkle Mountain Ranges (MMR) - this data structure is used to efficiently store and transmit proofs between chains, optimizing data storage and reducing transmission overhead
  • ECDSA signature schemes - BEEFY uses ECDSA signatures, which are widely supported on Ethereum and other EVM-based chains, making integration with these ecosystems smoother
  • Light client optimization - BEEFY reduces the computational burden on light clients by allowing them to check for a super-majority of validator votes rather than needing to process all validator signatures, improving performance
Additional Information

For more details, including technical definitions and formulas, see Bridge design (BEEFY) in the Polkadot Protocol Specification.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#resources","title":"Resources","text":"
  • GRANDPA Rust implementation
  • GRANDPA Pallet
  • Block Production and Finalization in Polkadot - Bill Laboon explains how BABE and GRANDPA work together to produce and finalize blocks on Kusama
  • Block Production and Finalization in Polkadot: Understanding the BABE and GRANDPA Protocols - Bill Laboon's MIT Cryptoeconomic Systems 2020 academic talk describing Polkadot's hybrid consensus model in-depth
"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/","title":"Asset Hub","text":""},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#introduction","title":"Introduction","text":"

The Asset Hub is a critical component in the Polkadot ecosystem, enabling the management of fungible and non-fungible assets across the network. Since the relay chain focuses on maintaining security and consensus without direct asset management, Asset Hub provides a streamlined platform for creating, managing, and using on-chain assets in a fee-efficient manner. This guide outlines the core features of Asset Hub, including how it handles asset operations, cross-chain transfers, and asset integration using XCM, as well as essential tools like API Sidecar and TxWrapper for developers working with on-chain assets.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#assets-basics","title":"Assets Basics","text":"

In the Polkadot ecosystem, the relay chain does not natively support additional assets beyond its native token (DOT for Polkadot, KSM for Kusama). The Asset Hub parachain on Polkadot and Kusama provides a fungible and non-fungible assets framework. Asset Hub allows developers and users to create, manage, and use assets across the ecosystem.

Asset creators can use Asset Hub to track their asset issuance across multiple parachains and manage assets through operations such as minting, burning, and transferring. Projects that need a standardized method of handling on-chain assets will find this particularly useful. The fungible asset interface provided by Asset Hub closely resembles Ethereum's ERC-20 standard but is directly integrated into Polkadot's runtime, making it more efficient in terms of speed and transaction fees.

Integrating with Asset Hub offers several key benefits, particularly for infrastructure providers and users:

  • Support for non-native on-chain assets - Asset Hub enables seamless asset creation and management, allowing projects to develop tokens or assets that can interact with the broader ecosystem
  • Lower transaction fees - Asset Hub offers significantly lower transaction costs, approximately one-tenth of the fees on the relay chain, providing cost-efficiency for regular operations
  • Reduced deposit requirements - depositing assets in Asset Hub is more accessible, with deposit requirements that are around one one-hundredth of those on the relay chain
  • Payment of transaction fees with non-native assets - users can pay transaction fees in assets other than the native token (DOT or KSM), offering more flexibility for developers and users

Assets created on the Asset Hub are stored as part of a map, where each asset has a unique ID that links to information about the asset, including details like:

  • The management team
  • The total supply
  • The number of accounts holding the asset
  • Sufficiency for account existence - whether the asset alone is enough to maintain an account without a native token balance
  • The metadata of the asset, including its name, symbol, and the number of decimals for representation

Some assets can be regarded as sufficient to maintain an account's existence, meaning that users can create accounts on the network without needing a native token balance (i.e., no existential deposit required). Developers can also set minimum balances for their assets. If an account's balance drops below the minimum, the balance is considered dust and may be cleared.
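A toy sketch of the minimum-balance rule; the threshold and amounts are invented:

```rust
// Illustration only: an asset balance below the configured minimum is
// treated as dust and may be cleared by the chain.
fn settle(balance: u128, min_balance: u128) -> Option<u128> {
    if balance < min_balance { None } else { Some(balance) }
}

fn main() {
    assert_eq!(settle(5, 10), None);      // below minimum: dusted
    assert_eq!(settle(25, 10), Some(25)); // at or above minimum: kept
}
```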

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#assets-pallet","title":"Assets Pallet","text":"

The Polkadot SDK's Assets pallet is a powerful module designed for creating and managing fungible asset classes with a fixed supply. It offers a secure and flexible way to issue, transfer, freeze, and destroy assets. The pallet supports various operations and includes permissioned and non-permissioned functions to cater to simple and advanced use cases.

Visit the Assets Pallet Rust docs for more in-depth information.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#key-features","title":"Key Features","text":"

Key features of the Assets pallet include:

  • Asset issuance - allows the creation of a new asset, where the total supply is assigned to the creator's account
  • Asset transfer - enables transferring assets between accounts while maintaining a balance in both accounts
  • Asset freezing - prevents transfers of a specific asset from one account, locking it from further transactions
  • Asset destruction - allows accounts to burn or destroy their holdings, removing those assets from circulation
  • Non-custodial transfers - a non-custodial mechanism to enable one account to approve a transfer of assets on behalf of another
"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#main-functions","title":"Main Functions","text":"

The Assets pallet provides a broad interface for managing fungible assets. Some of the main dispatchable functions include:

  • create() - create a new asset class by placing a deposit, applicable when asset creation is permissionless
  • issue() - mint a fixed supply of a new asset and assign it to the creator's account
  • transfer() - transfer a specified amount of an asset between two accounts
  • approve_transfer() - approve a non-custodial transfer, allowing a third party to move assets between accounts
  • destroy() - destroy an entire asset class, removing it permanently from the chain
  • freeze() and thaw() - administrators or privileged users can lock or unlock assets from being transferred

For a full list of dispatchable and privileged functions, see the dispatchables Rust docs.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#querying-functions","title":"Querying Functions","text":"

The Assets pallet exposes several key querying functions that developers can interact with programmatically. These functions allow you to query asset information and perform operations essential for managing assets across accounts. The two main querying functions are:

  • balance(asset_id, account) - retrieves the balance of a given asset for a specified account. Useful for checking the holdings of an asset class across different accounts

  • total_supply(asset_id) - returns the total supply of the asset identified by asset_id. Allows users to verify how much of the asset exists on-chain

In addition to these basic functions, other utility functions are available for querying asset metadata and performing asset transfers. You can view the complete list of querying functions in the Struct Pallet Rust docs.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#permission-models-and-roles","title":"Permission Models and Roles","text":"

The Assets pallet incorporates a robust permission model, enabling control over who can perform specific operations like minting, transferring, or freezing assets. The key roles within the permission model are:

  • Admin - can freeze (preventing transfers) and forcibly transfer assets between accounts. Admins also have the power to reduce the balance of an asset class across arbitrary accounts. They manage the more sensitive and administrative aspects of the asset class
  • Issuer - responsible for minting new tokens. When new assets are created, the Issuer is the account that controls their distribution to other accounts
  • Freezer - can lock the transfer of assets from an account, preventing the account holder from moving their balance. This function is useful for freezing accounts involved in disputes or fraud
  • Owner - has overarching control, including destroying an entire asset class. Owners can also set or update the Issuer, Freezer, and Admin roles

These permissions provide fine-grained control over assets, enabling developers and asset managers to ensure secure, controlled operations. Each of these roles is crucial for managing asset lifecycles and ensuring that assets are used appropriately across the network.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#asset-freezing","title":"Asset Freezing","text":"

The Assets pallet allows you to freeze assets. This feature prevents transfers or spending from a specific account, effectively locking the balance of an asset class until it is explicitly unfrozen. Asset freezing is beneficial when assets are restricted due to security concerns or disputes.

Freezing assets is controlled by the Freezer role, as mentioned earlier. Only the account with the Freezer privilege can perform these operations. Here are the key freezing functions:

  • freeze(asset_id, account) - locks the specified asset of the account. While the asset is frozen, no transfers can be made from the frozen account
  • thaw(asset_id, account) - corresponding function for unfreezing, allowing the asset to be transferred again

This approach enables secure and flexible asset management, providing administrators the tools to control asset movement in special circumstances.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#non-custodial-transfers-approval-api","title":"Non-Custodial Transfers (Approval API)","text":"

The Assets pallet also supports non-custodial transfers through the Approval API. This feature allows one account to approve another account to transfer a specific amount of its assets to a third-party recipient without granting full control over the account's balance. Non-custodial transfers enable secure transactions where trust is required between multiple parties.

Here's a brief overview of the key functions for non-custodial asset transfers:

  • approve_transfer(asset_id, delegate, amount) - approves a delegate to transfer up to a certain amount of the asset on behalf of the original account holder
  • cancel_approval(asset_id, delegate) - cancels a previous approval for the delegate. Once canceled, the delegate no longer has permission to transfer the approved amount
  • transfer_approved(asset_id, owner, recipient, amount) - executes the approved asset transfer from the owner's account to the recipient. The delegate account can call this function once approval is granted

These delegated operations make it easier to manage multi-step transactions and dApps that require complex asset flows between participants.
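The following toy, in-memory model (not the pallet itself) illustrates the semantics of these three calls:

```rust
use std::collections::HashMap;

// Toy model of the approval flow described above; account names are invented.
#[derive(Default)]
struct Asset {
    balances: HashMap<&'static str, u128>,
    // (owner, delegate) -> remaining approved amount
    approvals: HashMap<(&'static str, &'static str), u128>,
}

impl Asset {
    fn approve_transfer(&mut self, owner: &'static str, delegate: &'static str, amount: u128) {
        self.approvals.insert((owner, delegate), amount);
    }

    fn cancel_approval(&mut self, owner: &'static str, delegate: &'static str) {
        self.approvals.remove(&(owner, delegate));
    }

    fn transfer_approved(
        &mut self,
        delegate: &'static str,
        owner: &'static str,
        recipient: &'static str,
        amount: u128,
    ) -> Result<(), &'static str> {
        let allowance = self.approvals.get_mut(&(owner, delegate)).ok_or("no approval")?;
        if *allowance < amount {
            return Err("allowance exceeded");
        }
        let balance = self.balances.get_mut(owner).ok_or("no balance")?;
        if *balance < amount {
            return Err("insufficient balance");
        }
        *allowance -= amount;
        *balance -= amount;
        *self.balances.entry(recipient).or_default() += amount;
        Ok(())
    }
}

fn main() {
    let mut asset = Asset::default();
    asset.balances.insert("alice", 100);
    asset.approve_transfer("alice", "bob", 40);
    // Bob moves 25 of Alice's tokens to Charlie without custody of her account.
    asset.transfer_approved("bob", "alice", "charlie", 25).unwrap();
    assert_eq!(asset.balances["charlie"], 25);
    asset.cancel_approval("alice", "bob"); // Bob's remaining allowance revoked
}
```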

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#foreign-assets","title":"Foreign Assets","text":"

Foreign assets in Asset Hub refer to assets originating from external blockchains or parachains that are registered in the Asset Hub. These assets are typically native tokens from other parachains within the Polkadot ecosystem or bridged tokens from external blockchains such as Ethereum.

Once a foreign asset is registered in the Asset Hub by its originating blockchain's root origin, users are able to send these tokens to the Asset Hub and interact with them as they would any other asset within the Polkadot ecosystem.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#handling-foreign-assets","title":"Handling Foreign Assets","text":"

The Foreign Assets pallet, an instance of the Assets pallet, manages these assets. Since foreign assets are integrated into the same interface as native assets, developers can use the same functionalities, such as transferring and querying balances. However, there are important distinctions when dealing with foreign assets:

  • Asset identifier - unlike native assets, foreign assets are identified using an XCM Multilocation rather than a simple numeric AssetId. This multilocation identifier represents the cross-chain location of the asset and provides a standardized way to reference it across different parachains and relay chains

  • Transfers - once registered in the Asset Hub, foreign assets can be transferred between accounts, just like native assets. Users can also send these assets back to their originating blockchain if supported by the relevant cross-chain messaging mechanisms

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#integration","title":"Integration","text":"

Asset Hub supports a variety of integration tools that make it easy for developers to manage assets and interact with the blockchain in their applications. The tools and libraries provided by Parity Technologies enable streamlined operations, such as querying asset information, building transactions, and monitoring cross-chain asset transfers.

Developers can integrate Asset Hub into their projects using these core tools:

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#api-sidecar","title":"API Sidecar","text":"

API Sidecar is a RESTful service that can be deployed alongside Polkadot and Kusama nodes. It provides endpoints to retrieve real-time blockchain data, including asset information. When used with Asset Hub, Sidecar allows querying:

  • Asset look-ups - retrieve specific assets using AssetId
  • Asset balances - view the balance of a particular asset on Asset Hub

Public instances of API Sidecar connected to Asset Hub are available, such as:

  • Polkadot Asset Hub Sidecar
  • Kusama Asset Hub Sidecar

These public instances are primarily for ad-hoc testing and quick checks.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#txwrapper","title":"TxWrapper","text":"

TxWrapper is a library that simplifies constructing and signing transactions for Polkadot SDK-based chains, including Polkadot and Kusama. This tool includes support for working with Asset Hub, enabling developers to:

  • Construct offline transactions
  • Leverage asset-specific functions such as minting, burning, and transferring assets

TxWrapper provides the flexibility needed to integrate asset operations into custom applications while maintaining the security and efficiency of Polkadot's transaction model.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#asset-transfer-api","title":"Asset Transfer API","text":"

Asset Transfer API is a library focused on simplifying the construction of asset transfers for Polkadot SDK-based chains that involve system parachains like Asset Hub. It exposes a reduced set of methods that facilitates sending transfers to other parachains or locally. Refer to the cross-chain support table for the current status of cross-chain support development.

Key features include:

  • Support for cross-chain transfers between parachains
  • Streamlined transaction construction with support for the necessary parachain metadata

The API supports various asset operations, such as paying transaction fees with non-native tokens and managing asset liquidity.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#parachain-node","title":"Parachain Node","text":"

To fully leverage the Asset Hub's functionality, developers will need to run a system parachain node. Setting up an Asset Hub node allows users to interact with the parachain in real time, syncing data and participating in the broader Polkadot ecosystem. Guidelines for setting up an Asset Hub node are available in the Parity documentation.

Using these integration tools, developers can manage assets seamlessly and integrate Asset Hub functionality into their applications, leveraging Polkadot's powerful infrastructure.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#xcm-transfer-monitoring","title":"XCM Transfer Monitoring","text":"

Since Asset Hub facilitates cross-chain asset transfers across the Polkadot ecosystem, XCM transfer monitoring becomes an essential practice for developers and infrastructure providers. This section outlines how to monitor the cross-chain movement of assets between parachains, the relay chain, and other systems.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#monitor-xcm-deposits","title":"Monitor XCM Deposits","text":"

As assets move between chains, tracking the cross-chain transfers in real time is crucial. Whether assets are transferred via a teleport from system parachains or through a reserve-backed transfer from any other parachain, each transfer emits a relevant event (such as the balances.minted event).

To ensure accurate monitoring of these events:

  • Track XCM deposits - query every new block created in the relay chain or Asset Hub, loop through the events array, and filter for any balances.minted events, which confirm the asset was successfully transferred to the account
  • Track event origins - each balances.minted event points to a specific address. By monitoring this, service providers can verify that assets have arrived in the correct account
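
As a sketch of this flow with the Polkadot.js API (the WebSocket endpoint is an assumption; point it at the chain you are monitoring, and note that the runtime emits the event as balances.Minted):

const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  // Example endpoint; substitute the chain you are monitoring
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-asset-hub-rpc.polkadot.io'),
  });

  // Subscribe to the events emitted by each new block
  await api.query.system.events((events) => {
    events.forEach(({ event }) => {
      // Filter for balances.Minted, which confirms a deposit to an account
      if (event.section === 'balances' && event.method === 'Minted') {
        const [who, amount] = event.data;
        console.log(`Deposit detected: ${who.toString()} received ${amount.toString()}`);
      }
    });
  });
}

main().catch(console.error);
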
"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#track-xcm-information-back-to-the-source","title":"Track XCM Information Back to the Source","text":"

While the balances.minted event confirms the arrival of assets, there may be instances where you need to trace the origin of the cross-chain message that triggered the event. In such cases, you can:

  1. Query the relevant chain at the block where the balances.minted event was emitted
  2. Look for a messageQueue(Processed) event within that block's initialization. This event contains a parameter (Id) that identifies the cross-chain message received by the relay chain or Asset Hub. You can use this Id to trace the message back to its origin chain, offering full visibility of the asset transfer's journey
"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#practical-monitoring-examples","title":"Practical Monitoring Examples","text":"

The preceding sections outline the process of monitoring XCM deposits to specific accounts and then tracing back the origin of these deposits. The process of tracking an XCM transfer and the specific events to monitor may vary based on the direction of the XCM message. Here are some examples to showcase the slight differences:

  • Transfer from parachain to relay chain - track parachainSystem(UpwardMessageSent) on the parachain and messageQueue(Processed) on the relay chain
  • Transfer from relay chain to parachain - track xcmPallet(Sent) on the relay chain and dmpQueue(ExecutedDownward) on the parachain
  • Transfer between parachains - track xcmpQueue(XcmpMessageSent) on the system parachain and xcmpQueue(Success) on the destination parachain
"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#monitor-for-failed-xcm-transfers","title":"Monitor for Failed XCM Transfers","text":"

Sometimes, XCM transfers may fail due to liquidity or other errors. Failed transfers emit specific error events, which are key to resolving issues in asset transfers. Monitoring for these failure events helps catch issues before they affect asset balances.

  • Relay chain to system parachain - look for the dmpQueue(ExecutedDownward) event on the parachain with an Incomplete outcome and an error type such as UntrustedReserveLocation
  • Parachain to parachain - monitor for xcmpQueue(Fail) on the destination parachain with error types like TooExpensive

For detailed error management in XCM, see Gavin Wood's blog post on XCM Execution and Error Management.

"},{"location":"polkadot-protocol/architecture/system-chains/bridge-hub/","title":"Bridge Hub","text":""},{"location":"polkadot-protocol/architecture/system-chains/bridge-hub/#introduction","title":"Introduction","text":"

The Bridge Hub system parachain plays a crucial role in facilitating trustless interactions between Polkadot, Kusama, Ethereum, and other blockchain ecosystems. By implementing on-chain light clients and supporting protocols like BEEFY and GRANDPA, Bridge Hub ensures seamless message transmission and state verification across chains. It also provides essential pallets for sending and receiving messages, making it a cornerstone of Polkadot\u2019s interoperability framework. With built-in support for XCM (Cross-Consensus Messaging), Bridge Hub enables secure, efficient communication between diverse blockchain networks.

This guide covers the architecture, components, and deployment of the Bridge Hub system. You'll explore its trustless bridging mechanisms, key pallets for various blockchains, and specific implementations like Snowbridge and the Polkadot <> Kusama bridge. By the end, you'll understand how Bridge Hub enhances connectivity within the Polkadot ecosystem and beyond.

"},{"location":"polkadot-protocol/architecture/system-chains/bridge-hub/#trustless-bridging","title":"Trustless Bridging","text":"

Bridge Hub provides a mode of trustless bridging through its implementation of on-chain light clients and trustless relayers. The target chain and source chain both provide ways of verifying one another's state and actions (such as a transfer) based on the consensus and finality of both chains rather than an external mechanism controlled by a third party.

BEEFY (Bridge Efficiency Enabling Finality Yielder) is instrumental in this solution. It provides a more efficient way to verify the consensus on the relay chain. It allows the participants in a network to verify finality proofs, meaning a remote chain like Ethereum can verify the state of Polkadot at a given block height.

Info

In this context, \"trustless\" refers to the lack of need to trust a human when interacting with various system components. Trustless systems are based instead on trusting mathematics, cryptography, and code.

Trustless bridges are essentially two one-way bridges, where each chain has a method of verifying the state of the other in a trustless manner through consensus proofs.

For example, the Ethereum and Polkadot bridging solution that Snowbridge implements involves two light clients: one which verifies the state of Polkadot and the other which verifies the state of Ethereum. The light client for Ethereum, which follows Ethereum's beacon chain, is implemented in the Bridge Hub runtime as a pallet, whereas the light client for Polkadot is implemented as a smart contract on Ethereum.

"},{"location":"polkadot-protocol/architecture/system-chains/bridge-hub/#bridging-components","title":"Bridging Components","text":"

In any given Bridge Hub implementation (Kusama, Polkadot, or other relay chains), there are a few primary pallets that are utilized:

  • Pallet Bridge GRANDPA - an on-chain GRANDPA light client for Substrate-based chains
  • Pallet Bridge Parachains - a finality module for parachains
  • Pallet Bridge Messages - a pallet which allows sending, receiving, and tracking of inbound and outbound messages
  • Pallet XCM Bridge - a pallet which, with the Bridge Messages pallet, adds XCM support to bridge pallets
"},{"location":"polkadot-protocol/architecture/system-chains/bridge-hub/#ethereum-specific-support","title":"Ethereum-Specific Support","text":"

Bridge Hub also has a set of components and pallets that support a bridge between Polkadot and Ethereum through Snowbridge.

To view the complete list of which pallets are included in Bridge Hub, visit the Subscan Runtime Modules page. Alternatively, the source code for those pallets can be found in the Polkadot SDK Snowbridge Pallets repository.

"},{"location":"polkadot-protocol/architecture/system-chains/bridge-hub/#deployed-bridges","title":"Deployed Bridges","text":"
  • Snowbridge - a general-purpose, trustless bridge between Polkadot and Ethereum
  • Hyperbridge - a cross-chain solution built as an interoperability coprocessor, providing state-proof-based interoperability across all blockchains
  • Polkadot <> Kusama Bridge - a bridge that utilizes relayers to bridge the Polkadot and Kusama relay chains trustlessly
"},{"location":"polkadot-protocol/architecture/system-chains/bridge-hub/#where-to-go-next","title":"Where to Go Next","text":"
  • Go over the Bridge Hub README in the Polkadot SDK Bridge-hub Parachains repository
  • Take a deeper dive into bridging architecture in the Polkadot SDK High-Level Bridge documentation
  • Read more about BEEFY and Bridging in the Polkadot Wiki: Bridging: BEEFY
"},{"location":"polkadot-protocol/architecture/system-chains/coretime/","title":"Coretime","text":""},{"location":"polkadot-protocol/architecture/system-chains/coretime/#introduction","title":"Introduction","text":"

The Coretime system chain facilitates the allocation, procurement, sale, and scheduling of bulk coretime, enabling tasks (such as parachains) to utilize the computation and security provided by Polkadot.

The Broker pallet, along with Cross Consensus Messaging (XCM), enables this functionality to be delegated to the system chain rather than the relay chain. Using Upward Message Passing (UMP) to the relay chain allows core assignments to take place for a task registered on the relay chain.

Fellowship RFC-1: Agile Coretime contains the specification for the Coretime system chain and coretime as a concept.

Besides core management, its responsibilities include informing the relay chain of:

  • The number of cores that should be made available
  • Which tasks should be running on which cores and in what ratios
  • Accounting information for the on-demand pool

From the relay chain, it expects the following via Downward Message Passing (DMP):

  • The number of cores available to be scheduled
  • Account information on on-demand scheduling

The details for this interface can be found in RFC-5: Coretime Interface.

"},{"location":"polkadot-protocol/architecture/system-chains/coretime/#bulk-coretime-assignment","title":"Bulk Coretime Assignment","text":"

The Coretime chain allocates coretime before its usage. It also manages the ownership of a core. As cores are made up of regions (by default, one core is a single region), a region is recognized as a non-fungible asset. The Coretime chain exposes Regions over XCM as an NFT. Users can transfer individual regions, partition, interlace, or allocate them to a task. Regions describe how a task may use a core.

One core can contain more than one region.

A core can be considered a logical representation of an active validator set on the relay chain, where these validators commit to verifying the state changes for a particular task running on that core. With partitioning, having more than one region per core is possible, allowing for different computational schemes. Therefore, running more than one task on a single core is possible.

Regions can be managed in the following manner on the Coretime chain:

  • Assigning regions - regions can be assigned to a task on the relay chain, such as a parachain/rollup, using the assign dispatchable

  • Transferring regions - regions may be transferred on the Coretime chain, upon which the transfer dispatchable in the Broker pallet assigns a new owner to that specific region

  • Partitioning regions - using the partition dispatchable, regions may be partitioned into two non-overlapping subregions within the same core. A partition involves specifying a pivot, wherein the new region will be defined and available for use

  • Interlacing regions - using the interlace dispatchable, interlacing regions allows a core to have alternative-compute strategies. Whereas partitioned regions are mutually exclusive, interlaced regions overlap because multiple tasks may utilize a single core in an alternating manner

Coretime Availability

When bulk coretime is obtained, block production is not immediately available. It becomes available to produce blocks for a task in the next Coretime cycle. To view the status of the current or next Coretime cycle, go to the Subscan Coretime Dashboard.
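
As a rough sketch of how these dispatchables are called from a client, the following uses the Polkadot.js API against a Coretime chain connection. All values (the RegionId fields, task ID, pivot, and owner address) are illustrative assumptions, not canonical parameters:

// Assumes `api` is an ApiPromise connected to the Coretime chain.
// The RegionId fields below are illustrative values only.
const regionId = {
  begin: 100000,                  // timeslice at which the region starts
  core: 10,                       // core index
  mask: '0xffffffffffffffffffff', // 80-bit core mask (here: the full core)
};

// Assign the region to task 2000 (e.g., a parachain ID)
const assign = api.tx.broker.assign(regionId, 2000, 'Final');

// Transfer ownership of the region to a new account
const transfer = api.tx.broker.transfer(regionId, 'INSERT_NEW_OWNER_ADDRESS');

// Partition the region into two consecutive subregions at a pivot timeslice
const partition = api.tx.broker.partition(regionId, 337);

// Interlace the region so two regions alternate on the same core
const interlace = api.tx.broker.interlace(regionId, '0xaaaaaaaaaaaaaaaaaaaa');

// Each extrinsic is then signed and submitted as usual, e.g.,
// assign.signAndSend(signer)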

For more information regarding these mechanisms, visit the coretime page on the Polkadot Wiki: Introduction to Agile Coretime.

"},{"location":"polkadot-protocol/architecture/system-chains/coretime/#on-demand-coretime","title":"On Demand Coretime","text":"

At the time of writing, on-demand coretime is deployed on the relay chain and will eventually be deployed to the Coretime chain. On-demand coretime allows parachains (previously known as parathreads) to utilize available cores per block.

The Coretime chain also handles coretime sales, details of which can be found on the Polkadot Wiki: Agile Coretime: Coretime Sales.

"},{"location":"polkadot-protocol/architecture/system-chains/coretime/#where-to-go-next","title":"Where to Go Next","text":"
  • Learn about Agile Coretime on the Polkadot Wiki
"},{"location":"polkadot-protocol/architecture/system-chains/overview/","title":"Overview of Polkadot's System Chains","text":""},{"location":"polkadot-protocol/architecture/system-chains/overview/#introduction","title":"Introduction","text":"

Polkadot's relay chain is designed to secure parachains and facilitate seamless inter-chain communication. However, resource-intensive tasks like governance, asset management, and bridging are more efficiently handled by system parachains. These specialized chains offload functionality from the relay chain, leveraging Polkadot's parallel execution model to improve performance and scalability. By distributing key functionalities across system parachains, Polkadot can maximize its relay chain's blockspace for its core purpose of securing and validating parachains.

This guide will explore how system parachains operate within Polkadot and Kusama, detailing their critical roles in network governance, asset management, and bridging. You'll learn about the currently deployed system parachains, their unique functions, and how they enhance Polkadot's decentralized ecosystem.

"},{"location":"polkadot-protocol/architecture/system-chains/overview/#system-chains","title":"System Chains","text":"

System parachains contain core Polkadot protocol features, but in parachains rather than the relay chain. Execution cores for system chains are allocated via network governance rather than purchasing coretime on a marketplace.

System parachains defer to on-chain governance to manage their upgrades and other sensitive actions as they do not have native tokens or governance systems separate from DOT or KSM. It is not uncommon to see a system parachain implemented specifically to manage network governance.

Note

You may see system parachains called common good parachains in articles and discussions. This nomenclature caused confusion as the network evolved, so system parachains is preferred.

For more details on this evolution, review this parachains forum discussion.

"},{"location":"polkadot-protocol/architecture/system-chains/overview/#existing-system-chains","title":"Existing System Chains","text":"
---\ntitle: System Parachains at a Glance\n---\nflowchart TB\n    subgraph POLKADOT[\"Polkadot\"]\n        direction LR\n            PAH[\"Polkadot Asset Hub\"]\n            PCOL[\"Polkadot Collectives\"]\n            PBH[\"Polkadot Bridge Hub\"]\n            PPC[\"Polkadot People Chain\"]\n            PCC[\"Polkadot Coretime Chain\"]\n    end\n\n    subgraph KUSAMA[\"Kusama\"]\n        direction LR\n            KAH[\"Kusama Asset Hub\"]\n            KBH[\"Kusama Bridge Hub\"]\n            KPC[\"Kusama People Chain\"]\n            KCC[\"Kusama Coretime Chain\"]\n            E[\"Encointer\"]\n        end

All system parachains are on both Polkadot and Kusama with the following exceptions:

  • Collectives - only on Polkadot
  • Encointer - only on Kusama
"},{"location":"polkadot-protocol/architecture/system-chains/overview/#asset-hub","title":"Asset Hub","text":"

The Asset Hub is an asset portal for the entire network. It helps asset creators, such as reserve-backed stablecoin issuers, track the total issuance of an asset in the network, including amounts transferred to other parachains. It also serves as the hub where asset creators can perform on-chain operations, such as minting and burning, to manage their assets effectively.

This asset management logic is encoded directly in the runtime of the chain rather than in smart contracts. The efficiency of executing logic in a parachain allows for fees and deposits that are about 1/10th of what is required on the relay chain. These low fees mean that the Asset Hub is well suited for handling the frequent transactions required when managing balances, transfers, and on-chain assets.

The Asset Hub also supports non-fungible assets (NFTs) via the Uniques pallet and NFTs pallet. For more information about NFTs, see the Polkadot Wiki section on NFT Pallets.

"},{"location":"polkadot-protocol/architecture/system-chains/overview/#collectives","title":"Collectives","text":"

The Polkadot Collectives parachain was added in Referendum 81 and exists on Polkadot but not on Kusama. The Collectives chain hosts on-chain collectives that serve the Polkadot network, including the following:

  • Polkadot Alliance - provides a set of ethics and standards for the community to follow. Includes an on-chain means to call out bad actors
  • Polkadot Technical Fellowship - a rules-based social organization to support and incentivize highly-skilled developers to contribute to the technical stability, security, and progress of the network

These on-chain collectives will play essential roles in the future of network stewardship and decentralized governance. Networks can use a bridge hub to help them act as collectives and express their legislative voices as single opinions within other networks.

"},{"location":"polkadot-protocol/architecture/system-chains/overview/#bridge-hub","title":"Bridge Hub","text":"

Before parachains, the only way to design a bridge was to put the logic onto the relay chain. Since both networks now support parachains and the isolation they provide, each network can have a parachain dedicated to bridges.

The Bridge Hub system parachain runs on each relay chain and is responsible for facilitating bridges to the wider Web3 space. It contains the required bridge pallets in its runtime, which enable trustless bridging with other blockchain networks like Polkadot, Kusama, and Ethereum. The Bridge Hub uses the native token of the relay chain.

See the Bridge Hub documentation for additional information.

"},{"location":"polkadot-protocol/architecture/system-chains/overview/#people-chain","title":"People Chain","text":"

The People Chain provides a naming system that allows users to manage and verify their account identity.

"},{"location":"polkadot-protocol/architecture/system-chains/overview/#coretime-chain","title":"Coretime Chain","text":"

The Coretime system chain lets users buy coretime to access Polkadot's computation. Coretime marketplaces run on top of the Coretime chain.

Visit Introduction to Agile Coretime in the Polkadot Wiki for more information.

"},{"location":"polkadot-protocol/architecture/system-chains/overview/#encointer","title":"Encointer","text":"

Kusama does not use the Collectives system chain; instead, it relies on the Encointer system chain, which provides Sybil resistance as a service to the entire Kusama ecosystem. Encointer is a blockchain platform for self-sovereign ID and a global universal basic income (UBI). The Encointer protocol uses a novel Proof of Personhood (PoP) system to create unique identities and resist Sybil attacks. PoP is based on the notion that a person can only be in one place at any given time. Encointer offers a framework that allows any group of real people to create, distribute, and use their own digital community tokens.

Participants are requested to attend physical key-signing ceremonies with small groups of random people at randomized locations. These local meetings are part of one global signing ceremony occurring at the same time. Participants use the Encointer wallet app to participate in these ceremonies and manage local community currencies.

Referendums marking key Encointer adoption milestones include:

  • Referendum 158 - Register Encointer As a Common Good Chain - registered Encointer as the second system parachain on Kusama's network
  • Referendum 187 - Encointer Runtime Upgrade to Full Functionality - introduced a runtime upgrade bringing governance and full functionality for communities to use the protocol

Tip

To learn more about Encointer, check out the official Encointer book or watch an Encointer ceremony in action.

"},{"location":"polkadot-protocol/basics/accounts/","title":"Accounts","text":""},{"location":"polkadot-protocol/basics/accounts/#introduction","title":"Introduction","text":"

In the Polkadot SDK, accounts are essential for managing identity, transactions, and governance on the network. Understanding these components is critical for seamless development and operation on the network, whether you're building or interacting with Polkadot-based chains.

This page will guide you through the essential aspects of accounts, including their data structure, balance types, reference counters, and address formats. You\u2019ll learn how accounts are managed within the runtime, how balances are categorized, and how addresses are encoded and validated.

"},{"location":"polkadot-protocol/basics/accounts/#account-data-structure","title":"Account Data Structure","text":"

Accounts are foundational to any blockchain, and the Polkadot SDK provides a flexible management system. This section explains how the Polkadot SDK defines accounts and manages their lifecycle through data structures within the runtime.

"},{"location":"polkadot-protocol/basics/accounts/#account","title":"Account","text":"

The Account data type is a storage map within the System pallet that links an account ID to its corresponding data. This structure is fundamental for mapping account-related information within the chain.

The code snippet below shows how accounts are defined:

 /// The full account information for a particular account ID\n #[pallet::storage]\n #[pallet::getter(fn account)]\n pub type Account<T: Config> = StorageMap<\n   _,\n   Blake2_128Concat,\n   T::AccountId,\n   AccountInfo<T::Nonce, T::AccountData>,\n   ValueQuery,\n >;\n

The preceding code block defines a storage map named Account. The StorageMap is a type of on-chain storage that maps keys to values. In the Account map, the key is an account ID, and the value is the account's information. Here, T represents the generic parameter for the runtime configuration, which is defined by the pallet's configuration trait (Config).

The StorageMap consists of the following parameters:

  • _ - used in macro expansion and acts as a placeholder for the storage prefix type. Tells the macro to insert the default prefix during expansion
  • Blake2_128Concat - the hashing function applied to keys in the storage map
  • T::AccountId - represents the key type, which corresponds to the account\u2019s unique ID
  • AccountInfo<T::Nonce, T::AccountData> - the value type stored in the map. For each account ID, the map stores an AccountInfo struct containing:
    • T::Nonce - a nonce for the account, which is incremented with each transaction to ensure transaction uniqueness
    • T::AccountData - custom account data defined by the runtime configuration, which could include balances, locked funds, or other relevant information
  • ValueQuery - defines how queries to the storage map behave when no value is found; returns a default value instead of None
Additional information

For a detailed explanation of storage maps, refer to the StorageMap Rust docs.

"},{"location":"polkadot-protocol/basics/accounts/#account-info","title":"Account Info","text":"

The AccountInfo structure is another key element within the System pallet, providing more granular details about each account's state. This structure tracks vital data, such as the number of transactions and the account\u2019s relationships with other modules.

#[derive(Clone, Eq, PartialEq, Default, RuntimeDebug, Encode, Decode)]\npub struct AccountInfo<Nonce, AccountData> {\n  pub nonce: Nonce,\n  pub consumers: RefCount,\n  pub providers: RefCount,\n  pub sufficients: RefCount,\n  pub data: AccountData,\n}\n

The AccountInfo structure includes the following components:

  • nonce - tracks the number of transactions initiated by the account, which ensures transaction uniqueness and prevents replay attacks
  • consumers - counts how many other modules or pallets rely on this account\u2019s existence. The account cannot be removed from the chain (reaped) until this count reaches zero
  • providers - tracks how many modules permit this account\u2019s existence. An account can only be reaped once both providers and sufficients are zero
  • sufficients - represents the number of modules that allow the account to exist for internal purposes, independent of any other modules
  • AccountData - a flexible data structure that can be customized in the runtime configuration, usually containing balances or other user-specific data

This structure helps manage an account's state and prevents its premature removal while it is still referenced by other on-chain data or modules. The AccountInfo structure can vary as long as it satisfies the trait bounds defined by the AccountData associated type in the frame-system::pallet::Config trait.
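
For instance, this storage map (including the reference counters described in the next section) can be read with the Polkadot.js API. A minimal sketch, assuming a public RPC endpoint and a placeholder address:

const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://rpc.polkadot.io'),
  });

  // Read the System pallet's Account storage map for one account
  const info = await api.query.system.account('INSERT_ADDRESS');

  console.log('nonce:      ', info.nonce.toNumber());
  console.log('consumers:  ', info.consumers.toNumber());
  console.log('providers:  ', info.providers.toNumber());
  console.log('sufficients:', info.sufficients.toNumber());
  console.log('data:       ', info.data.toHuman()); // free, reserved, frozen
}

main().catch(console.error);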

"},{"location":"polkadot-protocol/basics/accounts/#account-reference-counters","title":"Account Reference Counters","text":"

Polkadot SDK uses reference counters to track an account\u2019s dependencies across different runtime modules. These counters ensure that accounts remain active while data is associated with them.

The reference counters include:

  • consumers - prevents account removal while other pallets still rely on the account
  • providers - ensures an account is active before other pallets store data related to it
  • sufficients - indicates the account\u2019s independence, ensuring it can exist even without a native token balance, such as when holding sufficient alternative assets
"},{"location":"polkadot-protocol/basics/accounts/#providers-reference-counters","title":"Providers Reference Counters","text":"

The providers counter ensures that an account is ready to be depended upon by other runtime modules. For example, it is incremented when an account has a balance above the existential deposit, which marks the account as active.

The system requires this reference counter to be greater than zero for the consumers counter to be incremented, ensuring the account is stable before any dependencies are added.

"},{"location":"polkadot-protocol/basics/accounts/#consumers-reference-counters","title":"Consumers Reference Counters","text":"

The consumers counter ensures that the account cannot be reaped until all references to it across the runtime have been removed. This check prevents the accidental deletion of accounts that still have active on-chain data.

It is the user\u2019s responsibility to clear out any data from other runtime modules if they wish to remove their account and reclaim their existential deposit.

"},{"location":"polkadot-protocol/basics/accounts/#sufficients-reference-counter","title":"Sufficients Reference Counter","text":"

The sufficients counter tracks accounts that can exist independently without relying on a native account balance. This is useful for accounts holding other types of assets, like tokens, without needing a minimum balance in the native token.

For instance, the Assets pallet may increment this counter for an account holding sufficient tokens.

"},{"location":"polkadot-protocol/basics/accounts/#account-deactivation","title":"Account Deactivation","text":"

In Polkadot SDK-based chains, an account is deactivated when its reference counters (providers, consumers, and sufficients) reach zero. These counters ensure the account remains active as long as other runtime modules or pallets reference it.

When all dependencies are cleared and the counters drop to zero, the account becomes deactivated and may be removed from the chain (reaped). This is particularly important in Polkadot SDK-based blockchains, where accounts with balances below the existential deposit threshold are pruned from storage to conserve state resources.

Each pallet that references an account has cleanup functions that decrement these counters when the pallet no longer depends on the account. Once these counters reach zero, the account is marked for deactivation.

"},{"location":"polkadot-protocol/basics/accounts/#updating-counters","title":"Updating Counters","text":"

The Polkadot SDK provides runtime developers with various methods to manage account lifecycle events, such as deactivation or incrementing reference counters. These methods ensure that accounts cannot be reaped while still in use.

The following helper functions manage these counters:

  • inc_consumers() - increments the consumer reference counter for an account, signaling that another pallet depends on it
  • dec_consumers() - decrements the consumer reference counter, signaling that a pallet no longer relies on the account
  • inc_providers() - increments the provider reference counter, ensuring the account remains active
  • dec_providers() - decrements the provider reference counter, allowing for account deactivation when no longer in use
  • inc_sufficients() - increments the sufficient reference counter for accounts that hold sufficient assets
  • dec_sufficients() - decrements the sufficient reference counter

To ensure proper account cleanup and lifecycle management, a corresponding decrement should be made for each increment action.

The System pallet offers three query functions to assist developers in tracking account states:

  • can_inc_consumer() - checks if the account can safely increment the consumer reference
  • can_dec_provider() - ensures that no consumers exist before allowing the decrement of the provider counter
  • is_provider_required() - verifies whether the account still has any active consumer references

This modular and flexible system of reference counters tightly controls the lifecycle of accounts in Polkadot SDK-based blockchains, preventing the accidental removal or retention of unneeded accounts. You can refer to the System pallet Rust docs for more details.

"},{"location":"polkadot-protocol/basics/accounts/#account-balance-types","title":"Account Balance Types","text":"

In the Polkadot ecosystem, account balances are categorized into different types based on how the funds are utilized and their availability. These balance types determine the actions that can be performed, such as transferring tokens, paying transaction fees, or participating in governance activities. Understanding these balance types helps developers manage user accounts and implement balance-dependent logic.

A more efficient distribution of account balance types is in development

Soon, pallets in the Polkadot SDK will implement the fungible trait (see the tracking issue for more details). This update will enable more efficient use of account balances, allowing the free balance to be utilized for on-chain activities such as setting proxies and managing identities.

"},{"location":"polkadot-protocol/basics/accounts/#balance-types","title":"Balance Types","text":"

The five main balance types are:

  • Free balance - represents the total tokens available to the account for any on-chain activity, including staking, governance, and voting. However, it may not be fully spendable or transferrable if portions of it are locked or reserved
  • Locked balance - portions of the free balance that cannot be spent or transferred because they are tied up in specific activities like staking, vesting, or participating in governance. While the tokens remain part of the free balance, they are non-transferable for the duration of the lock
  • Reserved balance - funds locked by specific system actions, such as setting up an identity, creating proxies, or submitting deposits for governance proposals. These tokens are not part of the free balance and cannot be spent unless they are unreserved
  • Spendable balance - the portion of the free balance that is available for immediate spending or transfers. It is calculated by subtracting the maximum of locked or reserved amounts from the free balance, ensuring that existential deposit limits are met
  • Untouchable balance - funds that cannot be directly spent or transferred but may still be utilized for on-chain activities, such as governance participation or staking. These tokens are typically tied to certain actions or locked for a specific period

The spendable balance is calculated as follows:

spendable = free - max(locked - reserved, ED)\n

Here, free, locked, and reserved are defined above. The ED represents the existential deposit, the minimum balance required to keep an account active and prevent it from being reaped. You may find you can't see all balance types when looking at your account via a wallet. Wallet providers often display only spendable, locked, and reserved balances.
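
A small worked example of the formula (values in whole tokens for readability; on-chain amounts are integers in plancks):

// spendable = free - max(locked - reserved, ED)
function spendable(free, locked, reserved, ed) {
  return free - Math.max(locked - reserved, ed);
}

// With free = 100, locked = 80, reserved = 10, and ED = 1:
// spendable = 100 - max(80 - 10, 1) = 100 - 70 = 30
console.log(spendable(100, 80, 10, 1)); // 30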

"},{"location":"polkadot-protocol/basics/accounts/#locks","title":"Locks","text":"

Locks are applied to an account's free balance, preventing that portion from being spent or transferred. Locks are automatically placed when an account participates in specific on-chain activities, such as staking or governance. Although multiple locks may be applied simultaneously, they do not stack. Instead, the largest lock determines the total amount of locked tokens.

Locks follow these basic rules:

  • If different locks apply to varying amounts, the largest lock amount takes precedence
  • If multiple locks apply to the same amount, the lock with the longest duration governs when the balance can be unlocked
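
Because locks do not stack, the effective locked amount can be computed as the single largest lock. A sketch using the Polkadot.js API, assuming api is an already-connected ApiPromise:

// Assumes `api` is an already-connected ApiPromise instance.
async function largestLock(address) {
  // Each lock entry carries an id (e.g., from staking or voting),
  // an amount, and the reasons the lock applies
  const locks = await api.query.balances.locks(address);

  // Locks do not stack: the effective locked balance is the largest lock
  return locks.reduce(
    (max, lock) => (lock.amount.toBigInt() > max ? lock.amount.toBigInt() : max),
    0n
  );
}
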
"},{"location":"polkadot-protocol/basics/accounts/#locks-example","title":"Locks Example","text":"

Consider an example where an account has 80 DOT locked for both staking and governance purposes like so:

  • 80 DOT is staked with a 28-day lock period
  • 24 DOT is locked for governance with a 1x conviction and a 7-day lock period
  • 4 DOT is locked for governance with a 6x conviction and a 224-day lock period

In this case, the total locked amount is 80 DOT because only the largest lock (80 DOT from staking) governs the locked balance. These 80 DOT will be released at different times based on the lock durations. In this example, the 24 DOT locked for governance will be released first since the shortest lock period is seven days. The 80 DOT stake with a 28-day lock period is released next. Now, all that remains locked is the 4 DOT for governance. After 224 days, all 80 DOT (minus the existential deposit) will be free and transferrable.

"},{"location":"polkadot-protocol/basics/accounts/#edge-cases-for-locks","title":"Edge Cases for Locks","text":"

In scenarios where multiple convictions and lock periods are active, the lock duration and amount are determined by the longest period and largest amount. For example, if you delegate with different convictions and attempt to undelegate during an active lock period, the lock may be extended for the full amount of tokens. For a detailed discussion on edge case lock behavior, see this Stack Exchange post.

"},{"location":"polkadot-protocol/basics/accounts/#balance-types-on-polkadotjs","title":"Balance Types on Polkadot.js","text":"

Polkadot.js provides a user-friendly interface for managing and visualizing various account balances on Polkadot and Kusama networks. When interacting with Polkadot.js, you will encounter multiple balance types that are critical for understanding how your funds are distributed and restricted. This section explains how different balances are displayed in the Polkadot.js UI and what each type represents.

The most common balance types displayed on Polkadot.js are:

  • Total balance - the total number of tokens available in the account. This includes all tokens, whether they are transferable, locked, reserved, or vested. However, the total balance does not always reflect what can be spent immediately. In this example, the total balance is 0.6274 KSM

  • Transferrable balance - shows how many tokens are immediately available for transfer. It is calculated by subtracting the locked and reserved balances from the total balance. For example, if an account has a total balance of 0.6274 KSM and a transferrable balance of 0.0106 KSM, only the latter amount can be sent or spent freely

  • Vested balance - tokens that are allocated to the account but released according to a specific schedule. Vested tokens remain locked and cannot be transferred until fully vested. For example, an account with a vested balance of 0.2500 KSM means that this amount is owned but not yet transferable

  • Locked balance - tokens that are temporarily restricted from being transferred or spent. These locks typically result from participating in staking, governance, or vested transfers. In Polkadot.js, locked balances do not stack\u2014only the largest lock is applied. For instance, if an account has 0.5500 KSM locked for governance and staking, the locked balance would display 0.5500 KSM, not the sum of all locked amounts

  • Reserved balance - refers to tokens locked for specific on-chain actions, such as setting an identity, creating a proxy, or making governance deposits. Reserved tokens are not part of the free balance, but can be freed by performing certain actions. For example, removing an identity would unreserve those funds

  • Bonded balance - the tokens locked for staking purposes. Bonded tokens are not transferrable until they are unbonded after the unbonding period

  • Redeemable balance - the number of tokens that have completed the unbonding period and are ready to be unlocked and transferred again. For example, if an account has a redeemable balance of 0.1000 KSM, those tokens are now available for spending

  • Democracy balance - reflects the number of tokens locked for governance activities, such as voting on referenda. These tokens are locked for the duration of the governance action and are only released after the lock period ends

By understanding these balance types and their implications, developers and users can better manage their funds and engage with on-chain activities more effectively.

"},{"location":"polkadot-protocol/basics/accounts/#address-formats","title":"Address Formats","text":"

The SS58 address format is a core component of the Polkadot SDK that enables accounts to be uniquely identified across Polkadot-based networks. This format is a modified version of Bitcoin's Base58Check encoding, specifically designed to accommodate the multi-chain nature of the Polkadot ecosystem. SS58 encoding allows each chain to define its own set of addresses while maintaining compatibility and checksum validation for security.

"},{"location":"polkadot-protocol/basics/accounts/#basic-format","title":"Basic Format","text":"

SS58 addresses consist of three main components:

base58encode(concat(<address-type>, <address>, <checksum>))\n
  • Address type - a byte or set of bytes that define the network (or chain) for which the address is intended. This ensures that addresses are unique across different Polkadot SDK-based chains
  • Address - the public key of the account encoded as bytes
  • Checksum - a hash-based checksum which ensures that addresses are valid and unaltered. The checksum is derived from the concatenated address type and address components, ensuring integrity

The encoding process transforms the concatenated components into a Base58 string, providing a compact and human-readable format that avoids easily confused characters (e.g., zero '0', capital 'O', lowercase 'l'). This encoding function (encode) is implemented exactly as defined in Bitcoin and IPFS specifications, using the same alphabet as both implementations.

Additional information

Refer to Ss58Codec for more details on the SS58 address format implementation.

"},{"location":"polkadot-protocol/basics/accounts/#address-type","title":"Address Type","text":"

The address type defines how an address is interpreted and to which network it belongs. Polkadot SDK uses different prefixes to distinguish between various chains and address formats:

  • Address types 0-63 - simple addresses, commonly used for network identifiers
  • Address types 64-127 - full addresses that support a wider range of network identifiers
  • Address types 128-255 - reserved for future address format extensions

For example, Polkadot\u2019s main network uses an address type of 0, while Kusama uses 2. This ensures that addresses can be used without confusion between networks.

The address type is always encoded as part of the SS58 address, making it easy to quickly identify the network. Refer to the SS58 registry for the canonical listing of all address type identifiers and how they map to Polkadot SDK-based networks.
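
For example, the same public key encodes to a different SS58 address under each address type. A short sketch using the Polkadot.js keyring utilities (the public key is taken from the subkey example later on this page):

const { encodeAddress } = require('@polkadot/keyring');
const { hexToU8a } = require('@polkadot/util');

// Public key from the subkey example shown later on this page
const publicKey = hexToU8a(
  '0xd6a3105d6768e956e9e5d41050ac29843f98561410d3a47f9dd5b3b227ab8746'
);

console.log(encodeAddress(publicKey, 0));  // Polkadot (address type 0)
console.log(encodeAddress(publicKey, 2));  // Kusama (address type 2)
console.log(encodeAddress(publicKey, 42)); // generic Substrate (address type 42)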

"},{"location":"polkadot-protocol/basics/accounts/#address-length","title":"Address Length","text":"

SS58 addresses can have different lengths depending on the specific format. Address lengths range from as short as 3 to 35 bytes, depending on the complexity of the address and network requirements. This flexibility allows SS58 addresses to adapt to different chains while providing a secure encoding mechanism.

| Total (bytes) | Type | Raw account | Checksum |
|---------------|------|-------------|----------|
| 3             | 1    | 1           | 1        |
| 4             | 1    | 2           | 1        |
| 5             | 1    | 2           | 2        |
| 6             | 1    | 4           | 1        |
| 7             | 1    | 4           | 2        |
| 8             | 1    | 4           | 3        |
| 9             | 1    | 4           | 4        |
| 10            | 1    | 8           | 1        |
| 11            | 1    | 8           | 2        |
| 12            | 1    | 8           | 3        |
| 13            | 1    | 8           | 4        |
| 14            | 1    | 8           | 5        |
| 15            | 1    | 8           | 6        |
| 16            | 1    | 8           | 7        |
| 17            | 1    | 8           | 8        |
| 35            | 1    | 32          | 2        |

SS58 addresses also support different payload sizes, allowing a flexible range of account identifiers.

"},{"location":"polkadot-protocol/basics/accounts/#checksum-types","title":"Checksum Types","text":"

A checksum is applied to validate SS58 addresses. Polkadot SDK uses a Blake2b-512 hash function to calculate the checksum, which is appended to the address before encoding. The checksum length can vary depending on the address format (e.g., 1-byte, 2-byte, or longer), providing varying levels of validation strength.

The checksum ensures that an address is not modified or corrupted, adding an extra layer of security for account management.

"},{"location":"polkadot-protocol/basics/accounts/#validating-addresses","title":"Validating Addresses","text":"

SS58 addresses can be validated using the subkey command-line interface or the Polkadot.js API. These tools help ensure an address is correctly formatted and valid for the intended network. The following sections will provide an overview of how validation works with these tools.

"},{"location":"polkadot-protocol/basics/accounts/#using-subkey","title":"Using Subkey","text":"

Subkey is a CLI tool provided by Polkadot SDK for generating and managing keys. It can inspect and validate SS58 addresses.

The inspect command gets a public key and an SS58 address from the provided secret URI. The basic syntax for the subkey inspect command is:

subkey inspect [flags] [options] uri\n

For the uri command-line argument, you can specify the secret seed phrase, a hex-encoded private key, or an SS58 address. If the input is a valid address, the subkey program displays the corresponding hex-encoded public key, account identifier, and SS58 addresses.

For example, to inspect the public keys derived from a secret seed phrase, you can run a command similar to the following:

subkey inspect \"caution juice atom organ advance problem want pledge someone senior holiday very\"\n

The command displays output similar to the following:

subkey inspect \"caution juice atom organ advance problem want pledge someone senior holiday very\" Secret phrase caution juice atom organ advance problem want pledge someone senior holiday very is account: Secret seed: 0xc8fa03532fb22ee1f7f6908b9c02b4e72483f0dbd66e4cd456b8f34c6230b849 Public key (hex): 0xd6a3105d6768e956e9e5d41050ac29843f98561410d3a47f9dd5b3b227ab8746 Public key (SS58): 5Gv8YYFu8H1btvmrJy9FjjAWfb99wrhV3uhPFoNEr918utyR Account ID: 0xd6a3105d6768e956e9e5d41050ac29843f98561410d3a47f9dd5b3b227ab8746 SS58 Address: 5Gv8YYFu8H1btvmrJy9FjjAWfb99wrhV3uhPFoNEr918utyR

The subkey program assumes an address is based on a public/private key pair. If you inspect an address, the command returns the 32-byte account identifier.

However, not all addresses in Polkadot SDK-based networks are based on keys.

Depending on the command-line options you specify and the input you provided, the command output might also display the network for which the address has been encoded. For example:

subkey inspect \"12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU\"\n

The command displays output similar to the following:

subkey inspect \"12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU\" Public Key URI 12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU is account: Network ID/Version: polkadot Public key (hex): 0x46ebddef8cd9bb167dc30878d7113b7e168e6f0646beffd77d69d39bad76b47a Account ID: 0x46ebddef8cd9bb167dc30878d7113b7e168e6f0646beffd77d69d39bad76b47a Public key (SS58): 12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU SS58 Address: 12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU

"},{"location":"polkadot-protocol/basics/accounts/#using-polkadotjs-api","title":"Using Polkadot.js API","text":"

To verify an address in JavaScript or TypeScript projects, you can use the functions built into the Polkadot.js API. For example:

// Import Polkadot.js API dependencies\nconst { decodeAddress, encodeAddress } = require('@polkadot/keyring');\nconst { hexToU8a, isHex } = require('@polkadot/util');\n\n// Specify an address to test.\nconst address = 'INSERT_ADDRESS_TO_TEST';\n\n// Check address\nconst isValidSubstrateAddress = () => {\n  try {\n    encodeAddress(isHex(address) ? hexToU8a(address) : decodeAddress(address));\n\n    return true;\n  } catch (error) {\n    return false;\n  }\n};\n\n// Query result\nconst isValid = isValidSubstrateAddress();\nconsole.log(isValid);\n

If the function returns true, the specified address is a valid address.

"},{"location":"polkadot-protocol/basics/accounts/#other-ss58-implementations","title":"Other SS58 Implementations","text":"

Support for encoding and decoding Polkadot SDK SS58 addresses has been implemented in several other languages and libraries.

  • Crystal - wyhaines/base58.cr
  • Go - itering/subscan-plugin
  • Python - polkascan/py-scale-codec
  • TypeScript - subsquid/squid-sdk
"},{"location":"polkadot-protocol/basics/chain-data/","title":"Chain Data","text":""},{"location":"polkadot-protocol/basics/chain-data/#introduction","title":"Introduction","text":"

Understanding and leveraging on-chain data is a fundamental aspect of blockchain development. Whether you're building frontend applications or backend systems, accessing and decoding runtime metadata is vital to interacting with the blockchain. This guide introduces you to the tools and processes for generating and retrieving metadata, explains its role in application development, and outlines the additional APIs available for interacting with a Polkadot node. By mastering these components, you can ensure seamless communication between your applications and the blockchain.

"},{"location":"polkadot-protocol/basics/chain-data/#application-development","title":"Application Development","text":"

You might not be directly involved in building frontend applications as a blockchain developer. However, most applications that run on a blockchain require some form of frontend or user-facing client to enable users or other programs to access and modify the data that the blockchain stores. For example, you might develop a browser-based, mobile, or desktop application that allows users to submit transactions, post articles, view their assets, or track previous activity. The backend for that application is configured in the runtime logic for your blockchain, but the frontend client makes the runtime features accessible to your users.

For your custom chain to be useful to others, you'll need to provide a client application that allows users to view, interact with, or update information that the blockchain keeps track of. In this article, you'll learn how to expose information about your runtime so that client applications can use it, see examples of the information exposed, and explore tools and libraries that use it.

"},{"location":"polkadot-protocol/basics/chain-data/#understand-metadata","title":"Understand Metadata","text":"

Polkadot SDK-based blockchain networks are designed to expose their runtime information, allowing developers to learn granular details regarding pallets, RPC calls, and runtime APIs. The metadata also exposes their related documentation. The chain's metadata is SCALE-encoded, allowing for the development of browser-based, mobile, or desktop applications to support the chain's runtime upgrades seamlessly. It is also possible to develop applications compatible with multiple Polkadot SDK-based chains simultaneously.

"},{"location":"polkadot-protocol/basics/chain-data/#expose-runtime-information-as-metadata","title":"Expose Runtime Information as Metadata","text":"

To interact with a node or the state of the blockchain, you need to know how to connect to the chain and access the exposed runtime features. This interaction involves a Remote Procedure Call (RPC) through a node endpoint address, commonly through a secure web socket connection.

An application developer typically needs to know the contents of the runtime logic, including the following details:

  • Version of the runtime the application is connecting to
  • Supported APIs
  • Implemented pallets
  • Defined functions and corresponding type signatures
  • Defined custom types
  • Exposed parameters users can set

As the Polkadot SDK is modular and provides a composable framework for building blockchains, there are limitless opportunities to customize the schema of properties. Each runtime can be configured with its properties, including function calls and types, which can be changed over time with runtime upgrades.

The Polkadot SDK enables you to generate the runtime metadata schema to capture information unique to a runtime. The metadata for a runtime describes the pallets in use and types defined for a specific runtime version. The metadata includes information about each pallet's storage items, functions, events, errors, and constants. The metadata also provides type definitions for any custom types included in the runtime.

Metadata provides a complete inventory of a chain's runtime. It is key to enabling client applications to interact with the node, parse responses, and correctly format message payloads sent back to that chain.

"},{"location":"polkadot-protocol/basics/chain-data/#generate-metadata","title":"Generate Metadata","text":"

To efficiently use the blockchain's networking resources and minimize the data transmitted over the network, the metadata schema is encoded using the Parity SCALE Codec. This encoding is done automatically through the scale-info crate.

At a high level, generating the metadata involves the following steps:

  1. The pallets in the runtime logic expose callable functions, types, parameters, and documentation that need to be encoded in the metadata
  2. The scale-info crate collects type information for the pallets in the runtime, builds a registry of the pallets that exist in a particular runtime, and the relevant types for each pallet in the registry. The type information is detailed enough to enable encoding and decoding for every type
  3. The frame-metadata crate describes the structure of the runtime based on the registry provided by the scale-info crate
  4. Nodes provide the RPC method state_getMetadata to return a complete description of all the types in the current runtime as a hex-encoded vector of SCALE-encoded bytes
"},{"location":"polkadot-protocol/basics/chain-data/#retrieve-runtime-metadata","title":"Retrieve Runtime Metadata","text":"

The type information provided by the metadata enables applications to communicate with nodes using different runtime versions and across chains that expose different calls, events, types, and storage items. The metadata also allows libraries to generate a substantial portion of the code needed to communicate with a given node, enabling libraries like subxt to generate frontend interfaces that are specific to a target chain.

"},{"location":"polkadot-protocol/basics/chain-data/#use-polkadotjs","title":"Use Polkadot.js","text":"

Visit the Polkadot.js Portal and select the Developer dropdown in the top banner. Select RPC Calls to make the call to request metadata. Follow these steps to make the RPC call:

  1. Select state as the endpoint to call
  2. Select getMetadata(at) as the method to call
  3. Click Submit RPC call to submit the call and return the metadata in JSON format
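
The same request can be made programmatically. A minimal sketch with the Polkadot.js API, assuming a public Polkadot endpoint:

const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://rpc.polkadot.io'),
  });

  // Equivalent to the portal's state > getMetadata(at) call
  const metadata = await api.rpc.state.getMetadata();

  console.log('Metadata version:', metadata.version);
  console.log(metadata.asLatest.pallets.map((p) => p.name.toString()));
}

main().catch(console.error);
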
"},{"location":"polkadot-protocol/basics/chain-data/#use-curl","title":"Use Curl","text":"

You can fetch the metadata for the network by calling the node's RPC endpoint. This request returns the metadata in bytes rather than human-readable JSON:

curl -H \"Content-Type: application/json\" \\\n-d '{\"id\":1, \"jsonrpc\":\"2.0\", \"method\": \"state_getMetadata\"}' \\\nhttps://rpc.polkadot.io\n
"},{"location":"polkadot-protocol/basics/chain-data/#use-subxt","title":"Use Subxt","text":"

subxt can also be used to fetch the metadata of a chain in a human-readable JSON format:

subxt metadata --url wss://rpc.polkadot.io --format json > spec.json\n

Another option is to use the subxt explorer web UI.

"},{"location":"polkadot-protocol/basics/chain-data/#client-applications-and-metadata","title":"Client Applications and Metadata","text":"

The metadata exposes the expected way to decode each type, meaning applications can send, retrieve, and process application information without manual encoding and decoding. Client applications must use the SCALE codec library to encode and decode RPC payloads to use the metadata. Client applications use the metadata to interact with the node, parse responses, and format message payloads sent to the node.

"},{"location":"polkadot-protocol/basics/chain-data/#metadata-format","title":"Metadata Format","text":"

Although the SCALE-encoded bytes can be decoded using the frame-metadata and parity-scale-codec libraries, there are other tools, such as subxt and the Polkadot-JS API, that can convert the raw data to human-readable JSON format.

The types and type definitions included in the metadata returned by the state_getMetadata RPC call depend on the runtime's metadata version.

In general, the metadata includes the following information:

  • A constant identifying the file as containing metadata
  • The version of the metadata format used in the runtime
  • Type definitions for all types used in the runtime and generated by the scale-info crate
  • Pallet information for the pallets included in the runtime in the order that they are defined in the construct_runtime macro

Metadata formats may vary

Depending on the frontend library used (such as the Polkadot API), the metadata may be formatted differently from the raw format shown.

The following example illustrates a condensed and annotated section of metadata decoded and converted to JSON:

[\n    1635018093,\n    {\n        \"V14\": {\n            \"types\": {\n                \"types\": [{}]\n            },\n            \"pallets\": [{}],\n            \"extrinsic\": {\n                \"ty\": 126,\n                \"version\": 4,\n                \"signed_extensions\": [{}]\n            },\n            \"ty\": 141\n        }\n    }\n]\n

The constant 1635018093 is a magic number that identifies the file as a metadata file. The rest of the metadata is divided into the types, pallets, and extrinsic sections:

  • The types section contains an index of the types and information about each type's type signature
  • The pallets section contains information about each pallet in the runtime
  • The extrinsic section describes the type identifier and transaction format version that the runtime uses

Different extrinsic versions can have varying formats, especially when considering signed transactions.

"},{"location":"polkadot-protocol/basics/chain-data/#pallets","title":"Pallets","text":"

The following is a condensed and annotated example of metadata for a single element in the pallets array (the sudo pallet):

{\n    \"name\": \"Sudo\",\n    \"storage\": {\n        \"prefix\": \"Sudo\",\n        \"entries\": [\n            {\n                \"name\": \"Key\",\n                \"modifier\": \"Optional\",\n                \"ty\": {\n                    \"Plain\": 0\n                },\n                \"default\": [0],\n                \"docs\": [\"The `AccountId` of the sudo key.\"]\n            }\n        ]\n    },\n    \"calls\": {\n        \"ty\": 117\n    },\n    \"event\": {\n        \"ty\": 42\n    },\n    \"constants\": [],\n    \"error\": {\n        \"ty\": 124\n    },\n    \"index\": 8\n}\n

Each element of the pallets array contains the name of the pallet it represents and information about its storage, calls, events, and errors. You can look up details about the definition of the calls, events, and errors by viewing the type index identifier. The type index identifier is the u32 integer used to access the type information for that item. For example, the type index identifier for calls in the Sudo pallet is 117. If you view information for that type identifier in the types section of the metadata, it provides information about the available calls, including the documentation for each call.

For example, the following is a condensed excerpt of the calls for the Sudo pallet:

{\n    \"id\": 117,\n    \"type\": {\n        \"path\": [\"pallet_sudo\", \"pallet\", \"Call\"],\n        \"params\": [\n            {\n                \"name\": \"T\",\n                \"type\": null\n            }\n        ],\n        \"def\": {\n            \"variant\": {\n                \"variants\": [\n                    {\n                        \"name\": \"sudo\",\n                        \"fields\": [\n                            {\n                                \"name\": \"call\",\n                                \"type\": 114,\n                                \"typeName\": \"Box<<T as Config>::RuntimeCall>\"\n                            }\n                        ],\n                        \"index\": 0,\n                        \"docs\": [\n                            \"Authenticates sudo key, dispatches a function call with `Root` origin\"\n                        ]\n                    },\n                    {\n                        \"name\": \"sudo_unchecked_weight\",\n                        \"fields\": [\n                            {\n                                \"name\": \"call\",\n                                \"type\": 114,\n                                \"typeName\": \"Box<<T as Config>::RuntimeCall>\"\n                            },\n                            {\n                                \"name\": \"weight\",\n                                \"type\": 8,\n                                \"typeName\": \"Weight\"\n                            }\n                        ],\n                        \"index\": 1,\n                        \"docs\": [\n                            \"Authenticates sudo key, dispatches a function call with `Root` origin\"\n                        ]\n                    },\n                    {\n                        \"name\": \"set_key\",\n                        \"fields\": [\n                            {\n                                \"name\": \"new\",\n                                \"type\": 103,\n                                \"typeName\": \"AccountIdLookupOf<T>\"\n                            }\n                        ],\n                        \"index\": 2,\n                        \"docs\": [\n                            \"Authenticates current sudo key, sets the given AccountId (`new`) as the new sudo\"\n                        ]\n                    },\n                    {\n                        \"name\": \"sudo_as\",\n                        \"fields\": [\n                            {\n                                \"name\": \"who\",\n                                \"type\": 103,\n                                \"typeName\": \"AccountIdLookupOf<T>\"\n                            },\n                            {\n                                \"name\": \"call\",\n                                \"type\": 114,\n                                \"typeName\": \"Box<<T as Config>::RuntimeCall>\"\n                            }\n                        ],\n                        \"index\": 3,\n                        \"docs\": [\n                            \"Authenticates sudo key, dispatches a function call with `Signed` origin from a given account\"\n                        ]\n                    }\n                ]\n            }\n        }\n    }\n}\n

For each field, you can access type information and metadata for the following:

  • Storage metadata - provides the information required to enable applications to get information for specific storage items
  • Call metadata - includes information about the runtime calls defined by the #[pallet] macro, including call names, arguments, and documentation
  • Event metadata - provides the metadata generated by the #[pallet::event] macro, including the name, arguments, and documentation for each pallet event
  • Constants metadata - provides metadata generated by the #[pallet::constant] macro, including the name, type, and hex-encoded value of the constant
  • Error metadata - provides metadata generated by the #[pallet::error] macro, including the name and documentation for each pallet error

Note

Type identifiers change from time to time, so you should avoid relying on specific type identifiers in your applications.

"},{"location":"polkadot-protocol/basics/chain-data/#extrinsic","title":"Extrinsic","text":"

The runtime generates extrinsic metadata, which provides useful information about the transaction format. When decoded, the metadata contains the transaction version and the list of signed extensions.

For example:

{\n    \"extrinsic\": {\n        \"ty\": 126,\n        \"version\": 4,\n        \"signed_extensions\": [\n            {\n                \"identifier\": \"CheckNonZeroSender\",\n                \"ty\": 132,\n                \"additional_signed\": 41\n            },\n            {\n                \"identifier\": \"CheckSpecVersion\",\n                \"ty\": 133,\n                \"additional_signed\": 4\n            },\n            {\n                \"identifier\": \"CheckTxVersion\",\n                \"ty\": 134,\n                \"additional_signed\": 4\n            },\n            {\n                \"identifier\": \"CheckGenesis\",\n                \"ty\": 135,\n                \"additional_signed\": 11\n            },\n            {\n                \"identifier\": \"CheckMortality\",\n                \"ty\": 136,\n                \"additional_signed\": 11\n            },\n            {\n                \"identifier\": \"CheckNonce\",\n                \"ty\": 138,\n                \"additional_signed\": 41\n            },\n            {\n                \"identifier\": \"CheckWeight\",\n                \"ty\": 139,\n                \"additional_signed\": 41\n            },\n            {\n                \"identifier\": \"ChargeTransactionPayment\",\n                \"ty\": 140,\n                \"additional_signed\": 41\n            }\n        ]\n    },\n    \"ty\": 141\n}\n

The type system is composite, meaning each type identifier contains a reference to a specific type or to another type identifier that provides information about the associated primitive types.

For example, you can encode the BitVec<Order, Store> type, but to decode it properly, you must know the types used for the Order and Store types. To find type information for Order and Store, you can use the path in the decoded JSON to locate their type identifiers.

"},{"location":"polkadot-protocol/basics/chain-data/#included-rpc-apis","title":"Included RPC APIs","text":"

A standard node comes with the following RPC APIs:

  • AuthorApiServer - make calls into a full node, including authoring extrinsics and verifying session keys
  • ChainApiServer - retrieve block header and finality information
  • OffchainApiServer - make RPC calls for off-chain workers
  • StateApiServer - query information about on-chain state such as runtime version, storage items, and proofs
  • SystemApiServer - retrieve information about network state, such as connected peers and node roles
"},{"location":"polkadot-protocol/basics/chain-data/#additional-resources","title":"Additional Resources","text":"

The following tools can help you locate and decode metadata:

  • Subxt Explorer
  • Metadata Portal \ud83c\udf17
  • De[code] Sub[strate]
"},{"location":"polkadot-protocol/basics/cryptography/","title":"Cryptography","text":""},{"location":"polkadot-protocol/basics/cryptography/#introduction","title":"Introduction","text":"

Cryptography forms the backbone of blockchain technology, providing the mathematical verifiability crucial for consensus systems, data integrity, and user security. While a deep understanding of the underlying mathematical processes isn't necessary for most blockchain developers, grasping the fundamental applications of cryptography is essential. This page provides a comprehensive overview of the cryptographic implementations used across Polkadot SDK-based chains and the broader blockchain ecosystem.

"},{"location":"polkadot-protocol/basics/cryptography/#hash-functions","title":"Hash Functions","text":"

Hash functions are fundamental to blockchain technology, creating a unique digital fingerprint for any piece of data, including simple text, images, or any other form of file. They map input data of any size to a fixed-size output (typically 32 bytes) using complex mathematical operations. Hashing is used to verify data integrity, create digital signatures, and provide a secure way to store passwords. Because inputs of arbitrary size map to a fixed number of possible outputs, collisions must exist (a consequence of the \"pigeonhole principle\"); a good hash function makes them computationally infeasible to find, which is why hashes can efficiently and verifiably identify data from large sets.

"},{"location":"polkadot-protocol/basics/cryptography/#key-properties-of-hash-functions","title":"Key Properties of Hash Functions","text":"
  • Deterministic - the same input always produces the same output
  • Quick computation - it's easy to calculate the hash value for any given input
  • Pre-image resistance - it's infeasible to generate the input data from its hash
  • Small changes in input yield large changes in output - known as the \"avalanche effect\"
  • Collision resistance - it's computationally infeasible to find two different inputs that produce the same hash
"},{"location":"polkadot-protocol/basics/cryptography/#blake2","title":"Blake2","text":"

The Polkadot SDK utilizes Blake2, a state-of-the-art hashing method that offers:

  • Equal or greater security compared to SHA-2
  • Significantly faster performance than SHA-2 and other common hash functions

These properties make Blake2 ideal for blockchain systems, reducing sync times for new nodes and lowering the resources required for validation.
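
As a quick sketch (assuming the sp-core crate, which exposes the Blake2 hashers used throughout the Polkadot SDK), the following demonstrates both determinism and the avalanche effect:

use sp_core::hashing::blake2_256;\n\nfn main() {\n    let a = blake2_256(b\"polkadot\");\n    let b = blake2_256(b\"polkadoT\");\n    // Deterministic: the same input always yields the same 32-byte digest.\n    assert_eq!(a, blake2_256(b\"polkadot\"));\n    // Avalanche effect: one changed byte yields a completely different digest.\n    assert_ne!(a, b);\n    println!(\"{:x?}\", a);\n}\n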

Note

For detailed technical specifications on Blake2, refer to the official Blake2 paper.

"},{"location":"polkadot-protocol/basics/cryptography/#types-of-cryptography","title":"Types of Cryptography","text":"

There are two different ways that cryptographic algorithms are implemented: symmetric cryptography and asymmetric cryptography.

"},{"location":"polkadot-protocol/basics/cryptography/#symmetric-cryptography","title":"Symmetric Cryptography","text":"

Symmetric encryption is a branch of cryptography that isn't based on one-way functions, unlike asymmetric cryptography. It uses the same cryptographic key to encrypt plain text and decrypt the resulting ciphertext.

Symmetric cryptography is a type of encryption that has been used throughout history, such as the Enigma Cipher and the Caesar Cipher. It is still widely used today and can be found in Web2 and Web3 applications alike. There is only a single key, and a recipient must also hold it to decrypt the information.

"},{"location":"polkadot-protocol/basics/cryptography/#symmetric-advantages","title":"Advantages","text":"
  • Fast and efficient for large amounts of data
  • Requires less computational power
"},{"location":"polkadot-protocol/basics/cryptography/#symmetric-disadvantages","title":"Disadvantages","text":"
  • Key distribution can be challenging
  • Scalability issues in systems with many users
"},{"location":"polkadot-protocol/basics/cryptography/#asymmetric-cryptography","title":"Asymmetric Cryptography","text":"

Asymmetric encryption is a type of cryptography that uses two different keys, known as a keypair: a public key, used to encrypt plain text, and a private counterpart, used to decrypt the ciphertext.

The public key encrypts a fixed-length message that can only be decrypted with the recipient's private key and, sometimes, a set password. The public key can be used to cryptographically verify that the corresponding private key was used to create a piece of data without compromising the private key, such as with digital signatures. This has obvious implications for identity, ownership, and properties and is used in many different protocols across Web2 and Web3.

"},{"location":"polkadot-protocol/basics/cryptography/#asymmetric-advantages","title":"Advantages","text":"
  • Solves the key distribution problem
  • Enables digital signatures and secure key exchange
"},{"location":"polkadot-protocol/basics/cryptography/#asymmetric-disadvantages","title":"Disadvantages","text":"
  • Slower than symmetric encryption
  • Requires more computational resources
"},{"location":"polkadot-protocol/basics/cryptography/#trade-offs-and-compromises","title":"Trade-offs and Compromises","text":"

Symmetric cryptography is faster and requires fewer bits in the key to achieve the same level of security that asymmetric cryptography provides. However, it requires a shared secret to be established before communication can occur, which creates a distribution problem and a potential point of compromise. On the other hand, asymmetric cryptography doesn't require the secret to be shared ahead of time, allowing for far better end-user security.

Hybrid symmetric and asymmetric cryptography is often used to overcome the engineering drawbacks of asymmetric cryptography, which is slower and requires more bits in the key to achieve the same level of security: the asymmetric scheme encrypts a randomly generated symmetric key, and the comparatively lightweight symmetric cipher then does the \"heavy lifting\" of encrypting the message itself.

"},{"location":"polkadot-protocol/basics/cryptography/#digital-signatures","title":"Digital Signatures","text":"

Digital signatures are a way of verifying the authenticity of a document or message using asymmetric keypairs. They are used to ensure that a sender or signer's document or message hasn't been tampered with in transit, and for recipients to verify that the data is accurate and from the expected sender.

Creating a digital signature requires only a basic understanding of the underlying mathematics and cryptography. As a conceptual example: when signing a check, it is expected that it cannot be cashed multiple times. This isn't a feature of the signature system but rather of the check serialization system; the bank will check that the serial number on the check hasn't already been used. Digital signatures essentially combine these two concepts, allowing the signature itself to provide the serialization via a unique cryptographic fingerprint that cannot be reproduced.

Unlike pen-and-paper signatures, knowledge of a digital signature cannot be used to create other signatures. Digital signatures are often used in bureaucratic processes, as they are more secure than simply scanning in a signature and pasting it onto a document.

Polkadot SDK provides multiple different cryptographic schemes and is generic so that it can support anything that implements the Pair trait.

"},{"location":"polkadot-protocol/basics/cryptography/#example-of-creating-a-digital-signature","title":"Example of Creating a Digital Signature","text":"

The process of creating and verifying a digital signature involves several steps:

  1. The sender creates a hash of the message
  2. The hash is encrypted using the sender's private key, creating the signature
  3. The message and signature are sent to the recipient
  4. The recipient decrypts the signature using the sender's public key
  5. The recipient hashes the received message and compares it to the decrypted hash

If the hashes match, the signature is valid, confirming the message's integrity and the sender's identity.
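
A minimal sketch of this flow, assuming the sp-core crate and using the sr25519 scheme through the generic Pair trait mentioned above:

use sp_core::{sr25519, Pair};\n\nfn main() {\n    // Steps 1-2: the sender generates a keypair and signs the message.\n    let (pair, _seed) = sr25519::Pair::generate();\n    let message = b\"transfer 10 DOT to Alice\";\n    let signature = pair.sign(message);\n\n    // Steps 4-5: anyone holding the public key can verify the signature.\n    assert!(sr25519::Pair::verify(&signature, message, &pair.public()));\n}\n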

"},{"location":"polkadot-protocol/basics/cryptography/#elliptic-curve","title":"Elliptic Curve","text":"

Blockchain technology requires multiple keys to create signatures for block proposal and validation. To this end, the Elliptic Curve Digital Signature Algorithm (ECDSA) and Schnorr signatures are two of the most commonly used methods. While ECDSA is a far simpler implementation, Schnorr signatures are more efficient when it comes to multi-signatures.

Schnorr signatures bring some noticeable features over the ECDSA/EdDSA schemes:

  • Better support for hierarchical deterministic key derivation
  • Native multi-signature through signature aggregation
  • Generally more resistant to misuse

One sacrifice that is made when using Schnorr signatures over ECDSA is that both require 64 bytes, but only ECDSA signatures communicate their public key.

"},{"location":"polkadot-protocol/basics/cryptography/#various-implementations","title":"Various Implementations","text":"
  • ECDSA - Polkadot SDK provides an ECDSA signature scheme using the secp256k1 curve. This is the same cryptographic algorithm used to secure Bitcoin and Ethereum

  • Ed25519 - is an EdDSA signature scheme using Curve25519. It is carefully engineered at several levels of design and implementation to achieve very high speeds without compromising security

  • SR25519 - is based on the same underlying curve as Ed25519. However, it uses Schnorr signatures instead of the EdDSA scheme

"},{"location":"polkadot-protocol/basics/data-encoding/","title":"Data Encoding","text":""},{"location":"polkadot-protocol/basics/data-encoding/#introduction","title":"Introduction","text":"

The Polkadot SDK uses a lightweight and efficient encoding/decoding mechanism to optimize data transmission across the network. This mechanism, known as the SCALE codec, is used for serializing and deserializing data.

The SCALE codec enables communication between the runtime and the outer node. This mechanism is designed for high-performance, copy-free data encoding and decoding in resource-constrained environments like the Polkadot SDK Wasm runtime.

It is not self-describing, meaning the decoding context must fully know the encoded data types.

Parity's libraries utilize the parity-scale-codec crate (a Rust implementation of the SCALE codec) to handle encoding and decoding for interactions between RPCs and the runtime.

The codec mechanism is ideal for Polkadot SDK-based chains because:

  • It is lightweight compared to generic serialization frameworks like serde, which add unnecessary bulk to binaries
  • It doesn\u2019t rely on Rust\u2019s libstd, making it compatible with no_std environments like Wasm runtime
  • It integrates seamlessly with Rust, allowing easy derivation of encoding and decoding logic for new types using #[derive(Encode, Decode)]

Defining a custom encoding scheme in the Polkadot SDK-based chains, rather than using an existing Rust codec library, is crucial for enabling cross-platform and multi-language support.

"},{"location":"polkadot-protocol/basics/data-encoding/#scale-codec","title":"SCALE Codec","text":"

The codec is implemented using the following traits:

  • Encode
  • Decode
  • CompactAs
  • HasCompact
  • EncodeLike
"},{"location":"polkadot-protocol/basics/data-encoding/#encode","title":"Encode","text":"

The Encode trait handles data encoding into SCALE format and includes the following key functions, a few of which are exercised in the sketch after this list:

  • size_hint(&self) -> usize - estimates the number of bytes required for encoding to prevent multiple memory allocations. This should be inexpensive and avoid complex operations. Optional if the size isn\u2019t known
  • encode_to<T: Output>(&self, dest: &mut T) - encodes the data, appending it to a destination buffer
  • encode(&self) -> Vec<u8> - encodes the data and returns it as a byte vector
  • using_encoded<R, F: FnOnce(&[u8]) -> R>(&self, f: F) -> R - encodes the data and passes it to a closure, returning the result
  • encoded_size(&self) -> usize - calculates the encoded size. Should be used when the encoded data isn\u2019t required
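
A short sketch of a few of these functions in action, assuming the parity-scale-codec crate:

use parity_scale_codec::Encode;\n\nfn main() {\n    let value: (u8, bool) = (42, true);\n    // `encode` returns the SCALE bytes as a Vec<u8>.\n    assert_eq!(value.encode(), vec![42, 1]);\n    // `using_encoded` passes the bytes to a closure without an extra allocation.\n    value.using_encoded(|bytes| assert_eq!(bytes.len(), 2));\n    // `encoded_size` reports the size without materializing the bytes.\n    assert_eq!(value.encoded_size(), 2);\n}\n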

Note

For best performance, value types should override using_encoded, and allocating types should override encode_to. It's recommended to implement size_hint for all types where possible.

"},{"location":"polkadot-protocol/basics/data-encoding/#decode","title":"Decode","text":"

The Decode trait handles decoding SCALE-encoded data back into the appropriate types, as the brief sketch after this list shows:

  • fn decode<I: Input>(value: &mut I) -> Result<Self, Error> - decodes data from the SCALE format, returning an error if decoding fails
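
A brief sketch, again assuming the parity-scale-codec crate:

use parity_scale_codec::Decode;\n\nfn main() {\n    // SCALE bytes for the little-endian u16 value 42.\n    let mut bytes: &[u8] = &[0x2a, 0x00];\n    let value = u16::decode(&mut bytes).expect(\"valid encoding\");\n    assert_eq!(value, 42);\n}\n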
"},{"location":"polkadot-protocol/basics/data-encoding/#compactas","title":"CompactAs","text":"

The CompactAs trait wraps custom types for compact encoding:

  • encode_as(&self) -> &Self::As - encodes the type as a compact type
  • decode_from(_: Self::As) -> Result<Self, Error> - decodes from a compact encoded type
"},{"location":"polkadot-protocol/basics/data-encoding/#hascompact","title":"HasCompact","text":"

The HasCompact trait indicates a type supports compact encoding.
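
The following sketch (assuming parity-scale-codec) contrasts compact and fixed-width encodings of the same value; the expected bytes match the data types table below:

use parity_scale_codec::{Compact, Encode};\n\nfn main() {\n    // 69 uses the two-byte compact mode: (69 << 2) | 0b01 = 0x0115 -> bytes 0x15, 0x01.\n    assert_eq!(Compact(69u32).encode(), vec![0x15, 0x01]);\n    // The same value as a fixed-width u32 takes four little-endian bytes.\n    assert_eq!(69u32.encode(), vec![0x45, 0x00, 0x00, 0x00]);\n}\n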

"},{"location":"polkadot-protocol/basics/data-encoding/#encodelike","title":"EncodeLike","text":"

The EncodeLike trait is used to ensure multiple types that encode similarly are accepted by the same function. When using derive, it is automatically implemented.

"},{"location":"polkadot-protocol/basics/data-encoding/#data-types","title":"Data Types","text":"

The table below outlines how the Rust implementation of the Parity SCALE codec encodes different data types.

| Type | Description | Example SCALE Decoded Value | SCALE Encoded Value |
|------|-------------|-----------------------------|---------------------|
| Boolean | Boolean values are encoded using the least significant bit of a single byte. | false / true | 0x00 / 0x01 |
| Compact/general integers | A \"compact\" or general integer encoding is sufficient for encoding large integers (up to 2^536) and is more efficient at encoding most values than the fixed-width version. | unsigned integer 0 / unsigned integer 1 / unsigned integer 42 / unsigned integer 69 / unsigned integer 65535 / BigInt(100000000000000) | 0x00 / 0x04 / 0xa8 / 0x1501 / 0xfeff0300 / 0x0b00407a10f35a |
| Enumerations (tagged-unions) | A fixed number of variants. | | |
| Fixed-width integers | Basic integers are encoded using a fixed-width little-endian (LE) format. | signed 8-bit integer 69 / unsigned 16-bit integer 42 / unsigned 32-bit integer 16777215 | 0x45 / 0x2a00 / 0xffffff00 |
| Options | One or zero values of a particular type. | Some / None | 0x01 followed by the encoded value / 0x00 |
| Results | Results are commonly used enumerations which indicate whether certain operations were successful or unsuccessful. | Ok(42) / Err(false) | 0x002a / 0x0100 |
| Strings | Strings are vectors of bytes (Vec<u8>) containing a valid UTF-8 sequence. | | |
| Structs | For structures, the values are named, but that is irrelevant for the encoding (names are ignored - only order matters). | SortedVecAsc::from([3, 5, 2, 8]) | [3, 2, 5, 8] |
| Tuples | A fixed-size series of values, each with a possibly different but predetermined and fixed type. This is simply the concatenation of each encoded value. | Tuple of compact unsigned integer and boolean: (3, false) | 0x0c00 |
| Vectors (lists, series, sets) | A collection of same-typed values is encoded, prefixed with a compact encoding of the number of items, followed by each item's encoding concatenated in turn. | Vector of unsigned 16-bit integers: [4, 8, 15, 16, 23, 42] | 0x18040008000f00100017002a00 |
"},{"location":"polkadot-protocol/basics/data-encoding/#encode-and-decode-rust-trait-implementations","title":"Encode and Decode Rust Trait Implementations","text":"

Here's an example of how the Encode and Decode traits are derived and used:

use parity_scale_codec::{Encode, Decode};\n\n#[derive(Debug, PartialEq, Encode, Decode)]\nenum EnumType {\n    #[codec(index = 15)]\n    A,\n    B(u32, u64),\n    C {\n        a: u32,\n        b: u64,\n    },\n}\n\nlet a = EnumType::A;\nlet b = EnumType::B(1, 2);\nlet c = EnumType::C { a: 1, b: 2 };\n\na.using_encoded(|ref slice| {\n    assert_eq!(slice, &b\"\\x0f\");\n});\n\nb.using_encoded(|ref slice| {\n    assert_eq!(slice, &b\"\\x01\\x01\\0\\0\\0\\x02\\0\\0\\0\\0\\0\\0\\0\");\n});\n\nc.using_encoded(|ref slice| {\n    assert_eq!(slice, &b\"\\x02\\x01\\0\\0\\0\\x02\\0\\0\\0\\0\\0\\0\\0\");\n});\n\nlet mut da: &[u8] = b\"\\x0f\";\nassert_eq!(EnumType::decode(&mut da).ok(), Some(a));\n\nlet mut db: &[u8] = b\"\\x01\\x01\\0\\0\\0\\x02\\0\\0\\0\\0\\0\\0\\0\";\nassert_eq!(EnumType::decode(&mut db).ok(), Some(b));\n\nlet mut dc: &[u8] = b\"\\x02\\x01\\0\\0\\0\\x02\\0\\0\\0\\0\\0\\0\\0\";\nassert_eq!(EnumType::decode(&mut dc).ok(), Some(c));\n\nlet mut dz: &[u8] = &[0];\nassert_eq!(EnumType::decode(&mut dz).ok(), None);\n
"},{"location":"polkadot-protocol/basics/data-encoding/#scale-codec-libraries","title":"SCALE Codec Libraries","text":"

Several SCALE codec implementations are available in various languages. Here's a list of them:

  • AssemblyScript - LimeChain/as-scale-codec
  • C - MatthewDarnell/cScale
  • C++ - qdrvm/scale-codec-cpp
  • JavaScript - polkadot-js/api
  • Dart - leonardocustodio/polkadart
  • Haskell - airalab/hs-web3
  • Golang - itering/scale.go
  • Java - splix/polkaj
  • Python - polkascan/py-scale-codec
  • Ruby - wuminzhe/scale_rb
  • TypeScript - parity-scale-codec-ts, scale-ts, soramitsu/scale-codec-js-library, subsquid/scale-codec
"},{"location":"polkadot-protocol/basics/networks/","title":"Networks","text":""},{"location":"polkadot-protocol/basics/networks/#introduction","title":"Introduction","text":"

The Polkadot ecosystem is built on a robust set of networks designed to enable secure and scalable development. Whether you are testing new features or deploying to live production, Polkadot offers several layers of networks tailored for each stage of the development process. From local environments to experimental networks like Kusama and community-run TestNets such as Paseo, developers can thoroughly test, iterate, and validate their applications. This guide will introduce you to Polkadot's various networks and explain how they fit into the development workflow.

"},{"location":"polkadot-protocol/basics/networks/#network-overview","title":"Network Overview","text":"

Polkadot's development process is structured to ensure new features and upgrades are rigorously tested before being deployed on live production networks. The progression follows a well-defined path, starting from local environments and advancing through TestNets, ultimately reaching the Polkadot MainNet. The diagram below outlines the typical progression of the Polkadot development cycle:

\nflowchart LR\n    id1[Local] --> id2[Westend] --> id4[Kusama] --> id5[Polkadot]  \n    id1[Local] --> id3[Paseo] --> id5[Polkadot] 
This flow ensures developers can thoroughly test and iterate without risking real tokens or affecting production networks. Testing tools like Chopsticks and various TestNets make it easier to experiment safely before releasing to production.

A typical journey through the Polkadot core protocol development process might look like this:

  1. Local development node - development starts in a local environment, where developers can create, test, and iterate on upgrades or new features using a local development node. This stage allows rapid experimentation in an isolated setup without any external dependencies

  2. Westend - after testing locally, upgrades are deployed to Westend, Polkadot's primary TestNet. Westend simulates real-world conditions without using real tokens, making it the ideal place for rigorous feature testing before moving on to production networks

  3. Kusama - once features have passed extensive testing on Westend, they move to Kusama, Polkadot's experimental and fast-moving \"canary\" network. Kusama operates as a high-fidelity testing ground with actual economic incentives, giving developers insights into how their features will perform in a real-world environment

  4. Polkadot - after passing tests on Westend and Kusama, features are considered ready for deployment to Polkadot, the live production network

In addition, parachain developers can leverage local test networks spawned with tools like Zombienet and deploy upgrades on parachain TestNets.

  1. Paseo - For parachain and dApp developers, Paseo serves as a community-run TestNet that mirrors Polkadot's runtime. Like Westend for core protocol development, Paseo provides a testing ground for parachain development without affecting live networks

Note

The Rococo TestNet deprecation date was October 14, 2024. Teams should use Westend for Polkadot protocol and feature testing and Paseo for chain development-related testing.

"},{"location":"polkadot-protocol/basics/networks/#polkadot-development-networks","title":"Polkadot Development Networks","text":"

Development and testing are crucial to building robust dApps and parachains and performing network upgrades within the Polkadot ecosystem. To achieve this, developers can leverage various networks and tools that provide a risk-free environment for experimentation and validation before deploying features to live networks. These networks help avoid the costs and risks associated with real tokens, enabling testing for functionalities like governance, cross-chain messaging, and runtime upgrades.

"},{"location":"polkadot-protocol/basics/networks/#kusama-network","title":"Kusama Network","text":"

Kusama is the experimental version of Polkadot, designed for developers who want to move quickly and test their applications in a real-world environment with economic incentives. Kusama serves as a production-grade testing ground where developers can deploy features and upgrades with the pressure of game theory and economics in mind. It mirrors Polkadot but operates as a more flexible space for innovation.

The native token for Kusama is KSM. For more information about KSM, visit the Native Assets page.

"},{"location":"polkadot-protocol/basics/networks/#test-networks","title":"Test Networks","text":"

The following test networks provide controlled environments for testing upgrades and new features. TestNet tokens are available from the Polkadot faucet.

"},{"location":"polkadot-protocol/basics/networks/#westend","title":"Westend","text":"

Westend is Polkadot's primary permanent TestNet. Unlike temporary test networks, Westend is not reset to the genesis block, making it an ongoing environment for testing Polkadot core features. Managed by Parity Technologies, Westend ensures that developers can test features in a real-world simulation without using actual tokens.

The native token for Westend is WND. More details about WND can be found on the Native Assets page.

"},{"location":"polkadot-protocol/basics/networks/#paseo","title":"Paseo","text":"

Paseo is a community-managed TestNet designed for parachain and dApp developers. It mirrors Polkadot's runtime and is maintained by Polkadot community members. Paseo provides a dedicated space for parachain developers to test their applications in a Polkadot-like environment without the risks associated with live networks.

The native token for Paseo is PAS. Additional information on PAS is available on the Native Assets page.

"},{"location":"polkadot-protocol/basics/networks/#local-test-networks","title":"Local Test Networks","text":"

Local test networks are an essential part of the development cycle for blockchain developers using the Polkadot SDK. They allow for fast, iterative testing in controlled, private environments without connecting to public TestNets. Developers can quickly spin up local instances to experiment, debug, and validate their code before deploying to larger TestNets like Westend or Paseo. Two key tools for local network testing are Zombienet and Chopsticks.

"},{"location":"polkadot-protocol/basics/networks/#zombienet","title":"Zombienet","text":"

Zombienet is a flexible testing framework for Polkadot SDK-based blockchains. It enables developers to create and manage ephemeral, short-lived networks. This feature makes Zombienet particularly useful for quick iterations, as it allows you to run multiple local networks concurrently, mimicking different runtime conditions. Whether you're developing a parachain or testing your custom blockchain logic, Zombienet gives you the tools to automate local testing.

Key features of Zombienet include:

  • Creating dynamic, local networks with different configurations
  • Running parachains and relay chains in a simulated environment
  • Efficient testing of network components like cross-chain messaging and governance

Zombienet is ideal for developers looking to test quickly and thoroughly before moving to more resource-intensive public TestNets.

"},{"location":"polkadot-protocol/basics/networks/#chopsticks","title":"Chopsticks","text":"

Chopsticks is a tool designed to create forks of Polkadot SDK-based blockchains, allowing developers to interact with network forks as part of their testing process. This capability makes Chopsticks a powerful option for testing upgrades, runtime changes, or cross-chain applications in a forked network environment.

Key features of Chopsticks include:

  • Forking live Polkadot SDK-based blockchains for isolated testing
  • Simulating cross-chain messages in a private, controlled setup
  • Debugging network behavior by interacting with the fork in real-time

Chopsticks provides a controlled environment for developers to safely explore the effects of runtime changes. It ensures that network behavior is tested and verified before upgrades are deployed to live networks.

"},{"location":"polkadot-protocol/basics/randomness/","title":"Randomness","text":""},{"location":"polkadot-protocol/basics/randomness/#introduction","title":"Introduction","text":"

Randomness is crucial in Proof of Stake (PoS) blockchains to ensure a fair and unpredictable distribution of validator duties. However, computers are inherently deterministic, meaning the same input always produces the same output. What we typically refer to as \"random\" numbers on a computer are actually pseudo-random. These numbers rely on an initial \"seed,\" which can come from external sources like atmospheric noise, heart rates, or even lava lamps. While this may seem random, given the same \"seed,\" the same sequence of numbers will always be generated.

In a global blockchain network, relying on real-world entropy for randomness isn\u2019t feasible because these inputs vary by time and location. If nodes use different inputs, blockchains can fork. Hence, real-world randomness isn't suitable for use as a seed in blockchain systems.

Currently, two primary methods for generating randomness in blockchains are used: RANDAO and VRF (Verifiable Random Function). Polkadot adopts the VRF approach for its randomness.

"},{"location":"polkadot-protocol/basics/randomness/#vrf","title":"VRF","text":"

A Verifiable Random Function (VRF) is a cryptographic function that generates a random number along with a proof that the submitter genuinely produced it. This proof allows anyone to verify the validity of the random number.

Polkadot's VRF is similar to the one used in Ouroboros Praos, which secures randomness for block production in systems like BABE (Polkadot\u2019s block production mechanism).

The key difference is that Polkadot's VRF doesn\u2019t rely on a central clock\u2014avoiding the issue of whose clock to trust. Instead, it uses its own past results and slot numbers to simulate time and determine future outcomes.

"},{"location":"polkadot-protocol/basics/randomness/#how-vrf-works","title":"How VRF Works","text":"

Slots on Polkadot are discrete units of time, each lasting six seconds, and can potentially hold a block. Multiple slots form an epoch, with 2400 slots making up one four-hour epoch.

In each slot, validators execute a \"die roll\" using a VRF. The VRF uses three inputs:

  1. A \"secret key\", unique to each validator, is used for the die roll
  2. An epoch randomness value, derived from the hash of VRF outputs from blocks two epochs ago (N-2), so past randomness influences the current epoch (N)
  3. The current slot number

This process helps maintain fair randomness across the network.


The VRF produces two outputs: a result (the random number) and a proof (verifying that the number was generated correctly).

The\u00a0result\u00a0is checked by the validator against a protocol threshold. If it's below the threshold, the validator becomes a candidate for block production in that slot.

The validator then attempts to create a block, submitting it along with the PROOF and RESULT.

So, VRF can be expressed like:

(RESULT, PROOF) = VRF(SECRET, EPOCH_RANDOMNESS_VALUE, CURRENT_SLOT_NUMBER)

Put simply, performing a \"VRF roll\" generates a random number along with proof that the number was genuinely produced and not arbitrarily chosen.

After executing the VRF, the RESULT is compared to a protocol-defined THRESHOLD. If the RESULT is below the THRESHOLD, the validator becomes a valid candidate to propose a block for that slot. Otherwise, the validator skips the slot.
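
The following pseudo-Rust sketch is illustrative only: the vrf function is a stub and every name is hypothetical, since the real logic lives in the Polkadot SDK's BABE implementation:

// Stand-in for the real VRF: returns (RESULT, PROOF). Hypothetical stub.\nfn vrf(_secret: &[u8; 32], _randomness: &[u8; 32], _slot: u64) -> ([u8; 16], Vec<u8>) {\n    ([0u8; 16], Vec::new())\n}\n\nfn may_author(secret: &[u8; 32], epoch_randomness: &[u8; 32], slot: u64, threshold: u128) -> bool {\n    let (result, _proof) = vrf(secret, epoch_randomness, slot);\n    // Eligible to author when RESULT, read as a number, falls below THRESHOLD.\n    u128::from_le_bytes(result) < threshold\n}\n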

As a result, there may be multiple validators eligible to propose a block for a slot. In this case, the block accepted by other nodes will prevail, provided it is on the chain with the latest finalized block as determined by the GRANDPA finality gadget. It's also possible for no block producers to be available for a slot, in which case the AURA consensus takes over. AURA is a fallback mechanism that randomly selects a validator to produce a block, running in parallel with BABE and only stepping in when no block producers exist for a slot. Otherwise, it remains inactive.

Because validators roll independently, no block candidates may appear in some slots if all roll numbers are above the threshold.

Note

See the PoS Consensus page for how this issue is resolved and how Polkadot keeps block times close to constant.

"},{"location":"polkadot-protocol/basics/randomness/#randao","title":"RANDAO","text":"

An alternative on-chain randomness method is Ethereum's\u00a0RANDAO, where validators perform thousands of hashes on a seed and publish the final hash during a round. The collective input from all validators forms the random number, and as long as one honest validator participates, the randomness is secure.

To enhance security,\u00a0RANDAO\u00a0can optionally be combined with a\u00a0Verifiable Delay Function (VDF), ensuring that randomness can't be predicted or manipulated during computation.

Note

More information about RANDAO can be found in the ETH documentation.

"},{"location":"polkadot-protocol/basics/randomness/#vdfs","title":"VDFs","text":"

Verifiable Delay Functions (VDFs) are time-bound computations that, even on parallel computers, take a set amount of time to complete.

They produce a unique result that can be quickly verified publicly. When combined with RANDAO, feeding RANDAO's output into a VDF introduces a delay that nullifies an attacker's chance to influence the randomness.

However, VDFs likely require specialized ASIC devices that run separately from standard nodes.

Warning

Although only one VDF device is needed to secure the system, and the designs are expected to be open source and inexpensive, running one involves significant costs without direct incentives, adding friction for blockchain users.

"},{"location":"polkadot-protocol/basics/randomness/#additional-resources","title":"Additional Resources","text":"
  • Polkadot's research on blockchain randomness and sortition - contains reasoning for choices made along with proofs
  • Discussion on Randomness used in Polkadot - W3F researchers explore when and under what conditions Polkadot's randomness can be utilized
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/blocks/","title":"Blocks","text":""},{"location":"polkadot-protocol/basics/blocks-transactions-fees/blocks/#introduction","title":"Introduction","text":"

In the Polkadot SDK, blocks are fundamental to the functioning of the blockchain, serving as containers for transactions and changes to the chain's state. Blocks consist of headers and an array of transactions, ensuring the integrity and validity of operations on the network. This guide explores the essential components of a block, the process of block production, and how blocks are validated and imported across the network. By understanding these concepts, developers can better grasp how blockchains maintain security, consistency, and performance within the Polkadot ecosystem.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/blocks/#what-is-a-block","title":"What is a Block?","text":"

In the Polkadot SDK, a block is a fundamental unit that encapsulates both the header and an array of transactions. The block header includes critical metadata to ensure the integrity and sequence of the blockchain. Here's a breakdown of its components:

  • Block height - indicates the number of blocks created in the chain so far
  • Parent hash - the hash of the previous block, providing a link to maintain the blockchain's immutability
  • Transaction root - cryptographic digest summarizing all transactions in the block
  • State root - a cryptographic digest representing the post-execution state
  • Digest - additional information that can be attached to a block, such as consensus-related messages

Each transaction is part of a series that is executed according to the runtime's rules. The transaction root is a cryptographic digest of this series, which prevents alterations and enables succinct verification by light clients. This verification process allows light clients to confirm whether a transaction exists in a block with only the block header, avoiding downloading the entire block.
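
As a sketch of these header fields, assuming the sp-runtime crate, the generic Header type bundles them together; the u32 block number and Blake2-256 hasher are illustrative choices:

use sp_runtime::generic::{Digest, Header};\nuse sp_runtime::traits::{BlakeTwo256, Header as HeaderT};\n\ntype MyHeader = Header<u32, BlakeTwo256>;\n\nfn main() {\n    let header = MyHeader::new(\n        1,                  // block height\n        Default::default(), // transaction (extrinsics) root\n        Default::default(), // state root\n        Default::default(), // parent hash\n        Digest::default(),  // digest, e.g. consensus messages\n    );\n    println!(\"block number: {:?}\", header.number);\n}\n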

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/blocks/#block-production","title":"Block Production","text":"

When an authoring node is authorized to create a new block, it selects transactions from the transaction queue based on priority. This step, known as block production, relies heavily on the executive module to manage the initialization and finalization of blocks. The process is summarized as follows:

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/blocks/#initialize-block","title":"Initialize Block","text":"

The block initialization process begins with a series of function calls that prepare the block for transaction execution:

  1. Call on_initialize - the executive module calls the\u00a0on_initialize\u00a0hook from the system pallet and other runtime pallets to prepare for the block's transactions
  2. Coordinate runtime calls - coordinates function calls in the order defined by the transaction queue
  3. Verify information - once on_initialize\u00a0functions are executed, the executive module checks the parent hash in the block header and the trie root to verify information is consistent
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/blocks/#finalize-block","title":"Finalize Block","text":"

Once transactions are processed, the block must be finalized before being broadcast to the network. The finalization steps are as follows:

  1. Call on_finalize - the executive module calls the on_finalize hooks in each pallet to ensure any remaining state updates or checks are completed before the block is sealed and published
  2. Verify information - the block's digest and storage root in the header are checked against the initialized block to ensure consistency
  3. Call on_idle - the on_idle hook is triggered to process any remaining tasks using the leftover weight from the block
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/blocks/#block-authoring-and-import","title":"Block Authoring and Import","text":"

An authoring node produces a block and then gossips it to other nodes in the network, following this procedure:

  1. Receive transactions - the authoring node collects transactions from the network
  2. Validate - transactions are checked for validity
  3. Queue - valid transactions are placed in the transaction pool for execution
  4. Execute - state changes are made as the transactions are executed
  5. Publish - the finalized block is broadcast to the network
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/blocks/#block-import-queue","title":"Block Import Queue","text":"

After a block is published, other nodes on the network can import it into their chain state. The block import queue is part of the outer node in every Polkadot SDK-based node and ensures incoming blocks are valid before adding them to the node's state.

In most cases, you don't need to know details about how transactions are gossiped or how other nodes on the network import blocks. The following traits are relevant, however, if you plan to write any custom consensus logic or want a deeper dive into the block import queue:

  • ImportQueue - the trait that defines the block import queue
  • Link - the trait that defines the link between the block import queue and the network
  • BasicQueue - a basic implementation of the block import queue
  • Verifier - the trait that defines the block verifier
  • BlockImport - the trait that defines the block import process

These traits govern how blocks are validated and imported across the network, ensuring consistency and security.

Additional information

Refer to the Block reference to learn more about the block structure in the Polkadot SDK runtime.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/","title":"Transactions Weights and Fees","text":""},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#introductions","title":"Introductions","text":"

When transactions are executed, or data is stored on-chain, the activity changes the chain's state and consumes blockchain resources. Because the resources available to a blockchain are limited, managing how operations on-chain consume them is important. In addition to being limited in practical terms, such as storage capacity, blockchain resources represent a potential attack vector for malicious users. For example, a malicious user might attempt to overload the network with messages to stop the network from producing new blocks. To protect blockchain resources from being drained or overloaded, you need to manage how they are made available and how they are consumed. The resources to be aware of include:

  • Memory usage
  • Storage input and output
  • Computation
  • Transaction and block size
  • State database size

The Polkadot SDK provides block authors with several ways to manage access to resources and to prevent individual components of the chain from consuming too much of any single resource. Two of the most important mechanisms available to block authors are\u00a0weights\u00a0and\u00a0transaction fees.

Weights\u00a0manage the time it takes to validate a block and characterize the time it takes to execute the calls in the block's body. By controlling the execution time a block can consume, weights set limits on storage input, output, and computation.

Some of the weight allowed for a block is consumed as part of the block's initialization and finalization. The weight might also be used to execute mandatory inherent extrinsic calls. To help ensure blocks don\u2019t consume too much execution time and prevent malicious users from overloading the system with unnecessary calls, weights are combined with\u00a0transaction fees.

Transaction fees provide an economic incentive to limit execution time, computation, and the number of calls required to perform operations. Transaction fees are also used to make the blockchain economically sustainable because they are typically applied to transactions initiated by users and deducted before a transaction request is executed.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#how-fees-are-calculated","title":"How Fees are Calculated","text":"

The final fee for a transaction is calculated using the following parameters:

  • base fee - this is the minimum amount a user pays for a transaction. It is declared as a base weight in the runtime and converted to a fee using the WeightToFee conversion
  • weight fee - a fee proportional to the execution time (input and output and computation) that a transaction consumes
  • length fee - a fee proportional to the encoded length of the transaction
  • tip - an optional tip to increase the transaction\u2019s priority, giving it a higher chance to be included in the transaction queue

The base fee and proportional weight and length fees constitute the\u00a0inclusion fee. The inclusion fee is the minimum fee that must be available for a transaction to be included in a block.

inclusion fee = base fee + weight fee + length fee\n

Transaction fees are withdrawn before the transaction is executed. After the transaction is executed, the weight can be adjusted to reflect the resources used. If a transaction uses fewer resources than expected, the transaction fee is corrected, and the adjusted transaction fee is deposited.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#using-the-transaction-payment-pallet","title":"Using the Transaction Payment Pallet","text":"

The\u00a0Transaction Payment pallet\u00a0provides the basic logic for calculating the inclusion fee. You can also use the Transaction Payment pallet to:

  • Convert a weight value into a deductible fee based on a currency type using\u00a0Config::WeightToFee
  • Update the fee for the next block by defining a multiplier based on the chain\u2019s final state at the end of the previous block using\u00a0Config::FeeMultiplierUpdate
  • Manage the withdrawal, refund, and deposit of transaction fees using\u00a0Config::OnChargeTransaction

You can learn more about these configuration traits in the\u00a0Transaction Payment\u00a0documentation.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#understanding-the-inclusion-fee","title":"Understanding the Inclusion Fee","text":"

The formula for calculating the inclusion fee is as follows:

inclusion_fee = base_fee + length_fee + [targeted_fee_adjustment * weight_fee]\n

And then, for calculating the final fee:

final_fee = inclusion_fee + tip\n
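
A toy sketch of these two formulas with made-up numbers; real values come from the runtime configuration and the Transaction Payment pallet:

fn main() {\n    let base_fee: u128 = 1_000_000;\n    let length_fee: u128 = 500 * 76; // per-byte fee * encoded length in bytes\n    let weight_fee: u128 = 125_000;\n    let targeted_fee_adjustment: u128 = 2; // congestion multiplier (illustrative)\n    let tip: u128 = 10_000;\n\n    let inclusion_fee = base_fee + length_fee + targeted_fee_adjustment * weight_fee;\n    let final_fee = inclusion_fee + tip;\n    println!(\"final fee: {final_fee}\");\n}\n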

In the first formula, the\u00a0targeted_fee_adjustment\u00a0is a multiplier that can tune the final fee based on the network\u2019s congestion.

  • The\u00a0base_fee\u00a0derived from the base weight covers inclusion overhead like signature verification
  • The\u00a0length_fee\u00a0is a per-byte fee that is multiplied by the length of the encoded extrinsic
  • The weight_fee is calculated using two parameters:
  • The\u00a0ExtrinsicBaseWeight\u00a0that is declared in the runtime and applies to all extrinsics
  • The\u00a0#[pallet::weight]\u00a0annotation that accounts for an extrinsic's complexity

To convert the weight to Currency, the runtime must define a WeightToFee struct that implements a conversion function, Convert<Weight,Balance>.
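
A minimal sketch of such a conversion, assuming the WeightToFee trait from frame-support; the Balance alias and the one-to-one rate are illustrative assumptions:

use frame_support::weights::{Weight, WeightToFee};\n\ntype Balance = u128;\n\npub struct LinearWeightToFee;\n\nimpl WeightToFee for LinearWeightToFee {\n    type Balance = Balance;\n\n    fn weight_to_fee(weight: &Weight) -> Self::Balance {\n        // Charge one unit of Balance per unit of reference time (illustrative rate).\n        Balance::from(weight.ref_time())\n    }\n}\n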

Note that the extrinsic sender is charged the inclusion fee before the extrinsic is invoked. The fee is deducted from the sender's balance even if the transaction fails upon execution.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#accounts-with-an-insufficient-balance","title":"Accounts with an Insufficient Balance","text":"

If an account does not have a sufficient balance to pay the inclusion fee and remain alive\u2014that is, enough to pay the inclusion fee and maintain the minimum\u00a0existential deposit\u2014then you should ensure the transaction is canceled so that no fee is deducted and the transaction does not begin execution.

The Polkadot SDK doesn't enforce this rollback behavior. However, this scenario would be rare because the transaction queue and block-making logic perform checks to prevent it before adding an extrinsic to a block.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#fee-multipliers","title":"Fee Multipliers","text":"

The inclusion fee formula always results in the same fee for the same input. However, weight can be dynamic and\u2014based on how\u00a0WeightToFee\u00a0is defined\u2014the final fee can include some degree of variability. The Transaction Payment pallet provides the\u00a0FeeMultiplierUpdate\u00a0configurable parameter to account for this variability.

The default update function is inspired by the Polkadot network and implements a targeted adjustment in which a target saturation level of block weight is defined. If the previous block is more saturated than the target, fees increase slightly; similarly, if the last block has fewer transactions than the target, fees decrease by a small amount. For more information about fee multiplier adjustments, see the Web3 Research Page.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#transactions-with-special-requirements","title":"Transactions with Special Requirements","text":"

Inclusion fees must be computable before execution and can only represent fixed logic. Some transactions warrant limiting resources with other strategies. For example:

  • Bonds are a type of fee that might be returned or slashed after some on-chain event. For example, you might want to require users to place a bond to participate in a vote. The bond might then be returned at the end of the referendum or slashed if the voter attempted malicious behavior
  • Deposits are fees that might be returned later. For example, you might require users to pay a deposit to execute an operation that uses storage. The user\u2019s deposit could be returned if a subsequent operation frees up storage
  • Burn operations are used to pay for a transaction based on its internal logic. For example, a transaction might burn funds from the sender if the transaction creates new storage items to pay for the increased state size
  • Limits enable you to enforce constant or configurable limits on specific operations. For example, the default Staking pallet only allows nominators to nominate 16 validators to limit the complexity of the validator election process

It is important to note that if you query the chain for a transaction fee, it only returns the inclusion fee.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#default-weight-annotations","title":"Default Weight Annotations","text":"

All dispatchable functions in the Polkadot SDK must specify a weight. This is done using an annotation-based system that lets you combine fixed values for database read/write weight and/or fixed values based on benchmarks. The most basic example would look like this:

#[pallet::weight(100_000)]\nfn my_dispatchable() {\n    // ...\n}\n

Note that the\u00a0ExtrinsicBaseWeight\u00a0is automatically added to the declared weight to account for the costs of simply including an empty extrinsic into a block.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#weights-and-database-readwrite-operations","title":"Weights and Database Read/Write Operations","text":"

To make weight annotations independent of the deployed database backend, they are defined as a constant and then used in the annotations when expressing database accesses performed by the dispatchable:

#[pallet::weight(T::DbWeight::get().reads_writes(1, 2) + 20_000)]\nfn my_dispatchable() {\n    // ...\n}\n

This dispatchable performs one database read and two database writes, in addition to other operations that add the additional 20,000 weight. A database access generally occurs every time a value declared inside a #[pallet::storage] block is accessed. However, only unique accesses are counted, because after a value is accessed, it is cached, and accessing it again does not result in a database operation. That is:

  • Multiple reads of the same value count as one read
  • Multiple writes of the same value count as one write
  • Multiple reads of the same value, followed by a write to that value, count as one read and one write
  • A write followed by a read of the same value only counts as one write
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#dispatch-classes","title":"Dispatch Classes","text":"

Dispatches are broken into three classes:

  • Normal
  • Operational
  • Mandatory

If a dispatch is not defined as Operational or Mandatory in the weight annotation, the dispatch is identified as Normal by default. You can specify that the dispatchable uses another class like this:

#[pallet::weight((100_000, DispatchClass::Operational))]
fn my_dispatchable() {
    // ...
}

This tuple notation also allows you to specify a final argument determining whether the user is charged based on the annotated weight. If you don't specify otherwise, Pays::Yes is assumed:

#[pallet::weight((100_000, DispatchClass::Normal, Pays::No))]
fn my_dispatchable() {
    // ...
}
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#normal-dispatches","title":"Normal Dispatches","text":"

Dispatches in this class represent normal user-triggered transactions. These types of dispatches only consume a portion of a block's total weight limit. For information about the maximum portion of a block that can be consumed for normal dispatches, see AvailableBlockRatio. Normal dispatches are sent to the transaction pool.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#operational-dispatches","title":"Operational Dispatches","text":"

Unlike normal dispatches, which represent the usage of network capabilities, operational dispatches are those that provide network capabilities. Operational dispatches can consume the entire weight limit of a block. They are not bound by the AvailableBlockRatio. Dispatches in this class are given maximum priority and are exempt from paying the length_fee.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#mandatory-dispatches","title":"Mandatory Dispatches","text":"

Mandatory dispatches are included in a block even if they cause the block to surpass its weight limit. You can only use the mandatory dispatch class for inherent transactions that the block author submits. This dispatch class is intended to represent functions in the block validation process. Because these dispatches are always included in a block regardless of the function weight, the validation process must prevent malicious nodes from abusing the function to craft valid but impossibly heavy blocks. You can typically accomplish this by ensuring that:

  • The operation performed is always light
  • The operation can only be included in a block once

To make it more difficult for malicious nodes to abuse mandatory dispatches, they cannot be included in blocks that return errors. This dispatch class exists under the assumption that it is better to allow an overweight block to be created than to not allow any block to be created at all.
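A minimal sketch of such a dispatch, assuming a pallet with a storage item Data and an inherent-only call (both names are hypothetical, and the zero weight literal assumes an older SDK where Weight is a plain integer):

#[pallet::weight((0, DispatchClass::Mandatory))]
pub fn set_data(origin: OriginFor<T>, data: u32) -> DispatchResult {
    // Inherents are unsigned; `ensure_none` restricts the call so it can
    // only be placed in a block by its author, not submitted externally.
    ensure_none(origin)?;
    // The operation stays light and runs at most once per block.
    Data::<T>::put(data);
    Ok(())
}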

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#dynamic-weights","title":"Dynamic Weights","text":"

In addition to purely fixed weights and constants, the weight calculation can consider the input arguments of a dispatchable. The weight should be trivially computable from the input arguments with some basic arithmetic:

use frame_support::{
    dispatch::{DispatchClass, Pays},
    weights::Weight,
};

#[pallet::weight(FunctionOf(
    |args: (&Vec<User>,)| args.0.len().saturating_mul(10_000),
    DispatchClass::Normal,
    Pays::Yes,
))]
fn handle_users(origin, calls: Vec<User>) {
    // Do something per user
}
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#post-dispatch-weight-correction","title":"Post Dispatch Weight Correction","text":"

Depending on the execution logic, a dispatchable function might consume less weight than was prescribed pre-dispatch. To correct weight, the function declares a different return type and returns its actual weight:

#[pallet::weight(10_000 + 500_000_000)]
fn expensive_or_cheap(input: u64) -> DispatchResultWithPostInfo {
    let was_heavy = do_calculation(input);

    if was_heavy {
        // None means "no correction" from the weight annotation.
        Ok(None.into())
    } else {
        // Return the actual weight consumed.
        Ok(Some(10_000).into())
    }
}
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#custom-fees","title":"Custom Fees","text":"

You can also define custom fee systems through custom weight functions or inclusion fee functions.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#custom-weights","title":"Custom Weights","text":"

Instead of using the default weight annotations, you can create a custom weight calculation type using the weights module. The custom weight calculation type must implement the following traits:

  • WeighData<T> to determine the weight of the dispatch
  • ClassifyDispatch<T> to determine the class of the dispatch
  • PaysFee<T> to determine whether the sender of the dispatch pays fees

The Polkadot SDK then bundles the output of these three traits into the DispatchInfo struct and provides it by implementing the GetDispatchInfo trait for all Call variants and opaque extrinsic types. This is used internally by the System and Executive modules.

ClassifyDispatch, WeighData, and PaysFee are generic over T, which gets resolved into the tuple of all dispatch arguments except for the origin. The following example illustrates a struct that calculates the weight as m * len(args), where m is a given multiplier and args is the concatenated tuple of all dispatch arguments. In this example, the dispatch class is Operational if the transaction has more than 100 bytes of arguments, and the sender pays fees if the encoded length of the arguments exceeds 10 bytes.

struct LenWeight(u32);

impl<T> WeighData<T> for LenWeight {
    fn weigh_data(&self, target: T) -> Weight {
        let multiplier = self.0;
        let encoded_len = target.encode().len() as u32;
        multiplier * encoded_len
    }
}

impl<T> ClassifyDispatch<T> for LenWeight {
    fn classify_dispatch(&self, target: T) -> DispatchClass {
        let encoded_len = target.encode().len() as u32;
        if encoded_len > 100 {
            DispatchClass::Operational
        } else {
            DispatchClass::Normal
        }
    }
}

impl<T> PaysFee<T> for LenWeight {
    fn pays_fee(&self, target: T) -> Pays {
        let encoded_len = target.encode().len() as u32;
        if encoded_len > 10 {
            Pays::Yes
        } else {
            Pays::No
        }
    }
}

A weight calculator function can also be coerced to the final type of the argument instead of defining it as a vague type that can be encoded. The code would roughly look like this:

struct CustomWeight;

impl WeighData<(&u32, &u64)> for CustomWeight {
    fn weigh_data(&self, target: (&u32, &u64)) -> Weight {
        ...
    }
}

// Given a dispatch:
#[pallet::call]
impl<T: Config<I>, I: 'static> Pallet<T, I> {
    #[pallet::weight(CustomWeight)]
    fn foo(a: u32, b: u64) { ... }
}

In this example, CustomWeight can only be used in conjunction with a dispatch that has the particular signature (u32, u64), as opposed to LenWeight, which can be used with any dispatch because it makes no assumptions about T.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#custom-inclusion-fee","title":"Custom Inclusion Fee","text":"

The following example illustrates how to customize your inclusion fee. You must configure the appropriate associated types in the respective module.

// Assume this is the balance type
type Balance = u64;

// Assume we want all the weights to have a `100 + 2 * w` conversion to fees
struct CustomWeightToFee;
impl WeightToFee<Weight, Balance> for CustomWeightToFee {
    fn convert(w: Weight) -> Balance {
        let a = Balance::from(100);
        let b = Balance::from(2);
        let w = Balance::from(w);
        a + b * w
    }
}

parameter_types! {
    pub const ExtrinsicBaseWeight: Weight = 10_000_000;
}

impl frame_system::Config for Runtime {
    type ExtrinsicBaseWeight = ExtrinsicBaseWeight;
}

parameter_types! {
    pub const TransactionByteFee: Balance = 10;
}

impl transaction_payment::Config for Runtime {
    type TransactionByteFee = TransactionByteFee;
    type WeightToFee = CustomWeightToFee;
    type FeeMultiplierUpdate = TargetedFeeAdjustment<TargetBlockFullness>;
}

struct TargetedFeeAdjustment<T>(sp_std::marker::PhantomData<T>);
impl<T: Get<Perquintill>> WeightToFee<Fixed128, Fixed128> for TargetedFeeAdjustment<T> {
    fn convert(multiplier: Fixed128) -> Fixed128 {
        // Don't change anything. Put any fee update info here.
        multiplier
    }
}
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#further-resources","title":"Further Resources","text":"

You now know how the weight system works, how it affects transaction fee computation, and how to specify weights for your dispatchable calls. The next step is determining the correct weight for your dispatchable operations. You can use Substrate benchmarking functions and frame-benchmarking calls to test your functions with different parameters and empirically determine the proper weight in their worst-case scenarios (see the sketch after the resource list below).

  • Benchmark
  • SignedExtension
  • Custom weights for the Example pallet
  • Web3 Foundation Research
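As a rough illustration of that workflow, here is a minimal sketch using the frame-benchmarking v1 macro syntax; the extrinsic name my_dispatchable and the absence of setup logic are illustrative assumptions, not from the original text.

use frame_benchmarking::{benchmarks, whitelisted_caller};
use frame_system::RawOrigin;

benchmarks! {
    // Benchmark the hypothetical `my_dispatchable` call with a
    // whitelisted caller so account-access overhead is excluded.
    my_dispatchable {
        let caller: T::AccountId = whitelisted_caller();
    }: _(RawOrigin::Signed(caller))
    verify {
        // Assert post-conditions here so the benchmark fails loudly
        // if the call did not do its work.
    }
}

Running the node's benchmark pallet command over such a benchmark produces the weight constants you can then reference from your weight annotations.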
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/","title":"Transactions","text":""},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#introduction","title":"Introduction","text":"

Transactions are essential components of blockchain networks, enabling state changes and the execution of key operations. In the Polkadot SDK, transactions, often called extrinsics, come in multiple forms, including signed, unsigned, and inherent transactions.

This guide walks you through the different transaction types and how they're formatted, validated, and processed within the Polkadot ecosystem. You'll also learn how to customize transaction formats and construct transactions for FRAME-based runtimes, ensuring a complete understanding of how transactions are built and executed in Polkadot SDK-based chains.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#what-is-a-transaction","title":"What Is a Transaction?","text":"

In the Polkadot SDK, transactions represent operations that modify the chain's state, bundled into blocks for execution. The term extrinsic is often used to refer to any data that originates outside the runtime and is included in the chain. While other blockchain systems typically refer to these operations as \"transactions,\" the Polkadot SDK adopts the broader term \"extrinsic\" to capture the wide variety of data types that can be added to a block.

There are three primary types of transactions (extrinsics) in the Polkadot SDK:

  • Signed transactions - signed by the submitting account, often carrying transaction fees
  • Unsigned transactions - submitted without a signature, often requiring custom validation logic
  • Inherent transactions - typically inserted directly into blocks by block authoring nodes, without gossiping between peers

Each type serves a distinct purpose, and understanding when and how to use each is key to efficiently working with the Polkadot SDK.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#signed-transactions","title":"Signed Transactions","text":"

Signed transactions require an account's signature and typically involve submitting a request to execute a runtime call. The signature serves as a form of cryptographic proof that the sender has authorized the action, using their private key. These transactions often involve a transaction fee to cover the cost of execution and incentivize block producers.

Signed transactions are the most common type of transaction and are integral to user-driven actions, such as token transfers. For instance, when you transfer tokens from one account to another, the sending account must sign the transaction to authorize the operation.

For example, the pallet_balances::Call::transfer_allow_death extrinsic in the Balances pallet allows you to transfer tokens. Since your account initiates this transaction, your account key is used to sign it. You'll also be responsible for paying the associated transaction fee, with the option to include an additional tip to incentivize faster inclusion in the block.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#unsigned-transactions","title":"Unsigned Transactions","text":"

Unsigned transactions do not require a signature or account-specific data from the sender. Unlike signed transactions, they do not come with any form of economic deterrent, such as fees, which makes them susceptible to spam or replay attacks. Custom validation logic must be implemented to mitigate these risks and ensure these transactions are secure.

Unsigned transactions typically involve scenarios where including a fee or signature is unnecessary or counterproductive. However, due to the absence of fees, they require careful validation to protect the network. For example, the pallet_im_online::Call::heartbeat extrinsic allows validators to send a heartbeat signal, indicating they are active. Since only validators can make this call, the logic embedded in the transaction ensures that the sender is a validator, removing the need for a signature or fee.

Unsigned transactions are more resource-intensive than signed ones because custom validation is required, but they play a crucial role in certain operational scenarios, especially when regular user accounts aren't involved.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#inherent-transactions","title":"Inherent Transactions","text":"

Inherent transactions are a specialized type of unsigned transaction that is used primarily for block authoring. Unlike signed or other unsigned transactions, inherent transactions are added directly by block producers and are not broadcasted to the network or stored in the transaction queue. They don't require signatures or the usual validation steps and are generally used to insert system-critical data directly into blocks.

A key example of an inherent transaction is inserting a timestamp into each block. The pallet_timestamp::Call::now extrinsic allows block authors to include the current time in the block they are producing. Since the block producer adds this information, there is no need for transaction validation, like signature verification. The validation in this case is done indirectly by the validators, who check whether the timestamp is within an acceptable range before finalizing the block.

Another example is the paras_inherent::Call::enter extrinsic, which enables parachain collator nodes to send validation data to the relay chain. This inherent transaction ensures that the necessary parachain data is included in each block without the overhead of gossiped transactions.

Inherent transactions serve a critical role in block authoring by allowing important operational data to be added directly to the chain without needing the validation processes required for standard transactions.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#transaction-formats","title":"Transaction Formats","text":"

Understanding the structure of signed and unsigned transactions is crucial for developers building on Polkadot SDK-based chains. Whether you're optimizing transaction processing, customizing formats, or interacting with the transaction pool, knowing the format of extrinsics, Polkadot's term for transactions, is essential.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#types-of-transaction-formats","title":"Types of Transaction Formats","text":"

In Polkadot SDK-based chains, extrinsics can fall into three main categories:

  • Unchecked extrinsics - typically used for signed transactions that require validation. They contain a signature and additional data, such as a nonce and information for fee calculation. Unchecked extrinsics are named as such because they require validation checks before being accepted into the transaction pool
  • Checked extrinsics - typically used for inherent extrinsics (unsigned transactions); these don't require signature verification. Instead, they carry information such as where the extrinsic originates and any additional data required for the block authoring process
  • Opaque extrinsics - used when the format of an extrinsic is not yet fully committed or finalized. They are still decodable, but their structure can be flexible depending on the context
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#signed-transaction-data-structure","title":"Signed Transaction Data Structure","text":"

A signed transaction typically includes the following components:

  • Signature - verifies the authenticity of the transaction sender
  • Call - the actual function or method call the transaction is requesting (for example, transferring funds)
  • Nonce - tracks the number of prior transactions sent from the account, helping to prevent replay attacks
  • Tip - an optional incentive to prioritize the transaction in block inclusion
  • Additional data - includes details such as spec version, block hash, and genesis hash to ensure the transaction is valid within the correct runtime and chain context

Here's a simplified breakdown of how signed transactions are typically constructed in a Polkadot SDK runtime:

<signing account ID> + <signature> + <additional data>

Each part of the signed transaction has a purpose, ensuring the transaction's authenticity and context within the blockchain.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#signed-extensions","title":"Signed Extensions","text":"

Polkadot SDK also provides the concept of signed extensions, which allow developers to extend extrinsics with additional data or validation logic before they are included in a block. The SignedExtension set helps enforce custom rules or protections, such as ensuring the transaction's validity or calculating priority.

The transaction queue regularly calls signed extensions to verify a transaction's validity before placing it in the ready queue. This safeguard ensures transactions won't fail in a block. Signed extensions are commonly used to enforce validation logic and protect the transaction pool from spam and replay attacks.

In FRAME, a signed extension can hold any of the following types by default:

  • AccountId - to encode the sender's identity
  • Call - to encode the pallet call to be dispatched. This data is used to calculate transaction fees
  • AdditionalSigned - to handle any additional data that goes into the signed payload, allowing you to attach custom logic prior to dispatching a transaction
  • Pre - to encode the information that can be passed from before a call is dispatched to after it gets dispatched

Signed extensions can enforce checks like:

  • CheckSpecVersion - ensures the transaction is compatible with the runtime's current version
  • CheckWeight - calculates the weight (or computational cost) of the transaction, ensuring the block doesn't exceed the maximum allowed weight

These extensions are critical in the transaction lifecycle, ensuring that only valid and prioritized transactions are processed.
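For concreteness, here is a sketch of the signed-extension tuple commonly declared in FRAME runtimes; the exact set varies per runtime, and this one mirrors typical node templates rather than any specific chain.

// A representative SignedExtra tuple; your runtime may include more,
// fewer, or custom extensions.
pub type SignedExtra = (
    frame_system::CheckNonZeroSender<Runtime>,
    frame_system::CheckSpecVersion<Runtime>,
    frame_system::CheckTxVersion<Runtime>,
    frame_system::CheckGenesis<Runtime>,
    frame_system::CheckEra<Runtime>,
    frame_system::CheckNonce<Runtime>,
    frame_system::CheckWeight<Runtime>,
    pallet_transaction_payment::ChargeTransactionPayment<Runtime>,
);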

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#transaction-construction","title":"Transaction Construction","text":"

Building transactions in the Polkadot SDK involves constructing a payload that can be verified, signed, and submitted for inclusion in a block. Each runtime in the Polkadot SDK has its own rules for validating and executing transactions, but there are common patterns for constructing a signed transaction.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#construct-a-signed-transaction","title":"Construct a Signed Transaction","text":"

A signed transaction in the Polkadot SDK includes various pieces of data to ensure security, prevent replay attacks, and prioritize processing. Here's an overview of how to construct one:

  1. Construct the unsigned payload - gather the necessary information for the call, including:
    • Pallet index - identifies the pallet where the runtime function resides
    • Function index - specifies the particular function to call in the pallet
    • Parameters - any additional arguments required by the function call
  2. Create a signing payload - once the unsigned payload is ready, additional data must be included:
    • Transaction nonce - unique identifier to prevent replay attacks
    • Era information - defines how long the transaction is valid before it's dropped from the pool
    • Block hash - ensures the transaction doesn't execute on the wrong chain or fork
  3. Sign the payload - using the sender's private key, sign the payload to ensure that the transaction can only be executed by the account holder
  4. Serialize the signed payload - once signed, the transaction must be serialized into a binary format, ensuring the data is compact and easy to transmit over the network
  5. Submit the serialized transaction - finally, submit the serialized transaction to the network, where it will enter the transaction pool and wait for processing by an authoring node

The following is an example of how a signed transaction might look:

node_runtime::UncheckedExtrinsic::new_signed(
    function.clone(),                                      // some call
    sp_runtime::AccountId32::from(sender.public()).into(), // some sending account
    node_runtime::Signature::Sr25519(signature.clone()),   // the account's signature
    extra.clone(),                                         // the signed extensions
)
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#transaction-encoding","title":"Transaction Encoding","text":"

Before a transaction is sent to the network, it is serialized and encoded using a structured encoding process that ensures consistency and prevents tampering:

  • [1] - compact encoded length in bytes of the entire transaction
  • [2] - a u8 indicating whether the transaction is signed or unsigned (1 bit) and the encoded transaction version ID (7 bits)
  • [3] - if signed, this field contains an account ID, an SR25519 signature, and some extra data
  • [4] - encoded call data, including pallet and function indices and any required arguments

This encoded format ensures consistency and efficiency in processing transactions across the network. By adhering to this format, applications can construct valid transactions and pass them to the network for execution.

Additional Information

Learn how compact encoding works using SCALE.
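As a quick illustration using the parity-scale-codec crate (the implementation behind SCALE), compact encoding packs small integers into fewer bytes than their fixed-width encoding:

use parity_scale_codec::{Compact, Encode};

fn main() {
    // Values 0..=63 fit in a single byte (value << 2).
    assert_eq!(Compact(1u32).encode(), vec![0x04]);
    // Values 64..=16383 use a two-byte mode.
    assert_eq!(Compact(69u32).encode(), vec![0x15, 0x01]);
    // A plain u32 always encodes as four little-endian bytes.
    assert_eq!(69u32.encode(), vec![0x45, 0x00, 0x00, 0x00]);
}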

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#customize-transaction-construction","title":"Customize Transaction Construction","text":"

Although the basic steps for constructing transactions are consistent across Polkadot SDK-based chains, developers can customize transaction formats and validation rules. For example:

  • Custom pallets - you can define new pallets with custom function calls, each with its own parameters and validation logic
  • Signed extensions - developers can implement custom extensions that modify how transactions are prioritized, validated, or included in blocks

By leveraging Polkadot SDK's modular design, developers can create highly specialized transaction logic tailored to their chain's needs.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#lifecycle-of-a-transaction","title":"Lifecycle of a Transaction","text":"

In the Polkadot SDK, transactions are often referred to as extrinsics because the data in transactions originates outside of the runtime. These transactions contain data that initiates changes to the chain state. The most common type of extrinsic is a signed transaction, which is cryptographically verified and typically incurs a fee. This section focuses on how signed transactions are processed, validated, and ultimately included in a block.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#define-transaction-properties","title":"Define Transaction Properties","text":"

The Polkadot SDK runtime defines key transaction properties, such as:

  • Transaction validity - ensures the transaction meets all runtime requirements
  • Signed or unsigned - identifies whether a transaction needs to be signed by an account
  • State changes - determines how the transaction modifies the state of the chain

Pallets, which compose the runtime's logic, define the specific transactions that your chain supports. When a user submits a transaction, such as a token transfer, it becomes a signed transaction, verified by the user's account signature. If the account has enough funds to cover fees, the transaction is executed, and the chain's state is updated accordingly.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#process-on-a-block-authoring-node","title":"Process on a Block Authoring Node","text":"

In Polkadot SDK-based networks, some nodes are authorized to author blocks. These nodes validate and process transactions. When a transaction is sent to a node that can produce blocks, it undergoes a lifecycle that involves several stages, including validation and execution. Non-authoring nodes gossip the transaction across the network until an authoring node receives it. The following diagram illustrates the lifecycle of a transaction that's submitted to a network and processed by an authoring node.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#validate-and-queue","title":"Validate and Queue","text":"

Once a transaction reaches an authoring node, it undergoes an initial validation process to ensure it meets specific conditions defined in the runtime. This validation includes checks for:

  • Correct nonce - ensures the transaction is sequentially valid for the account
  • Sufficient funds - confirms the account can cover any associated transaction fees
  • Signature validity - verifies that the sender's signature matches the transaction data

After these checks, valid transactions are placed in the transaction pool, where they are queued for inclusion in a block. The transaction pool regularly re-validates queued transactions to ensure they remain valid before being processed. To reach consensus, two-thirds of the nodes must agree on the order of the transactions executed and the resulting state change. Transactions are validated and queued on the local node in a transaction pool to prepare for consensus.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#transaction-pool","title":"Transaction Pool","text":"

The transaction pool is responsible for managing valid transactions. It ensures that only transactions that pass initial validity checks are queued. Transactions that fail validation, expire, or become invalid for other reasons are removed from the pool.

The transaction pool organizes transactions into two queues:

  • Ready queue - transactions that are valid and ready to be included in a block
  • Future queue - transactions that are not yet valid but could be in the future, such as transactions with a nonce too high for the current state

Details on how the transaction pool validates transactions, including fee and signature handling, can be found in the validate_transaction method.
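For a flavor of what that validation returns, here is a sketch of constructing a ValidTransaction with sp_runtime's types; the nonce check and the provides tag are illustrative assumptions, not the actual runtime logic.

use sp_runtime::transaction_validity::{
    InvalidTransaction, TransactionValidity, ValidTransaction,
};

fn validate(nonce_is_current: bool, priority: u64) -> TransactionValidity {
    if !nonce_is_current {
        // Stale transactions (e.g., nonce too low) are rejected outright.
        return InvalidTransaction::Stale.into();
    }
    Ok(ValidTransaction {
        priority,                                  // ordering hint for block authors
        requires: vec![],                          // tags this transaction waits on
        provides: vec![b"account_nonce".to_vec()], // tags it satisfies for others
        longevity: 64,                             // validity horizon in blocks
        propagate: true,                           // gossip to other peers
    })
}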

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#invalid-transactions","title":"Invalid Transactions","text":"

If a transaction is invalid, for example, due to an invalid signature or insufficient funds, it is rejected and won't be added to the block. Invalid transactions might be rejected for reasons such as:

  • The transaction has already been included in a block
  • The transaction's signature does not match the sender
  • The transaction is too large to fit in the current block
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#transaction-ordering-and-priority","title":"Transaction Ordering and Priority","text":"

When a node is selected as the next block author, it prioritizes transactions based on weight, length, and tip amount. The goal is to fill the block with high-priority transactions without exceeding its maximum size or computational limits. Transactions are ordered as follows:

  • Inherents first - inherent transactions, such as block timestamp updates, are always placed first
  • Nonce-based ordering - transactions from the same account are ordered by their nonce
  • Fee-based ordering - among transactions with the same nonce or priority level, those with higher fees are prioritized
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#transaction-execution","title":"Transaction Execution","text":"

Once a block author selects transactions from the pool, the transactions are executed in priority order. As each transaction is processed, the state changes are written directly to the chain's storage. It's important to note that these changes are not cached, meaning a failed transaction won't revert earlier state changes, which could leave the block in an inconsistent state.

Events are also written to storage. Because of this, runtime logic should not emit an event before performing the associated actions: if the transaction fails after the event is emitted, the event is not reverted.

Additional Information

Watch Seminar: Lifecycle of a transaction for a video overview of the lifecycle of transactions and the types of transactions that exist.

"},{"location":"polkadot-protocol/glossary/","title":"Glossary","text":"

Key definitions, concepts, and terminology specific to the Polkadot ecosystem are included here.

Additional glossaries from around the ecosystem you might find helpful:

  • Polkadot Wiki Glossary
  • Polkadot SDK Glossary
"},{"location":"polkadot-protocol/glossary/#authority","title":"Authority","text":"

The role in a blockchain that can participate in consensus mechanisms.

  • GRANDPA - the authorities vote on chains they consider final
  • Blind Assignment of Blockchain Extension (BABE) - the authorities are also block authors

Authority sets can be used as a basis for consensus mechanisms such as the Nominated Proof of Stake (NPoS) protocol.

"},{"location":"polkadot-protocol/glossary/#authority-round-aura","title":"Authority Round (Aura)","text":"

A deterministic consensus protocol where block production is limited to a rotating list of authorities that take turns creating blocks. In authority round (Aura) consensus, most online authorities are assumed to be honest. It is often used in combination with GRANDPA as a hybrid consensus protocol.

Learn more by reading the official Aura consensus algorithm wiki article.

"},{"location":"polkadot-protocol/glossary/#blind-assignment-of-blockchain-extension-babe","title":"Blind Assignment of Blockchain Extension (BABE)","text":"

A block authoring protocol similar to Aura, except authorities win slots based on a Verifiable Random Function (VRF) instead of the round-robin selection method. The winning authority can select a chain and submit a new block.

Learn more by reading the official Web3 Foundation BABE research document.

"},{"location":"polkadot-protocol/glossary/#block-author","title":"Block Author","text":"

The node responsible for the creation of a block, also called a block producer. In a Proof of Work (PoW) blockchain, these nodes are called miners.

"},{"location":"polkadot-protocol/glossary/#byzantine-fault-tolerance-bft","title":"Byzantine Fault Tolerance (BFT)","text":"

The ability of a distributed computer network to remain operational if a certain proportion of its nodes or authorities are defective or behaving maliciously.

Note

A distributed network is typically considered Byzantine fault tolerant if it can remain functional, with up to one-third of its nodes assumed to be defective, offline, actively malicious, or part of a coordinated attack.

"},{"location":"polkadot-protocol/glossary/#byzantine-failure","title":"Byzantine Failure","text":"

The loss of a network service due to node failures that exceed the proportion of nodes required to reach consensus.

"},{"location":"polkadot-protocol/glossary/#practical-byzantine-fault-tolerance-pbft","title":"Practical Byzantine Fault Tolerance (pBFT)","text":"

An early approach to Byzantine fault tolerance (BFT), practical Byzantine fault tolerance (pBFT) systems tolerate Byzantine behavior from up to one-third of participants.

The communication overhead for such systems is O(n²), where n is the number of nodes (participants) in the system.

"},{"location":"polkadot-protocol/glossary/#call","title":"Call","text":"

In the context of pallets containing functions to be dispatched to the runtime, Call is an enumeration data type that describes the functions that can be dispatched with one variant per pallet. A Call represents a dispatch data structure object.
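A hedged sketch of the shape of this enum; FRAME's construct_runtime! macro generates it, and the pallet set shown here is illustrative:

// Illustrative only: the outer call enum generated for a runtime,
// with one variant per pallet.
pub enum RuntimeCall {
    System(frame_system::Call<Runtime>),
    Balances(pallet_balances::Call<Runtime>),
    Timestamp(pallet_timestamp::Call<Runtime>),
    // ... one variant per pallet in the runtime
}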

"},{"location":"polkadot-protocol/glossary/#chain-specification","title":"Chain Specification","text":"

A chain specification file defines the properties required to run a node in an active or new Polkadot SDK-built network. It often contains the initial genesis runtime code, network properties (such as the network's name), the initial state for some pallets, and the boot node list. The chain specification file makes it easy to use a single Polkadot SDK codebase as the foundation for multiple independently configured chains.

"},{"location":"polkadot-protocol/glossary/#collator","title":"Collator","text":"

An author of blocks for a parachain network. Collators aren't authorities in themselves, as they require a relay chain to coordinate consensus.

More details are found on the Polkadot Collator Wiki.

"},{"location":"polkadot-protocol/glossary/#collective","title":"Collective","text":"

Most often used to refer to an instance of the Collective pallet on Polkadot SDK-based networks such as Kusama or Polkadot if the Collective pallet is part of the FRAME-based runtime for the network.

"},{"location":"polkadot-protocol/glossary/#consensus","title":"Consensus","text":"

Consensus is the process blockchain nodes use to agree on a chain's canonical fork. It is composed of authorship, finality, and fork-choice rule. In the Polkadot ecosystem, these three components are usually separate and the term consensus often refers specifically to authorship.

See also hybrid consensus.

"},{"location":"polkadot-protocol/glossary/#consensus-algorithm","title":"Consensus Algorithm","text":"

Ensures a set of actors, who don't necessarily trust each other, can reach agreement about the state as the result of some computation. Most consensus algorithms are designed to tolerate up to one-third of the actors or nodes being Byzantine faulty.

Consensus algorithms are generally concerned with ensuring two properties:

  • Safety - indicating that all honest nodes eventually agree on the state of the chain
  • Liveness - indicating the ability of the chain to keep progressing
"},{"location":"polkadot-protocol/glossary/#consensus-engine","title":"Consensus Engine","text":"

The node subsystem responsible for consensus tasks.

For detailed information about the consensus strategies of the Polkadot network, see the Polkadot Consensus blog series.

See also hybrid consensus.

"},{"location":"polkadot-protocol/glossary/#coretime","title":"Coretime","text":"

The time allocated for utilizing a core, measured in relay chain blocks. There are two types of coretime: on-demand and bulk.

On-demand coretime refers to coretime acquired through bidding in near real-time for the validation of a single parachain block on one of the cores reserved specifically for on-demand orders. These cores form the on-demand coretime pool, a set of cores available on demand. Cores reserved through bulk coretime can also be made available in the on-demand coretime pool, in part or in their entirety.

Bulk coretime is a fixed duration of continuous coretime represented by an NFT that can be split, shared, or resold. It is managed by the Broker pallet.

"},{"location":"polkadot-protocol/glossary/#development-phrase","title":"Development Phrase","text":"

A mnemonic phrase that is intentionally made public.

Well-known development accounts, such as Alice, Bob, Charlie, Dave, Eve, and Ferdie, are generated from the same secret phrase:

bottom drive obey lake curtain smoke basket hold race lonely fit walk

Many tools in the Polkadot SDK ecosystem, such as subkey, allow you to implicitly specify an account using a derivation path such as //Alice.
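A small sketch, assuming the sp-core crate, of deriving the Alice development key pair from that path:

use sp_core::{sr25519, Pair};

fn main() {
    // "//Alice" is shorthand for the development phrase plus the hard
    // derivation path /Alice.
    let pair = sr25519::Pair::from_string("//Alice", None)
        .expect("valid built-in development path");
    println!("Alice public key: {:?}", pair.public());
}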

"},{"location":"polkadot-protocol/glossary/#digest","title":"Digest","text":"

An extensible field of the block header that encodes information needed by several actors in a blockchain network, including:

  • Light clients for chain synchronization
  • Consensus engines for block verification
  • The runtime itself, in the case of pre-runtime digests
"},{"location":"polkadot-protocol/glossary/#dispatchable","title":"Dispatchable","text":"

Function objects that act as the entry points in FRAME pallets. Internal or external entities can call them to interact with the blockchain's state. They are a core aspect of the runtime logic, handling transactions and other state-changing operations.

"},{"location":"polkadot-protocol/glossary/#events","title":"Events","text":"

A means of recording that some particular state transition happened.

In the context of FRAME, events are composable data types that each pallet can individually define. Events in FRAME are implemented as a set of transient storage items inspected immediately after a block has been executed and reset during block initialization.

"},{"location":"polkadot-protocol/glossary/#executor","title":"Executor","text":"

A means of executing a function call in a given runtime with a set of dependencies. There are two orchestration engines in the Polkadot SDK: WebAssembly and native.

  • The native executor uses a natively compiled runtime embedded in the node to execute calls. This is a performance optimization available to up-to-date nodes

  • The WebAssembly executor uses a Wasm binary and a Wasm interpreter to execute calls. The binary is guaranteed to be up-to-date regardless of the version of the blockchain node because it is persisted in the state of the Polkadot SDK-based chain

"},{"location":"polkadot-protocol/glossary/#existential-deposit","title":"Existential Deposit","text":"

The minimum balance an account is allowed to have in the Balances pallet. Accounts cannot be created with a balance less than the existential deposit amount.

If an account balance drops below this amount, the Balances pallet uses a FRAME System API to drop its references to that account.

If the Balances pallet reference to an account is dropped, the account can be reaped.

"},{"location":"polkadot-protocol/glossary/#extrinsic","title":"Extrinsic","text":"

A general term for data that originates outside the runtime, is included in a block, and leads to some action. This includes user-initiated transactions and inherent transactions placed into the block by the block builder.

It is a SCALE-encoded array typically consisting of a version number, signature, and varying data types indicating the resulting runtime function to be called. Extrinsics can take two forms: inherents and transactions.

For more technical details, see the Polkadot spec.

"},{"location":"polkadot-protocol/glossary/#fork-choice-rulestrategy","title":"Fork Choice Rule/Strategy","text":"

A fork choice rule or strategy helps determine which chain is valid when reconciling several network forks. A common fork choice rule is the longest chain, in which the chain with the most blocks is selected.

"},{"location":"polkadot-protocol/glossary/#frame-framework-for-runtime-aggregation-of-modularized-entities","title":"FRAME (Framework for Runtime Aggregation of Modularized Entities)","text":"

Enables developers to create blockchain runtime environments from a modular set of components called pallets. It utilizes a set of procedural macros to construct runtimes.

Visit the Polkadot SDK docs for more details on FRAME.

"},{"location":"polkadot-protocol/glossary/#full-node","title":"Full Node","text":"

A node that prunes historical states, keeping only recently finalized block states to reduce storage needs. Full nodes provide current chain state access and allow direct submission and validation of extrinsics, maintaining network decentralization.

"},{"location":"polkadot-protocol/glossary/#genesis-configuration","title":"Genesis Configuration","text":"

A mechanism for specifying the initial state of a blockchain. By convention, this initial state or first block is commonly referred to as the genesis state or genesis block. The genesis configuration for Polkadot SDK-based chains is accomplished by way of a chain specification file.

"},{"location":"polkadot-protocol/glossary/#grandpa","title":"GRANDPA","text":"

A deterministic finality mechanism for blockchains that is implemented in the Rust programming language.

The formal specification is maintained by the Web3 Foundation.

"},{"location":"polkadot-protocol/glossary/#header","title":"Header","text":"

A structure that aggregates the information used to summarize a block. Primarily, it consists of cryptographic information used by light clients to get minimally secure but very efficient chain synchronization.

"},{"location":"polkadot-protocol/glossary/#hybrid-consensus","title":"Hybrid Consensus","text":"

A blockchain consensus protocol that consists of independent or loosely coupled mechanisms for block production and finality.

Hybrid consensus allows the chain to grow as fast as probabilistic consensus protocols, such as Aura, while maintaining the same level of security as deterministic finality consensus protocols, such as GRANDPA.

"},{"location":"polkadot-protocol/glossary/#inherent-transactions","title":"Inherent Transactions","text":"

A special type of unsigned transaction, referred to as inherents, that enables a block authoring node to insert information that doesn't require validation directly into a block.

Only the block-authoring node that calls the inherent transaction function can insert data into its block. In general, validators assume the data inserted using an inherent transaction is valid and reasonable even if it can't be deterministically verified.

"},{"location":"polkadot-protocol/glossary/#json-rpc","title":"JSON-RPC","text":"

A stateless, lightweight remote procedure call protocol encoded in JavaScript Object Notation (JSON). JSON-RPC provides a standard way to call functions on a remote system by using JSON.

For Polkadot SDK, this protocol is implemented through the Parity JSON-RPC crate.

"},{"location":"polkadot-protocol/glossary/#keystore","title":"Keystore","text":"

A subsystem for managing keys for the purpose of producing new blocks.

"},{"location":"polkadot-protocol/glossary/#kusama","title":"Kusama","text":"

Kusama is a Polkadot SDK-based blockchain that implements a design similar to the Polkadot network.

Kusama is a canary network and is referred to as Polkadot's "wild cousin."

As a canary network, Kusama is expected to be more stable than a test network like Westend but less stable than a production network like Polkadot. Kusama is controlled by its network participants and is intended to be stable enough to encourage meaningful experimentation.

"},{"location":"polkadot-protocol/glossary/#libp2p","title":"libp2p","text":"

A peer-to-peer networking stack that allows the use of many transport mechanisms, including WebSockets (usable in a web browser).

Polkadot SDK uses the Rust implementation of the libp2p networking stack.

"},{"location":"polkadot-protocol/glossary/#light-client","title":"Light Client","text":"

A type of blockchain node that doesn't store the chain state or produce blocks.

A light client can verify cryptographic primitives and provides a remote procedure call (RPC) server, enabling blockchain users to interact with the network.

"},{"location":"polkadot-protocol/glossary/#metadata","title":"Metadata","text":"

Data that provides information about one or more aspects of a system. The metadata that exposes information about a Polkadot SDK blockchain enables you to interact with that system.

"},{"location":"polkadot-protocol/glossary/#nominated-proof-of-stake-npos","title":"Nominated Proof of Stake (NPoS)","text":"

A method for determining validators or authorities based on a willingness to commit their stake to the proper functioning of one or more block-producing nodes.

"},{"location":"polkadot-protocol/glossary/#oracle","title":"Oracle","text":"

An entity that connects a blockchain to a non-blockchain data source. Oracles enable the blockchain to access and act upon information from existing data sources and incorporate data from non-blockchain systems and services.

"},{"location":"polkadot-protocol/glossary/#origin","title":"Origin","text":"

A FRAME primitive that identifies the source of a dispatched function call into the runtime. The FRAME System pallet defines three built-in origins. As a pallet developer, you can also define custom origins, such as those defined by the Collective pallet.

"},{"location":"polkadot-protocol/glossary/#pallet","title":"Pallet","text":"

A module that can be used to extend the capabilities of a FRAME-based runtime. Pallets bundle domain-specific logic with runtime primitives like events and storage items.

"},{"location":"polkadot-protocol/glossary/#parachain","title":"Parachain","text":"

A parachain is a blockchain that derives shared infrastructure and security from a relay chain. You can learn more about parachains on the Polkadot Wiki.

"},{"location":"polkadot-protocol/glossary/#paseo","title":"Paseo","text":"

Paseo is a TestNet that runs Polkadot's "production" runtime, which means less chance of feature or code mismatch when developing parachain apps. Specifically, after the Polkadot Technical Fellowship proposes a runtime upgrade for Polkadot, this TestNet is updated, giving a period where the TestNet is ahead of Polkadot to allow for testing.

"},{"location":"polkadot-protocol/glossary/#polkadot","title":"Polkadot","text":"

The Polkadot network is a blockchain that serves as the central hub of a heterogeneous blockchain network. It serves the role of the relay chain and provides shared infrastructure and security to support parachains.

"},{"location":"polkadot-protocol/glossary/#relay-chain","title":"Relay Chain","text":"

Relay chains are blockchains that provide shared infrastructure and security to the parachains in the network. In addition to providing consensus capabilities, relay chains allow parachains to communicate and exchange digital assets without needing to trust one another.

"},{"location":"polkadot-protocol/glossary/#rococo","title":"Rococo","text":"

A parachain test network for the Polkadot network. The Rococo network is a Polkadot SDK-based blockchain with an October 14, 2024 deprecation date. Development teams are encouraged to use the Paseo TestNet instead.

"},{"location":"polkadot-protocol/glossary/#runtime","title":"Runtime","text":"

The runtime provides the state transition function for a node. In Polkadot SDK, the runtime is stored as a Wasm binary in the chain state.

"},{"location":"polkadot-protocol/glossary/#slot","title":"Slot","text":"

A fixed, equal interval of time used by consensus engines such as Aura and BABE. In each slot, a subset of authorities is permitted, or obliged, to author a block.

"},{"location":"polkadot-protocol/glossary/#sovereign-account","title":"Sovereign Account","text":"

The unique account identifier for each chain in the relay chain ecosystem. It is often used in cross-consensus (XCM) interactions to sign XCM messages sent to the relay chain or other chains in the ecosystem.

The sovereign account for each chain is a root-level account that can only be accessed using the Sudo pallet or through governance. The account identifier is calculated by concatenating the Blake2 hash of a specific text string and the registered parachain identifier.

"},{"location":"polkadot-protocol/glossary/#ss58-address-format","title":"SS58 Address Format","text":"

A public key address based on the Bitcoin Base-58-check encoding. Each Polkadot SDK SS58 address uses a base-58 encoded value to identify a specific account on a specific Polkadot SDK-based chain.

The canonical ss58-registry provides additional details about the address format used by different Polkadot SDK-based chains, including the network prefix and website used for different networks.

"},{"location":"polkadot-protocol/glossary/#state-transition-function-stf","title":"State Transition Function (STF)","text":"

The logic of a blockchain that determines how the state changes when a block is processed. In Polkadot SDK, the state transition function is effectively equivalent to the runtime.

"},{"location":"polkadot-protocol/glossary/#storage-item","title":"Storage Item","text":"

FRAME primitives that provide type-safe data persistence capabilities to the runtime. Learn more in the storage items reference document in the Polkadot SDK.

"},{"location":"polkadot-protocol/glossary/#substrate","title":"Substrate","text":"

A flexible framework for building modular, efficient, and upgradeable blockchains. Substrate is written in the Rust programming language and is maintained by Parity Technologies.

"},{"location":"polkadot-protocol/glossary/#transaction","title":"Transaction","text":"

An extrinsic that includes a signature that can be used to verify the account authorizing it inherently or via signed extensions.

"},{"location":"polkadot-protocol/glossary/#transaction-era","title":"Transaction Era","text":"

A definable period expressed as a range of block numbers during which a transaction can be included in a block. Transaction eras are used to protect against transaction replay attacks if an account is reaped and its replay-protecting nonce is reset to zero.
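A brief sketch using sp_runtime's Era type; the period and block numbers are illustrative:

use sp_runtime::generic::Era;

fn main() {
    // A mortal era: valid for a period of 64 blocks, anchored near the
    // current block number (10_000 here is illustrative).
    let era = Era::mortal(64, 10_000);
    println!("valid from block {} to block {}", era.birth(10_000), era.death(10_000));

    // An immortal era never expires and offers no replay protection
    // once an account is reaped and its nonce reset.
    let forever = Era::immortal();
    assert_eq!(forever.death(10_000), u64::MAX);
}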

"},{"location":"polkadot-protocol/glossary/#trie-patricia-merkle-tree","title":"Trie (Patricia Merkle Tree)","text":"

A data structure used to represent sets of key-value pairs and enables the items in the data set to be stored and retrieved using a cryptographic hash. Because incremental changes to the data set result in a new hash, retrieving data is efficient even if the data set is very large. With this data structure, you can also prove whether the data set includes any particular key-value pair without access to the entire data set.

In Polkadot SDK-based blockchains, state is stored in a trie data structure that supports the efficient creation of incremental digests. This trie is exposed to the runtime as a simple key/value map where both keys and values can be arbitrary byte arrays.

"},{"location":"polkadot-protocol/glossary/#validator","title":"Validator","text":"

A validator is a node that participates in the consensus mechanism of the network. Its roles include block production, transaction validation, network integrity, and security maintenance.

"},{"location":"polkadot-protocol/glossary/#webassembly-wasm","title":"WebAssembly (Wasm)","text":"

An execution architecture that allows for the efficient, platform-neutral expression of deterministic, machine-executable logic.

Wasm can be compiled from many languages, including the Rust programming language. Polkadot SDK-based chains use a Wasm binary to provide portable runtimes that can be included as part of the chain's state.

"},{"location":"polkadot-protocol/glossary/#weight","title":"Weight","text":"

A convention used in Polkadot SDK-based blockchains to measure and manage the time it takes to validate a block. Polkadot SDK defines one unit of weight as one picosecond of execution time on reference hardware.

The maximum block weight should be equivalent to one-third of the target block time with an allocation of one-third each for:

  • Block construction
  • Network propagation
  • Import and verification

By defining weights, you can trade-off the number of transactions per second and the hardware required to maintain the target block time appropriate for your use case. Weights are defined in the runtime, meaning you can tune them using runtime updates to keep up with hardware and software improvements.
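As a small sketch, assuming the two-dimensional Weight type from recent frame_support releases (which tracks execution time plus proof size), one microsecond of execution time is expressed as:

use frame_support::weights::Weight;

fn main() {
    // One unit of ref_time is one picosecond of execution on reference
    // hardware, so 1_000_000 units represent one microsecond.
    let one_microsecond = Weight::from_parts(1_000_000, 0);
    println!("{:?}", one_microsecond);
}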

"},{"location":"polkadot-protocol/glossary/#westend","title":"Westend","text":"

Westend is a Parity-maintained, Polkadot SDK-based blockchain that serves as a test network for the Polkadot network.

"},{"location":"polkadot-protocol/onchain-governance/overview/","title":"On-Chain Governance","text":""},{"location":"polkadot-protocol/onchain-governance/overview/#introduction","title":"Introduction","text":"

Polkadot's governance system exemplifies decentralized decision-making, empowering its community of stakeholders to shape the network's future through active participation. The latest evolution, OpenGov, builds on Polkadot's foundation by providing a more inclusive and efficient governance model.

This guide will explain the principles and structure of OpenGov and walk you through its key components, such as Origins, Tracks, and Delegation. You will learn about improvements over earlier governance systems, including streamlined voting processes and enhanced stakeholder participation.

With OpenGov, Polkadot achieves a flexible, scalable, and democratic governance framework that allows multiple proposals to proceed simultaneously, ensuring the network evolves in alignment with its community's needs.

"},{"location":"polkadot-protocol/onchain-governance/overview/#governance-evolution","title":"Governance Evolution","text":"

Polkadot's governance journey began with Governance V1, a system that proved effective in managing treasury funds and protocol upgrades. However, it faced limitations, such as:

  • Slow voting cycles, causing delays in decision-making
  • Inflexibility in handling multiple referendums, restricting scalability

To address these challenges, Polkadot introduced OpenGov, a governance model designed for greater inclusivity, efficiency, and scalability. OpenGov replaces the centralized structures of Governance V1, such as the Council and Technical Committee, with a fully decentralized and dynamic framework.

For a full comparison of the historic and current governance models, visit the Gov1 vs. Polkadot OpenGov section of the Polkadot Wiki.

"},{"location":"polkadot-protocol/onchain-governance/overview/#opengov-key-features","title":"OpenGov Key Features","text":"

OpenGov transforms Polkadot's governance into a decentralized, stakeholder-driven model, eliminating centralized decision-making bodies like the Council. Key enhancements include:

  • Decentralization - shifts all decision-making power to the public, ensuring a more democratic process
  • Enhanced delegation - allows users to delegate their votes to trusted experts across specific governance tracks
  • Simultaneous referendums - multiple proposals can progress at once, enabling faster decision-making
  • Polkadot Technical Fellowship - a broad, community-driven group replacing the centralized Technical Committee

This new system ensures Polkadot governance remains agile and inclusive, even as the ecosystem grows.

"},{"location":"polkadot-protocol/onchain-governance/overview/#origins-and-tracks","title":"Origins and Tracks","text":"

In OpenGov, origins and tracks are central to managing proposals and votes.

  • Origin - determines the authority level of a proposal (for example, Treasury or Root), which decides the track of all referendums from that origin
  • Track - defines the procedural flow of a proposal, such as voting duration, approval thresholds, and enactment timelines

Developers must be aware that referendums from different origins and tracks will take varying amounts of time to reach approval and enactment. The Polkadot Technical Fellowship has the option to shorten this timeline by whitelisting a proposal and allowing it to be enacted through the Whitelist Caller origin.

Visit Origins and Tracks Info for details on current origins and tracks, associated terminology, and parameters.

"},{"location":"polkadot-protocol/onchain-governance/overview/#referendums","title":"Referendums","text":"

In OpenGov, anyone can submit a referendum, fostering an open and participatory system. The timeline for a referendum depends on the privilege level of the origin, with more significant changes offering more time for community voting and participation before enactment.

The timeline for an individual referendum includes four distinct periods:

  • Lead-in - a minimum amount of time to allow for community participation, available capacity in the origin's track, and payment of the decision deposit. Voting is open during this period
  • Decision - voting continues while the referendum works toward the track's approval and support thresholds
  • Confirmation - the referendum must meet the approval and support criteria for the entire confirmation period to avoid rejection
  • Enactment - the changes approved by the referendum are executed after a track-defined delay
"},{"location":"polkadot-protocol/onchain-governance/overview/#vote-on-referendums","title":"Vote on Referendums","text":"

Voters can vote with their tokens on each referendum. Polkadot uses a voluntary token-locking mechanism, called conviction voting, as a way for voters to increase their voting power. A token holder signals a stronger preference for approving a proposal through their willingness to lock up tokens. Longer voluntary token locks are seen as a signal of continual approval and translate to increased voting weight.

See Voting on a Referendum for a deeper look at conviction voting and related token locks.
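To make the mechanics concrete, here is a small illustrative sketch (in TypeScript) of how conviction scales vote weight. The multiplier table reflects the standard OpenGov conviction levels, but the constants and the votingWeight helper are hypothetical illustrations, not an on-chain API; check the runtime for the authoritative values.

// Hypothetical helper illustrating conviction voting weight; not an on-chain API.
// Multipliers (scaled by 10 to keep 0.1x in integer arithmetic) assume the
// standard OpenGov conviction levels.
const CONVICTION_MULTIPLIER_X10: Record<string, bigint> = {
  None: 1n,      // 0.1x, no lock
  Locked1x: 10n, // 1x, locked for 1 enactment period
  Locked2x: 20n, // 2x, locked for 2 periods
  Locked3x: 30n, // 3x, locked for 4 periods
  Locked4x: 40n, // 4x, locked for 8 periods
  Locked5x: 50n, // 5x, locked for 16 periods
  Locked6x: 60n, // 6x, locked for 32 periods
};

// Effective vote weight = locked balance * conviction multiplier.
function votingWeight(balancePlanck: bigint, conviction: string): bigint {
  const multiplierX10 = CONVICTION_MULTIPLIER_X10[conviction] ?? 1n;
  return (balancePlanck * multiplierX10) / 10n;
}

// 100 DOT (10 decimals) voted at Locked3x counts as 300 DOT worth of votes.
console.log(votingWeight(100n * 10_000_000_000n, 'Locked3x'));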

"},{"location":"polkadot-protocol/onchain-governance/overview/#delegate-voting-power","title":"Delegate Voting Power","text":"

The OpenGov system also supports multi-role delegations, allowing token holders to assign their voting power on different tracks to entities with expertise in those areas.

For example, if a token holder lacks the technical knowledge to evaluate proposals on the Root track, they can delegate their voting power for that track to an expert they trust to vote in the best interest of the network. This ensures informed decision-making across tracks while maintaining flexibility for token holders.

Visit Multirole Delegation for more details on delegating voting power.

"},{"location":"polkadot-protocol/onchain-governance/overview/#cancel-a-referendum","title":"Cancel a Referendum","text":"

Polkadot OpenGov has two origins for rejecting ongoing referendums:

  • Referendum Canceller - cancels an active referendum when non-malicious errors occur and refunds the deposits to the originators
  • Referendum Killer - used for urgent, malicious cases; this origin instantly terminates an active referendum and slashes deposits

See Cancelling, Killing, and Blacklisting for additional information on rejecting referendums.

"},{"location":"polkadot-protocol/onchain-governance/overview/#additional-resources","title":"Additional Resources","text":"
  • Democracy pallet - handles administration of general stakeholder voting
  • Gov2: Polkadot's Next Generation of Decentralised Governance - Medium article by Gavin Wood
  • Polkadot Direction - Matrix Element client
  • Polkassembly - OpenGov dashboard and UI
  • Polkadot.js Apps Governance - overview of active referendums
"},{"location":"tutorials/blockchains/system-chains/asset-hub/asset-conversion/","title":"Asset Conversion on Asset Hub","text":""},{"location":"tutorials/blockchains/system-chains/asset-hub/asset-conversion/#introduction","title":"Introduction","text":"

Asset Conversion is an Automated Market Maker (AMM) utilizing Uniswap V2 logic and implemented as a pallet on Polkadot's Asset Hub. For more details about this feature, please visit the Asset Conversion on Asset Hub wiki page.

This guide will provide detailed information about the key functionalities offered by the Asset Conversion pallet on Asset Hub, including:

  • Creating a liquidity pool
  • Adding liquidity to a pool
  • Swapping assets
  • Withdrawing liquidity from a pool
"},{"location":"tutorials/blockchains/system-chains/asset-hub/asset-conversion/#prerequisites","title":"Prerequisites","text":"

Before converting assets on Asset Hub, you must ensure you have:

  • Access to the Polkadot.js Apps interface and a connection with the intended blockchain
  • A funded wallet containing the assets you wish to convert and enough available funds to cover the transaction fees
  • An asset registered on Asset Hub that you want to convert. If you haven't created one yet, refer to the Register a Local Asset or Register a Foreign Asset documentation
"},{"location":"tutorials/blockchains/system-chains/asset-hub/asset-conversion/#creating-a-liquidity-pool","title":"Creating a Liquidity Pool","text":"

If an asset on Asset Hub does not have an existing liquidity pool, the first step is to create one.

The asset conversion pallet provides the createPool extrinsic for creating a new liquidity pool; calling it creates an empty pool and a new LP token asset.

Note

A testing token with the asset ID 1112 and the name PPM was created for this example.

As stated in the Test Environment Setup section, this tutorial is based on the assumption that you have an instance of Polkadot Asset Hub running locally. Therefore, the demo liquidity pool will be created between DOT and PPM tokens. However, the same steps can be applied to any other asset on Asset Hub.

From the Asset Hub perspective, the Multilocation that identifies the PPM token is the following:

{
  parents: 0,
  interior: {
    X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
  }
}

Note

The PalletInstance value of 50 represents the Assets pallet on Asset Hub. The GeneralIndex value of 1112 is the PPM asset's asset ID.

To create the liquidity pool, you can follow these steps:

  1. Navigate to the Extrinsics section on the Polkadot.js Apps interface

    1. Select Developer from the top menu
    2. Click on Extrinsics from the dropdown menu

  2. Choose the AssetConversion pallet and click on the createPool extrinsic

    1. Select the AssetConversion pallet
    2. Choose the createPool extrinsic from the list of available extrinsics

  3. Fill in the required fields:

    1. asset1 - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

      {
        parents: 0,
        interior: 'Here'
      }
    2. asset2 - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

      {
        parents: 0,
        interior: {
          X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
        }
      }
    3. Click on Submit Transaction to create the liquidity pool

Signing and submitting the transaction triggers the creation of the liquidity pool. To verify the new pool's creation, check the Explorer section on the Polkadot.js Apps interface and ensure that the PoolCreated event was emitted.

In the emitted PoolCreated event, the lpToken ID created for this pool is 19. This ID is essential for identifying the liquidity pool and its associated LP tokens.
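If you prefer a programmatic route, the following is a minimal sketch of the same createPool call using @polkadot/api. It assumes the local Chopsticks endpoint from the Test Environment Setup section and the //Alice development account; adjust both for your environment.

import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function createPool() {
  // Local Chopsticks fork of Polkadot Asset Hub (see Test Environment Setup).
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:8000'),
  });
  const signer = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');

  // Multilocations for the two assets, as shown above.
  const dot = { parents: 0, interior: 'Here' };
  const ppm = {
    parents: 0,
    interior: { X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }] },
  };

  // The same call the Apps UI builds: assetConversion.createPool(asset1, asset2).
  const unsub = await api.tx.assetConversion
    .createPool(dot, ppm)
    .signAndSend(signer, ({ status, events }) => {
      if (status.isInBlock) {
        // Confirm success by looking for the PoolCreated event.
        for (const { event } of events) {
          if (api.events.assetConversion.PoolCreated.is(event)) {
            console.log('PoolCreated:', event.data.toHuman());
          }
        }
        unsub();
      }
    });
}

createPool().catch(console.error);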

"},{"location":"tutorials/blockchains/system-chains/asset-hub/asset-conversion/#adding-liquidity-to-a-pool","title":"Adding Liquidity to a Pool","text":"

The addLiquidity extrinsic allows users to provide liquidity to a pool of two assets. Users specify their preferred amounts for both assets and minimum acceptable quantities. The function determines the best asset contribution, which may vary from the amounts desired but won't fall below the specified minimums. Providers receive liquidity tokens representing their pool portion in return for their contribution.

To add liquidity to a pool, follow these steps:

  1. Navigate to the Extrinsics section on the Polkadot.js Apps interface

    1. Select Developer from the top menu
    2. Click on Extrinsics from the dropdown menu

  2. Choose the assetConversion pallet and click on the addLiquidity extrinsic

    1. Select the assetConversion pallet
    2. Choose the addLiquidity extrinsic from the list of available extrinsics

  3. Fill in the required fields:

    1. asset1 - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

      {
        parents: 0,
        interior: 'Here'
      }
    2. asset2 - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

      {
        parents: 0,
        interior: {
          X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
        }
      }
    3. amount1Desired - the amount of the first asset that will be contributed to the pool

    4. amount2Desired - the quantity of the second asset intended for pool contribution
    5. amount1Min - the minimum amount of the first asset that will be contributed
    6. amount2Min - the lowest acceptable quantity of the second asset for contribution
    7. mintTo - the account to which the liquidity tokens will be minted
    8. Click on Submit Transaction to add liquidity to the pool

    Warning

    Ensure the tokens you intend to provide have been minted and are available in your account before adding liquidity to the pool.

    In this case, the liquidity provided to the pool is between DOT tokens and PPM tokens with the asset ID 1112 on Polkadot Asset Hub. The intention is to provide liquidity for 1 DOT token (u128 value of 10000000000 as it has 10 decimals) and 1 PPM token (u128 value of 10000000000 as it also has 10 decimals).

Signing and submitting the transaction adds liquidity to the pool. To verify the liquidity addition, check the Explorer section on the Polkadot.js Apps interface and ensure that the LiquidityAdded event was emitted.
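For a programmatic equivalent, here is a minimal sketch of the same addLiquidity call with @polkadot/api, reusing api, signer, dot, and ppm from the createPool sketch above. The amounts match the example values; setting the minimums equal to the desired amounts means zero slippage tolerance, which is reasonable for a freshly created pool.

// Reusing `api`, `signer`, `dot`, and `ppm` from the createPool sketch.
const ONE_DOT = 10_000_000_000n; // 1 DOT, 10 decimals
const ONE_PPM = 10_000_000_000n; // 1 PPM, 10 decimals

await api.tx.assetConversion
  .addLiquidity(
    dot,            // asset1
    ppm,            // asset2
    ONE_DOT,        // amount1Desired
    ONE_PPM,        // amount2Desired
    ONE_DOT,        // amount1Min (zero slippage tolerance)
    ONE_PPM,        // amount2Min
    signer.address, // mintTo
  )
  .signAndSend(signer);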

"},{"location":"tutorials/blockchains/system-chains/asset-hub/asset-conversion/#swapping-assets","title":"Swapping Assets","text":""},{"location":"tutorials/blockchains/system-chains/asset-hub/asset-conversion/#swapping-from-an-exact-amount-of-tokens","title":"Swapping From an Exact Amount of Tokens","text":"

The asset conversion pallet enables users to exchange a specific quantity of one asset for another in a designated liquidity pool. It guarantees the user will receive at least a predetermined minimum amount of the second asset. This increases trading predictability and lets users conduct exchanges knowing they are assured a minimum return.

To swap assets for an exact amount of tokens, follow these steps:

  1. Navigate to the Extrinsics section on the Polkadot.js Apps interface

    1. Select Developer from the top menu
    2. Click on Extrinsics from the dropdown menu

  2. Choose the AssetConversion pallet and click on the swapExactTokensForTokens extrinsic

    1. Select the AssetConversion pallet
    2. Choose the swapExactTokensForTokens extrinsic from the list of available extrinsics

  3. Fill in the required fields:

    1. path:Vec<StagingXcmV3MultiLocation> - an array of Multilocations representing the path of the swap. The first and last elements of the array are the input and output assets, respectively. In this case, the path consists of two elements:

      • 0: StagingXcmV3MultiLocation - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

        {
          parents: 0,
          interior: 'Here'
        }
      • 1: StagingXcmV3MultiLocation - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

        {
          parents: 0,
          interior: {
            X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
          }
        }
    2. amountIn - the exact amount of the first asset that the user wants to swap

    3. amountOutMin - the minimum amount of the second asset that the user is willing to accept in return
    4. sendTo - the account to which the swapped assets will be sent
    5. keepAlive - a boolean value that determines whether the swap may reduce the sender's account below the existential deposit; when true, the swap fails rather than reaping the account
    6. Click on Submit Transaction to swap assets for an exact amount of tokens

    Warning

    Ensure the tokens you intend to swap have been minted and are available in your account before performing the swap.

    In this case, the intention is to swap exactly 0.01 DOT token (u128 value of 100000000 as it has 10 decimals) for at least 0.04 PPM token (u128 value of 400000000 as it also has 10 decimals).

Signing and submitting the transaction will execute the swap. To verify execution, check the Explorer section on the Polkadot.js Apps interface and make sure that the SwapExecuted event was emitted.
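For reference, a minimal @polkadot/api sketch of the same swap, reusing api, signer, dot, and ppm from the earlier sketches; the amounts match the example above.

// Reusing `api`, `signer`, `dot`, and `ppm` from the earlier sketches.
await api.tx.assetConversion
  .swapExactTokensForTokens(
    [dot, ppm],     // path: input asset first, output asset last
    100_000_000n,   // amountIn: swap exactly 0.01 DOT
    400_000_000n,   // amountOutMin: accept no less than 0.04 PPM
    signer.address, // sendTo
    true,           // keepAlive: fail rather than reap the sender account
  )
  .signAndSend(signer);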

"},{"location":"tutorials/blockchains/system-chains/asset-hub/asset-conversion/#swapping-to-an-exact-amount-of-tokens","title":"Swapping To an Exact Amount of Tokens","text":"

Conversely, the Asset Conversion pallet comes with a function that allows users to trade a variable amount of one asset to acquire a precise quantity of another. It ensures that users stay within a set maximum of the initial asset to obtain the desired amount of the second asset. This provides a method to control transaction costs while achieving the intended result.

To swap assets to an exact amount of tokens, follow these steps:

  1. Navigate to the Extrinsics section on the Polkadot.js Apps interface

    1. Select Developer from the top menu
    2. Click on Extrinsics from the dropdown menu

  2. Choose the AssetConversion pallet and click on the swapTokensForExactTokens extrinsic:

    1. Select the AssetConversion pallet
    2. Choose the swapTokensForExactTokens extrinsic from the list of available extrinsics

  3. Fill in the required fields:

    1. path:Vec<StagingXcmV3MultiLocation> - an array of Multilocations representing the path of the swap. The first and last elements of the array are the input and output assets, respectively. In this case, the path consists of two elements:

      • 0: StagingXcmV3MultiLocation - the Multilocation of the first asset in the pool. In this case, it is the PPM token, which the following Multilocation represents:

        {
          parents: 0,
          interior: {
            X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
          }
        }
      • 1: StagingXcmV3MultiLocation - the second asset's Multilocation within the pool. This refers to the DOT token, which the following Multilocation identifies:

        {
          parents: 0,
          interior: 'Here'
        }
    2. amountOut - the exact amount of the second asset that the user wants to receive

    3. amountInMax - the maximum amount of the first asset that the user is willing to swap
    4. sendTo - the account to which the swapped assets will be sent
    5. keepAlive - a boolean value that determines whether the swap may reduce the sender's account below the existential deposit; when true, the swap fails rather than reaping the account
    6. Click on Submit Transaction to swap assets for an exact amount of tokens

    Warning

    Before swapping assets, ensure that the tokens provided have been minted previously and are available in your account.

    In this case, the intention is to spend at most 0.04 PPM token (u128 value of 400000000 as it has 10 decimals) to receive exactly 0.01 DOT token (u128 value of 100000000 as it also has 10 decimals).

Signing and submitting the transaction will execute the swap. To verify execution, check the Explorer section on the Polkadot.js Apps interface and make sure that the SwapExecuted event was emitted.
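A corresponding @polkadot/api sketch of this swap, with the PPM-to-DOT path from the steps above (again reusing values from the earlier sketches):

// Reusing `api`, `signer`, `dot`, and `ppm` from the earlier sketches.
await api.tx.assetConversion
  .swapTokensForExactTokens(
    [ppm, dot],     // path: spend PPM, receive DOT
    100_000_000n,   // amountOut: receive exactly 0.01 DOT
    400_000_000n,   // amountInMax: spend at most 0.04 PPM
    signer.address, // sendTo
    true,           // keepAlive: fail rather than reap the sender account
  )
  .signAndSend(signer);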

"},{"location":"tutorials/blockchains/system-chains/asset-hub/asset-conversion/#withdrawing-liquidity-from-a-pool","title":"Withdrawing Liquidity from a Pool","text":"

The Asset Conversion pallet provides the removeLiquidity extrinsic to remove liquidity from a pool. This function allows users to withdraw the liquidity they offered from a pool, returning the original assets. When calling this function, users specify the number of liquidity tokens (representing their share in the pool) they wish to burn. They also set minimum acceptable amounts for the assets they expect to receive back. This mechanism ensures that users can control the minimum value they receive, protecting against unfavorable price movements during the withdrawal process.

To withdraw liquidity from a pool, follow these steps:

  1. Navigate to the Extrinsics section on the Polkadot.js Apps interface

    1. Select Developer from the top menu
    2. Click on Extrinsics from the dropdown menu

  2. Choose the AssetConversion pallet and click on the removeLiquidity extrinsic

    1. Select the AssetConversion pallet
    2. Choose the removeLiquidity extrinsic from the list of available extrinsics

  3. Fill in the required fields:

    1. asset1 - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

      {
        parents: 0,
        interior: 'Here'
      }
    2. asset2 - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

      {
        parents: 0,
        interior: {
          X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
        }
      }
    3. lpTokenBurn - the number of liquidity tokens to burn

    4. amount1MinReceived - the minimum amount of the first asset that the user expects to receive
    5. amount2MinReceived - the minimum quantity of the second asset the user expects to receive
    6. withdrawTo - the account to which the withdrawn assets will be sent
    7. Click on Submit Transaction to withdraw liquidity from the pool

    Warning

    Ensure that you hold enough LP tokens in your account before withdrawing liquidity from the pool.

    In this case, the intention is to withdraw 0.05 liquidity tokens from the pool, expecting to receive at least 0.004 DOT token (u128 value of 40000000 as it has 10 decimals) and 0.04 PPM token (u128 value of 400000000 as it also has 10 decimals).

Signing and submitting the transaction will initiate the withdrawal of liquidity from the pool. To verify the withdrawal, check the Explorer section on the Polkadot.js Apps interface and ensure that the LiquidityRemoved event was emitted.
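A matching @polkadot/api sketch, reusing values from the earlier sketches; note that the lpTokenBurn figure assumes the pool's LP asset also uses 10 decimals, which you should verify on your chain.

// Reusing `api`, `signer`, `dot`, and `ppm` from the earlier sketches.
await api.tx.assetConversion
  .removeLiquidity(
    dot,            // asset1
    ppm,            // asset2
    500_000_000n,   // lpTokenBurn: 0.05 LP tokens, assuming 10 decimals
    40_000_000n,    // amount1MinReceived: at least 0.004 DOT
    400_000_000n,   // amount2MinReceived: at least 0.04 PPM
    signer.address, // withdrawTo
  )
  .signAndSend(signer);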

"},{"location":"tutorials/blockchains/system-chains/asset-hub/asset-conversion/#test-environment-setup","title":"Test Environment Setup","text":"

To test the Asset Conversion pallet, you can set up a local test environment to simulate different scenarios. This guide uses Chopsticks to spin up an instance of Polkadot Asset Hub. For further details on using Chopsticks, please refer to the Chopsticks documentation.

To set up a local test environment, execute the following command:

npx @acala-network/chopsticks \
--config=https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot-asset-hub.yml

Note

This command initiates a lazy fork of Polkadot Asset Hub, including the most recent block information from the network. For Kusama Asset Hub testing, simply switch out polkadot-asset-hub.yml with kusama-asset-hub.yml in the command.

You now have a local Asset Hub instance up and running, ready for you to test various asset conversion procedures. The process here mirrors what you'd do on MainNet. After completing a transaction on TestNet, you can apply the same steps to convert assets on MainNet.

"},{"location":"tutorials/blockchains/system-chains/asset-hub/register-foreign-asset/","title":"Register a Foreign Asset on Asset Hub","text":""},{"location":"tutorials/blockchains/system-chains/asset-hub/register-foreign-asset/#introduction","title":"Introduction","text":"

As outlined in the Asset Hub Overview, Asset Hub supports two categories of assets: local and foreign. Local assets are created on the Asset Hub system parachain and are identified by integer IDs. On the other hand, foreign assets, which originate outside of Asset Hub, are recognized by Multilocations.

When registering a foreign asset on Asset Hub, it's essential to note that the process involves communication between two parachains. The Asset Hub parachain will be the destination of the foreign asset, while the source parachain will be the origin of the asset. The communication between the two parachains is facilitated by the Cross-Chain Message Passing (XCMP) protocol.

This guide will take you through the process of registering a foreign asset on the Asset Hub parachain.

"},{"location":"tutorials/blockchains/system-chains/asset-hub/register-foreign-asset/#prerequisites","title":"Prerequisites","text":"

The Asset Hub parachain is one of the system parachains on a relay chain, such as Polkadot or Kusama. To interact with these parachains, you can use the Polkadot.js Apps interface for:

  • Polkadot Asset Hub
  • Kusama Asset Hub

For testing purposes, you can also interact with the Asset Hub instance on the following test networks:

  • Paseo Asset Hub

Before you start, ensure that you have:

  • Access to the Polkadot.js Apps interface, and you are connected to the desired chain
  • A parachain that supports the XCMP protocol to interact with the Asset Hub parachain
  • A funded wallet to pay for the transaction fees and subsequent registration of the foreign asset

This guide will use Polkadot, its local Asset Hub instance, and the Astar parachain (ID 2006), as stated in the Test Environment Setup section. However, the process is the same for other relay chains and their respective Asset Hub parachains, regardless of the network you are using or which parachain owns the foreign asset.

"},{"location":"tutorials/blockchains/system-chains/asset-hub/register-foreign-asset/#steps-to-register-a-foreign-asset","title":"Steps to Register a Foreign Asset","text":""},{"location":"tutorials/blockchains/system-chains/asset-hub/register-foreign-asset/#asset-hub","title":"Asset Hub","text":"
  1. Open the Polkadot.js Apps interface and connect to the Asset Hub parachain using the network selector in the top left corner

    • Testing foreign asset registration is recommended on TestNet before proceeding to MainNet. If you haven't set up a local testing environment yet, consult the Environment setup guide. After setting up, connect to the Local Node (Chopsticks) at ws://127.0.0.1:8000
    • For live network operations, connect to the Asset Hub parachain. You can choose either Polkadot or Kusama Asset Hub from the dropdown menu, selecting your preferred RPC provider
  2. Navigate to the Extrinsics page

    1. Click on the Developer tab from the top navigation bar
    2. Select Extrinsics from the dropdown

  3. Select the Foreign Assets pallet

    1. Select the foreignAssets pallet from the dropdown list
    2. Choose the create extrinsic

  4. Fill out the required fields and click on the copy icon to copy the encoded call data to your clipboard. The fields to be filled are:

    • id - as this is a foreign asset, the ID will be represented by a Multilocation that reflects its origin. In this case, the Multilocation, as seen from Asset Hub, points to the source parachain:

      { parents: 1, interior: { X1: [{ Parachain: 2006 }] } }
    • admin - refers to the account that will be the admin of this asset. This account will be able to manage the asset, including updating its metadata. As the registered asset corresponds to a native asset of the source parachain, the admin account should be the sovereign account of the source parachain

      Obtain the sovereign account

      The sovereign account can be obtained through Substrate Utilities.

      Ensure that Sibling is selected and that the Para ID corresponds to the source parachain. In this case, since the guide follows the test setup stated in the Test Environment Setup section, the Para ID is 2006.

    • minBalance - the minimum balance required to hold this asset

    Encoded call data

    If you want an example of the encoded call data, you can copy the following (a programmatic sketch of the same call follows below):

    0x3500010100591f007369626cd6070000000000000000000000000000000000000000000000000000a0860100000000000000000000000000
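As referenced above, here is a minimal sketch of producing that encoded call data with @polkadot/api instead of the Apps UI. The endpoint is the local Chopsticks fork, the admin address is a placeholder for the sovereign account obtained above, and the minBalance figure is an example value only.

import { ApiPromise, WsProvider } from '@polkadot/api';

async function encodeCreateForeignAsset() {
  // Local Chopsticks fork of Polkadot Asset Hub (see Test Environment Setup).
  const assetHub = await ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:8000'),
  });

  // The foreign asset's ID: the source parachain's Multilocation.
  const assetId = { parents: 1, interior: { X1: [{ Parachain: 2006 }] } };

  // Placeholder: the sovereign (sibling) account of parachain 2006 on
  // Asset Hub, obtained via Substrate Utilities as described above.
  const admin = 'SOVEREIGN_ACCOUNT_ADDRESS';
  const minBalance = 100_000n; // example value only

  const call = assetHub.tx.foreignAssets.create(assetId, admin, minBalance);
  // This hex string is the encoded call data to embed in the XCM Transact.
  console.log(call.method.toHex());
}

encodeCreateForeignAsset().catch(console.error);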

"},{"location":"tutorials/blockchains/system-chains/asset-hub/register-foreign-asset/#source-parachain","title":"Source Parachain","text":"
  1. Navigate to the Developer > Extrinsics section
  2. Create the extrinsic to register the foreign asset through XCM

    1. Paste the encoded call data copied in the previous step
    2. Click the Submit Transaction button

    This XCM call withdraws DOT from the parachain's sibling account and uses it to buy execution for a Transact instruction. The transaction is carried out with Xcm as the origin kind and carries the hex-encoded call that creates a foreign asset on Asset Hub for the specified parachain asset multilocation. Any surplus is refunded, and the remaining assets are deposited back into the sibling account.

    Warning

    Note that the sovereign account on the Asset Hub parachain must have a sufficient balance to cover the XCM BuyExecution instruction. If the account does not have enough balance, the transaction will fail.

    Example of the encoded call data

    If you want to have the whole XCM call ready to be copied, go to the Developer > Extrinsics > Decode section and paste the following hex-encoded call data:

    0x6300330003010100a10f030c000400010000070010a5d4e81300010000070010a5d4e80006030700b4f13501419ce03500010100591f007369626cd607000000000000000000000000000000000000000000000000000000000000000000000000000000000000

    Be sure to replace the encoded call data with the one you copied in the previous step.

After the transaction is successfully executed, the foreign asset will be registered on the Asset Hub parachain.
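For orientation, the following is a rough sketch of how such an XCM message could be assembled from the source parachain with @polkadot/api. The instruction sequence mirrors the description above (WithdrawAsset, BuyExecution, Transact, RefundSurplus, DepositAsset), but the fee amount, weight limits, XCM version, and pallet name are all assumptions; verify them against your parachain's runtime before use. encodedCall stands for the hex call data produced in the previous step.

// Rough sketch only; amounts, weights, and the XCM version are assumptions.
// `api` and `signer` are connected to the source parachain; `encodedCall`
// is the foreignAssets.create call data from the previous step.
const dest = { V3: { parents: 1, interior: { X1: { Parachain: 1000 } } } };
const fee = {
  id: { Concrete: { parents: 1, interior: 'Here' } }, // DOT, from a sibling's view
  fun: { Fungible: 10_000_000_000n },                 // example: 1 DOT for fees
};
const message = {
  V3: [
    { WithdrawAsset: [fee] },
    { BuyExecution: { fees: fee, weightLimit: 'Unlimited' } },
    {
      Transact: {
        originKind: 'Xcm',
        requireWeightAtMost: { refTime: 1_000_000_000n, proofSize: 50_000n },
        call: { encoded: encodedCall },
      },
    },
    'RefundSurplus',
    {
      DepositAsset: {
        assets: { Wild: { AllCounted: 1 } },
        // Deposit the change back to the source parachain's sovereign account.
        beneficiary: { parents: 1, interior: { X1: { Parachain: 2006 } } },
      },
    },
  ],
};

await api.tx.polkadotXcm.send(dest, message).signAndSend(signer);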

"},{"location":"tutorials/blockchains/system-chains/asset-hub/register-foreign-asset/#asset-registration-verification","title":"Asset Registration Verification","text":"

To confirm that a foreign asset has been successfully accepted and registered on the Asset Hub parachain, navigate to the Network > Explorer section of the Polkadot.js Apps interface for Asset Hub. You should see an event for the processed XCM message; its success field indicates whether the asset registration was successful.

"},{"location":"tutorials/blockchains/system-chains/asset-hub/register-foreign-asset/#test-environment-setup","title":"Test Environment Setup","text":"

To test the foreign asset registration process before deploying it on a live network, you can set up a local parachain environment. This guide uses Chopsticks to simulate that process. For more information on using Chopsticks, please refer to the Chopsticks documentation.

To set up a test environment, run the following command:

npx @acala-network/chopsticks xcm \
--r polkadot \
--p polkadot-asset-hub \
--p astar

Note

The above command will create a lazy fork of Polkadot as the relay chain, its Asset Hub instance, and the Astar parachain. The xcm parameter enables communication through the XCMP protocol between the relay chain and the parachains, allowing the registration of foreign assets on Asset Hub. For further information on using the XCMP protocol with Chopsticks, refer to the XCM Testing section of the Chopsticks documentation.

After executing the command, the terminal will display output indicating the Polkadot relay chain, the Polkadot Asset Hub, and the Astar parachain are running locally and connected through XCM. You can access them individually via the Polkadot.js Apps interface.

  • Polkadot Relay Chain
  • Polkadot Asset Hub
  • Astar Parachain
"},{"location":"tutorials/blockchains/system-chains/asset-hub/register-local-asset/","title":"Register a Local Asset on Asset Hub","text":""},{"location":"tutorials/blockchains/system-chains/asset-hub/register-local-asset/#introduction","title":"Introduction","text":"

As detailed in the Asset Hub Overview page, Asset Hub accommodates two types of assets: local and foreign. Local assets are those that were created in Asset Hub and are identifiable by an integer ID. On the other hand, foreign assets originate from a sibling parachain and are identified by a Multilocation.

This guide will take you through the steps of registering a local asset on the Asset Hub parachain.

"},{"location":"tutorials/blockchains/system-chains/asset-hub/register-local-asset/#prerequisites","title":"Prerequisites","text":"

Before you begin, ensure you have access to the Polkadot.js Apps interface and a funded wallet with DOT or KSM.

  • For Polkadot Asset Hub, you need a deposit of 10 DOT and around 0.201 DOT for the metadata
  • For Kusama Asset Hub, the deposit is 0.1 KSM and around 0.000669 KSM for the metadata

Ensure that your Asset Hub account balance is slightly more than the sum of those two deposits so it can cover the required deposits and the transaction fees.

"},{"location":"tutorials/blockchains/system-chains/asset-hub/register-local-asset/#steps-to-register-a-local-asset","title":"Steps to Register a Local Asset","text":"

To register a local asset on the Asset Hub parachain, follow these steps (a programmatic sketch of the same flow appears after the list):

  1. Open the Polkadot.js Apps interface and connect to the Asset Hub parachain using the network selector in the top left corner

    • You may prefer to test local asset registration on TestNet before registering the asset on a MainNet hub. If you still need to set up a local testing environment, review the Environment setup section for instructions. Once the local environment is set up, connect to the Local Node (Chopsticks) available on ws://127.0.0.1:8000
    • For the live network, connect to the Asset Hub parachain. Either Polkadot or Kusama Asset Hub can be selected from the dropdown list, choosing the desired RPC provider
  2. Click on the Network tab on the top navigation bar and select Assets from the dropdown list

  3. Now, you need to examine all the registered asset IDs. This step is crucial to ensure that the asset ID you are about to register is unique. Asset IDs are displayed in the assets column

  4. Once you have confirmed that the asset ID is unique, click on the Create button on the top right corner of the page

  5. Fill in the required fields in the Create Asset form:

    1. creator account - the account to be used for creating this asset and setting up the initial metadata
    2. asset name - the descriptive name of the asset you are registering
    3. asset symbol - the symbol that will be used to represent the asset
    4. asset decimals - the number of decimal places for this token, with a maximum of 20 allowed through the user interface
    5. minimum balance - the minimum balance for the asset, specified in the units and decimals entered above
    6. asset ID - the selected ID for the asset. This should not match an already-existing asset ID
    7. Click on the Next button

  6. Choose the accounts for the roles listed below:

    1. admin account - the account designated for continuous administration of the token
    2. issuer account - the account that will be used for issuing this token
    3. freezer account - the account that will be used for performing token freezing operations
    4. Click on the Create button

  7. Click on the Sign and Submit button to complete the asset registration process
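As mentioned above, here is a programmatic sketch of the same flow using @polkadot/api against the local Chopsticks fork. The asset ID, name, symbol, decimals, and minimum balance are example values, and the //Alice development account stands in for your creator account.

import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function registerLocalAsset() {
  // Local Chopsticks fork of Polkadot Asset Hub (see Test Setup Environment).
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:8000'),
  });
  const creator = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');

  const assetId = 9999;        // example; must not collide with an existing asset ID
  const minBalance = 100_000n; // in the asset's base units; example value

  // assets.create registers the asset with `creator` as admin; setMetadata
  // attaches name, symbol, and decimals. Both reserve the deposits noted above.
  await api.tx.utility
    .batchAll([
      api.tx.assets.create(assetId, { Id: creator.address }, minBalance),
      api.tx.assets.setMetadata(assetId, 'Demo Asset', 'DEMO', 10),
    ])
    .signAndSend(creator);
}

registerLocalAsset().catch(console.error);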

"},{"location":"tutorials/blockchains/system-chains/asset-hub/register-local-asset/#verify-asset-registration","title":"Verify Asset Registration","text":"

After completing these steps, the asset will be successfully registered. You can now view your asset listed on the Assets section of the Polkadot.js Apps interface.

Note

Note that the Assets section's link may differ depending on the network you are using. For the local environment, enter ws://127.0.0.1:8000 into the Custom Endpoint field.

In this way, you have successfully registered a local asset on the Asset Hub parachain.

For an in-depth explanation of Asset Hub and its features, please refer to the Polkadot Wiki page on Asset Hub.

"},{"location":"tutorials/blockchains/system-chains/asset-hub/register-local-asset/#test-setup-environment","title":"Test Setup Environment","text":"

You can set up a local parachain environment to test the asset registration process before deploying it on the live network. This guide uses Chopsticks to simulate that process. For further information on Chopsticks usage, refer to the Chopsticks documentation.

To set up a test environment, execute the following command:

npx @acala-network/chopsticks \
--config=https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot-asset-hub.yml

Note

The above command will spawn a lazy fork of Polkadot Asset Hub with the latest block data from the network. If you need to test Kusama Asset Hub, replace polkadot-asset-hub.yml with kusama-asset-hub.yml in the command.

An Asset Hub instance is now running locally, and you can proceed with the asset registration process. Note that the local registration process does not differ from the live network process. Once you have a successful TestNet transaction, you can use the same steps to register the asset on MainNet.

"}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml new file mode 100644 index 00000000..0f8724ef --- /dev/null +++ b/sitemap.xml @@ -0,0 +1,3 @@ + + + \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz new file mode 100644 index 00000000..9e000f53 Binary files /dev/null and b/sitemap.xml.gz differ diff --git a/tutorials/blockchains/index.html b/tutorials/blockchains/index.html new file mode 100644 index 00000000..afdea4e9 --- /dev/null +++ b/tutorials/blockchains/index.html @@ -0,0 +1,3436 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Blockchain Tutorials | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + + + +
+
+
+ + + + + + +
+
+
+ + + + + +
+
+
+ + + + + +
+
+
+ + +
+ +
+ + + + + + +
+ + + + + + + + + + + + + + + + + + + + + + + + +

Blockchain

+
+ + + + + + + + + + + + +
+ + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/tutorials/blockchains/system-chains/asset-hub/asset-conversion/index.html b/tutorials/blockchains/system-chains/asset-hub/asset-conversion/index.html new file mode 100644 index 00000000..bc89dd33 --- /dev/null +++ b/tutorials/blockchains/system-chains/asset-hub/asset-conversion/index.html @@ -0,0 +1,3920 @@ + + + + + + + + + + + + + + + + + + + + + + + + Convert Assets on Asset Hub | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + + + + +
+ +
+ + + + +

Asset Conversion on Asset Hub

+

Introduction

+

Asset Conversion is an Automated Market Maker (AMM) utilizing Uniswap V2 logic and implemented as a pallet on Polkadot's Asset Hub. For more details about this feature, please visit the Asset Conversion on Asset Hub wiki page.

+

This guide will provide detailed information about the key functionalities offered by the Asset Conversion pallet on Asset Hub, including:

+
    +
  • Creating a liquidity pool
  • +
  • Adding liquidity to a pool
  • +
  • Swapping assets
  • +
  • Withdrawing liquidity from a pool
  • +
+

Prerequisites

+

Before converting assets on Asset Hub, you must ensure you have:

+
    +
  • Access to the Polkadot.js Apps interface and a connection with the intended blockchain
  • +
  • A funded wallet containing the assets you wish to convert and enough available funds to cover the transaction fees
  • +
  • An asset registered on Asset Hub that you want to convert. If you haven't created an asset on Asset Hub yet, refer to the Register a Local Asset or Register a Foreign Asset documentation to create an asset.
  • +
+

Creating a Liquidity Pool

+

If an asset on Asset Hub does not have an existing liquidity pool, the first step is to create one.

+

The asset conversion pallet provides the createPool extrinsic to create a new liquidity pool, creating an empty liquidity pool and a new LP token asset.

+
+

Note

+

A testing token with the asset ID 1112 and the name PPM was created for this example.

+
+

As stated in the Test Environment Setup section, this tutorial is based on the assumption that you have an instance of Polkadot Asset Hub running locally. Therefore, the demo liquidity pool will be created between DOT and PPM tokens. However, the same steps can be applied to any other asset on Asset Hub.

+

From the Asset Hub perspective, the Multilocation that identifies the PPM token is the following:

+
{
+  parents: 0,
+  interior: {
+    X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
+  }
+}
+
+
+

Note

+

The PalletInstance value of 50 represents the Assets pallet on Asset Hub. The GeneralIndex value of 1112 is the PPM asset's asset ID.

+
+

To create the liquidity pool, you can follow these steps:

+
    +
  1. +

    Navigate to the Extrinsics section on the Polkadot.js Apps interface

    +
      +
    1. Select Developer from the top menu
    2. +
    3. Click on Extrinsics from the dropdown menu
    4. +
    +

    Extrinsics Section

    +
  2. +
  3. +

    Choose the AssetConversion pallet and click on the createPool extrinsic

    +
      +
    1. Select the AssetConversion pallet
    2. +
    3. Choose the createPool extrinsic from the list of available extrinsics
    4. +
    +

    Create Pool Extrinsic

    +
  4. +
  5. +

    Fill in the required fields:

    +
      +
    1. +

      asset1 - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

      +
      {
      +  parents: 0,
      +  interior: 'Here'
      +}
      +
      +
    2. +
    3. +

      asset2 - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

      +
      {
      +  parents: 0,
      +  interior: {
      +    X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
      +  }
      +}
      +
      +
    4. +
    5. +

      Click on Submit Transaction to create the liquidity pool

      +
    6. +
    +

    Create Pool Fields

    +
  6. +
+

Signing and submitting the transaction triggers the creation of the liquidity pool. To verify the new pool's creation, check the Explorer section on the Polkadot.js Apps interface and ensure that the PoolCreated event was emitted.

+

Pool Created Event

+

As the preceding image shows, the lpToken ID created for this pool is 19. This ID is essential to identify the liquidity pool and associated LP tokens.

+

Adding Liquidity to a Pool

+

The addLiquidity extrinsic allows users to provide liquidity to a pool of two assets. Users specify their preferred amounts for both assets and minimum acceptable quantities. The function determines the best asset contribution, which may vary from the amounts desired but won't fall below the specified minimums. Providers receive liquidity tokens representing their pool portion in return for their contribution.

+

To add liquidity to a pool, follow these steps:

+
    +
  1. +

    Navigate to the Extrinsics section on the Polkadot.js Apps interface

    +
      +
    1. Select Developer from the top menu
    2. +
    3. Click on Extrinsics from the dropdown menu
    4. +
    +

    Extrinsics Section

    +
  2. +
  3. +

    Choose the assetConversion pallet and click on the addLiquidity extrinsic

    +
      +
    1. Select the assetConversion pallet
    2. +
    3. Choose the addLiquidity extrinsic from the list of available extrinsics
    4. +
    +

    Add Liquidity Extrinsic

    +
  4. +
  5. +

    Fill in the required fields:

    +
      +
    1. +

      asset1 - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

      +
      {
      +  parents: 0,
      +  interior: 'Here'
      +}
      +
      +
    2. +
    3. +

      asset2 - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

      +
      {
      +  parents: 0,
      +  interior: {
      +    X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
      +  }
      +}
      +
      +
    4. +
    5. +

      amount1Desired - the amount of the first asset that will be contributed to the pool

      +
    6. +
    7. amount2Desired - the quantity of the second asset intended for pool contribution
    8. +
    9. amount1Min - the minimum amount of the first asset that will be contributed
    10. +
    11. amount2Min - the lowest acceptable quantity of the second asset for contribution
    12. +
    13. mintTo - the account to which the liquidity tokens will be minted
    14. +
    15. Click on Submit Transaction to add liquidity to the pool
    16. +
    +

    Add Liquidity Fields

    +
    +

    Warning

    +

    Ensure that the appropriate amount of tokens provided has been minted previously and is available in your account before adding liquidity to the pool.

    +
    +

    In this case, the liquidity provided to the pool is between DOT tokens and PPM tokens with the asset ID 1112 on Polkadot Asset Hub. The intention is to provide liquidity for 1 DOT token (u128 value of 1000000000000 as it has 10 decimals) and 1 PPM token (u128 value of 1000000000000 as it also has 10 decimals).

    +
  6. +
+

Signing and submitting the transaction adds liquidity to the pool. To verify the liquidity addition, check the Explorer section on the Polkadot.js Apps interface and ensure that the LiquidityAdded event was emitted.

+

Liquidity Added Event

+

Swapping Assets

+

Swapping From an Exact Amount of Tokens

+

The asset conversion pallet enables users to exchange a specific quantity of one asset for another in a designated liquidity pool by swapping them for an exact amount of tokens. It guarantees the user will receive at least a predetermined minimum amount of the second asset. This function increases trading predictability and allows users to conduct asset exchanges with confidence that they are assured a minimum return.

+

To swap assets for an exact amount of tokens, follow these steps:

+
    +
  1. +

    Navigate to the Extrinsics section on the Polkadot.js Apps interface

    +
      +
    1. Select Developer from the top menu
    2. +
    3. Click on Extrinsics from the dropdown menu
    4. +
    +

    Extrinsics Section

    +
  2. +
  3. +

    Choose the AssetConversion pallet and click on the swapExactTokensForTokens extrinsic

    +
      +
    1. Select the AssetConversion pallet
    2. +
    3. Choose the swapExactTokensForTokens extrinsic from the list of available extrinsics
    4. +
    +

    Swap From Exact Tokens Extrinsic

    +
  4. +
  5. +

    Fill in the required fields:

    +
      +
    1. +

      path:Vec<StagingXcmV3MultiLocation> - an array of Multilocations representing the path of the swap. The first and last elements of the array are the input and output assets, respectively. In this case, the path consists of two elements:

      +
        +
      • +

        0: StagingXcmV3MultiLocation - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

        +
        {
        +  parents: 0,
        +  interior: 'Here'
        +}
        +
        +
      • +
      • +

        1: StagingXcmV3MultiLocation - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

        +
        {
        +  parents: 0,
        +  interior: {
        +    X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
        +  }
        +}
        +
        +
      • +
      +
    2. +
    3. +

      amountOut - the exact amount of the second asset that the user wants to receive

      +
    4. +
    5. amountInMax - the maximum amount of the first asset that the user is willing to swap
    6. +
    7. sendTo - the account to which the swapped assets will be sent
    8. +
    9. keepAlive - a boolean value that determines whether the pool should be kept alive after the swap
    10. +
    11. Click on Submit Transaction to swap assets for an exact amount of tokens
    12. +
    +

    Swap For Exact Tokens Fields

    +
    +

    Warning

    +

    Ensure that the appropriate amount of tokens provided has been minted previously and is available in your account before adding liquidity to the pool.

    +
    +

    In this case, the intention is to swap 0.01 DOT token (u128 value of 100000000000 as it has 10 decimals) for 0.04 PPM token (u128 value of 400000000000 as it also has 10 decimals).

    +
  6. +
+

Signing and submitting the transaction will execute the swap. To verify execution, check the Explorer section on the Polkadot.js Apps interface and make sure that the SwapExecuted event was emitted.

+

Swap From Exact Tokens Event

+

Swapping To an Exact Amount of Tokens

+

Conversely, the Asset Conversion pallet comes with a function that allows users to trade a variable amount of one asset to acquire a precise quantity of another. It ensures that users stay within a set maximum of the initial asset to obtain the desired amount of the second asset. This provides a method to control transaction costs while achieving the intended result.

+

To swap assets for an exact amount of tokens, follow these steps:

+
    +
  1. +

    Navigate to the Extrinsics section on the Polkadot.js Apps interface

    +
      +
    1. Select Developer from the top menu
    2. +
    3. Click on Extrinsics from the dropdown menu
    4. +
    +

    Extrinsics Section

    +
  2. +
  3. +

    Choose the AssetConversion pallet and click on the swapTokensForExactTokens extrinsic:

    +
      +
    1. Select the AssetConversion pallet
    2. +
    3. Choose the swapTokensForExactTokens extrinsic from the list of available extrinsics
    4. +
    +

    Swap Tokens For Exact Tokens Extrinsic

    +
  4. +
  5. +

    Fill in the required fields:

    +
      +
    1. +

      path:Vec<StagingXcmV3MultiLocation\> - an array of Multilocations representing the path of the swap. The first and last elements of the array are the input and output assets, respectively. In this case, the path consists of two elements:

      +
        +
      • +

        0: StagingXcmV3MultiLocation - the Multilocation of the first asset in the pool. In this case, it is the PPM token, which the following Multilocation represents:

        +
        {
        +  parents: 0,
        +  interior: {
        +    X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
        +  }
        +}
        +
        +
      • +
      • +

        1: StagingXcmV3MultiLocation - the second asset's Multilocation within the pool. This refers to the DOT token, which the following Multilocation identifies:

        +
        {
        +  parents: 0,
        +  interior: 'Here'
        +}
        +
        +
      • +
      +
    2. +
    3. +

      amountOut - the exact amount of the second asset that the user wants to receive

      +
    4. +
    5. amountInMax - the maximum amount of the first asset that the user is willing to swap
    6. +
    7. sendTo - the account to which the swapped assets will be sent
    8. +
    9. keepAlive - a boolean value that determines whether the pool should be kept alive after the swap
    10. +
    11. Click on Submit Transaction to swap assets for an exact amount of tokens
    12. +
    +

    Swap Tokens For Exact Tokens Fields

    +
    +

    Warning

    +

    Before swapping assets, ensure that the tokens provided have been minted previously and are available in your account.

    +
    +

    In this case, the intention is to swap 0.01 DOT token (u128 value of 100000000000 as it has ten decimals) for 0.04 PPM token (u128 value of 400000000000 as it also has ten decimals).

    +
  6. +
+

Signing and submitting the transaction will execute the swap. To verify execution, check the Explorer section on the Polkadot.js Apps interface and make sure that the SwapExecuted event was emitted.

+

Swap Tokens For Exact Tokens Event

+

Withdrawing Liquidity from a Pool

+

The Asset Conversion pallet provides the removeLiquidity extrinsic to remove liquidity from a pool. This function allows users to withdraw the liquidity they offered from a pool, returning the original assets. When calling this function, users specify the number of liquidity tokens (representing their share in the pool) they wish to burn. They also set minimum acceptable amounts for the assets they expect to receive back. This mechanism ensures that users can control the minimum value they receive, protecting against unfavorable price movements during the withdrawal process.

+

To withdraw liquidity from a pool, follow these steps:

+
    +
  1. +

    Navigate to the Extrinsics section on the Polkadot.js Apps interface

    +
      +
    1. Select Developer from the top menu
    2. +
    3. Click on Extrinsics from the dropdown menu
    4. +
    +

    Extrinsics Section

    +
  2. +
  3. +

    Choose the AssetConversion pallet and click on the remove_liquidity extrinsic

    +
      +
    1. Select the AssetConversion pallet
    2. +
    3. Choose the removeLiquidity extrinsic from the list of available extrinsics
    4. +
    +

    Remove Liquidity Extrinsic

    +
  4. +
  5. +

    Fill in the required fields:

    +
      +
    1. +

      asset1 - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

      +
      {
      +  parents: 0,
      +  interior: 'Here'
      +}
      +
      +
    2. +
    3. +

      asset2 - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

      +
      {
      +  parents: 0,
      +  interior: {
      +    X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
      +  }
      +}
      +
      +
    4. +
    5. +

      lpTokenBurn - the number of liquidity tokens to burn

      +
    6. +
    7. amount1MinReceived - the minimum amount of the first asset that the user expects to receive
    8. +
    9. amount2MinReceived - the minimum quantity of the second asset the user expects to receive
    10. +
    11. withdrawTo - the account to which the withdrawn assets will be sent
    12. +
    13. Click on Submit Transaction to withdraw liquidity from the pool
    14. +
    +

    Remove Liquidity Fields

    +
    +

    Warning

    +

    Ensure that the tokens provided have been minted previously and are available in your account before withdrawing liquidity from the pool.

    +
    +

    In this case, the intention is to withdraw 0.05 liquidity tokens from the pool, expecting to receive 0.004 DOT token (u128 value of 40000000000 as it has 10 decimals) and 0.04 PPM token (u128 value of 400000000000 as it also has 10 decimals).

    +
  6. +
+

Signing and submitting the transaction will initiate the withdrawal of liquidity from the pool. To verify the withdrawal, check the Explorer section on the Polkadot.js Apps interface and ensure that the LiquidityRemoved event was emitted.

+

Remove Liquidity Event

+

Test Environment Setup

+

To test the Asset Conversion pallet, you can set up a local test environment to simulate different scenarios. This guide uses Chopsticks to spin up an instance of Polkadot Asset Hub. For further details on using Chopsticks, please refer to the Chopsticks documentation.

+

To set up a local test environment, execute the following command:

+
npx @acala-network/chopsticks \
+--config=https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot-asset-hub.yml
+
+
+

Note

+

This command initiates a lazy fork of Polkadot Asset Hub, including the most recent block information from the network. For Kusama Asset Hub testing, simply switch out polkadot-asset-hub.yml with kusama-asset-hub.yml in the command.

+
+

You now have a local Asset Hub instance up and running, ready for you to test various asset conversion procedures. The process here mirrors what you'd do on MainNet. After completing a transaction on TestNet, you can apply the same steps to convert assets on MainNet.

+
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/tutorials/blockchains/system-chains/asset-hub/index.html b/tutorials/blockchains/system-chains/asset-hub/index.html new file mode 100644 index 00000000..df45ccb0 --- /dev/null +++ b/tutorials/blockchains/system-chains/asset-hub/index.html @@ -0,0 +1,3516 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Asset Hub Tutorials | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + + + +
+
+
+ + + + + + +
+
+
+ + + + + +
+
+
+ + + + + +
+
+
+ + + + + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + \ No newline at end of file diff --git a/tutorials/blockchains/system-chains/asset-hub/register-foreign-asset/index.html b/tutorials/blockchains/system-chains/asset-hub/register-foreign-asset/index.html new file mode 100644 index 00000000..4dac7a8e --- /dev/null +++ b/tutorials/blockchains/system-chains/asset-hub/register-foreign-asset/index.html @@ -0,0 +1,3682 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Register a Foreign Asset on Asset Hub | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + + + + +
+ +
+ + + + +

Register a Foreign Asset on Asset Hub

+

Introduction

+

As outlined in the Asset Hub Overview, Asset Hub supports two categories of assets: local and foreign. Local assets are created on the Asset Hub system parachain and are identified by integer IDs. On the other hand, foreign assets, which originate outside of Asset Hub, are recognized by Multilocations.

+

When registering a foreign asset on Asset Hub, it's essential to notice that the process involves communication between two parachains. The Asset Hub parachain will be the destination of the foreign asset, while the source parachain will be the origin of the asset. The communication between the two parachains is facilitated by the Cross-Chain Message Passing (XCMP) protocol.

+

This guide will take you through the process of registering a foreign asset on the Asset Hub parachain.

+

Prerequisites

+

The Asset Hub parachain is one of the system parachains on a relay chain, such as Polkadot or Kusama. To interact with these parachains, you can use the Polkadot.js Apps interface for:

+ +

For testing purposes, you can also interact with the Asset Hub instance on the following test networks:

+ +

Before you start, ensure that you have:

+
    +
  • Access to the Polkadot.js Apps interface, and you are connected to the desired chain
  • +
  • A parachain that supports the XCMP protocol to interact with the Asset Hub parachain
  • +
  • A funded wallet to pay for the transaction fees and subsequent registration of the foreign asset
  • +
+

This guide will use Polkadot, its local Asset Hub instance, and the Astar parachain (ID 2006), as stated in the Test Environment Setup section. However, the process is the same for other relay chains and their respective Asset Hub parachain, regardless of the network you are using and the parachain owner of the foreign asset.

+

Steps to Register a Foreign Asset

+

Asset Hub

+
    +
  1. +

    Open the Polkadot.js Apps interface and connect to the Asset Hub parachain using the network selector in the top left corner

    +
      +
    • Testing foreign asset registration is recommended on TestNet before proceeding to MainNet. If you haven't set up a local testing environment yet, consult the Environment setup guide. After setting up, connect to the Local Node (Chopsticks) at ws://127.0.0.1:8000
    • +
    • For live network operations, connect to the Asset Hub parachain. You can choose either Polkadot or Kusama Asset Hub from the dropdown menu, selecting your preferred RPC provider
    • +
    +
  2. +
  3. +

    Navigate to the Extrinsics page

    +
      +
    1. Click on the Developer tab from the top navigation bar
    2. +
    3. Select Extrinsics from the dropdown
    4. +
    +

    Access to Developer Extrinsics section

    +
  4. +
  5. +

    Select the Foreign Assets pallet

    +
      +
    1. Select the foreignAssets pallet from the dropdown list
    2. +
    3. Choose the create extrinsic
    4. +
    +

    Select the Foreign Asset pallet

    +
  6. +
  7. +

    Fill out the required fields and click on the copy icon to copy the encoded call data to your clipboard. The fields to be filled are:

    +
      +
    • id - as this is a foreign asset, the ID is represented by a Multilocation that reflects its origin. In this case, the Multilocation points to the source parachain, as seen from Asset Hub:

      +
      { parents: 1, interior: { X1: [{ Parachain: 2006 }] } }
      +
      +
    • admin - refers to the account that will be the admin of this asset. This account will be able to manage the asset, including updating its metadata. As the registered asset corresponds to a native asset of the source parachain, the admin account should be the sovereign account of the source parachain

      +
      +Obtain the sovereign account +

      The sovereign account can be obtained through Substrate Utilities.

      +

      Ensure that Sibling is selected and that the Para ID corresponds to the source parachain. In this case, since the guide follows the test setup stated in the Test Environment Setup section, the Para ID is 2006.

      +

      Get parachain sovereign account

      +
      +
    • minBalance - the minimum balance required to hold this asset

      +
    +

    Fill out the required fields

    +
    +Encoded call data +

    If you want an example of the encoded call data, you can copy the following: +

    0x3500010100591f007369626cd6070000000000000000000000000000000000000000000000000000a0860100000000000000000000000000
    +

    +
    +
+
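If you prefer to build the same create call programmatically, below is a minimal sketch using the Polkadot.js API. It assumes a local Chopsticks fork of Asset Hub at ws://127.0.0.1:8000; the sibling sovereign account derivation shown (the bytes b"sibl" followed by the little-endian para ID, zero-padded to 32 bytes) is the usual Substrate convention, but you should cross-check the result against Substrate Utilities as described above:

```ts
import { ApiPromise, WsProvider } from '@polkadot/api';
import { u8aConcat } from '@polkadot/util';
import { encodeAddress } from '@polkadot/util-crypto';

// Sibling sovereign account: b"sibl" ++ u32-LE para ID, zero-padded to 32 bytes
function siblingSovereignAccount(paraId: number): string {
  const prefix = new TextEncoder().encode('sibl');
  const id = new Uint8Array(4);
  new DataView(id.buffer).setUint32(0, paraId, true); // little-endian
  const account = u8aConcat(prefix, id, new Uint8Array(24));
  return encodeAddress(account, 0); // 0 = Polkadot SS58 prefix
}

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:8000'), // local Chopsticks Asset Hub
  });

  // Multilocation identifying the source parachain's native asset
  const assetId = { parents: 1, interior: { X1: [{ Parachain: 2006 }] } };
  const admin = siblingSovereignAccount(2006);
  const minBalance = 100_000; // matches the example encoded call data above

  const call = api.tx.foreignAssets.create(assetId, { Id: admin }, minBalance);
  console.log('Encoded call data:', call.method.toHex());

  await api.disconnect();
}

main().catch(console.error);
```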

Source Parachain

+
    +
  1. Navigate to the Developer > Extrinsics section
  2.

    Create the extrinsic to register the foreign asset through XCM

    +
      +
    1. Paste the encoded call data copied in the previous step
    2. Click the Submit Transaction button
    +

    Register foreign asset through XCM

    +

This XCM call withdraws DOT from the parachain's sovereign (sibling) account and uses it to buy execution on Asset Hub. The transaction is carried out with Xcm as the origin kind, dispatching a hex-encoded call that creates a foreign asset on Asset Hub for the specified parachain asset Multilocation. Any surplus is refunded, and the remaining assets are deposited back into the sibling account (see the sketch after this list).

    +
    +

    Warning

    +

Note that the sovereign account on the Asset Hub parachain must hold enough balance to cover the fees charged by the XCM BuyExecution instruction. If the account does not have enough balance, the transaction will fail.

    +
    +
    +Example of the encoded call data +

    If you want to have the whole XCM call ready to be copied, go to the Developer > Extrinsics > Decode section and paste the following hex-encoded call data: +

    0x6300330003010100a10f030c000400010000070010a5d4e81300010000070010a5d4e80006030700b4f13501419ce03500010100591f007369626cd607000000000000000000000000000000000000000000000000000000000000000000000000000000000000
    +

    +

Make sure to replace the encoded call data above with the one you copied in the previous step.

    +
    +
+
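For reference, this is roughly what the source-parachain extrinsic looks like when built with the Polkadot.js API. Treat it as a sketch only: the pallet name (polkadotXcm), XCM version (V3), local port, fee amount, and weight values are assumptions for illustration, and the decoded call data above remains the authoritative form:

```ts
import { ApiPromise, WsProvider } from '@polkadot/api';

async function sendRegistration(createCallHex: string) {
  // Connect to the source parachain (port assumed from the Chopsticks setup)
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:8001'),
  });

  // Destination: Asset Hub (para 1000), a sibling of the source parachain
  const dest = { V3: { parents: 1, interior: { X1: { Parachain: 1000 } } } };

  // DOT as seen from Asset Hub: the relay chain's native asset
  const dot = {
    id: { Concrete: { parents: 1, interior: 'Here' } },
    fun: { Fungible: 1_000_000_000_000n }, // illustrative amount
  };

  const message = {
    V3: [
      // Withdraw DOT from this parachain's sovereign account on Asset Hub
      { WithdrawAsset: [dot] },
      // Pay for the execution of this message
      { BuyExecution: { fees: dot, weightLimit: 'Unlimited' } },
      // Dispatch the foreignAssets.create call copied earlier
      {
        Transact: {
          originKind: 'Xcm',
          requireWeightAtMost: { refTime: 1_000_000_000n, proofSize: 65_536n }, // illustrative
          call: { encoded: createCallHex },
        },
      },
    ],
  };

  const tx = api.tx.polkadotXcm.send(dest, message);
  console.log('Call data:', tx.method.toHex());
  await api.disconnect();
}

sendRegistration('0x3500...').catch(console.error); // paste your encoded create call
```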

After the transaction is successfully executed, the foreign asset will be registered on the Asset Hub parachain.

+

Asset Registration Verification

+

To confirm that a foreign asset has been successfully registered on the Asset Hub parachain, navigate to the Network > Explorer section of the Polkadot.js Apps interface for Asset Hub. You should see an event that includes the following details:

+

Asset registration event

+

In the image above, the success field indicates whether the asset registration was successful.

+

Test Environment Setup

+

To test the foreign asset registration process before deploying it on a live network, you can set up a local parachain environment. This guide uses Chopsticks to simulate that process. For more information on using Chopsticks, please refer to the Chopsticks documentation.

+

To set up a test environment, run the following command:

+
npx @acala-network/chopsticks xcm \
+--r polkadot \
+--p polkadot-asset-hub \
+--p astar
+
+
+

Note

+

The above command creates a lazy fork of Polkadot as the relay chain, along with its Asset Hub instance and the Astar parachain. The xcm parameter enables communication over the XCMP protocol between the relay chain and the parachains, allowing foreign assets to be registered on Asset Hub. For more information on how Chopsticks handles the XCMP protocol, refer to the XCM Testing section of the Chopsticks documentation.

+
+

After executing the command, the terminal output will indicate that the Polkadot relay chain, Polkadot Asset Hub, and the Astar parachain are running locally and connected through XCM. You can access each chain individually via the Polkadot.js Apps interface.
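Each chain is exposed on its own local WebSocket port. The exact assignment is printed in the terminal output; the ports in the following sketch are the typical defaults (parachains in the order given, relay chain last) and should be treated as assumptions:

```ts
import { ApiPromise, WsProvider } from '@polkadot/api';

async function listChains() {
  // Assumed Chopsticks port assignment; confirm against the terminal output
  const endpoints = [
    'ws://127.0.0.1:8000', // polkadot-asset-hub
    'ws://127.0.0.1:8001', // astar
    'ws://127.0.0.1:8002', // polkadot relay chain
  ];
  for (const url of endpoints) {
    const api = await ApiPromise.create({ provider: new WsProvider(url) });
    console.log(url, '->', (await api.rpc.system.chain()).toString());
    await api.disconnect();
  }
}

listChains().catch(console.error);
```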

\ No newline at end of file
diff --git a/tutorials/blockchains/system-chains/asset-hub/register-local-asset/index.html b/tutorials/blockchains/system-chains/asset-hub/register-local-asset/index.html
new file mode 100644
index 00000000..d44bbd0a
--- /dev/null
+++ b/tutorials/blockchains/system-chains/asset-hub/register-local-asset/index.html
@@ -0,0 +1,3590 @@

Register a Local Asset | Polkadot Developer Docs

Register a Local Asset on Asset Hub

+

Introduction

+

As detailed in the Asset Hub Overview page, Asset Hub accommodates two types of assets: local and foreign. Local assets are those that were created in Asset Hub and are identifiable by an integer ID. On the other hand, foreign assets originate from a sibling parachain and are identified by a Multilocation.

+

This guide will take you through the steps of registering a local asset on the Asset Hub parachain.

+

Prerequisites

+

Before you begin, ensure you have access to the Polkadot.js Apps interface and a funded wallet with DOT or KSM.

+
    +
  • For Polkadot Asset Hub, you need a deposit of 10 DOT plus around 0.201 DOT for the metadata
  • For Kusama Asset Hub, the deposit is 0.1 KSM plus around 0.000669 KSM for the metadata
+

Ensure that your Asset Hub account balance is slightly more than the sum of those two deposits (for example, a bit over 10.201 DOT on Polkadot); the extra margin covers the transaction fees.

+

Steps to Register a Local Asset

+

To register a local asset on the Asset Hub parachain, follow these steps (a programmatic sketch of the same flow follows the list):

+
    +
1.

    Open the Polkadot.js Apps interface and connect to the Asset Hub parachain using the network selector in the top left corner

    +
      +
    • You may prefer to test local asset registration on TestNet before registering the asset on a MainNet hub. If you still need to set up a local testing environment, review the Test Environment Setup section for instructions; once it's running, connect to the Local Node (Chopsticks) at ws://127.0.0.1:8000
    • For the live network, connect to the Asset Hub parachain: select either Polkadot or Kusama Asset Hub from the dropdown list and choose the desired RPC provider
    +
  2.

    Click on the Network tab on the top navigation bar and select Assets from the dropdown list

    +

    Access to Asset Hub through Polkadot.JS

    +
  3.

Next, examine all the registered asset IDs, which are displayed in the assets column. This step is crucial to ensure that the asset ID you are about to register is unique.

    +

    Asset IDs on Asset Hub

    +
  4.

    Once you have confirmed that the asset ID is unique, click on the Create button on the top right corner of the page

    +

    Create a new asset

    +
  5.

    Fill in the required fields in the Create Asset form:

    +
      +
    1. creator account - the account to be used for creating this asset and setting up the initial metadata
    2. asset name - the descriptive name of the asset you are registering
    3. asset symbol - the symbol that will be used to represent the asset
    4. asset decimals - the number of decimal places for this token, with a maximum of 20 allowed through the user interface
    5. minimum balance - the minimum balance for the asset, specified in the units and decimals requested
    6. asset ID - the selected ID for the asset; it must not match an already-existing asset ID
    7. Click on the Next button
    +

    Create Asset Form

    +
  6.

    Choose the accounts for the roles listed below:

    +
      +
    1. admin account - the account designated for continuous administration of the token
    2. issuer account - the account that will be used for issuing this token
    3. freezer account - the account that will be used for performing token freezing operations
    4. Click on the Create button
    +

    Admin, Issuer, Freezer accounts

    +
  7.

    Click on the Sign and Submit button to complete the asset registration process

    +

    Sign and Submit

    +
+
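As promised above, here is a minimal programmatic sketch of the same flow using the Polkadot.js API. It assumes a local Chopsticks fork at ws://127.0.0.1:8000 and the //Alice dev account as creator; the asset ID, metadata, and minimum balance are placeholders:

```ts
import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function registerLocalAsset() {
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:8000'),
  });
  const keyring = new Keyring({ type: 'sr25519' });
  const creator = keyring.addFromUri('//Alice'); // dev account, local testing only

  const assetId = 12345;        // must not collide with an existing asset ID
  const minBalance = 1_000_000; // in the asset's smallest units

  // create reserves the asset deposit; setMetadata reserves the metadata deposit
  const txs = [
    api.tx.assets.create(assetId, { Id: creator.address }, minBalance),
    api.tx.assets.setMetadata(assetId, 'My Token', 'MYT', 10),
  ];

  await api.tx.utility.batchAll(txs).signAndSend(creator, ({ status }) => {
    if (status.isInBlock) {
      console.log('Included in block', status.asInBlock.toHex());
    }
  });
}

registerLocalAsset().catch(console.error);
```

In this sketch the creator is also set as the asset's admin, issuer, and freezer; to assign different accounts to those roles, as in step 6, an assets.setTeam call can be appended to the batch.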

Verify Asset Registration

+

After completing these steps, the asset will be successfully registered. You can now view your asset listed on the Assets section of the Polkadot.js Apps interface.

+
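You can also verify the registration programmatically by querying the assets pallet (the asset ID below matches the placeholder used in the earlier sketch):

```ts
import { ApiPromise, WsProvider } from '@polkadot/api';

async function verifyLocalAsset() {
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:8000'),
  });

  const assetId = 12345; // the ID you registered
  console.log((await api.query.assets.asset(assetId)).toHuman());    // owner, admin, supply, ...
  console.log((await api.query.assets.metadata(assetId)).toHuman()); // name, symbol, decimals

  await api.disconnect();
}

verifyLocalAsset().catch(console.error);
```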

Asset listed on Polkadot.js Apps

+
+

Note

+

Note that the link to the Assets section may differ depending on the network you are using. For the local environment, enter ws://127.0.0.1:8000 into the Custom Endpoint field.

+
+

With that, you have successfully registered a local asset on the Asset Hub parachain.

+

For an in-depth explanation of Asset Hub and its features, please refer to the Polkadot Wiki page on Asset Hub.

+

Test Environment Setup

+

You can set up a local parachain environment to test the asset registration process before deploying it on a live network. This guide uses Chopsticks to simulate that process. For further information on using Chopsticks, refer to the Chopsticks documentation.

+

To set up a test environment, execute the following command:

+
npx @acala-network/chopsticks \
+--config=https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot-asset-hub.yml
+
+
+

Note

+

The above command will spawn a lazy fork of Polkadot Asset Hub with the latest block data from the network. If you need to test Kusama Asset Hub, replace polkadot-asset-hub.yml with kusama-asset-hub.yml in the command.

+
+

An Asset Hub instance is now running locally, and you can proceed with the asset registration process. Note that the local registration process does not differ from the live network process. Once you have a successful TestNet transaction, you can use the same steps to register the asset on MainNet.

+
\ No newline at end of file
diff --git a/tutorials/blockchains/system-chains/index.html b/tutorials/blockchains/system-chains/index.html
new file mode 100644
index 00000000..760dcd31
--- /dev/null
+++ b/tutorials/blockchains/system-chains/index.html
@@ -0,0 +1,3454 @@

System Chains Tutorials | Polkadot Developer Docs

System Chains

+
\ No newline at end of file
diff --git a/tutorials/index.html b/tutorials/index.html
new file mode 100644
index 00000000..46755782
--- /dev/null
+++ b/tutorials/index.html
@@ -0,0 +1,3418 @@

Tutorials | Polkadot Developer Docs

Tutorials

+
\ No newline at end of file
diff --git a/variables.yml b/variables.yml
new file mode 100644
index 00000000..5038d7ee
--- /dev/null
+++ b/variables.yml
@@ -0,0 +1,23 @@
+# Variables that can be reused should be added to this file
+dependencies:
+  open_zeppelin:
+    repository_url: https://github.com/OpenZeppelin/polkadot-runtime-templates
+    version: v1.0.0
+  chopsticks:
+    repository_url: https://github.com/AcalaNetwork/chopsticks
+    version: 0.13.1
+  zombienet:
+    repository_url: https://github.com/paritytech/zombienet
+    version: v1.3.106
+    architecture: macos-arm64
+  asset_transfer_api:
+    repository_url: https://github.com/paritytech/asset-transfer-api
+    version: v0.3.1
+  polkadot_sdk_solochain_template:
+    repository_url: https://github.com/paritytech/polkadot-sdk-solochain-template
+    version: v0.0.2
+  srtool:
+    repository_url: https://github.com/paritytech/srtool
+    version: v0.16.0
+    docker_image_name: paritytech/srtool
+    docker_image_version: 1.62.0
\ No newline at end of file