diff --git a/.nojekyll b/.nojekyll
new file mode 100644
index 00000000..e69de29b
diff --git a/404.html b/404.html
new file mode 100644
index 00000000..aa9c7672
--- /dev/null
+++ b/404.html
@@ -0,0 +1,507 @@
[... minified HTML head and page scaffolding for the "Polkadot Developer Docs" 404 page elided ...]
404 - Not found

[... minified footer markup and script includes elided ...]
\ No newline at end of file
diff --git a/LICENSE/index.html b/LICENSE/index.html
new file mode 100644
index 00000000..a22be354
--- /dev/null
+++ b/LICENSE/index.html
@@ -0,0 +1,5104 @@
[... minified HTML head and page scaffolding for the "LICENSE | Polkadot Developer Docs" page elided ...]

LICENSE

Attribution 4.0 International

=======================================================================

Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.

Using Creative Commons Public Licenses

Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.

     Considerations for licensors: Our public licenses are
     intended for use by those authorized to give the public
     permission to use material in ways otherwise restricted by
     copyright and certain other rights. Our licenses are
     irrevocable. Licensors should read and understand the terms
     and conditions of the license they choose before applying it.
     Licensors should also secure all rights necessary before
     applying our licenses so that the public can reuse the
     material as expected. Licensors should clearly mark any
     material not subject to the license. This includes other CC-
     licensed material, or material used under an exception or
     limitation to copyright. More considerations for licensors:
     wiki.creativecommons.org/Considerations_for_licensors

     Considerations for the public: By using one of our public
     licenses, a licensor grants the public permission to use the
     licensed material under specified terms and conditions. If
     the licensor's permission is not necessary for any reason--for
     example, because of any applicable exception or limitation to
     copyright--then that use is not regulated by the license. Our
     licenses grant only permissions under copyright and certain
     other rights that a licensor has authority to grant. Use of
     the licensed material may still be restricted for other
     reasons, including because others have copyright or other
     rights in the material. A licensor may make special requests,
     such as asking that all changes be marked or described.
     Although not required by our licenses, you are encouraged to
     respect those requests where reasonable. More considerations
     for the public:
     wiki.creativecommons.org/Considerations_for_licensees

=======================================================================

Creative Commons Attribution 4.0 International Public License

By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution 4.0 International Public License ("Public License"). To the
extent this Public License may be interpreted as a contract, You are
granted the Licensed Rights in consideration of Your acceptance of
these terms and conditions, and the Licensor grants You such rights in
consideration of benefits the Licensor receives from making the
Licensed Material available under these terms and conditions.

Section 1 -- Definitions.

  a. Adapted Material means material subject to Copyright and Similar
     Rights that is derived from or based upon the Licensed Material
     and in which the Licensed Material is translated, altered,
     arranged, transformed, or otherwise modified in a manner requiring
     permission under the Copyright and Similar Rights held by the
     Licensor. For purposes of this Public License, where the Licensed
     Material is a musical work, performance, or sound recording,
     Adapted Material is always produced where the Licensed Material is
     synched in timed relation with a moving image.

  b. Adapter's License means the license You apply to Your Copyright
     and Similar Rights in Your contributions to Adapted Material in
     accordance with the terms and conditions of this Public License.

  c. Copyright and Similar Rights means copyright and/or similar rights
     closely related to copyright including, without limitation,
     performance, broadcast, sound recording, and Sui Generis Database
     Rights, without regard to how the rights are labeled or
     categorized. For purposes of this Public License, the rights
     specified in Section 2(b)(1)-(2) are not Copyright and Similar
     Rights.

  d. Effective Technological Measures means those measures that, in the
     absence of proper authority, may not be circumvented under laws
     fulfilling obligations under Article 11 of the WIPO Copyright
     Treaty adopted on December 20, 1996, and/or similar international
     agreements.

  e. Exceptions and Limitations means fair use, fair dealing, and/or
     any other exception or limitation to Copyright and Similar Rights
     that applies to Your use of the Licensed Material.

  f. Licensed Material means the artistic or literary work, database,
     or other material to which the Licensor applied this Public
     License.

  g. Licensed Rights means the rights granted to You subject to the
     terms and conditions of this Public License, which are limited to
     all Copyright and Similar Rights that apply to Your use of the
     Licensed Material and that the Licensor has authority to license.

  h. Licensor means the individual(s) or entity(ies) granting rights
     under this Public License.

  i. Share means to provide material to the public by any means or
     process that requires permission under the Licensed Rights, such
     as reproduction, public display, public performance, distribution,
     dissemination, communication, or importation, and to make material
     available to the public including in ways that members of the
     public may access the material from a place and at a time
     individually chosen by them.

  j. Sui Generis Database Rights means rights other than copyright
     resulting from Directive 96/9/EC of the European Parliament and of
     the Council of 11 March 1996 on the legal protection of databases,
     as amended and/or succeeded, as well as other essentially
     equivalent rights anywhere in the world.

  k. You means the individual or entity exercising the Licensed Rights
     under this Public License. Your has a corresponding meaning.

Section 2 -- Scope.

  a. License grant.

       1. Subject to the terms and conditions of this Public License,
          the Licensor hereby grants You a worldwide, royalty-free,
          non-sublicensable, non-exclusive, irrevocable license to
          exercise the Licensed Rights in the Licensed Material to:

            a. reproduce and Share the Licensed Material, in whole or
               in part; and

            b. produce, reproduce, and Share Adapted Material.

       2. Exceptions and Limitations. For the avoidance of doubt, where
          Exceptions and Limitations apply to Your use, this Public
          License does not apply, and You do not need to comply with
          its terms and conditions.

       3. Term. The term of this Public License is specified in Section
          6(a).

       4. Media and formats; technical modifications allowed. The
          Licensor authorizes You to exercise the Licensed Rights in
          all media and formats whether now known or hereafter created,
          and to make technical modifications necessary to do so. The
          Licensor waives and/or agrees not to assert any right or
          authority to forbid You from making technical modifications
          necessary to exercise the Licensed Rights, including
          technical modifications necessary to circumvent Effective
          Technological Measures. For purposes of this Public License,
          simply making modifications authorized by this Section 2(a)
          (4) never produces Adapted Material.

       5. Downstream recipients.

            a. Offer from the Licensor -- Licensed Material. Every
               recipient of the Licensed Material automatically
               receives an offer from the Licensor to exercise the
               Licensed Rights under the terms and conditions of this
               Public License.

            b. No downstream restrictions. You may not offer or impose
               any additional or different terms or conditions on, or
               apply any Effective Technological Measures to, the
               Licensed Material if doing so restricts exercise of the
               Licensed Rights by any recipient of the Licensed
               Material.

       6. No endorsement. Nothing in this Public License constitutes or
          may be construed as permission to assert or imply that You
          are, or that Your use of the Licensed Material is, connected
          with, or sponsored, endorsed, or granted official status by,
          the Licensor or others designated to receive attribution as
          provided in Section 3(a)(1)(A)(i).

  b. Other rights.

       1. Moral rights, such as the right of integrity, are not
          licensed under this Public License, nor are publicity,
          privacy, and/or other similar personality rights; however, to
          the extent possible, the Licensor waives and/or agrees not to
          assert any such rights held by the Licensor to the limited
          extent necessary to allow You to exercise the Licensed
          Rights, but not otherwise.

       2. Patent and trademark rights are not licensed under this
          Public License.

       3. To the extent possible, the Licensor waives any right to
          collect royalties from You for the exercise of the Licensed
          Rights, whether directly or through a collecting society
          under any voluntary or waivable statutory or compulsory
          licensing scheme. In all other cases the Licensor expressly
          reserves any right to collect such royalties.

Section 3 -- License Conditions.

Your exercise of the Licensed Rights is expressly made subject to the
following conditions.

  a. Attribution.

       1. If You Share the Licensed Material (including in modified
          form), You must:

            a. retain the following if it is supplied by the Licensor
               with the Licensed Material:

                 i. identification of the creator(s) of the Licensed
                    Material and any others designated to receive
                    attribution, in any reasonable manner requested by
                    the Licensor (including by pseudonym if
                    designated);

                ii. a copyright notice;

               iii. a notice that refers to this Public License;

                iv. a notice that refers to the disclaimer of
                    warranties;

                 v. a URI or hyperlink to the Licensed Material to the
                    extent reasonably practicable;

            b. indicate if You modified the Licensed Material and
               retain an indication of any previous modifications; and

            c. indicate the Licensed Material is licensed under this
               Public License, and include the text of, or the URI or
               hyperlink to, this Public License.

       2. You may satisfy the conditions in Section 3(a)(1) in any
          reasonable manner based on the medium, means, and context in
          which You Share the Licensed Material. For example, it may be
          reasonable to satisfy the conditions by providing a URI or
          hyperlink to a resource that includes the required
          information.

       3. If requested by the Licensor, You must remove any of the
          information required by Section 3(a)(1)(A) to the extent
          reasonably practicable.

       4. If You Share Adapted Material You produce, the Adapter's
          License You apply must not prevent recipients of the Adapted
          Material from complying with this Public License.

Section 4 -- Sui Generis Database Rights.

Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:

  a. for the avoidance of doubt, Section 2(a)(1) grants You the right
     to extract, reuse, reproduce, and Share all or a substantial
     portion of the contents of the database;

  b. if You include all or a substantial portion of the database
     contents in a database in which You have Sui Generis Database
     Rights, then the database in which You have Sui Generis Database
     Rights (but not its individual contents) is Adapted Material; and

  c. You must comply with the conditions in Section 3(a) if You Share
     all or a substantial portion of the contents of the database.

For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.

Section 5 -- Disclaimer of Warranties and Limitation of Liability.

  a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
     EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
     AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
     ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
     IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
     WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
     PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
     ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
     KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
     ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.

  b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
     TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
     NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
     INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
     COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
     USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
     ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
     DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
     IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.

  c. The disclaimer of warranties and limitation of liability provided
     above shall be interpreted in a manner that, to the extent
     possible, most closely approximates an absolute disclaimer and
     waiver of all liability.

Section 6 -- Term and Termination.

  a. This Public License applies for the term of the Copyright and
     Similar Rights licensed here. However, if You fail to comply with
     this Public License, then Your rights under this Public License
     terminate automatically.

  b. Where Your right to use the Licensed Material has terminated under
     Section 6(a), it reinstates:

       1. automatically as of the date the violation is cured, provided
          it is cured within 30 days of Your discovery of the
          violation; or

       2. upon express reinstatement by the Licensor.

     For the avoidance of doubt, this Section 6(b) does not affect any
     right the Licensor may have to seek remedies for Your violations
     of this Public License.

  c. For the avoidance of doubt, the Licensor may also offer the
     Licensed Material under separate terms or conditions or stop
     distributing the Licensed Material at any time; however, doing so
     will not terminate this Public License.

  d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
     License.

Section 7 -- Other Terms and Conditions.

  a. The Licensor shall not be bound by any additional or different
     terms or conditions communicated by You unless expressly agreed.

  b. Any arrangements, understandings, or agreements regarding the
     Licensed Material not stated herein are separate from and
     independent of the terms and conditions of this Public License.

Section 8 -- Interpretation.

  a. For the avoidance of doubt, this Public License does not, and
     shall not be interpreted to, reduce, limit, restrict, or impose
     conditions on any use of the Licensed Material that could lawfully
     be made without permission under this Public License.

  b. To the extent possible, if any provision of this Public License is
     deemed unenforceable, it shall be automatically reformed to the
     minimum extent necessary to make it enforceable. If the provision
     cannot be reformed, it shall be severed from this Public License
     without affecting the enforceability of the remaining terms and
     conditions.

  c. No term or condition of this Public License will be waived and no
     failure to comply consented to unless expressly agreed to by the
     Licensor.

  d. Nothing in this Public License constitutes or may be interpreted
     as a limitation upon, or waiver of, any privileges and immunities
     that apply to the Licensor or You, including from the legal
     processes of any jurisdiction or authority.

=======================================================================

Creative Commons is not a party to its public
licenses. Notwithstanding, Creative Commons may elect to apply one of
its public licenses to material it publishes and in those instances
will be considered the "Licensor." The text of the Creative Commons
public licenses is dedicated to the public domain under the CC0 Public
Domain Dedication. Except for the limited purpose of indicating that
material is shared under a Creative Commons public license or as
otherwise permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the
public licenses.

Creative Commons may be contacted at creativecommons.org.
[... minified footer markup and script includes elided ...]
\ No newline at end of file
diff --git a/assets/images/favicon.png b/assets/images/favicon.png
new file mode 100644
index 00000000..09624f03
Binary files /dev/null and b/assets/images/favicon.png differ
diff --git a/assets/images/final-cta-background.png b/assets/images/final-cta-background.png
new file mode 100644
index 00000000..bf955291
Binary files /dev/null and b/assets/images/final-cta-background.png differ
diff --git a/assets/images/gradient.png b/assets/images/gradient.png
new file mode 100644
index 00000000..3cf67b53
Binary files /dev/null and b/assets/images/gradient.png differ
diff --git a/assets/javascripts/bundle.d7c377c4.min.js b/assets/javascripts/bundle.d7c377c4.min.js
new file mode 100644
index 00000000..6a0bcf88
--- /dev/null
+++ b/assets/javascripts/bundle.d7c377c4.min.js
@@ -0,0 +1,29 @@
[... minified theme JavaScript bundle elided; it carries license banners for the vendored clipboard.js v2.0.11 (MIT, Zeno Rocha) and escape-html (MIT) libraries ...]
line{fill:var(--md-mermaid-sequence-actorman-bg-color);stroke:var(--md-mermaid-sequence-actorman-line-color)}.messageLine0,.messageLine1{stroke:var(--md-mermaid-sequence-message-line-color)}.note{fill:var(--md-mermaid-sequence-note-bg-color);stroke:var(--md-mermaid-sequence-note-border-color)}.loopText,.loopText>tspan,.messageText,.noteText>tspan{stroke:none;font-family:var(--md-mermaid-font-family)!important}.messageText{fill:var(--md-mermaid-sequence-message-fg-color)}.loopText,.loopText>tspan{fill:var(--md-mermaid-sequence-loop-fg-color)}.noteText>tspan{fill:var(--md-mermaid-sequence-note-fg-color)}#arrowhead path{fill:var(--md-mermaid-sequence-message-line-color);stroke:none}.loopLine{fill:var(--md-mermaid-sequence-loop-bg-color);stroke:var(--md-mermaid-sequence-loop-border-color)}.labelBox{fill:var(--md-mermaid-sequence-label-bg-color);stroke:none}.labelText,.labelText>span{fill:var(--md-mermaid-sequence-label-fg-color);font-family:var(--md-mermaid-font-family)}.sequenceNumber{fill:var(--md-mermaid-sequence-number-fg-color)}rect.rect{fill:var(--md-mermaid-sequence-box-bg-color);stroke:none}rect.rect+text.text{fill:var(--md-mermaid-sequence-box-fg-color)}defs #sequencenumber{fill:var(--md-mermaid-sequence-number-bg-color)!important}";var Qr,Aa=0;function Ca(){return typeof mermaid=="undefined"||mermaid instanceof Element?gt("https://unpkg.com/mermaid@10.6.1/dist/mermaid.min.js"):R(void 0)}function _n(e){return e.classList.remove("mermaid"),Qr||(Qr=Ca().pipe(T(()=>mermaid.initialize({startOnLoad:!1,themeCSS:Ln,sequence:{actorFontSize:"16px",messageFontSize:"16px",noteFontSize:"16px"}})),m(()=>{}),Z(1))),Qr.subscribe(()=>no(this,null,function*(){e.classList.add("mermaid");let t=`__mermaid_${Aa++}`,r=S("div",{class:"mermaid"}),o=e.textContent,{svg:n,fn:i}=yield mermaid.render(t,o),s=r.attachShadow({mode:"closed"});s.innerHTML=n,e.replaceWith(r),i==null||i(s)})),Qr.pipe(m(()=>({ref:e})))}var An=S("table");function Cn(e){return e.replaceWith(An),An.replaceWith(vn(e)),R({ref:e})}function ka(e){let t=e.find(r=>r.checked)||e[0];return L(...e.map(r=>h(r,"change").pipe(m(()=>U(`label[for="${r.id}"]`))))).pipe(q(U(`label[for="${t.id}"]`)),m(r=>({active:r})))}function kn(e,{viewport$:t,target$:r}){let o=U(".tabbed-labels",e),n=W(":scope > input",e),i=zr("prev");e.append(i);let s=zr("next");return e.append(s),H(()=>{let a=new x,c=a.pipe(ee(),oe(!0));B([a,Se(e)]).pipe(j(c),Le(1,ge)).subscribe({next([{active:p},l]){let f=Ue(p),{width:u}=le(p);e.style.setProperty("--md-indicator-x",`${f.x}px`),e.style.setProperty("--md-indicator-width",`${u}px`);let d=ir(o);(f.xd.x+l.width)&&o.scrollTo({left:Math.max(0,f.x-16),behavior:"smooth"})},complete(){e.style.removeProperty("--md-indicator-x"),e.style.removeProperty("--md-indicator-width")}}),B([et(o),Se(o)]).pipe(j(c)).subscribe(([p,l])=>{let f=xt(o);i.hidden=p.x<16,s.hidden=p.x>f.width-l.width-16}),L(h(i,"click").pipe(m(()=>-1)),h(s,"click").pipe(m(()=>1))).pipe(j(c)).subscribe(p=>{let{width:l}=le(o);o.scrollBy({left:l*p,behavior:"smooth"})}),r.pipe(j(c),v(p=>n.includes(p))).subscribe(p=>p.click()),o.classList.add("tabbed-labels--linked");for(let p of n){let l=U(`label[for="${p.id}"]`);l.replaceChildren(S("a",{href:`#${l.htmlFor}`,tabIndex:-1},...Array.from(l.childNodes))),h(l.firstElementChild,"click").pipe(j(c),v(f=>!(f.metaKey||f.ctrlKey)),T(f=>{f.preventDefault(),f.stopPropagation()})).subscribe(()=>{history.replaceState({},"",`#${l.htmlFor}`),l.click()})}return G("content.tabs.link")&&a.pipe(Ee(1),ae(t)).subscribe(([{active:p},{offset:l}])=>{let 
f=p.innerText.trim();if(p.hasAttribute("data-md-switching"))p.removeAttribute("data-md-switching");else{let u=e.offsetTop-l.y;for(let y of W("[data-tabs]"))for(let b of W(":scope > input",y)){let D=U(`label[for="${b.id}"]`);if(D!==p&&D.innerText.trim()===f){D.setAttribute("data-md-switching",""),b.click();break}}window.scrollTo({top:e.offsetTop-u});let d=__md_get("__tabs")||[];__md_set("__tabs",[...new Set([f,...d])])}}),a.pipe(j(c)).subscribe(()=>{for(let p of W("audio, video",e))p.pause()}),ka(n).pipe(T(p=>a.next(p)),A(()=>a.complete()),m(p=>P({ref:e},p)))}).pipe(qe(ie))}function Hn(e,{viewport$:t,target$:r,print$:o}){return L(...W(".annotate:not(.highlight)",e).map(n=>wn(n,{target$:r,print$:o})),...W("pre:not(.mermaid) > code",e).map(n=>On(n,{target$:r,print$:o})),...W("pre.mermaid",e).map(n=>_n(n)),...W("table:not([class])",e).map(n=>Cn(n)),...W("details",e).map(n=>Mn(n,{target$:r,print$:o})),...W("[data-tabs]",e).map(n=>kn(n,{viewport$:t,target$:r})),...W("[title]",e).filter(()=>G("content.tooltips")).map(n=>Be(n)))}function Ha(e,{alert$:t}){return t.pipe(w(r=>L(R(!0),R(!1).pipe(Qe(2e3))).pipe(m(o=>({message:r,active:o})))))}function $n(e,t){let r=U(".md-typeset",e);return H(()=>{let o=new x;return o.subscribe(({message:n,active:i})=>{e.classList.toggle("md-dialog--active",i),r.textContent=n}),Ha(e,t).pipe(T(n=>o.next(n)),A(()=>o.complete()),m(n=>P({ref:e},n)))})}function $a({viewport$:e}){if(!G("header.autohide"))return R(!1);let t=e.pipe(m(({offset:{y:n}})=>n),Ce(2,1),m(([n,i])=>[nMath.abs(i-n.y)>100),m(([,[n]])=>n),X()),o=Ne("search");return B([e,o]).pipe(m(([{offset:n},i])=>n.y>400&&!i),X(),w(n=>n?r:R(!1)),q(!1))}function Pn(e,t){return H(()=>B([Se(e),$a(t)])).pipe(m(([{height:r},o])=>({height:r,hidden:o})),X((r,o)=>r.height===o.height&&r.hidden===o.hidden),Z(1))}function Rn(e,{header$:t,main$:r}){return H(()=>{let o=new x,n=o.pipe(ee(),oe(!0));o.pipe(te("active"),Ze(t)).subscribe(([{active:s},{hidden:a}])=>{e.classList.toggle("md-header--shadow",s&&!a),e.hidden=a});let i=fe(W("[title]",e)).pipe(v(()=>G("content.tooltips")),re(s=>Be(s)));return r.subscribe(o),t.pipe(j(n),m(s=>P({ref:e},s)),Re(i.pipe(j(n))))})}function Pa(e,{viewport$:t,header$:r}){return mr(e,{viewport$:t,header$:r}).pipe(m(({offset:{y:o}})=>{let{height:n}=le(e);return{active:o>=n}}),te("active"))}function In(e,t){return H(()=>{let r=new x;r.subscribe({next({active:n}){e.classList.toggle("md-header__title--active",n)},complete(){e.classList.remove("md-header__title--active")}});let o=ce(".md-content h1");return typeof o=="undefined"?M:Pa(o,t).pipe(T(n=>r.next(n)),A(()=>r.complete()),m(n=>P({ref:e},n)))})}function Fn(e,{viewport$:t,header$:r}){let o=r.pipe(m(({height:i})=>i),X()),n=o.pipe(w(()=>Se(e).pipe(m(({height:i})=>({top:e.offsetTop,bottom:e.offsetTop+i})),te("bottom"))));return B([o,n,t]).pipe(m(([i,{top:s,bottom:a},{offset:{y:c},size:{height:p}}])=>(p=Math.max(0,p-Math.max(0,s-c,i)-Math.max(0,p+c-a)),{offset:s-i,height:p,active:s-i<=c})),X((i,s)=>i.offset===s.offset&&i.height===s.height&&i.active===s.active))}function Ra(e){let t=__md_get("__palette")||{index:e.findIndex(r=>matchMedia(r.getAttribute("data-md-color-media")).matches)};return R(...e).pipe(re(r=>h(r,"change").pipe(m(()=>r))),q(e[Math.max(0,t.index)]),m(r=>({index:e.indexOf(r),color:{media:r.getAttribute("data-md-color-media"),scheme:r.getAttribute("data-md-color-scheme"),primary:r.getAttribute("data-md-color-primary"),accent:r.getAttribute("data-md-color-accent")}})),Z(1))}function jn(e){let 
t=W("input",e),r=S("meta",{name:"theme-color"});document.head.appendChild(r);let o=S("meta",{name:"color-scheme"});document.head.appendChild(o);let n=At("(prefers-color-scheme: light)");return H(()=>{let i=new x;return i.subscribe(s=>{if(document.body.setAttribute("data-md-color-switching",""),s.color.media==="(prefers-color-scheme)"){let a=matchMedia("(prefers-color-scheme: light)"),c=document.querySelector(a.matches?"[data-md-color-media='(prefers-color-scheme: light)']":"[data-md-color-media='(prefers-color-scheme: dark)']");s.color.scheme=c.getAttribute("data-md-color-scheme"),s.color.primary=c.getAttribute("data-md-color-primary"),s.color.accent=c.getAttribute("data-md-color-accent")}for(let[a,c]of Object.entries(s.color))document.body.setAttribute(`data-md-color-${a}`,c);for(let a=0;a{let s=Oe("header"),a=window.getComputedStyle(s);return o.content=a.colorScheme,a.backgroundColor.match(/\d+/g).map(c=>(+c).toString(16).padStart(2,"0")).join("")})).subscribe(s=>r.content=`#${s}`),i.pipe(Me(ie)).subscribe(()=>{document.body.removeAttribute("data-md-color-switching")}),Ra(t).pipe(j(n.pipe(Ee(1))),at(),T(s=>i.next(s)),A(()=>i.complete()),m(s=>P({ref:e},s)))})}function Wn(e,{progress$:t}){return H(()=>{let r=new x;return r.subscribe(({value:o})=>{e.style.setProperty("--md-progress-value",`${o}`)}),t.pipe(T(o=>r.next({value:o})),A(()=>r.complete()),m(o=>({ref:e,value:o})))})}var Yr=jt(Kr());function Ia(e){e.setAttribute("data-md-copying","");let t=e.closest("[data-copy]"),r=t?t.getAttribute("data-copy"):e.innerText;return e.removeAttribute("data-md-copying"),r.trimEnd()}function Un({alert$:e}){Yr.default.isSupported()&&new I(t=>{new Yr.default("[data-clipboard-target], [data-clipboard-text]",{text:r=>r.getAttribute("data-clipboard-text")||Ia(U(r.getAttribute("data-clipboard-target")))}).on("success",r=>t.next(r))}).pipe(T(t=>{t.trigger.focus()}),m(()=>we("clipboard.copied"))).subscribe(e)}function Fa(e){if(e.length<2)return[""];let[t,r]=[...e].sort((n,i)=>n.length-i.length).map(n=>n.replace(/[^/]+$/,"")),o=0;if(t===r)o=t.length;else for(;t.charCodeAt(o)===r.charCodeAt(o);)o++;return e.map(n=>n.replace(t.slice(0,o),""))}function ur(e){let t=__md_get("__sitemap",sessionStorage,e);if(t)return R(t);{let r=he();return on(new URL("sitemap.xml",e||r.base)).pipe(m(o=>Fa(W("loc",o).map(n=>n.textContent))),xe(()=>M),$e([]),T(o=>__md_set("__sitemap",o,sessionStorage,e)))}}function Nn(e){let t=ce("[rel=canonical]",e);typeof t!="undefined"&&(t.href=t.href.replace("//localhost:","//127.0.0.1:"));let r=new Map;for(let o of W(":scope > *",e)){let n=o.outerHTML;for(let i of["href","src"]){let s=o.getAttribute(i);if(s===null)continue;let a=new URL(s,t==null?void 0:t.href),c=o.cloneNode();c.setAttribute(i,`${a}`),n=c.outerHTML;break}r.set(n,o)}return r}function Dn({location$:e,viewport$:t,progress$:r}){let o=he();if(location.protocol==="file:")return M;let n=ur().pipe(m(l=>l.map(f=>`${new URL(f,o.base)}`))),i=h(document.body,"click").pipe(ae(n),w(([l,f])=>{if(!(l.target instanceof Element))return M;let u=l.target.closest("a");if(u===null)return M;if(u.target||l.metaKey||l.ctrlKey)return M;let d=new URL(u.href);return d.search=d.hash="",f.includes(`${d}`)?(l.preventDefault(),R(new URL(u.href))):M}),de());i.pipe(ue(1)).subscribe(()=>{let l=ce("link[rel=icon]");typeof 
l!="undefined"&&(l.href=l.href)}),h(window,"beforeunload").subscribe(()=>{history.scrollRestoration="auto"}),i.pipe(ae(t)).subscribe(([l,{offset:f}])=>{history.scrollRestoration="manual",history.replaceState(f,""),history.pushState(null,"",l)}),i.subscribe(e);let s=e.pipe(q(me()),te("pathname"),Ee(1),w(l=>lr(l,{progress$:r}).pipe(xe(()=>(st(l,!0),M))))),a=new DOMParser,c=s.pipe(w(l=>l.text()),w(l=>{let f=a.parseFromString(l,"text/html");for(let b of["[data-md-component=announce]","[data-md-component=container]","[data-md-component=header-topic]","[data-md-component=outdated]","[data-md-component=logo]","[data-md-component=skip]",...G("navigation.tabs.sticky")?["[data-md-component=tabs]"]:[]]){let D=ce(b),Q=ce(b,f);typeof D!="undefined"&&typeof Q!="undefined"&&D.replaceWith(Q)}let u=Nn(document.head),d=Nn(f.head);for(let[b,D]of d)D.getAttribute("rel")==="stylesheet"||D.hasAttribute("src")||(u.has(b)?u.delete(b):document.head.appendChild(D));for(let b of u.values())b.getAttribute("rel")==="stylesheet"||b.hasAttribute("src")||b.remove();let y=Oe("container");return We(W("script",y)).pipe(w(b=>{let D=f.createElement("script");if(b.src){for(let Q of b.getAttributeNames())D.setAttribute(Q,b.getAttribute(Q));return b.replaceWith(D),new I(Q=>{D.onload=()=>Q.complete()})}else return D.textContent=b.textContent,b.replaceWith(D),M}),ee(),oe(f))}),de());return h(window,"popstate").pipe(m(me)).subscribe(e),e.pipe(q(me()),Ce(2,1),v(([l,f])=>l.pathname===f.pathname&&l.hash!==f.hash),m(([,l])=>l)).subscribe(l=>{var f,u;history.state!==null||!l.hash?window.scrollTo(0,(u=(f=history.state)==null?void 0:f.y)!=null?u:0):(history.scrollRestoration="auto",pr(l.hash),history.scrollRestoration="manual")}),e.pipe(Ir(i),q(me()),Ce(2,1),v(([l,f])=>l.pathname===f.pathname&&l.hash===f.hash),m(([,l])=>l)).subscribe(l=>{history.scrollRestoration="auto",pr(l.hash),history.scrollRestoration="manual",history.back()}),c.pipe(ae(e)).subscribe(([,l])=>{var f,u;history.state!==null||!l.hash?window.scrollTo(0,(u=(f=history.state)==null?void 0:f.y)!=null?u:0):pr(l.hash)}),t.pipe(te("offset"),ye(100)).subscribe(({offset:l})=>{history.replaceState(l,"")}),c}var qn=jt(zn());function Kn(e){let t=e.separator.split("|").map(n=>n.replace(/(\(\?[!=<][^)]+\))/g,"").length===0?"\uFFFD":n).join("|"),r=new RegExp(t,"img"),o=(n,i,s)=>`${i}${s}`;return n=>{n=n.replace(/[\s*+\-:~^]+/g," ").trim();let i=new RegExp(`(^|${e.separator}|)(${n.replace(/[|\\{}()[\]^$+*?.-]/g,"\\$&").replace(r,"|")})`,"img");return s=>(0,qn.default)(s).replace(i,o).replace(/<\/mark>(\s+)]*>/img,"$1")}}function Ht(e){return e.type===1}function dr(e){return e.type===3}function Qn(e,t){let r=ln(e);return L(R(location.protocol!=="file:"),Ne("search")).pipe(Pe(o=>o),w(()=>t)).subscribe(({config:o,docs:n})=>r.next({type:0,data:{config:o,docs:n,options:{suggest:G("search.suggest")}}})),r}function Yn({document$:e}){let t=he(),r=De(new URL("../versions.json",t.base)).pipe(xe(()=>M)),o=r.pipe(m(n=>{let[,i]=t.base.match(/([^/]+)\/?$/);return n.find(({version:s,aliases:a})=>s===i||a.includes(i))||n[0]}));r.pipe(m(n=>new Map(n.map(i=>[`${new URL(`../${i.version}/`,t.base)}`,i]))),w(n=>h(document.body,"click").pipe(v(i=>!i.metaKey&&!i.ctrlKey),ae(o),w(([i,s])=>{if(i.target instanceof Element){let a=i.target.closest("a");if(a&&!a.target&&n.has(a.href)){let c=a.href;return!i.target.closest(".md-version")&&n.get(c)===s?M:(i.preventDefault(),R(c))}}return M}),w(i=>{let{version:s}=n.get(i);return ur(new URL(i)).pipe(m(a=>{let p=me().href.replace(t.base,"");return 
a.includes(p.split("#")[0])?new URL(`../${s}/${p}`,t.base):new URL(i)}))})))).subscribe(n=>st(n,!0)),B([r,o]).subscribe(([n,i])=>{U(".md-header__topic").appendChild(gn(n,i))}),e.pipe(w(()=>o)).subscribe(n=>{var s;let i=__md_get("__outdated",sessionStorage);if(i===null){i=!0;let a=((s=t.version)==null?void 0:s.default)||"latest";Array.isArray(a)||(a=[a]);e:for(let c of a)for(let p of n.aliases.concat(n.version))if(new RegExp(c,"i").test(p)){i=!1;break e}__md_set("__outdated",i,sessionStorage)}if(i)for(let a of ne("outdated"))a.hidden=!1})}function Da(e,{worker$:t}){let{searchParams:r}=me();r.has("q")&&(Ye("search",!0),e.value=r.get("q"),e.focus(),Ne("search").pipe(Pe(i=>!i)).subscribe(()=>{let i=me();i.searchParams.delete("q"),history.replaceState({},"",`${i}`)}));let o=vt(e),n=L(t.pipe(Pe(Ht)),h(e,"keyup"),o).pipe(m(()=>e.value),X());return B([n,o]).pipe(m(([i,s])=>({value:i,focus:s})),Z(1))}function Bn(e,{worker$:t}){let r=new x,o=r.pipe(ee(),oe(!0));B([t.pipe(Pe(Ht)),r],(i,s)=>s).pipe(te("value")).subscribe(({value:i})=>t.next({type:2,data:i})),r.pipe(te("focus")).subscribe(({focus:i})=>{i&&Ye("search",i)}),h(e.form,"reset").pipe(j(o)).subscribe(()=>e.focus());let n=U("header [for=__search]");return h(n,"click").subscribe(()=>e.focus()),Da(e,{worker$:t}).pipe(T(i=>r.next(i)),A(()=>r.complete()),m(i=>P({ref:e},i)),Z(1))}function Gn(e,{worker$:t,query$:r}){let o=new x,n=Go(e.parentElement).pipe(v(Boolean)),i=e.parentElement,s=U(":scope > :first-child",e),a=U(":scope > :last-child",e);Ne("search").subscribe(l=>a.setAttribute("role",l?"list":"presentation")),o.pipe(ae(r),Wr(t.pipe(Pe(Ht)))).subscribe(([{items:l},{value:f}])=>{switch(l.length){case 0:s.textContent=f.length?we("search.result.none"):we("search.result.placeholder");break;case 1:s.textContent=we("search.result.one");break;default:let u=ar(l.length);s.textContent=we("search.result.other",u)}});let c=o.pipe(T(()=>a.innerHTML=""),w(({items:l})=>L(R(...l.slice(0,10)),R(...l.slice(10)).pipe(Ce(4),Nr(n),w(([f])=>f)))),m(hn),de());return c.subscribe(l=>a.appendChild(l)),c.pipe(re(l=>{let f=ce("details",l);return typeof f=="undefined"?M:h(f,"toggle").pipe(j(o),m(()=>f))})).subscribe(l=>{l.open===!1&&l.offsetTop<=i.scrollTop&&i.scrollTo({top:l.offsetTop})}),t.pipe(v(dr),m(({data:l})=>l)).pipe(T(l=>o.next(l)),A(()=>o.complete()),m(l=>P({ref:e},l)))}function Va(e,{query$:t}){return t.pipe(m(({value:r})=>{let o=me();return o.hash="",r=r.replace(/\s+/g,"+").replace(/&/g,"%26").replace(/=/g,"%3D"),o.search=`q=${r}`,{url:o}}))}function Jn(e,t){let r=new x,o=r.pipe(ee(),oe(!0));return r.subscribe(({url:n})=>{e.setAttribute("data-clipboard-text",e.href),e.href=`${n}`}),h(e,"click").pipe(j(o)).subscribe(n=>n.preventDefault()),Va(e,t).pipe(T(n=>r.next(n)),A(()=>r.complete()),m(n=>P({ref:e},n)))}function Xn(e,{worker$:t,keyboard$:r}){let o=new x,n=Oe("search-query"),i=L(h(n,"keydown"),h(n,"focus")).pipe(Me(ie),m(()=>n.value),X());return o.pipe(Ze(i),m(([{suggest:a},c])=>{let p=c.split(/([\s-]+)/);if(a!=null&&a.length&&p[p.length-1]){let l=a[a.length-1];l.startsWith(p[p.length-1])&&(p[p.length-1]=l)}else p.length=0;return p})).subscribe(a=>e.innerHTML=a.join("").replace(/\s/g," ")),r.pipe(v(({mode:a})=>a==="search")).subscribe(a=>{switch(a.type){case"ArrowRight":e.innerText.length&&n.selectionStart===n.value.length&&(n.value=e.innerText);break}}),t.pipe(v(dr),m(({data:a})=>a)).pipe(T(a=>o.next(a)),A(()=>o.complete()),m(()=>({ref:e})))}function Zn(e,{index$:t,keyboard$:r}){let o=he();try{let 
n=Qn(o.search,t),i=Oe("search-query",e),s=Oe("search-result",e);h(e,"click").pipe(v(({target:c})=>c instanceof Element&&!!c.closest("a"))).subscribe(()=>Ye("search",!1)),r.pipe(v(({mode:c})=>c==="search")).subscribe(c=>{let p=Ie();switch(c.type){case"Enter":if(p===i){let l=new Map;for(let f of W(":first-child [href]",s)){let u=f.firstElementChild;l.set(f,parseFloat(u.getAttribute("data-md-score")))}if(l.size){let[[f]]=[...l].sort(([,u],[,d])=>d-u);f.click()}c.claim()}break;case"Escape":case"Tab":Ye("search",!1),i.blur();break;case"ArrowUp":case"ArrowDown":if(typeof p=="undefined")i.focus();else{let l=[i,...W(":not(details) > [href], summary, details[open] [href]",s)],f=Math.max(0,(Math.max(0,l.indexOf(p))+l.length+(c.type==="ArrowUp"?-1:1))%l.length);l[f].focus()}c.claim();break;default:i!==Ie()&&i.focus()}}),r.pipe(v(({mode:c})=>c==="global")).subscribe(c=>{switch(c.type){case"f":case"s":case"/":i.focus(),i.select(),c.claim();break}});let a=Bn(i,{worker$:n});return L(a,Gn(s,{worker$:n,query$:a})).pipe(Re(...ne("search-share",e).map(c=>Jn(c,{query$:a})),...ne("search-suggest",e).map(c=>Xn(c,{worker$:n,keyboard$:r}))))}catch(n){return e.hidden=!0,Ke}}function ei(e,{index$:t,location$:r}){return B([t,r.pipe(q(me()),v(o=>!!o.searchParams.get("h")))]).pipe(m(([o,n])=>Kn(o.config)(n.searchParams.get("h"))),m(o=>{var s;let n=new Map,i=document.createNodeIterator(e,NodeFilter.SHOW_TEXT);for(let a=i.nextNode();a;a=i.nextNode())if((s=a.parentElement)!=null&&s.offsetHeight){let c=a.textContent,p=o(c);p.length>c.length&&n.set(a,p)}for(let[a,c]of n){let{childNodes:p}=S("span",null,c);a.replaceWith(...Array.from(p))}return{ref:e,nodes:n}}))}function za(e,{viewport$:t,main$:r}){let o=e.closest(".md-grid"),n=o.offsetTop-o.parentElement.offsetTop;return B([r,t]).pipe(m(([{offset:i,height:s},{offset:{y:a}}])=>(s=s+Math.min(n,Math.max(0,a-i))-n,{height:s,locked:a>=i+n})),X((i,s)=>i.height===s.height&&i.locked===s.locked))}function Br(e,o){var n=o,{header$:t}=n,r=oo(n,["header$"]);let i=U(".md-sidebar__scrollwrap",e),{y:s}=Ue(i);return H(()=>{let a=new x,c=a.pipe(ee(),oe(!0)),p=a.pipe(Le(0,ge));return p.pipe(ae(t)).subscribe({next([{height:l},{height:f}]){i.style.height=`${l-2*s}px`,e.style.top=`${f}px`},complete(){i.style.height="",e.style.top=""}}),p.pipe(Pe()).subscribe(()=>{for(let l of W(".md-nav__link--active[href]",e)){if(!l.clientHeight)continue;let f=l.closest(".md-sidebar__scrollwrap");if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:d}=le(f);f.scrollTo({top:u-d/2})}}}),fe(W("label[tabindex]",e)).pipe(re(l=>h(l,"click").pipe(Me(ie),m(()=>l),j(c)))).subscribe(l=>{let f=U(`[id="${l.htmlFor}"]`);U(`[aria-labelledby="${l.id}"]`).setAttribute("aria-expanded",`${f.checked}`)}),za(e,r).pipe(T(l=>a.next(l)),A(()=>a.complete()),m(l=>P({ref:e},l)))})}function ti(e,t){if(typeof t!="undefined"){let r=`https://api.github.com/repos/${e}/${t}`;return Lt(De(`${r}/releases/latest`).pipe(xe(()=>M),m(o=>({version:o.tag_name})),$e({})),De(r).pipe(xe(()=>M),m(o=>({stars:o.stargazers_count,forks:o.forks_count})),$e({}))).pipe(m(([o,n])=>P(P({},o),n)))}else{let r=`https://api.github.com/users/${e}`;return De(r).pipe(m(o=>({repositories:o.public_repos})),$e({}))}}function ri(e,t){let r=`https://${e}/api/v4/projects/${encodeURIComponent(t)}`;return De(r).pipe(xe(()=>M),m(({star_count:o,forks_count:n})=>({stars:o,forks:n})),$e({}))}function oi(e){let t=e.match(/^.+github\.com\/([^/]+)\/?([^/]+)?/i);if(t){let[,r,o]=t;return ti(r,o)}if(t=e.match(/^.+?([^/]*gitlab[^/]+)\/(.+?)\/?$/i),t){let[,r,o]=t;return 
ri(r,o)}return M}var qa;function Ka(e){return qa||(qa=H(()=>{let t=__md_get("__source",sessionStorage);if(t)return R(t);if(ne("consent").length){let o=__md_get("__consent");if(!(o&&o.github))return M}return oi(e.href).pipe(T(o=>__md_set("__source",o,sessionStorage)))}).pipe(xe(()=>M),v(t=>Object.keys(t).length>0),m(t=>({facts:t})),Z(1)))}function ni(e){let t=U(":scope > :last-child",e);return H(()=>{let r=new x;return r.subscribe(({facts:o})=>{t.appendChild(bn(o)),t.classList.add("md-source__repository--active")}),Ka(e).pipe(T(o=>r.next(o)),A(()=>r.complete()),m(o=>P({ref:e},o)))})}function Qa(e,{viewport$:t,header$:r}){return Se(document.body).pipe(w(()=>mr(e,{header$:r,viewport$:t})),m(({offset:{y:o}})=>({hidden:o>=10})),te("hidden"))}function ii(e,t){return H(()=>{let r=new x;return r.subscribe({next({hidden:o}){e.hidden=o},complete(){e.hidden=!1}}),(G("navigation.tabs.sticky")?R({hidden:!1}):Qa(e,t)).pipe(T(o=>r.next(o)),A(()=>r.complete()),m(o=>P({ref:e},o)))})}function Ya(e,{viewport$:t,header$:r}){let o=new Map,n=W("[href^=\\#]",e);for(let a of n){let c=decodeURIComponent(a.hash.substring(1)),p=ce(`[id="${c}"]`);typeof p!="undefined"&&o.set(a,p)}let i=r.pipe(te("height"),m(({height:a})=>{let c=Oe("main"),p=U(":scope > :first-child",c);return a+.8*(p.offsetTop-c.offsetTop)}),de());return Se(document.body).pipe(te("height"),w(a=>H(()=>{let c=[];return R([...o].reduce((p,[l,f])=>{for(;c.length&&o.get(c[c.length-1]).tagName>=f.tagName;)c.pop();let u=f.offsetTop;for(;!u&&f.parentElement;)f=f.parentElement,u=f.offsetTop;let d=f.offsetParent;for(;d;d=d.offsetParent)u+=d.offsetTop;return p.set([...c=[...c,l]].reverse(),u)},new Map))}).pipe(m(c=>new Map([...c].sort(([,p],[,l])=>p-l))),Ze(i),w(([c,p])=>t.pipe(Fr(([l,f],{offset:{y:u},size:d})=>{let y=u+d.height>=Math.floor(a.height);for(;f.length;){let[,b]=f[0];if(b-p=u&&!y)f=[l.pop(),...f];else break}return[l,f]},[[],[...c]]),X((l,f)=>l[0]===f[0]&&l[1]===f[1])))))).pipe(m(([a,c])=>({prev:a.map(([p])=>p),next:c.map(([p])=>p)})),q({prev:[],next:[]}),Ce(2,1),m(([a,c])=>a.prev.length{let i=new x,s=i.pipe(ee(),oe(!0));if(i.subscribe(({prev:a,next:c})=>{for(let[p]of c)p.classList.remove("md-nav__link--passed"),p.classList.remove("md-nav__link--active");for(let[p,[l]]of a.entries())l.classList.add("md-nav__link--passed"),l.classList.toggle("md-nav__link--active",p===a.length-1)}),G("toc.follow")){let a=L(t.pipe(ye(1),m(()=>{})),t.pipe(ye(250),m(()=>"smooth")));i.pipe(v(({prev:c})=>c.length>0),Ze(o.pipe(Me(ie))),ae(a)).subscribe(([[{prev:c}],p])=>{let[l]=c[c.length-1];if(l.offsetHeight){let f=sr(l);if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:d}=le(f);f.scrollTo({top:u-d/2,behavior:p})}}})}return G("navigation.tracking")&&t.pipe(j(s),te("offset"),ye(250),Ee(1),j(n.pipe(Ee(1))),at({delay:250}),ae(i)).subscribe(([,{prev:a}])=>{let c=me(),p=a[a.length-1];if(p&&p.length){let[l]=p,{hash:f}=new URL(l.href);c.hash!==f&&(c.hash=f,history.replaceState({},"",`${c}`))}else c.hash="",history.replaceState({},"",`${c}`)}),Ya(e,{viewport$:t,header$:r}).pipe(T(a=>i.next(a)),A(()=>i.complete()),m(a=>P({ref:e},a)))})}function Ba(e,{viewport$:t,main$:r,target$:o}){let n=t.pipe(m(({offset:{y:s}})=>s),Ce(2,1),m(([s,a])=>s>a&&a>0),X()),i=r.pipe(m(({active:s})=>s));return B([i,n]).pipe(m(([s,a])=>!(s&&a)),X(),j(o.pipe(Ee(1))),oe(!0),at({delay:250}),m(s=>({hidden:s})))}function si(e,{viewport$:t,header$:r,main$:o,target$:n}){let i=new x,s=i.pipe(ee(),oe(!0));return 
i.subscribe({next({hidden:a}){e.hidden=a,a?(e.setAttribute("tabindex","-1"),e.blur()):e.removeAttribute("tabindex")},complete(){e.style.top="",e.hidden=!0,e.removeAttribute("tabindex")}}),r.pipe(j(s),te("height")).subscribe(({height:a})=>{e.style.top=`${a+16}px`}),h(e,"click").subscribe(a=>{a.preventDefault(),window.scrollTo({top:0})}),Ba(e,{viewport$:t,main$:o,target$:n}).pipe(T(a=>i.next(a)),A(()=>i.complete()),m(a=>P({ref:e},a)))}function ci({document$:e}){e.pipe(w(()=>W(".md-ellipsis")),re(t=>yt(t).pipe(j(e.pipe(Ee(1))),v(r=>r),m(()=>t),ue(1))),v(t=>t.offsetWidth{let r=t.innerText,o=t.closest("a")||t;return o.title=r,Be(o).pipe(j(e.pipe(Ee(1))),A(()=>o.removeAttribute("title")))})).subscribe(),e.pipe(w(()=>W(".md-status")),re(t=>Be(t))).subscribe()}function pi({document$:e,tablet$:t}){e.pipe(w(()=>W(".md-toggle--indeterminate")),T(r=>{r.indeterminate=!0,r.checked=!1}),re(r=>h(r,"change").pipe(Ur(()=>r.classList.contains("md-toggle--indeterminate")),m(()=>r))),ae(t)).subscribe(([r,o])=>{r.classList.remove("md-toggle--indeterminate"),o&&(r.checked=!1)})}function Ga(){return/(iPad|iPhone|iPod)/.test(navigator.userAgent)}function li({document$:e}){e.pipe(w(()=>W("[data-md-scrollfix]")),T(t=>t.removeAttribute("data-md-scrollfix")),v(Ga),re(t=>h(t,"touchstart").pipe(m(()=>t)))).subscribe(t=>{let r=t.scrollTop;r===0?t.scrollTop=1:r+t.offsetHeight===t.scrollHeight&&(t.scrollTop=r-1)})}function mi({viewport$:e,tablet$:t}){B([Ne("search"),t]).pipe(m(([r,o])=>r&&!o),w(r=>R(r).pipe(Qe(r?400:100))),ae(e)).subscribe(([r,{offset:{y:o}}])=>{if(r)document.body.setAttribute("data-md-scrolllock",""),document.body.style.top=`-${o}px`;else{let n=-1*parseInt(document.body.style.top,10);document.body.removeAttribute("data-md-scrolllock"),document.body.style.top="",n&&window.scrollTo(0,n)}})}Object.entries||(Object.entries=function(e){let t=[];for(let r of Object.keys(e))t.push([r,e[r]]);return t});Object.values||(Object.values=function(e){let t=[];for(let r of Object.keys(e))t.push(e[r]);return t});typeof Element!="undefined"&&(Element.prototype.scrollTo||(Element.prototype.scrollTo=function(e,t){typeof e=="object"?(this.scrollLeft=e.left,this.scrollTop=e.top):(this.scrollLeft=e,this.scrollTop=t)}),Element.prototype.replaceWith||(Element.prototype.replaceWith=function(...e){let t=this.parentNode;if(t){e.length===0&&t.removeChild(this);for(let r=e.length-1;r>=0;r--){let o=e[r];typeof o=="string"?o=document.createTextNode(o):o.parentNode&&o.parentNode.removeChild(o),r?t.insertBefore(this.previousSibling,o):t.replaceChild(o,this)}}}));function Ja(){return location.protocol==="file:"?gt(`${new URL("search/search_index.js",Gr.base)}`).pipe(m(()=>__index),Z(1)):De(new URL("search/search_index.json",Gr.base))}document.documentElement.classList.remove("no-js");document.documentElement.classList.add("js");var rt=zo(),Pt=Zo(),wt=tn(Pt),Jr=Xo(),_e=pn(),hr=At("(min-width: 960px)"),ui=At("(min-width: 1220px)"),di=rn(),Gr=he(),hi=document.forms.namedItem("search")?Ja():Ke,Xr=new x;Un({alert$:Xr});var Zr=new x;G("navigation.instant")&&Dn({location$:Pt,viewport$:_e,progress$:Zr}).subscribe(rt);var fi;((fi=Gr.version)==null?void 0:fi.provider)==="mike"&&Yn({document$:rt});L(Pt,wt).pipe(Qe(125)).subscribe(()=>{Ye("drawer",!1),Ye("search",!1)});Jr.pipe(v(({mode:e})=>e==="global")).subscribe(e=>{switch(e.type){case"p":case",":let t=ce("link[rel=prev]");typeof t!="undefined"&&st(t);break;case"n":case".":let r=ce("link[rel=next]");typeof r!="undefined"&&st(r);break;case"Enter":let o=Ie();o instanceof 
HTMLLabelElement&&o.click()}});ci({document$:rt});pi({document$:rt,tablet$:hr});li({document$:rt});mi({viewport$:_e,tablet$:hr});var tt=Pn(Oe("header"),{viewport$:_e}),$t=rt.pipe(m(()=>Oe("main")),w(e=>Fn(e,{viewport$:_e,header$:tt})),Z(1)),Xa=L(...ne("consent").map(e=>fn(e,{target$:wt})),...ne("dialog").map(e=>$n(e,{alert$:Xr})),...ne("header").map(e=>Rn(e,{viewport$:_e,header$:tt,main$:$t})),...ne("palette").map(e=>jn(e)),...ne("progress").map(e=>Wn(e,{progress$:Zr})),...ne("search").map(e=>Zn(e,{index$:hi,keyboard$:Jr})),...ne("source").map(e=>ni(e))),Za=H(()=>L(...ne("announce").map(e=>mn(e)),...ne("content").map(e=>Hn(e,{viewport$:_e,target$:wt,print$:di})),...ne("content").map(e=>G("search.highlight")?ei(e,{index$:hi,location$:Pt}):M),...ne("header-title").map(e=>In(e,{viewport$:_e,header$:tt})),...ne("sidebar").map(e=>e.getAttribute("data-md-type")==="navigation"?Dr(ui,()=>Br(e,{viewport$:_e,header$:tt,main$:$t})):Dr(hr,()=>Br(e,{viewport$:_e,header$:tt,main$:$t}))),...ne("tabs").map(e=>ii(e,{viewport$:_e,header$:tt})),...ne("toc").map(e=>ai(e,{viewport$:_e,header$:tt,main$:$t,target$:wt})),...ne("top").map(e=>si(e,{viewport$:_e,header$:tt,main$:$t,target$:wt})))),bi=rt.pipe(w(()=>Za),Re(Xa),Z(1));bi.subscribe();window.document$=rt;window.location$=Pt;window.target$=wt;window.keyboard$=Jr;window.viewport$=_e;window.tablet$=hr;window.screen$=ui;window.print$=di;window.alert$=Xr;window.progress$=Zr;window.component$=bi;})(); +//# sourceMappingURL=bundle.d7c377c4.min.js.map + diff --git a/assets/javascripts/bundle.d7c377c4.min.js.map b/assets/javascripts/bundle.d7c377c4.min.js.map new file mode 100644 index 00000000..a57d388a --- /dev/null +++ b/assets/javascripts/bundle.d7c377c4.min.js.map @@ -0,0 +1,7 @@ +{ + "version": 3, + "sources": ["node_modules/focus-visible/dist/focus-visible.js", "node_modules/clipboard/dist/clipboard.js", "node_modules/escape-html/index.js", "src/templates/assets/javascripts/bundle.ts", "node_modules/rxjs/node_modules/tslib/tslib.es6.js", "node_modules/rxjs/src/internal/util/isFunction.ts", "node_modules/rxjs/src/internal/util/createErrorClass.ts", "node_modules/rxjs/src/internal/util/UnsubscriptionError.ts", "node_modules/rxjs/src/internal/util/arrRemove.ts", "node_modules/rxjs/src/internal/Subscription.ts", "node_modules/rxjs/src/internal/config.ts", "node_modules/rxjs/src/internal/scheduler/timeoutProvider.ts", "node_modules/rxjs/src/internal/util/reportUnhandledError.ts", "node_modules/rxjs/src/internal/util/noop.ts", "node_modules/rxjs/src/internal/NotificationFactories.ts", "node_modules/rxjs/src/internal/util/errorContext.ts", "node_modules/rxjs/src/internal/Subscriber.ts", "node_modules/rxjs/src/internal/symbol/observable.ts", "node_modules/rxjs/src/internal/util/identity.ts", "node_modules/rxjs/src/internal/util/pipe.ts", "node_modules/rxjs/src/internal/Observable.ts", "node_modules/rxjs/src/internal/util/lift.ts", "node_modules/rxjs/src/internal/operators/OperatorSubscriber.ts", "node_modules/rxjs/src/internal/scheduler/animationFrameProvider.ts", "node_modules/rxjs/src/internal/util/ObjectUnsubscribedError.ts", "node_modules/rxjs/src/internal/Subject.ts", "node_modules/rxjs/src/internal/scheduler/dateTimestampProvider.ts", "node_modules/rxjs/src/internal/ReplaySubject.ts", "node_modules/rxjs/src/internal/scheduler/Action.ts", "node_modules/rxjs/src/internal/scheduler/intervalProvider.ts", "node_modules/rxjs/src/internal/scheduler/AsyncAction.ts", "node_modules/rxjs/src/internal/Scheduler.ts", 
"node_modules/rxjs/src/internal/scheduler/AsyncScheduler.ts", "node_modules/rxjs/src/internal/scheduler/async.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameAction.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameScheduler.ts", "node_modules/rxjs/src/internal/scheduler/animationFrame.ts", "node_modules/rxjs/src/internal/observable/empty.ts", "node_modules/rxjs/src/internal/util/isScheduler.ts", "node_modules/rxjs/src/internal/util/args.ts", "node_modules/rxjs/src/internal/util/isArrayLike.ts", "node_modules/rxjs/src/internal/util/isPromise.ts", "node_modules/rxjs/src/internal/util/isInteropObservable.ts", "node_modules/rxjs/src/internal/util/isAsyncIterable.ts", "node_modules/rxjs/src/internal/util/throwUnobservableError.ts", "node_modules/rxjs/src/internal/symbol/iterator.ts", "node_modules/rxjs/src/internal/util/isIterable.ts", "node_modules/rxjs/src/internal/util/isReadableStreamLike.ts", "node_modules/rxjs/src/internal/observable/innerFrom.ts", "node_modules/rxjs/src/internal/util/executeSchedule.ts", "node_modules/rxjs/src/internal/operators/observeOn.ts", "node_modules/rxjs/src/internal/operators/subscribeOn.ts", "node_modules/rxjs/src/internal/scheduled/scheduleObservable.ts", "node_modules/rxjs/src/internal/scheduled/schedulePromise.ts", "node_modules/rxjs/src/internal/scheduled/scheduleArray.ts", "node_modules/rxjs/src/internal/scheduled/scheduleIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleAsyncIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleReadableStreamLike.ts", "node_modules/rxjs/src/internal/scheduled/scheduled.ts", "node_modules/rxjs/src/internal/observable/from.ts", "node_modules/rxjs/src/internal/observable/of.ts", "node_modules/rxjs/src/internal/observable/throwError.ts", "node_modules/rxjs/src/internal/util/EmptyError.ts", "node_modules/rxjs/src/internal/util/isDate.ts", "node_modules/rxjs/src/internal/operators/map.ts", "node_modules/rxjs/src/internal/util/mapOneOrManyArgs.ts", "node_modules/rxjs/src/internal/util/argsArgArrayOrObject.ts", "node_modules/rxjs/src/internal/util/createObject.ts", "node_modules/rxjs/src/internal/observable/combineLatest.ts", "node_modules/rxjs/src/internal/operators/mergeInternals.ts", "node_modules/rxjs/src/internal/operators/mergeMap.ts", "node_modules/rxjs/src/internal/operators/mergeAll.ts", "node_modules/rxjs/src/internal/operators/concatAll.ts", "node_modules/rxjs/src/internal/observable/concat.ts", "node_modules/rxjs/src/internal/observable/defer.ts", "node_modules/rxjs/src/internal/observable/fromEvent.ts", "node_modules/rxjs/src/internal/observable/fromEventPattern.ts", "node_modules/rxjs/src/internal/observable/timer.ts", "node_modules/rxjs/src/internal/observable/merge.ts", "node_modules/rxjs/src/internal/observable/never.ts", "node_modules/rxjs/src/internal/util/argsOrArgArray.ts", "node_modules/rxjs/src/internal/operators/filter.ts", "node_modules/rxjs/src/internal/observable/zip.ts", "node_modules/rxjs/src/internal/operators/audit.ts", "node_modules/rxjs/src/internal/operators/auditTime.ts", "node_modules/rxjs/src/internal/operators/bufferCount.ts", "node_modules/rxjs/src/internal/operators/catchError.ts", "node_modules/rxjs/src/internal/operators/scanInternals.ts", "node_modules/rxjs/src/internal/operators/combineLatest.ts", "node_modules/rxjs/src/internal/operators/combineLatestWith.ts", "node_modules/rxjs/src/internal/operators/debounceTime.ts", "node_modules/rxjs/src/internal/operators/defaultIfEmpty.ts", "node_modules/rxjs/src/internal/operators/take.ts", 
"node_modules/rxjs/src/internal/operators/ignoreElements.ts", "node_modules/rxjs/src/internal/operators/mapTo.ts", "node_modules/rxjs/src/internal/operators/delayWhen.ts", "node_modules/rxjs/src/internal/operators/delay.ts", "node_modules/rxjs/src/internal/operators/distinctUntilChanged.ts", "node_modules/rxjs/src/internal/operators/distinctUntilKeyChanged.ts", "node_modules/rxjs/src/internal/operators/throwIfEmpty.ts", "node_modules/rxjs/src/internal/operators/endWith.ts", "node_modules/rxjs/src/internal/operators/finalize.ts", "node_modules/rxjs/src/internal/operators/first.ts", "node_modules/rxjs/src/internal/operators/takeLast.ts", "node_modules/rxjs/src/internal/operators/merge.ts", "node_modules/rxjs/src/internal/operators/mergeWith.ts", "node_modules/rxjs/src/internal/operators/repeat.ts", "node_modules/rxjs/src/internal/operators/sample.ts", "node_modules/rxjs/src/internal/operators/scan.ts", "node_modules/rxjs/src/internal/operators/share.ts", "node_modules/rxjs/src/internal/operators/shareReplay.ts", "node_modules/rxjs/src/internal/operators/skip.ts", "node_modules/rxjs/src/internal/operators/skipUntil.ts", "node_modules/rxjs/src/internal/operators/startWith.ts", "node_modules/rxjs/src/internal/operators/switchMap.ts", "node_modules/rxjs/src/internal/operators/takeUntil.ts", "node_modules/rxjs/src/internal/operators/takeWhile.ts", "node_modules/rxjs/src/internal/operators/tap.ts", "node_modules/rxjs/src/internal/operators/throttle.ts", "node_modules/rxjs/src/internal/operators/throttleTime.ts", "node_modules/rxjs/src/internal/operators/withLatestFrom.ts", "node_modules/rxjs/src/internal/operators/zip.ts", "node_modules/rxjs/src/internal/operators/zipWith.ts", "src/templates/assets/javascripts/browser/document/index.ts", "src/templates/assets/javascripts/browser/element/_/index.ts", "src/templates/assets/javascripts/browser/element/focus/index.ts", "src/templates/assets/javascripts/browser/element/hover/index.ts", "src/templates/assets/javascripts/browser/element/offset/_/index.ts", "src/templates/assets/javascripts/browser/element/offset/content/index.ts", "src/templates/assets/javascripts/utilities/h/index.ts", "src/templates/assets/javascripts/utilities/round/index.ts", "src/templates/assets/javascripts/browser/script/index.ts", "src/templates/assets/javascripts/browser/element/size/_/index.ts", "src/templates/assets/javascripts/browser/element/size/content/index.ts", "src/templates/assets/javascripts/browser/element/visibility/index.ts", "src/templates/assets/javascripts/browser/toggle/index.ts", "src/templates/assets/javascripts/browser/keyboard/index.ts", "src/templates/assets/javascripts/browser/location/_/index.ts", "src/templates/assets/javascripts/browser/location/hash/index.ts", "src/templates/assets/javascripts/browser/media/index.ts", "src/templates/assets/javascripts/browser/request/index.ts", "src/templates/assets/javascripts/browser/viewport/offset/index.ts", "src/templates/assets/javascripts/browser/viewport/size/index.ts", "src/templates/assets/javascripts/browser/viewport/_/index.ts", "src/templates/assets/javascripts/browser/viewport/at/index.ts", "src/templates/assets/javascripts/browser/worker/index.ts", "src/templates/assets/javascripts/_/index.ts", "src/templates/assets/javascripts/components/_/index.ts", "src/templates/assets/javascripts/components/announce/index.ts", "src/templates/assets/javascripts/components/consent/index.ts", "src/templates/assets/javascripts/templates/tooltip/index.tsx", 
"src/templates/assets/javascripts/templates/annotation/index.tsx", "src/templates/assets/javascripts/templates/clipboard/index.tsx", "src/templates/assets/javascripts/templates/search/index.tsx", "src/templates/assets/javascripts/templates/source/index.tsx", "src/templates/assets/javascripts/templates/tabbed/index.tsx", "src/templates/assets/javascripts/templates/table/index.tsx", "src/templates/assets/javascripts/templates/version/index.tsx", "src/templates/assets/javascripts/components/tooltip/index.ts", "src/templates/assets/javascripts/components/content/annotation/_/index.ts", "src/templates/assets/javascripts/components/content/annotation/list/index.ts", "src/templates/assets/javascripts/components/content/annotation/block/index.ts", "src/templates/assets/javascripts/components/content/code/_/index.ts", "src/templates/assets/javascripts/components/content/details/index.ts", "src/templates/assets/javascripts/components/content/mermaid/index.css", "src/templates/assets/javascripts/components/content/mermaid/index.ts", "src/templates/assets/javascripts/components/content/table/index.ts", "src/templates/assets/javascripts/components/content/tabs/index.ts", "src/templates/assets/javascripts/components/content/_/index.ts", "src/templates/assets/javascripts/components/dialog/index.ts", "src/templates/assets/javascripts/components/header/_/index.ts", "src/templates/assets/javascripts/components/header/title/index.ts", "src/templates/assets/javascripts/components/main/index.ts", "src/templates/assets/javascripts/components/palette/index.ts", "src/templates/assets/javascripts/components/progress/index.ts", "src/templates/assets/javascripts/integrations/clipboard/index.ts", "src/templates/assets/javascripts/integrations/sitemap/index.ts", "src/templates/assets/javascripts/integrations/instant/index.ts", "src/templates/assets/javascripts/integrations/search/highlighter/index.ts", "src/templates/assets/javascripts/integrations/search/worker/message/index.ts", "src/templates/assets/javascripts/integrations/search/worker/_/index.ts", "src/templates/assets/javascripts/integrations/version/index.ts", "src/templates/assets/javascripts/components/search/query/index.ts", "src/templates/assets/javascripts/components/search/result/index.ts", "src/templates/assets/javascripts/components/search/share/index.ts", "src/templates/assets/javascripts/components/search/suggest/index.ts", "src/templates/assets/javascripts/components/search/_/index.ts", "src/templates/assets/javascripts/components/search/highlight/index.ts", "src/templates/assets/javascripts/components/sidebar/index.ts", "src/templates/assets/javascripts/components/source/facts/github/index.ts", "src/templates/assets/javascripts/components/source/facts/gitlab/index.ts", "src/templates/assets/javascripts/components/source/facts/_/index.ts", "src/templates/assets/javascripts/components/source/_/index.ts", "src/templates/assets/javascripts/components/tabs/index.ts", "src/templates/assets/javascripts/components/toc/index.ts", "src/templates/assets/javascripts/components/top/index.ts", "src/templates/assets/javascripts/patches/ellipsis/index.ts", "src/templates/assets/javascripts/patches/indeterminate/index.ts", "src/templates/assets/javascripts/patches/scrollfix/index.ts", "src/templates/assets/javascripts/patches/scrolllock/index.ts", "src/templates/assets/javascripts/polyfills/index.ts"], + "sourcesContent": ["(function (global, factory) {\n typeof exports === 'object' && typeof module !== 'undefined' ? 
factory() :\n typeof define === 'function' && define.amd ? define(factory) :\n (factory());\n}(this, (function () { 'use strict';\n\n /**\n * Applies the :focus-visible polyfill at the given scope.\n * A scope in this case is either the top-level Document or a Shadow Root.\n *\n * @param {(Document|ShadowRoot)} scope\n * @see https://github.com/WICG/focus-visible\n */\n function applyFocusVisiblePolyfill(scope) {\n var hadKeyboardEvent = true;\n var hadFocusVisibleRecently = false;\n var hadFocusVisibleRecentlyTimeout = null;\n\n var inputTypesAllowlist = {\n text: true,\n search: true,\n url: true,\n tel: true,\n email: true,\n password: true,\n number: true,\n date: true,\n month: true,\n week: true,\n time: true,\n datetime: true,\n 'datetime-local': true\n };\n\n /**\n * Helper function for legacy browsers and iframes which sometimes focus\n * elements like document, body, and non-interactive SVG.\n * @param {Element} el\n */\n function isValidFocusTarget(el) {\n if (\n el &&\n el !== document &&\n el.nodeName !== 'HTML' &&\n el.nodeName !== 'BODY' &&\n 'classList' in el &&\n 'contains' in el.classList\n ) {\n return true;\n }\n return false;\n }\n\n /**\n * Computes whether the given element should automatically trigger the\n * `focus-visible` class being added, i.e. whether it should always match\n * `:focus-visible` when focused.\n * @param {Element} el\n * @return {boolean}\n */\n function focusTriggersKeyboardModality(el) {\n var type = el.type;\n var tagName = el.tagName;\n\n if (tagName === 'INPUT' && inputTypesAllowlist[type] && !el.readOnly) {\n return true;\n }\n\n if (tagName === 'TEXTAREA' && !el.readOnly) {\n return true;\n }\n\n if (el.isContentEditable) {\n return true;\n }\n\n return false;\n }\n\n /**\n * Add the `focus-visible` class to the given element if it was not added by\n * the author.\n * @param {Element} el\n */\n function addFocusVisibleClass(el) {\n if (el.classList.contains('focus-visible')) {\n return;\n }\n el.classList.add('focus-visible');\n el.setAttribute('data-focus-visible-added', '');\n }\n\n /**\n * Remove the `focus-visible` class from the given element if it was not\n * originally added by the author.\n * @param {Element} el\n */\n function removeFocusVisibleClass(el) {\n if (!el.hasAttribute('data-focus-visible-added')) {\n return;\n }\n el.classList.remove('focus-visible');\n el.removeAttribute('data-focus-visible-added');\n }\n\n /**\n * If the most recent user interaction was via the keyboard;\n * and the key press did not include a meta, alt/option, or control key;\n * then the modality is keyboard. 
Otherwise, the modality is not keyboard.\n * Apply `focus-visible` to any current active element and keep track\n * of our keyboard modality state with `hadKeyboardEvent`.\n * @param {KeyboardEvent} e\n */\n function onKeyDown(e) {\n if (e.metaKey || e.altKey || e.ctrlKey) {\n return;\n }\n\n if (isValidFocusTarget(scope.activeElement)) {\n addFocusVisibleClass(scope.activeElement);\n }\n\n hadKeyboardEvent = true;\n }\n\n /**\n * If at any point a user clicks with a pointing device, ensure that we change\n * the modality away from keyboard.\n * This avoids the situation where a user presses a key on an already focused\n * element, and then clicks on a different element, focusing it with a\n * pointing device, while we still think we're in keyboard modality.\n * @param {Event} e\n */\n function onPointerDown(e) {\n hadKeyboardEvent = false;\n }\n\n /**\n * On `focus`, add the `focus-visible` class to the target if:\n * - the target received focus as a result of keyboard navigation, or\n * - the event target is an element that will likely require interaction\n * via the keyboard (e.g. a text box)\n * @param {Event} e\n */\n function onFocus(e) {\n // Prevent IE from focusing the document or HTML element.\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (hadKeyboardEvent || focusTriggersKeyboardModality(e.target)) {\n addFocusVisibleClass(e.target);\n }\n }\n\n /**\n * On `blur`, remove the `focus-visible` class from the target.\n * @param {Event} e\n */\n function onBlur(e) {\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (\n e.target.classList.contains('focus-visible') ||\n e.target.hasAttribute('data-focus-visible-added')\n ) {\n // To detect a tab/window switch, we look for a blur event followed\n // rapidly by a visibility change.\n // If we don't see a visibility change within 100ms, it's probably a\n // regular focus change.\n hadFocusVisibleRecently = true;\n window.clearTimeout(hadFocusVisibleRecentlyTimeout);\n hadFocusVisibleRecentlyTimeout = window.setTimeout(function() {\n hadFocusVisibleRecently = false;\n }, 100);\n removeFocusVisibleClass(e.target);\n }\n }\n\n /**\n * If the user changes tabs, keep track of whether or not the previously\n * focused element had .focus-visible.\n * @param {Event} e\n */\n function onVisibilityChange(e) {\n if (document.visibilityState === 'hidden') {\n // If the tab becomes active again, the browser will handle calling focus\n // on the element (Safari actually calls it twice).\n // If this tab change caused a blur on an element with focus-visible,\n // re-apply the class when the user switches back to the tab.\n if (hadFocusVisibleRecently) {\n hadKeyboardEvent = true;\n }\n addInitialPointerMoveListeners();\n }\n }\n\n /**\n * Add a group of listeners to detect usage of any pointing devices.\n * These listeners will be added when the polyfill first loads, and anytime\n * the window is blurred, so that they are active when the window regains\n * focus.\n */\n function addInitialPointerMoveListeners() {\n document.addEventListener('mousemove', onInitialPointerMove);\n document.addEventListener('mousedown', onInitialPointerMove);\n document.addEventListener('mouseup', onInitialPointerMove);\n document.addEventListener('pointermove', onInitialPointerMove);\n document.addEventListener('pointerdown', onInitialPointerMove);\n document.addEventListener('pointerup', onInitialPointerMove);\n document.addEventListener('touchmove', onInitialPointerMove);\n document.addEventListener('touchstart', onInitialPointerMove);\n 
document.addEventListener('touchend', onInitialPointerMove);\n }\n\n function removeInitialPointerMoveListeners() {\n document.removeEventListener('mousemove', onInitialPointerMove);\n document.removeEventListener('mousedown', onInitialPointerMove);\n document.removeEventListener('mouseup', onInitialPointerMove);\n document.removeEventListener('pointermove', onInitialPointerMove);\n document.removeEventListener('pointerdown', onInitialPointerMove);\n document.removeEventListener('pointerup', onInitialPointerMove);\n document.removeEventListener('touchmove', onInitialPointerMove);\n document.removeEventListener('touchstart', onInitialPointerMove);\n document.removeEventListener('touchend', onInitialPointerMove);\n }\n\n /**\n * When the polfyill first loads, assume the user is in keyboard modality.\n * If any event is received from a pointing device (e.g. mouse, pointer,\n * touch), turn off keyboard modality.\n * This accounts for situations where focus enters the page from the URL bar.\n * @param {Event} e\n */\n function onInitialPointerMove(e) {\n // Work around a Safari quirk that fires a mousemove on whenever the\n // window blurs, even if you're tabbing out of the page. \u00AF\\_(\u30C4)_/\u00AF\n if (e.target.nodeName && e.target.nodeName.toLowerCase() === 'html') {\n return;\n }\n\n hadKeyboardEvent = false;\n removeInitialPointerMoveListeners();\n }\n\n // For some kinds of state, we are interested in changes at the global scope\n // only. For example, global pointer input, global key presses and global\n // visibility change should affect the state at every scope:\n document.addEventListener('keydown', onKeyDown, true);\n document.addEventListener('mousedown', onPointerDown, true);\n document.addEventListener('pointerdown', onPointerDown, true);\n document.addEventListener('touchstart', onPointerDown, true);\n document.addEventListener('visibilitychange', onVisibilityChange, true);\n\n addInitialPointerMoveListeners();\n\n // For focus and blur, we specifically care about state changes in the local\n // scope. This is because focus / blur events that originate from within a\n // shadow root are not re-dispatched from the host element if it was already\n // the active element in its own scope:\n scope.addEventListener('focus', onFocus, true);\n scope.addEventListener('blur', onBlur, true);\n\n // We detect that a node is a ShadowRoot by ensuring that it is a\n // DocumentFragment and also has a host property. This check covers native\n // implementation and polyfill implementation transparently. If we only cared\n // about the native implementation, we could just check if the scope was\n // an instance of a ShadowRoot.\n if (scope.nodeType === Node.DOCUMENT_FRAGMENT_NODE && scope.host) {\n // Since a ShadowRoot is a special kind of DocumentFragment, it does not\n // have a root element to add a class to. So, we add this attribute to the\n // host element instead:\n scope.host.setAttribute('data-js-focus-visible', '');\n } else if (scope.nodeType === Node.DOCUMENT_NODE) {\n document.documentElement.classList.add('js-focus-visible');\n document.documentElement.setAttribute('data-js-focus-visible', '');\n }\n }\n\n // It is important to wrap all references to global window and document in\n // these checks to support server-side rendering use cases\n // @see https://github.com/WICG/focus-visible/issues/199\n if (typeof window !== 'undefined' && typeof document !== 'undefined') {\n // Make the polyfill helper globally available. 
This can be used as a signal\n // to interested libraries that wish to coordinate with the polyfill for e.g.,\n // applying the polyfill to a shadow root:\n window.applyFocusVisiblePolyfill = applyFocusVisiblePolyfill;\n\n // Notify interested libraries of the polyfill's presence, in case the\n // polyfill was loaded lazily:\n var event;\n\n try {\n event = new CustomEvent('focus-visible-polyfill-ready');\n } catch (error) {\n // IE11 does not support using CustomEvent as a constructor directly:\n event = document.createEvent('CustomEvent');\n event.initCustomEvent('focus-visible-polyfill-ready', false, false, {});\n }\n\n window.dispatchEvent(event);\n }\n\n if (typeof document !== 'undefined') {\n // Apply the polyfill to the global document, so that no JavaScript\n // coordination is required to use the polyfill in the top-level document:\n applyFocusVisiblePolyfill(document);\n }\n\n})));\n", "/*!\n * clipboard.js v2.0.11\n * https://clipboardjs.com/\n *\n * Licensed MIT \u00A9 Zeno Rocha\n */\n(function webpackUniversalModuleDefinition(root, factory) {\n\tif(typeof exports === 'object' && typeof module === 'object')\n\t\tmodule.exports = factory();\n\telse if(typeof define === 'function' && define.amd)\n\t\tdefine([], factory);\n\telse if(typeof exports === 'object')\n\t\texports[\"ClipboardJS\"] = factory();\n\telse\n\t\troot[\"ClipboardJS\"] = factory();\n})(this, function() {\nreturn /******/ (function() { // webpackBootstrap\n/******/ \tvar __webpack_modules__ = ({\n\n/***/ 686:\n/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n\n// EXPORTS\n__webpack_require__.d(__webpack_exports__, {\n \"default\": function() { return /* binding */ clipboard; }\n});\n\n// EXTERNAL MODULE: ./node_modules/tiny-emitter/index.js\nvar tiny_emitter = __webpack_require__(279);\nvar tiny_emitter_default = /*#__PURE__*/__webpack_require__.n(tiny_emitter);\n// EXTERNAL MODULE: ./node_modules/good-listener/src/listen.js\nvar listen = __webpack_require__(370);\nvar listen_default = /*#__PURE__*/__webpack_require__.n(listen);\n// EXTERNAL MODULE: ./node_modules/select/src/select.js\nvar src_select = __webpack_require__(817);\nvar select_default = /*#__PURE__*/__webpack_require__.n(src_select);\n;// CONCATENATED MODULE: ./src/common/command.js\n/**\n * Executes a given operation type.\n * @param {String} type\n * @return {Boolean}\n */\nfunction command(type) {\n try {\n return document.execCommand(type);\n } catch (err) {\n return false;\n }\n}\n;// CONCATENATED MODULE: ./src/actions/cut.js\n\n\n/**\n * Cut action wrapper.\n * @param {String|HTMLElement} target\n * @return {String}\n */\n\nvar ClipboardActionCut = function ClipboardActionCut(target) {\n var selectedText = select_default()(target);\n command('cut');\n return selectedText;\n};\n\n/* harmony default export */ var actions_cut = (ClipboardActionCut);\n;// CONCATENATED MODULE: ./src/common/create-fake-element.js\n/**\n * Creates a fake textarea element with a value.\n * @param {String} value\n * @return {HTMLElement}\n */\nfunction createFakeElement(value) {\n var isRTL = document.documentElement.getAttribute('dir') === 'rtl';\n var fakeElement = document.createElement('textarea'); // Prevent zooming on iOS\n\n fakeElement.style.fontSize = '12pt'; // Reset box model\n\n fakeElement.style.border = '0';\n fakeElement.style.padding = '0';\n fakeElement.style.margin = '0'; // Move element out of screen horizontally\n\n fakeElement.style.position = 'absolute';\n fakeElement.style[isRTL ? 
'right' : 'left'] = '-9999px'; // Move element to the same position vertically\n\n var yPosition = window.pageYOffset || document.documentElement.scrollTop;\n fakeElement.style.top = \"\".concat(yPosition, \"px\");\n fakeElement.setAttribute('readonly', '');\n fakeElement.value = value;\n return fakeElement;\n}\n;// CONCATENATED MODULE: ./src/actions/copy.js\n\n\n\n/**\n * Create fake copy action wrapper using a fake element.\n * @param {String} target\n * @param {Object} options\n * @return {String}\n */\n\nvar fakeCopyAction = function fakeCopyAction(value, options) {\n var fakeElement = createFakeElement(value);\n options.container.appendChild(fakeElement);\n var selectedText = select_default()(fakeElement);\n command('copy');\n fakeElement.remove();\n return selectedText;\n};\n/**\n * Copy action wrapper.\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @return {String}\n */\n\n\nvar ClipboardActionCopy = function ClipboardActionCopy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n var selectedText = '';\n\n if (typeof target === 'string') {\n selectedText = fakeCopyAction(target, options);\n } else if (target instanceof HTMLInputElement && !['text', 'search', 'url', 'tel', 'password'].includes(target === null || target === void 0 ? void 0 : target.type)) {\n // If input type doesn't support `setSelectionRange`. Simulate it. https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputElement/setSelectionRange\n selectedText = fakeCopyAction(target.value, options);\n } else {\n selectedText = select_default()(target);\n command('copy');\n }\n\n return selectedText;\n};\n\n/* harmony default export */ var actions_copy = (ClipboardActionCopy);\n;// CONCATENATED MODULE: ./src/actions/default.js\nfunction _typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { _typeof = function _typeof(obj) { return typeof obj; }; } else { _typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return _typeof(obj); }\n\n\n\n/**\n * Inner function which performs selection from either `text` or `target`\n * properties and then executes copy or cut operations.\n * @param {Object} options\n */\n\nvar ClipboardActionDefault = function ClipboardActionDefault() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n // Defines base properties passed from constructor.\n var _options$action = options.action,\n action = _options$action === void 0 ? 'copy' : _options$action,\n container = options.container,\n target = options.target,\n text = options.text; // Sets the `action` to be performed which can be either 'copy' or 'cut'.\n\n if (action !== 'copy' && action !== 'cut') {\n throw new Error('Invalid \"action\" value, use either \"copy\" or \"cut\"');\n } // Sets the `target` property using an element that will be have its content copied.\n\n\n if (target !== undefined) {\n if (target && _typeof(target) === 'object' && target.nodeType === 1) {\n if (action === 'copy' && target.hasAttribute('disabled')) {\n throw new Error('Invalid \"target\" attribute. Please use \"readonly\" instead of \"disabled\" attribute');\n }\n\n if (action === 'cut' && (target.hasAttribute('readonly') || target.hasAttribute('disabled'))) {\n throw new Error('Invalid \"target\" attribute. 
You can\\'t cut text from elements with \"readonly\" or \"disabled\" attributes');\n }\n } else {\n throw new Error('Invalid \"target\" value, use a valid Element');\n }\n } // Define selection strategy based on `text` property.\n\n\n if (text) {\n return actions_copy(text, {\n container: container\n });\n } // Defines which selection strategy based on `target` property.\n\n\n if (target) {\n return action === 'cut' ? actions_cut(target) : actions_copy(target, {\n container: container\n });\n }\n};\n\n/* harmony default export */ var actions_default = (ClipboardActionDefault);\n;// CONCATENATED MODULE: ./src/clipboard.js\nfunction clipboard_typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { clipboard_typeof = function _typeof(obj) { return typeof obj; }; } else { clipboard_typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return clipboard_typeof(obj); }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nfunction _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } }\n\nfunction _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); return Constructor; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function\"); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, writable: true, configurable: true } }); if (superClass) _setPrototypeOf(subClass, superClass); }\n\nfunction _setPrototypeOf(o, p) { _setPrototypeOf = Object.setPrototypeOf || function _setPrototypeOf(o, p) { o.__proto__ = p; return o; }; return _setPrototypeOf(o, p); }\n\nfunction _createSuper(Derived) { var hasNativeReflectConstruct = _isNativeReflectConstruct(); return function _createSuperInternal() { var Super = _getPrototypeOf(Derived), result; if (hasNativeReflectConstruct) { var NewTarget = _getPrototypeOf(this).constructor; result = Reflect.construct(Super, arguments, NewTarget); } else { result = Super.apply(this, arguments); } return _possibleConstructorReturn(this, result); }; }\n\nfunction _possibleConstructorReturn(self, call) { if (call && (clipboard_typeof(call) === \"object\" || typeof call === \"function\")) { return call; } return _assertThisInitialized(self); }\n\nfunction _assertThisInitialized(self) { if (self === void 0) { throw new ReferenceError(\"this hasn't been initialised - super() hasn't been called\"); } return self; }\n\nfunction _isNativeReflectConstruct() { if (typeof Reflect === \"undefined\" || !Reflect.construct) return false; if (Reflect.construct.sham) return false; if (typeof Proxy === \"function\") return true; try { Date.prototype.toString.call(Reflect.construct(Date, [], function () {})); return true; } catch (e) { return false; } }\n\nfunction _getPrototypeOf(o) { _getPrototypeOf = Object.setPrototypeOf ? 
Object.getPrototypeOf : function _getPrototypeOf(o) { return o.__proto__ || Object.getPrototypeOf(o); }; return _getPrototypeOf(o); }\n\n\n\n\n\n\n/**\n * Helper function to retrieve attribute value.\n * @param {String} suffix\n * @param {Element} element\n */\n\nfunction getAttributeValue(suffix, element) {\n var attribute = \"data-clipboard-\".concat(suffix);\n\n if (!element.hasAttribute(attribute)) {\n return;\n }\n\n return element.getAttribute(attribute);\n}\n/**\n * Base class which takes one or more elements, adds event listeners to them,\n * and instantiates a new `ClipboardAction` on each click.\n */\n\n\nvar Clipboard = /*#__PURE__*/function (_Emitter) {\n _inherits(Clipboard, _Emitter);\n\n var _super = _createSuper(Clipboard);\n\n /**\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n * @param {Object} options\n */\n function Clipboard(trigger, options) {\n var _this;\n\n _classCallCheck(this, Clipboard);\n\n _this = _super.call(this);\n\n _this.resolveOptions(options);\n\n _this.listenClick(trigger);\n\n return _this;\n }\n /**\n * Defines if attributes would be resolved using internal setter functions\n * or custom functions that were passed in the constructor.\n * @param {Object} options\n */\n\n\n _createClass(Clipboard, [{\n key: \"resolveOptions\",\n value: function resolveOptions() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n this.action = typeof options.action === 'function' ? options.action : this.defaultAction;\n this.target = typeof options.target === 'function' ? options.target : this.defaultTarget;\n this.text = typeof options.text === 'function' ? options.text : this.defaultText;\n this.container = clipboard_typeof(options.container) === 'object' ? options.container : document.body;\n }\n /**\n * Adds a click event listener to the passed trigger.\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n */\n\n }, {\n key: \"listenClick\",\n value: function listenClick(trigger) {\n var _this2 = this;\n\n this.listener = listen_default()(trigger, 'click', function (e) {\n return _this2.onClick(e);\n });\n }\n /**\n * Defines a new `ClipboardAction` on each click event.\n * @param {Event} e\n */\n\n }, {\n key: \"onClick\",\n value: function onClick(e) {\n var trigger = e.delegateTarget || e.currentTarget;\n var action = this.action(trigger) || 'copy';\n var text = actions_default({\n action: action,\n container: this.container,\n target: this.target(trigger),\n text: this.text(trigger)\n }); // Fires an event based on the copy operation result.\n\n this.emit(text ? 
'success' : 'error', {\n action: action,\n text: text,\n trigger: trigger,\n clearSelection: function clearSelection() {\n if (trigger) {\n trigger.focus();\n }\n\n window.getSelection().removeAllRanges();\n }\n });\n }\n /**\n * Default `action` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultAction\",\n value: function defaultAction(trigger) {\n return getAttributeValue('action', trigger);\n }\n /**\n * Default `target` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultTarget\",\n value: function defaultTarget(trigger) {\n var selector = getAttributeValue('target', trigger);\n\n if (selector) {\n return document.querySelector(selector);\n }\n }\n /**\n * Allow fire programmatically a copy action\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @returns Text copied.\n */\n\n }, {\n key: \"defaultText\",\n\n /**\n * Default `text` lookup function.\n * @param {Element} trigger\n */\n value: function defaultText(trigger) {\n return getAttributeValue('text', trigger);\n }\n /**\n * Destroy lifecycle.\n */\n\n }, {\n key: \"destroy\",\n value: function destroy() {\n this.listener.destroy();\n }\n }], [{\n key: \"copy\",\n value: function copy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n return actions_copy(target, options);\n }\n /**\n * Allow fire programmatically a cut action\n * @param {String|HTMLElement} target\n * @returns Text cutted.\n */\n\n }, {\n key: \"cut\",\n value: function cut(target) {\n return actions_cut(target);\n }\n /**\n * Returns the support of the given action, or all actions if no action is\n * given.\n * @param {String} [action]\n */\n\n }, {\n key: \"isSupported\",\n value: function isSupported() {\n var action = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : ['copy', 'cut'];\n var actions = typeof action === 'string' ? 
[action] : action;\n var support = !!document.queryCommandSupported;\n actions.forEach(function (action) {\n support = support && !!document.queryCommandSupported(action);\n });\n return support;\n }\n }]);\n\n return Clipboard;\n}((tiny_emitter_default()));\n\n/* harmony default export */ var clipboard = (Clipboard);\n\n/***/ }),\n\n/***/ 828:\n/***/ (function(module) {\n\nvar DOCUMENT_NODE_TYPE = 9;\n\n/**\n * A polyfill for Element.matches()\n */\nif (typeof Element !== 'undefined' && !Element.prototype.matches) {\n var proto = Element.prototype;\n\n proto.matches = proto.matchesSelector ||\n proto.mozMatchesSelector ||\n proto.msMatchesSelector ||\n proto.oMatchesSelector ||\n proto.webkitMatchesSelector;\n}\n\n/**\n * Finds the closest parent that matches a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @return {Function}\n */\nfunction closest (element, selector) {\n while (element && element.nodeType !== DOCUMENT_NODE_TYPE) {\n if (typeof element.matches === 'function' &&\n element.matches(selector)) {\n return element;\n }\n element = element.parentNode;\n }\n}\n\nmodule.exports = closest;\n\n\n/***/ }),\n\n/***/ 438:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar closest = __webpack_require__(828);\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction _delegate(element, selector, type, callback, useCapture) {\n var listenerFn = listener.apply(this, arguments);\n\n element.addEventListener(type, listenerFn, useCapture);\n\n return {\n destroy: function() {\n element.removeEventListener(type, listenerFn, useCapture);\n }\n }\n}\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element|String|Array} [elements]\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction delegate(elements, selector, type, callback, useCapture) {\n // Handle the regular Element usage\n if (typeof elements.addEventListener === 'function') {\n return _delegate.apply(null, arguments);\n }\n\n // Handle Element-less usage, it defaults to global delegation\n if (typeof type === 'function') {\n // Use `document` as the first parameter, then apply arguments\n // This is a short way to .unshift `arguments` without running into deoptimizations\n return _delegate.bind(null, document).apply(null, arguments);\n }\n\n // Handle Selector-based usage\n if (typeof elements === 'string') {\n elements = document.querySelectorAll(elements);\n }\n\n // Handle Array-like based usage\n return Array.prototype.map.call(elements, function (element) {\n return _delegate(element, selector, type, callback, useCapture);\n });\n}\n\n/**\n * Finds closest match and invokes callback.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Function}\n */\nfunction listener(element, selector, type, callback) {\n return function(e) {\n e.delegateTarget = closest(e.target, selector);\n\n if (e.delegateTarget) {\n callback.call(element, e);\n }\n }\n}\n\nmodule.exports = delegate;\n\n\n/***/ }),\n\n/***/ 879:\n/***/ (function(__unused_webpack_module, exports) {\n\n/**\n * Check if argument is a HTML element.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.node = function(value) {\n return value !== undefined\n && 
value instanceof HTMLElement\n && value.nodeType === 1;\n};\n\n/**\n * Check if argument is a list of HTML elements.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.nodeList = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return value !== undefined\n && (type === '[object NodeList]' || type === '[object HTMLCollection]')\n && ('length' in value)\n && (value.length === 0 || exports.node(value[0]));\n};\n\n/**\n * Check if argument is a string.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.string = function(value) {\n return typeof value === 'string'\n || value instanceof String;\n};\n\n/**\n * Check if argument is a function.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.fn = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return type === '[object Function]';\n};\n\n\n/***/ }),\n\n/***/ 370:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar is = __webpack_require__(879);\nvar delegate = __webpack_require__(438);\n\n/**\n * Validates all params and calls the right\n * listener function based on its target type.\n *\n * @param {String|HTMLElement|HTMLCollection|NodeList} target\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listen(target, type, callback) {\n if (!target && !type && !callback) {\n throw new Error('Missing required arguments');\n }\n\n if (!is.string(type)) {\n throw new TypeError('Second argument must be a String');\n }\n\n if (!is.fn(callback)) {\n throw new TypeError('Third argument must be a Function');\n }\n\n if (is.node(target)) {\n return listenNode(target, type, callback);\n }\n else if (is.nodeList(target)) {\n return listenNodeList(target, type, callback);\n }\n else if (is.string(target)) {\n return listenSelector(target, type, callback);\n }\n else {\n throw new TypeError('First argument must be a String, HTMLElement, HTMLCollection, or NodeList');\n }\n}\n\n/**\n * Adds an event listener to a HTML element\n * and returns a remove listener function.\n *\n * @param {HTMLElement} node\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNode(node, type, callback) {\n node.addEventListener(type, callback);\n\n return {\n destroy: function() {\n node.removeEventListener(type, callback);\n }\n }\n}\n\n/**\n * Add an event listener to a list of HTML elements\n * and returns a remove listener function.\n *\n * @param {NodeList|HTMLCollection} nodeList\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNodeList(nodeList, type, callback) {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.addEventListener(type, callback);\n });\n\n return {\n destroy: function() {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.removeEventListener(type, callback);\n });\n }\n }\n}\n\n/**\n * Add an event listener to a selector\n * and returns a remove listener function.\n *\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenSelector(selector, type, callback) {\n return delegate(document.body, selector, type, callback);\n}\n\nmodule.exports = listen;\n\n\n/***/ }),\n\n/***/ 817:\n/***/ (function(module) {\n\nfunction select(element) {\n var selectedText;\n\n if (element.nodeName === 'SELECT') {\n element.focus();\n\n selectedText = element.value;\n }\n else if (element.nodeName === 'INPUT' || element.nodeName 
=== 'TEXTAREA') {\n var isReadOnly = element.hasAttribute('readonly');\n\n if (!isReadOnly) {\n element.setAttribute('readonly', '');\n }\n\n element.select();\n element.setSelectionRange(0, element.value.length);\n\n if (!isReadOnly) {\n element.removeAttribute('readonly');\n }\n\n selectedText = element.value;\n }\n else {\n if (element.hasAttribute('contenteditable')) {\n element.focus();\n }\n\n var selection = window.getSelection();\n var range = document.createRange();\n\n range.selectNodeContents(element);\n selection.removeAllRanges();\n selection.addRange(range);\n\n selectedText = selection.toString();\n }\n\n return selectedText;\n}\n\nmodule.exports = select;\n\n\n/***/ }),\n\n/***/ 279:\n/***/ (function(module) {\n\nfunction E () {\n // Keep this empty so it's easier to inherit from\n // (via https://github.com/lipsmack from https://github.com/scottcorgan/tiny-emitter/issues/3)\n}\n\nE.prototype = {\n on: function (name, callback, ctx) {\n var e = this.e || (this.e = {});\n\n (e[name] || (e[name] = [])).push({\n fn: callback,\n ctx: ctx\n });\n\n return this;\n },\n\n once: function (name, callback, ctx) {\n var self = this;\n function listener () {\n self.off(name, listener);\n callback.apply(ctx, arguments);\n };\n\n listener._ = callback\n return this.on(name, listener, ctx);\n },\n\n emit: function (name) {\n var data = [].slice.call(arguments, 1);\n var evtArr = ((this.e || (this.e = {}))[name] || []).slice();\n var i = 0;\n var len = evtArr.length;\n\n for (i; i < len; i++) {\n evtArr[i].fn.apply(evtArr[i].ctx, data);\n }\n\n return this;\n },\n\n off: function (name, callback) {\n var e = this.e || (this.e = {});\n var evts = e[name];\n var liveEvents = [];\n\n if (evts && callback) {\n for (var i = 0, len = evts.length; i < len; i++) {\n if (evts[i].fn !== callback && evts[i].fn._ !== callback)\n liveEvents.push(evts[i]);\n }\n }\n\n // Remove event from queue to prevent memory leak\n // Suggested by https://github.com/lazd\n // Ref: https://github.com/scottcorgan/tiny-emitter/commit/c6ebfaa9bc973b33d110a84a307742b7cf94c953#commitcomment-5024910\n\n (liveEvents.length)\n ? 
e[name] = liveEvents\n : delete e[name];\n\n return this;\n }\n};\n\nmodule.exports = E;\nmodule.exports.TinyEmitter = E;\n\n\n/***/ })\n\n/******/ \t});\n/************************************************************************/\n/******/ \t// The module cache\n/******/ \tvar __webpack_module_cache__ = {};\n/******/ \t\n/******/ \t// The require function\n/******/ \tfunction __webpack_require__(moduleId) {\n/******/ \t\t// Check if module is in cache\n/******/ \t\tif(__webpack_module_cache__[moduleId]) {\n/******/ \t\t\treturn __webpack_module_cache__[moduleId].exports;\n/******/ \t\t}\n/******/ \t\t// Create a new module (and put it into the cache)\n/******/ \t\tvar module = __webpack_module_cache__[moduleId] = {\n/******/ \t\t\t// no module.id needed\n/******/ \t\t\t// no module.loaded needed\n/******/ \t\t\texports: {}\n/******/ \t\t};\n/******/ \t\n/******/ \t\t// Execute the module function\n/******/ \t\t__webpack_modules__[moduleId](module, module.exports, __webpack_require__);\n/******/ \t\n/******/ \t\t// Return the exports of the module\n/******/ \t\treturn module.exports;\n/******/ \t}\n/******/ \t\n/************************************************************************/\n/******/ \t/* webpack/runtime/compat get default export */\n/******/ \t!function() {\n/******/ \t\t// getDefaultExport function for compatibility with non-harmony modules\n/******/ \t\t__webpack_require__.n = function(module) {\n/******/ \t\t\tvar getter = module && module.__esModule ?\n/******/ \t\t\t\tfunction() { return module['default']; } :\n/******/ \t\t\t\tfunction() { return module; };\n/******/ \t\t\t__webpack_require__.d(getter, { a: getter });\n/******/ \t\t\treturn getter;\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/define property getters */\n/******/ \t!function() {\n/******/ \t\t// define getter functions for harmony exports\n/******/ \t\t__webpack_require__.d = function(exports, definition) {\n/******/ \t\t\tfor(var key in definition) {\n/******/ \t\t\t\tif(__webpack_require__.o(definition, key) && !__webpack_require__.o(exports, key)) {\n/******/ \t\t\t\t\tObject.defineProperty(exports, key, { enumerable: true, get: definition[key] });\n/******/ \t\t\t\t}\n/******/ \t\t\t}\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/hasOwnProperty shorthand */\n/******/ \t!function() {\n/******/ \t\t__webpack_require__.o = function(obj, prop) { return Object.prototype.hasOwnProperty.call(obj, prop); }\n/******/ \t}();\n/******/ \t\n/************************************************************************/\n/******/ \t// module exports must be returned from runtime so entry inlining is disabled\n/******/ \t// startup\n/******/ \t// Load entry module and return exports\n/******/ \treturn __webpack_require__(686);\n/******/ })()\n.default;\n});", "/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n 
var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? html + str.substring(lastIndex, index)\n : html;\n}\n", "/*\n * Copyright (c) 2016-2023 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport \"focus-visible\"\n\nimport {\n EMPTY,\n NEVER,\n Observable,\n Subject,\n defer,\n delay,\n filter,\n map,\n merge,\n mergeWith,\n shareReplay,\n switchMap\n} from \"rxjs\"\n\nimport { configuration, feature } from \"./_\"\nimport {\n at,\n getActiveElement,\n getOptionalElement,\n requestJSON,\n setLocation,\n setToggle,\n watchDocument,\n watchKeyboard,\n watchLocation,\n watchLocationTarget,\n watchMedia,\n watchPrint,\n watchScript,\n watchViewport\n} from \"./browser\"\nimport {\n getComponentElement,\n getComponentElements,\n mountAnnounce,\n mountBackToTop,\n mountConsent,\n mountContent,\n mountDialog,\n mountHeader,\n mountHeaderTitle,\n mountPalette,\n mountProgress,\n mountSearch,\n mountSearchHiglight,\n mountSidebar,\n mountSource,\n mountTableOfContents,\n mountTabs,\n watchHeader,\n watchMain\n} from \"./components\"\nimport {\n SearchIndex,\n setupClipboardJS,\n setupInstantNavigation,\n setupVersionSelector\n} from \"./integrations\"\nimport {\n patchEllipsis,\n patchIndeterminate,\n patchScrollfix,\n patchScrolllock\n} from \"./patches\"\nimport \"./polyfills\"\n\n/* ----------------------------------------------------------------------------\n * Functions - @todo refactor\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch search index\n *\n * @returns Search index observable\n */\nfunction fetchSearchIndex(): Observable {\n if (location.protocol === \"file:\") {\n return watchScript(\n `${new URL(\"search/search_index.js\", config.base)}`\n )\n .pipe(\n // @ts-ignore - @todo fix typings\n map(() => __index),\n shareReplay(1)\n )\n } else {\n return requestJSON(\n new URL(\"search/search_index.json\", config.base)\n )\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Application\n * 
------------------------------------------------------------------------- */\n\n/* Yay, JavaScript is available */\ndocument.documentElement.classList.remove(\"no-js\")\ndocument.documentElement.classList.add(\"js\")\n\n/* Set up navigation observables and subjects */\nconst document$ = watchDocument()\nconst location$ = watchLocation()\nconst target$ = watchLocationTarget(location$)\nconst keyboard$ = watchKeyboard()\n\n/* Set up media observables */\nconst viewport$ = watchViewport()\nconst tablet$ = watchMedia(\"(min-width: 960px)\")\nconst screen$ = watchMedia(\"(min-width: 1220px)\")\nconst print$ = watchPrint()\n\n/* Retrieve search index, if search is enabled */\nconst config = configuration()\nconst index$ = document.forms.namedItem(\"search\")\n ? fetchSearchIndex()\n : NEVER\n\n/* Set up Clipboard.js integration */\nconst alert$ = new Subject()\nsetupClipboardJS({ alert$ })\n\n/* Set up progress indicator */\nconst progress$ = new Subject()\n\n/* Set up instant navigation, if enabled */\nif (feature(\"navigation.instant\"))\n setupInstantNavigation({ location$, viewport$, progress$ })\n .subscribe(document$)\n\n/* Set up version selector */\nif (config.version?.provider === \"mike\")\n setupVersionSelector({ document$ })\n\n/* Always close drawer and search on navigation */\nmerge(location$, target$)\n .pipe(\n delay(125)\n )\n .subscribe(() => {\n setToggle(\"drawer\", false)\n setToggle(\"search\", false)\n })\n\n/* Set up global keyboard handlers */\nkeyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\")\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Go to previous page */\n case \"p\":\n case \",\":\n const prev = getOptionalElement(\"link[rel=prev]\")\n if (typeof prev !== \"undefined\")\n setLocation(prev)\n break\n\n /* Go to next page */\n case \"n\":\n case \".\":\n const next = getOptionalElement(\"link[rel=next]\")\n if (typeof next !== \"undefined\")\n setLocation(next)\n break\n\n /* Expand navigation, see https://bit.ly/3ZjG5io */\n case \"Enter\":\n const active = getActiveElement()\n if (active instanceof HTMLLabelElement)\n active.click()\n }\n })\n\n/* Set up patches */\npatchEllipsis({ document$ })\npatchIndeterminate({ document$, tablet$ })\npatchScrollfix({ document$ })\npatchScrolllock({ viewport$, tablet$ })\n\n/* Set up header and main area observable */\nconst header$ = watchHeader(getComponentElement(\"header\"), { viewport$ })\nconst main$ = document$\n .pipe(\n map(() => getComponentElement(\"main\")),\n switchMap(el => watchMain(el, { viewport$, header$ })),\n shareReplay(1)\n )\n\n/* Set up control component observables */\nconst control$ = merge(\n\n /* Consent */\n ...getComponentElements(\"consent\")\n .map(el => mountConsent(el, { target$ })),\n\n /* Dialog */\n ...getComponentElements(\"dialog\")\n .map(el => mountDialog(el, { alert$ })),\n\n /* Header */\n ...getComponentElements(\"header\")\n .map(el => mountHeader(el, { viewport$, header$, main$ })),\n\n /* Color palette */\n ...getComponentElements(\"palette\")\n .map(el => mountPalette(el)),\n\n /* Progress bar */\n ...getComponentElements(\"progress\")\n .map(el => mountProgress(el, { progress$ })),\n\n /* Search */\n ...getComponentElements(\"search\")\n .map(el => mountSearch(el, { index$, keyboard$ })),\n\n /* Repository information */\n ...getComponentElements(\"source\")\n .map(el => mountSource(el))\n)\n\n/* Set up content component observables */\nconst content$ = defer(() => merge(\n\n /* Announcement bar */\n ...getComponentElements(\"announce\")\n .map(el => 
mountAnnounce(el)),\n\n /* Content */\n ...getComponentElements(\"content\")\n .map(el => mountContent(el, { viewport$, target$, print$ })),\n\n /* Search highlighting */\n ...getComponentElements(\"content\")\n .map(el => feature(\"search.highlight\")\n ? mountSearchHiglight(el, { index$, location$ })\n : EMPTY\n ),\n\n /* Header title */\n ...getComponentElements(\"header-title\")\n .map(el => mountHeaderTitle(el, { viewport$, header$ })),\n\n /* Sidebar */\n ...getComponentElements(\"sidebar\")\n .map(el => el.getAttribute(\"data-md-type\") === \"navigation\"\n ? at(screen$, () => mountSidebar(el, { viewport$, header$, main$ }))\n : at(tablet$, () => mountSidebar(el, { viewport$, header$, main$ }))\n ),\n\n /* Navigation tabs */\n ...getComponentElements(\"tabs\")\n .map(el => mountTabs(el, { viewport$, header$ })),\n\n /* Table of contents */\n ...getComponentElements(\"toc\")\n .map(el => mountTableOfContents(el, {\n viewport$, header$, main$, target$\n })),\n\n /* Back-to-top button */\n ...getComponentElements(\"top\")\n .map(el => mountBackToTop(el, { viewport$, header$, main$, target$ }))\n))\n\n/* Set up component observables */\nconst component$ = document$\n .pipe(\n switchMap(() => content$),\n mergeWith(control$),\n shareReplay(1)\n )\n\n/* Subscribe to all components */\ncomponent$.subscribe()\n\n/* ----------------------------------------------------------------------------\n * Exports\n * ------------------------------------------------------------------------- */\n\nwindow.document$ = document$ /* Document observable */\nwindow.location$ = location$ /* Location subject */\nwindow.target$ = target$ /* Location target observable */\nwindow.keyboard$ = keyboard$ /* Keyboard observable */\nwindow.viewport$ = viewport$ /* Viewport observable */\nwindow.tablet$ = tablet$ /* Media tablet observable */\nwindow.screen$ = screen$ /* Media screen observable */\nwindow.print$ = print$ /* Media print observable */\nwindow.alert$ = alert$ /* Alert subject */\nwindow.progress$ = progress$ /* Progress indicator subject */\nwindow.component$ = component$ /* Component observable */\n", "/*! *****************************************************************************\r\nCopyright (c) Microsoft Corporation.\r\n\r\nPermission to use, copy, modify, and/or distribute this software for any\r\npurpose with or without fee is hereby granted.\r\n\r\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH\r\nREGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY\r\nAND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,\r\nINDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM\r\nLOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR\r\nOTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR\r\nPERFORMANCE OF THIS SOFTWARE.\r\n***************************************************************************** */\r\n/* global Reflect, Promise */\r\n\r\nvar extendStatics = function(d, b) {\r\n extendStatics = Object.setPrototypeOf ||\r\n ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||\r\n function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; };\r\n return extendStatics(d, b);\r\n};\r\n\r\nexport function __extends(d, b) {\r\n if (typeof b !== \"function\" && b !== null)\r\n throw new TypeError(\"Class extends value \" + String(b) + \" is not a constructor or null\");\r\n extendStatics(d, b);\r\n function __() { this.constructor = d; }\r\n d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __());\r\n}\r\n\r\nexport var __assign = function() {\r\n __assign = Object.assign || function __assign(t) {\r\n for (var s, i = 1, n = arguments.length; i < n; i++) {\r\n s = arguments[i];\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p];\r\n }\r\n return t;\r\n }\r\n return __assign.apply(this, arguments);\r\n}\r\n\r\nexport function __rest(s, e) {\r\n var t = {};\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)\r\n t[p] = s[p];\r\n if (s != null && typeof Object.getOwnPropertySymbols === \"function\")\r\n for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {\r\n if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))\r\n t[p[i]] = s[p[i]];\r\n }\r\n return t;\r\n}\r\n\r\nexport function __decorate(decorators, target, key, desc) {\r\n var c = arguments.length, r = c < 3 ? target : desc === null ? desc = Object.getOwnPropertyDescriptor(target, key) : desc, d;\r\n if (typeof Reflect === \"object\" && typeof Reflect.decorate === \"function\") r = Reflect.decorate(decorators, target, key, desc);\r\n else for (var i = decorators.length - 1; i >= 0; i--) if (d = decorators[i]) r = (c < 3 ? d(r) : c > 3 ? d(target, key, r) : d(target, key)) || r;\r\n return c > 3 && r && Object.defineProperty(target, key, r), r;\r\n}\r\n\r\nexport function __param(paramIndex, decorator) {\r\n return function (target, key) { decorator(target, key, paramIndex); }\r\n}\r\n\r\nexport function __metadata(metadataKey, metadataValue) {\r\n if (typeof Reflect === \"object\" && typeof Reflect.metadata === \"function\") return Reflect.metadata(metadataKey, metadataValue);\r\n}\r\n\r\nexport function __awaiter(thisArg, _arguments, P, generator) {\r\n function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }\r\n return new (P || (P = Promise))(function (resolve, reject) {\r\n function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }\r\n function rejected(value) { try { step(generator[\"throw\"](value)); } catch (e) { reject(e); } }\r\n function step(result) { result.done ? 
resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }\r\n step((generator = generator.apply(thisArg, _arguments || [])).next());\r\n });\r\n}\r\n\r\nexport function __generator(thisArg, body) {\r\n var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g;\r\n return g = { next: verb(0), \"throw\": verb(1), \"return\": verb(2) }, typeof Symbol === \"function\" && (g[Symbol.iterator] = function() { return this; }), g;\r\n function verb(n) { return function (v) { return step([n, v]); }; }\r\n function step(op) {\r\n if (f) throw new TypeError(\"Generator is already executing.\");\r\n while (_) try {\r\n if (f = 1, y && (t = op[0] & 2 ? y[\"return\"] : op[0] ? y[\"throw\"] || ((t = y[\"return\"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;\r\n if (y = 0, t) op = [op[0] & 2, t.value];\r\n switch (op[0]) {\r\n case 0: case 1: t = op; break;\r\n case 4: _.label++; return { value: op[1], done: false };\r\n case 5: _.label++; y = op[1]; op = [0]; continue;\r\n case 7: op = _.ops.pop(); _.trys.pop(); continue;\r\n default:\r\n if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }\r\n if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }\r\n if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }\r\n if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }\r\n if (t[2]) _.ops.pop();\r\n _.trys.pop(); continue;\r\n }\r\n op = body.call(thisArg, _);\r\n } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }\r\n if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };\r\n }\r\n}\r\n\r\nexport var __createBinding = Object.create ? (function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } });\r\n}) : (function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n o[k2] = m[k];\r\n});\r\n\r\nexport function __exportStar(m, o) {\r\n for (var p in m) if (p !== \"default\" && !Object.prototype.hasOwnProperty.call(o, p)) __createBinding(o, m, p);\r\n}\r\n\r\nexport function __values(o) {\r\n var s = typeof Symbol === \"function\" && Symbol.iterator, m = s && o[s], i = 0;\r\n if (m) return m.call(o);\r\n if (o && typeof o.length === \"number\") return {\r\n next: function () {\r\n if (o && i >= o.length) o = void 0;\r\n return { value: o && o[i++], done: !o };\r\n }\r\n };\r\n throw new TypeError(s ? 
\"Object is not iterable.\" : \"Symbol.iterator is not defined.\");\r\n}\r\n\r\nexport function __read(o, n) {\r\n var m = typeof Symbol === \"function\" && o[Symbol.iterator];\r\n if (!m) return o;\r\n var i = m.call(o), r, ar = [], e;\r\n try {\r\n while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value);\r\n }\r\n catch (error) { e = { error: error }; }\r\n finally {\r\n try {\r\n if (r && !r.done && (m = i[\"return\"])) m.call(i);\r\n }\r\n finally { if (e) throw e.error; }\r\n }\r\n return ar;\r\n}\r\n\r\n/** @deprecated */\r\nexport function __spread() {\r\n for (var ar = [], i = 0; i < arguments.length; i++)\r\n ar = ar.concat(__read(arguments[i]));\r\n return ar;\r\n}\r\n\r\n/** @deprecated */\r\nexport function __spreadArrays() {\r\n for (var s = 0, i = 0, il = arguments.length; i < il; i++) s += arguments[i].length;\r\n for (var r = Array(s), k = 0, i = 0; i < il; i++)\r\n for (var a = arguments[i], j = 0, jl = a.length; j < jl; j++, k++)\r\n r[k] = a[j];\r\n return r;\r\n}\r\n\r\nexport function __spreadArray(to, from, pack) {\r\n if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) {\r\n if (ar || !(i in from)) {\r\n if (!ar) ar = Array.prototype.slice.call(from, 0, i);\r\n ar[i] = from[i];\r\n }\r\n }\r\n return to.concat(ar || Array.prototype.slice.call(from));\r\n}\r\n\r\nexport function __await(v) {\r\n return this instanceof __await ? (this.v = v, this) : new __await(v);\r\n}\r\n\r\nexport function __asyncGenerator(thisArg, _arguments, generator) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var g = generator.apply(thisArg, _arguments || []), i, q = [];\r\n return i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i;\r\n function verb(n) { if (g[n]) i[n] = function (v) { return new Promise(function (a, b) { q.push([n, v, a, b]) > 1 || resume(n, v); }); }; }\r\n function resume(n, v) { try { step(g[n](v)); } catch (e) { settle(q[0][3], e); } }\r\n function step(r) { r.value instanceof __await ? Promise.resolve(r.value.v).then(fulfill, reject) : settle(q[0][2], r); }\r\n function fulfill(value) { resume(\"next\", value); }\r\n function reject(value) { resume(\"throw\", value); }\r\n function settle(f, v) { if (f(v), q.shift(), q.length) resume(q[0][0], q[0][1]); }\r\n}\r\n\r\nexport function __asyncDelegator(o) {\r\n var i, p;\r\n return i = {}, verb(\"next\"), verb(\"throw\", function (e) { throw e; }), verb(\"return\"), i[Symbol.iterator] = function () { return this; }, i;\r\n function verb(n, f) { i[n] = o[n] ? function (v) { return (p = !p) ? { value: __await(o[n](v)), done: n === \"return\" } : f ? f(v) : v; } : f; }\r\n}\r\n\r\nexport function __asyncValues(o) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var m = o[Symbol.asyncIterator], i;\r\n return m ? m.call(o) : (o = typeof __values === \"function\" ? 
__values(o) : o[Symbol.iterator](), i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i);\r\n function verb(n) { i[n] = o[n] && function (v) { return new Promise(function (resolve, reject) { v = o[n](v), settle(resolve, reject, v.done, v.value); }); }; }\r\n function settle(resolve, reject, d, v) { Promise.resolve(v).then(function(v) { resolve({ value: v, done: d }); }, reject); }\r\n}\r\n\r\nexport function __makeTemplateObject(cooked, raw) {\r\n if (Object.defineProperty) { Object.defineProperty(cooked, \"raw\", { value: raw }); } else { cooked.raw = raw; }\r\n return cooked;\r\n};\r\n\r\nvar __setModuleDefault = Object.create ? (function(o, v) {\r\n Object.defineProperty(o, \"default\", { enumerable: true, value: v });\r\n}) : function(o, v) {\r\n o[\"default\"] = v;\r\n};\r\n\r\nexport function __importStar(mod) {\r\n if (mod && mod.__esModule) return mod;\r\n var result = {};\r\n if (mod != null) for (var k in mod) if (k !== \"default\" && Object.prototype.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);\r\n __setModuleDefault(result, mod);\r\n return result;\r\n}\r\n\r\nexport function __importDefault(mod) {\r\n return (mod && mod.__esModule) ? mod : { default: mod };\r\n}\r\n\r\nexport function __classPrivateFieldGet(receiver, state, kind, f) {\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a getter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot read private member from an object whose class did not declare it\");\r\n return kind === \"m\" ? f : kind === \"a\" ? f.call(receiver) : f ? f.value : state.get(receiver);\r\n}\r\n\r\nexport function __classPrivateFieldSet(receiver, state, value, kind, f) {\r\n if (kind === \"m\") throw new TypeError(\"Private method is not writable\");\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a setter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot write private member to an object whose class did not declare it\");\r\n return (kind === \"a\" ? f.call(receiver, value) : f ? f.value = value : state.set(receiver, value)), value;\r\n}\r\n", "/**\n * Returns true if the object is a function.\n * @param value The value to check\n */\nexport function isFunction(value: any): value is (...args: any[]) => any {\n return typeof value === 'function';\n}\n", "/**\n * Used to create Error subclasses until the community moves away from ES5.\n *\n * This is because compiling from TypeScript down to ES5 has issues with subclassing Errors\n * as well as other built-in types: https://github.com/Microsoft/TypeScript/issues/12123\n *\n * @param createImpl A factory function to create the actual constructor implementation. 
The returned\n * function should be a named function that calls `_super` internally.\n */\nexport function createErrorClass(createImpl: (_super: any) => any): T {\n const _super = (instance: any) => {\n Error.call(instance);\n instance.stack = new Error().stack;\n };\n\n const ctorFunc = createImpl(_super);\n ctorFunc.prototype = Object.create(Error.prototype);\n ctorFunc.prototype.constructor = ctorFunc;\n return ctorFunc;\n}\n", "import { createErrorClass } from './createErrorClass';\n\nexport interface UnsubscriptionError extends Error {\n readonly errors: any[];\n}\n\nexport interface UnsubscriptionErrorCtor {\n /**\n * @deprecated Internal implementation detail. Do not construct error instances.\n * Cannot be tagged as internal: https://github.com/ReactiveX/rxjs/issues/6269\n */\n new (errors: any[]): UnsubscriptionError;\n}\n\n/**\n * An error thrown when one or more errors have occurred during the\n * `unsubscribe` of a {@link Subscription}.\n */\nexport const UnsubscriptionError: UnsubscriptionErrorCtor = createErrorClass(\n (_super) =>\n function UnsubscriptionErrorImpl(this: any, errors: (Error | string)[]) {\n _super(this);\n this.message = errors\n ? `${errors.length} errors occurred during unsubscription:\n${errors.map((err, i) => `${i + 1}) ${err.toString()}`).join('\\n ')}`\n : '';\n this.name = 'UnsubscriptionError';\n this.errors = errors;\n }\n);\n", "/**\n * Removes an item from an array, mutating it.\n * @param arr The array to remove the item from\n * @param item The item to remove\n */\nexport function arrRemove(arr: T[] | undefined | null, item: T) {\n if (arr) {\n const index = arr.indexOf(item);\n 0 <= index && arr.splice(index, 1);\n }\n}\n", "import { isFunction } from './util/isFunction';\nimport { UnsubscriptionError } from './util/UnsubscriptionError';\nimport { SubscriptionLike, TeardownLogic, Unsubscribable } from './types';\nimport { arrRemove } from './util/arrRemove';\n\n/**\n * Represents a disposable resource, such as the execution of an Observable. A\n * Subscription has one important method, `unsubscribe`, that takes no argument\n * and just disposes the resource held by the subscription.\n *\n * Additionally, subscriptions may be grouped together through the `add()`\n * method, which will attach a child Subscription to the current Subscription.\n * When a Subscription is unsubscribed, all its children (and its grandchildren)\n * will be unsubscribed as well.\n *\n * @class Subscription\n */\nexport class Subscription implements SubscriptionLike {\n /** @nocollapse */\n public static EMPTY = (() => {\n const empty = new Subscription();\n empty.closed = true;\n return empty;\n })();\n\n /**\n * A flag to indicate whether this Subscription has already been unsubscribed.\n */\n public closed = false;\n\n private _parentage: Subscription[] | Subscription | null = null;\n\n /**\n * The list of registered finalizers to execute upon unsubscription. Adding and removing from this\n * list occurs in the {@link #add} and {@link #remove} methods.\n */\n private _finalizers: Exclude[] | null = null;\n\n /**\n * @param initialTeardown A function executed first as part of the finalization\n * process that is kicked off when {@link #unsubscribe} is called.\n */\n constructor(private initialTeardown?: () => void) {}\n\n /**\n * Disposes the resources held by the subscription. 
May, for instance, cancel\n * an ongoing Observable execution or cancel any other type of work that\n * started when the Subscription was created.\n * @return {void}\n */\n unsubscribe(): void {\n let errors: any[] | undefined;\n\n if (!this.closed) {\n this.closed = true;\n\n // Remove this from it's parents.\n const { _parentage } = this;\n if (_parentage) {\n this._parentage = null;\n if (Array.isArray(_parentage)) {\n for (const parent of _parentage) {\n parent.remove(this);\n }\n } else {\n _parentage.remove(this);\n }\n }\n\n const { initialTeardown: initialFinalizer } = this;\n if (isFunction(initialFinalizer)) {\n try {\n initialFinalizer();\n } catch (e) {\n errors = e instanceof UnsubscriptionError ? e.errors : [e];\n }\n }\n\n const { _finalizers } = this;\n if (_finalizers) {\n this._finalizers = null;\n for (const finalizer of _finalizers) {\n try {\n execFinalizer(finalizer);\n } catch (err) {\n errors = errors ?? [];\n if (err instanceof UnsubscriptionError) {\n errors = [...errors, ...err.errors];\n } else {\n errors.push(err);\n }\n }\n }\n }\n\n if (errors) {\n throw new UnsubscriptionError(errors);\n }\n }\n }\n\n /**\n * Adds a finalizer to this subscription, so that finalization will be unsubscribed/called\n * when this subscription is unsubscribed. If this subscription is already {@link #closed},\n * because it has already been unsubscribed, then whatever finalizer is passed to it\n * will automatically be executed (unless the finalizer itself is also a closed subscription).\n *\n * Closed Subscriptions cannot be added as finalizers to any subscription. Adding a closed\n * subscription to a any subscription will result in no operation. (A noop).\n *\n * Adding a subscription to itself, or adding `null` or `undefined` will not perform any\n * operation at all. (A noop).\n *\n * `Subscription` instances that are added to this instance will automatically remove themselves\n * if they are unsubscribed. Functions and {@link Unsubscribable} objects that you wish to remove\n * will need to be removed manually with {@link #remove}\n *\n * @param teardown The finalization logic to add to this subscription.\n */\n add(teardown: TeardownLogic): void {\n // Only add the finalizer if it's not undefined\n // and don't add a subscription to itself.\n if (teardown && teardown !== this) {\n if (this.closed) {\n // If this subscription is already closed,\n // execute whatever finalizer is handed to it automatically.\n execFinalizer(teardown);\n } else {\n if (teardown instanceof Subscription) {\n // We don't add closed subscriptions, and we don't add the same subscription\n // twice. Subscription unsubscribe is idempotent.\n if (teardown.closed || teardown._hasParent(this)) {\n return;\n }\n teardown._addParent(this);\n }\n (this._finalizers = this._finalizers ?? 
[]).push(teardown);\n }\n }\n }\n\n /**\n * Checks to see if a this subscription already has a particular parent.\n * This will signal that this subscription has already been added to the parent in question.\n * @param parent the parent to check for\n */\n private _hasParent(parent: Subscription) {\n const { _parentage } = this;\n return _parentage === parent || (Array.isArray(_parentage) && _parentage.includes(parent));\n }\n\n /**\n * Adds a parent to this subscription so it can be removed from the parent if it\n * unsubscribes on it's own.\n *\n * NOTE: THIS ASSUMES THAT {@link _hasParent} HAS ALREADY BEEN CHECKED.\n * @param parent The parent subscription to add\n */\n private _addParent(parent: Subscription) {\n const { _parentage } = this;\n this._parentage = Array.isArray(_parentage) ? (_parentage.push(parent), _parentage) : _parentage ? [_parentage, parent] : parent;\n }\n\n /**\n * Called on a child when it is removed via {@link #remove}.\n * @param parent The parent to remove\n */\n private _removeParent(parent: Subscription) {\n const { _parentage } = this;\n if (_parentage === parent) {\n this._parentage = null;\n } else if (Array.isArray(_parentage)) {\n arrRemove(_parentage, parent);\n }\n }\n\n /**\n * Removes a finalizer from this subscription that was previously added with the {@link #add} method.\n *\n * Note that `Subscription` instances, when unsubscribed, will automatically remove themselves\n * from every other `Subscription` they have been added to. This means that using the `remove` method\n * is not a common thing and should be used thoughtfully.\n *\n * If you add the same finalizer instance of a function or an unsubscribable object to a `Subscription` instance\n * more than once, you will need to call `remove` the same number of times to remove all instances.\n *\n * All finalizer instances are removed to free up memory upon unsubscription.\n *\n * @param teardown The finalizer to remove from this subscription\n */\n remove(teardown: Exclude): void {\n const { _finalizers } = this;\n _finalizers && arrRemove(_finalizers, teardown);\n\n if (teardown instanceof Subscription) {\n teardown._removeParent(this);\n }\n }\n}\n\nexport const EMPTY_SUBSCRIPTION = Subscription.EMPTY;\n\nexport function isSubscription(value: any): value is Subscription {\n return (\n value instanceof Subscription ||\n (value && 'closed' in value && isFunction(value.remove) && isFunction(value.add) && isFunction(value.unsubscribe))\n );\n}\n\nfunction execFinalizer(finalizer: Unsubscribable | (() => void)) {\n if (isFunction(finalizer)) {\n finalizer();\n } else {\n finalizer.unsubscribe();\n }\n}\n", "import { Subscriber } from './Subscriber';\nimport { ObservableNotification } from './types';\n\n/**\n * The {@link GlobalConfig} object for RxJS. It is used to configure things\n * like how to react on unhandled errors.\n */\nexport const config: GlobalConfig = {\n onUnhandledError: null,\n onStoppedNotification: null,\n Promise: undefined,\n useDeprecatedSynchronousErrorHandling: false,\n useDeprecatedNextContext: false,\n};\n\n/**\n * The global configuration object for RxJS, used to configure things\n * like how to react on unhandled errors. Accessible via {@link config}\n * object.\n */\nexport interface GlobalConfig {\n /**\n * A registration point for unhandled errors from RxJS. These are errors that\n * cannot were not handled by consuming code in the usual subscription path. 
For\n * example, if you have this configured, and you subscribe to an observable without\n * providing an error handler, errors from that subscription will end up here. This\n * will _always_ be called asynchronously on another job in the runtime. This is because\n * we do not want errors thrown in this user-configured handler to interfere with the\n * behavior of the library.\n */\n onUnhandledError: ((err: any) => void) | null;\n\n /**\n * A registration point for notifications that cannot be sent to subscribers because they\n * have completed, errored or have been explicitly unsubscribed. By default, next, complete\n * and error notifications sent to stopped subscribers are noops. However, sometimes callers\n * might want a different behavior. For example, with sources that attempt to report errors\n * to stopped subscribers, a caller can configure RxJS to throw an unhandled error instead.\n * This will _always_ be called asynchronously on another job in the runtime. This is because\n * we do not want errors thrown in this user-configured handler to interfere with the\n * behavior of the library.\n */\n onStoppedNotification: ((notification: ObservableNotification, subscriber: Subscriber) => void) | null;\n\n /**\n * The promise constructor used by default for {@link Observable#toPromise toPromise} and {@link Observable#forEach forEach}\n * methods.\n *\n * @deprecated As of version 8, RxJS will no longer support this sort of injection of a\n * Promise constructor. If you need a Promise implementation other than native promises,\n * please polyfill/patch Promise as you see appropriate. Will be removed in v8.\n */\n Promise?: PromiseConstructorLike;\n\n /**\n * If true, turns on synchronous error rethrowing, which is a deprecated behavior\n * in v6 and higher. This behavior enables bad patterns like wrapping a subscribe\n * call in a try/catch block. It also enables producer interference, a nasty bug\n * where a multicast can be broken for all observers by a downstream consumer with\n * an unhandled error. DO NOT USE THIS FLAG UNLESS IT'S NEEDED TO BUY TIME\n * FOR MIGRATION REASONS.\n *\n * @deprecated As of version 8, RxJS will no longer support synchronous throwing\n * of unhandled errors. All errors will be thrown on a separate call stack to prevent bad\n * behaviors described above. Will be removed in v8.\n */\n useDeprecatedSynchronousErrorHandling: boolean;\n\n /**\n * If true, enables an as-of-yet undocumented feature from v5: The ability to access\n * `unsubscribe()` via `this` context in `next` functions created in observers passed\n * to `subscribe`.\n *\n * This is being removed because the performance was severely problematic, and it could also cause\n * issues when types other than POJOs are passed to subscribe as subscribers, as they will likely have\n * their `this` context overwritten.\n *\n * @deprecated As of version 8, RxJS will no longer support altering the\n * context of next functions provided as part of an observer to Subscribe. Instead,\n * you will have access to a subscription or a signal or token that will allow you to do things like\n * unsubscribe and test closed status. 
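// A usage sketch for the `onUnhandledError` hook documented above, assuming
// the `config` object exported from the public `rxjs` package:
import { config } from 'rxjs';

// Route otherwise-unhandled subscription errors into application logging.
// As noted above, the handler is always invoked asynchronously on its own job.
config.onUnhandledError = (err) => console.warn('RxJS unhandled error:', err);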
Will be removed in v8.\n */\n useDeprecatedNextContext: boolean;\n}\n", "import type { TimerHandle } from './timerHandle';\ntype SetTimeoutFunction = (handler: () => void, timeout?: number, ...args: any[]) => TimerHandle;\ntype ClearTimeoutFunction = (handle: TimerHandle) => void;\n\ninterface TimeoutProvider {\n setTimeout: SetTimeoutFunction;\n clearTimeout: ClearTimeoutFunction;\n delegate:\n | {\n setTimeout: SetTimeoutFunction;\n clearTimeout: ClearTimeoutFunction;\n }\n | undefined;\n}\n\nexport const timeoutProvider: TimeoutProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n setTimeout(handler: () => void, timeout?: number, ...args) {\n const { delegate } = timeoutProvider;\n if (delegate?.setTimeout) {\n return delegate.setTimeout(handler, timeout, ...args);\n }\n return setTimeout(handler, timeout, ...args);\n },\n clearTimeout(handle) {\n const { delegate } = timeoutProvider;\n return (delegate?.clearTimeout || clearTimeout)(handle as any);\n },\n delegate: undefined,\n};\n", "import { config } from '../config';\nimport { timeoutProvider } from '../scheduler/timeoutProvider';\n\n/**\n * Handles an error on another job either with the user-configured {@link onUnhandledError},\n * or by throwing it on that new job so it can be picked up by `window.onerror`, `process.on('error')`, etc.\n *\n * This should be called whenever there is an error that is out-of-band with the subscription\n * or when an error hits a terminal boundary of the subscription and no error handler was provided.\n *\n * @param err the error to report\n */\nexport function reportUnhandledError(err: any) {\n timeoutProvider.setTimeout(() => {\n const { onUnhandledError } = config;\n if (onUnhandledError) {\n // Execute the user-configured error handler.\n onUnhandledError(err);\n } else {\n // Throw so it is picked up by the runtime's uncaught error mechanism.\n throw err;\n }\n });\n}\n", "/* tslint:disable:no-empty */\nexport function noop() { }\n", "import { CompleteNotification, NextNotification, ErrorNotification } from './types';\n\n/**\n * A completion object optimized for memory use and created to be the\n * same \"shape\" as other notifications in v8.\n * @internal\n */\nexport const COMPLETE_NOTIFICATION = (() => createNotification('C', undefined, undefined) as CompleteNotification)();\n\n/**\n * Internal use only. Creates an optimized error notification that is the same \"shape\"\n * as other notifications.\n * @internal\n */\nexport function errorNotification(error: any): ErrorNotification {\n return createNotification('E', undefined, error) as any;\n}\n\n/**\n * Internal use only. Creates an optimized next notification that is the same \"shape\"\n * as other notifications.\n * @internal\n */\nexport function nextNotification(value: T) {\n return createNotification('N', value, undefined) as NextNotification;\n}\n\n/**\n * Ensures that all notifications created internally have the same \"shape\" in v8.\n *\n * TODO: This is only exported to support a crazy legacy test in `groupBy`.\n * @internal\n */\nexport function createNotification(kind: 'N' | 'E' | 'C', value: any, error: any) {\n return {\n kind,\n value,\n error,\n };\n}\n", "import { config } from '../config';\n\nlet context: { errorThrown: boolean; error: any } | null = null;\n\n/**\n * Handles dealing with errors for super-gross mode. 
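// A companion sketch for the `onStoppedNotification` hook described above,
// assuming the public `config` export; it surfaces notifications that stopped
// subscribers would otherwise drop silently:
import { config } from 'rxjs';

config.onStoppedNotification = (notification, subscriber) => {
  // `kind` is 'N' (next), 'E' (error) or 'C' (complete), matching the
  // notification shapes created by the factories above.
  console.warn('Notification arrived after stop:', notification.kind);
};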
Creates a context, in which\n * any synchronously thrown errors will be passed to {@link captureError}, which\n * will record the error such that it will be rethrown after the callback is complete.\n * TODO: Remove in v8\n * @param cb An immediately executed function.\n */\nexport function errorContext(cb: () => void) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n const isRoot = !context;\n if (isRoot) {\n context = { errorThrown: false, error: null };\n }\n cb();\n if (isRoot) {\n const { errorThrown, error } = context!;\n context = null;\n if (errorThrown) {\n throw error;\n }\n }\n } else {\n // This is the general non-deprecated path for everyone that\n // isn't crazy enough to use super-gross mode (useDeprecatedSynchronousErrorHandling)\n cb();\n }\n}\n\n/**\n * Captures errors only in super-gross mode.\n * @param err the error to capture\n */\nexport function captureError(err: any) {\n if (config.useDeprecatedSynchronousErrorHandling && context) {\n context.errorThrown = true;\n context.error = err;\n }\n}\n", "import { isFunction } from './util/isFunction';\nimport { Observer, ObservableNotification } from './types';\nimport { isSubscription, Subscription } from './Subscription';\nimport { config } from './config';\nimport { reportUnhandledError } from './util/reportUnhandledError';\nimport { noop } from './util/noop';\nimport { nextNotification, errorNotification, COMPLETE_NOTIFICATION } from './NotificationFactories';\nimport { timeoutProvider } from './scheduler/timeoutProvider';\nimport { captureError } from './util/errorContext';\n\n/**\n * Implements the {@link Observer} interface and extends the\n * {@link Subscription} class. While the {@link Observer} is the public API for\n * consuming the values of an {@link Observable}, all Observers get converted to\n * a Subscriber, in order to provide Subscription-like capabilities such as\n * `unsubscribe`. Subscriber is a common type in RxJS, and crucial for\n * implementing operators, but it is rarely used as a public API.\n *\n * @class Subscriber\n */\nexport class Subscriber<T> extends Subscription implements Observer<T> {\n /**\n * A static factory for a Subscriber, given a (potentially partial) definition\n * of an Observer.\n * @param next The `next` callback of an Observer.\n * @param error The `error` callback of an\n * Observer.\n * @param complete The `complete` callback of an\n * Observer.\n * @return A Subscriber wrapping the (partially defined)\n * Observer represented by the given arguments.\n * @nocollapse\n * @deprecated Do not use. Will be removed in v8. There is no replacement for this\n * method, and there is no reason to be creating instances of `Subscriber` directly.\n * If you have a specific use case, please file an issue.\n */\n static create<T>(next?: (x?: T) => void, error?: (e?: any) => void, complete?: () => void): Subscriber<T> {\n return new SafeSubscriber(next, error, complete);\n }\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n protected isStopped: boolean = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n protected destination: Subscriber<any> | Observer<any>; // this `any` is the escape hatch to erase extra type param (e.g. R)\n\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n * There is no reason to directly create an instance of Subscriber. 
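// A behavior sketch for the default error path above: with no `error`
// callback supplied, the failure goes through `reportUnhandledError` and is
// rethrown asynchronously, so a surrounding try/catch cannot observe it.
import { throwError } from 'rxjs';

try {
  throwError(() => new Error('boom')).subscribe();
} catch {
  // Never reached: the error surfaces via window.onerror /
  // process.on('uncaughtException') on a later job instead.
}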
This type is exported for typings reasons.\n */\n constructor(destination?: Subscriber | Observer) {\n super();\n if (destination) {\n this.destination = destination;\n // Automatically chain subscriptions together here.\n // if destination is a Subscription, then it is a Subscriber.\n if (isSubscription(destination)) {\n destination.add(this);\n }\n } else {\n this.destination = EMPTY_OBSERVER;\n }\n }\n\n /**\n * The {@link Observer} callback to receive notifications of type `next` from\n * the Observable, with a value. The Observable may call this method 0 or more\n * times.\n * @param {T} [value] The `next` value.\n * @return {void}\n */\n next(value?: T): void {\n if (this.isStopped) {\n handleStoppedNotification(nextNotification(value), this);\n } else {\n this._next(value!);\n }\n }\n\n /**\n * The {@link Observer} callback to receive notifications of type `error` from\n * the Observable, with an attached `Error`. Notifies the Observer that\n * the Observable has experienced an error condition.\n * @param {any} [err] The `error` exception.\n * @return {void}\n */\n error(err?: any): void {\n if (this.isStopped) {\n handleStoppedNotification(errorNotification(err), this);\n } else {\n this.isStopped = true;\n this._error(err);\n }\n }\n\n /**\n * The {@link Observer} callback to receive a valueless notification of type\n * `complete` from the Observable. Notifies the Observer that the Observable\n * has finished sending push-based notifications.\n * @return {void}\n */\n complete(): void {\n if (this.isStopped) {\n handleStoppedNotification(COMPLETE_NOTIFICATION, this);\n } else {\n this.isStopped = true;\n this._complete();\n }\n }\n\n unsubscribe(): void {\n if (!this.closed) {\n this.isStopped = true;\n super.unsubscribe();\n this.destination = null!;\n }\n }\n\n protected _next(value: T): void {\n this.destination.next(value);\n }\n\n protected _error(err: any): void {\n try {\n this.destination.error(err);\n } finally {\n this.unsubscribe();\n }\n }\n\n protected _complete(): void {\n try {\n this.destination.complete();\n } finally {\n this.unsubscribe();\n }\n }\n}\n\n/**\n * This bind is captured here because we want to be able to have\n * compatibility with monoid libraries that tend to use a method named\n * `bind`. 
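// The `isStopped` bookkeeping above means terminal notifications win: once
// `complete` (or `error`) has fired, later `next` calls are dropped (or handed
// to `onStoppedNotification` when configured). A small illustration:
import { Subject } from 'rxjs';

const s = new Subject<number>();
s.subscribe({ next: (v) => console.log(v), complete: () => console.log('done') });

s.next(1); // logs 1
s.complete(); // logs 'done'; the subscriber is now stopped
s.next(2); // dropped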
In particular, a library called Monio requires this.\n */\nconst _bind = Function.prototype.bind;\n\nfunction bind<Fn extends (...args: any[]) => any>(fn: Fn, thisArg: any): Fn {\n return _bind.call(fn, thisArg);\n}\n\n/**\n * Internal optimization only, DO NOT EXPOSE.\n * @internal\n */\nclass ConsumerObserver<T> implements Observer<T> {\n constructor(private partialObserver: Partial<Observer<T>>) {}\n\n next(value: T): void {\n const { partialObserver } = this;\n if (partialObserver.next) {\n try {\n partialObserver.next(value);\n } catch (error) {\n handleUnhandledError(error);\n }\n }\n }\n\n error(err: any): void {\n const { partialObserver } = this;\n if (partialObserver.error) {\n try {\n partialObserver.error(err);\n } catch (error) {\n handleUnhandledError(error);\n }\n } else {\n handleUnhandledError(err);\n }\n }\n\n complete(): void {\n const { partialObserver } = this;\n if (partialObserver.complete) {\n try {\n partialObserver.complete();\n } catch (error) {\n handleUnhandledError(error);\n }\n }\n }\n}\n\nexport class SafeSubscriber<T> extends Subscriber<T> {\n constructor(\n observerOrNext?: Partial<Observer<T>> | ((value: T) => void) | null,\n error?: ((e?: any) => void) | null,\n complete?: (() => void) | null\n ) {\n super();\n\n let partialObserver: Partial<Observer<T>>;\n if (isFunction(observerOrNext) || !observerOrNext) {\n // The first argument is a function, not an observer. The next\n // two arguments *could* be observers, or they could be empty.\n partialObserver = {\n next: (observerOrNext ?? undefined) as (((value: T) => void) | undefined),\n error: error ?? undefined,\n complete: complete ?? undefined,\n };\n } else {\n // The first argument is a partial observer.\n let context: any;\n if (this && config.useDeprecatedNextContext) {\n // This is a deprecated path that made `this.unsubscribe()` available in\n // next handler functions passed to subscribe. This only exists behind a flag\n // now, as it is *very* slow.\n context = Object.create(observerOrNext);\n context.unsubscribe = () => this.unsubscribe();\n partialObserver = {\n next: observerOrNext.next && bind(observerOrNext.next, context),\n error: observerOrNext.error && bind(observerOrNext.error, context),\n complete: observerOrNext.complete && bind(observerOrNext.complete, context),\n };\n } else {\n // The \"normal\" path. 
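// As `ConsumerObserver.next` above shows, an exception thrown inside a
// consumer callback is caught and reported out-of-band rather than being
// allowed to interfere with the producer. A sketch:
import { of } from 'rxjs';

of(1, 2, 3).subscribe({
  next: (v) => {
    if (v === 2) {
      throw new Error('consumer bug'); // caught, reported as unhandled
    }
    console.log(v); // 1 and 3 are still delivered
  },
});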
Just use the partial observer directly.\n partialObserver = observerOrNext;\n }\n }\n\n // Wrap the partial observer to ensure it's a full observer, and\n // make sure proper error handling is accounted for.\n this.destination = new ConsumerObserver(partialObserver);\n }\n}\n\nfunction handleUnhandledError(error: any) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n captureError(error);\n } else {\n // Ideal path, we report this as an unhandled error,\n // which is thrown on a new call stack.\n reportUnhandledError(error);\n }\n}\n\n/**\n * An error handler used when no error handler was supplied\n * to the SafeSubscriber -- meaning no error handler was supplied\n * to the `subscribe` call on our observable.\n * @param err The error to handle\n */\nfunction defaultErrorHandler(err: any) {\n throw err;\n}\n\n/**\n * A handler for notifications that cannot be sent to a stopped subscriber.\n * @param notification The notification being sent\n * @param subscriber The stopped subscriber\n */\nfunction handleStoppedNotification(notification: ObservableNotification<any>, subscriber: Subscriber<any>) {\n const { onStoppedNotification } = config;\n onStoppedNotification && timeoutProvider.setTimeout(() => onStoppedNotification(notification, subscriber));\n}\n\n/**\n * The observer used as a stub for subscriptions where the user did not\n * pass any arguments to `subscribe`. Comes with the default error handling\n * behavior.\n */\nexport const EMPTY_OBSERVER: Readonly<Observer<any>> & { closed: true } = {\n closed: true,\n next: noop,\n error: defaultErrorHandler,\n complete: noop,\n};\n", "/**\n * Symbol.observable or a string \"@@observable\". Used for interop\n *\n * @deprecated We will no longer be exporting this symbol in upcoming versions of RxJS.\n * Instead polyfill and use Symbol.observable directly *or* use https://www.npmjs.com/package/symbol-observable\n */\nexport const observable: string | symbol = (() => (typeof Symbol === 'function' && Symbol.observable) || '@@observable')();\n", "/**\n * This function takes one parameter and just returns it. Simply put,\n * this is like `(x: T): T => x`.\n *\n * ## Examples\n *\n * This is useful in some cases when using things like `mergeMap`\n *\n * ```ts\n * import { interval, take, map, range, mergeMap, identity } from 'rxjs';\n *\n * const source$ = interval(1000).pipe(take(5));\n *\n * const result$ = source$.pipe(\n * map(i => range(i)),\n * mergeMap(identity) // same as mergeMap(x => x)\n * );\n *\n * result$.subscribe({\n * next: console.log\n * });\n * ```\n *\n * Or when you want to selectively apply an operator\n *\n * ```ts\n * import { interval, take, identity } from 'rxjs';\n *\n * const shouldLimit = () => Math.random() < 0.5;\n *\n * const source$ = interval(1000);\n *\n * const result$ = source$.pipe(shouldLimit() ? 
take(5) : identity);\n *\n * result$.subscribe({\n * next: console.log\n * });\n * ```\n *\n * @param x Any value that is returned by this function\n * @returns The value passed as the first parameter to this function\n */\nexport function identity(x: T): T {\n return x;\n}\n", "import { identity } from './identity';\nimport { UnaryFunction } from '../types';\n\nexport function pipe(): typeof identity;\nexport function pipe(fn1: UnaryFunction): UnaryFunction;\nexport function pipe(fn1: UnaryFunction, fn2: UnaryFunction): UnaryFunction;\nexport function pipe(fn1: UnaryFunction, fn2: UnaryFunction, fn3: UnaryFunction): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction,\n fn9: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction,\n fn9: UnaryFunction,\n ...fns: UnaryFunction[]\n): UnaryFunction;\n\n/**\n * pipe() can be called on one or more functions, each of which can take one argument (\"UnaryFunction\")\n * and uses it to return a value.\n * It returns a function that takes one argument, passes it to the first UnaryFunction, and then\n * passes the result to the next one, passes that result to the next one, and so on. \n */\nexport function pipe(...fns: Array>): UnaryFunction {\n return pipeFromArray(fns);\n}\n\n/** @internal */\nexport function pipeFromArray(fns: Array>): UnaryFunction {\n if (fns.length === 0) {\n return identity as UnaryFunction;\n }\n\n if (fns.length === 1) {\n return fns[0];\n }\n\n return function piped(input: T): R {\n return fns.reduce((prev: any, fn: UnaryFunction) => fn(prev), input as any);\n };\n}\n", "import { Operator } from './Operator';\nimport { SafeSubscriber, Subscriber } from './Subscriber';\nimport { isSubscription, Subscription } from './Subscription';\nimport { TeardownLogic, OperatorFunction, Subscribable, Observer } from './types';\nimport { observable as Symbol_observable } from './symbol/observable';\nimport { pipeFromArray } from './util/pipe';\nimport { config } from './config';\nimport { isFunction } from './util/isFunction';\nimport { errorContext } from './util/errorContext';\n\n/**\n * A representation of any set of values over any amount of time. This is the most basic building block\n * of RxJS.\n *\n * @class Observable\n */\nexport class Observable implements Subscribable {\n /**\n * @deprecated Internal implementation detail, do not use directly. 
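// A usage sketch for the standalone `pipe` helper defined above: it composes
// unary functions left-to-right into a single reusable operator (operator
// imports assume the public `rxjs` exports):
import { pipe, map, filter } from 'rxjs';

const evensDoubled = pipe(
  filter((n: number) => n % 2 === 0),
  map((n: number) => n * 2)
);
// `evensDoubled` is itself a unary function from Observable<number> to
// Observable<number>, usable as `source$.pipe(evensDoubled)`.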
Will be made internal in v8.\n */\n source: Observable<any> | undefined;\n\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n */\n operator: Operator<any, T> | undefined;\n\n /**\n * @constructor\n * @param {Function} subscribe the function that is called when the Observable is\n * initially subscribed to. This function is given a Subscriber, to which new values\n * can be `next`ed, or an `error` method can be called to raise an error, or\n * `complete` can be called to notify of a successful completion.\n */\n constructor(subscribe?: (this: Observable<T>, subscriber: Subscriber<T>) => TeardownLogic) {\n if (subscribe) {\n this._subscribe = subscribe;\n }\n }\n\n // HACK: Since TypeScript inherits static properties too, we have to\n // fight against TypeScript here so Subject can have a different static create signature\n /**\n * Creates a new Observable by calling the Observable constructor\n * @owner Observable\n * @method create\n * @param {Function} subscribe? the subscriber function to be passed to the Observable constructor\n * @return {Observable} a new observable\n * @nocollapse\n * @deprecated Use `new Observable()` instead. Will be removed in v8.\n */\n static create: (...args: any[]) => any = <T>(subscribe?: (subscriber: Subscriber<T>) => TeardownLogic) => {\n return new Observable<T>(subscribe);\n };\n\n /**\n * Creates a new Observable, with this Observable instance as the source, and the passed\n * operator defined as the new observable's operator.\n * @method lift\n * @param operator the operator defining the operation to take on the observable\n * @return a new observable with the Operator applied\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n * If you have implemented an operator using `lift`, it is recommended that you create an\n * operator by simply returning `new Observable()` directly. See \"Creating new operators from\n * scratch\" section here: https://rxjs.dev/guide/operators\n */\n lift<R>(operator?: Operator<T, R>): Observable<R> {\n const observable = new Observable<R>();\n observable.source = this;\n observable.operator = operator;\n return observable;\n }\n\n subscribe(observerOrNext?: Partial<Observer<T>> | ((value: T) => void)): Subscription;\n /** @deprecated Instead of passing separate callback arguments, use an observer argument. Signatures taking separate callback arguments will be removed in v8. Details: https://rxjs.dev/deprecations/subscribe-arguments */\n subscribe(next?: ((value: T) => void) | null, error?: ((error: any) => void) | null, complete?: (() => void) | null): Subscription;\n /**\n * Invokes an execution of an Observable and registers Observer handlers for notifications it will emit.\n *\n * Use it when you have all these Observables, but still nothing is happening.\n *\n * `subscribe` is not a regular operator, but a method that calls Observable's internal `subscribe` function. It\n * might be for example a function that you passed to Observable's constructor, but most of the time it is\n * a library implementation, which defines what will be emitted by an Observable, and when it will be emitted. This means\n * that calling `subscribe` is actually the moment when Observable starts its work, not when it is created, as is often\n * thought.\n *\n * Apart from starting the execution of an Observable, this method allows you to listen for values\n * that an Observable emits, as well as for when it completes or errors. 
You can achieve this in either of\n * the following two ways.\n *\n * The first way is creating an object that implements the {@link Observer} interface. It should have methods\n * defined by that interface, but note that it should be just a regular JavaScript object, which you can create\n * yourself in any way you want (ES6 class, classic function constructor, object literal etc.). In particular, do\n * not attempt to use any RxJS implementation details to create Observers - you don't need them. Remember also\n * that your object does not have to implement all methods. If you find yourself creating a method that doesn't\n * do anything, you can simply omit it. Note however, if the `error` method is not provided and an error happens,\n * it will be thrown asynchronously. Errors thrown asynchronously cannot be caught using `try`/`catch`. Instead,\n * use the {@link onUnhandledError} configuration option or use a runtime handler (like `window.onerror` or\n * `process.on('error')`) to be notified of unhandled errors. Because of this, it's recommended that you provide\n * an `error` method to avoid missing thrown errors.\n *\n * The second way is to give up on the Observer object altogether and simply provide callback functions in place of its methods.\n * This means you can provide three functions as arguments to `subscribe`, where the first function is the equivalent\n * of a `next` method, the second of an `error` method and the third of a `complete` method. Just as in the case of an Observer,\n * if you do not need to listen for something, you can omit a function by passing `undefined` or `null`,\n * since `subscribe` recognizes these functions by where they were placed in the function call. When it comes\n * to the `error` function, as with an Observer, if not provided, errors emitted by an Observable will be thrown asynchronously.\n *\n * You can, however, subscribe with no parameters at all. This may be the case where you're not interested in terminal events\n * and you also handled emissions internally by using operators (e.g. using `tap`).\n *\n * Whichever style of calling `subscribe` you use, in either case it returns a Subscription object.\n * This object allows you to call `unsubscribe` on it, which in turn will stop the work that an Observable does and will clean\n * up all resources that an Observable used. Note that cancelling a subscription will not call the `complete` callback\n * provided to the `subscribe` function, which is reserved for a regular completion signal that comes from an Observable.\n *\n * Remember that callbacks provided to `subscribe` are not guaranteed to be called asynchronously.\n * It is an Observable itself that decides when these functions will be called. For example {@link of}\n * by default emits all its values synchronously. 
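// Illustrating the synchronous delivery just mentioned for {@link of}:
import { of } from 'rxjs';

console.log('before subscribe');
of('value').subscribe((v) => console.log(v));
console.log('after subscribe');
// Logs 'before subscribe', then 'value', then 'after subscribe',
// all on the same tick.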
Always check documentation for how given Observable\n * will behave when subscribed and if its default behavior can be modified with a `scheduler`.\n *\n * #### Examples\n *\n * Subscribe with an {@link guide/observer Observer}\n *\n * ```ts\n * import { of } from 'rxjs';\n *\n * const sumObserver = {\n * sum: 0,\n * next(value) {\n * console.log('Adding: ' + value);\n * this.sum = this.sum + value;\n * },\n * error() {\n * // We actually could just remove this method,\n * // since we do not really care about errors right now.\n * },\n * complete() {\n * console.log('Sum equals: ' + this.sum);\n * }\n * };\n *\n * of(1, 2, 3) // Synchronously emits 1, 2, 3 and then completes.\n * .subscribe(sumObserver);\n *\n * // Logs:\n * // 'Adding: 1'\n * // 'Adding: 2'\n * // 'Adding: 3'\n * // 'Sum equals: 6'\n * ```\n *\n * Subscribe with functions ({@link deprecations/subscribe-arguments deprecated})\n *\n * ```ts\n * import { of } from 'rxjs'\n *\n * let sum = 0;\n *\n * of(1, 2, 3).subscribe(\n * value => {\n * console.log('Adding: ' + value);\n * sum = sum + value;\n * },\n * undefined,\n * () => console.log('Sum equals: ' + sum)\n * );\n *\n * // Logs:\n * // 'Adding: 1'\n * // 'Adding: 2'\n * // 'Adding: 3'\n * // 'Sum equals: 6'\n * ```\n *\n * Cancel a subscription\n *\n * ```ts\n * import { interval } from 'rxjs';\n *\n * const subscription = interval(1000).subscribe({\n * next(num) {\n * console.log(num)\n * },\n * complete() {\n * // Will not be called, even when cancelling subscription.\n * console.log('completed!');\n * }\n * });\n *\n * setTimeout(() => {\n * subscription.unsubscribe();\n * console.log('unsubscribed!');\n * }, 2500);\n *\n * // Logs:\n * // 0 after 1s\n * // 1 after 2s\n * // 'unsubscribed!' after 2.5s\n * ```\n *\n * @param {Observer|Function} observerOrNext (optional) Either an observer with methods to be called,\n * or the first of three possible handlers, which is the handler for each value emitted from the subscribed\n * Observable.\n * @param {Function} error (optional) A handler for a terminal event resulting from an error. If no error handler is provided,\n * the error will be thrown asynchronously as unhandled.\n * @param {Function} complete (optional) A handler for a terminal event resulting from successful completion.\n * @return {Subscription} a subscription reference to the registered handlers\n * @method subscribe\n */\n subscribe(\n observerOrNext?: Partial> | ((value: T) => void) | null,\n error?: ((error: any) => void) | null,\n complete?: (() => void) | null\n ): Subscription {\n const subscriber = isSubscriber(observerOrNext) ? observerOrNext : new SafeSubscriber(observerOrNext, error, complete);\n\n errorContext(() => {\n const { operator, source } = this;\n subscriber.add(\n operator\n ? // We're dealing with a subscription in the\n // operator chain to one of our lifted operators.\n operator.call(subscriber, source)\n : source\n ? // If `source` has a value, but `operator` does not, something that\n // had intimate knowledge of our API, like our `Subject`, must have\n // set it. 
We're going to just call `_subscribe` directly.\n this._subscribe(subscriber)\n : // In all other cases, we're likely wrapping a user-provided initializer\n // function, so we need to catch errors and handle them appropriately.\n this._trySubscribe(subscriber)\n );\n });\n\n return subscriber;\n }\n\n /** @internal */\n protected _trySubscribe(sink: Subscriber): TeardownLogic {\n try {\n return this._subscribe(sink);\n } catch (err) {\n // We don't need to return anything in this case,\n // because it's just going to try to `add()` to a subscription\n // above.\n sink.error(err);\n }\n }\n\n /**\n * Used as a NON-CANCELLABLE means of subscribing to an observable, for use with\n * APIs that expect promises, like `async/await`. You cannot unsubscribe from this.\n *\n * **WARNING**: Only use this with observables you *know* will complete. If the source\n * observable does not complete, you will end up with a promise that is hung up, and\n * potentially all of the state of an async function hanging out in memory. To avoid\n * this situation, look into adding something like {@link timeout}, {@link take},\n * {@link takeWhile}, or {@link takeUntil} amongst others.\n *\n * #### Example\n *\n * ```ts\n * import { interval, take } from 'rxjs';\n *\n * const source$ = interval(1000).pipe(take(4));\n *\n * async function getTotal() {\n * let total = 0;\n *\n * await source$.forEach(value => {\n * total += value;\n * console.log('observable -> ' + value);\n * });\n *\n * return total;\n * }\n *\n * getTotal().then(\n * total => console.log('Total: ' + total)\n * );\n *\n * // Expected:\n * // 'observable -> 0'\n * // 'observable -> 1'\n * // 'observable -> 2'\n * // 'observable -> 3'\n * // 'Total: 6'\n * ```\n *\n * @param next a handler for each value emitted by the observable\n * @return a promise that either resolves on observable completion or\n * rejects with the handled error\n */\n forEach(next: (value: T) => void): Promise;\n\n /**\n * @param next a handler for each value emitted by the observable\n * @param promiseCtor a constructor function used to instantiate the Promise\n * @return a promise that either resolves on observable completion or\n * rejects with the handled error\n * @deprecated Passing a Promise constructor will no longer be available\n * in upcoming versions of RxJS. This is because it adds weight to the library, for very\n * little benefit. If you need this functionality, it is recommended that you either\n * polyfill Promise, or you create an adapter to convert the returned native promise\n * to whatever promise implementation you wanted. 
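// As the WARNING above advises, bound the source before calling `forEach` so
// the returned promise can settle. A sketch with the public `rxjs` exports:
import { interval, take } from 'rxjs';

async function sumFirstThree(): Promise<number> {
  let total = 0;
  await interval(10).pipe(take(3)).forEach((n) => {
    total += n;
  });
  return total; // 0 + 1 + 2 = 3
}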
Will be removed in v8.\n */\n forEach(next: (value: T) => void, promiseCtor: PromiseConstructorLike): Promise;\n\n forEach(next: (value: T) => void, promiseCtor?: PromiseConstructorLike): Promise {\n promiseCtor = getPromiseCtor(promiseCtor);\n\n return new promiseCtor((resolve, reject) => {\n const subscriber = new SafeSubscriber({\n next: (value) => {\n try {\n next(value);\n } catch (err) {\n reject(err);\n subscriber.unsubscribe();\n }\n },\n error: reject,\n complete: resolve,\n });\n this.subscribe(subscriber);\n }) as Promise;\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): TeardownLogic {\n return this.source?.subscribe(subscriber);\n }\n\n /**\n * An interop point defined by the es7-observable spec https://github.com/zenparsing/es-observable\n * @method Symbol.observable\n * @return {Observable} this instance of the observable\n */\n [Symbol_observable]() {\n return this;\n }\n\n /* tslint:disable:max-line-length */\n pipe(): Observable;\n pipe(op1: OperatorFunction): Observable;\n pipe(op1: OperatorFunction, op2: OperatorFunction): Observable;\n pipe(op1: OperatorFunction, op2: OperatorFunction, op3: OperatorFunction): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction,\n op9: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction,\n op9: OperatorFunction,\n ...operations: OperatorFunction[]\n ): Observable;\n /* tslint:enable:max-line-length */\n\n /**\n * Used to stitch together functional operators into a chain.\n * @method pipe\n * @return {Observable} the Observable result of all of the operators having\n * been called in the order they were passed in.\n *\n * ## Example\n *\n * ```ts\n * import { interval, filter, map, scan } from 'rxjs';\n *\n * interval(1000)\n * .pipe(\n * filter(x => x % 2 === 0),\n * map(x => x + x),\n * scan((acc, x) => acc + x)\n * )\n * .subscribe(x => console.log(x));\n * ```\n */\n pipe(...operations: OperatorFunction[]): Observable {\n return pipeFromArray(operations)(this);\n }\n\n /* tslint:disable:max-line-length */\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(): Promise;\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. 
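// The replacements named in the deprecation notice above, sketched with the
// public v7 exports:
import { firstValueFrom, lastValueFrom, of } from 'rxjs';

async function demo() {
  console.log(await firstValueFrom(of(1, 2, 3))); // 1
  console.log(await lastValueFrom(of(1, 2, 3))); // 3
}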
Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(PromiseCtor: typeof Promise): Promise<T | undefined>;\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(PromiseCtor: PromiseConstructorLike): Promise<T | undefined>;\n /* tslint:enable:max-line-length */\n\n /**\n * Subscribe to this Observable and get a Promise resolving on\n * `complete` with the last emission (if any).\n *\n * **WARNING**: Only use this with observables you *know* will complete. If the source\n * observable does not complete, you will end up with a promise that is hung up, and\n * potentially all of the state of an async function hanging out in memory. To avoid\n * this situation, look into adding something like {@link timeout}, {@link take},\n * {@link takeWhile}, or {@link takeUntil} amongst others.\n *\n * @method toPromise\n * @param [promiseCtor] a constructor function used to instantiate\n * the Promise\n * @return A Promise that resolves with the last value emitted, or\n * rejects on an error. If there were no emissions, Promise\n * resolves with undefined.\n * @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise\n */\n toPromise(promiseCtor?: PromiseConstructorLike): Promise<T | undefined> {\n promiseCtor = getPromiseCtor(promiseCtor);\n\n return new promiseCtor((resolve, reject) => {\n let value: T | undefined;\n this.subscribe(\n (x: T) => (value = x),\n (err: any) => reject(err),\n () => resolve(value)\n );\n }) as Promise<T | undefined>;\n }\n}\n\n/**\n * Decides between a promise constructor passed by consuming code,\n * a default configured promise constructor, and the native promise\n * constructor and returns it. If nothing can be found, it will throw\n * an error.\n * @param promiseCtor The optional promise constructor passed by consuming code\n */\nfunction getPromiseCtor(promiseCtor: PromiseConstructorLike | undefined) {\n return promiseCtor ?? config.Promise ?? Promise;\n}\n\nfunction isObserver<T>(value: any): value is Observer<T> {\n return value && isFunction(value.next) && isFunction(value.error) && isFunction(value.complete);\n}\n\nfunction isSubscriber<T>(value: any): value is Subscriber<T> {\n return (value && value instanceof Subscriber) || (isObserver(value) && isSubscription(value));\n}\n", "import { Observable } from '../Observable';\nimport { Subscriber } from '../Subscriber';\nimport { OperatorFunction } from '../types';\nimport { isFunction } from './isFunction';\n\n/**\n * Used to determine if an object is an Observable with a lift function.\n */\nexport function hasLift(source: any): source is { lift: InstanceType<typeof Observable>['lift'] } {\n return isFunction(source?.lift);\n}\n\n/**\n * Creates an `OperatorFunction`. 
Used to define operators throughout the library in a concise way.\n * @param init The logic to connect the liftedSource to the subscriber at the moment of subscription.\n */\nexport function operate<T, R>(\n init: (liftedSource: Observable<T>, subscriber: Subscriber<R>) => (() => void) | void\n): OperatorFunction<T, R> {\n return (source: Observable<T>) => {\n if (hasLift(source)) {\n return source.lift(function (this: Subscriber<R>, liftedSource: Observable<T>) {\n try {\n return init(liftedSource, this);\n } catch (err) {\n this.error(err);\n }\n });\n }\n throw new TypeError('Unable to lift unknown Observable type');\n };\n}\n", "import { Subscriber } from '../Subscriber';\n\n/**\n * Creates an instance of an `OperatorSubscriber`.\n * @param destination The downstream subscriber.\n * @param onNext Handles next values, only called if this subscriber is not stopped or closed. Any\n * error that occurs in this function is caught and sent to the `error` method of this subscriber.\n * @param onError Handles errors from the subscription, any errors that occur in this handler are caught\n * and sent to the `destination` error handler.\n * @param onComplete Handles completion notification from the subscription. Any errors that occur in\n * this handler are sent to the `destination` error handler.\n * @param onFinalize Additional teardown logic here. This will only be called on teardown if the\n * subscriber itself is not already closed. This is called after all other teardown logic is executed.\n */\nexport function createOperatorSubscriber<T>(\n destination: Subscriber<any>,\n onNext?: (value: T) => void,\n onComplete?: () => void,\n onError?: (err: any) => void,\n onFinalize?: () => void\n): Subscriber<T> {\n return new OperatorSubscriber(destination, onNext, onComplete, onError, onFinalize);\n}\n\n/**\n * A generic helper for allowing operators to be created with a Subscriber and\n * use closures to capture necessary state from the operator function itself.\n */\nexport class OperatorSubscriber<T> extends Subscriber<T> {\n /**\n * Creates an instance of an `OperatorSubscriber`.\n * @param destination The downstream subscriber.\n * @param onNext Handles next values, only called if this subscriber is not stopped or closed. Any\n * error that occurs in this function is caught and sent to the `error` method of this subscriber.\n * @param onError Handles errors from the subscription, any errors that occur in this handler are caught\n * and sent to the `destination` error handler.\n * @param onComplete Handles completion notification from the subscription. Any errors that occur in\n * this handler are sent to the `destination` error handler.\n * @param onFinalize Additional finalization logic here. This will only be called on finalization if the\n * subscriber itself is not already closed. This is called after all other finalization logic is executed.\n * @param shouldUnsubscribe An optional check to see if an unsubscribe call should truly unsubscribe.\n * NOTE: This currently **ONLY** exists to support the strange behavior of {@link groupBy}, where unsubscription\n * from the resulting observable does not actually disconnect from the source if there are active subscriptions\n * to any grouped observable. 
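// `operate` and `createOperatorSubscriber` above are internal helpers;
// application code can produce the same shape with `new Observable`, as the
// `lift` deprecation note recommends. A hand-rolled `double` operator sketch:
import { Observable, OperatorFunction } from 'rxjs';

function double(): OperatorFunction<number, number> {
  return (source) =>
    new Observable<number>((subscriber) =>
      source.subscribe({
        next: (v) => subscriber.next(v * 2),
        error: (err) => subscriber.error(err),
        complete: () => subscriber.complete(),
      })
    );
}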
(DO NOT EXPOSE OR USE EXTERNALLY!!!)\n */\n constructor(\n destination: Subscriber,\n onNext?: (value: T) => void,\n onComplete?: () => void,\n onError?: (err: any) => void,\n private onFinalize?: () => void,\n private shouldUnsubscribe?: () => boolean\n ) {\n // It's important - for performance reasons - that all of this class's\n // members are initialized and that they are always initialized in the same\n // order. This will ensure that all OperatorSubscriber instances have the\n // same hidden class in V8. This, in turn, will help keep the number of\n // hidden classes involved in property accesses within the base class as\n // low as possible. If the number of hidden classes involved exceeds four,\n // the property accesses will become megamorphic and performance penalties\n // will be incurred - i.e. inline caches won't be used.\n //\n // The reasons for ensuring all instances have the same hidden class are\n // further discussed in this blog post from Benedikt Meurer:\n // https://benediktmeurer.de/2018/03/23/impact-of-polymorphism-on-component-based-frameworks-like-react/\n super(destination);\n this._next = onNext\n ? function (this: OperatorSubscriber, value: T) {\n try {\n onNext(value);\n } catch (err) {\n destination.error(err);\n }\n }\n : super._next;\n this._error = onError\n ? function (this: OperatorSubscriber, err: any) {\n try {\n onError(err);\n } catch (err) {\n // Send any errors that occur down stream.\n destination.error(err);\n } finally {\n // Ensure finalization.\n this.unsubscribe();\n }\n }\n : super._error;\n this._complete = onComplete\n ? function (this: OperatorSubscriber) {\n try {\n onComplete();\n } catch (err) {\n // Send any errors that occur down stream.\n destination.error(err);\n } finally {\n // Ensure finalization.\n this.unsubscribe();\n }\n }\n : super._complete;\n }\n\n unsubscribe() {\n if (!this.shouldUnsubscribe || this.shouldUnsubscribe()) {\n const { closed } = this;\n super.unsubscribe();\n // Execute additional teardown if we have any and we didn't already do so.\n !closed && this.onFinalize?.();\n }\n }\n}\n", "import { Subscription } from '../Subscription';\n\ninterface AnimationFrameProvider {\n schedule(callback: FrameRequestCallback): Subscription;\n requestAnimationFrame: typeof requestAnimationFrame;\n cancelAnimationFrame: typeof cancelAnimationFrame;\n delegate:\n | {\n requestAnimationFrame: typeof requestAnimationFrame;\n cancelAnimationFrame: typeof cancelAnimationFrame;\n }\n | undefined;\n}\n\nexport const animationFrameProvider: AnimationFrameProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n schedule(callback) {\n let request = requestAnimationFrame;\n let cancel: typeof cancelAnimationFrame | undefined = cancelAnimationFrame;\n const { delegate } = animationFrameProvider;\n if (delegate) {\n request = delegate.requestAnimationFrame;\n cancel = delegate.cancelAnimationFrame;\n }\n const handle = request((timestamp) => {\n // Clear the cancel function. 
The request has been fulfilled, so\n // attempting to cancel the request upon unsubscription would be\n // pointless.\n cancel = undefined;\n callback(timestamp);\n });\n return new Subscription(() => cancel?.(handle));\n },\n requestAnimationFrame(...args) {\n const { delegate } = animationFrameProvider;\n return (delegate?.requestAnimationFrame || requestAnimationFrame)(...args);\n },\n cancelAnimationFrame(...args) {\n const { delegate } = animationFrameProvider;\n return (delegate?.cancelAnimationFrame || cancelAnimationFrame)(...args);\n },\n delegate: undefined,\n};\n", "import { createErrorClass } from './createErrorClass';\n\nexport interface ObjectUnsubscribedError extends Error {}\n\nexport interface ObjectUnsubscribedErrorCtor {\n /**\n * @deprecated Internal implementation detail. Do not construct error instances.\n * Cannot be tagged as internal: https://github.com/ReactiveX/rxjs/issues/6269\n */\n new (): ObjectUnsubscribedError;\n}\n\n/**\n * An error thrown when an action is invalid because the object has been\n * unsubscribed.\n *\n * @see {@link Subject}\n * @see {@link BehaviorSubject}\n *\n * @class ObjectUnsubscribedError\n */\nexport const ObjectUnsubscribedError: ObjectUnsubscribedErrorCtor = createErrorClass(\n (_super) =>\n function ObjectUnsubscribedErrorImpl(this: any) {\n _super(this);\n this.name = 'ObjectUnsubscribedError';\n this.message = 'object unsubscribed';\n }\n);\n", "import { Operator } from './Operator';\nimport { Observable } from './Observable';\nimport { Subscriber } from './Subscriber';\nimport { Subscription, EMPTY_SUBSCRIPTION } from './Subscription';\nimport { Observer, SubscriptionLike, TeardownLogic } from './types';\nimport { ObjectUnsubscribedError } from './util/ObjectUnsubscribedError';\nimport { arrRemove } from './util/arrRemove';\nimport { errorContext } from './util/errorContext';\n\n/**\n * A Subject is a special type of Observable that allows values to be\n * multicasted to many Observers. Subjects are like EventEmitters.\n *\n * Every Subject is an Observable and an Observer. You can subscribe to a\n * Subject, and you can call next to feed values as well as error and complete.\n */\nexport class Subject extends Observable implements SubscriptionLike {\n closed = false;\n\n private currentObservers: Observer[] | null = null;\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n observers: Observer[] = [];\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n isStopped = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n hasError = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n thrownError: any = null;\n\n /**\n * Creates a \"subject\" by basically gluing an observer to an observable.\n *\n * @nocollapse\n * @deprecated Recommended you do not use. Will be removed at some point in the future. Plans for replacement still under discussion.\n */\n static create: (...args: any[]) => any = (destination: Observer, source: Observable): AnonymousSubject => {\n return new AnonymousSubject(destination, source);\n };\n\n constructor() {\n // NOTE: This must be here to obscure Observable's constructor.\n super();\n }\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. 
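// The multicast behavior described in the Subject documentation above:
import { Subject } from 'rxjs';

const subject = new Subject<number>();

subject.subscribe((v) => console.log('A', v));
subject.subscribe((v) => console.log('B', v));

subject.next(42); // logs 'A 42' and then 'B 42': one value, many observers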
*/\n lift(operator: Operator): Observable {\n const subject = new AnonymousSubject(this, this);\n subject.operator = operator as any;\n return subject as any;\n }\n\n /** @internal */\n protected _throwIfClosed() {\n if (this.closed) {\n throw new ObjectUnsubscribedError();\n }\n }\n\n next(value: T) {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n if (!this.currentObservers) {\n this.currentObservers = Array.from(this.observers);\n }\n for (const observer of this.currentObservers) {\n observer.next(value);\n }\n }\n });\n }\n\n error(err: any) {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n this.hasError = this.isStopped = true;\n this.thrownError = err;\n const { observers } = this;\n while (observers.length) {\n observers.shift()!.error(err);\n }\n }\n });\n }\n\n complete() {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n this.isStopped = true;\n const { observers } = this;\n while (observers.length) {\n observers.shift()!.complete();\n }\n }\n });\n }\n\n unsubscribe() {\n this.isStopped = this.closed = true;\n this.observers = this.currentObservers = null!;\n }\n\n get observed() {\n return this.observers?.length > 0;\n }\n\n /** @internal */\n protected _trySubscribe(subscriber: Subscriber): TeardownLogic {\n this._throwIfClosed();\n return super._trySubscribe(subscriber);\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n this._throwIfClosed();\n this._checkFinalizedStatuses(subscriber);\n return this._innerSubscribe(subscriber);\n }\n\n /** @internal */\n protected _innerSubscribe(subscriber: Subscriber) {\n const { hasError, isStopped, observers } = this;\n if (hasError || isStopped) {\n return EMPTY_SUBSCRIPTION;\n }\n this.currentObservers = null;\n observers.push(subscriber);\n return new Subscription(() => {\n this.currentObservers = null;\n arrRemove(observers, subscriber);\n });\n }\n\n /** @internal */\n protected _checkFinalizedStatuses(subscriber: Subscriber) {\n const { hasError, thrownError, isStopped } = this;\n if (hasError) {\n subscriber.error(thrownError);\n } else if (isStopped) {\n subscriber.complete();\n }\n }\n\n /**\n * Creates a new Observable with this Subject as the source. You can do this\n * to create custom Observer-side logic of the Subject and conceal it from\n * code that uses the Observable.\n * @return {Observable} Observable that the Subject casts to\n */\n asObservable(): Observable {\n const observable: any = new Observable();\n observable.source = this;\n return observable;\n }\n}\n\n/**\n * @class AnonymousSubject\n */\nexport class AnonymousSubject extends Subject {\n constructor(\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n public destination?: Observer,\n source?: Observable\n ) {\n super();\n this.source = source;\n }\n\n next(value: T) {\n this.destination?.next?.(value);\n }\n\n error(err: any) {\n this.destination?.error?.(err);\n }\n\n complete() {\n this.destination?.complete?.();\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n return this.source?.subscribe(subscriber) ?? 
EMPTY_SUBSCRIPTION;\n }\n}\n", "import { TimestampProvider } from '../types';\n\ninterface DateTimestampProvider extends TimestampProvider {\n delegate: TimestampProvider | undefined;\n}\n\nexport const dateTimestampProvider: DateTimestampProvider = {\n now() {\n // Use the variable rather than `this` so that the function can be called\n // without being bound to the provider.\n return (dateTimestampProvider.delegate || Date).now();\n },\n delegate: undefined,\n};\n", "import { Subject } from './Subject';\nimport { TimestampProvider } from './types';\nimport { Subscriber } from './Subscriber';\nimport { Subscription } from './Subscription';\nimport { dateTimestampProvider } from './scheduler/dateTimestampProvider';\n\n/**\n * A variant of {@link Subject} that \"replays\" old values to new subscribers by emitting them when they first subscribe.\n *\n * `ReplaySubject` has an internal buffer that will store a specified number of values that it has observed. Like `Subject`,\n * `ReplaySubject` \"observes\" values by having them passed to its `next` method. When it observes a value, it will store that\n * value for a time determined by the configuration of the `ReplaySubject`, as passed to its constructor.\n *\n * When a new subscriber subscribes to the `ReplaySubject` instance, it will synchronously emit all values in its buffer in\n * a First-In-First-Out (FIFO) manner. The `ReplaySubject` will also complete, if it has observed completion; and it will\n * error if it has observed an error.\n *\n * There are two main configuration items to be concerned with:\n *\n * 1. `bufferSize` - This will determine how many items are stored in the buffer, defaults to infinite.\n * 2. `windowTime` - The amount of time to hold a value in the buffer before removing it from the buffer.\n *\n * Both configurations may exist simultaneously. So if you would like to buffer a maximum of 3 values, as long as the values\n * are less than 2 seconds old, you could do so with a `new ReplaySubject(3, 2000)`.\n *\n * ### Differences with BehaviorSubject\n *\n * `BehaviorSubject` is similar to `new ReplaySubject(1)`, with a couple of exceptions:\n *\n * 1. `BehaviorSubject` comes \"primed\" with a single value upon construction.\n * 2. `ReplaySubject` will replay values, even after observing an error, where `BehaviorSubject` will not.\n *\n * @see {@link Subject}\n * @see {@link BehaviorSubject}\n * @see {@link shareReplay}\n */\nexport class ReplaySubject extends Subject {\n private _buffer: (T | number)[] = [];\n private _infiniteTimeWindow = true;\n\n /**\n * @param bufferSize The size of the buffer to replay on subscription\n * @param windowTime The amount of time the buffered items will stay buffered\n * @param timestampProvider An object with a `now()` method that provides the current timestamp. 
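// A sketch of the buffering rules described above: at most three values,
// replayed FIFO to late subscribers (the time window is left infinite here):
import { ReplaySubject } from 'rxjs';

const replay$ = new ReplaySubject<number>(3);
replay$.next(1);
replay$.next(2);
replay$.next(3);
replay$.next(4); // 1 falls out of the buffer

replay$.subscribe((v) => console.log(v)); // synchronously logs 2, 3, 4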
This is used to\n * calculate the amount of time something has been buffered.\n */\n constructor(\n private _bufferSize = Infinity,\n private _windowTime = Infinity,\n private _timestampProvider: TimestampProvider = dateTimestampProvider\n ) {\n super();\n this._infiniteTimeWindow = _windowTime === Infinity;\n this._bufferSize = Math.max(1, _bufferSize);\n this._windowTime = Math.max(1, _windowTime);\n }\n\n next(value: T): void {\n const { isStopped, _buffer, _infiniteTimeWindow, _timestampProvider, _windowTime } = this;\n if (!isStopped) {\n _buffer.push(value);\n !_infiniteTimeWindow && _buffer.push(_timestampProvider.now() + _windowTime);\n }\n this._trimBuffer();\n super.next(value);\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n this._throwIfClosed();\n this._trimBuffer();\n\n const subscription = this._innerSubscribe(subscriber);\n\n const { _infiniteTimeWindow, _buffer } = this;\n // We use a copy here, so reentrant code does not mutate our array while we're\n // emitting it to a new subscriber.\n const copy = _buffer.slice();\n for (let i = 0; i < copy.length && !subscriber.closed; i += _infiniteTimeWindow ? 1 : 2) {\n subscriber.next(copy[i] as T);\n }\n\n this._checkFinalizedStatuses(subscriber);\n\n return subscription;\n }\n\n private _trimBuffer() {\n const { _bufferSize, _timestampProvider, _buffer, _infiniteTimeWindow } = this;\n // If we don't have an infinite buffer size, and we're over the length,\n // use splice to truncate the old buffer values off. Note that we have to\n // double the size for instances where we're not using an infinite time window\n // because we're storing the values and the timestamps in the same array.\n const adjustedBufferSize = (_infiniteTimeWindow ? 1 : 2) * _bufferSize;\n _bufferSize < Infinity && adjustedBufferSize < _buffer.length && _buffer.splice(0, _buffer.length - adjustedBufferSize);\n\n // Now, if we're not in an infinite time window, remove all values where the time is\n // older than what is allowed.\n if (!_infiniteTimeWindow) {\n const now = _timestampProvider.now();\n let last = 0;\n // Search the array for the first timestamp that isn't expired and\n // truncate the buffer up to that point.\n for (let i = 1; i < _buffer.length && (_buffer[i] as number) <= now; i += 2) {\n last = i;\n }\n last && _buffer.splice(0, last + 1);\n }\n }\n}\n", "import { Scheduler } from '../Scheduler';\nimport { Subscription } from '../Subscription';\nimport { SchedulerAction } from '../types';\n\n/**\n * A unit of work to be executed in a `scheduler`. An action is typically\n * created from within a {@link SchedulerLike} and an RxJS user does not need to concern\n * themselves about creating and manipulating an Action.\n *\n * ```ts\n * class Action extends Subscription {\n * new (scheduler: Scheduler, work: (state?: T) => void);\n * schedule(state?: T, delay: number = 0): Subscription;\n * }\n * ```\n *\n * @class Action\n */\nexport class Action extends Subscription {\n constructor(scheduler: Scheduler, work: (this: SchedulerAction, state?: T) => void) {\n super();\n }\n /**\n * Schedules this action on its parent {@link SchedulerLike} for execution. May be passed\n * some context object, `state`. 
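// A usage sketch for `schedule` as documented above, assuming the public
// `asyncScheduler`: `state` is handed to `work`, and the action may
// reschedule itself from inside the callback.
import { asyncScheduler } from 'rxjs';

asyncScheduler.schedule(
  function (count) {
    console.log(count);
    if (count! < 3) {
      this.schedule(count! + 1, 100); // reuses (recycles) this action
    }
  },
  100, // initial delay in ms
  0 // initial state
);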
May happen at some point in the future,\n * according to the `delay` parameter, if specified.\n * @param {T} [state] Some contextual data that the `work` function uses when\n * called by the Scheduler.\n * @param {number} [delay] Time to wait before executing the work, where the\n * time unit is implicit and defined by the Scheduler.\n * @return {void}\n */\n public schedule(state?: T, delay: number = 0): Subscription {\n return this;\n }\n}\n", "import type { TimerHandle } from './timerHandle';\ntype SetIntervalFunction = (handler: () => void, timeout?: number, ...args: any[]) => TimerHandle;\ntype ClearIntervalFunction = (handle: TimerHandle) => void;\n\ninterface IntervalProvider {\n setInterval: SetIntervalFunction;\n clearInterval: ClearIntervalFunction;\n delegate:\n | {\n setInterval: SetIntervalFunction;\n clearInterval: ClearIntervalFunction;\n }\n | undefined;\n}\n\nexport const intervalProvider: IntervalProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n setInterval(handler: () => void, timeout?: number, ...args) {\n const { delegate } = intervalProvider;\n if (delegate?.setInterval) {\n return delegate.setInterval(handler, timeout, ...args);\n }\n return setInterval(handler, timeout, ...args);\n },\n clearInterval(handle) {\n const { delegate } = intervalProvider;\n return (delegate?.clearInterval || clearInterval)(handle as any);\n },\n delegate: undefined,\n};\n", "import { Action } from './Action';\nimport { SchedulerAction } from '../types';\nimport { Subscription } from '../Subscription';\nimport { AsyncScheduler } from './AsyncScheduler';\nimport { intervalProvider } from './intervalProvider';\nimport { arrRemove } from '../util/arrRemove';\nimport { TimerHandle } from './timerHandle';\n\nexport class AsyncAction extends Action {\n public id: TimerHandle | undefined;\n public state?: T;\n // @ts-ignore: Property has no initializer and is not definitely assigned\n public delay: number;\n protected pending: boolean = false;\n\n constructor(protected scheduler: AsyncScheduler, protected work: (this: SchedulerAction, state?: T) => void) {\n super(scheduler, work);\n }\n\n public schedule(state?: T, delay: number = 0): Subscription {\n if (this.closed) {\n return this;\n }\n\n // Always replace the current state with the new state.\n this.state = state;\n\n const id = this.id;\n const scheduler = this.scheduler;\n\n //\n // Important implementation note:\n //\n // Actions only execute once by default, unless rescheduled from within the\n // scheduled callback. This allows us to implement single and repeat\n // actions via the same code path, without adding API surface area, as well\n // as mimic traditional recursion but across asynchronous boundaries.\n //\n // However, JS runtimes and timers distinguish between intervals achieved by\n // serial `setTimeout` calls vs. a single `setInterval` call. An interval of\n // serial `setTimeout` calls can be individually delayed, which delays\n // scheduling the next `setTimeout`, and so on. `setInterval` attempts to\n // guarantee the interval callback will be invoked more precisely to the\n // interval period, regardless of load.\n //\n // Therefore, we use `setInterval` to schedule single and repeat actions.\n // If the action reschedules itself with the same delay, the interval is not\n // canceled. 
If the action doesn't reschedule, or reschedules with a\n // different delay, the interval will be canceled after scheduled callback\n // execution.\n //\n if (id != null) {\n this.id = this.recycleAsyncId(scheduler, id, delay);\n }\n\n // Set the pending flag indicating that this action has been scheduled, or\n // has recursively rescheduled itself.\n this.pending = true;\n\n this.delay = delay;\n // If this action has already an async Id, don't request a new one.\n this.id = this.id ?? this.requestAsyncId(scheduler, this.id, delay);\n\n return this;\n }\n\n protected requestAsyncId(scheduler: AsyncScheduler, _id?: TimerHandle, delay: number = 0): TimerHandle {\n return intervalProvider.setInterval(scheduler.flush.bind(scheduler, this), delay);\n }\n\n protected recycleAsyncId(_scheduler: AsyncScheduler, id?: TimerHandle, delay: number | null = 0): TimerHandle | undefined {\n // If this action is rescheduled with the same delay time, don't clear the interval id.\n if (delay != null && this.delay === delay && this.pending === false) {\n return id;\n }\n // Otherwise, if the action's delay time is different from the current delay,\n // or the action has been rescheduled before it's executed, clear the interval id\n if (id != null) {\n intervalProvider.clearInterval(id);\n }\n\n return undefined;\n }\n\n /**\n * Immediately executes this action and the `work` it contains.\n * @return {any}\n */\n public execute(state: T, delay: number): any {\n if (this.closed) {\n return new Error('executing a cancelled action');\n }\n\n this.pending = false;\n const error = this._execute(state, delay);\n if (error) {\n return error;\n } else if (this.pending === false && this.id != null) {\n // Dequeue if the action didn't reschedule itself. Don't call\n // unsubscribe(), because the action could reschedule later.\n // For example:\n // ```\n // scheduler.schedule(function doWork(counter) {\n // /* ... I'm a busy worker bee ... */\n // var originalAction = this;\n // /* wait 100ms before rescheduling the action */\n // setTimeout(function () {\n // originalAction.schedule(counter + 1);\n // }, 100);\n // }, 1000);\n // ```\n this.id = this.recycleAsyncId(this.scheduler, this.id, null);\n }\n }\n\n protected _execute(state: T, _delay: number): any {\n let errored: boolean = false;\n let errorValue: any;\n try {\n this.work(state);\n } catch (e) {\n errored = true;\n // HACK: Since code elsewhere is relying on the \"truthiness\" of the\n // return here, we can't have it return \"\" or 0 or false.\n // TODO: Clean this up when we refactor schedulers mid-version-8 or so.\n errorValue = e ? e : new Error('Scheduled action threw falsy error');\n }\n if (errored) {\n this.unsubscribe();\n return errorValue;\n }\n }\n\n unsubscribe() {\n if (!this.closed) {\n const { id, scheduler } = this;\n const { actions } = scheduler;\n\n this.work = this.state = this.scheduler = null!;\n this.pending = false;\n\n arrRemove(actions, this);\n if (id != null) {\n this.id = this.recycleAsyncId(scheduler, id, null);\n }\n\n this.delay = null!;\n super.unsubscribe();\n }\n }\n}\n", "import { Action } from './scheduler/Action';\nimport { Subscription } from './Subscription';\nimport { SchedulerLike, SchedulerAction } from './types';\nimport { dateTimestampProvider } from './scheduler/dateTimestampProvider';\n\n/**\n * An execution context and a data structure to order tasks and schedule their\n * execution. 
Provides a notion of (potentially virtual) time, through the\n * `now()` getter method.\n *\n * Each unit of work in a Scheduler is called an `Action`.\n *\n * ```ts\n * class Scheduler {\n * now(): number;\n * schedule(work, delay?, state?): Subscription;\n * }\n * ```\n *\n * @class Scheduler\n * @deprecated Scheduler is an internal implementation detail of RxJS, and\n * should not be used directly. Rather, create your own class and implement\n * {@link SchedulerLike}. Will be made internal in v8.\n */\nexport class Scheduler implements SchedulerLike {\n public static now: () => number = dateTimestampProvider.now;\n\n constructor(private schedulerActionCtor: typeof Action, now: () => number = Scheduler.now) {\n this.now = now;\n }\n\n /**\n * A getter method that returns a number representing the current time\n * (at the time this function was called) according to the scheduler's own\n * internal clock.\n * @return {number} A number that represents the current time. May or may not\n * have a relation to wall-clock time. May or may not refer to a time unit\n * (e.g. milliseconds).\n */\n public now: () => number;\n\n /**\n * Schedules a function, `work`, for execution. May happen at some point in\n * the future, according to the `delay` parameter, if specified. May be passed\n * some context object, `state`, which will be passed to the `work` function.\n *\n * The given arguments will be processed an stored as an Action object in a\n * queue of actions.\n *\n * @param {function(state: ?T): ?Subscription} work A function representing a\n * task, or some unit of work to be executed by the Scheduler.\n * @param {number} [delay] Time to wait before executing the work, where the\n * time unit is implicit and defined by the Scheduler itself.\n * @param {T} [state] Some contextual data that the `work` function uses when\n * called by the Scheduler.\n * @return {Subscription} A subscription in order to be able to unsubscribe\n * the scheduled work.\n */\n public schedule(work: (this: SchedulerAction, state?: T) => void, delay: number = 0, state?: T): Subscription {\n return new this.schedulerActionCtor(this, work).schedule(state, delay);\n }\n}\n", "import { Scheduler } from '../Scheduler';\nimport { Action } from './Action';\nimport { AsyncAction } from './AsyncAction';\nimport { TimerHandle } from './timerHandle';\n\nexport class AsyncScheduler extends Scheduler {\n public actions: Array> = [];\n /**\n * A flag to indicate whether the Scheduler is currently executing a batch of\n * queued actions.\n * @type {boolean}\n * @internal\n */\n public _active: boolean = false;\n /**\n * An internal ID used to track the latest asynchronous task such as those\n * coming from `setTimeout`, `setInterval`, `requestAnimationFrame`, and\n * others.\n * @type {any}\n * @internal\n */\n public _scheduled: TimerHandle | undefined;\n\n constructor(SchedulerAction: typeof Action, now: () => number = Scheduler.now) {\n super(SchedulerAction, now);\n }\n\n public flush(action: AsyncAction): void {\n const { actions } = this;\n\n if (this._active) {\n actions.push(action);\n return;\n }\n\n let error: any;\n this._active = true;\n\n do {\n if ((error = action.execute(action.state, action.delay))) {\n break;\n }\n } while ((action = actions.shift()!)); // exhaust the scheduler queue\n\n this._active = false;\n\n if (error) {\n while ((action = actions.shift()!)) {\n action.unsubscribe();\n }\n throw error;\n }\n }\n}\n", "import { AsyncAction } from './AsyncAction';\nimport { AsyncScheduler } from 
'./AsyncScheduler';\n\n/**\n *\n * Async Scheduler\n *\n * Schedule task as if you used setTimeout(task, duration)\n *\n * `async` scheduler schedules tasks asynchronously, by putting them on the JavaScript\n * event loop queue. It is best used to delay tasks in time or to schedule tasks repeating\n * in intervals.\n *\n * If you just want to \"defer\" task, that is to perform it right after currently\n * executing synchronous code ends (commonly achieved by `setTimeout(deferredTask, 0)`),\n * better choice will be the {@link asapScheduler} scheduler.\n *\n * ## Examples\n * Use async scheduler to delay task\n * ```ts\n * import { asyncScheduler } from 'rxjs';\n *\n * const task = () => console.log('it works!');\n *\n * asyncScheduler.schedule(task, 2000);\n *\n * // After 2 seconds logs:\n * // \"it works!\"\n * ```\n *\n * Use async scheduler to repeat task in intervals\n * ```ts\n * import { asyncScheduler } from 'rxjs';\n *\n * function task(state) {\n * console.log(state);\n * this.schedule(state + 1, 1000); // `this` references currently executing Action,\n * // which we reschedule with new state and delay\n * }\n *\n * asyncScheduler.schedule(task, 3000, 0);\n *\n * // Logs:\n * // 0 after 3s\n * // 1 after 4s\n * // 2 after 5s\n * // 3 after 6s\n * ```\n */\n\nexport const asyncScheduler = new AsyncScheduler(AsyncAction);\n\n/**\n * @deprecated Renamed to {@link asyncScheduler}. Will be removed in v8.\n */\nexport const async = asyncScheduler;\n", "import { AsyncAction } from './AsyncAction';\nimport { AnimationFrameScheduler } from './AnimationFrameScheduler';\nimport { SchedulerAction } from '../types';\nimport { animationFrameProvider } from './animationFrameProvider';\nimport { TimerHandle } from './timerHandle';\n\nexport class AnimationFrameAction extends AsyncAction {\n constructor(protected scheduler: AnimationFrameScheduler, protected work: (this: SchedulerAction, state?: T) => void) {\n super(scheduler, work);\n }\n\n protected requestAsyncId(scheduler: AnimationFrameScheduler, id?: TimerHandle, delay: number = 0): TimerHandle {\n // If delay is greater than 0, request as an async action.\n if (delay !== null && delay > 0) {\n return super.requestAsyncId(scheduler, id, delay);\n }\n // Push the action to the end of the scheduler queue.\n scheduler.actions.push(this);\n // If an animation frame has already been requested, don't request another\n // one. If an animation frame hasn't been requested yet, request one. Return\n // the current animation frame request id.\n return scheduler._scheduled || (scheduler._scheduled = animationFrameProvider.requestAnimationFrame(() => scheduler.flush(undefined)));\n }\n\n protected recycleAsyncId(scheduler: AnimationFrameScheduler, id?: TimerHandle, delay: number = 0): TimerHandle | undefined {\n // If delay exists and is greater than 0, or if the delay is null (the\n // action wasn't rescheduled) but was originally scheduled as an async\n // action, then recycle as an async action.\n if (delay != null ? 
delay > 0 : this.delay > 0) {\n return super.recycleAsyncId(scheduler, id, delay);\n }\n // If the scheduler queue has no remaining actions with the same async id,\n // cancel the requested animation frame and set the scheduled flag to\n // undefined so the next AnimationFrameAction will request its own.\n const { actions } = scheduler;\n if (id != null && actions[actions.length - 1]?.id !== id) {\n animationFrameProvider.cancelAnimationFrame(id as number);\n scheduler._scheduled = undefined;\n }\n // Return undefined so the action knows to request a new async id if it's rescheduled.\n return undefined;\n }\n}\n", "import { AsyncAction } from './AsyncAction';\nimport { AsyncScheduler } from './AsyncScheduler';\n\nexport class AnimationFrameScheduler extends AsyncScheduler {\n public flush(action?: AsyncAction): void {\n this._active = true;\n // The async id that effects a call to flush is stored in _scheduled.\n // Before executing an action, it's necessary to check the action's async\n // id to determine whether it's supposed to be executed in the current\n // flush.\n // Previous implementations of this method used a count to determine this,\n // but that was unsound, as actions that are unsubscribed - i.e. cancelled -\n // are removed from the actions array and that can shift actions that are\n // scheduled to be executed in a subsequent flush into positions at which\n // they are executed within the current flush.\n const flushId = this._scheduled;\n this._scheduled = undefined;\n\n const { actions } = this;\n let error: any;\n action = action || actions.shift()!;\n\n do {\n if ((error = action.execute(action.state, action.delay))) {\n break;\n }\n } while ((action = actions[0]) && action.id === flushId && actions.shift());\n\n this._active = false;\n\n if (error) {\n while ((action = actions[0]) && action.id === flushId && actions.shift()) {\n action.unsubscribe();\n }\n throw error;\n }\n }\n}\n", "import { AnimationFrameAction } from './AnimationFrameAction';\nimport { AnimationFrameScheduler } from './AnimationFrameScheduler';\n\n/**\n *\n * Animation Frame Scheduler\n *\n * Perform task when `window.requestAnimationFrame` would fire\n *\n * When `animationFrame` scheduler is used with delay, it will fall back to {@link asyncScheduler} scheduler\n * behaviour.\n *\n * Without delay, `animationFrame` scheduler can be used to create smooth browser animations.\n * It makes sure scheduled task will happen just before next browser content repaint,\n * thus performing animations as efficiently as possible.\n *\n * ## Example\n * Schedule div height animation\n * ```ts\n * // html:
\n * import { animationFrameScheduler } from 'rxjs';\n *\n * const div = document.querySelector('div');\n *\n * animationFrameScheduler.schedule(function(height) {\n * div.style.height = height + \"px\";\n *\n * this.schedule(height + 1); // `this` references currently executing Action,\n * // which we reschedule with new state\n * }, 0, 0);\n *\n * // You will see a div element growing in height\n * ```\n */\n\nexport const animationFrameScheduler = new AnimationFrameScheduler(AnimationFrameAction);\n\n/**\n * @deprecated Renamed to {@link animationFrameScheduler}. Will be removed in v8.\n */\nexport const animationFrame = animationFrameScheduler;\n", "import { Observable } from '../Observable';\nimport { SchedulerLike } from '../types';\n\n/**\n * A simple Observable that emits no items to the Observer and immediately\n * emits a complete notification.\n *\n * Just emits 'complete', and nothing else.\n *\n * ![](empty.png)\n *\n * A simple Observable that only emits the complete notification. It can be used\n * for composing with other Observables, such as in a {@link mergeMap}.\n *\n * ## Examples\n *\n * Log complete notification\n *\n * ```ts\n * import { EMPTY } from 'rxjs';\n *\n * EMPTY.subscribe({\n * next: () => console.log('Next'),\n * complete: () => console.log('Complete!')\n * });\n *\n * // Outputs\n * // Complete!\n * ```\n *\n * Emit the number 7, then complete\n *\n * ```ts\n * import { EMPTY, startWith } from 'rxjs';\n *\n * const result = EMPTY.pipe(startWith(7));\n * result.subscribe(x => console.log(x));\n *\n * // Outputs\n * // 7\n * ```\n *\n * Map and flatten only odd numbers to the sequence `'a'`, `'b'`, `'c'`\n *\n * ```ts\n * import { interval, mergeMap, of, EMPTY } from 'rxjs';\n *\n * const interval$ = interval(1000);\n * const result = interval$.pipe(\n * mergeMap(x => x % 2 === 1 ? of('a', 'b', 'c') : EMPTY),\n * );\n * result.subscribe(x => console.log(x));\n *\n * // Results in the following to the console:\n * // x is equal to the count on the interval, e.g. (0, 1, 2, 3, ...)\n * // x will occur every 1000ms\n * // if x % 2 is equal to 1, print a, b, c (each on its own)\n * // if x % 2 is not equal to 1, nothing will be output\n * ```\n *\n * @see {@link Observable}\n * @see {@link NEVER}\n * @see {@link of}\n * @see {@link throwError}\n */\nexport const EMPTY = new Observable((subscriber) => subscriber.complete());\n\n/**\n * @param scheduler A {@link SchedulerLike} to use for scheduling\n * the emission of the complete notification.\n * @deprecated Replaced with the {@link EMPTY} constant or {@link scheduled} (e.g. `scheduled([], scheduler)`). Will be removed in v8.\n */\nexport function empty(scheduler?: SchedulerLike) {\n return scheduler ? emptyScheduled(scheduler) : EMPTY;\n}\n\nfunction emptyScheduled(scheduler: SchedulerLike) {\n return new Observable((subscriber) => scheduler.schedule(() => subscriber.complete()));\n}\n", "import { SchedulerLike } from '../types';\nimport { isFunction } from './isFunction';\n\nexport function isScheduler(value: any): value is SchedulerLike {\n return value && isFunction(value.schedule);\n}\n", "import { SchedulerLike } from '../types';\nimport { isFunction } from './isFunction';\nimport { isScheduler } from './isScheduler';\n\nfunction last(arr: T[]): T | undefined {\n return arr[arr.length - 1];\n}\n\nexport function popResultSelector(args: any[]): ((...args: unknown[]) => unknown) | undefined {\n return isFunction(last(args)) ? 
args.pop() : undefined;\n}\n\nexport function popScheduler(args: any[]): SchedulerLike | undefined {\n return isScheduler(last(args)) ? args.pop() : undefined;\n}\n\nexport function popNumber(args: any[], defaultValue: number): number {\n return typeof last(args) === 'number' ? args.pop()! : defaultValue;\n}\n", "export const isArrayLike = ((x: any): x is ArrayLike => x && typeof x.length === 'number' && typeof x !== 'function');", "import { isFunction } from \"./isFunction\";\n\n/**\n * Tests to see if the object is \"thennable\".\n * @param value the object to test\n */\nexport function isPromise(value: any): value is PromiseLike {\n return isFunction(value?.then);\n}\n", "import { InteropObservable } from '../types';\nimport { observable as Symbol_observable } from '../symbol/observable';\nimport { isFunction } from './isFunction';\n\n/** Identifies an input as being Observable (but not necessary an Rx Observable) */\nexport function isInteropObservable(input: any): input is InteropObservable {\n return isFunction(input[Symbol_observable]);\n}\n", "import { isFunction } from './isFunction';\n\nexport function isAsyncIterable(obj: any): obj is AsyncIterable {\n return Symbol.asyncIterator && isFunction(obj?.[Symbol.asyncIterator]);\n}\n", "/**\n * Creates the TypeError to throw if an invalid object is passed to `from` or `scheduled`.\n * @param input The object that was passed.\n */\nexport function createInvalidObservableTypeError(input: any) {\n // TODO: We should create error codes that can be looked up, so this can be less verbose.\n return new TypeError(\n `You provided ${\n input !== null && typeof input === 'object' ? 'an invalid object' : `'${input}'`\n } where a stream was expected. You can provide an Observable, Promise, ReadableStream, Array, AsyncIterable, or Iterable.`\n );\n}\n", "export function getSymbolIterator(): symbol {\n if (typeof Symbol !== 'function' || !Symbol.iterator) {\n return '@@iterator' as any;\n }\n\n return Symbol.iterator;\n}\n\nexport const iterator = getSymbolIterator();\n", "import { iterator as Symbol_iterator } from '../symbol/iterator';\nimport { isFunction } from './isFunction';\n\n/** Identifies an input as being an Iterable */\nexport function isIterable(input: any): input is Iterable {\n return isFunction(input?.[Symbol_iterator]);\n}\n", "import { ReadableStreamLike } from '../types';\nimport { isFunction } from './isFunction';\n\nexport async function* readableStreamLikeToAsyncGenerator(readableStream: ReadableStreamLike): AsyncGenerator {\n const reader = readableStream.getReader();\n try {\n while (true) {\n const { value, done } = await reader.read();\n if (done) {\n return;\n }\n yield value!;\n }\n } finally {\n reader.releaseLock();\n }\n}\n\nexport function isReadableStreamLike(obj: any): obj is ReadableStreamLike {\n // We don't want to use instanceof checks because they would return\n // false for instances from another Realm, like an + + + +
+ +
+
+
+
+

What are you waiting for?

+

Get hands-on experience with Polkadot's in-depth tutorials. Covering everything from blockchain basics to advanced skills, our tutorials help you build expertise and start creating with confidence.

+ + + +
+
+
+
+ + + + + + + + + + + + + +
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/infrastructure/index.html b/infrastructure/index.html new file mode 100644 index 00000000..96aa9997 --- /dev/null +++ b/infrastructure/index.html @@ -0,0 +1,4950 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Infrastructure | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + + + +
+
+
+ + + + + + +
+
+
+ + + + + +
+
+
+ + + + + +
+
+
+ + +
+ +
+ + + + + + + + + + + +

Infrastructure

+

Running infrastructure on Polkadot is essential to supporting the network’s performance and security. Operators must focus on reliability, ensure proper configuration, and meet the necessary hardware requirements to contribute effectively to the decentralized ecosystem.

+ +

Choosing the Right Role

+

Selecting your role within the Polkadot ecosystem depends on your goals, resources, and expertise. Below are detailed considerations for each role:

+
    +
  • +

    Running a node:

    +
      +
    • Purpose - a node provides access to network data and supports API queries. It is commonly used for:
        +
      • Development and testing - offers a local instance to simulate network conditions and test applications
      • +
      • Production use - acts as a data source for dApps, clients, and other applications needing reliable access to the blockchain
      • +
      +
    • +
    • Requirements - moderate hardware resources to handle blockchain data efficiently
    • +
    • Responsibilities - a node’s responsibilities vary based on its purpose:
        +
      • Development and testing - enables developers to test features, debug code, and simulate network interactions in a controlled environment
      • +
      • Production use - provides consistent and reliable data access for dApps and other applications, ensuring minimal downtime
      • +
      +
    • +
    +
  • +
  • +

    Running a validator:

    +
      +
    • Purpose - validators play a critical role in securing the Polkadot relay chain. They validate parachain block submissions, participate in consensus, and help maintain the network's overall integrity
    • +
    • Requirements - becoming a validator requires:
        +
      • Staking - a variable amount of DOT tokens to secure the network and demonstrate commitment
      • +
      • Hardware - high-performing hardware resources capable of supporting intensive blockchain operations
      • +
      • Technical expertise - proficiency in setting up and maintaining nodes, managing updates, and understanding Polkadot's consensus mechanisms
      • +
      • Community involvement - building trust and rapport within the community to attract nominators willing to stake with your validator
      • +
      +
    • +
    • Responsibilities - validators have critical responsibilities to ensure network health:
        +
      • Uptime - maintain near-constant availability to avoid slashing penalties for downtime or unresponsiveness
      • +
      • Network security - participate in consensus and verify parachain transactions to uphold the network's security and integrity
      • +
      • Availability - monitor the network for events and respond to issues promptly, such as misbehavior reports or protocol updates
      • +
      +
    • +
    +
  • +
+

In This Section

+

+

+

+ + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/infrastructure/running-a-node/index.html b/infrastructure/running-a-node/index.html new file mode 100644 index 00000000..79097bdb --- /dev/null +++ b/infrastructure/running-a-node/index.html @@ -0,0 +1,4964 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Running a Node | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + + + +
+
+
+ + + + + + +
+
+
+ + + + + +
+
+
+ + + + + +
+
+
+ + +
+ +
+ + + + + + + + + + + +

Running a Node

+

Running a node on the Polkadot network enables you to access blockchain data, interact with the network, and support decentralized applications (dApps). This guide will walk you through the process of setting up and connecting to a Polkadot node, including essential configuration steps for ensuring connectivity and security.

+

Full Nodes vs Bootnodes

+

Full nodes and bootnodes serve different roles within the network, each contributing in unique ways to connectivity and data access:

+
    +
  • Full node - stores blockchain data, validates transactions, and can serve as a source for querying data
  • +
  • Bootnode - assists new nodes in discovering peers and connecting to the network, but doesn’t store blockchain data
  • +
+

The following sections describe the different node types (pruned, archive, and light) and the unique features of each for various use cases.

+

Types of Full Nodes

+

The three main types of nodes are as follows:

+
    +
  • Pruned node - discards the historical states of finalized blocks older than a specified number, except for the genesis block's state
  • +
  • Archive node - preserves all the past blocks and their states, making it convenient to query the past state of the chain at any given time. Archive nodes use a lot of disk space, which means they should be limited to use cases that require easy access to past on-chain data, such as block explorers
  • +
  • Light node - has only the runtime and the current state but doesn't store past blocks, making it useful for resource-restricted devices
  • +
+

Each node type can be configured to provide remote access to blockchain data via RPC endpoints, allowing external clients, like dApps or developers, to submit transactions, query data, and interact with the blockchain remotely.

+
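For instance, once a node exposes its RPC server, any HTTP client can query it. The following sketch assumes a node listening on the default RPC port 9944 and calls the standard system_health JSON-RPC method:

curl -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method": "system_health", "params":[]}' \
http://localhost:9944

A healthy node replies with its current peer count and whether it is still syncing.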
+

Tip

+

On Stakeworld, you can find a list of the database sizes of Polkadot and Kusama nodes.

+
+

State vs. Block Pruning

+

A pruned node retains only a subset of finalized blocks, discarding older data. The two main types of pruning, both controlled by start-up flags (see the example after this list), are:

+
    +
  • State pruning - removes the states of old blocks while retaining block bodies and headers
  • +
  • Block pruning - removes both the full content of old blocks and their associated states, but keeps the block headers
  • +
+
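For illustration, both pruning modes map to start-up flags on a Polkadot SDK node. The values below are examples; each flag accepts a number of blocks or archive:

polkadot --chain polkadot \
--state-pruning 1000 \
--blocks-pruning 1000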

Despite these deletions, pruned nodes are still capable of performing many essential functions, such as displaying account balances, making transfers, setting up session keys, and participating in staking.

+

In This Section

+

+

+

+ + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/infrastructure/running-a-node/setup-bootnode/index.html b/infrastructure/running-a-node/setup-bootnode/index.html new file mode 100644 index 00000000..db8004fc --- /dev/null +++ b/infrastructure/running-a-node/setup-bootnode/index.html @@ -0,0 +1,5089 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Set Up a Bootnode | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + +
+
+ +
+
+ + +
+ +
+ + + + +

Set Up a Bootnode

+

Introduction

+

Bootnodes are essential for helping blockchain nodes discover peers and join the network. When a node starts, it needs to find other nodes, and bootnodes provide an initial point of contact. Once connected, a node can expand its peer connections and play its role in the network, like participating as a validator.

+

This guide will walk you through setting up a Polkadot bootnode, configuring P2P, WebSocket (WS), and secure WebSocket (WSS) connections, and managing network keys. You'll also learn how to test your bootnode to ensure it is running correctly and accessible to other nodes.

+

Prerequisites

+

Before you start, you need to have the following prerequisites:

+
    +
  • Verify a working Polkadot (polkadot) binary is available on your machine
  • +
  • Ensure you have nginx installed. Please refer to the Installation Guide for help with installation if needed
  • +
  • Set up a VPS or other dedicated server
  • +
+

Accessing the Bootnode

+

Bootnodes must be accessible through three key channels to connect with other nodes in the network:

+
    +
  • +

    P2P - a direct peer-to-peer connection, set by:

    +
    --listen-addr /ip4/0.0.0.0/tcp/INSERT_PORT
    +
    +
    +

    Note

    +

    This is not enabled by default on non-validator nodes like archive RPC nodes.

    +
    +
  • +
  • +

    P2P/WS - a WebSocket (WS) connection, also configured via --listen-addr

    +
  • +
  • P2P/WSS - a secure WebSocket (WSS) connection using SSL, often required for light clients. An SSL proxy is needed, as the node itself cannot handle certificates
  • +
+

Node Key

+

A node key is the Ed25519 key used by libp2p to assign your node an identity or peer ID. Generating a known node key for a bootnode is crucial, as it gives you a consistent key that can be placed in chain specifications as a known, reliable bootnode.

+

Starting a node creates its node key in the chains/INSERT_CHAIN/network/secret_ed25519 file.

+

You can create a node key using:

+
polkadot key generate-node-key
+
+

This key can be used in the startup command line.
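For example, a minimal sketch of generating the key to a file and pointing the node at it on start-up (the file path is illustrative; --file and --node-key-file are the relevant flags in recent polkadot releases):

polkadot key generate-node-key --file /var/lib/polkadot/node.key

polkadot --chain polkadot \
--name dot-bootnode \
--node-key-file /var/lib/polkadot/node.key \
--listen-addr /ip4/0.0.0.0/tcp/30310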

+

It is imperative that you back up the node key. If it is included in a chain specification that is compiled into the polkadot binary, the key is effectively hardcoded, and the binary must be recompiled to change it.
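A simple backup is to copy the secret key file out of the node's base path. The path below assumes the default base path on Linux and the Polkadot chain; adjust both to match your setup:

cp ~/.local/share/polkadot/chains/polkadot/network/secret_ed25519 /path/to/secure/backup/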

+

Running the Bootnode

+

A bootnode can be run as follows:

+
polkadot --chain polkadot \
+--name dot-bootnode \
+--listen-addr /ip4/0.0.0.0/tcp/30310 \
+--listen-addr /ip4/0.0.0.0/tcp/30311/ws
+
+

This assigns P2P to port 30310 and P2P/WS to port 30311. For the P2P/WSS port, a proxy must be set up with a DNS name and a corresponding certificate. The following example is for the popular nginx server and enables P2P/WSS on port 30312 by adding a proxy to the P2P/WS port 30311:

+
/etc/nginx/sites-enabled/dot-bootnode
server {
+       listen       30312 ssl http2 default_server;
+       server_name  dot-bootnode.stakeworld.io;
+       root         /var/www/html;
+
+       ssl_certificate "INSERT_YOUR_CERT";
+       ssl_certificate_key "INSERT_YOUR_KEY";
+
+       location / {
+         proxy_buffers 16 4k;
+         proxy_buffer_size 2k;
+         proxy_pass http://localhost:30311;
+         proxy_http_version 1.1;
+         proxy_set_header Upgrade $http_upgrade;
+         proxy_set_header Connection "Upgrade";
+         proxy_set_header Host $host;
+   }
+
+}
+
+

Testing Bootnode Connection

+

If the preceding node is running with the DNS name dot-bootnode.stakeworld.io, behind a proxy with a valid certificate, and with the node ID 12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg, then the following commands should output syncing 1 peers.

+
+

Tip

+

You can add -lsub-libp2p=trace at the end of the command to get libp2p trace logging for debugging purposes.

+
+

P2P

+
polkadot --chain polkadot \
+--base-path /tmp/node \
+--name "Bootnode testnode" \
+--reserved-only \
+--reserved-nodes "/dns/dot-bootnode.stakeworld.io/tcp/30310/p2p/12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg" \
+--no-hardware-benchmarks
+
+

P2P/WS

+
polkadot --chain polkadot \
+--base-path /tmp/node \
+--name "Bootnode testnode" \
+--reserved-only \
+--reserved-nodes "/dns/dot-bootnode.stakeworld.io/tcp/30311/ws/p2p/12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg" \
+--no-hardware-benchmarks
+
+

P2P/WSS

+
polkadot --chain polkadot \
+--base-path /tmp/node \
+--name "Bootnode testnode" \
+--reserved-only \
+--reserved-nodes "/dns/dot-bootnode.stakeworld.io/tcp/30312/wss/p2p/12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg" \
+--no-hardware-benchmarks
+
+
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/infrastructure/running-a-node/setup-full-node/index.html b/infrastructure/running-a-node/setup-full-node/index.html new file mode 100644 index 00000000..6e3d9acf --- /dev/null +++ b/infrastructure/running-a-node/setup-full-node/index.html @@ -0,0 +1,5265 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Set Up a Node | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + + + + +
+ +
+ + + + +

Set Up a Node

+

Introduction

+

Running a node on Polkadot provides direct interaction with the network, enhanced privacy, and full control over RPC requests, transactions, and data queries. As the backbone of the network, nodes ensure decentralized data propagation, transaction validation, and seamless communication across the ecosystem.

+

Polkadot supports multiple node types, including pruned, archive, and light nodes, each suited to specific use cases. During setup, you can use configuration flags to choose the node type you wish to run.

+

This guide walks you through configuring, securing, and maintaining a node on Polkadot or any Polkadot SDK-based chain. It covers instructions for the different node types and how to safely expose your node's RPC server for external access. Whether you're building a local development environment, powering dApps, or supporting network decentralization, this guide provides all the essentials.

+

Set Up a Node

+

Now that you're familiar with the different types of nodes, this section will walk you through configuring, securing, and maintaining a node on Polkadot or any Polkadot SDK-based chain.

+

Prerequisites

+

Before getting started, ensure the following prerequisites are met:

+ +
+

Warning

+

This setup is not recommended for validators. If you plan to run a validator, refer to the Running a Validator guide for proper instructions.

+
+

Install and Build the Polkadot Binary

+

This section will walk you through installing and building the Polkadot binary for different operating systems and methods.

+
+macOS +
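If Rust is not installed yet, you can first install it with the standard rustup installer (assuming the default installation options):

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh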

To get started, update and configure the Rust toolchain by running the following commands:

+
source ~/.cargo/env
+
+rustup default stable
+rustup update
+
+rustup update nightly
+rustup target add wasm32-unknown-unknown --toolchain nightly
+rustup component add rust-src --toolchain stable-aarch64-apple-darwin
+
+

You can verify your installation by running:

+
rustup show
+rustup +nightly show
+
+

You should see output similar to the following:

+

+ rustup show
+ rustup +nightly show

+
+

active toolchain
----------------

stable-aarch64-apple-darwin (default)
rustc 1.82.0 (f6e511eec 2024-10-15)

active toolchain
----------------

nightly-aarch64-apple-darwin (overridden by +toolchain on the command line)
rustc 1.84.0-nightly (03ee48451 2024-11-18)

+ +

Then, run the following commands to clone and build the Polkadot binary:

+
git clone https://github.com/paritytech/polkadot-sdk polkadot-sdk
+cd polkadot-sdk
+cargo build --release
+
+

Depending on the specs of your machine, compiling the binary may take an hour or more. After building the Polkadot node from source, the executable binary will be located at ./target/release/polkadot.

+
+
+Windows +

To get started, make sure that you have WSL and Ubuntu installed on your Windows machine.

+

Once installed, you have a couple options for installing the Polkadot binary:

+
    +
  • If Rust is installed, then cargo can be used as in the macOS instructions
  • +
  • Or, the instructions in the Linux section can be used
  • +
+
+
+Linux (pre-built binary) +

To grab the latest release of the Polkadot binary, you can use wget:

+
wget https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-INSERT_VERSION/polkadot
+
+

Ensure you note the executable binary's location, as you'll need to use it when running the start-up command. If you prefer, you can specify the output location of the executable binary with the -O flag, for example:

+
wget https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-INSERT_VERSION/polkadot \
+-O /var/lib/polkadot-data/polkadot
+
+
+

Info

+

The nature of pre-built binaries means that they may not work on your particular architecture or Linux distribution. If you see an error like cannot execute binary file: Exec format error, it likely means the binary is incompatible with your system. You will either need to compile the binary yourself or use Docker.

+
+

Ensure that you properly configure the permissions to make the Polkadot release binary executable:

+
sudo chmod +x polkadot
+
+
+
+Linux (compile binary) +

The most reliable (although perhaps not the fastest) way of launching a full node is to compile the binary yourself. Depending on your machine's specs, this may take an hour or more.

+

To get started, run the following commands to configure the Rust toolchain:

+
rustup default stable
+rustup update
+rustup update nightly
+rustup target add wasm32-unknown-unknown --toolchain nightly
+rustup target add wasm32-unknown-unknown --toolchain stable-x86_64-unknown-linux-gnu
+rustup component add rust-src --toolchain stable-x86_64-unknown-linux-gnu
+
+

You can verify your installation by running:

+
rustup show
+
+

You should see output similar to the following:

+

+ rustup show

+
+

active toolchain
----------------

stable-x86_64-unknown-linux-gnu (default)
rustc 1.82.0 (f6e511eec 2024-10-15)

+ +

Once Rust is configured, run the following commands to clone and build Polkadot:

+
git clone https://github.com/paritytech/polkadot-sdk polkadot-sdk
+cd polkadot-sdk
+cargo build --release
+
+

Compiling the binary may take an hour or more, depending on your machine's specs. After building the Polkadot node from source, the executable binary will be located at ./target/release/polkadot.

+
+
+Linux (snap package) +

Polkadot can be installed as a snap package. If you don't already have Snap installed, take the following steps to install it:

+
sudo apt update
+sudo apt install snapd
+
+

Install the Polkadot snap package:

+
sudo snap install polkadot
+
+

Before continuing on with the following instructions, check out the Configure and Run Your Node section to learn more about the configuration options.

+

To configure your Polkadot node with your desired options, you'll run a command similar to the following:

+
sudo snap set polkadot service-args="--name=MyName --chain=polkadot"
+
+

Then to start the node service, run:

+
sudo snap start polkadot
+
+

You can review the logs to check on the status of the node:

+
snap logs polkadot -f
+
+

And at any time, you can stop the node service:

+
sudo snap stop polkadot
+
+

You can optionally prevent the service from stopping when snap is updated with the following command:

+
sudo snap set polkadot endure=true
+
+
+

Use Docker

+

As an additional option, you can use Docker to run your node in a container. Doing this is more advanced, so it's best left up to those already familiar with Docker or who have completed the other set-up instructions in this guide. You can review the latest versions on DockerHub.

+

Be aware that when you run Polkadot in Docker, the process only listens on localhost by default. If you would like to connect to your node's services (RPC and Prometheus), you need to ensure that you run the node with the --rpc-external and --prometheus-external flags.

+
docker run -p 9944:9944 -p 9615:9615 parity/polkadot:v1.16.2 --name "my-polkadot-node-calling-home" --rpc-external --prometheus-external
+
+

If you're running Docker on an Apple Silicon machine (e.g. M4), you'll need to adapt the command slightly:

+
docker run --platform linux/amd64 -p 9944:9944 -p 9615:9615 parity/polkadot:v1.16.2 --name "kearsarge-calling-home" --rpc-external --prometheus-external
+
+

Configure and Run Your Node

+

Now that you've installed and built the Polkadot binary, the next step is to configure the start-up command depending on the type of node that you want to run. You'll need to modify the start-up command accordingly based on the location of the binary. In some cases, it may be located within the ./target/release/ folder, so you'll need to replace polkadot with ./target/release/polkadot in the following commands.

+

Also, note that you can use the same binary for Polkadot as you would for Kusama or any other relay chain. You'll need to use the --chain flag to differentiate between chains.
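For example, the following sketch starts the same binary against Kusama (the node name is illustrative):

polkadot --chain kusama \
--name "INSERT_NODE_NAME"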

+
+

Note

+

Not sure which type of node to run? Explore an overview of the different node types.

+
+

The base commands for running a Polkadot node are as follows:

+
+
+
+

This uses the default pruning value of the last 256 blocks:

+
polkadot --chain polkadot \
+--name "INSERT_NODE_NAME"
+
+
+
+

You can customize the pruning value, for example, to the last 1000 finalized blocks:

+
polkadot --chain polkadot \
+--name INSERT_YOUR_NODE_NAME \
+--state-pruning 1000 \
+--blocks-pruning archive \
+--rpc-cors all \
+--rpc-methods safe
+
+
+
+

To support the full state, use the archive option:

+
polkadot --chain polkadot \
+--name INSERT_YOUR_NODE_NAME \
+--state-pruning archive \
+--blocks-pruning archive
+
+
+
+
+

If you want to run an RPC node, please refer to the following RPC Configurations section.

+

To review a complete list of the available commands, flags, and options, you can use the --help flag:

+
polkadot --help
+
+

Once you've fully configured your start-up command, you can execute it in your terminal and your node will start syncing.

+

RPC Configurations

+

The node start-up settings allow you to choose which RPC methods to expose, how many connections to allow, and which origins should be granted access through the RPC server. A combined example follows the list below.

+
    +
  • You can limit the methods to use with --rpc-methods; an easy way to set this to a safe mode is --rpc-methods safe
  • +
  • You can set your maximum connections through --rpc-max-connections, for example, --rpc-max-connections 200
  • +
  • By default, localhost and Polkadot.js can access the RPC server. You can change this by setting --rpc-cors. To allow access from everywhere, you can use --rpc-cors all
  • +
+
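As a minimal sketch combining the flags above (node name and connection limit are illustrative):

polkadot --chain polkadot \
--name INSERT_YOUR_NODE_NAME \
--rpc-methods safe \
--rpc-max-connections 200 \
--rpc-cors all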

For a list of important flags when running RPC nodes, refer to the Parity DevOps documentation: Important Flags for Running an RPC Node.

+

Sync Your Node

+

The syncing process will take a while, depending on your bandwidth, processing power, disk speed, and RAM. The process may complete on a $10 DigitalOcean droplet in roughly 36 hours. While syncing, your node name will be visible in gray on Polkadot Telemetry; once it is fully synced, it will appear in white.

+

A healthy node syncing blocks will output logs like the following:

+
2024-11-19 23:49:57 Parity Polkadot
2024-11-19 23:49:57 ✌️ version 1.14.1-7c4cd60da6d
2024-11-19 23:49:57 ❤️ by Parity Technologies <admin@parity.io>, 2017-2024
2024-11-19 23:49:57 📋 Chain specification: Polkadot
2024-11-19 23:49:57 🏷 Node name: myPolkadotNode
2024-11-19 23:49:57 👤 Role: FULL
2024-11-19 23:49:57 💾 Database: RocksDb at /home/ubuntu/.local/share/polkadot/chains/polkadot/db/full
2024-11-19 23:50:00 🏷 Local node identity is: 12D3KooWDmhHEgPRJUJnUpJ4TFWn28EENqvKWH4dZGCN9TS51y9h
2024-11-19 23:50:00 Running libp2p network backend
2024-11-19 23:50:00 💻 Operating system: linux
2024-11-19 23:50:00 💻 CPU architecture: x86_64
2024-11-19 23:50:00 💻 Target environment: gnu
2024-11-19 23:50:00 💻 CPU: Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz
2024-11-19 23:50:00 💻 CPU cores: 4
2024-11-19 23:50:00 💻 Memory: 32001MB
2024-11-19 23:50:00 💻 Kernel: 5.15.0-113-generic
2024-11-19 23:50:00 💻 Linux distribution: Ubuntu 22.04.5 LTS
2024-11-19 23:50:00 💻 Virtual machine: no
2024-11-19 23:50:00 📦 Highest known block at #9319
2024-11-19 23:50:00 〽️ Prometheus exporter started at 127.0.0.1:9615
2024-11-19 23:50:00 Running JSON-RPC server: addr=127.0.0.1:9944, allowed origins=["http://localhost:*", "http://127.0.0.1:*", "https://localhost:*", "https://127.0.0.1:*", "https://polkadot.js.org"]
2024-11-19 23:50:00 🏁 CPU score: 671.67 MiBs
2024-11-19 23:50:00 🏁 Memory score: 7.96 GiBs
2024-11-19 23:50:00 🏁 Disk score (seq. writes): 377.87 MiBs
2024-11-19 23:50:00 🏁 Disk score (rand. writes): 147.92 MiBs
2024-11-19 23:50:00 🥩 BEEFY gadget waiting for BEEFY pallet to become available...
2024-11-19 23:50:00 🔍 Discovered new external address for our node: /ip4/37.187.93.17/tcp/30333/ws/p2p/12D3KooWDmhHEgPRJUJnUpJ4TFWn28EENqvKWH4dZGCN9TS51y9h
2024-11-19 23:50:01 🔍 Discovered new external address for our node: /ip6/2001:41d0:a:3511::1/tcp/30333/ws/p2p/12D3KooWDmhHEgPRJUJnUpJ4TFWn28EENqvKWH4dZGCN9TS51y9h
2024-11-19 23:50:05 ⚙️ Syncing, target=#23486325 (5 peers), best: #12262 (0x8fb5…f310), finalized #11776 (0x9de1…32fb), ⬇ 430.5kiB/s ⬆ 17.8kiB/s
2024-11-19 23:50:10 ⚙️ Syncing 628.8 bps, target=#23486326 (6 peers), best: #15406 (0x9ce1…2d76), finalized #15360 (0x0e41…a064), ⬇ 255.0kiB/s ⬆ 1.8kiB/s
+ +

Congratulations, you're now syncing a Polkadot full node! Remember that the process is identical when using any other Polkadot SDK-based chain, although individual chains may have chain-specific flag requirements.

+

Connect to Your Node

+

Open Polkadot.js Apps and click the logo in the top left to switch the node. Activate the Development toggle and input your node's domain or IP address. The default WS endpoint for a local node is:

+
ws://127.0.0.1:9944
+
+
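If the connection fails, you can first confirm the RPC server is reachable. On recent polkadot releases the server answers both HTTP and WebSocket on the same port, so a plain HTTP request to the standard system_chain JSON-RPC method works as a check:

curl -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method": "system_chain", "params":[]}' \
http://127.0.0.1:9944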
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/infrastructure/running-a-validator/index.html b/infrastructure/running-a-validator/index.html new file mode 100644 index 00000000..3bd479b7 --- /dev/null +++ b/infrastructure/running-a-validator/index.html @@ -0,0 +1,4959 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Running a Validator | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + + + +
+
+
+ + + + + + +
+
+
+ + + + + +
+
+
+ + + + + +
+
+
+ + +
+ +
+ + + + + + + + + + + +

Running a Validator

+

Running a Polkadot validator is crucial for securing the network and maintaining its integrity. Validators play a key role in verifying parachain blocks, participating in consensus, and ensuring the reliability of the Polkadot relay chain.

+

Learn the requirements for setting up a Polkadot validator node, along with detailed steps on how to install, run, upgrade, and maintain the node.

+

In This Section

+

+

+

+

Additional Resources

+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/infrastructure/running-a-validator/onboarding-and-offboarding/index.html b/infrastructure/running-a-validator/onboarding-and-offboarding/index.html new file mode 100644 index 00000000..5d0f54bf --- /dev/null +++ b/infrastructure/running-a-validator/onboarding-and-offboarding/index.html @@ -0,0 +1,4964 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Onboarding and Offboarding | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + + + +
+
+
+ + + + + + +
+
+
+ + + + + +
+
+
+ + + + + +
+
+
+ + +
+ +
+ + + + + + + + + + + +

Onboarding and Offboarding

+

Successfully onboarding and offboarding a Polkadot validator node is crucial to maintaining the security and integrity of the network. This process involves setting up, activating, deactivating, and securely managing your validator’s key and staking details.

+

This section provides guidance on how to set up, activate, and deactivate your validator.

+

In This Section

+

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+

+

Additional Resources

+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/index.html b/infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/index.html new file mode 100644 index 00000000..ffa115b9 --- /dev/null +++ b/infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/index.html @@ -0,0 +1,5890 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Set Up a Validator | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + + + + +
+ +
+ + + + +

Set Up a Validator

+

Introduction

+

Setting up a Polkadot validator node is essential for securing the network and earning staking rewards. This guide walks you through the technical steps to set up a validator, from installing the necessary software to managing keys and synchronizing your node with the chain.

+

Running a validator requires a commitment to maintaining a stable, secure infrastructure. Validators are responsible for their own stakes and those of nominators who trust them with their tokens. Proper setup and ongoing management are critical to ensuring smooth operation and avoiding potential penalties such as slashing.

+

Prerequisites

+

To get the most from this guide, ensure you've done the following before going forward:

+ +

Initial Setup

+

Before you can begin running your validator, you'll need to configure your server environment to meet the operational and security standards required for validating. Configuration includes setting up time synchronization, ensuring critical security features are active, and installing the necessary binaries. Proper setup at this stage is essential to prevent issues like block production errors or being penalized for downtime. Below are the essential steps to get your system ready.

+

Install Network Time Protocol Client

+

Accurate timekeeping is critical to ensure your validator is synchronized with the network. Validators need local clocks in sync with the blockchain to avoid missing block authorship opportunities. Using Network Time Protocol (NTP) is the standard solution to keep your system's clock accurate.

+

If you are using Ubuntu version 18.04 or newer, the NTP Client should be installed by default. You can check whether you have the NTP client by running:

+
timedatectl
+
+

If NTP is running, you should see a message like the following:

+
System clock synchronized: yes
+
+

If NTP is not installed or running, you can install it using:

+
sudo apt-get install ntp
+
+

After installation, NTP will automatically start. To check its status:

+
sudo ntpq -p
+
+

This command will return a message with the status of the NTP synchronization. Skipping this step could result in your validator node missing blocks due to minor clock drift, potentially affecting its network performance.

+

Verify Landlock is Activated

+

Landlock is an important security feature integrated into Linux kernels starting with version 5.13. It allows processes, even those without special privileges, to limit their access to the system to reduce the machine's attack surface. This feature is crucial for validators, as it helps ensure the security and stability of the node by preventing unauthorized access or malicious behavior.
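You can confirm your kernel meets the 5.13 minimum before checking for Landlock itself:

uname -r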

+

To use Landlock, ensure your kernel is version 5.13 or newer. Most Linux distributions should already have Landlock activated. You can check if Landlock is activated on your machine by running the following command as root:

+
dmesg | grep landlock || journalctl -kg landlock
+
+

If Landlock is not activated, your system logs won't show any related output. In this case, you will need to activate it manually or ensure that your Linux distribution supports it. Most modern distributions with the required kernel version should have Landlock activated by default. However, if your system lacks support, you may need to build the kernel with Landlock activated. For more information on doing so, refer to the official kernel documentation.

+

Implementing Landlock ensures your node operates in a restricted, self-imposed sandbox, limiting potential damage from security breaches or bugs. While not a mandatory requirement, enabling this feature greatly improves the security of your validator setup.

+

Install the Polkadot Binaries

+

You must install the Polkadot binaries required to run your validator node. These binaries include the main polkadot, polkadot-prepare-worker, and polkadot-execute-worker binaries. All three are needed to run a fully functioning validator node.

+

Depending on your preference and operating system setup, there are multiple methods to install these binaries. Below are the main options:

+

Install from Official Releases

+

The preferred, most straightforward method to install the required binaries is downloading the latest versions from the official releases. You can visit the GitHub Releases page for the most current versions of the polkadot, polkadot-prepare-worker, and polkadot-execute-worker binaries.

+

You can also download the binaries by using the following direct links, replacing INSERT_VERSION_NUMBER with the version number (e.g., v1.16.1):

+
+
+
+
https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-INSERT_VERSION_NUMBER/polkadot
+
+
+
+
https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-INSERT_VERSION_NUMBER/polkadot-prepare-worker
+
+
+
+
https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-INSERT_VERSION_NUMBER/polkadot-execute-worker
+
+
+
+
+
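For example, you could fetch all three binaries with wget and mark them executable; this sketch uses the direct links above, with INSERT_VERSION_NUMBER replaced as described:

wget https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-INSERT_VERSION_NUMBER/polkadot
wget https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-INSERT_VERSION_NUMBER/polkadot-prepare-worker
wget https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-INSERT_VERSION_NUMBER/polkadot-execute-worker
chmod +x polkadot polkadot-prepare-worker polkadot-execute-worker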

Install with Package Managers

+

Users running Debian-based distributions like Ubuntu, or RPM-based distributions such as Fedora or CentOS, can install the binaries via package managers.

+
+Debian-based (Debian, Ubuntu) +

Run the following commands as the root user to add the necessary repository and install the binaries:

+
# Import the security@parity.io GPG key
+gpg --recv-keys --keyserver hkps://keys.mailvelope.com 9D4B2B6EB8F97156D19669A9FF0812D491B96798
+gpg --export 9D4B2B6EB8F97156D19669A9FF0812D491B96798 > /usr/share/keyrings/parity.gpg
+# Add the Parity repository and update the package index
+echo 'deb [signed-by=/usr/share/keyrings/parity.gpg] https://releases.parity.io/deb release main' > /etc/apt/sources.list.d/parity.list
+apt update
+# Install the `parity-keyring` package - This will ensure the GPG key
+# used by APT remains up-to-date
+apt install parity-keyring
+# Install polkadot
+apt install polkadot
+
+

After installation, ensure the binaries are properly installed by verifying the installation.

+
+
+RPM-based (Fedora, CentOS) +

Run the following commands as the root user to install the binaries on an RPM-based system:

+
# Install dnf-plugins-core (This might already be installed)
+dnf install dnf-plugins-core
+# Add the repository and activate it
+dnf config-manager --add-repo https://releases.parity.io/rpm/polkadot.repo
+dnf config-manager --set-enabled polkadot
+# Install polkadot (You may have to confirm the import of the GPG key, which
+# should have the following fingerprint: 9D4B2B6EB8F97156D19669A9FF0812D491B96798)
+dnf install polkadot
+
+

After installation, ensure the binaries are properly installed by verifying the installation.

+
+

Install with Ansible

+

You can also manage Polkadot installations using Ansible. This approach can be beneficial for users managing multiple validator nodes or requiring automated deployment. The Parity chain operations Ansible collection provides a Substrate node role for this purpose.

+

Install with Docker

+

If you prefer using Docker or an OCI-compatible container runtime, the official Polkadot Docker image can be pulled directly from Docker Hub.

+

To pull the image, run the following command, making sure to replace INSERT_VERSION_NUMBER with the appropriate version number (e.g., v1.16.1):

+
docker pull docker.io/parity/polkadot:INSERT_VERSION_NUMBER
+
+
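
The following is a hedged sketch of running the image as a container; the container name, published port, and volume mount are illustrative assumptions, so adjust them to your setup:

+
# Run the node in a container, exposing the default p2p port
+docker run --name polkadot-validator \
+  -p 30333:30333 \
+  -v /var/lib/polkadot:/polkadot \
+  docker.io/parity/polkadot:INSERT_VERSION_NUMBER \
+  --validator --name "INSERT_NAME_FROM_TELEMETRY"
+
+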

Build from Sources

+

You may build the binaries from source by following the instructions on the Polkadot SDK repository.

+

Verify Installation

+

Once the Polkadot binaries are installed, it's essential to verify that everything is set up correctly and that all the necessary components are in place. Follow these steps to ensure the binaries are installed and functioning as expected.

+
    +
  1. +

    Check the versions - run the following commands to verify the versions of the installed binaries:

    +
    polkadot --version
    +polkadot-execute-worker --version
    +polkadot-prepare-worker --version
    +
    +

    The output should show the version numbers for each of the binaries. Ensure that the versions match and are consistent, similar to the following example (the specific version may vary):

    +
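
    The following is an illustrative sketch only; the version and commit hash shown are placeholders:

    +
    polkadot 1.16.1-INSERT_COMMIT_HASH
    +polkadot-execute-worker 1.16.1-INSERT_COMMIT_HASH
    +polkadot-prepare-worker 1.16.1-INSERT_COMMIT_HASH
    +
    +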

    If the versions do not match or if there is an error, double-check that all the binaries were correctly installed and are accessible within your $PATH.

    +
  2. +
  3. +

    Ensure all binaries are in the same directory - all the binaries must be in the same directory for the Polkadot validator node to function properly. If the binaries are not in the same location, move them to a unified directory and ensure this directory is added to your system's $PATH.

    +

    To verify the $PATH, run the following command:

    +
    echo $PATH
    +
    +

    If necessary, you can move the binaries to a shared location, such as /usr/local/bin/, and add it to your $PATH; see the sketch after this list.

    +
  4. +
+
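
A brief sketch of consolidating the binaries, assuming they currently sit in your working directory:

+
# Move all three binaries into a single directory on the PATH
+sudo mv ./polkadot ./polkadot-prepare-worker ./polkadot-execute-worker /usr/local/bin/
+# Confirm each binary now resolves via the PATH
+which polkadot polkadot-prepare-worker polkadot-execute-worker
+
+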

Run a Validator on a TestNet

+

Running your validator on a test network like Westend or Kusama is a smart way to familiarize yourself with the process and identify any setup issues in a lower-stakes environment before joining the Polkadot MainNet.

+

Choose a Network

+
    +
  • Westend - Polkadot's primary TestNet is open to anyone for testing purposes. Validator slots are intentionally limited to keep the network stable for the Polkadot release process, so it may not support as many validators at any given time
  • +
  • Kusama - often called Polkadot's “canary network,” Kusama has real economic value but operates with a faster and more experimental approach. Running a validator here provides an experience closer to MainNet with the benefit of more frequent validation opportunities with an era time of 6 hours vs 24 hours for Polkadot
  • +
+

Synchronize Chain Data

+

After successfully installing and verifying the Polkadot binaries, the next step is to sync your node with the blockchain network. Synchronization is necessary to download and validate the blockchain data, ensuring your node is ready to participate as a validator. Follow these steps to sync your node:

+
    +
  1. +

    Start syncing - you can run a full or warp sync

    +
    +
    +
    +

    Polkadot defaults to using a full sync, which downloads and validates the entire blockchain history from the genesis block. Start the syncing process by running the following command:

    +
    polkadot
    +
    +

    This command starts your Polkadot node in non-validator mode, allowing you to synchronize the chain data.

    +
    +
    +

    You can opt to use warp sync, which initially downloads only GRANDPA finality proofs and the latest finalized block's state. Use the following command to start a warp sync:

    +
    polkadot --sync warp
    +
    +

    Warp sync ensures that your node quickly updates to the latest finalized state. The historical blocks are downloaded in the background as the node continues to operate.

    +
    +
    +
    +
    +Adjustments for TestNets +

    If you're planning to run a validator on a TestNet, you can specify the chain using the --chain flag. For example, the following starts a node on Kusama:

    +
    polkadot --chain=kusama
    +
    +
    +
  2. +
  3. +

    Monitor sync progress - once the sync starts, you will see a stream of logs providing information about the node's status and progress. Here's an example of what the output might look like:

    +

    +
    polkadot
    +2021-06-17 03:07:07 Parity Polkadot
    +2021-06-17 03:07:07 ✌️ version 0.9.5-95f6aa201-x86_64-linux-gnu
    +2021-06-17 03:07:07 ❤️ by Parity Technologies <admin@parity.io>, 2017-2021
    +2021-06-17 03:07:07 📋 Chain specification: Polkadot
    +2021-06-17 03:07:07 🏷 Node name: boiling-pet-7554
    +2021-06-17 03:07:07 👤 Role: FULL
    +2021-06-17 03:07:07 💾 Database: RocksDb at /root/.local/share/polkadot/chains/polkadot/db
    +2021-06-17 03:07:07 ⛓ Native runtime: polkadot-9050 (parity-polkadot-0.tx7.au0)
    +2021-06-17 03:07:10 🏷 Local node identity is: 12D3KooWLtXFWf1oGrnxMGmPKPW54xWCHAXHbFh4Eap6KXmxoi9u
    +2021-06-17 03:07:10 📦 Highest known block at #17914
    +2021-06-17 03:07:10 〽️ Prometheus server started at 127.0.0.1:9615
    +2021-06-17 03:07:10 Listening for new connections on 127.0.0.1:9944
    +...
    +

    +

    The output logs provide information such as the current block number, node name, and network connections. Monitor the sync progress and any errors that might occur during the process. Compare the latest processed block with the current highest block using tools like Telemetry or Polkadot.js Apps Explorer, or query the node directly as shown in the sketch after this list.

    +
  4. +
+
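
A minimal sketch for checking sync progress directly over RPC, assuming the default RPC port of 9944:

+
curl -H "Content-Type: application/json" \
+-d '{"id":1, "jsonrpc":"2.0", "method": "system_syncState", "params":[]}' \
+http://localhost:9944
+# The response reports startingBlock, currentBlock, and highestBlock
+
+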

Database Snapshot Services

+

If you'd like to speed up the process further, you can use a database snapshot. Snapshots are compressed backups of the blockchain's database directory and can significantly reduce the time required to sync a new node. Here are a few public snapshot providers:

+ +
+

Warning

+

Although snapshots are convenient, syncing from scratch is recommended for security purposes. If snapshots become corrupted and most nodes rely on them, the network could inadvertently run on a non-canonical chain.

+
+
+Why am I unable to synchronize the chain with 0 peers? +

Make sure libp2p port 30333 is open and reachable. It can take some time to discover other peers on the network.

+
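
If a host firewall is in place, you may need to open the port explicitly. A minimal sketch, assuming ufw is the firewall in use:

+
# Allow inbound libp2p traffic on the default p2p port (assumes ufw)
+sudo ufw allow 30333/tcp
+# Verify the node is listening on the port
+ss -tlnp | grep 30333
+
+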

Terminal logs showing 0 peers

+
+

Bond DOT

+

Once your validator node is synced, the next step is bonding DOT. A bonded account, or stash, holds your staked tokens (DOT) that back your validator node. Bonding your DOT means locking it for a period, during which it cannot be transferred or spent but is used to secure your validator's role in the network. Visit the Minimum Bond Requirement section for details on how much DOT is required.

+

The following sections will guide you through bonding DOT for your validator.

+

Bonding DOT on Polkadot.js Apps

+

Once you're ready to bond your DOT, head over to the Polkadot.js Apps staking page by clicking the Network dropdown at the top of the page and selecting Staking.

+

To get started with the bond submission, click on the Accounts tab, then the + Stash button, and then enter the following information:

+
    +
  1. Stash account - select your stash account (which is the account with the DOT/KSM balance)
  2. +
  3. Value bonded - enter how much DOT from the stash account you want to bond/stake. You are not required to bond all of the DOT in that account and you may bond more DOT at a later time. Be aware, withdrawing any bonded amount requires waiting for the unbonding period. The unbonding period is seven days for Kusama and 28 days for Polkadot
  4. +
  5. Payment destination - add the recipient account for validator rewards. If you'd like to redirect payments to an account that is not the stash account, you can do it by entering the address here. Note that it is extremely unsafe to set an exchange address as the recipient of the staking rewards
  6. +
+

Once everything is filled in properly, select Bond and sign the transaction with your stash account. If successful, you should see an ExtrinsicSuccess message.

+

Your bonded account will be available under Stashes. After refreshing the screen, you should now see a card with all your accounts. The bonded amount on the right corresponds to the funds bonded by the stash account.

+

Set Session Keys

+

Setting up your validator's session keys is essential to associate your node with your stash account on the Polkadot network. Validators use session keys to participate in the consensus process. Your validator can only perform its role in the network by properly setting session keys, which consist of several key pairs for different parts of the protocol (e.g., GRANDPA, BABE). These keys must be registered on-chain and associated with your validator node to ensure it can participate in validating blocks.

+

The following sections will cover generating session keys, submitting key data on-chain, and verifying that session keys are correctly set.

+

Generate Session Keys

+

The Polkadot.js Apps UI and the CLI are the two primary methods used to generate session keys.

+
+
+
+
    +
  1. Ensure that you are connected to your validator node through the Polkadot.js Apps interface
  2. +
  3. In the Toolbox tab, navigate to RPC calls
  4. +
  5. Select author_rotateKeys from the drop-down menu and run the command. This will generate new session keys in your node's keystore and return the result as a hex-encoded string
  6. +
  7. Copy and save this hex-encoded output for the next step
  8. +
+
+
+

Generate session keys by running the following command on your validator node:

+
curl -H "Content-Type: application/json" \
+-d '{"id":1, "jsonrpc":"2.0", "method": "author_rotateKeys", "params":[]}' \
+http://localhost:9944
+
+

This command will return a hex-encoded string that is the concatenation of your session keys. Save this string for later use.

+
+
+
+

Submit Transaction to Set Keys

+

Now that you have generated your session keys, you must submit them to the chain. Follow these steps:

+
    +
  1. Go to the Network > Staking > Accounts section on Polkadot.js Apps
  2. +
  3. Select Set Session Key on the bonding account you generated earlier
  4. +
  5. Paste the hex-encoded session key string you generated (from either the UI or CLI) into the input field and submit the transaction
  6. +
+

+

Once the transaction is signed and submitted, your session keys will be registered on-chain.

+

Verify Session Key Setup

+

To verify that your session keys are properly set, you can use one of two RPC calls:

+
    +
  • hasKey - checks if the node has a specific key by public key and key type
  • +
  • hasSessionKeys - verifies if your node has the full session key string associated with the validator
  • +
+

For example, you can check session keys on the Polkadot.js Apps interface or by running an RPC query against your node. Once this is done, your validator node is ready for its role.

+
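
As a sketch, you can query the node directly over RPC using the hex-encoded session key string you saved earlier; replace the placeholder before running:

+
curl -H "Content-Type: application/json" \
+-d '{"id":1, "jsonrpc":"2.0", "method": "author_hasSessionKeys", "params":["INSERT_HEX_ENCODED_SESSION_KEYS"]}' \
+http://localhost:9944
+# A result of true means the node's keystore holds every key in the string
+
+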

Set the Node Key

+

Validators on Polkadot need a static network key (also known as the node key) to maintain a stable node identity. This key ensures that your validator can maintain a consistent peer ID, even across restarts, which is crucial for maintaining reliable network connections.

+

Starting with Polkadot version 1.11, validators without a stable network key may encounter the following error on startup:

+
+
polkadot --validator --name "INSERT_NAME_FROM_TELEMETRY"
+Error:
+  0: Starting an authority without network key
+     This is not a safe operation because other authorities in the network may depend on your node having a stable identity.
+     Otherwise these other authorities may not be able to reach you.
+     If it is the first time running your node you could use one of the following methods:
+     1. [Preferred] Separately generate the key with: INSERT_NODE_BINARY key generate-node-key --base-path INSERT_YOUR_BASE_PATH
+     2. [Preferred] Separately generate the key with: INSERT_NODE_BINARY key generate-node-key --file INSERT_YOUR_PATH_TO_NODE_KEY
+     3. [Preferred] Separately generate the key with: INSERT_NODE_BINARY key generate-node-key --default-base-path
+     4. [Unsafe] Pass --unsafe-force-node-key-generation and make sure you remove it for subsequent node restarts
+
+ +

Generate the Node Key

+

Use one of the following methods to generate your node key:

+
+
+
+

The recommended solution is to generate a node key and save it to a file using the following command:

+
polkadot key generate-node-key --file INSERT_PATH_TO_NODE_KEY
+
+
+
+

You can also generate the node key with the following command, which will automatically save the key to the base path of your node:

+
polkadot key generate-node-key --default-base-path
+
+
+
+
+

Save the file path for reference. You will need it in the next step to configure your node with a static identity.

+

Set the Node Key

+

After generating the node key, configure your node to use it by specifying the path to the key file when launching your node. Add the following flag to your validator node's startup command:

+
polkadot --node-key-file INSERT_PATH_TO_NODE_KEY
+
+

Following these steps ensures that your node retains its identity, making it discoverable by peers without the risk of conflicting identities across sessions. For further technical background, see Polkadot SDK Pull Request #3852 for the rationale behind requiring static keys.

+
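
Putting the pieces together, a validator startup command might look like the following sketch; the node name and key path are placeholders:

+
polkadot --validator \
+  --name "INSERT_NAME_FROM_TELEMETRY" \
+  --node-key-file INSERT_PATH_TO_NODE_KEY
+
+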

Validate

+

Once your validator node is fully synced and ready, the next step is to ensure it's visible on the network and performing as expected. Below are steps for monitoring and managing your node on the Polkadot network.

+

Verify Sync via Telemetry

+

To confirm that your validator is live and synchronized with the Polkadot network, visit the Telemetry page. Telemetry provides real-time information on node performance and can help you check whether your validator is connected properly. Since you can search all nodes currently active on the network, use a unique node name so yours is easy to find. Confirm that your node is fully synced by comparing its block height with the network's latest block. Fully synced nodes appear white in the list, while nodes still syncing appear gray.

+

In the following example, a node named techedtest is successfully located and synchronized, ensuring it's prepared to participate in the network:

+

Polkadot telemetry dashboard

+

Activate using Polkadot.js Apps

+

Follow these steps to use Polkadot.js Apps to activate your validator:

+
    +
  1. +

    Go to the Validator tab in the Polkadot.js Apps UI and locate the section where you input the keys generated from rotateKeys. Paste the output from author_rotateKeys, which is a hex-encoded key that links your validator with its session keys:

    +

    +
  2. +
  3. +

    Set a reward commission percentage if desired. You can set a percentage of the rewards to pay to your validator and the remainder pays to your nominators. A 100% commission rate indicates the validator intends to keep all rewards and is seen as a signal the validator is not seeking nominators

    +
  4. +
  5. Toggle the allows new nominations option if your validator is open to more nominations from DOT holders
  6. +
  7. +

    Once everything is configured, select Bond & Validate to activate your validator status

    +

    +
  8. +
+

Monitor Validation Status and Slots

+

On the Staking tab in Polkadot.js Apps, you can see your validator's status, the number of available validator slots, and the nodes that have signaled their intent to validate. Your node may initially appear in the waiting queue, especially if the validator slots are full. The following is an example view of the Staking tab:

+

staking queue

+

The validator set refreshes each era. If there's an available slot in the next era, your node may be selected to move from the waiting queue to the active validator set, allowing it to start validating blocks. If your validator is not selected, it remains in the waiting queue. Increasing your stake or gaining more nominators may improve your chance of being selected in future eras.

+

Run a Validator Using Systemd

+

Running your Polkadot validator as a systemd service is an effective way to ensure its high uptime and reliability. Using systemd allows your validator to automatically restart after server reboots or unexpected crashes, significantly reducing the risk of slashing due to downtime.

+

The following sections will walk you through creating and managing a systemd service for your validator, allowing you to seamlessly monitor and control it as part of your Linux system.

+

Ensure the following requirements are met before proceeding with the systemd setup:

+ +

Create the Systemd Service File

+

First, create a new unit file called polkadot-validator.service in /etc/systemd/system/:

+
touch /etc/systemd/system/polkadot-validator.service
+
+

In this unit file, you will write the commands that you want to run on server boot/restart:

+
/etc/systemd/system/polkadot-validator.service
[Unit]
+Description=Polkadot Node
+After=network.target
+Documentation=https://github.com/paritytech/polkadot
+
+[Service]
+EnvironmentFile=-/etc/default/polkadot
+ExecStart=/usr/bin/polkadot $POLKADOT_CLI_ARGS
+User=polkadot
+Group=polkadot
+Restart=always
+RestartSec=120
+CapabilityBoundingSet=
+LockPersonality=true
+NoNewPrivileges=true
+PrivateDevices=true
+PrivateMounts=true
+PrivateTmp=true
+PrivateUsers=true
+ProtectClock=true
+ProtectControlGroups=true
+ProtectHostname=true
+ProtectKernelModules=true
+ProtectKernelTunables=true
+ProtectSystem=strict
+RemoveIPC=true
+RestrictAddressFamilies=AF_INET AF_INET6 AF_NETLINK AF_UNIX
+RestrictNamespaces=false
+RestrictSUIDSGID=true
+SystemCallArchitectures=native
+SystemCallFilter=@system-service
+SystemCallFilter=landlock_add_rule landlock_create_ruleset landlock_restrict_self seccomp mount umount2
+SystemCallFilter=~@clock @module @reboot @swap @privileged
+SystemCallFilter=pivot_root
+UMask=0027
+
+[Install]
+WantedBy=multi-user.target
+
+
+

Restart Delay Recommendation

+

It is recommended that a node's restart be delayed with RestartSec in the case of a crash. It's possible that when a node crashes, consensus votes in GRANDPA aren't persisted to disk. In this case, there is potential to equivocate when immediately restarting. Delaying the restart will allow the network to progress past potentially conflicting votes.

+
+

Run the Service

+

Activate the systemd service to start on system boot by running:

+
systemctl enable polkadot-validator.service
+
+

To start the service manually, use:

+
systemctl start polkadot-validator.service
+
+

Check the service's status to confirm it is running:

+
systemctl status polkadot-validator.service
+
+

To view the logs in real-time, use journalctl like so:

+
journalctl -f -u polkadot-validator
+
+

With these steps, you can effectively manage and monitor your validator as a systemd service.

+

Once your validator is active, it's officially part of Polkadot's security infrastructure. For questions or further support, you can reach out to the Polkadot Validator chat for tips and troubleshooting.

+
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/index.html b/infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/index.html new file mode 100644 index 00000000..600c83af --- /dev/null +++ b/infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/index.html @@ -0,0 +1,4938 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Stop Validating | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + +
+ +
+ + +
+ +
+ + + + +

Stop Validating

+

Introduction

+

If you're ready to stop validating on Polkadot, there are essential steps to ensure a smooth transition while protecting your funds and account integrity. Whether you're taking a break for maintenance or unbonding entirely, you'll need to chill your validator, purge session keys, and unbond your tokens. This guide explains how to use Polkadot's tools and extrinsics to safely withdraw from validation activities, safeguarding your account's future usability.

+

Pause Versus Stop

+

If you wish to remain a validator or nominator (for example, stopping for planned downtime or server maintenance), submitting the chill extrinsic in the staking pallet should suffice. Additional steps are only needed to unbond funds or reap an account.

+

The following are steps to ensure a smooth stop to validation:

+
    +
  • Chill the validator
  • +
  • Purge validator session keys
  • +
  • Unbond your tokens
  • +
+

Chill Validator

+

When stepping back from validating, the first step is to chill your validator status. This action stops your validator from being considered for the next era without fully unbonding your tokens, which can be useful for temporary pauses like maintenance or planned downtime.

+

Use the staking.chill extrinsic to initiate this. For more guidance on chilling your node, refer to the Pause Validating guide. You may also claim any pending staking rewards at this point.

+

Purge Validator Session Keys

+

Purging validator session keys is a critical step in removing the association between your validator account and its session keys, which ensures that your account is fully disassociated from validator activities. The session.purgeKeys extrinsic removes the reference to your session keys from the stash or staking proxy account that originally set them.

+

Here are a couple of important things to know about purging keys:

+
    +
  • Account used to purge keys - always use the same account to purge keys you originally used to set them, usually your stash or staking proxy account. Using a different account may leave an unremovable reference to the session keys on the original account, preventing its reaping
  • +
  • Account reaping issue - failing to purge keys will prevent you from reaping (fully deleting) your stash account. If you attempt to transfer tokens without purging, you'll need to rebond, purge the session keys, unbond again, and wait through the unbonding period before any transfer
  • +
+

Unbond Your Tokens

+

After chilling your node and purging session keys, the final step is to unbond your staked tokens. This action removes them from staking and begins the unbonding period (usually 28 days for Polkadot and seven days for Kusama), after which the tokens will be transferable.

+

To unbond tokens, go to Network > Staking > Account Actions on Polkadot.js Apps. Select your stash account, click on the dropdown menu, and choose Unbond Funds. Alternatively, you can use the staking.unbond extrinsic if you handle this via a staking proxy account.

+

Once the unbonding period is complete, your tokens will be available for use in transactions or transfers outside of staking.

+
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/infrastructure/running-a-validator/operational-tasks/general-management/index.html b/infrastructure/running-a-validator/operational-tasks/general-management/index.html new file mode 100644 index 00000000..1b617fbc --- /dev/null +++ b/infrastructure/running-a-validator/operational-tasks/general-management/index.html @@ -0,0 +1,5737 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + General Management | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + + + + +
+ +
+ + + + +

General Management

+

Introduction

+

Validator performance is pivotal in maintaining the security and stability of the Polkadot network. As a validator, optimizing your setup ensures efficient transaction processing, minimizes latency, and maintains system reliability during high-demand periods. Proper configuration and proactive monitoring also help mitigate risks like slashing and service interruptions.

+

This guide covers essential practices for managing a validator, including performance tuning techniques, security hardening, and tools for real-time monitoring. Whether you're fine-tuning CPU settings, configuring NUMA balancing, or setting up a robust alert system, these steps will help you build a resilient and efficient validator operation.

+

Configuration Optimization

+

For those seeking to optimize their validator's performance, the following configurations can improve responsiveness, reduce latency, and ensure consistent performance during high-demand periods.

+

Deactivate Simultaneous Multithreading

+

Polkadot validators operate primarily in single-threaded mode for critical paths, meaning optimizing for single-core CPU performance can reduce latency and improve stability. Deactivating simultaneous multithreading (SMT) prevents virtual cores from affecting performance. SMT is called Hyper-Threading on Intel CPUs and 2-way SMT on AMD Zen. The following script takes each core's sibling (virtual) thread offline:

+
for cpunum in $(cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | cut -s -d, -f2- | tr ',' '\n' | sort -un)
+do
+  echo 0 > /sys/devices/system/cpu/cpu$cpunum/online
+done
+
+

To save the changes permanently, add nosmt=force as kernel parameter. Edit /etc/default/grub and add nosmt=force to GRUB_CMDLINE_LINUX_DEFAULT variable as follows:

+
sudo nano /etc/default/grub
+# Add to GRUB_CMDLINE_LINUX_DEFAULT
+
+
/etc/default/grub
GRUB_HIDDEN_TIMEOUT=0
+GRUB_HIDDEN_TIMEOUT_QUIET=true
+GRUB_TIMEOUT=10
+GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
+GRUB_CMDLINE_LINUX_DEFAULT="nosmt=force"
+GRUB_CMDLINE_LINUX=""
+
+

After updating the variable, be sure to update GRUB to apply changes:

+
sudo update-grub
+
+

After the reboot, you should see that half of the cores are offline. To confirm, run:

+
lscpu --extended
+
+

Deactivate Automatic NUMA Balancing

+

Deactivating NUMA (Non-Uniform Memory Access) balancing for multi-CPU setups helps keep processes on the same CPU node, minimizing latency. Run the following command to deactivate NUMA balancing in runtime:

+
sysctl kernel.numa_balancing=0
+
+

To deactivate NUMA balancing permanently, add numa_balancing=disable to GRUB settings:

+
sudo nano /etc/default/grub
+# Add to GRUB_CMDLINE_LINUX_DEFAULT
+
+
/etc/default/grub
GRUB_DEFAULT=0
+GRUB_HIDDEN_TIMEOUT=0
+GRUB_HIDDEN_TIMEOUT_QUIET=true
+GRUB_TIMEOUT=10
+GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
+GRUB_CMDLINE_LINUX_DEFAULT="numa_balancing=disable"
+GRUB_CMDLINE_LINUX=""
+
+

After updating the variable, be sure to update GRUB to apply changes:

+
sudo update-grub
+
+

Confirm the deactivation by running the following command:

+
sysctl -a | grep 'kernel.numa_balancing'
+
+

If you successfully deactivated NUMA balancing, the preceding command should return 0.

+

Spectre and Meltdown Mitigations

+

Spectre and Meltdown are well-known vulnerabilities in modern CPUs that exploit speculative execution to access sensitive data. These vulnerabilities have been patched in recent Linux kernels, but the mitigations can slightly impact performance, especially in high-throughput or containerized environments.

+

If your security needs allow it, you may selectively deactivate specific mitigations for performance gains. The Spectre V2 and Speculative Store Bypass Disable (SSBD) for Spectre V4 apply to speculative execution and are particularly impactful in containerized environments. Deactivating them can help regain performance if your environment doesn't require these security layers.

+

To selectively deactivate the Spectre mitigations, update the GRUB_CMDLINE_LINUX_DEFAULT variable in your /etc/default/grub configuration:

+
sudo nano /etc/default/grub
+# Add to GRUB_CMDLINE_LINUX_DEFAULT
+
+
/etc/default/grub
GRUB_DEFAULT=0
+GRUB_HIDDEN_TIMEOUT=0
+GRUB_HIDDEN_TIMEOUT_QUIET=true
+GRUB_TIMEOUT=10
+GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
+GRUB_CMDLINE_LINUX_DEFAULT="spec_store_bypass_disable=prctl spectre_v2_user=prctl"
+
+

After updating the variable, be sure to update GRUB to apply changes and then reboot:

+
sudo update-grub
+sudo reboot
+
+

This approach selectively deactivates the Spectre V2 and Spectre V4 mitigations, leaving other protections intact. For full security, keep mitigations activated unless there's a significant performance need, as disabling them could expose the system to potential attacks on affected CPUs.

+
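
Before changing anything, you can inspect which mitigations the running kernel currently applies. A minimal sketch using the sysfs interface:

+
# List the kernel's current CPU vulnerability mitigations
+grep -r . /sys/devices/system/cpu/vulnerabilities/ 2>/dev/null
+
+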

Monitor Your Node

+

Monitoring your node's performance is critical to maintaining network reliability and security. Tools like Prometheus and Grafana provide insights into block height, peer connections, CPU and memory usage, and more. This section walks through setting up these tools and configuring alerts to notify you of potential issues.

+

Prepare Environment

+

Before installing Prometheus, set up the environment securely so that Prometheus runs with restricted user privileges:

+
    +
  1. Create a Prometheus user - ensure Prometheus runs with minimal permissions +
    sudo useradd --no-create-home --shell /usr/sbin/nologin prometheus
    +
  2. +
  3. Set up directories - create directories for configuration and data storage +
    sudo mkdir /etc/prometheus
    +sudo mkdir /var/lib/prometheus
    +
  4. +
  5. Change directory ownership - ensure Prometheus has access +
    sudo chown -R prometheus:prometheus /etc/prometheus
    +sudo chown -R prometheus:prometheus /var/lib/prometheus
    +
  6. +
+

Install and Configure Prometheus

+

After preparing the environment, install and configure the latest version of Prometheus as follows:

+
    +
  1. Download Prometheus - obtain the respective release binary for your system architecture from the Prometheus releases page. Replace the placeholder text with the respective release binary, e.g. https://github.com/prometheus/prometheus/releases/download/v3.0.0/prometheus-3.0.0.linux-amd64.tar.gz +
    sudo apt-get update && sudo apt-get upgrade
    +wget INSERT_RELEASE_DOWNLOAD_LINK
    +tar xfz prometheus-*.tar.gz
    +cd prometheus-3.0.0.linux-amd64
    +
  2. +
  3. +

    Set up Prometheus - copy the binaries and console directories, assign ownership of these files to the prometheus user, and clean up the download directory as follows:

    +
    +
    +
    +
    sudo cp ./prometheus /usr/local/bin/
    +sudo cp ./promtool /usr/local/bin/
    +sudo chown prometheus:prometheus /usr/local/bin/prometheus
    +sudo chown prometheus:prometheus /usr/local/bin/promtool
    +
    +
    +
    +
    sudo cp -r ./consoles /etc/prometheus
    +sudo cp -r ./console_libraries /etc/prometheus
    +sudo chown -R prometheus:prometheus /etc/prometheus/consoles
    +sudo chown -R prometheus:prometheus /etc/prometheus/console_libraries
    +
    +
    +
    +
    cd .. && rm -r prometheus*
    +
    +
    +
    +
    +
  4. +
  5. +

    Create prometheus.yml for configuration - run this command to define global settings, rule files, and scrape targets: +

    sudo nano /etc/prometheus/prometheus.yml
    +
    + In this example configuration, Prometheus scrapes itself every 5 seconds, ensuring detailed internal metrics. Node metrics are scraped from port 9615 by default, with a customizable interval. +
    prometheus-config.yml
    global:
    +  scrape_interval: 15s
    +  evaluation_interval: 15s
    +
    +rule_files:
    +  # - "first.rules"
    +  # - "second.rules"
    +
    +scrape_configs:
    +  - job_name: 'prometheus'
    +    scrape_interval: 5s
    +    static_configs:
    +      - targets: ['localhost:9090']
    +  - job_name: 'substrate_node'
    +    scrape_interval: 5s
    +    static_configs:
    +      - targets: ['localhost:9615']
    +

    +
  6. +
  7. +

    Validate configuration with promtool - use the promtool utility that ships with Prometheus to check your configuration +

    promtool check config /etc/prometheus/prometheus.yml
    +

    +
  8. +
  9. Assign ownership - save the configuration file and change the ownership of the file to prometheus user +
    sudo chown prometheus:prometheus /etc/prometheus/prometheus.yml
    +
  10. +
+

Start Prometheus

+
    +
  1. +

    Launch Prometheus - use the following command to launch Prometheus with a given configuration, set the storage location for metric data, and enable web console templates and libraries:

    +
    sudo -u prometheus /usr/local/bin/prometheus --config.file /etc/prometheus/prometheus.yml --storage.tsdb.path /var/lib/prometheus/ --web.console.templates=/etc/prometheus/consoles --web.console.libraries=/etc/prometheus/console_libraries
    +
    +

    If you set the server up properly, you should see terminal output similar to the following:

    +
  2. +
  3. +

    Verify access - verify you can access the Prometheus interface by visiting the following address: +

    http://SERVER_IP_ADDRESS:9090/graph
    +

    +

    If the interface appears to work as expected, exit the process using Control + C.

    +
  4. +
  5. +

    Create new systemd service file - this will automatically start the server during the boot process +

    sudo nano /etc/systemd/system/prometheus.service
    +
    + Add the following code to the service file:

    +

    prometheus.service
    [Unit]
    +Description=Prometheus Monitoring
    +Wants=network-online.target
    +After=network-online.target
    +
    +[Service]
    +User=prometheus
    +Group=prometheus
    +Type=simple
    +ExecStart=/usr/local/bin/prometheus \
    + --config.file /etc/prometheus/prometheus.yml \
    + --storage.tsdb.path /var/lib/prometheus/ \
    + --web.console.templates=/etc/prometheus/consoles \
    + --web.console.libraries=/etc/prometheus/console_libraries
    +ExecReload=/bin/kill -HUP $MAINPID
    +
    +[Install]
    +WantedBy=multi-user.target
    +
    +Once you save the file, execute the following command to reload systemd and enable the service so that it will load automatically during the operating system's startup:

    +

    sudo systemctl daemon-reload && sudo systemctl enable prometheus && sudo systemctl start prometheus
    +
    +4. Verify service - return to the Prometheus interface at the following address to verify the service is running: +
    http://SERVER_IP_ADDRESS:9090/
    +

    +
  6. +
+

Install and Configure Grafana

+

Grafana provides a powerful, customizable interface to visualize metrics collected by Prometheus. This guide follows Grafana's canonical installation instructions. To install and configure Grafana, follow these steps:

+
    +
  1. +

    Install Grafana prerequisites - run the following commands to install the required packages: +

    sudo apt-get install -y apt-transport-https software-properties-common wget    
    +

    +
  2. +
  3. +

    Import the GPG key: +

    sudo mkdir -p /etc/apt/keyrings/
    +wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null
    +

    +
  4. +
  5. +

    Configure the stable release repo and update packages: +

    echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
    +sudo apt-get update
    +

    +
  6. +
  7. +

    Install the latest stable version of Grafana: +

    sudo apt-get install grafana
    +

    +
  8. +
+

After installing Grafana, you can move on to the configuration steps:

+
    +
  1. +

    Set Grafana to auto-start - configure Grafana to start automatically on system boot and start the service +

    sudo systemctl daemon-reload
    +sudo systemctl enable grafana-server.service
    +sudo systemctl start grafana-server
    +

    +
  2. +
  3. +

    Verify the Grafana service is running with the following command: +

    sudo systemctl status grafana-server
    +
    +If necessary, you can stop or restart the service with the following commands:

    +
    sudo systemctl stop grafana-server
    +sudo systemctl restart grafana-server
    +
    +
  4. +
  5. +

    Access Grafana - open your browser, navigate to the following address, and use the default user and password admin to login: +

    http://SERVER_IP_ADDRESS:3000/login
    +

    +
  6. +
+
+

Change default port

+

If you want to run Grafana on another port, edit the file /usr/share/grafana/conf/defaults.ini with a command like: +

sudo vim /usr/share/grafana/conf/defaults.ini 
+
+You can change the http_port value as desired. Then restart Grafana with: +
sudo systemctl restart grafana-server
+

+
+

Grafana login screen

+

Follow these steps to visualize node metrics:

+
    +
  1. Select the gear icon for settings to configure the Data Sources
  2. +
  3. Select Add data source to define the data source
  4. +
  5. Select Prometheus
  6. +
  7. Enter http://localhost:9090 in the URL field, then select Save & Test. If you see the message "Data source is working" your connection is configured correctly
  8. +
  9. Next, select Import from the menu bar on the left, select Prometheus in the dropdown list and select Import
  10. +
  11. Finally, start your Polkadot node by running ./polkadot. You should now be able to monitor your node's performance such as the current block height, network traffic, and running tasks on the Grafana dashboard
  12. +
+
+

Import via grafana.com

+

The Grafana dashboards page features user-created dashboards made available for public use. Visit "Substrate Node Metrics" for an example of available dashboards.

+
+

Install and Configure Alertmanager

+

The optional Alertmanager complements Prometheus by handling alerts and notifying users of potential issues. Follow these steps to install and configure Alertmanager:

+
    +
  1. Download and extract Alertmanager - download the latest version from the Prometheus Alertmanager releases page. Replace the placeholder text with the respective release binary, e.g. https://github.com/prometheus/alertmanager/releases/download/v0.28.0-rc.0/alertmanager-0.28.0-rc.0.linux-amd64.tar.gz +
    wget INSERT_RELEASE_DOWNLOAD_LINK
    +tar -xvzf alertmanager*
    +
  2. +
  3. Move binaries and set permissions - copy the binaries to a system directory and set appropriate permissions +
    cd alertmanager-0.28.0-rc.0.linux-amd64
    +sudo cp ./alertmanager /usr/local/bin/
    +sudo cp ./amtool /usr/local/bin/
    +sudo chown prometheus:prometheus /usr/local/bin/alertmanager
    +sudo chown prometheus:prometheus /usr/local/bin/amtool
    +
  4. +
  5. +

    Create configuration file - create a new alertmanager.yml file under /etc/alertmanager +

    sudo mkdir /etc/alertmanager
    +sudo nano /etc/alertmanager/alertmanager.yml
    +
    + Add the following code to the configuration file to define email notifications: +
    alertmanager.yml
    global:
    +  resolve_timeout: 1m
    +
    +route:
    +  receiver: 'gmail-notifications'
    +
    +receivers:
    +  - name: 'gmail-notifications'
    +    email_configs:
    +      - to: INSERT_YOUR_EMAIL
    +        from: INSERT_YOUR_EMAIL
    +        smarthost: smtp.gmail.com:587
    +        auth_username: INSERT_YOUR_EMAIL
    +        auth_identity: INSERT_YOUR_EMAIL
    +        auth_password: INSERT_YOUR_APP_PASSWORD
    +        send_resolved: true
    +

    +
    +

    App password

    +

    You must generate an app password in your Gmail account to allow Alertmanager to send you alert notification emails.

    +
    +

    Ensure the configuration file has the correct permissions: +

    sudo chown -R prometheus:prometheus /etc/alertmanager
    +
    +4. Configure as a service - set up Alertmanager to run as a service by creating a systemd service file +
    sudo nano /etc/systemd/system/alertmanager.service
    +
    +Add the following code to the service file: +
    alertmanager.service
    [Unit]
    +Description=AlertManager Server Service
    +Wants=network-online.target
    +After=network-online.target
    +
    +[Service]
    +User=root
    +Group=root
    +Type=simple
    +ExecStart=/usr/local/bin/alertmanager --config.file /etc/alertmanager/alertmanager.yml --web.external-url=http://SERVER_IP:9093 --cluster.advertise-address='0.0.0.0:9093'
    +
    +[Install]
    +WantedBy=multi-user.target
    +
    +Reload and enable the service +
    sudo systemctl daemon-reload
    +sudo systemctl enable alertmanager
    +sudo systemctl start alertmanager
    +
    +Verify the service status using the following command: +
    sudo systemctl status alertmanager
    +
    +If you have configured the Alertmanager properly, the Active field should display active (running) similar to below:

    +

    +
    sudo systemctl status alertmanager
    +alertmanager.service - AlertManager Server Service
    +   Loaded: loaded (/etc/systemd/system/alertmanager.service; enabled; vendor preset: enabled)
    +   Active: active (running) since Thu 2020-08-20 22:01:21 CEST; 3 days ago
    + Main PID: 20592 (alertmanager)
    +    Tasks: 70 (limit: 9830)
    +   CGroup: /system.slice/alertmanager.service
    +

    +
  6. +
+
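
Before relying on the service, you can sanity-check the configuration file with amtool, which was installed alongside Alertmanager. A minimal sketch:

+
# Validate the Alertmanager configuration syntax
+amtool check-config /etc/alertmanager/alertmanager.yml
+
+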

Grafana Plugin

+

There is an Alertmanager plugin in Grafana that can help you monitor alert information. Follow these steps to use the plugin:

+
    +
  1. Install the plugin - use the following command: +
    sudo grafana-cli plugins install camptocamp-prometheus-alertmanager-datasource
    +
  2. +
  3. Restart Grafana +
    sudo systemctl restart grafana-server
    +
  4. +
  5. Configure datasource - go to your Grafana dashboard SERVER_IP:3000 and configure the Alertmanager datasource as follows:
      +
    • Go to Configuration -> Data Sources, and search for Prometheus Alertmanager
    • +
    • Fill in the URL to your server location followed by the port number used in the Alertmanager. Select Save & Test to test the connection
    • +
    +
  6. +
  7. To monitor the alerts, import the 8010 dashboard, which is used for Alertmanager. Make sure to select the Prometheus Alertmanager in the last column then select Import
  8. +
+

Integrate Alertmanager

+

A few more steps are required to allow the Prometheus server to talk to the Alertmanager and to configure rules for detection and alerts. Complete the integration as follows:

+
    +
  1. Update configuration - update the configuration file /etc/prometheus/prometheus.yml to add the following code: +
    prometheus.yml
    rule_files:
    +  - 'rules.yml'
    +
    +alerting:
    +  alertmanagers:
    +    - static_configs:
    +        - targets:
    +            - localhost:9093
    +
  2. +
  3. Create rules file - here you will define the rules for detection and alerts + Run the following command to create the rules file: +
    sudo nano /etc/prometheus/rules.yml
    +
    + If any of the conditions defined in the rules file are met, an alert will be triggered. The following sample rule checks for the node being down and triggers an email notification if an outage of more than five minutes is detected: +
    rules.yml
    groups:
    +  - name: alert_rules
    +    rules:
    +      - alert: InstanceDown
    +        expr: up == 0
    +        for: 5m
    +        labels:
    +          severity: critical
    +        annotations:
    +          summary: 'Instance [{{ $labels.instance }}] down'
    +          description: '[{{ $labels.instance }}] of job [{{ $labels.job }}] has been down for more than 5 minutes.'
    +
    + See Alerting Rules and additional alerts in the Prometheus documentation to learn more about defining and using alerting rules.
  4. +
  5. Update ownership of rules file - ensure user prometheus has access by running: +
    sudo chown prometheus:prometheus /etc/prometheus/rules.yml
    +
  6. +
  7. Check rules - ensure the rules defined in rules.yml are syntactically correct by running the following command: +
    sudo -u prometheus promtool check rules /etc/prometheus/rules.yml
    +
  8. +
  9. Restart Prometheus and Alertmanager +
    sudo systemctl restart prometheus && sudo systemctl restart alertmanager
    +
  10. +
+

Now you will receive an email alert if one of your rule triggering conditions is met.

+
+Updated prometheus.yml +
global:
+  scrape_interval: 15s
+  evaluation_interval: 15s
+
+rule_files:
+  - 'rules.yml'
+
+alerting:
+  alertmanagers:
+    - static_configs:
+        - targets:
+            - localhost:9093
+
+scrape_configs:
+  - job_name: 'prometheus'
+    scrape_interval: 5s
+    static_configs:
+      - targets: ['localhost:9090']
+  - job_name: 'substrate_node'
+    scrape_interval: 5s
+    static_configs:
+      - targets: ['localhost:9615']
+
+ +
+

Secure Your Validator

+

Validators in Polkadot's Proof of Stake network play a critical role in maintaining network integrity and security by keeping the network in consensus and verifying state transitions. To ensure optimal performance and minimize risks, validators must adhere to strict guidelines around security and reliable operations.

+

Key Management

+

Though they cannot transfer funds, session keys are essential for validators because they sign messages related to consensus and parachains. Securing session keys is crucial: if they are exploited or used across multiple nodes, you risk losing staked funds through slashing.

+

Given the current limitations in high-availability setups and the risks associated with double-signing, it’s recommended to run only a single validator instance. Keys should be securely managed, and processes automated to minimize human error.

+

There are two approaches for generating session keys:

+
    +
  1. +

    Generate and store in node - using the author.rotateKeys RPC call. For most users, generating keys directly within the client is recommended. You must submit a session certificate from your staking proxy to register new keys. See the How to Validate guide for instructions on setting keys

    +
  2. +
  3. +

    Generate outside node and insert - using the author.setKeys RPC call. This flexibility accommodates advanced security setups and should only be used by experienced validator operators

    +
  4. +
+

Signing Outside the Client

+

Polkadot plans to support external signing, allowing session keys to reside in secure environments like Hardware Security Modules (HSMs). However, these modules can sign any payload they receive, potentially enabling an attacker to perform slashable actions.

+

Secure-Validator Mode

+

Polkadot's Secure-Validator mode offers an extra layer of protection through strict filesystem, networking, and process sandboxing. This secure mode is activated by default if the machine meets the following requirements:

+
    +
  1. Linux (x86-64 architecture) - usually Intel or AMD
  2. +
  3. Enabled seccomp - this kernel feature facilitates a more secure approach for process management on Linux. Verify by running: +
    cat /boot/config-`uname -r` | grep CONFIG_SECCOMP=
    +
    + If seccomp is enabled, you should see output similar to the following: +
    CONFIG_SECCOMP=y
    +
  4. +
+
+

Note

+

Optionally, Linux 5.13 may also be used, as it provides access to even more strict filesystem protections.

+
+

Linux Best Practices

+

Follow these best practices to keep your validator secure:

+
    +
  • Use a non-root user for all operations
  • +
  • Regularly apply OS security patches
  • +
  • Enable and configure a firewall
  • +
  • Use key-based SSH authentication; deactivate password-based login
  • +
  • Regularly back up data and harden your SSH configuration. Visit this SSH guide for more details
  • +
+

Validator Best Practices

+

The following practices add a further layer of security and operational reliability:

+
    +
  • Only run the Polkadot binary, and only listen on the configured p2p port
  • +
  • Run on bare-metal machines, as opposed to virtual machines
  • +
  • Provisioning of the validator machine should be automated and defined in code which is kept in private version control, reviewed, audited, and tested
  • +
  • Generate and provide session keys in a secure way
  • +
  • Start Polkadot at boot and restart if stopped for any reason
  • +
  • Run Polkadot as a non-root user
  • +
  • Establish and maintain an on-call rotation for managing alerts
  • +
  • Establish and maintain a clear protocol with actions to perform for each level of each alert with an escalation policy
  • +
+

Additional Resources

+ +

For additional guidance, connect with other validators and the Polkadot engineering team in the Polkadot Validator Lounge on Element.

+
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/infrastructure/running-a-validator/operational-tasks/index.html b/infrastructure/running-a-validator/operational-tasks/index.html new file mode 100644 index 00000000..1fa244e1 --- /dev/null +++ b/infrastructure/running-a-validator/operational-tasks/index.html @@ -0,0 +1,4972 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Operational Tasks | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + + + +
+
+
+ + + + + + +
+
+
+ + + + + +
+
+
+ + + + + +
+
+
+ + +
+ +
+ + + + + + + + + + + +

Operational Tasks

+

Running a Polkadot validator node involves several key operational tasks to ensure secure and efficient participation in the network. In this section, you'll learn how to manage and maintain your validator node by monitoring its performance, conducting regular maintenance, and ensuring high availability through strategies like running a backup validator. You'll also find instructions on rotating your session keys to enhance security and minimize vulnerabilities. Mastering these tasks is essential for maintaining a reliable and trusted presence within your network.

+

In This Section

+

+

+

+

Additional Resources

+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/infrastructure/running-a-validator/operational-tasks/pause-validating/index.html b/infrastructure/running-a-validator/operational-tasks/pause-validating/index.html new file mode 100644 index 00000000..1356f412 --- /dev/null +++ b/infrastructure/running-a-validator/operational-tasks/pause-validating/index.html @@ -0,0 +1,4954 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Pause Validating | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + + + + +
+ +
+ + + + +

Pause Validating

+

Introduction

+

If you need to temporarily stop participating in Polkadot staking activities without fully unbonding your funds, chilling your account allows you to do so efficiently. Chilling removes your node from active validation or nomination in the next era while keeping your funds bonded, making it ideal for planned downtimes or temporary pauses.

+

This guide covers the steps for chilling as a validator or nominator, using the chill and chillOther extrinsics, and how these affect your staking status and nominations.

+

Chilling Your Node

+

If you need to temporarily step back from staking without unbonding your funds, you can "chill" your account. Chilling pauses your active staking participation, setting your account to inactive in the next era while keeping your funds bonded.

+

To chill your account, go to the Network > Staking > Account Actions page on Polkadot.js Apps, and select Stop. Alternatively, you can call the chill extrinsic in the Staking pallet.

+

Staking Election Timing Considerations

+

When a node actively participates in staking but then chills, it will continue contributing for the remainder of the current era. However, its eligibility for the next election depends on the chill status at the start of the new era:

+
    +
  • Chilled during previous era - will not participate in the current era election and will remain inactive until reactivated
  • +
  • Chilled during current era - will not be selected for the next era's election
  • +
  • Chilled after current era - may be selected if it was active during the previous era and is now chilled
  • +
+

Chilling as a Nominator

+

When you choose to chill as a nominator, your active nominations are reset. Upon re-entering the nominating process, you must reselect validators to support manually. Depending on preferences, these can be the same validators as before or a new set. Remember that your previous nominations won’t be saved or automatically reactivated after chilling.

+

While chilled, your nominator account remains bonded, preserving your staked funds without requiring a full unbonding process. When you’re ready to start nominating again, you can issue a new nomination call to activate your bond with a fresh set of validators. This process bypasses the need for re-bonding, allowing you to maintain your stake while adjusting your involvement in active staking.

+

Chilling as a Validator

+

When you chill as a validator, your active validator status is paused. Although your nominators remain bonded to you, the validator bond will no longer appear as an active choice for new or revised nominations until reactivated. Any existing nominators who take no action will still have their stake linked to the validator, meaning they don’t need to reselect the validator upon reactivation. However, if nominators adjust their stakes while the validator is chilled, they will not be able to nominate the chilled validator until it resumes activity.

+

Upon reactivating as a validator, you must also reconfigure your validator preferences, such as commission rate and other parameters. These can be set to match your previous configuration or updated as desired. This step is essential for rejoining the active validator set and regaining eligibility for nominations.

+

Chill Other

+

Historically, runtime constraints limited how many nominators and validators the staking system could support, creating a need for checks to keep its size manageable. One such check is the chillOther extrinsic, which allowed users to chill accounts that no longer met standards such as the minimum staking requirements set through on-chain governance.

+

This control mechanism included a ChillThreshold, which was structured to define how close to the maximum number of nominators or validators the staking system would be allowed to get before users could start chilling one another. With the passage of Referendum #90, the value for maxNominatorCount on Polkadot was set to None, effectively removing the limit on how many nominators and validators can participate. This means the ChillThreshold will never be met; thus, chillOther no longer has any effect.

+
diff --git a/infrastructure/running-a-validator/operational-tasks/upgrade-your-node/index.html b/infrastructure/running-a-validator/operational-tasks/upgrade-your-node/index.html
new file mode 100644
index 00000000..f13173d7
--- /dev/null
+++ b/infrastructure/running-a-validator/operational-tasks/upgrade-your-node/index.html

Upgrade a Validator Node

+

Introduction

+

Upgrading a Polkadot validator node is essential for staying current with network updates and maintaining optimal performance. This guide covers routine and extended maintenance scenarios, including software upgrades and major server changes. By following these steps, you can manage session keys and transition smoothly between servers without risking downtime, slashing, or network disruptions. The process requires strategic planning, especially if you need to perform long-lead maintenance, to ensure your validator remains active and compliant.

+

This guide shows validators how to seamlessly swap in a backup server for an active validator so that maintenance operations can be performed. The process can take several hours, so ensure you understand the instructions first and plan accordingly.

+

Prerequisites

+

Before beginning the upgrade process for your validator node, ensure the following:

+
    +
  • You have a fully functional validator setup with all required binaries installed. See Set Up a Validator and Validator Requirements for additional guidance
  • +
  • Your VPS infrastructure has enough capacity to run a secondary validator instance temporarily for the upgrade process
  • +
+

Session Keys

+

Session keys are used to sign validator operations and establish a connection between your validator node and your staking proxy account. These keys are stored in the client, and any change to them requires a waiting period. Specifically, if you modify your session keys, the change will take effect only after the current session is completed and two additional sessions have passed.

+

It is crucial to keep this delayed effect in mind when planning upgrades so that your validator continues to function correctly and avoids interruptions. To learn more about session keys and their importance, visit the Keys section.
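For operators who script key management, the sketch below shows one way to rotate and register session keys with @polkadot/api. The local ws://localhost:9944 endpoint, the //PLACEHOLDER URI, and the empty 0x proof argument are assumptions; rotateKeys must be called against the node whose keystore should hold the new keys.

const { ApiPromise, WsProvider, Keyring } = require('@polkadot/api');

async function main() {
  // Connect to the validator node's own RPC so the keys are generated
  // in that node's keystore
  const api = await ApiPromise.create({ provider: new WsProvider('ws://localhost:9944') });
  const stakingProxy = new Keyring({ type: 'sr25519' }).addFromUri('//PLACEHOLDER');

  // author.rotateKeys creates fresh session keys and returns the public
  // keys as a single hex-encoded blob
  const newKeys = await api.rpc.author.rotateKeys();

  // session.setKeys links the keys to your stash; the change only takes
  // effect after the current session completes plus two full sessions
  await api.tx.session.setKeys(newKeys, '0x').signAndSend(stakingProxy);

  await api.disconnect();
}

main().catch(console.error);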

+

Keystore

+

Your validator server's keystore folder holds the private keys needed for signing network-level transactions. It is important not to duplicate or transfer this folder between validator instances. Doing so could result in multiple validators signing with the duplicate keys, leading to severe consequences such as equivocation slashing. Instead, always generate new session keys for each validator instance.

+

The default path to the keystore is as follows:

+
/home/polkadot/.local/share/polkadot/chains/<chain>/keystore
+
+

Taking care to manage your keys securely ensures that your validator operates safely and without the risk of slashing penalties.

+

Upgrade Using Backup Validator

+

The following instructions outline how to temporarily switch between two validator nodes. The original active validator is referred to as Validator A and the backup node used for maintenance purposes as Validator B.

+

Session N

+
    +
  1. Start Validator B - launch a secondary node and wait until it is fully synced with the network. Once synced, start it with the --validator flag. This node will now act as Validator B
  2. +
  3. Generate session keys - create new session keys specifically for Validator B
  4. +
  5. Submit the set_key extrinsic - use your staking proxy account to submit a set_key extrinsic, linking the session keys for Validator B to your staking setup
  6. +
  7. Record the session - make a note of the session in which you executed this extrinsic
  8. +
  9. Wait for session changes - allow the current session to end and then wait for two additional full sessions for the new keys to take effect
  10. +
+
+

Keep Validator A running

+

It is crucial to keep Validator A operational during this entire waiting period. Since set_key does not take effect immediately, turning off Validator A too early may result in chilling or even slashing.

+
+

Session N+3

+

At this stage, Validator B becomes your active validator. You can now safely perform any maintenance tasks on Validator A.

+

Complete the following steps when you are ready to bring Validator A back online:

+
    +
  1. Start Validator A - launch Validator A, sync the blockchain database, and ensure it is running with the --validator flag
  2. +
  3. Generate new session keys for Validator A - create fresh session keys for Validator A
  4. +
  5. Submit the set_key extrinsic - using your staking proxy account, submit a set_key extrinsic with the new Validator A session keys
  6. +
  7. Record the session - again, make a note of the session in which you executed this extrinsic
  8. +
+

Keep Validator B active until the session in which you executed the set_key extrinsic completes, plus two additional full sessions. Once Validator A has successfully taken over, you can safely stop Validator B. This process helps ensure a smooth handoff between nodes and minimizes the risk of downtime or penalties. Verify the transition by checking for finalized blocks in the new session. The logs should indicate the successful change, similar to the example below:

+
2019-10-28 21:44:13 Applying authority set change scheduled at block #450092
2019-10-28 21:44:13 Applying GRANDPA set change to new set with 20 authorities
+
diff --git a/infrastructure/running-a-validator/requirements/index.html b/infrastructure/running-a-validator/requirements/index.html
new file mode 100644
index 00000000..ff15f35b
--- /dev/null
+++ b/infrastructure/running-a-validator/requirements/index.html

Validator Requirements

+

Introduction

+

Running a validator in the Polkadot ecosystem is essential for maintaining network security and decentralization. Validators are responsible for validating transactions and adding new blocks to the chain, ensuring the system operates smoothly. In return for their services, validators earn rewards. However, the role comes with inherent risks, such as slashing penalties for misbehavior or technical failures. If you’re new to validation, starting on Kusama provides a lower-stakes environment to gain valuable experience before progressing to the Polkadot network.

+

This guide covers everything you need to know about becoming a validator, including system requirements, staking prerequisites, and infrastructure setup. Whether you’re deploying on a VPS or running your node on custom hardware, you’ll learn how to optimize your validator for performance and security, ensuring compliance with network standards while minimizing risks.

+

Prerequisites

+

Running a validator requires solid system administration skills and a secure, well-maintained infrastructure. Below are the primary requirements you need to be aware of before getting started:

+
    +
  • System administration expertise - handling technical anomalies and maintaining node infrastructure is critical. Validators must be able to troubleshoot and optimize their setup
  • +
  • Security - ensure your setup follows best practices for securing your node. Refer to the Secure Your Validator section to learn about important security measures
  • +
  • Network choice - start with Kusama to gain experience. Look for "Adjustments for Kusama" throughout these guides for tips on adapting the provided instructions for the Kusama network
  • +
  • Staking requirements - a minimum amount of native token (KSM or DOT) is required to be elected into the validator set. The required stake can come from your own holdings or from nominators
  • +
  • Risk of slashing - any DOT you stake is at risk if your setup fails or your validator misbehaves. If you’re unsure of your ability to maintain a reliable validator, consider nominating your DOT to a trusted validator
  • +
+

Technical Requirements

+

Running a Polkadot validator node on Linux is the most common approach, especially for beginners. While you can use any VPS provider that meets the technical specifications, this guide uses Ubuntu 22.04. However, the steps should be adaptable to other Linux distributions.

+

Reference Hardware

+

Polkadot validators rely on high-performance hardware to process blocks efficiently. The following specifications are based on benchmarking using two VM instances:

+
    +
  • Google Cloud Platform (GCP) - n2-standard-8 instance
  • +
  • Amazon Web Services (AWS) - c6i.4xlarge instance
  • +
+

The recommended minimum hardware requirements to ensure a fully functional and performant validator are as follows:

+
    +
  • +

    CPU:

    +
      +
    • x86-64 compatible
    • +
    • Eight physical cores @ 3.4 GHz
        +
      • Per Referendum #1051, this will be a hard requirement as of January 2025
      • +
      +
    • +
    • Processor:
        +
      • Intel - Ice Lake or newer (Xeon or Core series)
      • +
      • AMD - Zen3 or newer (EPYC or Ryzen)
      • +
      +
    • +
    • Simultaneous multithreading disabled:
        +
      • Intel - Hyper-Threading
      • +
      • AMD - SMT
      • +
      +
    • +
    • Single-threaded performance is prioritized over a higher core count
    • +
    +
  • +
  • +

    Storage:

    +
      +
    • NVMe SSD - at least 1 TB for blockchain data (prioritize latency rather than throughput)
    • +
    • Storage requirements will increase as the chain grows. For current estimates, see the current chain snapshot
    • +
    +
  • +
  • +

    Memory:

    +
      +
    • 32 GB DDR4 ECC
    • +
    +
  • +
  • +

    System:

    +
      +
    • Linux Kernel 5.16 or newer
    • +
    +
  • +
  • +

    Network:

    +
      +
    • Symmetric networking speed of 500 Mbit/s is required to handle large numbers of parachains and ensure congestion control during peak times
    • +
    +
  • +
+

While the hardware specs above are best practices and not strict requirements, subpar hardware may lead to performance issues and increase the risk of slashing.

+

VPS Provider List

+

When selecting a VPS provider for your validator node, prioritize reliability, consistent performance, and adherence to the specific hardware requirements set for Polkadot validators. The following server types have been tested and showed acceptable performance in benchmark tests. However, this is not an endorsement and actual performance may vary depending on your workload and VPS provider.

+
    +
  • Google Cloud Platform (GCP) - c2 and c2d machine families offer high-performance configurations suitable for validators
  • +
  • Amazon Web Services (AWS) - c6id machine family provides strong performance, particularly for I/O-intensive workloads
  • +
  • OVH - can be a budget-friendly solution if it meets your minimum hardware specifications
  • +
  • Digital Ocean - popular among developers, Digital Ocean's premium droplets offer configurations suitable for medium to high-intensity workloads
  • +
  • Vultr - offers flexibility with plans that may meet validator requirements, especially for high-bandwidth needs
  • +
  • Linode - provides detailed documentation, which can be helpful for setup
  • +
  • Scaleway - offers high-performance cloud instances that can be suitable for validator nodes
  • +
  • OnFinality - specialized in blockchain infrastructure, OnFinality provides validator-specific support and configurations
  • +
+
+Acceptable use policies +

Different VPS providers have varying acceptable use policies, and not all allow cryptocurrency-related activities.

+

For example, Digital Ocean requires explicit permission to use servers for cryptocurrency mining and defines unauthorized mining as network abuse in their acceptable use policy.

+

Review the terms for your VPS provider to avoid account suspension or server shutdown due to policy violations.

+
+

Minimum Bond Requirement

+

Before bonding DOT, ensure you meet the minimum bond requirement to start a validator instance. The minimum bond is the least DOT you need to stake to enter the validator set. To become eligible for rewards, your validator node must be nominated by enough staked tokens.

+

For example, on November 19, 2024, the minimum stake backing a validator in Polkadot's era 1632 was 1,159,434.248 DOT. You can check the current minimum stake required using these tools:

+ +
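You can also read the on-chain bond floors directly. Note these are only the minimum bonds to register intent; the effective stake needed to be elected is typically far higher, as the era 1632 example above shows. This sketch assumes @polkadot/api and a public endpoint:

const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({ provider: new WsProvider('wss://rpc.polkadot.io') });

  const minValidatorBond = await api.query.staking.minValidatorBond();
  const minNominatorBond = await api.query.staking.minNominatorBond();
  console.log(`minValidatorBond: ${minValidatorBond.toHuman()}`);
  console.log(`minNominatorBond: ${minNominatorBond.toHuman()}`);

  await api.disconnect();
}

main().catch(console.error);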
diff --git a/infrastructure/staking-mechanics/index.html b/infrastructure/staking-mechanics/index.html
new file mode 100644
index 00000000..9955b45b
--- /dev/null
+++ b/infrastructure/staking-mechanics/index.html

Staking Mechanics

+

Gain a deep understanding of the staking mechanics in Polkadot, with a focus on how they impact validators. In this section, you'll explore key concepts such as offenses, slashing, and reward payouts, and learn how these mechanisms influence the behavior and performance of validators within the network. Understanding these elements is crucial for optimizing your validator's participation and ensuring alignment with Polkadot's governance and security protocols.

+

In This Section

+

+

+

+

Additional Resources

diff --git a/infrastructure/staking-mechanics/offenses-and-slashes/index.html b/infrastructure/staking-mechanics/offenses-and-slashes/index.html
new file mode 100644
index 00000000..38713c9a
--- /dev/null
+++ b/infrastructure/staking-mechanics/offenses-and-slashes/index.html

Offenses and Slashes

+

Introduction

+

In Polkadot's Nominated Proof of Stake (NPoS) system, validator misconduct is deterred through a combination of slashing, disabling, and reputation penalties. Validators and nominators who stake tokens face consequences for validator misbehavior, which range from token slashes to restrictions on network participation.

+

This page outlines the types of offenses recognized by Polkadot, including block equivocations and invalid votes, as well as the corresponding penalties. While some parachains may implement additional custom slashing mechanisms, this guide focuses on the offenses tied to staking within the Polkadot ecosystem.

+

Offenses

+

Polkadot is a public permissionless network. As such, it has a mechanism to disincentivize offenses and incentivize good behavior. You can review the parachain protocol to better understand the terminology used to describe offenses. Polkadot validator offenses fall into two categories: invalid votes and equivocations.

+

Invalid Votes

+

A validator will be penalized for inappropriate voting activity during the block inclusion and approval processes. The offenses related to invalid voting are as follows:

+
    +
  • Backing an invalid block - a para-validator backs an invalid block for inclusion in a fork of the relay chain
  • +
  • ForInvalid vote - when acting as a secondary checker, the validator votes in favor of an invalid block
  • +
  • AgainstValid vote - when acting as a secondary checker, the validator votes against a valid block. This type of vote wastes network resources required to resolve the disparate votes and resulting dispute
  • +
+

Equivocations

+

Equivocation occurs when a validator produces statements that conflict with each other when producing blocks or voting. Unintentional equivocations usually occur when duplicate signing keys reside on the validator host. If keys are never duplicated, the probability of an honest equivocation slash decreases to near zero. The equivocation-related offenses are as follows:

+
    +
  • Equivocation - the validator produces two or more of the same block or vote
      +
    • GRANDPA and BEEFY equivocation - the validator signs two or more votes in the same round on different chains
    • +
    • BABE equivocation - the validator produces two or more blocks on the relay chain in the same time slot
    • +
    +
  • +
  • Double seconded equivocation - the validator attempts to second, or back, more than one block in the same round
  • +
  • Seconded and valid equivocation - the validator seconds, or backs, a block and then attempts to hide their role as the responsible backer by later placing a standard validation vote
  • +
+

Penalties

+

On Polkadot, offenses to the network incur different penalties depending on severity. There are three main penalties: slashing, disabling, and reputation changes.

+

Slashing

+

Validators engaging in malicious behavior in the network may be subject to slashing if they commit a qualifying offense. When a validator is slashed, they and their nominators lose a percentage of their staked DOT or KSM, from as little as 0.01% up to 100% based on the severity of the offense. Nominators are evaluated for slashing against their active nominations at any given time. Validator nodes are evaluated as discrete entities, meaning an operator can't attempt to mitigate the offense on another node they operate in order to avoid a slash.

+

Any slashed DOT or KSM will be added to the Treasury rather than burned or distributed as rewards. Moving slashed funds to the Treasury allows tokens to be quickly moved away from malicious validators while maintaining the ability to revert faulty slashes when needed.

+
+

Multiple active nominations

+

A nominator with a very large bond may nominate several validators in a single era. In this case, a slash is proportionate to the amount staked to the offending validator. Stake allocation and validator activation are controlled by the Phragmén algorithm.

+
+

A validator slash creates an unapplied state transition. You can view pending slashes on Polkadot.js Apps. The UI will display the slash per validator, the affected nominators, and the slash amounts. The unapplied state includes a 27-day grace period during which a governance proposal can be made to reverse the slash. Once this grace period expires, the slash is applied.
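Pending slashes can also be read from storage. A minimal sketch, assuming @polkadot/api, a public endpoint, and an example era number:

const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({ provider: new WsProvider('wss://rpc.polkadot.io') });

  // Example era; unappliedSlashes returns the slashes recorded for that
  // era that have not yet been applied
  const era = 1632;
  const pending = await api.query.staking.unappliedSlashes(era);
  console.log(pending.toHuman());

  await api.disconnect();
}

main().catch(console.error);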

+

Equivocation Slash

+

The Web3 Foundation's Slashing mechanisms page provides guidelines for evaluating the security threat level of different offenses and determining penalties proportionate to the threat level of the offense. Offenses requiring coordination between validators or extensive computational costs to the system will typically call for harsher penalties than those more likely to be unintentional than malicious. A description of potential offenses for each threat level and the corresponding penalties is as follows:

+
    +
  • Level 1 - honest misconduct such as isolated cases of unresponsiveness
      +
    • Penalty - validator can be kicked out or slashed up to 0.1% of stake in the validator slot
    • +
    +
  • +
  • Level 2 - misconduct that can occur honestly but is a sign of bad practices. Examples include repeated cases of unresponsiveness and isolated cases of equivocation
      +
    • Penalty - slash of up to 1% of stake in the validator slot
    • +
    +
  • +
  • Level 3 - misconduct that is likely intentional but of limited effect on the performance or security of the network. This level will typically include signs of coordination between validators. Examples include repeated cases of equivocation or isolated cases of unjustified voting on GRANDPA
      +
    • Penalty - reduction in networking reputation metrics, slash of up to 10% of stake in the validator slot
    • +
    +
  • +
  • Level 4 - misconduct that poses severe security or monetary risk to the system or mass collusion. Examples include signs of extensive coordination, creating a serious security risk to the system, or forcing the system to use extensive resources to counter the misconduct
      +
    • Penalty - slash of up to 100% of stake in the validator slot
    • +
    +
  • +
+

See the next section to understand how slash amounts for equivocations are calculated. If you want to know more details about slashing, please look at the research page on Slashing mechanisms.

+

Slash Calculation for Equivocation

+

The slashing penalty for GRANDPA, BABE, and BEEFY equivocations is calculated using the formula below, where x represents the number of offenders and n is the total number of validators in the active set:

+
min((3 * x / n)^2, 1)
+
+
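The formula is easy to check directly. The short JavaScript sketch below reproduces the slash fractions used in the scenarios that follow:

// Slash fraction for x offenders out of n validators in the active set
function slashFraction(x, n) {
  return Math.min(Math.pow((3 * x) / n, 2), 1);
}

console.log(slashFraction(1, 100));  // 0.0009 -> 0.09% slash
console.log(slashFraction(5, 100));  // 0.0225 -> 2.25% slash
console.log(slashFraction(20, 100)); // 0.36   -> 36% slash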

The following scenarios demonstrate how, under this formula, slash percentages can increase exponentially with the number of offenders relative to the size of the validator pool:

+
    +
  • +

    Minor offense - assume 1 validator out of a 100 validator active set equivocates in a slot. A single validator committing an isolated offense is most likely a mistake rather than a malicious attack on the network. This offense results in a 0.09% slash to the stake in the validator slot

    +
    flowchart LR
    +N["Total Validators = 100"]
    +X["Offenders = 1"]
    +F["min(3 * 1 / 100)^2, 1) = 0.0009"]
    +G["0.09% slash of stake"]
    +
    +N --> F
    +X --> F
    +F --> G
    +
  • +
  • +

    Moderate offense - assume 5 validators out of a 100 validator active set equivocate in a slot. This is a slightly more serious event as there may be some element of coordination involved. This offense results in a 2.25% slash to the stake in the validator slot

    +
    flowchart LR
    +N["Total Validators = 100"]
    +X["Offenders = 5"]
    +F["min((3 * 5 / 100)^2, 1) = 0.0225"]
    +G["2.25% slash of stake"]
    +
    +N --> F
    +X --> F
    +F --> G
    +
  • +
  • +

    Major offense - assume 20 validators out of a 100 validator active set equivocate in a slot. This is a major security threat as it possibly represents a coordinated attack on the network. This offense results in a 36% slash, and all slashed validators will also be chilled +

    flowchart LR
    +N["Total Validators = 100"]
    +X["Offenders = 20"]
    +F["min((3 * 20 / 100)^2, 1) = 0.36"]
    +G["36% slash of stake"]
    +
    +N --> F
    +X --> F
    +F --> G

    +
  • +
+

The examples above show the risk of nominating or running many validators in the active set. While rewards grow linearly (two validators will earn you approximately twice as many staking rewards as one), slashing grows exponentially. Going from a single validator equivocating to two validators equivocating results in a slash four times as large.

+

Validators may run their nodes on multiple machines to ensure they can still perform validation work if one of their nodes goes down. Still, validator operators should be cautious when setting these up. Equivocation is possible if they don't coordinate well in managing signing machines.

+

Best Practices to Avoid Slashing

+

Node operators are advised to follow these practices to ensure they obtain pristine binaries or source code and to keep their node secure:

+
    +
  • Always download either source files or binaries from the official Parity repository
  • +
  • Verify the hash of downloaded files
  • +
  • Use the W3F secure validator setup or adhere to its principles
  • +
  • Ensure essential security items are checked, use a firewall, manage user access, use SSH certificates
  • +
  • Avoid using your server as a general-purpose system. Hosting a validator on your workstation or one that hosts other services increases the risk of maleficence
  • +
  • Avoid cloning servers (copying all contents) when migrating to new hardware. If an image is needed, create it before generating keys
  • +
  • High Availability (HA) systems are generally not recommended as equivocation may occur if concurrent operations happen—such as when a failed server restarts or two servers are falsely online simultaneously
  • +
  • Copying the keystore folder when moving a database between instances can cause equivocation. Even brief use of duplicated keystores can result in slashing
  • +
+

Below are some examples of small equivocations that happened in the past:

Network | Era | Event Type | Details | Action Taken
Polkadot | 774 | Small Equivocation | The validator migrated servers and cloned the keystore folder. The on-chain event can be viewed on Subscan. | The validator didn't submit a request for the slash to be canceled.
Kusama | 3329 | Small Equivocation | The validator operated a test machine with cloned keys. The test machine was online at the same time as the primary, which resulted in a slash. Details can be found on Polkassembly. | The validator requested a slash cancellation, but the council declined.
Kusama | 3995 | Small Equivocation | The validator noticed several errors, after which the client crashed, and a slash was applied. The validator recorded all events and opened GitHub issues to allow for technical opinions to be shared. Details can be found on Polkassembly. | The validator requested to cancel the slash. The council approved the request as they believed the error wasn't operator-related.
+

Slashing Across Eras

+

There are three main difficulties to account for with slashing in NPoS:

+
    +
  • A nominator can nominate multiple validators and be slashed as a result of actions taken by any of them
  • +
  • Until slashed, the stake is reused from era to era
  • +
  • Slashable offenses can be found after the fact and out of order
  • +
+

To balance this, the system applies only the maximum slash a participant can receive in a given time period rather than the sum. This ensures protection from excessive slashing.

+

Disabling

+

The disabling mechanism is triggered when validators commit serious infractions, such as backing invalid blocks or engaging in equivocations. Disabling stops validators from performing specific actions after they have committed an offense. Disabling is further divided into:

+
    +
  • On-chain disabling - lasts for a whole era and stops validators from authoring blocks, backing, and initiating a dispute
  • +
  • Off-chain disabling - lasts for a session, is caused by losing a dispute, and stops validators from initiating a dispute
  • +
+

Off-chain disabling is always a lower priority than on-chain disabling. Off-chain disabling prioritizes disabling first backers and then approval checkers.

+
+

Note

+

The material in this guide reflects the changes introduced in Stage 2. For more details, refer to the State of Disabling issue on GitHub.

+
+

Reputation Changes

+

Some minor offenses, such as spamming, are only punished by networking reputation changes. Validators use a reputation metric when choosing which peers to connect with. The system adds reputation if a peer provides valuable data and behaves appropriately. If they provide faulty or spam data, the system reduces their reputation. If a validator loses enough reputation, their peers will temporarily close their channels to them. This helps in fighting against Denial of Service (DoS) attacks. Performing validator tasks under reduced reputation will be harder, resulting in lower validator rewards.

+

Penalties by Offense

+

Below, you can find a summary of penalties for specific offenses:

Offense | Slash (%) | On-Chain Disabling | Off-Chain Disabling | Reputational Changes
Backing Invalid | 100% | Yes | Yes (High Priority) | No
ForInvalid Vote | - | No | Yes (Mid Priority) | No
AgainstValid Vote | - | No | Yes (Low Priority) | No
GRANDPA / BABE / BEEFY Equivocations | 0.01-100% | Yes | No | No
Seconded + Valid Equivocation | - | No | No | No
Double Seconded Equivocation | - | No | No | Yes
+
diff --git a/infrastructure/staking-mechanics/rewards-payout/index.html b/infrastructure/staking-mechanics/rewards-payout/index.html
new file mode 100644
index 00000000..67cdad99
--- /dev/null
+++ b/infrastructure/staking-mechanics/rewards-payout/index.html

Rewards Payout

+

Introduction

+

Understanding how rewards are distributed to validators and nominators is essential for network participants. In Polkadot and Kusama, validators earn rewards based on their era points, which are accrued through actions like block production and parachain validation.

+

This guide explains the payout scheme, factors influencing rewards, and how multiple validators affect returns. Validators can also share rewards with nominators, who contribute by staking behind them. By following the payout mechanics, validators can optimize their earnings and better engage with their nominators.

+

Era Points

+

The Polkadot ecosystem measures its reward cycles in a unit called an era. Kusama eras are approximately 6 hours long, and Polkadot eras are 24 hours. At the end of each era, validators are paid proportionally to the amount of era points they have collected. Era points are reward points earned for payable actions like:

+
    +
  • Issuing validity statements for parachain blocks
  • +
  • Producing a non-uncle block in the relay chain
  • +
  • Producing a reference to a previously unreferenced uncle block
  • +
  • Producing a referenced uncle block
  • +
+
+

Note

+

An uncle block is a relay chain block that is valid in every regard but has failed to become canonical. This can happen when two or more validators are block producers in a single slot, and the block produced by one validator reaches the next block producer before the others. The lagging blocks are called uncle blocks.

+
+

Payments occur at the end of every era.
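Era points are visible on-chain. The sketch below, assuming @polkadot/api and a public endpoint, reads the points accumulated so far in the active era:

const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({ provider: new WsProvider('wss://rpc.polkadot.io') });

  // activeEra is an Option; unwrap assumes an era is currently active
  const activeEra = (await api.query.staking.activeEra()).unwrap().index;
  const points = await api.query.staking.erasRewardPoints(activeEra);

  console.log(`era ${activeEra}: ${points.total.toNumber()} total points`);
  // points.individual maps each validator account to its era points

  await api.disconnect();
}

main().catch(console.error);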

+

Reward Variance

+

Rewards in Polkadot and Kusama staking systems can fluctuate due to differences in era points earned by para-validators and non-para-validators. Para-validators generally contribute more to the overall reward distribution due to their role in validating parachain blocks, thus influencing the variance in staking rewards.

+

To illustrate this relationship:

+
    +
  • Para-validator era points tend to have a higher impact on the expected value of staking rewards compared to non-para-validator points
  • +
  • The variance in staking rewards increases as the total number of validators grows relative to the number of para-validators
  • +
  • In simpler terms, when more validators are added to the active set without increasing the para-validator pool, the disparity in rewards between validators becomes more pronounced
  • +
+

However, despite this increased variance, rewards tend to even out over time due to the continuous rotation of para-validators across eras. The network's design ensures that over multiple eras, each validator has an equal opportunity to participate in para-validation, eventually leading to a balanced distribution of rewards.

+
+Probability in Staking Rewards +

This should only serve as a high-level overview of the probabilistic nature of staking rewards.

+

Let:

+
    +
  • pe = para-validator era points
  • +
  • ne = non-para-validator era points
  • +
  • EV = expected value of staking rewards
  • +
+

Then, EV(pe) has more influence on the EV than EV(ne).

+

Since EV(pe) has a more weighted probability on the EV, the increase in variance against the EV becomes apparent between the different validator pools (aka. validators in the active set and the ones chosen to para-validate).

+

Also, let:

+
    +
  • v = the variance of staking rewards
  • +
  • p = number of para-validators
  • +
  • w = number of validators in the active set
  • +
  • e = era
  • +
+

Then, v ↑ if w ↑, as this reduces p : w, with respect to e.

+

Increased v is expected, and initially keeping p ↓ using the same para-validator set for all parachains ensures availability and voting. In addition, despite v ↑ on an e to e basis, over time, the amount of rewards each validator receives will equal out based on the continuous selection of para-validators.

+

There are plans to scale the active para-validation set in the future.

+
+

Payout Scheme

+

Validator rewards are distributed equally among all validators in the active set, regardless of the total stake behind each validator. However, individual payouts may differ based on the number of era points a validator has earned. Although factors like network connectivity can affect era points, well-performing validators should accumulate similar totals over time.

+

Validators can also receive tips from users, which incentivize them to include certain transactions in their blocks. Validators retain 100% of these tips.

+

Rewards are paid out in the network's native token (DOT for Polkadot and KSM for Kusama).

+

The following example illustrates a four-member validator set, including each validator's name, the amount they have staked, and how the payout of rewards is divided. This scenario assumes all validators earned the same amount of era points and no one received tips:

+
%%Payout, 4 val set, A-D are validators/stakes, E is payout%%
+
+block-beta
+    columns 1
+  block
+    A["Alice (18 DOT)"]
+    B["Bob (9 DOT)"]
+    C["Carol (8 DOT)"]
+    D["Dave (7 DOT)"]
+  end
+    space
+    E["Payout (8 DOT total)"]:1
+    E --"2 DOT"--> A
+    E --"2 DOT"--> B
+    E --"2 DOT"--> C
+    E --"2 DOT"--> D 
+

Note that this is different than most other Proof of Stake (PoS) systems. As long as a validator is in the validator set, it will receive the same block reward as every other validator. Validator Alice, who had 18 DOT staked, received the same 2 DOT reward in this era as Dave, who had only 7 DOT staked.

+

Running Multiple Validators

+

Running multiple validators can offer a more favorable risk/reward ratio compared to running a single one. If you have sufficient DOT or nominators staking on your validators, maintaining multiple validators within the active set can yield higher rewards.

+

In the preceding section, with 18 DOT staked and no nominators, Alice earned 2 DOT in one era. This example uses DOT, but the same principles apply for KSM on the Kusama network. By managing stake across multiple validators, you can potentially increase overall returns. Recall the set of validators from the preceding section:

+
%%Payout, 4 val set, A-D are validators/stakes, E is payout%%
+
+block-beta
+    columns 1
+  block
+    A["Alice (18 DOT)"]
+    B["Bob (9 DOT)"]
+    C["Carol (8 DOT)"]
+    D["Dave (7 DOT)"]
+  end
+    space
+    E["Payout (8 DOT total)"]:1
+    E --"2 DOT"--> A
+    E --"2 DOT"--> B
+    E --"2 DOT"--> C
+    E --"2 DOT"--> D 
+

Now, assume Alice decides to split their stake and run two validators, each with a nine DOT stake. This validator set only has four spots and priority is given to validators with a larger stake. In this example, Dave has the smallest stake and loses his spot in the validator set. Now, Alice will earn two shares of the total payout each era as illustrated below:

+
%%Payout, 4 val set, A-D are validators/stakes, E is payout%%
+
+block-beta
+    columns 1
+  block
+    A["Alice (9 DOT)"]
+    F["Alice (9 DOT)"]
+    B["Bob (9 DOT)"]
+    C["Carol (8 DOT)"]
+  end
+    space
+    E["Payout (8 DOT total)"]:1
+    E --"2 DOT"--> A
+    E --"2 DOT"--> B
+    E --"2 DOT"--> C
+    E --"2 DOT"--> F 
+

With enough stake, you could run more than two validators. However, each validator must have enough stake behind it to maintain a spot in the validator set.

+

Nominators and Validator Payments

+

A nominator's stake allows them to vote for validators and earn a share of the rewards without managing a validator node. Although staking rewards depend on validator activity during an era, validators themselves never control or own nominator rewards. To trigger payouts, anyone can call the staking.payoutStakers or staking.payoutStakersByPage methods, which mint and distribute rewards directly to the recipients. This trustless process ensures nominators receive their earned rewards.
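A minimal payout sketch with @polkadot/api is shown below; the stash address, era number, endpoint, and account URI are placeholders. Any funded account may submit the call:

const { ApiPromise, WsProvider, Keyring } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({ provider: new WsProvider('wss://rpc.polkadot.io') });
  const account = new Keyring({ type: 'sr25519' }).addFromUri('//PLACEHOLDER');

  // Rewards for this validator and era are minted directly to the
  // validator and its nominators when the call succeeds
  const validatorStash = '5VALIDATOR_STASH';
  const era = 1632;
  await api.tx.staking.payoutStakers(validatorStash, era).signAndSend(account);

  await api.disconnect();
}

main().catch(console.error);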

+

Validators set a commission rate as a percentage of the block reward, affecting how rewards are shared with nominators. A 0% commission means the validator keeps only rewards from their self-stake, while a 100% commission means they retain all rewards, leaving none for nominators.

+

The following examples model splitting validator payments between nominator and validator using various commission percentages. For simplicity, these examples assume a Polkadot-SDK based relay chain that uses DOT as a native token and a single nominator per validator. Calculations of KSM reward payouts for Kusama follow the same formula.

+

Start with the original validator set from the previous section:

+
block-beta
+    columns 1
+  block:e
+    A["Alice (18 DOT)"]
+    B["Bob (9 DOT)"]
+    C["Carol (8 DOT)"]
+    D["Dave (7 DOT)"]
+  end
+    space
+    E["Payout (8 DOT total)"]:1
+    E --"2 DOT"--> A
+    E --"2 DOT"--> B
+    E --"2 DOT"--> C
+    E --"2 DOT"--> D 
+

The preceding diagram shows each validator receiving a 2 DOT payout, but doesn't account for sharing rewards with nominators. The following diagram shows what nominator payout might look like for validator Alice. Alice has a 20% commission rate and holds 50% of the stake for their validator:

+

+flowchart TD
+    A["Gross Rewards = 2 DOT"]
+    E["Commission = 20%"]
+    F["Alice Validator Payment = 0.4 DOT"]
+    G["Total Stake Rewards = 1.6 DOT"]
+    B["Alice Validator Stake = 18 DOT"]
+    C["9 DOT Alice (50%)"]
+    H["Alice Stake Reward = 0.8 DOT"]
+    I["Total Alice Validator Reward = 1.2 DOT"]
+    D["9 DOT Nominator (50%)"]
+    J["Total Nominator Reward = 0.8 DOT"]
+
+    A --> E
+    E --(2 x 0.20)--> F
+    F --(2 - 0.4)--> G
+    B --> C
+    B --> D
+    C --(1.6 x 0.50)--> H
+    H --(0.4 + 0.8)--> I
+    D --(1.60 x 0.50)--> J
+

Notice the validator commission rate is applied against the gross amount of rewards for the era. The validator commission is subtracted from the total rewards. After the commission is paid to the validator, the remaining amount is split among stake owners according to their percentage of the total stake. A validator's total rewards for an era include their commission plus their piece of the stake rewards.
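The arithmetic can be expressed as a small helper. The plain JavaScript sketch below reproduces Alice's numbers from the diagram above:

// gross: era reward for the slot (DOT)
// commission: validator commission as a fraction (0.20 = 20%)
// validatorShare: validator's fraction of the slot's total stake
function splitRewards(gross, commission, validatorShare) {
  const commissionPayment = gross * commission;    // paid to the validator first
  const stakeRewards = gross - commissionPayment;  // split pro rata by stake
  const validatorStakeReward = stakeRewards * validatorShare;
  return {
    validatorTotal: commissionPayment + validatorStakeReward,
    nominatorTotal: stakeRewards - validatorStakeReward,
  };
}

// Alice: 2 DOT gross, 20% commission, 50% of stake
console.log(splitRewards(2, 0.2, 0.5)); // { validatorTotal: 1.2, nominatorTotal: 0.8 }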

+

Now, consider a different scenario for validator Bob where the commission rate is 40%, and Bob holds 33% of the stake for their validator:

+

+flowchart TD
+    A["Gross Rewards = 2 DOT"]
+    E["Commission = 40%"]
+    F["Bob Validator Payment = 0.8 DOT"]
+    G["Total Stake Rewards = 1.2 DOT"]
+    B["Bob Validator Stake = 9 DOT"]
+    C["3 DOT Bob (33%)"]
+    H["Bob Stake Reward = 0.4 DOT"]
+    I["Total Bob Validator Reward = 1.2 DOT"]
+    D["6 DOT Nominator (67%)"]
+    J["Total Nominator Reward = 0.8 DOT"]
+
+    A --> E
+    E --(2 x 0.4)--> F
+    F --(2 - 0.8)--> G
+    B --> C
+    B --> D
+    C --(1.2 x 0.33)--> H
+    H --(0.8 + 0.4)--> I
+    D --(1.2 x 0.67)--> J
+

Bob holds a smaller percentage of their node's total stake, making their stake reward smaller than Alice's. In this scenario, Bob makes up the difference by charging a 40% commission rate and ultimately ends up with the same total payment as Alice. Each validator will need to find their ideal balance between the amount of stake and commission rate to attract nominators while still making running a validator worthwhile.

+
diff --git a/js/header-scroll.js b/js/header-scroll.js
new file mode 100644
index 00000000..e25fc834
--- /dev/null
+++ b/js/header-scroll.js
@@ -0,0 +1,11 @@
+// The purpose of this script is to move the header up out of view while
+// the user is scrolling down a page
+let lastScrollY = Math.max(0, window.scrollY);
+const header = document.querySelector('.md-header__inner');
+
+window.addEventListener('scroll', () => {
+  const currentScrollY = Math.max(0, window.scrollY);
+  const isScrollingDown = currentScrollY > lastScrollY;
+  header.classList.toggle('hidden', isScrollingDown);
+  lastScrollY = currentScrollY;
+});
diff --git a/js/root-level-sections.js b/js/root-level-sections.js
new file mode 100644
index 00000000..0faed2ec
--- /dev/null
+++ b/js/root-level-sections.js
@@ -0,0 +1,25 @@
+// The purpose of this script is to add a specific class to root-level
+// index pages, which allows the styling of the navigation menu to be
+// more easily changed
+document.addEventListener('DOMContentLoaded', function () {
+  // Get the current pathname
+  const path = window.location.pathname;
+
+  const pathParts = path.split('/').filter((part) => part !== ''); // Split by '/' and remove empty parts
+
+  // Remove 'polkadot-mkdocs' if it's the first part of the path
+  if (pathParts[0] === 'polkadot-mkdocs') {
+    pathParts.shift(); // Remove the first element
+  }
+
+  // Check if the path contains exactly one item in its path
+  if (pathParts.length === 1) {
+    // Select the target element
+    const sidebarInner = document.querySelector(
+      '.md-sidebar--primary .md-sidebar__inner'
+    );
+    if (sidebarInner) {
+      sidebarInner.classList.add('root-level-sidebar');
+    }
+  }
+});
diff --git a/js/search-bar-results.js b/js/search-bar-results.js
new file mode 100644
index 00000000..9c59a949
--- /dev/null
+++ b/js/search-bar-results.js
@@ -0,0 +1,27 @@
+// The purpose of this script is to modify the default search functionality
+// so that the "Type to start searching" text does not render in the search
+// results dropdown and so that the dropdown only appears once a user has started
+// to type in the input field
+document.addEventListener('DOMContentLoaded', () => {
+  const searchInput = document.querySelector('.md-search__input');
+  const searchOutput = document.querySelector('.md-search__output');
+  const searchResultMeta = document.querySelector('.md-search-result__meta');
+
+  if (searchResultMeta.textContent.trim() === 'Initializing search') {
+    searchResultMeta.style.display = 'none';
+  }
+
+  searchInput.addEventListener('input', () => {
+    // Only show the search results if the user has started to type
+    // Toggle "visible" class based on input content
+    searchOutput.classList.toggle('visible', searchInput.value.trim() !== '');
+
+    // Do not show the search result meta text unless a user has started typing
+    // a value in the input field
+    if (searchInput.value.trim() === '' && searchResultMeta) {
+      searchResultMeta.style.display = 'none';
+    } else if (searchInput.value.trim().length > 0) {
+      searchResultMeta.style.display = 'block';
+    }
+  });
+});
diff --git a/package-lock.json b/package-lock.json
new file mode 100644
index 00000000..65b7775a
--- /dev/null
+++ b/package-lock.json
@@ -0,0 +1,43 @@
+{
+  "name": "polkadot-docs",
+  "version": "1.0.0",
+  "lockfileVersion": 3,
+  "requires": true,
+  "packages": {
+    "": {
+      "name": "polkadot-docs",
+      "version": "1.0.0",
+      "license": "ISC",
+      "devDependencies": {
+        "@taplo/cli": "^0.7.0",
+        "husky": "^8.0.0"
+      }
+    },
+    "node_modules/@taplo/cli": {
+      "version": "0.7.0",
+      "resolved": "https://registry.npmjs.org/@taplo/cli/-/cli-0.7.0.tgz",
+      "integrity": "sha512-Ck3zFhQhIhi02Hl6T4ZmJsXdnJE+wXcJz5f8klxd4keRYgenMnip3JDPMGDRLbnC/2iGd8P0sBIQqI3KxfVjBg==",
+      "dev": true,
+      "license": "MIT",
+      "bin": {
+        "taplo": "dist/cli.js"
+      }
+    },
+    "node_modules/husky": {
+      "version": "8.0.3",
+      "resolved": "https://registry.npmjs.org/husky/-/husky-8.0.3.tgz",
+      "integrity": "sha512-+dQSyqPh4x1hlO1swXBiNb2HzTDN1I2IGLQx1GrBuiqFJfoMrnZWwVmatvSiO+Iz8fBUnf+lekwNo4c2LlXItg==",
+      "dev": true,
+      "license": "MIT",
+      "bin": {
+        "husky": "lib/bin.js"
+      },
+      "engines": {
+        "node": ">=14"
+      },
+      "funding": {
+        "url": "https://github.com/sponsors/typicode"
+      }
+    }
+  }
+}
diff --git a/package.json b/package.json
new file mode 100644
index 00000000..0abdd363
--- /dev/null
+++ b/package.json
@@ -0,0 +1,17 @@
+{
+  "name": "polkadot-docs",
+  "version": "1.0.0",
+  "description": "This package contains tools to support the development and maintenance of the polkadot-docs repository.",
+  "main": "index.js",
+  "scripts": {
+    "test": "echo \"Error: no test specified\" && exit 1",
+    "prepare": "husky install"
+  },
+  "keywords": [],
+  "author": "",
+  "license": "ISC",
+  "devDependencies": {
+    "@taplo/cli": "^0.7.0",
+    "husky": "^8.0.0"
+  }
+}
diff --git a/polkadot-protocol/architecture/index.html b/polkadot-protocol/architecture/index.html
new file mode 100644
index 00000000..dd2eecac
--- /dev/null
+++ b/polkadot-protocol/architecture/index.html

Architecture

+

Explore Polkadot's architecture, including the relay chain, parachains, and system chains, and discover the role each component plays in the broader ecosystem.

+

A Brief Look at Polkadot’s Chain Ecosystem

+

The following provides a brief overview of the role of each chain:

+
    +
  • +

    Polkadot chain - the central hub and main chain responsible for the overall security, consensus, and interoperability between all connected chains

    +
  • +
  • +

    System chains - specialized chains that provide essential services to the ecosystem, like the Asset Hub, Bridge Hub, and Coretime chain

    +
  • +
  • +

    Parachains - individual, specialized blockchains that run parallel to the relay chain and are connected to it

    +
  • +
+

Learn more about these components by checking out the articles in this section.

+

In This Section

+

+

+

diff --git a/polkadot-protocol/architecture/parachains/consensus/index.html b/polkadot-protocol/architecture/parachains/consensus/index.html
new file mode 100644
index 00000000..77c11ab7
--- /dev/null
+++ b/polkadot-protocol/architecture/parachains/consensus/index.html

Parachain Consensus

+

Introduction

+

Parachains are independent blockchains built with the Polkadot SDK, designed to leverage Polkadot’s relay chain for shared security and transaction finality. These specialized chains operate as part of Polkadot’s execution sharding model, where each parachain manages its own state and transactions while relying on the relay chain for validation and consensus.

+

At the core of parachain functionality are collators, specialized nodes that sequence transactions into blocks and maintain the parachain’s state. Collators optimize Polkadot’s architecture by offloading state management from the relay chain, allowing relay chain validators to focus solely on validating parachain blocks.

+

This guide explores how parachain consensus works, including the roles of collators and validators, and the steps involved in securing parachain blocks within Polkadot’s scalable and decentralized framework.

+

The Role of Collators

+

Collators are responsible for sequencing end-user transactions into blocks and maintaining the current state of their respective parachains. Their role is comparable to that of sequencers in Ethereum rollups, but optimized for Polkadot's architecture.

+

Key responsibilities include:

+
    +
  • Transaction sequencing - organizing transactions into Proof of Validity (PoV) blocks
  • +
  • State management - maintaining parachain states without burdening the relay chain validators
  • +
  • Consensus participation - sending PoV blocks to relay chain validators for approval
  • +
+

Consensus and Validation

+

Parachain consensus operates in tandem with the relay chain, leveraging Nominated Proof of Stake (NPoS) for shared security. The process ensures parachain transactions achieve finality through the following steps:

+
    +
  1. Packaging transactions - collators bundle transactions into PoV blocks (parablocks)
  2. +
  3. Submission to validators - parablocks are submitted to a randomly selected subset of relay chain validators, known as paravalidators
  4. +
  5. Validation of PoV blocks - paravalidators use the parachain’s state transition function (already available on the relay chain) to verify transaction validity
  6. +
  7. Backing and inclusion - if a sufficient number of positive validations are received, the parablock is backed and included via a para-header on the relay chain
  8. +
+

The following sections describe the actions taking place during each stage of the process.

+

Path of a Parachain Block

+

Polkadot achieves scalability through execution sharding, where each parachain operates as an independent shard with its own blockchain and state. Shared security for all parachains is provided by the relay chain, powered by Nominated Proof of Stake (NPoS). This framework allows parachains to focus on transaction processing and state management, while the relay chain ensures validation and finality.

+

The journey parachain transactions follow to reach consensus and finality can be described as follows:

+
    +
  • +

    Collators and parablocks:

    +
      +
    • Collators, specialized nodes on parachains, package network transactions into Proof of Validity (PoV) blocks, also called parablocks
    • +
    • These parablocks are sent to a subset of relay chain validators, known as paravalidators, for validation
    • +
    • The parachain's state transition function (Wasm blob) is not re-sent, as it is already stored on the relay chain
    • +
    +
  • +
+
flowchart TB
+    %% Subgraph: Parachain
+    subgraph Parachain
+        direction LR
+        Txs[Network Transactions]
+        Collator[Collator Node]
+        ParaBlock[ParaBlock + PoV]
+        Txs -->|Package Transactions| Collator
+        Collator -->|Create| ParaBlock
+    end
+
+    subgraph Relay["Relay Chain"]
+        ParaValidator
+    end
+
+    %% Main Flow
+    Parachain -->|Submit To| Relay
+
    +
  • +

    Validation by paravalidators:

    +
      +
    • Paravalidators are groups of approximately five relay chain validators, randomly assigned to parachains and shuffled every minute
    • +
    • Each paravalidator downloads the parachain's Wasm blob and validates the parablock by ensuring all transactions comply with the parachain’s state transition rules
    • +
    • Paravalidators sign positive or negative validation statements based on the block’s validity
    • +
    +
  • +
  • +

    Backing and approval:

    +
      +
    • If a parablock receives sufficient positive validation statements, it is backed and included on the relay chain as a para-header
    • +
    • An additional approval process resolves disputes. If a parablock contains invalid transactions, additional validators are tasked with verification
    • +
    • Validators who back invalid parablocks are penalized through slashing, creating strong incentives for honest behavior
    • +
    +
  • +
+
flowchart
+    subgraph RelayChain["Relay Chain"]
+        direction TB
+        subgraph InitialValidation["Initial Validation"]
+            direction LR
+            PValidators[ParaValidators]
+            Backing[Backing\nProcess]
+            Header[Submit Para-header\non Relay Chain]
+        end
+        subgraph Secondary["Secondary Validation"]
+            Approval[Approval\nProcess]
+            Dispute[Dispute\nResolution]
+            Slashing[Slashing\nMechanism]
+        end
+
+    end
+
+
+    %% Validation Process
+    PValidators -->|Download\nWasm\nValidate Block| Backing
+    Backing -->|If Valid\nSignatures| Header
+    InitialValidation -->|Additional\nVerification| Secondary
+
+    %% Dispute Flow
+    Approval -->|If Invalid\nDetected| Dispute
+    Dispute -->|Penalize\nDishonest\nValidators| Slashing
+

It is important to understand that relay chain blocks do not store full parachain blocks (parablocks). Instead, they include para-headers, which serve as summaries of the backed parablocks. The complete parablock remains within the parachain network, maintaining its autonomy while relying on the relay chain for validation and finality.

+

Where to Go Next

+

For more technical details, refer to the:

+ +
diff --git a/polkadot-protocol/architecture/parachains/index.html b/polkadot-protocol/architecture/parachains/index.html
new file mode 100644
index 00000000..85c48234
--- /dev/null
+++ b/polkadot-protocol/architecture/parachains/index.html

Parachains

+

Discover how parachains secure their networks and reach consensus by harnessing Polkadot’s relay chain and its robust validator framework. This integrated architecture ensures shared security and seamless coordination across the entire ecosystem.

+

Parachains serve as the foundation of Polkadot’s multichain ecosystem, enabling diverse, application-specific blockchains to operate in parallel. By connecting to the relay chain, parachains gain access to Polkadot’s shared security, interoperability, and decentralized governance. This design allows developers to focus on building innovative features while benefiting from a secure and scalable infrastructure.

+

In This Section

+

+

diff --git a/polkadot-protocol/architecture/parachains/overview/index.html b/polkadot-protocol/architecture/parachains/overview/index.html
new file mode 100644
index 00000000..60c1e9cd
--- /dev/null
+++ b/polkadot-protocol/architecture/parachains/overview/index.html

Overview

+ +

Introduction

+

A parachain is a coherent, application-specific blockchain that derives security from its respective relay chain. Parachains on Polkadot are each their own separate, fully functioning blockchain. The primary difference between a parachain and a regular, "solo" blockchain is that the relay chain verifies the state of all parachains that are connected to it. In many ways, parachains can be thought of as a "cynical" rollup, as the crypto-economic protocol used (ELVES) assumes the worst-case scenario, rather than the typical optimistic approach that many roll-up mechanisms take. Once enough validators attest that a block is valid, then the probability of that block being valid is high.

+

As each parachain’s state is validated by the relay chain, the relay chain represents the collective state of all parachains.

+
flowchart TB
+    subgraph "Relay Chain"
+        RC[Relay Chain Validators]
+        State[Collective State Validation]
+    end
+
+    PA[Parachain A]
+    PB[Parachain B]
+    PC[Parachain C]
+
+    RC -->|Validate State| PA
+    RC -->|Validate State| PB
+    RC -->|Validate State| PC
+
+    State -->|Represents Collective\nParachain State| RC
+
+    note["ELVES Protocol:\n- Crypto-economic security\n- Assumes worst-case scenario\n- High probability validation"]
+
+

Coherent Systems

+

Coherency refers to the degree of synchronization, consistency, and interoperability between different components or chains within a system. It encompasses the internal coherence of individual chains and the external coherence between chains regarding how they interact.

+

A single-state machine like Ethereum is very coherent, as all of its components (smart contracts, dApps/applications, staking, consensus) operate within a single environment with the downside of less scalability. Multi-protocol state machines, such as Polkadot, offer less coherency due to their sharded nature but more scalability due to the parallelization of their architecture.

+

Parachains are coherent, as they are self-contained environments with domain-specific functionality.

+
+

Parachains enable parallelization of different services within the same network. However, unlike most layer two rollups, parachains don't suffer the interoperability pitfalls common to most rollups. Cross-Consensus Messaging (XCM) provides a common communication format for each parachain and can be configured to allow a parachain to communicate with just the relay chain or only certain parachains.

+

The diagram below highlights the flexibility of the Polkadot ecosystem, where each parachain specializes in a distinct domain. This example illustrates how parachains, like DeFi and GameFi, leverage XCM for cross-chain operations such as asset transfers and credential verification.

+
flowchart TB
+    subgraph "Polkadot Relay Chain"
+        RC[Relay Chain\nCross-Consensus\nRouting]
+    end
+
+    subgraph "Parachain Ecosystem"
+        direction TB
+        DeFi[DeFi Parachain\nFinancial Services]
+        GameFi[GameFi Parachain\nGaming Ecosystem]
+        NFT[NFT Parachain\nDigital Collectibles]
+        Identity[Identity Parachain\nUser Verification]
+    end
+
+    DeFi <-->|XCM: Asset Transfer| GameFi
+    GameFi <-->|XCM: Token Exchange| NFT
+    Identity <-->|XCM: Credential Verification| DeFi
+
+    RC -->|Validate & Route XCM| DeFi
+    RC -->|Validate & Route XCM| GameFi
+    RC -->|Validate & Route XCM| NFT
+    RC -->|Validate & Route XCM| Identity
+
+    note["XCM Features:\n- Standardized Messaging\n- Cross-Chain Interactions\n- Secure Asset/Data Transfer"]
+

Most parachains are built using the Polkadot SDK, which provides all the tools to create a fully functioning parachain. However, it is possible to construct a parachain that can inherit the security of the relay chain as long as it implements the correct mechanisms expected by the relay chain.

+

State Transition Functions (Runtimes)

+

At their core, parachains, like most blockchains, are deterministic, finite-state machines that are often backed by game theory and economics. The previous state of the parachain, combined with external input in the form of extrinsics, allows the state machine to progress forward, one block at a time.

+
+

Deterministic State Machines

+

Determinism refers to the concept that a particular input will always produce the same output. State machines are algorithmic machines that change state based on their inputs to produce a new, updated state.

+
+
stateDiagram-v2
+    direction LR
+    [*] --> StateA : Initial State
+
+    StateA --> STF : Extrinsics/Transactions
+    STF --> StateB : Deterministic Transformation
+    StateB --> [*] : New State
+

The primary driver of this progression is the state transition function (STF), commonly referred to as a runtime. Each time a block is submitted, it represents the next proposed state for a parachain. By applying the state transition function to the previous state and including a new block that contains the proposed changes in the form of a list of extrinsics/transactions, the runtime defines exactly how the parachain advances from state A to state B.
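To make the determinism requirement concrete, the sketch below models a toy STF in TypeScript: a pure function mapping a previous state and an ordered list of extrinsics to a new state. All names and types here are hypothetical illustrations, not Polkadot SDK APIs.

```typescript
// A toy state transition function (STF): a pure function from a previous
// state plus an ordered list of extrinsics to a new state.
type Balances = Record<string, number>;

interface Transfer {
  from: string;
  to: string;
  amount: number;
}

// Deterministic by construction: no randomness, clocks, or external I/O.
function stateTransition(state: Balances, extrinsics: Transfer[]): Balances {
  const next: Balances = { ...state };
  for (const tx of extrinsics) {
    const fromBalance = next[tx.from] ?? 0;
    if (fromBalance >= tx.amount) {
      next[tx.from] = fromBalance - tx.amount;
      next[tx.to] = (next[tx.to] ?? 0) + tx.amount;
    }
    // Invalid extrinsics are skipped deterministically; every honest node
    // applying this block reaches the identical resulting state.
  }
  return next;
}

const stateA: Balances = { alice: 100, bob: 0 };
const block: Transfer[] = [{ from: 'alice', to: 'bob', amount: 40 }];
console.log(stateTransition(stateA, block)); // { alice: 60, bob: 40 }
```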

+

The STF in a Polkadot SDK-based chain is compiled to Wasm and uploaded to the relay chain. This STF is crucial for the relay chain to validate the state changes coming from the parachain: it is used to verify that all proposed state transitions happen correctly as part of the validation process.

+
+

Wasm Runtimes

+

For more information on the Wasm meta protocol that powers runtimes, see the Polkadot SDK Rust Docs: WASM Meta Protocol.

+
+

Shared Security: Validated by the Relay Chain

+

The relay chain provides a layer of economic security for its parachains. Parachains submit proof of validity (PoV) data to the relay chain for validation through collators, upon which the relay chain's validators ensure the validity of this data in accordance with the STF for that particular parachain. In other words, the consensus for a parachain follows the relay chain. While parachains choose how a block is authored, what it contains, and who authors it, the relay chain ultimately provides finality and consensus for those blocks.

+
+

The Parachains Protocol

+

For more information regarding the parachain and relay chain validation process, view the Polkadot Wiki: Parachains' Protocol Overview: Protocols' Summary

+
+

Parachains need at least one honest collator to submit PoV data to the relay chain. Without this, the parachain can't progress. The mechanisms that facilitate this are found in the Cumulus portion of the Polkadot SDK, some of which are found in the cumulus_pallet_parachain_system pallet.

+

Cryptoeconomic Security: ELVES Protocol

+

The ELVES (Economic Last Validation Enforcement System) protocol forms the foundation of Polkadot's cryptoeconomic security model. ELVES assumes a worst-case scenario by enforcing strict validation rules before any state transitions are finalized. Unlike optimistic approaches that rely on post-facto dispute resolution, ELVES ensures that validators collectively confirm the validity of a block before it becomes part of the parachain's state.

+

Validators are incentivized through staking and penalized for malicious or erroneous actions, ensuring adherence to the protocol. This approach minimizes the probability of invalid states being propagated across the network, providing robust security for parachains.

+

Interoperability

+

Polkadot's interoperability framework allows parachains to communicate with each other, fostering a diverse ecosystem of interconnected blockchains. Through Cross-Consensus Messaging (XCM), parachains can transfer assets, share data, and invoke functionalities on other chains securely. This standardized messaging protocol ensures that parachains can interact with the relay chain and each other, supporting efficient cross-chain operations.

+

The XCM protocol mitigates common interoperability challenges in isolated blockchain networks, such as fragmented ecosystems and limited collaboration. By enabling decentralized applications to leverage resources and functionality across parachains, Polkadot promotes a scalable, cooperative blockchain environment that benefits all participants.

+

Where to Go Next

+

For further information about the consensus protocol used by parachains, see the Consensus page.

+
diff --git a/polkadot-protocol/architecture/polkadot-chain/agile-coretime/index.html b/polkadot-protocol/architecture/polkadot-chain/agile-coretime/index.html
new file mode 100644
index 00000000..fdca7a21

Agile Coretime

+

Introduction

+

Agile Coretime is the scheduling framework on Polkadot that lets parachains efficiently access cores, each backed by a subset of the active validator set tasked with parablock validation. As the first blockchain to enable a flexible scheduling system for blockspace production, Polkadot offers unparalleled adaptability for parachains.

+

Cores can be designated to a parachain either continuously through bulk coretime or dynamically via on-demand coretime. Additionally, Polkadot supports scheduling multiple cores in parallel through elastic scaling, which is a feature under active development on Polkadot. This flexibility empowers parachains to optimize their resource usage and block production according to their unique needs.

+

In this guide, you'll learn how bulk coretime enables continuous core access with features like interlacing and splitting, and how on-demand coretime provides flexible, pay-per-use scheduling for parachains. For a deep dive on Agile Coretime and its terminology, refer to the Wiki doc.

+

Bulk Coretime

+

Bulk coretime is a fixed duration of continuous coretime, represented by an NFT, that can be purchased in DOT through coretime sales and can be split, shared, or resold. Currently, the duration of bulk coretime is set to 28 days. Coretime purchased in bulk and assigned to a single parachain is eligible for a price-capped renewal, providing a form of rent-controlled access, which is important for predicting running costs in the near future. If bulk coretime is interlaced, split, or kept idle without being assigned to a parachain, it becomes ineligible for the price-capped renewal.

+

Coretime Interlacing

+

Interlacing is the action of dividing bulk coretime across multiple parachains that produce blocks spaced uniformly in time; think of multiple parachains taking turns producing blocks. This feature suits parachains that have low transaction volume and do not need to produce blocks continuously.

+

Coretime Splitting

+

Splitting is the action of dividing bulk coretime into multiple contiguous regions. This feature suits parachains that need to produce blocks continuously but require only part of the full 28 days of bulk coretime.
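As an illustration of how splitting and interlacing might be driven programmatically, here is a hedged polkadot-js sketch against the Broker pallet on the Coretime chain. The endpoint, region values, and dev account are placeholder assumptions; consult the pallet_broker documentation for the exact call types.

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function manageRegion() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-coretime-rpc.polkadot.io'), // assumed endpoint
  });
  const owner = new Keyring({ type: 'sr25519' }).addFromUri('//Alice'); // dev account

  // A region is identified by its start timeslice, core index, and core mask.
  const regionId = { begin: 123_456, core: 0, mask: '0xffffffffffffffffffff' };

  // Splitting: divide the region into two contiguous parts at a pivot timeslice.
  await api.tx.broker.partition(regionId, 123_800).signAndSend(owner);

  // Interlacing: divide the region by a bit mask instead, so the resulting
  // regions alternate block production within the same span of time.
  // (In practice you would do one or the other; partition consumes the region.)
  await api.tx.broker.interlace(regionId, '0xaaaaaaaaaaaaaaaaaaaa').signAndSend(owner);

  await api.disconnect();
}

manageRegion().catch(console.error);
```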

+

On-Demand Coretime

+

Polkadot has dedicated cores assigned to provide coretime on demand. These cores are excluded from coretime sales and are reserved for on-demand parachains, which pay in DOT per block.
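A minimal sketch of placing an on-demand order follows. The pallet name varies across runtime versions (onDemandAssignmentProvider in earlier releases, onDemand in later ones), and the endpoint, price cap, and parachain ID are placeholder assumptions.

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function placeOrder() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://rpc.polkadot.io'),
  });
  const signer = new Keyring({ type: 'sr25519' }).addFromUri('//Alice'); // dev key

  const maxAmount = 10_000_000_000n; // price cap in plancks (1 DOT) for this order
  const paraId = 2000;               // hypothetical on-demand parachain ID

  // Try the newer pallet name first, then fall back to the older one.
  const onDemand = api.tx.onDemand ?? api.tx.onDemandAssignmentProvider;
  await onDemand.placeOrderAllowDeath(maxAmount, paraId).signAndSend(signer);

  await api.disconnect();
}

placeOrder().catch(console.error);
```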

+
diff --git a/polkadot-protocol/architecture/polkadot-chain/index.html b/polkadot-protocol/architecture/polkadot-chain/index.html
new file mode 100644
index 00000000..9d2a1fff

The Polkadot Relay Chain

+

Discover the central role of the Polkadot Relay Chain in securing the network and fostering interoperability. As the backbone of Polkadot, the relay chain provides shared security and ensures consensus across the ecosystem. It empowers parachains with flexible coretime allocation, enabling them to purchase blockspace on demand, ensuring efficiency and scalability for diverse blockchain applications.

+

In This Section

+

+

+

diff --git a/polkadot-protocol/architecture/polkadot-chain/overview/index.html b/polkadot-protocol/architecture/polkadot-chain/overview/index.html
new file mode 100644
index 00000000..a5d288c4

Overview

+

Introduction

+

Polkadot is a next-generation blockchain protocol designed to support a multi-chain future by enabling secure communication and interoperability between different blockchains. Built as a Layer-0 protocol, Polkadot introduces innovations like application-specific Layer-1 chains (parachains), shared security through Nominated Proof of Stake (NPoS), and cross-chain interactions via its native Cross-Consensus Messaging Format (XCM).

+

This guide covers key aspects of Polkadot’s architecture, including its high-level protocol structure, blockspace commoditization, and the role of its native token, DOT, in governance, staking, and resource allocation.

+

Polkadot 1.0

+

Polkadot 1.0 represents the state of Polkadot as of 2023, coinciding with the release of Polkadot runtime v1.0.0. This section will focus on Polkadot 1.0, along with philosophical insights into network resilience and blockspace.

+

As a Layer-0 blockchain, Polkadot contributes to the multi-chain vision through several key innovations and initiatives, including:

+
    +
  • +

    Application-specific Layer-1 blockchains (parachains) - Polkadot's sharded network allows for parallel transaction processing, with shards that can have unique state transition functions, enabling custom-built L1 chains optimized for specific applications

    +
  • +
  • +

    Shared security and scalability - L1 chains connected to Polkadot benefit from its Nominated Proof of Stake (NPoS) system, providing security out-of-the-box without the need to bootstrap their own

    +
  • +
  • +

    Secure interoperability - Polkadot's native interoperability enables seamless data and value exchange between parachains. This interoperability can also be used outside of the ecosystem for bridging with external networks

    +
  • +
  • +

    Resilient infrastructure - decentralized and scalable, Polkadot ensures ongoing support for development and community initiatives via its on-chain treasury and governance

    +
  • +
  • +

    Rapid L1 development - the Polkadot SDK allows fast, flexible creation and deployment of Layer-1 chains

    +
  • +
  • +

    Cultivating the next generation of Web3 developers - Polkadot supports the growth of Web3 core developers through initiatives such as:

    + +
  • +
+

High-Level Architecture

+

Polkadot features a central chain, the relay chain, that serves as the core of the system. The relay chain is typically depicted as a ring encircled by the parachains connected to it.

+

According to Polkadot's design, any blockchain that can compile to WebAssembly (Wasm) and adheres to the Parachains Protocol becomes a parachain on the Polkadot network.

+

Here’s a high-level overview of the Polkadot protocol architecture:

+

+

Parachains propose blocks to Polkadot validators, who check for availability and validity before finalizing them. With the relay chain providing security, collators—full nodes of parachains—can focus on their tasks without needing strong incentives.

+

The Cross-Consensus Messaging Format (XCM) allows parachains to exchange messages freely, leveraging the chain's security for trust-free communication.

+

In order to interact with chains that use their own finalization process (e.g., Bitcoin), Polkadot has bridges that offer two-way compatibility, meaning that transactions can be made between Polkadot and external networks.

+

Polkadot's Additional Functionalities

+

The Polkadot chain oversaw crowdloans and auctions. Cores were leased through auctions for three-month periods, up to a maximum of two years.

+

Crowdloans enabled users to securely lend funds to teams for lease deposits in exchange for pre-sale tokens; this was the only way to access slots on Polkadot 1.0.

+
+

Note

+

Auctions are deprecated in favor of coretime.

+
+

Additionally, the chain handles staking, accounts, balances, and governance.

+

Agile Coretime

+

The new, more efficient way of obtaining a core on Polkadot is to purchase coretime.

+

Agile coretime improves the efficient use of Polkadot's network resources and offers economic flexibility for developers, extending Polkadot's capabilities far beyond the original vision outlined in the whitepaper.

+

It enables parachains to purchase monthly "bulk" allocations of coretime (the time allocated for utilizing a core, measured in Polkadot relay chain blocks), ensuring heavy-duty parachains that can author a block every six seconds with Asynchronous Backing can reliably renew their coretime each month. Although six-second block times are now the default, parachains have the option of producing blocks less frequently.

+

Renewal orders are prioritized over new orders, offering stability against price fluctuations and helping parachains budget more effectively for project costs.

+

Polkadot's Resilience

+

Decentralization is a vital component of blockchain networks, but it comes with trade-offs:

+
    +
  • An overly decentralized network may face challenges in reaching consensus and require significant energy to operate
  • +
  • Conversely, a network that achieves consensus quickly risks centralization, making it easier to manipulate or attack
  • +
+

A network should be decentralized enough to prevent manipulative or malicious influence. In this sense, decentralization is a tool for achieving resilience.

+

Polkadot 1.0 currently achieves resilience through several strategies:

+
    +
  • +

    Nominated Proof of Stake (NPoS) - ensures that the stake per validator is maximized and evenly distributed among validators

    +
  • +
  • +

    Decentralized Nodes - a program designed to encourage operators to join the network. It aims to expand and diversify the validators in the ecosystem, who aim to become independent of the program during their term. Feel free to explore more about the program on the official Decentralized Nodes page

    +
  • +
  • +

    On-chain treasury and governance - known as OpenGov, this system allows every decision to be made through public referenda, enabling any token holder to cast a vote

    +
  • +
+

Polkadot's Blockspace

+

Polkadot 1.0’s design allows for the commoditization of blockspace.

+

Blockspace is a blockchain's capacity to finalize and commit operations, encompassing its security, computing, and storage capabilities. Its characteristics can vary across different blockchains, affecting security, flexibility, and availability.

+
    +
  • +

    Security - measures the robustness of blockspace in Proof of Stake (PoS) networks linked to the stake locked on validator nodes, the variance in stake among validators, and the total number of validators. It also considers social centralization (how many validators are owned by single operators) and physical centralization (how many validators run on the same service provider)

    +
  • +
  • +

    Flexibility - reflects the functionalities and types of data that can be stored, with high-quality data essential to avoid bottlenecks in critical processes

    +
  • +
  • +

    Availability - indicates how easily users can access blockspace. It should be easily accessible, allowing diverse business models to thrive, ideally regulated by a marketplace based on demand and supplemented by options for "second-hand" blockspace

    +
  • +
+

Polkadot is built on core blockspace principles, but there's room for improvement. Tasks like balance transfers, staking, and governance are managed on the relay chain.

+

Delegating these responsibilities to system chains could enhance flexibility and allow the relay chain to concentrate on providing shared security and interoperability.

+
+

Note

+

For more information about blockspace, watch Robert Habermeier’s interview or read his technical blog post.

+
+

DOT Token

+

DOT is the native token of the Polkadot network, much like BTC for Bitcoin and Ether for Ethereum. DOT has 10 decimals, uses the Planck base unit, and has a balance type of u128. The same is true for Kusama's KSM token, except that KSM has 12 decimals.

+
+Redenomination of DOT +

Polkadot conducted a community poll, which ended on 27 July 2020 at block 888,888, to decide whether to redenominate the DOT token. The stakeholders chose to redenominate the token, changing the value of 1 DOT from 1e12 plancks to 1e10 plancks.

+

Importantly, this did not affect the network's total number of base units (plancks); it only affects how a single DOT is represented.

+

The redenomination became effective 72 hours after transfers were enabled, occurring at block 1,248,328 on 21 August 2020 around 16:50 UTC.

+
+

The Planck Unit

+

The smallest unit of account balance on Polkadot SDK-based blockchains (such as Polkadot and Kusama) is called Planck, named after the Planck length, the smallest measurable distance in the physical universe.

+

Similar to how BTC's smallest unit is the Satoshi and ETH's is the Wei, Polkadot's native token DOT equals 1e10 Planck, while Kusama's native token KSM equals 1e12 Planck.
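A small worked example of the conversion, using TypeScript bigints; the decimal counts are the ones stated above.

```typescript
// The decimal counts stated above, expressed as bigint constants.
const PLANCKS_PER_DOT = 10n ** 10n; // 1 DOT = 1e10 plancks
const PLANCKS_PER_KSM = 10n ** 12n; // 1 KSM = 1e12 plancks

// Format an on-chain planck amount as a human-readable DOT string.
function toDot(plancks: bigint): string {
  const whole = plancks / PLANCKS_PER_DOT;
  const frac = (plancks % PLANCKS_PER_DOT).toString().padStart(10, '0');
  return `${whole}.${frac} DOT`;
}

console.log(toDot(15_000_000_000n)); // "1.5000000000 DOT"
```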

+

Uses for DOT

+

DOT serves three primary functions within the Polkadot network:

+
    +
  • Governance - it is used to participate in the governance of the network
  • +
  • Staking - DOT is staked to support the network's operation and security
  • +
  • Buying coretime - used to purchase coretime in-bulk or on-demand and access the chain to benefit from Polkadot's security and interoperability
  • +
+

Additionally, DOT can serve as a transferable token. For example, DOT, held in the treasury, can be allocated to teams developing projects that benefit the Polkadot ecosystem.

+

JAM and the Road Ahead

+

The Join-Accumulate Machine (JAM) represents a transformative redesign of Polkadot's core architecture, envisioned as the successor to the current relay chain. Unlike traditional blockchain architectures, JAM introduces a unique computational model that processes work through two primary functions:

+
    +
  • Join - handles data integration
  • +
  • Accumulate - folds computations into the chain's state
  • +
+

JAM removes many of the opinions and constraints of the current relay chain while maintaining its core security properties. Expected improvements include:

+
    +
  • Permissionless code execution - JAM is designed to be more generic and flexible, allowing for permissionless code execution through services that can be deployed without governance approval
  • +
  • More effective block time utilization - JAM's efficient pipeline processing model places the prior state root in block headers instead of the posterior state root, enabling more effective utilization of block time for computations
  • +
+

This architectural evolution promises to enhance Polkadot's scalability and flexibility while maintaining robust security guarantees. JAM is planned to be rolled out to Polkadot as a single, complete upgrade rather than a stream of smaller updates. This approach seeks to minimize the developer overhead required to address any breaking changes.

+
diff --git a/polkadot-protocol/architecture/polkadot-chain/pos-consensus/index.html b/polkadot-protocol/architecture/polkadot-chain/pos-consensus/index.html
new file mode 100644
index 00000000..1d62ec3b

Proof of Stake Consensus

+

Introduction

+

Polkadot's Proof of Stake consensus model leverages a unique hybrid approach by design to promote decentralized and secure network operations. In traditional Proof of Stake (PoS) systems, a node's ability to validate transactions is tied to its token holdings, which can lead to centralization risks and limited validator participation. Polkadot addresses these concerns through its Nominated Proof of Stake (NPoS) model and a combination of advanced consensus mechanisms to ensure efficient block production and strong finality guarantees. This combination enables the Polkadot network to scale while maintaining security and decentralization.

+

Nominated Proof of Stake

+

Polkadot uses Nominated Proof of Stake (NPoS) to select the validator set and secure the network. This model is designed to maximize decentralization and security by balancing the roles of validators and nominators.

+
    +
  • Validators - play a key role in maintaining the network's integrity. They produce new blocks, validate parachain blocks, and ensure the finality of transactions across the relay chain
  • +
  • Nominators - support the network by selecting validators to back with their stake. This mechanism allows users who don't want to run a validator node to still participate in securing the network and earn rewards based on the validators they support
  • +
+

In Polkadot's NPoS system, nominators can delegate their tokens to trusted validators, giving them voting power in selecting validators while spreading security responsibilities across the network.

+

Hybrid Consensus

+

Polkadot employs a hybrid consensus model that combines two key protocols: a finality gadget called GRANDPA and a block production mechanism known as BABE. This hybrid approach enables the network to benefit from both rapid block production and provable finality, ensuring security and performance.

+

The hybrid consensus model has some key advantages:

+
    +
  • +

    Probabilistic finality - with BABE constantly producing new blocks, Polkadot ensures that the network continues to make progress, even when a final decision has not yet been reached on which chain is the true canonical chain

    +
  • +
  • +

    Provable finality - GRANDPA guarantees that once a block is finalized, it can never be reverted, ensuring that all network participants agree on the finalized chain

    +
  • +
+

By using separate protocols for block production and finality, Polkadot can achieve rapid block creation and strong guarantees of finality while avoiding the typical trade-offs seen in traditional consensus mechanisms.

+

Block Production - BABE

+

Blind Assignment for Blockchain Extension (BABE) is Polkadot's block production mechanism, working with GRANDPA to ensure blocks are produced consistently across the network. As validators participate in BABE, they are assigned block production slots through a randomness-based lottery system. This helps determine which validator is responsible for producing a block at a given time. BABE shares similarities with Ouroboros Praos but differs in key aspects like chain selection rules and slot timing.

+

Key features of BABE include:

+
    +
  • +

    Epochs and slots - BABE operates in phases called epochs, each of which is divided into slots (around 6 seconds per slot). Validators are assigned slots at the beginning of each epoch based on stake and randomness

    +
  • +
  • +

    Randomized block production - validators enter a lottery to determine which will produce a block in a specific slot. This randomness is sourced from the relay chain's randomness cycle

    +
  • +
  • +

    Multiple block producers per slot - in some cases, more than one validator might win the lottery for the same slot, resulting in multiple blocks being produced. These blocks are broadcasted, and the network's fork choice rule helps decide which chain to follow

    +
  • +
  • +

    Handling empty slots - if no validators win the lottery for a slot, a secondary selection algorithm ensures that a block is still produced. Validators selected through this method always produce a block, ensuring no slots are skipped

    +
  • +
+

BABE's combination of randomness and slot allocation creates a secure, decentralized system for consistent block production while also allowing for fork resolution when multiple validators produce blocks for the same slot.

+
+Additional Information +
    +
  • Refer to the BABE paper for further technical insights, including cryptographic details and formal proofs
  • +
  • Visit the Block Production Lottery section of the Polkadot Protocol Specification for technical definitions and formulas
  • +
+
+

Validator Participation

+

In BABE, validators participate in a lottery for every slot to determine whether they are responsible for producing a block during that slot. The randomness of this process ensures a decentralized and unpredictable block production mechanism.

+

There are two lottery outcomes for any given slot that initiate additional processes:

+
    +
  • +

    Multiple validators in a slot - due to the randomness, multiple validators can be selected to produce a block for the same slot. When this happens, each validator produces a block and broadcasts it to the network, resulting in a race condition. The network's topology and latency then determine which block reaches the majority of nodes first. BABE allows both chains to continue building until the finalization process resolves which one becomes canonical. The fork choice rule is then used to decide which chain the network should follow

    +
  • +
  • +

    No validators in a slot - on occasions when no validator is selected by the lottery, a secondary validator selection algorithm steps in. This backup ensures that a block is still produced, preventing skipped slots. However, if the primary block produced by a verifiable random function (VRF)-selected validator exists for that slot, the secondary block will be ignored. As a result, every slot will have either a primary or a secondary block

    +
  • +
+

This design ensures continuous block production, even in cases of multiple competing validators or an absence of selected validators.

+

Finality Gadget - GRANDPA

+

GRANDPA (GHOST-based Recursive ANcestor Deriving Prefix Agreement) serves as the finality gadget for Polkadot's relay chain. Operating alongside the BABE block production mechanism, it ensures provable finality, giving participants confidence that blocks finalized by GRANDPA cannot be reverted.

+

Key features of GRANDPA include:

+
    +
  • Independent finality service – GRANDPA runs separately from the block production process, operating in parallel to ensure seamless finalization
  • +
  • Chain-based finalization – instead of finalizing one block at a time, GRANDPA finalizes entire chains, speeding up the process significantly
  • +
  • Batch finalization – can finalize multiple blocks in a single round, enhancing efficiency and minimizing delays in the network
  • +
  • Partial synchrony tolerance – GRANDPA works effectively in a partially synchronous network environment, managing both asynchronous and synchronous conditions
  • +
  • Byzantine fault tolerance – can handle up to 1/5 Byzantine (malicious) nodes, ensuring the system remains secure even when faced with adversarial behavior
  • +
+
+What is GHOST? +

GHOST (Greedy Heaviest-Observed Subtree) is a consensus protocol used in blockchain networks to select the heaviest branch in a block tree. Unlike traditional longest-chain rules, GHOST can more efficiently handle high block production rates by considering the weight of subtrees rather than just the chain length.

+
+

Probabilistic vs. Provable Finality

+

In traditional Proof of Work (PoW) blockchains, finality is probabilistic. As blocks are added to the chain, the probability that a block is final increases, but it can never be guaranteed. Eventual consensus means that over time, all nodes will agree on a single version of the blockchain, but this process can be unpredictable and slow.

+

Conversely, GRANDPA provides provable finality, which means that once a block is finalized, it is irreversible. By using Byzantine fault-tolerant agreements, GRANDPA finalizes blocks more efficiently and securely than probabilistic mechanisms like Nakamoto consensus. Like Ethereum's Casper the Friendly Finality Gadget (FFG), GRANDPA ensures that finalized blocks cannot be reverted, offering stronger guarantees of consensus.

+
+Additional Information +

For more details, including formal proofs and detailed algorithms, see the GRANDPA paper.

+
+

Fork Choice

+

The fork choice of the relay chain combines BABE and GRANDPA:

+
    +
  1. BABE must always build on the chain that GRANDPA has finalized
  2. +
  3. When there are forks after the finalized head, BABE builds on the chain with the most primary blocks to provide probabilistic finality
  4. +
+

Fork choice diagram

+

In the preceding diagram, finalized blocks are black, and non-finalized blocks are yellow. Primary blocks are labeled '1', and secondary blocks are labeled '2.' The topmost chain is the longest chain originating from the last finalized block, but it is not selected because it only has one primary block at the time of evaluation. In comparison, the one below it originates from the last finalized block and has three primary blocks.
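The sketch below illustrates this rule with simplified stand-in types: among candidate forks descending from the last finalized block, the chain with the most primary blocks wins, even if a competing fork is longer.

```typescript
// Simplified stand-in types: only whether each block was authored by the
// primary (VRF-selected) validator matters for this rule.
interface Block {
  primary: boolean;
}

type Chain = Block[]; // blocks built after the last finalized head, in order

// Pick the candidate fork with the most primary blocks.
function bestChain(candidates: Chain[]): Chain {
  let best = candidates[0];
  let bestPrimaries = -1;
  for (const chain of candidates) {
    const primaries = chain.filter((b) => b.primary).length;
    if (primaries > bestPrimaries) {
      best = chain;
      bestPrimaries = primaries;
    }
  }
  return best;
}

// The longer fork (one primary block) loses to the shorter fork (three).
const forkA: Chain = [{ primary: true }, { primary: false }, { primary: false }, { primary: false }];
const forkB: Chain = [{ primary: true }, { primary: true }, { primary: true }];
console.log(bestChain([forkA, forkB]) === forkB); // true
```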

+

Bridging - BEEFY

+

Bridge Efficiency Enabling Finality Yielder (BEEFY) is a specialized protocol that extends the finality guarantees provided by GRANDPA. It is specifically designed to facilitate efficient bridging between Polkadot relay chains (such as Polkadot and Kusama) and external blockchains like Ethereum. While GRANDPA is well-suited for finalizing blocks within Polkadot, it has limitations when bridging external chains that weren't built with Polkadot's interoperability features in mind. BEEFY addresses these limitations by ensuring other networks can efficiently verify finality proofs.

+

Key features of BEEFY include:

+
    +
  • Efficient finality proof verification - BEEFY enables external networks to easily verify Polkadot finality proofs, ensuring seamless communication between chains
  • +
  • Merkle Mountain Ranges (MMR) - this data structure is used to efficiently store and transmit proofs between chains, optimizing data storage and reducing transmission overhead
  • +
  • ECDSA signature schemes - BEEFY uses ECDSA signatures, which are widely supported on Ethereum and other EVM-based chains, making integration with these ecosystems smoother
  • +
  • Light client optimization - BEEFY reduces the computational burden on light clients by allowing them to check for a super-majority of validator votes rather than needing to process all validator signatures, improving performance
  • +
+
+Additional Information +

For more details, including technical definitions and formulas, see Bridge design (BEEFY) in the Polkadot Protocol Specification.

+
+

Resources

+ +
diff --git a/polkadot-protocol/architecture/system-chains/asset-hub/index.html b/polkadot-protocol/architecture/system-chains/asset-hub/index.html
new file mode 100644
index 00000000..08b99533

Asset Hub

+

Introduction

+

The Asset Hub is a critical component in the Polkadot ecosystem, enabling the management of fungible and non-fungible assets across the network. Since the relay chain focuses on maintaining security and consensus without direct asset management, Asset Hub provides a streamlined platform for creating, managing, and using on-chain assets in a fee-efficient manner. This guide outlines the core features of Asset Hub, including how it handles asset operations, cross-chain transfers, and asset integration using XCM, as well as essential tools like API Sidecar and TxWrapper for developers working with on-chain assets.

+

Assets Basics

+

In the Polkadot ecosystem, the relay chain does not natively support additional assets beyond its native token (DOT for Polkadot, KSM for Kusama). The Asset Hub parachain on Polkadot and Kusama provides a fungible and non-fungible assets framework. Asset Hub allows developers and users to create, manage, and use assets across the ecosystem.

+

Asset creators can use Asset Hub to track their asset issuance across multiple parachains and manage assets through operations such as minting, burning, and transferring. Projects that need a standardized method of handling on-chain assets will find this particularly useful. The fungible asset interface provided by Asset Hub closely resembles Ethereum's ERC-20 standard but is directly integrated into Polkadot's runtime, making it more efficient in terms of speed and transaction fees.

+

Integrating with Asset Hub offers several key benefits, particularly for infrastructure providers and users:

+
    +
  • Support for non-native on-chain assets - Asset Hub enables seamless asset creation and management, allowing projects to develop tokens or assets that can interact with the broader ecosystem
  • +
  • Lower transaction fees - Asset Hub offers significantly lower transaction costs—approximately one-tenth of the fees on the relay chain, providing cost-efficiency for regular operations
  • +
  • Reduced deposit requirements - depositing assets in Asset Hub is more accessible, with deposit requirements that are around one one-hundredth of those on the relay chain
  • +
  • Payment of transaction fees with non-native assets - users can pay transaction fees in assets other than the native token (DOT or KSM), offering more flexibility for developers and users
  • +
+

Assets created on the Asset Hub are stored as part of a map, where each asset has a unique ID that links to information about the asset, including details like:

+
    +
  • The management team
  • +
  • The total supply
  • +
  • The number of accounts holding the asset
  • +
  • Sufficiency for account existence - whether the asset alone is enough to maintain an account without a native token balance
  • +
  • The metadata of the asset, including its name, symbol, and the number of decimals for representation
  • +
+

Some assets can be regarded as sufficient to maintain an account's existence, meaning that users can create accounts on the network without needing a native token balance (i.e., no existential deposit required). Developers can also set minimum balances for their assets. If an account's balance drops below the minimum, the balance is considered dust and may be cleared.

+

Assets Pallet

+

The Polkadot SDK's Assets pallet is a powerful module designed for creating and managing fungible asset classes with a fixed supply. It offers a secure and flexible way to issue, transfer, freeze, and destroy assets. The pallet supports various operations and includes permissioned and non-permissioned functions to cater to simple and advanced use cases.

+

Visit the Assets Pallet Rust docs for more in-depth information.

+

Key Features

+

Key features of the Assets pallet include:

+
    +
  • Asset issuance - allows the creation of a new asset, where the total supply is assigned to the creator's account
  • +
  • Asset transfer - enables transferring assets between accounts while maintaining a balance in both accounts
  • +
  • Asset freezing - prevents transfers of a specific asset from one account, locking it from further transactions
  • +
  • Asset destruction - allows accounts to burn or destroy their holdings, removing those assets from circulation
  • +
  • Non-custodial transfers - a non-custodial mechanism to enable one account to approve a transfer of assets on behalf of another
  • +
+

Main Functions

+

The Assets pallet provides a broad interface for managing fungible assets. Some of the main dispatchable functions include:

+
    +
  • create() - create a new asset class by placing a deposit, applicable when asset creation is permissionless
  • +
  • issue() - mint a fixed supply of a new asset and assign it to the creator's account
  • +
  • transfer() - transfer a specified amount of an asset between two accounts
  • +
  • approve_transfer() - approve a non-custodial transfer, allowing a third party to move assets between accounts
  • +
  • destroy() - destroy an entire asset class, removing it permanently from the chain
  • +
  • freeze() and thaw() - administrators or privileged users can lock or unlock assets from being transferred
  • +
+

For a full list of dispatchable and privileged functions, see the dispatchables Rust docs.
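As a hedged illustration of how these dispatchables are invoked from an application, the polkadot-js sketch below creates an asset class, mints supply, and transfers some of it on Asset Hub. The endpoint, asset ID, and dev accounts are placeholder assumptions; note that recent runtimes expose issuance as mint.

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function assetLifecycle() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-asset-hub-rpc.polkadot.io'), // assumed endpoint
  });
  const keyring = new Keyring({ type: 'sr25519' });
  const admin = keyring.addFromUri('//Alice'); // dev accounts
  const dest = keyring.addFromUri('//Bob');

  const assetId = 1234;      // hypothetical; must be an unused ID
  const minBalance = 1_000n; // accounts holding less than this are reaped

  // Create the asset class (permissionless variant; places a deposit),
  // then mint supply and transfer part of it. Production code should wait
  // for each transaction to be included before sending the next.
  await api.tx.assets.create(assetId, admin.address, minBalance).signAndSend(admin);
  await api.tx.assets.mint(assetId, admin.address, 1_000_000n).signAndSend(admin);
  await api.tx.assets.transfer(assetId, dest.address, 250_000n).signAndSend(admin);

  await api.disconnect();
}

assetLifecycle().catch(console.error);
```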

+

Querying Functions

+

The Assets pallet exposes several key querying functions that developers can interact with programmatically. These functions allow you to query asset information and perform operations essential for managing assets across accounts. The two main querying functions are:

+
    +
  • +

    balance(asset_id, account) - retrieves the balance of a given asset for a specified account. Useful for checking the holdings of an asset class across different accounts

    +
  • +
  • +

    total_supply(asset_id) - returns the total supply of the asset identified by asset_id. Allows users to verify how much of the asset exists on-chain

    +
  • +
+

In addition to these basic functions, other utility functions are available for querying asset metadata and performing asset transfers. You can view the complete list of querying functions in the Struct Pallet Rust docs.
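A short polkadot-js sketch of both query patterns, with the endpoint and asset ID as placeholder assumptions; account balances come from assets.account storage and the total supply from the asset's details record.

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';

async function queryAsset() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-asset-hub-rpc.polkadot.io'), // assumed endpoint
  });
  const assetId = 1234; // hypothetical asset
  const who = '5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY'; // Alice dev address

  // balance(asset_id, account): one account's holdings of one asset class.
  const balance = await api.query.assets.account(assetId, who);
  console.log('balance:', balance.toHuman());

  // total_supply(asset_id): the supply is part of the asset's details record.
  const details = await api.query.assets.asset(assetId);
  console.log('details:', details.toHuman());

  await api.disconnect();
}

queryAsset().catch(console.error);
```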

+

Permission Models and Roles

+

The Assets pallet incorporates a robust permission model, enabling control over who can perform specific operations like minting, transferring, or freezing assets. The key roles within the permission model are:

+
    +
  • Admin - can freeze (preventing transfers) and forcibly transfer assets between accounts. Admins also have the power to reduce the balance of an asset class across arbitrary accounts. They manage the more sensitive and administrative aspects of the asset class
  • +
  • Issuer - responsible for minting new tokens. When new assets are created, the Issuer is the account that controls their distribution to other accounts
  • +
  • Freezer - can lock the transfer of assets from an account, preventing the account holder from moving their balance. This function is useful for freezing accounts involved in disputes or fraud
  • +
  • Owner - has overarching control, including destroying an entire asset class. Owners can also set or update the Issuer, Freezer, and Admin roles
  • +
+

These permissions provide fine-grained control over assets, enabling developers and asset managers to ensure secure, controlled operations. Each of these roles is crucial for managing asset lifecycles and ensuring that assets are used appropriately across the network.

+

Asset Freezing

+

The Assets pallet allows you to freeze assets. This feature prevents transfers or spending from a specific account, effectively locking the balance of an asset class until it is explicitly unfrozen. Asset freezing is beneficial when assets are restricted due to security concerns or disputes.

+

Freezing assets is controlled by the Freezer role, as mentioned earlier. Only the account with the Freezer privilege can perform these operations. Here are the key freezing functions:

+
    +
  • freeze(asset_id, account) - locks the specified asset of the account. While the asset is frozen, no transfers can be made from the frozen account
  • +
  • thaw(asset_id, account) - corresponding function for unfreezing, allowing the asset to be transferred again
  • +
+

This approach enables secure and flexible asset management, providing administrators the tools to control asset movement in special circumstances.
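A hedged sketch of both calls via polkadot-js; the signer must hold the Freezer role, and the endpoint, asset ID, and addresses are placeholder assumptions.

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function freezeThenThaw() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-asset-hub-rpc.polkadot.io'), // assumed endpoint
  });
  const freezer = new Keyring({ type: 'sr25519' }).addFromUri('//Alice'); // must hold the Freezer role
  const assetId = 1234; // hypothetical asset
  const target = '5FHneW46xGXgs5mUiveU4sbTyGBzmstUspZC92UhjJM694ty'; // Bob dev address

  // freeze(asset_id, account): lock transfers of this asset out of `target`.
  await api.tx.assets.freeze(assetId, target).signAndSend(freezer);

  // thaw(asset_id, account): re-enable transfers for the same account.
  await api.tx.assets.thaw(assetId, target).signAndSend(freezer);

  await api.disconnect();
}

freezeThenThaw().catch(console.error);
```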

+

Non-Custodial Transfers (Approval API)

+

The Assets pallet also supports non-custodial transfers through the Approval API. This feature allows one account to approve another account to transfer a specific amount of its assets to a third-party recipient without granting full control over the account's balance. Non-custodial transfers enable secure transactions where trust is required between multiple parties.

+

Here's a brief overview of the key functions for non-custodial asset transfers:

+
    +
  • approve_transfer(asset_id, delegate, amount) - approves a delegate to transfer up to a certain amount of the asset on behalf of the original account holder
  • +
  • cancel_approval(asset_id, delegate) - cancels a previous approval for the delegate. Once canceled, the delegate no longer has permission to transfer the approved amount
  • +
  • transfer_approved(asset_id, owner, recipient, amount) - executes the approved asset transfer from the owner’s account to the recipient. The delegate account can call this function once approval is granted
  • +
+

These delegated operations make it easier to manage multi-step transactions and dApps that require complex asset flows between participants.
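The sketch below walks through that flow with polkadot-js: the owner approves a delegate, the delegate executes a transfer to a recipient, and the owner revokes the remaining allowance. Endpoint, asset ID, and dev accounts are placeholder assumptions.

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function approvalFlow() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-asset-hub-rpc.polkadot.io'), // assumed endpoint
  });
  const keyring = new Keyring({ type: 'sr25519' });
  const owner = keyring.addFromUri('//Alice'); // dev accounts
  const delegate = keyring.addFromUri('//Bob');
  const recipient = keyring.addFromUri('//Charlie');
  const assetId = 1234; // hypothetical asset

  // The owner authorizes the delegate to move up to 50_000 units.
  await api.tx.assets
    .approveTransfer(assetId, delegate.address, 50_000n)
    .signAndSend(owner);

  // The delegate spends part of the allowance, moving funds owner -> recipient.
  await api.tx.assets
    .transferApproved(assetId, owner.address, recipient.address, 20_000n)
    .signAndSend(delegate);

  // The owner revokes whatever allowance remains.
  await api.tx.assets.cancelApproval(assetId, delegate.address).signAndSend(owner);

  await api.disconnect();
}

approvalFlow().catch(console.error);
```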

+

Foreign Assets

+

Foreign assets in Asset Hub refer to assets originating from external blockchains or parachains that are registered in the Asset Hub. These assets are typically native tokens from other parachains within the Polkadot ecosystem or bridged tokens from external blockchains such as Ethereum.

+

Once a foreign asset is registered in the Asset Hub by its originating blockchain's root origin, users are able to send these tokens to the Asset Hub and interact with them as they would any other asset within the Polkadot ecosystem.

+

Handling Foreign Assets

+

The Foreign Assets pallet, an instance of the Assets pallet, manages these assets. Since foreign assets are integrated into the same interface as native assets, developers can use the same functionalities, such as transferring and querying balances. However, there are important distinctions when dealing with foreign assets.

+
    +
  • +

    Asset identifier - unlike native assets, foreign assets are identified using an XCM Multilocation rather than a simple numeric AssetId. This multilocation identifier represents the cross-chain location of the asset and provides a standardized way to reference it across different parachains and relay chains

    +
  • +
  • +

    Transfers - once registered in the Asset Hub, foreign assets can be transferred between accounts, just like native assets. Users can also send these assets back to their originating blockchain if supported by the relevant cross-chain messaging mechanisms

    +
  • +
+

Integration

+

Asset Hub supports a variety of integration tools that make it easy for developers to manage assets and interact with the blockchain in their applications. The tools and libraries provided by Parity Technologies enable streamlined operations, such as querying asset information, building transactions, and monitoring cross-chain asset transfers.

+

Developers can integrate Asset Hub into their projects using these core tools:

+

API Sidecar

+

API Sidecar is a RESTful service that can be deployed alongside Polkadot and Kusama nodes. It provides endpoints to retrieve real-time blockchain data, including asset information. When used with Asset Hub, Sidecar allows querying:

+
    +
  • Asset look-ups - retrieve specific assets using AssetId
  • +
  • Asset balances - view the balance of a particular asset on Asset Hub
  • +
+

Public instances of API Sidecar connected to Asset Hub are available, such as:

+ +

These public instances are primarily for ad-hoc testing and quick checks.
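As an example of ad-hoc use, the sketch below queries one of these public instances over REST. The base URL and endpoint paths follow Sidecar's documented route patterns but should be treated as assumptions and verified against the substrate-api-sidecar docs; asset ID 1984 is used only as an example.

```typescript
// Base URL of the public Asset Hub Sidecar instance (assumed).
const BASE = 'https://polkadot-asset-hub-public-sidecar.parity-chains.parity.io';

async function sidecarLookups() {
  // Asset look-up by AssetId (1984 is used purely as an example ID).
  const info = await fetch(`${BASE}/pallets/assets/1984/asset-info`);
  console.log(await info.json());

  // Asset balances for one account, filtered to a single AssetId.
  const addr = '5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY';
  const balances = await fetch(`${BASE}/accounts/${addr}/asset-balances?assets[]=1984`);
  console.log(await balances.json());
}

sidecarLookups().catch(console.error);
```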

+

TxWrapper

+

TxWrapper is a library that simplifies constructing and signing transactions for Polkadot SDK-based chains, including Polkadot and Kusama. This tool includes support for working with Asset Hub, enabling developers to:

+
    +
  • Construct offline transactions
  • +
  • Leverage asset-specific functions such as minting, burning, and transferring assets
  • +
+

TxWrapper provides the flexibility needed to integrate asset operations into custom applications while maintaining the security and efficiency of Polkadot's transaction model.

+

Asset Transfer API

+

Asset Transfer API is a library focused on simplifying the construction of asset transfers for Polkadot SDK-based chains that involve system parachains like Asset Hub. It exposes a reduced set of methods that let users send transfers to other (para)chains or locally. Refer to the cross-chain support table for the current status of cross-chain support development.

+

Key features include:

+
    +
  • Support for cross-chain transfers between parachains
  • +
  • Streamlined transaction construction with support for the necessary parachain metadata
  • +
+

The API supports various asset operations, such as paying transaction fees with non-native tokens and managing asset liquidity.

+

Parachain Node

+

To fully leverage the Asset Hub's functionality, developers will need to run a system parachain node. Setting up an Asset Hub node allows users to interact with the parachain in real time, syncing data and participating in the broader Polkadot ecosystem. Guidelines for setting up an Asset Hub node are available in the Parity documentation.

+

Using these integration tools, developers can manage assets seamlessly and integrate Asset Hub functionality into their applications, leveraging Polkadot's powerful infrastructure.

+

XCM Transfer Monitoring

+

Since Asset Hub facilitates cross-chain asset transfers across the Polkadot ecosystem, XCM transfer monitoring becomes an essential practice for developers and infrastructure providers. This section outlines how to monitor the cross-chain movement of assets between parachains, the relay chain, and other systems.

+

Monitor XCM Deposits

+

As assets move between chains, tracking the cross-chain transfers in real time is crucial. Whether assets are transferred via a teleport from system parachains or through a reserve-backed transfer from any other parachain, each transfer emits a relevant event (such as the balances.minted event).

+

To ensure accurate monitoring of these events (see the sketch after this list):

+
    +
  • Track XCM deposits - query every new block created in the relay chain or Asset Hub, loop through the events array, and filter for any balances.minted events which confirm the asset was successfully transferred to the account
  • +
  • Track event origins - each balances.minted event points to a specific address. By monitoring this, service providers can verify that assets have arrived in the correct account
  • +
+
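A hedged polkadot-js sketch of that loop: subscribe to finalized heads on Asset Hub, read each block's events, and filter for balances.Minted. The endpoint is a placeholder assumption, and the event name assumes a runtime whose Balances pallet emits Minted.

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';

async function watchDeposits() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-asset-hub-rpc.polkadot.io'), // assumed endpoint
  });

  // For every finalized block, read its events and keep the Minted ones.
  await api.rpc.chain.subscribeFinalizedHeads(async (header) => {
    const at = await api.at(header.hash);
    const events = await at.query.system.events();

    for (const { event } of events) {
      // balances.Minted confirms funds arriving in an account.
      if (api.events.balances.Minted.is(event)) {
        const [who, amount] = event.data;
        console.log(`block ${header.number}: minted ${amount} to ${who}`);
      }
    }
  });
}

watchDeposits().catch(console.error);
```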

Track XCM Information Back to the Source

+

While the balances.minted event confirms the arrival of assets, there may be instances where you need to trace the origin of the cross-chain message that triggered the event. In such cases, you can:

+
    +
  1. Query the relevant chain at the block where the balances.minted event was emitted
  2. +
  3. Look for a messageQueue(Processed) event within that block's initialization. This event contains a parameter (Id) that identifies the cross-chain message received by the relay chain or Asset Hub. You can use this Id to trace the message back to its origin chain, offering full visibility of the asset transfer's journey
  4. +
+

Practical Monitoring Examples

+

The preceding sections outline the process of monitoring XCM deposits to specific accounts and then tracing back the origin of these deposits. The process of tracking an XCM transfer and the specific events to monitor may vary based on the direction of the XCM message. Here are some examples to showcase the slight differences:

+
    +
  • Transfer from parachain to relay chain - track parachainsystem(UpwardMessageSent) on the parachain and messagequeue(Processed) on the relay chain
  • +
  • Transfer from relay chain to parachain - track xcmPallet(sent) on the relay chain and dmpqueue(ExecutedDownward) on the parachain
  • +
  • Transfer between parachains - track xcmpqueue(XcmpMessageSent) on the system parachain and xcmpqueue(Success) on the destination parachain
  • +
+

Monitor for Failed XCM Transfers

+

Sometimes, XCM transfers may fail due to liquidity or other errors. Failed transfers emit specific error events, which are key to resolving issues in asset transfers. Monitoring for these failure events helps catch issues before they affect asset balances.

+
    +
  • Relay chain to system parachain - look for the dmpqueue(ExecutedDownward) event on the parachain with an Incomplete outcome and an error type such as UntrustedReserveLocation
  • +
  • Parachain to parachain - monitor for xcmpqueue(Fail) on the destination parachain with error types like TooExpensive
  • +
+

For detailed error management in XCM, see Gavin Wood's blog post on XCM Execution and Error Management.

+
diff --git a/polkadot-protocol/architecture/system-chains/bridge-hub/index.html b/polkadot-protocol/architecture/system-chains/bridge-hub/index.html
new file mode 100644
index 00000000..ab244373

Bridge Hub

+

Introduction

+

The Bridge Hub system parachain plays a crucial role in facilitating trustless interactions between Polkadot, Kusama, Ethereum, and other blockchain ecosystems. By implementing on-chain light clients and supporting protocols like BEEFY and GRANDPA, Bridge Hub ensures seamless message transmission and state verification across chains. It also provides essential pallets for sending and receiving messages, making it a cornerstone of Polkadot’s interoperability framework. With built-in support for XCM (Cross-Consensus Messaging), Bridge Hub enables secure, efficient communication between diverse blockchain networks.

+

This guide covers the architecture, components, and deployment of the Bridge Hub system. You'll explore its trustless bridging mechanisms, key pallets for various blockchains, and specific implementations like Snowbridge and the Polkadot <> Kusama bridge. By the end, you'll understand how Bridge Hub enhances connectivity within the Polkadot ecosystem and beyond.

+

Trustless Bridging

+

Bridge Hub provides a mode of trustless bridging through its implementation of on-chain light clients and trustless relayers. The target chain and source chain both provide ways of verifying one another's state and actions (such as a transfer) based on the consensus and finality of both chains rather than an external mechanism controlled by a third party.

+

BEEFY (Bridge Efficiency Enabling Finality Yielder) is instrumental in this solution. It provides a more efficient way to verify the consensus on the relay chain. It allows the participants in a network to verify finality proofs, meaning a remote chain like Ethereum can verify the state of Polkadot at a given block height.

+
+

Info

+

In this context, "trustless" refers to the lack of need to trust a human when interacting with various system components. Trustless systems are based instead on trusting mathematics, cryptography, and code.

+
+

Trustless bridges are essentially two one-way bridges, where each chain has a method of verifying the state of the other in a trustless manner through consensus proofs.

+

For example, the Ethereum and Polkadot bridging solution that Snowbridge implements involves two light clients: one which verifies the state of Polkadot and the other which verifies the state of Ethereum. The light client for Ethereum (a beacon chain light client) is implemented in the Bridge Hub runtime as a pallet, whereas the light client for Polkadot is implemented as a smart contract deployed on Ethereum.

+

Bridging Components

+

In any given Bridge Hub implementation (Kusama, Polkadot, or other relay chains), there are a few primary pallets that are utilized:

+ +

Ethereum-Specific Support

+

Bridge Hub also has a set of components and pallets that support a bridge between Polkadot and Ethereum through Snowbridge.

+

To view the complete list of which pallets are included in Bridge Hub, visit the Subscan Runtime Modules page. Alternatively, the source code for those pallets can be found in the Polkadot SDK Snowbridge Pallets repository.

+

Deployed Bridges

+
    +
  • Snowbridge - a general-purpose, trustless bridge between Polkadot and Ethereum
  • +
  • Hyperbridge - a cross-chain solution built as an interoperability coprocessor, providing state-proof-based interoperability across all blockchains
  • +
  • Polkadot <> Kusama Bridge - a bridge that utilizes relayers to bridge the Polkadot and Kusama relay chains trustlessly
  • +
+

Where to Go Next

+ +
\ No newline at end of file
diff --git a/polkadot-protocol/architecture/system-chains/coretime/index.html b/polkadot-protocol/architecture/system-chains/coretime/index.html
new file mode 100644
index 00000000..ee4e5d6c
--- /dev/null
+++ b/polkadot-protocol/architecture/system-chains/coretime/index.html
+ Coretime | Polkadot Developer Docs

Coretime

+ +

Introduction

+

The Coretime system chain facilitates the allocation, procurement, sale, and scheduling of bulk coretime, enabling tasks (such as parachains) to utilize the computation and security provided by Polkadot.

+

The Broker pallet, along with Cross Consensus Messaging (XCM), enables this functionality to be delegated to the system chain rather than the relay chain. Using Upward Message Passing (UMP) to the relay chain allows core assignments to take place for a task registered on the relay chain.

+

The Fellowship RFC RFC-1: Agile Coretime contains the specification for the Coretime system chain and coretime as a concept.

+

Besides core management, its responsibilities include:

+
    +
  • The number of cores that should be made available
  • +
  • Which tasks should be running on which cores and in what ratios
  • +
  • Accounting information for the on-demand pool
  • +
+

From the relay chain, it expects the following via Downward Message Passing (DMP):

+
    +
  • The number of cores available to be scheduled
  • +
  • Account information on on-demand scheduling
  • +
+

The details for this interface can be found in RFC-5: Coretime Interface.

+

Bulk Coretime Assignment

+

The Coretime chain allocates coretime before its usage. It also manages the ownership of a core. As cores are made up of regions (by default, one core is a single region), a region is recognized as a non-fungible asset. The Coretime chain exposes Regions over XCM as an NFT. Users can transfer individual regions, partition, interlace, or allocate them to a task. Regions describe how a task may use a core.

+
+

One core can contain more than one region.

+

A core can be considered a logical representation of an active validator set on the relay chain, where these validators commit to verifying the state changes for a particular task running on that region. With partitioning, having more than one region per core is possible, allowing for different computational schemes. Therefore, running more than one task on a single core is possible.

+
+ + +

Regions can be managed in the following manner on the Coretime chain (a Polkadot.js sketch follows the list):

  • Assigning regions - regions can be assigned to a task on the relay chain, such as a parachain/rollup, using the assign dispatchable

  • Transferring regions - regions may be transferred on the Coretime chain, upon which the transfer dispatchable in the Broker pallet would assign a new owner to that specific region

  • Partitioning regions - using the partition dispatchable, regions may be partitioned into two non-overlapping subregions within the same core. A partition involves specifying a pivot, wherein the new region will be defined and available for use

  • Interlacing regions - using the interlace dispatchable, interlacing regions allows a core to have alternative-compute strategies. Whereas partitioned regions are mutually exclusive, interlaced regions overlap because multiple tasks may utilize a single core in an alternating manner
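As an illustrative sketch of how these dispatchables might be submitted from a client, the snippet below uses Polkadot.js against a Coretime chain endpoint. The endpoint, region identifier, pivot, task ID, and signer are all hypothetical placeholders, and each call is an independent example (a real region would not undergo all of these operations in sequence).

// Sketch: managing an owned bulk coretime region via the Broker pallet
const { ApiPromise, WsProvider } = require('@polkadot/api');

async function manageRegion(signer) {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://INSERT_CORETIME_CHAIN_ENDPOINT'),
  });

  // A region is identified by its starting timeslice, core index, and core mask
  const regionId = { begin: 123456, core: 0, mask: '0xffffffffffffffffffff' };

  // Transfer the region to a new owner
  await api.tx.broker.transfer(regionId, 'INSERT_NEW_OWNER_ADDRESS').signAndSend(signer);

  // Partition the region into two subregions at a pivot timeslice
  await api.tx.broker.partition(regionId, 123500).signAndSend(signer);

  // Assign the region to a task (e.g., parachain 2000); 'Final' makes it irrevocable
  await api.tx.broker.assign(regionId, 2000, 'Final').signAndSend(signer);
}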
+

Coretime Availability

+

When bulk coretime is obtained, block production is not immediately available. It becomes available to produce blocks for a task in the next Coretime cycle. To view the status of the current or next Coretime cycle, go to the Subscan Coretime Dashboard.


For more information regarding these mechanisms, visit the coretime page on the Polkadot Wiki: Introduction to Agile Coretime.

+

On Demand Coretime

+

At the time of writing, on-demand coretime is deployed on the relay chain and will eventually be moved to the Coretime chain. On-demand coretime allows parachains (previously known as parathreads) to utilize available cores per block.

+

The Coretime chain also handles coretime sales, details of which can be found on the Polkadot Wiki: Agile Coretime: Coretime Sales.

+

Where to Go Next

+ +
\ No newline at end of file
diff --git a/polkadot-protocol/architecture/system-chains/index.html b/polkadot-protocol/architecture/system-chains/index.html
new file mode 100644
index 00000000..951ad27c
--- /dev/null
+++ b/polkadot-protocol/architecture/system-chains/index.html
+ System Chains | Polkadot Developer Docs

System Chains

+

Explore the critical roles Polkadot’s system chains play in enhancing the network’s efficiency and scalability. From managing on-chain assets with the Asset Hub to enabling seamless Web3 integration through the Bridge Hub and facilitating coretime operations with the Coretime chain, each system chain is designed to offload specialized tasks from the relay chain, optimizing the entire ecosystem.

+

These system chains are integral to Polkadot's architecture, ensuring that the relay chain remains focused on consensus and security while system chains handle vital functions like asset management, cross-chain communication, and resource allocation. By distributing responsibilities across specialized chains, Polkadot maintains high performance, scalability, and flexibility, enabling developers to build more efficient and interconnected blockchain solutions.

+

In This Section

+

+

+

\ No newline at end of file
diff --git a/polkadot-protocol/architecture/system-chains/overview/index.html b/polkadot-protocol/architecture/system-chains/overview/index.html
new file mode 100644
index 00000000..2a1086e6
--- /dev/null
+++ b/polkadot-protocol/architecture/system-chains/overview/index.html
+ Overview of Polkadot's System Chains | Polkadot Developer Docs

Overview

+ +

Introduction

+

Polkadot's relay chain is designed to secure parachains and facilitate seamless inter-chain communication. However, resource-intensive tasks like governance, asset management, and bridging are more efficiently handled by system parachains. These specialized chains offload functionality from the relay chain, leveraging Polkadot's parallel execution model to improve performance and scalability. By distributing key functionalities across system parachains, Polkadot can maximize its relay chain's blockspace for its core purpose of securing and validating parachains.

+

This guide will explore how system parachains operate within Polkadot and Kusama, detailing their critical roles in network governance, asset management, and bridging. You'll learn about the currently deployed system parachains, their unique functions, and how they enhance Polkadot's decentralized ecosystem.

+

System Chains

+

System parachains contain core Polkadot protocol features, but in parachains rather than the relay chain. Execution cores for system chains are allocated via network governance rather than purchasing coretime on a marketplace.

+

System parachains defer to on-chain governance to manage their upgrades and other sensitive actions as they do not have native tokens or governance systems separate from DOT or KSM. It is not uncommon to see a system parachain implemented specifically to manage network governance.

+
+

Note

+

You may see system parachains called common good parachains in articles and discussions. This nomenclature caused confusion as the network evolved, so system parachains is preferred.

+

For more details on this evolution, review this parachains forum discussion.

+
+

Existing System Chains

+
---
+title: System Parachains at a Glance
+---
+flowchart TB
+    subgraph POLKADOT["Polkadot"]
+        direction LR
+            PAH["Polkadot Asset Hub"]
+            PCOL["Polkadot Collectives"]
+            PBH["Polkadot Bridge Hub"]
+            PPC["Polkadot People Chain"]
+            PCC["Polkadot Coretime Chain"]
+    end
+
+    subgraph KUSAMA["Kusama"]
+        direction LR
+            KAH["Kusama Asset Hub"]
+            KBH["Kusama Bridge Hub"]
+            KPC["Kusama People Chain"]
+            KCC["Kusama Coretime Chain"]
+            E["Encointer"]
+        end
+

All system parachains are on both Polkadot and Kusama with the following exceptions:

  • Collectives - exists only on Polkadot; Kusama does not use a Collectives chain
  • Encointer - exists only on Kusama, where it provides Sybil resistance as a service to the entire Kusama ecosystem

Asset Hub

+

The Asset Hub is an asset portal for the entire network. It helps asset creators, such as reserve-backed stablecoin issuers, track the total issuance of an asset in the network, including amounts transferred to other parachains. It also serves as the hub where asset creators can perform on-chain operations, such as minting and burning, to manage their assets effectively.

+

This asset management logic is encoded directly in the runtime of the chain rather than in smart contracts. The efficiency of executing logic in a parachain allows for fees and deposits that are about 1/10th of what is required on the relay chain. These low fees mean that the Asset Hub is well suited for handling the frequent transactions required when managing balances, transfers, and on-chain assets.

+

The Asset Hub also supports non-fungible assets (NFTs) via the Uniques pallet and NFTs pallet. For more information about NFTs, see the Polkadot Wiki section on NFT Pallets.

+

Collectives

+

The Polkadot Collectives parachain was added in Referendum 81 and exists on Polkadot but not on Kusama. The Collectives chain hosts on-chain collectives that serve the Polkadot network, including the following:

+
    +
  • Polkadot Alliance - provides a set of ethics and standards for the community to follow. Includes an on-chain means to call out bad actors
  • +
  • Polkadot Technical Fellowship - a rules-based social organization to support and incentivize highly-skilled developers to contribute to the technical stability, security, and progress of the network
  • +
+

These on-chain collectives will play essential roles in the future of network stewardship and decentralized governance. Networks can use a bridge hub to help them act as collectives and express their legislative voices as single opinions within other networks.

+

Bridge Hub

+

Before parachains, the only way to design a bridge was to put the logic onto the relay chain. Since both networks now support parachains and the isolation they provide, each network can have a parachain dedicated to bridges.

+

Each network's Bridge Hub system parachain is responsible for facilitating bridges to the wider Web3 space. It contains the required bridge pallets in its runtime, which enable trustless bridging with other blockchain networks like Polkadot, Kusama, and Ethereum. The Bridge Hub uses the native token of the relay chain.

+

See the Bridge Hub documentation for additional information.

+

People Chain

+

The People Chain provides a naming system that allows users to manage and verify their account identity.

+

Coretime Chain

+

The Coretime system chain lets users buy coretime to access Polkadot's computation. Coretime marketplaces run on top of the Coretime chain.

+

Visit Introduction to Agile Coretime in the Polkadot Wiki for more information.

+

Encointer

+

Encointer is a blockchain platform for self-sovereign ID and a global universal basic income (UBI). The Encointer protocol uses a novel Proof of Personhood (PoP) system to create unique identities and resist Sybil attacks. PoP is based on the notion that a person can only be in one place at any given time. Encointer offers a framework that allows for any group of real people to create, distribute, and use their own digital community tokens.

+

Participants are requested to attend physical key-signing ceremonies with small groups of random people at randomized locations. These local meetings are part of one global signing ceremony occurring at the same time. Participants use the Encointer wallet app to participate in these ceremonies and manage local community currencies.

+

Referendums marking key Encointer adoption milestones include:

+ +
+

Tip

+

To learn more about Encointer, check out the official Encointer book or watch an Encointer ceremony in action.

+
+
\ No newline at end of file
diff --git a/polkadot-protocol/basics/accounts/index.html b/polkadot-protocol/basics/accounts/index.html
new file mode 100644
index 00000000..c03cd490
--- /dev/null
+++ b/polkadot-protocol/basics/accounts/index.html
+ Polkadot SDK Accounts | Polkadot Developer Docs

Accounts

+

Introduction

+

Accounts are essential for managing identity, transactions, and governance on the network in the Polkadot SDK. Understanding these components is critical for seamless development and operation on the network, whether you're building or interacting with Polkadot-based chains.

+

This page will guide you through the essential aspects of accounts, including their data structure, balance types, reference counters, and address formats. You’ll learn how accounts are managed within the runtime, how balances are categorized, and how addresses are encoded and validated.

+

Account Data Structure

+

Accounts are foundational to any blockchain, and the Polkadot SDK provides a flexible management system. This section explains how the Polkadot SDK defines accounts and manages their lifecycle through data structures within the runtime.

+

Account

+

The Account data type is a storage map within the System pallet that links an account ID to its corresponding data. This structure is fundamental for mapping account-related information within the chain.

+

The code snippet below shows how accounts are defined:

+
 /// The full account information for a particular account ID
+ #[pallet::storage]
+ #[pallet::getter(fn account)]
+ pub type Account<T: Config> = StorageMap<
+   _,
+   Blake2_128Concat,
+   T::AccountId,
+   AccountInfo<T::Nonce, T::AccountData>,
+   ValueQuery,
+ >;
+
+

The preceding code block defines a storage map named Account. The StorageMap is a type of on-chain storage that maps keys to values. In the Account map, the key is an account ID, and the value is the account's information. Here, T represents the generic parameter for the runtime configuration, which is defined by the pallet's configuration trait (Config).

+

The StorageMap consists of the following parameters:

+
    +
  • _ - used in macro expansion and acts as a placeholder for the storage prefix type. Tells the macro to insert the default prefix during expansion
  • +
  • Blake2_128Concat - the hashing function applied to keys in the storage map
  • +
  • T::AccountId - represents the key type, which corresponds to the account’s unique ID
  • +
  • AccountInfo<T::Nonce, T::AccountData> - the value type stored in the map. For each account ID, the map stores an AccountInfo struct containing:
      +
    • T::Nonce - a nonce for the account, which is incremented with each transaction to ensure transaction uniqueness
    • +
    • T::AccountData - custom account data defined by the runtime configuration, which could include balances, locked funds, or other relevant information
    • +
    +
  • +
  • ValueQuery - defines how queries to the storage map behave when no value is found; returns a default value instead of None
  • +
+
Additional information

For a detailed explanation of storage maps, refer to the StorageMap Rust docs.

+
+

Account Info

+

The AccountInfo structure is another key element within the System pallet, providing more granular details about each account's state. This structure tracks vital data, such as the number of transactions and the account’s relationships with other modules.

+
#[derive(Clone, Eq, PartialEq, Default, RuntimeDebug, Encode, Decode)]
+pub struct AccountInfo<Nonce, AccountData> {
+  pub nonce: Nonce,
+  pub consumers: RefCount,
+  pub providers: RefCount,
+  pub sufficients: RefCount,
+  pub data: AccountData,
+}
+
+

The AccountInfo structure includes the following components:

+
    +
  • nonce - tracks the number of transactions initiated by the account, which ensures transaction uniqueness and prevents replay attacks
  • +
  • consumers - counts how many other modules or pallets rely on this account’s existence. The account cannot be removed from the chain (reaped) until this count reaches zero
  • +
  • providers - tracks how many modules permit this account’s existence. An account can only be reaped once both providers and sufficients are zero
  • +
  • sufficients - represents the number of modules that allow the account to exist for internal purposes, independent of any other modules
  • +
  • AccountData - a flexible data structure that can be customized in the runtime configuration, usually containing balances or other user-specific data
  • +
+

This structure helps manage an account's state and prevents its premature removal while it is still referenced by other on-chain data or modules. The AccountInfo structure can vary as long as it satisfies the trait bounds defined by the AccountData associated type in the frame-system::pallet::Config trait.
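As a minimal sketch, this structure can be read from a live chain with Polkadot.js, which exposes the Account storage map described above as api.query.system.account (the endpoint and address below are placeholders):

// Sketch: reading an account's AccountInfo from chain state
const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://rpc.polkadot.io'),
  });

  const { nonce, consumers, providers, sufficients, data } =
    await api.query.system.account('INSERT_ADDRESS');

  // nonce and the reference counters mirror the AccountInfo fields above
  console.log(`nonce=${nonce} consumers=${consumers} providers=${providers} sufficients=${sufficients}`);
  // data is the runtime-defined AccountData (balances, for most chains)
  console.log(`free=${data.free} reserved=${data.reserved}`);
}

main();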

+

Account Reference Counters

+

Polkadot SDK uses reference counters to track an account’s dependencies across different runtime modules. These counters ensure that accounts remain active while data is associated with them.

+

The reference counters include:

+
    +
  • consumers - prevents account removal while other pallets still rely on the account
  • +
  • providers - ensures an account is active before other pallets store data related to it
  • +
  • sufficients - indicates the account’s independence, ensuring it can exist even without a native token balance, such as when holding sufficient alternative assets
  • +
+

Providers Reference Counters

+

The providers counter ensures that an account is ready to be depended upon by other runtime modules. For example, it is incremented when an account has a balance above the existential deposit, which marks the account as active.

+

The system requires this reference counter to be greater than zero for the consumers counter to be incremented, ensuring the account is stable before any dependencies are added.

+

Consumers Reference Counters

+

The consumers counter ensures that the account cannot be reaped until all references to it across the runtime have been removed. This check prevents the accidental deletion of accounts that still have active on-chain data.

+

It is the user’s responsibility to clear out any data from other runtime modules if they wish to remove their account and reclaim their existential deposit.

+

Sufficients Reference Counter

+

The sufficients counter tracks accounts that can exist independently without relying on a native account balance. This is useful for accounts holding other types of assets, like tokens, without needing a minimum balance in the native token.

+

For instance, the Assets pallet may increment this counter for an account holding sufficient tokens.

+

Account Deactivation

+

In Polkadot SDK-based chains, an account is deactivated when its reference counters (such as providers, consumers, and sufficients) reach zero. These counters ensure the account remains active as long as other runtime modules or pallets reference it.

+

When all dependencies are cleared and the counters drop to zero, the account becomes deactivated and may be removed from the chain (reaped). This is particularly important in Polkadot SDK-based blockchains, where accounts with balances below the existential deposit threshold are pruned from storage to conserve state resources.

+

Each pallet that references an account has cleanup functions that decrement these counters when the pallet no longer depends on the account. Once these counters reach zero, the account is marked for deactivation.

+

Updating Counters

+

The Polkadot SDK provides runtime developers with various methods to manage account lifecycle events, such as deactivation or incrementing reference counters. These methods ensure that accounts cannot be reaped while still in use.

+

The following helper functions manage these counters:

+
    +
  • inc_consumers() - increments the consumer reference counter for an account, signaling that another pallet depends on it
  • +
  • dec_consumers() - decrements the consumer reference counter, signaling that a pallet no longer relies on the account
  • +
  • inc_providers() - increments the provider reference counter, ensuring the account remains active
  • +
  • dec_providers() - decrements the provider reference counter, allowing for account deactivation when no longer in use
  • +
  • inc_sufficients() - increments the sufficient reference counter for accounts that hold sufficient assets
  • +
  • dec_sufficients() - decrements the sufficient reference counter
  • +
+

To ensure proper account cleanup and lifecycle management, a corresponding decrement should be made for each increment action.

+

The System pallet offers three query functions to assist developers in tracking account states:

+
    +
  • can_inc_consumer() - checks if the account can safely increment the consumer reference
  • +
  • can_dec_provider() - ensures that no consumers exist before allowing the decrement of the provider counter
  • +
  • is_provider_required() - verifies whether the account still has any active consumer references
  • +
+

This modular and flexible system of reference counters tightly controls the lifecycle of accounts in Polkadot SDK-based blockchains, preventing the accidental removal or retention of unneeded accounts. You can refer to the System pallet Rust docs for more details.

+

Account Balance Types

+

In the Polkadot ecosystem, account balances are categorized into different types based on how the funds are utilized and their availability. These balance types determine the actions that can be performed, such as transferring tokens, paying transaction fees, or participating in governance activities. Understanding these balance types helps developers manage user accounts and implement balance-dependent logic.

+
+

A more efficient distribution of account balance types is in development

+

Soon, pallets in the Polkadot SDK will implement the Fungible trait (see the tracking issue for more details). For example, the transaction-storage pallet changed its implementation from the Currency trait (see the Refactor transaction storage pallet to use fungible traits PR for further details):

+
type BalanceOf<T> = <<T as Config>::Currency as Currency<<T as frame_system::Config>::AccountId>>::Balance;
+
+

To the Fungible trait:

+
type BalanceOf<T> = <<T as Config>::Currency as FnInspect<<T as frame_system::Config>::AccountId>>::Balance;
+
+

This update will enable more efficient use of account balances, allowing the free balance to be utilized for on-chain activities such as setting proxies and managing identities.

+
+

Balance Types

+

The five main balance types are:

+
    +
  • Free balance - represents the total tokens available to the account for any on-chain activity, including staking, governance, and voting. However, it may not be fully spendable or transferrable if portions of it are locked or reserved
  • +
  • Locked balance - portions of the free balance that cannot be spent or transferred because they are tied up in specific activities like staking, vesting, or participating in governance. While the tokens remain part of the free balance, they are non-transferable for the duration of the lock
  • +
  • Reserved balance - funds locked by specific system actions, such as setting up an identity, creating proxies, or submitting deposits for governance proposals. These tokens are not part of the free balance and cannot be spent unless they are unreserved
  • +
  • Spendable balance - the portion of the free balance that is available for immediate spending or transfers. It is calculated by subtracting the maximum of locked or reserved amounts from the free balance, ensuring that existential deposit limits are met
  • +
  • Untouchable balance - funds that cannot be directly spent or transferred but may still be utilized for on-chain activities, such as governance participation or staking. These tokens are typically tied to certain actions or locked for a specific period
  • +
+

The spendable balance is calculated as follows:

+
spendable = free - max(locked - reserved, ED)
+
+

Here, free, locked, and reserved are defined above. The ED represents the existential deposit, the minimum balance required to keep an account active and prevent it from being reaped. You may find you can't see all balance types when looking at your account via a wallet. Wallet providers often display only spendable, locked, and reserved balances.
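As a quick sketch of the arithmetic, the formula can be expressed directly in code (the numbers are illustrative, in arbitrary units):

// Sketch: spendable = free - max(locked - reserved, ED)
function spendable(free, locked, reserved, ed) {
  const frozen = locked - reserved > ed ? locked - reserved : ed;
  return free - frozen;
}

// e.g., free = 100, locked = 80, reserved = 10, ED = 1  =>  spendable = 30
console.log(spendable(100n, 80n, 10n, 1n));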

+

Locks

+

Locks are applied to an account's free balance, preventing that portion from being spent or transferred. Locks are automatically placed when an account participates in specific on-chain activities, such as staking or governance. Although multiple locks may be applied simultaneously, they do not stack. Instead, the largest lock determines the total amount of locked tokens.

+

Locks follow these basic rules:

+
    +
  • If different locks apply to varying amounts, the largest lock amount takes precedence
  • +
  • If multiple locks apply to the same amount, the lock with the longest duration governs when the balance can be unlocked
  • +
+

Locks Example

+

Consider an example where an account has 80 DOT locked for both staking and governance purposes like so:

+
    +
  • 80 DOT is staked with a 28-day lock period
  • +
  • 24 DOT is locked for governance with a 1x conviction and a 7-day lock period
  • +
  • 4 DOT is locked for governance with a 6x conviction and a 224-day lock period
  • +
+

In this case, the total locked amount is 80 DOT because only the largest lock (80 DOT from staking) governs the locked balance. These 80 DOT will be released at different times based on the lock durations. In this example, the 24 DOT locked for governance will be released first since the shortest lock period is seven days. The 80 DOT stake with a 28-day lock period is released next. Now, all that remains locked is the 4 DOT for governance. After 224 days, all 80 DOT (minus the existential deposit) will be free and transferrable.

+

Illustration of Lock Example

+

Edge Cases for Locks

+

In scenarios where multiple convictions and lock periods are active, the lock duration and amount are determined by the longest period and largest amount. For example, if you delegate with different convictions and attempt to undelegate during an active lock period, the lock may be extended for the full amount of tokens. For a detailed discussion on edge case lock behavior, see this Stack Exchange post.

+

Balance Types on Polkadot.js

+

Polkadot.js provides a user-friendly interface for managing and visualizing various account balances on Polkadot and Kusama networks. When interacting with Polkadot.js, you will encounter multiple balance types that are critical for understanding how your funds are distributed and restricted. This section explains how different balances are displayed in the Polkadot.js UI and what each type represents.

+

+

The most common balance types displayed on Polkadot.js are:

+
    +
  • +

    Total balance - the total number of tokens available in the account. This includes all tokens, whether they are transferable, locked, reserved, or vested. However, the total balance does not always reflect what can be spent immediately. In this example, the total balance is 0.6274 KSM

    +
  • +
  • +

    Transferrable balance - shows how many tokens are immediately available for transfer. It is calculated by subtracting the locked and reserved balances from the total balance. For example, if an account has a total balance of 0.6274 KSM and a transferrable balance of 0.0106 KSM, only the latter amount can be sent or spent freely

    +
  • +
  • +

Vested balance - tokens that are allocated to the account but are released according to a specific schedule. Vested tokens remain locked and cannot be transferred until fully vested. For example, an account with a vested balance of 0.2500 KSM means that this amount is owned but not yet transferable

    +
  • +
  • +

    Locked balance - tokens that are temporarily restricted from being transferred or spent. These locks typically result from participating in staking, governance, or vested transfers. In Polkadot.js, locked balances do not stack—only the largest lock is applied. For instance, if an account has 0.5500 KSM locked for governance and staking, the locked balance would display 0.5500 KSM, not the sum of all locked amounts

    +
  • +
  • +

    Reserved balance - refers to tokens locked for specific on-chain actions, such as setting an identity, creating a proxy, or making governance deposits. Reserved tokens are not part of the free balance, but can be freed by performing certain actions. For example, removing an identity would unreserve those funds

    +
  • +
  • +

    Bonded balance - the tokens locked for staking purposes. Bonded tokens are not transferrable until they are unbonded after the unbonding period

    +
  • +
  • +

    Redeemable balance - the number of tokens that have completed the unbonding period and are ready to be unlocked and transferred again. For example, if an account has a redeemable balance of 0.1000 KSM, those tokens are now available for spending

    +
  • +
  • +

    Democracy balance - reflects the number of tokens locked for governance activities, such as voting on referenda. These tokens are locked for the duration of the governance action and are only released after the lock period ends

    +
  • +
+

By understanding these balance types and their implications, developers and users can better manage their funds and engage with on-chain activities more effectively.

+

Address Formats

+

The SS58 address format is a core component of the Polkadot SDK that enables accounts to be uniquely identified across Polkadot-based networks. This format is a modified version of Bitcoin's Base58Check encoding, specifically designed to accommodate the multi-chain nature of the Polkadot ecosystem. SS58 encoding allows each chain to define its own set of addresses while maintaining compatibility and checksum validation for security.

+

Basic Format

+

SS58 addresses consist of three main components:

+
base58encode(concat(<address-type>, <address>, <checksum>))
+
+
    +
  • Address type - a byte or set of bytes that define the network (or chain) for which the address is intended. This ensures that addresses are unique across different Polkadot SDK-based chains
  • +
  • Address - the public key of the account encoded as bytes
  • +
  • Checksum - a hash-based checksum which ensures that addresses are valid and unaltered. The checksum is derived from the concatenated address type and address components, ensuring integrity
  • +
+

The encoding process transforms the concatenated components into a Base58 string, providing a compact and human-readable format that avoids easily confused characters (e.g., zero '0', capital 'O', lowercase 'l'). This encoding function (encode) is implemented exactly as defined in Bitcoin and IPFS specifications, using the same alphabet as both implementations.

+
Additional information

Refer to Ss58Codec for more details on the SS58 address format implementation.

+
+

Address Type

+

The address type defines how an address is interpreted and to which network it belongs. Polkadot SDK uses different prefixes to distinguish between various chains and address formats:

+
    +
  • Address types 0-63 - simple addresses, commonly used for network identifiers
  • +
  • Address types 64-127 - full addresses that support a wider range of network identifiers
  • +
  • Address types 128-255 - reserved for future address format extensions
  • +
+

For example, Polkadot’s main network uses an address type of 0, while Kusama uses 2. This ensures that addresses can be used without confusion between networks.

+

The address type is always encoded as part of the SS58 address, making it easy to quickly identify the network. Refer to the SS58 registry for the canonical listing of all address type identifiers and how they map to Polkadot SDK-based networks.
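As a brief sketch, the same public key can be re-encoded under different network prefixes with Polkadot.js utilities (the input address is a placeholder; prefix 0 is Polkadot, 2 is Kusama, and 42 is the generic Substrate prefix):

// Sketch: re-encoding one public key under several SS58 address types
const { decodeAddress, encodeAddress } = require('@polkadot/util-crypto');

const publicKey = decodeAddress('INSERT_ADDRESS'); // raw 32-byte public key
console.log('Polkadot (0):', encodeAddress(publicKey, 0));
console.log('Kusama   (2):', encodeAddress(publicKey, 2));
console.log('Generic (42):', encodeAddress(publicKey, 42));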

+

Address Length

+

SS58 addresses can have different lengths depending on the specific format. Address lengths range from as short as 3 to 35 bytes, depending on the complexity of the address and network requirements. This flexibility allows SS58 addresses to adapt to different chains while providing a secure encoding mechanism.

Total | Type | Raw account | Checksum
------|------|-------------|---------
  3   |  1   |      1      |    1
  4   |  1   |      2      |    1
  5   |  1   |      2      |    2
  6   |  1   |      4      |    1
  7   |  1   |      4      |    2
  8   |  1   |      4      |    3
  9   |  1   |      4      |    4
 10   |  1   |      8      |    1
 11   |  1   |      8      |    2
 12   |  1   |      8      |    3
 13   |  1   |      8      |    4
 14   |  1   |      8      |    5
 15   |  1   |      8      |    6
 16   |  1   |      8      |    7
 17   |  1   |      8      |    8
 35   |  1   |     32      |    2

SS58 addresses also support different payload sizes, allowing a flexible range of account identifiers.

+

Checksum Types

+

A checksum is applied to validate SS58 addresses. Polkadot SDK uses a Blake2b-512 hash function to calculate the checksum, which is appended to the address before encoding. The checksum length can vary depending on the address format (e.g., 1-byte, 2-byte, or longer), providing varying levels of validation strength.

+

The checksum ensures that an address is not modified or corrupted, adding an extra layer of security for account management.

+

Validating Addresses

+

SS58 addresses can be validated using the subkey command-line interface or the Polkadot.js API. These tools help ensure an address is correctly formatted and valid for the intended network. The following sections will provide an overview of how validation works with these tools.

+

Using Subkey

+

Subkey is a CLI tool provided by Polkadot SDK for generating and managing keys. It can inspect and validate SS58 addresses.

+

The inspect command gets a public key and an SS58 address from the provided secret URI. The basic syntax for the subkey inspect command is:

+
subkey inspect [flags] [options] uri
+
+

For the uri command-line argument, you can specify the secret seed phrase, a hex-encoded private key, or an SS58 address. If the input is a valid address, the subkey program displays the corresponding hex-encoded public key, account identifier, and SS58 addresses.

+

For example, to inspect the public keys derived from a secret seed phrase, you can run a command similar to the following:

+
subkey inspect "caution juice atom organ advance problem want pledge someone senior holiday very"
+
+

The command displays output similar to the following:

+
+

subkey inspect "caution juice atom organ advance problem want pledge someone senior holiday very" + Secret phrase caution juice atom organ advance problem want pledge someone senior holiday very is account: + Secret seed: 0xc8fa03532fb22ee1f7f6908b9c02b4e72483f0dbd66e4cd456b8f34c6230b849 + Public key (hex): 0xd6a3105d6768e956e9e5d41050ac29843f98561410d3a47f9dd5b3b227ab8746 + Public key (SS58): 5Gv8YYFu8H1btvmrJy9FjjAWfb99wrhV3uhPFoNEr918utyR + Account ID: 0xd6a3105d6768e956e9e5d41050ac29843f98561410d3a47f9dd5b3b227ab8746 + SS58 Address: 5Gv8YYFu8H1btvmrJy9FjjAWfb99wrhV3uhPFoNEr918utyR

+
+

The subkey program assumes an address is based on a public/private key pair. If you inspect an address, the command returns the 32-byte account identifier.

+

However, not all addresses in Polkadot SDK-based networks are based on keys.

+

Depending on the command-line options you specify and the input you provided, the command output might also display the network for which the address has been encoded. For example:

+
subkey inspect "12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU"
+
+

The command displays output similar to the following:

+
+

subkey inspect "12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU" + Public Key URI 12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU is account: + Network ID/Version: polkadot + Public key (hex): 0x46ebddef8cd9bb167dc30878d7113b7e168e6f0646beffd77d69d39bad76b47a + Account ID: 0x46ebddef8cd9bb167dc30878d7113b7e168e6f0646beffd77d69d39bad76b47a + Public key (SS58): 12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU + SS58 Address: 12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU

+
+

Using Polkadot.js API

+

To verify an address in JavaScript or TypeScript projects, you can use the functions built into the Polkadot.js API. For example:

+
// Import Polkadot.js API dependencies
+const { decodeAddress, encodeAddress } = require('@polkadot/keyring');
+const { hexToU8a, isHex } = require('@polkadot/util');
+
+// Specify an address to test.
+const address = 'INSERT_ADDRESS_TO_TEST';
+
+// Check address
+const isValidSubstrateAddress = () => {
+  try {
+    encodeAddress(isHex(address) ? hexToU8a(address) : decodeAddress(address));
+
+    return true;
+  } catch (error) {
+    return false;
+  }
+};
+
+// Query result
+const isValid = isValidSubstrateAddress();
+console.log(isValid);
+
+

If the function returns true, the specified address is a valid address.

+

Other SS58 Implementations

+

Support for encoding and decoding Polkadot SDK SS58 addresses has been implemented in several other languages and libraries.

+ +
\ No newline at end of file
diff --git a/polkadot-protocol/basics/blocks-transactions-fees/blocks/index.html b/polkadot-protocol/basics/blocks-transactions-fees/blocks/index.html
new file mode 100644
index 00000000..39acb549
--- /dev/null
+++ b/polkadot-protocol/basics/blocks-transactions-fees/blocks/index.html
+ Blocks | Polkadot Developer Docs

Blocks

+

Introduction

+

In the Polkadot SDK, blocks are fundamental to the functioning of the blockchain, serving as containers for transactions and changes to the chain's state. Blocks consist of headers and an array of transactions, ensuring the integrity and validity of operations on the network. This guide explores the essential components of a block, the process of block production, and how blocks are validated and imported across the network. By understanding these concepts, developers can better grasp how blockchains maintain security, consistency, and performance within the Polkadot ecosystem.

+

What is a Block?

+

In the Polkadot SDK, a block is a fundamental unit that encapsulates both the header and an array of transactions. The block header includes critical metadata to ensure the integrity and sequence of the blockchain. Here's a breakdown of its components:

+
    +
  • Block height - indicates the number of blocks created in the chain so far
  • +
  • Parent hash - the hash of the previous block, providing a link to maintain the blockchain's immutability
  • +
  • Transaction root - cryptographic digest summarizing all transactions in the block
  • +
  • State root - a cryptographic digest representing the post-execution state
  • +
  • Digest - additional information that can be attached to a block, such as consensus-related messages
  • +
+

Each transaction is part of a series that is executed according to the runtime's rules. The transaction root is a cryptographic digest of this series, which prevents alterations and enables succinct verification by light clients. This verification process allows light clients to confirm whether a transaction exists in a block with only the block header, avoiding downloading the entire block.
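As a small sketch, the header fields described above can be inspected on a live chain by subscribing to new block headers with Polkadot.js (the endpoint is a placeholder):

// Sketch: printing the header components of each new block
const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://rpc.polkadot.io'),
  });

  await api.rpc.chain.subscribeNewHeads((header) => {
    console.log(`block height:     ${header.number}`);
    console.log(`parent hash:      ${header.parentHash}`);
    console.log(`state root:       ${header.stateRoot}`);
    console.log(`transaction root: ${header.extrinsicsRoot}`);
  });
}

main();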

+

Block Production

+

When an authoring node is authorized to create a new block, it selects transactions from the transaction queue based on priority. This step, known as block production, relies heavily on the executive module to manage the initialization and finalization of blocks. The process is summarized as follows:

+

Initialize Block

+

The block initialization process begins with a series of function calls that prepare the block for transaction execution:

+
    +
  1. Call on_initialize - the executive module calls the on_initialize hook from the system pallet and other runtime pallets to prepare for the block's transactions
  2. +
  3. Coordinate runtime calls - coordinates function calls in the order defined by the transaction queue
  4. +
  5. Verify information - once on_initialize functions are executed, the executive module checks the parent hash in the block header and the trie root to verify information is consistent
  6. +
+

Finalize Block

+

Once transactions are processed, the block must be finalized before being broadcast to the network. The finalization steps are as follows:

+
    +
  1. Call on_finalize - the executive module calls the on_finalize hooks in each pallet to ensure any remaining state updates or checks are completed before the block is sealed and published
  2. +
  3. Verify information - the block's digest and storage root in the header are checked against the initialized block to ensure consistency
  4. +
  5. Call on_idle - the on_idle hook is triggered to process any remaining tasks using the leftover weight from the block
  6. +
+

Block Authoring and Import

+

Once the block is finalized, it is gossiped to other nodes in the network. Nodes follow this procedure:

+
    +
  1. Receive transactions - the authoring node collects transactions from the network
  2. +
  3. Validate - transactions are checked for validity
  4. +
  5. Queue - valid transactions are placed in the transaction pool for execution
  6. +
  7. Execute - state changes are made as the transactions are executed
  8. +
  9. Publish - the finalized block is broadcast to the network
  10. +
+

Block Import Queue

+

After a block is published, other nodes on the network can import it into their chain state. The block import queue is part of the outer node in every Polkadot SDK-based node and ensures incoming blocks are valid before adding them to the node's state.

+

In most cases, you don't need to know details about how transactions are gossiped or how other nodes on the network import blocks. The following traits are relevant, however, if you plan to write any custom consensus logic or want a deeper dive into the block import queue:

+
    +
  • ImportQueue - the trait that defines the block import queue
  • +
  • Link - the trait that defines the link between the block import queue and the network
  • +
  • BasicQueue - a basic implementation of the block import queue
  • +
  • Verifier - the trait that defines the block verifier
  • +
  • BlockImport - the trait that defines the block import process
  • +
+

These traits govern how blocks are validated and imported across the network, ensuring consistency and security.

+
Additional information

Refer to the Block reference to learn more about the block structure in the Polkadot SDK runtime.

+
+
\ No newline at end of file
diff --git a/polkadot-protocol/basics/blocks-transactions-fees/fees/index.html b/polkadot-protocol/basics/blocks-transactions-fees/fees/index.html
new file mode 100644
index 00000000..b35a2a63
--- /dev/null
+++ b/polkadot-protocol/basics/blocks-transactions-fees/fees/index.html
+ Transactions Weights and Fees | Polkadot Developer Docs

Transactions Weights and Fees

+

Introduction

+

When transactions are executed, or data is stored on-chain, the activity changes the chain's state and consumes blockchain resources. Because the resources available to a blockchain are limited, managing how operations on-chain consume them is important. In addition to being limited in practical terms, such as storage capacity, blockchain resources represent a potential attack vector for malicious users. For example, a malicious user might attempt to overload the network with messages to stop the network from producing new blocks. To protect blockchain resources from being drained or overloaded, you need to manage how they are made available and how they are consumed. The resources to be aware of include:

+
    +
  • Memory usage
  • +
  • Storage input and output
  • +
  • Computation
  • +
  • Transaction and block size
  • +
  • State database size
  • +
+

The Polkadot SDK provides block authors with several ways to manage access to resources and to prevent individual components of the chain from consuming too much of any single resource. Two of the most important mechanisms available to block authors are weights and transaction fees.

+

Weights manage the time it takes to validate a block and characterize the time it takes to execute the calls in the block's body. By controlling the execution time a block can consume, weights set limits on storage input, output, and computation.

+

Some of the weight allowed for a block is consumed as part of the block's initialization and finalization. The weight might also be used to execute mandatory inherent extrinsic calls. To help ensure blocks don’t consume too much execution time and prevent malicious users from overloading the system with unnecessary calls, weights are combined with transaction fees.

+

Transaction fees provide an economic incentive to limit execution time, computation, and the number of calls required to perform operations. Transaction fees are also used to make the blockchain economically sustainable because they are typically applied to transactions initiated by users and deducted before a transaction request is executed.

+

How Fees are Calculated

+

The final fee for a transaction is calculated using the following parameters:

+
    +
  • base fee - this is the minimum amount a user pays for a transaction. It is declared as a base weight in the runtime and converted to a fee using the WeightToFee conversion
  • +
  • weight fee - a fee proportional to the execution time (input and output and computation) that a transaction consumes
  • +
  • length fee - a fee proportional to the encoded length of the transaction
  • +
  • tip - an optional tip to increase the transaction’s priority, giving it a higher chance to be included in the transaction queue
  • +
+

The base fee and proportional weight and length fees constitute the inclusion fee. The inclusion fee is the minimum fee that must be available for a transaction to be included in a block.

+
inclusion fee = base fee + weight fee + length fee
+
+

Transaction fees are withdrawn before the transaction is executed. After the transaction is executed, the weight can be adjusted to reflect the resources used. If a transaction uses fewer resources than expected, the transaction fee is corrected, and the adjusted transaction fee is deposited.

+

Using the Transaction Payment Pallet

+

The Transaction Payment pallet provides the basic logic for calculating the inclusion fee. You can also use the Transaction Payment pallet to:

+ +

You can learn more about these configuration traits in the Transaction Payment documentation.

+

Understanding the Inclusion Fee

+

The formula for calculating the inclusion fee is as follows:

+
inclusion_fee = base_fee + length_fee + [targeted_fee_adjustment * weight_fee]
+
+

And then, for calculating the final fee:

+
final_fee = inclusion_fee + tip
+
+

In the first formula, the targeted_fee_adjustment is a multiplier that can tune the final fee based on the network’s congestion.

+
    +
  • The base_fee derived from the base weight covers inclusion overhead like signature verification
  • +
  • The length_fee is a per-byte fee that is multiplied by the length of the encoded extrinsic
  • +
  • The weight_fee is calculated using two parameters:
  • +
  • The ExtrinsicBaseWeight that is declared in the runtime and applies to all extrinsics
  • +
  • The #[pallet::weight] annotation that accounts for an extrinsic's complexity
  • +
+

To convert the weight to Currency, the runtime must define a WeightToFee struct that implements a conversion function, Convert<Weight,Balance>.

+

Note that the extrinsic sender is charged the inclusion fee before the extrinsic is invoked. The fee is deducted from the sender's balance even if the transaction fails upon execution.
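As a sketch of how the inclusion fee can be estimated before submission, Polkadot.js exposes paymentInfo on a constructed extrinsic; partialFee is the inclusion fee without any tip. The addresses are placeholders, and the transferKeepAlive call assumes a recent Balances pallet (older runtimes use transfer):

// Sketch: estimating the inclusion fee for an extrinsic
const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://rpc.polkadot.io'),
  });

  const tx = api.tx.balances.transferKeepAlive('INSERT_DEST_ADDRESS', 1000000000n);
  const { weight, partialFee } = await tx.paymentInfo('INSERT_SENDER_ADDRESS');

  // partialFee = base fee + length fee + (adjusted) weight fee
  console.log(`weight=${weight} inclusion fee=${partialFee.toHuman()}`);
}

main();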

+

Accounts with an Insufficient Balance

+

If an account does not have a sufficient balance to pay the inclusion fee and remain alive—that is, enough to pay the inclusion fee and maintain the minimum existential deposit—then you should ensure the transaction is canceled so that no fee is deducted and the transaction does not begin execution.

+

The Polkadot SDK doesn't enforce this rollback behavior. However, this scenario would be rare because the transaction queue and block-making logic perform checks to prevent it before adding an extrinsic to a block.

+

Fee Multipliers

+

The inclusion fee formula always results in the same fee for the same input. However, weight can be dynamic and—based on how WeightToFee is defined—the final fee can include some degree of variability. The Transaction Payment pallet provides the FeeMultiplierUpdate configurable parameter to account for this variability.

+

The default update function is inspired by the Polkadot network and implements a targeted adjustment in which a target saturation level of block weight is defined. If the previous block is more saturated, the fees increase slightly. Similarly, if the last block has fewer transactions than the target, fees are decreased by a small amount. For more information about fee multiplier adjustments, see the Web3 Research Page.

+

Transactions with Special Requirements

+

Inclusion fees must be computable before execution and can only represent fixed logic. Some transactions warrant limiting resources with other strategies. For example:

+
    +
  • Bonds are a type of fee that might be returned or slashed after some on-chain event. For example, you might want to require users to place a bond to participate in a vote. The bond might then be returned at the end of the referendum or slashed if the voter attempted malicious behavior
  • +
  • Deposits are fees that might be returned later. For example, you might require users to pay a deposit to execute an operation that uses storage. The user’s deposit could be returned if a subsequent operation frees up storage
  • +
  • Burn operations are used to pay for a transaction based on its internal logic. For example, a transaction might burn funds from the sender if the transaction creates new storage items to pay for the increased state size
  • +
  • Limits enable you to enforce constant or configurable limits on specific operations. For example, the default Staking pallet only allows nominators to nominate 16 validators to limit the complexity of the validator election process
  • +
+

It is important to note that if you query the chain for a transaction fee, it only returns the inclusion fee.

+

Default Weight Annotations

+

All dispatchable functions in the Polkadot SDK must specify a weight. The way of doing that is using the annotation-based system that lets you combine fixed values for database read/write weight and/or fixed values based on benchmarks. The most basic example would look like this:

+
#[pallet::weight(100_000)]
+fn my_dispatchable() {
+    // ...
+}
+
+

Note that the ExtrinsicBaseWeight is automatically added to the declared weight to account for the costs of simply including an empty extrinsic into a block.

+

Weights and Database Read/Write Operations

+

To make weight annotations independent of the deployed database backend, they are defined as a constant and then used in the annotations when expressing database accesses performed by the dispatchable:

+
#[pallet::weight(T::DbWeight::get().reads_writes(1, 2) + 20_000)]
+fn my_dispatchable() {
+    // ...
+}
+
+

This dispatchable declares one database read and two database writes, in addition to a fixed 20,000 units of weight for its other logic. A database access is counted, in general, every time a value declared inside the #[pallet::storage] block is accessed. However, only unique accesses are counted, because after a value is accessed, it is cached, and accessing it again does not result in a database operation. That is:

+
    +
  • Multiple reads of the exact value count as one read
  • +
  • Multiple writes of the exact value count as one write
  • +
  • Multiple reads of the same value, followed by a write to that value, count as one read and one write
  • +
  • A write followed by a read counts as only one write
  • +
+

Dispatch Classes

+

Dispatches are broken into three classes:

+
    +
  • Normal
  • +
  • Operational
  • +
  • Mandatory
  • +
+

If a dispatch is not defined as Operational or Mandatory in the weight annotation, the dispatch is identified as Normal by default. You can specify that the dispatchable uses another class like this:

+
#[pallet::weight((100_000, DispatchClass::Operational))]
+fn my_dispatchable() {
+    // ...
+}
+
+

This tuple notation also allows you to specify a final argument determining whether the user is charged based on the annotated weight. If you don't specify otherwise, Pays::Yes is assumed:

+
#[pallet::weight((100_000, DispatchClass::Normal, Pays::No))]
+fn my_dispatchable() {
+    // ...
+}
+
+

Normal Dispatches

+

Dispatches in this class represent normal user-triggered transactions. These types of dispatches only consume a portion of a block's total weight limit. For information about the maximum portion of a block that can be consumed for normal dispatches, see AvailableBlockRatio. Normal dispatches are sent to the transaction pool.

+

Operational Dispatches

+

Unlike normal dispatches, which represent the usage of network capabilities, operational dispatches are those that provide network capabilities. Operational dispatches can consume the entire weight limit of a block. They are not bound by the AvailableBlockRatio. Dispatches in this class are given maximum priority and are exempt from paying the length_fee.

+

Mandatory Dispatches

+

Mandatory dispatches are included in a block even if they cause the block to surpass its weight limit. You can only use the mandatory dispatch class for inherent transactions that the block author submits. This dispatch class is intended to represent functions in the block validation process. Because these dispatches are always included in a block regardless of the function weight, the validation process must prevent malicious nodes from abusing the function to craft valid but impossibly heavy blocks. You can typically accomplish this by ensuring that:

+
    +
  • The operation performed is always light
  • +
  • The operation can only be included in a block once
  • +
+

To make it more difficult for malicious nodes to abuse mandatory dispatches, they cannot be included in blocks that return errors. This dispatch class reflects the assumption that it is better to allow an overweight block to be created than to allow no block to be created at all.

+

Dynamic Weights

+

In addition to purely fixed weights and constants, the weight calculation can consider the input arguments of a dispatchable. The weight should be trivially computable from the input arguments with some basic arithmetic:

+
use frame_support::{
+    dispatch::{DispatchClass, Pays},
+    weights::Weight,
+};
+
+#[pallet::weight(FunctionOf(
+    // Weight computed as a function of the dispatch arguments.
+    |args: (&Vec<User>,)| args.0.len().saturating_mul(10_000) as Weight,
+    DispatchClass::Normal,
+    Pays::Yes,
+))]
+fn handle_users(origin, calls: Vec<User>) {
+    // Do something per user
+}
+
+

Post Dispatch Weight Correction

+

Depending on the execution logic, a dispatchable function might consume less weight than was prescribed pre-dispatch. To correct weight, the function declares a different return type and returns its actual weight:

+
#[pallet::weight(10_000 + 500_000_000)]
+fn expensive_or_cheap(input: u64) -> DispatchResultWithPostInfo {
+    let was_heavy = do_calculation(input);
+
+    if was_heavy {
+        // None means "no correction" from the weight annotation.
+        Ok(None.into())
+    } else {
+        // Return the actual weight consumed.
+        Ok(Some(10_000).into())
+    }
+}
+
+

Custom Fees

+

You can also define custom fee systems through custom weight functions or inclusion fee functions.

+

Custom Weights

+

Instead of using the default weight annotations, you can create a custom weight calculation type using the weights module. The custom weight calculation type must implement the following traits:

    +
  • WeighData - to determine the weight of the dispatch
  • +
  • ClassifyDispatch - to determine the class of the dispatch
  • +
  • PaysFee - to determine whether the sender of the dispatch pays fees
  • +
+

The Polkadot SDK then bundles the output information of the three traits into the DispatchInfo struct and provides it by implementing the GetDispatchInfo trait for all Call variants and opaque extrinsic types. This trait is used internally by the System and Executive modules.

+
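
As a sketch of what this bundling provides, any call's dispatch information can be inspected through the GetDispatchInfo trait; RuntimeCall here stands in for your runtime's call enum:

+
use frame_support::dispatch::GetDispatchInfo;
+
+fn inspect(call: &RuntimeCall) {
+    let info = call.get_dispatch_info();
+    println!("weight: {:?}", info.weight);     // from WeighData
+    println!("class: {:?}", info.class);       // from ClassifyDispatch
+    println!("pays_fee: {:?}", info.pays_fee); // from PaysFee
+}
+
+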

ClassifyDispatch, WeighData, and PaysFee are generic over T, which gets resolved into the tuple of all dispatch arguments except for the origin. The following example illustrates a struct that calculates the weight as m * len(args), where m is a given multiplier and args is the concatenated tuple of all dispatch arguments. In this example, the dispatch class is Operational if the transaction has more than 100 bytes of arguments, and it pays fees if the encoded length exceeds 10 bytes.

+
struct LenWeight(u32);
+impl<T> WeighData<T> for LenWeight {
+    fn weigh_data(&self, target: T) -> Weight {
+        let multiplier = self.0;
+        let encoded_len = target.encode().len() as u32;
+        multiplier * encoded_len
+    }
+}
+
+impl<T> ClassifyDispatch<T> for LenWeight {
+    fn classify_dispatch(&self, target: T) -> DispatchClass {
+        let encoded_len = target.encode().len() as u32;
+        if encoded_len > 100 {
+            DispatchClass::Operational
+        } else {
+            DispatchClass::Normal
+        }
+    }
+}
+
+impl<T> PaysFee<T> for LenWeight {
+    fn pays_fee(&self, target: T) -> Pays {
+        let encoded_len = target.encode().len() as u32;
+        if encoded_len > 10 {
+            Pays::Yes
+        } else {
+            Pays::No
+        }
+    }
+}
+
+

A weight calculator function can also be coerced to the final type of the argument instead of defining it as a vague type that can be encoded. The code would roughly look like this:

+
struct CustomWeight;
+impl WeighData<(&u32, &u64)> for CustomWeight {
+    fn weigh_data(&self, target: (&u32, &u64)) -> Weight {
+        ...
+    }
+}
+
+// given a dispatch:
+#[pallet::call]
+impl<T: Config<I>, I: 'static> Pallet<T, I> {
+    #[pallet::weight(CustomWeight)]
+    fn foo(a: u32, b: u64) { ... }
+}
+
+

In this example, the CustomWeight can only be used in conjunction with a dispatch with a particular signature (u32, u64), as opposed to LenWeight, which can be used with anything because there aren't any assumptions about <T>.

+

Custom Inclusion Fee

+

The following example illustrates how to customize your inclusion fee. You must configure the appropriate associated types in the respective module.

+
// Assume this is the balance type
+type Balance = u64;
+
+// Assume we want all the weights to have a `100 + 2 * w` conversion to fees
+struct CustomWeightToFee;
+impl Convert<Weight, Balance> for CustomWeightToFee {
+    fn convert(w: Weight) -> Balance {
+        let a = Balance::from(100);
+        let b = Balance::from(2);
+        let w = Balance::from(w);
+        a + b * w
+    }
+}
+
+parameter_types! {
+    pub const ExtrinsicBaseWeight: Weight = 10_000_000;
+}
+
+impl frame_system::Config for Runtime {
+    type ExtrinsicBaseWeight = ExtrinsicBaseWeight;
+}
+
+parameter_types! {
+    pub const TransactionByteFee: Balance = 10;
+}
+
+impl transaction_payment::Config for Runtime {
+    type TransactionByteFee = TransactionByteFee;
+    type WeightToFee = CustomWeightToFee;
+    type FeeMultiplierUpdate = TargetedFeeAdjustment<TargetBlockFullness>;
+}
+
+struct TargetedFeeAdjustment<T>(sp_std::marker::PhantomData<T>);
+impl<T: Get<Perquintill>> Convert<Fixed128, Fixed128> for TargetedFeeAdjustment<T> {
+    fn convert(multiplier: Fixed128) -> Fixed128 {
+        // Don't change anything. Put any fee update info here.
+        multiplier
+    }
+}
+
+

Further Resources

+

You now know how the weight system works, how it affects transaction fee computation, and how to specify weights for your dispatchable calls. The next step is determining the correct weight for your dispatchable operations. You can use Substrate benchmarking functions and frame-benchmarking calls to test your functions with different parameters and empirically determine the proper weight in the worst-case scenario, as sketched below.
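
A minimal benchmark using the frame-benchmarking v2 macros might look like this; the do_something call and its argument are illustrative only:

+
use frame_benchmarking::v2::*;
+use frame_system::RawOrigin;
+
+#[benchmarks]
+mod benches {
+    use super::*;
+
+    #[benchmark]
+    fn do_something() {
+        let caller: T::AccountId = whitelisted_caller();
+
+        // Measure the extrinsic in its worst-case configuration.
+        #[extrinsic_call]
+        do_something(RawOrigin::Signed(caller), 100u32);
+    }
+}
+
+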

+ +
diff --git a/polkadot-protocol/basics/blocks-transactions-fees/index.html b/polkadot-protocol/basics/blocks-transactions-fees/index.html

Blocks, Transactions, and Fees

+

Discover the inner workings of Polkadot’s blocks and transactions, including their structure, processing, and lifecycle within the network. Learn how blocks are authored, validated, and finalized, ensuring seamless operation and consensus across the ecosystem. Dive into the various types of transactions—signed, unsigned, and inherent—and understand how they are constructed, submitted, and validated.

+

Uncover how Polkadot’s fee system balances resource usage and economic incentives. Explore the role of transaction weights, runtime specifics, and the precise formula used to calculate fees. These mechanisms ensure fair resource allocation while maintaining the network’s efficiency and scalability.

+

In This Section

+

+

+

diff --git a/polkadot-protocol/basics/blocks-transactions-fees/transactions/index.html b/polkadot-protocol/basics/blocks-transactions-fees/transactions/index.html

Transactions

+

Introduction

+

Transactions are essential components of blockchain networks, enabling state changes and the execution of key operations. In the Polkadot SDK, transactions, often called extrinsics, come in multiple forms, including signed, unsigned, and inherent transactions.

+

This guide walks you through the different transaction types and how they're formatted, validated, and processed within the Polkadot ecosystem. You'll also learn how to customize transaction formats and construct transactions for FRAME-based runtimes, ensuring a complete understanding of how transactions are built and executed in Polkadot SDK-based chains.

+

What Is a Transaction?

+

In the Polkadot SDK, transactions represent operations that modify the chain's state, bundled into blocks for execution. The term extrinsic is often used to refer to any data that originates outside the runtime and is included in the chain. While other blockchain systems typically refer to these operations as "transactions," the Polkadot SDK adopts the broader term "extrinsic" to capture the wide variety of data types that can be added to a block.

+

There are three primary types of transactions (extrinsics) in the Polkadot SDK:

+
    +
  • Signed transactions - signed by the submitting account, often carrying transaction fees
  • +
  • Unsigned transactions - submitted without a signature, often requiring custom validation logic
  • +
  • Inherent transactions - typically inserted directly into blocks by block authoring nodes, without gossiping between peers
  • +
+

Each type serves a distinct purpose, and understanding when and how to use each is key to efficiently working with the Polkadot SDK.

+

Signed Transactions

+

Signed transactions require an account's signature and typically involve submitting a request to execute a runtime call. The signature serves as a form of cryptographic proof that the sender has authorized the action, using their private key. These transactions often involve a transaction fee to cover the cost of execution and incentivize block producers.

+

Signed transactions are the most common type of transaction and are integral to user-driven actions, such as token transfers. For instance, when you transfer tokens from one account to another, the sending account must sign the transaction to authorize the operation.

+

For example, the pallet_balances::Call::transfer_allow_death extrinsic in the Balances pallet allows you to transfer tokens. Since your account initiates this transaction, your account key is used to sign it. You'll also be responsible for paying the associated transaction fee, with the option to include an additional tip to incentivize faster inclusion in the block.

+

Unsigned Transactions

+

Unsigned transactions do not require a signature or account-specific data from the sender. Unlike signed transactions, they do not come with any form of economic deterrent, such as fees, which makes them susceptible to spam or replay attacks. Custom validation logic must be implemented to mitigate these risks and ensure these transactions are secure.

+

Unsigned transactions typically involve scenarios where including a fee or signature is unnecessary or counterproductive. However, due to the absence of fees, they require careful validation to protect the network. For example, the pallet_im_online::Call::heartbeat extrinsic allows validators to send a heartbeat signal, indicating they are active. Since only validators can make this call, the logic embedded in the transaction ensures that the sender is a validator, making a signature or fee unnecessary.

+

Unsigned transactions are more resource-intensive than signed ones because custom validation is required, but they play a crucial role in certain operational scenarios, especially when regular user accounts aren't involved.

+
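
Custom validation for unsigned transactions is commonly expressed through the ValidateUnsigned trait. The following is a minimal sketch; the heartbeat call shape and the tag prefix are illustrative only:

+
use sp_runtime::traits::ValidateUnsigned;
+use sp_runtime::transaction_validity::{
+    InvalidTransaction, TransactionSource, TransactionValidity, ValidTransaction,
+};
+
+#[pallet::validate_unsigned]
+impl<T: Config> ValidateUnsigned for Pallet<T> {
+    type Call = Call<T>;
+
+    fn validate_unsigned(_source: TransactionSource, call: &Self::Call) -> TransactionValidity {
+        match call {
+            // Accept only the expected unsigned call, with explicit pool rules.
+            Call::heartbeat { session_index } => ValidTransaction::with_tag_prefix("Heartbeat")
+                .and_provides(session_index)
+                .longevity(5)
+                .propagate(true)
+                .build(),
+            _ => InvalidTransaction::Call.into(),
+        }
+    }
+}
+
+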

Inherent Transactions

+

Inherent transactions are a specialized type of unsigned transaction that is used primarily for block authoring. Unlike signed or other unsigned transactions, inherent transactions are added directly by block producers and are not broadcasted to the network or stored in the transaction queue. They don't require signatures or the usual validation steps and are generally used to insert system-critical data directly into blocks.

+

A key example of an inherent transaction is inserting a timestamp into each block. The pallet_timestamp::Call::set extrinsic allows block authors to include the current time in the block they are producing. Since the block producer adds this information, there is no need for transaction validation, like signature verification. The validation in this case is done indirectly by the validators, who check whether the timestamp is within an acceptable range before finalizing the block.

+

Another example is the paras_inherent::Call::enter extrinsic, which enables parachain collator nodes to send validation data to the relay chain. This inherent transaction ensures that the necessary parachain data is included in each block without the overhead of gossiped transactions.

+

Inherent transactions serve a critical role in block authoring by allowing important operational data to be added directly to the chain without needing the validation processes required for standard transactions.

+

Transaction Formats

+

Understanding the structure of signed and unsigned transactions is crucial for developers building on Polkadot SDK-based chains. Whether you're optimizing transaction processing, customizing formats, or interacting with the transaction pool, knowing the format of extrinsics, Polkadot's term for transactions, is essential.

+

Types of Transaction Formats

+

In Polkadot SDK-based chains, extrinsics can fall into three main categories:

+
    +
  • Unchecked extrinsics - typically used for signed transactions that require validation. They contain a signature and additional data, such as a nonce and information for fee calculation. Unchecked extrinsics are named as such because they require validation checks before being accepted into the transaction pool
  • +
  • Checked extrinsics - typically used for inherent extrinsics (unsigned transactions); these don't require signature verification. Instead, they carry information such as where the extrinsic originates and any additional data required for the block authoring process
  • +
  • Opaque extrinsics - used when the format of an extrinsic is not yet fully committed or finalized. They are still decodable, but their structure can be flexible depending on the context
  • +
+

Signed Transaction Data Structure

+

A signed transaction typically includes the following components:

+
    +
  • Signature - verifies the authenticity of the transaction sender
  • +
  • Call - the actual function or method call the transaction is requesting (for example, transferring funds)
  • +
  • Nonce - tracks the number of prior transactions sent from the account, helping to prevent replay attacks
  • +
  • Tip - an optional incentive to prioritize the transaction in block inclusion
  • +
  • Additional data - includes details such as spec version, block hash, and genesis hash to ensure the transaction is valid within the correct runtime and chain context
  • +
+

Here's a simplified breakdown of how signed transactions are typically constructed in a Polkadot SDK runtime:

+
<signing account ID> + <signature> + <additional data>
+
+

Each part of the signed transaction has a purpose, ensuring the transaction's authenticity and context within the blockchain.

+

Signed Extensions

+

Polkadot SDK also provides the concept of signed extensions, which allow developers to extend extrinsics with additional data or validation logic before they are included in a block. The SignedExtension set helps enforce custom rules or protections, such as ensuring the transaction's validity or calculating priority.

+

The transaction queue regularly calls signed extensions to verify a transaction's validity before placing it in the ready queue. This safeguard ensures transactions won't fail in a block. Signed extensions are commonly used to enforce validation logic and protect the transaction pool from spam and replay attacks.

+

In FRAME, a signed extension can hold any of the following types by default:

+
    +
  • AccountId - to encode the sender's identity
  • +
  • Call - to encode the pallet call to be dispatched. This data is used to calculate transaction fees
  • +
  • AdditionalSigned - to handle any additional data that goes into the signed payload, allowing you to attach custom logic that runs before a transaction is dispatched
  • +
  • Pre - to encode the information that can be passed from before a call is dispatched to after it gets dispatched
  • +
+

Signed extensions can enforce checks like:

+
    +
  • CheckSpecVersion - ensures the transaction is compatible with the runtime's current version
  • +
  • CheckWeight - calculates the weight (or computational cost) of the transaction, ensuring the block doesn't exceed the maximum allowed weight
  • +
+

These extensions are critical in the transaction lifecycle, ensuring that only valid and prioritized transactions are processed.

+
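
In a FRAME runtime, the active set of signed extensions is declared as a tuple type. The following sketch shows a commonly used set; the exact list varies per runtime:

+
// The extensions run in the order they are listed here.
+pub type SignedExtra = (
+    frame_system::CheckNonZeroSender<Runtime>,
+    frame_system::CheckSpecVersion<Runtime>,
+    frame_system::CheckTxVersion<Runtime>,
+    frame_system::CheckGenesis<Runtime>,
+    frame_system::CheckMortality<Runtime>,
+    frame_system::CheckNonce<Runtime>,
+    frame_system::CheckWeight<Runtime>,
+    pallet_transaction_payment::ChargeTransactionPayment<Runtime>,
+);
+
+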

Transaction Construction

+

Building transactions in the Polkadot SDK involves constructing a payload that can be verified, signed, and submitted for inclusion in a block. Each runtime in the Polkadot SDK has its own rules for validating and executing transactions, but there are common patterns for constructing a signed transaction.

+

Construct a Signed Transaction

+

A signed transaction in the Polkadot SDK includes various pieces of data to ensure security, prevent replay attacks, and prioritize processing. Here's an overview of how to construct one:

+
    +
  1. Construct the unsigned payload - gather the necessary information for the call, including:
      +
    • Pallet index - identifies the pallet where the runtime function resides
    • +
    • Function index - specifies the particular function to call in the pallet
    • +
    • Parameters - any additional arguments required by the function call
    • +
    +
  2. +
  3. Create a signing payload - once the unsigned payload is ready, additional data must be included:
      +
    • Transaction nonce - unique identifier to prevent replay attacks
    • +
    • Era information - defines how long the transaction is valid before it's dropped from the pool
    • +
    • Block hash - ensures the transaction doesn't execute on the wrong chain or fork
    • +
    +
  4. +
  5. Sign the payload - using the sender's private key, sign the payload to ensure that the transaction can only be executed by the account holder
  6. +
  7. Serialize the signed payload - once signed, the transaction must be serialized into a binary format, ensuring the data is compact and easy to transmit over the network
  8. +
  9. Submit the serialized transaction - finally, submit the serialized transaction to the network, where it will enter the transaction pool and wait for processing by an authoring node
  10. +
+

The following is an example of how a signed transaction might look:

+
node_runtime::UncheckedExtrinsic::new_signed(
+    function.clone(),                                      // some call
+    sp_runtime::AccountId32::from(sender.public()).into(), // some sending account
+    node_runtime::Signature::Sr25519(signature.clone()),   // the account's signature
+    extra.clone(),                                         // the signed extensions
+)
+
+

Transaction Encoding

+

Before a transaction is sent to the network, it is serialized and encoded using a structured encoding process that ensures consistency and prevents tampering:

+
    +
  • [1] - compact encoded length in bytes of the entire transaction
  • +
  • [2] - a single byte (u8) indicating whether the transaction is signed or unsigned (1 bit), plus the encoded transaction version ID (7 bits)
  • +
  • [3] - if signed, this field contains an account ID, an SR25519 signature, and some extra data
  • +
  • [4] - encoded call data, including pallet and function indices and any required arguments
  • +
+
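
As a concrete illustration of the length prefix in [1], SCALE prepends a compact-encoded count to byte payloads, as this sketch using the parity-scale-codec crate shows:

+
use parity_scale_codec::Encode;
+
+fn main() {
+    // Encoding a byte vector prepends a compact-encoded length.
+    let payload: Vec<u8> = vec![0x04, 0x00];
+    // Compact(2) encodes to 0x08 (2 << 2), followed by the payload bytes.
+    assert_eq!(payload.encode(), vec![0x08, 0x04, 0x00]);
+}
+
+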

This encoded format ensures consistency and efficiency in processing transactions across the network. By adhering to this format, applications can construct valid transactions and pass them to the network for execution.

+
+Additional Information +

Learn how compact encoding works using SCALE.

+
+

Customize Transaction Construction

+

Although the basic steps for constructing transactions are consistent across Polkadot SDK-based chains, developers can customize transaction formats and validation rules. For example:

+
    +
  • Custom pallets - you can define new pallets with custom function calls, each with its own parameters and validation logic
  • +
  • Signed extensions - developers can implement custom extensions that modify how transactions are prioritized, validated, or included in blocks
  • +
+

By leveraging Polkadot SDK's modular design, developers can create highly specialized transaction logic tailored to their chain's needs.

+

Lifecycle of a Transaction

+

In the Polkadot SDK, transactions are often referred to as extrinsics because the data in transactions originates outside of the runtime. These transactions contain data that initiates changes to the chain state. The most common type of extrinsic is a signed transaction, which is cryptographically verified and typically incurs a fee. This section focuses on how signed transactions are processed, validated, and ultimately included in a block.

+

Define Transaction Properties

+

The Polkadot SDK runtime defines key transaction properties, such as:

+
    +
  • Transaction validity - ensures the transaction meets all runtime requirements
  • +
  • Signed or unsigned - identifies whether a transaction needs to be signed by an account
  • +
  • State changes - determines how the transaction modifies the state of the chain
  • +
+

Pallets, which compose the runtime's logic, define the specific transactions that your chain supports. When a user submits a transaction, such as a token transfer, it becomes a signed transaction, verified by the user's account signature. If the account has enough funds to cover fees, the transaction is executed, and the chain's state is updated accordingly.

+

Process on a Block Authoring Node

+

In Polkadot SDK-based networks, some nodes are authorized to author blocks. These nodes validate and process transactions. When a transaction is sent to a node that can produce blocks, it undergoes a lifecycle that involves several stages, including validation and execution. Non-authoring nodes gossip the transaction across the network until an authoring node receives it. The following diagram illustrates the lifecycle of a transaction that's submitted to a network and processed by an authoring node.

+

Transaction lifecycle diagram

+

Validate and Queue

+

Once a transaction reaches an authoring node, it undergoes an initial validation process to ensure it meets specific conditions defined in the runtime. This validation includes checks for:

+
    +
  • Correct nonce - ensures the transaction is sequentially valid for the account
  • +
  • Sufficient funds - confirms the account can cover any associated transaction fees
  • +
  • Signature validity - verifies that the sender's signature matches the transaction data
  • +
+

After these checks, valid transactions are placed in the transaction pool, where they are queued for inclusion in a block. The transaction pool regularly re-validates queued transactions to ensure they remain valid before being processed. To reach consensus, two-thirds of the nodes must agree on the order of the transactions executed and the resulting state change. Transactions are validated and queued on the local node in a transaction pool to prepare for consensus.

+

Transaction Pool

+

The transaction pool is responsible for managing valid transactions. It ensures that only transactions that pass initial validity checks are queued. Transactions that fail validation, expire, or become invalid for other reasons are removed from the pool.

+

The transaction pool organizes transactions into two queues:

+
    +
  • Ready queue - transactions that are valid and ready to be included in a block
  • +
  • Future queue - transactions that are not yet valid but could be in the future, such as transactions with a nonce too high for the current state
  • +
+

Details on how the transaction pool validates transactions, including fee and signature handling, can be found in the validate_transaction method.

+

Invalid Transactions

+

If a transaction is invalid, for example, due to an invalid signature or insufficient funds, it is rejected and won't be added to the block. Invalid transactions might be rejected for reasons such as:

+
    +
  • The transaction has already been included in a block
  • +
  • The transaction's signature does not match the sender
  • +
  • The transaction is too large to fit in the current block
  • +
+

Transaction Ordering and Priority

+

When a node is selected as the next block author, it prioritizes transactions based on weight, length, and tip amount. The goal is to fill the block with high-priority transactions without exceeding its maximum size or computational limits. Transactions are ordered as follows:

+
    +
  • Inherents first - inherent transactions, such as block timestamp updates, are always placed first
  • +
  • Nonce-based ordering - transactions from the same account are ordered by their nonce
  • +
  • Fee-based ordering - among transactions with the same nonce or priority level, those with higher fees are prioritized
  • +
+

Transaction Execution

+

Once a block author selects transactions from the pool, the transactions are executed in priority order. As each transaction is processed, the state changes are written directly to the chain's storage. It's important to note that these changes are not cached, meaning a failed transaction won't revert earlier state changes, which could leave the block in an inconsistent state.

+

Events are also written to storage. Runtime logic should not emit an event before performing the associated actions. If the associated transaction fails after the event is emitted, the event is not reverted.

+
+Additional Information +

Watch Seminar: Lifecycle of a transaction for a video overview of the lifecycle of transactions and the types of transactions that exist.

+
+
diff --git a/polkadot-protocol/basics/chain-data/index.html b/polkadot-protocol/basics/chain-data/index.html

Chain Data

+

Introduction

+

Understanding and leveraging on-chain data is a fundamental aspect of blockchain development. Whether you're building frontend applications or backend systems, accessing and decoding runtime metadata is vital to interacting with the blockchain. This guide introduces you to the tools and processes for generating and retrieving metadata, explains its role in application development, and outlines the additional APIs available for interacting with a Polkadot node. By mastering these components, you can ensure seamless communication between your applications and the blockchain.

+

Application Development

+

You might not be directly involved in building frontend applications as a blockchain developer. However, most applications that run on a blockchain require some form of frontend or user-facing client to enable users or other programs to access and modify the data that the blockchain stores. For example, you might develop a browser-based, mobile, or desktop application that allows users to submit transactions, post articles, view their assets, or track previous activity. The backend for that application is configured in the runtime logic for your blockchain, but the frontend client makes the runtime features accessible to your users.

+

For your custom chain to be useful to others, you'll need to provide a client application that allows users to view, interact with, or update information that the blockchain keeps track of. In this article, you'll learn how to expose information about your runtime so that client applications can use it, see examples of the information exposed, and explore tools and libraries that use it.

+

Understand Metadata

+

Polkadot SDK-based blockchain networks are designed to expose their runtime information, allowing developers to learn granular details regarding pallets, RPC calls, and runtime APIs. The metadata also exposes their related documentation. The chain's metadata is SCALE-encoded, allowing for the development of browser-based, mobile, or desktop applications to support the chain's runtime upgrades seamlessly. It is also possible to develop applications compatible with multiple Polkadot SDK-based chains simultaneously.

+

Expose Runtime Information as Metadata

+

To interact with a node or the state of the blockchain, you need to know how to connect to the chain and access the exposed runtime features. This interaction involves a Remote Procedure Call (RPC) through a node endpoint address, commonly through a secure web socket connection.

+

An application developer typically needs to know the contents of the runtime logic, including the following details:

+
    +
  • Version of the runtime the application is connecting to
  • +
  • Supported APIs
  • +
  • Implemented pallets
  • +
  • Defined functions and corresponding type signatures
  • +
  • Defined custom types
  • +
  • Exposed parameters users can set
  • +
+

As the Polkadot SDK is modular and provides a composable framework for building blockchains, there are limitless opportunities to customize the schema of properties. Each runtime can be configured with its properties, including function calls and types, which can be changed over time with runtime upgrades.

+

The Polkadot SDK enables you to generate the runtime metadata schema to capture information unique to a runtime. The metadata for a runtime describes the pallets in use and types defined for a specific runtime version. The metadata includes information about each pallet's storage items, functions, events, errors, and constants. The metadata also provides type definitions for any custom types included in the runtime.

+

Metadata provides a complete inventory of a chain's runtime. It is key to enabling client applications to interact with the node, parse responses, and correctly format message payloads sent back to that chain.

+

Generate Metadata

+

To efficiently use the blockchain's networking resources and minimize the data transmitted over the network, the metadata schema is encoded using the Parity SCALE Codec. This encoding is done automatically through the scale-info crate.

+

At a high level, generating the metadata involves the following steps:

+
    +
  1. The pallets in the runtime logic expose callable functions, types, parameters, and documentation that need to be encoded in the metadata
  2. +
  3. The scale-info crate collects type information for the pallets in the runtime, builds a registry of the pallets that exist in a particular runtime, and the relevant types for each pallet in the registry. The type information is detailed enough to enable encoding and decoding for every type
  4. +
  5. The frame-metadata crate describes the structure of the runtime based on the registry provided by the scale-info crate
  6. +
  7. Nodes provide the RPC method state_getMetadata to return a complete description of all the types in the current runtime as a hex-encoded vector of SCALE-encoded bytes
  8. +
+

Retrieve Runtime Metadata

+

The type information provided by the metadata enables applications to communicate with nodes using different runtime versions and across chains that expose different calls, events, types, and storage items. The metadata also allows libraries to generate a substantial portion of the code needed to communicate with a given node, enabling libraries like subxt to generate frontend interfaces that are specific to a target chain.

+

Use Polkadot.js

+

Visit the Polkadot.js Portal and select the Developer dropdown in the top banner. Select RPC Calls to make the call to request metadata. Follow these steps to make the RPC call:

+
    +
  1. Select state as the endpoint to call
  2. +
  3. Select getMetadata(at) as the method to call
  4. +
  5. Click Submit RPC call to submit the call and return the metadata in JSON format
  6. +
+

Use Curl

+

You can fetch the metadata for the network by calling the node's RPC endpoint. This request returns the metadata in bytes rather than human-readable JSON:

+
curl -H "Content-Type: application/json" \
+-d '{"id":1, "jsonrpc":"2.0", "method": "state_getMetadata"}' \
+https://rpc.polkadot.io
+
+

Use Subxt

+

subxt may also be used to fetch the metadata of any chain in a human-readable JSON format:

+
subxt metadata --url wss://rpc.polkadot.io --format json > spec.json
+
+

Another option is to use the subxt explorer web UI.

+

Client Applications and Metadata

+

The metadata exposes the expected way to decode each type, meaning applications can send, retrieve, and process application information without manual encoding and decoding. Client applications must use the SCALE codec library to encode and decode RPC payloads to use the metadata. Client applications use the metadata to interact with the node, parse responses, and format message payloads sent to the node.

+

Metadata Format

+

Although the SCALE-encoded bytes can be decoded using the frame-metadata and parity-scale-codec libraries, there are other tools, such as subxt and the Polkadot-JS API, that can convert the raw data to human-readable JSON format.

+
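
For example, the hex string returned by state_getMetadata can be decoded in Rust along the following lines; this sketch assumes the frame-metadata, parity-scale-codec, and hex crates:

+
use frame_metadata::RuntimeMetadataPrefixed;
+use parity_scale_codec::Decode;
+
+// Decode the raw response of the state_getMetadata RPC call.
+fn decode_metadata(hex_response: &str) -> RuntimeMetadataPrefixed {
+    let bytes = hex::decode(hex_response.trim_start_matches("0x"))
+        .expect("metadata should be valid hex");
+    RuntimeMetadataPrefixed::decode(&mut &bytes[..])
+        .expect("metadata should be valid SCALE")
+}
+
+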

The types and type definitions included in the metadata returned by the state_getMetadata RPC call depend on the runtime's metadata version.

+

In general, the metadata includes the following information:

+
    +
  • A constant identifying the file as containing metadata
  • +
  • The version of the metadata format used in the runtime
  • +
  • Type definitions for all types used in the runtime and generated by the scale-info crate
  • +
  • Pallet information for the pallets included in the runtime in the order that they are defined in the construct_runtime macro
  • +
+
+

Metadata formats may vary

+

Depending on the frontend library used (such as the Polkadot API), the metadata may be formatted differently from the raw format shown.

+
+

The following example illustrates a condensed and annotated section of metadata decoded and converted to JSON:

+
[
+    1635018093,
+    {
+        "V14": {
+            "types": {
+                "types": [{}]
+            },
+            "pallets": [{}],
+            "extrinsic": {
+                "ty": 126,
+                "version": 4,
+                "signed_extensions": [{}]
+            },
+            "ty": 141
+        }
+    }
+]
+
+

The constant 1635018093 is a magic number that identifies the file as a metadata file; it is the little-endian u32 reading of the ASCII string "meta". The rest of the metadata is divided into the types, pallets, and extrinsic sections:

+
    +
  • The types section contains an index of the types and information about each type's type signature
  • +
  • The pallets section contains information about each pallet in the runtime
  • +
  • The extrinsic section describes the type identifier and transaction format version that the runtime uses
  • +
+

Different extrinsic versions can have varying formats, especially when considering signed transactions.

+

Pallets

+

The following is a condensed and annotated example of metadata for a single element in the pallets array (the sudo pallet):

+
{
+    "name": "Sudo",
+    "storage": {
+        "prefix": "Sudo",
+        "entries": [
+            {
+                "name": "Key",
+                "modifier": "Optional",
+                "ty": {
+                    "Plain": 0
+                },
+                "default": [0],
+                "docs": ["The `AccountId` of the sudo key."]
+            }
+        ]
+    },
+    "calls": {
+        "ty": 117
+    },
+    "event": {
+        "ty": 42
+    },
+    "constants": [],
+    "error": {
+        "ty": 124
+    },
+    "index": 8
+}
+
+

Each element's metadata contains the name of the pallet it represents and information about its storage, calls, events, and errors. You can look up details about the definition of the calls, events, and errors by viewing the type index identifier. The type index identifier is the u32 integer used to access the type information for that item. For example, the type index identifier for calls in the Sudo pallet is 117. If you view the information for that type identifier in the types section of the metadata, you'll find the available calls, including the documentation for each call.

+

For example, the following is a condensed excerpt of the calls for the Sudo pallet:

+
{
+    "id": 117,
+    "type": {
+        "path": ["pallet_sudo", "pallet", "Call"],
+        "params": [
+            {
+                "name": "T",
+                "type": null
+            }
+        ],
+        "def": {
+            "variant": {
+                "variants": [
+                    {
+                        "name": "sudo",
+                        "fields": [
+                            {
+                                "name": "call",
+                                "type": 114,
+                                "typeName": "Box<<T as Config>::RuntimeCall>"
+                            }
+                        ],
+                        "index": 0,
+                        "docs": [
+                            "Authenticates sudo key, dispatches a function call with `Root` origin"
+                        ]
+                    },
+                    {
+                        "name": "sudo_unchecked_weight",
+                        "fields": [
+                            {
+                                "name": "call",
+                                "type": 114,
+                                "typeName": "Box<<T as Config>::RuntimeCall>"
+                            },
+                            {
+                                "name": "weight",
+                                "type": 8,
+                                "typeName": "Weight"
+                            }
+                        ],
+                        "index": 1,
+                        "docs": [
+                            "Authenticates sudo key, dispatches a function call with `Root` origin"
+                        ]
+                    },
+                    {
+                        "name": "set_key",
+                        "fields": [
+                            {
+                                "name": "new",
+                                "type": 103,
+                                "typeName": "AccountIdLookupOf<T>"
+                            }
+                        ],
+                        "index": 2,
+                        "docs": [
+                            "Authenticates current sudo key, sets the given AccountId (`new`) as the new sudo"
+                        ]
+                    },
+                    {
+                        "name": "sudo_as",
+                        "fields": [
+                            {
+                                "name": "who",
+                                "type": 103,
+                                "typeName": "AccountIdLookupOf<T>"
+                            },
+                            {
+                                "name": "call",
+                                "type": 114,
+                                "typeName": "Box<<T as Config>::RuntimeCall>"
+                            }
+                        ],
+                        "index": 3,
+                        "docs": [
+                            "Authenticates sudo key, dispatches a function call with `Signed` origin from a given account"
+                        ]
+                    }
+                ]
+            }
+        }
+    }
+}
+
+

For each field, you can access type information and metadata for the following:

+
    +
  • Storage metadata - provides the information required to enable applications to get information for specific storage items
  • +
  • Call metadata - includes information about the runtime calls defined by the #[pallet] macro including call names, arguments and documentation
  • +
  • Event metadata - provides the metadata generated by the #[pallet::event] macro, including the name, arguments, and documentation for each pallet event
  • +
  • Constants metadata - provides metadata generated by the #[pallet::constant] macro, including the name, type, and hex-encoded value of the constant
  • +
  • Error metadata - provides metadata generated by the #[pallet::error] macro, including the name and documentation for each pallet error
  • +
+
+

Note

+

Type identifiers change from time to time, so you should avoid relying on specific type identifiers in your applications.

+
+

Extrinsic

+

The runtime generates extrinsic metadata and provides useful information about transaction format. When decoded, the metadata contains the transaction version and the list of signed extensions.

+

For example:

+
{
+    "extrinsic": {
+        "ty": 126,
+        "version": 4,
+        "signed_extensions": [
+            {
+                "identifier": "CheckNonZeroSender",
+                "ty": 132,
+                "additional_signed": 41
+            },
+            {
+                "identifier": "CheckSpecVersion",
+                "ty": 133,
+                "additional_signed": 4
+            },
+            {
+                "identifier": "CheckTxVersion",
+                "ty": 134,
+                "additional_signed": 4
+            },
+            {
+                "identifier": "CheckGenesis",
+                "ty": 135,
+                "additional_signed": 11
+            },
+            {
+                "identifier": "CheckMortality",
+                "ty": 136,
+                "additional_signed": 11
+            },
+            {
+                "identifier": "CheckNonce",
+                "ty": 138,
+                "additional_signed": 41
+            },
+            {
+                "identifier": "CheckWeight",
+                "ty": 139,
+                "additional_signed": 41
+            },
+            {
+                "identifier": "ChargeTransactionPayment",
+                "ty": 140,
+                "additional_signed": 41
+            }
+        ]
+    },
+    "ty": 141
+}
+
+

The type system is composite, meaning each type identifier contains a reference to a specific type or to another type identifier that provides information about the associated primitive types.

+

For example, you can encode the BitVec<Order, Store> type, but to decode it properly, you must know the types used for the Order and Store types. To find type information for Order and Store, you can use the path in the decoded JSON to locate their type identifiers.

+

Included RPC APIs

+

A standard node comes with the following APIs to interact with a node:

+
    +
  • AuthorApiServer - make calls into a full node, including authoring extrinsics and verifying session keys
  • +
  • ChainApiServer - retrieve block header and finality information
  • +
  • OffchainApiServer - make RPC calls for off-chain workers
  • +
  • StateApiServer - query information about on-chain state such as runtime version, storage items, and proofs
  • +
  • SystemApiServer - retrieve information about network state, such as connected peers and node roles
  • +
+

Additional Resources

+

The following tools can help you locate and decode metadata:

+ +
diff --git a/polkadot-protocol/basics/cryptography/index.html b/polkadot-protocol/basics/cryptography/index.html

Cryptography

+

Introduction

+

Cryptography forms the backbone of blockchain technology, providing the mathematical verifiability crucial for consensus systems, data integrity, and user security. While a deep understanding of the underlying mathematical processes isn't necessary for most blockchain developers, grasping the fundamental applications of cryptography is essential. This page comprehensively overviews cryptographic implementations used across Polkadot SDK-based chains and the broader blockchain ecosystem.

+

Hash Functions

+

Hash functions are fundamental to blockchain technology, creating a unique digital fingerprint for any piece of data, including simple text, images, or any other form of file. They map input data of any size to a fixed-size output (typically 32 bytes) using complex mathematical operations. Hashing is used to verify data integrity, create digital signatures, and provide a secure way to store passwords. Because an arbitrarily large input space is mapped onto a fixed-size output space (the "pigeonhole principle"), collisions must exist in theory; in practice, hashing is used to efficiently and verifiably identify data from large sets.

+

Key Properties of Hash Functions

+
    +
  • Deterministic - the same input always produces the same output
  • +
  • Quick computation - it's easy to calculate the hash value for any given input
  • +
  • Pre-image resistance - it's infeasible to generate the input data from its hash
  • +
  • Small changes in input yield large changes in output - known as the "avalanche effect"
  • +
  • Collision resistance - the probability of finding two different inputs that produce the same hash is extremely low
  • +
+

Blake2

+

The Polkadot SDK utilizes Blake2, a state-of-the-art hashing method that offers:

+
    +
  • Equal or greater security compared to SHA-2
  • +
  • Significantly faster performance than other algorithms
  • +
+

These properties make Blake2 ideal for blockchain systems, reducing sync times for new nodes and lowering the resources required for validation.

+
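
A small sketch of Blake2 hashing as exposed by the sp-core crate (the hex crate is used here only for display):

+
use sp_core::hashing::blake2_256;
+
+fn main() {
+    // Any input maps to a fixed 32-byte digest.
+    let digest: [u8; 32] = blake2_256(b"hello world");
+    // Deterministic: the same input always yields the same digest.
+    assert_eq!(digest, blake2_256(b"hello world"));
+    println!("0x{}", hex::encode(digest));
+}
+
+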
+

Note

+

For detailed technical specifications on Blake2, refer to the official Blake2 paper.

+
+

Types of Cryptography

+

There are two different ways that cryptographic algorithms are implemented: symmetric cryptography and asymmetric cryptography.

+

Symmetric Cryptography

+

Symmetric encryption is a branch of cryptography that isn't based on one-way functions, unlike asymmetric cryptography. It uses the same cryptographic key to encrypt plain text and decrypt the resulting ciphertext.

+

Symmetric cryptography is a type of encryption that has been used throughout history, such as the Enigma Cipher and the Caesar Cipher. It is still widely used today and can be found in Web2 and Web3 applications alike. There is only one single key, and a recipient must also have access to it to access the contained information.

+

Advantages

+
    +
  • Fast and efficient for large amounts of data
  • +
  • Requires less computational power
  • +
+

Disadvantages

+
    +
  • Key distribution can be challenging
  • +
  • Scalability issues in systems with many users
  • +
+

Asymmetric Cryptography

+

Asymmetric encryption is a type of cryptography that uses two different keys, known as a keypair: a public key, used to encrypt plain text, and a private counterpart, used to decrypt the ciphertext.

+

The public key encrypts a fixed-length message that can only be decrypted with the recipient's private key and, sometimes, a set password. The public key can be used to cryptographically verify that the corresponding private key was used to create a piece of data without compromising the private key, such as with digital signatures. This has obvious implications for identity, ownership, and properties and is used in many different protocols across Web2 and Web3.

+

Advantages

+
    +
  • Solves the key distribution problem
  • +
  • Enables digital signatures and secure key exchange
  • +
+

Disadvantages

+
    +
  • Slower than symmetric encryption
  • +
  • Requires more computational resources
  • +
+

Trade-offs and Compromises

+

Symmetric cryptography is faster and requires fewer bits in the key to achieve the same level of security that asymmetric cryptography provides. However, it requires a shared secret before communication can occur, which poses integrity issues and a potential point of compromise. Asymmetric cryptography, on the other hand, doesn't require the secret to be shared ahead of time, allowing for far better end-user security.

+

Hybrid symmetric and asymmetric cryptography is often used to overcome the performance limits of asymmetric cryptography: the asymmetric scheme encrypts (or agrees on) a symmetric key, and the comparatively lightweight symmetric cipher then does the "heavy lifting" of encrypting the message itself.

+

Digital Signatures

+

Digital signatures are a way of verifying the authenticity of a document or message using asymmetric keypairs. They are used to ensure that a sender or signer's document or message hasn't been tampered with in transit, and for recipients to verify that the data is accurate and from the expected sender.

+

Using digital signatures requires only a basic understanding of the underlying mathematics and cryptography. For a conceptual example -- when signing a check, it is expected that it cannot be cashed multiple times. This isn't a feature of the signature system but rather the check serialization system: the bank checks that the serial number on the check hasn't already been used. Digital signatures essentially combine these two concepts, allowing the signature itself to provide the serialization via a unique cryptographic fingerprint that cannot be reproduced.

+

Unlike pen-and-paper signatures, knowledge of a digital signature cannot be used to create other signatures. Digital signatures are often used in bureaucratic processes, as they are more secure than simply scanning in a signature and pasting it onto a document.

+

Polkadot SDK provides multiple different cryptographic schemes and is generic so that it can support anything that implements the Pair trait.

+

Example of Creating a Digital Signature

+

The process of creating and verifying a digital signature involves several steps:

+
    +
  1. The sender creates a hash of the message
  2. +
  3. The hash is encrypted using the sender's private key, creating the signature
  4. +
  5. The message and signature are sent to the recipient
  6. +
  7. The recipient decrypts the signature using the sender's public key
  8. +
  9. The recipient hashes the received message and compares it to the decrypted hash
  10. +
+

If the hashes match, the signature is valid, confirming the message's integrity and the sender's identity.

+
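
In the Polkadot SDK, this flow is available for every scheme that implements the Pair trait. A minimal sr25519 sketch using the sp-core crate:

+
use sp_core::{sr25519, Pair};
+
+fn main() {
+    // Generate a keypair, sign a message, and verify the signature.
+    let (pair, _seed) = sr25519::Pair::generate();
+    let message = b"some message";
+    let signature = pair.sign(message);
+    assert!(sr25519::Pair::verify(&signature, message, &pair.public()));
+}
+
+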

Elliptic Curve

+

Blockchain technology requires the ability to have multiple keys creating a signature for block proposal and validation. To this end, Elliptic Curve Digital Signature Algorithm (ECDSA) and Schnorr signatures are two of the most commonly used methods. While ECDSA is a far simpler implementation, Schnorr signatures are more efficient when it comes to multi-signatures.

+

Schnorr signatures bring some noticeable features over the ECDSA/EdDSA schemes:

+
    +
  • It is better for hierarchical deterministic key derivations
  • +
  • It allows for native multi-signature through signature aggregation
  • +
  • It is generally more resistant to misuse
  • +
+

One sacrifice that is made when using Schnorr signatures over ECDSA is that both require 64 bytes, but only ECDSA signatures communicate their public key.

+

Various Implementations

+
    +
  • +

    ECDSA - Polkadot SDK provides an ECDSA signature scheme using the secp256k1 curve. This is the same cryptographic algorithm used to secure Bitcoin and Ethereum

    +
  • +
  • +

    Ed25519 - is an EdDSA signature scheme using Curve25519. It is carefully engineered at several levels of design and implementation to achieve very high speeds without compromising security

    +
  • +
  • +

    SR25519 - is based on the same underlying curve as Ed25519. However, it uses Schnorr signatures instead of the EdDSA scheme

    +
  • +
+
diff --git a/polkadot-protocol/basics/data-encoding/index.html b/polkadot-protocol/basics/data-encoding/index.html

Data Encoding

+

Introduction

+

The Polkadot SDK uses a lightweight and efficient encoding/decoding mechanism to optimize data transmission across the network. This mechanism, known as the SCALE codec, is used for serializing and deserializing data.

+

The SCALE codec enables communication between the runtime and the outer node. This mechanism is designed for high-performance, copy-free data encoding and decoding in resource-constrained environments like the Polkadot SDK Wasm runtime.

+

It is not self-describing, meaning the decoding context must fully know the encoded data types.

+

Parity's libraries utilize the parity-scale-codec crate (a Rust implementation of the SCALE codec) to handle encoding and decoding for interactions between RPCs and the runtime.

+

The codec mechanism is ideal for Polkadot SDK-based chains because:

+
    +
  • It is lightweight compared to generic serialization frameworks like serde, which add unnecessary bulk to binaries
  • +
  • It doesn’t rely on Rust’s libstd, making it compatible with no_std environments like Wasm runtime
  • +
  • It integrates seamlessly with Rust, allowing easy derivation of encoding and decoding logic for new types using #[derive(Encode, Decode)]
  • +
+

Defining a custom encoding scheme in the Polkadot SDK-based chains, rather than using an existing Rust codec library, is crucial for enabling cross-platform and multi-language support.

+

SCALE Codec

+

The codec is implemented using the following traits:

    +
  • Encode - handles encoding data into SCALE format
  • +
  • Decode - handles decoding SCALE-encoded data
  • +
  • CompactAs - wraps custom types for compact encoding
  • +
  • HasCompact - indicates that a type supports compact encoding
  • +
  • EncodeLike - allows multiple types that encode alike to be accepted by the same function
  • +
+

Encode

+

The Encode trait handles data encoding into SCALE format and includes the following key functions:

+
    +
  • size_hint(&self) -> usize - estimates the number of bytes required for encoding to prevent multiple memory allocations. This should be inexpensive and avoid complex operations. Optional if the size isn’t known
  • +
  • encode_to<T: Output>(&self, dest: &mut T) - encodes the data, appending it to a destination buffer
  • +
  • encode(&self) -> Vec<u8> - encodes the data and returns it as a byte vector
  • +
  • using_encoded<R, F: FnOnce(&[u8]) -> R>(&self, f: F) -> R - encodes the data and passes it to a closure, returning the result
  • +
  • encoded_size(&self) -> usize - calculates the encoded size. Should be used when the encoded data isn’t required
  • +
+
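For example, encoding a u32 exercises several of the functions above; this is a minimal sketch assuming the parity-scale-codec crate as a dependency:

use parity_scale_codec::Encode;

fn main() {
    let value: u32 = 42;
    // encode returns the SCALE bytes; a u32 is fixed-width little-endian.
    assert_eq!(value.encode(), vec![0x2a, 0x00, 0x00, 0x00]);
    // using_encoded hands a view of the encoding to a closure, avoiding
    // an intermediate Vec when only a borrow is needed.
    value.using_encoded(|bytes| assert_eq!(bytes.len(), 4));
    // encoded_size reports the length without materializing the bytes.
    assert_eq!(value.encoded_size(), 4);
    // size_hint is a cheap pre-allocation estimate.
    println!("size hint: {}", value.size_hint());
}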
+

Note

+

For best performance, value types should override using_encoded, and allocating types should override encode_to. It's recommended to implement size_hint for all types where possible.

+
+

Decode

+

The Decode trait handles decoding SCALE-encoded data back into the appropriate types:

+
    +
  • fn decode<I: Input>(value: &mut I) -> Result<Self, Error> - decodes data from the SCALE format, returning an error if decoding fails
  • +
+
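Decoding reverses the process; a short sketch with the same crate:

use parity_scale_codec::Decode;

fn main() {
    // The SCALE bytes for the u32 value 42.
    let mut input: &[u8] = &[0x2a, 0x00, 0x00, 0x00];
    assert_eq!(u32::decode(&mut input).ok(), Some(42));

    // Truncated input yields an error rather than panicking.
    let mut short: &[u8] = &[0x2a];
    assert!(u32::decode(&mut short).is_err());
}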

CompactAs

+

The CompactAs trait wraps custom types for compact encoding:

+
    +
  • encode_as(&self) -> &Self::As - encodes the type as a compact type
  • +
  • decode_from(_: Self::As) -> Result<Self, Error> - decodes from a compact encoded type
  • +
+

HasCompact

+

The HasCompact trait indicates a type supports compact encoding.

+
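A short sketch of compact encoding in practice, again assuming parity-scale-codec (both the Compact wrapper and the #[codec(compact)] attribute belong to that crate):

use parity_scale_codec::{Compact, Encode};

#[derive(Encode)]
struct Transfer {
    // Any field type implementing HasCompact, such as the built-in
    // unsigned integers, can be marked compact.
    #[codec(compact)]
    amount: u128,
}

fn main() {
    // Fixed-width encoding of 42u32 takes four bytes...
    assert_eq!(42u32.encode(), vec![0x2a, 0x00, 0x00, 0x00]);
    // ...while the Compact wrapper packs the same value into one byte.
    assert_eq!(Compact(42u32).encode(), vec![0xa8]);
    // Compact struct fields are encoded the same way.
    assert_eq!(Transfer { amount: 42 }.encode(), vec![0xa8]);
}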

EncodeLike

+

The EncodeLike trait is used to ensure multiple types that encode similarly are accepted by the same function. When using derive, it is automatically implemented.

+

Data Types

+

The table below outlines how the Rust implementation of the Parity SCALE codec encodes different data types.

| Type | Description | Example SCALE Decoded Value | SCALE Encoded Value |
|---|---|---|---|
| Boolean | Boolean values are encoded using the least significant bit of a single byte. | false / true | 0x00 / 0x01 |
| Compact/general integers | A "compact" or general integer encoding is sufficient for encoding large integers (up to 2^536) and is more efficient at encoding most values than the fixed-width version. | unsigned integer 0 / unsigned integer 1 / unsigned integer 42 / unsigned integer 69 / unsigned integer 65535 / BigInt(100000000000000) | 0x00 / 0x04 / 0xa8 / 0x1501 / 0xfeff0300 / 0x0b00407a10f35a |
| Enumerations (tagged-unions) | A fixed number of variants | | |
| Fixed-width integers | Basic integers are encoded using a fixed-width little-endian (LE) format. | signed 8-bit integer 69 / unsigned 16-bit integer 42 / unsigned 32-bit integer 16777215 | 0x45 / 0x2a00 / 0xffffff00 |
| Options | One or zero values of a particular type. | Some / None | 0x01 followed by the encoded value / 0x00 |
| Results | Results are commonly used enumerations which indicate whether certain operations were successful or unsuccessful. | Ok(42) / Err(false) | 0x002a / 0x0100 |
| Strings | Strings are vectors of bytes (Vec<u8>) containing a valid UTF-8 sequence. | | |
| Structs | For structures, the values are named, but that is irrelevant for the encoding (names are ignored - only order matters). | SortedVecAsc::from([3, 5, 2, 8]) | [3, 2, 5, 8] |
| Tuples | A fixed-size series of values, each with a possibly different but predetermined and fixed type. This is simply the concatenation of each encoded value. | Tuple of compact unsigned integer and boolean: (3, false) | 0x0c00 |
| Vectors (lists, series, sets) | A collection of same-typed values is encoded, prefixed with a compact encoding of the number of items, followed by each item's encoding concatenated in turn. | Vector of unsigned 16-bit integers: [4, 8, 15, 16, 23, 42] | 0x18040008000f00100017002a00 |
+
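A few of the table's entries, checked in code (a sketch assuming parity-scale-codec as a dependency):

use parity_scale_codec::{Compact, Encode};

fn main() {
    // Fixed-width: unsigned 16-bit integer 42 -> 0x2a00 (little-endian).
    assert_eq!(42u16.encode(), vec![0x2a, 0x00]);
    // Options: Some is 0x01 followed by the value; None is 0x00.
    assert_eq!(Some(69u8).encode(), vec![0x01, 0x45]);
    assert_eq!(None::<u8>.encode(), vec![0x00]);
    // Compact: 65535 -> 0xfeff0300 (four-byte mode).
    assert_eq!(Compact(65535u32).encode(), vec![0xfe, 0xff, 0x03, 0x00]);
    // Vectors: compact length prefix (6 -> 0x18), then each item in turn.
    assert_eq!(
        vec![4u16, 8, 15, 16, 23, 42].encode(),
        vec![0x18, 0x04, 0x00, 0x08, 0x00, 0x0f, 0x00, 0x10, 0x00, 0x17, 0x00, 0x2a, 0x00]
    );
}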

Encode and Decode Rust Trait Implementations

+

Here's an example of deriving and using the Encode and Decode traits:

+
use parity_scale_codec::{Encode, Decode};
+
+[derive(Debug, PartialEq, Encode, Decode)]
+enum EnumType {
+    #[codec(index = 15)]
+    A,
+    B(u32, u64),
+    C {
+        a: u32,
+        b: u64,
+    },
+}
+
+let a = EnumType::A;
+let b = EnumType::B(1, 2);
+let c = EnumType::C { a: 1, b: 2 };
+
+a.using_encoded(|ref slice| {
+    assert_eq!(slice, &b"\x0f");
+});
+
+b.using_encoded(|ref slice| {
+    assert_eq!(slice, &b"\x01\x01\0\0\0\x02\0\0\0\0\0\0\0");
+});
+
+c.using_encoded(|ref slice| {
+    assert_eq!(slice, &b"\x02\x01\0\0\0\x02\0\0\0\0\0\0\0");
+});
+
+let mut da: &[u8] = b"\x0f";
+assert_eq!(EnumType::decode(&mut da).ok(), Some(a));
+
+let mut db: &[u8] = b"\x01\x01\0\0\0\x02\0\0\0\0\0\0\0";
+assert_eq!(EnumType::decode(&mut db).ok(), Some(b));
+
+let mut dc: &[u8] = b"\x02\x01\0\0\0\x02\0\0\0\0\0\0\0";
+assert_eq!(EnumType::decode(&mut dc).ok(), Some(c));
+
+let mut dz: &[u8] = &[0];
+assert_eq!(EnumType::decode(&mut dz).ok(), None);
+
+

SCALE Codec Libraries

+

Several SCALE codec implementations are available in various languages. Here's a list of them:

+ +
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/basics/index.html b/polkadot-protocol/basics/index.html new file mode 100644 index 00000000..6f150546 --- /dev/null +++ b/polkadot-protocol/basics/index.html @@ -0,0 +1,5032 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Basics | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Basics

+

This section equips developers with the essential knowledge to create, deploy, and enhance applications and blockchains within the Polkadot ecosystem. Gain a comprehensive understanding of Polkadot’s foundational components, including accounts, balances, and transactions, as well as advanced topics like data encoding and cryptographic methods. Mastering these concepts is vital for building robust and secure applications on Polkadot.

+

By exploring these core topics, developers can leverage Polkadot's unique architecture to build scalable and interoperable solutions. From understanding how Polkadot's networks operate to implementing efficient fee mechanisms and utilizing tools like SCALE encoding, this section provides the building blocks for innovation. Whether you're optimizing blockchain performance or designing cross-chain functionality, these insights will help you navigate Polkadot’s ecosystem with confidence.

+

In This Section

+

+

+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/basics/interoperability/index.html b/polkadot-protocol/basics/interoperability/index.html new file mode 100644 index 00000000..51528de2 --- /dev/null +++ b/polkadot-protocol/basics/interoperability/index.html @@ -0,0 +1,5002 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Interoperability | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Interoperability

+

Introduction

+

Interoperability lies at the heart of the Polkadot ecosystem, enabling communication and collaboration across a diverse range of blockchains. By bridging the gaps between parachains, relay chains, and even external networks, Polkadot unlocks the potential for truly decentralized applications, efficient resource sharing, and scalable solutions.

+

Polkadot’s design ensures that blockchains can transcend their individual limitations by working together as part of a unified system. This cooperative architecture is what sets Polkadot apart in the blockchain landscape.

+

Why Interoperability Matters

+

The blockchain ecosystem is inherently fragmented. Different blockchains excel in specialized domains such as finance, gaming, or supply chain management, but these chains function in isolation without interoperability. This lack of connectivity stifles the broader utility of blockchain technology.

+

Interoperability solves this problem by enabling blockchains to:

+
    +
  • Collaborate across networks - chains can interact to share assets, functionality, and data, creating synergies that amplify their individual strengths
  • +
  • Achieve greater scalability - specialized chains can offload tasks to others, optimizing performance and resource utilization
  • +
  • Expand use-case potential - cross-chain applications can leverage features from multiple blockchains, unlocking novel user experiences and solutions
  • +
+

In the Polkadot ecosystem, interoperability transforms a collection of isolated chains into a cohesive, efficient network, pushing the boundaries of what blockchains can achieve together.

+

Key Mechanisms for Interoperability

+

At the core of Polkadot's cross-chain collaboration are foundational technologies designed to break down barriers between networks. These mechanisms empower blockchains to communicate, share resources, and operate as a cohesive ecosystem.

+

Cross-Consensus Messaging (XCM): The Backbone of Communication

+

Polkadot's Cross-Consensus Messaging (XCM) is the standard framework for interaction between parachains, relay chains, and, eventually, external blockchains. XCM provides a trustless, secure messaging format for exchanging assets, sharing data, and executing cross-chain operations.

+

Through XCM, decentralized applications can:

+
    +
  • Transfer tokens and other assets across chains
  • +
  • Coordinate complex workflows that span multiple blockchains
  • +
  • Enable seamless user experiences where underlying blockchain differences are invisible
  • +
+

XCM exemplifies Polkadot’s commitment to creating a robust and interoperable ecosystem.

For further information about XCM, check the Introduction to XCM article.

+

Bridges: Connecting External Networks

+

While XCM enables interoperability within the Polkadot ecosystem, bridges extend this functionality to external blockchains such as Ethereum and Bitcoin. By connecting these networks, bridges allow Polkadot-based chains to access external liquidity, additional functionalities, and broader user bases.

+

With bridges, developers and users gain the ability to:

+
    +
  • Integrate external assets into Polkadot-based applications
  • +
  • Combine the strengths of Polkadot’s scalability with the liquidity of other networks
  • +
  • Facilitate true multi-chain applications that transcend ecosystem boundaries
  • +
+

For more information about bridges in the Polkadot ecosystem, see the Bridge Hub guide.

+

The Polkadot Advantage

+

Polkadot was purpose-built for interoperability. Unlike networks that add interoperability as an afterthought, Polkadot integrates it as a fundamental design principle. This approach offers several distinct advantages:

+
    +
  • Developer empowerment - Polkadot’s interoperability tools allow developers to build applications that leverage multiple chains’ capabilities without added complexity
  • +
  • Enhanced ecosystem collaboration - chains in Polkadot can focus on their unique strengths while contributing to the ecosystem’s overall growth
  • +
  • Future-proofing blockchain - by enabling seamless communication, Polkadot ensures its ecosystem can adapt to evolving demands and technologies
  • +
+

Looking Ahead

+

Polkadot’s vision of interoperability extends beyond technical functionality, representing a shift towards a more collaborative blockchain landscape. By enabling chains to work together, Polkadot fosters innovation, efficiency, and accessibility, paving the way for a decentralized future where blockchains are not isolated competitors but interconnected collaborators.

+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/basics/networks/index.html b/polkadot-protocol/basics/networks/index.html new file mode 100644 index 00000000..01530d6f --- /dev/null +++ b/polkadot-protocol/basics/networks/index.html @@ -0,0 +1,5091 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Networks | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Networks

+

Introduction

+

The Polkadot ecosystem is built on a robust set of networks designed to enable secure and scalable development. Whether you are testing new features or deploying to live production, Polkadot offers several layers of networks tailored for each stage of the development process. From local environments to experimental networks like Kusama and community-run TestNets such as Paseo, developers can thoroughly test, iterate, and validate their applications. This guide will introduce you to Polkadot's various networks and explain how they fit into the development workflow.

+

Network Overview

+

Polkadot's development process is structured to ensure new features and upgrades are rigorously tested before being deployed on live production networks. The progression follows a well-defined path, starting from local environments and advancing through TestNets, ultimately reaching the Polkadot MainNet. The diagram below outlines the typical progression of the Polkadot development cycle:

+


+flowchart LR
+    id1[Local] --> id2[Westend] --> id4[Kusama] --> id5[Polkadot]  
+    id1[Local] --> id3[Paseo] --> id5[Polkadot] 
+

This flow ensures developers can thoroughly test and iterate without risking real tokens or affecting production networks. Testing tools like Chopsticks and various TestNets make it easier to experiment safely before releasing to production.

+

A typical journey through the Polkadot core protocol development process might look like this:

+
    +
  1. +

    Local development node - development starts in a local environment, where developers can create, test, and iterate on upgrades or new features using a local development node. This stage allows rapid experimentation in an isolated setup without any external dependencies

    +
  2. +
  3. +

    Westend - after testing locally, upgrades are deployed to Westend, Polkadot's primary TestNet. Westend simulates real-world conditions without using real tokens, making it the ideal place for rigorous feature testing before moving on to production networks

    +
  4. +
  5. +

    Kusama - once features have passed extensive testing on Westend, they move to Kusama, Polkadot's experimental and fast-moving "canary" network. Kusama operates as a high-fidelity testing ground with actual economic incentives, giving developers insights into how their features will perform in a real-world environment

    +
  6. +
  7. +

    Polkadot - after passing tests on Westend and Kusama, features are considered ready for deployment to Polkadot, the live production network

    +
  8. +
+

In addition, parachain developers can leverage local testing tools like Zombienet and deploy upgrades on parachain TestNets.

+
    +
  1. Paseo - for parachain and dApp developers, Paseo serves as a community-run TestNet that mirrors Polkadot's runtime. Like Westend for core protocol development, Paseo provides a testing ground for parachain development without affecting live networks
  2. +
+
+

Note

+

The Rococo TestNet deprecation date was October 14, 2024. Teams should use Westend for Polkadot protocol and feature testing and Paseo for chain development-related testing.

+
+

Polkadot Development Networks

+

Development and testing are crucial to building robust dApps and parachains and performing network upgrades within the Polkadot ecosystem. To achieve this, developers can leverage various networks and tools that provide a risk-free environment for experimentation and validation before deploying features to live networks. These networks help avoid the costs and risks associated with real tokens, enabling testing for functionalities like governance, cross-chain messaging, and runtime upgrades.

+

Kusama Network

+

Kusama is the experimental version of Polkadot, designed for developers who want to move quickly and test their applications in a real-world environment with economic incentives. Kusama serves as a production-grade testing ground where developers can deploy features and upgrades with the pressure of game theory and economics in mind. It mirrors Polkadot but operates as a more flexible space for innovation.

+

The native token for Kusama is KSM. For more information about KSM, visit the Native Assets page.

+

Test Networks

+

The following test networks provide controlled environments for testing upgrades and new features. TestNet tokens are available from the Polkadot faucet.

+

Westend

+

Westend is Polkadot's primary permanent TestNet. Unlike temporary test networks, Westend is not reset to the genesis block, making it an ongoing environment for testing Polkadot core features. Managed by Parity Technologies, Westend ensures that developers can test features in a real-world simulation without using actual tokens.

+

The native token for Westend is WND. More details about WND can be found on the Native Assets page.

+

Paseo

+

Paseo is a community-managed TestNet designed for parachain and dApp developers. It mirrors Polkadot's runtime and is maintained by Polkadot community members. Paseo provides a dedicated space for parachain developers to test their applications in a Polkadot-like environment without the risks associated with live networks.

+

The native token for Paseo is PAS. Additional information on PAS is available on the Native Assets page.

+

Local Test Networks

+

Local test networks are an essential part of the development cycle for blockchain developers using the Polkadot SDK. They allow for fast, iterative testing in controlled, private environments without connecting to public TestNets. Developers can quickly spin up local instances to experiment, debug, and validate their code before deploying to larger TestNets like Westend or Paseo. Two key tools for local network testing are Zombienet and Chopsticks.

+

Zombienet

+

Zombienet is a flexible testing framework for Polkadot SDK-based blockchains. It enables developers to create and manage ephemeral, short-lived networks. This feature makes Zombienet particularly useful for quick iterations, as it allows you to run multiple local networks concurrently, mimicking different runtime conditions. Whether you're developing a parachain or testing your custom blockchain logic, Zombienet gives you the tools to automate local testing.

+

Key features of Zombienet include:

+
    +
  • Creating dynamic, local networks with different configurations
  • +
  • Running parachains and relay chains in a simulated environment
  • +
  • Efficient testing of network components like cross-chain messaging and governance
  • +
+

Zombienet is ideal for developers looking to test quickly and thoroughly before moving to more resource-intensive public TestNets.

+

Chopsticks

+

Chopsticks is a tool designed to create forks of Polkadot SDK-based blockchains, allowing developers to interact with network forks as part of their testing process. This capability makes Chopsticks a powerful option for testing upgrades, runtime changes, or cross-chain applications in a forked network environment.

+

Key features of Chopsticks include:

+
    +
  • Forking live Polkadot SDK-based blockchains for isolated testing
  • +
  • Simulating cross-chain messages in a private, controlled setup
  • +
  • Debugging network behavior by interacting with the fork in real-time
  • +
+

Chopsticks provides a controlled environment for developers to safely explore the effects of runtime changes. It ensures that network behavior is tested and verified before upgrades are deployed to live networks.

+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/basics/randomness/index.html b/polkadot-protocol/basics/randomness/index.html new file mode 100644 index 00000000..b1fcf1df --- /dev/null +++ b/polkadot-protocol/basics/randomness/index.html @@ -0,0 +1,4994 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Randomness | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Randomness

+

Introduction

+

Randomness is crucial in Proof of Stake (PoS) blockchains to ensure a fair and unpredictable distribution of validator duties. However, computers are inherently deterministic, meaning the same input always produces the same output. What we typically refer to as "random" numbers on a computer are actually pseudo-random. These numbers rely on an initial "seed," which can come from external sources like atmospheric noise, heart rates, or even lava lamps. While this may seem random, given the same "seed," the same sequence of numbers will always be generated.

+

In a global blockchain network, relying on real-world entropy for randomness isn’t feasible because these inputs vary by time and location. If nodes use different inputs, blockchains can fork. Hence, real-world randomness isn't suitable for use as a seed in blockchain systems.

+

Currently, two primary methods for generating randomness in blockchains are used: RANDAO and VRF (Verifiable Random Function). Polkadot adopts the VRF approach for its randomness.

+

VRF

+

A Verifiable Random Function (VRF) is a cryptographic function that generates a random number and proof that ensures the submitter produced the number. This proof allows anyone to verify the validity of the random number.

+

Polkadot's VRF is similar to the one used in Ouroboros Praos, which secures randomness for block production in systems like BABE (Polkadot’s block production mechanism).

+

The key difference is that Polkadot's VRF doesn’t rely on a central clock—avoiding the issue of whose clock to trust. Instead, it uses its own past results and slot numbers to simulate time and determine future outcomes.

+

How VRF Works

+

Slots on Polkadot are discrete units of time, each lasting six seconds, and can potentially hold a block. Multiple slots form an epoch, with 2400 slots making up one four-hour epoch.

+

In each slot, validators execute a "die roll" using a VRF. The VRF uses three inputs:

+
    +
  1. A "secret key", unique to each validator, is used for the die roll
  2. +
  3. An epoch randomness value, derived from the hash of VRF outputs from blocks two epochs ago (N-2), so past randomness influences the current epoch (N)
  4. +
  5. The current slot number
  6. +
+

This process helps maintain fair randomness across the network.

+

Here is a graphical representation:

+

+

The VRF produces two outputs: a result (the random number) and a proof (verifying that the number was generated correctly).

+

The result is checked by the validator against a protocol threshold. If it's below the threshold, the validator becomes a candidate for block production in that slot.

+

The validator then attempts to create a block, submitting it along with the PROOF and RESULT.

+

So, VRF can be expressed like:

+

(RESULT, PROOF) = VRF(SECRET, EPOCH_RANDOMNESS_VALUE, CURRENT_SLOT_NUMBER)

+

Put simply, performing a "VRF roll" generates a random number along with proof that the number was genuinely produced and not arbitrarily chosen.

+

After executing the VRF, the RESULT is compared to a protocol-defined THRESHOLD. If the RESULT is below the THRESHOLD, the validator becomes a valid candidate to propose a block for that slot. Otherwise, the validator skips the slot.

+
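The following is an illustrative simulation of this threshold check only; it uses a plain, non-cryptographic hash as a stand-in for the real VRF, and none of the names correspond to actual Polkadot SDK APIs:

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for VRF(SECRET, EPOCH_RANDOMNESS_VALUE, CURRENT_SLOT_NUMBER):
// deterministic output from the three inputs, with no proof and no security.
fn mock_vrf(secret: u64, epoch_randomness: u64, slot: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    (secret, epoch_randomness, slot).hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let threshold = u64::MAX / 10; // roughly a 10% chance per validator per slot
    let epoch_randomness = 0xDEAD_BEEF; // derived from epoch N-2 on the real chain
    for slot in 0..5u64 {
        for secret in [1u64, 2, 3] { // three validators with distinct secrets
            if mock_vrf(secret, epoch_randomness, slot) < threshold {
                println!("slot {slot}: validator with secret {secret} may author");
            }
        }
    }
}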

As a result, multiple validators may be eligible to propose a block for the same slot. In this case, the block accepted by other nodes prevails, provided it is on the chain with the latest finalized block as determined by the GRANDPA finality gadget. It is also possible that no validator rolls below the threshold for a given slot. BABE handles such empty slots with a secondary slot-assignment mechanism that deterministically designates a fallback author in round-robin fashion; this fallback runs alongside the primary VRF lottery and only comes into play when the lottery yields no primary author for a slot.

+

Because validators roll independently, no block candidates may appear in some slots if all roll numbers are above the threshold.

+
+

Note

+

The resolution of this issue and the assurance that Polkadot block times remain near constant-time can be checked on the PoS Consensus page.

+
+

RANDAO

+

An alternative on-chain randomness method is Ethereum's RANDAO, where validators perform thousands of hashes on a seed and publish the final hash during a round. The collective input from all validators forms the random number, and as long as one honest validator participates, the randomness is secure.

+

To enhance security, RANDAO can optionally be combined with a Verifiable Delay Function (VDF), ensuring that randomness can't be predicted or manipulated during computation.

+
+

Note

+

More information about RANDAO can be found in the ETH documentation.

+
+

VDFs

+

Verifiable Delay Functions (VDFs) are time-bound computations that, even on parallel computers, take a set amount of time to complete.

+

They produce a unique result that can be quickly verified publicly. When combined with RANDAO, feeding RANDAO's output into a VDF introduces a delay that nullifies an attacker's chance to influence the randomness.

+

However, VDFs likely require specialized ASIC devices that run separately from standard nodes.

+
+

Warning

+

While only one honest VDF evaluator is needed to secure the system, and the devices are expected to be open source and inexpensive, running them involves significant costs without direct incentives, adding friction for blockchain users.

+
+

Additional Resources

+ +
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/glossary/index.html b/polkadot-protocol/glossary/index.html new file mode 100644 index 00000000..b17cad66 --- /dev/null +++ b/polkadot-protocol/glossary/index.html @@ -0,0 +1,6097 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Glossary | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Glossary

+

Key definitions, concepts, and terminology specific to the Polkadot ecosystem are included here.

+

Additional glossaries from around the ecosystem you might find helpful:

+ +

Authority

+

The role in a blockchain that can participate in consensus mechanisms.

+ +

Authority sets can be used as a basis for consensus mechanisms such as the Nominated Proof of Stake (NPoS) protocol.

+

Authority Round (Aura)

+

A deterministic consensus protocol where block production is limited to a rotating list of authorities that take turns creating blocks. In authority round (Aura) consensus, most online authorities are assumed to be honest. It is often used in combination with GRANDPA as a hybrid consensus protocol.

+

Learn more by reading the official Aura consensus algorithm wiki article.

+

Blind Assignment of Blockchain Extension (BABE)

+

A block authoring protocol similar to Aura, except authorities win slots based on a Verifiable Random Function (VRF) instead of the round-robin selection method. The winning authority can select a chain and submit a new block.

+

Learn more by reading the official Web3 Foundation BABE research document.

+

Block Author

+

The node responsible for the creation of a block, also called block producers. In a Proof of Work (PoW) blockchain, these nodes are called miners.

+

Byzantine Fault Tolerance (BFT)

+

The ability of a distributed computer network to remain operational if a certain proportion of its nodes or authorities are defective or behaving maliciously.

+
+

Note

+

A distributed network is typically considered Byzantine fault tolerant if it can remain functional, with up to one-third of nodes assumed to be defective, offline, actively malicious, and part of a coordinated attack.

+
+

Byzantine Failure

+

The loss of a network service due to node failures that exceed the proportion of nodes required to reach consensus.

+

Practical Byzantine Fault Tolerance (pBFT)

+

An early approach to Byzantine fault tolerance (BFT), practical Byzantine fault tolerance (pBFT) systems tolerate Byzantine behavior from up to one-third of participants.

+

The communication overhead for such systems is O(n²), where n is the number of nodes (participants) in the system.

+

Call

+

In the context of pallets containing functions to be dispatched to the runtime, Call is an enumeration data type that describes the functions that can be dispatched with one variant per pallet. A Call represents a dispatch data structure object.

+

Chain Specification

+

A chain specification file defines the properties required to run a node in an active or new Polkadot SDK-built network. It often contains the initial genesis runtime code, network properties (such as the network's name), the initial state for some pallets, and the boot node list. The chain specification file makes it easy to use a single Polkadot SDK codebase as the foundation for multiple independently configured chains.

+

Collator

+

A node that authors blocks for a parachain network. Collators aren't authorities in themselves, as they require a relay chain to coordinate consensus.

+

More details are found on the Polkadot Collator Wiki.

+

Collective

+

Most often used to refer to an instance of the Collective pallet on Polkadot SDK-based networks such as Kusama or Polkadot if the Collective pallet is part of the FRAME-based runtime for the network.

+

Consensus

+

Consensus is the process blockchain nodes use to agree on a chain's canonical fork. It is composed of authorship, finality, and fork-choice rule. In the Polkadot ecosystem, these three components are usually separate and the term consensus often refers specifically to authorship.

+

See also hybrid consensus.

+

Consensus Algorithm

+

Ensures a set of actors—who don't necessarily trust each other—can reach an agreement about the state as the result of some computation. Most consensus algorithms assume that up to one-third of the actors or nodes can be Byzantine fault tolerant.

+

Consensus algorithms are generally concerned with ensuring two properties:

+
    +
  • Safety - indicating that all honest nodes eventually agreed on the state of the chain
  • +
  • Liveness - indicating the ability of the chain to keep progressing
  • +
+

Consensus Engine

+

The node subsystem responsible for consensus tasks.

+

For detailed information about the consensus strategies of the Polkadot network, see the Polkadot Consensus blog series.

+

See also hybrid consensus.

+

Coretime

+

The time allocated for utilizing a core, measured in relay chain blocks. There are two types of coretime: on-demand and bulk.

+

On-demand coretime refers to coretime acquired through bidding in near real-time for the validation of a single parachain block on one of the cores reserved specifically for on-demand orders. These cores make up the on-demand coretime pool. Cores reserved through bulk coretime can also be made available in the on-demand coretime pool, in part or in their entirety.

+

Bulk coretime is a fixed duration of continuous coretime represented by an NFT that can be split, shared, or resold. It is managed by the Broker pallet.

+

Development Phrase

+

A mnemonic phrase that is intentionally made public.

+

Well-known development accounts, such as Alice, Bob, Charlie, Dave, Eve, and Ferdie, are generated from the same secret phrase:

+
bottom drive obey lake curtain smoke basket hold race lonely fit walk
+
+

Many tools in the Polkadot SDK ecosystem, such as subkey, allow you to implicitly specify an account using a derivation path such as //Alice.

+

Digest

+

An extensible field of the block header that encodes information needed by several actors in a blockchain network, including:

+
    +
  • Light clients for chain synchronization
  • +
  • Consensus engines for block verification
  • +
  • The runtime itself, in the case of pre-runtime digests
  • +
+

Dispatchable

+

Function objects that act as the entry points in FRAME pallets. Internal or external entities can call them to interact with the blockchain’s state. They are a core aspect of the runtime logic, handling transactions and other state-changing operations.

+

Events

+

A means of recording that some particular state transition happened.

+

In the context of FRAME, events are composable data types that each pallet can individually define. Events in FRAME are implemented as a set of transient storage items inspected immediately after a block has been executed and reset during block initialization.

+

Executor

+

A means of executing a function call in a given runtime with a set of dependencies. +There are two orchestration engines in Polkadot SDK, WebAssembly and native.

+
    +
  • +

    The native executor uses a natively compiled runtime embedded in the node to execute calls. This is a performance optimization available to up-to-date nodes

    +
  • +
  • +

    The WebAssembly executor uses a Wasm binary and a Wasm interpreter to execute calls. The binary is guaranteed to be up-to-date regardless of the version of the blockchain node because it is persisted in the state of the Polkadot SDK-based chain

    +
  • +
+

Existential Deposit

+

The minimum balance an account is allowed to have in the Balances pallet. Accounts cannot be created with a balance less than the existential deposit amount.

+

If an account balance drops below this amount, the Balances pallet uses a FRAME System API to drop its references to that account.

+

If the Balances pallet reference to an account is dropped, the account can be reaped.

+

Extrinsic

+

A general term for data that originates outside the runtime, is included in a block, and leads to some action. This includes user-initiated transactions and inherent transactions placed into the block by the block builder.

+

It is a SCALE-encoded array typically consisting of a version number, signature, and varying data types indicating the resulting runtime function to be called. Extrinsics can take two forms: inherents and transactions.

+

For more technical details, see the Polkadot spec.

+

Fork Choice Rule/Strategy

+

A fork choice rule or strategy helps determine which chain is valid when reconciling several network forks. A common fork choice rule is the longest chain, in which the chain with the most blocks is selected.

+

FRAME (Framework for Runtime Aggregation of Modularized Entities)

+

Enables developers to create blockchain runtime environments from a modular set of components called pallets. It utilizes a set of procedural macros to construct runtimes.

+

Visit the Polkadot SDK docs for more details on FRAME.

+

Full Node

+

A node that prunes historical states, keeping only recently finalized block states to reduce storage needs. Full nodes provide current chain state access and allow direct submission and validation of extrinsics, maintaining network decentralization.

+

Genesis Configuration

+

A mechanism for specifying the initial state of a blockchain. By convention, this initial state or first block is commonly referred to as the genesis state or genesis block. The genesis configuration for Polkadot SDK-based chains is accomplished by way of a chain specification file.

+

GRANDPA

+

A deterministic finality mechanism for blockchains that is implemented in the Rust programming language.

+

The formal specification is maintained by the Web3 Foundation.

+ +

A structure that aggregates the information used to summarize a block. Primarily, it consists of cryptographic information used by light clients to get minimally secure but very efficient chain synchronization.

+

Hybrid Consensus

+

A blockchain consensus protocol that consists of independent or loosely coupled mechanisms for block production and finality.

+

Hybrid consensus allows the chain to grow as fast as probabilistic consensus protocols, such as Aura, while maintaining the same level of security as deterministic finality consensus protocols, such as GRANDPA.

+

Inherent Transactions

+

A special type of unsigned transaction, referred to as inherents, that enables a block authoring node to insert information that doesn't require validation directly into a block.

+

Only the block-authoring node that calls the inherent transaction function can insert data into its block. In general, validators assume the data inserted using an inherent transaction is valid and reasonable even if it can't be deterministically verified.

+

JSON-RPC

+

A stateless, lightweight remote procedure call protocol encoded in JavaScript Object Notation (JSON). JSON-RPC provides a standard way to call functions on a remote system by using JSON.

+

For Polkadot SDK, this protocol is implemented through the Parity JSON-RPC crate.

+

Keystore

+

A subsystem for managing keys for the purpose of producing new blocks.

+

Kusama

+

Kusama is a Polkadot SDK-based blockchain that implements a design similar to the Polkadot network.

+

Kusama is a canary network and is referred to as Polkadot's "wild cousin."

+

As a canary network, Kusama is expected to be more stable than a test network like Westend but less stable than a production network like Polkadot. Kusama is controlled by its network participants and is intended to be stable enough to encourage meaningful experimentation.

+

libp2p

+

A peer-to-peer networking stack that allows the use of many transport mechanisms, including WebSockets (usable in a web browser).

+

Polkadot SDK uses the Rust implementation of the libp2p networking stack.

+

Light Client

+

A type of blockchain node that doesn't store the chain state or produce blocks.

+

A light client can verify cryptographic primitives and provides a remote procedure call (RPC) server, enabling blockchain users to interact with the network.

+

Metadata

+

Data that provides information about one or more aspects of a system. +The metadata that exposes information about a Polkadot SDK blockchain enables you to interact with that system.

+

Nominated Proof of Stake (NPoS)

+

A method for determining validators or authorities based on a willingness to commit their stake to the proper functioning of one or more block-producing nodes.

+

Oracle

+

An entity that connects a blockchain to a non-blockchain data source. Oracles enable the blockchain to access and act upon information from existing data sources and incorporate data from non-blockchain systems and services.

+

Origin

+

A FRAME primitive that identifies the source of a dispatched function call into the runtime. The FRAME System pallet defines three built-in origins. As a pallet developer, you can also define custom origins, such as those defined by the Collective pallet.

+

Pallet

+

A module that can be used to extend the capabilities of a FRAME-based runtime. +Pallets bundle domain-specific logic with runtime primitives like events and storage items.

+

Parachain

+

A parachain is a blockchain that derives shared infrastructure and security from a relay chain. +You can learn more about parachains on the Polkadot Wiki.

+

Paseo

+

Paseo TestNet provisions testing on Polkadot's "production" runtime, which means less chance of feature or code mismatch when developing parachain apps. Specifically, after the Polkadot Technical Fellowship proposes a runtime upgrade for Polkadot, this TestNet is updated, giving a period where the TestNet will be ahead of Polkadot to allow for testing.

+

Polkadot

+

The Polkadot network is a blockchain that serves as the central hub of a heterogeneous blockchain network. It serves the role of the relay chain and provides shared infrastructure and security to support parachains.

+

Relay Chain

+

Relay chains are blockchains that provide shared infrastructure and security to the parachains in the network. In addition to providing consensus capabilities, relay chains allow parachains to communicate and exchange digital assets without needing to trust one another.

+

Rococo

+

A parachain test network for the Polkadot network. The Rococo network is a Polkadot SDK-based blockchain with an October 14, 2024 deprecation date. Development teams are encouraged to use the Paseo TestNet instead.

+

Runtime

+

The runtime provides the state transition function for a node. In Polkadot SDK, the runtime is stored as a Wasm binary in the chain state.

+

Slot

+

A fixed, equal interval of time used by consensus engines such as Aura and BABE. In each slot, a subset of authorities is permitted, or obliged, to author a block.

+

Sovereign Account

+

The unique account identifier for each chain in the relay chain ecosystem. It is often used in cross-consensus (XCM) interactions to sign XCM messages sent to the relay chain or other chains in the ecosystem.

+

The sovereign account for each chain is a root-level account that can only be accessed using the Sudo pallet or through governance. The account identifier is calculated by concatenating the Blake2 hash of a specific text string and the registered parachain identifier.

+

SS58 Address Format

+

A public key address based on the Bitcoin Base-58-check encoding. Each Polkadot SDK SS58 address uses a base-58 encoded value to identify a specific account on a specific Polkadot SDK-based chain.

+

The canonical ss58-registry provides additional details about the address format used by different Polkadot SDK-based chains, including the network prefix and website used for different networks.

+

State Transition Function (STF)

+

The logic of a blockchain that determines how the state changes when a block is processed. In Polkadot SDK, the state transition function is effectively equivalent to the runtime.

+

Storage Item

+

FRAME primitives that provide type-safe data persistence capabilities to the runtime. +Learn more in the storage items reference document in the Polkadot SDK.

+

Substrate

+

A flexible framework for building modular, efficient, and upgradeable blockchains. Substrate is written in the Rust programming language and is maintained by Parity Technologies.

+

Transaction

+

An extrinsic that includes a signature that can be used to verify the account authorizing it inherently or via signed extensions.

+

Transaction Era

+

A definable period expressed as a range of block numbers during which a transaction can be included in a block. +Transaction eras are used to protect against transaction replay attacks if an account is reaped and its replay-protecting nonce is reset to zero.

+

Trie (Patricia Merkle Tree)

+

A data structure used to represent sets of key-value pairs and enables the items in the data set to be stored and retrieved using a cryptographic hash. Because incremental changes to the data set result in a new hash, retrieving data is efficient even if the data set is very large. With this data structure, you can also prove whether the data set includes any particular key-value pair without access to the entire data set.

+

In Polkadot SDK-based blockchains, state is stored in a trie data structure that supports the efficient creation of incremental digests. This trie is exposed to the runtime as a simple key/value map where both keys and values can be arbitrary byte arrays.

+

Validator

+

A validator is a node that participates in the consensus mechanism of the network. Its roles include block production, transaction validation, network integrity, and security maintenance.

+

WebAssembly (Wasm)

+

An execution architecture that allows for the efficient, platform-neutral expression of +deterministic, machine-executable logic.

+

Wasm can be compiled from many languages, including +the Rust programming language. Polkadot SDK-based chains use a Wasm binary to provide portable runtimes that can be included as part of the chain's state.

+

Weight

+

A convention used in Polkadot SDK-based blockchains to measure and manage the time it takes to validate a block. +Polkadot SDK defines one unit of weight as one picosecond of execution time on reference hardware.

+

The maximum block weight should be equivalent to one-third of the target block time with an allocation of one-third each for:

+
    +
  • Block construction
  • +
  • Network propagation
  • +
  • Import and verification
  • +
+

By defining weights, you can trade-off the number of transactions per second and the hardware required to maintain the target block time appropriate for your use case. Weights are defined in the runtime, meaning you can tune them using runtime updates to keep up with hardware and software improvements.

+
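As a back-of-the-envelope check (illustrative arithmetic only, using the one-picosecond-per-weight-unit convention above and a hypothetical 6-second target block time):

// One weight unit = one picosecond, so one second = 10^12 weight units.
const WEIGHT_PER_SECOND: u64 = 1_000_000_000_000;
const TARGET_BLOCK_TIME_SECS: u64 = 6; // hypothetical target block time
// One-third of the block time is budgeted for block construction.
const MAX_BLOCK_WEIGHT: u64 = TARGET_BLOCK_TIME_SECS * WEIGHT_PER_SECOND / 3;

fn main() {
    println!("{MAX_BLOCK_WEIGHT}"); // 2_000_000_000_000 weight units
}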

Westend

+

Westend is a Parity-maintained, Polkadot SDK-based blockchain that serves as a test network for the Polkadot network.

+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/index.html b/polkadot-protocol/index.html new file mode 100644 index 00000000..c2f6ee99 --- /dev/null +++ b/polkadot-protocol/index.html @@ -0,0 +1,4921 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Learn About the Polkadot Protocol | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Learn About the Polkadot Protocol

+

The Polkadot protocol is designed to enable scalable, secure, and interoperable networks. It introduces a unique multichain architecture that allows independent blockchains, known as parachains, to operate seamlessly while benefiting from the shared security of the relay chain. Polkadot’s decentralized governance ensures that network upgrades and decisions are community-driven, while its cross-chain messaging and interoperability features make it a hub for multichain applications.

+

This section offers a comprehensive technical overview of the Polkadot Protocol, delving into its multichain architecture, foundational principles, cryptographic underpinnings, and on-chain governance system. These key components constitute the core building blocks that power Polkadot, enabling seamless collaboration between parachains, efficient network operation, and decentralized decision-making through OpenGov.

+

Whether you're new to blockchain or an experienced developer, you'll gain insights into how the Polkadot Protocol enables scalable, interoperable, and decentralized networks.

+

In This Section

+

+

+

+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/onchain-governance/index.html b/polkadot-protocol/onchain-governance/index.html new file mode 100644 index 00000000..0330de38 --- /dev/null +++ b/polkadot-protocol/onchain-governance/index.html @@ -0,0 +1,4924 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + On-Chain Governance | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

On-Chain Governance

+

Polkadot's on-chain governance system, OpenGov, enables decentralized decision-making across the network. It empowers stakeholders to propose, vote on, and enact changes with transparency and efficiency. This system ensures that governance is both flexible and inclusive, allowing developers to integrate custom governance solutions and mechanisms within the network. Understanding how OpenGov functions is crucial for anyone looking to engage with Polkadot’s decentralized ecosystem, whether you’re proposing upgrades, managing referenda, or exploring voting structures.

+

At the core of Polkadot’s governance system are three key pallets: Preimage, Referenda, and Conviction Voting. These components enable flexible, decentralized decision-making, providing developers with the tools to create tailored governance solutions. This modular approach ensures governance remains dynamic, secure, and adaptable, fostering deeper participation and alignment with the network’s goals. By leveraging these pallets, developers can build custom governance models that shape the evolution of the Polkadot ecosystem.

+

Start Building Governance Solutions

+

To develop solutions related to Polkadot's governance system, it’s essential to understand three key pallets:

+
    +
  • Preimage - stores and manages the content or the detailed information of a referendum proposal before it is voted on
  • +
  • Referenda - manages the lifecycle of a referendum, including proposal submission, voting, and execution. Once a referendum is proposed and voted on, it can be enacted if it passes the required threshold
  • +
  • Conviction Voting - manages the voting power based on the "conviction" or commitment of voters, providing a more flexible and nuanced voting mechanism
  • +
+

In This Section

+

+

+

+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/onchain-governance/origins-tracks/index.html b/polkadot-protocol/onchain-governance/origins-tracks/index.html new file mode 100644 index 00000000..4eb9aaf1 --- /dev/null +++ b/polkadot-protocol/onchain-governance/origins-tracks/index.html @@ -0,0 +1,4928 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Origins and Tracks | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Origins and Tracks

+

Introduction

+

Polkadot's OpenGov system empowers decentralized decision-making and active community participation by tailoring the governance process to the impact of proposed changes. Through a system of origins and tracks, OpenGov ensures that every referendum receives the appropriate scrutiny, balancing security, inclusivity, and efficiency.

+

This guide will help you understand the role of origins in classifying proposals by privilege and priority. You will learn how tracks guide proposals through tailored stages like voting, confirmation, and enactment and how to select the correct origin for your referendum to align with community expectations and network governance.

+

Origins and tracks are vital in streamlining the governance workflow and maintaining Polkadot's resilience and adaptability.

+

Origins

+

Origins are the foundation of Polkadot's OpenGov governance system. They categorize proposals by privilege and define their decision-making rules. Each origin corresponds to a specific level of importance and risk, guiding how referendums progress through the governance process.

+
    +
  • High-privilege origins like Root Origin govern critical network changes, such as core software upgrades
  • +
  • Lower-privilege origins like Small Spender handle minor requests, such as community project funding under 10,000 DOT
  • +
+

Proposers select an origin based on the nature of their referendum. Origins determine parameters like approval thresholds, required deposits, and timeframes for voting and confirmation. Each origin is paired with a track, which acts as a roadmap for the proposal's lifecycle, including preparation, voting, and enactment.

+
+

OpenGov Origins

+

Explore the Polkadot OpenGov Origins page for a detailed list of origins and their associated parameters.

+
+

Tracks

+

Tracks define a referendum's journey from submission to enactment, tailoring governance parameters to the impact of proposed changes. Each track operates independently and includes several key stages:

+
    +
  • Preparation - time for community discussion before voting begins
  • +
  • Voting - period for token holders to cast their votes
  • +
  • Decision - finalization of results and determination of the proposal's outcome
  • +
  • Confirmation - period to verify sustained community support before enactment
  • +
  • Enactment - final waiting period before the proposal takes effect
  • +
+

Tracks customize these stages with parameters like decision deposit requirements, voting durations, and approval thresholds, ensuring proposals from each origin receive the required scrutiny and process. For example, a runtime upgrade in the Root Origin track will have longer timeframes and stricter thresholds than a treasury request in the Small Spender track.

+

Additional Resources

+ +
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/polkadot-protocol/onchain-governance/overview/index.html b/polkadot-protocol/onchain-governance/overview/index.html new file mode 100644 index 00000000..3b184dc1 --- /dev/null +++ b/polkadot-protocol/onchain-governance/overview/index.html @@ -0,0 +1,5056 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + On-Chain Governance Overview | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

On-Chain Governance

+

Introduction

+

Polkadot’s governance system exemplifies decentralized decision-making, empowering its community of stakeholders to shape the network’s future through active participation. The latest evolution, OpenGov, builds on Polkadot’s foundation by providing a more inclusive and efficient governance model.

+

This guide will explain the principles and structure of OpenGov and walk you through its key components, such as Origins, Tracks, and Delegation. You will learn about improvements over earlier governance systems, including streamlined voting processes and enhanced stakeholder participation.

+

With OpenGov, Polkadot achieves a flexible, scalable, and democratic governance framework that allows multiple proposals to proceed simultaneously, ensuring the network evolves in alignment with its community's needs.

+

Governance Evolution

+

Polkadot’s governance journey began with Governance V1, a system that proved effective in managing treasury funds and protocol upgrades. However, it faced limitations, such as:

+
    +
  • Slow voting cycles, causing delays in decision-making
  • +
  • Inflexibility in handling multiple referendums, restricting scalability
  • +
+

To address these challenges, Polkadot introduced OpenGov, a governance model designed for greater inclusivity, efficiency, and scalability. OpenGov replaces the centralized structures of Governance V1, such as the Council and Technical Committee, with a fully decentralized and dynamic framework.

+

For a full comparison of the historic and current governance models, visit the Gov1 vs. Polkadot OpenGov section of the Polkadot Wiki.

+

OpenGov Key Features

+

OpenGov transforms Polkadot’s governance into a decentralized, stakeholder-driven model, eliminating centralized decision-making bodies like the Council. Key enhancements include:

+
    +
  • Decentralization - shifts all decision-making power to the public, ensuring a more democratic process
  • +
  • Enhanced delegation - allows users to delegate their votes to trusted experts across specific governance tracks
  • +
  • Simultaneous referendums - multiple proposals can progress at once, enabling faster decision-making
  • +
  • Polkadot Technical Fellowship - a broad, community-driven group replacing the centralized Technical Committee
  • +
+

This new system ensures Polkadot governance remains agile and inclusive, even as the ecosystem grows.

+

Origins and Tracks

+

In OpenGov, origins and tracks are central to managing proposals and votes.

+
    +
  • Origin - determines the authority level of a proposal (e.g., Treasury, Root) which decides the track of all referendums from that origin
  • +
  • Track - defines the procedural flow of a proposal, such as voting duration, approval thresholds, and enactment timelines
  • +
+

Developers must be aware that referendums from different origins and tracks will take varying amounts of time to reach approval and enactment. The Polkadot Technical Fellowship has the option to shorten this timeline by whitelisting a proposal and allowing it to be enacted through the Whitelist Caller origin.

+

Visit Origins and Tracks Info for details on current origins and tracks, associated terminology, and parameters.

+

Referendums

+

In OpenGov, anyone can submit a referendum, fostering an open and participatory system. The timeline for a referendum depends on the privilege level of the origin, with more significant changes offering more time for community voting and participation before enactment.

+

The timeline for an individual referendum includes four distinct periods:

+
    +
  • Lead-in - a minimum amount of time to allow for community participation, available room in the origin, and payment of the decision deposit. Voting is open during this period
  • +
  • Decision - voting continues
  • +
  • Confirmation - referendum must meet approval and support criteria during entire period to avoid rejection
  • +
  • Enactment - changes approved by the referendum are executed
  • +
+

Vote on Referendums

+

Voters can vote with their tokens on each referendum. Polkadot uses a voluntary token locking mechanism, called conviction voting, as a way for voters to increase their voting power. A token holder signals they have a stronger preference for approving a proposal based upon their willingness to lock up tokens. Longer voluntary token locks are seen as a signal of continual approval and translate to increased voting weight.
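A quick numeric sketch helps here. The Rust snippet below models the conviction multipliers used on Polkadot (0.1x with no lock, then 1x through 6x as the voluntary lock grows from 1 to 32 enactment periods). It is an illustrative model only, not runtime code; the function name effective_votes and the balance figures are assumptions for the example.

// Illustrative sketch of conviction-weighted voting (not runtime code).
#[derive(Clone, Copy)]
enum Conviction {
    None,     // 0.1x votes, no lock
    Locked1x, // 1x votes, locked for 1 enactment period
    Locked2x, // 2x votes, locked for 2 periods
    Locked3x, // 3x votes, locked for 4 periods
    Locked4x, // 4x votes, locked for 8 periods
    Locked5x, // 5x votes, locked for 16 periods
    Locked6x, // 6x votes, locked for 32 periods
}

// Effective vote weight for a balance voted at a given conviction.
fn effective_votes(balance: u128, conviction: Conviction) -> u128 {
    match conviction {
        Conviction::None => balance / 10,
        Conviction::Locked1x => balance,
        Conviction::Locked2x => balance * 2,
        Conviction::Locked3x => balance * 3,
        Conviction::Locked4x => balance * 4,
        Conviction::Locked5x => balance * 5,
        Conviction::Locked6x => balance * 6,
    }
}

fn main() {
    let balance = 100 * 10_u128.pow(10); // 100 DOT at 10 decimals
    // Locking for 32 enactment periods counts 100 DOT as 600 DOT of votes.
    assert_eq!(effective_votes(balance, Conviction::Locked6x), 6 * balance);
}

The production logic lives in the conviction-voting pallet, which also tracks the resulting token locks.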

+

See Voting on a Referendum for a deeper look at conviction voting and related token locks.

+

Delegate Voting Power

+

The OpenGov system also supports multi-role delegations, allowing token holders to assign their voting power on different tracks to entities with expertise in those areas.

+

For example, if a token holder lacks the technical knowledge to evaluate proposals on the Root track, they can delegate their voting power for that track to an expert they trust to vote in the best interest of the network. This ensures informed decision-making across tracks while maintaining flexibility for token holders.

+

Visit Multirole Delegation for more details on delegating voting power.

+

Cancel a Referendum

+

Polkadot OpenGov has two origins for rejecting ongoing referendums:

+
    +
  • Referendum Canceller - cancels an active referendum when non-malicious errors occur and refunds the deposits to the originators
  • +
  • Referendum Killer - used for urgent, malicious cases; this origin instantly terminates an active referendum and slashes deposits
  • +
+

See Cancelling, Killing, and Blacklisting for additional information on rejecting referendums.

+

Additional Resources

\ No newline at end of file
diff --git a/search/search_index.json b/search/search_index.json
new file mode 100644
index 00000000..325a58ec
--- /dev/null
+++ b/search/search_index.json
@@ -0,0 +1 @@
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"LICENSE/","title":"LICENSE","text":"

Attribution 4.0 International

"},{"location":"develop/","title":"Develop and Deploy with Polkadot","text":"

Polkadot offers a unique platform for developers to build the next generation of blockchain solutions. Whether developing a custom parachain, deploying smart contracts, or crafting user-facing applications, the Polkadot ecosystem provides the tools, frameworks, and infrastructure to bring your vision to life.

This section builds on concepts from the Polkadot Protocol section, providing the practical knowledge and resources needed to design, deploy, and operate scalable solutions in the Polkadot ecosystem. By leveraging Polkadot's unique features, such as interoperability and decentralized security, developers can build applications and infrastructure tailored to their needs.

"},{"location":"develop/#choose-your-development-path","title":"Choose Your Development Path","text":"
  • Parachain developers - build, deploy, and maintain custom parachains with the Polkadot SDK. From runtime customization to network operations, learn how to create a scalable and efficient parachain

    • Where to start - Introduction to the Polkadot SDK
  • Smart contract developers - utilize Polkadot's support for Wasm-based contracts with ink! or deploy Solidity contracts on EVM-compatible parachains. Leverage familiar tools to write, test, and manage decentralized logic

    • Where to start - Overview of the Smart Contract Landscape on Polkadot
  • Application developers - integrate your applications with the Polkadot ecosystem using wallets, oracles, indexers, and more. Explore guides on how to leverage Polkadot's decentralized infrastructure to deliver high-performing, user-facing solutions

    • Where to start - Polkadot Ecosystem Toolkit
  • Cross-chain developers - harness Polkadot's interoperability features to enable secure communication and asset transfers across blockchains. Leverage Cross-Consensus Messaging (XCM) to create innovative cross-chain workflows and applications

    • Where to start - Introduction to XCM
"},{"location":"develop/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/development-pathways/","title":"Development Pathways","text":""},{"location":"develop/development-pathways/#introduction","title":"Introduction","text":"

Developers can choose from different development pathways to build applications and core blockchain functionality. Each pathway caters to different types of projects and developer skill sets, while complementing one another within the broader network.

The Polkadot ecosystem provides multiple development pathways:

graph TD
    A[Development Pathways]
    A --> B[Smart Contract Development]
    A --> C[Parachain Development]
    A --> D[Client-Side Development]
"},{"location":"develop/development-pathways/#smart-contract-development","title":"Smart Contract Development","text":"

Smart contracts are sandboxed programs that run within a virtual machine on the blockchain. These deterministic pieces of code are deployed at specific blockchain addresses and execute predefined logic when triggered by transactions. Because they run in an isolated environment, they provide enhanced security and predictable execution. Smart contracts can be deployed permissionlessly, allowing any developer to create and launch applications without requiring special access or permissions. They enable developers to create trustless applications by encoding rules, conditions, and state transitions that leverage the security and transparency of the underlying blockchain.

Some key benefits of developing smart contracts include ease of development, faster time to market, and permissionless deployment. Smart contracts allow developers to quickly build and deploy decentralized applications without complex infrastructure or intermediaries. This accelerates the development lifecycle and enables rapid innovation within the Polkadot ecosystem.

For more information on developing smart contracts in the Polkadot ecosystem, check the Smart Contracts section.

"},{"location":"develop/development-pathways/#parachain-development","title":"Parachain Development","text":"

Runtimes are the core building blocks that define the logic and functionality of Polkadot SDK-based parachains. Developers can customize and extend the features of their blockchain, allowing for tighter integration with critical network tasks such as block production, consensus mechanisms, and governance processes.

Runtimes can be upgraded through forkless runtime updates, enabling seamless evolution of the parachain without disrupting existing functionality.

Developers can define the parameters, rules, and behaviors that shape their blockchain network. This includes token economics, transaction fees, permissions, and more. Using the Polkadot SDK, teams can iterate on their blockchain designs, experiment with new features, and deploy highly specialized networks tailored to their specific use cases.
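As a small, hedged illustration of what such parameters look like in practice, FRAME runtimes commonly declare economic constants with the parameter_types! macro. The Balance alias and the concrete values below are assumptions for the example, not recommendations:

use frame_support::parameter_types;

// Illustrative values only; every runtime chooses its own.
pub type Balance = u128;
pub const UNIT: Balance = 10_000_000_000; // one token at 10 decimals

parameter_types! {
    /// Minimum balance an account must keep to stay on-chain.
    pub const ExistentialDeposit: Balance = UNIT;
    /// Fee charged per byte of transaction data.
    pub const TransactionByteFee: Balance = UNIT / 10_000;
}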

For those interested in delving deeper into runtime development, explore the dedicated Customize Your Parachain section.

"},{"location":"develop/development-pathways/#client-side-development","title":"Client-Side Development","text":"

The client-side development path is dedicated to building applications that interact with Polkadot SDK-based blockchains and enhance user engagement with the network. While decentralized applications (dApps) are a significant focus, this pathway also includes developing other tools and interfaces that expand users' interactions with blockchain data and services.

Client-side developers can build:

  • Decentralized applications (dApps) - these applications leverage the blockchain's smart contracts or runtimes to offer a wide range of features, from financial services to gaming and social applications, all accessible directly by end-users

  • Command-line interfaces (CLIs) - CLI tools empower developers and technical users to interact with the blockchain programmatically. These tools enable tasks like querying the blockchain, deploying smart contracts, managing wallets, and monitoring network status

  • Data analytics and visualization tools - developers can create tools that aggregate, analyze, and visualize on-chain data to help users and businesses understand trends, track transactions, and gain insights into the network's health and usage

  • Wallets - securely managing accounts and private keys is crucial for blockchain users. Client-side development includes building user-friendly wallets, account management tools, and extensions that integrate seamlessly with the ecosystem

  • Explorers and dashboards - blockchain explorers allow users to view and search on-chain data, including blocks, transactions, and accounts. Dashboards provide a more interactive interface for users to monitor critical metrics, such as staking rewards, governance proposals, and network performance

These applications can leverage the Polkadot blockchain's underlying protocol features to create solutions that allow users to interact with the ecosystem. The Client-side development pathway is ideal for developers interested in enhancing user experiences and building applications that bring the power of decentralized networks to a broader audience.

Check the API Libraries section for essential tools to interact with Polkadot SDK-based blockchain data and protocol features.

"},{"location":"develop/networks/","title":"Networks","text":""},{"location":"develop/networks/#introduction","title":"Introduction","text":"

The Polkadot ecosystem consists of multiple networks designed to support different stages of blockchain development, from main networks to test networks. Each network serves a unique purpose, providing developers with flexible environments for building, testing, and deploying blockchain applications.

This section includes essential network information such as RPC endpoints, currency symbols and decimals, and how to acquire TestNet tokens for the Polkadot ecosystem of networks.

"},{"location":"develop/networks/#production-networks","title":"Production Networks","text":""},{"location":"develop/networks/#polkadot","title":"Polkadot","text":"

Polkadot is the primary production blockchain network for high-stakes, enterprise-grade applications. Polkadot MainNet has been running since May 2020 and has implementations in various programming languages ranging from Rust to JavaScript.

Network Details | RPC Endpoints

Currency symbol - DOT

Currency decimals - 10

Block explorer - Polkadot Subscan

Blockops

wss://polkadot-public-rpc.blockops.network/ws\n

Dwellir

wss://polkadot-rpc.dwellir.com\n

Dwellir Tunisia

wss://polkadot-rpc-tn.dwellir.com\n

IBP1

wss://rpc.ibp.network/polkadot\n

IBP2

wss://polkadot.dotters.network\n

LuckyFriday

wss://rpc-polkadot.luckyfriday.io\n

OnFinality

wss://polkadot.api.onfinality.io/public-ws\n

RadiumBlock

wss://polkadot.public.curie.radiumblock.co/ws\n

RockX

wss://rockx-dot.w3node.com/polka-public-dot/ws\n

Stakeworld

wss://dot-rpc.stakeworld.io\n

SubQuery

wss://polkadot.rpc.subquery.network/public/ws\n

Light client

light://substrate-connect/polkadot\n
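Any of these endpoints can be used with standard client libraries. As a hedged example, the sketch below connects with the subxt Rust crate; the endpoint choice and the connectivity check are illustrative:

use subxt::{OnlineClient, PolkadotConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to one of the public Polkadot RPC endpoints listed above.
    let api =
        OnlineClient::<PolkadotConfig>::from_url("wss://polkadot-rpc.dwellir.com").await?;
    // Print the runtime spec version as a basic connectivity check.
    println!("spec_version: {}", api.runtime_version().spec_version);
    Ok(())
}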

"},{"location":"develop/networks/#kusama","title":"Kusama","text":"

Kusama is a network built as a risk-taking, fast-moving \"canary in the coal mine\" for its cousin Polkadot. As it is built on top of the same infrastructure, Kusama often acts as a final testing ground for new features before they are launched on Polkadot. Unlike true TestNets, however, the Kusama KSM native token does have economic value. This incentive encourages participants to maintain this robust and performant structure for the benefit of the community.

Network Details | RPC Endpoints

Currency symbol - KSM

Currency decimals - 12

Block explorer - Kusama Subscan

Dwellir

wss://kusama-rpc.dwellir.com\n

Dwellir Tunisia

wss://kusama-rpc-tn.dwellir.com\n

IBP1

wss://rpc.ibp.network/kusama\n

IBP2

wss://kusama.dotters.network\n

LuckyFriday

wss://rpc-kusama.luckyfriday.io\n

OnFinality

wss://kusama.api.onfinality.io/public-ws\n

RadiumBlock

wss://kusama.public.curie.radiumblock.co/ws\n

RockX

wss://rockx-ksm.w3node.com/polka-public-ksm/ws\n

Stakeworld

wss://ksm-rpc.stakeworld.io\n

Light client

light://substrate-connect/kusama\n

"},{"location":"develop/networks/#test-networks","title":"Test Networks","text":""},{"location":"develop/networks/#westend","title":"Westend","text":"

Westend is the primary test network that mirrors Polkadot's functionality for protocol-level feature development. As a true TestNet, the WND native token intentionally does not have any economic value. Use the faucet information in the following section to obtain WND tokens.

Network Information | RPC Endpoints

Currency symbol - WND

Currency decimals - 12

Block explorer - Westend Subscan

Faucet - Official Westend faucet

Dwellir

wss://westend-rpc.dwellir.com\n

Dwellir Tunisia

wss://westend-rpc-tn.dwellir.com\n

IBP1

wss://rpc.ibp.network/westend\n

IBP2

wss://westend.dotters.network\n

OnFinality

wss://westend.api.onfinality.io/public-ws\n

Parity

wss://westend-rpc.polkadot.io\n

Light client

light://substrate-connect/westend\n

"},{"location":"develop/networks/#paseo","title":"Paseo","text":"

Paseo is a decentralized, community-run, stable TestNet for parachain and dApp developers to build and test their applications. Unlike Westend, Paseo is not intended for protocol-level testing. As a true TestNet, the PAS native token intentionally does not have any economic value. Use the faucet information in the following section to obtain PAS tokens.

Network Information | RPC Endpoints


Currency symbol - PAS

Currency decimals - 10

Block explorer - Paseo Subscan

Faucet - Official Paseo faucet

Amforc

wss://paseo.rpc.amforc.com\n

Dwellir

wss://paseo-rpc.dwellir.com\n

IBP1

wss://rpc.ibp.network/paseo\n

IBP2

wss://paseo.dotters.network\n

StakeWorld

wss://pas-rpc.stakeworld.io\n

"},{"location":"develop/networks/#additional-resources","title":"Additional Resources","text":"
  • Polkadot Fellowship runtimes repository - find a collection of runtimes for Polkadot, Kusama, and their system-parachains as maintained by the community via the Polkadot Technical Fellowship
"},{"location":"develop/interoperability/","title":"Interoperability","text":"

This section covers everything you need to know about building and implementing Cross-Consensus Messaging (XCM) solutions in the Polkadot ecosystem. Whether you're working on establishing cross-chain channels, sending and receiving XCM messages, or testing and debugging your cross-chain configurations, you'll find the essential resources and tools here to support your interoperability needs, regardless of your development focus.

  • Not sure where to start? Visit the Interoperability overview page to explore different options and find the right fit for your project
  • Ready to dive in? Head over to Send XCM Messages to learn how to send a message cross-chain via XCM
"},{"location":"develop/interoperability/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/interoperability/#additional-resources","title":"Additional ResourcesReview the Polkadot SDK's XCM DocumentationFollow Step-by-Step TutorialsFamiliarize Yourself with the XCM FormatEssential XCM Tools","text":"

  • Review the Polkadot SDK's XCM Documentation - dive into the official documentation to learn about the key components for supporting XCM in your parachain and enabling seamless cross-chain communication.

  • Follow Step-by-Step Tutorials - enhance your XCM skills with step-by-step tutorials on building interoperability solutions on Polkadot SDK-based blockchains.

  • Familiarize Yourself with the XCM Format - gain a deeper understanding of the XCM format and structure, including any extra data it may need and what each part of a message means.

  • Essential XCM Tools - explore essential tools for creating and integrating cross-chain solutions within the Polkadot ecosystem.

"},{"location":"develop/interoperability/intro-to-xcm/","title":"Introduction to XCM","text":""},{"location":"develop/interoperability/intro-to-xcm/#introduction","title":"Introduction","text":"

Polkadot's unique value lies in its ability to enable interoperability between parachains and other blockchain systems. At the core of this capability is XCM (Cross-Consensus Messaging), a flexible messaging format that facilitates communication and collaboration between independent consensus systems.

With XCM, one chain can send intents to another, fostering a more interconnected ecosystem. Although it was developed specifically for Polkadot, XCM is a universal format, usable in any blockchain environment. This guide provides an overview of XCM's core principles, design, and functionality, alongside practical examples of its implementation.

"},{"location":"develop/interoperability/intro-to-xcm/#messaging-format","title":"Messaging Format","text":"

XCM is not a protocol but a standardized messaging format. It defines the structure and behavior of messages but does not handle their delivery. This separation allows developers to focus on crafting instructions for target systems without worrying about transmission mechanics.

XCM messages are intent-driven, outlining desired actions for the receiving blockchain to consider and potentially alter its state. These messages do not directly execute changes; instead, they rely on the host chain's environment to interpret and implement them. By utilizing asynchronous composability, XCM facilitates efficient execution where messages can be processed independently of their original order, similar to how RESTful services handle HTTP requests without requiring sequential processing.

"},{"location":"develop/interoperability/intro-to-xcm/#the-four-principles-of-xcm","title":"The Four Principles of XCM","text":"

XCM adheres to four guiding principles that ensure robust and reliable communication across consensus systems:

  • Asynchronous - XCM messages operate independently of sender acknowledgment, avoiding delays due to blocked processes
  • Absolute - XCM messages are guaranteed to be delivered and interpreted accurately, in order, and in a timely manner. Once a message is sent, one can be sure it will be processed as intended
  • Asymmetric - XCM messages follow the 'fire and forget' paradigm meaning no automatic feedback is provided to the sender. Any results must be communicated separately to the sender with an additional message back to the origin
  • Agnostic - XCM operates independently of the specific consensus mechanisms, making it compatible across diverse systems

These principles guarantee that XCM provides a reliable framework for cross-chain communication, even in complex environments.

"},{"location":"develop/interoperability/intro-to-xcm/#the-xcm-tech-stack","title":"The XCM Tech Stack","text":"

The XCM tech stack is designed to facilitate seamless, interoperable communication between chains that reside within the Polkadot ecosystem. XCM can be used to express the meaning of messages sent over each of these communication channels.

"},{"location":"develop/interoperability/intro-to-xcm/#core-functionalities-of-xcm","title":"Core Functionalities of XCM","text":"

XCM enhances cross-consensus communication by introducing several powerful features:

  • Programmability - supports dynamic message handling, allowing for more comprehensive use cases. Includes branching logic, safe dispatches for version checks, and asset operations like NFT management
  • Functional Multichain Decomposition - enables mechanisms such as remote asset locking, asset namespacing, and inter-chain state referencing, with contextual message identification
  • Bridging - establishes a universal reference framework for multi-hop setups, connecting disparate systems like Ethereum and Bitcoin with the Polkadot relay chain acting as a universal location

The standardized format for messages allows parachains to handle tasks like user balances, governance, and staking, freeing the Polkadot relay chain to focus on shared security. These features make XCM indispensable for implementing scalable and interoperable blockchain applications.

"},{"location":"develop/interoperability/intro-to-xcm/#xcm-example","title":"XCM Example","text":"

The following is a simplified XCM message demonstrating a token transfer from Alice to Bob on the same chain (ParaA).

let message = Xcm(vec![
    WithdrawAsset((Here, amount).into()),
    BuyExecution {
        fees: (Here, amount).into(),
        weight_limit: WeightLimit::Unlimited
    },
    DepositAsset {
        assets: All.into(),
        beneficiary: MultiLocation {
            parents: 0,
            interior: Junction::AccountId32 {
                network: None,
                id: BOB.clone().into()
            }.into(),
        }.into()
    }
]);

The message consists of three instructions described as follows:

  • WithdrawAsset - transfers a specified number of tokens from Alice's account to a holding register

    WithdrawAsset((Here, amount).into()),

    • Here - the native parachain token
    • amount - the number of tokens that are transferred

    The first instruction takes as an input the MultiAsset that should be withdrawn. The MultiAsset describes the native parachain token with the Here keyword. The amount parameter is the number of tokens that are transferred. The withdrawal account depends on the origin of the message. In this example the origin of the message is Alice. The WithdrawAsset instruction moves amount number of native tokens from Alice's account into the holding register.

  • BuyExecution - allocates fees to cover the execution weight of the XCM instructions

    BuyExecution {
        fees: (Here, amount).into(),
        weight_limit: WeightLimit::Unlimited
    },

    • fees - describes the asset in the holding register that should be used to pay for the weight
    • weight_limit - defines the maximum fees that can be used to buy weight
  • DepositAsset - moves the remaining tokens from the holding register to Bob's account

    DepositAsset {
        assets: All.into(),
        beneficiary: MultiLocation {
            parents: 0,
            interior: Junction::AccountId32 {
                network: None,
                id: BOB.clone().into()
            }.into(),
        }.into()
    }

    • All - the wildcard for the asset(s) to be deposited. In this case, all assets in the holding register should be deposited

This step-by-step process showcases how XCM enables precise state changes within a blockchain system. You can find a complete XCM message example in the XCM repository.

"},{"location":"develop/interoperability/intro-to-xcm/#overview","title":"Overview","text":"

XCM revolutionizes cross-chain communication by enabling use cases such as:

  • Token transfers between blockchains
  • Asset locking for cross-chain smart contract interactions
  • Remote execution of functions on other blockchains

These functionalities empower developers to build innovative, multi-chain applications, leveraging the strengths of various blockchain networks. To stay updated on XCM's evolving format or contribute, visit the XCM repository.

"},{"location":"develop/interoperability/send-messages/","title":"Send XCM Messages","text":""},{"location":"develop/interoperability/send-messages/#introduction","title":"Introduction","text":"

One of the core FRAME pallets that enables parachains to engage in cross-chain communication using the Cross-Consensus Message (XCM) format is pallet-xcm. It facilitates the sending, execution, and management of XCM messages, thereby allowing parachains to interact with other chains within the ecosystem. Additionally, pallet-xcm, also referred to as the XCM pallet, supports essential operations like asset transfers, version negotiation, and message routing.

This page provides a detailed overview of the XCM pallet's key features, its primary roles in XCM operations, and the main extrinsics it offers. Whether aiming to execute XCM messages locally or send them to external chains, this guide covers the foundational concepts and practical applications you need to know.

"},{"location":"develop/interoperability/send-messages/#xcm-frame-pallet-overview","title":"XCM Frame Pallet Overview","text":"

The pallet-xcm provides a set of pre-defined, commonly used XCVM programs in the form of a set of extrinsics.

This pallet provides some default implementations for traits required by XcmConfig. The XCM executor is also included as an associated type within the pallet's configuration.

Note

For further details on the XCM configuration, refer to the XCM Configuration page.

Where the XCM format defines a set of instructions used to construct XCVM programs, pallet-xcm defines a set of extrinsics that can be utilized to build XCVM programs, either to target the local or external chains. The pallet-xcm functionality is divided into three categories:

  • Primitive - dispatchable functions to execute XCM locally

  • High-level - functions for asset transfers between chains

  • Version negotiation-specific - functions for managing XCM version compatibility

"},{"location":"develop/interoperability/send-messages/#key-roles-of-the-xcm-pallet","title":"Key Roles of the XCM Pallet","text":"

The XCM pallet plays a central role in managing cross-chain messages, with its primary responsibilities including:

  • Execute XCM messages - interacts with the XCM executor to validate and execute messages, adhering to predefined security and filter criteria

  • Send messages across chains - allows authorized origins to send XCM messages, enabling controlled cross-chain communication

  • Reserve-based transfers and teleports - supports asset movement between chains, governed by filters that restrict operations to authorized origins

  • XCM version negotiation - ensures compatibility by selecting the appropriate XCM version for inter-chain communication

  • Asset trapping and recovery - manages trapped assets, enabling safe reallocation or recovery when issues occur during cross-chain transfers

  • Support for XCVM operations - oversees state and configuration requirements necessary for executing cross-consensus programs within the XCVM framework

"},{"location":"develop/interoperability/send-messages/#primary-extrinsics-of-the-xcm-pallet","title":"Primary Extrinsics of the XCM Pallet","text":"

This page will highlight the two Primary Primitive Calls responsible for sending and executing XCVM programs as dispatchable functions within the pallet.

"},{"location":"develop/interoperability/send-messages/#execute","title":"Execute","text":"

The execute call directly interacts with the XCM executor, allowing for the execution of XCM messages originating from a locally signed origin. The executor validates the message, ensuring it complies with any configured barriers or filters before executing.

Once validated, the message is executed locally, and an event is emitted to indicate the result: whether the message was fully executed or only partially completed. Execution is capped by a maximum weight (max_weight); if the required weight exceeds this limit, the message will not be executed.

pub fn execute<T: Config>(
    message: Box<VersionedXcm<<T as Config>::RuntimeCall>>,
    max_weight: Weight,
)

Note

For further details on the execute extrinsic, see the pallet-xcm documentation.

Warning

Partial execution of messages may occur depending on the constraints or barriers applied.
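To tie the pieces together, here is a hedged sketch of dispatching execute from a runtime test. It assumes a mock runtime with pallet-xcm configured; Runtime, RuntimeOrigin, RuntimeCall, and ALICE are placeholders from that assumed test setup:

// Hedged test sketch; assumes a mock runtime with pallet-xcm configured.
use frame_support::assert_ok;
use xcm::{latest::prelude::*, VersionedXcm};

#[test]
fn execute_local_xcm() {
    // A trivial one-instruction program; real tests would do useful work.
    let message: Xcm<RuntimeCall> = Xcm(vec![ClearOrigin]);
    assert_ok!(pallet_xcm::Pallet::<Runtime>::execute(
        RuntimeOrigin::signed(ALICE),
        Box::new(VersionedXcm::from(message)),
        Weight::from_parts(1_000_000_000, 64 * 1024),
    ));
}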

"},{"location":"develop/interoperability/send-messages/#send","title":"Send","text":"

The send call enables XCM messages to be sent to a specified destination. This could be a parachain, smart contract, or any external system governed by consensus. Unlike the execute call, the message is not executed locally but is transported to the destination chain for processing.

The destination is defined using a Location, which describes the target chain or system. This ensures precise delivery through the configured XCM transport mechanism.

pub fn send<T: Config>(
    dest: Box<MultiLocation>,
    message: Box<VersionedXcm<<T as Config>::RuntimeCall>>,
)
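For orientation, the dest parameter is simply a location in the consensus universe. A hedged example of addressing a sibling parachain follows; the ID 2000 is illustrative, and the exact type names depend on the XCM version in use:

// From a parachain, go up one level (parents: 1) to the relay chain,
// then down into the sibling parachain with ID 2000.
let dest = MultiLocation {
    parents: 1,
    interior: X1(Parachain(2000)),
};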

Note

For further information about the send extrinsic, check the pallet-xcm documentation.

"},{"location":"develop/interoperability/send-messages/#xcm-router","title":"XCM Router","text":"

The XcmRouter is a critical component that the XCM pallet requires to send XCM messages. It defines where messages can be sent and determines the appropriate XCM transport protocol for the operation.

For instance, the Kusama network employs the ChildParachainRouter, which restricts routing to Downward Message Passing (DMP) from the relay chain to parachains, ensuring secure and controlled communication.

pub type XcmRouter = WithUniqueTopic<(
    // Only one router so far - use DMP to communicate with child parachains.
    ChildParachainRouter<Runtime, XcmPallet, PriceForChildParachainDelivery>,
)>;

Note

For more details on XCM transport protocols, see the XCM Channels page.

"},{"location":"develop/interoperability/test-and-debug/","title":"Testing and Debugging","text":""},{"location":"develop/interoperability/test-and-debug/#introduction","title":"Introduction","text":"

Cross-Consensus Messaging (XCM) is a core feature of the Polkadot ecosystem, enabling communication between parachains, relay chains, and system chains. To ensure the reliability of XCM-powered blockchains, thorough testing and debugging are essential before production deployment.

This article explores two indispensable tools for XCM testing, the XCM Simulator and the XCM Emulator, to help developers onboard and test their solutions effectively.

"},{"location":"develop/interoperability/test-and-debug/#xcm-simulator","title":"XCM Simulator","text":"

Setting up a live network with multiple interconnected parachains for XCM testing can be complex and resource-intensive. To address this, the xcm-simulator was developed. This versatile tool enables developers to test and experiment with XCM in a controlled, simulated network environment.

The xcm-simulator offers a fast and efficient way to test XCM instructions against the xcm-executor. It serves as an experimental playground for developers, supporting features such as:

  • Mocking Downward Message Passing (DMP) - retrieve incoming XCMs from the relay chain using the received_dmp getter
  • Rapid iteration - test XCM messages in isolation without relying on full network simulations

The xcm-simulator achieves this by utilizing mocked runtimes for both the relay chain and connected parachains, enabling developers to focus on message logic and configuration without needing a live network.

"},{"location":"develop/interoperability/test-and-debug/#how-does-it-work","title":"How does it work?","text":"

The xcm-simulator provides the following macros for building a mocked simulation environment:

  • decl_test_relay_chain - implements upward message passing (UMP) for the specified relay chain struct. The struct must define the XCM configuration for the relay chain:

    decl_test_relay_chain! {
        pub struct Relay {
            Runtime = relay_chain::Runtime,
            XcmConfig = relay_chain::XcmConfig,
            new_ext = relay_ext(),
        }
    }

    The relay_ext() sets up a test environment for the relay chain with predefined storage, then returns a TestExternalities instance for further testing.

  • decl_test_parachain - implements the XcmMessageHandlerT and DmpMessageHandlerT traits for the specified parachain struct. Requires the parachain struct to include the XcmpMessageHandler and DmpMessageHandler pallets, which define the logic for processing messages (implemented through mock_message_queue). The pattern must be the following:

    decl_test_parachain! {
        pub struct ParaA {
            Runtime = parachain::Runtime,
            XcmpMessageHandler = parachain::MsgQueue,
            DmpMessageHandler = parachain::MsgQueue,
            new_ext = para_ext(1),
        }
    }

    The para_ext(para_id: u32) function initializes a test environment for a parachain with a specified para_id, sets the initial configuration of the parachain, returning a TestExternalities instance for testing.

    Note

    Developers can define as many parachains as needed, such as ParaA, ParaB, ParaC, and so on.

  • decl_test_network - defines a testing network consisting of a relay chain and multiple parachains. Takes a network struct as input and implements functionalities for testing, including ParachainXcmRouter and RelayChainXcmRouter. The struct must specify the relay chain and an indexed list of parachains to be included in the network:

    decl_test_network! {
        pub struct ExampleNet {
            relay_chain = Relay,
            parachains = vec![
                (1, ParaA),
                (2, ParaB),
            ],
        }
    }

By leveraging these macros, developers can customize their testing networks by defining relay chains and parachains tailored to their needs.
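With the network declared, a test can run code inside each mocked runtime. The sketch below assumes the ParaA, Relay, and ExampleNet declarations shown above; reset and execute_with are the simulator's helpers for resetting state and entering a chain's environment.

// Hedged sketch of a simulator test; assumes the declarations above.
use xcm_simulator::TestExt;

#[test]
fn para_a_interacts_with_relay() {
    // Start from a clean state for every mocked chain.
    ExampleNet::reset();

    ParaA::execute_with(|| {
        // Dispatch extrinsics or send XCMs from parachain A's runtime here.
    });

    Relay::execute_with(|| {
        // Inspect relay-chain state or events produced by the message here.
    });
}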

For guidance on implementing a mock runtime for a Polkadot SDK-based chain, refer to the Pallet Testing article. This framework enables thorough testing of runtime and cross-chain interactions.

For a complete example of how to use the xcm-simulator, explore the sample provided in the xcm-simulator codebase.

"},{"location":"develop/interoperability/test-and-debug/#xcm-emulator","title":"XCM Emulator","text":"

The xcm-emulator is a tool designed to simulate the execution of XCM programs using predefined runtime configurations. These configurations include those utilized by live networks like Kusama, Polkadot, and the Asset Hub.

This tool enables testing of cross-chain message passing, providing a way to verify outcomes, weights, and side effects efficiently.

The xcm-emulator relies on transport layer pallets. However, the messages do not leverage the same messaging infrastructure as live networks since the transport mechanism is mocked. Additionally, consensus-related events, such as disputes, staking, and ImOnline events, are not covered. Parachains should use end-to-end (E2E) tests to validate these events.

"},{"location":"develop/interoperability/test-and-debug/#pros-and-cons","title":"Pros and Cons","text":"

The XCM Emulator provides both advantages and limitations when testing cross-chain communication in simulated environments.

  • Pros:

    • Interactive debugging - offers tracing capabilities similar to EVM, enabling detailed analysis of issues
    • Runtime composability - facilitates testing and integration of multiple runtime components
    • Immediate feedback - supports Test-Driven Development (TDD) by providing rapid test results
    • Seamless integration testing - simplifies the process of testing new runtime versions in an isolated environment
  • Cons:

    • Simplified emulation - always assumes message delivery, which may not mimic real-world network behavior
    • Dependency challenges - requires careful management of dependency versions and patching. Refer to the Cargo dependency documentation
    • Compilation overhead - testing environments can be resource-intensive, requiring frequent compilation updates
"},{"location":"develop/interoperability/test-and-debug/#how-does-it-work_1","title":"How Does It Work?","text":"

The xcm-emulator package builds upon the functionality provided by the xcm-simulator package, offering the same set of macros while extending their capabilities. In addition to the standard features, xcm-emulator introduces new tools that make testing cross-chain communication more comprehensive.

One of the key additions is the decl_test_bridges macro. This macro allows developers to define and implement mock bridges for testing interoperability in the Polkadot ecosystem.

  • decl_test_bridges - enables the creation of multiple bridges between chains, specifying their source chain, target chain, and the handler responsible for processing messages

    decl_test_bridges! {
        pub struct BridgeA {
            source = ChainA,
            target = ChainB,
            handler = HandlerA
        },
        pub struct BridgeB {
            source = ChainB,
            target = ChainC,
            handler = HandlerB
        },
    }

Utilizing the capabilities of the xcm-emulator, developers can effectively design, test, and optimize cross-chain functionality, fostering interoperability within the Polkadot ecosystem.

"},{"location":"develop/interoperability/xcm-channels/","title":"XCM Channels","text":""},{"location":"develop/interoperability/xcm-channels/#introduction","title":"Introduction","text":"

Polkadot is designed to enable interoperability between its connected parachains. At the core of this interoperability is the Cross-Consensus Message Format (XCM), a standard language that allows parachains to communicate and interact with each other.

The network-layer protocol responsible for delivering XCM-formatted messages between parachains is the Cross-Chain Message Passing (XCMP) protocol. XCMP maintains messaging queues on the relay chain, serving as a bridge to facilitate cross-chain interactions.

As XCMP is still under development, Polkadot has implemented a temporary alternative called Horizontal Relay-routed Message Passing (HRMP). HRMP offers the same interface and functionality as the planned XCMP, but with a crucial difference: it stores all messages directly in the relay chain's storage, which is more resource-intensive.

Once XCMP is fully implemented, HRMP will be deprecated in favor of the native XCMP protocol. XCMP will offer a more efficient and scalable solution for cross-chain message passing, as it will not require the relay chain to store all the messages.

"},{"location":"develop/interoperability/xcm-channels/#establishing-hrmp-channels","title":"Establishing HRMP Channels","text":"

To enable communication between parachains using the HRMP protocol, the parachains must explicitly establish communication channels by registering them on the relay chain.

Downward and upward channels from and to the relay chain are implicitly available, meaning they do not need to be explicitly opened.

Opening an HRMP channel requires the parachains involved to make a deposit on the relay chain. This deposit serves a specific purpose: it covers the costs associated with using the relay chain's storage for the message queues linked to the channel. The amount of this deposit varies based on parameters defined by the specific relay chain being used.

"},{"location":"develop/interoperability/xcm-channels/#relay-chain-parameters","title":"Relay Chain Parameters","text":"

Each Polkadot relay chain has a set of configurable parameters that control the behavior of the message channels between parachains. These parameters include hrmpSenderDeposit, hrmpRecipientDeposit, hrmpChannelMaxMessageSize, hrmpChannelMaxCapacity, and more.

When a parachain wants to open a new channel, it must consider these parameter values to ensure the channel is configured correctly.

To view the current values of these parameters in the Polkadot network:

  1. Visit Polkadot.js Apps, navigate to the Developer dropdown and select the Chain state option

  2. Query the chain configuration parameters. The result will display the current settings for all the Polkadot network parameters, including the HRMP channel settings

    1. Select configuration
    2. Choose the activeConfig() call
    3. Click the + button to execute the query
    4. Check the chain configuration

"},{"location":"develop/interoperability/xcm-channels/#dispatching-extrinsics","title":"Dispatching Extrinsics","text":"

Establishing new HRMP channels between parachains requires dispatching specific extrinsic calls on the Polkadot relay chain from the parachain's origin.

The most straightforward approach is to implement the channel opening logic off-chain, then use the XCM pallet's send extrinsic to submit the necessary instructions to the relay chain. However, the ability to send arbitrary programs through the Transact instruction in XCM is typically restricted to privileged origins, such as the sudo pallet or governance mechanisms.

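As a rough sketch of the off-chain approach described above, the following hypothetical snippet builds an XCM that withdraws fee assets, buys execution, and uses Transact to dispatch the relay chain's hrmp_init_open_channel extrinsic. The RelayCall/HrmpCall types are stand-ins for a SCALE-codec mirror of the relay chain's call enum, the fee and weight figures are placeholders, and type and field names vary across XCM versions.

// Hypothetical mirror of the relay chain's call enum (indices must match the live runtime).\nlet call = RelayCall::Hrmp(HrmpCall::InitOpenChannel {\n    recipient: ParaId::from(2001), // placeholder recipient parachain\n    proposed_max_capacity: 8,\n    proposed_max_message_size: 1_048_576,\n})\n.encode();\n\nlet message = Xcm(vec![\n    // Pay for relay chain execution from the parachain's sovereign account.\n    WithdrawAsset((Here, 1_000_000_000u128).into()),\n    BuyExecution { fees: (Here, 1_000_000_000u128).into(), weight_limit: Unlimited },\n    // Dispatch the channel-open request with the parachain's origin.\n    Transact {\n        origin_kind: OriginKind::Native,\n        require_weight_at_most: Weight::from_parts(1_000_000_000, 65_536),\n        call: call.into(),\n    },\n]);\n\n// Submit the program to the relay chain; as noted above, this requires a privileged origin.\npallet_xcm::Pallet::<Runtime>::send(\n    origin,\n    Box::new(Location::parent().into()),\n    Box::new(VersionedXcm::from(message)),\n)?;\n
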
Parachain developers have a few options for triggering the required extrinsic calls from their parachain's origin, depending on the configuration and access controls defined:

  • Sudo - if the parachain has a sudo pallet configured, the sudo key holder can use the sudo extrinsic to dispatch the necessary channel opening calls
  • Governance - the parachain's governance system, such as a council or OpenGov, can be used to authorize the channel opening calls
  • Privileged accounts - the parachain may have other designated privileged accounts that are allowed to dispatch the HRMP channel opening extrinsics
"},{"location":"develop/interoperability/xcm-channels/#where-to-go-next","title":"Where to Go Next","text":"

Explore the following tutorials for detailed, step-by-step guidance on setting up cross-chain communication channels in Polkadot:

  • Opening HRMP Channels Between Parachains
  • Opening HRMP Channels with System Parachains
"},{"location":"develop/interoperability/xcm-config/","title":"XCM Config","text":""},{"location":"develop/interoperability/xcm-config/#introduction","title":"Introduction","text":"

The XCM executor is a crucial component responsible for interpreting and executing XCM messages (XCMs) in Polkadot SDK-based chains. It processes and manages XCM instructions, ensuring they are executed correctly and in sequence. Adhering to the Cross-Consensus Virtual Machine (XCVM) specification, the XCM executor can be customized or replaced with an alternative that also complies with the XCVM standards.

The XcmExecutor is not a pallet but a struct parameterized by a Config trait. The Config trait is the inner configuration, parameterizing the outer XcmExecutor<Config> struct. Both configurations are set up within the runtime.

The executor is highly configurable, with the XCM builder offering building blocks to tailor the configuration to specific needs. While these blocks serve as a foundation, users can also create their own building blocks to address unique requirements. This article examines the XCM configuration process, explains each configurable item, and provides examples of the tools and types available to help customize these settings.

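As an illustration, the following is a minimal custom building block: a hypothetical Barrier that only admits messages originating from the relay chain. It assumes a recent xcm/xcm-executor release in which ShouldExecute::should_execute takes a Properties argument; names and signatures may differ between versions.

use frame_support::traits::ProcessMessageError;\nuse xcm::latest::prelude::*;\nuse xcm_executor::traits::{Properties, ShouldExecute};\n\n/// Hypothetical barrier: permit XCMs only from the parent (relay chain).\npub struct AllowFromParentOnly;\nimpl ShouldExecute for AllowFromParentOnly {\n    fn should_execute<RuntimeCall>(\n        origin: &Location,\n        _instructions: &mut [Instruction<RuntimeCall>],\n        _max_weight: Weight,\n        _properties: &mut Properties,\n    ) -> Result<(), ProcessMessageError> {\n        if *origin == Location::parent() {\n            Ok(())\n        } else {\n            Err(ProcessMessageError::Unsupported)\n        }\n    }\n}\n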
"},{"location":"develop/interoperability/xcm-config/#xcm-executor-configuration","title":"XCM Executor Configuration","text":"

The Config trait defines the XCM executor\u2019s configuration, which requires several associated types. Each type has specific trait bounds that the concrete implementation must fulfill. Some types, such as RuntimeCall, come with a default implementation in most cases, while others use the unit type () as the default. For many of these types, it is crucial to select the appropriate implementation carefully. Predefined solutions and building blocks that can be adapted to your specific needs are available in the xcm-builder folder.

Each type is explained below, along with an overview of some of its implementations:

pub trait Config {\n    type RuntimeCall: Parameter + Dispatchable<PostInfo = PostDispatchInfo> + GetDispatchInfo;\n    type XcmSender: SendXcm;\n    type AssetTransactor: TransactAsset;\n    type OriginConverter: ConvertOrigin<<Self::RuntimeCall as Dispatchable>::RuntimeOrigin>;\n    type IsReserve: ContainsPair<MultiAsset, MultiLocation>;\n    type IsTeleporter: ContainsPair<MultiAsset, MultiLocation>;\n    type Aliasers: ContainsPair<Location, Location>;\n    type UniversalLocation: Get<InteriorMultiLocation>;\n    type Barrier: ShouldExecute;\n    type Weigher: WeightBounds<Self::RuntimeCall>;\n    type Trader: WeightTrader;\n    type ResponseHandler: OnResponse;\n    type AssetTrap: DropAssets;\n    type AssetClaims: ClaimAssets;\n    type AssetLocker: AssetLock;\n    type AssetExchanger: AssetExchange;\n    type SubscriptionService: VersionChangeNotifier;\n    type PalletInstancesInfo: PalletsInfoAccess;\n    type MaxAssetsIntoHolding: Get<u32>;\n    type FeeManager: FeeManager;\n    type MessageExporter: ExportXcm;\n    type UniversalAliases: Contains<(MultiLocation, Junction)>;\n    type CallDispatcher: CallDispatcher<Self::RuntimeCall>;\n    type SafeCallFilter: Contains<Self::RuntimeCall>;\n    type TransactionalProcessor: ProcessTransaction;\n    type HrmpNewChannelOpenRequestHandler: HandleHrmpNewChannelOpenRequest;\n    type HrmpChannelAcceptedHandler: HandleHrmpChannelAccepted;\n    type HrmpChannelClosingHandler: HandleHrmpChannelClosing;\n    type XcmRecorder: RecordXcm;\n}\n
"},{"location":"develop/interoperability/xcm-config/#config-items","title":"Config Items","text":"

Each configuration item is explained below, detailing the associated type\u2019s purpose and role in the XCM executor. Many of these types have predefined solutions available in the xcm-builder, and a combined example configuration follows the list. The available configuration items are:

  • RuntimeCall - defines the runtime's callable functions, created via the frame::runtime macro. It represents an enum listing the callable functions of all implemented pallets

    type RuntimeCall: Parameter + Dispatchable<PostInfo = PostDispatchInfo> + GetDispatchInfo\n
    The associated traits signify:

    • Parameter - ensures the type is encodable, decodable, and usable as a parameter
    • Dispatchable - indicates it can be executed in the runtime
    • GetDispatchInfo - provides weight details, determining how long execution takes
  • XcmSender - implements the SendXcm trait, specifying how the executor sends XCMs using transport layers (e.g., UMP for relay chains or XCMP for sibling chains). If a runtime lacks support for a given transport layer, such as HRMP (or XCMP), messages routed through it cannot be delivered

    type XcmSender: SendXcm;\n

  • AssetTransactor - implements the TransactAsset trait, handling the conversion and transfer of MultiAssets between accounts or registers. It can be configured to support native tokens, fungibles, non-fungibles, or multiple tokens, using pre-defined adapters like FungibleAdapter or custom solutions

    type AssetTransactor: TransactAsset;\n

  • OriginConverter - implements the ConvertOrigin trait to map MultiLocation origins to RuntimeOrigin. Multiple implementations can be combined, and OriginKind is used to resolve conflicts. Pre-defined converters like SovereignSignedViaLocation and SignedAccountId32AsNative handle sovereign and local accounts respectively

    type OriginConverter: ConvertOrigin<<Self::RuntimeCall as Dispatchable>::RuntimeOrigin>;\n

  • IsReserve - specifies trusted <MultiAsset, MultiLocation> pairs for depositing reserve assets. Using the unit type () blocks reserve deposits. The NativeAsset struct is an example of a reserve implementation

    type IsReserve: ContainsPair<MultiAsset, MultiLocation>;\n

  • IsTeleporter - defines trusted <MultiAsset, MultiLocation> pairs for teleporting assets to the chain. Using () blocks the ReceiveTeleportedAssets instruction. The NativeAsset struct can act as an implementation

    type IsTeleporter: ContainsPair<MultiAsset, MultiLocation>;\n

  • Aliasers - a list of (Origin, Target) pairs enabling each Origin to be replaced with its corresponding Target

    type Aliasers: ContainsPair<Location, Location>;\n

  • UniversalLocation - specifies the runtime's location in the consensus universe

    type UniversalLocation: Get<InteriorMultiLocation>;\n

    • Some examples are:
      • X1(GlobalConsensus(NetworkId::Polkadot)) for Polkadot
      • X1(GlobalConsensus(NetworkId::Kusama)) for Kusama
      • X2(GlobalConsensus(NetworkId::Polkadot), Parachain(1000)) for Statemint
  • Barrier - implements the ShouldExecute trait, functioning as a firewall for XCM execution. Multiple barriers can be combined in a tuple, where execution halts if one succeeds

    type Barrier: ShouldExecute;\n

  • Weigher - calculates the weight of XCMs and instructions, enforcing limits and refunding unused weight. Common solutions include FixedWeightBounds, which uses a base weight and limits on instructions

    type Weigher: WeightBounds<Self::RuntimeCall>;\n

  • Trader - manages asset-based weight purchases and refunds for BuyExecution instructions. The UsingComponents trader is a common implementation

    type Trader: WeightTrader;\n

  • ResponseHandler - handles QueryResponse instructions, implementing the OnResponse trait. FRAME systems typically use the pallet-xcm implementation

    type ResponseHandler: OnResponse;\n

  • AssetTrap - handles leftover assets in the holding register after XCM execution, allowing them to be claimed via ClaimAsset. If unsupported, assets are burned

    type AssetTrap: DropAssets;\n

  • AssetClaims - facilitates the claiming of trapped assets during the execution of the ClaimAsset instruction. Commonly implemented via pallet-xcm

    type AssetClaims: ClaimAssets;\n

  • AssetLocker - handles the locking and unlocking of assets. Can be omitted using () if asset locking is unnecessary

    type AssetLocker: AssetLock;\n

  • AssetExchanger - implements the AssetExchange trait to manage asset exchanges during the ExchangeAsset instruction. The unit type () disables this functionality

    type AssetExchanger: AssetExchange;\n

  • SubscriptionService - manages (Un)SubscribeVersion instructions and returns the XCM version via QueryResponse. Typically implemented by pallet-xcm

    type SubscriptionService: VersionChangeNotifier;\n

  • PalletInstancesInfo - provides runtime pallet information for QueryPallet and ExpectPallet instructions. FRAME-specific systems often use this, or it can be disabled with ()

    type PalletInstancesInfo: PalletsInfoAccess;\n

  • MaxAssetsIntoHolding - limits the number of assets in the Holding register. At most, twice this limit can be held under worst-case conditions

    type MaxAssetsIntoHolding: Get<u32>;\n

  • FeeManager - manages fees for XCM instructions, determining whether fees should be paid, waived, or handled in specific ways. Fees can be waived entirely using ()

    type FeeManager: FeeManager;\n

  • MessageExporter - implements the ExportXcm trait, enabling the export of XCMs to other consensus systems. It can spoof origins for use in bridges. Use () to disable exporting

    type MessageExporter: ExportXcm;\n

  • UniversalAliases - lists origin locations and universal junctions allowed to elevate themselves in the UniversalOrigin instruction. Using Nothing prevents origin aliasing

    type UniversalAliases: Contains<(MultiLocation, Junction)>;\n

  • CallDispatcher - dispatches calls from the Transact instruction, adapting the origin or modifying the call as needed. Can default to RuntimeCall

    type CallDispatcher: CallDispatcher<Self::RuntimeCall>;\n

  • SafeCallFilter - whitelists calls permitted in the Transact instruction. Using Everything allows all calls, though this is temporary until proof size weights are accounted for

    type SafeCallFilter: Contains<Self::RuntimeCall>;\n

  • TransactionalProcessor - implements the ProcessTransaction trait. It ensures that XCM instructions are executed atomically, meaning they either fully succeed or fully fail without any partial effects. Setting this to the unit type () allows for non-transactional XCM instruction processing

    type TransactionalProcessor: ProcessTransaction;\n

  • HrmpNewChannelOpenRequestHandler - enables optional logic execution in response to the HrmpNewChannelOpenRequest XCM notification

    type HrmpNewChannelOpenRequestHandler: HandleHrmpNewChannelOpenRequest;\n

  • HrmpChannelAcceptedHandler - enables optional logic execution in response to the HrmpChannelAccepted XCM notification

    type HrmpChannelAcceptedHandler: HandleHrmpChannelAccepted;\n

  • HrmpChannelClosingHandler - enables optional logic execution in response to the HrmpChannelClosing XCM notification
    type HrmpChannelClosingHandler: HandleHrmpChannelClosing;\n
  • XcmRecorder - allows tracking of the most recently executed XCM, primarily for use with dry-run runtime APIs
    type XcmRecorder: RecordXcm;\n
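
To see how these items fit together, the following sketch wires the trait up in a runtime. It is illustrative only: aliases such as XcmRouter, LocalAssetTransactor, XcmOriginToCallOrigin, Barrier, UnitWeightCost, MaxInstructions, WeightToFee, RelayLocation, and the PolkadotXcm pallet instance are assumed to be defined elsewhere in the runtime, and the exact set of associated types varies across xcm-executor releases.

parameter_types! {\n    // Assumed location of this runtime in the consensus universe.\n    pub UniversalLocation: InteriorMultiLocation =\n        X2(GlobalConsensus(NetworkId::Polkadot), Parachain(1000));\n    pub const MaxAssetsIntoHolding: u32 = 64;\n}\n\npub struct XcmConfig;\nimpl xcm_executor::Config for XcmConfig {\n    type RuntimeCall = RuntimeCall;\n    type XcmSender = XcmRouter; // assumed router over UMP/XCMP\n    type AssetTransactor = LocalAssetTransactor; // e.g., a FungibleAdapter\n    type OriginConverter = XcmOriginToCallOrigin; // assumed converter tuple\n    type IsReserve = NativeAsset;\n    type IsTeleporter = (); // teleports disabled\n    type Aliasers = Nothing;\n    type UniversalLocation = UniversalLocation;\n    type Barrier = Barrier; // e.g., the tuple shown in the next section\n    type Weigher = FixedWeightBounds<UnitWeightCost, RuntimeCall, MaxInstructions>;\n    type Trader = UsingComponents<WeightToFee, RelayLocation, AccountId, Balances, ()>;\n    type ResponseHandler = PolkadotXcm; // pallet-xcm\n    type AssetTrap = PolkadotXcm;\n    type AssetClaims = PolkadotXcm;\n    type AssetLocker = (); // locking disabled\n    type AssetExchanger = (); // exchanging disabled\n    type SubscriptionService = PolkadotXcm;\n    type PalletInstancesInfo = AllPalletsWithSystem;\n    type MaxAssetsIntoHolding = MaxAssetsIntoHolding;\n    type FeeManager = (); // fees waived\n    type MessageExporter = (); // exporting disabled\n    type UniversalAliases = Nothing;\n    type CallDispatcher = RuntimeCall;\n    type SafeCallFilter = Everything;\n    type TransactionalProcessor = FrameTransactionalProcessor;\n    type HrmpNewChannelOpenRequestHandler = ();\n    type HrmpChannelAcceptedHandler = ();\n    type HrmpChannelClosingHandler = ();\n    type XcmRecorder = PolkadotXcm;\n}\n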
"},{"location":"develop/interoperability/xcm-config/#inner-config","title":"Inner Config","text":"

The Config trait underpins the XcmExecutor, defining its core behavior through associated types for asset handling, XCM processing, and permission management. These types are categorized as follows:

  • Handlers - manage XCMs sending, asset transactions, and special notifications
  • Filters - define trusted combinations, origin substitutions, and execution barriers
  • Converters - handle origin conversion for call execution
  • Accessors - provide weight determination and pallet information
  • Constants - specify universal locations and asset limits
  • Common Configs - include shared settings like RuntimeCall

The following diagram outlines this categorization:

flowchart LR\n    A[Inner Config] --> B[Handlers]\n    A --> C[Filters]\n    A --> D[Converters]\n    A --> E[Accessors]\n    A --> F[Constants]\n    A --> G[Common Configs]\n\n    B --> H[XcmSender]\n    B --> I[AssetTransactor]\n    B --> J[Trader]\n    B --> K[ResponseHandler]\n    B --> L[AssetTrap]\n    B --> M[AssetLocker]\n    B --> N[AssetExchanger]\n    B --> O[AssetClaims]\n    B --> P[SubscriptionService]\n    B --> Q[FeeManager]\n    B --> R[MessageExporter]\n    B --> S[CallDispatcher]\n    B --> T[HrmpNewChannelOpenRequestHandler]\n    B --> U[HrmpChannelAcceptedHandler]\n    B --> V[HrmpChannelClosingHandler]\n\n    C --> W[IsReserve]\n    C --> X[IsTeleporter]\n    C --> Y[Aliasers]\n    C --> Z[Barrier]\n    C --> AA[UniversalAliases]\n    C --> AB[SafeCallFilter]\n\n    D --> AC[OriginConverter]\n\n    E --> AD[Weigher]\n    E --> AE[PalletInstancesInfo]\n\n    F --> AF[UniversalLocation]\n    F --> AG[MaxAssetsIntoHolding]\n\n    G --> AH[RuntimeCall]
"},{"location":"develop/interoperability/xcm-config/#outer-config","title":"Outer Config","text":"

The XcmExecutor<Config> struct extends the functionality of the inner config by introducing fields for execution context, asset handling, error tracking, and operational management. For further details, see the documentation for XcmExecutor<Config>.

"},{"location":"develop/interoperability/xcm-config/#multiple-implementations","title":"Multiple Implementations","text":"

Some associated types in the Config trait are highly configurable and may have multiple implementations (e.g., Barrier). These implementations are organized into a tuple (impl_1, impl_2, ..., impl_n) and evaluated sequentially: each item is checked in turn, and as soon as one passes (e.g., returns Ok or true), execution stops and the remaining items are not evaluated. The following example of the Barrier type demonstrates how this grouping operates (understanding each item in the tuple is unnecessary for this explanation).

In the following example, the system will first check the TakeWeightCredit type when evaluating the barrier. If it fails, it will check AllowTopLevelPaidExecutionFrom, and so on, until one of them returns a positive result. If all checks fail, a Barrier error will be triggered.

pub type Barrier = (\n    TakeWeightCredit,\n    AllowTopLevelPaidExecutionFrom<Everything>,\n    AllowKnownQueryResponses<XcmPallet>,\n    AllowSubscriptionsFrom<Everything>,\n);\n\npub struct XcmConfig;\nimpl xcm_executor::Config for XcmConfig {\n    ...\n    type Barrier = Barrier;\n    ...\n}\n
"},{"location":"develop/parachains/","title":"Parachains","text":"

This section provides a complete guide to working with the Polkadot SDK, from getting started to long-term network maintenance. Discover how to create custom blockchains, test and deploy your parachains, and ensure their continued performance and reliability.

"},{"location":"develop/parachains/#building-parachains-with-the-polkadot-sdk","title":"Building Parachains with the Polkadot SDK","text":"

With the Polkadot relay chain handling security and consensus, parachain developers are free to focus on features such as asset management, governance, and cross-chain communication. The Polkadot SDK equips developers with the tools to build, deploy, and maintain efficient, scalable parachains.

Polkadot SDK\u2019s FRAME framework provides developers with the tools to do the following:

  • Customize parachain runtimes
  • Develop new pallets
  • Add smart contract functionality
  • Test your build for a confident deployment
  • Deploy your blockchain for use
  • Maintain your network including monitoring and upgrades

New to parachain development? Start with the Introduction to the Polkadot SDK to discover how this framework simplifies building custom parachains.

"},{"location":"develop/parachains/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/parachains/customize-parachain/","title":"Customize Your Parachain","text":"

Learn how to build a custom parachain with Polkadot SDK's FRAME framework, which includes pallet development, testing, smart contracts, and runtime customization. Pallets are modular components within the FRAME ecosystem that contain specific blockchain functionalities. This modularity grants developers increased flexibility and control around which behaviors to include in the core logic of their parachain.

The FRAME directory includes a robust library of pre-built pallets you can use as examples or templates to ease development.

"},{"location":"develop/parachains/customize-parachain/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/parachains/customize-parachain/#additional-resources","title":"Additional ResourcesFRAME RepositoryFRAME Rust docs","text":"

View the source code of the FRAME development environment that provides pallets you can use, modify, and extend to build the runtime logic to suit the needs of your blockchain.

Check out the Rust docs for the frame_support crate to view the support code for the runtime.

"},{"location":"develop/parachains/customize-parachain/add-existing-pallets/","title":"Add a Pallet to the Runtime","text":""},{"location":"develop/parachains/customize-parachain/add-existing-pallets/#introduction","title":"Introduction","text":"

The Polkadot SDK Solochain Template provides a functional runtime that includes default FRAME development modules (pallets) to help you get started with building a custom blockchain.

Each pallet has specific configuration requirements, such as the parameters and types needed to enable the pallet's functionality. In this guide, you'll learn how to add a pallet to a runtime and configure the settings specific to that pallet.

The purpose of this article is to help you:

  • Learn how to update runtime dependencies to integrate a new pallet
  • Understand how to configure pallet-specific Rust traits to enable the pallet's functionality
  • Grasp the entire workflow of integrating a new pallet into your runtime
"},{"location":"develop/parachains/customize-parachain/add-existing-pallets/#configuring-runtime-dependencies","title":"Configuring Runtime Dependencies","text":"

For Rust programs, the runtime's dependency configuration is defined in the Cargo.toml file, which specifies the settings and dependencies that control what gets compiled into the final binary. Since the Polkadot SDK runtime compiles to both a native binary (which includes standard Rust library functions) and a Wasm binary (which does not include the standard Rust library), the runtime/Cargo.toml file manages two key aspects:

  • The locations and versions of the pallets that are to be imported as dependencies for the runtime
  • The features in each pallet that should be enabled when compiling the native Rust binary. By enabling the standard (std) feature set from each pallet, you ensure that the runtime includes the functions, types, and primitives necessary for the native build, which are otherwise excluded when compiling the Wasm binary

Note

For information about adding dependencies in Cargo.toml files, see the Dependencies page in the Cargo documentation. For information about enabling and managing features from dependent packages, see the Features section in the Cargo documentation.

"},{"location":"develop/parachains/customize-parachain/add-existing-pallets/#dependencies-for-a-new-pallet","title":"Dependencies for a New Pallet","text":"

To add the dependencies for a new pallet to the runtime, you must modify the Cargo.toml file by adding a new line into the [workspace.dependencies] section with the pallet you want to add. This pallet definition might look like:

pallet-example = { version = \"4.0.0-dev\", default-features = false }\n

This line imports the pallet-example crate as a dependency and specifies the following:

  • version - the specific version of the crate to import
  • default-features - determines the behavior for including pallet features when compiling the runtime with standard Rust libraries

Note

If you\u2019re importing a pallet that isn\u2019t available on crates.io, you can specify the pallet's location (either locally or from a remote repository) by using the git or path key. For example:

pallet-example = { \n    version = \"4.0.0-dev\",\n    default-features = false,\n    git = \"INSERT_PALLET_REMOTE_URL\",\n}\n

In this case, replace INSERT_PALLET_REMOTE_URL with the correct repository URL. For local paths, use the path key like so:

pallet-example = { \n    version = \"4.0.0-dev\",\n    default-features = false,\n    path = \"INSERT_PALLET_RELATIVE_PATH\",\n}\n

Ensure that you substitute INSERT_PALLET_RELATIVE_PATH with the appropriate local path to the pallet.

Next, add this dependency to the [dependencies] section of the runtime/Cargo.toml file, so it inherits from the main Cargo.toml file:

pallet-example.workspace = true\n

To enable the std feature of the pallet, add the pallet to the following section:

[features]\ndefault = [\"std\"]\nstd = [\n    ...\n    \"pallet-example/std\",\n    ...\n]\n

This section specifies the default feature set for the runtime, which includes the std features for each pallet. When the runtime is compiled with the std feature set, the standard library features for all listed pallets are enabled. For more details on how the runtime is compiled as both a native binary (using std) and a Wasm binary (using no_std), refer to the Wasm build section in the Polkadot SDK documentation.

Note

If you forget to update the features section in the Cargo.toml file, you might encounter cannot find function errors when compiling the runtime.

To ensure that the new dependencies resolve correctly for the runtime, you can run the following command:

cargo check --release\n
"},{"location":"develop/parachains/customize-parachain/add-existing-pallets/#config-trait-for-pallets","title":"Config Trait for Pallets","text":"

Every Polkadot SDK pallet defines a Rust trait called Config. This trait specifies the types and parameters that the pallet needs to integrate with the runtime and perform its functions. The primary purpose of this trait is to act as an interface between this pallet and the runtime in which it is embedded. A type, function, or constant in this trait is essentially left to be configured by the runtime that includes this pallet.

Consequently, a runtime that wants to include this pallet must implement this trait.

You can inspect any pallet\u2019s Config trait by reviewing its Rust documentation or source code. The Config trait ensures the pallet has access to the necessary types (like events, calls, or origins) and integrates smoothly with the rest of the runtime.

At its core, the Config trait typically looks like this:

#[pallet::config]\npub trait Config: frame_system::Config {\n    /// Event type used by the pallet.\n    type RuntimeEvent: From<Event> + IsType<<Self as frame_system::Config>::RuntimeEvent>;\n\n    /// Weight information for controlling extrinsic execution costs.\n    type WeightInfo: WeightInfo;\n}\n

This basic structure shows that every pallet must define certain types, such as RuntimeEvent and WeightInfo, to function within the runtime. The actual implementation can vary depending on the pallet\u2019s specific needs.

Example - Utility Pallet

For instance, in the\u00a0utility pallet, the Config trait is defined with the following types:

#[pallet::config]\npub trait Config: frame_system::Config {\n    /// The overarching event type.\n    type RuntimeEvent: From<Event> + IsType<<Self as frame_system::Config>::RuntimeEvent>;\n\n    /// The overarching call type.\n    type RuntimeCall: Parameter\n    + Dispatchable<RuntimeOrigin = Self::RuntimeOrigin, PostInfo = PostDispatchInfo>\n    + GetDispatchInfo\n    + From<frame_system::Call<Self>>\n    + UnfilteredDispatchable<RuntimeOrigin = Self::RuntimeOrigin>\n    + IsSubType<Call<Self>>\n    + IsType<<Self as frame_system::Config>::RuntimeCall>;\n\n    /// The caller origin, overarching type of all pallets origins.\n    type PalletsOrigin: Parameter +\n    Into<<Self as frame_system::Config>::RuntimeOrigin> +\n    IsType<<<Self as frame_system::Config>::RuntimeOrigin as frame_support::traits::OriginTrait>::PalletsOrigin>;\n\n    /// Weight information for extrinsics in this pallet.\n    type WeightInfo: WeightInfo;\n}\n

This example shows how the Config trait defines types like RuntimeEvent, RuntimeCall, PalletsOrigin, and WeightInfo, which the pallet will use when interacting with the runtime.

"},{"location":"develop/parachains/customize-parachain/add-existing-pallets/#parameter-configuration-for-pallets","title":"Parameter Configuration for Pallets","text":"

Traits in Rust define shared behavior, and within the Polkadot SDK, they allow runtimes to integrate and utilize a pallet's functionality by implementing its associated configuration trait and parameters. Some of these parameters may require constant values, which can be defined using the parameter_types! macro. This macro simplifies development by expanding the constants into the appropriate struct types with functions that the runtime can use to access their types and values in a consistent manner.

For example, the following code snippet shows how the solochain template configures certain parameters through the parameter_types! macro in the runtime/lib.rs file:

parameter_types! {\n    pub const BlockHashCount: BlockNumber = 2400;\n    pub const Version: RuntimeVersion = VERSION;\n    /// We allow for 2 seconds of compute with a 6 second average block time.\n    pub BlockWeights: frame_system::limits::BlockWeights =\n        frame_system::limits::BlockWeights::with_sensible_defaults(\n            Weight::from_parts(2u64 * WEIGHT_REF_TIME_PER_SECOND, u64::MAX),\n            NORMAL_DISPATCH_RATIO,\n        );\n    pub BlockLength: frame_system::limits::BlockLength = frame_system::limits::BlockLength\n        ::max_with_normal_ratio(5 * 1024 * 1024, NORMAL_DISPATCH_RATIO);\n    pub const SS58Prefix: u8 = 42;\n}\n
"},{"location":"develop/parachains/customize-parachain/add-existing-pallets/#pallet-config-in-the-runtime","title":"Pallet Config in the Runtime","text":"

To integrate a new pallet into the runtime, you must implement its Config trait in the runtime/lib.rs file. This is done by specifying the necessary types and parameters in Rust, as shown below:

impl pallet_example::Config for Runtime {\n    type RuntimeEvent = RuntimeEvent;\n    type WeightInfo = pallet_template::weights::SubstrateWeight<Runtime>;\n    ...\n}\n

Finally, to compose the runtime, update the list of pallets in the same file by modifying the #[frame_support::runtime] section. This Rust macro constructs the runtime with a specified name and the pallets you want to include. Use the following format when adding your pallet:

#[frame_support::runtime]\nmod runtime {\n    #[runtime::runtime]\n    #[runtime::derive(\n        RuntimeCall,\n        RuntimeEvent,\n        RuntimeError,\n        RuntimeOrigin,\n        RuntimeFreezeReason,\n        RuntimeHoldReason,\n        RuntimeSlashReason,\n        RuntimeLockId,\n        RuntimeTask\n    )]\n    pub struct Runtime;\n\n    #[runtime::pallet_index(0)]\n    pub type System = frame_system;\n\n    #[runtime::pallet_index(1)]\n    pub type Example = pallet_example;\n}\n

Note

The #[frame_support::runtime] macro wraps the runtime's configuration, automatically generating boilerplate code for pallet inclusion.

"},{"location":"develop/parachains/customize-parachain/add-existing-pallets/#where-to-go-next","title":"Where to Go Next","text":"

With the pallet successfully added and configured, the runtime is ready to be compiled and used. Following this guide\u2019s steps, you\u2019ve integrated a new pallet into the runtime, set up its dependencies, and ensured proper configuration. You can now proceed to any of the following points:

  • Dive deeper by creating your custom pallet to expand the functionality of your blockchain
  • Ensure robustness with Pallet Testing to verify the accuracy and behavior of your code
"},{"location":"develop/parachains/customize-parachain/add-smart-contract-functionality/","title":"Add Smart Contract Functionality","text":""},{"location":"develop/parachains/customize-parachain/add-smart-contract-functionality/#introduction","title":"Introduction","text":"

When building your custom blockchain with the Polkadot SDK, you have the flexibility to add smart contract capabilities through specialized pallets. These pallets allow blockchain users to deploy and execute smart contracts, enhancing your chain's functionality and programmability.

Polkadot SDK-based blockchains support two distinct smart contract execution environments: EVM (Ethereum Virtual Machine) and Wasm (WebAssembly). Each environment allows developers to deploy and execute different types of smart contracts, providing flexibility in choosing the most suitable solution for their needs.

"},{"location":"develop/parachains/customize-parachain/add-smart-contract-functionality/#evm-smart-contracts","title":"EVM Smart Contracts","text":"

To enable Ethereum-compatible smart contracts in your blockchain, you'll need to integrate Frontier, the Ethereum compatibility layer for Polkadot SDK-based chains. This requires adding two essential pallets to your runtime:

  • pallet-evm - provides the EVM execution environment
  • pallet-ethereum - handles Ethereum-formatted transactions and RPC capabilities

For step-by-step guidance on adding these pallets to your runtime, refer to Add a Pallet to the Runtime.

For a real-world example of how these pallets are implemented in production, you can check Moonbeam's implementation of pallet-evm and pallet-ethereum.

"},{"location":"develop/parachains/customize-parachain/add-smart-contract-functionality/#wasm-smart-contracts","title":"Wasm Smart Contracts","text":"

To support Wasm-based smart contracts, you'll need to integrate:

  • pallet-contracts - provides the Wasm smart contract execution environment

This pallet enables the deployment and execution of Wasm-based smart contracts on your blockchain. For detailed instructions on adding this pallet to your runtime, see Add a Pallet to the Runtime.

For a real-world example of how this pallet is implemented in production, you can check Astar's implementation of pallet-contracts.

"},{"location":"develop/parachains/customize-parachain/add-smart-contract-functionality/#where-to-go-next","title":"Where to Go Next","text":"

Now that you understand how to enable smart contract functionality in your blockchain, you might want to:

  • Take a step back and learn more about EVM and Wasm contracts by visiting the Smart Contracts guide
  • Start building with Wasm (ink!) contracts
  • Start building with EVM contracts
"},{"location":"develop/parachains/customize-parachain/benchmarking/","title":"Benchmark Testing","text":""},{"location":"develop/parachains/customize-parachain/benchmarking/#introduction","title":"Introduction","text":"

Benchmark testing is a critical component of developing efficient and secure blockchain runtimes. In the Polkadot ecosystem, accurately benchmark testing your custom pallets ensures that each extrinsic has a precise weight, representing its computational and storage demands. This process is vital for maintaining the blockchain's performance and preventing potential vulnerabilities, such as Denial of Service (DoS) attacks.

The Polkadot SDK leverages the FRAME benchmarking framework, offering tools to measure and assign weights to extrinsics. These weights help determine the maximum number of transactions or system-level calls processed within a block. This guide covers how to use FRAME's benchmarking framework, from setting up your environment to writing and running benchmarks for your custom pallets. You'll understand how to generate accurate weights by the end, ensuring your runtime remains performant and secure.

"},{"location":"develop/parachains/customize-parachain/benchmarking/#the-case-for-benchmark-testing","title":"The Case for Benchmark Testing","text":"

Benchmark testing helps validate that the required execution time for different functions is within reasonable boundaries to ensure your blockchain runtime can handle transactions efficiently and securely. By accurately measuring the weight of each extrinsic, you can prevent service interruptions caused by computationally intensive calls that exceed block time limits. Without benchmark testing, runtime performance could be vulnerable to DoS attacks, where malicious users exploit functions with unoptimized weights.

Benchmark testing also ensures predictable transaction fees. Weights derived from benchmark tests accurately reflect the resource usage of function calls, allowing fair fee calculation. This approach discourages abuse while maintaining network reliability.

"},{"location":"develop/parachains/customize-parachain/benchmarking/#benchmark-testing-and-weight","title":"Benchmark Testing and Weight","text":"

In Polkadot SDK-based chains, weight quantifies the computational effort needed to process transactions. This weight includes factors such as:

  • Computational complexity
  • Storage complexity (proof size)
  • Database reads and writes
  • Hardware specifications

Benchmark testing uses real-world testing to simulate worst-case scenarios for extrinsics. The framework generates a linear model for weight calculation by running multiple iterations with varied parameters. These worst-case weights ensure blocks remain within execution limits, enabling the runtime to maintain throughput under varying loads. Excess fees can be refunded if a call uses fewer resources than expected, offering users a fair cost model.

Because weight is a generic unit of measurement based on computation time for a specific physical machine, the weight of any function can change based on the specifications of hardware used for benchmark testing. By modeling the expected weight of each runtime function, the blockchain can calculate the number of transactions or system-level calls it can execute within a certain period of time.

Within FRAME, each function call that is dispatched must have a #[pallet::weight] annotation that can return the expected weight for the worst-case scenario execution of that function given its inputs:

#[pallet::call_index(0)]\n#[pallet::weight(T::WeightInfo::do_something())]\npub fn do_something(origin: OriginFor<T>) -> DispatchResultWithPostInfo { Ok(()) }\n

The WeightInfo file is automatically generated during benchmark testing. Based on these tests, this file provides accurate weights for each extrinsic.

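When an extrinsic turns out to be cheaper than its benchmarked worst case, it can report the weight it actually consumed so the excess fee is refunded to the caller. A minimal sketch, assuming a generated WeightInfo and a hypothetical cheaper execution path, might look like this:

#[pallet::call_index(1)]\n#[pallet::weight(T::WeightInfo::do_something())]\npub fn do_something_cheap(origin: OriginFor<T>) -> DispatchResultWithPostInfo {\n    ensure_signed(origin)?;\n    // Hypothetical: this path consumed roughly half the benchmarked worst case,\n    // so report the actual weight and let the runtime refund the difference.\n    let actual_weight = T::WeightInfo::do_something().saturating_div(2);\n    Ok(Some(actual_weight).into())\n}\n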
"},{"location":"develop/parachains/customize-parachain/benchmarking/#benchmark-process","title":"Benchmark Process","text":"

Benchmark testing a pallet involves the following steps:

  1. Creating a benchmarking.rs file within your pallet's structure
  2. Writing a benchmarking test for each extrinsic
  3. Executing the benchmarking tool to calculate weights based on performance metrics

The benchmarking tool runs multiple iterations to model worst-case execution times and determine the appropriate weight. By default, the benchmark testing pipeline is deactivated. To activate it, compile your runtime with the runtime-benchmarks feature flag.

"},{"location":"develop/parachains/customize-parachain/benchmarking/#prepare-your-environment","title":"Prepare Your Environment","text":"

Before writing benchmark tests, you need to ensure the frame-benchmarking crate is included in your pallet's Cargo.toml similar to the following:

Cargo.toml
frame-benchmarking = { version = \"37.0.0\", default-features = false }\n

You must also ensure that you add the runtime-benchmarks feature flag as follows under the [features] section of your pallet's Cargo.toml:

Cargo.toml
runtime-benchmarks = [\n  \"frame-benchmarking/runtime-benchmarks\",\n  \"frame-support/runtime-benchmarks\",\n  \"frame-system/runtime-benchmarks\",\n  \"sp-runtime/runtime-benchmarks\",\n]\n

Lastly, ensure that frame-benchmarking is included in std = []:

Cargo.toml
std = [\n  # ...\n  \"frame-benchmarking?/std\",\n  # ...\n]\n

Once complete, you have the required dependencies for writing benchmark tests for your pallet.

"},{"location":"develop/parachains/customize-parachain/benchmarking/#write-benchmark-tests","title":"Write Benchmark Tests","text":"

Create a benchmarking.rs file in your pallet's src/. Your directory structure should look similar to the following:

my-pallet/\n\u251c\u2500\u2500 src/\n\u2502   \u251c\u2500\u2500 lib.rs          # Main pallet implementation\n\u2502   \u2514\u2500\u2500 benchmarking.rs # Benchmarking\n\u2514\u2500\u2500 Cargo.toml\n

With the directory structure set, you can use the polkadot-sdk-parachain-template to get started as follows:

benchmarking.rs (starter template)
//! Benchmarking setup for pallet-template\n#![cfg(feature = \"runtime-benchmarks\")]\n\nuse super::*;\nuse frame_benchmarking::v2::*;\n\n#[benchmarks]\nmod benchmarks {\n    use super::*;\n    #[cfg(test)]\n    use crate::pallet::Pallet as Template;\n    use frame_system::RawOrigin;\n\n    #[benchmark]\n    fn do_something() {\n        let caller: T::AccountId = whitelisted_caller();\n        #[extrinsic_call]\n        do_something(RawOrigin::Signed(caller), 100);\n\n        assert_eq!(Something::<T>::get().map(|v| v.block_number), Some(100u32.into()));\n    }\n\n    #[benchmark]\n    fn cause_error() {\n        Something::<T>::put(CompositeStruct { block_number: 100u32.into() });\n        let caller: T::AccountId = whitelisted_caller();\n        #[extrinsic_call]\n        cause_error(RawOrigin::Signed(caller));\n\n        assert_eq!(Something::<T>::get().map(|v| v.block_number), Some(101u32.into()));\n    }\n\n    impl_benchmark_test_suite!(Template, crate::mock::new_test_ext(), crate::mock::Test);\n}\n

In your benchmarking tests, employ these best practices:

  • Write custom testing functions - the function do_something in the preceding example is a placeholder. Similar to writing unit tests, you must write custom functions to benchmark test your extrinsics. Access the mock runtime and use functions such as whitelisted_caller() to sign transactions and facilitate testing
  • Use the #[extrinsic_call] macro - this macro is used when calling the extrinsic itself and is a required part of a benchmark testing function. See the extrinsic_call Rust docs for more details
  • Validate extrinsic behavior - the assert_eq expression ensures that the extrinsic is working properly within the benchmark context
"},{"location":"develop/parachains/customize-parachain/benchmarking/#add-benchmarks-to-runtime","title":"Add Benchmarks to Runtime","text":"

Before running the benchmarking tool, you must integrate benchmarks with your runtime as follows:

  1. Create a benchmarks.rs file. This file should contain the following macro, which registers all pallets for benchmarking, as well as their respective configurations:

    benchmarks.rs
    frame_benchmarking::define_benchmarks!(\n    [frame_system, SystemBench::<Runtime>]\n    [pallet_parachain_template, TemplatePallet]\n    [pallet_balances, Balances]\n    [pallet_session, SessionBench::<Runtime>]\n    [pallet_timestamp, Timestamp]\n    [pallet_message_queue, MessageQueue]\n    [pallet_sudo, Sudo]\n    [pallet_collator_selection, CollatorSelection]\n    [cumulus_pallet_parachain_system, ParachainSystem]\n    [cumulus_pallet_xcmp_queue, XcmpQueue]\n);\n
    For example, to register a pallet named pallet_parachain_template for benchmark testing, add it as follows:

    benchmarks.rs
    frame_benchmarking::define_benchmarks!(\n    [frame_system, SystemBench::<Runtime>]\n    [pallet_parachain_template, TemplatePallet]\n);\n

    Updating define_benchmarks! macro is required

    If the pallet isn't included in the define_benchmarks! macro, the CLI cannot access and benchmark it later.

  2. Navigate to the runtime's lib.rs file and add the import for benchmarks.rs as follows:

    lib.rs
    #[cfg(feature = \"runtime-benchmarks\")]\nmod benchmarks;\n

    The runtime-benchmarks feature gate ensures benchmark tests are isolated from production runtime code.

"},{"location":"develop/parachains/customize-parachain/benchmarking/#run-benchmarks","title":"Run Benchmarks","text":"

You can now compile your runtime with the runtime-benchmarks feature flag. This feature flag is crucial, as the benchmarking tool requires it to be enabled in order to run the benchmark tests. Follow these steps to compile the runtime with benchmarking enabled:

  1. Run build with the feature flag included

    cargo build --features runtime-benchmarks --release\n
  2. Once compiled, run the benchmarking tool to measure extrinsic weights

    ./target/release/INSERT_NODE_BINARY_NAME benchmark pallet \\\n--runtime INSERT_PATH_TO_WASM_RUNTIME \\\n--pallet INSERT_NAME_OF_PALLET \\\n--extrinsic '*' \\\n--steps 20 \\\n--repeat 10 \\\n--output weights.rs\n

    Flag definitions

    • --runtime - the path to your runtime's Wasm
    • --pallet - the name of the pallet you wish to benchmark. This pallet must be configured in your runtime and defined in define_benchmarks
    • --extrinsic - which extrinsic to test. Using '*' implies all extrinsics will be benchmarked
    • --output - where the output of the auto-generated weights will reside

The generated weights.rs file contains weight annotations for your extrinsics, ready to be added to your pallet. The output should be similar to the following. Some output is omitted for brevity:

./target/release/INSERT_NODE_BINARY_NAME benchmark pallet \\\n--runtime INSERT_PATH_TO_WASM_RUNTIME \\\n--pallet INSERT_PALLET_NAME \\\n--extrinsic '*' \\\n--steps 20 \\\n--repeat 10 \\\n--output weights.rs\n2024-10-28 11:07:25 Loading WASM from ./target/release/wbuild/educhain-runtime/educhain_runtime.wasm\n2024-10-28 11:07:26 Could not find genesis preset 'development'. Falling back to default.\n2024-10-28 11:07:26 assembling new collators for new session 0 at #0\n2024-10-28 11:07:26 assembling new collators for new session 1 at #0\n2024-10-28 11:07:26 Loading WASM from ./target/release/wbuild/educhain-runtime/educhain_runtime.wasm\nPallet: \"pallet_parachain_template\", Extrinsic: \"do_something\", Lowest values: [], Highest values: [], Steps: 20, Repeat: 10\n...\nCreated file: \"weights.rs\"\n2024-10-28 11:07:27 [ 0 % ] Starting benchmark: pallet_parachain_template::do_something\n2024-10-28 11:07:27 [ 50 % ] Starting benchmark: pallet_parachain_template::cause_error"},{"location":"develop/parachains/customize-parachain/benchmarking/#add-benchmark-weights-to-pallet","title":"Add Benchmark Weights to Pallet","text":"

Once weights.rs is generated, you can add the generated weights to your pallet. It is common for weights.rs to live at the root of your pallet's src/ directory:

use crate::weights::WeightInfo;\n\n/// Configure the pallet by specifying the parameters and types on which it depends.\n#[pallet::config]\npub trait Config: frame_system::Config {\n    /// A type representing the weights required by the dispatchables of this pallet.\n    type WeightInfo: WeightInfo;\n}\n

After that, you can reference the generated weights in each extrinsic's #[pallet::weight] annotation via the Config trait:

#[pallet::call_index(0)]\n#[pallet::weight(T::WeightInfo::do_something())]\npub fn do_something(origin: OriginFor<T>) -> DispatchResultWithPostInfo { Ok(()) }\n
"},{"location":"develop/parachains/customize-parachain/benchmarking/#where-to-go-next","title":"Where to Go Next","text":"
  • View the Rust Docs for a more comprehensive, low-level view of the FRAME V2 Benchmarking Suite
  • Read the FRAME Benchmarking and Weights reference document, a concise guide which details how weights and benchmarking work
"},{"location":"develop/parachains/customize-parachain/make-custom-pallet/","title":"Make a Custom Pallet","text":""},{"location":"develop/parachains/customize-parachain/make-custom-pallet/#introduction","title":"Introduction","text":"

FRAME provides a powerful set of tools for blockchain development, including a library of pre-built pallets. However, its true strength lies in the ability to create custom pallets tailored to your specific needs. This section will guide you through creating your own custom pallet, allowing you to extend your blockchain's functionality in unique ways.

To get the most out of this guide, ensure you're familiar with FRAME concepts.

Creating custom pallets offers several advantages over relying on pre-built pallets:

  • Flexibility - define runtime behavior that precisely matches your project requirements
  • Modularity - combine pre-built and custom pallets to achieve the desired blockchain functionality
  • Scalability - add or modify features as your project evolves

As you follow this guide to create your custom pallet, you'll work with the following key sections:

  1. Imports and dependencies - bring in necessary FRAME libraries and external modules
  2. Runtime configuration trait - specify the types and constants required for your pallet to interact with the runtime
  3. Runtime events - define events that your pallet can emit to communicate state changes
  4. Runtime errors - define the error types that can be returned from the function calls dispatched to the runtime
  5. Runtime storage - declare on-chain storage items for your pallet's state
  6. Extrinsics (function calls) - create callable functions that allow users to interact with your pallet and execute transactions

For additional macros you can include in a pallet, beyond those covered in this guide, refer to the pallet_macros section of the Polkadot SDK Docs.

"},{"location":"develop/parachains/customize-parachain/make-custom-pallet/#initial-setup","title":"Initial Setup","text":"

This section will guide you through the initial steps of creating the foundation for your custom FRAME pallet. You'll create a new Rust library project and set up the necessary dependencies.

  1. Create a new Rust library project using the following cargo command:

    cargo new --lib custom-pallet \\\n&& cd custom-pallet\n

    This command creates a new library project named custom-pallet and navigates into its directory.

  2. Configure the dependencies required for FRAME pallet development in the Cargo.toml file as follows:

    [package]\nname = \"custom-pallet\"\nversion = \"0.1.0\"\nedition = \"2021\"\n\n[dependencies]\nframe-support = { version = \"37.0.0\", default-features = false }\nframe-system = { version = \"37.0.0\", default-features = false }\ncodec = { version = \"3.6.12\", default-features = false, package = \"parity-scale-codec\", features = [\n  \"derive\",\n] }\nscale-info = { version = \"2.11.1\", default-features = false, features = [\n  \"derive\",\n] }\nsp-runtime = { version = \"39.0.0\", default-features = false }\n\n[features]\ndefault = [\"std\"]\nstd = [\n  \"frame-support/std\",\n  \"frame-system/std\",\n  \"codec/std\",\n  \"scale-info/std\",\n  \"sp-runtime/std\",\n]\n

    Note

    Proper version management is crucial for ensuring compatibility and reducing potential conflicts in your project. Carefully select the versions of the packages according to your project's specific requirements:

    • When developing for a specific Polkadot SDK runtime, ensure that your pallet's dependency versions match those of the target runtime
    • If you're creating this pallet within a Polkadot SDK workspace:

      • Define the actual versions in the root Cargo.toml file
      • Use workspace inheritance in your pallet's Cargo.toml to maintain consistency across your project
    • Regularly check for updates to FRAME and Polkadot SDK dependencies to benefit from the latest features, performance improvements, and security patches

    For detailed information on workspace inheritance and how to properly integrate your pallet with the runtime, refer to the Add an Existing Pallet to the Runtime page.

  3. Initialize the pallet structure by replacing the contents of src/lib.rs with the following scaffold code:

    pub use pallet::*;\n\n#[frame_support::pallet]\npub mod pallet {\n    use frame_support::pallet_prelude::*;\n    use frame_system::pallet_prelude::*;\n\n    #[pallet::pallet]\n    pub struct Pallet<T>(_);\n\n    #[pallet::config]  // snip\n    #[pallet::event]   // snip\n    #[pallet::error]   // snip\n    #[pallet::storage] // snip\n    #[pallet::call]    // snip\n}\n

    With this scaffold in place, you're ready to start implementing your custom pallet's specific logic and features. The subsequent sections of this guide will walk you through populating each of these components with the necessary code for your pallet's functionality.

"},{"location":"develop/parachains/customize-parachain/make-custom-pallet/#pallet-configuration","title":"Pallet Configuration","text":"

Every pallet includes a Rust trait called\u00a0Config, which exposes configurable options and links your pallet to other parts of the runtime. All types and constants the pallet depends on must be declared within this trait. These types are defined generically and made concrete when the pallet is instantiated in the runtime/src/lib.rs file of your blockchain.

In this step, you'll only configure the common types used by all pallets:

  • RuntimeEvent - since this pallet emits events, the runtime event type is required to handle them. This ensures that events generated by the pallet can be correctly processed and interpreted by the runtime
  • WeightInfo - this type defines the weights associated with the pallet's callable functions (also known as dispatchables). Weights help measure the computational cost of executing these functions. However, the WeightInfo type will be left unconfigured since setting up custom weights is outside the scope of this guide

Replace the line containing the #[pallet::config] macro with the following code block:

#[pallet::config]\npub trait Config: frame_system::Config {\n    /// The overarching runtime event type.\n    type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;\n    /// A type representing the weights required by the dispatchables of this pallet.\n    type WeightInfo;\n}\n
"},{"location":"develop/parachains/customize-parachain/make-custom-pallet/#pallet-events","title":"Pallet Events","text":"

After configuring the pallet to emit events, the next step is to define the events that can be triggered by functions within the pallet. Events provide a straightforward way to inform external entities, such as dApps, chain explorers, or users, that a significant change has occurred in the runtime. In a FRAME pallet, the details of each event and its parameters are included in the node\u2019s metadata, making them accessible to external tools and interfaces.

The generate_deposit macro generates a deposit_event function on the Pallet, which converts the pallet\u2019s event type into the RuntimeEvent (as specified in the Config trait) and deposits it using frame_system::Pallet::deposit_event.

This step adds an event called SomethingStored, which is triggered when a user successfully stores a value in the pallet. The event records both the value and the account that performed the action.

To define events, replace the #[pallet::event] line with the following code block:

#[pallet::event]\n#[pallet::generate_deposit(pub(super) fn deposit_event)]\npub enum Event<T: Config> {\n    /// A user has successfully set a new value.\n    SomethingStored {\n        /// The new value set.\n        something: u32,\n        /// The account who set the new value.\n        who: T::AccountId,\n    },\n}\n
"},{"location":"develop/parachains/customize-parachain/make-custom-pallet/#pallet-errors","title":"Pallet Errors","text":"

While events signal the successful completion of calls, errors indicate when and why a call has failed. It's essential to use informative names for errors to clearly communicate the cause of failure. Like events, error documentation is included in the node's metadata, so providing helpful descriptions is crucial.

Errors are defined as an enum named Error with a generic type. Variants can have fields or be fieldless. Any field type specified in the error must implement the TypeInfo trait, and the encoded size of each field should be as small as possible. Runtime errors can be up to 4 bytes in size, allowing the return of additional information when needed.

This step defines two basic errors: one for handling cases where no value has been set and another for managing arithmetic overflow.

To define errors, replace the #[pallet::error] line with the following code block:

#[pallet::error]\npub enum Error<T> {\n    /// The value retrieved was `None` as no value was previously set.\n    NoneValue,\n    /// There was an attempt to increment the value in storage over `u32::MAX`.\n    StorageOverflow,\n}\n
"},{"location":"develop/parachains/customize-parachain/make-custom-pallet/#pallet-storage","title":"Pallet Storage","text":"

To persist and store state/data within the pallet (and subsequently, the blockchain you are building), the #[pallet::storage] macro is used. This macro allows the definition of abstract storage within the runtime and sets metadata for that storage. It can be applied multiple times to define different storage items. Several types are available for defining storage, which you can explore in the Polkadot SDK documentation.

This step adds a simple storage item, Something, which stores a single u32 value in the pallet's runtime storage.

To define storage, replace the #[pallet::storage] line with the following code block:

#[pallet::storage]\npub type Something<T> = StorageValue<_, u32>;\n
"},{"location":"develop/parachains/customize-parachain/make-custom-pallet/#pallet-dispatchable-extrinsics","title":"Pallet Dispatchable Extrinsics","text":"

Dispatchable functions enable users to interact with the pallet and trigger state changes. These functions are represented as \"extrinsics,\" which are similar to transactions. They must return a DispatchResult and be annotated with a weight and a call index.

The #[pallet::call_index] macro explicitly assigns an index to each call in the Call enum. This helps preserve backward compatibility when new dispatchables are introduced, since reordering dispatchables would otherwise change their indices.

The #[pallet::weight] macro assigns a weight to each call, determining its execution cost.
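This template assigns Weight::default() for simplicity. In production pallets, the weight is usually routed through a WeightInfo trait whose implementations come from benchmarking. A minimal sketch, assuming the pallet's Config trait bounds its WeightInfo type accordingly:

/// Weight functions, normally generated by benchmarking.
pub trait WeightInfo {
    fn do_something() -> Weight;
}

// With `type WeightInfo: WeightInfo;` declared in the pallet's Config trait,
// a dispatchable can reference the benchmarked weight:
// #[pallet::weight(T::WeightInfo::do_something())]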

This section adds two dispatchable functions:

  • do_something - takes a single u32 value, stores it in the pallet's storage, and emits an event
  • cause_error - checks whether a value exists in storage. If a value is found, it is incremented and stored back. If no value is present or an overflow occurs, a custom error is returned

To implement these calls, replace the #[pallet::call] line with the following code block:

#[pallet::call]
impl<T: Config> Pallet<T> {
    #[pallet::call_index(0)]
    #[pallet::weight(Weight::default())]
    pub fn do_something(origin: OriginFor<T>, something: u32) -> DispatchResult {
        // Check that the extrinsic was signed and get the signer.
        let who = ensure_signed(origin)?;

        // Update storage.
        Something::<T>::put(something);

        // Emit an event.
        Self::deposit_event(Event::SomethingStored { something, who });

        // Return a successful `DispatchResult`
        Ok(())
    }

    #[pallet::call_index(1)]
    #[pallet::weight(Weight::default())]
    pub fn cause_error(origin: OriginFor<T>) -> DispatchResult {
        let _who = ensure_signed(origin)?;

        // Read a value from storage.
        match Something::<T>::get() {
            // Return an error if the value has not been set.
            None => Err(Error::<T>::NoneValue.into()),
            Some(old) => {
                // Increment the value read from storage. This will cause an error in the event
                // of overflow.
                let new = old.checked_add(1).ok_or(Error::<T>::StorageOverflow)?;
                // Update the value in storage with the incremented result.
                Something::<T>::put(new);
                Ok(())
            },
        }
    }
}
"},{"location":"develop/parachains/customize-parachain/make-custom-pallet/#pallet-implementation-overview","title":"Pallet Implementation Overview","text":"

After following all the previous steps, the pallet is now fully implemented. Below is the complete code, combining the configuration, events, errors, storage, and dispatchable functions:

Code
pub use pallet::*;

#[frame_support::pallet]
pub mod pallet {
    use frame_support::pallet_prelude::*;
    use frame_system::pallet_prelude::*;

    #[pallet::pallet]
    pub struct Pallet<T>(_);

    #[pallet::config]
    pub trait Config: frame_system::Config {
        /// The overarching runtime event type.
        type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;
        /// A type representing the weights required by the dispatchables of this pallet.
        type WeightInfo;
    }

    #[pallet::event]
    #[pallet::generate_deposit(pub(super) fn deposit_event)]
    pub enum Event<T: Config> {
        /// A user has successfully set a new value.
        SomethingStored {
            /// The new value set.
            something: u32,
            /// The account who set the new value.
            who: T::AccountId,
        },
    }

    #[pallet::error]
    pub enum Error<T> {
        /// The value retrieved was `None` as no value was previously set.
        NoneValue,
        /// There was an attempt to increment the value in storage over `u32::MAX`.
        StorageOverflow,
    }

    #[pallet::storage]
    pub type Something<T> = StorageValue<_, u32>;

    #[pallet::call]
    impl<T: Config> Pallet<T> {
        #[pallet::call_index(0)]
        #[pallet::weight(Weight::default())]
        pub fn do_something(origin: OriginFor<T>, something: u32) -> DispatchResult {
            // Check that the extrinsic was signed and get the signer.
            let who = ensure_signed(origin)?;

            // Update storage.
            Something::<T>::put(something);

            // Emit an event.
            Self::deposit_event(Event::SomethingStored { something, who });

            // Return a successful `DispatchResult`
            Ok(())
        }

        #[pallet::call_index(1)]
        #[pallet::weight(Weight::default())]
        pub fn cause_error(origin: OriginFor<T>) -> DispatchResult {
            let _who = ensure_signed(origin)?;

            // Read a value from storage.
            match Something::<T>::get() {
                // Return an error if the value has not been set.
                None => Err(Error::<T>::NoneValue.into()),
                Some(old) => {
                    // Increment the value read from storage. This will cause an error in the event
                    // of overflow.
                    let new = old.checked_add(1).ok_or(Error::<T>::StorageOverflow)?;
                    // Update the value in storage with the incremented result.
                    Something::<T>::put(new);
                    Ok(())
                },
            }
        }
    }
}
"},{"location":"develop/parachains/customize-parachain/make-custom-pallet/#where-to-go-next","title":"Where to Go Next","text":"

With the pallet implemented, the next steps involve ensuring its reliability and performance before integrating it into a runtime. Check the following sections:

  • Testing - learn how to effectively test the functionality and reliability of your pallet to ensure it behaves as expected

  • Benchmarking - explore methods to measure the performance and execution cost of your pallet

  • Add a Pallet to the Runtime - follow this guide to include your pallet in a Polkadot SDK-based runtime, making it ready for use in your blockchain

"},{"location":"develop/parachains/customize-parachain/overview/","title":"Overview","text":""},{"location":"develop/parachains/customize-parachain/overview/#introduction","title":"Introduction","text":"

The runtime is the heart of any Polkadot SDK-based blockchain, handling the essential logic that governs state changes and transaction processing. With Polkadot SDK's FRAME (Framework for Runtime Aggregation of Modularized Entities), developers gain access to a powerful suite of tools for building custom blockchain runtimes. FRAME offers a modular architecture, featuring reusable pallets and support libraries, to streamline development.

This guide provides an overview of FRAME, its core components like pallets and system libraries, and demonstrates how to compose a runtime tailored to your specific blockchain use case. Whether you're integrating pre-built modules or designing custom logic, FRAME equips you with the tools to create scalable, feature-rich blockchains.

"},{"location":"develop/parachains/customize-parachain/overview/#frame-runtime-architecture","title":"FRAME Runtime Architecture","text":"

The following diagram illustrates how FRAME components integrate into the runtime:

All transactions sent to the runtime are handled by the frame_executive crate, which dispatches them to the appropriate pallet for execution. These runtime modules contain the logic for specific blockchain features. The frame_system module provides core functions, while frame_support libraries offer useful tools to simplify pallet development. Together, these components form the backbone of a FRAME-based blockchain's runtime.

"},{"location":"develop/parachains/customize-parachain/overview/#pallets","title":"Pallets","text":"

Pallets are modular components within the FRAME ecosystem that encapsulate specific blockchain functionalities. These modules offer customizable business logic for various use cases and features that can be integrated into a runtime.

Developers have the flexibility to implement any desired behavior in the core logic of the blockchain, such as:

  • Exposing new transactions
  • Storing information
  • Enforcing business rules

Pallets also include necessary wiring code to ensure proper integration and functionality within the runtime. FRAME provides a range of pre-built pallets for standard and common blockchain functionalities, including consensus algorithms, staking mechanisms, governance systems, and more. These pre-existing pallets serve as building blocks or templates, which developers can use as-is, modify, or reference when creating custom functionalities.

"},{"location":"develop/parachains/customize-parachain/overview/#pallet-structure","title":"Pallet Structure","text":"

Polkadot SDK heavily utilizes Rust macros, allowing developers to focus on specific functional requirements when writing pallets instead of dealing with technicalities and scaffolding code.

A typical pallet skeleton looks like this:

pub use pallet::*;

#[frame_support::pallet]
pub mod pallet {
  use frame_support::pallet_prelude::*;
  use frame_system::pallet_prelude::*;

  #[pallet::pallet]
  pub struct Pallet<T>(_);

  #[pallet::config]  // snip
  #[pallet::event]   // snip
  #[pallet::error]   // snip
  #[pallet::storage] // snip
  #[pallet::call]    // snip
}

All pallets, including custom ones, can use these attribute macros:

  • #[frame_support::pallet] - marks the module as usable in the runtime
  • #[pallet::pallet] - applied to a structure used to retrieve module information easily
  • #[pallet::config] - defines the configuration for the pallet's data types
  • #[pallet::event] - defines events to provide additional information to users
  • #[pallet::error] - lists possible errors in an enum to be returned upon unsuccessful execution
  • #[pallet::storage] - defines elements to be persisted in storage
  • #[pallet::call] - defines functions exposed as transactions, allowing dispatch to the runtime

These macros are applied as attributes to Rust modules, functions, structures, enums, and types. They enable the pallet to be built and added to the runtime, exposing the custom logic to the outer world.

Note

The macros above are the core components of a pallet. For a comprehensive guide on these and additional macros, refer to the pallet_macros section in the Polkadot SDK documentation.

"},{"location":"develop/parachains/customize-parachain/overview/#support-libraries","title":"Support Libraries","text":"

In addition to purpose-specific pallets, FRAME offers services and core libraries that facilitate composing and interacting with the runtime:

  • frame_system pallet - provides low-level types, storage, and functions for the runtime
  • frame_executive crate - orchestrates the execution of incoming function calls to the respective pallets in the runtime
  • frame_support crate - a collection of Rust macros, types, traits, and modules that simplify the development of Substrate pallets
  • frame_benchmarking crate - contains common runtime patterns for benchmarking and testing purposes
"},{"location":"develop/parachains/customize-parachain/overview/#compose-a-runtime-with-pallets","title":"Compose a Runtime with Pallets","text":"

The Polkadot SDK allows developers to construct a runtime by combining various pallets, both built-in and custom-made. This modular approach enables the creation of unique blockchain behaviors tailored to specific requirements.

The following diagram illustrates the process of selecting and combining FRAME pallets to compose a runtime:

This modular design allows developers to:

  • Rapidly prototype blockchain systems
  • Easily add or remove features by including or excluding pallets
  • Customize blockchain behavior without rebuilding core components
  • Leverage tested and optimized code from built-in pallets
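In code, this composition boils down to listing the chosen pallets inside the construct_runtime! macro. The following is a hedged sketch: the particular pallet selection and names are illustrative rather than prescribed.

frame_support::construct_runtime!(
    pub enum Runtime {
        // Core system pallet, required by every runtime.
        System: frame_system,
        // Built-in pallets selected for this chain.
        Timestamp: pallet_timestamp,
        Balances: pallet_balances,
        Sudo: pallet_sudo,
        // A custom pallet slots in alongside the built-in ones.
        TemplateModule: pallet_template,
    }
);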

For more detailed information on implementing this process, refer to the following sections:

  • Add a Pallet to Your Runtime
  • Create a Custom Pallet
"},{"location":"develop/parachains/customize-parachain/pallet-testing/","title":"Pallet Testing","text":""},{"location":"develop/parachains/customize-parachain/pallet-testing/#introduction","title":"Introduction","text":"

Unit testing in the Polkadot SDK helps ensure that the functions provided by a pallet behave as expected. It also confirms that data and events associated with a pallet are processed correctly during interactions. The Polkadot SDK offers a set of APIs to create a test environment that can simulate a runtime and mock transaction execution for both extrinsics and queries.

This guide will explore how to mock a runtime and test a pallet. For that, the Polkadot SDK pallets use the mock.rs and test.rs files as the basis for testing pallet processes: mock.rs defines the mock runtime, and test.rs contains the unit test functions that check the functionality of isolated pieces of code within the pallet.

"},{"location":"develop/parachains/customize-parachain/pallet-testing/#mocking-the-runtime","title":"Mocking the Runtime","text":"

To test a pallet, a mock runtime is created to simulate the behavior of the blockchain environment where the pallet will be included. This involves defining a minimal runtime configuration that provides only the dependencies required by the pallet under test.

For a complete example of a mocked runtime, check out the mock.rs file in the Solochain Template.

A mock.rs file defines the mock runtime in a typical Polkadot SDK project. It includes the elements described below.

"},{"location":"develop/parachains/customize-parachain/pallet-testing/#runtime-composition","title":"Runtime Composition","text":"

This section describes the pallets included for the mocked runtime. For example, the following code snippet shows how to build a mocked runtime called Test that consists of the frame_system pallet and the pallet_template:

frame_support::construct_runtime!(
    pub enum Test {
        System: frame_system,
        TemplateModule: pallet_template,
    }
);
"},{"location":"develop/parachains/customize-parachain/pallet-testing/#pallets-configurations","title":"Pallets Configurations","text":"

This section outlines the types linked to each pallet in the mocked runtime. For testing, many of these types are simple or primitive, replacing more complex, abstract types to streamline the process.

impl frame_system::Config for Test {
    ...
    type Index = u64;
    type BlockNumber = u64;
    type Hash = H256;
    type Hashing = BlakeTwo256;
    type AccountId = u64;
    ...
}

The configuration should be set for each pallet existing in the mocked runtime.
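For instance, a minimal configuration of the template pallet in the mock might look like the following sketch, which assumes the pallet's Config trait only requires RuntimeEvent and an unbounded WeightInfo, as in the tutorial pallet above:

impl pallet_template::Config for Test {
    // The mock runtime's event type generated by construct_runtime!.
    type RuntimeEvent = RuntimeEvent;
    // The unit type satisfies an unbounded associated type.
    type WeightInfo = ();
}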

Note

Types are simplified to streamline the testing process. For example, AccountId is u64, meaning a valid account address can be an unsigned integer:

let alice_account: u64 = 1;
"},{"location":"develop/parachains/customize-parachain/pallet-testing/#genesis-config-initialization","title":"Genesis Config Initialization","text":"

To initialize the genesis storage according to the mocked runtime, the following function can be used:

pub fn new_test_ext() -> sp_io::TestExternalities {
    frame_system::GenesisConfig::<Test>::default()
        .build_storage()
        .unwrap()
        .into()
}
"},{"location":"develop/parachains/customize-parachain/pallet-testing/#pallet-unit-testing","title":"Pallet Unit Testing","text":"

Once the mock runtime is in place, the next step is to write unit tests that evaluate the functionality of your pallet. Unit tests allow you to test specific pallet features in isolation, ensuring that each function behaves correctly under various conditions. These tests typically reside in your pallet's test.rs file.

"},{"location":"develop/parachains/customize-parachain/pallet-testing/#writing-unit-tests","title":"Writing Unit Tests","text":"

Unit tests in the Polkadot SDK use the Rust testing framework, and the mock runtime you've defined earlier will serve as the test environment. Below are the typical steps involved in writing unit tests for a pallet.

"},{"location":"develop/parachains/customize-parachain/pallet-testing/#test-initialization","title":"Test Initialization","text":"

Each test starts by initializing the runtime environment, typically using the new_test_ext() function, which sets up the mock storage and environment.

#[test]
fn test_pallet_functionality() {
    new_test_ext().execute_with(|| {
        // Test logic goes here
    });
}
"},{"location":"develop/parachains/customize-parachain/pallet-testing/#function-call-testing","title":"Function Call Testing","text":"

Call the pallet's extrinsics or functions to simulate user interaction or internal logic. Use the assert_ok! macro to check for successful execution and assert_err! to verify that errors are handled properly.

#[test]
fn it_works_for_valid_input() {
    new_test_ext().execute_with(|| {
        // Call an extrinsic or function
        assert_ok!(TemplateModule::some_function(Origin::signed(1), valid_param));
    });
}

#[test]
fn it_fails_for_invalid_input() {
    new_test_ext().execute_with(|| {
        // Call an extrinsic with invalid input and expect an error
        assert_err!(
            TemplateModule::some_function(Origin::signed(1), invalid_param),
            Error::<Test>::InvalidInput
        );
    });
}
"},{"location":"develop/parachains/customize-parachain/pallet-testing/#storage-testing","title":"Storage Testing","text":"

After calling a function or extrinsic in your pallet, it's important to verify that the state changes in the pallet's storage match the expected behavior. This ensures that data is updated correctly based on the actions taken.

The following example shows how to test the storage behavior before and after the function call:

#[test]
fn test_storage_update_on_extrinsic_call() {
    new_test_ext().execute_with(|| {
        // Check the initial storage state (before the call)
        assert_eq!(Something::<Test>::get(), None);

        // Dispatch a signed extrinsic, which modifies storage
        assert_ok!(TemplateModule::do_something(RuntimeOrigin::signed(1), 42));

        // Validate that the storage has been updated as expected (after the call)
        assert_eq!(Something::<Test>::get(), Some(42));
    });
}
"},{"location":"develop/parachains/customize-parachain/pallet-testing/#event-testing","title":"Event Testing","text":"

It's also crucial to test the events that your pallet emits during execution. By default, events generated in a pallet using the #[pallet::generate_deposit] macro are stored under the system's event storage key (system/events) as EventRecord entries. These can be accessed using System::events() or verified with specific helper methods provided by the system pallet, such as assert_has_event and assert_last_event.

Here's an example of testing events in a mock runtime:

#[test]
fn it_emits_events_on_success() {
    new_test_ext().execute_with(|| {
        // Call an extrinsic or function
        assert_ok!(TemplateModule::some_function(Origin::signed(1), valid_param));

        // Verify that the expected event was emitted
        assert!(System::events().iter().any(|record| {
            record.event == Event::TemplateModule(TemplateEvent::SomeEvent)
        }));
    });
}

Some key considerations are:

  • Block number - events are not emitted on the genesis block, so you need to set the block number using System::set_block_number() to ensure events are triggered
  • Converting events - use .into() when instantiating your pallet's event to convert it into a generic event type, as required by the system's event storage (see the sketch below)
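The following sketch combines both considerations, reusing the tutorial pallet's do_something call and SomethingStored event. It is a hedged example that assumes the mock runtime above, where AccountId is u64 and the pallet is named TemplateModule:

#[test]
fn it_emits_something_stored() {
    new_test_ext().execute_with(|| {
        // Events are not recorded at the genesis block, so advance to block 1.
        System::set_block_number(1);

        assert_ok!(TemplateModule::do_something(RuntimeOrigin::signed(1), 42));

        // `.into()` converts the pallet event into the runtime-wide event type.
        System::assert_last_event(
            Event::<Test>::SomethingStored { something: 42, who: 1 }.into(),
        );
    });
}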
"},{"location":"develop/parachains/customize-parachain/pallet-testing/#where-to-go-next","title":"Where to Go Next","text":"
  • Dive into the full implementation of the mock.rs and test.rs files in the Solochain Template
  • To evaluate the resource usage of your pallet operations, refer to the Benchmarking documentation for guidance on measuring efficiency
"},{"location":"develop/parachains/deployment/","title":"Deployment","text":"

Learn how to prepare your blockchain for deployment using the Polkadot SDK, including building deterministic Wasm runtimes and generating chain specifications.

To better understand the deployment process, check out the following section. If you're ready to start, jump to In This Section to begin working through the deployment guides.

"},{"location":"develop/parachains/deployment/#deployment-process","title":"Deployment Process","text":"

Taking your Polkadot SDK-based blockchain from a local environment to production involves several steps, ensuring your network is stable, secure, and ready for real-world use. The following diagram outlines the process at a high level:

flowchart TD
    %% Group 1: Pre-Deployment
    subgraph group1 [Pre-Deployment]
        direction LR
        A("Local \nDevelopment \nand Testing") --> B("Runtime \nCompilation")
        B --> C("Generate \nChain \nSpecifications")
        C --> D("Prepare \nDeployment \nEnvironment")
        D --> E("Acquire \nCoretime")
    end

    %% Group 2: Deployment
    subgraph group2 [Deployment]
        F("Launch \nand \nMonitor")
    end

    %% Group 3: Post-Deployment
    subgraph group3 [Post-Deployment]
        G("Maintenance \nand \nUpgrades")
    end

    %% Connections Between Groups
    group1 --> group2
    group2 --> group3

    %% Styling
    style group1 fill:#ffffff,stroke:#6e7391,stroke-width:1px
    style group2 fill:#ffffff,stroke:#6e7391,stroke-width:1px
    style group3 fill:#ffffff,stroke:#6e7391,stroke-width:1px

For more details, check out the Deploy a Parachain to Polkadot overview.

"},{"location":"develop/parachains/deployment/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/parachains/deployment/#additional-resources","title":"Additional ResourcesCheck Out the Chain Spec Builder Docs","text":"

Learn about Substrate's chain spec builder utility.

"},{"location":"develop/parachains/deployment/build-deterministic-runtime/","title":"Build a Deterministic Runtime","text":""},{"location":"develop/parachains/deployment/build-deterministic-runtime/#introduction","title":"Introduction","text":"

By default, the Rust compiler produces optimized Wasm binaries. These binaries are suitable for working in an isolated environment, such as local development. However, the Wasm binaries the compiler builds by default aren't guaranteed to be deterministically reproducible. Each time the compiler generates the Wasm runtime, it might produce a slightly different Wasm byte code. This is problematic in a blockchain network where all nodes must use exactly the same raw chain specification file.

Working with builds that aren't guaranteed to be deterministically reproducible can cause other problems, too. For example, when automating the build process for a blockchain, it is important that the same code always produces the same bytecode. Without a deterministic build, compiling the Wasm runtime with every push would produce inconsistent and unpredictable results, making integration with any automation difficult and likely to break a CI/CD pipeline continuously. Deterministic builds, where code always compiles to exactly the same bytecode, ensure that the Wasm runtime can be inspected, audited, and independently verified.

"},{"location":"develop/parachains/deployment/build-deterministic-runtime/#prerequisites","title":"Prerequisites","text":"

Before you begin, ensure you have Docker installed.

"},{"location":"develop/parachains/deployment/build-deterministic-runtime/#tooling-for-wasm-runtime","title":"Tooling for Wasm Runtime","text":"

To compile the Wasm runtime deterministically, the same tooling that produces the runtime for Polkadot, Kusama, and other Polkadot SDK-based chains can be used. This tooling, referred to collectively as the Substrate Runtime Toolbox or srtool, ensures that the same source code consistently compiles to an identical Wasm blob.

At the core of srtool is a Docker image; builds run inside a container created from that image. The name of the srtool Docker image specifies the version of the Rust compiler used to compile the code included in the image. For example, the image paritytech/srtool:1.62.0 indicates that the code in the image was compiled with version 1.62.0 of the rustc compiler.

"},{"location":"develop/parachains/deployment/build-deterministic-runtime/#working-with-the-docker-container","title":"Working with the Docker Container","text":"

The srtool-cli package is a command-line utility written in Rust that installs an executable program called srtool. This program simplifies the interactions with the srtool Docker container.

Over time, the tooling around the srtool Docker image has expanded to include the following tools and helper programs:

  • srtool-cli - provides a command-line interface to pull the srtool Docker image, get information about the image and tooling used to interact with it, and build the runtime using the srtool Docker container
  • subwasm - provides command-line options for working with the metadata and Wasm runtime built using srtool. The subwasm program is also used internally to perform tasks in the srtool image
  • srtool-actions - provides GitHub actions to integrate builds produced using the srtool image with your GitHub CI/CD pipelines
  • srtool-app - provides a simple graphical user interface for building the runtime using the srtool Docker image
"},{"location":"develop/parachains/deployment/build-deterministic-runtime/#prepare-the-environment","title":"Prepare the Environment","text":"

It is recommended to install the srtool-cli program to work with the Docker image using a simple command-line interface.

To prepare the environment:

  1. Verify that Docker is installed by running the following command:

    docker --version

    If Docker is installed, the command will display version information:

    Docker version 20.10.17, build 100c701

  2. Install the srtool command-line interface by running the following command:

    cargo install --git https://github.com/chevdor/srtool-cli
  3. View usage information for the srtool command-line interface by running the following command:

    srtool help
  4. Download the latest srtool Docker image by running the following command:

    srtool pull
"},{"location":"develop/parachains/deployment/build-deterministic-runtime/#start-a-deterministic-build","title":"Start a Deterministic Build","text":"

After preparing the environment, the Wasm runtime can be compiled using the srtool Docker image.

To build the runtime, you need to open your Polkadot SDK-based project in a terminal shell and run the following command:

srtool build --app --package INSERT_RUNTIME_PACKAGE_NAME --runtime-dir INSERT_RUNTIME_PATH
  • The name specified for the --package should be the name defined in the Cargo.toml file for the runtime
  • The path specified for the --runtime-dir should be the path to the directory containing the runtime's Cargo.toml file. For example:

    node/
    pallets/
    runtime/
    ├──lib.rs
    └──Cargo.toml # INSERT_RUNTIME_PATH should point to the directory containing this file
    ...
  • If the Cargo.toml file for the runtime is located in a runtime subdirectory, for example, runtime/kusama, the --runtime-dir parameter can be omitted

"},{"location":"develop/parachains/deployment/build-deterministic-runtime/#use-srtool-in-github-actions","title":"Use srtool in GitHub Actions","text":"

To add a GitHub workflow for building the runtime:

  1. Create a .github/workflows directory in the chain's directory
  2. In the .github/workflows directory, click Add file, then select Create new file
  3. Copy the sample GitHub action from the basic.yml example in the srtool-actions repository and paste it into the file you created in the previous step

    basic.yml
    name: Srtool build

    on: push

    jobs:
      srtool:
        runs-on: ubuntu-latest
        strategy:
          matrix:
            chain: ["asset-hub-kusama", "asset-hub-westend"]
        steps:
          - uses: actions/checkout@v3
          - name: Srtool build
            id: srtool_build
            uses: chevdor/srtool-actions@v0.8.0
            with:
              chain: ${{ matrix.chain }}
              runtime_dir: polkadot-parachains/${{ matrix.chain }}-runtime
          - name: Summary
            run: |
              echo '${{ steps.srtool_build.outputs.json }}' | jq . > ${{ matrix.chain }}-srtool-digest.json
              cat ${{ matrix.chain }}-srtool-digest.json
              echo "Runtime location: ${{ steps.srtool_build.outputs.wasm }}"
  4. Modify the settings in the sample action

    For example, modify the following settings:

    • The name of the chain
    • The name of the runtime package
    • The location of the runtime
  5. Type a name for the action file and commit

"},{"location":"develop/parachains/deployment/build-deterministic-runtime/#use-the-srtool-image-via-docker-hub","title":"Use the srtool Image via Docker Hub","text":"

If utilizing srtool-cli or srtool-app isn't an option, the paritytech/srtool container image can be used directly via Docker Hub.

To pull the image from Docker Hub:

  1. Sign in to Docker Hub
  2. Type paritytech/srtool in the search field and press enter
  3. Click paritytech/srtool, then click Tags
  4. Copy the command for the image you want to pull
  5. Open a terminal shell on your local computer
  6. Paste the command you copied from the Docker Hub. For example, you might run a command similar to the following, which downloads and unpacks the image:

    docker pull paritytech/srtool:1.62.0
"},{"location":"develop/parachains/deployment/build-deterministic-runtime/#naming-convention-for-images","title":"Naming Convention for Images","text":"

Keep in mind that there is no latest tag for the srtool image. Ensure that the image selected is compatible with the locally available version of the Rust compiler.

The naming convention for paritytech/srtool Docker images specifies the version of the Rust compiler used to compile the code included in the image. Some images specify both a compiler version and the version of the build script used. For example, an image named paritytech/srtool:1.62.0-0.9.19 was compiled with version 1.62.0 of the rustc compiler and version 0.9.19 of the build script. Images that only specify the compiler version always contain the software's latest version.

"},{"location":"develop/parachains/deployment/generate-chain-specs/","title":"Generate Chain Specs","text":""},{"location":"develop/parachains/deployment/generate-chain-specs/#introduction","title":"Introduction","text":"

A chain specification collects information that describes a Polkadot SDK-based network. A chain specification is a crucial parameter when starting a node, providing the genesis configurations, bootnodes, and other parameters relating to that particular network. It identifies the network a blockchain node connects to, the other nodes it initially communicates with, and the initial state that nodes must agree on to produce blocks.

The chain specification is defined using the ChainSpec struct. This struct separates the information required for a chain into two parts:

  • Client specification - contains information the node uses to communicate with network participants and send data to telemetry endpoints. Many of these chain specification settings can be overridden by command-line options when starting a node or can be changed after the blockchain has started

  • Initial genesis state - agreed upon by all nodes in the network. It must be set when the blockchain is first started and cannot be changed after that without starting a whole new blockchain

"},{"location":"develop/parachains/deployment/generate-chain-specs/#node-settings-customization","title":"Node Settings Customization","text":"

For the node, the chain specification controls information such as:

  • The bootnodes the node will communicate with
  • The server endpoints for the node to send telemetry data to
  • The human and machine-readable names for the network the node will connect to

The chain specification can be customized to include additional information. For example, you can configure the node to connect to specific blocks at specific heights to prevent long-range attacks when syncing a new node from genesis.

Note that you can customize node settings after genesis. However, nodes only add peers that use the same protocolId.

"},{"location":"develop/parachains/deployment/generate-chain-specs/#genesis-configuration-customization","title":"Genesis Configuration Customization","text":"

All nodes in the network must agree on the genesis state before they can agree on any subsequent blocks. The information configured in the genesis portion of a chain specification is used to create a genesis block. When you start the first node, it takes effect and cannot be overridden with command-line options. However, you can configure some information in the genesis section of a chain specification. For example, you can customize it to include information such as:

  • Initial account balances
  • Accounts that are initially part of a governance council
  • The account that controls the sudo key
  • Any other genesis state for a pallet

Nodes also require the compiled Wasm to execute the runtime logic on the chain, so the initial runtime must also be supplied in the chain specification. For a more detailed look at customizing the genesis chain specification, be sure to check out the Polkadot SDK Docs.

"},{"location":"develop/parachains/deployment/generate-chain-specs/#declaring-storage-items-for-a-runtime","title":"Declaring Storage Items for a Runtime","text":"

A runtime usually requires some storage items to be configured at genesis. This includes the initial state for pallets, for example, how much balance specific accounts have, or which account will have sudo permissions.

These storage values are configured in the genesis portion of the chain specification. You can create a patch file and ingest it using the chain-spec-builder utility, which is explained in the Creating a Custom Chain Specification section. For node-side code, the same values can be set programmatically, as sketched below.
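A hedged sketch using the sc_chain_spec builder API; the ChainSpec alias and WASM_BINARY constant are assumptions borrowed from the node templates, and the method names follow that builder API:

use sc_chain_spec::ChainType;
use serde_json::json;

// `ChainSpec` and `WASM_BINARY` are assumed to be defined as in the node
// templates (a GenericChainSpec alias and the runtime's embedded Wasm blob).
pub fn development_chain_spec() -> Result<ChainSpec, String> {
    Ok(ChainSpec::builder(
        WASM_BINARY.ok_or("Development wasm not available")?,
        None, // no chain-spec extensions in this sketch
    )
    .with_name("Development")
    .with_id("dev")
    .with_chain_type(ChainType::Development)
    // Only the listed keys override the runtime's default genesis config,
    // mirroring what a chain-spec-builder patch file does.
    .with_genesis_config_patch(json!({
        "sudo": { "key": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY" }
    }))
    .build())
}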

"},{"location":"develop/parachains/deployment/generate-chain-specs/#chain-specification-json-format","title":"Chain Specification JSON Format","text":"

Users generally work with the JSON format of the chain specification. Internally, the chain specification is embedded in the GenericChainSpec struct, with specific properties accessible through the ChainSpec struct. The chain specification includes the following keys:

  • name - the human-readable name for the network
  • id - the machine-readable identifier for the network
  • chainType - the type of chain to start (refer to ChainType for more details)
  • bootNodes - a list of multiaddresses belonging to the chain's boot nodes
  • telemetryEndpoints - an optional list of multiaddresses for telemetry endpoints with verbosity levels ranging from 0 to 9 (0 being the lowest verbosity)
  • protocolId - the optional protocol identifier for the network
  • forkId - an optional fork ID that should typically be left empty; it can be used to signal a fork at the network level when two chains share the same genesis hash
  • properties - custom properties provided as a key-value JSON object
  • codeSubstitutes - an optional mapping of block numbers to Wasm code
  • genesis - the genesis configuration for the chain

For example, the following JSON shows a basic chain specification file:

{
    "name": "chainName",
    "id": "chainId",
    "chainType": "Local",
    "bootNodes": [],
    "telemetryEndpoints": null,
    "protocolId": null,
    "properties": null,
    "codeSubstitutes": {},
    "genesis": {
        "code": "0x..."
    }
}
"},{"location":"develop/parachains/deployment/generate-chain-specs/#creating-a-custom-chain-specification","title":"Creating a Custom Chain Specification","text":"

To create a custom chain specification, you can use chain-spec-builder, a CLI tool that generates chain specifications from a node's runtime. To install the tool, run the following command:

cargo install staging-chain-spec-builder

To verify the installation, run the following:

chain-spec-builder --help
"},{"location":"develop/parachains/deployment/generate-chain-specs/#plain-chain-specifications","title":"Plain Chain Specifications","text":"

To create a plain chain specification, you can use the following utility within your project:

chain-spec-builder create -r <INSERT_RUNTIME_WASM_PATH> <INSERT_COMMAND>

Note

Before running the command, ensure that the runtime has been compiled and is available at the specified path.

Ensure to replace <INSERT_RUNTIME_WASM_PATH> with the path to the runtime Wasm file and <INSERT_COMMAND> with the command to insert the runtime into the chain specification. The available commands are:

  • patch - overwrites the runtime's default genesis config with the provided patch. You can check the following patch file as a reference
  • full - builds the genesis config for the runtime using the JSON file. No defaults will be used. As a reference, you can check the following full file
  • default - gets the default genesis config for the runtime and uses it in ChainSpec. Please note that the default genesis config may not be valid. For some runtimes, initial values should be added there (e.g., session keys, BABE epoch)
  • named-preset - uses a named preset provided by the runtime to build the chain spec
"},{"location":"develop/parachains/deployment/generate-chain-specs/#raw-chain-specifications","title":"Raw Chain Specifications","text":"

With runtime upgrades, the blockchain's runtime can be upgraded with newer business logic. Chain specifications contain information structured in a way that the node's runtime can understand. For example, consider this excerpt of a common entry for a chain specification:

\"sudo\": {\n    \"key\": \"5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY\"\n}\n

In the plain chain spec JSON file, the keys and associated values are in a human-readable format, which can be used to initialize the genesis storage. When the chain specification is loaded, the runtime converts these readable values into storage items within the trie. However, for long-lived networks like testnets or production chains, using the raw format for storage initialization is preferred. This avoids the need for conversion by the runtime and ensures that storage items remain consistent, even when runtime upgrades occur.

To enable a node with an upgraded runtime to synchronize with a chain from genesis, the plain chain specification is encoded in a raw format. The raw format allows the distribution of chain specifications that all nodes can use to synchronize the chain even after runtime upgrades.

To convert a plain chain specification to a raw chain specification, you can use the following utility:

chain-spec-builder convert-to-raw chain_spec.json

After the conversion to the raw format, the sudo key snippet looks like this:

\"0x50a63a871aced22e88ee6466fe5aa5d9\": \"0xd43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d\",\n

The raw chain specification can be used to initialize the genesis storage for a node.

"},{"location":"develop/parachains/deployment/generate-chain-specs/#where-to-go-next","title":"Where to Go Next","text":"

After generating a chain specification, you can use it to initialize the genesis storage for a node. Refer to the following guides to learn how to proceed with the deployment of your blockchain:

  • Obtain Coretime - learn how to acquire the coretime your parachain needs so that the relay chain can validate its blocks
  • Deployment - explore the steps required to deploy your chain specification, ensuring a smooth launch of your network and proper node operation
  • Maintenance - discover best practices for maintaining your blockchain post-deployment, including how to manage upgrades and monitor network health
"},{"location":"develop/parachains/deployment/obtain-coretime/","title":"Obtain Coretime","text":""},{"location":"develop/parachains/deployment/obtain-coretime/#introduction","title":"Introduction","text":"

Securing coretime is essential for operating a parachain on Polkadot. It provides your parachain with guaranteed computational resources and access to Polkadot's shared security model, ensuring your blockchain can process transactions, maintain its state, and interact securely with other parachains in the network. Without coretime, a parachain cannot participate in the ecosystem or leverage the relay chain's validator set for security.

Coretime represents the computational resources allocated to your parachain on the Polkadot network. It determines when and how often your parachain can produce blocks and have them validated by the relay chain.

There are two primary methods to obtain coretime:

  • Bulk coretime - purchase computational resources in advance for a full month
  • On-demand coretime - buy computational resources as needed for individual block production

This guide explains the different methods of obtaining coretime and walks through the necessary steps to get your parachain running.

"},{"location":"develop/parachains/deployment/obtain-coretime/#prerequisites","title":"Prerequisites","text":"

Before obtaining coretime, ensure you have:

  • Developed your parachain runtime using the Polkadot SDK
  • Set up and configured a parachain collator for your target relay chain
  • Successfully compiled your parachain collator node
  • Generated and exported your parachain's genesis state
  • Generated and exported your parachain's validation code (Wasm)
"},{"location":"develop/parachains/deployment/obtain-coretime/#initial-setup-steps","title":"Initial Setup Steps","text":"
  1. Reserve a unique identifier, ParaID, for your parachain:

    1. Connect to the relay chain
    2. Submit the registrar.reserve extrinsic

    Upon success, you'll receive a registered ParaID

  2. Register your parachain's essential information by submitting the registrar.register extrinsic with the following parameters:

    • id - your reserved ParaID
    • genesisHead - your exported genesis state
    • validationCode - your exported Wasm validation code
  3. Start your parachain collator and begin synchronization with the relay chain

"},{"location":"develop/parachains/deployment/obtain-coretime/#obtaining-coretime","title":"Obtaining Coretime","text":""},{"location":"develop/parachains/deployment/obtain-coretime/#bulk-coretime","title":"Bulk Coretime","text":"

Bulk coretime provides several advantages:

  • Monthly allocation of resources
  • Guaranteed block production slots (every 12 seconds, or 6 seconds with Asynchronous Backing)
  • Priority renewal rights
  • Protection against price fluctuations
  • Ability to split and resell unused coretime

To purchase bulk coretime:

  1. Access the Coretime system parachain
  2. Interact with the Broker pallet
  3. Purchase your desired amount of coretime
  4. Assign the purchased core to your registered ParaID

After successfully obtaining coretime, your parachain will automatically start producing blocks at regular intervals.

For current marketplaces and pricing, consult the Coretime Marketplaces page on the Polkadot Wiki.

"},{"location":"develop/parachains/deployment/obtain-coretime/#on-demand-coretime","title":"On-demand Coretime","text":"

On-demand coretime allows for flexible, as-needed block production. To purchase:

  1. Ensure your collator node is fully synchronized with the relay chain
  2. From the account that registered the ParaID, submit the onDemand.placeOrderAllowDeath extrinsic with:

    • maxAmountFor - sufficient funds for the transaction
    • paraId - your registered ParaID

After successfully executing the extrinsic, your parachain will produce a block.

"},{"location":"develop/parachains/get-started/","title":"Get Started","text":"

Learn how to start building with the Polkadot SDK, from installation and basic concepts to creating and deploying your custom blockchain. This powerful and versatile developer kit is designed to facilitate building on the Polkadot network.

"},{"location":"develop/parachains/get-started/#key-features-of-the-polkadot-sdk","title":"Key Features of the Polkadot SDK","text":"

Features developers can expect from the Polkadot SDK include:

  • Networking and peer-to-peer communication (powered by libp2p)
  • Consensus protocols
  • Cryptography
  • A robust library of pre-built pallets (modules)
  • Benchmarking and testing suites
"},{"location":"develop/parachains/get-started/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/parachains/get-started/#additional-resources","title":"Additional ResourcesPolkadot SDK repositoryPolkadot SDK Rust documentation","text":"

The Polkadot SDK repository provides all the components needed to start building on the Polkadot network. Check out the latest releases, issues, and more.

Check out a code-level introduction to the Polkadot SDK. Learn about the structure of the SDK and provided tools.

"},{"location":"develop/parachains/get-started/build-custom-parachains/","title":"Build Custom Parachains","text":""},{"location":"develop/parachains/get-started/build-custom-parachains/#introduction","title":"Introduction","text":"

Building custom parachains with the Polkadot SDK allows developers to create specialized blockchain solutions tailored to unique requirements. By leveraging Substrate (a Rust-based, modular blockchain development framework), the Polkadot SDK provides powerful tools to construct chains that can either stand alone or connect to Polkadot's shared security network as parachains. This flexibility empowers projects across various sectors to launch blockchains that meet specific functional, security, and scalability needs.

This guide covers the core steps for building a custom blockchain using the Polkadot SDK, starting from pre-built chain templates. These templates simplify development, providing an efficient starting point that can be further customized, allowing you to focus on implementing the features and modules that set your blockchain apart.

"},{"location":"develop/parachains/get-started/build-custom-parachains/#starting-from-templates","title":"Starting from Templates","text":"

Using pre-built templates is an efficient way to begin building a custom blockchain. Templates provide a foundational setup with pre-configured modules, letting developers avoid starting from scratch and instead focus on customization. Depending on your project's goals, whether you want a simple test chain, a standalone chain, or a parachain that integrates with Polkadot's relay chains, there are templates designed to suit different levels of complexity and scalability.

Within the Polkadot SDK, the following templates are available to get you started:

  • minimal-template - includes only the essential components necessary for a functioning blockchain. It's ideal for developers who want to gain familiarity with blockchain basics and test simple customizations before scaling up

  • solochain-template - provides a foundation for creating standalone blockchains with moderate features, including a simple consensus mechanism and several core FRAME pallets. It's a solid starting point for developers who want a fully functional chain that doesn't depend on a relay chain

  • parachain-template - designed for connecting to relay chains like Polkadot, Kusama, or Paseo, this template enables a chain to operate as a parachain. For projects aiming to integrate with Polkadot's ecosystem, this template offers a great starting point

In addition, several external templates offer unique features and can align with specific use cases or developer familiarity:

  • OpenZeppelin - offers two flexible starting points:

    • The generic-runtime-template provides a minimal setup with essential pallets and secure defaults, creating a reliable foundation for custom blockchain development
    • The evm-runtime-template enables EVM compatibility, allowing developers to migrate Solidity contracts and EVM-based dApps. This template is ideal for Ethereum developers looking to leverage Substrate's capabilities
  • Tanssi - provides developers with pre-built templates that can help accelerate the process of creating appchains

  • Pop Network - designed with user-friendliness in mind, Pop Network offers an approachable starting point for new developers, with a simple CLI interface for creating appchains

Choosing a suitable template depends on your project's unique requirements, level of customization, and integration needs. Starting from a template speeds up development and lets you focus on implementing your chain's unique features rather than the foundational blockchain setup.

"},{"location":"develop/parachains/get-started/build-custom-parachains/#high-level-steps-to-build-a-custom-chain","title":"High-Level Steps to Build a Custom Chain","text":"

Building a custom blockchain with the Polkadot SDK involves several core steps, from environment setup to deployment. Here's a breakdown of each stage:

  1. Set up the development environment - install Rust and configure all necessary dependencies to work with the Polkadot SDK (for more information, check the Install Polkadot SDK dependencies page). Ensuring your environment is correctly set up from the start is crucial for avoiding compatibility issues later

  2. Clone the chain template - start by downloading the code for one of the pre-built templates that best aligns with your project needs. Each template offers a different configuration, so select one based on your chain's intended functionality

  3. Define your chain's custom logic - with your chosen template, check the runtime configuration to customize the chain's functionality. Polkadot's modular "pallet" system lets you easily add or modify features like account balances, transaction handling, and staking. Creating custom pallets to implement unique features and combining them with existing ones enables you to define the unique aspects of your chain

  4. Test and debug - testing is essential to ensure your custom chain works as intended. Conduct unit tests for individual pallets and integration tests for interactions between pallets

  5. Compile - after finalizing and testing your custom configurations, compile the blockchain to generate the necessary executable files for running a node. Run the node locally to validate that your customizations work as expected and that your chain is stable and responsive

Each of these steps is designed to build on the last, helping ensure that your custom blockchain is functional, optimized, and ready for deployment within the Polkadot ecosystem or beyond.

"},{"location":"develop/parachains/get-started/build-custom-parachains/#where-to-go-next","title":"Where to Go Next","text":"

Once your chain is functional locally, depending on your project's goals, you can deploy to a TestNet to monitor performance and gather feedback or launch directly on a MainNet. To learn more about this process, check the Deploy a Parachain section of the documentation.

After deployment, regular monitoring and maintenance are essential to ensure that the chain is functioning as expected. Developers need to be able to monitor the chain's performance, identify issues, and troubleshoot problems. Key activities include tracking network health, node performance, and transaction throughput. It's also essential to test the blockchain's scalability under high load and perform security audits regularly to prevent vulnerabilities. For more information on monitoring and maintenance, refer to the Maintenance section.

"},{"location":"develop/parachains/get-started/deploy-parachain-to-polkadot/","title":"Deploy a Parachain","text":""},{"location":"develop/parachains/get-started/deploy-parachain-to-polkadot/#introduction","title":"Introduction","text":"

Deploying a blockchain with the Polkadot SDK is a critical step in transforming a locally developed network into a secure, fully functioning system for public or private use. It involves more than just launching a runtime; you'll need to prepare the chain specification, ensure ecosystem compatibility, and plan for long-term maintenance and updates.

Whether deploying a test network for development or a mainnet for production, this guide highlights the essential steps to get your blockchain operational. It provides an overview of the deployment process, introducing key concepts, tools, and best practices for a smooth transition from development to production.

"},{"location":"develop/parachains/get-started/deploy-parachain-to-polkadot/#deployment-process","title":"Deployment Process","text":"

Taking your Polkadot SDK-based blockchain from a local environment to production involves several steps, ensuring your network is stable, secure, and ready for real-world use. The following diagram outlines the process at a high level:

flowchart TD
    %% Group 1: Pre-Deployment
    subgraph group1 [Pre-Deployment]
        direction LR
        A("Local \nDevelopment \nand Testing") --> B("Runtime \nCompilation")
        B --> C("Generate \nChain \nSpecifications")
        C --> D("Prepare \nDeployment \nEnvironment")
        D --> E("Acquire \nCoretime")
    end

    %% Group 2: Deployment
    subgraph group2 [Deployment]
        F("Launch \nand \nMonitor")
    end

    %% Group 3: Post-Deployment
    subgraph group3 [Post-Deployment]
        G("Maintenance \nand \nUpgrades")
    end

    %% Connections Between Groups
    group1 --> group2
    group2 --> group3

    %% Styling
    style group1 fill:#ffffff,stroke:#6e7391,stroke-width:1px
    style group2 fill:#ffffff,stroke:#6e7391,stroke-width:1px
    style group3 fill:#ffffff,stroke:#6e7391,stroke-width:1px
  • Local development and testing - developers begin by building the runtime, selecting and configuring the necessary pallets while refining network features. In this phase, it's essential to run a local TestNet to verify transactions and confirm the blockchain behaves as expected. Thorough unit and integration testing is also crucial before launch, covering not only individual components but also the interactions between pallets

  • Runtime compilation - Polkadot SDK-based blockchains are built with Wasm, a highly portable and efficient format. Compiling your blockchain's runtime into Wasm ensures it can be executed reliably across various environments, guaranteeing network-wide compatibility and security. The srtool toolchain is helpful for this purpose since it allows you to compile deterministic runtimes

  • Generate chain specifications - the chain spec file defines the structure and configuration of your blockchain. It includes initial node identities, session keys, and other parameters. Defining a well thought-out chain specification ensures that your network will operate smoothly and according to your intended design

  • Deployment environment - whether launching a local test network or a production-grade blockchain, selecting the proper infrastructure is vital. For further information about these topics, see the Infrastructure section

  • Acquire coretime - to build on top of the Polkadot network, users need to acquire coretime (either on-demand or in bulk) to access the computational resources of the relay chain. This allows for the secure validation of parachain blocks through a randomized selection of relay chain validators

    Note

    If you're building a standalone blockchain (solochain) that won't connect to Polkadot as a parachain, you can skip this step, as there's no need to acquire coretime or implement Cumulus.

  • Launch and monitor - once everything is configured, you can launch the blockchain, initiating the network with your chain spec and Wasm runtime. Validators or collators will begin producing blocks, and the network will go live. Post-launch, monitoring is vital to ensuring network health: tracking block production, node performance, and overall security

  • Maintenance and upgrade - a blockchain continues to evolve post-deployment. As the network expands and adapts, it may require runtime upgrades, governance updates, coretime renewals, and even modifications to the underlying code. For an in-depth guide on this topic, see the Maintenance section

"},{"location":"develop/parachains/get-started/deploy-parachain-to-polkadot/#where-to-go-next","title":"Where to Go Next","text":"

Deploying a Polkadot SDK-based blockchain is a multi-step process that requires careful planning, from generating chain specs and compiling the runtime to managing post-launch updates. By understanding the deployment process and utilizing the right tools, developers can confidently take their blockchain from development to production. For more on this topic, check out the following resources:

  • Generate Chain Specifications - learn how to generate a chain specification for your blockchain
  • Building Deterministic Runtimes - learn how to build deterministic runtimes for your blockchain
  • Infrastructure - learn about the different infrastructure options available for your blockchain
  • Maintenance - discover how to manage updates on your blockchain to ensure smooth operation
"},{"location":"develop/parachains/get-started/install-polkadot-sdk/","title":"Install Polkadot SDK Dependencies","text":"

This guide provides step-by-step instructions for installing the dependencies you need to work with Polkadot SDK-based chains on macOS, Linux, and Windows. Follow the appropriate section for your operating system to ensure all necessary tools are installed and configured properly.

"},{"location":"develop/parachains/get-started/install-polkadot-sdk/#macos","title":"macOS","text":"

You can install Rust and set up a Substrate development environment on Apple macOS computers with Intel or Apple M1 processors.

"},{"location":"develop/parachains/get-started/install-polkadot-sdk/#before-you-begin","title":"Before You Begin","text":"

Before you install Rust and set up your development environment on macOS, verify that your computer meets the following basic requirements:

  • Operating system version is 10.7 Lion or later
  • Processor speed of at least 2 GHz. Note that 3 GHz is recommended
  • Memory of at least 8 GB RAM. Note that 16 GB is recommended
  • Storage of at least 10 GB of available space
  • Broadband Internet connection
"},{"location":"develop/parachains/get-started/install-polkadot-sdk/#install-homebrew","title":"Install Homebrew","text":"

In most cases, you should use Homebrew to install and manage packages on macOS computers. If you don't already have Homebrew installed on your local computer, you should download and install it before continuing.

To install Homebrew:

  1. Open the Terminal application

  2. Download and install Homebrew by running the following command:

    /bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)\"\n
  3. Verify Homebrew has been successfully installed by running the following command:

    brew --version\n

    The command displays output similar to the following:

    brew --version\nHomebrew 4.3.15\n

"},{"location":"develop/parachains/get-started/install-polkadot-sdk/#support-for-apple-silicon","title":"Support for Apple Silicon","text":"

Protobuf must be installed before the build process can begin. To install it, run the following command:

brew install protobuf\n
"},{"location":"develop/parachains/get-started/install-polkadot-sdk/#install-required-packages-and-rust","title":"Install Required Packages and Rust","text":"

Because the blockchain requires standard cryptography to support the generation of public/private key pairs and the validation of transaction signatures, you must also have a package that provides cryptography, such as openssl.

To install openssl and the Rust toolchain on macOS:

  1. Open the Terminal application

  2. Ensure you have an updated version of Homebrew by running the following command:

    brew update\n
  3. Install the openssl package by running the following command:

    brew install openssl\n
  4. Download the rustup installation program and use it to install Rust by running the following command:

    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh\n
  5. Follow the prompts displayed to proceed with a default installation

  6. Update your current shell to include Cargo by running the following command:

    source ~/.cargo/env\n
  7. Configure the Rust toolchain to default to the latest stable version by running the following commands:

    rustup default stable\nrustup update\nrustup target add wasm32-unknown-unknown\n
  8. Add the nightly release and the nightly Wasm targets to your development environment by running the following commands:

    rustup update nightly\nrustup target add wasm32-unknown-unknown --toolchain nightly\n
  9. Verify your installation

  10. Install cmake using the following command:

    brew install cmake\n
"},{"location":"develop/parachains/get-started/install-polkadot-sdk/#linux","title":"Linux","text":"

Rust supports most Linux distributions. Depending on the specific distribution and version of the operating system you use, you might need to add some software dependencies to your environment. In general, your development environment should include a linker or C-compatible compiler, such as clang, and an appropriate integrated development environment (IDE).

"},{"location":"develop/parachains/get-started/install-polkadot-sdk/#before-you-begin-linux","title":"Before You Begin","text":"

Check the documentation for your operating system for information about the installed packages and how to download and install any additional packages you might need. For example, if you use Ubuntu, you can use the Ubuntu Advanced Packaging Tool (apt) to install the build-essential package:

sudo apt install build-essential\n

At a minimum, you need the following packages before you install Rust:

clang curl git make\n

Because the blockchain requires standard cryptography to support the generation of public/private key pairs and the validation of transaction signatures, you must also have a package that provides cryptography, such as libssl-dev or openssl-devel.

"},{"location":"develop/parachains/get-started/install-polkadot-sdk/#install-required-packages-and-rust-linux","title":"Install Required Packages and Rust","text":"

To install the Rust toolchain on Linux:

  1. Open a terminal shell

  2. Check the packages you have installed on the local computer by running an appropriate package management command for your Linux distribution

  3. Add any package dependencies you are missing to your local development environment by running the appropriate package management command for your Linux distribution:

    Ubuntu:
    sudo apt install --assume-yes git clang curl libssl-dev protobuf-compiler\n
    Debian:
    sudo apt install --assume-yes git clang curl libssl-dev llvm libudev-dev make protobuf-compiler\n
    Arch:
    pacman -Syu --needed --noconfirm curl git clang make protobuf\n
    Fedora:
    sudo dnf update\nsudo dnf install clang curl git openssl-devel make protobuf-compiler\n
    OpenSUSE:
    sudo zypper install clang curl git openssl-devel llvm-devel libudev-devel make protobuf\n

    Remember that different distributions might use different package managers and bundle packages in different ways. For example, depending on your installation selections, Ubuntu Desktop and Ubuntu Server might have different packages and different requirements. However, the packages listed in the command-line examples are applicable for many common Linux distributions, including Debian, Linux Mint, MX Linux, and Elementary OS.

  4. Download the rustup installation program and use it to install Rust by running the following command:

    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh\n
  5. Follow the prompts displayed to proceed with a default installation

  6. Update your current shell to include Cargo by running the following command:

    source $HOME/.cargo/env\n
  7. Verify your installation by running the following command:

    rustc --version\n
  8. Configure the Rust toolchain to default to the latest stable version by running the following commands:

    rustup default stable\nrustup update\n
  9. Add the nightly release and the nightly Wasm targets to your development environment by running the following commands:

    rustup update nightly\nrustup target add wasm32-unknown-unknown --toolchain nightly\n
  10. Verify your installation

"},{"location":"develop/parachains/get-started/install-polkadot-sdk/#windows-wsl","title":"Windows (WSL)","text":"

In general, UNIX-based operating systems\u2014like macOS or Linux\u2014provide a better development environment for building Substrate-based blockchains.

However, if your local computer uses Microsoft Windows instead of a UNIX-based operating system, you can configure it with additional software to make it a suitable development environment for building Substrate-based blockchains. To prepare a development environment on a Microsoft Windows computer, you can use Windows Subsystem for Linux (WSL) to emulate a UNIX operating environment.

"},{"location":"develop/parachains/get-started/install-polkadot-sdk/#before-you-begin-windows","title":"Before You Begin","text":"

Before installing on Microsoft Windows, verify the following basic requirements:

  • You have a computer running a supported Microsoft Windows operating system:
    • For Windows desktop - you must be running Microsoft Windows 10, version 2004 or later, or Microsoft Windows 11 to install WSL
    • For Windows server - you must be running Microsoft Windows Server 2019, or later, to install WSL on a server operating system
  • You have a good internet connection and access to a shell terminal on your local computer
"},{"location":"develop/parachains/get-started/install-polkadot-sdk/#set-up-windows-subsystem-for-linux","title":"Set Up Windows Subsystem for Linux","text":"

WSL enables you to emulate a Linux environment on a computer that uses the Windows operating system. The primary advantage of this approach for Substrate development is that you can use all of the code and command-line examples as described in the Substrate documentation. For example, you can run common commands\u2014such as ls and ps\u2014unmodified. By using WSL, you can avoid configuring a virtual machine image or a dual-boot operating system.

To prepare a development environment using WSL:

  1. Check your Windows version and build number to see if WSL is enabled by default.

    If you have Microsoft Windows 10, version 2004 (Build 19041 and higher), or Microsoft Windows 11, WSL is available by default and you can continue to the next step.

    If you have an older version of Microsoft Windows installed, see the WSL manual installation steps for older versions. You can download and install WSL 2 if your computer has Windows 10, version 1903 or higher

  2. Select Windows PowerShell or Command Prompt from the Start menu, right-click, then Run as administrator

  3. In the PowerShell or Command Prompt terminal, run the following command:

    wsl --install\n

    This command enables the required WSL 2 components that are part of the Windows operating system, downloads the latest Linux kernel, and installs the Ubuntu Linux distribution by default.

    If you want to review the other Linux distributions available, run the following command:

    wsl --list --online\n
  4. After the distribution is downloaded, close the terminal

  5. Click the Start menu, select Shut down or sign out, then click Restart to restart the computer.

    Restarting the computer is required to start the installation of the Linux distribution. It can take a few minutes for the installation to complete after you restart.

    For more information about setting up WSL as a development environment, see the Set up a WSL development environment docs

"},{"location":"develop/parachains/get-started/install-polkadot-sdk/#install-required-packages-and-rust-windows","title":"Install Required Packages and Rust","text":"

To install the Rust toolchain on WSL:

  1. Click the Start menu, then select Ubuntu

  2. Type a UNIX user name to create a user account

  3. Type a password for your UNIX user, then retype the password to confirm it

  4. Download the latest updates for the Ubuntu distribution using the Ubuntu Advanced Packaging Tool (apt) by running the following command:

    sudo apt update\n
  5. Add the required packages for the Ubuntu distribution by running the following command:

    sudo apt install --assume-yes git clang curl libssl-dev llvm libudev-dev make protobuf-compiler\n
  6. Download the rustup installation program and use it to install Rust for the Ubuntu distribution by running the following command:

    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh\n
  7. Follow the prompts displayed to proceed with a default installation

  8. Update your current shell to include Cargo by running the following command:

    source ~/.cargo/env\n
  9. Verify your installation by running the following command:

    rustc --version\n
  10. Configure the Rust toolchain to use the latest stable version as the default toolchain by running the following commands:

    rustup default stable\nrustup update\n
  11. Add the nightly version of the toolchain and the nightly Wasm target to your development environment by running the following commands:

    rustup update nightly\nrustup target add wasm32-unknown-unknown --toolchain nightly\n
  12. Verify your installation

"},{"location":"develop/parachains/get-started/install-polkadot-sdk/#verifying-installation","title":"Verifying Installation","text":"

Verify the configuration of your development environment by running the following command:

rustup show\nrustup +nightly show\n

The command displays output similar to the following:

rustup show\n...\nactive toolchain\n----------------\nstable-x86_64-apple-darwin (default)\nrustc 1.81.0 (eeb90cda1 2024-09-04)\n...\nactive toolchain\n----------------\nnightly-x86_64-apple-darwin (overridden by +toolchain on the command line)\nrustc 1.83.0-nightly (6c6d21008 2024-09-22)\n"},{"location":"develop/parachains/get-started/intro-polkadot-sdk/","title":"Introduction to Polkadot SDK","text":""},{"location":"develop/parachains/get-started/intro-polkadot-sdk/#introduction","title":"Introduction","text":"

The Polkadot SDK is a powerful and versatile developer kit designed to facilitate building on the Polkadot network. It provides the necessary components for creating custom blockchains, parachains, generalized rollups, and more. Written in the Rust programming language, it puts security and robustness at the forefront of its design.

Whether you're building a standalone chain or deploying a parachain on Polkadot, this SDK equips developers with the libraries and tools needed to manage runtime logic, compile the codebase, and utilize core features like staking, governance, and Cross-Consensus Messaging (XCM). It also provides a means for building generalized peer-to-peer systems beyond blockchains. The Polkadot SDK houses the following overall functionality:

  • Networking and peer-to-peer communication (powered by Libp2p)
  • Consensus protocols, such as BABE, GRANDPA, or Aura
  • Cryptography
  • The ability to create portable Wasm runtimes
  • A selection of pre-built modules, called pallets
  • Benchmarking and testing suites

Note

For an in-depth dive into the monorepo, the Polkadot SDK Rust documentation is highly recommended.

"},{"location":"develop/parachains/get-started/intro-polkadot-sdk/#polkadot-sdk-overview","title":"Polkadot SDK Overview","text":"

The Polkadot SDK is composed of five major components:

  • Substrate - a set of libraries and primitives for building blockchains
  • FRAME - a blockchain development framework built on top of Substrate
  • Cumulus - a set of libraries and pallets to add parachain capabilities to a Substrate/FRAME runtime
  • XCM (Cross Consensus Messaging) - the primary format for conveying messages between parachains
  • Polkadot - the node implementation for the Polkadot protocol
"},{"location":"develop/parachains/get-started/intro-polkadot-sdk/#substrate","title":"Substrate","text":"

Substrate is a Software Development Kit (SDK) that uses Rust-based libraries and tools to enable you to build application-specific blockchains from modular and extensible components. Application-specific blockchains built with Substrate can run as standalone services or in parallel with other chains to take advantage of the shared security provided by the Polkadot ecosystem. Substrate includes default implementations of the core components of the blockchain infrastructure to allow you to focus on the application logic.

Every blockchain platform relies on a decentralized network of computers\u2014called nodes\u2014that communicate with each other about transactions and blocks. In general, a node in this context is the software running on the connected devices rather than the physical or virtual machine in the network. As software, Substrate-based nodes consist of two main parts with separate responsibilities:

  • Client - services to handle network and blockchain infrastructure activity
    • Native binary
    • Executes the Wasm runtime
    • Manages components like database, networking, mempool, consensus, and others
    • Also known as \"Host\"
  • Runtime - business logic for state transitions
    • Application logic
    • Compiled to Wasm
    • Stored as a part of the chain state
    • Also known as State Transition Function (STF)
"},{"location":"develop/parachains/get-started/intro-polkadot-sdk/#frame","title":"FRAME","text":"

FRAME provides the core modular and extensible components that make the Substrate SDK flexible and adaptable to different use cases. FRAME includes Rust-based libraries that simplify the development of application-specific logic. Most of the functionality that FRAME provides takes the form of plug-in modules called pallets that you can add and configure to suit your requirements.

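As a minimal sketch of this structure (abridged; a real pallet would add storage items, events, and calls), a pallet is declared with the FRAME macros and later composed into the runtime:

#[frame_support::pallet]\npub mod pallet {\n    use frame_support::pallet_prelude::*;\n\n    // The pallet type that the runtime composes via `construct_runtime!`.\n    #[pallet::pallet]\n    pub struct Pallet<T>(_);\n\n    // Configuration trait: the runtime supplies these associated types.\n    #[pallet::config]\n    pub trait Config: frame_system::Config {}\n}\n
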
"},{"location":"develop/parachains/get-started/intro-polkadot-sdk/#cumulus","title":"Cumulus","text":"

Cumulus provides utilities and libraries to turn FRAME-based runtimes into runtimes that can be a parachain on Polkadot. Cumulus runtimes are still FRAME runtimes but contain the necessary functionality that allows that runtime to become a parachain on a relay chain.

"},{"location":"develop/parachains/get-started/intro-polkadot-sdk/#why-use-polkadot-sdk","title":"Why Use Polkadot SDK?","text":"

Using the Polkadot SDK, you can build application-specific blockchains without the complexity of building a blockchain from scratch or the limitations of building on a general-purpose blockchain. You can focus on crafting the business logic that makes your chain unique and innovative with the additional benefits of flexibility, upgradeability, open-source licensing, and cross-consensus interoperability.

"},{"location":"develop/parachains/get-started/intro-polkadot-sdk/#create-a-custom-blockchain-using-the-sdk","title":"Create a Custom Blockchain Using the SDK","text":"

Before starting your blockchain development journey, you'll need to decide whether you want to build a standalone chain or a parachain that connects to the Polkadot network. Each path has its considerations and requirements. Once you've made this decision, follow these development stages:

graph LR\n    A[Install the Polkadot SDK] --> B[Build the Chain]\n    B --> C[Deploy the Chain]
  1. Install the Polkadot SDK - set up your development environment with all necessary dependencies and tools
  2. Build the chain - learn how to create and customize your blockchain's runtime, configure pallets, and implement your chain's unique features
  3. Deploy the chain - follow the steps to launch your blockchain, whether as a standalone network or as a parachain on Polkadot

Each stage is covered in detail in its respective guide, walking you through the process from initial setup to final deployment.

"},{"location":"develop/parachains/maintenance/","title":"Maintenance","text":"

Learn how to maintain Polkadot SDK-based networks, focusing on runtime monitoring, upgrades, and storage migrations for optimal performance. Proper maintenance ensures your blockchain remains secure, efficient, and adaptable to changing needs. These sections will guide you through monitoring your network, using runtime versioning, and performing forkless upgrades to keep your blockchain secure and up-to-date without downtime.

"},{"location":"develop/parachains/maintenance/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/parachains/maintenance/#additional-resources","title":"Additional ResourcesSingle Block Migration ExampleClient Telemetry Crate","text":"

Single Block Migration Example - check out an example pallet demonstrating best practices for writing single-block migrations while upgrading pallet storage.

Client Telemetry Crate - check out the docs on Substrate's client telemetry, a part of Substrate that allows ingesting telemetry data with, for example, Polkadot telemetry.

"},{"location":"develop/parachains/maintenance/runtime-metrics-monitoring/","title":"Runtime Metrics and Monitoring","text":""},{"location":"develop/parachains/maintenance/runtime-metrics-monitoring/#introduction","title":"Introduction","text":"

Maintaining a stable, secure, and efficient network requires continuous monitoring. Polkadot SDK-based nodes are equipped with built-in telemetry components that automatically collect and transmit detailed data about node performance in real-time. This telemetry system is a core feature of the Substrate framework, allowing for easy monitoring of network health without complex setup.

Substrate's client telemetry enables real-time data ingestion, which can be visualized on a client dashboard. The telemetry process uses tracing and logging to gather operational data. This data is sent through a tracing layer to a background task called the TelemetryWorker, which then forwards it to configured remote telemetry servers.

If multiple Substrate nodes run within the same process, the telemetry system uses a tracing::Span to distinguish data from each node. This ensures that each task, managed by the sc-service's TaskManager, inherits a span for data consistency, making it easy to track parallel node operations. Each node can be monitored for basic metrics, such as block height, peer connections, CPU usage, and memory. Substrate nodes expose these metrics at the host:9615/metrics endpoint, accessible locally by default. To expose metrics on all interfaces, start a node with the --prometheus-external flag.

As a developer or node operator, the telemetry system handles most of the technical setup. Collected data is automatically sent to a default telemetry server, where it\u2019s aggregated and displayed on a dashboard, making it easy to monitor network performance and identify issues.

"},{"location":"develop/parachains/maintenance/runtime-metrics-monitoring/#runtime-metrics","title":"Runtime Metrics","text":"

Substrate exposes a variety of metrics about the operation of your network, such as the number of peer connections, memory usage, and block production. To capture and visualize these metrics, you can configure and use tools like Prometheus and Grafana. At a high level, Substrate exposes telemetry data that can be consumed by the Prometheus endpoint and then presented as visual information in a Grafana dashboard or graph. The provided diagram offers a simplified overview of how the interaction between Substrate, Prometheus, and Grafana can be configured to display information about node operations.

graph TD\n  subNode([Substrate Node]) --> telemetryStream[Exposed Telemetry Stream]\n  telemetryStream --> prometheus[Prometheus]\n  prometheus --> endpoint[Endpoint: Every 1 minute]\n  endpoint --> grafana[Grafana]\n  grafana --> userOpen[User Opens a Graph]\n  prometheus --> localData[Local Prometheus Data]\n  localData --> getmetrics[Get Metrics]

The diagram shows the flow of data from the Substrate node to the monitoring and visualization components. The Substrate node exposes a telemetry stream, which is consumed by Prometheus. Prometheus is configured to collect data every minute and store it. Grafana is then used to visualize the data, allowing the user to open graphs and retrieve specific metrics from the telemetry stream.

"},{"location":"develop/parachains/maintenance/runtime-metrics-monitoring/#visual-monitoring","title":"Visual Monitoring","text":"

The Polkadot telemetry dashboard provides a real-time view of how currently online nodes are performing. The dashboard allows users to select the network they want to check on and choose the information to display by turning visible columns on and off from the list of available columns. The monitoring dashboard provides the following indicators and metrics:

  • Validator - identifies whether the node is a validator node or not
  • Location - displays the geographical location of the node
  • Implementation - shows the version of the software running on the node
  • Network ID - displays the public network identifier for the node
  • Peer count - indicates the number of peers connected to the node
  • Transactions in queue - shows the number of transactions waiting in the Ready queue for a block author
  • Upload bandwidth - graphs the node's recent upload activity in MB/s
  • Download bandwidth - graphs the node's recent download activity in MB/s
  • State cache size - graphs the size of the node's state cache in MB
  • Block - displays the current best block number to ensure synchronization with peers
  • Block hash - shows the block hash for the current best block number
  • Finalized block - displays the most recently finalized block number to ensure synchronization with peers
  • Finalized block hash - shows the block hash for the most recently finalized block
  • Block time - indicates the time between block executions
  • Block propagation time - displays the time it took to import the most recent block
  • Last block time - shows the time it took to author the most recent block
  • Node uptime - indicates the number of days the node has been online without restarting
"},{"location":"develop/parachains/maintenance/runtime-metrics-monitoring/#displaying-network-wide-statistics","title":"Displaying Network-Wide Statistics","text":"

In addition to the details available for individual nodes, you can view statistics that provide insights into the broader network. The network statistics provide detailed information about the hardware and software configurations of the nodes in the network, including:

  • Software version
  • Operating system
  • CPU architecture and model
  • Number of physical CPU cores
  • Total memory
  • Whether the node is a virtual machine
  • Linux distribution and kernel version
  • CPU and memory speed
  • Disk speed
"},{"location":"develop/parachains/maintenance/runtime-metrics-monitoring/#customizing-monitoring-tools","title":"Customizing Monitoring Tools","text":"

The default telemetry dashboard offers core metrics without additional setup. However, many projects prefer custom telemetry setups with more advanced monitoring and alerting policies.

Typically, setting up a custom telemetry solution involves establishing monitoring and alerting policies for both on-chain events and individual node operations. This allows for more tailored monitoring and reporting compared to the default telemetry setup.

"},{"location":"develop/parachains/maintenance/runtime-metrics-monitoring/#on-chain-activity","title":"On-Chain Activity","text":"

You can monitor specific on-chain events like transactions from certain addresses or changes in the validator set. Connecting to RPC nodes allows tracking for delays or specific event timings. Running your own RPC servers is recommended for reliable queries, as public RPC nodes may occasionally be unreliable.

"},{"location":"develop/parachains/maintenance/runtime-metrics-monitoring/#monitoring-tools","title":"Monitoring Tools","text":"

To implement customized monitoring and alerting, consider using the following stack:

  • Prometheus - collects metrics at intervals, stores data in a time series database, and applies rules for evaluation
  • Grafana - visualizes collected data through customizable dashboards
  • Node exporter - reports host metrics, including CPU, memory, and bandwidth usage
  • Alert manager - manages alerts, routing them based on defined rules
  • Loki - scalable log aggregator for searching and viewing logs across infrastructure
"},{"location":"develop/parachains/maintenance/runtime-metrics-monitoring/#change-the-telemetry-server","title":"Change the Telemetry Server","text":"

Once backend monitoring is configured, use the --telemetry-url flag when starting a node to specify telemetry endpoints and verbosity levels. Multiple telemetry URLs can be provided, and verbosity ranges from 0 (least verbose) to 9 (most verbose).

For instance, setting a custom telemetry server with verbosity level 5 would look like:

./target/release/node-template --dev \\\n  --telemetry-url \"wss://192.168.48.1:9616 5\" \\\n  --prometheus-port 9616 \\\n  --prometheus-external\n

For more information on the backend components for telemetry or configuring your own server, you can refer to the substrate-telemetry project or the Substrate Telemetry Helm Chart for Kubernetes deployments.

"},{"location":"develop/parachains/maintenance/runtime-upgrades/","title":"Runtime Upgrades","text":""},{"location":"develop/parachains/maintenance/runtime-upgrades/#introduction","title":"Introduction","text":"

One of the defining features of Polkadot SDK-based blockchains is the ability to perform forkless runtime upgrades. Unlike traditional blockchains, which require hard forks and node coordination for upgrades, Polkadot networks enable seamless updates without network disruption.

Forkless upgrades are achieved through WebAssembly (Wasm) runtimes stored on-chain, which can be securely swapped and upgraded as part of the blockchain's state. By leveraging decentralized consensus, runtime updates can happen trustlessly, ensuring continuous improvement and evolution without halting operations.

This guide explains how Polkadot's runtime versioning, Wasm deployment, and storage migrations enable these upgrades, ensuring the blockchain evolves smoothly and securely. You'll also learn how different upgrade processes apply to solo chains and parachains, depending on the network setup.

"},{"location":"develop/parachains/maintenance/runtime-upgrades/#how-runtime-upgrades-work","title":"How Runtime Upgrades Work","text":"

In FRAME, the system pallet uses the set_code extrinsic to update the Wasm code for the runtime. This method allows solo chains to upgrade without disruption.

For parachains, upgrades are more complex. Parachains must first call authorize_upgrade, followed by apply_authorized_upgrade, to ensure the relay chain approves and applies the changes. Additionally, changes to current functionality that impact storage often require a storage migration.

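For reference, the relevant frame_system calls have roughly the following shape (an abridged sketch; bodies omitted, and exact signatures may differ between SDK releases):

// Solo chains: replace the on-chain Wasm blob in a single step.\npub fn set_code(origin: OriginFor<T>, code: Vec<u8>) -> DispatchResultWithPostInfo;\n\n// Parachains: authorize the hash of the new code first...\npub fn authorize_upgrade(origin: OriginFor<T>, code_hash: T::Hash) -> DispatchResult;\n\n// ...then supply the pre-authorized blob so the upgrade can be applied.\npub fn apply_authorized_upgrade(origin: OriginFor<T>, code: Vec<u8>) -> DispatchResultWithPostInfo;\n
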
"},{"location":"develop/parachains/maintenance/runtime-upgrades/#runtime-versioning","title":"Runtime Versioning","text":"

The executor is the component that selects the runtime execution environment to communicate with. Although you can override the default execution strategies for custom scenarios, in most cases, the executor selects the appropriate binary to use by evaluating and comparing key parameters from the native and Wasm runtime binaries.

The runtime includes a runtime version struct to provide the needed parameter information to the executor process. A sample runtime version struct might look as follows:

pub const VERSION: RuntimeVersion = RuntimeVersion {\n    spec_name: create_runtime_str!(\"node-template\"),\n    impl_name: create_runtime_str!(\"node-template\"),\n    authoring_version: 1,\n    spec_version: 1,\n    impl_version: 1,\n    apis: RUNTIME_API_VERSIONS,\n    transaction_version: 1,\n};\n

The struct provides the following parameter information to the executor:

  • spec_name - the identifier for the different runtimes
  • impl_name - the name of the implementation of the spec. Serves only to differentiate code of different implementation teams
  • authoring_version - the version of the authorship interface. An authoring node won't attempt to author blocks unless this is equal to its native runtime
  • spec_version - the version of the runtime specification. A full node won't attempt to use its native runtime in substitute for the on-chain Wasm runtime unless the spec_name, spec_version, and authoring_version are all the same between the Wasm and native binaries. Updates to the spec_version can be automated as a CI process, as is done for the Polkadot network. This parameter is typically incremented when there's an update to the transaction_version
  • impl_version - the version of the implementation of the specification. Nodes can ignore this. It is only used to indicate that the code is different. As long as the authoring_version and the spec_version are the same, the code might have changed, but the native and Wasm binaries do the same thing. In general, only non-logic-breaking optimizations would result in a change of the impl_version
  • transaction_version - the version of the interface for handling transactions. This parameter can be useful to synchronize firmware updates for hardware wallets or other signing devices to verify that runtime transactions are valid and safe to sign. This number must be incremented if there is a change in the index of the pallets in the construct_runtime! macro or if there are any changes to dispatchable functions, such as the number of parameters or parameter types. If transaction_version is updated, then the spec_version must also be updated (see the example after this list)
  • apis - a list of supported runtime APIs along with their versions

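For example, if a change to a dispatchable's parameters forces a transaction_version bump, the spec_version must be bumped with it. A sketch of the resulting struct, building on the example above:

pub const VERSION: RuntimeVersion = RuntimeVersion {\n    spec_name: create_runtime_str!(\"node-template\"),\n    impl_name: create_runtime_str!(\"node-template\"),\n    authoring_version: 1,\n    spec_version: 2,        // bumped: the runtime logic changed\n    impl_version: 1,\n    apis: RUNTIME_API_VERSIONS,\n    transaction_version: 2, // bumped: a dispatchable's parameters changed\n};\n
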
The executor follows the same consensus-driven logic for both the native runtime and the Wasm runtime before deciding which to execute. Because runtime versioning is a manual process, there is a risk that the executor could make incorrect decisions if the runtime version is misrepresented or incorrectly defined.

"},{"location":"develop/parachains/maintenance/runtime-upgrades/#accessing-the-runtime-version","title":"Accessing the Runtime Version","text":"

The runtime version can be accessed through the state.getRuntimeVersion RPC endpoint, which accepts an optional block identifier. It can also be accessed through the runtime metadata to understand the APIs the runtime exposes and how to interact with them.

The runtime metadata should only change when the chain's runtime spec_version changes.

"},{"location":"develop/parachains/maintenance/runtime-upgrades/#storage-migrations","title":"Storage Migrations","text":"

Storage migrations are custom, one-time functions that allow you to update storage to adapt to changes in the runtime.

For example, if a runtime upgrade changes the data type used to represent user balances from an unsigned integer to a signed integer, the storage migration would read the existing value as an unsigned integer and write back an updated value that has been converted to a signed integer.

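A minimal sketch of that conversion, assuming hypothetical OldBalance/NewBalance storage items (not real pallet names):

// Hypothetical migration body: read the old unsigned value and write it\n// back as a signed value. `OldBalance`/`NewBalance` are illustrative names.\nfn migrate_balance_sign<T: frame_system::Config>() -> Weight {\n    if let Some(old) = OldBalance::<T>::take() {\n        // Widen u32 -> i64 so every existing unsigned value stays representable.\n        NewBalance::<T>::put(old as i64);\n        T::DbWeight::get().reads_writes(1, 2)\n    } else {\n        T::DbWeight::get().reads(1)\n    }\n}\n
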
If you don't update how data is stored when the format changes, the runtime can't properly interpret the storage values to include in the runtime state, which is likely to lead to undefined behavior.

"},{"location":"develop/parachains/maintenance/runtime-upgrades/#storage-migrations-with-frame","title":"Storage Migrations with FRAME","text":"

FRAME storage migrations are implemented using the OnRuntimeUpgrade trait. The OnRuntimeUpgrade trait specifies a single function, on_runtime_upgrade, that allows you to specify logic to run immediately after a runtime upgrade but before any on_initialize functions or transactions are executed.

For further details about this process, see the Storage Migrations page.

"},{"location":"develop/parachains/maintenance/runtime-upgrades/#ordering-migrations","title":"Ordering Migrations","text":"

By default, FRAME orders the execution of on_runtime_upgrade functions based on the order in which the pallets appear in the construct_runtime! macro. During an upgrade, the functions run in reverse order, starting with the last pallet in the macro. You can impose a custom order if needed.

FRAME storage migrations run in this order:

  1. Custom on_runtime_upgrade functions if using a custom order
  2. System frame_system::on_runtime_upgrade functions
  3. All on_runtime_upgrade functions defined in the runtime starting with the last pallet in the construct_runtime! macro
"},{"location":"develop/parachains/maintenance/storage-migrations/","title":"Storage Migrations","text":""},{"location":"develop/parachains/maintenance/storage-migrations/#introduction","title":"Introduction","text":"

Storage migrations are a crucial part of the runtime upgrade process. They allow you to update the storage items of your blockchain, adapting to changes in the runtime. Whenever you change the encoding or data types used to represent data in storage, you'll need to provide a storage migration to ensure the runtime can correctly interpret the existing stored values in the new runtime state.

Storage migrations must be executed precisely during the runtime upgrade process to ensure data consistency and prevent runtime panics. The migration code needs to run as follows:

  • After the new runtime is deployed
  • Before any other code from the new runtime executes
  • Before any on_initialize hooks run
  • Before any transactions are processed

This timing is critical because the new runtime expects data to be in the updated format. Any attempt to decode the old data format without proper migration could result in runtime panics or undefined behavior.

"},{"location":"develop/parachains/maintenance/storage-migrations/#storage-migration-scenarios","title":"Storage Migration Scenarios","text":"

A storage migration is necessary whenever a runtime upgrade changes the storage layout or the encoding/interpretation of existing data. Even if the underlying data type appears to still \"fit\" the new storage representation, a migration may be required if the interpretation of the stored values has changed.

Storage migrations ensure data consistency and prevent corruption during runtime upgrades. Below are common scenarios categorized by their impact on storage and migration requirements:

  • Migration required:

    • Reordering or mutating fields of an existing data type to change the encoded/decoded data representation
    • Removal of a pallet or storage item warrants cleaning up storage via a migration to avoid state bloat
  • Migration not required:

    • Adding a new storage item would not require any migration since no existing data needs transformation
    • Adding or removing an extrinsic introduces no new interpretation of preexisting data, so no migration is required

The following are some common scenarios where a storage migration is needed:

  • Changing data types - changing the underlying data type requires a migration to convert the existing values

    #[pallet::storage]\npub type FooValue = StorageValue<_, Foo>;\n// old\npub struct Foo(u32);\n// new\npub struct Foo(u64);\n
  • Changing data representation - modifying the representation of the stored data, even if the size appears unchanged, requires a migration to ensure the runtime can correctly interpret the existing values

    #[pallet::storage]\npub type FooValue = StorageValue<_, Foo>;\n// old\npub struct Foo(u32);\n// new\npub struct Foo(i32);\n// or\npub struct Foo(u16, u16);\n
  • Extending an enum - adding new variants to an enum requires a migration if you reorder existing variants, insert new variants between existing ones, or change the data type of existing variants. No migration is required when adding new variants at the end of the enum

    #[pallet::storage]\npub type FooValue = StorageValue<_, Foo>;\n// old\npub enum Foo { A(u32), B(u32) }\n// new (New variant added at the end. No migration required)\npub enum Foo { A(u32), B(u32), C(u128) }\n// new (Reordered variants. Requires migration)\npub enum Foo { A(u32), C(u128), B(u32) }\n
  • Changing the storage key - modifying the storage key, even if the underlying data type remains the same, requires a migration to ensure the runtime can locate the correct stored values (see the sketch after this list).

    #[pallet::storage]\npub type FooValue = StorageValue<_, u32>;\n// new\n#[pallet::storage]\npub type BarValue = StorageValue<_, u32>;\n

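A minimal sketch of such a key move, assuming a storage_alias declared for the old FooValue key (illustrative, not a complete migration):

// Move the value stored under the old `FooValue` key to the new `BarValue` key.\n// `old::FooValue` stands for a `#[storage_alias]` pointing at the previous key\n// (hypothetical module name).\nif let Some(value) = old::FooValue::take() {\n    BarValue::put(value);\n}\n
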
Warning

In general, any change to the storage layout or data encoding used in your runtime requires careful consideration of the need for a storage migration. Overlooking a necessary migration can lead to undefined behavior or data loss during a runtime upgrade.

"},{"location":"develop/parachains/maintenance/storage-migrations/#implement-storage-migrations","title":"Implement Storage Migrations","text":"

The OnRuntimeUpgrade trait provides the foundation for implementing storage migrations in your runtime. Here's a detailed look at its essential functions:

pub trait OnRuntimeUpgrade {\n    fn on_runtime_upgrade() -> Weight { ... }\n    fn try_on_runtime_upgrade(checks: bool) -> Result<Weight, TryRuntimeError> { ... }\n    fn pre_upgrade() -> Result<Vec<u8>, TryRuntimeError> { ... }\n    fn post_upgrade(_state: Vec<u8>) -> Result<(), TryRuntimeError> { ... }\n}\n
"},{"location":"develop/parachains/maintenance/storage-migrations/#core-migration-function","title":"Core Migration Function","text":"

The on_runtime_upgrade function executes when the FRAME Executive pallet detects a runtime upgrade. Important considerations when using this function include:

  • It runs before any pallet's on_initialize hooks
  • Critical storage items (like block_number) may not be set
  • Execution is mandatory and must be completed
  • Careful weight calculation is required to prevent bricking the chain

When implementing the migration logic, your code must handle several vital responsibilities. A migration implementation must do the following to operate correctly (see the sketch after this list):

  • Read existing storage values in their original format
  • Transform data to match the new format
  • Write updated values back to storage
  • Calculate and return consumed weight
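
Mapped onto code, those responsibilities look roughly like the following minimal sketch (OldStorage, NewStorage, and transform are hypothetical names):

use frame_support::{traits::OnRuntimeUpgrade, weights::Weight};\n\npub struct MyMigration<T>(core::marker::PhantomData<T>);\n\nimpl<T: frame_system::Config> OnRuntimeUpgrade for MyMigration<T> {\n    fn on_runtime_upgrade() -> Weight {\n        // 1. Read existing storage values in their original format.\n        if let Some(old) = OldStorage::<T>::take() {\n            // 2. Transform data to match the new format.\n            let new = transform(old);\n            // 3. Write updated values back to storage.\n            NewStorage::<T>::put(new);\n            // 4. Calculate and return consumed weight.\n            T::DbWeight::get().reads_writes(1, 2)\n        } else {\n            T::DbWeight::get().reads(1)\n        }\n    }\n}\n
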
"},{"location":"develop/parachains/maintenance/storage-migrations/#migration-testing-hooks","title":"Migration Testing Hooks","text":"

The OnRuntimeUpgrade trait provides some functions designed specifically for testing migrations. These functions never execute on-chain but are essential for validating migration behavior in test environments. The migration test hooks are as follows:

  • try_on_runtime_upgrade - this function serves as the primary orchestrator for testing the complete migration process. It coordinates the execution flow from pre-upgrade checks through the actual migration to post-upgrade verification. Handling the entire migration sequence ensures that storage modifications occur correctly and in the proper order. Preserving this sequence is particularly valuable when testing multiple dependent migrations, where the execution order matters

  • pre_upgrade - before a runtime upgrade begins, the pre_upgrade function performs preliminary checks and captures the current state. It returns encoded state data that can be used for post-upgrade verification. This function must never modify storage - it should only read and verify the existing state. The data it returns includes critical state values that should remain consistent or transform predictably during migration

  • post_upgrade - after the migration completes, post_upgrade validates its success. It receives the state data captured by pre_upgrade to verify that the migration was executed correctly. This function checks for storage consistency and ensures all data transformations are completed as expected. Like pre_upgrade, it operates exclusively in testing environments and should not modify storage

"},{"location":"develop/parachains/maintenance/storage-migrations/#migration-structure","title":"Migration Structure","text":"

There are two approaches to implementing storage migrations. The first method involves directly implementing OnRuntimeUpgrade on structs. This approach requires manually checking the on-chain storage version against the new StorageVersion and executing the transformation logic only when the check passes. This version verification prevents multiple executions of the migration during subsequent runtime upgrades.

The recommended approach is to implement UncheckedOnRuntimeUpgrade and wrap it with VersionedMigration. VersionedMigration implements OnRuntimeUpgrade and handles storage version management automatically, following best practices and reducing potential errors.

VersionedMigration requires five type parameters:

  • From - the source version for the upgrade
  • To - the target version for the upgrade
  • Inner - the UncheckedOnRuntimeUpgrade implementation
  • Pallet - the pallet being upgraded
  • Weight - the runtime's RuntimeDbWeight implementation

Examine the following migration example that transforms a simple StorageValue storing a u32 into a more complex structure that tracks both current and previous values using the CurrentAndPreviousValue struct:

  • Old StorageValue format:

    #[pallet::storage]\npub type Value<T: Config> = StorageValue<_, u32>;\n

  • New StorageValue format:

    /// Example struct holding the most recently set [`u32`] and the\n/// second most recently set [`u32`] (if one existed).\n#[docify::export]\n#[derive(\n    Clone, Eq, PartialEq, Encode, Decode, RuntimeDebug, scale_info::TypeInfo, MaxEncodedLen,\n)]\npub struct CurrentAndPreviousValue {\n    /// The most recently set value.\n    pub current: u32,\n    /// The previous value, if one existed.\n    pub previous: Option<u32>,\n}\n\n#[pallet::storage]\npub type Value<T: Config> = StorageValue<_, CurrentAndPreviousValue>;\n

  • Migration:

    use frame_support::{\n    storage_alias,\n    traits::{Get, UncheckedOnRuntimeUpgrade},\n};\n\n#[cfg(feature = \"try-runtime\")]\nuse alloc::vec::Vec;\n\n/// Collection of storage item formats from the previous storage version.\n///\n/// Required so we can read values in the v0 storage format during the migration.\nmod v0 {\n    use super::*;\n\n    /// V0 type for [`crate::Value`].\n    #[storage_alias]\n    pub type Value<T: crate::Config> = StorageValue<crate::Pallet<T>, u32>;\n}\n\n/// Implements [`UncheckedOnRuntimeUpgrade`], migrating the state of this pallet from V0 to V1.\n///\n/// In V0 of the template [`crate::Value`] is just a `u32`. In V1, it has been upgraded to\n/// contain the struct [`crate::CurrentAndPreviousValue`].\n///\n/// In this migration, update the on-chain storage for the pallet to reflect the new storage\n/// layout.\npub struct InnerMigrateV0ToV1<T: crate::Config>(core::marker::PhantomData<T>);\n\nimpl<T: crate::Config> UncheckedOnRuntimeUpgrade for InnerMigrateV0ToV1<T> {\n    /// Return the existing [`crate::Value`] so we can check that it was correctly set in\n    /// `InnerMigrateV0ToV1::post_upgrade`.\n    #[cfg(feature = \"try-runtime\")]\n    fn pre_upgrade() -> Result<Vec<u8>, sp_runtime::TryRuntimeError> {\n        use codec::Encode;\n\n        // Access the old value using the `storage_alias` type\n        let old_value = v0::Value::<T>::get();\n        // Return it as an encoded `Vec<u8>`\n        Ok(old_value.encode())\n    }\n\n    /// Migrate the storage from V0 to V1.\n    ///\n    /// - If the value doesn't exist, there is nothing to do.\n    /// - If the value exists, it is read and then written back to storage inside a\n    /// [`crate::CurrentAndPreviousValue`].\n    fn on_runtime_upgrade() -> frame_support::weights::Weight {\n        // Read the old value from storage\n        if let Some(old_value) = v0::Value::<T>::take() {\n            // Write the new value to storage\n            let new = crate::CurrentAndPreviousValue { current: old_value, previous: None };\n            crate::Value::<T>::put(new);\n            // One read + write for taking the old value, and one write for setting the new value\n            T::DbWeight::get().reads_writes(1, 2)\n        } else {\n            // No writes since there was no old value, just one read for checking\n            T::DbWeight::get().reads(1)\n        }\n    }\n\n    /// Verifies the storage was migrated correctly.\n    ///\n    /// - If there was no old value, the new value should not be set.\n    /// - If there was an old value, the new value should be a [`crate::CurrentAndPreviousValue`].\n    #[cfg(feature = \"try-runtime\")]\n    fn post_upgrade(state: Vec<u8>) -> Result<(), sp_runtime::TryRuntimeError> {\n        use codec::Decode;\n        use frame_support::ensure;\n\n        let maybe_old_value = Option::<u32>::decode(&mut &state[..]).map_err(|_| {\n            sp_runtime::TryRuntimeError::Other(\"Failed to decode old value from storage\")\n        })?;\n\n        match maybe_old_value {\n            Some(old_value) => {\n                let expected_new_value =\n                    crate::CurrentAndPreviousValue { current: old_value, previous: None };\n                let actual_new_value = crate::Value::<T>::get();\n\n                ensure!(actual_new_value.is_some(), \"New value not set\");\n                ensure!(\n                    actual_new_value == Some(expected_new_value),\n                    \"New value not set correctly\"\n                );\n            },\n       
     None => {\n                ensure!(crate::Value::<T>::get().is_none(), \"New value unexpectedly set\");\n            },\n        };\n        Ok(())\n    }\n}\n\n/// [`UncheckedOnRuntimeUpgrade`] implementation [`InnerMigrateV0ToV1`] wrapped in a\n/// [`VersionedMigration`](frame_support::migrations::VersionedMigration), which ensures that:\n/// - The migration only runs once when the on-chain storage version is 0\n/// - The on-chain storage version is updated to `1` after the migration executes\n/// - Reads/Writes from checking/settings the on-chain storage version are accounted for\npub type MigrateV0ToV1<T> = frame_support::migrations::VersionedMigration<\n    0, // The migration will only execute when the on-chain storage version is 0\n    1, // The on-chain storage version will be set to 1 after the migration is complete\n    InnerMigrateV0ToV1<T>,\n    crate::pallet::Pallet<T>,\n    <T as frame_system::Config>::DbWeight,\n>;\n

"},{"location":"develop/parachains/maintenance/storage-migrations/#migration-organization","title":"Migration Organization","text":"

Best practices recommend organizing migrations in a separate module within your pallet. Here's the recommended file structure:

my-pallet/\n\u251c\u2500\u2500 src/\n\u2502   \u251c\u2500\u2500 lib.rs       # Main pallet implementation\n\u2502   \u2514\u2500\u2500 migrations/  # All migration-related code\n\u2502       \u251c\u2500\u2500 mod.rs   # Migrations module definition\n\u2502       \u251c\u2500\u2500 v1.rs    # V0 -> V1 migration\n\u2502       \u2514\u2500\u2500 v2.rs    # V1 -> V2 migration\n\u2514\u2500\u2500 Cargo.toml\n

This structure provides several benefits (a sketch of the corresponding mod.rs follows the list):

  • Separates migration logic from core pallet functionality
  • Makes migrations easier to test and maintain
  • Provides explicit versioning of storage changes
  • Simplifies the addition of future migrations
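
A matching migrations/mod.rs would then simply declare the versioned submodules (illustrative):

// my-pallet/src/migrations/mod.rs\npub mod v1;\npub mod v2;\n
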
"},{"location":"develop/parachains/maintenance/storage-migrations/#scheduling-migrations","title":"Scheduling Migrations","text":"

To execute migrations during a runtime upgrade, you must configure them in your runtime's Executive pallet. Add your migrations in runtime/src/lib.rs:

/// Tuple of migrations (structs that implement `OnRuntimeUpgrade`)\ntype Migrations = (\n    pallet_my_pallet::migrations::v1::Migration,\n    // More migrations can be added here\n);\npub type Executive = frame_executive::Executive<\n    Runtime,\n    Block,\n    frame_system::ChainContext<Runtime>,\n    Runtime,\n    AllPalletsWithSystem,\n    Migrations, // Include migrations here\n>;\n
"},{"location":"develop/parachains/maintenance/storage-migrations/#single-block-migrations","title":"Single-Block Migrations","text":"

Single-block migrations execute their logic within one block immediately following a runtime upgrade. They run as part of the runtime upgrade process through the OnRuntimeUpgrade trait implementation and must be completed before any other runtime logic executes.

While single-block migrations are straightforward to implement and provide immediate data transformation, they carry significant risks. The most critical consideration is that they must complete within one block's weight limits. This is especially crucial for parachains, where exceeding block weight limits will brick the chain.

Use single-block migrations only when you can guarantee:

  • The migration has a bounded execution time
  • Weight calculations are thoroughly tested
  • Total weight will never exceed block limits

For a complete implementation example of a single-block migration, refer to the single-block migration example in the Polkadot SDK documentation.

"},{"location":"develop/parachains/maintenance/storage-migrations/#multi-block-migrations","title":"Multi Block Migrations","text":"

Multi-block migrations distribute the migration workload across multiple blocks, providing a safer approach for production environments. The migration state is tracked in storage, allowing the process to pause and resume across blocks.

This approach is essential for production networks and parachains as the risk of exceeding block weight limits is eliminated. Multi-block migrations can safely handle large storage collections, unbounded data structures, and complex nested data types where weight consumption might be unpredictable.

Multi-block migrations are ideal when dealing with:

  • Large-scale storage migrations
  • Unbounded storage items or collections
  • Complex data structures with uncertain weight costs

The primary trade-off is increased implementation complexity, as you must manage the migration state and handle partial completion scenarios. However, multi-block migrations' significant safety benefits and operational reliability are typically worth the increased complexity.

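For reference, frame_support provides a SteppedMigration trait for writing such migrations; abridged, its shape is roughly as follows (consult the frame_support::migrations docs for the exact API):

// Abridged sketch of `frame_support::migrations::SteppedMigration`.\npub trait SteppedMigration {\n    // Cursor persisted between blocks so the migration can pause and resume.\n    type Cursor;\n    // Unique identifier used to track this migration.\n    type Identifier;\n\n    fn id() -> Self::Identifier;\n\n    // Perform one bounded unit of work, drawing from the weight meter and\n    // returning the cursor to continue from (or `None` when finished).\n    fn step(\n        cursor: Option<Self::Cursor>,\n        meter: &mut WeightMeter,\n    ) -> Result<Option<Self::Cursor>, SteppedMigrationError>;\n}\n
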
For a complete implementation example of multi-block migrations, refer to the official example in the Polkadot SDK.

"},{"location":"develop/parachains/testing/","title":"Testing Your Polkadot SDK-Based Blockchain","text":"

Explore comprehensive testing strategies for Polkadot SDK-based blockchains, from setting up test environments to verifying runtime and pallet interactions. Testing is essential for building confidence that your network will behave as intended upon deployment.

Through these guides, you'll learn to:

  • Create effective test environments
  • Validate pallet interactions
  • Simulate blockchain conditions
  • Verify runtime behavior
"},{"location":"develop/parachains/testing/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/parachains/testing/#additional-resources","title":"Additional Resources`sp_runtime` crate Rust docsMoonwall Testing Framework","text":"

`sp_runtime` crate Rust docs - learn about Substrate Runtime primitives that enable communication between a Substrate blockchain's runtime and client.

Moonwall Testing Framework - a comprehensive blockchain test framework for Substrate-based networks.

"},{"location":"develop/parachains/testing/runtime/","title":"Runtime Testing","text":""},{"location":"develop/parachains/testing/runtime/#introduction","title":"Introduction","text":"

In the Polkadot SDK, it's important to test individual pallets in isolation and how they interact within the runtime. Once unit tests for specific pallets are complete, the next step is integration testing to verify that multiple pallets work together correctly within the blockchain system. This testing ensures that the entire runtime functions as expected under real-world conditions.

This article extends the Testing Setup guide by illustrating how to test interactions between different pallets within the same runtime.

"},{"location":"develop/parachains/testing/runtime/#testing-pallets-interactions","title":"Testing Pallets Interactions","text":"

Once the test environment is ready, you can write tests to simulate interactions between multiple pallets in the runtime. Below is an example of how to test the interaction between two generic pallets, referred to here as pallet_a and pallet_b. In this scenario, assume that pallet_b depends on pallet_a. The configuration of pallet_b is as follows:

use pallet_a::Config as PalletAConfig;\n\n...\n\n#[pallet::config]\npub trait Config: frame_system::Config + PalletAConfig {\n    type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;\n    type WeightInfo: WeightInfo;\n}\n

And also, pallet_b exposes a call that interacts with pallet_a:

#[pallet::call]\nimpl<T: Config> Pallet<T> {\n    #[pallet::call_index(0)]\n    #[pallet::weight(<T as pallet_b::Config>::WeightInfo::dummy_weight())]\n    pub fn dummy_call_against_pallet_a(_origin: OriginFor<T>, number: u32) -> DispatchResult {\n        pallet_a::DummyCounter::<T>::put(number);\n        Self::deposit_event(Event::Dummy);\n        Ok(())\n    }\n}\n

In this first test, a call to pallet_a is simulated, and the internal state is checked to ensure it updates correctly. The block number is also checked to ensure it advances as expected:

#[test]\nfn testing_runtime_with_pallet_a() {\n    new_test_ext().execute_with(|| {\n        // Block 0: Verify runtime initialization\n        assert_eq!(frame_system::Pallet::<Runtime>::block_number(), 0);\n\n        // Check the initial state of pallet_a\n        assert_eq!(0, pallet_a::Pallet::<Runtime>::get_dummy_counter());\n\n        // Simulate calling a function from pallet_a and assert it succeeds\n        let dummy_origin = RuntimeOrigin::none();\n        assert_ok!(pallet_a::Pallet::<Runtime>::dummy_call(dummy_origin, 2));\n\n        // Verify that pallet_a's state has been updated\n        assert_eq!(2, pallet_a::Pallet::<Runtime>::get_dummy_counter());\n\n        // Move to the next block\n        frame_system::Pallet::<Runtime>::set_block_number(1);\n\n        // Confirm the block number has advanced\n        assert_eq!(frame_system::Pallet::<Runtime>::block_number(), 1);\n    });\n}\n

Next, a test can be written to verify the interaction between pallet_a and pallet_b:

#[test]\nfn testing_runtime_with_pallet_b() {\n    new_test_ext().execute_with(|| {\n        // Block 0: Check if initialized correctly\n        assert_eq!(frame_system::Pallet::<Runtime>::block_number(), 0);\n\n        // Ensure that pallet_a is initialized correctly\n        assert_eq!(0, pallet_a::Pallet::<Runtime>::get_dummy_counter());\n\n        // Use pallet_b to call a function that interacts with pallet_a\n        let dummy_origin = RuntimeOrigin::none();\n        assert_ok!(pallet_b::Pallet::<Runtime>::dummy_call_against_pallet_a(dummy_origin, 4));\n\n        // Confirm that pallet_a's state was updated by pallet_b\n        assert_eq!(4, pallet_a::Pallet::<Runtime>::get_dummy_counter());\n\n        // Transition to block 1.\n        frame_system::Pallet::<Runtime>::set_block_number(1);\n\n        // Confirm the block number has advanced\n        assert_eq!(frame_system::Pallet::<Runtime>::block_number(), 1);\n    });\n}\n

This test demonstrates how pallet_b can trigger a change in pallet_a's state, verifying that the pallets interact properly during runtime.

For more information about testing more specific elements like storage, errors, and events, see the Pallet Testing article.

Integration Test - Complete Code

The complete code for the integration test is shown below:

pub mod integration_testing {\n    use crate::*;\n    use sp_runtime::BuildStorage;\n    use frame_support::assert_ok;\n\n    // Build genesis storage according to the runtime's configuration.\n    pub fn new_test_ext() -> sp_io::TestExternalities {\n        frame_system::GenesisConfig::<Runtime>::default().build_storage().unwrap().into()\n    }\n\n    #[test]\n    fn testing_runtime_with_pallet_a() {\n        new_test_ext().execute_with(|| {\n            // Block 0: Check if initialized correctly\n            assert_eq!(frame_system::Pallet::<Runtime>::block_number(), 0);\n\n            assert_eq!(0, pallet_a::Pallet::<Runtime>::get_dummy_counter());\n\n            let dummy_origin = RuntimeOrigin::none();\n            assert_ok!(pallet_a::Pallet::<Runtime>::dummy_call(dummy_origin, 2));\n\n            assert_eq!(2, pallet_a::Pallet::<Runtime>::get_dummy_counter());\n\n            // Transition to block 1.\n            frame_system::Pallet::<Runtime>::set_block_number(1);\n\n            // Check if block number is now 1.\n            assert_eq!(frame_system::Pallet::<Runtime>::block_number(), 1);\n        });\n    }\n\n    #[test]\n    fn testing_runtime_with_pallet_b() {\n        new_test_ext().execute_with(|| {\n            // Block 0: Check if initialized correctly\n            assert_eq!(frame_system::Pallet::<Runtime>::block_number(), 0);\n\n            assert_eq!(0, pallet_a::Pallet::<Runtime>::get_dummy_counter());\n            let dummy_origin = RuntimeOrigin::none();\n            assert_ok!(pallet_b::Pallet::<Runtime>::dummy_call_against_pallet_a(dummy_origin, 4));\n            assert_eq!(4, pallet_a::Pallet::<Runtime>::get_dummy_counter());\n\n            // Transition to block 1.\n            frame_system::Pallet::<Runtime>::set_block_number(1);\n\n            // Check if block number is now 1.\n            assert_eq!(frame_system::Pallet::<Runtime>::block_number(), 1);\n        });\n    }\n}\n
"},{"location":"develop/parachains/testing/runtime/#verifying-pallet-interactions","title":"Verifying Pallet Interactions","text":"

The tests confirm that:

  • Pallets initialize correctly - at the start of each test, the system should initialize with block number 0, and the pallets should be in their default states
  • Pallets modify each other's state - the second test shows how one pallet can trigger changes in another pallet's internal state, confirming proper cross-pallet interactions
  • State transitions between blocks are seamless - by simulating block transitions, the tests validate that the runtime responds correctly to changes in the block number

Testing pallet interactions within the runtime is critical for ensuring the blockchain behaves as expected under real-world conditions. Writing integration tests allows validation of how pallets function together, preventing issues that might arise when the system is fully assembled.

This approach provides a comprehensive view of the runtime's functionality, ensuring the blockchain is stable and reliable.

"},{"location":"develop/parachains/testing/setup/","title":"Testing Setup","text":""},{"location":"develop/parachains/testing/setup/#introduction","title":"Introduction","text":"

In Polkadot SDK development, testing is crucial to ensure your blockchain works as expected. While unit testing for individual pallets validates isolated functionality, as discussed in Pallet Testing, it's equally important to test how these pallets function together within the runtime. Runtime testing fills this role by providing a complete simulation of the blockchain system.

This guide will help you set up an environment to test an entire runtime. Runtime testing will enable you to assess how different pallets, their configurations, and system components interact, ensuring your blockchain behaves correctly under real-world conditions.

"},{"location":"develop/parachains/testing/setup/#runtime-testing","title":"Runtime Testing","text":"

In the context of the Polkadot SDK, runtime testing involves creating a simulated environment that mimics actual blockchain conditions. This type of testing goes beyond individual pallet validation, focusing on how multiple components integrate and collaborate across the system. The same approach can also be used to test multiple runtimes if needed.

While unit tests provide confidence that individual pallets function correctly in isolation, runtime tests offer a holistic view. These tests validate pallets' communication and interaction, ensuring a seamless and functional blockchain system. By running integration tests at the runtime level, you can catch issues that only arise when multiple pallets are combined, which is critical for building a stable and reliable blockchain.

"},{"location":"develop/parachains/testing/setup/#configuring-a-mock-runtime-for-integration-tests","title":"Configuring a Mock Runtime for Integration Tests","text":"

The mock runtime includes all the necessary pallets and configurations needed for testing. To simplify the process, you can create a module that integrates all components, making it easier to assess how pallets and system elements interact.

Here's a simple example of how to create a testing module that simulates these interactions:

pub mod integration_testing {\n    use crate::*;\n    // ...\n}\n

Note

The use crate::*; statement imports all the components from your crate (including runtime configurations, pallet modules, and utility functions) into the integration_testing module. This allows you to write tests without manually importing each piece, making the code more concise and readable.

Once the testing module is set, the next step is configuring the genesis storage\u2014the initial state of your blockchain. Genesis storage sets the starting conditions for the runtime, defining how pallets are configured before any blocks are produced.

In Polkadot SDK, you can create this storage using the BuildStorage trait from the sp_runtime crate. This trait is essential for building the configuration that initializes the blockchain's state.

The function new_test_ext() demonstrates setting up this environment. It uses frame_system::GenesisConfig::<Runtime>::default() to generate a default genesis configuration for the runtime, followed by .build_storage() to create the initial storage state. This storage is then converted into a format usable by the testing framework, sp_io::TestExternalities, allowing tests to be executed in a simulated blockchain environment.

Here's the code that sets up the mock runtime:

pub mod integration_testing {\n    use crate::*;\n    use sp_runtime::BuildStorage;\n\n    pub fn new_test_ext() -> sp_io::TestExternalities {\n        frame_system::GenesisConfig::<Runtime>::default()\n            .build_storage()\n            .unwrap()\n            .into()\n    }\n}\n

You can also customize the genesis storage to set initial values for your runtime pallets. For example, you can set the initial balance for accounts like this:

// Build genesis storage according to the runtime's configuration\npub fn new_test_ext() -> sp_io::TestExternalities {\n    // Define the initial balances for accounts\n    let initial_balances: Vec<(AccountId32, u128)> = vec![\n        (AccountId32::from([0u8; 32]), 1_000_000_000_000),\n        (AccountId32::from([1u8; 32]), 2_000_000_000_000),\n    ];\n\n    let mut t = frame_system::GenesisConfig::<Runtime>::default()\n        .build_storage()\n        .unwrap();\n\n    // Adding balances configuration to the genesis config\n    pallet_balances::GenesisConfig::<Runtime> {\n        balances: initial_balances,\n    }\n    .assimilate_storage(&mut t)\n    .unwrap();\n\n    t.into()\n}\n
"},{"location":"develop/parachains/testing/setup/#where-to-go-next","title":"Where to Go Next","text":"

With the mock environment in place, you can now write tests to validate how your pallets interact within the runtime. This approach ensures that your blockchain behaves as expected when the entire runtime is assembled.

You can view a complete example of an integration test in the Astar parachain codebase.

For more advanced information on runtime testing, please refer to the Runtime Testing article.

"},{"location":"develop/smart-contracts/","title":"Smart Contracts","text":"

Polkadot offers developers flexibility in building smart contracts, supporting both Wasm-based contracts using ink! (written in Rust) and Solidity contracts executed by the EVM (Ethereum Virtual Machine).

This section guides you through the tools, resources, and guides to help you build and deploy smart contracts using either Wasm/ink! or EVM-based parachains, depending on your language and environment preference.

"},{"location":"develop/smart-contracts/#choosing-the-right-smart-contract-language-and-execution-environment","title":"Choosing the Right Smart Contract Language and Execution Environment","text":"

For developers building smart contracts in the Polkadot ecosystem, the choice between parachains supporting ink! (for Wasm contracts) and EVM-compatible parachains (for Solidity contracts) depends on the preferred development environment and language. By selecting the right parachain, developers can leverage Polkadot's scalability and interoperability while utilizing the framework that best suits their needs.

Here are some key considerations:

  • Wasm (ink!) contracts - contracts are written in Rust and compiled to Wasm. The advantage of Wasm is that it allows for more flexibility, speed, and potentially lower execution costs compared to EVM, especially in the context of Polkadot's multi-chain architecture
  • EVM-compatible contracts - contracts are written in languages like Solidity or Vyper and executed by the Ethereum Virtual Machine (EVM). The EVM is widely standardized across blockchains, including Polkadot parachains like Astar, Moonbeam, and Acala. This compatibility allows contracts to be deployed across multiple networks with minimal modifications, benefiting from a well-established, broad development ecosystem
  • PolkaVM-compatible contracts - contracts are written in languages like Solidity or Vyper and executed by the PolkaVM. This compatibility provides a seamless transition for developers coming from EVM environments while also enabling interactions with other Polkadot parachains and leveraging Polkadot's interoperability

Throughout the pages in this section, you'll find resources and guides to help you get started with developing smart contracts in both environments.

"},{"location":"develop/smart-contracts/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/smart-contracts/#additional-resources","title":"Additional ResourcesView the Official ink! DocumentationView the Official Asset Hub Contracts Documentation","text":"

Learn everything you need to know about developing smart contracts with ink!.

Learn everything you need about developing smart contracts on Asset Hub using the PolkaVM.

"},{"location":"develop/smart-contracts/overview/","title":"An Overview of the Smart Contract Landscape on Polkadot","text":""},{"location":"develop/smart-contracts/overview/#introduction","title":"Introduction","text":"

Polkadot is designed to support an ecosystem of parachains, rather than hosting smart contracts directly. Developers aiming to build smart contract applications on Polkadot rely on parachains within the ecosystem that provide smart contract functionality.

This guide outlines the primary approaches to developing smart contracts in the Polkadot ecosystem:

  • Wasm-based smart contracts - using ink!, a Rust-based embedded domain-specific language (eDSL), enabling developers to leverage Rust\u2019s safety and tooling
  • EVM-compatible contracts - which support languages like Solidity and Vyper, offering compatibility with popular Ethereum tools and wallets
  • PolkaVM-compatible contracts - which support Solidity and Rust while maintaining compatibility with Ethereum based tools

You'll explore the key differences between these development paths, along with considerations for parachain developers integrating smart contract functionality.

Parachain Developer?

If you are a parachain developer looking to add smart contract functionality to your chain, please refer to the Add Smart Contract Functionality page, which covers both Wasm and EVM-based contract implementations.

"},{"location":"develop/smart-contracts/overview/#smart-contracts-versus-parachains","title":"Smart Contracts Versus Parachains","text":"

A smart contract is a program that executes specific logic isolated to the chain on which it is being executed. All the logic executed is bound to the same state transition rules determined by the underlying virtual machine (VM). Consequently, smart contracts are more streamlined to develop, and programs can easily interact with each other through similar interfaces.

flowchart LR\n  subgraph A[Chain State]\n    direction LR\n    B[\"Program Logic and Storage<br/>(Smart Contract)\"]\n    C[\"Tx Relevant Storage\"]\n  end\n  A --> D[[Virtual Machine]]\n  E[Transaction] --> D\n  D --> F[(New State)]\n  D --> G[Execution Logs]\n  style A fill:#ffffff,stroke:#000000,stroke-width:1px

In addition, because smart contracts are programs that execute on top of existing chains, teams don't have to think about the underlying consensus they are built on.

These strengths do come with certain limitations. Some smart contract environments, like the EVM, tend to be immutable by default, so developers have devised proxy strategies to upgrade smart contracts over time. The typical pattern relies on a proxy contract, which holds the program storage and forwards calls to an implementation contract where the execution logic resides. Upgrading a smart contract then means swapping the implementation contract while retaining the same storage structure, which necessitates careful planning.

Another downside is that smart contracts typically follow a gas metering model, where program execution is measured in a given unit (gas) that users pay for through a fee market. This fee system is often rigid, and complex workarounds, like account abstraction, have been developed to mitigate the problem.

In contrast, parachains can define their own custom logic (known as pallets or modules) and combine it into the state transition function (STF, or runtime) thanks to the modularity provided by the Polkadot SDK. The different pallets within the parachain runtime give developers considerable flexibility when building applications on top of it.

flowchart LR\n    A[(Chain State)] --> B[[\"STF<br/>[Pallet 1]<br/>[Pallet 2]<br/>...<br/>[Pallet N]\"]]\n    C[Transaction<br/>Targeting Pallet 2] --> B\n    B --> E[(New State)]\n    B --> F[Execution Logs]

Parachains inherently offer features such as logic upgradeability, flexible transaction fee mechanisms, and chain abstraction logic. More so, by using Polkadot, parachains can benefit from robust consensus guarantees with little engineering overhead.

Additional information

To read more about the differences between smart contracts and parachain runtimes, please refer to the Runtime vs. Smart Contracts section of the Polkadot SDK Rust docs. For a more in-depth discussion on choosing between runtime development and smart contract development, you can check the post \"When should one build a Polkadot SDK runtime versus a Substrate (Polkadot SDK) smart contract?\" from Stack Overflow.

"},{"location":"develop/smart-contracts/overview/#building-a-smart-contract","title":"Building a Smart Contract","text":"

Polkadot's primary purpose is to provide security for parachains that connect to it. Therefore, it is not meant to support smart contract execution. Developers looking to build smart contract projects in Polkadot need to look into its ecosystem for parachains that support it.

The Polkadot SDK supports multiple smart contract execution environments:

  • EVM - through Frontier. It consists of a full Ethereum JSON RPC compatible client, an Ethereum emulation layer, and a Rust-based EVM. This is used by chains like Acala, Astar, Moonbeam and more
  • Wasm - through the Contracts pallet. ink! is a smart contract language that provides a compiler to Wasm. Wasm contracts can be used by chains like Astar
  • PolkaVM - a cutting-edge virtual machine tailored to optimize smart contract execution on Polkadot. Unlike traditional EVMs, PolkaVM is built with a RISC-V-based register architecture for increased performance and scalability
"},{"location":"develop/smart-contracts/overview/#evm-contracts","title":"EVM Contracts","text":"

The Frontier project provides a set of modules that enables a Polkadot SDK-based chain to run an Ethereum emulation layer that allows the execution of EVM smart contracts natively with the same API/RPC interface.

Ethereum's ECDSA-based addresses can also be mapped directly to and from the Polkadot SDK's SS58 address scheme for existing accounts. Moreover, the Polkadot SDK can be modified to use the ECDSA signature scheme directly, avoiding any mapping.
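
As an illustration of how such a mapping can work, the sketch below mirrors the hashed address mapping pattern used by Frontier, where the 20-byte Ethereum address is prefixed with the bytes evm: and hashed into a 32-byte Substrate account. The use of blake2_256 here is an assumption standing in for the configured hasher:

use sp_core::{crypto::AccountId32, hashing::blake2_256, H160};\n\n/// Derive a Substrate AccountId32 from an Ethereum H160 address by\n/// hashing the prefixed address bytes (blake2_256 assumed as the hasher).\nfn into_account_id(address: H160) -> AccountId32 {\n    let mut data = [0u8; 24];\n    data[0..4].copy_from_slice(b\"evm:\");\n    data[4..24].copy_from_slice(address.as_bytes());\n    AccountId32::from(blake2_256(&data))\n}\n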

At a high level, Frontier is composed of three main components:

  • Ethereum Client - an Ethereum JSON RPC compliant client that allows any request coming from an Ethereum tool, such as Remix, Hardhat or Foundry, to be admitted by the network
  • Pallet Ethereum - a block emulation and Ethereum transaction validation layer that works jointly with the Ethereum client to ensure compatibility with Ethereum tools
  • Pallet EVM - access layer to the Rust-based EVM, enabling the execution of EVM smart contract logic natively

Broadly speaking, in this configuration, an EVM transaction follows the path presented in the diagram below:

flowchart TD\n    A[Users and Devs] -->|Send Tx| B[Frontier RPC Ext]\n    subgraph C[Pallet Ethereum]\n        D[Validate Tx]\n        E[Send<br/>Valid Tx]    \n    end\n    B -->|Interact with| C\n    D --> E\n    subgraph F[Pallet EVM]\n        G[Rust EVM]\n    end\n    I[(Current EVM<br/>Emulated State)]\n\n    H[Smart Contract<br/>Solidity, Vyper...] <-->|Compiled to EVM<br/>Bytecode| I\n\n    C --> F\n    I --> F\n    F --> J[(New Ethereum<br/>Emulated State)]\n    F --> K[Execution Logs]\n\n    style C fill:#ffffff,stroke:#000000,stroke-width:1px\n    style F fill:#ffffff,stroke:#000000,stroke-width:1px

Although this setup seems complex, users and developers are abstracted away from that complexity, and tools can easily interact with the parachain as they would with any other EVM-compatible environment.

The Rust EVM is capable of executing regular EVM bytecode. Consequently, any language that compiles to EVM bytecode can be used to create programs that the parachain can execute.

You can find more information on deploying EVM smart contracts to Polkadot's native smart contract platform, or any of the ecosystem parachains.

"},{"location":"develop/smart-contracts/overview/#wasm-contracts","title":"Wasm Contracts","text":"

The pallet_contracts provides the execution environment for Wasm-based smart contracts. Consequently, any smart contract language that compiles to Wasm can be executed in a parachain that enables this module.

At the time of writing, there are two main languages that can be used for Wasm programs:

  • ink! - a Rust-based language that compiles to Wasm. As the dedicated embedded domain-specific language for this environment, it lets developers inherit Rust's safety guarantees and use standard Rust tooling
  • Solidity - can be compiled to Wasm via the Solang compiler, allowing developers to write Solidity 0.8 smart contracts that execute as Wasm programs on parachains

Broadly speaking, with pallet_contracts, a transaction follows the path presented in the diagram below:

flowchart TD\n\n    subgraph A[Wasm Bytecode API]\n        C[Pallet Contracts]\n    end\n\n    B[Users and Devs] -- Interact with ---> A\n\n    D[(Current State)]\n\n    E[Smart Contract<br/>ink!, Solidity...] <-->|Compiled to Wasm<br/>Bytecode| D\n\n    D --> A\n    A --> F[(New State)]\n    A --> G[Execution Logs]\n\n    style A fill:#ffffff,stroke:#000000,stroke-width:1px

Learn more on how to build and deploy Wasm smart contracts on the Wasm Smart Contracts page.

"},{"location":"develop/smart-contracts/overview/#polkavm-contracts","title":"PolkaVM Contracts","text":"

A component of the Asset Hub parachain, PolkaVM enables the deployment of Solidity-based smart contracts directly on Asset Hub. Learn more about how this cutting-edge virtual machine facilitates using familiar EVM contracts and tools with Asset Hub by visiting the Native EVM Contracts guide.

"},{"location":"develop/smart-contracts/wasm-ink/","title":"Wasm (ink!)","text":""},{"location":"develop/smart-contracts/wasm-ink/#introduction","title":"Introduction","text":"

pallet_contracts is a specialized pallet within the Polkadot SDK that enables smart contract functionality through a WebAssembly (Wasm) execution environment. For developing smart contracts for this pallet, ink! is the primary and recommended language.

ink! is an embedded domain-specific language (eDSL) designed to develop Wasm smart contracts using the Rust programming language.

Rather than creating a new language, ink! is just standard Rust in a well-defined \"contract format\" with specialized #[ink(\u2026)] attribute macros. These attribute macros tell ink! what the different parts of your Rust smart contract represent and ultimately allow ink! to do all the magic needed to create Polkadot SDK-compatible Wasm bytecode. Because of this, it inherits critical advantages such as:

  • Strong memory safety guarantees
  • Advanced type system
  • Comprehensive development tooling
  • Support from Rust's extensive developer community

Since ink! smart contracts are compiled to Wasm, they offer high execution speed, platform independence, and enhanced security through sandboxed execution.

"},{"location":"develop/smart-contracts/wasm-ink/#installation","title":"Installation","text":"

ink! smart contract development requires the installation of cargo-contract, a command-line interface (CLI) tool that provides essential utilities for creating, testing, and managing ink! projects.

For step-by-step installation instructions, including platform-specific requirements and troubleshooting tips, refer to the official cargo-contract Installation guide.
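
In most environments, the installation itself is a single cargo command; a typical invocation is shown below, while the guide linked above covers prerequisites such as the Rust toolchain:

cargo install --force --locked cargo-contract\n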

"},{"location":"develop/smart-contracts/wasm-ink/#get-started","title":"Get Started","text":"

To create a new ink! smart contract project, use the cargo contract command:

cargo contract new INSERT_PROJECT_NAME\n

This command generates a new project directory with the following structure:

INSERT_PROJECT_NAME/\n\u251c\u2500\u2500 lib.rs          # Contract source code\n\u251c\u2500\u2500 Cargo.toml      # Project configuration and dependencies\n\u2514\u2500\u2500 .gitignore      # Git ignore rules\n

The lib.rs file includes a basic contract template with storage and message-handling functionality. Customize this file to implement your contract\u2019s logic. The Cargo.toml file defines project dependencies, including the necessary ink! libraries and configuration settings.
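
Once you have implemented your contract's logic, you can compile it to Wasm and generate its metadata with the build command, which produces a .contract bundle ready for deployment:

cargo contract build\n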

"},{"location":"develop/smart-contracts/wasm-ink/#contract-structure","title":"Contract Structure","text":"

An ink! smart contract requires three fundamental components:

  • A storage struct marked with #[ink(storage)]
  • At least one constructor function marked with #[ink(constructor)]
  • At least one message function marked with #[ink(message)]
"},{"location":"develop/smart-contracts/wasm-ink/#default-template-structure","title":"Default Template Structure","text":"

The following example shows the basic contract structure generated by running cargo contract new:

#![cfg_attr(not(feature = \"std\"), no_std, no_main)]\n\n#[ink::contract]\nmod flipper {\n\n    /// Defines the storage of your contract.\n    /// Add new fields to the below struct in order\n    /// to add new static storage fields to your contract.\n    #[ink(storage)]\n    pub struct Flipper {\n        /// Stores a single `bool` value on the storage.\n        value: bool,\n    }\n\n    impl Flipper {\n        /// Constructor that initializes the `bool` value to the given `init_value`.\n        #[ink(constructor)]\n        pub fn new(init_value: bool) -> Self {\n            Self { value: init_value }\n        }\n\n        /// Constructor that initializes the `bool` value to `false`.\n        ///\n        /// Constructors can delegate to other constructors.\n        #[ink(constructor)]\n        pub fn default() -> Self {\n            Self::new(Default::default())\n        }\n\n        /// A message that can be called on instantiated contracts.\n        /// This one flips the value of the stored `bool` from `true`\n        /// to `false` and vice versa.\n        #[ink(message)]\n        pub fn flip(&mut self) {\n            self.value = !self.value;\n        }\n\n        /// Simply returns the current value of our `bool`.\n        #[ink(message)]\n        pub fn get(&self) -> bool {\n            self.value\n        }\n    }\n}\n
"},{"location":"develop/smart-contracts/wasm-ink/#storage","title":"Storage","text":"

In an ink! contract, persistent storage is defined by a single struct annotated with the #[ink(storage)] attribute. This struct represents the contract's state and can use various data types for storing information, such as:

  • Common data types:

    • Boolean values (bool)
    • Unsigned integers (u8, u16, u32, u64, u128)
    • Signed integers (i8, i16, i32, i64, i128)
    • Tuples and arrays
  • Substrate-specific types:

    • AccountId - contract and user addresses
    • Balance - token amounts
    • Hash - cryptographic hashes
  • Data structures:

    • Struct - custom data structures
    • Vec - dynamic arrays
    • Mapping - key-value storage
    • BTreeMap - ordered maps
    • HashMap - unordered maps

Example of a storage struct using various supported types:

#[ink(storage)]\npub struct Data {\n    /// A boolean flag to indicate a certain condition\n    flag: bool,\n    /// A vector to store multiple entries of unsigned 32-bit integers\n    entries: Vec<u32>,\n    /// An optional value that can store a specific integer or none\n    optional_value: Option<i32>,\n    /// A map to associate keys (as AccountId) with values (as unsigned 64-bit integers)\n    key_value_store: Mapping<AccountId, u64>,\n    /// A counter to keep track of some numerical value\n    counter: u64,\n}\n
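
As a brief sketch of how this storage is accessed, the hypothetical messages below write to and read from the key_value_store mapping defined above, keyed by the caller's account:

#[ink(message)]\npub fn store_for_caller(&mut self, value: u64) {\n    // Insert (or overwrite) the value stored under the caller's account.\n    let caller = self.env().caller();\n    self.key_value_store.insert(caller, &value);\n}\n\n#[ink(message)]\npub fn read_for_caller(&self) -> Option<u64> {\n    // Returns None if the caller has no stored value.\n    self.key_value_store.get(self.env().caller())\n}\n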

For an in-depth explanation of storage and data structures in ink!, refer to the Storage & Data Structures section and the #[ink(storage)] macro definition in the official documentation.

"},{"location":"develop/smart-contracts/wasm-ink/#constructors","title":"Constructors","text":"

Constructors are functions that execute once when deploying the contract and are used to initialize the contract\u2019s state. Each contract must have at least one constructor, though multiple constructors can provide different initialization options.

Example:

#[ink::contract]\nmod mycontract {\n\n    #[ink(storage)]\n    pub struct MyContract {\n        number: u32,\n    }\n\n    impl MyContract {\n        /// Constructor that initializes the `u32` value to the given `init_value`.\n        #[ink(constructor)]\n        pub fn new(init_value: u32) -> Self {\n            Self {\n                number: init_value,\n            }\n        }\n\n        /// Constructor that initializes the `u32` value to the `u32` default.\n        #[ink(constructor)]\n        pub fn default() -> Self {\n            Self {\n                number: Default::default(),\n            }\n        }\n    }\n\n    /* ... */\n}\n

Note

In this example, new(init_value: u32) initializes number with a specified value, while default() initializes it with the type\u2019s default value (0 for u32). These constructors provide flexibility in contract deployment by supporting custom and default initialization options.

For more information, refer to the official documentation for the #[ink(constructor)] macro definition.

"},{"location":"develop/smart-contracts/wasm-ink/#messages","title":"Messages","text":"

Messages are functions that interact with the contract, allowing users or other contracts to call specific methods. Each contract must define at least one message.

There are two types of messages:

  • Immutable messages (&self) - these messages can only read the contract's state and cannot modify it
  • Mutable messages (&mut self) - these messages can read and modify the contract's state

Note

&self is a reference to the contract's storage.

Example:

#[ink(message)]\npub fn my_getter(&self) -> u32 {\n    self.my_number\n}\n\n#[ink(message)]\npub fn my_setter(&mut self, new_value: u32) {\n    self.my_number = new_value;\n}\n

Note

In the example above, my_getter is an immutable message that reads state, while my_setter is a mutable message that updates state.

For more information, refer to the official documentation on the #[ink(message)] macro.

"},{"location":"develop/smart-contracts/wasm-ink/#errors","title":"Errors","text":"

In ink!, errors are handled using idiomatic Rust practices with the Result<T, E> type. Custom error types are defined by creating an Error enum and specifying any necessary variants. If a message returns an error, the contract execution reverts, ensuring no changes are applied to the contract's state.

Example:

#[derive(Debug, PartialEq, Eq)]\n#[ink::scale_derive(Encode, Decode, TypeInfo)]\npub enum Error {\n    /// Returned if not enough balance to fulfill a request is available.\n    InsufficientBalance,\n    /// Returned if not enough allowance to fulfill a request is available.\n    InsufficientAllowance,\n}\n\nimpl Erc20 {\n    //...\n    #[ink(message)]\n    pub fn transfer_from(\n        &mut self,\n        from: AccountId,\n        to: AccountId,\n        value: Balance,\n    ) -> Result<(), Error> {\n        let caller = self.env().caller();\n        let allowance = self.allowance_impl(&from, &caller);\n        if allowance < value {\n            return Err(Error::InsufficientAllowance)\n        }\n        //...\n    }\n    //...\n}\n

Note

In this example, the Error enum defines custom error types InsufficientBalance and InsufficientAllowance. When transfer_from is called, it checks if the allowance is sufficient. If not, it returns an InsufficientAllowance error, causing the contract to revert. This approach ensures robust error handling for smart contracts.

"},{"location":"develop/smart-contracts/wasm-ink/#events","title":"Events","text":"

Events allow the contract to communicate important occurrences to the outside world. They are user-defined by creating a struct and annotating it with the #[ink(event)] macro. Each field you want to index for efficient querying should be marked with #[ink(topic)].

Example:

/// Event emitted when a token transfer occurs.\n#[ink(event)]\npub struct Transfer {\n    #[ink(topic)]\n    from: Option<AccountId>,\n    #[ink(topic)]\n    to: Option<AccountId>,\n    value: Balance,\n}\n\nimpl Erc20 {\n    //...\n    #[ink(message)]\n    pub fn transfer_from(\n        &mut self,\n        from: AccountId,\n        to: AccountId,\n        value: Balance,\n    ) -> Result<(),Error> {\n        //...\n        self.env().emit_event(Transfer {\n            from: Some(from),\n            to: Some(to),\n            value,\n        });\n\n        Ok(())\n    }\n}\n

Note

In this example, the Transfer event records the sender (from), the receiver (to), and the amount transferred (value). The event is emitted in the transfer_from function to notify external listeners whenever a transfer occurs.

For more details, check the Events section and the #[ink(event)] macro documentation.

"},{"location":"develop/smart-contracts/wasm-ink/#where-to-go-next","title":"Where to Go Next?","text":"

To deepen your knowledge of ink! development, whether you're exploring foundational concepts or advanced implementations, the following resources provide essential guidance:

  • Official ink! documentation \u2014 a thorough resource with guides, in-depth explanations, and technical references to support you in mastering ink! development

  • ink-examples repository \u2014 a curated collection of smart contract examples that demonstrate best practices and commonly used design patterns

"},{"location":"develop/smart-contracts/evm/","title":"EVM","text":"

The Polkadot ecosystem supports Ethereum-compatible smart contracts through both native EVM contracts and EVM-compatible parachains. Native EVM contracts run on PolkaVM, Polkadot's Ethereum-compatible virtual machine, which allows Solidity-based contracts to execute directly within the ecosystem without relying on external Ethereum networks or EVM-compatible parachains.

With EVM support, developers can build decentralized applications (dApps) using familiar tools, languages like Solidity, and established smart contract standards, all while taking advantage of Polkadot's unique features, such as scalability and cross-chain interoperability.

Whether deploying existing Ethereum contracts on Polkadot or creating new applications, this section provides the resources you need to get started. Learn how to leverage popular EVM tools, such as Remix and Ethers.js, to integrate Ethereum-compatible smart contract functionality into the Polkadot ecosystem.

  • Want to learn more? Jump to In This Section to get started
  • Ready to start coding? Check out the Deploy a Smart Contract section to get started
"},{"location":"develop/smart-contracts/evm/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/smart-contracts/evm/#deploy-a-smart-contract","title":"Deploy a Smart ContractDeploy a Smart Contract on Asset HubDeploy a Smart Contract on AstarDeploy a Smart Contract on MoonbeamDeploy a Smart Contract on Acala","text":"

Follow instructions to deploy your first contract using Remix on the Asset Hub system chain.

Follow instructions to deploy your first contract on the Astar parachain using Remix.

Follow instructions to deploy your first contract on the Moonbeam parachain using Remix.

Follow instructions to deploy your first contract on the Acala parachain using Remix.

"},{"location":"develop/smart-contracts/evm/native-evm-contracts/","title":"Native EVM Contracts","text":""},{"location":"develop/smart-contracts/evm/native-evm-contracts/#introduction","title":"Introduction","text":"

The Asset Hub parachain is the cornerstone of asset management within the Polkadot ecosystem, providing seamless, secure access to digital assets. Native EVM contracts allow developers to deploy Solidity-based smart contracts directly on Asset Hub, enhancing developer efficiency and simplifying application design. This approach eliminates the complexity of asynchronous cross-chain communication and avoids the overhead of additional governance systems or tokens.

This guide will help you understand the role of native EVM contracts and how they integrate with the Polkadot ecosystem. You will explore the components powering this functionality, including PolkaVM and Revive, and learn how to deploy and interact with smart contracts on Asset Hub using tools like MetaMask, Revive Remix, and Ethers.js.

By enabling native smart contract deployment, Polkadot's Asset Hub streamlines blockchain development while preserving its secure, scalable foundation.

"},{"location":"develop/smart-contracts/evm/native-evm-contracts/#components","title":"Components","text":"

The native EVM contracts feature leverages several powerful components to deliver high performance and Solidity compatibility:

  • pallet_revive - a runtime module that executes smart contracts by adding extrinsics, runtime APIs, and logic to convert Ethereum-style transactions into formats compatible with the blockchain. The workflow is as follows:

    • Transactions are sent via a proxy server emulating Ethereum JSON RPC
    • The proxy converts Ethereum transactions into a special dispatchable, leaving the payload intact
    • The pallet's logic decodes and transforms these transactions into a format compatible with the blockchain

    Using a proxy avoids modifying the node binary, ensuring compatibility with alternative clients without requiring additional implementation work.

  • PolkaVM - a custom virtual machine optimized for performance with RISC-V-based architecture, supporting Solidity and additional high-performance languages

  • Revive - compiles Solidity for PolkaVM by translating the solc compiler's YUL output into RISC-V. This translation simplifies development and ensures full compatibility with all Solidity versions and features

  • Revive Remix - a modified fork of Remix IDE supporting backend compilation via LLVM-based Revive. Allows for efficient Solidity contract deployment on Polkadot

"},{"location":"develop/smart-contracts/evm/native-evm-contracts/#polkavm","title":"PolkaVM","text":"

PolkaVM is a cutting-edge virtual machine tailored to optimize smart contract execution on Polkadot. Unlike traditional Ethereum Virtual Machines (EVM), PolkaVM is built with a RISC-V-based register architecture and a 64-bit word size, enabling:

  • Faster arithmetic operations and efficient hardware translation
  • Seamless integration of high-performance languages like C and Rust for advanced optimization
  • Improved scalability for modern blockchain applications
"},{"location":"develop/smart-contracts/evm/native-evm-contracts/#compared-to-traditional-evms","title":"Compared to Traditional EVMs","text":"
  • Architecture - PolkaVM's register-based design offers significant performance improvements over Ethereum's stack-based EVM. It allows for faster compilation times and aligns better with modern hardware, reducing bottlenecks in contract execution
  • Gas modeling - PolkaVM employs a multi-dimensional gas model, metering resources like computation time, storage, and proof sizes. This ensures more accurate cost assessments for contract execution, reducing overcharging for memory allocation and enabling efficient cross-contract calls
  • Compatibility - while optimized for performance, PolkaVM remains compatible with Ethereum tools through a closely mirrored RPC interface, with minor adjustments for certain operations. It also hides the existential deposit requirement, simplifying user interactions by abstracting balance limitations
"},{"location":"develop/smart-contracts/evm/native-evm-contracts/#performance-benefits","title":"Performance Benefits","text":"

PolkaVM's innovations translate into significant performance gains, such as:

  • Enhanced developer experience - faster execution and better tooling support
  • Optimized resource use - reduced transaction costs with precise metering
  • Broader language support - potential integration of languages like Rust and C for specialized use cases

By combining advanced performance optimizations with Ethereum compatibility, PolkaVM bridges the gap between cutting-edge blockchain development and the familiar tools developers rely on.

"},{"location":"develop/smart-contracts/evm/native-evm-contracts/#deploy-a-smart-contract-to-asset-hub","title":"Deploy a Smart Contract to Asset Hub","text":"

The following sections guide you through the steps to connect to Asset Hub, deploy a smart contract, and interact with the contract using Ethers.js.

"},{"location":"develop/smart-contracts/evm/native-evm-contracts/#connect-to-asset-hub","title":"Connect to Asset Hub","text":"

Install any EVM-compatible wallet. To follow this example, install the MetaMask browser extension and add the Westend TestNet Asset Hub as a custom network using the following settings:

  • Network name - Asset-Hub Westend Testnet
  • RPC URL - https://westend-asset-hub-eth-rpc.polkadot.io
  • Chain ID - 420420421
  • Currency symbol - WND
  • Block explorer URL - https://assethub-westend.subscan.io
"},{"location":"develop/smart-contracts/evm/native-evm-contracts/#deploy-a-contract","title":"Deploy a Contract","text":"

To deploy a contract to the Westend Asset Hub, you must first obtain WND tokens from the Westend Faucet, specifying the address where you want to receive them.

For deploying and interacting with contracts in Revive Remix, you can use the following steps:

  1. Open the Remix IDE, select any Solidity contract available, and compile it using the \u25b6\ufe0f button or the Solidity Compiler tab

  2. Deploy the contract

    1. Click on the Deploy & Run tab
    2. Choose the Westend TestNet - Metamask button. Your account address and balance will appear in the ACCOUNT field
    3. Click on the Deploy button to launch the contract

After deployment, you can interact with the contract listed in the Deployed/Unpinned Contracts section within the Deploy & Run tab. You can either call the smart contract methods or run tests against the contract to see if it works as expected.

"},{"location":"develop/smart-contracts/evm/native-evm-contracts/#use-ethersjs-to-interact","title":"Use Ethers.js to Interact","text":"

Once deployed, you can use the Ethers.js library to allow your application to interact with the contract. This library provides the tools needed to query data, send transactions, and listen to events through a provider, which links your application and the blockchain.

  • In browsers, providers are available through wallets like MetaMask, which inject an ethereum object into the window. Ensure that MetaMask is installed and connected to Westend Asset Hub

    import { BrowserProvider } from 'ethers';\n\n// Browser wallet will inject the ethereum object into the window object\nif (typeof window.ethereum == 'undefined') {\n  return console.log('No wallet installed');\n}\n\nconsole.log('An Ethereum wallet is installed!');\nconst provider = new BrowserProvider(window.ethereum);\n
  • For server-side applications, JsonRpcProvider can connect directly to RPC nodes:

    import { JsonRpcProvider } from 'ethers';\n\nconst provider = new JsonRpcProvider(\n  'https://westend-asset-hub-eth-rpc.polkadot.io',\n);\n

Once your application is connected, you can retrieve network data, access contract methods, and fully interact with the deployed smart contract.

"},{"location":"develop/smart-contracts/evm/native-evm-contracts/#where-to-go-next","title":"Where to Go Next","text":"

For further information about the Asset Hub smart contracts, please refer to the official documentation.

"},{"location":"develop/smart-contracts/evm/parachain-contracts/","title":"Parachain Contracts","text":""},{"location":"develop/smart-contracts/evm/parachain-contracts/#introduction","title":"Introduction","text":"

One key factor underpinning Ethereum's growth is the ease of deploying to the EVM. The EVM, or Ethereum Virtual Machine, provides developers with a consistent and predictable execution environment for smart contracts. While the EVM is not perfect, its popularity and ease of deployment have far outweighed any shortcomings and resulted in the massive growth of EVM-compatible smart contract platforms.

Also integral to the proliferation of EVM-based smart contract networks is smart contract portability. Developers can take smart contracts they've deployed to Ethereum and, in many cases, deploy them to other EVM-compatible networks with minimal changes. Beyond simple \"copy/paste\" deployments, this portability enables interoperability between chains: building a cross-chain application is much easier when both chains offer similar EVM compatibility.

"},{"location":"develop/smart-contracts/evm/parachain-contracts/#why-adopt-the-evm-as-a-polkadot-parachain","title":"Why Adopt the EVM as a Polkadot Parachain?","text":"

In addition to the developer mindshare of the EVM, Polkadot parachains leveraging the EVM can benefit from the extensive tooling for Ethereum developers that's already been built and battle-tested. This includes wallets, block explorers, developer tools, and more. Beyond just tools, the EVM has had a long head start regarding smart contract auditors and institutional/custodial asset management. Integrating EVM compatibility can unlock several of these tools by default or allow for relatively easy future integrations.

Polkadot enables parachains to supercharge their capabilities beyond the limitations of the EVM. To that end, many parachains have developed ways to tap into the powerful features offered by Polkadot, such as precompiles or Solidity interfaces that expose Substrate functionality to app developers and users. This guide covers some of the unique features that each parachain offers. For more information about each parachain, visit its respective documentation site.

"},{"location":"develop/smart-contracts/evm/parachain-contracts/#evm-compatible-parachains","title":"EVM-Compatible Parachains","text":""},{"location":"develop/smart-contracts/evm/parachain-contracts/#astar","title":"Astar","text":"

Astar emerged as a key smart contract platform on Polkadot, distinguished by its unique multiple virtual machine approach that supports both EVM and WebAssembly (Wasm) smart contracts. This dual VM support allows developers to choose their preferred programming environment while maintaining full Ethereum compatibility. The platform's runtime is built on Substrate using FRAME, incorporating crucial components from Polkadot-SDK alongside custom-built modules for handling its unique features.

Astar has established itself as an innovation hub through initiatives like the zk-rollup development framework and integration with multiple Layer 2 scaling solutions. Astar leverages XCM for native Polkadot ecosystem interoperability while maintaining connections to external networks through various bridge protocols. Through its support for both EVM and Wasm, along with advanced cross-chain capabilities, Astar serves as a crucial gateway for projects looking to leverage the unique advantages of both Ethereum and Polkadot ecosystems while maintaining seamless interoperability between them.

"},{"location":"develop/smart-contracts/evm/parachain-contracts/#technical-architecture","title":"Technical Architecture","text":"
graph TB\n    subgraph A[\"DApp Layer\"]\n        direction TB\n        eth[\"Ethereum DApps\\n(Web3)\"]\n        wasm[\"Wasm DApps\\n(ink!, Ask!)\"]\n        substrate[\"Substrate DApps\\n(Polkadot.js)\"]\n    end\n\n    subgraph B[\"Astar Network\"]\n        direction TB\n        rpc[\"RPC Layer\\n(Web3 + Substrate)\"]\n\n        subgraph \"Runtime\"\n            xvm[\"Cross-Virtual Machine (XVM)\"]\n            evm[\"EVM\"]\n            wasm_vm[\"Wasm VM\"]\n\n            subgraph D[\"Core Features\"]\n                staking[\"dApp Staking\"]\n            end\n        end\n    end\n\n    subgraph C[\"Base Layer\"]\n        dot[\"Polkadot Relay Chain\\n(Shared Security)\"]\n    end\n\n    %% Connections\n    A --> B\n    rpc --> xvm\n    xvm --> C\n    xvm --> D\n    xvm --> wasm_vm\n    xvm --> evm\n\n    evm <--> wasm_vm

The diagram illustrates the layered architecture of Astar Network: at the top, dApps can interact with the Astar network through either Web3, Substrate, or Wasm. These requests flow through Astar's RPC layer into the main runtime, where the magic happens in the virtual machine layer. Here, Astar's unique Cross-Virtual Machine (XVM) coordinates between EVM and Wasm environments, allowing smart contracts from both ecosystems to interact. The Runtime also includes core blockchain functions through various pallets (like system operations and dApps staking), and everything is ultimately secured by connecting to the Polkadot Relay Chain at the bottom layer.

"},{"location":"develop/smart-contracts/evm/parachain-contracts/#endpoints-and-faucet","title":"Endpoints and Faucet","text":"Variable Value Network Name Shibuya Testnet EVM Chain ID 81 Public RPC URLs
https://evm.shibuya.astar.network
Public WSS URLs
wss://evm.shibuya.astar.network
Block Explorer Shibuya Blockscout Faucet Link Faucet - Astar Docs"},{"location":"develop/smart-contracts/evm/parachain-contracts/#moonbeam","title":"Moonbeam","text":"

Moonbeam was the first parachain to bring full Ethereum-compatibility to Polkadot, enabling Ethereum developers to bring their dApps to Polkadot and gain access to the rapidly growing Polkadot user base. Moonbeam's runtime is built using FRAME, and combines essential components from the Polkadot-SDK, Frontier, and custom pallets. The architecture integrates key Substrate offerings like balance management and transaction processing, while Frontier's pallets enable EVM execution and Ethereum compatibility. Custom pallets handle Moonbeam-specific features such as parachain staking and block author verification. Moonbeam offers a variety of precompiles for dApp developers to access powerful Polkadot features via a Solidity interface, such as governance, randomness, transaction batching, and more.

Additionally, Moonbeam is a hub for interoperability and cross-chain connected contracts. Moonbeam has a variety of integrations with GMP (general message passing) providers, including Wormhole, LayerZero, Axelar, and more. These integrations make it easy for developers to build cross-chain contracts on Moonbeam, and they also play an integral role in connecting the entire Polkadot ecosystem with other blockchains. Innovations like Moonbeam Routed Liquidity, or MRL, enable users to bridge funds between chains like Ethereum and parachains like HydraDX. Through XCM, other parachains can connect to Moonbeam and access its established bridge connections to Ethereum and other networks, eliminating the need for each parachain to build and maintain their own bridges.

"},{"location":"develop/smart-contracts/evm/parachain-contracts/#technical-architecture_1","title":"Technical Architecture","text":"
  graph LR\n      A[Existing<br/>EVM DApp</br>Frontend]\n      B[Ethereum<br/>Development<br/>Tool]\n\n      subgraph C[Moonbeam Node]\n        direction LR\n        D[Web3 RPC]\n        subgraph E[Ethereum Pallet]\n          direction LR\n          F[Substrate<br/>Runtime<br/>Functions]\n          G[Block Processor]\n        end\n        subgraph H[EVM Pallet]\n          direction LR\n          I[EVM Execution]\n        end\n\n      end\n\n      A --> C\n      B --> C\n      D --> E\n      F --> G \n      E --> H\n\n    classDef darkBackground fill:#2b2042,stroke:#000,color:#fff;\n    classDef lightBox fill:#b8a8d9,stroke:#000,color:#000;\n\n    class A,B darkBackground\n    class D,E,H lightBox

The diagram above illustrates how transactions are processed on Moonbeam. When a DApp or Ethereum development tool (like Hardhat) sends a Web3 RPC request, it's first received by a Moonbeam node. Moonbeam nodes are versatile - they support both Web3 and Substrate RPCs, giving developers the flexibility to use either Ethereum or Substrate tools. When these RPC calls come in, they're processed by corresponding functions in the Substrate runtime. The runtime verifies signatures and processes any Substrate extrinsics. Finally, if the transaction involves smart contracts, these are forwarded to Moonbeam's EVM for execution and state changes.

"},{"location":"develop/smart-contracts/evm/parachain-contracts/#endpoints-and-faucet_1","title":"Endpoints and Faucet","text":"Variable Value Network Name Moonbase Alpha Testnet EVM Chain ID 1287 Public RPC URLs
https://rpc.api.moonbase.moonbeam.network
Public WSS URLs
wss://wss.api.moonbase.moonbeam.network
Block Explorer Moonbase Alpha Moonscan Faucet Link Moonbase Faucet"},{"location":"develop/smart-contracts/evm/parachain-contracts/#acala","title":"Acala","text":"

Acala positioned itself as Polkadot's DeFi hub by introducing the Acala EVM+ - an enhanced version of the EVM specifically optimized for DeFi operations. This customized EVM implementation enables seamless deployment of Ethereum-based DeFi protocols while offering advanced features like on-chain scheduling, pre-built DeFi primitives, and native multi-token support that aren't available in traditional EVMs.

Acala supports a comprehensive DeFi ecosystem including a decentralized stablecoin (aUSD) and a liquid staking derivative for DOT. The platform's EVM+ innovations extend beyond standard Ethereum compatibility by enabling direct interaction between EVM smart contracts and Substrate pallets, facilitating advanced cross-chain DeFi operations through XCM, and providing built-in oracle integrations. These enhancements make it possible for DeFi protocols to achieve functionality that would be prohibitively expensive or technically infeasible on traditional EVM chains.

"},{"location":"develop/smart-contracts/evm/parachain-contracts/#technical-architecture_2","title":"Technical Architecture","text":"
graph TB\n    subgraph A[\"DApp Layer\"]\n        direction TB\n        eth[\"Ethereum DApps\\n(Web3 + bodhi.js)\"]\n        substrate[\"Substrate DApps\\n(Polkadot.js)\"]\n    end\n\n    subgraph B[\"Acala Network\"]\n        direction TB\n        rpc[\"RPC Layer\\n(Web3 + Substrate)\"]\n\n        subgraph \"Runtime\"\n            direction TB\n            evmplus[\"EVM+\"]\n\n            subgraph C[\"Core Components\"]\n                direction LR\n                storage[\"Storage Meter\"]\n                precompiles[\"Precompiled DeFi Contracts\\n(DEX, Oracle, Scheduler)\"]\n            end\n        end\n    end\n\n    subgraph D[\"Base Layer\"]\n        dot[\"Polkadot Relay Chain\\n(Shared Security)\"]\n    end\n\n    %% Simplified connections\n    A --> B\n    rpc --> evmplus\n    evmplus --> C\n    evmplus --> D

The diagram illustrates Acala's unique EVM+ architecture, which extends beyond standard EVM compatibility. At the top, DApps can interact with the network using either Ethereum tools (via Web3 and bodhi.js) or Substrate tools. These requests flow through Acala's dual RPC layer into the main Runtime. The key differentiator is the EVM+ environment, which includes special features like the Storage Meter for rent management, and numerous precompiled contracts (like DEX, Oracle, and Scheduler) that provide native Substrate functionality to EVM contracts. All of this runs on top of Polkadot's shared security as a parachain.

"},{"location":"develop/smart-contracts/evm/parachain-contracts/#endpoints-and-faucet_2","title":"Endpoints and Faucet","text":"Variable Value Network Name Mandala TC7 Testnet EVM Chain ID 595 Public RPC URLs
https://eth-rpc-tc9.aca-staging.network
Public WSS URLs
wss://tc7-eth.aca-dev.network
Block Explorer Mandala Blockscout Faucet Link Mandala Faucet"},{"location":"develop/smart-contracts/evm/parachain-contracts/#evm-developer-tools","title":"EVM Developer Tools","text":"

One of the key benefits of being an EVM-compatible parachain is the ability for developers to use familiar developer tools, like Hardhat, Remix, and Foundry. Compatibility with Solidity, the most widely adopted smart contract programming language, also means that developers can leverage existing smart contract templates and standards, such as the ones built by OpenZeppelin. To learn more, check out the following guides for each parachain (a sample Hardhat configuration follows the list):

Astar
  • Hardhat
  • Thirdweb
  • Remix
  • Privy embedded wallets
Moonbeam
  • Hardhat
  • Thirdweb
  • Remix
  • Tenderly
  • Foundry
  • OpenZeppelin
Acala
  • Hardhat
  • Remix
  • Waffle
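
As mentioned above, here is a minimal sketch of a Hardhat network entry for Moonbase Alpha. The endpoint and chain ID come from the table earlier on this page; the Solidity version and the private key placeholder are assumptions to adapt to your project:

// hardhat.config.js - a minimal sketch for targeting Moonbase Alpha
require('@nomicfoundation/hardhat-toolbox');

module.exports = {
  solidity: '0.8.20',
  networks: {
    moonbase: {
      url: 'https://rpc.api.moonbase.moonbeam.network', // From the table above
      chainId: 1287, // Moonbase Alpha EVM chain ID
      accounts: ['INSERT_PRIVATE_KEY'], // Replace with your funded test key
    },
  },
};

Deployments and tests can then target the testnet by passing --network moonbase to the usual Hardhat commands.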
"},{"location":"develop/smart-contracts/evm/parachain-contracts/#reading-contract-state-on-evm-compatible-parachains","title":"Reading Contract State on EVM-Compatible Parachains","text":"

This section dives into a practical demonstration. The following script showcases how to interact with multiple Polkadot parachains using their EVM compatibility. The script will query:

  • Moonbeam for its Wormhole USDC total supply
  • Acala for its native ACA token supply using a precompile
  • Astar for its USDC total supply

What makes this demo particularly powerful is that all three chains (Astar, Moonbeam, and Acala) share EVM compatibility. This means you can use a single, unified script to query token balances across all of them, simply by adjusting the RPC endpoints and token contract addresses. Thanks to EVM compatibility, there's no need for chain-specific scripts or custom development work.

Expand to view the complete script
// Required imports\nconst { ethers } = require('ethers');\n\n// Network RPC endpoints\nconst networkConfigs = {\n  moonbeam: {\n    rpc: 'https://rpc.api.moonbeam.network',\n    name: 'Moonbeam (Wormhole USDC)',\n  },\n  acala: {\n    rpc: 'https://eth-rpc-acala.aca-api.network',\n    name: 'Acala ACA',\n  },\n  astar: {\n    rpc: 'https://evm.astar.network',\n    name: 'Astar (USDC)',\n  },\n};\n\n// Minimal ERC20 ABI - we only need totalSupply\nconst erc20ABI = [\n  {\n    constant: true,\n    inputs: [],\n    name: 'totalSupply',\n    outputs: [{ name: '', type: 'uint256' }],\n    type: 'function',\n  },\n  {\n    constant: true,\n    inputs: [],\n    name: 'decimals',\n    outputs: [{ name: '', type: 'uint8' }],\n    type: 'function',\n  },\n];\n\nasync function getTokenSupply(networkKey, tokenAddress) {\n  try {\n    // Get network configuration\n    const networkConfig = networkConfigs[networkKey];\n    if (!networkConfig) {\n      throw new Error(`Unsupported network: ${networkKey}`);\n    }\n\n    // Create provider and contract instance - Updated for ethers v6\n    const provider = new ethers.JsonRpcProvider(networkConfig.rpc);\n    const contract = new ethers.Contract(tokenAddress, erc20ABI, provider);\n\n    // Get total supply and decimals\n    const [totalSupply, decimals] = await Promise.all([\n      contract.totalSupply(),\n      contract.decimals(),\n    ]);\n\n    // Convert to human readable format\n    const formattedSupply = ethers.formatUnits(totalSupply, decimals);\n\n    return {\n      network: networkConfig.name,\n      tokenAddress,\n      totalSupply: formattedSupply,\n      rawTotalSupply: totalSupply.toString(),\n      decimals: decimals,\n    };\n  } catch (error) {\n    throw new Error(`Error fetching token supply: ${error.message}`);\n  }\n}\n\nasync function main() {\n  const tokens = {\n    moonbeam: '0x931715FEE2d06333043d11F658C8CE934aC61D0c', // Wormhole USDC\n    acala: '0x0000000000000000000100000000000000000000', // ACA\n    astar: '0x6a2d262D56735DbA19Dd70682B39F6bE9a931D98', // USDC on Astar\n  };\n\n  for (const [network, tokenAddress] of Object.entries(tokens)) {\n    try {\n      const result = await getTokenSupply(network, tokenAddress);\n      console.log(`\\n${result.network} Token Supply:`);\n      console.log(`Address: ${result.tokenAddress}`);\n      console.log(`Total Supply: ${result.totalSupply}`);\n      console.log(`Decimals: ${result.decimals}`);\n    } catch (error) {\n      console.error(`Error for ${network}:`, error.message);\n    }\n  }\n}\n\n// Execute the main function and handle any errors\nmain().catch((error) => {\n  console.error('Error in main:', error);\n  process.exit(1);\n});\n

This script demonstrates one of the fundamental ways to interact with blockchain networks - querying on-chain state through smart contract calls. The standardized ERC20 interface (which most tokens implement) is used to read the total supply of tokens across different EVM networks. This type of interaction is a \"read-only\" or \"view\" call, meaning it simply fetches data from the blockchain without submitting a transaction or changing state, so it consumes no gas. Only transactions that attempt to change blockchain state require gas. The ability to query state like this is essential for DApps, analytics tools, and monitoring systems that need real-time blockchain data.

"},{"location":"develop/smart-contracts/evm/parachain-contracts/#where-to-go-next","title":"Where to Go Next","text":"

Check out the links below for each respective parachain for network endpoints, getting started guides, and more.

Astar
  • Astar Docs
  • Astar Network Endpoints
  • Build EVM Smart Contracts on Astar
Moonbeam
  • Moonbeam Docs
  • Moonbeam Network Endpoints
  • Get Started Building on Moonbeam
Acala
  • Acala Docs
  • Acala Network Endpoints
  • About the Acala Network
"},{"location":"develop/toolkit/","title":"Toolkit","text":"

Explore Polkadot's core development toolkit, designed to support a variety of developers and use cases within the ecosystem. Whether you're building blockchain infrastructure, developing cross-chain applications, or integrating with external services, this section offers essential tools and resources to help you succeed.

Key tools for different audiences:

  • Blockchain developers - leverage development tools for building and managing Polkadot SDK-based blockchains, optimizing the infrastructure of the ecosystem
  • DApp developers - develop decentralized applications (dApps) that interact seamlessly with the Polkadot network, using APIs, SDKs, and integration tools for efficient application development
  • Cross-chain application developers - create applications that operate across multiple blockchains, using Polkadot\u2019s XCM and messaging tools to enable interoperability and asset transfers
"},{"location":"develop/toolkit/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/toolkit/api-libraries/","title":"API Libraries","text":"

Explore the powerful API libraries designed for interacting with the Polkadot network. These libraries offer developers versatile tools to build, query, and manage blockchain interactions. Whether you\u2019re working with JavaScript, TypeScript, Python, or RESTful services, they provide the flexibility to efficiently interact with and retrieve data from Polkadot-based chains.

"},{"location":"develop/toolkit/api-libraries/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/toolkit/api-libraries/#additional-resources","title":"Additional ResourcesUnderstand Chain DataNetwork Configurations","text":"

Familiarize yourself with the data provided by the APIs, including available calls, events, types, and storage items.

Obtain the necessary configurations and WSS endpoints to interact with the APIs on Polkadot networks.

"},{"location":"develop/toolkit/api-libraries/papi/","title":"Polkadot-API","text":""},{"location":"develop/toolkit/api-libraries/papi/#introduction","title":"Introduction","text":"

Polkadot-API (PAPI) is a set of libraries built to be modular, composable, and grounded in a \u201clight-client first\u201d approach. Its primary aim is to equip dApp developers with an extensive toolkit for building fully decentralized applications.

PAPI is optimized for light-client functionality, using the new JSON-RPC spec to support decentralized interactions fully. It provides strong TypeScript support with types and documentation generated directly from on-chain metadata, and it offers seamless access to storage reads, constants, transactions, events, and runtime calls. Developers can connect to multiple chains simultaneously and prepare for runtime updates through multi-descriptor generation and compatibility checks. PAPI is lightweight and performant, leveraging native BigInt, dynamic imports, and modular subpaths to avoid bundling unnecessary assets. It supports promise-based and observable-based APIs, integrates easily with Polkadot.js extensions, and offers signing options through browser extensions or private keys.

"},{"location":"develop/toolkit/api-libraries/papi/#get-started","title":"Get Started","text":""},{"location":"develop/toolkit/api-libraries/papi/#api-instantiation","title":"API Instantiation","text":"

To get started with the API, install the polkadot-api package using one of the following commands:

npmpnpmyarn
npm i polkadot-api\n
pnpm add polkadot-api\n
yarn add polkadot-api\n

Then, obtain the latest metadata from the target chain and generate the necessary types:

# Add the target chain\nnpx papi add dot -n polkadot\n

The papi add command initializes the library by generating the corresponding types needed for the chain. It registers the chain under a custom name (here, dot) and downloads the metadata from the Polkadot chain. You can replace dot with the name you prefer or with another chain if you want to add a different one. Once the latest metadata is downloaded, generate the required types:

# Generate the necessary types\nnpx papi\n

You can now set up a PolkadotClient with your chosen provider to begin interacting with the API. Choose from Smoldot via WebWorker, Node.js, or direct usage, or connect through the WSS provider. The examples below show how to configure each option for your setup.

Smoldot (WebWorker)Smoldot (Node.js)SmoldotWSS
// `dot` is the identifier assigned during `npx papi add`\nimport { dot } from '@polkadot-api/descriptors';\nimport { createClient } from 'polkadot-api';\nimport { getSmProvider } from 'polkadot-api/sm-provider';\nimport { chainSpec } from 'polkadot-api/chains/polkadot';\nimport { startFromWorker } from 'polkadot-api/smoldot/from-worker';\nimport SmWorker from 'polkadot-api/smoldot/worker?worker';\n\nconst worker = new SmWorker();\nconst smoldot = startFromWorker(worker);\nconst chain = await smoldot.addChain({ chainSpec });\n\n// Establish connection to the Polkadot relay chain\nconst client = createClient(getSmProvider(chain));\n\n// To interact with the chain, obtain the `TypedApi`, which provides\n// the necessary types for every API call on this chain\nconst dotApi = client.getTypedApi(dot);\n
// `dot` is the alias assigned during `npx papi add`\nimport { dot } from '@polkadot-api/descriptors';\nimport { createClient } from 'polkadot-api';\nimport { getSmProvider } from 'polkadot-api/sm-provider';\nimport { chainSpec } from 'polkadot-api/chains/polkadot';\nimport { startFromWorker } from 'polkadot-api/smoldot/from-node-worker';\nimport { fileURLToPath } from 'url';\nimport { Worker } from 'worker_threads';\n\n// Get the path for the worker file in ESM\nconst workerPath = fileURLToPath(\n  import.meta.resolve('polkadot-api/smoldot/node-worker'),\n);\n\nconst worker = new Worker(workerPath);\nconst smoldot = startFromWorker(worker);\nconst chain = await smoldot.addChain({ chainSpec });\n\n// Set up a client to connect to the Polkadot relay chain\nconst client = createClient(getSmProvider(chain));\n\n// To interact with the chain's API, use `TypedApi` for access to\n// all the necessary types and calls associated with this chain\nconst dotApi = client.getTypedApi(dot);\n
// `dot` is the alias assigned when running `npx papi add`\nimport { dot } from '@polkadot-api/descriptors';\nimport { createClient } from 'polkadot-api';\nimport { getSmProvider } from 'polkadot-api/sm-provider';\nimport { chainSpec } from 'polkadot-api/chains/polkadot';\nimport { start } from 'polkadot-api/smoldot';\n\n// Initialize Smoldot client\nconst smoldot = start();\nconst chain = await smoldot.addChain({ chainSpec });\n\n// Set up a client to connect to the Polkadot relay chain\nconst client = createClient(getSmProvider(chain));\n\n// Access the `TypedApi` to interact with all available chain calls and types\nconst dotApi = client.getTypedApi(dot);\n
// `dot` is the identifier assigned when executing `npx papi add`\nimport { dot } from '@polkadot-api/descriptors';\nimport { createClient } from 'polkadot-api';\n// Use this import for Node.js environments\nimport { getWsProvider } from 'polkadot-api/ws-provider/web';\nimport { withPolkadotSdkCompat } from 'polkadot-api/polkadot-sdk-compat';\n\n// Establish a connection to the Polkadot relay chain\nconst client = createClient(\n  // The Polkadot SDK nodes may have compatibility issues; using this enhancer is recommended.\n  // Refer to the Requirements page for additional details\n  withPolkadotSdkCompat(getWsProvider('wss://dot-rpc.stakeworld.io')),\n);\n\n// To interact with the chain, obtain the `TypedApi`, which provides\n// the types for all available calls in that chain\nconst dotApi = client.getTypedApi(dot);\n

Now that you have set up the client, you can interact with the chain by reading and sending transactions.

"},{"location":"develop/toolkit/api-libraries/papi/#reading-chain-data","title":"Reading Chain Data","text":"

The TypedApi provides a streamlined way to read blockchain data through three main interfaces, each designed for specific data access patterns:

  • Constants - access fixed values or configurations on the blockchain using the constants interface:

    const version = await typedApi.constants.System.Version();\n
  • Storage queries - retrieve stored values by querying the blockchain\u2019s storage via the query interface (an observable-based sketch follows this list):

    const asset = await typedApi.query.ForeignAssets.Asset.getValue(\n  token.location,\n  { at: 'best' },\n);\n
  • Runtime APIs - interact directly with runtime APIs using the apis interface:

    const metadata = await typedApi.apis.Metadata.metadata();\n
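
PAPI also exposes observable-based variants of these reads. The following is a minimal sketch, assuming the dotApi client created earlier and a hypothetical address placeholder, that watches an account's storage entry as new blocks arrive:

// Watch an account's System.Account entry at the best block;
// watchValue returns an observable that emits on every change
const subscription = dotApi.query.System.Account.watchValue(
  'INSERT_ADDRESS',
  'best',
).subscribe((account) => {
  console.log('Free balance:', account.data.free);
});

// Stop watching when no longer needed
subscription.unsubscribe();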

To learn more about the different actions you can perform with the TypedApi, refer to the TypedApi reference.

"},{"location":"develop/toolkit/api-libraries/papi/#sending-transactions","title":"Sending Transactions","text":"

In PAPI, the TypedApi provides the tx and txFromCallData methods to send transactions; a signing-and-submission sketch follows the list below.

  • The tx method allows you to directly send a transaction with the specified parameters by using the typedApi.tx.Pallet.Call pattern:

    const tx: Transaction = typedApi.tx.Pallet.Call({arg1, arg2, arg3});\n

    For instance, to execute the balances.transferKeepAlive call, you can use the following snippet:

    import { MultiAddress } from '@polkadot-api/descriptors';\n\nconst tx: Transaction = typedApi.tx.Balances.transfer_keep_alive({\n  dest: MultiAddress.Id('INSERT_DESTINATION_ADDRESS'),\n  value: BigInt(INSERT_VALUE),\n});\n

    Ensure you replace INSERT_DESTINATION_ADDRESS and INSERT_VALUE with the actual destination address and value, respectively.

  • The txFromCallData method allows you to send a transaction using the call data. This option accepts binary call data and constructs the transaction from it. It validates the input upon creation and will throw an error if invalid data is provided. The pattern is as follows:

    const callData = Binary.fromHex('0x...');\nconst tx: Transaction = typedApi.txFromCallData(callData);\n

    For instance, to execute a transaction using the call data, you can use the following snippet:

    const callData = Binary.fromHex('0x00002470617065726d6f6f6e');\nconst tx: Transaction = typedApi.txFromCallData(callData);\n
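
Once a transaction is created, it still needs to be signed and submitted. Below is a sketch assuming a browser environment with a Polkadot.js-compatible extension installed, using PAPI's pjs-signer subpath to obtain a signer for the tx built above:

import {
  getInjectedExtensions,
  connectInjectedExtension,
} from 'polkadot-api/pjs-signer';

// Connect to the first injected extension and take its first account
const extensions = getInjectedExtensions();
const extension = await connectInjectedExtension(extensions[0]);
const [account] = extension.getAccounts();

// Sign and submit the transaction created above; resolves once finalized
const result = await tx.signAndSubmit(account.polkadotSigner);
console.log('Tx hash:', result.txHash);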

For more information about sending transactions, refer to the Transactions page.

"},{"location":"develop/toolkit/api-libraries/papi/#where-to-go-next","title":"Where to Go Next","text":"

For an in-depth guide on how to use PAPI, refer to the official PAPI documentation.

"},{"location":"develop/toolkit/api-libraries/polkadot-js-api/","title":"Polkadot.js API","text":""},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#introduction","title":"Introduction","text":"

The Polkadot.js API uses JavaScript/TypeScript to interact with Polkadot SDK-based chains. It allows you to query nodes, read chain state, and submit transactions through a dynamic, auto-generated API interface.

"},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#dynamic-api-generation","title":"Dynamic API Generation","text":"

Unlike traditional static APIs, the Polkadot.js API generates its interfaces automatically when connecting to a node. Here's what happens when you connect:

  1. The API connects to your node
  2. It retrieves the chain's metadata
  3. Based on this metadata, it creates specific endpoints in this format: api.<type>.<module>.<method>
"},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#available-api-categories","title":"Available API Categories","text":"

You can access three main categories of chain interactions:

  • Runtime constants (api.consts)

    • Access runtime constants directly
    • Returns values immediately without function calls
    • Example - api.consts.balances.existentialDeposit
  • State queries (api.query)

    • Read chain state
    • Example - api.query.system.account(accountId)
  • Transactions (api.tx)

    • Submit extrinsics (transactions)
    • Example - api.tx.balances.transfer(accountId, value)

The available methods and interfaces will automatically reflect what's possible on your connected chain.

"},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#installation","title":"Installation","text":"

To add the Polkadot.js API to your project:

npmpnpmyarn
npm i @polkadot/api\n
pnpm add @polkadot/api\n
yarn add @polkadot/api\n

This command installs the latest stable release, which supports any Polkadot SDK-based chain.

Note

For more installation details, refer to the Installation section in the official Polkadot.js API documentation.

"},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#get-started","title":"Get Started","text":""},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#creating-an-api-instance","title":"Creating an API Instance","text":"

To interact with a Polkadot SDK-based chain, you must establish a connection through an API instance. The API provides methods for querying chain state, sending transactions, and subscribing to updates.

To create an API connection:

import { ApiPromise, WsProvider } from '@polkadot/api';\n\n// Create a WebSocket provider\nconst wsProvider = new WsProvider('wss://rpc.polkadot.io');\n\n// Initialize the API\nconst api = await ApiPromise.create({ provider: wsProvider });\n\n// Verify the connection by getting the chain's genesis hash\nconsole.log('Genesis Hash:', api.genesisHash.toHex());\n

Note

All await operations must be wrapped in an async function or block since the API uses promises for asynchronous operations.

"},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#reading-chain-data","title":"Reading Chain Data","text":"

The API provides several ways to read data from the chain (a subscription sketch follows this list). You can access:

  • Constants - values that are fixed in the runtime and don't change without a runtime upgrade

    // Get the minimum balance required for a new account\nconst minBalance = api.consts.balances.existentialDeposit.toNumber();\n
  • State - current chain state that updates with each block

    // Example address\nconst address = '5DTestUPts3kjeXSTMyerHihn1uwMfLj8vU8sqF7qYrFabHE';\n\n// Get current timestamp\nconst timestamp = await api.query.timestamp.now();\n\n// Get account information\nconst { nonce, data: balance } = await api.query.system.account(address);\n\nconsole.log(`\n  Timestamp: ${timestamp}\n  Free Balance: ${balance.free}\n  Nonce: ${nonce}\n`);\n
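
As noted above, state queries can also be consumed as subscriptions by passing a callback. A short sketch using the same api instance:

// Subscribe to new block headers; the callback runs for every new block
const unsub = await api.rpc.chain.subscribeNewHeads((header) => {
  console.log(`Chain is at block #${header.number}`);
});

// Invoke the returned function to cancel the subscription
unsub();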
"},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#sending-transactions","title":"Sending Transactions","text":"

Transactions (also called extrinsics) modify the chain state. Before sending a transaction, you need:

  • A funded account with sufficient balance to pay transaction fees
  • The account's keypair for signing

To make a transfer:

// Assuming you have an `alice` keypair from the Keyring\nconst recipient = 'INSERT_RECIPIENT_ADDRESS';\nconst amount = 'INSERT_VALUE'; // Amount in the smallest unit (e.g., Planck for DOT)\n\n// Sign and send a transfer\nconst txHash = await api.tx.balances\n  .transfer(recipient, amount)\n  .signAndSend(alice);\n\nconsole.log('Transaction Hash:', txHash);\n

Note

The alice keypair in the example comes from a Keyring object. See the Keyring documentation for details on managing keypairs.

"},{"location":"develop/toolkit/api-libraries/polkadot-js-api/#where-to-go-next","title":"Where to Go Next","text":"

For more detailed information about the Polkadot.js API, check the official documentation.

"},{"location":"develop/toolkit/api-libraries/py-substrate-interface/","title":"Python Substrate Interface","text":""},{"location":"develop/toolkit/api-libraries/py-substrate-interface/#introduction","title":"Introduction","text":"

The Python Substrate Interface is a powerful library that enables interaction with Polkadot SDK-based chains. It provides essential functionality for:

  • Querying on-chain storage
  • Composing and submitting extrinsics
  • SCALE encoding/decoding
  • Interacting with Substrate runtime metadata
  • Managing blockchain interactions through convenient utility methods
"},{"location":"develop/toolkit/api-libraries/py-substrate-interface/#installation","title":"Installation","text":"

Install the library using pip:

pip install substrate-interface\n

Note

For more installation details, refer to the Installation section in the official Python Substrate Interface documentation.

"},{"location":"develop/toolkit/api-libraries/py-substrate-interface/#get-started","title":"Get Started","text":"

This guide will walk you through the basic operations with the Python Substrate Interface: connecting to a node, reading chain state, and submitting transactions.

"},{"location":"develop/toolkit/api-libraries/py-substrate-interface/#establishing-connection","title":"Establishing Connection","text":"

The first step is to establish a connection to a Polkadot SDK-based node. You can connect to either a local or remote node:

from substrateinterface import SubstrateInterface\n\n# Connect to a node using websocket\nsubstrate = SubstrateInterface(\n    # For local node: \"ws://127.0.0.1:9944\"\n    # For Polkadot: \"wss://rpc.polkadot.io\"\n    # For Kusama: \"wss://kusama-rpc.polkadot.io\"\n    url=\"INSERT_WS_URL\"\n)\n\n# Verify connection\nprint(f\"Connected to chain: {substrate.chain}\")\n
"},{"location":"develop/toolkit/api-libraries/py-substrate-interface/#reading-chain-state","title":"Reading Chain State","text":"

You can query various on-chain storage items. To retrieve data, you need to specify three key pieces of information:

  • Pallet name - module or pallet that contains the storage item you want to access
  • Storage item - specific storage entry you want to query within the pallet
  • Required parameters - any parameters needed to retrieve the desired data

Here's an example of how to check an account's balance and other details:

# ...\n\n# Query account balance and info\naccount_info = substrate.query(\n    module=\"System\",  # The pallet name\n    storage_function=\"Account\",  # The storage item\n    params=[\"INSERT_ADDRESS\"],  # Account address in SS58 format\n)\n\n# Access account details from the result\nfree_balance = account_info.value[\"data\"][\"free\"]\nreserved = account_info.value[\"data\"][\"reserved\"]\nnonce = account_info.value[\"nonce\"]\n\nprint(\n    f\"\"\"\n    Account Details:\n    - Free Balance: {free_balance}\n    - Reserved: {reserved} \n    - Nonce: {nonce}\n    \"\"\"\n)\n
"},{"location":"develop/toolkit/api-libraries/py-substrate-interface/#submitting-transactions","title":"Submitting Transactions","text":"

To modify the chain state, you need to submit transactions (extrinsics). Before proceeding, ensure you have:

  • A funded account with sufficient balance to pay transaction fees
  • Access to the account's keypair

Here's how to create and submit a balance transfer:

#...\n\n# Compose the transfer call\ncall = substrate.compose_call(\n    call_module=\"Balances\",  # The pallet name\n    call_function=\"transfer_keep_alive\",  # The extrinsic function\n    call_params={\n        'dest': 'INSERT_ADDRESS',  # Recipient's address\n        'value': 'INSERT_VALUE'  # Amount in smallest unit (e.g., Planck for DOT)\n    }\n)\n\n# Create a signed extrinsic\nextrinsic = substrate.create_signed_extrinsic(\n    call=call, keypair=keypair  # Your keypair for signing\n)\n\n# Submit and wait for inclusion\nreceipt = substrate.submit_extrinsic(\n    extrinsic, wait_for_inclusion=True  # Wait until the transaction is in a block\n)\n\nif receipt.is_success:\n    print(\n        f\"\"\"\n        Transaction successful:\n        - Extrinsic Hash: {receipt.extrinsic_hash}\n        - Block Hash: {receipt.block_hash}\n        \"\"\"\n    )\nelse:\n    print(f\"Transaction failed: {receipt.error_message}\")\n

Note

The keypair object is essential for signing transactions. See the Keypair documentation for more details.

"},{"location":"develop/toolkit/api-libraries/py-substrate-interface/#where-to-go-next","title":"Where to Go Next","text":"

Now that you understand the basics, you can:

  • Explore more complex queries and transactions
  • Learn about batch transactions and utility functions
  • Discover how to work with custom pallets and types

For comprehensive reference materials and advanced features, visit the py-substrate-interface documentation.

"},{"location":"develop/toolkit/api-libraries/sidecar/","title":"Sidecar API","text":""},{"location":"develop/toolkit/api-libraries/sidecar/#introduction","title":"Introduction","text":"

The Sidecar REST API is a service that provides a REST interface for interacting with Polkadot SDK-based blockchains. With this API, developers can easily access a broad range of endpoints for nodes, accounts, transactions, parachains, and more.

Sidecar functions as a caching layer between your application and a Polkadot SDK-based node, offering standardized REST endpoints that simplify interactions without requiring complex, direct RPC calls. This approach is especially valuable for developers who prefer REST APIs or build applications in languages with limited WebSocket support.

Some of the key features of the Sidecar API include:

  • REST API interface - provides a familiar REST API interface for interacting with Polkadot SDK-based chains
  • Standardized endpoints - offers consistent endpoint formats across different chain implementations
  • Caching layer - acts as a caching layer to improve performance and reduce direct node requests
  • Multiple chain support - works with any Polkadot SDK-based chain, including Polkadot, Kusama, and custom chains
"},{"location":"develop/toolkit/api-libraries/sidecar/#installation","title":"Installation","text":"

To install Substrate API Sidecar, use one of the following commands:

npmpnpmyarn
npm install -g @substrate/api-sidecar\n
pnpm install -g @substrate/api-sidecar\n
yarn global add @substrate/api-sidecar\n

Note

Sidecar API requires Node.js version 18.14 LTS or higher. Verify your Node.js version:

node --version\n

If you need to install or update Node.js, visit the official Node.js website to download and install the latest LTS version.

You can confirm the installation by running:

substrate-api-sidecar --version\n

For more information about the Sidecar API installation, please refer to the official documentation.

"},{"location":"develop/toolkit/api-libraries/sidecar/#usage","title":"Usage","text":"

To use the Sidecar API, you have two options:

  • Local node - run a node locally, which Sidecar will connect to by default, requiring no additional configuration. To start, run:
    substrate-api-sidecar\n
  • Remote node - connect Sidecar to a remote node by specifying the RPC endpoint for that chain. For example, to access the endpoints associated with Polkadot Asset Hub:

    SAS_SUBSTRATE_URL=wss://polkadot-asset-hub-rpc.polkadot.io substrate-api-sidecar\n

    Note

    More configuration details are available in the Configuration section of the Sidecar API documentation.

Once the Sidecar API is running, you\u2019ll see output similar to this:

SAS_SUBSTRATE_URL=wss://polkadot-asset-hub-rpc.polkadot.io substrate-api-sidecar SAS: \ud83d\udce6 LOG: \u2705 LEVEL: \"info\" \u2705 JSON: false \u2705 FILTER_RPC: false \u2705 STRIP_ANSI: false \u2705 WRITE: false \u2705 WRITE_PATH: \"/opt/homebrew/lib/node_modules/@substrate/api-sidecar/build/src/logs\" \u2705 WRITE_MAX_FILE_SIZE: 5242880 \u2705 WRITE_MAX_FILES: 5 \ud83d\udce6 SUBSTRATE: \u2705 URL: \"wss://polkadot-asset-hub-rpc.polkadot.io\" \u2705 TYPES_BUNDLE: undefined \u2705 TYPES_CHAIN: undefined \u2705 TYPES_SPEC: undefined \u2705 TYPES: undefined \u2705 CACHE_CAPACITY: undefined \ud83d\udce6 EXPRESS: \u2705 BIND_HOST: \"127.0.0.1\" \u2705 PORT: 8080 \u2705 KEEP_ALIVE_TIMEOUT: 5000 \ud83d\udce6 METRICS: \u2705 ENABLED: false \u2705 PROM_HOST: \"127.0.0.1\" \u2705 PROM_PORT: 9100 \u2705 LOKI_HOST: \"127.0.0.1\" \u2705 LOKI_PORT: 3100 \u2705 INCLUDE_QUERYPARAMS: false 2024-11-06 08:06:01 info: Version: 19.3.0 2024-11-06 08:06:02 warn: API/INIT: RPC methods not decorated: chainHead_v1_body, chainHead_v1_call, chainHead_v1_continue, chainHead_v1_follow, chainHead_v1_header, chainHead_v1_stopOperation, chainHead_v1_storage, chainHead_v1_unfollow, chainHead_v1_unpin, chainSpec_v1_chainName, chainSpec_v1_genesisHash, chainSpec_v1_properties, transactionWatch_v1_submitAndWatch, transactionWatch_v1_unwatch, transaction_v1_broadcast, transaction_v1_stop 2024-11-06 08:06:02 info: Connected to chain Polkadot Asset Hub on the statemint client at wss://polkadot-asset-hub-rpc.polkadot.io 2024-11-06 08:06:02 info: Listening on http://127.0.0.1:8080/ 2024-11-06 08:06:02 info: Check the root endpoint (http://127.0.0.1:8080/) to see the available endpoints for the current node

With Sidecar running, you can access the exposed endpoints via a browser, Postman, curl, or your preferred tool.

"},{"location":"develop/toolkit/api-libraries/sidecar/#endpoints","title":"Endpoints","text":"

Sidecar API provides a set of REST endpoints that allow you to query different aspects of the chain, including blocks, accounts, and transactions. Each endpoint offers specific insights into the chain\u2019s state and activities.

For example, to retrieve the version of the node, use the /node/version endpoint:

curl -X 'GET' \\\n  'http://127.0.0.1:8080/node/version' \\\n  -H 'accept: application/json'\n

Note

Alternatively, you can access http://127.0.0.1:8080/node/version directly in a browser since it\u2019s a GET request.

In response, you\u2019ll see output similar to this (assuming you\u2019re connected to Polkadot Asset Hub):

curl -X 'GET' 'http://127.0.0.1:8080/node/version' -H 'accept: application/json' { \"clientVersion\": \"1.16.1-835e0767fe8\", \"clientImplName\": \"statemint\", \"chain\": \"Polkadot Asset Hub\" }
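
Endpoints can also be consumed directly from application code. A minimal sketch in JavaScript, assuming Sidecar is running locally on its default port, fetches the most recent block from the /blocks/head endpoint:

// Query a locally running Sidecar instance for the most recent block
const response = await fetch('http://127.0.0.1:8080/blocks/head');
const block = await response.json();
console.log('Block number:', block.number);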

For a complete list of available endpoints and their documentation, visit the Sidecar API list endpoints. You can learn about the endpoints and how to use them in your applications.

"},{"location":"develop/toolkit/api-libraries/sidecar/#where-to-go-next","title":"Where to Go Next","text":"

To dive deeper, refer to the official Sidecar documentation. This provides a comprehensive guide to the available configurations and advanced usage.

"},{"location":"develop/toolkit/integrations/","title":"Integrations","text":"

Polkadot offers a wide range of integrations that allow developers to enhance their decentralized applications (dApps) and leverage the full capabilities of the ecosystem. Whether you\u2019re looking to extend your application\u2019s functionality, integrate with other chains, or access specialized services, these integrations provide the tools and resources you need to build efficiently and effectively. Explore the available options to find the solutions that best suit your development needs.

"},{"location":"develop/toolkit/integrations/#key-integration-solutions","title":"Key Integration Solutions","text":"

Polkadot\u2019s ecosystem offers a variety of integrations designed to enhance dApp functionality, improve data management, and bridge the gap between on-chain and off-chain systems. These integrations provide the building blocks needed for creating more robust, efficient, and user-friendly decentralized applications.

Some of the available integrations are explained in this section.

"},{"location":"develop/toolkit/integrations/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/toolkit/integrations/indexers/","title":"Indexers","text":""},{"location":"develop/toolkit/integrations/indexers/#the-challenge-of-blockchain-data-access","title":"The Challenge of Blockchain Data Access","text":"

Blockchain data is inherently sequential and distributed, with information stored chronologically across numerous blocks. While retrieving data from a single block through JSON-RPC API calls is straightforward, more complex queries that span multiple blocks present significant challenges:

  • Data is scattered and unorganized across the blockchain
  • Retrieving large datasets can take days or weeks to sync
  • Complex operations (like aggregations, averages, or cross-chain queries) require additional processing
  • Direct blockchain queries can impact dApp performance and responsiveness
"},{"location":"develop/toolkit/integrations/indexers/#what-is-a-blockchain-indexer","title":"What is a Blockchain Indexer?","text":"

A blockchain indexer is a specialized infrastructure tool that processes, organizes, and stores blockchain data in an optimized format for efficient querying. Think of it as a search engine for blockchain data that:

  • Continuously monitors the blockchain for new blocks and transactions
  • Processes and categorizes this data according to predefined schemas
  • Stores the processed data in an easily queryable database
  • Provides efficient APIs (typically GraphQL) for data retrieval
"},{"location":"develop/toolkit/integrations/indexers/#indexer-implementations","title":"Indexer Implementations","text":"
  • Subsquid

    Subsquid is a data network that allows rapid and cost-efficient retrieval of blockchain data from 100+ chains using Subsquid's decentralized data lake and open-source SDK. In simple terms, Subsquid can be considered an ETL (extract, transform, and load) tool with a GraphQL server included. It enables comprehensive filtering, pagination, and even full-text search capabilities. Subsquid has native and full support for EVM and Substrate data, even within the same project.

    Reference

  • Subquery

    SubQuery is a fast, flexible, and reliable open-source decentralized data infrastructure network that provides both RPC and indexed data to consumers worldwide. It provides custom APIs for your web3 project across multiple supported chains.

    Reference

"},{"location":"develop/toolkit/integrations/oracles/","title":"Oracles","text":""},{"location":"develop/toolkit/integrations/oracles/#what-is-a-blockchain-oracle","title":"What is a Blockchain Oracle?","text":"

Oracles enable blockchains to access external data sources. Since blockchains operate as isolated networks, they cannot natively interact with external systems - this limitation is known as the \"blockchain oracle problem.\" Oracles solve this by extracting data from external sources (like APIs, IoT devices, or other blockchains), validating it, and submitting it on-chain.

While simple oracle implementations may rely on a single trusted provider, more sophisticated solutions use decentralized networks where multiple providers stake assets and reach consensus on data validity. Typical applications include DeFi price feeds, weather data for insurance contracts, and cross-chain asset verification.

"},{"location":"develop/toolkit/integrations/oracles/#oracle-implementations","title":"Oracle Implementations","text":"
  • Acurast

    Acurast is a decentralized, serverless cloud platform that uses a distributed network of mobile devices for oracle services, addressing centralized trust and data ownership issues. In the Polkadot ecosystem, it allows developers to define off-chain data and computation needs, which are processed by these devices acting as decentralized oracle nodes, delivering results to Substrate (Wasm) and EVM environments.

    Reference

  • Chainlink

    Chainlink is a decentralized oracle network that brings external data onto blockchains. It acts as a secure bridge between traditional data sources and blockchain networks, enabling access to real-world information reliably. In the Polkadot ecosystem, Chainlink provides the Chainlink Feed Pallet, a Polkadot SDK-based oracle module that enables access to price reference data across your runtime logic.

    Reference

"},{"location":"develop/toolkit/integrations/wallets/","title":"Wallets","text":""},{"location":"develop/toolkit/integrations/wallets/#what-is-a-blockchain-wallet","title":"What is a Blockchain Wallet?","text":"

A wallet serves as your gateway to interacting with blockchain networks. Rather than storing funds, wallets secure your private keys, controlling access to your blockchain assets. Your private key provides complete control over all permitted transactions on your blockchain account, making it essential to keep it secure.

Wallet types fall into two categories based on their connection to the internet:

  • Hot wallets - online storage through websites, browser extensions, or smartphone apps
  • Cold wallets - offline storage using hardware devices or air-gapped systems
"},{"location":"develop/toolkit/integrations/wallets/#hot-wallets","title":"Hot Wallets","text":"
  • Nova Wallet

    A non-custodial, mobile-first wallet for managing assets and interacting with the Polkadot and Kusama ecosystems. It supports staking, governance, cross-chain transfers, and crowdloans. With advanced features, seamless multi-network support, and strong security, Nova Wallet empowers users to explore the full potential of Polkadot parachains on the go.

    Reference

  • Talisman

    A non-custodial web browser extension that allows you to manage your portfolio and interact with Polkadot and Ethereum applications. It supports Web3 apps, asset storage, and account management across over 150 Polkadot SDK-based and EVM networks. Features include NFT management, Ledger support, fiat on-ramp, and portfolio tracking.

    Reference

  • Subwallet

    A non-custodial web browser extension and mobile wallet for Polkadot and Ethereum. It lets you track, send, receive, and monitor multi-chain assets on 150+ networks, and import accounts via seed phrase, private key, QR code, or JSON file. Features include token and NFT import, read-only accounts, XCM transfers, NFT management, Parity Signer and Ledger support, light client support, EVM dApp support, MetaMask compatibility, custom endpoints, fiat on-ramp, phishing detection, and transaction history.

    Reference

"},{"location":"develop/toolkit/integrations/wallets/#cold-wallets","title":"Cold Wallets","text":"
  • Ledger

    A hardware wallet that securely stores cryptocurrency private keys offline, protecting them from online threats. Its secure chip and the Ledger Live app enable safe transactions and asset management while keeping keys protected.

    Reference

  • Polkadot Vault

    This cold storage solution lets you use a phone in airplane mode as an air-gapped wallet, turning any spare phone, tablet, or iOS/Android device into a hardware wallet.

    Reference

"},{"location":"develop/toolkit/interoperability/","title":"Interoperability","text":"

Polkadot's XCM tooling ecosystem redefines the boundaries of cross-chain communication and asset movement. With unparalleled flexibility and scalability, these advanced tools empower developers to build decentralized applications that connect parachains, relay chains, and external networks. By bridging siloed blockchains, Polkadot paves the way for a unified, interoperable ecosystem that accelerates innovation and collaboration.

From enabling cross-chain messaging to facilitating secure asset transfers and integrating with external blockchains, Polkadot's XCM tools serve as the cornerstone for next-generation blockchain solutions. These resources not only enhance developer workflows but also lower technical barriers, unlocking opportunities for scalable, interconnected systems.

Whether you're a blockchain pioneer or an emerging builder, Polkadot's tools provide the foundation to create impactful, future-ready applications.

"},{"location":"develop/toolkit/interoperability/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/toolkit/interoperability/xcm-tools/","title":"XCM Tools","text":""},{"location":"develop/toolkit/interoperability/xcm-tools/#introduction","title":"Introduction","text":"

As described in the Interoperability section, XCM (Cross-Consensus Messaging) is a protocol used in the Polkadot and Kusama ecosystems to enable communication and interaction between chains. It facilitates cross-chain communication, allowing assets, data, and messages to flow seamlessly across the ecosystem.

As XCM is central to enabling communication between blockchains, developers need robust tools to help interact with, build, and test XCM messages. Several XCM tools simplify working with the protocol by providing libraries, frameworks, and utilities that enhance the development process, ensuring that applications built within the Polkadot ecosystem can efficiently use cross-chain functionalities.

"},{"location":"develop/toolkit/interoperability/xcm-tools/#popular-xcm-tools","title":"Popular XCM Tools","text":""},{"location":"develop/toolkit/interoperability/xcm-tools/#moonsong-labs-xcm-tools","title":"Moonsong Labs XCM Tools","text":"

Moonsong Labs XCM Tools provides a collection of scripts for managing and testing XCM operations between Polkadot SDK-based runtimes. These tools allow performing tasks like asset registration, channel setup, and XCM initialization. Key features include:

  • Asset registration - registers assets, setting units per second (up-front fees), and configuring error (revert) codes
  • XCM initializer - initializes XCM, sets default XCM versions, and configures revert codes for XCM-related precompiles
  • HRMP manipulator - manages HRMP channel actions, including opening, accepting, or closing channels
  • XCM-Transactor-Info-Setter - configures transactor information, including extra weight and fee settings
  • Decode XCM - decodes XCM messages on the relay chain or parachains to help interpret cross-chain communication

To get started, clone the repository and install the required dependencies:

git clone https://github.com/Moonsong-Labs/xcm-tools && \ncd xcm-tools &&\nyarn install\n

For a full overview of each script, visit the scripts directory or refer to the official documentation on GitHub.

"},{"location":"develop/toolkit/interoperability/xcm-tools/#paraspell","title":"ParaSpell","text":"

ParaSpell is a collection of open-source XCM tools designed to streamline cross-chain asset transfers and interactions within the Polkadot and Kusama ecosystems. It equips developers with an intuitive interface to manage and optimize XCM-based functionalities. Key tools included in ParaSpell are:

  • XCM SDK - provides a unified layer to incorporate XCM into decentralized applications, simplifying complex cross-chain interactions
  • XCM API - offers an efficient, package-free approach to integrating XCM functionality while offloading heavy computing tasks, minimizing costs and improving application performance
  • XCM router - enables cross-chain asset swaps in a single command, allowing developers to send one asset type (such as DOT on Polkadot) and receive a different asset on another chain (like ASTR on Astar)
  • XCM analyser - decodes and translates complex XCM multilocation data into readable information, supporting easier troubleshooting and debugging
  • XCM visualizator - a tool designed to give developers a clear, interactive view of XCM activity across the Polkadot ecosystem, providing insights into cross-chain communication flow

ParaSpell's tools make it simple for developers to build, test, and deploy cross-chain solutions without needing extensive knowledge of the XCM protocol. With features like message composition, decoding, and practical utility functions for parachain interactions, ParaSpell is especially useful for debugging and optimizing cross-chain communications.

"},{"location":"develop/toolkit/interoperability/xcm-tools/#astar-xcm-tools","title":"Astar XCM Tools","text":"

The Astar parachain offers a crate with a set of utilities for interacting with the XCM protocol. The xcm-tools crate provides a straightforward method for users to locate a sovereign account or calculate an XC20 asset ID. The commands included in the xcm-tools crate allow users to perform the following tasks:

  • Sovereign accounts - obtain the sovereign account address for any parachain, either on the Relay Chain or for sibling parachains, using a simple command
  • XC20 EVM addresses - generate XC20-compatible EVM addresses for assets by entering the asset ID, making it easy to integrate assets across EVM-compatible environments
  • Remote accounts - retrieve remote account addresses needed for multi-location compatibility, using flexible options to specify account types and parachain IDs

To start using these tools, clone the Astar repository and compile the xcm-tools package:

git clone https://github.com/AstarNetwork/Astar &&\ncd Astar &&\ncargo build --release -p xcm-tools\n

After compiling, verify the setup with the following command:

./target/release/xcm-tools --help\n
For more details on using Astar xcm-tools, consult the official documentation.

"},{"location":"develop/toolkit/interoperability/xcm-tools/#chopsticks","title":"Chopsticks","text":"

The Chopsticks library provides XCM functionality for testing XCM messages across networks, enabling you to fork multiple parachains along with a relay chain. For further details, see the Chopsticks documentation about XCM.

"},{"location":"develop/toolkit/interoperability/asset-transfer-api/","title":"Asset Transfer API","text":"

The Asset Transfer API is a library designed to streamline asset transfers for Polkadot SDK-based chains, offering methods for both cross-chain and local transactions.

"},{"location":"develop/toolkit/interoperability/asset-transfer-api/#what-can-i-do-with-the-asset-transfer-api","title":"What Can I Do with the Asset Transfer API?","text":"
  • Facilitate cross-chain transfers to and from the relay chain, system chains, and parachains
  • Facilitate local asset transfers
  • Initiate liquid pool asset transfers in Asset Hub
  • Claim trapped assets
  • Retrieve fee information
"},{"location":"develop/toolkit/interoperability/asset-transfer-api/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/toolkit/interoperability/asset-transfer-api/#additional-resources","title":"Additional ResourcesView the Source CodeLearn Through Examples","text":"

Explore the Asset Transfer API repository on GitHub to familiarize yourself with the source code.

Check out the provided examples on GitHub to get a better understanding of how to implement the Asset Transfer API.

"},{"location":"develop/toolkit/interoperability/asset-transfer-api/overview/","title":"Asset Transfer API","text":""},{"location":"develop/toolkit/interoperability/asset-transfer-api/overview/#introduction","title":"Introduction","text":"

Asset Transfer API, a tool developed and maintained by Parity, is a specialized library designed to streamline asset transfers for Polkadot SDK-based blockchains. This API provides a simplified set of methods for users to:

  • Execute asset transfers to other parachains or locally within the same chain
  • Facilitate transactions involving system parachains like Asset Hub (Polkadot and Kusama)

Using this API, developers can manage asset transfers more efficiently, reducing the complexity of cross-chain transactions and enabling smoother operations within the ecosystem.

For additional support and information, please reach out through GitHub Issues.

"},{"location":"develop/toolkit/interoperability/asset-transfer-api/overview/#prerequisites","title":"Prerequisites","text":"

Before you begin, ensure you have the following installed:

  • Node.js (recommended version 21 or greater)
  • Package manager - npm should be installed with Node.js by default. Alternatively, you can use other package managers like Yarn
"},{"location":"develop/toolkit/interoperability/asset-transfer-api/overview/#install-asset-transfer-api","title":"Install Asset Transfer API","text":"

To use asset-transfer-api, you need a TypeScript project. If you don't have one, you can create a new one:

  1. Create a new directory for your project:

    mkdir my-asset-transfer-project \\\n&& cd my-asset-transfer-project\n
  2. Initialize a new TypeScript project:

    npm init -y \\\n&& npm install typescript ts-node @types/node --save-dev \\\n&& npx tsc --init\n

Once you have a project set up, you can install the asset-transfer-api package:

npm install @substrate/asset-transfer-api@v0.3.1\n

Note

This documentation covers version v0.3.1 of Asset Transfer API.

"},{"location":"develop/toolkit/interoperability/asset-transfer-api/overview/#set-up-asset-transfer-api","title":"Set Up Asset Transfer API","text":"

To initialize the Asset Transfer API, you need three key components:

  • A Polkadot.js API instance
  • The specName of the chain
  • The XCM version to use
"},{"location":"develop/toolkit/interoperability/asset-transfer-api/overview/#using-helper-function-from-library","title":"Using Helper Function from Library","text":"

Leverage the constructApiPromise helper function provided by the library for the simplest setup process. It not only constructs a Polkadot.js ApiPromise but also automatically retrieves the chain's specName and fetches a safe XCM version. By using this function, developers can significantly reduce boilerplate code and potential configuration errors, making the initial setup both quicker and more robust.

import {\n  AssetTransferApi,\n  constructApiPromise,\n} from '@substrate/asset-transfer-api';\n\nasync function main() {\n  const { api, specName, safeXcmVersion } = await constructApiPromise(\n    'INSERT_WEBSOCKET_URL',\n  );\n\n  const assetsApi = new AssetTransferApi(api, specName, safeXcmVersion);\n\n  // Your code using assetsApi goes here\n}\n\nmain();\n

Note

The code example is enclosed in an async main function to provide the necessary asynchronous context. However, you can use the code directly if you're already working within an async environment. The key is to ensure you're in an async context when working with these asynchronous operations, regardless of your specific setup.

"},{"location":"develop/toolkit/interoperability/asset-transfer-api/overview/#asset-transfer-api-reference","title":"Asset Transfer API Reference","text":"

For detailed information on the Asset Transfer API, including available methods, data types, and functionalities, refer to the Asset Transfer API Reference section. This resource provides in-depth explanations and technical specifications to help you integrate and utilize the API effectively.

"},{"location":"develop/toolkit/interoperability/asset-transfer-api/overview/#examples","title":"Examples","text":""},{"location":"develop/toolkit/interoperability/asset-transfer-api/overview/#relay-to-system-parachain-transfer","title":"Relay to System Parachain Transfer","text":"

This example demonstrates how to initiate a cross-chain token transfer from a relay chain to a system parachain. Specifically, 1 WND will be transferred from a Westend (relay chain) account to a Westmint (system parachain) account.

import {\n  AssetTransferApi,\n  constructApiPromise,\n} from '@substrate/asset-transfer-api';\n\nasync function main() {\n  const { api, specName, safeXcmVersion } = await constructApiPromise(\n    'wss://westend-rpc.polkadot.io',\n  );\n  const assetApi = new AssetTransferApi(api, specName, safeXcmVersion);\n  let callInfo;\n  try {\n    callInfo = await assetApi.createTransferTransaction(\n      '1000',\n      '5EWNeodpcQ6iYibJ3jmWVe85nsok1EDG8Kk3aFg8ZzpfY1qX',\n      ['WND'],\n      ['1000000000000'],\n      {\n        format: 'call',\n        xcmVersion: safeXcmVersion,\n      },\n    );\n\n    console.log(`Call data:\\n${JSON.stringify(callInfo, null, 4)}`);\n  } catch (e) {\n    console.error(e);\n    throw Error(e as string);\n  }\n\n  const decoded = assetApi.decodeExtrinsic(callInfo.tx, 'call');\n  console.log(`\\nDecoded tx:\\n${JSON.stringify(JSON.parse(decoded), null, 4)}`);\n}\n\nmain()\n  .catch((err) => console.error(err))\n  .finally(() => process.exit());\n

After running the script, you'll see the following output in the terminal, which shows the call data for the cross-chain transfer and its decoded extrinsic details:

ts-node relayToSystem.ts Call data: { \"origin\": \"westend\", \"dest\": \"westmint\", \"direction\": \"RelayToSystem\", \"xcmVersion\": 3, \"method\": \"transferAssets\", \"format\": \"call\", \"tx\": \"0x630b03000100a10f03000101006c0c32faf970eacb2d4d8e538ac0dab3642492561a1be6f241c645876c056c1d030400000000070010a5d4e80000000000\" } Decoded tx: { \"args\": { \"dest\": { \"V3\": { \"parents\": \"0\", \"interior\": { \"X1\": { \"Parachain\": \"1,000\" } } } }, \"beneficiary\": { \"V3\": { \"parents\": \"0\", \"interior\": { \"X1\": { \"AccountId32\": { \"network\": null, \"id\": \"0x6c0c32faf970eacb2d4d8e538ac0dab3642492561a1be6f241c645876c056c1d\" } } } } }, \"assets\": { \"V3\": [ { \"id\": { \"Concrete\": { \"parents\": \"0\", \"interior\": \"Here\" } }, \"fun\": { \"Fungible\": \"1,000,000,000,000\" } } ] }, \"fee_asset_item\": \"0\", \"weight_limit\": \"Unlimited\" }, \"method\": \"transferAssets\", \"section\": \"xcmPallet\" }"},{"location":"develop/toolkit/interoperability/asset-transfer-api/overview/#local-parachain-transfer","title":"Local Parachain Transfer","text":"

The following example demonstrates a local GLMR transfer within Moonbeam, using the balances pallet. It transfers 1 GLMR token from one account to another, with both sender and recipient located on the same parachain.

import {\n  AssetTransferApi,\n  constructApiPromise,\n} from '@substrate/asset-transfer-api';\n\nasync function main() {\n  const { api, specName, safeXcmVersion } = await constructApiPromise(\n    'wss://wss.api.moonbeam.network',\n  );\n  const assetApi = new AssetTransferApi(api, specName, safeXcmVersion);\n\n  let callInfo;\n  try {\n    callInfo = await assetApi.createTransferTransaction(\n      '2004',\n      '0xF977814e90dA44bFA03b6295A0616a897441aceC',\n      [],\n      ['1000000000000000000'],\n      {\n        format: 'call',\n        keepAlive: true,\n      },\n    );\n\n    console.log(`Call data:\\n${JSON.stringify(callInfo, null, 4)}`);\n  } catch (e) {\n    console.error(e);\n    throw Error(e as string);\n  }\n\n  const decoded = assetApi.decodeExtrinsic(callInfo.tx, 'call');\n  console.log(`\\nDecoded tx:\\n${JSON.stringify(JSON.parse(decoded), null, 4)}`);\n}\n\nmain()\n  .catch((err) => console.error(err))\n  .finally(() => process.exit());\n

Upon executing this script, the terminal will display the following output, illustrating the encoded extrinsic for the local transfer and its corresponding decoded format:

ts-node localParachainTx.ts Call data: { \"origin\": \"moonbeam\", \"dest\": \"moonbeam\", \"direction\": \"local\", \"xcmVersion\": null, \"method\": \"balances::transferKeepAlive\", \"format\": \"call\", \"tx\": \"0x0a03f977814e90da44bfa03b6295a0616a897441acec821a0600\" } Decoded tx: { \"args\": { \"dest\": \"0xF977814e90dA44bFA03b6295A0616a897441aceC\", \"value\": \"1,000,000,000,000,000,000\" }, \"method\": \"transferKeepAlive\", \"section\": \"balances\" }"},{"location":"develop/toolkit/interoperability/asset-transfer-api/overview/#parachain-to-parachain-transfer","title":"Parachain to Parachain Transfer","text":"

This example demonstrates creating a cross-chain asset transfer between two parachains. It shows how to send vMOVR and vBNC from a Moonriver account to a Bifrost Kusama account using the safe XCM version. It connects to Moonriver, initializes the API, and uses the createTransferTransaction method to prepare a transaction.

import {\n  AssetTransferApi,\n  constructApiPromise,\n} from '@substrate/asset-transfer-api';\n\nasync function main() {\n  const { api, specName, safeXcmVersion } = await constructApiPromise(\n    'wss://moonriver.public.blastapi.io',\n  );\n  const assetApi = new AssetTransferApi(api, specName, safeXcmVersion);\n  let callInfo;\n  try {\n    callInfo = await assetApi.createTransferTransaction(\n      '2001',\n      '0xc4db7bcb733e117c0b34ac96354b10d47e84a006b9e7e66a229d174e8ff2a063',\n      ['vMOVR', '72145018963825376852137222787619937732'],\n      ['1000000', '10000000000'],\n      {\n        format: 'call',\n        xcmVersion: safeXcmVersion,\n      },\n    );\n\n    console.log(`Call data:\\n${JSON.stringify(callInfo, null, 4)}`);\n  } catch (e) {\n    console.error(e);\n    throw Error(e as string);\n  }\n\n  const decoded = assetApi.decodeExtrinsic(callInfo.tx, 'call');\n  console.log(`\\nDecoded tx:\\n${JSON.stringify(JSON.parse(decoded), null, 4)}`);\n}\n\nmain()\n  .catch((err) => console.error(err))\n  .finally(() => process.exit());\n

After running this script, you'll see the following output in your terminal. This output presents the encoded extrinsic for the cross-chain message, along with its decoded format, providing a clear view of the transaction details.

ts-node paraToPara.ts Call data: { \"origin\": \"moonriver\", \"dest\": \"bifrost\", \"direction\": \"ParaToPara\", \"xcmVersion\": 2, \"method\": \"transferMultiassets\", \"format\": \"call\", \"tx\": \"0x6a05010800010200451f06080101000700e40b540200010200451f0608010a0002093d000000000001010200451f0100c4db7bcb733e117c0b34ac96354b10d47e84a006b9e7e66a229d174e8ff2a06300\" } Decoded tx: { \"args\": { \"assets\": { \"V2\": [ { \"id\": { \"Concrete\": { \"parents\": \"1\", \"interior\": { \"X2\": [ { \"Parachain\": \"2,001\" }, { \"GeneralKey\": \"0x0101\" } ] } } }, \"fun\": { \"Fungible\": \"10,000,000,000\" } }, { \"id\": { \"Concrete\": { \"parents\": \"1\", \"interior\": { \"X2\": [ { \"Parachain\": \"2,001\" }, { \"GeneralKey\": \"0x010a\" } ] } } }, \"fun\": { \"Fungible\": \"1,000,000\" } } ] }, \"fee_item\": \"0\", \"dest\": { \"V2\": { \"parents\": \"1\", \"interior\": { \"X2\": [ { \"Parachain\": \"2,001\" }, { \"AccountId32\": { \"network\": \"Any\", \"id\": \"0xc4db7bcb733e117c0b34ac96354b10d47e84a006b9e7e66a229d174e8ff2a063\" } } ] } } }, \"dest_weight_limit\": \"Unlimited\" }, \"method\": \"transferMultiassets\", \"section\": \"xTokens\" }"},{"location":"develop/toolkit/interoperability/asset-transfer-api/reference/","title":"Asset Transfer API Reference","text":"
  • Install the Asset Transfer API

    Learn how to install asset-transfer-api into a new or existing project.

    Get started

  • Dive in with a tutorial

    Ready to start coding? Follow along with a step-by-step tutorial.

    How to use the Asset Transfer API

"},{"location":"develop/toolkit/interoperability/asset-transfer-api/reference/#asset-transfer-api-class","title":"Asset Transfer API Class","text":"

Holds open an API connection to a specified chain within the ApiPromise to help construct transactions for assets and estimate fees.

For a more in-depth explanation of the Asset Transfer API class structure, check the source code.

"},{"location":"develop/toolkit/interoperability/asset-transfer-api/reference/#methods","title":"Methods","text":""},{"location":"develop/toolkit/interoperability/asset-transfer-api/reference/#create-transfer-transaction","title":"Create Transfer Transaction","text":"

Generates an XCM transaction for transferring assets between chains. It simplifies the process by inferring what type of transaction is required given the inputs, ensuring that the assets are valid, and that the transaction details are correctly formatted.

After obtaining the transaction, you must handle the signing and submission process separately.
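
For example, when the transaction is constructed with the 'submittable' format, the returned tx is a Polkadot.js SubmittableExtrinsic that can be signed and sent directly. The following is a minimal sketch of that flow, assuming a funded development account (//Alice) on a test network:

import {\n  AssetTransferApi,\n  constructApiPromise,\n} from '@substrate/asset-transfer-api';\nimport { Keyring } from '@polkadot/keyring';\n\nasync function main() {\n  const { api, specName, safeXcmVersion } = await constructApiPromise(\n    'wss://westend-rpc.polkadot.io',\n  );\n  const assetApi = new AssetTransferApi(api, specName, safeXcmVersion);\n\n  // Construct the transaction in 'submittable' format so the returned\n  // tx can be signed and submitted directly\n  const callInfo = await assetApi.createTransferTransaction(\n    '1000',\n    '5EWNeodpcQ6iYibJ3jmWVe85nsok1EDG8Kk3aFg8ZzpfY1qX',\n    ['WND'],\n    ['1000000000000'],\n    {\n      format: 'submittable',\n      xcmVersion: safeXcmVersion,\n    },\n  );\n\n  // Sign and send with a local keypair (replace //Alice with a funded account)\n  const keyring = new Keyring({ type: 'sr25519' });\n  const sender = keyring.addFromUri('//Alice');\n  await callInfo.tx.signAndSend(sender, ({ status }) => {\n    console.log(`Tx status: ${status.type}`);\n  });\n}\n\nmain().catch((err) => console.error(err));\n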

public async createTransferTransaction<T extends Format>(\n  destChainId: string,\n  destAddr: string,\n  assetIds: string[],\n  amounts: string[],\n  opts: TransferArgsOpts<T> = {}\n): Promise<TxResult<T>>;\n
Request parameters

destChainId string required

ID of the destination chain ('0' for relay chain, other values for parachains).

destAddr string required

Address of the recipient account on the destination chain.

assetIds string[] required

Array of asset IDs to be transferred.

When asset IDs are provided, the API dynamically selects the appropriate pallet for the current chain to handle these specific assets. If the array is empty, the API defaults to using the balances pallet.

amounts string[] required

Array of amounts corresponding to each asset in assetIds.

opts TransferArgsOpts<T>

Options for customizing the transfer transaction. These options allow you to specify the transaction format, fee payment details, weight limits, XCM versions, and more; an illustrative example follows the field list.

Show more

format T extends Format

Specifies the format for returning a transaction.

Type Format
export type Format = 'payload' | 'call' | 'submittable';\n

paysWithFeeOrigin string

The Asset ID to pay fees on the current common good parachain. The defaults are as follows:

  • Polkadot Asset Hub - 'DOT'
  • Kusama Asset Hub - 'KSM'

paysWithFeeDest string

Asset ID to pay fees on the destination parachain.

weightLimit { refTime?: string, proofSize?: string }

Custom weight limit option. If not provided, it will default to unlimited.

xcmVersion number

Sets the XCM version for message construction. If this is not present, a supported version will be queried; if there is no supported version, a safe version will be queried.

keepAlive boolean

Enables transferKeepAlive for local asset transfers. If true, the constructed local transfer will use transferKeepAlive instead of transfer.

transferLiquidToken boolean

Declares if this will transfer liquidity tokens. Default is false.

assetTransferType string

The XCM transfer type used to transfer assets. The AssetTransferType type defines the possible values for this parameter.

Type AssetTransferType

remoteReserveAssetTransferTypeLocation string

The remote reserve location for the XCM transfer. Should be provided when specifying an assetTransferType of RemoteReserve.

feesTransferType string

XCM TransferType used to pay fees for XCM transfer. The AssetTransferType type defines the possible values for this parameter.

Type AssetTransferType

remoteReserveFeesTransferTypeLocation string

The remote reserve location for the XCM transfer fees. Should be provided when specifying a feesTransferType of RemoteReserve.

customXcmOnDest string

A custom XCM message to be executed on the destination chain. Should be provided if a custom XCM message is needed after transferring assets. Defaults to:

Xcm(vec![DepositAsset { assets: Wild(AllCounted(assets.len())), beneficiary }])\n
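
For instance, a TransferArgsOpts object combining several of the fields above might look like the following sketch (the values are illustrative only):

const opts = {\n  format: 'call',\n  xcmVersion: 3,\n  // Use transferKeepAlive rather than transfer for local transfers\n  keepAlive: true,\n  // Illustrative custom weight limit; both values are strings\n  weightLimit: {\n    refTime: '10000000000',\n    proofSize: '100000',\n  },\n};\n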
Response parameters

Promise<TxResult<T>>

A promise containing the result of constructing the transaction.

Show more

dest string

The destination specName of the transaction.

origin string

The origin specName of the transaction.

format Format | 'local'

The format type the transaction is output in.

Type Format
export type Format = 'payload' | 'call' | 'submittable';\n

xcmVersion number | null

The XCM version that was used to construct the transaction.

direction Direction | 'local'

The direction of the cross-chain transfer.

Enum Direction values

Local

Local transaction.

SystemToPara

System parachain to parachain.

SystemToRelay

System parachain to relay chain.

SystemToSystem

System parachain to system parachain.

SystemToBridge

System parachain to an external GlobalConsensus chain.

ParaToPara

Parachain to Parachain.

ParaToRelay

Parachain to Relay chain.

ParaToSystem

Parachain to System parachain.

RelayToSystem

Relay chain to system parachain.

RelayToPara

Relay chain to Parachain.

RelayToBridge

Relay chain to an external GlobalConsensus chain.

method Methods

The method used in the transaction.

Type Methods
type Methods =\n  | LocalTransferTypes\n  | 'transferAssets'\n  | 'transferAssetsUsingTypeAndThen'\n  | 'limitedReserveTransferAssets'\n  | 'limitedTeleportAssets'\n  | 'transferMultiasset'\n  | 'transferMultiassets'\n  | 'transferMultiassetWithFee'\n  | 'claimAssets';\n
Type LocalTransferTypes
type LocalTransferTypes =\n  | 'assets::transfer'\n  | 'assets::transferKeepAlive'\n  | 'foreignAssets::transfer'\n  | 'foreignAssets::transferKeepAlive'\n  | 'balances::transfer'\n  | 'balances::transferKeepAlive'\n  | 'poolAssets::transfer'\n  | 'poolAssets::transferKeepAlive'\n  | 'tokens::transfer'\n  | 'tokens::transferKeepAlive';\n

tx ConstructedFormat<T>

The constructed transaction.

Type ConstructedFormat<T> Example

Request

import {\n  AssetTransferApi,\n  constructApiPromise,\n} from '@substrate/asset-transfer-api';\n\nasync function main() {\n  const { api, specName, safeXcmVersion } = await constructApiPromise(\n    'wss://wss.api.moonbeam.network',\n  );\n  const assetsApi = new AssetTransferApi(api, specName, safeXcmVersion);\n\n  let callInfo;\n  try {\n    callInfo = await assetsApi.createTransferTransaction(\n      '2004',\n      '0xF977814e90dA44bFA03b6295A0616a897441aceC',\n      [],\n      ['1000000000000000000'],\n      {\n        format: 'call',\n        keepAlive: true,\n      },\n    );\n\n    console.log(`Call data:\\n${JSON.stringify(callInfo, null, 4)}`);\n  } catch (e) {\n    console.error(e);\n    throw Error(e as string);\n  }\n}\n\nmain()\n  .catch((err) => console.error(err))\n  .finally(() => process.exit());\n

Response

Call data: { \"origin\": \"moonbeam\", \"dest\": \"moonbeam\", \"direction\": \"local\", \"xcmVersion\": null, \"method\": \"balances::transferKeepAlive\", \"format\": \"call\", \"tx\": \"0x0a03f977814e90da44bfa03b6295a0616a897441acec821a0600\" }"},{"location":"develop/toolkit/interoperability/asset-transfer-api/reference/#claim-assets","title":"Claim Assets","text":"

Creates a local XCM transaction to retrieve trapped assets. This function can be used to claim assets either locally on a system parachain, on the relay chain, or on any chain that supports the claimAssets runtime call.

public async claimAssets<T extends Format>(\n  assetIds: string[],\n  amounts: string[],\n  beneficiary: string,\n  opts: TransferArgsOpts<T>\n): Promise<TxResult<T>>;\n
Request parameters

assetIds string[] required

Array of asset IDs to be claimed from the AssetTrap.

amounts string[] required

Array of amounts corresponding to each asset in assetIds.

beneficiary string required

Address of the account to receive the trapped assets.

opts TransferArgsOpts<T>

Options for customizing the claim assets transaction. These options allow you to specify the transaction format, fee payment details, weight limits, XCM versions, and more.

Show more

format T extends Format

Specifies the format for returning a transaction.

Type Format
export type Format = 'payload' | 'call' | 'submittable';\n

paysWithFeeOrigin string

The Asset ID to pay fees on the current common good parachain. The defaults are as follows:

  • Polkadot Asset Hub - 'DOT'
  • Kusama Asset Hub - 'KSM'

paysWithFeeDest string

Asset ID to pay fees on the destination parachain.

weightLimit { refTime?: string, proofSize?: string }

Custom weight limit option. If not provided, it will default to unlimited.

xcmVersion number

Sets the XCM version for message construction. If this is not present, a supported version will be queried; if there is no supported version, a safe version will be queried.

keepAlive boolean

Enables transferKeepAlive for local asset transfers. If true, the constructed local transfer will use transferKeepAlive instead of transfer.

transferLiquidToken boolean

Declares if this will transfer liquidity tokens. Default is false.

assetTransferType string

The XCM transfer type used to transfer assets. The AssetTransferType type defines the possible values for this parameter.

Type AssetTransferType

remoteReserveAssetTransferTypeLocation string

The remote reserve location for the XCM transfer. Should be provided when specifying an assetTransferType of RemoteReserve.

feesTransferType string

XCM TransferType used to pay fees for XCM transfer. The AssetTransferType type defines the possible values for this parameter.

Type AssetTransferType

remoteReserveFeesTransferTypeLocation string

The remote reserve location for the XCM transfer fees. Should be provided when specifying a feesTransferType of RemoteReserve.

customXcmOnDest string

A custom XCM message to be executed on the destination chain. Should be provided if a custom XCM message is needed after transferring assets. Defaults to:

Xcm(vec![DepositAsset { assets: Wild(AllCounted(assets.len())), beneficiary }])\n
Response parameters

Promise<TxResult<T>>

A promise containing the result of constructing the transaction.

Show more

dest string

The destination specName of the transaction.

origin string

The origin specName of the transaction.

format Format | 'local'

The format type the transaction is output in.

Type Format
export type Format = 'payload' | 'call' | 'submittable';\n

xcmVersion number | null

The XCM version that was used to construct the transaction.

direction Direction | 'local'

The direction of the cross-chain transfer.

Enum Direction values

Local

Local transaction.

SystemToPara

System parachain to parachain.

SystemToRelay

System parachain to relay chain.

SystemToSystem

System parachain to system parachain.

SystemToBridge

System parachain to an external GlobalConsensus chain.

ParaToPara

Parachain to Parachain.

ParaToRelay

Parachain to Relay chain.

ParaToSystem

Parachain to System parachain.

RelayToSystem

Relay chain to system parachain.

RelayToPara

Relay chain to Parachain.

RelayToBridge

Relay chain to an external GlobalConsensus chain.

method Methods

The method used in the transaction.

Type Methods
type Methods =\n  | LocalTransferTypes\n  | 'transferAssets'\n  | 'transferAssetsUsingTypeAndThen'\n  | 'limitedReserveTransferAssets'\n  | 'limitedTeleportAssets'\n  | 'transferMultiasset'\n  | 'transferMultiassets'\n  | 'transferMultiassetWithFee'\n  | 'claimAssets';\n
Type LocalTransferTypes
type LocalTransferTypes =\n  | 'assets::transfer'\n  | 'assets::transferKeepAlive'\n  | 'foreignAssets::transfer'\n  | 'foreignAssets::transferKeepAlive'\n  | 'balances::transfer'\n  | 'balances::transferKeepAlive'\n  | 'poolAssets::transfer'\n  | 'poolAssets::transferKeepAlive'\n  | 'tokens::transfer'\n  | 'tokens::transferKeepAlive';\n

tx ConstructedFormat<T>

The constructed transaction.

Type ConstructedFormat<T> Example

Request

import {\n  AssetTransferApi,\n  constructApiPromise,\n} from '@substrate/asset-transfer-api';\n\nasync function main() {\n  const { api, specName, safeXcmVersion } = await constructApiPromise(\n    'wss://westend-rpc.polkadot.io',\n  );\n  const assetsApi = new AssetTransferApi(api, specName, safeXcmVersion);\n\n  let callInfo;\n  try {\n    callInfo = await assetsApi.claimAssets(\n      [\n        `{\"parents\":\"0\",\"interior\":{\"X2\":[{\"PalletInstance\":\"50\"},{\"GeneralIndex\":\"1984\"}]}}`,\n      ],\n      ['1000000000000'],\n      '0xf5d5714c084c112843aca74f8c498da06cc5a2d63153b825189baa51043b1f0b',\n      {\n        format: 'call',\n        xcmVersion: 2,\n      },\n    );\n\n    console.log(`Call data:\\n${JSON.stringify(callInfo, null, 4)}`);\n  } catch (e) {\n    console.error(e);\n    throw Error(e as string);\n  }\n}\n\nmain()\n  .catch((err) => console.error(err))\n  .finally(() => process.exit());\n

Response

Call data: { \"origin\": \"0\", \"dest\": \"westend\", \"direction\": \"local\", \"xcmVersion\": 2, \"method\": \"claimAssets\", \"format\": \"call\", \"tx\": \"0x630c0104000002043205011f00070010a5d4e80100010100f5d5714c084c112843aca74f8c498da06cc5a2d63153b825189baa51043b1f0b\" }"},{"location":"develop/toolkit/interoperability/asset-transfer-api/reference/#decode-extrinsic","title":"Decode Extrinsic","text":"

Decodes the hex of an extrinsic into a human-readable string format.

public decodeExtrinsic<T extends Format>(\n  encodedTransaction: string,\n  format: T\n): string;\n
Request parameters

encodedTransaction string required

A hex encoded extrinsic.

format T extends Format required

Specifies the format for returning a transaction.

Type Format
export type Format = 'payload' | 'call' | 'submittable';\n
Response parameters

string

Decoded extrinsic in a human-readable string format.

Example

Request

import {\n  AssetTransferApi,\n  constructApiPromise,\n} from '@substrate/asset-transfer-api';\n\nasync function main() {\n  const { api, specName, safeXcmVersion } = await constructApiPromise(\n    'wss://wss.api.moonbeam.network',\n  );\n  const assetsApi = new AssetTransferApi(api, specName, safeXcmVersion);\n\n  const encodedExt = '0x0a03f977814e90da44bfa03b6295a0616a897441acec821a0600';\n\n  try {\n    const decodedExt = assetsApi.decodeExtrinsic(encodedExt, 'call');\n    console.log(\n      `Decoded tx:\\n ${JSON.stringify(JSON.parse(decodedExt), null, 4)}`,\n    );\n  } catch (e) {\n    console.error(e);\n    throw Error(e as string);\n  }\n}\n\nmain()\n  .catch((err) => console.error(err))\n  .finally(() => process.exit());\n

Response

Decoded tx: { \"args\": { \"dest\": \"0xF977814e90dA44bFA03b6295A0616a897441aceC\", \"value\": \"100,000\" }, \"method\": \"transferKeepAlive\", \"section\": \"balances\" }

"},{"location":"develop/toolkit/interoperability/asset-transfer-api/reference/#fetch-fee-info","title":"Fetch Fee Info","text":"

Fetch estimated fee information for an extrinsic.

public async fetchFeeInfo<T extends Format>(\n  tx: ConstructedFormat<T>,\n  format: T\n): Promise<RuntimeDispatchInfo | RuntimeDispatchInfoV1 | null>;\n
Request parameters

tx ConstructedFormat<T> required

The constructed transaction.

Type ConstructedFormat<T>
export type ConstructedFormat<T> = T extends 'payload'\n  ? GenericExtrinsicPayload\n  : T extends 'call'\n  ? `0x${string}`\n  : T extends 'submittable'\n  ? SubmittableExtrinsic<'promise', ISubmittableResult>\n  : never;\n

The ConstructedFormat type is a conditional type that returns a specific type based on the value of the TxResult format field.

  • Payload format - if the format field is set to 'payload', the ConstructedFormat type will return a GenericExtrinsicPayload
  • Call format - if the format field is set to 'call', the ConstructedFormat type will return a hexadecimal string (0x${string}). This is the encoded representation of the extrinsic call
  • Submittable format - if the format field is set to 'submittable', the ConstructedFormat type will return a SubmittableExtrinsic. This is a Polkadot.js type that represents a transaction that can be submitted to the blockchain

format T extends Format required

Specifies the format for returning a transaction.

Type Format
export type Format = 'payload' | 'call' | 'submittable';\n
Response parameters

Promise<RuntimeDispatchInfo | RuntimeDispatchInfoV1 | null>

A promise containing the estimated fee information for the provided extrinsic.

Type RuntimeDispatchInfo
export interface RuntimeDispatchInfo extends Struct {\n  readonly weight: Weight;\n  readonly class: DispatchClass;\n  readonly partialFee: Balance;\n}\n

For more information on the underlying types and fields of RuntimeDispatchInfo, check the RuntimeDispatchInfo source code.

Type RuntimeDispatchInfoV1
export interface RuntimeDispatchInfoV1 extends Struct {\n  readonly weight: WeightV1;\n  readonly class: DispatchClass;\n  readonly partialFee: Balance;\n}\n

For more information on the underlying types and fields of RuntimeDispatchInfoV1, check the RuntimeDispatchInfoV1 source code.

Example

Request

import {\n  AssetTransferApi,\n  constructApiPromise,\n} from '@substrate/asset-transfer-api';\n\nasync function main() {\n  const { api, specName, safeXcmVersion } = await constructApiPromise(\n    'wss://wss.api.moonbeam.network',\n  );\n  const assetsApi = new AssetTransferApi(api, specName, safeXcmVersion);\n\n  const encodedExt = '0x0a03f977814e90da44bfa03b6295a0616a897441acec821a0600';\n\n  try {\n    const decodedExt = await assetsApi.fetchFeeInfo(encodedExt, 'call');\n    console.log(`Fee info:\\n${JSON.stringify(decodedExt, null, 4)}`);\n  } catch (e) {\n    console.error(e);\n    throw Error(e as string);\n  }\n}\n\nmain()\n  .catch((err) => console.error(err))\n  .finally(() => process.exit());\n

Response

Fee info: { \"weight\": { \"refTime\": 163777000, \"proofSize\": 3581 }, \"class\": \"Normal\", \"partialFee\": 0 }

"},{"location":"develop/toolkit/parachains/","title":"Parachain Tools","text":"

Within the Polkadot ecosystem, you'll find a robust set of development tools that empower developers to build, test, and deploy blockchain applications efficiently. Whether you're designing a custom parachain, testing new features, or validating network configurations, these tools streamline the development process and ensure your blockchain setup is secure and optimized.

This section explores essential tools for blockchain testing, forking live networks, and interacting with the Polkadot ecosystem, giving you the resources needed to bring your blockchain project to life.

"},{"location":"develop/toolkit/parachains/#quick-links","title":"Quick Links","text":"
  • Use Zombienet to spawn a chain
  • Use Chopsticks to fork a chain
"},{"location":"develop/toolkit/parachains/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/toolkit/parachains/fork-chains/","title":"Fork Live Chains for Testing","text":"

Explore tools for forking live blockchain networks. These tools enable you to replicate real-world conditions in a local environment for accurate testing and debugging. They also allow you to analyze network behavior, test new features, and simulate complex scenarios in a controlled environment without affecting production systems.

Ready to get started? Jump straight to the Chopsticks getting started guide.

"},{"location":"develop/toolkit/parachains/fork-chains/#why-fork-a-live-chain","title":"Why Fork a Live Chain?","text":"

Forking a live chain creates a controlled environment that mirrors live network conditions. This approach enables you to:

  • Test features safely before deployment
  • Debug complex interactions
  • Validate runtime changes
  • Experiment with network modifications
"},{"location":"develop/toolkit/parachains/fork-chains/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/toolkit/parachains/fork-chains/#additional-resources","title":"Additional ResourcesStep-by-Step Tutorial on Forking Live Chains with Chopsticks","text":"

This tutorial walks you through how to fork live Polkadot SDK chains with Chopsticks. Configure forks, replay blocks, test XCM execution.

"},{"location":"develop/toolkit/parachains/fork-chains/chopsticks/","title":"Fork Live Chains with Chopsticks","text":"

Chopsticks is a powerful tool that lets you create local copies of running Polkadot SDK-based networks. By forking live chains locally, you can safely test features, analyze network behavior, and simulate complex scenarios without affecting production networks.

"},{"location":"develop/toolkit/parachains/fork-chains/chopsticks/#what-can-i-do-with-chopsticks","title":"What Can I Do with Chopsticks?","text":"
  • Create local forks of live networks
  • Replay blocks to analyze behavior
  • Test XCM interactions
  • Simulate complex scenarios
  • Modify network storage and state

Whether you're debugging an issue, testing new features, or exploring cross-chain interactions, Chopsticks provides a safe environment for blockchain experimentation and validation.

"},{"location":"develop/toolkit/parachains/fork-chains/chopsticks/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/toolkit/parachains/fork-chains/chopsticks/#additional-resources","title":"Additional ResourcesChopsticks RepositoryFork Live Chains with Chopsticks","text":"

View the official Chopsticks GitHub repository. Check out the code, explore sample commands, and track issues and new releases.

Learn how to fork live Polkadot SDK chains with Chopsticks. Configure forks, replay blocks, test XCM, and interact programmatically or via UI.

"},{"location":"develop/toolkit/parachains/fork-chains/chopsticks/get-started/","title":"Get Started","text":""},{"location":"develop/toolkit/parachains/fork-chains/chopsticks/get-started/#introduction","title":"Introduction","text":"

Chopsticks, developed by the Acala Foundation, is a versatile tool tailored for developers working on Polkadot SDK-based blockchains. With Chopsticks, you can fork live chains locally, replay blocks to analyze extrinsics, and simulate complex scenarios like XCM interactions, all without deploying to a live network.

This guide walks you through installing Chopsticks and provides information on configuring a local blockchain fork. By streamlining testing and experimentation, Chopsticks empowers developers to innovate and accelerate their blockchain projects within the Polkadot ecosystem.

For additional support and information, please reach out through GitHub Issues.

Note

Chopsticks uses the Smoldot light client, which only supports the native Polkadot SDK API. Consequently, a Chopsticks-based fork doesn't support Ethereum JSON-RPC calls, meaning you cannot use it to fork your chain and connect MetaMask.

"},{"location":"develop/toolkit/parachains/fork-chains/chopsticks/get-started/#prerequisites","title":"Prerequisites","text":"

Before you begin, ensure you have the following installed:

  • Node.js
  • A package manager such as npm, which should be installed with Node.js by default, or Yarn
"},{"location":"develop/toolkit/parachains/fork-chains/chopsticks/get-started/#install-chopsticks","title":"Install Chopsticks","text":"

You can install Chopsticks globally or locally in your project. Choose the option that best fits your development workflow.

Note

This documentation explains the features of Chopsticks version 0.13.1. Make sure you're using the correct version to match these instructions.

"},{"location":"develop/toolkit/parachains/fork-chains/chopsticks/get-started/#global-installation","title":"Global Installation","text":"

To install Chopsticks globally, allowing you to use it across multiple projects, run:

npm i -g @acala-network/chopsticks@0.13.1\n

Now, you should be able to run the chopsticks command from your terminal.

"},{"location":"develop/toolkit/parachains/fork-chains/chopsticks/get-started/#local-installation","title":"Local Installation","text":"

To use Chopsticks in a specific project, first create a new directory and initialize a Node.js project:

mkdir my-chopsticks-project\ncd my-chopsticks-project\nnpm init -y\n

Then, install Chopsticks as a local dependency:

npm i @acala-network/chopsticks@0.13.1\n

Finally, you can run Chopsticks using the npx command:

npx @acala-network/chopsticks\n
"},{"location":"develop/toolkit/parachains/fork-chains/chopsticks/get-started/#configure-chopsticks","title":"Configure Chopsticks","text":"

To run Chopsticks, you need to configure some parameters, which can be set either through a configuration file or the command-line interface (CLI). The parameters that can be configured are as follows:

  • genesis - the link to a parachain's raw genesis file to build the fork from, instead of an endpoint
  • timestamp - timestamp of the block to fork from
  • endpoint - the endpoint of the parachain to fork
  • block - specifies the block hash or number at which to replay the fork
  • wasm-override - path of the Wasm to use as the parachain runtime, instead of an endpoint's runtime
  • db - path to the name of the file that stores or will store the parachain's database
  • config - path or URL of the config file
  • port - the port to expose an endpoint on
  • build-block-mode - how blocks should be built in the fork: batch, manual, instant
  • import-storage - a pre-defined JSON/YAML storage path to override in the parachain's storage
  • allow-unresolved-imports - whether to allow Wasm unresolved imports when using a Wasm to build the parachain
  • html - include to generate storage diff preview between blocks
  • mock-signature-host - mock the signature host so that any signature that starts with 0xdeadbeef and is filled with 0xcd is considered valid
"},{"location":"develop/toolkit/parachains/fork-chains/chopsticks/get-started/#configuration-file","title":"Configuration File","text":"

The Chopsticks source repository includes a collection of YAML files that can be used to set up various Polkadot SDK chains locally. You can download these configuration files from the repository's configs folder.

An example of a configuration file for Polkadot is as follows:

endpoint:\n  - wss://rpc.ibp.network/polkadot\n  - wss://polkadot-rpc.dwellir.com\nmock-signature-host: true\nblock: ${env.POLKADOT_BLOCK_NUMBER}\ndb: ./db.sqlite\nruntime-log-level: 5\n\nimport-storage:\n  System:\n    Account:\n      - - - 5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY\n        - providers: 1\n          data:\n            free: '10000000000000000000'\n  ParasDisputes:\n    $removePrefix: ['disputes'] # those can makes block building super slow\n

The configuration file allows you to modify the storage of the forked network by rewriting the pallet, state component and value that you want to change. For example, Polkadot's file rewrites Alice's system.Account storage so that the free balance is set to 10000000000000000000.
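
Assuming the configuration above is saved locally as polkadot.yml, you can start the fork by pointing Chopsticks at the file with the config parameter described earlier. Note that the block field reads the POLKADOT_BLOCK_NUMBER environment variable, so export it first or remove that line:

npx @acala-network/chopsticks --config=polkadot.yml\n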

"},{"location":"develop/toolkit/parachains/fork-chains/chopsticks/get-started/#cli-flags","title":"CLI Flags","text":"

Alternatively, all settings (except for genesis and timestamp) can be configured via command-line flags, providing a comprehensive method to set up the environment.
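
For example, the following invocation forks Polkadot at a specific block and exposes the fork on a custom port. The flag names mirror the parameters listed above, and the block number is illustrative:

npx @acala-network/chopsticks \\\n--endpoint=wss://rpc.ibp.network/polkadot \\\n--block=18000000 \\\n--port=8000\n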

"},{"location":"develop/toolkit/parachains/fork-chains/chopsticks/get-started/#websocket-commands","title":"WebSocket Commands","text":"

Chopsticks' internal WebSocket server has special endpoints that allow the manipulation of the local Polkadot SDK chain.

These are the methods that can be invoked and their parameters:

  • dev_newBlock (newBlockParams) - generates one or more new blocks

    ParametersExample
    • newBlockParams NewBlockParams - the parameters to build the new block with. Where the NewBlockParams interface includes the following properties:
      • count number - the number of blocks to build
      • dmp { msg: string, sentAt: number }[] - the downward messages to include in the block
      • hrmp Record<string | number, { data: string, sentAt: number }[]> - the horizontal messages to include in the block
      • to number - the block number to build to
      • transactions string[] - the transactions to include in the block
      • ump Record<number, string[]> - the upward messages to include in the block
      • unsafeBlockHeight number - build block using a specific block height (unsafe)
    import { ApiPromise, WsProvider } from '@polkadot/api';\n\nasync function main() {\n  const wsProvider = new WsProvider('ws://localhost:8000');\n  const api = await ApiPromise.create({ provider: wsProvider });\n  await api.isReady;\n  await api.rpc('dev_newBlock', { count: 1 });\n}\n\nmain();\n
  • dev_setBlockBuildMode (buildBlockMode) - sets the block build mode

    ParameterExample
    • buildBlockMode BuildBlockMode - the build mode. Can be any of the following modes:
      export enum BuildBlockMode {\n  Batch = 'Batch', /** One block per batch (default) */\n  Instant = 'Instant', /** One block per transaction */\n  Manual = 'Manual', /** Only build when triggered */\n}\n
    import { ApiPromise, WsProvider } from '@polkadot/api';\n\nasync function main() {\n  const wsProvider = new WsProvider('ws://localhost:8000');\n  const api = await ApiPromise.create({ provider: wsProvider });\n  await api.isReady;\n  await api.rpc('dev_setBlockBuildMode', 'Instant');\n}\n\nmain();\n
  • dev_setHead (hashOrNumber) - sets the head of the blockchain to a specific hash or number

    ParameterExample
    • hashOrNumber string | number - the block hash or number to set as head
    import { ApiPromise, WsProvider } from '@polkadot/api';\n\nasync function main() {\n  const wsProvider = new WsProvider('ws://localhost:8000');\n  const api = await ApiPromise.create({ provider: wsProvider });\n  await api.isReady;\n  await api.rpc('dev_setHead', 500);\n}\n\nmain();\n
  • dev_setRuntimeLogLevel (runtimeLogLevel) - sets the runtime log level

    ParameterExample
    • runtimeLogLevel number - the runtime log level to set
    import { ApiPromise, WsProvider } from '@polkadot/api';\n\nasync function main() {\n  const wsProvider = new WsProvider('ws://localhost:8000');\n  const api = await ApiPromise.create({ provider: wsProvider });\n  await api.isReady;\n  await api.rpc('dev_setRuntimeLogLevel', 1);\n}\n\nmain();\n
  • dev_setStorage (values, blockHash) - creates or overwrites the value of any storage

    ParametersExample
    • values object - JSON object resembling the path to a storage value
    • blockHash string - the block hash to set the storage value
    import { ApiPromise, WsProvider } from '@polkadot/api';\n\nimport { Keyring } from '@polkadot/keyring';\nasync function main() {\n  const wsProvider = new WsProvider('ws://localhost:8000');\n  const api = await ApiPromise.create({ provider: wsProvider });\n  await api.isReady;\n  const keyring = new Keyring({ type: 'ed25519' });\n  const bob = keyring.addFromUri('//Bob');\n  const storage = {\n    System: {\n      Account: [[[bob.address], { data: { free: 100000 }, nonce: 1 }]],\n    },\n  };\n  await api.rpc('dev_setStorage', storage);\n}\n\nmain();\n
  • dev_timeTravel (date) - sets the timestamp of the block to a specific date

    ParameterExample
    • date string - timestamp or date string to set. All future blocks will be sequentially created after this point in time
    import { ApiPromise, WsProvider } from '@polkadot/api';\n\nasync function main() {\n  const wsProvider = new WsProvider('ws://localhost:8000');\n  const api = await ApiPromise.create({ provider: wsProvider });\n  await api.isReady;\n  await api.rpc('dev_timeTravel', '2030-08-15T00:00:00');\n}\n\nmain();\n
"},{"location":"develop/toolkit/parachains/fork-chains/chopsticks/get-started/#where-to-go-next","title":"Where to Go Next","text":"
  • Visit the Fork a Chain with Chopsticks guide for step-by-step instructions for configuring and interacting with your forked chain.
"},{"location":"develop/toolkit/parachains/spawn-chains/","title":"Spawn Networks for Testing","text":"

Testing blockchain networks in a controlled environment is essential for development and validation. The Polkadot ecosystem provides specialized tools that enable you to spawn test networks, helping you verify functionality and catch issues before deploying to production.

Ready to get started? Jump straight to the Zombienet getting started guide.

"},{"location":"develop/toolkit/parachains/spawn-chains/#why-spawn-a-network","title":"Why Spawn a Network?","text":"

Spawning a network provides a controlled environment to test and validate various aspects of your blockchain. Use these tools to:

  • Validate network configurations
  • Test cross-chain messaging
  • Verify runtime upgrades
  • Debug complex interactions
"},{"location":"develop/toolkit/parachains/spawn-chains/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/toolkit/parachains/spawn-chains/#additional-resources","title":"Additional ResourcesSpawn a Chain with Zombienet","text":"

Learn to spawn, connect to and monitor a basic blockchain network with Zombienet, using customizable configurations for streamlined development and debugging.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/","title":"Test Networks with Zombienet","text":"

Zombienet is a testing framework that lets you quickly spin up ephemeral blockchain networks for development and testing. With support for multiple deployment targets, such as Kubernetes, Podman, and native environments, Zombienet makes it easy to validate your blockchain implementation in a controlled environment.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/#what-can-i-do-with-zombienet","title":"What Can I Do with Zombienet?","text":"
  • Deploy test networks with multiple nodes
  • Validate network behavior and performance
  • Monitor metrics and system events
  • Execute custom test scenarios

Whether you're building a new parachain or testing runtime upgrades, Zombienet provides the tools needed to ensure your blockchain functions correctly before deployment to production.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/#additional-resources","title":"Additional ResourcesSpawn a Chain with Zombienet Tutorial","text":"

Follow step-by-step instructions to spawn, connect to and monitor a basic blockchain network with Zombienet, using customizable configurations for streamlined development and debugging.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/","title":"Get Started","text":""},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#introduction","title":"Introduction","text":"

Zombienet is a robust testing framework designed for Polkadot SDK-based blockchain networks. It enables developers to efficiently deploy and test ephemeral blockchain environments on platforms like Kubernetes, Podman, and native setups. With its simple and versatile CLI, Zombienet provides an all-in-one solution for spawning networks, running tests, and validating performance.

This guide will outline the different installation methods for Zombienet, provide step-by-step instructions for setting up on various platforms, and highlight essential provider-specific features and requirements.

By following this guide, Zombienet will be up and running quickly, ready to streamline your blockchain testing and development workflows.

Additional support resources

Parity Technologies has designed and developed this framework, now maintained by the Zombienet team.

For further support and information, refer to the following contact points:

  • Zombienet repository
  • Element public channel
"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#install-zombienet","title":"Install Zombienet","text":"

Zombienet releases are available on the Zombienet repository.

Multiple options are available for installing Zombienet, depending on the user's preferences and the environment where it will be used. The following section will guide you through the installation process for each option.

Use the executableUse NixUse Docker

Install Zombienet using executables by visiting the latest release page and selecting the appropriate asset for your operating system. You can download the executable and move it to a directory in your PATH.

Note

Each release includes executables for Linux and macOS. Executables are generated using pkg, which allows the Zombienet CLI to operate without requiring Node.js to be installed.

Then, ensure the downloaded file is executable:

chmod +x zombienet-macos-arm64\n

Finally, you can run the following command to check if the installation was successful. If so, it will display the version of the installed Zombienet:

./zombienet-macos-arm64 version\n

If you want to add the zombienet executable to your PATH, you can move it to a directory in your PATH, such as /usr/local/bin:

mv zombienet-macos-arm64 /usr/local/bin/zombienet\n

Now you can refer to the zombienet executable directly.

zombienet version\n

For Nix users, the Zombienet repository provides a flake.nix file to install Zombienet, making it easy to incorporate Zombienet into Nix-based projects.

To install Zombienet utilizing Nix, users can run the following command, triggering the fetching of the flake and subsequently installing the Zombienet package:

nix run github:paritytech/zombienet/INSERT_ZOMBIENET_VERSION -- \\\nspawn INSERT_ZOMBIENET_CONFIG_FILE_NAME.toml\n

Note

  • Replace the INSERT_ZOMBIENET_VERSION with the desired version of Zombienet
  • Replace the INSERT_ZOMBIENET_CONFIG_FILE_NAME with the name of the configuration file you want to use

To run the command above, you need to have Flakes enabled.

Alternatively, you can also include the Zombienet binary in the PATH for the current shell using the following command:

nix shell github:paritytech/zombienet/INSERT_ZOMBIENET_VERSION\n

Zombienet can also be run using Docker. The Zombienet repository provides a Docker image that can be used to run the Zombienet CLI. To run Zombienet using Docker, you can use the following command:

docker run -it --rm \\\n-v $(pwd):/home/nonroot/zombie-net/host-current-files \\\nparitytech/zombienet\n

The command above will run the Zombienet CLI inside a Docker container and mount the current directory to the /home/nonroot/zombie-net/host-current-files directory. This allows Zombienet to access the configuration file and other files in the current directory. If you want to mount a different directory, replace $(pwd) with the desired directory path.

Inside the Docker container, you can run the Zombienet CLI commands. First, you need to set up Zombienet to download the necessary binaries:

npm run zombie -- setup polkadot polkadot-parachain\n

After that, you need to add those binaries to the PATH:

export PATH=/home/nonroot/zombie-net:$PATH\n

Finally, you can run the Zombienet CLI commands. For example, to spawn a network using a specific configuration file, you can run the following command:

npm run zombie -- -p native spawn host-current-files/minimal.toml\n

The command above spawns the network defined in host-current-files/minimal.toml, the configuration file made available through the directory mounted earlier, using the native provider.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#providers","title":"Providers","text":"

Zombienet supports different backend providers for running the nodes. At this moment, Kubernetes, Podman, and local providers are supported, which can be declared as kubernetes, podman, or native, respectively.

To use a particular provider, you can specify it in the network file or use the --provider flag in the CLI:

zombienet spawn network.toml --provider INSERT_PROVIDER\n

Alternatively, you can set the provider in the network file:

[settings]\nprovider = \"INSERT_PROVIDER\"\n...\n

It's important to note that each provider has specific requirements and associated features. The following sections cover each provider's requirements and added features.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#kubernetes","title":"Kubernetes","text":"

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services. Zombienet is designed to be compatible with a variety of Kubernetes clusters, including:

  • Google Kubernetes Engine (GKE)
  • Docker Desktop
  • kind
"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#requirements","title":"Requirements","text":"

To effectively interact with your cluster, you'll need to ensure that kubectl is installed on your system. This Kubernetes command-line tool allows you to run commands against Kubernetes clusters. If you don't have kubectl installed, you can follow the instructions provided in the Kubernetes documentation.

To create resources such as namespaces, pods, and CronJobs within the target cluster, you must grant your user or service account the appropriate permissions. These permissions are essential for managing and deploying applications effectively within Kubernetes.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#features","title":"Features","text":"

If available, Zombienet uses the Prometheus operator to oversee monitoring and visibility. This configuration ensures that only essential networking-related pods are deployed. Using the Prometheus operator, Zombienet improves its ability to monitor and manage network activities within the Kubernetes cluster efficiently.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#podman","title":"Podman","text":"

Podman is a daemonless container engine for developing, managing, and running Open Container Initiative (OCI) containers and container images on Linux-based systems. Zombienet supports Podman rootless as a provider on Linux machines. Although Podman has support for macOS through an internal virtual machine (VM), the Zombienet provider code requires Podman to run natively on Linux.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#requirements_1","title":"Requirements","text":"

To use Podman as a provider, you need to have Podman installed on your system. You can install Podman by following the instructions provided on the Podman website.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#features_1","title":"Features","text":"

Using Podman, Zombienet deploys additional pods to enhance the monitoring and visibility of the active network. Specifically, pods for Prometheus, Tempo, and Grafana are included in the deployment. Grafana is configured with Prometheus and Tempo as data sources.

Upon launching Zombienet, access to these monitoring services is facilitated through specific URLs provided in the output:

  • Prometheus - http://127.0.0.1:34123
  • Tempo - http://127.0.0.1:34125
  • Grafana - http://127.0.0.1:41461

It's important to note that Grafana is deployed with default administrator access.

When network operations cease, whether from halting a running spawn with Ctrl+C or from test completion, Zombienet automatically removes all associated pods.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#local-provider","title":"Local Provider","text":"

The Zombienet local provider, also called native, enables you to run nodes as local processes in your environment.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#requirements_2","title":"Requirements","text":"

You must have the necessary binaries for your network (such as polkadot and polkadot-parachain). These binaries should be available in your PATH, allowing Zombienet to spawn the nodes as local processes.

To install the necessary binaries, you can use the Zombienet CLI command:

zombienet setup polkadot polkadot-parachain\n

This command will download and prepare the necessary binaries for Zombienet's use.

Warning

The polkadot and polkadot-parachain binary releases aren't compatible with macOS. As a result, macOS users will need to clone the Polkadot repository, build the Polkadot binary, and manually add it to their PATH for polkadot and polkadot-parachain to work.

If you need to use a custom binary, ensure the binary is available in your PATH. You can also specify the binary path in the network configuration file. The following example uses the custom OpenZeppelin template:

First, clone the OpenZeppelin template repository using the following command:

git clone https://github.com/OpenZeppelin/polkadot-runtime-templates \\\n&& cd polkadot-runtime-templates/generic-template\n

Next, run the command to build the custom binary:

cargo build --release\n

Finally, add the custom binary to your PATH as follows:

export PATH=$PATH:INSERT_PATH_TO_RUNTIME_TEMPLATES/parachain-template-node/target/release\n

Alternatively, you can specify the binary path in the network configuration file.

[relaychain]\nchain = \"rococo-local\"\ndefault_command = \"./bin-v1.6.0/polkadot\"\n\n[parachain]\nid = 1000\n\n    [parachain.collators]\n    name = \"collator01\"\n    command = \"./target/release/parachain-template-node\"\n
"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#features_2","title":"Features","text":"

The local provider does not offer any additional features.

Note

The local provider exclusively utilizes the command configuration for nodes, which supports both relative and absolute paths. You can employ the default_command configuration to specify the binary for spawning all nodes in the relay chain.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#configure-zombienet","title":"Configure Zombienet","text":"

Effective network configuration is crucial for deploying and managing blockchain systems. Zombienet simplifies this process by offering versatile configuration options in both JSON and TOML formats. Whether setting up a simple test network or a complex multi-node system, Zombienet's tools provide the flexibility to customize every aspect of your network's setup.

The following sections will explore the structure and usage of Zombienet configuration files, explain key settings for network customization, and walk through CLI commands and flags to optimize your development workflow.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#configuration-files","title":"Configuration Files","text":"

The network configuration file can be either JSON or TOML format. The Zombienet repository also provides a collection of example configuration files that can be used as a reference.

Note

Each section may include provider-specific keys that aren't recognized by other providers. For example, if you use the local provider, any references to images for nodes will be disregarded.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#cli-usage","title":"CLI Usage","text":"

Zombienet provides a CLI that allows interaction with the tool. The CLI can receive commands and flags to perform different kinds of operations. These operations use the following syntax:

zombienet <arguments> <commands>\n

The following sections will guide you through the primary usage of the Zombienet CLI and the available commands and flags.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#cli-commands","title":"CLI Commands","text":"
  • spawn <networkConfig> - spawn the network defined in the configuration file

    Warning

    The Polkadot binary is currently not compatible with macOS. For the spawn command to work on macOS, users will need to clone the Polkadot repository, build the Polkadot binary, and manually add it to their PATH.

  • test <testFile> - run tests on the spawned network using the assertions and tests defined in the test file

  • setup <binaries> - set up the Zombienet development environment to download and use the polkadot or polkadot-parachain executable

  • convert <filePath> - transforms a polkadot-launch configuration file with a .js or .json extension into a Zombienet configuration file

  • version - prints Zombienet version

  • help - prints help information
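
For example, a typical local workflow chains these commands together, first downloading the required binaries and then spawning a network with the native provider (assuming the downloaded binaries are available in your PATH, as described in the Local Provider section):

zombienet setup polkadot polkadot-parachain\nzombienet -p native spawn network.toml\n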

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#cli-flags","title":"CLI Flags","text":"

You can use the following flags to customize the behavior of the CLI:

  • -p, --provider - override the provider to use

  • -d, --dir - specify a directory path for placing the network files instead of using the default temporary path

  • -f, --force - force override all prompt commands

  • -l, --logType - type of logging on the console. Defaults to table

  • -m, --monitor - start as monitor and don't auto clean up network

  • -c, --spawn-concurrency - number of concurrent spawning processes to launch. Defaults to 1

  • -h, --help - display help for command

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#settings","title":"Settings","text":"

Through the keyword settings, it's possible to define the general settings for the network. The available keys are:

  • global_volumes? GlobalVolume[] - a list of global volumes to use

    GlobalVolume interface definition
    export interface GlobalVolume {\n  name: string;\n  fs_type: string;\n  mount_path: string;\n}\n
  • bootnode boolean - add bootnode to network. Defaults to true

  • bootnode_domain? string - domain to use for bootnode
  • timeout number - global timeout to use for spawning the whole network
  • node_spawn_timeout? number - timeout to spawn pod/process
  • grafana? boolean - deploy an instance of Grafana
  • prometheus? boolean - deploy an instance of Prometheus
  • telemetry? boolean - enable telemetry for the network
  • jaeger_agent? string - the Jaeger agent endpoint passed to the nodes. Only available on Kubernetes
  • tracing_collator_url? string - the URL of the tracing collator used to query by the tracing assertion. Should be tempo query compatible
  • tracing_collator_service_name? string - service name for tempo query frontend. Only available on Kubernetes. Defaults to tempo-tempo-distributed-query-frontend
  • tracing_collator_service_namespace? string - namespace where tempo is running. Only available on Kubernetes. Defaults to tempo
  • tracing_collator_service_port? number - port of the query instance of tempo. Only available on Kubernetes. Defaults to 3100
  • enable_tracing? boolean - enable the tracing system. Only available on Kubernetes. Defaults to true
  • provider string - provider to use. Defaults to kubernetes
  • polkadot_introspector? boolean - deploy an instance of polkadot-introspector. Only available on Podman and Kubernetes. Defaults to false
  • backchannel? boolean - deploy an instance of backchannel server. Only available on Kubernetes. Defaults to false
  • image_pull_policy? string - image pull policy to use in the network. Possible values are Always, IfNotPresent, and Never
  • local_ip? string - IP used for exposing local services (rpc/metrics/monitors). Defaults to 127.0.0.1
  • global_delay_network_global_settings? number - delay in seconds to apply to the network
  • node_verifier? string - specify how to verify node readiness or deactivate by using None. Possible values are None and Metric. Defaults to Metric

For example, the following configuration file defines a minimal example for the settings:

TOMLJSON base-example.toml
[settings]\ntimeout = 1000\nbootnode = false\nprovider = \"kubernetes\"\nbackchannel = false\n# ...\n
base-example.json
{\n    \"settings\": {\n        \"timeout\": 1000,\n        \"bootnode\": false,\n        \"provider\": \"kubernetes\",\n        \"backchannel\": false,\n        \"...\": {}\n    },\n    \"...\": {}\n}\n
"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#relay-chain-configuration","title":"Relay Chain Configuration","text":"

You can use the relaychain keyword to define further parameters for the relay chain at start-up. The available keys are listed below, followed by a minimal example:

  • default_command? string - the default command to run. Defaults to polkadot
  • default_image? string - the default Docker image to use
  • default_resources? Resources - represents the resource limits/reservations the nodes need by default. Only available on Kubernetes

    Resources interface definition
    export interface Resources {\n  resources: {\n    requests?: {\n      memory?: string;\n      cpu?: string;\n    };\n    limits?: {\n      memory?: string;\n      cpu?: string;\n    };\n  };\n}\n
  • default_db_snapshot? string - the default database snapshot to use

  • default_prometheus_prefix string - a parameter for customizing the metric's prefix. Defaults to substrate
  • default_substrate_cli_args_version? SubstrateCliArgsVersion - set the Substrate CLI arguments version

    SubstrateCliArgsVersion enum definition
    export enum SubstrateCliArgsVersion {\n  V0 = 0,\n  V1 = 1,\n  V2 = 2,\n  V3 = 3,\n}\n
  • default_keystore_key_types? string[] - defines which keystore keys should be created

  • chain string - the chain name
  • chain_spec_path? string - path to the chain spec file. Should be the plain version to allow customizations
  • chain_spec_command? string - command to generate the chain spec. It can't be used in combination with chain_spec_path
  • default_args? string[] - an array of arguments to use as default to pass to the command
  • default_overrides? Override[] - an array of overrides to upload to the node

    Override interface definition
    export interface Override {\n  local_path: string;\n  remote_name: string;\n} \n
  • random_nominators_count? number - if set and the staking pallet is enabled, Zombienet will generate the specified number of nominators and inject them into the genesis

  • max_nominations number - the max number of nominations allowed by a nominator. Should match the value set in the runtime. Defaults to 24
  • nodes? Node[] - an array of nodes to spawn. It is further defined in the Node Configuration section
  • node_groups? NodeGroup[] - an array of node groups to spawn. It is further defined in the Node Group Configuration section
  • total_node_in_group? number - the total number of nodes in the group. Defaults to 1
  • genesis JSON - the genesis configuration
  • default_delay_network_settings? DelayNetworkSettings - sets the expected configuration to delay the network

    DelayNetworkSettings interface definition
    export interface DelayNetworkSettings {\n  latency: string;\n  correlation?: string; // should be parsable as float by k8s\n  jitter?: string;\n}\n
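
For illustration, a minimal relaychain block combining some of these keys might look like the following sketch (the command, image, and chain values are placeholders borrowed from the examples later in this guide):

[relaychain]\nchain = \"rococo-local\"\ndefault_command = \"polkadot\"\ndefault_image = \"polkadot-debug:master\"\ndefault_args = [\"--chain\", \"rococo-local\"]\nmax_nominations = 24\n# ...\n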
"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#node-configuration","title":"Node Configuration","text":"

One specific key capable of receiving more subkeys is the nodes key. This key is used to define further parameters for the nodes. The available keys are:

  • name string - name of the node. Any whitespace will be replaced with a dash (for example, new alice will be converted to new-alice)
  • image? string - override default Docker image to use for this node
  • command? string - override default command to run
  • command_with_args? string - override default command and arguments
  • args? string[] - arguments to be passed to the command
  • env? envVars[] - environment variables to set in the container

    envVars interface definition
    export interface EnvVars {\n  name: string;\n  value: string;\n}\n
  • prometheus_prefix? string - customizes the metric's prefix for the specific node. Defaults to substrate

  • db_snapshot? string - database snapshot to use
  • substrate_cli_args_version? SubstrateCliArgsVersion - set the Substrate CLI arguments version directly to skip binary evaluation overhead

    SubstrateCliArgsVersion enum definition
    export enum SubstrateCliArgsVersion {\n  V0 = 0,\n  V1 = 1,\n  V2 = 2,\n  V3 = 3,\n}\n
  • resources? Resources - represents the resource limits/reservations needed by the node

    Resources interface definition
    export interface Resources {\n  resources: {\n    requests?: {\n      memory?: string;\n      cpu?: string;\n    };\n    limits?: {\n      memory?: string;\n      cpu?: string;\n    };\n  };\n}\n
  • keystore_key_types? string[] - defines which keystore keys should be created

  • validator boolean - pass the --validator flag to the command. Defaults to true
  • invulnerable boolean - if true, add the node to invulnerables in the chain spec. Defaults to false
  • balance number - balance to set in balances for node's account. Defaults to 2000000000000
  • bootnodes? string[] - array of bootnodes to use
  • add_to_bootnodes? boolean - add this node to the bootnode list. Defaults to false
  • ws_port? number - WS port to use
  • rpc_port? number - RPC port to use
  • prometheus_port? number - Prometheus port to use
  • p2p_cert_hash? string - libp2p certhash to use with webRTC transport
  • delay_network_settings? DelayNetworkSettings - sets the expected configuration to delay the network

    DelayNetworkSettings interface definition
    export interface DelayNetworkSettings {\n  latency: string;\n  correlation?: string; // should be parsable as float by k8s\n  jitter?: string;\n}\n

The following configuration file defines a minimal example for the relay chain, including the nodes key:

TOMLJSON relaychain-example-nodes.toml
[relaychain]\ndefault_command = \"polkadot\"\ndefault_image = \"polkadot-debug:master\"\nchain = \"rococo-local\"\nchain_spec_path = \"INSERT_PATH_TO_CHAIN_SPEC\"\ndefault_args = [\"--chain\", \"rococo-local\"]\n\n[[relaychain.nodes]]\nname = \"alice\"\nvalidator = true\nbalance = 1000000000000\n\n[[relaychain.nodes]]\nname = \"bob\"\nvalidator = true\nbalance = 1000000000000\n# ...\n
relaychain-example-nodes.json
{\n    \"relaychain\": {\n        \"default_command\": \"polkadot\",\n        \"default_image\": \"polkadot-debug:master\",\n        \"chain\": \"rococo-local\",\n        \"chain_spec_path\": \"INSERT_PATH_TO_CHAIN-SPEC.JSON\",\n        \"default_args\": [\"--chain\", \"rococo-local\"],\n        \"nodes\": [\n            {\n                \"name\": \"alice\",\n                \"validator\": true,\n                \"balance\": 1000000000000\n            },\n            {\n                \"name\": \"bob\",\n                \"validator\": true,\n                \"balance\": 1000000000000\n            }\n        ]\n    }\n}\n
"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#node-group-configuration","title":"Node Group Configuration","text":"

The node_groups key defines further parameters for the node groups. The available keys are:

  • name string - name of the node group. Any whitespace will be replaced with a dash (for example, new alice will be converted to new-alice)
  • image? string - override default Docker image to use for this node
  • command? string - override default command to run
  • args? string[] - arguments to be passed to the command
  • env? envVars[] - environment variables to set in the container

    envVars interface definition
    export interface EnvVars {\n  name: string;\n  value: string;\n}\n
  • overrides? Override[] - array of overrides definitions

    Override interface definition
    export interface Override {\n  local_path: string;\n  remote_name: string;\n}\n
  • prometheus_prefix? string - customizes the metric's prefix for the specific node. Defaults to substrate

  • db_snapshot? string - database snapshot to use
  • substrate_cli_args_version? SubstrateCliArgsVersion - set the Substrate CLI arguments version directly to skip binary evaluation overhead

    SubstrateCliArgsVersion enum definition
    export enum SubstrateCliArgsVersion {\n  V0 = 0,\n  V1 = 1,\n  V2 = 2,\n  V3 = 3,\n}\n
  • resources? Resources - represents the resource limits/reservations needed by the node

    Resources interface definition
    export interface Resources {\n  resources: {\n    requests?: {\n      memory?: string;\n      cpu?: string;\n    };\n    limits?: {\n      memory?: string;\n      cpu?: string;\n    };\n  };\n}\n
  • keystore_key_types? string[] - defines which keystore keys should be created

  • count number | string - number of nodes to launch for this group
  • delay_network_settings? DelayNetworkSettings - sets the expected configuration to delay the network

    DelayNetworkSettings interface definition
    export interface DelayNetworkSettings {\n  latency: string;\n  correlation?: string; // should be parsable as float by k8s\n  jitter?: string;\n}\n

The following configuration file defines a minimal example for the relay chain, including the node_groups key:

TOMLJSON relaychain-example-node-groups.toml
[relaychain]\ndefault_command = \"polkadot\"\ndefault_image = \"polkadot-debug:master\"\nchain = \"rococo-local\"\nchain_spec_path = \"INSERT_PATH_TO_CHAIN_SPEC\"\ndefault_args = [\"--chain\", \"rococo-local\"]\n\n[[relaychain.node_groups]]\nname = \"group-1\"\ncount = 2\nimage = \"polkadot-debug:master\"\ncommand = \"polkadot\"\nargs = [\"--chain\", \"rococo-local\"]\n# ...\n
relaychain-example-node-groups.json
{\n    \"relaychain\": {\n        \"default_command\": \"polkadot\",\n        \"default_image\": \"polkadot-debug:master\",\n        \"chain\": \"rococo-local\",\n        \"chain_spec_path\": \"INSERT_PATH_TO_CHAIN-SPEC.JSON\",\n        \"default_args\": [\"--chain\", \"rococo-local\"],\n        \"node_groups\": [\n            {\n                \"name\": \"group-1\",\n                \"count\": 2,\n                \"image\": \"polkadot-debug:master\",\n                \"command\": \"polkadot\",\n                \"args\": [\"--chain\", \"rococo-local\"]\n            }\n        ],\n        \"...\": {}\n    },\n    \"...\": {}\n}\n
"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#parachain-configuration","title":"Parachain Configuration","text":"

The parachain keyword defines further parameters for the parachain. The available keys are:

  • id number - the id to assign to this parachain. Must be unique
  • chain? string - the chain name
  • force_decorator? string - force the use of a specific decorator
  • genesis? JSON - the genesis configuration
  • balance? number - balance to set in balances for parachain's account
  • delay_network_settings? DelayNetworkSettings - sets the expected configuration to delay the network

    DelayNetworkSettings interface definition
    export interface DelayNetworkSettings {\n  latency: string;\n  correlation?: string; // should be parsable as float by k8s\n  jitter?: string;\n}\n
  • add_to_genesis? boolean - flag to add parachain to genesis or register in runtime. Defaults to true

  • register_para? boolean - flag to specify whether the para should be registered. The add_to_genesis flag must be set to false for this flag to have any effect. Defaults to true
  • onboard_as_parachain? boolean - flag to specify whether the para should be onboarded as a parachain, rather than remaining a parathread. Defaults to true
  • genesis_wasm_path? string - path to the Wasm file to use
  • genesis_wasm_generator? string - command to generate the Wasm file
  • genesis_state_path? string - path to the state file to use
  • genesis_state_generator? string - command to generate the state file
  • chain_spec_path? string - path to the chain spec file
  • chain_spec_command? string - command to generate the chain spec
  • cumulus_based? boolean - flag to use cumulus command generation. Defaults to true
  • bootnodes? string[] - array of bootnodes to use
  • prometheus_prefix? string - parameter for customizing the metric's prefix for all parachain nodes/collators. Defaults to substrate
  • collator? Collator - further defined in the Collator Configuration section
  • collator_groups? CollatorGroup[] - an array of collator groups to spawn. It is further defined in the Collator Groups Configuration section

For example, the following configuration file defines a minimal example for the parachain:

TOMLJSON parachain-example.toml
[parachain]\nid = 100\nadd_to_genesis = true\ncumulus_based = true\ngenesis_wasm_path = \"INSERT_PATH_TO_WASM\"\ngenesis_state_path = \"INSERT_PATH_TO_STATE\"\n# ...\n
parachain-example.json
{\n    \"parachain\": {\n        \"id\": 100,\n        \"add_to_genesis\": true,\n        \"cumulus_based\": true,\n        \"genesis_wasm_path\": \"INSERT_PATH_TO_WASM\",\n        \"genesis_state_path\": \"INSERT_PATH_TO_STATE\",\n        \"...\": {}\n    },\n    \"...\": {}\n}\n
"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#collator-configuration","title":"Collator Configuration","text":"

One specific key capable of receiving more subkeys is the collator key. This key defines further parameters for the collators. The available keys are:

  • name string - name of the collator. Any whitespace will be replaced with a dash (for example, new alice will be converted to new-alice)
  • image? string - image to use for the collator
  • command_with_args? string - overrides both command and arguments for the collator
  • validator boolean - pass the --validator flag to the command. Defaults to true
  • invulnerable boolean - if true, add the collator to invulnerables in the chain spec. Defaults to false
  • balance number - balance to set in balances for collator's account. Defaults to 2000000000000
  • bootnodes? string[] - array of bootnodes to use
  • add_to_bootnodes? boolean - add this collator to the bootnode list. Defaults to false
  • ws_port? number - WS port to use
  • rpc_port? number - RPC port to use
  • prometheus_port? number - Prometheus port to use
  • p2p_port? number - P2P port to use
  • p2p_cert_hash? string - libp2p certhash to use with webRTC transport
  • delay_network_settings? DelayNetworkSettings - sets the expected configuration to delay the network

    DelayNetworkSettings interface definition
    export interface DelayNetworkSettings {\n  latency: string;\n  correlation?: string; // should be parsable as float by k8s\n  jitter?: string;\n}\n
  • command? string - override default command to run

  • args? string[] - arguments to be passed to the command
  • env? envVars[] - environment variables to set in the container

    envVars interface definition
    export interface EnvVars {\n  name: string;\n  value: string;\n}\n
  • overrides? Override[] - array of overrides definitions

    Override interface definition
    export interface Override {\n  local_path: string;\n  remote_name: string;\n}\n
  • prometheus_prefix? string - customizes the metric's prefix for the specific node. Defaults to substrate

  • db_snapshot? string - database snapshot to use
  • substrate_cli_args_version? SubstrateCliArgsVersion - set the Substrate CLI arguments version directly to skip binary evaluation overhead

    SubstrateCliArgsVersion enum definition
    export enum SubstrateCliArgsVersion {\n  V0 = 0,\n  V1 = 1,\n  V2 = 2,\n  V3 = 3,\n}\n
  • resources? Resources - represents the resource limits/reservations needed by the node

    Resources interface definition
    export interface Resources {\n  resources: {\n    requests?: {\n      memory?: string;\n      cpu?: string;\n    };\n    limits?: {\n      memory?: string;\n      cpu?: string;\n    };\n  };\n}\n
  • keystore_key_types? string[] - defines which keystore keys should be created

The configuration file below defines a minimal example for the collator:

TOMLJSON collator-example.toml
[parachain]\nid = 100\nadd_to_genesis = true\ncumulus_based = true\ngenesis_wasm_path = \"INSERT_PATH_TO_WASM\"\ngenesis_state_path = \"INSERT_PATH_TO_STATE\"\n\n[[parachain.collators]]\nname = \"alice\"\nimage = \"polkadot-parachain\"\ncommand = \"polkadot-parachain\"\n# ...\n
collator-example.json
{\n    \"parachain\": {\n        \"id\": 100,\n        \"add_to_genesis\": true,\n        \"cumulus_based\": true,\n        \"genesis_wasm_path\": \"INSERT_PATH_TO_WASM\",\n        \"genesis_state_path\": \"INSERT_PATH_TO_STATE\",\n        \"collators\": [\n            {\n                \"name\": \"alice\",\n                \"image\": \"polkadot-parachain\",\n                \"command\": \"polkadot-parachain\",\n                \"...\": {}\n            }\n        ]\n    },\n    \"...\": {}\n}\n
"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#collator-groups-configuration","title":"Collator Groups Configuration","text":"

The collator_groups key defines further parameters for the collator groups. The available keys are:

  • name string - name of the collator group. Any whitespace will be replaced with a dash (for example, new alice will be converted to new-alice)
  • image? string - override default Docker image to use for this node
  • command? string - override default command to run
  • args? string[] - arguments to be passed to the command
  • env? envVars[] - environment variables to set in the container

    envVars interface definition
    export interface EnvVars {\n  name: string;\n  value: string;\n}\n
  • overrides? Override[] - array of overrides definitions

    Override interface definition
    export interface Override {\n  local_path: string;\n  remote_name: string;\n}\n
  • prometheus_prefix? string - customizes the metric's prefix for the specific node. Defaults to substrate

  • db_snapshot? string - database snapshot to use
  • substrate_cli_args_version? SubstrateCliArgsVersion - set the Substrate CLI arguments version directly to skip binary evaluation overhead

    SubstrateCliArgsVersion enum definition
    export enum SubstrateCliArgsVersion {\n  V0 = 0,\n  V1 = 1,\n  V2 = 2,\n  V3 = 3,\n}\n
  • resources? Resources - represents the resource limits/reservations needed by the node

    Resources interface definition
    export interface Resources {\n  resources: {\n    requests?: {\n      memory?: string;\n      cpu?: string;\n    };\n    limits?: {\n      memory?: string;\n      cpu?: string;\n    };\n  };\n}\n
  • keystore_key_types? string[] - defines which keystore keys should be created

  • count number | string - number of collators to launch for this group
  • delay_network_settings? DelayNetworkSettings - sets the expected configuration to delay the network

    DelayNetworkSettings interface definition
    export interface DelayNetworkSettings {\n  latency: string;\n  correlation?: string; // should be parsable as float by k8s\n  jitter?: string;\n}\n

For instance, the configuration file below defines a minimal example for the collator groups:

TOMLJSON collator-groups-example.toml
[parachain]\nid = 100\nadd_to_genesis = true\ncumulus_based = true\ngenesis_wasm_path = \"INSERT_PATH_TO_WASM\"\ngenesis_state_path = \"INSERT_PATH_TO_STATE\"\n\n[[parachain.collator_groups]]\nname = \"group-1\"\ncount = 2\nimage = \"polkadot-parachain\"\ncommand = \"polkadot-parachain\"\n# ...\n
collator-groups-example.json
{\n    \"parachain\": {\n        \"id\": 100,\n        \"add_to_genesis\": true,\n        \"cumulus_based\": true,\n        \"genesis_wasm_path\": \"INSERT_PATH_TO_WASM\",\n        \"genesis_state_path\": \"INSERT_PATH_TO_STATE\",\n        \"collator_groups\": [\n            {\n                \"name\": \"group-1\",\n                \"count\": 2,\n                \"image\": \"polkadot-parachain\",\n                \"command\": \"polkadot-parachain\",\n                \"...\": {}\n            }\n        ]\n    },\n    \"...\": {}\n}\n
"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/get-started/#xcm-configuration","title":"XCM Configuration","text":"

You can use the hrmp_channels keyword to define further parameters for the XCM channels at start-up. The available keys are:

  • hrmp_channels HrmpChannelsConfig[] - array of Horizontal Relay-routed Message Passing (HRMP) channel configurations

    HrmpChannelsConfig interface definition

    export interface HrmpChannelsConfig {\n  sender: number;\n  recipient: number;\n  max_capacity: number;\n  max_message_size: number;\n}\n
    Each of the HrmpChannelsConfig keys is defined as follows:

    • sender number - parachain ID of the sender
    • recipient number - parachain ID of the recipient
    • max_capacity number - maximum capacity of the HRMP channel
    • max_message_size number - maximum message size allowed in the HRMP channel
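
For instance, declaring an HRMP channel between two parachains in TOML might look like the following sketch (the IDs and limits are illustrative):

[[hrmp_channels]]\nsender = 100\nrecipient = 101\nmax_capacity = 8\nmax_message_size = 512\n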
"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/write-tests/","title":"Write Tests","text":""},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/write-tests/#introduction","title":"Introduction","text":"

Testing is a critical step in blockchain development, ensuring reliability, performance, and security. Zombienet simplifies this process with its intuitive Domain Specific Language (DSL), enabling developers to write natural-language test scripts tailored to their network needs.

This guide provides an in-depth look at how to create and execute test scenarios using Zombienet's flexible testing framework. You\u2019ll learn how to define tests for metrics, logs, events, and more, allowing for comprehensive evaluation of your blockchain network\u2019s behavior and performance.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/write-tests/#testing-dsl","title":"Testing DSL","text":"

Zombienet provides a Domain Specific Language (DSL) for writing tests. The DSL is designed to be human-readable and allows you to write tests using natural language expressions. You can define assertions and tests against the spawned network using this DSL. This way, users can evaluate different metrics, such as:

  • On-chain storage - the storage of each of the chains running via Zombienet
  • Metrics - the metrics provided by the nodes
  • Histograms - visual representations of metrics data
  • Logs - detailed records of system activities and events
  • System events - notifications of significant occurrences within the network
  • Tracing - detailed analysis of execution paths and operations
  • Custom API calls (through Polkadot.js) - personalized interfaces for interacting with the network
  • Commands - instructions or directives executed by the network

These abstractions are expressed by sentences defined in a natural language style. Therefore, each test line will be mapped to a test to run. Also, the test file (*.zndsl) includes pre-defined header fields used to define information about the suite, such as network configuration and credentials location.

Note

View the Testing DSL specification for more details on the Zombienet DSL.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/write-tests/#the-test-file","title":"The Test File","text":"

The test file is a text file with the extension .zndsl. It is divided into two parts: the header and the body. The header contains the network configuration and the credentials to use, while the body contains the tests to run.

The header is defined by the following fields:

  • description string - long description of the test suite (optional)
  • network string - path to the network definition file, supported in both .json and .toml formats
  • creds string - credentials filename or path to use (available only with Kubernetes provider). Looks in the current directory or $HOME/.kube/ if a filename is passed

The body contains the tests to run. Each test is defined by a sentence in the DSL, which is mapped to a test to run. Each test line defines an assertion or a command to be executed against the spawned network.
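
For example, a minimal header might look like the following (the description and network path are placeholders):

Description: Minimal smoke test suite\nNetwork: ./my-network.toml\nCreds: config\n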

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/write-tests/#name","title":"Name","text":"

The test name in Zombienet is derived from the filename by removing any leading numeric characters before the first hyphen. For example, a file named 0001-zombienet-test.zndsl will result in a test name of zombienet-test, which will be displayed in the test report output of the runner.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/write-tests/#assertions","title":"Assertions","text":"

Assertions are defined by sentences in the DSL that evaluate different metrics, such as on-chain storage, metrics, histograms, logs, system events, tracing, and custom API calls. Each assertion maps to a single test to run. The available assertion types are:

  • Well known functions - already mapped test function

    SyntaxExamples

    node-name well-known_defined_test [within x seconds]

    alice: is up\nalice: parachain 100 is registered within 225 seconds\nalice: parachain 100 block height is at least 10 within 250 seconds\n
  • Histogram - get metrics from Prometheus, calculate the histogram, and assert on the target value

    SyntaxExample

    node-name reports histogram metric_name has comparator target_value samples in buckets [\"bucket\",\"bucket\",...] [within x seconds]

    alice: reports histogram polkadot_pvf_execution_time has at least 2 samples in buckets [\"0.1\", \"0.25\", \"0.5\", \"+Inf\"] within 100 seconds\n
  • Metric - get metric from Prometheus and assert on the target value

    SyntaxExamples

    node-name reports metric_name comparator target_value (e.g., \"is at least x\", \"is greater than x\") [within x seconds]

    alice: reports node_roles is 4\nalice: reports sub_libp2p_is_major_syncing is 0\n
  • Log line - get logs from nodes and assert on the matching pattern

    SyntaxExample

    node-name log line (contains|matches) (regex|glob) \"pattern\" [within x seconds]

    alice: log line matches glob \"*rted #1*\" within 10 seconds\n
  • Count of log lines - get logs from nodes and assert on the number of lines matching pattern

    SyntaxExample

    node-name count of log lines (containing|matching) (regex|glob) \"pattern\" [within x seconds]

    alice: count of log lines matching glob \"*rted #1*\" within 10 seconds\n
  • System events - find a system event from subscription by matching a pattern

    SyntaxExample

    node-name system event (contains|matches) (regex|glob) \"pattern\" [within x seconds]

    alice: system event matches \"\"paraId\":[0-9]+\" within 10 seconds\n
  • Tracing - match an array of span names from the supplied traceID

    SyntaxExample

    node-name trace with traceID contains [\"name\", \"name2\",...]

    alice: trace with traceID 94c1501a78a0d83c498cc92deec264d9 contains [\"answer-chunk-request\", \"answer-chunk-request\"]\n
  • Custom JS scripts - run a custom JavaScript script and assert on the return value

    SyntaxExample

    node-name js-script script_relative_path [return is comparator target_value] [within x seconds]

    alice: js-script ./0008-custom.js return is greater than 1 within 200 seconds\n
  • Custom TS scripts - run a custom TypeScript script and assert on the return value

    SyntaxExample

    node-name ts-script script_relative_path [return is comparator target_value] [within x seconds]

    alice: ts-script ./0008-custom-ts.ts return is greater than 1 within 200 seconds\n
  • Backchannel - wait for a value and register it for later use

    SyntaxExample

    node-name wait for var name and use as X [within x seconds]

    alice: wait for name and use as X within 30 seconds\n
"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/write-tests/#commands","title":"Commands","text":"

Commands allow you to interact with the nodes by running pre-defined commands or arbitrary commands on a node. Commonly used commands are as follows (see the sketch after this list):

  • restart - stop the process and start it again, either immediately or after a given number of seconds

  • pause - pause (SIGSTOP) the process

  • resume - resume (SIGCONT) the process

  • sleep - pause the test runner for a given number of seconds
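
A minimal sketch of these commands in a test file, assuming a node named alice and arbitrary timings, might look like this:

alice: restart after 60 seconds\nalice: pause\nalice: resume\nsleep 30 seconds\n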

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/write-tests/#running-a-test","title":"Running a Test","text":"

To run a test against the spawned network, you can use the Zombienet DSL to define the test scenario. Follow these steps to create an example test:

  1. Create a file named spawn-a-basic-network-test.zndsl

    touch spawn-a-basic-network-test.zndsl\n

  2. Add the following code to the file you just created. spawn-a-basic-network-test.zndsl

    Description: Test the basic functionality of the network (minimal example)\nNetwork: ./spawn-a-basic-network.toml\nCreds: config\n\nalice: is up\nalice: parachain 100 is registered within 225 seconds\nalice: parachain 100 block height is at least 10 within 250 seconds\n\nbob: is up\nbob: parachain 100 is registered within 225 seconds\nbob: parachain 100 block height is at least 10 within 250 seconds\n\n# metrics\nalice: reports node_roles is 4\nalice: reports sub_libp2p_is_major_syncing is 0\n\nbob: reports node_roles is 4\n\ncollator01: reports node_roles is 4\n

This test scenario verifies the following:

  • Nodes are running
  • The parachain with ID 100 is registered within a certain timeframe (225 seconds in this example)
  • Parachain block height is at least a certain number within a timeframe (in this case, 10 within 250 seconds)
  • Nodes are reporting metrics

You can define any test scenario you need following the Zombienet DSL syntax.

To run the test, execute the following command:

zombienet -p native test spawn-a-basic-network-test.zndsl\n

This command will execute the test scenario defined in the spawn-a-basic-network-test.zndsl file on the network. If successful, the terminal will display the test output, indicating whether the test passed or failed.

"},{"location":"develop/toolkit/parachains/spawn-chains/zombienet/write-tests/#example-test-files","title":"Example Test Files","text":"

The following example test files define two tests, a small network test and a big network test. Each test defines a network configuration file and credentials to use.

The tests define assertions to evaluate the network\u2019s metrics and logs; each assertion sentence is mapped to a test to run.

small-network-test.zndsl
Description: Small Network test\nNetwork: ./0000-test-config-small-network.toml\nCreds: config\n\n# metrics\nalice: reports node_roles is 4\nalice: reports sub_libp2p_is_major_syncing is 0\n\n# logs\nbob: log line matches glob \"*rted #1*\" within 10 seconds\nbob: log line matches \"Imported #[0-9]+\" within 10 seconds\n

And the second test file:

big-network-test.zndsl
Description: Big Network test\nNetwork: ./0001-test-config-big-network.toml\nCreds: config\n\n# metrics\nalice: reports node_roles is 4\nalice: reports sub_libp2p_is_major_syncing is 0\n\n# logs\nbob: log line matches glob \"*rted #1*\" within 10 seconds\nbob: log line matches \"Imported #[0-9]+\" within 10 seconds\n\n# custom js script\nalice: js-script ./0008-custom.js return is greater than 1 within 200 seconds\n\n# custom ts script\nalice: ts-script ./0008-custom-ts.ts return is greater than 1 within 200 seconds\n\n# backchannel\nalice: wait for name and use as X within 30 seconds\n\n# well-known functions\nalice: is up\nalice: parachain 100 is registered within 225 seconds\nalice: parachain 100 block height is at least 10 within 250 seconds\n\n# histogram\nalice: reports histogram polkadot_pvf_execution_time has at least 2 samples in buckets [\"0.1\", \"0.25\", \"0.5\", \"+Inf\"] within 100 seconds\n\n# system events\nalice: system event matches \"\"paraId\":[0-9]+\" within 10 seconds\n\n# tracing\nalice: trace with traceID 94c1501a78a0d83c498cc92deec264d9 contains [\"answer-chunk-request\", \"answer-chunk-request\"]\n
"},{"location":"images/","title":"Images","text":"

TODO

"},{"location":"infrastructure/","title":"Infrastructure","text":"

Running infrastructure on Polkadot is essential to supporting the network\u2019s performance and security. Operators must focus on reliability, ensure proper configuration, and meet the necessary hardware requirements to contribute effectively to the decentralized ecosystem.

  • Not sure where to start? Visit the Choosing the Right Role section for guidance
  • Ready to get started? Jump to In This Section to explore the available guides
"},{"location":"infrastructure/#choosing-the-right-role","title":"Choosing the Right Role","text":"

Selecting your role within the Polkadot ecosystem depends on your goals, resources, and expertise. Below are detailed considerations for each role:

  • Running a node:

    • Purpose - a node provides access to network data and supports API queries. It is commonly used for:
      • Development and testing - offers a local instance to simulate network conditions and test applications
      • Production use - acts as a data source for dApps, clients, and other applications needing reliable access to the blockchain
    • Requirements - moderate hardware resources to handle blockchain data efficiently
    • Responsibilities - a node\u2019s responsibilities vary based on its purpose:
      • Development and testing - enables developers to test features, debug code, and simulate network interactions in a controlled environment
      • Production use - provides consistent and reliable data access for dApps and other applications, ensuring minimal downtime
  • Running a validator:

    • Purpose - validators play a critical role in securing the Polkadot relay chain. They validate parachain block submissions, participate in consensus, and help maintain the network's overall integrity
    • Requirements - becoming a validator requires:
      • Staking - a variable amount of DOT tokens to secure the network and demonstrate commitment
      • Hardware - high-performing hardware resources capable of supporting intensive blockchain operations
      • Technical expertise - proficiency in setting up and maintaining nodes, managing updates, and understanding Polkadot's consensus mechanisms
      • Community involvement - building trust and rapport within the community to attract nominators willing to stake with your validator
    • Responsibilities - validators have critical responsibilities to ensure network health:
      • Uptime - maintain near-constant availability to avoid slashing penalties for downtime or unresponsiveness
      • Network security - participate in consensus and verify parachain transactions to uphold the network's security and integrity
      • Availability - monitor the network for events and respond to issues promptly, such as misbehavior reports or protocol updates
"},{"location":"infrastructure/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"infrastructure/running-a-node/","title":"Running a Node","text":"

Running a node on the Polkadot network enables you to access blockchain data, interact with the network, and support decentralized applications (dApps). This guide will walk you through the process of setting up and connecting to a Polkadot node, including essential configuration steps for ensuring connectivity and security.

"},{"location":"infrastructure/running-a-node/#full-nodes-vs-bootnodes","title":"Full Nodes vs Bootnodes","text":"

Full nodes and bootnodes serve different roles within the network, each contributing in unique ways to connectivity and data access:

  • Full node - stores blockchain data, validates transactions, and can serve as a source for querying data
  • Bootnode - assists new nodes in discovering peers and connecting to the network, but doesn\u2019t store blockchain data

The following sections describe the different types of full nodes\u2014pruned, archive, and light nodes\u2014and the unique features of each for various use cases.

"},{"location":"infrastructure/running-a-node/#types-of-full-nodes","title":"Types of Full Nodes","text":"

The three main types of nodes are as follows:

  • Pruned node - discards the states of finalized blocks older than a specified number, keeping only the most recent states and the genesis block's state
  • Archive node - preserves all past blocks and their states, making it convenient to query the past state of the chain at any given time. Archive nodes use a lot of disk space, which means they should be limited to use cases that require easy access to past on-chain data, such as block explorers
  • Light node - has only the runtime and the current state but doesn't store past blocks, making it useful for resource-restricted devices

Each node type can be configured to provide remote access to blockchain data via RPC endpoints, allowing external clients, like dApps or developers, to submit transactions, query data, and interact with the blockchain remotely.

Tip

On Stakeworld, you can find a list of the database sizes of Polkadot and Kusama nodes.

"},{"location":"infrastructure/running-a-node/#state-vs-block-pruning","title":"State vs. Block Pruning","text":"

A pruned node retains only a subset of finalized blocks, discarding older data. The two main types of pruning are:

  • State pruning - removes the states of old blocks while retaining block headers
  • Block pruning - removes both the full content of old blocks and their associated states, but keeps the block headers

Despite these deletions, pruned nodes are still capable of performing many essential functions, such as displaying account balances, making transfers, setting up session keys, and participating in staking.
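
Both pruning modes are typically selected at node start-up. As an illustrative sketch using the flags covered later in this guide, a node keeping only the last 1000 states and blocks could be started as follows:

polkadot --chain polkadot \\\n--name INSERT_YOUR_NODE_NAME \\\n--state-pruning 1000 \\\n--blocks-pruning 1000\n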

"},{"location":"infrastructure/running-a-node/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"infrastructure/running-a-node/setup-bootnode/","title":"Set Up a Bootnode","text":""},{"location":"infrastructure/running-a-node/setup-bootnode/#introduction","title":"Introduction","text":"

Bootnodes are essential for helping blockchain nodes discover peers and join the network. When a node starts, it needs to find other nodes, and bootnodes provide an initial point of contact. Once connected, a node can expand its peer connections and play its role in the network, like participating as a validator.

This guide will walk you through setting up a Polkadot bootnode, configuring P2P, WebSocket (WS), and secure WebSocket (WSS) connections, and managing network keys. You'll also learn how to test your bootnode to ensure it is running correctly and accessible to other nodes.

"},{"location":"infrastructure/running-a-node/setup-bootnode/#prerequisites","title":"Prerequisites","text":"

Before you start, you need to have the following prerequisites:

  • Verify a working Polkadot (polkadot) binary is available on your machine
  • Ensure you have nginx installed. Please refer to the Installation Guide for help with installation if needed
  • A VPS or other dedicated server setup
"},{"location":"infrastructure/running-a-node/setup-bootnode/#accessing-the-bootnode","title":"Accessing the Bootnode","text":"

Bootnodes must be accessible through three key channels to connect with other nodes in the network:

  • P2P - a direct peer-to-peer connection, set by:

    --listen-addr /ip4/0.0.0.0/tcp/INSERT_PORT\n

    Note

    This is not enabled by default on non-validator nodes like archive RPC nodes.

  • P2P/WS - a WebSocket (WS) connection, also configured via --listen-addr

  • P2P/WSS - a secure WebSocket (WSS) connection using SSL, often required for light clients. An SSL proxy is needed, as the node itself cannot handle certificates
"},{"location":"infrastructure/running-a-node/setup-bootnode/#node-key","title":"Node Key","text":"

A node key is the Ed25519 key used by libp2p to assign your node an identity or peer ID. Generating a known node key for a bootnode is crucial, as it gives you a consistent key that can be placed in chain specifications as a known, reliable bootnode.

Starting a node creates its node key in the chains/INSERT_CHAIN/network/secret_ed25519 file.

You can create a node key using:

polkadot key generate-node-key\n

This key can be used in the startup command line.
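
For example, a common pattern is to write the generated key to a file and point the node at it with the --node-key-file flag (the path here is a placeholder):

polkadot key generate-node-key --file /var/lib/polkadot/node.key\n\npolkadot --chain polkadot \\\n--name dot-bootnode \\\n--node-key-file /var/lib/polkadot/node.key\n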

It is imperative that you back up the node key. If it is included in the polkadot binary, it is hardcoded into the binary, which must be recompiled to change the key.

"},{"location":"infrastructure/running-a-node/setup-bootnode/#running-the-bootnode","title":"Running the Bootnode","text":"

A bootnode can be run as follows:

polkadot --chain polkadot \\\n--name dot-bootnode \\\n--listen-addr /ip4/0.0.0.0/tcp/30310 \\\n--listen-addr /ip4/0.0.0.0/tcp/30311/ws\n

This assigns the P2P connection to port 30310 and the P2P/WS connection to port 30311. For the P2P/WSS port, a proxy must be set up with a DNS name and a corresponding certificate. The following example is for the popular nginx server and enables P2P/WSS on port 30312 by proxying to the P2P/WS port 30311:

/etc/nginx/sites-enabled/dot-bootnode
server {\n       listen       30312 ssl http2 default_server;\n       server_name  dot-bootnode.stakeworld.io;\n       root         /var/www/html;\n\n       ssl_certificate \"INSERT_YOUR_CERT\";\n       ssl_certificate_key \"INSERT_YOUR_KEY\";\n\n       location / {\n         proxy_buffers 16 4k;\n         proxy_buffer_size 2k;\n         proxy_pass http://localhost:30311;\n         proxy_http_version 1.1;\n         proxy_set_header Upgrade $http_upgrade;\n         proxy_set_header Connection \"Upgrade\";\n         proxy_set_header Host $host;\n   }\n\n}\n
"},{"location":"infrastructure/running-a-node/setup-bootnode/#testing-bootnode-connection","title":"Testing Bootnode Connection","text":"

If the preceding node is running behind the DNS name dot-bootnode.stakeworld.io, with a proxy, a valid certificate, and the node ID 12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg, then the following commands should output syncing 1 peers.

Tip

You can add -lsub-libp2p=trace at the end of the command to get libp2p trace logging for debugging purposes.

"},{"location":"infrastructure/running-a-node/setup-bootnode/#p2p","title":"P2P","text":"
polkadot --chain polkadot \\\n--base-path /tmp/node \\\n--name \"Bootnode testnode\" \\\n--reserved-only \\\n--reserved-nodes \"/dns/dot-bootnode.stakeworld.io/tcp/30310/p2p/12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg\" \\\n--no-hardware-benchmarks\n
"},{"location":"infrastructure/running-a-node/setup-bootnode/#p2pws","title":"P2P/WS","text":"
polkadot --chain polkadot \\\n--base-path /tmp/node \\\n--name \"Bootnode testnode\" \\\n--reserved-only \\\n--reserved-nodes \"/dns/dot-bootnode.stakeworld.io/tcp/30311/ws/p2p/12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg\" \\\n--no-hardware-benchmarks\n
"},{"location":"infrastructure/running-a-node/setup-bootnode/#p2pwss","title":"P2P/WSS","text":"
polkadot --chain polkadot \\\n--base-path /tmp/node \\\n--name \"Bootnode testnode\" \\\n--reserved-only \\\n--reserved-nodes \"/dns/dot-bootnode.stakeworld.io/tcp/30312/wss/p2p/12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg\" \\\n--no-hardware-benchmarks\n
"},{"location":"infrastructure/running-a-node/setup-full-node/","title":"Set Up a Node","text":""},{"location":"infrastructure/running-a-node/setup-full-node/#introduction","title":"Introduction","text":"

Running a node on Polkadot provides direct interaction with the network, enhanced privacy, and full control over RPC requests, transactions, and data queries. As the backbone of the network, nodes ensure decentralized data propagation, transaction validation, and seamless communication across the ecosystem.

Polkadot supports multiple node types, including pruned, archive, and light nodes, each suited to specific use cases. During setup, you can use configuration flags to choose the node type you wish to run.

This guide walks you through configuring, securing, and maintaining a node on Polkadot or any Polkadot SDK-based chain. It covers instructions for the different node types and how to safely expose your node's RPC server for external access. Whether you're building a local development environment, powering dApps, or supporting network decentralization, this guide provides all the essentials.

"},{"location":"infrastructure/running-a-node/setup-full-node/#set-up-a-node_1","title":"Set Up a Node","text":"

Now that you're familiar with the different types of nodes, this section will walk you through configuring, securing, and maintaining a node on Polkadot or any Polkadot SDK-based chain.

"},{"location":"infrastructure/running-a-node/setup-full-node/#prerequisites","title":"Prerequisites","text":"

Before getting started, ensure the following prerequisites are met:

  • Ensure Rust is installed on your operating system
  • Install the necessary dependencies for the Polkadot SDK

Warning

This setup is not recommended for validators. If you plan to run a validator, refer to the Running a Validator guide for proper instructions.

"},{"location":"infrastructure/running-a-node/setup-full-node/#install-and-build-the-polkadot-binary","title":"Install and Build the Polkadot Binary","text":"

This section will walk you through installing and building the Polkadot binary for different operating systems and methods.

macOS

To get started, update and configure the Rust toolchain by running the following commands:

source ~/.cargo/env\n\nrustup default stable\nrustup update\n\nrustup update nightly\nrustup target add wasm32-unknown-unknown --toolchain nightly\nrustup component add rust-src --toolchain stable-aarch64-apple-darwin\n

You can verify your installation by running:

rustup show\nrustup +nightly show\n

You should see output similar to the following:

rustup show rustup +nightly show

active toolchain ---------------- stable-aarch64-apple-darwin (default) rustc 1.82.0 (f6e511eec 2024-10-15) active toolchain ---------------- nightly-aarch64-apple-darwin (overridden by +toolchain on the command line) rustc 1.84.0-nightly (03ee48451 2024-11-18)

Then, run the following commands to clone and build the Polkadot binary:

git clone https://github.com/paritytech/polkadot-sdk polkadot-sdk\ncd polkadot-sdk\ncargo build --release\n

Depending on the specs of your machine, compiling the binary may take an hour or more. After building the Polkadot node from source, the executable binary will be located at ./target/release/polkadot.

Windows

To get started, make sure that you have WSL and Ubuntu installed on your Windows machine.

Once installed, you have a couple of options for installing the Polkadot binary:

  • If Rust is installed, cargo can be used as described in the macOS instructions
  • Otherwise, you can follow the instructions in the Linux section
Linux (pre-built binary)

To grab the latest release of the Polkadot binary, you can use wget:

wget https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-INSERT_VERSION/polkadot\n

Ensure you note the executable binary's location, as you'll need to use it when running the start-up command. If you prefer, you can specify the output location of the executable binary with the -O flag, for example:

wget https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-INSERT_VERSION/polkadot \\\n-O /var/lib/polkadot-data/polkadot\n

Info

The nature of pre-built binaries means that they may not work on your particular architecture or Linux distribution. If you see an error like cannot execute binary file: Exec format error it likely means the binary is incompatible with your system. You will either need to compile the binary or use Docker.

Ensure that you properly configure the permissions to make the Polkadot release binary executable:

sudo chmod +x polkadot\n
Linux (compile binary)

The most reliable (although perhaps not the fastest) way of launching a full node is to compile the binary yourself. Depending on your machine's specs, this may take an hour or more.

To get started, run the following commands to configure the Rust toolchain:

rustup default stable\nrustup update\nrustup update nightly\nrustup target add wasm32-unknown-unknown --toolchain nightly\nrustup target add wasm32-unknown-unknown --toolchain stable-x86_64-unknown-linux-gnu\nrustup component add rust-src --toolchain stable-x86_64-unknown-linux-gnu\n

You can verify your installation by running:

rustup show\n

You should see output similar to the following:

rustup show

active toolchain ---------------- stable-x86_64-unknown-linux-gnu (default) rustc 1.82.0 (f6e511eec 2024-10-15)

Once Rust is configured, run the following commands to clone and build Polkadot:

git clone https://github.com/paritytech/polkadot-sdk polkadot-sdk\ncd polkadot-sdk\ncargo build --release\n

Compiling the binary may take an hour or more, depending on your machine's specs. After building the Polkadot node from source, the executable binary will be located at ./target/release/polkadot.

Linux (snap package)

Polkadot can be installed as a snap package. If you don't already have Snap installed, take the following steps to install it:

sudo apt update\nsudo apt install snapd\n

Install the Polkadot snap package:

sudo snap install polkadot\n

Before continuing with the following instructions, check out the Configure and Run Your Node section to learn more about the configuration options.

To configure your Polkadot node with your desired options, you'll run a command similar to the following:

sudo snap set polkadot service-args=\"--name=MyName --chain=polkadot\"\n

Then to start the node service, run:

sudo snap start polkadot\n

You can review the logs to check on the status of the node:

snap logs polkadot -f\n

And at any time, you can stop the node service:

sudo snap stop polkadot\n

You can optionally prevent the service from stopping when snap is updated with the following command:

sudo snap set polkadot endure=true\n
"},{"location":"infrastructure/running-a-node/setup-full-node/#use-docker","title":"Use Docker","text":"

As an additional option, you can use Docker to run your node in a container. Doing this is more advanced, so it's best left up to those already familiar with Docker or who have completed the other set-up instructions in this guide. You can review the latest versions on DockerHub.

Be aware that when you run Polkadot in Docker, the process only listens on localhost by default. If you would like to connect to your node's services (RPC and Prometheus), you need to ensure that you run the node with the --rpc-external and --prometheus-external flags.

docker run -p 9944:9944 -p 9615:9615 parity/polkadot:v1.16.2 --name \"my-polkadot-node-calling-home\" --rpc-external --prometheus-external\n

If you're running Docker on an Apple Silicon machine (e.g. M4), you'll need to adapt the command slightly:

docker run --platform linux/amd64 -p 9944:9944 -p 9615:9615 parity/polkadot:v1.16.2 --name \"kearsarge-calling-home\" --rpc-external --prometheus-external\n
"},{"location":"infrastructure/running-a-node/setup-full-node/#configure-and-run-your-node","title":"Configure and Run Your Node","text":"

Now that you've installed and built the Polkadot binary, the next step is to configure the start-up command depending on the type of node that you want to run. You'll need to modify the start-up command based on the location of the binary. In some cases, it may be located within the ./target/release/ folder, so you'll need to replace polkadot with ./target/release/polkadot in the following commands.

Also, note that you can use the same binary for Polkadot as you would for Kusama or any other relay chain. You'll need to use the --chain flag to differentiate between chains.

Note

Not sure which type of node to run? Explore an overview of the different node types.

The base commands for running a Polkadot node are as follows:

Default pruned nodeCustom pruned nodeArchive node

This uses the default pruning value of the last 256 blocks:

polkadot --chain polkadot \\\n--name \"INSERT_NODE_NAME\"\n

You can customize the pruning value, for example, to the last 1000 finalized blocks:

polkadot --chain polkadot \\\n--name INSERT_YOUR_NODE_NAME \\\n--state-pruning 1000 \\\n--blocks-pruning archive \\\n--rpc-cors all \\\n--rpc-methods safe\n

To support the full state, use the archive option:

polkadot --chain polkadot \\\n--name INSERT_YOUR_NODE_NAME \\\n--state-pruning archive \\\n--blocks-pruning archive\n

If you want to run an RPC node, please refer to the following RPC Configurations section.

To review a complete list of the available commands, flags, and options, you can use the --help flag:

polkadot --help\n

Once you've fully configured your start-up command, you can execute it in your terminal and your node will start syncing.

"},{"location":"infrastructure/running-a-node/setup-full-node/#rpc-configurations","title":"RPC Configurations","text":"

The node startup settings allow you to choose what to expose, how many connections to expose, and which systems should be granted access through the RPC server.

  • You can limit the methods to use with --rpc-methods; an easy way to set this to a safe mode is --rpc-methods safe
  • You can set your maximum connections through --rpc-max-connections, for example, --rpc-max-connections 200
  • By default, localhost and Polkadot.js can access the RPC server. You can change this by setting --rpc-cors. To allow access from everywhere, you can use --rpc-cors all

For a list of important flags when running RPC nodes, refer to the Parity DevOps documentation: Important Flags for Running an RPC Node.
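
Combining these options, a sketch of an RPC-focused start-up command might look like the following (the node name is a placeholder):

polkadot --chain polkadot \\\n--name INSERT_YOUR_NODE_NAME \\\n--rpc-methods safe \\\n--rpc-max-connections 200 \\\n--rpc-cors all\n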

"},{"location":"infrastructure/running-a-node/setup-full-node/#sync-your-node","title":"Sync Your Node","text":"

The syncing process will take a while, depending on your capacity, processing power, disk speed, and RAM. On a $10 DigitalOcean droplet, the process may take roughly 36 hours. While syncing, your node name should be visible in gray on Polkadot Telemetry, and once it is fully synced, your node name will appear in white.

A healthy node syncing blocks will output logs like the following:

2024-11-19 23:49:57 Parity Polkadot 2024-11-19 23:49:57 \u270c\ufe0f version 1.14.1-7c4cd60da6d 2024-11-19 23:49:57 \u2764\ufe0f by Parity Technologies <admin@parity.io>, 2017-2024 2024-11-19 23:49:57 \ud83d\udccb Chain specification: Polkadot 2024-11-19 23:49:57 \ud83c\udff7 Node name: myPolkadotNode 2024-11-19 23:49:57 \ud83d\udc64 Role: FULL 2024-11-19 23:49:57 \ud83d\udcbe Database: RocksDb at /home/ubuntu/.local/share/polkadot/chains/polkadot/db/full 2024-11-19 23:50:00 \ud83c\udff7 Local node identity is: 12D3KooWDmhHEgPRJUJnUpJ4TFWn28EENqvKWH4dZGCN9TS51y9h 2024-11-19 23:50:00 Running libp2p network backend 2024-11-19 23:50:00 \ud83d\udcbb Operating system: linux 2024-11-19 23:50:00 \ud83d\udcbb CPU architecture: x86_64 2024-11-19 23:50:00 \ud83d\udcbb Target environment: gnu 2024-11-19 23:50:00 \ud83d\udcbb CPU: Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz 2024-11-19 23:50:00 \ud83d\udcbb CPU cores: 4 2024-11-19 23:50:00 \ud83d\udcbb Memory: 32001MB 2024-11-19 23:50:00 \ud83d\udcbb Kernel: 5.15.0-113-generic 2024-11-19 23:50:00 \ud83d\udcbb Linux distribution: Ubuntu 22.04.5 LTS 2024-11-19 23:50:00 \ud83d\udcbb Virtual machine: no 2024-11-19 23:50:00 \ud83d\udce6 Highest known block at #9319 2024-11-19 23:50:00 \u303d\ufe0f Prometheus exporter started at 127.0.0.1:9615 2024-11-19 23:50:00 Running JSON-RPC server: addr=127.0.0.1:9944, allowed origins=[\"http://localhost:*\", \"http://127.0.0.1:*\", \"https://localhost:*\", \"https://127.0.0.1:*\", \"https://polkadot.js.org\"] 2024-11-19 23:50:00 \ud83c\udfc1 CPU score: 671.67 MiBs 2024-11-19 23:50:00 \ud83c\udfc1 Memory score: 7.96 GiBs 2024-11-19 23:50:00 \ud83c\udfc1 Disk score (seq. writes): 377.87 MiBs 2024-11-19 23:50:00 \ud83c\udfc1 Disk score (rand. writes): 147.92 MiBs 2024-11-19 23:50:00 \ud83e\udd69 BEEFY gadget waiting for BEEFY pallet to become available... 2024-11-19 23:50:00 \ud83d\udd0d Discovered new external address for our node: /ip4/37.187.93.17/tcp/30333/ws/p2p/12D3KooWDmhHEgPRJUJnUpJ4TFWn28EENqvKWH4dZGCN9TS51y9h 2024-11-19 23:50:01 \ud83d\udd0d Discovered new external address for our node: /ip6/2001:41d0:a:3511::1/tcp/30333/ws/p2p/12D3KooWDmhHEgPRJUJnUpJ4TFWn28EENqvKWH4dZGCN9TS51y9h 2024-11-19 23:50:05 \u2699\ufe0f Syncing, target=#23486325 (5 peers), best: #12262 (0x8fb5\u2026f310), finalized #11776 (0x9de1\u202632fb), \u2b07 430.5kiB/s \u2b06 17.8kiB/s 2024-11-19 23:50:10 \u2699\ufe0f Syncing 628.8 bps, target=#23486326 (6 peers), best: #15406 (0x9ce1\u20262d76), finalized #15360 (0x0e41\u2026a064), \u2b07 255.0kiB/s \u2b06 1.8kiB/s

Congratulations, you're now syncing a Polkadot full node! Remember that the process is identical when using any other Polkadot SDK-based chain, although individual chains may have chain-specific flag requirements.

"},{"location":"infrastructure/running-a-node/setup-full-node/#connect-to-your-node","title":"Connect to Your Node","text":"

Open Polkadot.js Apps and click the logo in the top left to switch the node. Activate the Development toggle and input your node's domain or IP address. The default WS endpoint for a local node is:

ws://127.0.0.1:9944\n
"},{"location":"infrastructure/running-a-validator/","title":"Running a Validator","text":"

Running a Polkadot validator is crucial for securing the network and maintaining its integrity. Validators play a key role in verifying parachain blocks, participating in consensus, and ensuring the reliability of the Polkadot relay chain.

Learn the requirements for setting up a Polkadot validator node, along with detailed steps on how to install, run, upgrade, and maintain the node.

"},{"location":"infrastructure/running-a-validator/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"infrastructure/running-a-validator/#additional-resources","title":"Additional ResourcesLearn About Securing the NetworkExplore Rewards, Offenses, and SlashesCheck Out the Decentralized Nodes ProgramGet Help and Connect With Experts","text":"

Get a better understanding of Polkadot's trust model for parachains and the relay chain, including security mechanisms and how Polkadot ensures decentralization.

Learn about Polkadot's offenses and slashing system, along with validator rewards, era points, and nominator payments.

The Decentralized Nodes program aims to support Polkadot's security and decentralization by involving a diverse set of validators. Learn more and apply.

For help, connect with the Polkadot Validator Lounge on Element, where both the team and experienced validators are ready to assist.

"},{"location":"infrastructure/running-a-validator/requirements/","title":"Validator Requirements","text":""},{"location":"infrastructure/running-a-validator/requirements/#introduction","title":"Introduction","text":"

Running a validator in the Polkadot ecosystem is essential for maintaining network security and decentralization. Validators are responsible for validating transactions and adding new blocks to the chain, ensuring the system operates smoothly. In return for their services, validators earn rewards. However, the role comes with inherent risks, such as slashing penalties for misbehavior or technical failures. If you're new to validation, starting on Kusama provides a lower-stakes environment to gain valuable experience before progressing to the Polkadot network.

This guide covers everything you need to know about becoming a validator, including system requirements, staking prerequisites, and infrastructure setup. Whether you're deploying on a VPS or running your node on custom hardware, you'll learn how to optimize your validator for performance and security, ensuring compliance with network standards while minimizing risks.

"},{"location":"infrastructure/running-a-validator/requirements/#prerequisites","title":"Prerequisites","text":"

Running a validator requires solid system administration skills and a secure, well-maintained infrastructure. Below are the primary requirements you need to be aware of before getting started:

  • System administration expertise - handling technical anomalies and maintaining node infrastructure is critical. Validators must be able to troubleshoot and optimize their setup
  • Security - ensure your setup follows best practices for securing your node. Refer to the Secure Your Validator section to learn about important security measures
  • Network choice - start with Kusama to gain experience. Look for \"Adjustments for Kusama\" throughout these guides for tips on adapting the provided instructions for the Kusama network
  • Staking requirements - a minimum amount of native token (KSM or DOT) is required to be elected into the validator set. The required stake can come from your own holdings or from nominators
  • Risk of slashing - any DOT you stake is at risk if your setup fails or your validator misbehaves. If you're unsure of your ability to maintain a reliable validator, consider nominating your DOT to a trusted validator
"},{"location":"infrastructure/running-a-validator/requirements/#technical-requirements","title":"Technical Requirements","text":"

Running a Polkadot validator node on Linux is the most common approach, especially for beginners. While you can use any VPS provider that meets the technical specifications, this guide uses Ubuntu 22.04. However, the steps should be adaptable to other Linux distributions.

"},{"location":"infrastructure/running-a-validator/requirements/#reference-hardware","title":"Reference Hardware","text":"

Polkadot validators rely on high-performance hardware to process blocks efficiently. The following specifications are based on benchmarking using two VM instances:

  • Google Cloud Platform (GCP) - n2-standard-8 instance
  • Amazon Web Services (AWS) - c6i.4xlarge instance

The recommended minimum hardware requirements to ensure a fully functional and performant validator are as follows:

  • CPU:

    • x86-64 compatible
    • Eight physical cores @ 3.4 GHz
      • Per Referendum #1051, this will be a hard requirement as of January 2025
    • Processor:
      • Intel - Ice Lake or newer (Xeon or Core series)
      • AMD - Zen3 or newer (EPYC or Ryzen)
    • Simultaneous multithreading disabled:
      • Intel - Hyper-Threading
      • AMD - SMT
    • Single-threaded performance is prioritized over a higher core count
  • Storage:

    • NVMe SSD - at least 1 TB for blockchain data (prioritize low latency over high throughput)
    • Storage requirements will increase as the chain grows. For up-to-date estimates, see the current chain snapshot
  • Memory:

    • 32 GB DDR4 ECC
  • System:

    • Linux Kernel 5.16 or newer
  • Network:

    • Symmetric networking speed of 500 Mbit/s is required to handle large numbers of parachains and ensure congestion control during peak times

While the hardware specs above are best practices and not strict requirements, subpar hardware may lead to performance issues and increase the risk of slashing.

"},{"location":"infrastructure/running-a-validator/requirements/#vps-provider-list","title":"VPS Provider List","text":"

When selecting a VPS provider for your validator node, prioritize reliability, consistent performance, and adherence to the specific hardware requirements set for Polkadot validators. The following server types have been tested and showed acceptable performance in benchmark tests. However, this is not an endorsement and actual performance may vary depending on your workload and VPS provider.

  • Google Cloud Platform (GCP) - c2 and c2d machine families offer high-performance configurations suitable for validators
  • Amazon Web Services (AWS) - c6id machine family provides strong performance, particularly for I/O-intensive workloads
  • OVH - can be a budget-friendly solution if it meets your minimum hardware specifications
  • Digital Ocean - popular among developers, Digital Ocean's premium droplets offer configurations suitable for medium to high-intensity workloads
  • Vultr - offers flexibility with plans that may meet validator requirements, especially for high-bandwidth needs
  • Linode - provides detailed documentation, which can be helpful for setup
  • Scaleway - offers high-performance cloud instances that can be suitable for validator nodes
  • OnFinality - specialized in blockchain infrastructure, OnFinality provides validator-specific support and configurations
Acceptable use policies

Different VPS providers have varying acceptable use policies, and not all allow cryptocurrency-related activities.

For example, Digital Ocean requires explicit permission to use servers for cryptocurrency mining and defines unauthorized mining as network abuse in their acceptable use policy.

Review the terms for your VPS provider to avoid account suspension or server shutdown due to policy violations.

"},{"location":"infrastructure/running-a-validator/requirements/#minimum-bond-requirement","title":"Minimum Bond Requirement","text":"

Before bonding DOT, ensure you meet the minimum bond requirement to start a validator instance. The minimum bond is the smallest amount of DOT you must stake to enter the validator set. To become eligible for rewards, your validator node must be nominated by enough staked tokens.

For example, on November 19, 2024, the minimum stake backing a validator in Polkadot's era 1632 was 1,159,434.248 DOT. You can check the current minimum stake required using these tools:

  • Chain State Values
  • Subscan
  • Staking Dashboard
"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/","title":"Onboarding and Offboarding","text":"

Successfully onboarding and offboarding a Polkadot validator node is crucial to maintaining the security and integrity of the network. This process involves setting up, activating, deactivating, and securely managing your validator's key and staking details.

This section provides guidance on how to set up, activate, and deactivate your validator.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/#additional-resources","title":"Additional ResourcesReview the RequirementsLearn About Staking MechanicsMaintain Your NodeGet Help and Connect With Experts","text":"

Explore the technical and system requirements for running a Polkadot validator, including setup, hardware, staking prerequisites, and security best practices.

Explore the staking mechanics in Polkadot, focusing on how they relate to validators, including offenses and slashes, as well as reward payouts.

Learn how to manage your Polkadot validator node, including monitoring performance, running a backup validator for maintenance, and rotating keys.

For help, connect with the Polkadot Validator Lounge on Element, where both the team and experienced validators are ready to assist.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/","title":"Set Up a Validator","text":""},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#introduction","title":"Introduction","text":"

Setting up a Polkadot validator node is essential for securing the network and earning staking rewards. This guide walks you through the technical steps to set up a validator, from installing the necessary software to managing keys and synchronizing your node with the chain.

Running a validator requires a commitment to maintaining a stable, secure infrastructure. Validators are responsible for their own stakes and those of nominators who trust them with their tokens. Proper setup and ongoing management are critical to ensuring smooth operation and avoiding potential penalties such as slashing.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#prerequisites","title":"Prerequisites","text":"

To get the most from this guide, ensure you've done the following before going forward:

  • Read Validator Requirements and understand the recommended minimum skill level and hardware needs
  • Read General Management, Upgrade Your Node, and Pause Validating and understand the tasks required to keep your validator operational
  • Read Rewards Payout and understand how validator rewards are determined and paid out
  • Read Offenses and Slashes and understand how validator performance and security can affect tokens staked by you or your nominators
"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#initial-setup","title":"Initial Setup","text":"

Before you can begin running your validator, you'll need to configure your server environment to meet the operational and security standards required for validating. Configuration includes setting up time synchronization, ensuring critical security features are active, and installing the necessary binaries. Proper setup at this stage is essential to prevent issues like block production errors or being penalized for downtime. Below are the essential steps to get your system ready.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#install-network-time-protocol-client","title":"Install Network Time Protocol Client","text":"

Accurate timekeeping is critical to ensure your validator is synchronized with the network. Validators need local clocks in sync with the blockchain to avoid missing block authorship opportunities. Using Network Time Protocol (NTP) is the standard solution to keep your system's clock accurate.

If you are using Ubuntu version 18.04 or newer, the NTP Client should be installed by default. You can check whether you have the NTP client by running:

timedatectl\n

If NTP is running, you should see a message like the following:

System clock synchronized: yes\n

If NTP is not installed or running, you can install it using:

sudo apt-get install ntp\n

After installation, NTP will automatically start. To check its status:

sudo ntpq -p\n

This command will return a message with the status of the NTP synchronization. Skipping this step could result in your validator node missing blocks due to minor clock drift, potentially affecting its network performance.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#verify-landlock-is-activated","title":"Verify Landlock is Activated","text":"

Landlock is an important security feature integrated into Linux kernels starting with version 5.13. It allows processes, even those without special privileges, to limit their access to the system to reduce the machine's attack surface. This feature is crucial for validators, as it helps ensure the security and stability of the node by preventing unauthorized access or malicious behavior.

To use Landlock, ensure your machine is running kernel version 5.13 or newer. Most Linux distributions should already have Landlock activated. You can check if Landlock is activated on your machine by running the following command as root:

dmesg | grep landlock || journalctl -kg landlock\n

If Landlock is not activated, your system logs won't show any related output. In this case, you will need to activate it manually or ensure that your Linux distribution supports it. Most modern distributions with the required kernel version should have Landlock activated by default. However, if your system lacks support, you may need to build the kernel with Landlock activated. For more information on doing so, refer to the official kernel documentation.
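
You can confirm that your kernel meets the 5.13 minimum before running the check above by printing its version:

uname -r\n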

Implementing Landlock ensures your node operates in a restricted, self-imposed sandbox, limiting potential damage from security breaches or bugs. While not a mandatory requirement, enabling this feature greatly improves the security of your validator setup.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#install-the-polkadot-binaries","title":"Install the Polkadot Binaries","text":"

You must install the Polkadot binaries required to run your validator node. These binaries include the main polkadot, polkadot-prepare-worker, and polkadot-execute-worker binaries. All three are needed to run a fully functioning validator node.

Depending on your preference and operating system setup, there are multiple methods to install these binaries. Below are the main options:

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#install-from-official-releases","title":"Install from Official Releases","text":"

The preferred, most straightforward method to install the required binaries is downloading the latest versions from the official releases. You can visit the GitHub Releases page for the most current versions of the polkadot, polkadot-prepare-worker, and polkadot-execute-worker binaries.

You can also download the binaries by using the following direct links, replacing INSERT_VERSION_NUMBER with the version number, e.g., v1.16.1:

https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-INSERT_VERSION_NUMBER/polkadot\n
https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-INSERT_VERSION_NUMBER/polkadot-prepare-worker\n
https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-INSERT_VERSION_NUMBER/polkadot-execute-worker\n
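
As an illustrative sketch, assuming version v1.16.1 and a download into the current directory, you could fetch all three binaries and mark them executable in one loop (the VERSION variable is a placeholder for the release you choose):

VERSION=v1.16.1\nfor BIN in polkadot polkadot-prepare-worker polkadot-execute-worker; do\n  wget \"https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-$VERSION/$BIN\"\n  chmod +x \"$BIN\"\ndone\n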
"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#install-with-package-managers","title":"Install with Package Managers","text":"

Users running Debian-based distributions like Ubuntu, or RPM-based distributions such as Fedora or CentOS can install the binaries via package managers.

Debian-based (Debian, Ubuntu)

Run the following commands as the root user to add the necessary repository and install the binaries:

# Import the security@parity.io GPG key\ngpg --recv-keys --keyserver hkps://keys.mailvelope.com 9D4B2B6EB8F97156D19669A9FF0812D491B96798\ngpg --export 9D4B2B6EB8F97156D19669A9FF0812D491B96798 > /usr/share/keyrings/parity.gpg\n# Add the Parity repository and update the package index\necho 'deb [signed-by=/usr/share/keyrings/parity.gpg] https://releases.parity.io/deb release main' > /etc/apt/sources.list.d/parity.list\napt update\n# Install the `parity-keyring` package - This will ensure the GPG key\n# used by APT remains up-to-date\napt install parity-keyring\n# Install polkadot\napt install polkadot\n

After installation, ensure the binaries are properly installed by verifying the installation.

RPM-based (Fedora, CentOS)

Run the following commands as the root user to install the binaries on an RPM-based system:

# Install dnf-plugins-core (This might already be installed)\ndnf install dnf-plugins-core\n# Add the repository and activate it\ndnf config-manager --add-repo https://releases.parity.io/rpm/polkadot.repo\ndnf config-manager --set-enabled polkadot\n# Install polkadot (You may have to confirm the import of the GPG key, which\n# should have the following fingerprint: 9D4B2B6EB8F97156D19669A9FF0812D491B96798)\ndnf install polkadot\n

After installation, ensure the binaries are properly installed by verifying the installation.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#install-with-ansible","title":"Install with Ansible","text":"

You can also manage Polkadot installations using Ansible. This approach can be beneficial for users managing multiple validator nodes or requiring automated deployment. The Parity chain operations Ansible collection provides a Substrate node role for this purpose.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#install-with-docker","title":"Install with Docker","text":"

If you prefer using Docker or an OCI-compatible container runtime, the official Polkadot Docker image can be pulled directly from Docker Hub.

To pull the latest image, run the following command. Make sure to replace INSERT_VERSION_NUMBER with the appropriate version number, e.g., v1.16.1:

docker pull docker.io/parity/polkadot:INSERT_VERSION_NUMBER\n
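
If you'd like to then run the pulled image, here is a minimal sketch; the published ports match the defaults seen elsewhere in this guide, while the container name, data path, and node name are illustrative placeholders:

docker run -d --name polkadot-node -p 30333:30333 -p 9944:9944 -v /var/lib/polkadot-data:/data docker.io/parity/polkadot:INSERT_VERSION_NUMBER --chain polkadot --name \"myPolkadotNode\"\n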
"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#build-from-sources","title":"Build from Sources","text":"

You may build the binaries from source by following the instructions on the Polkadot SDK repository.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#verify-installation","title":"Verify Installation","text":"

Once the Polkadot binaries are installed, it's essential to verify that everything is set up correctly and that all the necessary components are in place. Follow these steps to ensure the binaries are installed and functioning as expected.

  1. Check the versions - run the following commands to verify the versions of the installed binaries:

    polkadot --version\npolkadot-execute-worker --version\npolkadot-prepare-worker --version\n

    The output should show the version numbers for each of the binaries. Ensure that the versions match and are consistent, similar to the following example (the specific version may vary):
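
    polkadot 1.14.1-7c4cd60da6d\npolkadot-execute-worker 1.14.1-7c4cd60da6d\npolkadot-prepare-worker 1.14.1-7c4cd60da6d\n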

    If the versions do not match or if there is an error, double-check that all the binaries were correctly installed and are accessible within your $PATH.

  2. Ensure all binaries are in the same directory - all the binaries must be in the same directory for the Polkadot validator node to function properly. If the binaries are not in the same location, move them to a unified directory and ensure this directory is added to your system's $PATH

    To verify the $PATH, run the following command:

    echo $PATH\n

    If necessary, you can move the binaries to a shared location, such as /usr/local/bin/, and add it to your $PATH.
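
    For example, a minimal sketch of consolidating the binaries, assuming they were downloaded to the current directory:

    sudo mv ./polkadot ./polkadot-prepare-worker ./polkadot-execute-worker /usr/local/bin/\nsudo chmod +x /usr/local/bin/polkadot /usr/local/bin/polkadot-prepare-worker /usr/local/bin/polkadot-execute-worker\n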

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#run-a-validator-on-a-testnet","title":"Run a Validator on a TestNet","text":"

Running your validator on a test network like Westend or Kusama is a smart way to familiarize yourself with the process and identify any setup issues in a lower-stakes environment before joining the Polkadot MainNet.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#choose-a-network","title":"Choose a Network","text":"
  • Westend - Polkadot's primary TestNet is open to anyone for testing purposes. Validator slots are intentionally limited to keep the network stable for the Polkadot release process, so it may not support as many validators at any given time
  • Kusama - often called Polkadot's "canary network," Kusama has real economic value but operates with a faster and more experimental approach. Running a validator here provides an experience closer to MainNet with the benefit of more frequent validation opportunities, with an era time of 6 hours vs. 24 hours for Polkadot
"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#synchronize-chain-data","title":"Synchronize Chain Data","text":"

After successfully installing and verifying the Polkadot binaries, the next step is to sync your node with the blockchain network. Synchronization is necessary to download and validate the blockchain data, ensuring your node is ready to participate as a validator. Follow these steps to sync your node:

  1. Start syncing - you can run a full or warp sync


    Polkadot defaults to using a full sync, which downloads and validates the entire blockchain history from the genesis block. Start the syncing process by running the following command:

    polkadot\n

    This command starts your Polkadot node in non-validator mode, allowing you to synchronize the chain data.

    You can opt to use warp sync, which initially downloads only GRANDPA finality proofs and the latest finalized block's state. Use the following command to start a warp sync:

    polkadot --sync warp\n

    Warp sync ensures that your node quickly updates to the latest finalized state. The historical blocks are downloaded in the background as the node continues to operate.

    Adjustments for TestNets

    If you're planning to run a validator on a TestNet, you can specify the chain using the --chain flag. For example, the following will run a validator on Kusama:

    polkadot --chain=kusama\n
  2. Monitor sync progress - once the sync starts, you will see a stream of logs providing information about the node's status and progress. Here's an example of what the output might look like:

    polkadot\n2021-06-17 03:07:07 Parity Polkadot\n2021-06-17 03:07:07 ✌️ version 0.9.5-95f6aa201-x86_64-linux-gnu\n2021-06-17 03:07:07 ❤️ by Parity Technologies <admin@parity.io>, 2017-2021\n2021-06-17 03:07:07 📋 Chain specification: Polkadot\n2021-06-17 03:07:07 🏷 Node name: boiling-pet-7554\n2021-06-17 03:07:07 👤 Role: FULL\n2021-06-17 03:07:07 💾 Database: RocksDb at /root/.local/share/polkadot/chains/polkadot/db\n2021-06-17 03:07:07 ⛓ Native runtime: polkadot-9050 (parity-polkadot-0.tx7.au0)\n2021-06-17 03:07:10 🏷 Local node identity is: 12D3KooWLtXFWf1oGrnxMGmPKPW54xWCHAXHbFh4Eap6KXmxoi9u\n2021-06-17 03:07:10 📦 Highest known block at #17914\n2021-06-17 03:07:10 〽️ Prometheus server started at 127.0.0.1:9615\n2021-06-17 03:07:10 Listening for new connections on 127.0.0.1:9944\n...

    The output logs provide information such as the current block number, node name, and network connections. Monitor the sync progress and any errors that might occur during the process. Look for information about the latest processed block and compare it with the current highest block using tools like Telemetry or Polkadot.js Apps Explorer.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#database-snapshot-services","title":"Database Snapshot Services","text":"

If you'd like to speed up the process further, you can use a database snapshot. Snapshots are compressed backups of the blockchain's database directory and can significantly reduce the time required to sync a new node. Here are a few public snapshot providers:

  • Stakeworld
  • Polkachu
  • Polkashots

Warning

Although snapshots are convenient, syncing from scratch is recommended for security purposes. If snapshots become corrupted and most nodes rely on them, the network could inadvertently run on a non-canonical chain.

Why am I unable to synchronize the chain with 0 peers?

Make sure you have libp2p port 30333 activated. It will take some time to discover other peers over the network.
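
If a firewall is in use, open the port explicitly; for example, with ufw (assuming Ubuntu's default firewall):

sudo ufw allow 30333/tcp\n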

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#bond-dot","title":"Bond DOT","text":"

Once your validator node is synced, the next step is bonding DOT. A bonded account, or stash, holds your staked tokens (DOT) that back your validator node. Bonding your DOT means locking it for a period, during which it cannot be transferred or spent but is used to secure your validator's role in the network. Visit the Minimum Bond Requirement section for details on how much DOT is required.

The following sections will guide you through bonding DOT for your validator.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#bonding-dot-on-polkadotjs-apps","title":"Bonding DOT on Polkadot.js Apps","text":"

Once you're ready to bond your DOT, head over to the Polkadot.js Apps staking page by clicking the Network dropdown at the top of the page and selecting Staking.

To get started with the bond submission, click on the Accounts tab, then the + Stash button, and then enter the following information:

  1. Stash account - select your stash account (which is the account with the DOT/KSM balance)
  2. Value bonded - enter how much DOT from the stash account you want to bond/stake. You are not required to bond all of the DOT in that account and you may bond more DOT at a later time. Be aware, withdrawing any bonded amount requires waiting for the unbonding period. The unbonding period is seven days for Kusama and 28 days for Polkadot
  3. Payment destination - add the recipient account for validator rewards. If you'd like to redirect payments to an account that is not the stash account, you can do it by entering the address here. Note that it is extremely unsafe to set an exchange address as the recipient of the staking rewards

Once everything is filled in properly, select Bond and sign the transaction with your stash account. If successful, you should see an ExtrinsicSuccess message.

Your bonded account will be available under Stashes. After refreshing the screen, you should now see a card with all your accounts. The bonded amount on the right corresponds to the funds bonded by the stash account.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#set-session-keys","title":"Set Session Keys","text":"

Setting up your validator's session keys is essential to associate your node with your stash account on the Polkadot network. Validators use session keys to participate in the consensus process. Your validator can only perform its role in the network by properly setting session keys, which consist of several key pairs for different parts of the protocol (e.g., GRANDPA, BABE). These keys must be registered on-chain and associated with your validator node to ensure it can participate in validating blocks.

The following sections will cover generating session keys, submitting key data on-chain, and verifying that session keys are correctly set.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#generate-session-keys","title":"Generate Session Keys","text":"

The Polkadot.js Apps UI and the CLI are the two primary methods used to generate session keys.

  1. Ensure that you are connected to your validator node through the Polkadot.js Apps interface
  2. In the Toolbox tab, navigate to RPC calls
  3. Select author_rotateKeys from the drop-down menu and run the command. This will generate new session keys in your node's keystore and return the result as a hex-encoded string
  4. Copy and save this hex-encoded output for the next step

Generate session keys by running the following command on your validator node:

curl -H \"Content-Type: application/json\" \\\n-d '{\"id\":1, \"jsonrpc\":\"2.0\", \"method\": \"author_rotateKeys\", \"params\":[]}' \\\nhttp://localhost:9944\n

This command will return a hex-encoded string that is the concatenation of your session keys. Save this string for later use.
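
The response is a standard JSON-RPC envelope whose result field holds the hex-encoded keys; illustratively (hex truncated):

{\"jsonrpc\":\"2.0\",\"result\":\"0x...\",\"id\":1}\n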

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#submit-transaction-to-set-keys","title":"Submit Transaction to Set Keys","text":"

Now that you have generated your session keys, you must submit them to the chain. Follow these steps:

  1. Go to the Network > Staking > Accounts section on Polkadot.js Apps
  2. Select Set Session Key on the bonding account you generated earlier
  3. Paste the hex-encoded session key string you generated (from either the UI or CLI) into the input field and submit the transaction

Once the transaction is signed and submitted, your session keys will be registered on-chain.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#verify-session-key-setup","title":"Verify Session Key Setup","text":"

To verify that your session keys are properly set, you can use one of two RPC calls:

  • hasKey - checks if the node has a specific key by public key and key type
  • hasSessionKeys - verifies if your node has the full session key string associated with the validator

For example, you can check session keys on the Polkadot.js Apps interface or by running an RPC query against your node. Once this is done, your validator node is ready for its role.
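
As an illustration, a sketch of the CLI check using the author_hasSessionKeys RPC, passing the hex-encoded session key string you saved earlier:

curl -H \"Content-Type: application/json\" \\\n-d '{\"id\":1, \"jsonrpc\":\"2.0\", \"method\": \"author_hasSessionKeys\", \"params\":[\"INSERT_SESSION_KEYS_HEX\"]}' \\\nhttp://localhost:9944\n

A result of true confirms your node's keystore holds every key in that string.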

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#set-the-node-key","title":"Set the Node Key","text":"

Validators on Polkadot need a static network key (also known as the node key) to maintain a stable node identity. This key ensures that your validator can maintain a consistent peer ID, even across restarts, which is crucial for maintaining reliable network connections.

Starting with Polkadot version 1.11, validators without a stable network key may encounter the following error on startup:

polkadot --validator --name \"INSERT_NAME_FROM_TELEMETRY\"\nError: 0: Starting an authority without network key\nThis is not a safe operation because other authorities in the network may depend on your node having a stable identity. Otherwise these other authorities may not being able to reach you.\nIf it is the first time running your node you could use one of the following methods:\n1. [Preferred] Separately generate the key with: INSERT_NODE_BINARY key generate-node-key --base-path INSERT_YOUR_BASE_PATH\n2. [Preferred] Separately generate the key with: INSERT_NODE_BINARY key generate-node-key --file INSERT_YOUR_PATH_TO_NODE_KEY\n3. [Preferred] Separately generate the key with: INSERT_NODE_BINARY key generate-node-key --default-base-path\n4. [Unsafe] Pass --unsafe-force-node-key-generation and make sure you remove it for subsequent node restarts"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#generate-the-node-key","title":"Generate the Node Key","text":"

Use one of the following methods to generate your node key:


The recommended solution is to generate a node key and save it to a file using the following command:

polkadot key generate-node-key --file INSERT_PATH_TO_NODE_KEY\n

You can also generate the node key with the following command, which will automatically save the key to the base path of your node:

polkadot key generate-node-key --default-base-path\n

Save the file path for reference. You will need it in the next step to configure your node with a static identity.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#set-the-node-key_1","title":"Set the Node Key","text":"

After generating the node key, configure your node to use it by specifying the path to the key file when launching your node. Add the following flag to your validator node's startup command:

polkadot --node-key-file INSERT_PATH_TO_NODE_KEY\n

Following these steps ensures that your node retains its identity, making it discoverable by peers without the risk of conflicting identities across sessions. For further technical background, see Polkadot SDK Pull Request #3852 for the rationale behind requiring static keys.
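
Putting the flags from this guide together, a sketch of a typical validator startup command (the name and key path are placeholders):

polkadot --validator --name \"INSERT_NAME_FROM_TELEMETRY\" --node-key-file INSERT_PATH_TO_NODE_KEY\n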

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#validate","title":"Validate","text":"

Once your validator node is fully synced and ready, the next step is to ensure it's visible on the network and performing as expected. Below are steps for monitoring and managing your node on the Polkadot network.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#verify-sync-via-telemetry","title":"Verify Sync via Telemetry","text":"

To confirm that your validator is live and synchronized with the Polkadot network, visit the Telemetry page. Telemetry provides real-time information on node performance and can help you check if your validator is connected properly. Search for your node by name. You can search all nodes currently active on the network, which is why you should use a unique name for easy recognition. Now, confirm that your node is fully synced by comparing the block height of your node with the network's latest block. Nodes that are fully synced will appear white in the list, while nodes that are not yet fully synced will appear gray.

In the following example, a node named techedtest is successfully located and synchronized, ensuring it's prepared to participate in the network:

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#activate-using-polkadotjs-apps","title":"Activate using Polkadot.js Apps","text":"

Follow these steps to use Polkadot.js Apps to activate your validator:

  1. Go to the Validator tab in the Polkadot.js Apps UI and locate the section where you input the keys generated from rotateKeys. Paste the output from author_rotateKeys, which is a hex-encoded key that links your validator with its session keys:

  2. Set a reward commission percentage if desired. You can set the percentage of rewards paid to your validator; the remainder is paid to your nominators. A 100% commission rate indicates the validator intends to keep all rewards and is seen as a signal that the validator is not seeking nominators

  3. Toggle the allows new nominations option if your validator is open to more nominations from DOT holders
  4. Once everything is configured, select Bond & Validate to activate your validator status

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#monitor-validation-status-and-slots","title":"Monitor Validation Status and Slots","text":"

On the Staking tab in Polkadot.js Apps, you can see your validator's status, the number of available validator slots, and the nodes that have signaled their intent to validate. Your node may initially appear in the waiting queue, especially if the validator slots are full. The following is an example view of the Staking tab:

The validator set refreshes each era. If there's an available slot in the next era, your node may be selected to move from the waiting queue to the active validator set, allowing it to start validating blocks. If your validator is not selected, it remains in the waiting queue. Increasing your stake or gaining more nominators may improve your chance of being selected in future eras.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#run-a-validator-using-systemd","title":"Run a Validator Using Systemd","text":"

Running your Polkadot validator as a systemd service is an effective way to ensure its high uptime and reliability. Using systemd allows your validator to automatically restart after server reboots or unexpected crashes, significantly reducing the risk of slashing due to downtime.

The following sections will walk you through creating and managing a systemd service for your validator, allowing you to seamlessly monitor and control it as part of your Linux system.

Ensure the following requirements are met before proceeding with the systemd setup:

  • Confirm your system meets the requirements for running a validator
  • Ensure you meet the minimum bond requirements for validating
  • Verify the Polkadot binary is installed
"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#create-the-systemd-service-file","title":"Create the Systemd Service File","text":"

First, create a new unit file called polkadot-validator.service in /etc/systemd/system/:

touch /etc/systemd/system/polkadot-validator.service\n

In this unit file, you will write the commands that you want to run on server boot/restart:

/etc/systemd/system/polkadot-validator.service
[Unit]\nDescription=Polkadot Node\nAfter=network.target\nDocumentation=https://github.com/paritytech/polkadot\n\n[Service]\nEnvironmentFile=-/etc/default/polkadot\nExecStart=/usr/bin/polkadot $POLKADOT_CLI_ARGS\nUser=polkadot\nGroup=polkadot\nRestart=always\nRestartSec=120\nCapabilityBoundingSet=\nLockPersonality=true\nNoNewPrivileges=true\nPrivateDevices=true\nPrivateMounts=true\nPrivateTmp=true\nPrivateUsers=true\nProtectClock=true\nProtectControlGroups=true\nProtectHostname=true\nProtectKernelModules=true\nProtectKernelTunables=true\nProtectSystem=strict\nRemoveIPC=true\nRestrictAddressFamilies=AF_INET AF_INET6 AF_NETLINK AF_UNIX\nRestrictNamespaces=false\nRestrictSUIDSGID=true\nSystemCallArchitectures=native\nSystemCallFilter=@system-service\nSystemCallFilter=landlock_add_rule landlock_create_ruleset landlock_restrict_self seccomp mount umount2\nSystemCallFilter=~@clock @module @reboot @swap @privileged\nSystemCallFilter=pivot_root\nUMask=0027\n\n[Install]\nWantedBy=multi-user.target\n
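
The unit above reads its command-line arguments from /etc/default/polkadot through the $POLKADOT_CLI_ARGS variable. A minimal sketch of that environment file, assuming the validator flags used earlier in this guide:

/etc/default/polkadot
POLKADOT_CLI_ARGS=\"--validator --name INSERT_NAME_FROM_TELEMETRY --node-key-file INSERT_PATH_TO_NODE_KEY\"\n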

Restart Delay Recommendation

It is recommended that a node's restart be delayed with RestartSec in the case of a crash. It's possible that when a node crashes, consensus votes in GRANDPA aren't persisted to disk. In this case, there is potential to equivocate when immediately restarting. Delaying the restart will allow the network to progress past potentially conflicting votes.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#run-the-service","title":"Run the Service","text":"

Activate the systemd service to start on system boot by running:

systemctl enable polkadot-validator.service\n

To start the service manually, use:

systemctl start polkadot-validator.service\n

Check the service's status to confirm it is running:

systemctl status polkadot-validator.service\n

To view the logs in real-time, use journalctl like so:

journalctl -f -u polkadot-validator\n

With these steps, you can effectively manage and monitor your validator as a systemd service.

Once your validator is active, it's officially part of Polkadot's security infrastructure. For questions or further support, you can reach out to the Polkadot Validator chat for tips and troubleshooting.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/","title":"Stop Validating","text":""},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/#introduction","title":"Introduction","text":"

If you're ready to stop validating on Polkadot, there are essential steps to ensure a smooth transition while protecting your funds and account integrity. Whether you're taking a break for maintenance or unbonding entirely, you'll need to chill your validator, purge session keys, and unbond your tokens. This guide explains how to use Polkadot's tools and extrinsics to safely withdraw from validation activities, safeguarding your account's future usability.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/#pause-versus-stop","title":"Pause Versus Stop","text":"

If you wish to remain a validator or nominator (for example, stopping for planned downtime or server maintenance), submitting the chill extrinsic in the staking pallet should suffice. Additional steps are only needed to unbond funds or reap an account.

The following are steps to ensure a smooth stop to validation:

  • Chill the validator
  • Purge validator session keys
  • Unbond your tokens
"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/#chill-validator","title":"Chill Validator","text":"

When stepping back from validating, the first step is to chill your validator status. This action stops your validator from being considered for the next era without fully unbonding your tokens, which can be useful for temporary pauses like maintenance or planned downtime.

Use the staking.chill extrinsic to initiate this. For more guidance on chilling your node, refer to the Pause Validating guide. You may also claim any pending staking rewards at this point.

"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/#purge-validator-session-keys","title":"Purge Validator Session Keys","text":"

Purging validator session keys is a critical step in removing the association between your validator account and its session keys, which ensures that your account is fully disassociated from validator activities. The session.purgeKeys extrinsic removes the reference to your session keys from the stash or staking proxy account that originally set them.

Here are a couple of important things to know about purging keys:

  • Account used to purge keys - always purge keys from the same account you originally used to set them, usually your stash or staking proxy account. Using a different account may leave an unremovable reference to the session keys on the original account, preventing its reaping
  • Account reaping issue - failing to purge keys will prevent you from reaping (fully deleting) your stash account. If you attempt to transfer tokens without purging, you'll need to rebond, purge the session keys, unbond again, and wait through the unbonding period before any transfer
"},{"location":"infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/#unbond-your-tokens","title":"Unbond Your Tokens","text":"

After chilling your node and purging session keys, the final step is to unbond your staked tokens. This action removes them from staking and begins the unbonding period (usually 28 days for Polkadot and seven days for Kusama), after which the tokens will be transferable.

To unbond tokens, go to Network > Staking > Account Actions on Polkadot.js Apps. Select your stash account, click on the dropdown menu, and choose Unbond Funds. Alternatively, you can use the staking.unbond extrinsic if you handle this via a staking proxy account.

Once the unbonding period is complete, your tokens will be available for use in transactions or transfers outside of staking.

"},{"location":"infrastructure/running-a-validator/operational-tasks/","title":"Operational Tasks","text":"

Running a Polkadot validator node involves several key operational tasks to ensure secure and efficient participation in the network. In this section, you'll learn how to manage and maintain your validator node by monitoring its performance, conducting regular maintenance, and ensuring high availability through strategies like running a backup validator. You'll also find instructions on rotating your session keys to enhance security and minimize vulnerabilities. Mastering these tasks is essential for maintaining a reliable and trusted presence within your network.

"},{"location":"infrastructure/running-a-validator/operational-tasks/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"infrastructure/running-a-validator/operational-tasks/#additional-resources","title":"Additional ResourcesAccess Real-Time Validator MetricsStay Up to Date with Runtime Upgrades","text":"

Check the Polkadot Telemetry dashboard for real-time insights into node performance, including validator status, connectivity, block production, and software version to identify potential issues.

Learn how to monitor the Polkadot network for upcoming upgrades, so you can prepare your validator node for any required updates or modifications.

"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/","title":"General Management","text":""},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#introduction","title":"Introduction","text":"

Validator performance is pivotal in maintaining the security and stability of the Polkadot network. As a validator, optimizing your setup ensures efficient transaction processing, minimizes latency, and maintains system reliability during high-demand periods. Proper configuration and proactive monitoring also help mitigate risks like slashing and service interruptions.

This guide covers essential practices for managing a validator, including performance tuning techniques, security hardening, and tools for real-time monitoring. Whether you're fine-tuning CPU settings, configuring NUMA balancing, or setting up a robust alert system, these steps will help you build a resilient and efficient validator operation.

"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#configuration-optimization","title":"Configuration Optimization","text":"

For those seeking to optimize their validator's performance, the following configurations can improve responsiveness, reduce latency, and ensure consistent performance during high-demand periods.

"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#deactivate-simultaneous-multithreading","title":"Deactivate Simultaneous Multithreading","text":"

Polkadot validators operate primarily in single-threaded mode for critical paths, meaning optimizing for single-core CPU performance can reduce latency and improve stability. Deactivating simultaneous multithreading (SMT) can prevent virtual cores from affecting performance. The SMT implementation is called Hyper-Threading on Intel and 2-way SMT on AMD Zen. The following script takes every sibling (virtual) core offline:

for cpunum in $(cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | cut -s -d, -f2- | tr ',' '\\n' | sort -un)\ndo\n  echo 0 > /sys/devices/system/cpu/cpu$cpunum/online\ndone\n

To save the changes permanently, add nosmt=force as a kernel parameter. Edit /etc/default/grub and add nosmt=force to the GRUB_CMDLINE_LINUX_DEFAULT variable as follows:

sudo nano /etc/default/grub\n# Add to GRUB_CMDLINE_LINUX_DEFAULT\n
/etc/default/grub
GRUB_HIDDEN_TIMEOUT=0\nGRUB_HIDDEN_TIMEOUT_QUIET=true\nGRUB_TIMEOUT=10\nGRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`\nGRUB_CMDLINE_LINUX_DEFAULT=\"nosmt=force\"\nGRUB_CMDLINE_LINUX=\"\"\n

After updating the variable, be sure to update GRUB to apply changes:

sudo update-grub\n

After updating GRUB, reboot your system. Once it comes back up, you should see that half of the cores are offline. To confirm, run:

lscpu --extended\n
"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#deactivate-automatic-numa-balancing","title":"Deactivate Automatic NUMA Balancing","text":"

Deactivating NUMA (Non-Uniform Memory Access) balancing for multi-CPU setups helps keep processes on the same CPU node, minimizing latency. Run the following command to deactivate NUMA balancing in runtime:

sysctl kernel.numa_balancing=0\n

To deactivate NUMA balancing permanently, add numa_balancing=disable to GRUB settings:

sudo nano /etc/default/grub\n# Add to GRUB_CMDLINE_LINUX_DEFAULT\n
/etc/default/grub
GRUB_DEFAULT=0\nGRUB_HIDDEN_TIMEOUT=0\nGRUB_HIDDEN_TIMEOUT_QUIET=true\nGRUB_TIMEOUT=10\nGRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`\nGRUB_CMDLINE_LINUX_DEFAULT=\"numa_balancing=disable\"\nGRUB_CMDLINE_LINUX=\"\"\n

After updating the variable, be sure to update GRUB to apply changes:

sudo update-grub\n

Confirm the deactivation by running the following command:

sysctl -a | grep 'kernel.numa_balancing'\n

If you successfully deactivated NUMA balancing, the preceding command should show kernel.numa_balancing = 0.

"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#spectre-and-meltdown-mitigations","title":"Spectre and Meltdown Mitigations","text":"

Spectre and Meltdown are well-known vulnerabilities in modern CPUs that exploit speculative execution to access sensitive data. These vulnerabilities have been patched in recent Linux kernels, but the mitigations can slightly impact performance, especially in high-throughput or containerized environments.

If your security needs allow it, you may selectively deactivate specific mitigations for performance gains. The Spectre V2 and Speculative Store Bypass Disable (SSBD) for Spectre V4 apply to speculative execution and are particularly impactful in containerized environments. Deactivating them can help regain performance if your environment doesn't require these security layers.

To selectively deactivate the Spectre mitigations, update the GRUB_CMDLINE_LINUX_DEFAULT variable in your /etc/default/grub configuration:

sudo nano /etc/default/grub\n# Add to GRUB_CMDLINE_LINUX_DEFAULT\n
/etc/default/grub
GRUB_DEFAULT=0\nGRUB_HIDDEN_TIMEOUT=0\nGRUB_HIDDEN_TIMEOUT_QUIET=true\nGRUB_TIMEOUT=10\nGRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`\nGRUB_CMDLINE_LINUX_DEFAULT=\"spec_store_bypass_disable=prctl spectre_v2_user=prctl\"\n

After updating the variable, be sure to update GRUB to apply changes and then reboot:

sudo update-grub\nsudo reboot\n

This approach selectively deactivates the Spectre V2 and Spectre V4 mitigations, leaving other protections intact. For full security, keep mitigations activated unless there's a significant performance need, as disabling them could expose the system to potential attacks on affected CPUs.

"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#monitor-your-node","title":"Monitor Your Node","text":"

Monitoring your node's performance is critical to maintaining network reliability and security. Tools like Prometheus and Grafana provide insights into block height, peer connections, CPU and memory usage, and more. This section walks through setting up these tools and configuring alerts to notify you of potential issues.

"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#prepare-environment","title":"Prepare Environment","text":"

Before installing Prometheus, it's important to set up the environment securely to ensure Prometheus runs with restricted user privileges. You can set up Prometheus securely as follows:

  1. Create a Prometheus user - ensure Prometheus runs with minimal permissions
    sudo useradd --no-create-home --shell /usr/sbin/nologin prometheus\n
  2. Set up directories - create directories for configuration and data storage
    sudo mkdir /etc/prometheus\nsudo mkdir /var/lib/prometheus\n
  3. Change directory ownership - ensure Prometheus has access
    sudo chown -R prometheus:prometheus /etc/prometheus\nsudo chown -R prometheus:prometheus /var/lib/prometheus\n
"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#install-and-configure-prometheus","title":"Install and Configure Prometheus","text":"

After preparing the environment, install and configure the latest version of Prometheus as follows:

  1. Download Prometheus - download the release binary for your system architecture from the Prometheus releases page. Replace the placeholder text with the link to that binary, e.g. https://github.com/prometheus/prometheus/releases/download/v3.0.0/prometheus-3.0.0.linux-amd64.tar.gz
    sudo apt-get update && sudo apt-get upgrade\nwget INSERT_RELEASE_DOWNLOAD_LINK\ntar xfz prometheus-*.tar.gz\ncd prometheus-3.0.0.linux-amd64\n
  2. Set up Prometheus - copy binaries and directories, assign ownership of these files to the prometheus user, and clean up download directory as follows:

    sudo cp ./prometheus /usr/local/bin/\nsudo cp ./promtool /usr/local/bin/\n
    sudo cp -r ./consoles /etc/prometheus\nsudo cp -r ./console_libraries /etc/prometheus\nsudo chown -R prometheus:prometheus /etc/prometheus/consoles\nsudo chown -R prometheus:prometheus /etc/prometheus/console_libraries\n
    cd .. && rm -r prometheus*\n
  3. Create prometheus.yml for configuration - run this command to define global settings, rule files, and scrape targets:

    sudo nano /etc/prometheus/prometheus.yml\n
    In this example configuration file, Prometheus scrapes its own metrics every 5 seconds, ensuring detailed internal metrics, and scrapes node metrics from the default port 9615; both intervals are customizable. prometheus-config.yml
    global:\n  scrape_interval: 15s\n  evaluation_interval: 15s\n\nrule_files:\n  # - \"first.rules\"\n  # - \"second.rules\"\n\nscrape_configs:\n  - job_name: 'prometheus'\n    scrape_interval: 5s\n    static_configs:\n      - targets: ['localhost:9090']\n  - job_name: 'substrate_node'\n    scrape_interval: 5s\n    static_configs:\n      - targets: ['localhost:9615']\n

  4. Validate configuration with promtool - use promtool, the configuration-checking tool bundled with Prometheus, to verify your configuration file

    promtool check config /etc/prometheus/prometheus.yml\n

  5. Assign ownership - save the configuration file and change the ownership of the file to prometheus user
    sudo chown prometheus:prometheus /etc/prometheus/prometheus.yml\n
"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#start-prometheus","title":"Start Prometheus","text":"
  1. Launch Prometheus - use the following command to launch Prometheus with a given configuration, set the storage location for metric data, and enable web console templates and libraries:

    sudo -u prometheus /usr/local/bin/prometheus --config.file /etc/prometheus/prometheus.yml --storage.tsdb.path /var/lib/prometheus/ --web.console.templates=/etc/prometheus/consoles --web.console.libraries=/etc/prometheus/console_libraries\n

    If you set the server up properly, you should see terminal output similar to the following:

  2. Verify access - verify you can access the Prometheus interface by visiting the following address:

    http://SERVER_IP_ADDRESS:9090/graph\n

    If the interface appears to work as expected, exit the process using Control + C.

  3. Create new systemd service file - this will automatically start the server during the boot process

    sudo nano /etc/systemd/system/prometheus.service\n
    Add the following code to the service file:

    prometheus.service

    [Unit]\nDescription=Prometheus Monitoring\nWants=network-online.target\nAfter=network-online.target\n\n[Service]\nUser=prometheus\nGroup=prometheus\nType=simple\nExecStart=/usr/local/bin/prometheus \\\n --config.file /etc/prometheus/prometheus.yml \\\n --storage.tsdb.path /var/lib/prometheus/ \\\n --web.console.templates=/etc/prometheus/consoles \\\n --web.console.libraries=/etc/prometheus/console_libraries\nExecReload=/bin/kill -HUP $MAINPID\n\n[Install]\nWantedBy=multi-user.target\n
    Once you save the file, execute the following command to reload systemd and enable the service so that it will load automatically during the operating system's startup:

    sudo systemctl daemon-reload && sudo systemctl enable prometheus && sudo systemctl start prometheus\n
  4. Verify service - return to the Prometheus interface at the following address to verify the service is running:
    http://SERVER_IP_ADDRESS:9090/\n

"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#install-and-configure-grafana","title":"Install and Configure Grafana","text":"

Grafana provides a powerful, customizable interface to visualize metrics collected by Prometheus. This guide follows Grafana's canonical installation instructions. To install and configure Grafana, follow these steps:

  1. Install Grafana prerequisites - run the following commands to install the required packages:

    sudo apt-get install -y apt-transport-https software-properties-common wget    \n

  2. Import the GPG key:

    sudo mkdir -p /etc/apt/keyrings/\nwget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null\n

  3. Configure the stable release repo and update packages:

    echo \"deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main\" | sudo tee -a /etc/apt/sources.list.d/grafana.list\nsudo apt-get update\n

  4. Install the latest stable version of Grafana:

    sudo apt-get install grafana\n

After installing Grafana, you can move on to the configuration steps:

  1. Set Grafana to auto-start - configure Grafana to start automatically on system boot and start the service

    sudo systemctl daemon-reload\nsudo systemctl enable grafana-server.service\nsudo systemctl start grafana-server\n

  2. Verify the Grafana service is running with the following command:

    sudo systemctl status grafana-server\n
    If necessary, you can stop or restart the service with the following commands:

    sudo systemctl stop grafana-server\nsudo systemctl restart grafana-server\n
  3. Access Grafana - open your browser, navigate to the following address, and use the default username and password admin to log in:

    http://SERVER_IP_ADDRESS:3000/login\n

Change default port

If you want to run Grafana on another port, edit the file /usr/share/grafana/conf/defaults.ini with a command like:

sudo vim /usr/share/grafana/conf/defaults.ini \n
You can change the http_port value as desired. Then restart Grafana with:
sudo systemctl restart grafana-server\n
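For example, to serve Grafana on port 3001 instead of the default 3000, the relevant line in defaults.ini would look like the following (3001 is an arbitrary choice; any free port works):

http_port = 3001\n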

Follow these steps to visualize node metrics:

  1. Select the gear icon for settings to configure the Data Sources
  2. Select Add data source to define the data source
  3. Select Prometheus
  4. Enter http://localhost:9090 in the URL field, then select Save & Test. If you see the message \"Data source is working\" your connection is configured correctly
  5. Next, select Import from the menu bar on the left, select Prometheus in the dropdown list and select Import
  6. Finally, start your Polkadot node by running ./polkadot. You should now be able to monitor your node's performance such as the current block height, network traffic, and running tasks on the Grafana dashboard
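If you prefer to script the data source setup instead of using the UI, Grafana's HTTP API can create it as well. This is a sketch assuming the default admin:admin credentials and a local Prometheus; adjust the credentials and URLs for your setup:

curl -s -u admin:admin -H 'Content-Type: application/json' -X POST -d '{\"name\":\"Prometheus\",\"type\":\"prometheus\",\"url\":\"http://localhost:9090\",\"access\":\"proxy\"}' http://localhost:3000/api/datasources\n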

Import via grafana.com

The Grafana dashboards page features user-created dashboards made available for public use. Visit \"Substrate Node Metrics\" for an example of available dashboards.

"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#install-and-configure-alertmanager","title":"Install and Configure Alertmanager","text":"

The optional Alertmanager complements Prometheus by handling alerts and notifying users of potential issues. Follow these steps to install and configure Alertmanager:

  1. Download and extract Alertmanager - download the latest version from the Prometheus Alertmanager releases page. Replace the placeholder text with the respective release binary, e.g. https://github.com/prometheus/alertmanager/releases/download/v0.28.0-rc.0/alertmanager-0.28.0-rc.0.linux-amd64.tar.gz
    wget INSERT_RELEASE_DOWNLOAD_LINK\ntar -xvzf alertmanager*\n
  2. Move binaries and set permissions - copy the binaries to a system directory and set appropriate permissions
    cd alertmanager-0.28.0-rc.0.linux-amd64\nsudo cp ./alertmanager /usr/local/bin/\nsudo cp ./amtool /usr/local/bin/\nsudo chown prometheus:prometheus /usr/local/bin/alertmanager\nsudo chown prometheus:prometheus /usr/local/bin/amtool\n
  3. Create configuration file - create a new alertmanager.yml file under /etc/alertmanager

    sudo mkdir /etc/alertmanager\nsudo nano /etc/alertmanager/alertmanager.yml\n
    Add the following code to the configuration file to define email notifications: alertmanager.yml
    global:\n  resolve_timeout: 1m\n\nroute:\n  receiver: 'gmail-notifications'\n\nreceivers:\n  - name: 'gmail-notifications'\n    email_configs:\n      - to: INSERT_YOUR_EMAIL\n        from: INSERT_YOUR_EMAIL\n        smarthost: smtp.gmail.com:587\n        auth_username: INSERT_YOUR_EMAIL\n        auth_identity: INSERT_YOUR_EMAIL\n        auth_password: INSERT_YOUR_APP_PASSWORD\n        send_resolved: true\n

    App password

    You must generate an app password in your Gmail account to allow Alertmanager to send you alert notification emails.

    Ensure the configuration file has the correct permissions:

    sudo chown -R prometheus:prometheus /etc/alertmanager\n
    4. Configure as a service - set up Alertmanager to run as a service by creating a systemd service file
    sudo nano /etc/systemd/system/alertmanager.service\n
    Add the following code to the service file: alertmanager.service
    [Unit]\nDescription=AlertManager Server Service\nWants=network-online.target\nAfter=network-online.target\n\n[Service]\nUser=root\nGroup=root\nType=simple\nExecStart=/usr/local/bin/alertmanager --config.file /etc/alertmanager/alertmanager.yml --web.external-url=http://SERVER_IP:9093 --cluster.advertise-address='0.0.0.0:9093'\n\n[Install]\nWantedBy=multi-user.target\n
    Reload and enable the service
    sudo systemctl daemon-reload\nsudo systemctl enable alertmanager\nsudo systemctl start alertmanager\n
    Verify the service status using the following command:
    sudo systemctl status alertmanager\n
    If you have configured the Alertmanager properly, the Active field should display active (running) similar to below:

    sudo systemctl status alertmanager\nalertmanager.service - AlertManager Server Service\n   Loaded: loaded (/etc/systemd/system/alertmanager.service; enabled; vendor preset: enabled)\n   Active: active (running) since Thu 2020-08-20 22:01:21 CEST; 3 days ago\n Main PID: 20592 (alertmanager)\n    Tasks: 70 (limit: 9830)\n   CGroup: /system.slice/alertmanager.service\n
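You can also validate the Alertmanager configuration file itself using amtool, which was copied to /usr/local/bin alongside the alertmanager binary earlier in this guide:

amtool check-config /etc/alertmanager/alertmanager.yml\n

If the file parses cleanly, amtool reports the global config, routes, and receivers it found.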

"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#grafana-plugin","title":"Grafana Plugin","text":"

There is an Alertmanager plugin in Grafana that can help you monitor alert information. Follow these steps to use the plugin:

  1. Install the plugin - use the following command:
    sudo grafana-cli plugins install camptocamp-prometheus-alertmanager-datasource\n
  2. Restart Grafana
    sudo systemctl restart grafana-server\n
  3. Configure datasource - go to your Grafana dashboard SERVER_IP:3000 and configure the Alertmanager datasource as follows:
    • Go to Configuration -> Data Sources, and search for Prometheus Alertmanager
    • Fill in the URL to your server location followed by the port number used in the Alertmanager. Select Save & Test to test the connection
  4. To monitor the alerts, import the 8010 dashboard, which is used for Alertmanager. Make sure to select the Prometheus Alertmanager in the last column then select Import
"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#integrate-alertmanager","title":"Integrate Alertmanager","text":"

A few more steps are required to allow the Prometheus server to talk to the Alertmanager and to configure rules for detection and alerts. Complete the integration as follows:

  1. Update configuration - update the configuration file /etc/prometheus/prometheus.yml to add the following code: prometheus.yml
    rule_files:\n  - 'rules.yml'\n\nalerting:\n  alertmanagers:\n    - static_configs:\n        - targets:\n            - localhost:9093\n
  2. Create rules file - here you will define the rules for detection and alerts. Run the following command to create the rules file:
    sudo nano /etc/prometheus/rules.yml\n
    If any of the conditions defined in the rules file are met, an alert will be triggered. The following sample rule checks for the node being down and triggers an email notification if an outage of more than five minutes is detected: rules.yml
    groups:\n  - name: alert_rules\n    rules:\n      - alert: InstanceDown\n        expr: up == 0\n        for: 5m\n        labels:\n          severity: critical\n        annotations:\n          summary: 'Instance [{{ $labels.instance }}] down'\n          description: '[{{ $labels.instance }}] of job [{{ $labels.job }}] has been down for more than 5 minutes.'\n
    See Alerting Rules and additional alerts in the Prometheus documentation to learn more about defining and using alerting rules.
  3. Update ownership of rules file - ensure the prometheus user has access by running:
    sudo chown prometheus:prometheus /etc/prometheus/rules.yml\n
  4. Check rules - ensure the rules defined in rules.yml are syntactically correct by running the following command:
    sudo -u prometheus promtool check rules rules.yml\n
  5. Restart Prometheus and Alertmanager
    sudo systemctl restart prometheus && sudo systemctl restart alertmanager\n

Now you will receive an email alert whenever one of your rules' triggering conditions is met.

Updated prometheus.yml
global:\n  scrape_interval: 15s\n  evaluation_interval: 15s\n\nrule_files:\n  - 'rules.yml'\n\nalerting:\n  alertmanagers:\n    - static_configs:\n        - targets:\n            - localhost:9093\n\nscrape_configs:\n  - job_name: 'prometheus'\n    scrape_interval: 5s\n    static_configs:\n      - targets: ['localhost:9090']\n  - job_name: 'substrate_node'\n    scrape_interval: 5s\n    static_configs:\n      - targets: ['localhost:9615']\n
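To confirm the full alerting pipeline without waiting for a real outage, you can push a synthetic alert directly into Alertmanager's v2 API. The alert name below is an arbitrary example, and the command assumes Alertmanager is listening on its default port 9093:

curl -XPOST -H 'Content-Type: application/json' -d '[{\"labels\":{\"alertname\":\"TestAlert\",\"severity\":\"critical\"}}]' http://localhost:9093/api/v2/alerts\n

If the email settings are correct, the configured receiver should get a notification for TestAlert shortly afterward.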
"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#secure-your-validator","title":"Secure Your Validator","text":"

Validators in Polkadot's Proof of Stake network play a critical role in maintaining network integrity and security by keeping the network in consensus and verifying state transitions. To ensure optimal performance and minimize risks, validators must adhere to strict guidelines around security and reliable operations.

"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#key-management","title":"Key Management","text":"

Though they don't transfer funds, session keys are essential for validators as they sign messages related to consensus and parachains. Securing session keys is crucial: if they are compromised or used across multiple nodes, the result can be a loss of staked funds via slashing.

Given the current limitations in high-availability setups and the risks associated with double-signing, it's recommended to run only a single validator instance. Keys should be securely managed, and processes automated to minimize human error.

There are two approaches for generating session keys:

  1. Generate and store in node - using the author.rotateKeys RPC call, as shown in the example after this list. For most users, generating keys directly within the client is recommended. You must submit a session certificate from your staking proxy to register new keys. See the How to Validate guide for instructions on setting keys

  2. Generate outside node and insert - using the author.setKeys RPC call. This flexibility accommodates advanced security setups and should only be used by experienced validator operators
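As a sketch of the first approach, author_rotateKeys can be called over the node's local RPC endpoint. This assumes the default RPC port 9944 on a recent client and that unsafe RPC methods are enabled on the node:

curl -H 'Content-Type: application/json' -d '{\"id\":1,\"jsonrpc\":\"2.0\",\"method\":\"author_rotateKeys\",\"params\":[]}' http://localhost:9944\n

The node generates fresh session keys in its keystore and returns a hex-encoded bundle of the public keys, which is the value you then register on-chain via your staking proxy.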

"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#signing-outside-the-client","title":"Signing Outside the Client","text":"

Polkadot plans to support external signing, allowing session keys to reside in secure environments like Hardware Security Modules (HSMs). However, these modules can sign any payload they receive, potentially enabling an attacker to perform slashable actions.

"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#secure-validator-mode","title":"Secure-Validator Mode","text":"

Polkadot's Secure-Validator mode offers an extra layer of protection through strict filesystem, networking, and process sandboxing. This secure mode is activated by default if the machine meets the following requirements:

  1. Linux (x86-64 architecture) - usually Intel or AMD
  2. Enabled seccomp - this kernel feature facilitates a more secure approach for process management on Linux. Verify by running:
    cat /boot/config-`uname -r` | grep CONFIG_SECCOMP=\n
    If seccomp is enabled, you should see output similar to the following:
    CONFIG_SECCOMP=y\n

Note

Optionally, Linux 5.13 may also be used, as it provides access to even more strict filesystem protections.

"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#linux-best-practices","title":"Linux Best Practices","text":"

Follow these best practices to keep your validator secure:

  • Use a non-root user for all operations
  • Regularly apply OS security patches
  • Enable and configure a firewall
  • Use key-based SSH authentication; deactivate password-based login
  • Regularly back up data and harden your SSH configuration. Visit this SSH guide for more details
"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#validator-best-practices","title":"Validator Best Practices","text":"

These best practices can add an extra layer of security and operational reliability:

  • Only run the Polkadot binary, and only listen on the configured p2p port
  • Run on bare-metal machines, as opposed to virtual machines
  • Provisioning of the validator machine should be automated and defined in code which is kept in private version control, reviewed, audited, and tested
  • Generate and provide session keys in a secure way
  • Start Polkadot at boot and restart if stopped for any reason
  • Run Polkadot as a non-root user
  • Establish and maintain an on-call rotation for managing alerts
  • Establish and maintain a clear protocol with actions to perform for each level of each alert with an escalation policy
"},{"location":"infrastructure/running-a-validator/operational-tasks/general-management/#additional-resources","title":"Additional Resources","text":"
  • Certus One's Knowledge Base
  • EOS Block Producer Security List
  • HSM Policies and the Importance of Validator Security

For additional guidance, connect with other validators and the Polkadot engineering team in the Polkadot Validator Lounge on Element.

"},{"location":"infrastructure/running-a-validator/operational-tasks/pause-validating/","title":"Pause Validating","text":""},{"location":"infrastructure/running-a-validator/operational-tasks/pause-validating/#introduction","title":"Introduction","text":"

If you need to temporarily stop participating in Polkadot staking activities without fully unbonding your funds, chilling your account allows you to do so efficiently. Chilling removes your node from active validation or nomination in the next era while keeping your funds bonded, making it ideal for planned downtimes or temporary pauses.

This guide covers the steps for chilling as a validator or nominator, using the chill and chillOther extrinsics, and how these affect your staking status and nominations.

"},{"location":"infrastructure/running-a-validator/operational-tasks/pause-validating/#chilling-your-node","title":"Chilling Your Node","text":"

If you need to temporarily step back from staking without unbonding your funds, you can \"chill\" your account. Chilling pauses your active staking participation, setting your account to inactive in the next era while keeping your funds bonded.

To chill your account, go to the Network > Staking > Account Actions page on Polkadot.js Apps, and select Stop. Alternatively, you can call the chill extrinsic in the Staking pallet.
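If you prefer the command line over the UI, the chill extrinsic can be submitted with the @polkadot/api-cli package. This is a hedged sketch, not the only way to do it; the seed is a placeholder you must replace, and the endpoint shown is Polkadot's public RPC:

polkadot-js-api --ws wss://rpc.polkadot.io tx.staking.chill --seed \"INSERT_YOUR_SEED\"\n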

"},{"location":"infrastructure/running-a-validator/operational-tasks/pause-validating/#staking-election-timing-considerations","title":"Staking Election Timing Considerations","text":"

When a node actively participates in staking but then chills, it will continue contributing for the remainder of the current era. However, its eligibility for the next election depends on the chill status at the start of the new era:

  • Chilled during previous era - will not participate in the current era election and will remain inactive until reactivated
  • Chilled during current era - will not be selected for the next era's election
  • Chilled after current era - may be selected if it was active during the previous era and is now chilled
"},{"location":"infrastructure/running-a-validator/operational-tasks/pause-validating/#chilling-as-a-nominator","title":"Chilling as a Nominator","text":"

When you choose to chill as a nominator, your active nominations are reset. Upon re-entering the nominating process, you must reselect validators to support manually. Depending on preferences, these can be the same validators as before or a new set. Remember that your previous nominations won't be saved or automatically reactivated after chilling.

While chilled, your nominator account remains bonded, preserving your staked funds without requiring a full unbonding process. When you're ready to start nominating again, you can issue a new nomination call to activate your bond with a fresh set of validators. This process bypasses the need for re-bonding, allowing you to maintain your stake while adjusting your involvement in active staking.

"},{"location":"infrastructure/running-a-validator/operational-tasks/pause-validating/#chilling-as-a-validator","title":"Chilling as a Validator","text":"

When you chill as a validator, your active validator status is paused. Although your nominators remain bonded to you, the validator bond will no longer appear as an active choice for new or revised nominations until reactivated. Any existing nominators who take no action will still have their stake linked to the validator, meaning they don't need to reselect the validator upon reactivation. However, if nominators adjust their stakes while the validator is chilled, they will not be able to nominate the chilled validator until it resumes activity.

Upon reactivating as a validator, you must also reconfigure your validator preferences, such as commission rate and other parameters. These can be set to match your previous configuration or updated as desired. This step is essential for rejoining the active validator set and regaining eligibility for nominations.

"},{"location":"infrastructure/running-a-validator/operational-tasks/pause-validating/#chill-other","title":"Chill Other","text":"

Historical constraints in the runtime prevented unlimited nominators and validators from being supported. These constraints created a need for checks to keep the size of the staking system manageable. One of these checks was the chillOther extrinsic, which allowed users to chill accounts that no longer met standards such as the minimum staking requirements set through on-chain governance.

This control mechanism included a ChillThreshold, which was structured to define how close to the maximum number of nominators or validators the staking system would be allowed to get before users could start chilling one another. With the passage of Referendum #90, the value for maxNominatorCount on Polkadot was set to None, effectively removing the limit on how many nominators and validators can participate. This means the ChillThreshold will never be met; thus, chillOther no longer has any effect.

"},{"location":"infrastructure/running-a-validator/operational-tasks/upgrade-your-node/","title":"Upgrade a Validator Node","text":""},{"location":"infrastructure/running-a-validator/operational-tasks/upgrade-your-node/#introduction","title":"Introduction","text":"

Upgrading a Polkadot validator node is essential for staying current with network updates and maintaining optimal performance. This guide covers routine and extended maintenance scenarios, including software upgrades and major server changes. Following these steps, you can manage session keys and transition smoothly between servers without risking downtime, slashing, or network disruptions. The process requires strategic planning, especially if you need to perform long-lead maintenance, ensuring your validator remains active and compliant.

This guide will allow validators to seamlessly substitute an active validator server to allow for maintenance operations. The process can take several hours, so ensure you understand the instructions first and plan accordingly.

"},{"location":"infrastructure/running-a-validator/operational-tasks/upgrade-your-node/#prerequisites","title":"Prerequisites","text":"

Before beginning the upgrade process for your validator node, ensure the following:

  • You have a fully functional validator setup with all required binaries installed. See Set Up a Validator and Validator Requirements for additional guidance
  • Your VPS infrastructure has enough capacity to run a secondary validator instance temporarily for the upgrade process
"},{"location":"infrastructure/running-a-validator/operational-tasks/upgrade-your-node/#session-keys","title":"Session Keys","text":"

Session keys are used to sign validator operations and establish a connection between your validator node and your staking proxy account. These keys are stored in the client, and any change to them requires a waiting period. Specifically, if you modify your session keys, the change will take effect only after the current session is completed and two additional sessions have passed.

Remembering this delayed effect when planning upgrades is crucial to ensure that your validator continues to function correctly and avoids interruptions. To learn more about session keys and their importance, visit the Keys section.

"},{"location":"infrastructure/running-a-validator/operational-tasks/upgrade-your-node/#keystore","title":"Keystore","text":"

Your validator server's keystore folder holds the private keys needed for signing network-level transactions. It is important not to duplicate or transfer this folder between validator instances. Doing so could result in multiple validators signing with the duplicate keys, leading to severe consequences such as equivocation slashing. Instead, always generate new session keys for each validator instance.

The default path to the keystore is as follows:

/home/polkadot/.local/share/polkadot/chains/<chain>/keystore\n
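Before migrating or rebuilding a server, you can list the keystore to see which session keys the instance currently holds, assuming the default path above:

ls /home/polkadot/.local/share/polkadot/chains/<chain>/keystore\n

Each file corresponds to a single key. Remember that these files must never be copied to another running validator instance.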

Taking care to manage your keys securely ensures that your validator operates safely and without the risk of slashing penalties.

"},{"location":"infrastructure/running-a-validator/operational-tasks/upgrade-your-node/#upgrade-using-backup-validator","title":"Upgrade Using Backup Validator","text":"

The following instructions outline how to temporarily switch between two validator nodes. The original active validator is referred to as Validator A and the backup node used for maintenance purposes as Validator B.

"},{"location":"infrastructure/running-a-validator/operational-tasks/upgrade-your-node/#session-n","title":"Session N","text":"
  1. Start Validator B - launch a secondary node and wait until it is fully synced with the network. Once synced, start it with the --validator flag. This node will now act as Validator B
  2. Generate session keys - create new session keys specifically for Validator B
  3. Submit the set_key extrinsic - use your staking proxy account to submit a set_key extrinsic, linking the session keys for Validator B to your staking setup
  4. Record the session - make a note of the session in which you executed this extrinsic
  5. Wait for session changes - allow the current session to end and then wait for two additional full sessions for the new keys to take effect

Keep Validator A running

It is crucial to keep Validator A operational during this entire waiting period. Since set_key does not take effect immediately, turning off Validator A too early may result in chilling or even slashing.

"},{"location":"infrastructure/running-a-validator/operational-tasks/upgrade-your-node/#session-n3","title":"Session N+3","text":"

At this stage, Validator B becomes your active validator. You can now safely perform any maintenance tasks on Validator A.

Complete the following steps when you are ready to bring Validator A back online:

  1. Start Validator A - launch Validator A, sync the blockchain database, and ensure it is running with the --validator flag
  2. Generate new session keys for Validator A - create fresh session keys for Validator A
  3. Submit the set_key extrinsic - using your staking proxy account, submit a set_key extrinsic with the new Validator A session keys
  4. Record the session - again, make a note of the session in which you executed this extrinsic

Keep Validator B active until the session in which you executed the set_key extrinsic completes, plus two additional full sessions. Once Validator A has successfully taken over, you can safely stop Validator B. This process helps ensure a smooth handoff between nodes and minimizes the risk of downtime or penalties. Verify the transition by checking for finalized blocks in the new session. The logs should indicate the successful change, similar to the example below:

2019-10-28 21:44:13 Applying authority set change scheduled at block #450092\n2019-10-28 21:44:13 Applying GRANDPA set change to new set with 20 authorities\n"},{"location":"infrastructure/staking-mechanics/","title":"Staking Mechanics","text":"

Gain a deep understanding of the staking mechanics in Polkadot, with a focus on how they impact validators. In this section, you'll explore key concepts such as offenses, slashing, and reward payouts, and learn how these mechanisms influence the behavior and performance of validators within the network. Understanding these elements is crucial for optimizing your validator's participation and ensuring alignment with Polkadot's governance and security protocols.

"},{"location":"infrastructure/staking-mechanics/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"infrastructure/staking-mechanics/#additional-resourcs","title":"Additional ResourcsLearn About Nominated Proof of StakingDive Deep into Slashing MechanismsReview Validator Rewards Metrics","text":"

Take a deeper dive into the fundamentals of Polkadot's Nominated Proof of Stake (NPoS) consensus mechanism.

Read the Web3 Foundation's research article on slashing mechanisms for a comprehensive understanding of slashing, along with an in-depth examination of the offenses involved.

Check out Dune's Polkadot Staking Rewards dashboard for a detailed look at validator-specific metrics over time, such as daily staking rewards, nominators count, reward points, and more.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/","title":"Offenses and Slashes","text":""},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#introduction","title":"Introduction","text":"

In Polkadot's Nominated Proof of Stake (NPoS) system, validator misconduct is deterred through a combination of slashing, disabling, and reputation penalties. Validators and nominators who stake tokens face consequences for validator misbehavior, which range from token slashes to restrictions on network participation.

This page outlines the types of offenses recognized by Polkadot, including block equivocations and invalid votes, as well as the corresponding penalties. While some parachains may implement additional custom slashing mechanisms, this guide focuses on the offenses tied to staking within the Polkadot ecosystem.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#offenses","title":"Offenses","text":"

Polkadot is a public permissionless network. As such, it has a mechanism to disincentivize offenses and incentivize good behavior. You can review the parachain protocol to better understand the terminology used to describe offenses. Polkadot validator offenses fall into two categories: invalid votes and equivocations.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#invalid-votes","title":"Invalid Votes","text":"

A validator will be penalized for inappropriate voting activity during the block inclusion and approval processes. The offenses related to invalid voting are as follows:

  • Backing an invalid block - a para-validator backs an invalid block for inclusion in a fork of the relay chain
  • ForInvalid vote - when acting as a secondary checker, the validator votes in favor of an invalid block
  • AgainstValid vote - when acting as a secondary checker, the validator votes against a valid block. This type of vote wastes network resources required to resolve the disparate votes and resulting dispute
"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#equivocations","title":"Equivocations","text":"

Equivocation occurs when a validator produces statements that conflict with each other when producing blocks or voting. Unintentional equivocations usually occur when duplicate signing keys reside on the validator host. If keys are never duplicated, the probability of an honest equivocation slash decreases to near zero. The offenses related to equivocation are as follows:

  • Equivocation - the validator produces two or more of the same block or vote
    • GRANDPA and BEEFY equivocation - the validator signs two or more votes in the same round on different chains
    • BABE equivocation - the validator produces two or more blocks on the relay chain in the same time slot
  • Double seconded equivocation - the validator attempts to second, or back, more than one block in the same round
  • Seconded and valid equivocation - the validator seconds, or backs, a block and then attempts to hide their role as the responsible backer by later placing a standard validation vote
"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#penalties","title":"Penalties","text":"

On Polkadot, offenses to the network incur different penalties depending on severity. There are three main penalties: slashing, disabling, and reputation changes.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#slashing","title":"Slashing","text":"

Validators engaging in bad actor behavior in the network may be subject to slashing if they commit a qualifying offense. When a validator is slashed, they and their nominators lose a percentage of their staked DOT or KSM, from as little as 0.01% up to 100% based on the severity of the offense. Nominators are evaluated for slashing against their active validations at any given time. Validator nodes are evaluated as discrete entities, meaning an operator can't attempt to mitigate the offense on another node they operate in order to avoid a slash.

Any slashed DOT or KSM will be added to the Treasury rather than burned or distributed as rewards. Moving slashed funds to the Treasury allows tokens to be quickly moved away from malicious validators while maintaining the ability to revert faulty slashes when needed.

Multiple active nominations

A nominator with a very large bond may nominate several validators in a single era. In this case, a slash is proportionate to the amount staked to the offending validator. Stake allocation and validator activation is controlled by the Phragmén algorithm.

A validator slash creates an unapplied state transition. You can view pending slashes on Polkadot.js Apps. The UI will display the slash per validator, the affected nominators, and the slash amounts. The unapplied state includes a 27-day grace period during which a governance proposal can be made to reverse the slash. Once this grace period expires, the slash is applied.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#equivocation-slash","title":"Equivocation Slash","text":"

The Web3 Foundation's Slashing mechanisms page provides guidelines for evaluating the security threat level of different offenses and determining penalties proportionate to the threat level of the offense. Offenses requiring coordination between validators or extensive computational costs to the system will typically call for harsher penalties than those more likely to be unintentional than malicious. A description of potential offenses for each threat level and the corresponding penalties is as follows:

  • Level 1 - honest misconduct such as isolated cases of unresponsiveness
    • Penalty - validator can be kicked out or slashed up to 0.1% of stake in the validator slot
  • Level 2 - misconduct that can occur honestly but is a sign of bad practices. Examples include repeated cases of unresponsiveness and isolated cases of equivocation
    • Penalty - slash of up to 1% of stake in the validator slot
  • Level 3 - misconduct that is likely intentional but of limited effect on the performance or security of the network. This level will typically include signs of coordination between validators. Examples include repeated cases of equivocation or isolated cases of unjustified voting on GRANDPA
    • Penalty - reduction in networking reputation metrics, slash of up to 10% of stake in the validator slot
  • Level 4 - misconduct that poses severe security or monetary risk to the system or mass collusion. Examples include signs of extensive coordination, creating a serious security risk to the system, or forcing the system to use extensive resources to counter the misconduct
    • Penalty - slash of up to 100% of stake in the validator slot

See the next section to understand how slash amounts for equivocations are calculated. If you want to know more details about slashing, please look at the research page on Slashing mechanisms.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#slash-calculation-for-equivocation","title":"Slash Calculation for Equivocation","text":"

The slashing penalty for GRANDPA, BABE, and BEEFY equivocations is calculated using the formula below, where x represents the number of offenders and n is the total number of validators in the active set:

min((3 * x / n)^2, 1)\n

The following scenarios demonstrate how, under this formula, slash percentages increase quadratically with the number of offenders relative to the size of the validator pool:

  • Minor offense - assume 1 validator out of a 100-validator active set equivocates in a slot. A single validator committing an isolated offense is most likely a mistake rather than a malicious attack on the network. This offense results in a 0.09% slash of the stake in the validator slot

    flowchart LR\nN[\"Total Validators = 100\"]\nX[\"Offenders = 1\"]\nF[\"min((3 * 1 / 100)^2, 1) = 0.0009\"]\nG[\"0.09% slash of stake\"]\n\nN --> F\nX --> F\nF --> G
  • Moderate offense - assume 5 validators out of a 100-validator active set equivocate in a slot. This is a slightly more serious event, as there may be some element of coordination involved. This offense results in a 2.25% slash of the stake in the validator slot

    flowchart LR\nN[\"Total Validators = 100\"]\nX[\"Offenders = 5\"]\nF[\"min((3 * 5 / 100)^2, 1) = 0.0225\"]\nG[\"2.25% slash of stake\"]\n\nN --> F\nX --> F\nF --> G
  • Major offense - assume 20 validators out of a 100-validator active set equivocate in a slot. This is a major security threat, as it possibly represents a coordinated attack on the network. This offense results in a 36% slash, and all slashed validators will also be chilled

    flowchart LR\nN[\"Total Validators = 100\"]\nX[\"Offenders = 20\"]\nF[\"min((3 * 20 / 100)^2, 1) = 0.36\"]\nG[\"36% slash of stake\"]\n\nN --> F\nX --> F\nF --> G

The examples above show the risk of nominating or running many validators in the active set. While rewards grow linearly (two validators will earn you approximately twice as many staking rewards as one), slashing grows quadratically: going from one equivocating validator to two results in a slash four times as large.
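To make the quadratic growth concrete, here is the slash formula evaluated for one offender and then two offenders in the same 100-validator set:

min((3 * 1 / 100)^2, 1) = 0.0009   # 0.09% slash\nmin((3 * 2 / 100)^2, 1) = 0.0036   # 0.36% slash, four times as much\n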

Validators may run their nodes on multiple machines to ensure they can still perform validation work if one of their nodes goes down. Still, validator operators should be cautious when setting these up. Equivocation is possible if they don't coordinate well in managing signing machines.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#best-practices-to-avoid-slashing","title":"Best Practices to Avoid Slashing","text":"

Node operators are advised to follow these practices to ensure that they obtain pristine binaries or source code and to keep their node secure:

  • Always download either source files or binaries from the official Parity repository
  • Verify the hash of downloaded files
  • Use the W3F secure validator setup or adhere to its principles
  • Ensure essential security items are checked, use a firewall, manage user access, use SSH certificates
  • Avoid using your server as a general-purpose system. Hosting a validator on your workstation or one that hosts other services increases the risk of maleficence
  • Avoid cloning servers (copying all contents) when migrating to new hardware. If an image is needed, create it before generating keys
  • High Availability (HA) systems are generally not recommended, as equivocation may occur if concurrent operations happen, such as when a failed server restarts or two servers are falsely online simultaneously
  • Copying the keystore folder when moving a database between instances can cause equivocation. Even brief use of duplicated keystores can result in slashing

Below are some examples of small equivocations that happened in the past:

| Network | Era | Event Type | Details | Action Taken |
| --- | --- | --- | --- | --- |
| Polkadot | 774 | Small Equivocation | The validator migrated servers and cloned the keystore folder. The on-chain event can be viewed on Subscan. | The validator didn't submit a request for the slash to be canceled. |
| Kusama | 3329 | Small Equivocation | The validator operated a test machine with cloned keys. The test machine was online at the same time as the primary, which resulted in a slash. Details can be found on Polkassembly. | The validator requested a slash cancellation, but the council declined. |
| Kusama | 3995 | Small Equivocation | The validator noticed several errors, after which the client crashed, and a slash was applied. The validator recorded all events and opened GitHub issues to allow for technical opinions to be shared. Details can be found on Polkassembly. | The validator requested to cancel the slash. The council approved the request as they believed the error wasn't operator-related. |
"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#slashing-across-eras","title":"Slashing Across Eras","text":"

There are three main difficulties to account for with slashing in NPoS:

  • A nominator can nominate multiple validators and be slashed as a result of actions taken by any of them
  • Until slashed, the stake is reused from era to era
  • Slashable offenses can be found after the fact and out of order

To balance this, the system applies only the maximum slash a participant can receive in a given time period rather than the sum. This ensures protection from excessive slashing.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#disabling","title":"Disabling","text":"

The disabling mechanism is triggered when validators commit serious infractions, such as backing invalid blocks or engaging in equivocations. Disabling stops validators from performing specific actions after they have committed an offense. Disabling is further divided into:

  • On-chain disabling - lasts for a whole era and stops validators from authoring blocks, backing, and initiating a dispute
  • Off-chain disabling - lasts for a session, is caused by losing a dispute, and stops validators from initiating a dispute

Off-chain disabling is always a lower priority than on-chain disabling. Off-chain disabling prioritizes disabling first backers and then approval checkers.

Note

The material in this guide reflects the changes introduced in Stage 2. For more details, refer to the State of Disabling issue on GitHub.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#reputation-changes","title":"Reputation Changes","text":"

Some minor offenses, such as spamming, are only punished by networking reputation changes. Validators use a reputation metric when choosing which peers to connect with. The system adds reputation if a peer provides valuable data and behaves appropriately. If they provide faulty or spam data, the system reduces their reputation. If a validator loses enough reputation, their peers will temporarily close their channels to them. This helps in fighting against Denial of Service (DoS) attacks. Performing validator tasks under reduced reputation will be harder, resulting in lower validator rewards.

"},{"location":"infrastructure/staking-mechanics/offenses-and-slashes/#penalties-by-offense","title":"Penalties by Offense","text":"

Below, you can find a summary of penalties for specific offenses:

| Offense | Slash (%) | On-Chain Disabling | Off-Chain Disabling | Reputational Changes |
| --- | --- | --- | --- | --- |
| Backing Invalid | 100% | Yes | Yes (High Priority) | No |
| ForInvalid Vote | - | No | Yes (Mid Priority) | No |
| AgainstValid Vote | - | No | Yes (Low Priority) | No |
| GRANDPA / BABE / BEEFY Equivocations | 0.01-100% | Yes | No | No |
| Seconded + Valid Equivocation | - | No | No | No |
| Double Seconded Equivocation | - | No | No | Yes |
"},{"location":"infrastructure/staking-mechanics/rewards-payout/","title":"Rewards Payout","text":""},{"location":"infrastructure/staking-mechanics/rewards-payout/#introduction","title":"Introduction","text":"

Understanding how rewards are distributed to validators and nominators is essential for network participants. In Polkadot and Kusama, validators earn rewards based on their era points, which are accrued through actions like block production and parachain validation.

This guide explains the payout scheme, factors influencing rewards, and how multiple validators affect returns. Validators can also share rewards with nominators, who contribute by staking behind them. By following the payout mechanics, validators can optimize their earnings and better engage with their nominators.

"},{"location":"infrastructure/staking-mechanics/rewards-payout/#era-points","title":"Era Points","text":"

The Polkadot ecosystem measures its reward cycles in a unit called an era. Kusama eras are approximately 6 hours long, and Polkadot eras are 24 hours. At the end of each era, validators are paid proportionally to the amount of era points they have collected. Era points are reward points earned for payable actions like:

  • Issuing validity statements for parachain blocks
  • Producing a non-uncle block in the relay chain
  • Producing a reference to a previously unreferenced uncle block
  • Producing a referenced uncle block

Note

An uncle block is a relay chain block that is valid in every regard but has failed to become canonical. This can happen when two or more validators are block producers in a single slot, and the block produced by one validator reaches the next block producer before the others. The lagging blocks are called uncle blocks.

Payments occur at the end of every era.

"},{"location":"infrastructure/staking-mechanics/rewards-payout/#reward-variance","title":"Reward Variance","text":"

Rewards in Polkadot and Kusama staking systems can fluctuate due to differences in era points earned by para-validators and non-para-validators. Para-validators generally contribute more to the overall reward distribution due to their role in validating parachain blocks, thus influencing the variance in staking rewards.

To illustrate this relationship:

  • Para-validator era points tend to have a higher impact on the expected value of staking rewards compared to non-para-validator points
  • The variance in staking rewards increases as the total number of validators grows relative to the number of para-validators
  • In simpler terms, when more validators are added to the active set without increasing the para-validator pool, the disparity in rewards between validators becomes more pronounced

However, despite this increased variance, rewards tend to even out over time due to the continuous rotation of para-validators across eras. The network's design ensures that over multiple eras, each validator has an equal opportunity to participate in para-validation, eventually leading to a balanced distribution of rewards.

Probability in Staking Rewards

This should only serve as a high-level overview of the probabilistic nature of staking rewards.

Let:

  • pe = para-validator era points
  • ne = non-para-validator era points
  • EV = expected value of staking rewards

Then, EV(pe) has more influence on the EV than EV(ne).

Since EV(pe) has a more weighted probability on the EV, the increase in variance against the EV becomes apparent between the different validator pools (that is, validators in the active set versus those chosen to para-validate).

Also, let:

  • v = the variance of staking rewards
  • p = number of para-validators
  • w = number of validators in the active set
  • e = era

Then, v ↑ if w ↑, as this reduces p : w, with respect to e.

Increased v is expected, and initially keeping p ↓ by using the same para-validator set for all parachains ensures availability and voting. In addition, despite v ↑ on an era-to-era basis, over time the amount of rewards each validator receives will even out based on the continuous selection of para-validators.

There are plans to scale the active para-validation set in the future.

"},{"location":"infrastructure/staking-mechanics/rewards-payout/#payout-scheme","title":"Payout Scheme","text":"

Validator rewards are distributed equally among all validators in the active set, regardless of the total stake behind each validator. However, individual payouts may differ based on the number of era points a validator has earned. Although factors like network connectivity can affect era points, well-performing validators should accumulate similar totals over time.

Validators can also receive tips from users, which incentivize them to include certain transactions in their blocks. Validators retain 100% of these tips.

Rewards are paid out in the network's native token (DOT for Polkadot and KSM for Kusama).

The following example illustrates a four member validator set with their names, amount they have staked, and how payout of rewards is divided. This scenario assumes all validators earned the same amount of era points and no one received tips:

%%Payout, 4 val set, A-D are validators/stakes, E is payout%%\n\nblock-beta\n    columns 1\n  block\n    A[\"Alice (18 DOT)\"]\n    B[\"Bob (9 DOT)\"]\n    C[\"Carol (8 DOT)\"]\n    D[\"Dave (7 DOT)\"]\n  end\n    space\n    E[\"Payout (8 DOT total)\"]:1\n    E --\"2 DOT\"--> A\n    E --\"2 DOT\"--> B\n    E --\"2 DOT\"--> C\n    E --\"2 DOT\"--> D 

Note that this is different from most other Proof of Stake (PoS) systems. As long as a validator is in the validator set, it will receive the same block reward as every other validator. Validator Alice, who had 18 DOT staked, received the same 2 DOT reward in this era as Dave, who had only 7 DOT staked.

"},{"location":"infrastructure/staking-mechanics/rewards-payout/#running-multiple-validators","title":"Running Multiple Validators","text":"

Running multiple validators can offer a more favorable risk/reward ratio compared to running a single one. If you have sufficient DOT or nominators staking on your validators, maintaining multiple validators within the active set can yield higher rewards.

In the preceding section, with 18 DOT staked and no nominators, Alice earned 2 DOT in one era. This example uses DOT, but the same principles apply for KSM on the Kusama network. By managing stake across multiple validators, you can potentially increase overall returns. Recall the set of validators from the preceding section:

%%Payout, 4 val set, A-D are validators/stakes, E is payout%%\n\nblock-beta\n    columns 1\n  block\n    A[\"Alice (18 DOT)\"]\n    B[\"Bob (9 DOT)\"]\n    C[\"Carol (8 DOT)\"]\n    D[\"Dave (7 DOT)\"]\n  end\n    space\n    E[\"Payout (8 DOT total)\"]:1\n    E --\"2 DOT\"--> A\n    E --\"2 DOT\"--> B\n    E --\"2 DOT\"--> C\n    E --\"2 DOT\"--> D 

Now, assume Alice decides to split their stake and run two validators, each with a nine DOT stake. This validator set only has four spots and priority is given to validators with a larger stake. In this example, Dave has the smallest stake and loses his spot in the validator set. Now, Alice will earn two shares of the total payout each era as illustrated below:

%%Payout, 4 val set, A-D are validators/stakes, E is payout%%\n\nblock-beta\n    columns 1\n  block\n    A[\"Alice (9 DOT)\"]\n    F[\"Alice (9 DOT)\"]\n    B[\"Bob (9 DOT)\"]\n    C[\"Carol (8 DOT)\"]\n  end\n    space\n    E[\"Payout (8 DOT total)\"]:1\n    E --\"2 DOT\"--> A\n    E --\"2 DOT\"--> B\n    E --\"2 DOT\"--> C\n    E --\"2 DOT\"--> F 

With enough stake, you could run more than two validators. However, each validator must have enough stake behind it to maintain a spot in the validator set.

"},{"location":"infrastructure/staking-mechanics/rewards-payout/#nominators-and-validator-payments","title":"Nominators and Validator Payments","text":"

A nominator's stake allows them to vote for validators and earn a share of the rewards without managing a validator node. Although staking rewards depend on validator activity during an era, validators themselves never control or own nominator rewards. To trigger payouts, anyone can call the staking.payoutStakers or staking.payoutStakerByPage methods, which mint and distribute rewards directly to the recipients. This trustless process ensures nominators receive their earned rewards.
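Because payouts are permissionless, anyone can trigger them from the command line as well. This sketch uses the @polkadot/api-cli package with placeholder values for the validator stash address, era index, and signing seed:

polkadot-js-api --ws wss://rpc.polkadot.io tx.staking.payoutStakers INSERT_VALIDATOR_STASH INSERT_ERA_INDEX --seed \"INSERT_YOUR_SEED\"\n

Any funded account can submit this call; the rewards are minted to the validator and its nominators, not to the caller.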

Validators set a commission rate as a percentage of the block reward, affecting how rewards are shared with nominators. A 0% commission means the validator keeps only rewards from their self-stake, while a 100% commission means they retain all rewards, leaving none for nominators.

The following examples model splitting validator payments between nominator and validator using various commission percentages. For simplicity, these examples assume a Polkadot-SDK based relay chain that uses DOT as a native token and a single nominator per validator. Calculations of KSM reward payouts for Kusama follow the same formula.

Start with the original validator set from the previous section:

block-beta\n    columns 1\n  block:e\n    A[\"Alice (18 DOT)\"]\n    B[\"Bob (9 DOT)\"]\n    C[\"Carol (8 DOT)\"]\n    D[\"Dave (7 DOT)\"]\n  end\n    space\n    E[\"Payout (8 DOT total)\"]:1\n    E --\"2 DOT\"--> A\n    E --\"2 DOT\"--> B\n    E --\"2 DOT\"--> C\n    E --\"2 DOT\"--> D 

The preceding diagram shows each validator receiving a 2 DOT payout, but doesn't account for sharing rewards with nominators. The following diagram shows what nominator payout might look like for validator Alice. Alice has a 20% commission rate and holds 50% of the stake for their validator:

\nflowchart TD\n    A[\"Gross Rewards = 2 DOT\"]\n    E[\"Commission = 20%\"]\n    F[\"Alice Validator Payment = 0.4 DOT\"]\n    G[\"Total Stake Rewards = 1.6 DOT\"]\n    B[\"Alice Validator Stake = 18 DOT\"]\n    C[\"9 DOT Alice (50%)\"]\n    H[\"Alice Stake Reward = 0.8 DOT\"]\n    I[\"Total Alice Validator Reward = 1.2 DOT\"]\n    D[\"9 DOT Nominator (50%)\"]\n    J[\"Total Nominator Reward = 0.8 DOT\"]\n\n    A --> E\n    E --(2 x 0.20)--> F\n    F --(2 - 0.4)--> G\n    B --> C\n    B --> D\n    C --(1.6 x 0.50)--> H\n    H --(0.4 + 0.8)--> I\n    D --(1.60 x 0.50)--> J

Notice the validator commission rate is applied against the gross amount of rewards for the era. The validator commission is subtracted from the total rewards. After the commission is paid to the validator, the remaining amount is split among stake owners according to their percentage of the total stake. A validator's total rewards for an era include their commission plus their piece of the stake rewards.

Now, consider a different scenario for validator Bob where the commission rate is 40%, and Bob holds 33% of the stake for their validator:

\nflowchart TD\n    A[\"Gross Rewards = 2 DOT\"]\n    E[\"Commission = 40%\"]\n    F[\"Bob Validator Payment = 0.8 DOT\"]\n    G[\"Total Stake Rewards = 1.2 DOT\"]\n    B[\"Bob Validator Stake = 9 DOT\"]\n    C[\"3 DOT Bob (33%)\"]\n    H[\"Bob Stake Reward = 0.4 DOT\"]\n    I[\"Total Bob Validator Reward = 1.2 DOT\"]\n    D[\"6 DOT Nominator (67%)\"]\n    J[\"Total Nominator Reward = 0.8 DOT\"]\n\n    A --> E\n    E --(2 x 0.4)--> F\n    F --(2 - 0.8)--> G\n    B --> C\n    B --> D\n    C --(1.2 x 0.33)--> H\n    H --(0.8 + 0.4)--> I\n    D --(1.2 x 0.67)--> J

Bob holds a smaller percentage of their node's total stake, making their stake reward smaller than Alice's. In this scenario, Bob makes up the difference by charging a 40% commission rate and ultimately ends up with the same total payment as Alice. Each validator will need to find their ideal balance between the amount of stake and commission rate to attract nominators while still making running a validator worthwhile.

"},{"location":"polkadot-protocol/","title":"Learn About the Polkadot Protocol","text":"

The Polkadot protocol is designed to enable scalable, secure, and interoperable networks. It introduces a unique multichain architecture that allows independent blockchains, known as parachains, to operate seamlessly while benefiting from the shared security of the relay chain. Polkadot's decentralized governance ensures that network upgrades and decisions are community-driven, while its cross-chain messaging and interoperability features make it a hub for multichain applications.

This section offers a comprehensive technical overview of the Polkadot Protocol, delving into its multichain architecture, foundational principles, cryptographic underpinnings, and on-chain governance system. These key components constitute the core building blocks that power Polkadot, enabling seamless collaboration between parachains, efficient network operation, and decentralized decision-making through OpenGov.

Whether you're new to blockchain or an experienced developer, you'll gain insights into how the Polkadot Protocol enables scalable, interoperable, and decentralized networks.

"},{"location":"polkadot-protocol/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"polkadot-protocol/glossary/","title":"Glossary","text":"

Key definitions, concepts, and terminology specific to the Polkadot ecosystem are included here.

Additional glossaries from around the ecosystem you might find helpful:

  • Polkadot Wiki Glossary
  • Polkadot SDK Glossary
"},{"location":"polkadot-protocol/glossary/#authority","title":"Authority","text":"

The role in a blockchain that can participate in consensus mechanisms.

  • GRANDPA - the authorities vote on chains they consider final
  • Blind Assignment of Blockchain Extension (BABE) - the authorities are also block authors

Authority sets can be used as a basis for consensus mechanisms such as the Nominated Proof of Stake (NPoS) protocol.

"},{"location":"polkadot-protocol/glossary/#authority-round-aura","title":"Authority Round (Aura)","text":"

A deterministic consensus protocol where block production is limited to a rotating list of authorities that take turns creating blocks. In authority round (Aura) consensus, most online authorities are assumed to be honest. It is often used in combination with GRANDPA as a hybrid consensus protocol.

Learn more by reading the official Aura consensus algorithm wiki article.

"},{"location":"polkadot-protocol/glossary/#blind-assignment-of-blockchain-extension-babe","title":"Blind Assignment of Blockchain Extension (BABE)","text":"

A block authoring protocol similar to Aura, except authorities win slots based on a Verifiable Random Function (VRF) instead of the round-robin selection method. The winning authority can select a chain and submit a new block.

Learn more by reading the official Web3 Foundation BABE research document.

"},{"location":"polkadot-protocol/glossary/#block-author","title":"Block Author","text":"

The node responsible for the creation of a block, also called block producers. In a Proof of Work (PoW) blockchain, these nodes are called miners.

"},{"location":"polkadot-protocol/glossary/#byzantine-fault-tolerance-bft","title":"Byzantine Fault Tolerance (BFT)","text":"

The ability of a distributed computer network to remain operational if a certain proportion of its nodes or authorities are defective or behaving maliciously.

Note

A distributed network is typically considered Byzantine fault tolerant if it can remain functional, with up to one-third of nodes assumed to be defective, offline, actively malicious, and part of a coordinated attack.

"},{"location":"polkadot-protocol/glossary/#byzantine-failure","title":"Byzantine Failure","text":"

The loss of a network service due to node failures that exceed the proportion of nodes required to reach consensus.

"},{"location":"polkadot-protocol/glossary/#practical-byzantine-fault-tolerance-pbft","title":"Practical Byzantine Fault Tolerance (pBFT)","text":"

An early approach to Byzantine fault tolerance (BFT). Practical Byzantine fault tolerance (pBFT) systems tolerate Byzantine behavior from up to one-third of participants.

The communication overhead for such systems is O(n²), where n is the number of nodes (participants) in the system.
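
As a quick illustration of how that quadratic overhead scales, the sketch below (plain Rust, illustrative numbers only) tabulates the approximate per-round message count as the participant count grows:

```rust
// The message count per consensus round grows roughly quadratically
// with the number of participants n.
fn main() {
    for n in [4u64, 10, 100, 1_000] {
        println!("n = {n:>5} participants -> ~{} messages per round", n * n);
    }
}
```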

"},{"location":"polkadot-protocol/glossary/#call","title":"Call","text":"

In the context of pallets containing functions to be dispatched to the runtime, Call is an enumeration data type that describes the functions that can be dispatched, with one variant per pallet. A Call represents a dispatch data structure object.
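
As a rough sketch (hypothetical pallets and variants; the real enum is generated by FRAME's macros and is far richer), the shape looks like this:

```rust
// Hypothetical, heavily simplified sketch of the Call enum shape that
// FRAME's macros generate; real runtimes derive many more variants.
#![allow(dead_code)]

type AccountId = [u8; 32];
type Balance = u128;

// One variant per pallet in the runtime...
enum RuntimeCall {
    System(SystemCall),
    Balances(BalancesCall),
}

// ...and one variant per dispatchable function, carrying its arguments.
enum SystemCall {
    Remark { remark: Vec<u8> },
}

enum BalancesCall {
    Transfer { dest: AccountId, value: Balance },
}

fn main() {
    let call = RuntimeCall::Balances(BalancesCall::Transfer {
        dest: [0u8; 32],
        value: 1_000_000_000,
    });
    // A dispatcher matches on the outer enum to route to the right pallet.
    match call {
        RuntimeCall::System(_) => println!("dispatched to System"),
        RuntimeCall::Balances(_) => println!("dispatched to Balances"),
    }
}
```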

"},{"location":"polkadot-protocol/glossary/#chain-specification","title":"Chain Specification","text":"

A chain specification file defines the properties required to run a node in an active or new Polkadot SDK-built network. It often contains the initial genesis runtime code, network properties (such as the network's name), the initial state for some pallets, and the boot node list. The chain specification file makes it easy to use a single Polkadot SDK codebase as the foundation for multiple independently configured chains.

"},{"location":"polkadot-protocol/glossary/#collator","title":"Collator","text":"

A block author for a parachain network. Collators aren't authorities in themselves, as they require a relay chain to coordinate consensus.

More details are found on the Polkadot Collator Wiki.

"},{"location":"polkadot-protocol/glossary/#collective","title":"Collective","text":"

Most often used to refer to an instance of the Collective pallet on Polkadot SDK-based networks such as Kusama or Polkadot if the Collective pallet is part of the FRAME-based runtime for the network.

"},{"location":"polkadot-protocol/glossary/#consensus","title":"Consensus","text":"

Consensus is the process blockchain nodes use to agree on a chain's canonical fork. It is composed of authorship, finality, and a fork-choice rule. In the Polkadot ecosystem, these three components are usually separate and the term consensus often refers specifically to authorship.

See also hybrid consensus.

"},{"location":"polkadot-protocol/glossary/#consensus-algorithm","title":"Consensus Algorithm","text":"

Ensures a set of actors, who don't necessarily trust each other, can reach agreement about the state as the result of some computation. Most consensus algorithms assume that up to one-third of the actors or nodes can be Byzantine (faulty or malicious).

Consensus algorithms are generally concerned with ensuring two properties:

  • Safety - indicating that all honest nodes eventually agree on the state of the chain
  • Liveness - indicating the ability of the chain to keep progressing
"},{"location":"polkadot-protocol/glossary/#consensus-engine","title":"Consensus Engine","text":"

The node subsystem responsible for consensus tasks.

For detailed information about the consensus strategies of the Polkadot network, see the Polkadot Consensus blog series.

See also hybrid consensus.

"},{"location":"polkadot-protocol/glossary/#coretime","title":"Coretime","text":"

The time allocated for utilizing a core, measured in relay chain blocks. There are two types of coretime: on-demand and bulk.

On-demand coretime refers to coretime acquired through bidding in near real-time for the validation of a single parachain block on one of the cores reserved specifically for on-demand orders. These cores make up the on-demand coretime pool. Cores reserved through bulk coretime can also be made available in the on-demand coretime pool, in part or in their entirety.

Bulk coretime is a fixed duration of continuous coretime represented by an NFT that can be split, shared, or resold. It is managed by the Broker pallet.

"},{"location":"polkadot-protocol/glossary/#development-phrase","title":"Development Phrase","text":"

A mnemonic phrase that is intentionally made public.

Well-known development accounts, such as Alice, Bob, Charlie, Dave, Eve, and Ferdie, are generated from the same secret phrase:

```
bottom drive obey lake curtain smoke basket hold race lonely fit walk
```

Many tools in the Polkadot SDK ecosystem, such as subkey, allow you to implicitly specify an account using a derivation path such as //Alice.
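
For example, a minimal sketch using the sp-core crate (assumed as a dependency) shows that //Alice is shorthand for the development phrase plus a hard derivation:

```rust
// Sketch assuming the `sp-core` crate as a dependency. `DEV_PHRASE` is the
// well-known development phrase quoted above; `//Alice` is a hard derivation.
use sp_core::{crypto::DEV_PHRASE, sr25519, Pair};

fn main() {
    // "//Alice" on its own is shorthand for "<DEV_PHRASE>//Alice".
    let explicit = sr25519::Pair::from_string(&format!("{DEV_PHRASE}//Alice"), None)
        .expect("valid secret URI");
    let shorthand = sr25519::Pair::from_string("//Alice", None)
        .expect("valid secret URI");
    assert_eq!(explicit.public(), shorthand.public());
    println!("Alice's public key: {:?}", shorthand.public());
}
```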

"},{"location":"polkadot-protocol/glossary/#digest","title":"Digest","text":"

An extensible field of the block header that encodes information needed by several actors in a blockchain network, including:

  • Light clients for chain synchronization
  • Consensus engines for block verification
  • The runtime itself, in the case of pre-runtime digests
"},{"location":"polkadot-protocol/glossary/#dispatchable","title":"Dispatchable","text":"

Function objects that act as the entry points in FRAME pallets. Internal or external entities can call them to interact with the blockchain's state. They are a core aspect of the runtime logic, handling transactions and other state-changing operations.

"},{"location":"polkadot-protocol/glossary/#events","title":"Events","text":"

A means of recording that some particular state transition happened.

In the context of FRAME, events are composable data types that each pallet can individually define. Events in FRAME are implemented as a set of transient storage items inspected immediately after a block has been executed and reset during block initialization.

"},{"location":"polkadot-protocol/glossary/#executor","title":"Executor","text":"

A means of executing a function call in a given runtime with a set of dependencies. There are two orchestration engines in the Polkadot SDK: WebAssembly and native.

  • The native executor uses a natively compiled runtime embedded in the node to execute calls. This is a performance optimization available to up-to-date nodes

  • The WebAssembly executor uses a Wasm binary and a Wasm interpreter to execute calls. The binary is guaranteed to be up-to-date regardless of the version of the blockchain node because it is persisted in the state of the Polkadot SDK-based chain

"},{"location":"polkadot-protocol/glossary/#existential-deposit","title":"Existential Deposit","text":"

The minimum balance an account is allowed to have in the Balances pallet. Accounts cannot be created with a balance less than the existential deposit amount.

If an account balance drops below this amount, the Balances pallet uses a FRAME System API to drop its references to that account.

If the Balances pallet reference to an account is dropped, the account can be reaped.
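
A loose sketch of the reaping rule (illustrative only; the real Balances pallet also tracks reserved balances and reference counters before deciding to reap an account):

```rust
// Illustrative sketch only, with a hypothetical existential deposit value.
const EXISTENTIAL_DEPOSIT: u128 = 10_000_000_000; // in Planck

struct Account {
    free: u128,
}

// After any balance change, an account below the ED loses its references
// and is reaped (removed from state).
fn enforce_existential_deposit(slot: &mut Option<Account>) {
    let below_ed = matches!(slot, Some(account) if account.free < EXISTENTIAL_DEPOSIT);
    if below_ed {
        *slot = None; // references dropped; the account is reaped
    }
}

fn main() {
    let mut alice = Some(Account { free: 5_000_000_000 });
    enforce_existential_deposit(&mut alice);
    assert!(alice.is_none()); // below the existential deposit, so reaped
}
```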

"},{"location":"polkadot-protocol/glossary/#extrinsic","title":"Extrinsic","text":"

A general term for data that originates outside the runtime, is included in a block, and leads to some action. This includes user-initiated transactions and inherent transactions placed into the block by the block builder.

It is a SCALE-encoded array typically consisting of a version number, signature, and varying data types indicating the resulting runtime function to be called. Extrinsics can take two forms: inherents and transactions.

For more technical details, see the Polkadot spec.
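
A rough sketch of that shape (field names are hypothetical; the real type is generic and SCALE-encoded):

```rust
// Hypothetical, simplified shape of an extrinsic; real Polkadot SDK
// extrinsics are generic over address, signature, and extension types.
struct Extrinsic {
    version: u8,                // format version
    signature: Option<Vec<u8>>, // present for transactions, absent for inherents
    call: Vec<u8>,              // encoded runtime function and arguments
}

fn main() {
    let inherent = Extrinsic { version: 4, signature: None, call: vec![0x02, 0x00] };
    let transaction = Extrinsic { version: 4, signature: Some(vec![0u8; 64]), call: vec![0x05, 0x00] };
    for ext in [&inherent, &transaction] {
        let kind = if ext.signature.is_some() { "transaction" } else { "inherent" };
        println!("v{} extrinsic: {kind}", ext.version);
    }
}
```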

"},{"location":"polkadot-protocol/glossary/#fork-choice-rulestrategy","title":"Fork Choice Rule/Strategy","text":"

A fork choice rule or strategy helps determine which chain is valid when reconciling several network forks. A common fork choice rule is the longest chain, in which the chain with the most blocks is selected.
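
A minimal sketch of the longest chain rule (block numbers stand in for hashes; names are illustrative):

```rust
// Minimal longest-chain fork choice: of the known forks, follow the one
// containing the most blocks.
fn best_chain(forks: &[Vec<u32>]) -> Option<&Vec<u32>> {
    forks.iter().max_by_key(|chain| chain.len())
}

fn main() {
    // Two competing forks sharing a common prefix.
    let forks = vec![vec![1, 2, 3], vec![1, 2, 3, 4, 5]];
    assert_eq!(best_chain(&forks).map(|c| c.len()), Some(5));
}
```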

"},{"location":"polkadot-protocol/glossary/#frame-framework-for-runtime-aggregation-of-modularized-entities","title":"FRAME (Framework for Runtime Aggregation of Modularized Entities)","text":"

Enables developers to create blockchain runtime environments from a modular set of components called pallets. It utilizes a set of procedural macros to construct runtimes.

Visit the Polkadot SDK docs for more details on FRAME.

"},{"location":"polkadot-protocol/glossary/#full-node","title":"Full Node","text":"

A node that prunes historical states, keeping only recently finalized block states to reduce storage needs. Full nodes provide current chain state access and allow direct submission and validation of extrinsics, maintaining network decentralization.

"},{"location":"polkadot-protocol/glossary/#genesis-configuration","title":"Genesis Configuration","text":"

A mechanism for specifying the initial state of a blockchain. By convention, this initial state or first block is commonly referred to as the genesis state or genesis block. The genesis configuration for Polkadot SDK-based chains is accomplished by way of a chain specification file.

"},{"location":"polkadot-protocol/glossary/#grandpa","title":"GRANDPA","text":"

A deterministic finality mechanism for blockchains that is implemented in the Rust programming language.

The formal specification is maintained by the Web3 Foundation.

"},{"location":"polkadot-protocol/glossary/#header","title":"Header","text":"

A structure that aggregates the information used to summarize a block. Primarily, it consists of cryptographic information used by light clients to get minimally secure but very efficient chain synchronization.

"},{"location":"polkadot-protocol/glossary/#hybrid-consensus","title":"Hybrid Consensus","text":"

A blockchain consensus protocol that consists of independent or loosely coupled mechanisms for block production and finality.

Hybrid consensus allows the chain to grow as fast as probabilistic consensus protocols, such as Aura, while maintaining the same level of security as deterministic finality consensus protocols, such as GRANDPA.

"},{"location":"polkadot-protocol/glossary/#inherent-transactions","title":"Inherent Transactions","text":"

A special type of unsigned transaction, referred to as an inherent, that enables a block authoring node to insert information that doesn't require validation directly into a block.

Only the block-authoring node that calls the inherent transaction function can insert data into its block. In general, validators assume the data inserted using an inherent transaction is valid and reasonable even if it can't be deterministically verified.

"},{"location":"polkadot-protocol/glossary/#json-rpc","title":"JSON-RPC","text":"

A stateless, lightweight remote procedure call protocol encoded in JavaScript Object Notation (JSON). JSON-RPC provides a standard way to call functions on a remote system by using JSON.

For Polkadot SDK, this protocol is implemented through the Parity JSON-RPC crate.

"},{"location":"polkadot-protocol/glossary/#keystore","title":"Keystore","text":"

A subsystem for managing keys for the purpose of producing new blocks.

"},{"location":"polkadot-protocol/glossary/#kusama","title":"Kusama","text":"

Kusama is a Polkadot SDK-based blockchain that implements a design similar to the Polkadot network.

Kusama is a canary network and is referred to as Polkadot's "wild cousin."

As a canary network, Kusama is expected to be more stable than a test network like Westend but less stable than a production network like Polkadot. Kusama is controlled by its network participants and is intended to be stable enough to encourage meaningful experimentation.

"},{"location":"polkadot-protocol/glossary/#libp2p","title":"libp2p","text":"

A peer-to-peer networking stack that allows the use of many transport mechanisms, including WebSockets (usable in a web browser).

Polkadot SDK uses the Rust implementation of the libp2p networking stack.

"},{"location":"polkadot-protocol/glossary/#light-client","title":"Light Client","text":"

A type of blockchain node that doesn't store the chain state or produce blocks.

A light client can verify cryptographic primitives and provides a remote procedure call (RPC) server, enabling blockchain users to interact with the network.

"},{"location":"polkadot-protocol/glossary/#metadata","title":"Metadata","text":"

Data that provides information about one or more aspects of a system. The metadata that exposes information about a Polkadot SDK blockchain enables you to interact with that system.

"},{"location":"polkadot-protocol/glossary/#nominated-proof-of-stake-npos","title":"Nominated Proof of Stake (NPoS)","text":"

A method for determining validators or authorities based on a willingness to commit their stake to the proper functioning of one or more block-producing nodes.

"},{"location":"polkadot-protocol/glossary/#oracle","title":"Oracle","text":"

An entity that connects a blockchain to a non-blockchain data source. Oracles enable the blockchain to access and act upon information from existing data sources and incorporate data from non-blockchain systems and services.

"},{"location":"polkadot-protocol/glossary/#origin","title":"Origin","text":"

A FRAME primitive that identifies the source of a dispatched function call into the runtime. The FRAME System pallet defines three built-in origins. As a pallet developer, you can also define custom origins, such as those defined by the Collective pallet.

"},{"location":"polkadot-protocol/glossary/#pallet","title":"Pallet","text":"

A module that can be used to extend the capabilities of a FRAME-based runtime. Pallets bundle domain-specific logic with runtime primitives like events and storage items.

"},{"location":"polkadot-protocol/glossary/#parachain","title":"Parachain","text":"

A parachain is a blockchain that derives shared infrastructure and security from a relay chain. You can learn more about parachains on the Polkadot Wiki.

"},{"location":"polkadot-protocol/glossary/#paseo","title":"Paseo","text":"

Paseo TestNet provisions testing on Polkadot's "production" runtime, which means less chance of feature or code mismatch when developing parachain apps. Specifically, after the Polkadot Technical Fellowship proposes a runtime upgrade for Polkadot, this TestNet is updated, giving a period where the TestNet will be ahead of Polkadot to allow for testing.

"},{"location":"polkadot-protocol/glossary/#polkadot","title":"Polkadot","text":"

The Polkadot network is a blockchain that serves as the central hub of a heterogeneous blockchain network. It serves the role of the relay chain and provides shared infrastructure and security to support parachains.

"},{"location":"polkadot-protocol/glossary/#relay-chain","title":"Relay Chain","text":"

Relay chains are blockchains that provide shared infrastructure and security to the parachains in the network. In addition to providing consensus capabilities, relay chains allow parachains to communicate and exchange digital assets without needing to trust one another.

"},{"location":"polkadot-protocol/glossary/#rococo","title":"Rococo","text":"

A parachain test network for the Polkadot network. The Rococo network is a Polkadot SDK-based blockchain with an October 14, 2024 deprecation date. Development teams are encouraged to use the Paseo TestNet instead.

"},{"location":"polkadot-protocol/glossary/#runtime","title":"Runtime","text":"

The runtime provides the state transition function for a node. In Polkadot SDK, the runtime is stored as a Wasm binary in the chain state.

"},{"location":"polkadot-protocol/glossary/#slot","title":"Slot","text":"

A fixed, equal interval of time used by consensus engines such as Aura and BABE. In each slot, a subset of authorities is permitted, or obliged, to author a block.

"},{"location":"polkadot-protocol/glossary/#sovereign-account","title":"Sovereign Account","text":"

The unique account identifier for each chain in the relay chain ecosystem. It is often used in cross-consensus (XCM) interactions to sign XCM messages sent to the relay chain or other chains in the ecosystem.

The sovereign account for each chain is a root-level account that can only be accessed using the Sudo pallet or through governance. The account identifier is calculated by concatenating the Blake2 hash of a specific text string and the registered parachain identifier.

"},{"location":"polkadot-protocol/glossary/#ss58-address-format","title":"SS58 Address Format","text":"

A public key address based on the Bitcoin Base-58-check encoding. Each Polkadot SDK SS58 address uses a base-58 encoded value to identify a specific account on a specific Polkadot SDK-based chain.

The canonical ss58-registry provides additional details about the address format used by different Polkadot SDK-based chains, including the network prefix and website used for different networks.

"},{"location":"polkadot-protocol/glossary/#state-transition-function-stf","title":"State Transition Function (STF)","text":"

The logic of a blockchain that determines how the state changes when a block is processed. In Polkadot SDK, the state transition function is effectively equivalent to the runtime.

"},{"location":"polkadot-protocol/glossary/#storage-item","title":"Storage Item","text":"

FRAME primitives that provide type-safe data persistence capabilities to the runtime. Learn more in the storage items reference document in the Polkadot SDK.

"},{"location":"polkadot-protocol/glossary/#substrate","title":"Substrate","text":"

A flexible framework for building modular, efficient, and upgradeable blockchains. Substrate is written in the Rust programming language and is maintained by Parity Technologies.

"},{"location":"polkadot-protocol/glossary/#transaction","title":"Transaction","text":"

An extrinsic that includes a signature used to verify the account authorizing it, either inherently or via signed extensions.

"},{"location":"polkadot-protocol/glossary/#transaction-era","title":"Transaction Era","text":"

A definable period expressed as a range of block numbers during which a transaction can be included in a block. Transaction eras are used to protect against transaction replay attacks if an account is reaped and its replay-protecting nonce is reset to zero.

"},{"location":"polkadot-protocol/glossary/#trie-patricia-merkle-tree","title":"Trie (Patricia Merkle Tree)","text":"

A data structure used to represent sets of key-value pairs, enabling the items in the data set to be stored and retrieved using a cryptographic hash. Because incremental changes to the data set result in a new hash, retrieving data is efficient even if the data set is very large. With this data structure, you can also prove whether the data set includes any particular key-value pair without access to the entire data set.

In Polkadot SDK-based blockchains, state is stored in a trie data structure that supports the efficient creation of incremental digests. This trie is exposed to the runtime as a simple key/value map where both keys and values can be arbitrary byte arrays.
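
A grossly simplified sketch of committing to a key-value set with a single digest (a real Patricia Merkle trie additionally supports incremental re-hashing and compact inclusion proofs, which this naive version does not):

```rust
// Naive commitment to a key-value set: hash the sorted entries with a
// non-cryptographic hasher. Changing any single pair changes the digest;
// a real trie achieves this without rehashing the whole set.
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};

fn digest(state: &BTreeMap<Vec<u8>, Vec<u8>>) -> u64 {
    let mut hasher = DefaultHasher::new();
    for (key, value) in state {
        (key, value).hash(&mut hasher);
    }
    hasher.finish()
}

fn main() {
    let mut state = BTreeMap::new();
    state.insert(b"balance:alice".to_vec(), 100u128.to_le_bytes().to_vec());
    let before = digest(&state);
    state.insert(b"balance:bob".to_vec(), 50u128.to_le_bytes().to_vec());
    assert_ne!(before, digest(&state)); // incremental change, new digest
}
```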

"},{"location":"polkadot-protocol/glossary/#validator","title":"Validator","text":"

A validator is a node that participates in the consensus mechanism of the network. Its roles include block production, transaction validation, network integrity, and security maintenance.

"},{"location":"polkadot-protocol/glossary/#webassembly-wasm","title":"WebAssembly (Wasm)","text":"

An execution architecture that allows for the efficient, platform-neutral expression of deterministic, machine-executable logic.

Wasm can be compiled from many languages, including the Rust programming language. Polkadot SDK-based chains use a Wasm binary to provide portable runtimes that can be included as part of the chain's state.

"},{"location":"polkadot-protocol/glossary/#weight","title":"Weight","text":"

A convention used in Polkadot SDK-based blockchains to measure and manage the time it takes to validate a block. Polkadot SDK defines one unit of weight as one picosecond of execution time on reference hardware.

The maximum block weight should be equivalent to one-third of the target block time with an allocation of one-third each for:

  • Block construction
  • Network propagation
  • Import and verification

By defining weights, you can trade off the number of transactions per second against the hardware required to maintain the target block time appropriate for your use case. Weights are defined in the runtime, meaning you can tune them using runtime updates to keep up with hardware and software improvements.
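
A back-of-the-envelope sketch of that budget, assuming a hypothetical 6-second target block time:

```rust
// One weight unit equals one picosecond of execution on reference
// hardware, and execution gets one-third of the target block time.
fn main() {
    const WEIGHT_PER_SECOND: u64 = 1_000_000_000_000; // 10^12 ps per second
    let target_block_time_secs: u64 = 6; // hypothetical target
    let max_block_weight = target_block_time_secs * WEIGHT_PER_SECOND / 3;
    assert_eq!(max_block_weight, 2 * WEIGHT_PER_SECOND); // ~2 s of execution
    println!("max block weight: {max_block_weight}");
}
```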

"},{"location":"polkadot-protocol/glossary/#westend","title":"Westend","text":"

Westend is a Parity-maintained, Polkadot SDK-based blockchain that serves as a test network for the Polkadot network.

"},{"location":"polkadot-protocol/architecture/","title":"Architecture","text":"

Explore Polkadot's architecture, including the relay chain, parachains, and system chains, and discover the role each component plays in the broader ecosystem.

"},{"location":"polkadot-protocol/architecture/#a-brief-look-at-polkadots-chain-ecosystem","title":"A Brief Look at Polkadot\u2019s Chain Ecosystem","text":"

The following provides a brief overview of the role of each chain:

  • Polkadot chain - the central hub and main chain responsible for the overall security, consensus, and interoperability between all connected chains

  • System chains - specialized chains that provide essential services to the ecosystem, like the Asset Hub, Bridge Hub, and Coretime chain

  • Parachains - individual, specialized blockchains that run parallel to the relay chain and are connected to it

Learn more about these components by checking out the articles in this section.

"},{"location":"polkadot-protocol/architecture/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"polkadot-protocol/architecture/parachains/","title":"Parachains","text":"

Discover how parachains secure their networks and reach consensus by harnessing Polkadot's relay chain and its robust validator framework. This integrated architecture ensures shared security and seamless coordination across the entire ecosystem.

Parachains serve as the foundation of Polkadot's multichain ecosystem, enabling diverse, application-specific blockchains to operate in parallel. By connecting to the relay chain, parachains gain access to Polkadot's shared security, interoperability, and decentralized governance. This design allows developers to focus on building innovative features while benefiting from a secure and scalable infrastructure.

"},{"location":"polkadot-protocol/architecture/parachains/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"polkadot-protocol/architecture/parachains/consensus/","title":"Parachain Consensus","text":""},{"location":"polkadot-protocol/architecture/parachains/consensus/#introduction","title":"Introduction","text":"

Parachains are independent blockchains built with the Polkadot SDK, designed to leverage Polkadot's relay chain for shared security and transaction finality. These specialized chains operate as part of Polkadot's execution sharding model, where each parachain manages its own state and transactions while relying on the relay chain for validation and consensus.

At the core of parachain functionality are collators, specialized nodes that sequence transactions into blocks and maintain the parachain's state. Collators optimize Polkadot's architecture by offloading state management from the relay chain, allowing relay chain validators to focus solely on validating parachain blocks.

This guide explores how parachain consensus works, including the roles of collators and validators, and the steps involved in securing parachain blocks within Polkadot's scalable and decentralized framework.

"},{"location":"polkadot-protocol/architecture/parachains/consensus/#the-role-of-collators","title":"The Role of Collators","text":"

Collators are responsible for sequencing end-user transactions into blocks and maintaining the current state of their respective parachains. Their role is akin to Ethereum's sequencers but optimized for Polkadot's architecture.

Key responsibilities include:

  • Transaction sequencing - organizing transactions into Proof of Validity (PoV) blocks
  • State management - maintaining parachain states without burdening the relay chain validators
  • Consensus participation - sending PoV blocks to relay chain validators for approval
"},{"location":"polkadot-protocol/architecture/parachains/consensus/#consensus-and-validation","title":"Consensus and Validation","text":"

Parachain consensus operates in tandem with the relay chain, leveraging Nominated Proof of Stake (NPoS) for shared security. The process ensures parachain transactions achieve finality through the following steps:

  1. Packaging transactions - collators bundle transactions into PoV blocks (parablocks)
  2. Submission to validators - parablocks are submitted to a randomly selected subset of relay chain validators, known as paravalidators
  3. Validation of PoV blocks - paravalidators use the parachain's state transition function (already available on the relay chain) to verify transaction validity
  4. Backing and inclusion - if a sufficient number of positive validations are received, the parablock is backed and included via a para-header on the relay chain

The following sections describe the actions taking place during each stage of the process.

"},{"location":"polkadot-protocol/architecture/parachains/consensus/#path-of-a-parachain-block","title":"Path of a Parachain Block","text":"

Polkadot achieves scalability through execution sharding, where each parachain operates as an independent shard with its own blockchain and state. Shared security for all parachains is provided by the relay chain, powered by Nominated Proof of Stake (NPoS). This framework allows parachains to focus on transaction processing and state management, while the relay chain ensures validation and finality.

The journey parachain transactions follow to reach consensus and finality can be described as follows:

  • Collators and parablocks:

    • Collators, specialized nodes on parachains, package network transactions into Proof of Validity (PoV) blocks, also called parablocks
    • These parablocks are sent to a subset of relay chain validators, known as paravalidators, for validation
    • The parachain's state transition function (Wasm blob) is not re-sent, as it is already stored on the relay chain

```mermaid
flowchart TB
    %% Subgraph: Parachain
    subgraph Parachain
        direction LR
        Txs[Network Transactions]
        Collator[Collator Node]
        ParaBlock[ParaBlock + PoV]
        Txs -->|Package Transactions| Collator
        Collator -->|Create| ParaBlock
    end

    subgraph Relay["Relay Chain"]
        ParaValidator
    end

    %% Main Flow
    Parachain -->|Submit To| Relay
```
  • Validation by paravalidators:

    • Paravalidators are groups of approximately five relay chain validators, randomly assigned to parachains and shuffled every minute
    • Each paravalidator downloads the parachain's Wasm blob and validates the parablock by ensuring all transactions comply with the parachain's state transition rules
    • Paravalidators sign positive or negative validation statements based on the block's validity
  • Backing and approval:

    • If a parablock receives sufficient positive validation statements, it is backed and included on the relay chain as a para-header
    • An additional approval process resolves disputes. If a parablock contains invalid transactions, additional validators are tasked with verification
    • Validators who back invalid parablocks are penalized through slashing, creating strong incentives for honest behavior

```mermaid
flowchart
    subgraph RelayChain["Relay Chain"]
        direction TB
        subgraph InitialValidation["Initial Validation"]
            direction LR
            PValidators[ParaValidators]
            Backing[Backing\nProcess]
            Header[Submit Para-header\non Relay Chain]
        end
        subgraph Secondary["Secondary Validation"]
            Approval[Approval\nProcess]
            Dispute[Dispute\nResolution]
            Slashing[Slashing\nMechanism]
        end
    end

    %% Validation Process
    PValidators -->|Download\nWasm\nValidate Block| Backing
    Backing -->|If Valid\nSignatures| Header
    InitialValidation -->|Additional\nVerification| Secondary

    %% Dispute Flow
    Approval -->|If Invalid\nDetected| Dispute
    Dispute -->|Penalize\nDishonest\nValidators| Slashing
```

It is important to understand that relay chain blocks do not store full parachain blocks (parablocks). Instead, they include para-headers, which serve as summaries of the backed parablocks. The complete parablock remains within the parachain network, maintaining its autonomy while relying on the relay chain for validation and finality.

"},{"location":"polkadot-protocol/architecture/parachains/consensus/#where-to-go-next","title":"Where to Go Next","text":"

For more technical details, refer to the:

  • Parachain Wiki page
  • Polkadot SDK Implementation Guide section
"},{"location":"polkadot-protocol/architecture/parachains/overview/","title":"Overview","text":""},{"location":"polkadot-protocol/architecture/parachains/overview/#introduction","title":"Introduction","text":"

A parachain is a coherent, application-specific blockchain that derives security from its respective relay chain. Parachains on Polkadot are each their own separate, fully functioning blockchain. The primary difference between a parachain and a regular, "solo" blockchain is that the relay chain verifies the state of all parachains that are connected to it. In many ways, parachains can be thought of as a "cynical" rollup, as the crypto-economic protocol used (ELVES) assumes the worst-case scenario, rather than the typical optimistic approach that many rollup mechanisms take. Once enough validators attest that a block is valid, the probability of that block being valid is high.

As each parachain's state is validated by the relay chain, the relay chain represents the collective state of all parachains.

```mermaid
flowchart TB
    subgraph "Relay Chain"
        RC[Relay Chain Validators]
        State[Collective State Validation]
    end

    PA[Parachain A]
    PB[Parachain B]
    PC[Parachain C]

    RC -->|Validate State| PA
    RC -->|Validate State| PB
    RC -->|Validate State| PC

    State -->|Represents Collective\nParachain State| RC

    note["ELVES Protocol:\n- Crypto-economic security\n- Assumes worst-case scenario\n- High probability validation"]
```

Coherent Systems

Coherency refers to the degree of synchronization, consistency, and interoperability between different components or chains within a system. It encompasses the internal coherence of individual chains and the external coherence between chains regarding how they interact.

A single-state machine like Ethereum is very coherent, as all of its components (smart contracts, dApps/applications, staking, consensus) operate within a single environment with the downside of less scalability. Multi-protocol state machines, such as Polkadot, offer less coherency due to their sharded nature but more scalability due to the parallelization of their architecture.

Parachains are coherent, as they are self-contained environments with domain-specific functionality.

Parachains enable parallelization of different services within the same network. However, unlike most layer two rollups, parachains don't suffer from the interoperability pitfalls that affect most rollups. Cross-Consensus Messaging (XCM) provides a common communication format for each parachain and can be configured to allow a parachain to communicate with just the relay chain or certain parachains.

The diagram below highlights the flexibility of the Polkadot ecosystem, where each parachain specializes in a distinct domain. This example illustrates how parachains, like DeFi and GameFi, leverage XCM for cross-chain operations such as asset transfers and credential verification.

```mermaid
flowchart TB
    subgraph "Polkadot Relay Chain"
        RC[Relay Chain\nCross-Consensus\nRouting]
    end

    subgraph "Parachain Ecosystem"
        direction TB
        DeFi[DeFi Parachain\nFinancial Services]
        GameFi[GameFi Parachain\nGaming Ecosystem]
        NFT[NFT Parachain\nDigital Collectibles]
        Identity[Identity Parachain\nUser Verification]
    end

    DeFi <-->|XCM: Asset Transfer| GameFi
    GameFi <-->|XCM: Token Exchange| NFT
    Identity <-->|XCM: Credential Verification| DeFi

    RC -->|Validate & Route XCM| DeFi
    RC -->|Validate & Route XCM| GameFi
    RC -->|Validate & Route XCM| NFT
    RC -->|Validate & Route XCM| Identity

    note["XCM Features:\n- Standardized Messaging\n- Cross-Chain Interactions\n- Secure Asset/Data Transfer"]
```

Most parachains are built using the Polkadot SDK, which provides all the tools to create a fully functioning parachain. However, it is possible to construct a parachain that can inherit the security of the relay chain as long as it implements the correct mechanisms expected by the relay chain.

"},{"location":"polkadot-protocol/architecture/parachains/overview/#state-transition-functions-runtimes","title":"State Transition Functions (Runtimes)","text":"

At their core, parachains, like most blockchains, are deterministic, finite-state machines that are often backed by game theory and economics. The previous state of the parachain, combined with external input in the form of extrinsics, allows the state machine to progress forward, one block at a time.

Deterministic State Machines

Determinism refers to the concept that a particular input will always produce the same output. State machines are algorithmic machines that change state based on their inputs to produce a new, updated state.

```mermaid
stateDiagram-v2
    direction LR
    [*] --> StateA : Initial State

    StateA --> STF : Extrinsics/Transactions
    STF --> StateB : Deterministic Transformation
    StateB --> [*] : New State
```

The primary driver of this progression is the state transition function (STF), commonly referred to as a runtime. Each time a block is submitted, it represents the next proposed state for a parachain. By applying the state transition function to the previous state and including a new block that contains the proposed changes in the form of a list of extrinsics/transactions, the runtime defines exactly how the parachain advances from state A to state B.

The STF in a Polkadot SDK-based chain is compiled to Wasm and uploaded to the relay chain. This STF is crucial for the relay chain to validate the state changes coming from the parachain, as it is used to ensure that all proposed state transitions are happening correctly as part of the validation process.
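
A minimal, hypothetical state transition function illustrating this determinism (toy types; a real runtime is compiled to Wasm and far more involved):

```rust
// Toy STF: apply a block of extrinsics to the previous state to
// deterministically derive the next state.
use std::collections::BTreeMap;

#[derive(Clone, Debug, PartialEq)]
struct State {
    balances: BTreeMap<u8, u128>,
}

enum Extrinsic {
    Transfer { from: u8, to: u8, amount: u128 },
}

fn state_transition(mut state: State, block: &[Extrinsic]) -> State {
    for ext in block {
        match ext {
            Extrinsic::Transfer { from, to, amount } => {
                let src = state.balances.entry(*from).or_default();
                if *src >= *amount {
                    *src -= *amount;
                    *state.balances.entry(*to).or_default() += *amount;
                }
                // Insufficient funds: the transfer is a no-op in this sketch.
            }
        }
    }
    state
}

fn main() {
    let genesis = State { balances: BTreeMap::from([(1u8, 100u128)]) };
    let block = [Extrinsic::Transfer { from: 1, to: 2, amount: 40 }];
    // Determinism: the same prior state and block always yield the same state.
    assert_eq!(
        state_transition(genesis.clone(), &block),
        state_transition(genesis, &block)
    );
}
```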

Wasm Runtimes

For more information on the Wasm meta protocol that powers runtimes, see the Polkadot SDK Rust Docs: WASM Meta Protocol

"},{"location":"polkadot-protocol/architecture/parachains/overview/#shared-security-validated-by-the-relay-chain","title":"Shared Security: Validated by the Relay Chain","text":"

The relay chain provides a layer of economic security for its parachains. Parachains submit Proof of Validity (PoV) data to the relay chain for validation through collators, upon which the relay chain's validators ensure the validity of this data in accordance with the STF for that particular parachain. In other words, the consensus for a parachain follows the relay chain. While parachains choose how a block is authored, what it contains, and who authors it, the relay chain ultimately provides finality and consensus for those blocks.

The Parachains Protocol

For more information regarding the parachain and relay chain validation process, view the Polkadot Wiki: Parachains' Protocol Overview: Protocols' Summary

Parachains need at least one honest collator to submit PoV data to the relay chain. Without this, the parachain can't progress. The mechanisms that facilitate this are found in the Cumulus portion of the Polkadot SDK, some of which reside in the cumulus_pallet_parachain_system pallet.

"},{"location":"polkadot-protocol/architecture/parachains/overview/#cryptoeconomic-security-elves-protocol","title":"Cryptoeconomic Security: ELVES Protocol","text":"

The ELVES (Economic Last Validation Enforcement System) protocol forms the foundation of Polkadot's cryptoeconomic security model. ELVES assumes a worst-case scenario by enforcing strict validation rules before any state transitions are finalized. Unlike optimistic approaches that rely on post-facto dispute resolution, ELVES ensures that validators collectively confirm the validity of a block before it becomes part of the parachain's state.

Validators are incentivized through staking and penalized for malicious or erroneous actions, ensuring adherence to the protocol. This approach minimizes the probability of invalid states being propagated across the network, providing robust security for parachains.

"},{"location":"polkadot-protocol/architecture/parachains/overview/#interoperability","title":"Interoperability","text":"

Polkadot's interoperability framework allows parachains to communicate with each other, fostering a diverse ecosystem of interconnected blockchains. Through Cross-Consensus Messaging (XCM), parachains can transfer assets, share data, and invoke functionalities on other chains securely. This standardized messaging protocol ensures that parachains can interact with the relay chain and each other, supporting efficient cross-chain operations.

The XCM protocol mitigates common interoperability challenges in isolated blockchain networks, such as fragmented ecosystems and limited collaboration. By enabling decentralized applications to leverage resources and functionality across parachains, Polkadot promotes a scalable, cooperative blockchain environment that benefits all participants.

"},{"location":"polkadot-protocol/architecture/parachains/overview/#where-to-go-next","title":"Where to Go Next","text":"

For further information about the consensus protocol used by parachains, see the Consensus page.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/","title":"The Polkadot Relay Chain","text":"

Discover the central role of the Polkadot Relay Chain in securing the network and fostering interoperability. As the backbone of Polkadot, the relay chain provides shared security and ensures consensus across the ecosystem. It empowers parachains with flexible coretime allocation, enabling them to purchase blockspace on demand, ensuring efficiency and scalability for diverse blockchain applications.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"polkadot-protocol/architecture/polkadot-chain/agile-coretime/","title":"Agile Coretime","text":""},{"location":"polkadot-protocol/architecture/polkadot-chain/agile-coretime/#introduction","title":"Introduction","text":"

Agile Coretime is the scheduling framework on Polkadot that lets parachains efficiently access cores, where each core comprises a subset of the active validator set tasked with parablock validation. As the first blockchain to enable a flexible scheduling system for blockspace production, Polkadot offers unparalleled adaptability for parachains.

Cores can be designated to a parachain either continuously through bulk coretime or dynamically via on-demand coretime. Additionally, Polkadot supports scheduling multiple cores in parallel through elastic scaling, which is a feature under active development on Polkadot. This flexibility empowers parachains to optimize their resource usage and block production according to their unique needs.

In this guide, you'll learn how bulk coretime enables continuous core access with features like interlacing and splitting, and how on-demand coretime provides flexible, pay-per-use scheduling for parachains. For a deep dive on Agile Coretime and its terminology, refer to the Wiki doc.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/agile-coretime/#bulk-coretime","title":"Bulk Coretime","text":"

Bulk coretime is a fixed duration of continuous coretime represented by an NFT that can be purchased through coretime sales in DOT and can be split, shared, or resold. Currently, the duration of bulk coretime is set to 28 days. Coretime purchased in bulk and assigned to a single parachain is eligible for a price-capped renewal, providing a form of rent-controlled access, which is important for predicting running costs in the near future. If the bulk coretime is interlaced, split, or kept idle without being assigned to a parachain, it is ineligible for the price-capped renewal.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/agile-coretime/#coretime-interlacing","title":"Coretime Interlacing","text":"

Interlacing is the action of dividing bulk coretime across multiple parachains that produce blocks spaced uniformly in time. For example, multiple parachains taking turns producing blocks demonstrates a simple form of interlacing. This feature suits parachains that have a low transaction volume and don't need to produce blocks continuously.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/agile-coretime/#coretime-splitting","title":"Coretime Splitting","text":"

Splitting is the action of dividing bulk coretime into multiple contiguous regions. This feature suits parachains that need to produce blocks continuously but require only part of the 28-day duration of bulk coretime.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/agile-coretime/#on-demand-coretime","title":"On-Demand Coretime","text":"

Polkadot has dedicated cores assigned to provide coretime on demand. These cores are excluded from coretime sales and are reserved for on-demand parachains, which pay in DOT per block.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/","title":"Overview","text":""},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#introduction","title":"Introduction","text":"

Polkadot is a next-generation blockchain protocol designed to support a multi-chain future by enabling secure communication and interoperability between different blockchains. Built as a Layer-0 protocol, Polkadot introduces innovations like application-specific Layer-1 chains (parachains), shared security through Nominated Proof of Stake (NPoS), and cross-chain interactions via its native Cross-Consensus Messaging Format (XCM).

This guide covers key aspects of Polkadot's architecture, including its high-level protocol structure, blockspace commoditization, and the role of its native token, DOT, in governance, staking, and resource allocation.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#polkadot-10","title":"Polkadot 1.0","text":"

Polkadot 1.0 represents the state of Polkadot as of 2023, coinciding with the release of Polkadot runtime v1.0.0. This section will focus on Polkadot 1.0, along with philosophical insights into network resilience and blockspace.

As a Layer-0 blockchain, Polkadot contributes to the multi-chain vision through several key innovations and initiatives, including:

  • Application-specific Layer-1 blockchains (parachains) - Polkadot's sharded network allows for parallel transaction processing, with shards that can have unique state transition functions, enabling custom-built L1 chains optimized for specific applications

  • Shared security and scalability - L1 chains connected to Polkadot benefit from its Nominated Proof of Stake (NPoS) system, providing security out-of-the-box without the need to bootstrap their own

  • Secure interoperability - Polkadot's native interoperability enables seamless data and value exchange between parachains. This interoperability can also be used outside of the ecosystem for bridging with external networks

  • Resilient infrastructure - decentralized and scalable, Polkadot ensures ongoing support for development and community initiatives via its on-chain treasury and governance

  • Rapid L1 development - the Polkadot SDK allows fast, flexible creation and deployment of Layer-1 chains

  • Cultivating the next generation of Web3 developers - Polkadot supports the growth of Web3 core developers through initiatives such as:

    • Polkadot Blockchain Academy
    • Polkadot Alpha Program
    • EdX courses
    • Rust and Substrate courses (coming soon)
"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#high-level-architecture","title":"High-Level Architecture","text":"

Polkadot features a chain that serves as the central component of the system. This chain is depicted as a ring encircled by several parachains that are connected to it.

According to Polkadot's design, any blockchain that can compile to WebAssembly (Wasm) and adheres to the Parachains Protocol becomes a parachain on the Polkadot network.

Here's a high-level overview of the Polkadot protocol architecture:

Parachains propose blocks to Polkadot validators, who check for availability and validity before finalizing them. With the relay chain providing security, collators (full nodes of parachains) can focus on their tasks without needing strong incentives.

The Cross-Consensus Messaging Format (XCM) allows parachains to exchange messages freely, leveraging the chain's security for trust-free communication.

To interact with chains that want to use their own finalization process (e.g., Bitcoin), Polkadot has bridges that offer two-way compatibility, meaning that transactions can be made between different chains.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#polkadots-additional-functionalities","title":"Polkadot's Additional Functionalities","text":"

The Polkadot chain oversaw crowdloans and auctions: chain cores were leased through auctions for three-month periods, up to a maximum of two years.

Crowdloans enabled users to securely lend funds to teams for lease deposits in exchange for pre-sale tokens, which was the only way to access slots on Polkadot 1.0.

Note

Auctions are deprecated in favor of coretime.

Additionally, the chain handles staking, accounts, balances, and governance.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#agile-coretime","title":"Agile Coretime","text":"

The new and more efficient way of obtaining a core on Polkadot is to go through the process of purchasing coretime.

Agile coretime improves the efficient use of Polkadot's network resources and offers economic flexibility for developers, extending Polkadot's capabilities far beyond the original vision outlined in the whitepaper.

It enables parachains to purchase monthly "bulk" allocations of coretime (the time allocated for utilizing a core, measured in Polkadot relay chain blocks), ensuring heavy-duty parachains that can author a block every six seconds with Asynchronous Backing can reliably renew their coretime each month. Although six-second block times are now the default, parachains have the option of producing blocks less frequently.

Renewal orders are prioritized over new orders, offering stability against price fluctuations and helping parachains budget more effectively for project costs.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#polkadots-resilience","title":"Polkadot's Resilience","text":"

Decentralization is a vital component of blockchain networks, but it comes with trade-offs:

  • An overly decentralized network may face challenges in reaching consensus and require significant energy to operate
  • A network that achieves consensus quickly risks centralization, making it easier to manipulate or attack

A network should be decentralized enough to prevent manipulative or malicious influence. In this sense, decentralization is a tool for achieving resilience.

Polkadot 1.0 currently achieves resilience through several strategies:

  • Nominated Proof of Stake (NPoS) - ensures that the stake per validator is maximized and evenly distributed among validators

  • Decentralized nodes - designed to encourage operators to join the network. This program aims to expand and diversify the validators in the ecosystem who aim to become independent of the program during their term. Feel free to explore more about the program on the official Decentralized Nodes page

  • On-chain treasury and governance - known as OpenGov, this system allows every decision to be made through public referenda, enabling any token holder to cast a vote

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#polkadots-blockspace","title":"Polkadot's Blockspace","text":"

Polkadot 1.0's design allows for the commoditization of blockspace.

Blockspace is a blockchain's capacity to finalize and commit operations, encompassing its security, computing, and storage capabilities. Its characteristics can vary across different blockchains, affecting security, flexibility, and availability.

  • Security - measures the robustness of blockspace in Proof of Stake (PoS) networks linked to the stake locked on validator nodes, the variance in stake among validators, and the total number of validators. It also considers social centralization (how many validators are owned by single operators) and physical centralization (how many validators run on the same service provider)

  • Flexibility - reflects the functionalities and types of data that can be stored, with high-quality data essential to avoid bottlenecks in critical processes

  • Availability - indicates how easily users can access blockspace. It should be easily accessible, allowing diverse business models to thrive, ideally regulated by a marketplace based on demand and supplemented by options for "second-hand" blockspace

Polkadot is built on core blockspace principles, but there's room for improvement. Tasks like balance transfers, staking, and governance are managed on the relay chain.

Delegating these responsibilities to system chains could enhance flexibility and allow the relay chain to concentrate on providing shared security and interoperability.

Note

For more information about blockspace, watch Robert Habermeier's interview or read his technical blog post.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#dot-token","title":"DOT Token","text":"

DOT is the native token of the Polkadot network, much like BTC for Bitcoin and Ether for the Ethereum blockchain. DOT has 10 decimals, uses the Planck base unit, and has a balance type of u128. The same is true for Kusama's KSM token, except that KSM has 12 decimals.

Redenomination of DOT

Polkadot conducted a community poll, which ended on 27 July 2020 at block 888,888, to decide whether to redenominate the DOT token. The stakeholders chose to redenominate the token, changing the value of 1 DOT from 1e12 plancks to 1e10 plancks.

Importantly, this did not affect the network's total number of base units (plancks); it only affects how a single DOT is represented.

The redenomination became effective 72 hours after transfers were enabled, occurring at block 1,248,328 on 21 August 2020 around 16:50 UTC.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#the-planck-unit","title":"The Planck Unit","text":"

The smallest unit of account balance on Polkadot SDK-based blockchains (such as Polkadot and Kusama) is called Planck, named after the Planck length, the smallest measurable distance in the physical universe.

Similar to how BTC's smallest unit is the Satoshi and ETH's is the Wei, Polkadot's native token DOT equals 1e10 Planck, while Kusama's native token KSM equals 1e12 Planck.
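
A small sketch of the unit arithmetic (values from the text; formatting helpers are illustrative):

```rust
// DOT has 10 decimals and KSM has 12; balances are u128 values
// denominated in Planck.
fn main() {
    const PLANCKS_PER_DOT: u128 = 10u128.pow(10);
    const PLANCKS_PER_KSM: u128 = 10u128.pow(12);
    let half_dot = PLANCKS_PER_DOT / 2;
    let (whole, frac) = (half_dot / PLANCKS_PER_DOT, half_dot % PLANCKS_PER_DOT);
    println!("0.5 DOT = {half_dot} Planck = {whole}.{frac:010} DOT");
    println!("1 KSM = {PLANCKS_PER_KSM} Planck");
}
```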

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#uses-for-dot","title":"Uses for DOT","text":"

DOT serves three primary functions within the Polkadot network:

  • Governance - it is used to participate in the governance of the network
  • Staking - DOT is staked to support the network's operation and security
  • Buying coretime - used to purchase coretime in-bulk or on-demand and access the chain to benefit from Polkadot's security and interoperability

Additionally, DOT can serve as a transferable token. For example, DOT held in the treasury can be allocated to teams developing projects that benefit the Polkadot ecosystem.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/overview/#jam-and-the-road-ahead","title":"JAM and the Road Ahead","text":"

The Join-Accumulate Machine (JAM) represents a transformative redesign of Polkadot's core architecture, envisioned as the successor to the current relay chain. Unlike traditional blockchain architectures, JAM introduces a unique computational model that processes work through two primary functions:

  • Join - handles data integration
  • Accumulate - folds computations into the chain's state

JAM removes many of the opinions and constraints of the current relay chain while maintaining its core security properties. Expected improvements include:

  • Permissionless code execution - JAM is designed to be more generic and flexible, allowing for permissionless code execution through services that can be deployed without governance approval
  • More effective block time utilization - JAM's efficient pipeline processing model places the prior state root in block headers instead of the posterior state root, enabling more effective utilization of block time for computations

This architectural evolution promises to enhance Polkadot's scalability and flexibility while maintaining robust security guarantees. JAM is planned to be rolled out to Polkadot as a single, complete upgrade rather than a stream of smaller updates. This approach seeks to minimize the developer overhead required to address any breaking changes.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/","title":"Proof of Stake Consensus","text":""},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#introduction","title":"Introduction","text":"

Polkadot's Proof of Stake consensus model leverages a unique hybrid approach by design to promote decentralized and secure network operations. In traditional Proof of Stake (PoS) systems, a node's ability to validate transactions is tied to its token holdings, which can lead to centralization risks and limited validator participation. Polkadot addresses these concerns through its Nominated Proof of Stake (NPoS) model and a combination of advanced consensus mechanisms to ensure efficient block production and strong finality guarantees. This combination enables the Polkadot network to scale while maintaining security and decentralization.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#nominated-proof-of-stake","title":"Nominated Proof of Stake","text":"

Polkadot uses Nominated Proof of Stake (NPoS) to select the validator set and secure the network. This model is designed to maximize decentralization and security by balancing the roles of validators and nominators.

  • Validators - play a key role in maintaining the network's integrity. They produce new blocks, validate parachain blocks, and ensure the finality of transactions across the relay chain
  • Nominators - support the network by selecting validators to back with their stake. This mechanism allows users who don't want to run a validator node to still participate in securing the network and earn rewards based on the validators they support

In Polkadot's NPoS system, nominators can delegate their tokens to trusted validators, giving them voting power in selecting validators while spreading security responsibilities across the network.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#hybrid-consensus","title":"Hybrid Consensus","text":"

Polkadot employs a hybrid consensus model that combines two key protocols: a finality gadget called GRANDPA and a block production mechanism known as BABE. This hybrid approach enables the network to benefit from both rapid block production and provable finality, ensuring security and performance.

The hybrid consensus model has some key advantages:

  • Probabilistic finality - with BABE constantly producing new blocks, Polkadot ensures that the network continues to make progress, even when a final decision has not yet been reached on which chain is the true canonical chain

  • Provable finality - GRANDPA guarantees that once a block is finalized, it can never be reverted, ensuring that all network participants agree on the finalized chain

By using separate protocols for block production and finality, Polkadot can achieve rapid block creation and strong guarantees of finality while avoiding the typical trade-offs seen in traditional consensus mechanisms.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#block-production-babe","title":"Block Production - BABE","text":"

Blind Assignment for Blockchain Extension (BABE) is Polkadot's block production mechanism, working with GRANDPA to ensure blocks are produced consistently across the network. As validators participate in BABE, they are assigned block production slots through a randomness-based lottery system. This helps determine which validator is responsible for producing a block at a given time. BABE shares similarities with Ouroboros Praos but differs in key aspects like chain selection rules and slot timing.

Key features of BABE include:

  • Epochs and slots - BABE operates in phases called epochs, each of which is divided into slots (around 6 seconds per slot). Validators are assigned slots at the beginning of each epoch based on stake and randomness

  • Randomized block production - validators enter a lottery to determine which will produce a block in a specific slot. This randomness is sourced from the relay chain's randomness cycle

  • Multiple block producers per slot - in some cases, more than one validator might win the lottery for the same slot, resulting in multiple blocks being produced. These blocks are broadcasted, and the network's fork choice rule helps decide which chain to follow

  • Handling empty slots - if no validators win the lottery for a slot, a secondary selection algorithm ensures that a block is still produced. Validators selected through this method always produce a block, ensuring no slots are skipped

BABE's combination of randomness and slot allocation creates a secure, decentralized system for consistent block production while also allowing for fork resolution when multiple validators produce blocks for the same slot.

Additional Information
  • Refer to the BABE paper for further technical insights, including cryptographic details and formal proofs
  • Visit the Block Production Lottery section of the Polkadot Protocol Specification for technical definitions and formulas
"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#validator-participation","title":"Validator Participation","text":"

In BABE, validators participate in a lottery for every slot to determine whether they are responsible for producing a block during that slot. The randomness of this process keeps block production decentralized and unpredictable.

There are two lottery outcomes for any given slot that initiate additional processes:

  • Multiple validators in a slot - due to the randomness, multiple validators can be selected to produce a block for the same slot. When this happens, each validator produces a block and broadcasts it to the network, resulting in a race condition. The network's topology and latency then determine which block reaches the majority of nodes first. BABE allows both chains to continue building until the finalization process resolves which one becomes canonical. The fork choice rule is then used to decide which chain the network should follow

  • No validators in a slot - on occasions when no validator is selected by the lottery, a secondary validator selection algorithm steps in. This backup ensures that a block is still produced, preventing skipped slots. However, if the primary block produced by a verifiable random function (VRF)-selected validator exists for that slot, the secondary block will be ignored. As a result, every slot will have either a primary or a secondary block

This design ensures continuous block production, even in cases of multiple competing validators or an absence of selected validators.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#finality-gadget-grandpa","title":"Finality Gadget - GRANDPA","text":"

GRANDPA (GHOST-based Recursive ANcestor Deriving Prefix Agreement) serves as the finality gadget for Polkadot's relay chain. Operating alongside the BABE block production mechanism, it ensures provable finality, giving participants confidence that blocks finalized by GRANDPA cannot be reverted.

Key features of GRANDPA include:

  • Independent finality service - GRANDPA runs separately from the block production process, operating in parallel to ensure seamless finalization
  • Chain-based finalization - instead of finalizing one block at a time, GRANDPA finalizes entire chains, speeding up the process significantly
  • Batch finalization - can finalize multiple blocks in a single round, enhancing efficiency and minimizing delays in the network
  • Partial synchrony tolerance - GRANDPA works effectively in a partially synchronous network environment, managing both asynchronous and synchronous conditions
  • Byzantine fault tolerance - can handle up to 1/5 Byzantine (malicious) nodes, ensuring the system remains secure even when faced with adversarial behavior
What is GHOST?

GHOST (Greedy Heaviest-Observed Subtree) is a consensus protocol used in blockchain networks to select the heaviest branch in a block tree. Unlike traditional longest-chain rules, GHOST can more efficiently handle high block production rates by considering the weight of subtrees rather than just the chain length.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#probabilistic-vs-provable-finality","title":"Probabilistic vs. Provable Finality","text":"

In traditional Proof of Work (PoW) blockchains, finality is probabilistic. As blocks are added to the chain, the probability that a block is final increases, but it can never be guaranteed. Eventual consensus means that over time, all nodes will agree on a single version of the blockchain, but this process can be unpredictable and slow.

Conversely, GRANDPA provides provable finality, which means that once a block is finalized, it is irreversible. By using Byzantine fault-tolerant agreements, GRANDPA finalizes blocks more efficiently and securely than probabilistic mechanisms like Nakamoto consensus. Like Ethereum's Casper the Friendly Finality Gadget (FFG), GRANDPA ensures that finalized blocks cannot be reverted, offering stronger guarantees of consensus.

Additional Information

For more details, including formal proofs and detailed algorithms, see the GRANDPA paper.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#fork-choice","title":"Fork Choice","text":"

The fork choice of the relay chain combines BABE and GRANDPA:

  1. BABE must always build on the chain that GRANDPA has finalized
  2. When there are forks after the finalized head, BABE builds on the chain with the most primary blocks to provide probabilistic finality

In the preceding diagram, finalized blocks are black and non-finalized blocks are yellow. Primary blocks are labeled '1' and secondary blocks are labeled '2'. The topmost chain is the longest chain originating from the last finalized block, but it is not selected because it has only one primary block at the time of evaluation. The chain below it also originates from the last finalized block and has three primary blocks, so it is chosen as the best chain.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#bridging-beefy","title":"Bridging - BEEFY","text":"

Bridge Efficiency Enabling Finality Yielder (BEEFY) is a specialized protocol that extends the finality guarantees provided by GRANDPA. It is specifically designed to facilitate efficient bridging between Polkadot relay chains (such as Polkadot and Kusama) and external blockchains like Ethereum. While GRANDPA is well-suited for finalizing blocks within Polkadot, it has limitations when bridging external chains that weren't built with Polkadot's interoperability features in mind. BEEFY addresses these limitations by ensuring other networks can efficiently verify finality proofs.

Key features of BEEFY include:

  • Efficient finality proof verification - BEEFY enables external networks to easily verify Polkadot finality proofs, ensuring seamless communication between chains
  • Merkle Mountain Ranges (MMR) - this data structure is used to efficiently store and transmit proofs between chains, optimizing data storage and reducing transmission overhead
  • ECDSA signature schemes - BEEFY uses ECDSA signatures, which are widely supported on Ethereum and other EVM-based chains, making integration with these ecosystems smoother
  • Light client optimization - BEEFY reduces the computational burden on light clients by allowing them to check for a super-majority of validator votes rather than needing to process all validator signatures, improving performance
Additional Information

For more details, including technical definitions and formulas, see Bridge design (BEEFY) in the Polkadot Protocol Specification.

"},{"location":"polkadot-protocol/architecture/polkadot-chain/pos-consensus/#resources","title":"Resources","text":"
  • GRANDPA Rust implementation
  • GRANDPA Pallet
  • Block Production and Finalization in Polkadot - Bill Laboon explains how BABE and GRANDPA work together to produce and finalize blocks on Kusama
  • Block Production and Finalization in Polkadot: Understanding the BABE and GRANDPA Protocols - Bill Laboon's MIT Cryptoeconomic Systems 2020 academic talk describing Polkadot's hybrid consensus model in-depth
"},{"location":"polkadot-protocol/architecture/system-chains/","title":"System Chains","text":"

Explore the critical roles Polkadot's system chains play in enhancing the network's efficiency and scalability. From managing on-chain assets with the Asset Hub to enabling seamless Web3 integration through the Bridge Hub and facilitating coretime operations with the Coretime chain, each system chain is designed to offload specialized tasks from the relay chain, optimizing the entire ecosystem.

These system chains are integral to Polkadot's architecture, ensuring that the relay chain remains focused on consensus and security while system chains handle vital functions like asset management, cross-chain communication, and resource allocation. By distributing responsibilities across specialized chains, Polkadot maintains high performance, scalability, and flexibility, enabling developers to build more efficient and interconnected blockchain solutions.

"},{"location":"polkadot-protocol/architecture/system-chains/#in-this-section","title":"In This Section","text":"


"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/","title":"Asset Hub","text":""},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#introduction","title":"Introduction","text":"

The Asset Hub is a critical component in the Polkadot ecosystem, enabling the management of fungible and non-fungible assets across the network. Since the relay chain focuses on maintaining security and consensus without direct asset management, Asset Hub provides a streamlined platform for creating, managing, and using on-chain assets in a fee-efficient manner. This guide outlines the core features of Asset Hub, including how it handles asset operations, cross-chain transfers, and asset integration using XCM, as well as essential tools like API Sidecar and TxWrapper for developers working with on-chain assets.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#assets-basics","title":"Assets Basics","text":"

In the Polkadot ecosystem, the relay chain does not natively support additional assets beyond its native token (DOT for Polkadot, KSM for Kusama). The Asset Hub parachain on Polkadot and Kusama provides a fungible and non-fungible assets framework. Asset Hub allows developers and users to create, manage, and use assets across the ecosystem.

Asset creators can use Asset Hub to track their asset issuance across multiple parachains and manage assets through operations such as minting, burning, and transferring. Projects that need a standardized method of handling on-chain assets will find this particularly useful. The fungible asset interface provided by Asset Hub closely resembles Ethereum's ERC-20 standard but is directly integrated into Polkadot's runtime, making it more efficient in terms of speed and transaction fees.

Integrating with Asset Hub offers several key benefits, particularly for infrastructure providers and users:

  • Support for non-native on-chain assets - Asset Hub enables seamless asset creation and management, allowing projects to develop tokens or assets that can interact with the broader ecosystem
  • Lower transaction fees - Asset Hub offers significantly lower transaction costs, approximately one-tenth of the fees on the relay chain, providing cost-efficiency for regular operations
  • Reduced deposit requirements - depositing assets in Asset Hub is more accessible, with deposit requirements that are around one one-hundredth of those on the relay chain
  • Payment of transaction fees with non-native assets - users can pay transaction fees in assets other than the native token (DOT or KSM), offering more flexibility for developers and users

Assets created on the Asset Hub are stored as part of a map, where each asset has a unique ID that links to information about the asset, including details like:

  • The management team
  • The total supply
  • The number of accounts holding the asset
  • Sufficiency for account existence - whether the asset alone is enough to maintain an account without a native token balance
  • The metadata of the asset, including its name, symbol, and the number of decimals for representation

Some assets can be regarded as sufficient to maintain an account's existence, meaning that users can create accounts on the network without needing a native token balance (i.e., no existential deposit required). Developers can also set minimum balances for their assets. If an account's balance drops below the minimum, the balance is considered dust and may be cleared.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#assets-pallet","title":"Assets Pallet","text":"

The Polkadot SDK's Assets pallet is a powerful module designed for creating and managing fungible asset classes with a fixed supply. It offers a secure and flexible way to issue, transfer, freeze, and destroy assets. The pallet supports various operations and includes permissioned and non-permissioned functions to cater to simple and advanced use cases.

Visit the Assets Pallet Rust docs for more in-depth information.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#key-features","title":"Key Features","text":"

Key features of the Assets pallet include:

  • Asset issuance - allows the creation of a new asset, where the total supply is assigned to the creator's account
  • Asset transfer - enables transferring assets between accounts while maintaining a balance in both accounts
  • Asset freezing - prevents transfers of a specific asset from one account, locking it from further transactions
  • Asset destruction - allows accounts to burn or destroy their holdings, removing those assets from circulation
  • Non-custodial transfers - a non-custodial mechanism to enable one account to approve a transfer of assets on behalf of another
"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#main-functions","title":"Main Functions","text":"

The Assets pallet provides a broad interface for managing fungible assets. Some of the main dispatchable functions include:

  • create() - create a new asset class by placing a deposit, applicable when asset creation is permissionless
  • issue() - mint a fixed supply of a new asset and assign it to the creator's account
  • transfer() - transfer a specified amount of an asset between two accounts
  • approve_transfer() - approve a non-custodial transfer, allowing a third party to move assets between accounts
  • destroy() - destroy an entire asset class, removing it permanently from the chain
  • freeze() and thaw() - administrators or privileged users can lock or unlock assets from being transferred

For a full list of dispatchable and privileged functions, see the dispatchables Rust docs.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#querying-functions","title":"Querying Functions","text":"

The Assets pallet exposes several key querying functions that developers can interact with programmatically. These functions allow you to query asset information and perform operations essential for managing assets across accounts. The two main querying functions are:

  • balance(asset_id, account) - retrieves the balance of a given asset for a specified account. Useful for checking the holdings of an asset class across different accounts

  • total_supply(asset_id) - returns the total supply of the asset identified by asset_id. Allows users to verify how much of the asset exists on-chain

In addition to these basic functions, other utility functions are available for querying asset metadata and performing asset transfers. You can view the complete list of querying functions in the Struct Pallet Rust docs.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#permission-models-and-roles","title":"Permission Models and Roles","text":"

The Assets pallet incorporates a robust permission model, enabling control over who can perform specific operations like minting, transferring, or freezing assets. The key roles within the permission model are:

  • Admin - can freeze (preventing transfers) and forcibly transfer assets between accounts. Admins also have the power to reduce the balance of an asset class across arbitrary accounts. They manage the more sensitive and administrative aspects of the asset class
  • Issuer - responsible for minting new tokens. When new assets are created, the Issuer is the account that controls their distribution to other accounts
  • Freezer - can lock the transfer of assets from an account, preventing the account holder from moving their balance. This function is useful for freezing accounts involved in disputes or fraud
  • Owner - has overarching control, including destroying an entire asset class. Owners can also set or update the Issuer, Freezer, and Admin roles

These permissions provide fine-grained control over assets, enabling developers and asset managers to ensure secure, controlled operations. Each of these roles is crucial for managing asset lifecycles and ensuring that assets are used appropriately across the network.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#asset-freezing","title":"Asset Freezing","text":"

The Assets pallet allows you to freeze assets. This feature prevents transfers or spending from a specific account, effectively locking the balance of an asset class until it is explicitly unfrozen. Asset freezing is beneficial when assets are restricted due to security concerns or disputes.

Freezing assets is controlled by the Freezer role, as mentioned earlier. Only the account with the Freezer privilege can perform these operations. Here are the key freezing functions:

  • freeze(asset_id, account) - locks the specified asset of the account. While the asset is frozen, no transfers can be made from the frozen account
  • thaw(asset_id, account) - corresponding function for unfreezing, allowing the asset to be transferred again

This approach enables secure and flexible asset management, providing administrators the tools to control asset movement in special circumstances.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#non-custodial-transfers-approval-api","title":"Non-Custodial Transfers (Approval API)","text":"

The Assets pallet also supports non-custodial transfers through the Approval API. This feature allows one account to approve another account to transfer a specific amount of its assets to a third-party recipient without granting full control over the account's balance. Non-custodial transfers enable secure transactions where trust is required between multiple parties.

Here's a brief overview of the key functions for non-custodial asset transfers:

  • approve_transfer(asset_id, delegate, amount) - approves a delegate to transfer up to a certain amount of the asset on behalf of the original account holder
  • cancel_approval(asset_id, delegate) - cancels a previous approval for the delegate. Once canceled, the delegate no longer has permission to transfer the approved amount
  • transfer_approved(asset_id, owner, recipient, amount) - executes the approved asset transfer from the owner's account to the recipient. The delegate account can call this function once approval is granted

These delegated operations make it easier to manage multi-step transactions and dApps that require complex asset flows between participants.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#foreign-assets","title":"Foreign Assets","text":"

Foreign assets in Asset Hub refer to assets originating from external blockchains or parachains that are registered in the Asset Hub. These assets are typically native tokens from other parachains within the Polkadot ecosystem or bridged tokens from external blockchains such as Ethereum.

Once a foreign asset is registered in the Asset Hub by its originating blockchain's root origin, users are able to send these tokens to the Asset Hub and interact with them as they would any other asset within the Polkadot ecosystem.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#handling-foreign-assets","title":"Handling Foreign Assets","text":"

The Foreign Assets pallet, an instance of the Assets pallet, manages these assets. Since foreign assets are integrated into the same interface as native assets, developers can use the same functionalities, such as transferring and querying balances. However, there are important distinctions when dealing with foreign assets.

  • Asset identifier - unlike native assets, foreign assets are identified using an XCM Multilocation rather than a simple numeric AssetId. This multilocation identifier represents the cross-chain location of the asset and provides a standardized way to reference it across different parachains and relay chains (see the sketch after this list)

  • Transfers - once registered in the Asset Hub, foreign assets can be transferred between accounts, just like native assets. Users can also send these assets back to their originating blockchain if supported by the relevant cross-chain messaging mechanisms

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#integration","title":"Integration","text":"

Asset Hub supports a variety of integration tools that make it easy for developers to manage assets and interact with the blockchain in their applications. The tools and libraries provided by Parity Technologies enable streamlined operations, such as querying asset information, building transactions, and monitoring cross-chain asset transfers.

Developers can integrate Asset Hub into their projects using these core tools:

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#api-sidecar","title":"API Sidecar","text":"

API Sidecar is a RESTful service that can be deployed alongside Polkadot and Kusama nodes. It provides endpoints to retrieve real-time blockchain data, including asset information. When used with Asset Hub, Sidecar allows querying:

  • Asset look-ups - retrieve specific assets using AssetId
  • Asset balances - view the balance of a particular asset on Asset Hub

Public instances of API Sidecar connected to Asset Hub are available, such as:

  • Polkadot Asset Hub Sidecar
  • Kusama Asset Hub Sidecar

These public instances are primarily for ad-hoc testing and quick checks.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#txwrapper","title":"TxWrapper","text":"

TxWrapper is a library that simplifies constructing and signing transactions for Polkadot SDK-based chains, including Polkadot and Kusama. This tool includes support for working with Asset Hub, enabling developers to:

  • Construct offline transactions
  • Leverage asset-specific functions such as minting, burning, and transferring assets

TxWrapper provides the flexibility needed to integrate asset operations into custom applications while maintaining the security and efficiency of Polkadot's transaction model.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#asset-transfer-api","title":"Asset Transfer API","text":"

Asset Transfer API is a library focused on simplifying the construction of asset transfers for Polkadot SDK-based chains that involve system parachains like Asset Hub. It exposes a reduced set of methods that facilitate users sending transfers to other parachains or locally. Refer to the cross-chain support table for the current status of cross-chain support development.

Key features include:

  • Support for cross-chain transfers between parachains
  • Streamlined transaction construction with support for the necessary parachain metadata

The API supports various asset operations, such as paying transaction fees with non-native tokens and managing asset liquidity.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#parachain-node","title":"Parachain Node","text":"

To fully leverage the Asset Hub's functionality, developers will need to run a system parachain node. Setting up an Asset Hub node allows users to interact with the parachain in real time, syncing data and participating in the broader Polkadot ecosystem. Guidelines for setting up an Asset Hub node are available in the Parity documentation.

Using these integration tools, developers can manage assets seamlessly and integrate Asset Hub functionality into their applications, leveraging Polkadot's powerful infrastructure.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#xcm-transfer-monitoring","title":"XCM Transfer Monitoring","text":"

Since Asset Hub facilitates cross-chain asset transfers across the Polkadot ecosystem, XCM transfer monitoring becomes an essential practice for developers and infrastructure providers. This section outlines how to monitor the cross-chain movement of assets between parachains, the relay chain, and other systems.

"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#monitor-xcm-deposits","title":"Monitor XCM Deposits","text":"

As assets move between chains, tracking the cross-chain transfers in real time is crucial. Whether assets are transferred via a teleport from system parachains or through a reserve-backed transfer from any other parachain, each transfer emits a relevant event (such as the balances.minted event).

To ensure accurate monitoring of these events:

  • Track XCM deposits - query every new block created in the relay chain or Asset Hub, loop through the events array, and filter for any balances.minted events which confirm the asset was successfully transferred to the account (a sketch of this loop follows the list)
  • Track event origins - each balances.minted event points to a specific address. By monitoring this, service providers can verify that assets have arrived in the correct account
"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#track-xcm-information-back-to-the-source","title":"Track XCM Information Back to the Source","text":"

While the balances.minted event confirms the arrival of assets, there may be instances where you need to trace the origin of the cross-chain message that triggered the event. In such cases, you can:

  1. Query the relevant chain at the block where the balances.minted event was emitted
  2. Look for a messageQueue(Processed) event within that block's initialization. This event contains a parameter (Id) that identifies the cross-chain message received by the relay chain or Asset Hub. You can use this Id to trace the message back to its origin chain, offering full visibility of the asset transfer's journey
"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#practical-monitoring-examples","title":"Practical Monitoring Examples","text":"

The preceding sections outline the process of monitoring XCM deposits to specific accounts and then tracing back the origin of these deposits. The process of tracking an XCM transfer and the specific events to monitor may vary based on the direction of the XCM message. Here are some examples to showcase the slight differences:

  • Transfer from parachain to relay chain - track parachainsystem(UpwardMessageSent) on the parachain and messagequeue(Processed) on the relay chain
  • Transfer from relay chain to parachain - track xcmPallet(sent) on the relay chain and dmpqueue(ExecutedDownward) on the parachain
  • Transfer between parachains - track xcmpqueue(XcmpMessageSent) on the system parachain and xcmpqueue(Success) on the destination parachain
"},{"location":"polkadot-protocol/architecture/system-chains/asset-hub/#monitor-for-failed-xcm-transfers","title":"Monitor for Failed XCM Transfers","text":"

Sometimes, XCM transfers may fail due to liquidity or other errors. Failed transfers emit specific error events, which are key to resolving issues in asset transfers. Monitoring for these failure events helps catch issues before they affect asset balances.

  • Relay chain to system parachain - look for the dmpqueue(ExecutedDownward) event on the parachain with an Incomplete outcome and an error type such as UntrustedReserveLocation
  • Parachain to parachain - monitor for xcmpqueue(Fail) on the destination parachain with error types like TooExpensive

For detailed error management in XCM, see Gavin Wood's blog post on XCM Execution and Error Management.

"},{"location":"polkadot-protocol/architecture/system-chains/bridge-hub/","title":"Bridge Hub","text":""},{"location":"polkadot-protocol/architecture/system-chains/bridge-hub/#introduction","title":"Introduction","text":"

The Bridge Hub system parachain plays a crucial role in facilitating trustless interactions between Polkadot, Kusama, Ethereum, and other blockchain ecosystems. By implementing on-chain light clients and supporting protocols like BEEFY and GRANDPA, Bridge Hub ensures seamless message transmission and state verification across chains. It also provides essential pallets for sending and receiving messages, making it a cornerstone of Polkadot's interoperability framework. With built-in support for XCM (Cross-Consensus Messaging), Bridge Hub enables secure, efficient communication between diverse blockchain networks.

This guide covers the architecture, components, and deployment of the Bridge Hub system. You'll explore its trustless bridging mechanisms, key pallets for various blockchains, and specific implementations like Snowbridge and the Polkadot <> Kusama bridge. By the end, you'll understand how Bridge Hub enhances connectivity within the Polkadot ecosystem and beyond.

"},{"location":"polkadot-protocol/architecture/system-chains/bridge-hub/#trustless-bridging","title":"Trustless Bridging","text":"

Bridge Hub provides a mode of trustless bridging through its implementation of on-chain light clients and trustless relayers. The target chain and source chain both provide ways of verifying one another's state and actions (such as a transfer) based on the consensus and finality of both chains rather than an external mechanism controlled by a third party.

BEEFY (Bridge Efficiency Enabling Finality Yielder) is instrumental in this solution. It provides a more efficient way to verify the consensus on the relay chain. It allows the participants in a network to verify finality proofs, meaning a remote chain like Ethereum can verify the state of Polkadot at a given block height.

Info

In this context, "trustless" refers to the lack of need to trust a human when interacting with various system components. Trustless systems are based instead on trusting mathematics, cryptography, and code.

Trustless bridges are essentially two one-way bridges, where each chain has a method of verifying the state of the other in a trustless manner through consensus proofs.

For example, the Ethereum and Polkadot bridging solution that Snowbridge implements involves two light clients: one which verifies the state of Polkadot and the other which verifies the state of Ethereum. The light client for Ethereum, which tracks Ethereum's beacon chain, is implemented in the Bridge Hub runtime as a pallet, whereas the light client for Polkadot is implemented as a smart contract deployed on Ethereum.

"},{"location":"polkadot-protocol/architecture/system-chains/bridge-hub/#bridging-components","title":"Bridging Components","text":"

In any given Bridge Hub implementation (Kusama, Polkadot, or other relay chains), there are a few primary pallets that are utilized:

  • Pallet Bridge GRANDPA - an on-chain GRANDPA light client for Substrate based chains
  • Pallet Bridge Parachains - a finality module for parachains
  • Pallet Bridge Messages - a pallet which allows sending, receiving, and tracking of inbound and outbound messages
  • Pallet XCM Bridge - a pallet which, with the Bridge Messages pallet, adds XCM support to bridge pallets
"},{"location":"polkadot-protocol/architecture/system-chains/bridge-hub/#ethereum-specific-support","title":"Ethereum-Specific Support","text":"

Bridge Hub also has a set of components and pallets that support a bridge between Polkadot and Ethereum through Snowbridge.

To view the complete list of which pallets are included in Bridge Hub, visit the Subscan Runtime Modules page. Alternatively, the source code for those pallets can be found in the Polkadot SDK Snowbridge Pallets repository.

"},{"location":"polkadot-protocol/architecture/system-chains/bridge-hub/#deployed-bridges","title":"Deployed Bridges","text":"
  • Snowbridge - a general-purpose, trustless bridge between Polkadot and Ethereum
  • Hyperbridge - a cross-chain solution built as an interoperability coprocessor, providing state-proof-based interoperability across all blockchains
  • Polkadot <> Kusama Bridge - a bridge that utilizes relayers to bridge the Polkadot and Kusama relay chains trustlessly
"},{"location":"polkadot-protocol/architecture/system-chains/bridge-hub/#where-to-go-next","title":"Where to Go Next","text":"
  • Go over the Bridge Hub README in the Polkadot SDK Bridge-hub Parachains repository
  • Take a deeper dive into bridging architecture in the Polkadot SDK High-Level Bridge documentation
  • Read more about BEEFY and Bridging in the Polkadot Wiki: Bridging: BEEFY
"},{"location":"polkadot-protocol/architecture/system-chains/coretime/","title":"Coretime","text":""},{"location":"polkadot-protocol/architecture/system-chains/coretime/#introduction","title":"Introduction","text":"

The Coretime system chain facilitates the allocation, procurement, sale, and scheduling of bulk coretime, enabling tasks (such as parachains) to utilize the computation and security provided by Polkadot.

The Broker pallet, along with Cross Consensus Messaging (XCM), enables this functionality to be delegated to the system chain rather than the relay chain. Upward Message Passing (UMP) to the relay chain then allows core assignments to take place for a task registered on the relay chain.

The Polkadot Fellowship's RFC-1: Agile Coretime contains the specification for the Coretime system chain and coretime as a concept.

Besides core management, the Coretime chain is responsible for tracking:

  • The number of cores that should be made available
  • Which tasks should be running on which cores and in what ratios
  • Accounting information for the on-demand pool

From the relay chain, it expects the following via Downward Message Passing (DMP):

  • The number of cores available to be scheduled
  • Accounting information on on-demand scheduling

The details for this interface can be found in RFC-5: Coretime Interface.

"},{"location":"polkadot-protocol/architecture/system-chains/coretime/#bulk-coretime-assignment","title":"Bulk Coretime Assignment","text":"

The Coretime chain allocates coretime before its usage. It also manages the ownership of a core. As cores are made up of regions (by default, one core is a single region), a region is recognized as a non-fungible asset. The Coretime chain exposes Regions over XCM as an NFT. Users can transfer individual regions, partition, interlace, or allocate them to a task. Regions describe how a task may use a core.

One core can contain more than one region.

A core can be considered a logical representation of an active validator set on the relay chain, where these validators commit to verifying the state changes for a particular task running on that region. With partitioning, having more than one region per core is possible, allowing for different computational schemes. Therefore, running more than one task on a single core is possible.

Regions can be managed in the following manner on the Coretime chain:

  • Assigning regions - regions can be assigned to a task on the relay chain, such as a parachain/rollup, using the assign dispatchable

  • Transferring regions - regions may be transferred on the Coretime chain, upon which the transfer dispatchable in the Broker pallet assigns a new owner to that specific region

  • Partitioning regions - using the partition dispatchable, regions may be partitioned into two non-overlapping subregions within the same core. A partition involves specifying a pivot, wherein the new region will be defined and available for use

  • Interlacing regions - using the interlace dispatchable, interlacing regions allows a core to have alternative-compute strategies. Whereas partitioned regions are mutually exclusive, interlaced regions overlap because multiple tasks may utilize a single core in an alternating manner

A sketch of these region operations follows the note below.

Coretime Availability

When bulk coretime is obtained, block production is not immediately available. It becomes available to produce blocks for a task in the next Coretime cycle. To view the status of the current or next Coretime cycle, go to the Subscan Coretime Dashboard.

For more information regarding these mechanisms, visit the coretime page on the Polkadot Wiki: Introduction to Agile Coretime.

"},{"location":"polkadot-protocol/architecture/system-chains/coretime/#on-demand-coretime","title":"On Demand Coretime","text":"

As of this writing, on-demand coretime is deployed on the relay chain and will eventually be deployed to the Coretime chain. On-demand coretime allows parachains (previously known as parathreads) to utilize available cores per block.

The Coretime chain also handles coretime sales, details of which can be found on the Polkadot Wiki: Agile Coretime: Coretime Sales.

"},{"location":"polkadot-protocol/architecture/system-chains/coretime/#where-to-go-next","title":"Where to Go Next","text":"
  • Learn about Agile Coretime on the Polkadot Wiki
"},{"location":"polkadot-protocol/architecture/system-chains/overview/","title":"Overview of Polkadot's System Chains","text":""},{"location":"polkadot-protocol/architecture/system-chains/overview/#introduction","title":"Introduction","text":"

Polkadot's relay chain is designed to secure parachains and facilitate seamless inter-chain communication. However, resource-intensive tasks like governance, asset management, and bridging are more efficiently handled by system parachains. These specialized chains offload functionality from the relay chain, leveraging Polkadot's parallel execution model to improve performance and scalability. By distributing key functionalities across system parachains, Polkadot can maximize its relay chain's blockspace for its core purpose of securing and validating parachains.

This guide will explore how system parachains operate within Polkadot and Kusama, detailing their critical roles in network governance, asset management, and bridging. You'll learn about the currently deployed system parachains, their unique functions, and how they enhance Polkadot's decentralized ecosystem.

"},{"location":"polkadot-protocol/architecture/system-chains/overview/#system-chains","title":"System Chains","text":"

System parachains contain core Polkadot protocol features, but in parachains rather than the relay chain. Execution cores for system chains are allocated via network governance rather than purchasing coretime on a marketplace.

System parachains defer to on-chain governance to manage their upgrades and other sensitive actions as they do not have native tokens or governance systems separate from DOT or KSM. It is not uncommon to see a system parachain implemented specifically to manage network governance.

Note

You may see system parachains called common good parachains in articles and discussions. This nomenclature caused confusion as the network evolved, so system parachains is preferred.

For more details on this evolution, review this parachains forum discussion.

"},{"location":"polkadot-protocol/architecture/system-chains/overview/#existing-system-chains","title":"Existing System Chains","text":"
---
title: System Parachains at a Glance
---
flowchart TB
    subgraph POLKADOT["Polkadot"]
        direction LR
            PAH["Polkadot Asset Hub"]
            PCOL["Polkadot Collectives"]
            PBH["Polkadot Bridge Hub"]
            PPC["Polkadot People Chain"]
            PCC["Polkadot Coretime Chain"]
    end

    subgraph KUSAMA["Kusama"]
        direction LR
            KAH["Kusama Asset Hub"]
            KBH["Kusama Bridge Hub"]
            KPC["Kusama People Chain"]
            KCC["Kusama Coretime Chain"]
            E["Encointer"]
    end

All system parachains are on both Polkadot and Kusama with the following exceptions:

  • Collectives - only on Polkadot
  • Encointer - only on Kusama
"},{"location":"polkadot-protocol/architecture/system-chains/overview/#asset-hub","title":"Asset Hub","text":"

The Asset Hub is an asset portal for the entire network. It helps asset creators, such as reserve-backed stablecoin issuers, track the total issuance of an asset in the network, including amounts transferred to other parachains. It also serves as the hub where asset creators can perform on-chain operations, such as minting and burning, to manage their assets effectively.

This asset management logic is encoded directly in the runtime of the chain rather than in smart contracts. The efficiency of executing logic in a parachain allows for fees and deposits that are about 1/10th of what is required on the relay chain. These low fees mean that the Asset Hub is well suited for handling the frequent transactions required when managing balances, transfers, and on-chain assets.

The Asset Hub also supports non-fungible assets (NFTs) via the Uniques pallet and NFTs pallet. For more information about NFTs, see the Polkadot Wiki section on NFT Pallets.

"},{"location":"polkadot-protocol/architecture/system-chains/overview/#collectives","title":"Collectives","text":"

The Polkadot Collectives parachain was added in Referendum 81 and exists on Polkadot but not on Kusama. The Collectives chain hosts on-chain collectives that serve the Polkadot network, including the following:

  • Polkadot Alliance - provides a set of ethics and standards for the community to follow. Includes an on-chain means to call out bad actors
  • Polkadot Technical Fellowship - a rules-based social organization to support and incentivize highly-skilled developers to contribute to the technical stability, security, and progress of the network

These on-chain collectives will play essential roles in the future of network stewardship and decentralized governance. Networks can use a bridge hub to help them act as collectives and express their legislative voices as single opinions within other networks.

"},{"location":"polkadot-protocol/architecture/system-chains/overview/#bridge-hub","title":"Bridge Hub","text":"

Before parachains, the only way to design a bridge was to put the logic onto the relay chain. Since both networks now support parachains and the isolation they provide, each network can have a parachain dedicated to bridges.

Each relay chain network has a Bridge Hub system parachain responsible for facilitating bridges to the wider Web3 space. It contains the required bridge pallets in its runtime, which enable trustless bridging with other blockchain networks like Polkadot, Kusama, and Ethereum. The Bridge Hub uses the native token of the relay chain.

See the Bridge Hub documentation for additional information.

"},{"location":"polkadot-protocol/architecture/system-chains/overview/#people-chain","title":"People Chain","text":"

The People Chain provides a naming system that allows users to manage and verify their account identity.

"},{"location":"polkadot-protocol/architecture/system-chains/overview/#coretime-chain","title":"Coretime Chain","text":"

The Coretime system chain lets users buy coretime to access Polkadot's computation. Coretime marketplaces run on top of the Coretime chain.

Visit Introduction to Agile Coretime in the Polkadot Wiki for more information.

"},{"location":"polkadot-protocol/architecture/system-chains/overview/#encointer","title":"Encointer","text":"

Kusama does not use the Collectives system chain. Instead, Kusama relies on the Encointer system chain, which provides Sybil resistance as a service to the entire Kusama ecosystem. Encointer is a blockchain platform for self-sovereign ID and a global universal basic income (UBI). The Encointer protocol uses a novel Proof of Personhood (PoP) system to create unique identities and resist Sybil attacks. PoP is based on the notion that a person can only be in one place at any given time. Encointer offers a framework that allows for any group of real people to create, distribute, and use their own digital community tokens.

Participants are requested to attend physical key-signing ceremonies with small groups of random people at randomized locations. These local meetings are part of one global signing ceremony occurring at the same time. Participants use the Encointer wallet app to participate in these ceremonies and manage local community currencies.

Referendums marking key Encointer adoption milestones include:

  • Referendum 158 - Register Encointer As a Common Good Chain - registered Encointer as the second system parachain on Kusama's network
  • Referendum 187 - Encointer Runtime Upgrade to Full Functionality - introduced a runtime upgrade bringing governance and full functionality for communities to use the protocol

Tip

To learn more about Encointer, check out the official Encointer book or watch an Encointer ceremony in action.

"},{"location":"polkadot-protocol/basics/","title":"Basics","text":"

This section equips developers with the essential knowledge to create, deploy, and enhance applications and blockchains within the Polkadot ecosystem. Gain a comprehensive understanding of Polkadot's foundational components, including accounts, balances, and transactions, as well as advanced topics like data encoding and cryptographic methods. Mastering these concepts is vital for building robust and secure applications on Polkadot.

By exploring these core topics, developers can leverage Polkadot's unique architecture to build scalable and interoperable solutions. From understanding how Polkadot's networks operate to implementing efficient fee mechanisms and utilizing tools like SCALE encoding, this section provides the building blocks for innovation. Whether you're optimizing blockchain performance or designing cross-chain functionality, these insights will help you navigate Polkadot's ecosystem with confidence.

"},{"location":"polkadot-protocol/basics/#in-this-section","title":"In This Section","text":"


"},{"location":"polkadot-protocol/basics/accounts/","title":"Accounts","text":""},{"location":"polkadot-protocol/basics/accounts/#introduction","title":"Introduction","text":"

Accounts are essential for managing identity, transactions, and governance on the network in the Polkadot SDK. Understanding these components is critical for seamless development and operation on the network, whether you're building or interacting with Polkadot-based chains.

This page will guide you through the essential aspects of accounts, including their data structure, balance types, reference counters, and address formats. You'll learn how accounts are managed within the runtime, how balances are categorized, and how addresses are encoded and validated.

"},{"location":"polkadot-protocol/basics/accounts/#account-data-structure","title":"Account Data Structure","text":"

Accounts are foundational to any blockchain, and the Polkadot SDK provides a flexible management system. This section explains how the Polkadot SDK defines accounts and manages their lifecycle through data structures within the runtime.

"},{"location":"polkadot-protocol/basics/accounts/#account","title":"Account","text":"

The Account data type is a storage map within the System pallet that links an account ID to its corresponding data. This structure is fundamental for mapping account-related information within the chain.

The code snippet below shows how accounts are defined:

/// The full account information for a particular account ID
#[pallet::storage]
#[pallet::getter(fn account)]
pub type Account<T: Config> = StorageMap<
    _,
    Blake2_128Concat,
    T::AccountId,
    AccountInfo<T::Nonce, T::AccountData>,
    ValueQuery,
>;

The preceding code block defines a storage map named Account. The StorageMap is a type of on-chain storage that maps keys to values. In the Account map, the key is an account ID, and the value is the account's information. Here, T represents the generic parameter for the runtime configuration, which is defined by the pallet's configuration trait (Config).

The StorageMap consists of the following parameters:

  • _ - used in macro expansion and acts as a placeholder for the storage prefix type. Tells the macro to insert the default prefix during expansion
  • Blake2_128Concat - the hashing function applied to keys in the storage map
  • T::AccountId - represents the key type, which corresponds to the account's unique ID
  • AccountInfo<T::Nonce, T::AccountData> - the value type stored in the map. For each account ID, the map stores an AccountInfo struct containing:
    • T::Nonce - a nonce for the account, which is incremented with each transaction to ensure transaction uniqueness
    • T::AccountData - custom account data defined by the runtime configuration, which could include balances, locked funds, or other relevant information
  • ValueQuery - defines how queries to the storage map behave when no value is found; returns a default value instead of None
Additional information

For a detailed explanation of storage maps, refer to the StorageMap Rust docs.

"},{"location":"polkadot-protocol/basics/accounts/#account-info","title":"Account Info","text":"

The AccountInfo structure is another key element within the System pallet, providing more granular details about each account's state. This structure tracks vital data, such as the number of transactions and the account's relationships with other modules.

#[derive(Clone, Eq, PartialEq, Default, RuntimeDebug, Encode, Decode)]
pub struct AccountInfo<Nonce, AccountData> {
    pub nonce: Nonce,
    pub consumers: RefCount,
    pub providers: RefCount,
    pub sufficients: RefCount,
    pub data: AccountData,
}

The AccountInfo structure includes the following components:

  • nonce - tracks the number of transactions initiated by the account, which ensures transaction uniqueness and prevents replay attacks
  • consumers - counts how many other modules or pallets rely on this account's existence. The account cannot be removed from the chain (reaped) until this count reaches zero
  • providers - tracks how many modules permit this account's existence. An account can only be reaped once both providers and sufficients are zero
  • sufficients - represents the number of modules that allow the account to exist for internal purposes, independent of any other modules
  • AccountData - a flexible data structure that can be customized in the runtime configuration, usually containing balances or other user-specific data

This structure helps manage an account's state and prevents its premature removal while it is still referenced by other on-chain data or modules. The AccountInfo structure can vary as long as it satisfies the trait bounds defined by the AccountData associated type in the frame-system::pallet::Config trait.

"},{"location":"polkadot-protocol/basics/accounts/#account-reference-counters","title":"Account Reference Counters","text":"

Polkadot SDK uses reference counters to track an account's dependencies across different runtime modules. These counters ensure that accounts remain active while data is associated with them.

The reference counters include:

  • consumers - prevents account removal while other pallets still rely on the account
  • providers - ensures an account is active before other pallets store data related to it
  • sufficients - indicates the account's independence, ensuring it can exist even without a native token balance, such as when holding sufficient alternative assets
"},{"location":"polkadot-protocol/basics/accounts/#providers-reference-counters","title":"Providers Reference Counters","text":"

The providers counter ensures that an account is ready to be depended upon by other runtime modules. For example, it is incremented when an account has a balance above the existential deposit, which marks the account as active.

The system requires this reference counter to be greater than zero for the consumers counter to be incremented, ensuring the account is stable before any dependencies are added.

"},{"location":"polkadot-protocol/basics/accounts/#consumers-reference-counters","title":"Consumers Reference Counters","text":"

The consumers counter ensures that the account cannot be reaped until all references to it across the runtime have been removed. This check prevents the accidental deletion of accounts that still have active on-chain data.

It is the user's responsibility to clear out any data from other runtime modules if they wish to remove their account and reclaim their existential deposit.

"},{"location":"polkadot-protocol/basics/accounts/#sufficients-reference-counter","title":"Sufficients Reference Counter","text":"

The sufficients counter tracks accounts that can exist independently without relying on a native account balance. This is useful for accounts holding other types of assets, like tokens, without needing a minimum balance in the native token.

For instance, the Assets pallet may increment this counter for an account holding sufficient tokens.

"},{"location":"polkadot-protocol/basics/accounts/#account-deactivation","title":"Account Deactivation","text":"

In Polkadot SDK-based chains, an account is deactivated when its reference counters (such as providers, consumers, and sufficients) reach zero. These counters ensure the account remains active as long as other runtime modules or pallets reference it.

When all dependencies are cleared and the counters drop to zero, the account becomes deactivated and may be removed from the chain (reaped). This is particularly important in Polkadot SDK-based blockchains, where accounts with balances below the existential deposit threshold are pruned from storage to conserve state resources.

Each pallet that references an account has cleanup functions that decrement these counters when the pallet no longer depends on the account. Once these counters reach zero, the account is marked for deactivation.

"},{"location":"polkadot-protocol/basics/accounts/#updating-counters","title":"Updating Counters","text":"

The Polkadot SDK provides runtime developers with various methods to manage account lifecycle events, such as deactivation or incrementing reference counters. These methods ensure that accounts cannot be reaped while still in use.

The following helper functions manage these counters:

  • inc_consumers() - increments the consumer reference counter for an account, signaling that another pallet depends on it
  • dec_consumers() - decrements the consumer reference counter, signaling that a pallet no longer relies on the account
  • inc_providers() - increments the provider reference counter, ensuring the account remains active
  • dec_providers() - decrements the provider reference counter, allowing for account deactivation when no longer in use
  • inc_sufficients() - increments the sufficient reference counter for accounts that hold sufficient assets
  • dec_sufficients() - decrements the sufficient reference counter

To ensure proper account cleanup and lifecycle management, a corresponding decrement should be made for each increment action.

The System pallet offers three query functions to assist developers in tracking account states:

  • can_inc_consumer() - checks if the account can safely increment the consumer reference
  • can_dec_provider() - ensures that no consumers exist before allowing the decrement of the provider counter
  • is_provider_required() - verifies whether the account still has any active consumer references

This modular and flexible system of reference counters tightly controls the lifecycle of accounts in Polkadot SDK-based blockchains, preventing the accidental removal or retention of unneeded accounts. You can refer to the System pallet Rust docs for more details.
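
As an illustrative sketch (the helpers below are hypothetical, assuming only the frame-system and frame-support crates), a pallet that stores data for an account might pair these calls like so:

use frame_support::dispatch::DispatchResult;\n\n// Attach pallet data to an account: take a consumer reference first so the\n// account cannot be reaped while the data exists. This fails unless a\n// provider reference already marks the account as active.\nfn attach_data<T: frame_system::Config>(who: &T::AccountId) -> DispatchResult {\n    frame_system::Pallet::<T>::inc_consumers(who)?;\n    // ... write this pallet's storage for `who` here ...\n    Ok(())\n}\n\n// Remove the pallet data and release the reference so the account can be\n// deactivated once all other counters reach zero.\nfn remove_data<T: frame_system::Config>(who: &T::AccountId) {\n    // ... clear this pallet's storage for `who` here ...\n    frame_system::Pallet::<T>::dec_consumers(who);\n}\n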

"},{"location":"polkadot-protocol/basics/accounts/#account-balance-types","title":"Account Balance Types","text":"

In the Polkadot ecosystem, account balances are categorized into different types based on how the funds are utilized and their availability. These balance types determine the actions that can be performed, such as transferring tokens, paying transaction fees, or participating in governance activities. Understanding these balance types helps developers manage user accounts and implement balance-dependent logic.

A more efficient distribution of account balance types is in development

Soon, pallets in the Polkadot SDK will implement the Fungible trait (see the tracking issue for more details). For example, the transaction-storage pallet changed its implementation from the Currency trait (see the Refactor transaction storage pallet to use fungible traits PR for further details):

type BalanceOf<T> = <<T as Config>::Currency as Currency<<T as frame_system::Config>::AccountId>>::Balance;\n

To the Fungible trait:

type BalanceOf<T> = <<T as Config>::Currency as FnInspect<<T as frame_system::Config>::AccountId>>::Balance;\n

This update will enable more efficient use of account balances, allowing the free balance to be utilized for on-chain activities such as setting proxies and managing identities.
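
As a hedged sketch of the direction this takes (the pay helper below is hypothetical, assuming frame-support's fungible traits and the sp-runtime crate), a generic function can work with any currency type implementing Inspect and Mutate:

use frame_support::traits::{\n    fungible::{Inspect, Mutate},\n    tokens::Preservation,\n};\nuse sp_runtime::DispatchError;\n\n// Transfer `amount` between accounts using the fungible traits.\n// `Preservation::Preserve` keeps the source account from being reaped.\nfn pay<F, AccountId>(\n    from: &AccountId,\n    to: &AccountId,\n    amount: F::Balance,\n) -> Result<(), DispatchError>\nwhere\n    F: Inspect<AccountId> + Mutate<AccountId>,\n{\n    F::transfer(from, to, amount, Preservation::Preserve)?;\n    Ok(())\n}\n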

"},{"location":"polkadot-protocol/basics/accounts/#balance-types","title":"Balance Types","text":"

The five main balance types are:

  • Free balance - represents the total tokens available to the account for any on-chain activity, including staking, governance, and voting. However, it may not be fully spendable or transferrable if portions of it are locked or reserved
  • Locked balance - portions of the free balance that cannot be spent or transferred because they are tied up in specific activities like staking, vesting, or participating in governance. While the tokens remain part of the free balance, they are non-transferable for the duration of the lock
  • Reserved balance - funds locked by specific system actions, such as setting up an identity, creating proxies, or submitting deposits for governance proposals. These tokens are not part of the free balance and cannot be spent unless they are unreserved
  • Spendable balance - the portion of the free balance that is available for immediate spending or transfers. It is calculated by subtracting from the free balance the greater of (locked minus reserved) and the existential deposit, ensuring that existential deposit limits are met
  • Untouchable balance - funds that cannot be directly spent or transferred but may still be utilized for on-chain activities, such as governance participation or staking. These tokens are typically tied to certain actions or locked for a specific period

The spendable balance is calculated as follows:

spendable = free - max(locked - reserved, ED)\n

Here, free, locked, and reserved are defined above. The ED represents the existential deposit, the minimum balance required to keep an account active and prevent it from being reaped. You may find you can't see all balance types when looking at your account via a wallet. Wallet providers often display only spendable, locked, and reserved balances.
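
A minimal, self-contained sketch with illustrative numbers (not a runtime API) shows how the formula plays out:

// spendable = free - max(locked - reserved, ED)\nfn spendable(free: u128, locked: u128, reserved: u128, ed: u128) -> u128 {\n    free.saturating_sub(locked.saturating_sub(reserved).max(ed))\n}\n\nfn main() {\n    // free = 100, locked = 80, reserved = 20, ED = 1:\n    // spendable = 100 - max(80 - 20, 1) = 100 - 60 = 40\n    assert_eq!(spendable(100, 80, 20, 1), 40);\n}\n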

"},{"location":"polkadot-protocol/basics/accounts/#locks","title":"Locks","text":"

Locks are applied to an account's free balance, preventing that portion from being spent or transferred. Locks are automatically placed when an account participates in specific on-chain activities, such as staking or governance. Although multiple locks may be applied simultaneously, they do not stack. Instead, the largest lock determines the total amount of locked tokens.

Locks follow these basic rules:

  • If different locks apply to varying amounts, the largest lock amount takes precedence
  • If multiple locks apply to the same amount, the lock with the longest duration governs when the balance can be unlocked
"},{"location":"polkadot-protocol/basics/accounts/#locks-example","title":"Locks Example","text":"

Consider an example where an account has 80 DOT locked for both staking and governance purposes like so:

  • 80 DOT is staked with a 28-day lock period
  • 24 DOT is locked for governance with a 1x conviction and a 7-day lock period
  • 4 DOT is locked for governance with a 6x conviction and a 224-day lock period

In this case, the total locked amount is 80 DOT because only the largest lock (80 DOT from staking) governs the locked balance. These 80 DOT will be released at different times based on the lock durations. In this example, the 24 DOT locked for governance will be released first since the shortest lock period is seven days. The 80 DOT stake with a 28-day lock period is released next. Now, all that remains locked is the 4 DOT for governance. After 224 days, all 80 DOT (minus the existential deposit) will be free and transferrable.

"},{"location":"polkadot-protocol/basics/accounts/#edge-cases-for-locks","title":"Edge Cases for Locks","text":"

In scenarios where multiple convictions and lock periods are active, the lock duration and amount are determined by the longest period and largest amount. For example, if you delegate with different convictions and attempt to undelegate during an active lock period, the lock may be extended for the full amount of tokens. For a detailed discussion on edge case lock behavior, see this Stack Exchange post.

"},{"location":"polkadot-protocol/basics/accounts/#balance-types-on-polkadotjs","title":"Balance Types on Polkadot.js","text":"

Polkadot.js provides a user-friendly interface for managing and visualizing various account balances on Polkadot and Kusama networks. When interacting with Polkadot.js, you will encounter multiple balance types that are critical for understanding how your funds are distributed and restricted. This section explains how different balances are displayed in the Polkadot.js UI and what each type represents.

The most common balance types displayed on Polkadot.js are:

  • Total balance - the total number of tokens available in the account. This includes all tokens, whether they are transferable, locked, reserved, or vested. However, the total balance does not always reflect what can be spent immediately. In this example, the total balance is 0.6274 KSM

  • Transferrable balance - shows how many tokens are immediately available for transfer. It is calculated by subtracting the locked and reserved balances from the total balance. For example, if an account has a total balance of 0.6274 KSM and a transferrable balance of 0.0106 KSM, only the latter amount can be sent or spent freely

  • Vested balance - tokens that are allocated to the account but released according to a specific schedule. Vested tokens remain locked and cannot be transferred until fully vested. For example, an account with a vested balance of 0.2500 KSM means that this amount is owned but not yet transferable

  • Locked balance - tokens that are temporarily restricted from being transferred or spent. These locks typically result from participating in staking, governance, or vested transfers. In Polkadot.js, locked balances do not stack\u2014only the largest lock is applied. For instance, if an account has 0.5500 KSM locked for governance and staking, the locked balance would display 0.5500 KSM, not the sum of all locked amounts

  • Reserved balance - refers to tokens locked for specific on-chain actions, such as setting an identity, creating a proxy, or making governance deposits. Reserved tokens are not part of the free balance, but can be freed by performing certain actions. For example, removing an identity would unreserve those funds

  • Bonded balance - the tokens locked for staking purposes. Bonded tokens are not transferrable until they are unbonded after the unbonding period

  • Redeemable balance - the number of tokens that have completed the unbonding period and are ready to be unlocked and transferred again. For example, if an account has a redeemable balance of 0.1000 KSM, those tokens are now available for spending

  • Democracy balance - reflects the number of tokens locked for governance activities, such as voting on referenda. These tokens are locked for the duration of the governance action and are only released after the lock period ends

By understanding these balance types and their implications, developers and users can better manage their funds and engage with on-chain activities more effectively.

"},{"location":"polkadot-protocol/basics/accounts/#address-formats","title":"Address Formats","text":"

The SS58 address format is a core component of the Polkadot SDK that enables accounts to be uniquely identified across Polkadot-based networks. This format is a modified version of Bitcoin's Base58Check encoding, specifically designed to accommodate the multi-chain nature of the Polkadot ecosystem. SS58 encoding allows each chain to define its own set of addresses while maintaining compatibility and checksum validation for security.

"},{"location":"polkadot-protocol/basics/accounts/#basic-format","title":"Basic Format","text":"

SS58 addresses consist of three main components:

base58encode(concat(<address-type>, <address>, <checksum>))\n
  • Address type - a byte or set of bytes that define the network (or chain) for which the address is intended. This ensures that addresses are unique across different Polkadot SDK-based chains
  • Address - the public key of the account encoded as bytes
  • Checksum - a hash-based checksum which ensures that addresses are valid and unaltered. The checksum is derived from the concatenated address type and address components, ensuring integrity

The encoding process transforms the concatenated components into a Base58 string, providing a compact and human-readable format that avoids easily confused characters (e.g., zero '0', capital 'O', lowercase 'l'). This encoding function (encode) is implemented exactly as defined in Bitcoin and IPFS specifications, using the same alphabet as both implementations.

Additional information

Refer to Ss58Codec for more details on the SS58 address format implementation.
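
As a brief sketch, assuming the sp-core crate that provides the Ss58Codec trait referenced above, decoding an address and re-encoding it for another network might look like this:

use sp_core::crypto::{AccountId32, Ss58AddressFormat, Ss58Codec};\n\nfn main() {\n    // Decode a Polkadot SS58 address back to its 32-byte account identifier.\n    let account = AccountId32::from_ss58check(\n        \"12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU\",\n    )\n    .expect(\"valid SS58 address\");\n\n    // Re-encode the same public key with the Kusama prefix (2).\n    let kusama = account.to_ss58check_with_version(Ss58AddressFormat::custom(2));\n    println!(\"{kusama}\");\n}\n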

"},{"location":"polkadot-protocol/basics/accounts/#address-type","title":"Address Type","text":"

The address type defines how an address is interpreted and to which network it belongs. Polkadot SDK uses different prefixes to distinguish between various chains and address formats:

  • Address types 0-63 - simple addresses, commonly used for network identifiers
  • Address types 64-127 - full addresses that support a wider range of network identifiers
  • Address types 128-255 - reserved for future address format extensions

For example, Polkadot\u2019s main network uses an address type of 0, while Kusama uses 2. This ensures that addresses can be used without confusion between networks.

The address type is always encoded as part of the SS58 address, making it easy to quickly identify the network. Refer to the SS58 registry for the canonical listing of all address type identifiers and how they map to Polkadot SDK-based networks.

"},{"location":"polkadot-protocol/basics/accounts/#address-length","title":"Address Length","text":"

SS58 addresses can have different lengths depending on the specific format. Address lengths range from as short as 3 to 35 bytes, depending on the complexity of the address and network requirements. This flexibility allows SS58 addresses to adapt to different chains while providing a secure encoding mechanism.

| Total (bytes) | Type | Raw account | Checksum |
|---------------|------|-------------|----------|
| 3 | 1 | 1 | 1 |
| 4 | 1 | 2 | 1 |
| 5 | 1 | 2 | 2 |
| 6 | 1 | 4 | 1 |
| 7 | 1 | 4 | 2 |
| 8 | 1 | 4 | 3 |
| 9 | 1 | 4 | 4 |
| 10 | 1 | 8 | 1 |
| 11 | 1 | 8 | 2 |
| 12 | 1 | 8 | 3 |
| 13 | 1 | 8 | 4 |
| 14 | 1 | 8 | 5 |
| 15 | 1 | 8 | 6 |
| 16 | 1 | 8 | 7 |
| 17 | 1 | 8 | 8 |
| 35 | 1 | 32 | 2 |

SS58 addresses also support different payload sizes, allowing a flexible range of account identifiers.

"},{"location":"polkadot-protocol/basics/accounts/#checksum-types","title":"Checksum Types","text":"

A checksum is applied to validate SS58 addresses. Polkadot SDK uses a Blake2b-512 hash function to calculate the checksum, which is appended to the address before encoding. The checksum length can vary depending on the address format (e.g., 1-byte, 2-byte, or longer), providing varying levels of validation strength.

The checksum ensures that an address is not modified or corrupted, adding an extra layer of security for account management.
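
A hedged sketch of this rule, assuming the blake2 crate (the \"SS58PRE\" domain-separation prefix and the 2-byte checksum for 32-byte accounts follow the SS58 specification):

use blake2::{Blake2b512, Digest};\n\n// Compute the 2-byte checksum for a simple (one-byte prefix) SS58 address.\nfn ss58_checksum(address_type: u8, public_key: &[u8; 32]) -> [u8; 2] {\n    let mut hasher = Blake2b512::new();\n    hasher.update(b\"SS58PRE\"); // domain-separation prefix\n    hasher.update([address_type]);\n    hasher.update(public_key);\n    let digest = hasher.finalize();\n    [digest[0], digest[1]] // first two bytes of the Blake2b-512 hash\n}\n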

"},{"location":"polkadot-protocol/basics/accounts/#validating-addresses","title":"Validating Addresses","text":"

SS58 addresses can be validated using the subkey command-line interface or the Polkadot.js API. These tools help ensure an address is correctly formatted and valid for the intended network. The following sections will provide an overview of how validation works with these tools.

"},{"location":"polkadot-protocol/basics/accounts/#using-subkey","title":"Using Subkey","text":"

Subkey is a CLI tool provided by Polkadot SDK for generating and managing keys. It can inspect and validate SS58 addresses.

The inspect command gets a public key and an SS58 address from the provided secret URI. The basic syntax for the subkey inspect command is:

subkey inspect [flags] [options] uri\n

For the uri command-line argument, you can specify the secret seed phrase, a hex-encoded private key, or an SS58 address. If the input is a valid address, the subkey program displays the corresponding hex-encoded public key, account identifier, and SS58 addresses.

For example, to inspect the public keys derived from a secret seed phrase, you can run a command similar to the following:

subkey inspect \"caution juice atom organ advance problem want pledge someone senior holiday very\"\n

The command displays output similar to the following:

subkey inspect \"caution juice atom organ advance problem want pledge someone senior holiday very\"\n\nSecret phrase caution juice atom organ advance problem want pledge someone senior holiday very is account:\n  Secret seed:       0xc8fa03532fb22ee1f7f6908b9c02b4e72483f0dbd66e4cd456b8f34c6230b849\n  Public key (hex):  0xd6a3105d6768e956e9e5d41050ac29843f98561410d3a47f9dd5b3b227ab8746\n  Public key (SS58): 5Gv8YYFu8H1btvmrJy9FjjAWfb99wrhV3uhPFoNEr918utyR\n  Account ID:        0xd6a3105d6768e956e9e5d41050ac29843f98561410d3a47f9dd5b3b227ab8746\n  SS58 Address:      5Gv8YYFu8H1btvmrJy9FjjAWfb99wrhV3uhPFoNEr918utyR\n

The subkey program assumes an address is based on a public/private key pair. If you inspect an address, the command returns the 32-byte account identifier.

However, not all addresses in Polkadot SDK-based networks are based on keys.

Depending on the command-line options you specify and the input you provide, the command output might also display the network for which the address has been encoded. For example:

subkey inspect \"12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU\"\n

The command displays output similar to the following:

subkey inspect \"12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU\"\n\nPublic Key URI 12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU is account:\n  Network ID/Version: polkadot\n  Public key (hex):   0x46ebddef8cd9bb167dc30878d7113b7e168e6f0646beffd77d69d39bad76b47a\n  Account ID:         0x46ebddef8cd9bb167dc30878d7113b7e168e6f0646beffd77d69d39bad76b47a\n  Public key (SS58):  12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU\n  SS58 Address:       12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU\n

"},{"location":"polkadot-protocol/basics/accounts/#using-polkadotjs-api","title":"Using Polkadot.js API","text":"

To verify an address in JavaScript or TypeScript projects, you can use the functions built into the Polkadot.js API. For example:

// Import Polkadot.js API dependencies\nconst { decodeAddress, encodeAddress } = require('@polkadot/keyring');\nconst { hexToU8a, isHex } = require('@polkadot/util');\n\n// Specify an address to test.\nconst address = 'INSERT_ADDRESS_TO_TEST';\n\n// Check address\nconst isValidSubstrateAddress = () => {\n  try {\n    encodeAddress(isHex(address) ? hexToU8a(address) : decodeAddress(address));\n\n    return true;\n  } catch (error) {\n    return false;\n  }\n};\n\n// Query result\nconst isValid = isValidSubstrateAddress();\nconsole.log(isValid);\n

If the function returns true, the specified address is a valid address.

"},{"location":"polkadot-protocol/basics/accounts/#other-ss58-implementations","title":"Other SS58 Implementations","text":"

Support for encoding and decoding Polkadot SDK SS58 addresses has been implemented in several other languages and libraries.

  • Crystal - wyhaines/base58.cr
  • Go - itering/subscan-plugin
  • Python - polkascan/py-scale-codec
  • TypeScript - subsquid/squid-sdk
"},{"location":"polkadot-protocol/basics/chain-data/","title":"Chain Data","text":""},{"location":"polkadot-protocol/basics/chain-data/#introduction","title":"Introduction","text":"

Understanding and leveraging on-chain data is a fundamental aspect of blockchain development. Whether you're building frontend applications or backend systems, accessing and decoding runtime metadata is vital to interacting with the blockchain. This guide introduces you to the tools and processes for generating and retrieving metadata, explains its role in application development, and outlines the additional APIs available for interacting with a Polkadot node. By mastering these components, you can ensure seamless communication between your applications and the blockchain.

"},{"location":"polkadot-protocol/basics/chain-data/#application-development","title":"Application Development","text":"

You might not be directly involved in building frontend applications as a blockchain developer. However, most applications that run on a blockchain require some form of frontend or user-facing client to enable users or other programs to access and modify the data that the blockchain stores. For example, you might develop a browser-based, mobile, or desktop application that allows users to submit transactions, post articles, view their assets, or track previous activity. The backend for that application is configured in the runtime logic for your blockchain, but the frontend client makes the runtime features accessible to your users.

For your custom chain to be useful to others, you'll need to provide a client application that allows users to view, interact with, or update information that the blockchain keeps track of. In this article, you'll learn how to expose information about your runtime so that client applications can use it, see examples of the information exposed, and explore tools and libraries that use it.

"},{"location":"polkadot-protocol/basics/chain-data/#understand-metadata","title":"Understand Metadata","text":"

Polkadot SDK-based blockchain networks are designed to expose their runtime information, allowing developers to learn granular details regarding pallets, RPC calls, and runtime APIs. The metadata also exposes their related documentation. The chain's metadata is SCALE-encoded, allowing for the development of browser-based, mobile, or desktop applications to support the chain's runtime upgrades seamlessly. It is also possible to develop applications compatible with multiple Polkadot SDK-based chains simultaneously.

"},{"location":"polkadot-protocol/basics/chain-data/#expose-runtime-information-as-metadata","title":"Expose Runtime Information as Metadata","text":"

To interact with a node or the state of the blockchain, you need to know how to connect to the chain and access the exposed runtime features. This interaction involves a Remote Procedure Call (RPC) through a node endpoint address, commonly over a secure WebSocket connection.

An application developer typically needs to know the contents of the runtime logic, including the following details:

  • Version of the runtime the application is connecting to
  • Supported APIs
  • Implemented pallets
  • Defined functions and corresponding type signatures
  • Defined custom types
  • Exposed parameters users can set

As the Polkadot SDK is modular and provides a composable framework for building blockchains, there are limitless opportunities to customize the schema of properties. Each runtime can be configured with its properties, including function calls and types, which can be changed over time with runtime upgrades.

The Polkadot SDK enables you to generate the runtime metadata schema to capture information unique to a runtime. The metadata for a runtime describes the pallets in use and types defined for a specific runtime version. The metadata includes information about each pallet's storage items, functions, events, errors, and constants. The metadata also provides type definitions for any custom types included in the runtime.

Metadata provides a complete inventory of a chain's runtime. It is key to enabling client applications to interact with the node, parse responses, and correctly format message payloads sent back to that chain.

"},{"location":"polkadot-protocol/basics/chain-data/#generate-metadata","title":"Generate Metadata","text":"

To efficiently use the blockchain's networking resources and minimize the data transmitted over the network, the metadata schema is encoded using the Parity SCALE Codec. This encoding is done automatically through the scale-info crate.

At a high level, generating the metadata involves the following steps:

  1. The pallets in the runtime logic expose callable functions, types, parameters, and documentation that need to be encoded in the metadata
  2. The scale-info crate collects type information for the pallets in the runtime, builds a registry of the pallets that exist in a particular runtime, and the relevant types for each pallet in the registry. The type information is detailed enough to enable encoding and decoding for every type
  3. The frame-metadata crate describes the structure of the runtime based on the registry provided by the scale-info crate
  4. Nodes provide the RPC method state_getMetadata to return a complete description of all the types in the current runtime as a hex-encoded vector of SCALE-encoded bytes
"},{"location":"polkadot-protocol/basics/chain-data/#retrieve-runtime-metadata","title":"Retrieve Runtime Metadata","text":"

The type information provided by the metadata enables applications to communicate with nodes using different runtime versions and across chains that expose different calls, events, types, and storage items. The metadata also allows libraries to generate a substantial portion of the code needed to communicate with a given node, enabling libraries like subxt to generate frontend interfaces that are specific to a target chain.

"},{"location":"polkadot-protocol/basics/chain-data/#use-polkadotjs","title":"Use Polkadot.js","text":"

Visit the Polkadot.js Portal and select the Developer dropdown in the top banner. Select RPC Calls to make the call to request metadata. Follow these steps to make the RPC call:

  1. Select state as the endpoint to call
  2. Select getMetadata(at) as the method to call
  3. Click Submit RPC call to submit the call and return the metadata in JSON format
"},{"location":"polkadot-protocol/basics/chain-data/#use-curl","title":"Use Curl","text":"

You can fetch the metadata for the network by calling the node's RPC endpoint. This request returns the metadata in bytes rather than human-readable JSON:

curl -H \"Content-Type: application/json\" \\\n-d '{\"id\":1, \"jsonrpc\":\"2.0\", \"method\": \"state_getMetadata\"}' \\\nhttps://rpc.polkadot.io\n
"},{"location":"polkadot-protocol/basics/chain-data/#use-subxt","title":"Use Subxt","text":"

subxt may also be used to fetch a chain's metadata in human-readable JSON format:

subxt metadata  --url wss://rpc.polkadot.io --format json > spec.json\n

Another option is to use the subxt explorer web UI.
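
For programmatic access from Rust, a hedged sketch assuming the subxt and tokio crates (the printed pallet count is purely illustrative) might look like:

use subxt::{OnlineClient, PolkadotConfig};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    // Connect to a public Polkadot RPC node and fetch the current metadata.\n    let api = OnlineClient::<PolkadotConfig>::from_url(\"wss://rpc.polkadot.io\").await?;\n    let metadata = api.metadata();\n    println!(\"pallets in runtime: {}\", metadata.pallets().count());\n    Ok(())\n}\n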

"},{"location":"polkadot-protocol/basics/chain-data/#client-applications-and-metadata","title":"Client Applications and Metadata","text":"

The metadata exposes the expected way to decode each type, meaning applications can send, retrieve, and process application information without manual encoding and decoding. Client applications must use the SCALE codec library to encode and decode RPC payloads to use the metadata. Client applications use the metadata to interact with the node, parse responses, and format message payloads sent to the node.

"},{"location":"polkadot-protocol/basics/chain-data/#metadata-format","title":"Metadata Format","text":"

Although the SCALE-encoded bytes can be decoded using the frame-metadata and parity-scale-codec libraries, there are other tools, such as subxt and the Polkadot-JS API, that can convert the raw data to human-readable JSON format.

The types and type definitions included in the metadata returned by the state_getMetadata RPC call depend on the runtime's metadata version.

In general, the metadata includes the following information:

  • A constant identifying the file as containing metadata
  • The version of the metadata format used in the runtime
  • Type definitions for all types used in the runtime and generated by the scale-info crate
  • Pallet information for the pallets included in the runtime in the order that they are defined in the construct_runtime macro

Metadata formats may vary

Depending on the frontend library used (such as the Polkadot API), the metadata may be formatted differently from the raw format shown.

The following example illustrates a condensed and annotated section of metadata decoded and converted to JSON:

[\n    1635018093,\n    {\n        \"V14\": {\n            \"types\": {\n                \"types\": [{}]\n            },\n            \"pallets\": [{}],\n            \"extrinsic\": {\n                \"ty\": 126,\n                \"version\": 4,\n                \"signed_extensions\": [{}]\n            },\n            \"ty\": 141\n        }\n    }\n]\n

The constant 1635018093 is a magic number that identifies the file as a metadata file. The rest of the metadata is divided into the types, pallets, and extrinsic sections:

  • The types section contains an index of the types and information about each type's type signature
  • The pallets section contains information about each pallet in the runtime
  • The extrinsic section describes the type identifier and transaction format version that the runtime uses

Different extrinsic versions can have varying formats, especially when considering signed transactions.

"},{"location":"polkadot-protocol/basics/chain-data/#pallets","title":"Pallets","text":"

The following is a condensed and annotated example of metadata for a single element in the pallets array (the sudo pallet):

{\n    \"name\": \"Sudo\",\n    \"storage\": {\n        \"prefix\": \"Sudo\",\n        \"entries\": [\n            {\n                \"name\": \"Key\",\n                \"modifier\": \"Optional\",\n                \"ty\": {\n                    \"Plain\": 0\n                },\n                \"default\": [0],\n                \"docs\": [\"The `AccountId` of the sudo key.\"]\n            }\n        ]\n    },\n    \"calls\": {\n        \"ty\": 117\n    },\n    \"event\": {\n        \"ty\": 42\n    },\n    \"constants\": [],\n    \"error\": {\n        \"ty\": 124\n    },\n    \"index\": 8\n}\n

Each element in the pallets array contains the name of the pallet it represents and information about its storage, calls, events, and errors. You can look up details about the definition of the calls, events, and errors by viewing the type index identifier. The type index identifier is the u32 integer used to access the type information for that item. For example, the type index identifier for calls in the Sudo pallet is 117. If you view information for that type identifier in the types section of the metadata, it provides information about the available calls, including the documentation for each call.

For example, the following is a condensed excerpt of the calls for the Sudo pallet:

{\n    \"id\": 117,\n    \"type\": {\n        \"path\": [\"pallet_sudo\", \"pallet\", \"Call\"],\n        \"params\": [\n            {\n                \"name\": \"T\",\n                \"type\": null\n            }\n        ],\n        \"def\": {\n            \"variant\": {\n                \"variants\": [\n                    {\n                        \"name\": \"sudo\",\n                        \"fields\": [\n                            {\n                                \"name\": \"call\",\n                                \"type\": 114,\n                                \"typeName\": \"Box<<T as Config>::RuntimeCall>\"\n                            }\n                        ],\n                        \"index\": 0,\n                        \"docs\": [\n                            \"Authenticates sudo key, dispatches a function call with `Root` origin\"\n                        ]\n                    },\n                    {\n                        \"name\": \"sudo_unchecked_weight\",\n                        \"fields\": [\n                            {\n                                \"name\": \"call\",\n                                \"type\": 114,\n                                \"typeName\": \"Box<<T as Config>::RuntimeCall>\"\n                            },\n                            {\n                                \"name\": \"weight\",\n                                \"type\": 8,\n                                \"typeName\": \"Weight\"\n                            }\n                        ],\n                        \"index\": 1,\n                        \"docs\": [\n                            \"Authenticates sudo key, dispatches a function call with `Root` origin\"\n                        ]\n                    },\n                    {\n                        \"name\": \"set_key\",\n                        \"fields\": [\n                            {\n                                \"name\": \"new\",\n                                \"type\": 103,\n                                \"typeName\": \"AccountIdLookupOf<T>\"\n                            }\n                        ],\n                        \"index\": 2,\n                        \"docs\": [\n                            \"Authenticates current sudo key, sets the given AccountId (`new`) as the new sudo\"\n                        ]\n                    },\n                    {\n                        \"name\": \"sudo_as\",\n                        \"fields\": [\n                            {\n                                \"name\": \"who\",\n                                \"type\": 103,\n                                \"typeName\": \"AccountIdLookupOf<T>\"\n                            },\n                            {\n                                \"name\": \"call\",\n                                \"type\": 114,\n                                \"typeName\": \"Box<<T as Config>::RuntimeCall>\"\n                            }\n                        ],\n                        \"index\": 3,\n                        \"docs\": [\n                            \"Authenticates sudo key, dispatches a function call with `Signed` origin from a given account\"\n                        ]\n                    }\n                ]\n            }\n        }\n    }\n}\n

For each field, you can access type information and metadata for the following:

  • Storage metadata - provides the information required to enable applications to get information for specific storage items
  • Call metadata - includes information about the runtime calls defined by the #[pallet] macro, including call names, arguments, and documentation
  • Event metadata - provides the metadata generated by the #[pallet::event] macro, including the name, arguments, and documentation for each pallet event
  • Constants metadata - provides metadata generated by the #[pallet::constant] macro, including the name, type, and hex-encoded value of the constant
  • Error metadata - provides metadata generated by the #[pallet::error] macro, including the name and documentation for each pallet error

Note

Type identifiers change from time to time, so you should avoid relying on specific type identifiers in your applications.

"},{"location":"polkadot-protocol/basics/chain-data/#extrinsic","title":"Extrinsic","text":"

The runtime generates extrinsic metadata, which provides useful information about the transaction format. When decoded, the metadata contains the transaction version and the list of signed extensions.

For example:

{\n    \"extrinsic\": {\n        \"ty\": 126,\n        \"version\": 4,\n        \"signed_extensions\": [\n            {\n                \"identifier\": \"CheckNonZeroSender\",\n                \"ty\": 132,\n                \"additional_signed\": 41\n            },\n            {\n                \"identifier\": \"CheckSpecVersion\",\n                \"ty\": 133,\n                \"additional_signed\": 4\n            },\n            {\n                \"identifier\": \"CheckTxVersion\",\n                \"ty\": 134,\n                \"additional_signed\": 4\n            },\n            {\n                \"identifier\": \"CheckGenesis\",\n                \"ty\": 135,\n                \"additional_signed\": 11\n            },\n            {\n                \"identifier\": \"CheckMortality\",\n                \"ty\": 136,\n                \"additional_signed\": 11\n            },\n            {\n                \"identifier\": \"CheckNonce\",\n                \"ty\": 138,\n                \"additional_signed\": 41\n            },\n            {\n                \"identifier\": \"CheckWeight\",\n                \"ty\": 139,\n                \"additional_signed\": 41\n            },\n            {\n                \"identifier\": \"ChargeTransactionPayment\",\n                \"ty\": 140,\n                \"additional_signed\": 41\n            }\n        ]\n    },\n    \"ty\": 141\n}\n

The type system is composite, meaning each type identifier contains a reference to a specific type or to another type identifier that provides information about the associated primitive types.

For example, you can encode the BitVec<Order, Store> type, but to decode it properly, you must know the types used for the Order and Store types. To find type information for Order and Store, you can use the path in the decoded JSON to locate their type identifiers.

"},{"location":"polkadot-protocol/basics/chain-data/#included-rpc-apis","title":"Included RPC APIs","text":"

A standard node comes with the following APIs to interact with a node:

  • AuthorApiServer - make calls into a full node, including authoring extrinsics and verifying session keys
  • ChainApiServer - retrieve block header and finality information
  • OffchainApiServer - make RPC calls for off-chain workers
  • StateApiServer - query information about on-chain state such as runtime version, storage items, and proofs
  • SystemApiServer - retrieve information about network state, such as connected peers and node roles
"},{"location":"polkadot-protocol/basics/chain-data/#additional-resources","title":"Additional Resources","text":"

The following tools can help you locate and decode metadata:

  • Subxt Explorer
  • Metadata Portal \ud83c\udf17
  • De[code] Sub[strate]
"},{"location":"polkadot-protocol/basics/cryptography/","title":"Cryptography","text":""},{"location":"polkadot-protocol/basics/cryptography/#introduction","title":"Introduction","text":"

Cryptography forms the backbone of blockchain technology, providing the mathematical verifiability crucial for consensus systems, data integrity, and user security. While a deep understanding of the underlying mathematical processes isn't necessary for most blockchain developers, grasping the fundamental applications of cryptography is essential. This page provides a comprehensive overview of the cryptographic implementations used across Polkadot SDK-based chains and the broader blockchain ecosystem.

"},{"location":"polkadot-protocol/basics/cryptography/#hash-functions","title":"Hash Functions","text":"

Hash functions are fundamental to blockchain technology, creating a unique digital fingerprint for any piece of data, including simple text, images, or any other form of file. They map input data of any size to a fixed-size output (typically 32 bytes) using complex mathematical operations. Hashing is used to verify data integrity, create digital signatures, and provide a secure way to store passwords. Because inputs of arbitrary size map to a fixed-size output, collisions are unavoidable in principle (the \"pigeonhole principle\"); in practice, hashing is primarily used to efficiently and verifiably identify data from large sets.

"},{"location":"polkadot-protocol/basics/cryptography/#key-properties-of-hash-functions","title":"Key Properties of Hash Functions","text":"
  • Deterministic - the same input always produces the same output
  • Quick computation - it's easy to calculate the hash value for any given input
  • Pre-image resistance - it's infeasible to generate the input data from its hash
  • Small changes in input yield large changes in output - known as the \"avalanche effect\"
  • Collision resistance - it is computationally infeasible to find two different inputs that produce the same hash
"},{"location":"polkadot-protocol/basics/cryptography/#blake2","title":"Blake2","text":"

The Polkadot SDK utilizes Blake2, a state-of-the-art hashing method that offers:

  • Equal or greater security compared to SHA-2
  • Significantly faster performance than other algorithms

These properties make Blake2 ideal for blockchain systems, reducing sync times for new nodes and lowering the resources required for validation.

Note

For detailed technical specifications on Blake2, refer to the official Blake2 paper.
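
A quick sketch of these properties, assuming the sp-core crate's hashing helpers:

use sp_core::hashing::blake2_256;\n\nfn main() {\n    let a = blake2_256(b\"hello\");\n    let b = blake2_256(b\"hello!\");\n    assert_eq!(a.len(), 32); // fixed-size (32-byte) output\n    assert_ne!(a, b); // avalanche effect: a small input change alters the digest\n    assert_eq!(a, blake2_256(b\"hello\")); // deterministic for the same input\n}\n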

"},{"location":"polkadot-protocol/basics/cryptography/#types-of-cryptography","title":"Types of Cryptography","text":"

There are two different ways that cryptographic algorithms are implemented: symmetric cryptography and asymmetric cryptography.

"},{"location":"polkadot-protocol/basics/cryptography/#symmetric-cryptography","title":"Symmetric Cryptography","text":"

Symmetric encryption is a branch of cryptography that isn't based on one-way functions, unlike asymmetric cryptography. It uses the same cryptographic key to encrypt plain text and decrypt the resulting ciphertext.

Symmetric cryptography is a type of encryption that has been used throughout history, such as the Enigma Cipher and the Caesar Cipher. It is still widely used today and can be found in Web2 and Web3 applications alike. There is only one single key, and a recipient must also have access to it to access the contained information.

"},{"location":"polkadot-protocol/basics/cryptography/#symmetric-advantages","title":"Advantages","text":"
  • Fast and efficient for large amounts of data
  • Requires less computational power
"},{"location":"polkadot-protocol/basics/cryptography/#symmetric-disadvantages","title":"Disadvantages","text":"
  • Key distribution can be challenging
  • Scalability issues in systems with many users
"},{"location":"polkadot-protocol/basics/cryptography/#asymmetric-cryptography","title":"Asymmetric Cryptography","text":"

Asymmetric encryption is a type of cryptography that uses two different keys, known as a keypair: a public key, used to encrypt plain text, and a private counterpart, used to decrypt the ciphertext.

The public key encrypts a fixed-length message that can only be decrypted with the recipient's private key and, sometimes, a set password. The public key can be used to cryptographically verify that the corresponding private key was used to create a piece of data without compromising the private key, such as with digital signatures. This has obvious implications for identity, ownership, and properties and is used in many different protocols across Web2 and Web3.

"},{"location":"polkadot-protocol/basics/cryptography/#asymmetric-advantages","title":"Advantages","text":"
  • Solves the key distribution problem
  • Enables digital signatures and secure key exchange
"},{"location":"polkadot-protocol/basics/cryptography/#asymmetric-disadvantages","title":"Disadvantages","text":"
  • Slower than symmetric encryption
  • Requires more computational resources
"},{"location":"polkadot-protocol/basics/cryptography/#trade-offs-and-compromises","title":"Trade-offs and Compromises","text":"

Symmetric cryptography is faster and requires fewer bits in the key to achieve the same level of security that asymmetric cryptography provides. However, it requires a shared secret before communication can occur, which poses issues to its integrity and a potential compromise point. On the other hand, asymmetric cryptography doesn't require the secret to be shared ahead of time, allowing for far better end-user security.

Hybrid schemes that combine symmetric and asymmetric cryptography are often used to overcome the engineering drawbacks of asymmetric cryptography, which is slower and requires more bits in the key to achieve the same level of security. In such schemes, asymmetric encryption is used to exchange a key, and the comparatively lightweight symmetric cipher then does the \"heavy lifting\" with the message.

"},{"location":"polkadot-protocol/basics/cryptography/#digital-signatures","title":"Digital Signatures","text":"

Digital signatures are a way of verifying the authenticity of a document or message using asymmetric keypairs. They are used to ensure that a sender or signer's document or message hasn't been tampered with in transit, and for recipients to verify that the data is accurate and from the expected sender.

Creating digital signatures requires only a basic understanding of the underlying mathematics and cryptography. For a conceptual example -- when signing a check, it is expected that it cannot be cashed multiple times. This isn't a feature of the signature system but rather the check serialization system. The bank will check that the serial number on the check hasn't already been used. Digital signatures essentially combine these two concepts, allowing the signature to provide the serialization via a unique cryptographic fingerprint that cannot be reproduced.

Unlike pen-and-paper signatures, knowledge of a digital signature cannot be used to create other signatures. Digital signatures are often used in bureaucratic processes, as they are more secure than simply scanning in a signature and pasting it onto a document.

Polkadot SDK provides multiple cryptographic schemes and is generic, so it can support anything that implements the Pair trait.

"},{"location":"polkadot-protocol/basics/cryptography/#example-of-creating-a-digital-signature","title":"Example of Creating a Digital Signature","text":"

The process of creating and verifying a digital signature involves several steps:

  1. The sender creates a hash of the message
  2. The hash is encrypted using the sender's private key, creating the signature
  3. The message and signature are sent to the recipient
  4. The recipient decrypts the signature using the sender's public key
  5. The recipient hashes the received message and compares it to the decrypted hash

If the hashes match, the signature is valid, confirming the message's integrity and the sender's identity.
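
This flow maps onto the Pair trait mentioned above. A minimal sketch, assuming the sp-core crate and its sr25519 scheme:

use sp_core::{sr25519, Pair};\n\nfn main() {\n    // Generate a random keypair (the sender).\n    let (pair, _seed) = sr25519::Pair::generate();\n    let message = b\"important payload\";\n\n    // Sign with the private key...\n    let signature = pair.sign(message);\n\n    // ...and verify with the corresponding public key.\n    assert!(sr25519::Pair::verify(&signature, message, &pair.public()));\n}\n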

"},{"location":"polkadot-protocol/basics/cryptography/#elliptic-curve","title":"Elliptic Curve","text":"

Blockchain technology requires the ability to have multiple keys creating a signature for block proposal and validation. To this end, Elliptic Curve Digital Signature Algorithm (ECDSA) and Schnorr signatures are two of the most commonly used methods. While ECDSA is a far simpler implementation, Schnorr signatures are more efficient when it comes to multi-signatures.

Schnorr signatures bring some noticeable features over the ECDSA/EdDSA schemes:

  • It is better for hierarchical deterministic key derivations
  • It allows for native multi-signature through signature aggregation
  • It is generally more resistant to misuse

One sacrifice made when using Schnorr signatures over ECDSA is that while both require 64 bytes, only ECDSA signatures allow the signer's public key to be recovered from the signature itself.

"},{"location":"polkadot-protocol/basics/cryptography/#various-implementations","title":"Various Implementations","text":"
  • ECDSA - Polkadot SDK provides an ECDSA signature scheme using the secp256k1 curve. This is the same cryptographic algorithm used to secure Bitcoin and Ethereum

  • Ed25519 - is an EdDSA signature scheme using Curve25519. It is carefully engineered at several levels of design and implementation to achieve very high speeds without compromising security

  • SR25519 - is based on the same underlying curve as Ed25519. However, it uses Schnorr signatures instead of the EdDSA scheme

"},{"location":"polkadot-protocol/basics/data-encoding/","title":"Data Encoding","text":""},{"location":"polkadot-protocol/basics/data-encoding/#introduction","title":"Introduction","text":"

The Polkadot SDK uses a lightweight and efficient encoding/decoding mechanism to optimize data transmission across the network. This mechanism, known as the SCALE codec, is used for serializing and deserializing data.

The SCALE codec enables communication between the runtime and the outer node. This mechanism is designed for high-performance, copy-free data encoding and decoding in resource-constrained environments like the Polkadot SDK Wasm runtime.

It is not self-describing, meaning the decoding context must fully know the encoded data types.

Parity's libraries utilize the parity-scale-codec crate (a Rust implementation of the SCALE codec) to handle encoding and decoding for interactions between RPCs and the runtime.

The codec mechanism is ideal for Polkadot SDK-based chains because:

  • It is lightweight compared to generic serialization frameworks like serde, which add unnecessary bulk to binaries
  • It doesn\u2019t rely on Rust\u2019s libstd, making it compatible with no_std environments like Wasm runtime
  • It integrates seamlessly with Rust, allowing easy derivation of encoding and decoding logic for new types using #[derive(Encode, Decode)]

Defining a custom encoding scheme in the Polkadot SDK-based chains, rather than using an existing Rust codec library, is crucial for enabling cross-platform and multi-language support.

"},{"location":"polkadot-protocol/basics/data-encoding/#scale-codec","title":"SCALE Codec","text":"

The codec is implemented using the following traits:

  • Encode
  • Decode
  • CompactAs
  • HasCompact
  • EncodeLike
"},{"location":"polkadot-protocol/basics/data-encoding/#encode","title":"Encode","text":"

The Encode trait handles data encoding into SCALE format and includes the following key functions:

  • size_hint(&self) -> usize - estimates the number of bytes required for encoding to prevent multiple memory allocations. This should be inexpensive and avoid complex operations. Optional if the size isn\u2019t known
  • encode_to<T: Output>(&self, dest: &mut T) - encodes the data, appending it to a destination buffer
  • encode(&self) -> Vec<u8> - encodes the data and returns it as a byte vector
  • using_encoded<R, F: FnOnce(&[u8]) -> R>(&self, f: F) -> R - encodes the data and passes it to a closure, returning the result
  • encoded_size(&self) -> usize - calculates the encoded size. Should be used when the encoded data isn\u2019t required

Note

For best performance, value types should override using_encoded, and allocating types should override encode_to. It's recommended to implement size_hint for all types where possible.

"},{"location":"polkadot-protocol/basics/data-encoding/#decode","title":"Decode","text":"

The Decode trait handles decoding SCALE-encoded data back into the appropriate types:

  • fn decode<I: Input>(value: &mut I) -> Result<Self, Error> - decodes data from the SCALE format, returning an error if decoding fails
"},{"location":"polkadot-protocol/basics/data-encoding/#compactas","title":"CompactAs","text":"

The CompactAs trait wraps custom types for compact encoding:

  • encode_as(&self) -> &Self::As - encodes the type as a compact type
  • decode_from(_: Self::As) -> Result<Self, Error> - decodes from a compact encoded type
"},{"location":"polkadot-protocol/basics/data-encoding/#hascompact","title":"HasCompact","text":"

The HasCompact trait indicates a type supports compact encoding.

"},{"location":"polkadot-protocol/basics/data-encoding/#encodelike","title":"EncodeLike","text":"

The EncodeLike trait is used to ensure multiple types that encode similarly are accepted by the same function. When using derive, it is automatically implemented.

"},{"location":"polkadot-protocol/basics/data-encoding/#data-types","title":"Data Types","text":"

The table below outlines how the Rust implementation of the Parity SCALE codec encodes different data types.

| Type | Description | Example SCALE Decoded Value | SCALE Encoded Value |
|------|-------------|-----------------------------|---------------------|
| Boolean | Boolean values are encoded using the least significant bit of a single byte. | false / true | 0x00 / 0x01 |
| Compact/general integers | A \"compact\" or general integer encoding is sufficient for encoding large integers (up to 2^536) and is more efficient at encoding most values than the fixed-width version. | unsigned integer 0 / unsigned integer 1 / unsigned integer 42 / unsigned integer 69 / unsigned integer 65535 / BigInt(100000000000000) | 0x00 / 0x04 / 0xa8 / 0x1501 / 0xfeff0300 / 0x0b00407a10f35a |
| Enumerations (tagged-unions) | A fixed number of variants | | |
| Fixed-width integers | Basic integers are encoded using a fixed-width little-endian (LE) format. | signed 8-bit integer 69 / unsigned 16-bit integer 42 / unsigned 32-bit integer 16777215 | 0x45 / 0x2a00 / 0xffffff00 |
| Options | One or zero values of a particular type. | Some / None | 0x01 followed by the encoded value / 0x00 |
| Results | Results are commonly used enumerations which indicate whether certain operations were successful or unsuccessful. | Ok(42) / Err(false) | 0x002a / 0x0100 |
| Strings | Strings are Vectors of bytes (Vec<u8>) containing a valid UTF8 sequence. | | |
| Structs | For structures, the values are named, but that is irrelevant for the encoding (names are ignored - only order matters). | SortedVecAsc::from([3, 5, 2, 8]) | [3, 2, 5, 8] |
| Tuples | A fixed-size series of values, each with a possibly different but predetermined and fixed type. This is simply the concatenation of each encoded value. | Tuple of compact unsigned integer and boolean: (3, false) | 0x0c00 |
| Vectors (lists, series, sets) | A collection of same-typed values is encoded, prefixed with a compact encoding of the number of items, followed by each item's encoding concatenated in turn. | Vector of unsigned 16-bit integers: [4, 8, 15, 16, 23, 42] | 0x18040008000f00100017002a00 |
"},{"location":"polkadot-protocol/basics/data-encoding/#encode-and-decode-rust-trait-implementations","title":"Encode and Decode Rust Trait Implementations","text":"

Here's how the Encode and Decode traits are implemented:

use parity_scale_codec::{Encode, Decode};\n\n#[derive(Debug, PartialEq, Encode, Decode)]\nenum EnumType {\n    #[codec(index = 15)]\n    A,\n    B(u32, u64),\n    C {\n        a: u32,\n        b: u64,\n    },\n}\n\nlet a = EnumType::A;\nlet b = EnumType::B(1, 2);\nlet c = EnumType::C { a: 1, b: 2 };\n\na.using_encoded(|ref slice| {\n    assert_eq!(slice, &b\"\\x0f\");\n});\n\nb.using_encoded(|ref slice| {\n    assert_eq!(slice, &b\"\\x01\\x01\\0\\0\\0\\x02\\0\\0\\0\\0\\0\\0\\0\");\n});\n\nc.using_encoded(|ref slice| {\n    assert_eq!(slice, &b\"\\x02\\x01\\0\\0\\0\\x02\\0\\0\\0\\0\\0\\0\\0\");\n});\n\nlet mut da: &[u8] = b\"\\x0f\";\nassert_eq!(EnumType::decode(&mut da).ok(), Some(a));\n\nlet mut db: &[u8] = b\"\\x01\\x01\\0\\0\\0\\x02\\0\\0\\0\\0\\0\\0\\0\";\nassert_eq!(EnumType::decode(&mut db).ok(), Some(b));\n\nlet mut dc: &[u8] = b\"\\x02\\x01\\0\\0\\0\\x02\\0\\0\\0\\0\\0\\0\\0\";\nassert_eq!(EnumType::decode(&mut dc).ok(), Some(c));\n\nlet mut dz: &[u8] = &[0];\nassert_eq!(EnumType::decode(&mut dz).ok(), None);\n
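
As a complementary sketch, the primitive encodings from the data types table above can be checked directly; the expected byte sequences mirror the table rows:

use parity_scale_codec::{Compact, Encode};\n\nfn main() {\n    // Fixed-width: unsigned 16-bit integer 42, little-endian.\n    assert_eq!(42u16.encode(), vec![0x2a, 0x00]);\n    // Compact: unsigned integer 42 fits in a single byte.\n    assert_eq!(Compact(42u32).encode(), vec![0xa8]);\n    // Option: 0x01 followed by the encoded value.\n    assert_eq!(Some(42u32).encode(), vec![0x01, 0x2a, 0x00, 0x00, 0x00]);\n    // Vector: compact length prefix, then each item's encoding in turn.\n    assert_eq!(\n        vec![4u16, 8, 15, 16, 23, 42].encode(),\n        vec![0x18, 0x04, 0x00, 0x08, 0x00, 0x0f, 0x00, 0x10, 0x00, 0x17, 0x00, 0x2a, 0x00]\n    );\n}\n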
"},{"location":"polkadot-protocol/basics/data-encoding/#scale-codec-libraries","title":"SCALE Codec Libraries","text":"

Several SCALE codec implementations are available in various languages. Here's a list of them:

  • AssemblyScript - LimeChain/as-scale-codec
  • C - MatthewDarnell/cScale
  • C++ - qdrvm/scale-codec-cpp
  • JavaScript - polkadot-js/api
  • Dart - leonardocustodio/polkadart
  • Haskell - airalab/hs-web3
  • Golang - itering/scale.go
  • Java - splix/polkaj
  • Python - polkascan/py-scale-codec
  • Ruby - wuminzhe/scale_rb
  • TypeScript - parity-scale-codec-ts, scale-ts, soramitsu/scale-codec-js-library, subsquid/scale-codec
"},{"location":"polkadot-protocol/basics/interoperability/","title":"Interoperability","text":""},{"location":"polkadot-protocol/basics/interoperability/#introduction","title":"Introduction","text":"

Interoperability lies at the heart of the Polkadot ecosystem, enabling communication and collaboration across a diverse range of blockchains. By bridging the gaps between parachains, relay chains, and even external networks, Polkadot unlocks the potential for truly decentralized applications, efficient resource sharing, and scalable solutions.

Polkadot\u2019s design ensures that blockchains can transcend their individual limitations by working together as part of a unified system. This cooperative architecture is what sets Polkadot apart in the blockchain landscape.

"},{"location":"polkadot-protocol/basics/interoperability/#why-interoperability-matters","title":"Why Interoperability Matters","text":"

The blockchain ecosystem is inherently fragmented. Different blockchains excel in specialized domains such as finance, gaming, or supply chain management, but these chains function in isolation without interoperability. This lack of connectivity stifles the broader utility of blockchain technology.

Interoperability solves this problem by enabling blockchains to:

  • Collaborate across networks - chains can interact to share assets, functionality, and data, creating synergies that amplify their individual strengths
  • Achieve greater scalability - specialized chains can offload tasks to others, optimizing performance and resource utilization
  • Expand use-case potential - cross-chain applications can leverage features from multiple blockchains, unlocking novel user experiences and solutions

In the Polkadot ecosystem, interoperability transforms a collection of isolated chains into a cohesive, efficient network, pushing the boundaries of what blockchains can achieve together.

"},{"location":"polkadot-protocol/basics/interoperability/#key-mechanisms-for-interoperability","title":"Key Mechanisms for Interoperability","text":"

At the core of Polkadot's cross-chain collaboration are foundational technologies designed to break down barriers between networks. These mechanisms empower blockchains to communicate, share resources, and operate as a cohesive ecosystem.

"},{"location":"polkadot-protocol/basics/interoperability/#cross-consensus-messaging-xcm-the-backbone-of-communication","title":"Cross-Consensus Messaging (XCM): The Backbone of Communication","text":"

Polkadot's Cross-Consensus Messaging (XCM) is the standard framework for interaction between parachains, relay chains, and, eventually, external blockchains. XCM provides a trustless, secure messaging format for exchanging assets, sharing data, and executing cross-chain operations.

Through XCM, decentralized applications can:

  • Transfer tokens and other assets across chains
  • Coordinate complex workflows that span multiple blockchains
  • Enable seamless user experiences where underlying blockchain differences are invisible

XCM exemplifies Polkadot's commitment to creating a robust and interoperable ecosystem.

For further information about XCM, check the Introduction to XCM article.

"},{"location":"polkadot-protocol/basics/interoperability/#bridges-connecting-external-networks","title":"Bridges: Connecting External Networks","text":"

While XCM enables interoperability within the Polkadot ecosystem, bridges extend this functionality to external blockchains such as Ethereum and Bitcoin. By connecting these networks, bridges allow Polkadot-based chains to access external liquidity, additional functionalities, and broader user bases.

With bridges, developers and users gain the ability to:

  • Integrate external assets into Polkadot-based applications
  • Combine the strengths of Polkadot's scalability with the liquidity of other networks
  • Facilitate true multi-chain applications that transcend ecosystem boundaries

For more information about bridges in the Polkadot ecosystem, see the Bridge Hub guide.

"},{"location":"polkadot-protocol/basics/interoperability/#the-polkadot-advantage","title":"The Polkadot Advantage","text":"

Polkadot was purpose-built for interoperability. Unlike networks that add interoperability as an afterthought, Polkadot integrates it as a fundamental design principle. This approach offers several distinct advantages:

  • Developer empowerment - Polkadot's interoperability tools allow developers to build applications that leverage multiple chains' capabilities without added complexity
  • Enhanced ecosystem collaboration - chains in Polkadot can focus on their unique strengths while contributing to the ecosystem's overall growth
  • Future-proofing blockchain - by enabling seamless communication, Polkadot ensures its ecosystem can adapt to evolving demands and technologies
"},{"location":"polkadot-protocol/basics/interoperability/#looking-ahead","title":"Looking Ahead","text":"

Polkadot's vision of interoperability extends beyond technical functionality, representing a shift towards a more collaborative blockchain landscape. By enabling chains to work together, Polkadot fosters innovation, efficiency, and accessibility, paving the way for a decentralized future where blockchains are not isolated competitors but interconnected collaborators.

"},{"location":"polkadot-protocol/basics/networks/","title":"Networks","text":""},{"location":"polkadot-protocol/basics/networks/#introduction","title":"Introduction","text":"

The Polkadot ecosystem is built on a robust set of networks designed to enable secure and scalable development. Whether you are testing new features or deploying to live production, Polkadot offers several layers of networks tailored for each stage of the development process. From local environments to experimental networks like Kusama and community-run TestNets such as Paseo, developers can thoroughly test, iterate, and validate their applications. This guide will introduce you to Polkadot's various networks and explain how they fit into the development workflow.

"},{"location":"polkadot-protocol/basics/networks/#network-overview","title":"Network Overview","text":"

Polkadot's development process is structured to ensure new features and upgrades are rigorously tested before being deployed on live production networks. The progression follows a well-defined path, starting from local environments and advancing through TestNets, ultimately reaching the Polkadot MainNet. The diagram below outlines the typical progression of the Polkadot development cycle:

flowchart LR
    id1[Local] --> id2[Westend] --> id4[Kusama] --> id5[Polkadot]
    id1[Local] --> id3[Paseo] --> id5[Polkadot]
This flow ensures developers can thoroughly test and iterate without risking real tokens or affecting production networks. Testing tools like Chopsticks and various TestNets make it easier to experiment safely before releasing to production.

A typical journey through the Polkadot core protocol development process might look like this:

  1. Local development node - development starts in a local environment, where developers can create, test, and iterate on upgrades or new features using a local development node. This stage allows rapid experimentation in an isolated setup without any external dependencies

  2. Westend - after testing locally, upgrades are deployed to Westend, Polkadot's primary TestNet. Westend simulates real-world conditions without using real tokens, making it the ideal place for rigorous feature testing before moving on to production networks

  3. Kusama - once features have passed extensive testing on Westend, they move to Kusama, Polkadot's experimental and fast-moving "canary" network. Kusama operates as a high-fidelity testing ground with actual economic incentives, giving developers insights into how their features will perform in a real-world environment

  4. Polkadot - after passing tests on Westend and Kusama, features are considered ready for deployment to Polkadot, the live production network

In addition, parachain developers can leverage local testing tools like Zombienet and deploy upgrades on parachain TestNets.

  1. Paseo - for parachain and dApp developers, Paseo serves as a community-run TestNet that mirrors Polkadot's runtime. Like Westend for core protocol development, Paseo provides a testing ground for parachain development without affecting live networks

Note

The Rococo TestNet deprecation date was October 14, 2024. Teams should use Westend for Polkadot protocol and feature testing and Paseo for chain development-related testing.

"},{"location":"polkadot-protocol/basics/networks/#polkadot-development-networks","title":"Polkadot Development Networks","text":"

Development and testing are crucial to building robust dApps and parachains and performing network upgrades within the Polkadot ecosystem. To achieve this, developers can leverage various networks and tools that provide a risk-free environment for experimentation and validation before deploying features to live networks. These networks help avoid the costs and risks associated with real tokens, enabling testing for functionalities like governance, cross-chain messaging, and runtime upgrades.

"},{"location":"polkadot-protocol/basics/networks/#kusama-network","title":"Kusama Network","text":"

Kusama is the experimental version of Polkadot, designed for developers who want to move quickly and test their applications in a real-world environment with economic incentives. Kusama serves as a production-grade testing ground where developers can deploy features and upgrades with the pressure of game theory and economics in mind. It mirrors Polkadot but operates as a more flexible space for innovation.

The native token for Kusama is KSM. For more information about KSM, visit the Native Assets page.

"},{"location":"polkadot-protocol/basics/networks/#test-networks","title":"Test Networks","text":"

The following test networks provide controlled environments for testing upgrades and new features. TestNet tokens are available from the Polkadot faucet.

"},{"location":"polkadot-protocol/basics/networks/#westend","title":"Westend","text":"

Westend is Polkadot's primary permanent TestNet. Unlike temporary test networks, Westend is not reset to the genesis block, making it an ongoing environment for testing Polkadot core features. Managed by Parity Technologies, Westend ensures that developers can test features in a real-world simulation without using actual tokens.

The native token for Westend is WND. More details about WND can be found on the Native Assets page.

"},{"location":"polkadot-protocol/basics/networks/#paseo","title":"Paseo","text":"

Paseo is a community-managed TestNet designed for parachain and dApp developers. It mirrors Polkadot's runtime and is maintained by Polkadot community members. Paseo provides a dedicated space for parachain developers to test their applications in a Polkadot-like environment without the risks associated with live networks.

The native token for Paseo is PAS. Additional information on PAS is available on the Native Assets page.

"},{"location":"polkadot-protocol/basics/networks/#local-test-networks","title":"Local Test Networks","text":"

Local test networks are an essential part of the development cycle for blockchain developers using the Polkadot SDK. They allow for fast, iterative testing in controlled, private environments without connecting to public TestNets. Developers can quickly spin up local instances to experiment, debug, and validate their code before deploying to larger TestNets like Westend or Paseo. Two key tools for local network testing are Zombienet and Chopsticks.

"},{"location":"polkadot-protocol/basics/networks/#zombienet","title":"Zombienet","text":"

Zombienet is a flexible testing framework for Polkadot SDK-based blockchains. It enables developers to create and manage ephemeral, short-lived networks. This feature makes Zombienet particularly useful for quick iterations, as it allows you to run multiple local networks concurrently, mimicking different runtime conditions. Whether you're developing a parachain or testing your custom blockchain logic, Zombienet gives you the tools to automate local testing.

Key features of Zombienet include:

  • Creating dynamic, local networks with different configurations
  • Running parachains and relay chains in a simulated environment
  • Efficient testing of network components like cross-chain messaging and governance

Zombienet is ideal for developers looking to test quickly and thoroughly before moving to more resource-intensive public TestNets.

"},{"location":"polkadot-protocol/basics/networks/#chopsticks","title":"Chopsticks","text":"

Chopsticks is a tool designed to create forks of Polkadot SDK-based blockchains, allowing developers to interact with network forks as part of their testing process. This capability makes Chopsticks a powerful option for testing upgrades, runtime changes, or cross-chain applications in a forked network environment.

Key features of Chopsticks include:

  • Forking live Polkadot SDK-based blockchains for isolated testing
  • Simulating cross-chain messages in a private, controlled setup
  • Debugging network behavior by interacting with the fork in real-time

Chopsticks provides a controlled environment for developers to safely explore the effects of runtime changes. It ensures that network behavior is tested and verified before upgrades are deployed to live networks.

"},{"location":"polkadot-protocol/basics/randomness/","title":"Randomness","text":""},{"location":"polkadot-protocol/basics/randomness/#introduction","title":"Introduction","text":"

Randomness is crucial in Proof of Stake (PoS) blockchains to ensure a fair and unpredictable distribution of validator duties. However, computers are inherently deterministic, meaning the same input always produces the same output. What we typically refer to as "random" numbers on a computer are actually pseudo-random. These numbers rely on an initial "seed," which can come from external sources like atmospheric noise, heart rates, or even lava lamps. While this may seem random, given the same "seed," the same sequence of numbers will always be generated.
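
To make the determinism concrete, here is a minimal Rust sketch: a toy linear congruential generator (illustrative only, not a cryptographic PRNG, and not anything Polkadot itself uses) always produces the same "random" sequence from the same seed.

// Toy linear congruential generator: same seed, same sequence.
// Constants are from Knuth's MMIX; any fixed constants illustrate the point.
struct Lcg(u64);

impl Lcg {
    fn next_value(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0
    }
}

fn main() {
    let (mut a, mut b) = (Lcg(42), Lcg(42));
    // Two generators seeded identically stay in lockstep forever.
    for _ in 0..3 {
        assert_eq!(a.next_value(), b.next_value());
    }
    println!("deterministic: identical seeds yield identical sequences");
}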

In a global blockchain network, relying on real-world entropy for randomness isn't feasible because these inputs vary by time and location. If nodes use different inputs, blockchains can fork. Hence, real-world randomness isn't suitable for use as a seed in blockchain systems.

Currently, two primary methods for generating randomness in blockchains are used: RANDAO and VRF (Verifiable Random Function). Polkadot adopts the VRF approach for its randomness.

"},{"location":"polkadot-protocol/basics/randomness/#vrf","title":"VRF","text":"

A Verifiable Random Function (VRF) is a cryptographic function that generates a random number along with a proof that the submitter genuinely produced it. This proof allows anyone to verify the validity of the random number.

Polkadot's VRF is similar to the one used in Ouroboros Praos, which secures randomness for block production in systems like BABE (Polkadot's block production mechanism).

The key difference is that Polkadot's VRF doesn't rely on a central clock, which avoids the question of whose clock to trust. Instead, it uses its own past results and slot numbers to simulate time and determine future outcomes.

"},{"location":"polkadot-protocol/basics/randomness/#how-vrf-works","title":"How VRF Works","text":"

Slots on Polkadot are discrete units of time, each lasting six seconds, and can potentially hold a block. Multiple slots form an epoch, with 2400 slots making up one four-hour epoch.

In each slot, validators execute a "die roll" using a VRF. The VRF uses three inputs:

  1. A "secret key", unique to each validator, is used for the die roll
  2. An epoch randomness value, derived from the hash of VRF outputs from blocks two epochs ago (N-2), so past randomness influences the current epoch (N)
  3. The current slot number

This process helps maintain fair randomness across the network.

The VRF produces two outputs: a result (the random number) and a proof (verifying that the number was generated correctly).

The result is checked by the validator against a protocol threshold. If it's below the threshold, the validator becomes a candidate for block production in that slot.

The validator then attempts to create a block, submitting it along with the PROOF and RESULT.

So, VRF can be expressed like:

(RESULT, PROOF) = VRF(SECRET, EPOCH_RANDOMNESS_VALUE, CURRENT_SLOT_NUMBER)

Put simply, performing a "VRF roll" generates a random number along with proof that the number was genuinely produced and not arbitrarily chosen.

After executing the VRF, the RESULT is compared to a protocol-defined THRESHOLD. If the RESULT is below the THRESHOLD, the validator becomes a valid candidate to propose a block for that slot. Otherwise, the validator skips the slot.
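
To make the roll-and-compare step concrete, here is a hedged Rust sketch. A plain hash stands in for the VRF (a real VRF also emits a PROOF verifiable against the validator's public key), and the threshold value here is arbitrary.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for a real VRF: deterministic output from the three inputs.
// A genuine VRF additionally returns a proof of correct generation.
fn mock_vrf(secret: u64, epoch_randomness: u64, slot: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    (secret, epoch_randomness, slot).hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let threshold = u64::MAX / 8; // protocol-defined in reality; arbitrary here
    let result = mock_vrf(0xDEAD_BEEF, 0x1234, 42);
    if result < threshold {
        println!("below threshold: candidate block producer for this slot");
    } else {
        println!("above threshold: skip this slot");
    }
}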

As a result, multiple validators may be eligible to propose a block for the same slot. In this case, the block accepted by other nodes will prevail, provided it is on the chain with the latest finalized block as determined by the GRANDPA finality gadget.

Because validators roll independently, a slot may have no block candidates if every validator's roll lands above the threshold. BABE handles this with a secondary slot assignment that deterministically selects a fallback author, so block production continues even when no validator wins the VRF lottery.

Note

How empty slots are resolved, and how Polkadot keeps block times near constant, is covered on the PoS Consensus page.

"},{"location":"polkadot-protocol/basics/randomness/#randao","title":"RANDAO","text":"

An alternative on-chain randomness method is Ethereum's RANDAO, where validators perform thousands of hashes on a seed and publish the final hash during a round. The collective input from all validators forms the random number, and as long as one honest validator participates, the randomness is secure.

To enhance security, RANDAO can optionally be combined with a Verifiable Delay Function (VDF), ensuring that randomness can't be predicted or manipulated during computation.

Note

More information about RANDAO can be found in the ETH documentation.

"},{"location":"polkadot-protocol/basics/randomness/#vdfs","title":"VDFs","text":"

Verifiable Delay Functions (VDFs) are time-bound computations that, even on parallel computers, take a set amount of time to complete.

They produce a unique result that can be quickly verified publicly. When combined with RANDAO, feeding RANDAO's output into a VDF introduces a delay that nullifies an attacker's chance to influence the randomness.

However, VDFs likely require specialized ASIC devices that run separately from standard nodes.

Warning

Although only one honest VDF device is needed to secure the system, and such devices are expected to be open-source and inexpensive, running them involves real costs without direct incentives, which adds friction for blockchain users.

"},{"location":"polkadot-protocol/basics/randomness/#additional-resources","title":"Additional Resources","text":"
  • Polkadot's research on blockchain randomness and sortition - contains reasoning for choices made along with proofs
  • Discussion on Randomness used in Polkadot - W3F researchers explore when and under what conditions Polkadot's randomness can be utilized
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/","title":"Blocks, Transactions, and Fees","text":"

Discover the inner workings of Polkadot's blocks and transactions, including their structure, processing, and lifecycle within the network. Learn how blocks are authored, validated, and finalized, ensuring seamless operation and consensus across the ecosystem. Dive into the various types of transactions (signed, unsigned, and inherent) and understand how they are constructed, submitted, and validated.

Uncover how Polkadot's fee system balances resource usage and economic incentives. Explore the role of transaction weights, runtime specifics, and the precise formula used to calculate fees. These mechanisms ensure fair resource allocation while maintaining the network's efficiency and scalability.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/blocks/","title":"Blocks","text":""},{"location":"polkadot-protocol/basics/blocks-transactions-fees/blocks/#introduction","title":"Introduction","text":"

In the Polkadot SDK, blocks are fundamental to the functioning of the blockchain, serving as containers for transactions and changes to the chain's state. Blocks consist of headers and an array of transactions, ensuring the integrity and validity of operations on the network. This guide explores the essential components of a block, the process of block production, and how blocks are validated and imported across the network. By understanding these concepts, developers can better grasp how blockchains maintain security, consistency, and performance within the Polkadot ecosystem.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/blocks/#what-is-a-block","title":"What is a Block?","text":"

In the Polkadot SDK, a block is a fundamental unit that encapsulates both the header and an array of transactions. The block header includes critical metadata to ensure the integrity and sequence of the blockchain. Here's a breakdown of its components:

  • Block height - indicates the number of blocks created in the chain so far
  • Parent hash - the hash of the previous block, providing a link to maintain the blockchain's immutability
  • Transaction root - cryptographic digest summarizing all transactions in the block
  • State root - a cryptographic digest representing the post-execution state
  • Digest - additional information that can be attached to a block, such as consensus-related messages

Each transaction is part of a series that is executed according to the runtime's rules. The transaction root is a cryptographic digest of this series, which prevents alterations and enables succinct verification by light clients. This verification process allows light clients to confirm whether a transaction exists in a block with only the block header, avoiding downloading the entire block.
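
As a rough sketch (field names and concrete types are simplified assumptions; the real header type is sp_runtime::generic::Header, generic over the block number and hashing algorithm), a block can be pictured like this:

// Illustrative layout mirroring the header fields listed above.
type Hash = [u8; 32];

struct BlockHeader {
    number: u64,           // block height
    parent_hash: Hash,     // link to the previous block
    extrinsics_root: Hash, // transaction root: digest of the block's transactions
    state_root: Hash,      // digest of the post-execution state
    digest: Vec<Vec<u8>>,  // consensus-related messages and other attachments
}

struct Block {
    header: BlockHeader,
    extrinsics: Vec<Vec<u8>>, // opaque, SCALE-encoded transactions
}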

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/blocks/#block-production","title":"Block Production","text":"

When an authoring node is authorized to create a new block, it selects transactions from the transaction queue based on priority. This step, known as block production, relies heavily on the executive module to manage the initialization and finalization of blocks. The process is summarized as follows:

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/blocks/#initialize-block","title":"Initialize Block","text":"

The block initialization process begins with a series of function calls that prepare the block for transaction execution:

  1. Call on_initialize - the executive module calls the on_initialize hook from the system pallet and other runtime pallets to prepare for the block's transactions
  2. Coordinate runtime calls - coordinates function calls in the order defined by the transaction queue
  3. Verify information - once the on_initialize functions are executed, the executive module checks the parent hash in the block header and the trie root to verify that the information is consistent
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/blocks/#finalize-block","title":"Finalize Block","text":"

Once transactions are processed, the block must be finalized before being broadcast to the network. The finalization steps are as follows:

  1. Call on_finalize - the executive module calls the on_finalize hooks in each pallet to ensure any remaining state updates or checks are completed before the block is sealed and published
  2. Verify information - the block's digest and storage root in the header are checked against the initialized block to ensure consistency
  3. Call on_idle - the on_idle hook is triggered to process any remaining tasks using the leftover weight from the block
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/blocks/#block-authoring-and-import","title":"Block Authoring and Import","text":"

Once a block is finalized, it is gossiped to other nodes in the network. From end to end, an authoring node follows this procedure:

  1. Receive transactions - the authoring node collects transactions from the network
  2. Validate - transactions are checked for validity
  3. Queue - valid transactions are placed in the transaction pool for execution
  4. Execute - state changes are made as the transactions are executed
  5. Publish - the finalized block is broadcast to the network
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/blocks/#block-import-queue","title":"Block Import Queue","text":"

After a block is published, other nodes on the network can import it into their chain state. The block import queue is part of the outer node in every Polkadot SDK-based node and ensures incoming blocks are valid before adding them to the node's state.

In most cases, you don't need to know details about how transactions are gossiped or how other nodes on the network import blocks. The following traits are relevant, however, if you plan to write any custom consensus logic or want a deeper dive into the block import queue:

  • ImportQueue - the trait that defines the block import queue
  • Link - the trait that defines the link between the block import queue and the network
  • BasicQueue - a basic implementation of the block import queue
  • Verifier - the trait that defines the block verifier
  • BlockImport - the trait that defines the block import process

These traits govern how blocks are validated and imported across the network, ensuring consistency and security.

Additional information

Refer to the Block reference to learn more about the block structure in the Polkadot SDK runtime.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/","title":"Transactions Weights and Fees","text":""},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#introductions","title":"Introductions","text":"

When transactions are executed, or data is stored on-chain, the activity changes the chain's state and consumes blockchain resources. Because the resources available to a blockchain are limited, managing how operations on-chain consume them is important. In addition to being limited in practical terms, such as storage capacity, blockchain resources represent a potential attack vector for malicious users. For example, a malicious user might attempt to overload the network with messages to stop the network from producing new blocks. To protect blockchain resources from being drained or overloaded, you need to manage how they are made available and how they are consumed. The resources to be aware of include:

  • Memory usage
  • Storage input and output
  • Computation
  • Transaction and block size
  • State database size

The Polkadot SDK provides block authors with several ways to manage access to resources and to prevent individual components of the chain from consuming too much of any single resource. Two of the most important mechanisms available to block authors are weights and transaction fees.

Weights manage the time it takes to validate a block and characterize the time it takes to execute the calls in the block's body. By controlling the execution time a block can consume, weights set limits on storage input, output, and computation.

Some of the weight allowed for a block is consumed as part of the block's initialization and finalization. The weight might also be used to execute mandatory inherent extrinsic calls. To help ensure blocks don't consume too much execution time, and to prevent malicious users from overloading the system with unnecessary calls, weights are combined with transaction fees.

Transaction fees provide an economic incentive to limit execution time, computation, and the number of calls required to perform operations. Transaction fees are also used to make the blockchain economically sustainable because they are typically applied to transactions initiated by users and deducted before a transaction request is executed.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#how-fees-are-calculated","title":"How Fees are Calculated","text":"

The final fee for a transaction is calculated using the following parameters:

  • base fee - this is the minimum amount a user pays for a transaction. It is declared as a base weight in the runtime and converted to a fee using the WeightToFee conversion
  • weight fee - a fee proportional to the execution time (input and output and computation) that a transaction consumes
  • length fee - a fee proportional to the encoded length of the transaction
  • tip - an optional tip to increase the transaction's priority, giving it a higher chance to be included in the transaction queue

The base fee and proportional weight and length fees constitute the inclusion fee. The inclusion fee is the minimum fee that must be available for a transaction to be included in a block.

inclusion fee = base fee + weight fee + length fee

Transaction fees are withdrawn before the transaction is executed. After the transaction is executed, the weight can be adjusted to reflect the resources used. If a transaction uses fewer resources than expected, the transaction fee is corrected, and the adjusted transaction fee is deposited.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#using-the-transaction-payment-pallet","title":"Using the Transaction Payment Pallet","text":"

The Transaction Payment pallet provides the basic logic for calculating the inclusion fee. You can also use the Transaction Payment pallet to:

  • Convert a weight value into a deductible fee based on a currency type using Config::WeightToFee
  • Update the fee for the next block by defining a multiplier based on the chain's final state at the end of the previous block using Config::FeeMultiplierUpdate
  • Manage the withdrawal, refund, and deposit of transaction fees using Config::OnChargeTransaction

You can learn more about these configuration traits in the Transaction Payment documentation.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#understanding-the-inclusion-fee","title":"Understanding the Inclusion Fee","text":"

The formula for calculating the inclusion fee is as follows:

inclusion_fee = base_fee + length_fee + [targeted_fee_adjustment * weight_fee]

And then, for calculating the final fee:

final_fee = inclusion_fee + tip

In the first formula, the targeted_fee_adjustment is a multiplier that can tune the final fee based on the network's congestion.

  • The base_fee derived from the base weight covers inclusion overhead like signature verification
  • The length_fee is a per-byte fee that is multiplied by the length of the encoded extrinsic
  • The weight_fee is calculated using two parameters:
    • The ExtrinsicBaseWeight that is declared in the runtime and applies to all extrinsics
    • The #[pallet::weight] annotation that accounts for an extrinsic's complexity

To convert the weight to Currency, the runtime must define a WeightToFee struct that implements a conversion function, Convert<Weight,Balance>.
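
The arithmetic can be sketched in plain Rust. The linear conversion below is an assumption for illustration; in a real runtime the conversion is whatever the configured WeightToFee type implements.

type Balance = u128;
type Weight = u64;

// Assumed linear conversion: fee = COEFFICIENT * weight.
fn weight_to_fee(weight: Weight) -> Balance {
    const COEFFICIENT: Balance = 2;
    COEFFICIENT * weight as Balance
}

fn main() {
    let base_weight: Weight = 10_000;  // ExtrinsicBaseWeight stand-in
    let call_weight: Weight = 250_000; // from the #[pallet::weight] annotation
    let tx_len_bytes: Balance = 128;
    let per_byte_fee: Balance = 10;
    let targeted_fee_adjustment: Balance = 1; // congestion multiplier, unity here

    let inclusion_fee = weight_to_fee(base_weight)
        + per_byte_fee * tx_len_bytes
        + targeted_fee_adjustment * weight_to_fee(call_weight);

    let tip: Balance = 500;
    println!("final_fee = {}", inclusion_fee + tip); // final_fee = inclusion_fee + tip
}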

Note that the extrinsic sender is charged the inclusion fee before the extrinsic is invoked. The fee is deducted from the sender's balance even if the transaction fails upon execution.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#accounts-with-an-insufficient-balance","title":"Accounts with an Insufficient Balance","text":"

If an account does not have a sufficient balance to pay the inclusion fee and remain alive (that is, enough to pay the inclusion fee and maintain the minimum existential deposit), then you should ensure the transaction is canceled so that no fee is deducted and the transaction does not begin execution.

The Polkadot SDK doesn't enforce this rollback behavior. However, this scenario would be rare because the transaction queue and block-making logic perform checks to prevent it before adding an extrinsic to a block.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#fee-multipliers","title":"Fee Multipliers","text":"

The inclusion fee formula always results in the same fee for the same input. However, weight can be dynamic and, based on how WeightToFee is defined, the final fee can include some degree of variability. The Transaction Payment pallet provides the FeeMultiplierUpdate configurable parameter to account for this variability.

The default update function is inspired by the Polkadot network and implements a targeted adjustment in which a target saturation level of block weight is defined. If the previous block is more saturated, the fees increase slightly. Similarly, if the last block has fewer transactions than the target, fees are decreased by a small amount. For more information about fee multiplier adjustments, see the Web3 Research Page.
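
A toy version of that targeted adjustment is sketched below. The step size, target, and use of f64 are assumptions for readability; the actual implementation uses fixed-point arithmetic and protocol-defined parameters.

// Nudge the fee multiplier toward a target block fullness.
fn update_multiplier(multiplier: f64, prev_block_fullness: f64, target: f64) -> f64 {
    let adjustment_rate = 0.01; // assumed small per-block step
    if prev_block_fullness > target {
        multiplier * (1.0 + adjustment_rate) // congested: fees drift up
    } else {
        multiplier * (1.0 - adjustment_rate) // underfull: fees drift down
    }
}

fn main() {
    let mut m = 1.0;
    for fullness in [0.9, 0.9, 0.1] {
        m = update_multiplier(m, fullness, 0.25);
        println!("multiplier = {m:.4}");
    }
}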

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#transactions-with-special-requirements","title":"Transactions with Special Requirements","text":"

Inclusion fees must be computable before execution and can only represent fixed logic. Some transactions warrant limiting resources with other strategies. For example:

  • Bonds are a type of fee that might be returned or slashed after some on-chain event. For example, you might want to require users to place a bond to participate in a vote. The bond might then be returned at the end of the referendum or slashed if the voter attempted malicious behavior
  • Deposits are fees that might be returned later. For example, you might require users to pay a deposit to execute an operation that uses storage. The user's deposit could be returned if a subsequent operation frees up storage
  • Burn operations are used to pay for a transaction based on its internal logic. For example, a transaction might burn funds from the sender if the transaction creates new storage items to pay for the increased state size
  • Limits enable you to enforce constant or configurable limits on specific operations. For example, the default Staking pallet only allows nominators to nominate 16 validators to limit the complexity of the validator election process

It is important to note that if you query the chain for a transaction fee, it only returns the inclusion fee.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#default-weight-annotations","title":"Default Weight Annotations","text":"

All dispatchable functions in the Polkadot SDK must specify a weight. This is done using an annotation-based system that lets you combine fixed values for database read/write weight and/or fixed values based on benchmarks. The most basic example looks like this:

#[pallet::weight(100_000)]
fn my_dispatchable() {
    // ...
}

Note that the ExtrinsicBaseWeight is automatically added to the declared weight to account for the costs of simply including an empty extrinsic in a block.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#weights-and-database-readwrite-operations","title":"Weights and Database Read/Write Operations","text":"

To make weight annotations independent of the deployed database backend, they are defined as a constant and then used in the annotations when expressing database accesses performed by the dispatchable:

#[pallet::weight(T::DbWeight::get().reads_writes(1, 2) + 20_000)]
fn my_dispatchable() {
    // ...
}

This annotation declares one database read and two database writes, in addition to a fixed 20,000 for the dispatchable's other logic. A database access is generally counted every time a value declared inside the #[pallet::storage] block is accessed. However, only unique accesses are counted: after a value is accessed, it is cached, and accessing it again does not result in a database operation (the sketch after the following list makes this accounting concrete). That is:

  • Multiple reads of the same value count as one read
  • Multiple writes to the same value count as one write
  • Multiple reads of a value, followed by a write to that value, count as one read and one write
  • A write followed by a read of the same value counts as only one write
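
A minimal sketch of that accounting (the per-operation costs below are invented; real values come from the runtime's configured database weights):

struct DbWeight {
    read: u64,
    write: u64,
}

impl DbWeight {
    // Mirrors the shape of reads_writes(1, 2) used in the annotation above.
    fn reads_writes(&self, reads: u64, writes: u64) -> u64 {
        reads * self.read + writes * self.write
    }
}

fn main() {
    let db = DbWeight { read: 25_000, write: 100_000 }; // assumed costs
    let annotated_weight = db.reads_writes(1, 2) + 20_000;
    println!("annotated weight = {annotated_weight}");
}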
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#dispatch-classes","title":"Dispatch Classes","text":"

Dispatches are broken into three classes:

  • Normal
  • Operational
  • Mandatory

If a dispatch is not defined as Operational or Mandatory in the weight annotation, the dispatch is identified as Normal by default. You can specify that the dispatchable uses another class like this:

#[pallet::weight((100_000, DispatchClass::Operational))]
fn my_dispatchable() {
    // ...
}

This tuple notation also allows you to specify a final argument determining whether the user is charged based on the annotated weight. If you don't specify otherwise, Pays::Yes is assumed:

#[pallet::weight((100_000, DispatchClass::Normal, Pays::No))]
fn my_dispatchable() {
    // ...
}
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#normal-dispatches","title":"Normal Dispatches","text":"

Dispatches in this class represent normal user-triggered transactions. These types of dispatches only consume a portion of a block's total weight limit. For information about the maximum portion of a block that can be consumed for normal dispatches, see AvailableBlockRatio. Normal dispatches are sent to the transaction pool.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#operational-dispatches","title":"Operational Dispatches","text":"

Unlike normal dispatches, which represent the usage of network capabilities, operational dispatches are those that provide network capabilities. Operational dispatches can consume the entire weight limit of a block. They are not bound by the AvailableBlockRatio. Dispatches in this class are given maximum priority and are exempt from paying the length_fee.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#mandatory-dispatches","title":"Mandatory Dispatches","text":"

Mandatory dispatches are included in a block even if they cause the block to surpass its weight limit. You can only use the mandatory dispatch class for inherent transactions that the block author submits. This dispatch class is intended to represent functions in the block validation process. Because these dispatches are always included in a block regardless of the function weight, the validation process must prevent malicious nodes from abusing the function to craft valid but impossibly heavy blocks. You can typically accomplish this by ensuring that:

  • The operation performed is always light
  • The operation can only be included in a block once

To make it more difficult for malicious nodes to abuse mandatory dispatches, they cannot be included in blocks that return errors. This dispatch class reflects the assumption that it is better to allow an overweight block to be created than to allow no block to be created at all.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#dynamic-weights","title":"Dynamic Weights","text":"

In addition to purely fixed weights and constants, the weight calculation can consider the input arguments of a dispatchable. The weight should be trivially computable from the input arguments with some basic arithmetic:

use frame_support::{
    dispatch::{DispatchClass, Pays},
    weights::Weight,
};

#[pallet::weight(FunctionOf(
    |args: (&Vec<User>,)| args.0.len().saturating_mul(10_000),
))]
fn handle_users(origin, calls: Vec<User>) {
    // Do something per user
}
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#post-dispatch-weight-correction","title":"Post Dispatch Weight Correction","text":"

Depending on the execution logic, a dispatchable function might consume less weight than was prescribed pre-dispatch. To correct weight, the function declares a different return type and returns its actual weight:

#[pallet::weight(10_000 + 500_000_000)]
fn expensive_or_cheap(input: u64) -> DispatchResultWithPostInfo {
    let was_heavy = do_calculation(input);

    if was_heavy {
        // None means "no correction" from the weight annotation.
        Ok(None.into())
    } else {
        // Return the actual weight consumed.
        Ok(Some(10_000).into())
    }
}
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#custom-fees","title":"Custom Fees","text":"

You can also define custom fee systems through custom weight functions or inclusion fee functions.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#custom-weights","title":"Custom Weights","text":"

Instead of using the default weight annotations, you can create a custom weight calculation type using the weights module. The custom weight calculation type must implement the following traits:

  • WeighData<T> to determine the weight of the dispatch
  • ClassifyDispatch<T> to determine the class of the dispatch
  • PaysFee<T> to determine whether the sender of the dispatch pays fees

The Polkadot SDK then bundles the output information of the three traits into the DispatchInfo struct and provides it by implementing GetDispatchInfo for all Call variants and opaque extrinsic types. This is used internally by the System and Executive modules.

ClassifyDispatch, WeighData, and PaysFee are generic over T, which gets resolved into the tuple of all dispatch arguments except for the origin. The following example illustrates a struct that calculates the weight as m * len(args), where m is a given multiplier and args is the concatenated tuple of all dispatch arguments. In this example, the dispatch class is Operational if the transaction has more than 100 bytes of length in arguments and will pay fees if the encoded length exceeds 10 bytes.

struct LenWeight(u32);

impl<T> WeighData<T> for LenWeight {
    fn weigh_data(&self, target: T) -> Weight {
        let multiplier = self.0;
        let encoded_len = target.encode().len() as u32;
        multiplier * encoded_len
    }
}

impl<T> ClassifyDispatch<T> for LenWeight {
    fn classify_dispatch(&self, target: T) -> DispatchClass {
        let encoded_len = target.encode().len() as u32;
        if encoded_len > 100 {
            DispatchClass::Operational
        } else {
            DispatchClass::Normal
        }
    }
}

impl<T> PaysFee<T> for LenWeight {
    fn pays_fee(&self, target: T) -> Pays {
        let encoded_len = target.encode().len() as u32;
        if encoded_len > 10 {
            Pays::Yes
        } else {
            Pays::No
        }
    }
}

A weight calculator function can also be coerced to the final type of the argument instead of defining it as a vague type that can be encoded. The code would roughly look like this:

struct CustomWeight;

impl WeighData<(&u32, &u64)> for CustomWeight {
    fn weigh_data(&self, target: (&u32, &u64)) -> Weight {
        ...
    }
}

// Given a dispatch:
#[pallet::call]
impl<T: Config<I>, I: 'static> Pallet<T, I> {
    #[pallet::weight(CustomWeight)]
    fn foo(a: u32, b: u64) { ... }
}

In this example, the CustomWeight can only be used in conjunction with a dispatch with a particular signature (u32, u64), as opposed to LenWeight, which can be used with anything because there aren't any assumptions about <T>.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#custom-inclusion-fee","title":"Custom Inclusion Fee","text":"

The following example illustrates how to customize your inclusion fee. You must configure the appropriate associated types in the respective module.

// Assume this is the balance type
type Balance = u64;

// Assume we want all the weights to have a `100 + 2 * w` conversion to fees
struct CustomWeightToFee;
impl WeightToFee<Weight, Balance> for CustomWeightToFee {
    fn convert(w: Weight) -> Balance {
        let a = Balance::from(100);
        let b = Balance::from(2);
        let w = Balance::from(w);
        a + b * w
    }
}

parameter_types! {
    pub const ExtrinsicBaseWeight: Weight = 10_000_000;
}

impl frame_system::Config for Runtime {
    type ExtrinsicBaseWeight = ExtrinsicBaseWeight;
}

parameter_types! {
    pub const TransactionByteFee: Balance = 10;
}

impl transaction_payment::Config for Runtime {
    type TransactionByteFee = TransactionByteFee;
    type WeightToFee = CustomWeightToFee;
    type FeeMultiplierUpdate = TargetedFeeAdjustment<TargetBlockFullness>;
}

struct TargetedFeeAdjustment<T>(sp_std::marker::PhantomData<T>);
impl<T: Get<Perquintill>> WeightToFee<Fixed128, Fixed128> for TargetedFeeAdjustment<T> {
    fn convert(multiplier: Fixed128) -> Fixed128 {
        // Don't change anything. Put any fee update info here.
        multiplier
    }
}
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/fees/#further-resources","title":"Further Resources","text":"

You now know the weight system, how it affects transaction fee computation, and how to specify weights for your dispatchable calls. The next step is determining the correct weight for your dispatchable operations. You can use Substrate benchmarking functions and frame-benchmarking calls to test your functions with different parameters and empirically determine the proper weight in their worst-case scenarios.

  • Benchmark
  • SignedExtension
  • Custom weights for the Example pallet
  • Web3 Foundation Research
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/","title":"Transactions","text":""},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#introduction","title":"Introduction","text":"

Transactions are essential components of blockchain networks, enabling state changes and the execution of key operations. In the Polkadot SDK, transactions, often called extrinsics, come in multiple forms, including signed, unsigned, and inherent transactions.

This guide walks you through the different transaction types and how they're formatted, validated, and processed within the Polkadot ecosystem. You'll also learn how to customize transaction formats and construct transactions for FRAME-based runtimes, ensuring a complete understanding of how transactions are built and executed in Polkadot SDK-based chains.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#what-is-a-transaction","title":"What Is a Transaction?","text":"

In the Polkadot SDK, transactions represent operations that modify the chain's state, bundled into blocks for execution. The term extrinsic is often used to refer to any data that originates outside the runtime and is included in the chain. While other blockchain systems typically refer to these operations as \"transactions,\" the Polkadot SDK adopts the broader term \"extrinsic\" to capture the wide variety of data types that can be added to a block.

There are three primary types of transactions (extrinsics) in the Polkadot SDK:

  • Signed transactions - signed by the submitting account, often carrying transaction fees
  • Unsigned transactions - submitted without a signature, often requiring custom validation logic
  • Inherent transactions - typically inserted directly into blocks by block authoring nodes, without gossiping between peers

Each type serves a distinct purpose, and understanding when and how to use each is key to efficiently working with the Polkadot SDK.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#signed-transactions","title":"Signed Transactions","text":"

Signed transactions require an account's signature and typically involve submitting a request to execute a runtime call. The signature serves as a form of cryptographic proof that the sender has authorized the action, using their private key. These transactions often involve a transaction fee to cover the cost of execution and incentivize block producers.

Signed transactions are the most common type of transaction and are integral to user-driven actions, such as token transfers. For instance, when you transfer tokens from one account to another, the sending account must sign the transaction to authorize the operation.

For example, the pallet_balances::Call::transfer_allow_death extrinsic in the Balances pallet allows you to transfer tokens. Since your account initiates this transaction, your account key is used to sign it. You'll also be responsible for paying the associated transaction fee, with the option to include an additional tip to incentivize faster inclusion in the block.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#unsigned-transactions","title":"Unsigned Transactions","text":"

Unsigned transactions do not require a signature or account-specific data from the sender. Unlike signed transactions, they do not come with any form of economic deterrent, such as fees, which makes them susceptible to spam or replay attacks. Custom validation logic must be implemented to mitigate these risks and ensure these transactions are secure.

Unsigned transactions typically involve scenarios where including a fee or signature is unnecessary or counterproductive. However, due to the absence of fees, they require careful validation to protect the network. For example, the pallet_im_online::Call::heartbeat extrinsic allows validators to send a heartbeat signal, indicating they are active. Since only validators can make this call, the logic embedded in the transaction ensures that the sender is a validator, making a signature or fee redundant.

Unsigned transactions are more resource-intensive than signed ones because custom validation is required, but they play a crucial role in certain operational scenarios, especially when regular user accounts aren't involved.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#inherent-transactions","title":"Inherent Transactions","text":"

Inherent transactions are a specialized type of unsigned transaction that is used primarily for block authoring. Unlike signed or other unsigned transactions, inherent transactions are added directly by block producers and are not broadcasted to the network or stored in the transaction queue. They don't require signatures or the usual validation steps and are generally used to insert system-critical data directly into blocks.

A key example of an inherent transaction is inserting a timestamp into each block. The pallet_timestamp::Call::now extrinsic allows block authors to include the current time in the block they are producing. Since the block producer adds this information, there is no need for transaction validation, like signature verification. The validation in this case is done indirectly by the validators, who check whether the timestamp is within an acceptable range before finalizing the block.

Another example is the paras_inherent::Call::enter extrinsic, which enables parachain collator nodes to send validation data to the relay chain. This inherent transaction ensures that the necessary parachain data is included in each block without the overhead of gossiped transactions.

Inherent transactions serve a critical role in block authoring by allowing important operational data to be added directly to the chain without needing the validation processes required for standard transactions.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#transaction-formats","title":"Transaction Formats","text":"

Understanding the structure of signed and unsigned transactions is crucial for developers building on Polkadot SDK-based chains. Whether you're optimizing transaction processing, customizing formats, or interacting with the transaction pool, knowing the format of extrinsics, Polkadot's term for transactions, is essential.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#types-of-transaction-formats","title":"Types of Transaction Formats","text":"

In Polkadot SDK-based chains, extrinsics can fall into three main categories:

  • Unchecked extrinsics - typically used for signed transactions that require validation. They contain a signature and additional data, such as a nonce and information for fee calculation. Unchecked extrinsics are named as such because they require validation checks before being accepted into the transaction pool
  • Checked extrinsics - typically used for inherent extrinsics (unsigned transactions); these don't require signature verification. Instead, they carry information such as where the extrinsic originates and any additional data required for the block authoring process
  • Opaque extrinsics - used when the format of an extrinsic is not yet fully committed or finalized. They are still decodable, but their structure can be flexible depending on the context
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#signed-transaction-data-structure","title":"Signed Transaction Data Structure","text":"

A signed transaction typically includes the following components:

  • Signature - verifies the authenticity of the transaction sender
  • Call - the actual function or method call the transaction is requesting (for example, transferring funds)
  • Nonce - tracks the number of prior transactions sent from the account, helping to prevent replay attacks
  • Tip - an optional incentive to prioritize the transaction in block inclusion
  • Additional data - includes details such as spec version, block hash, and genesis hash to ensure the transaction is valid within the correct runtime and chain context

Here's a simplified breakdown of how signed transactions are typically constructed in a Polkadot SDK runtime:

<signing account ID> + <signature> + <additional data>

Each part of the signed transaction has a purpose, ensuring the transaction's authenticity and context within the blockchain.
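
Grouping those components as a plain struct (purely illustrative; the real container is sp_runtime::generic::UncheckedExtrinsic, generic over the address, call, signature, and extension types) gives a feel for the shape:

type AccountId = [u8; 32];
type Signature = [u8; 64];

// Illustrative only: the components of a signed transaction.
struct SignedTransaction {
    signer: AccountId,    // who authorized the call
    signature: Signature, // proves the signer saw exactly this payload
    call: Vec<u8>,        // encoded pallet/function indices plus arguments
    nonce: u32,           // replay protection
    tip: u128,            // optional priority incentive
    // The "additional data" (spec version, genesis hash, ...) is signed over
    // but not transmitted; both sides reconstruct it independently.
}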

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#signed-extensions","title":"Signed Extensions","text":"

Polkadot SDK also provides the concept of signed extensions, which allow developers to extend extrinsics with additional data or validation logic before they are included in a block. The SignedExtension set helps enforce custom rules or protections, such as ensuring the transaction's validity or calculating priority.

The transaction queue regularly calls signed extensions to verify a transaction's validity before placing it in the ready queue. This safeguard ensures transactions won't fail in a block. Signed extensions are commonly used to enforce validation logic and protect the transaction pool from spam and replay attacks.

In FRAME, a signed extension can hold any of the following types by default:

  • AccountId - to encode the sender's identity
  • Call - to encode the pallet call to be dispatched. This data is used to calculate transaction fees
  • AdditionalSigned - to handle any additional data to go into the signed payload allowing you to attach any custom logic prior to dispatching a transaction
  • Pre - to encode the information that can be passed from before a call is dispatched to after it gets dispatched

Signed extensions can enforce checks like:

  • CheckSpecVersion - ensures the transaction is compatible with the runtime's current version
  • CheckWeight - calculates the weight (or computational cost) of the transaction, ensuring the block doesn't exceed the maximum allowed weight

These extensions are critical in the transaction lifecycle, ensuring that only valid and prioritized transactions are processed.
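
Conceptually, the checks chain together like the toy pipeline below. This is not the FRAME SignedExtension trait, just a sketch under assumed types of how CheckSpecVersion-style and CheckWeight-style rules compose.

struct Tx {
    spec_version: u32,
    weight: u64,
}

fn check_spec_version(tx: &Tx, runtime_spec: u32) -> Result<(), &'static str> {
    if tx.spec_version == runtime_spec { Ok(()) } else { Err("stale spec version") }
}

fn check_weight(tx: &Tx, remaining_block_weight: u64) -> Result<(), &'static str> {
    if tx.weight <= remaining_block_weight { Ok(()) } else { Err("block weight exceeded") }
}

fn main() {
    let tx = Tx { spec_version: 1, weight: 10_000 };
    let verdict = check_spec_version(&tx, 1).and_then(|_| check_weight(&tx, 1_000_000));
    println!("transaction valid: {}", verdict.is_ok());
}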

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#transaction-construction","title":"Transaction Construction","text":"

Building transactions in the Polkadot SDK involves constructing a payload that can be verified, signed, and submitted for inclusion in a block. Each runtime in the Polkadot SDK has its own rules for validating and executing transactions, but there are common patterns for constructing a signed transaction.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#construct-a-signed-transaction","title":"Construct a Signed Transaction","text":"

A signed transaction in the Polkadot SDK includes various pieces of data to ensure security, prevent replay attacks, and prioritize processing. Here's an overview of how to construct one:

  1. Construct the unsigned payload - gather the necessary information for the call, including:
    • Pallet index - identifies the pallet where the runtime function resides
    • Function index - specifies the particular function to call in the pallet
    • Parameters - any additional arguments required by the function call
  2. Create a signing payload - once the unsigned payload is ready, additional data must be included:
    • Transaction nonce - unique identifier to prevent replay attacks
    • Era information - defines how long the transaction is valid before it's dropped from the pool
    • Block hash - ensures the transaction doesn't execute on the wrong chain or fork
  3. Sign the payload - using the sender's private key, sign the payload to ensure that the transaction can only be executed by the account holder
  4. Serialize the signed payload - once signed, the transaction must be serialized into a binary format, ensuring the data is compact and easy to transmit over the network
  5. Submit the serialized transaction - finally, submit the serialized transaction to the network, where it will enter the transaction pool and wait for processing by an authoring node

The following is an example of how a signed transaction might look:

node_runtime::UncheckedExtrinsic::new_signed(
    function.clone(),                                      // some call
    sp_runtime::AccountId32::from(sender.public()).into(), // some sending account
    node_runtime::Signature::Sr25519(signature.clone()),   // the account's signature
    extra.clone(),                                         // the signed extensions
)
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#transaction-encoding","title":"Transaction Encoding","text":"

Before a transaction is sent to the network, it is serialized and encoded using a structured encoding process that ensures consistency and prevents tampering:

  • [1] - compact encoded length in bytes of the entire transaction
  • [2] - a u8 containing 1 byte to indicate whether the transaction is signed or unsigned (1 bit) and the encoded transaction version ID (7 bits)
  • [3] - if signed, this field contains an account ID, an SR25519 signature, and some extra data
  • [4] - encoded call data, including pallet and function indices and any required arguments

This encoded format ensures consistency and efficiency in processing transactions across the network. By adhering to this format, applications can construct valid transactions and pass them to the network for execution.
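
For instance, the single byte in field [2] packs the signed flag into the high bit and the version into the low seven bits. A small sketch of that bit layout:

// Field [2]: signed flag (1 bit) + transaction format version (7 bits).
fn version_byte(signed: bool, version: u8) -> u8 {
    let flag = if signed { 0b1000_0000 } else { 0 };
    flag | (version & 0b0111_1111)
}

fn main() {
    // A signed extrinsic of format version 4 encodes this byte as 0x84,
    // and an unsigned one as 0x04.
    assert_eq!(version_byte(true, 4), 0x84);
    assert_eq!(version_byte(false, 4), 0x04);
    println!("ok");
}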

Additional Information

Learn how compact encoding works using SCALE.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#customize-transaction-construction","title":"Customize Transaction Construction","text":"

Although the basic steps for constructing transactions are consistent across Polkadot SDK-based chains, developers can customize transaction formats and validation rules. For example:

  • Custom pallets - you can define new pallets with custom function calls, each with its own parameters and validation logic
  • Signed extensions - developers can implement custom extensions that modify how transactions are prioritized, validated, or included in blocks

By leveraging Polkadot SDK's modular design, developers can create highly specialized transaction logic tailored to their chain's needs.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#lifecycle-of-a-transaction","title":"Lifecycle of a Transaction","text":"

In the Polkadot SDK, transactions are often referred to as extrinsics because the data in transactions originates outside of the runtime. These transactions contain data that initiates changes to the chain state. The most common type of extrinsic is a signed transaction, which is cryptographically verified and typically incurs a fee. This section focuses on how signed transactions are processed, validated, and ultimately included in a block.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#define-transaction-properties","title":"Define Transaction Properties","text":"

The Polkadot SDK runtime defines key transaction properties, such as:

  • Transaction validity - ensures the transaction meets all runtime requirements
  • Signed or unsigned - identifies whether a transaction needs to be signed by an account
  • State changes - determines how the transaction modifies the state of the chain

Pallets, which compose the runtime's logic, define the specific transactions that your chain supports. When a user submits a transaction, such as a token transfer, it becomes a signed transaction, verified by the user's account signature. If the account has enough funds to cover fees, the transaction is executed, and the chain's state is updated accordingly.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#process-on-a-block-authoring-node","title":"Process on a Block Authoring Node","text":"

In Polkadot SDK-based networks, some nodes are authorized to author blocks. These nodes validate and process transactions. When a transaction is sent to a node that can produce blocks, it undergoes a lifecycle that involves several stages, including validation and execution. Non-authoring nodes gossip the transaction across the network until an authoring node receives it. The following diagram illustrates the lifecycle of a transaction that's submitted to a network and processed by an authoring node.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#validate-and-queue","title":"Validate and Queue","text":"

Once a transaction reaches an authoring node, it undergoes an initial validation process to ensure it meets specific conditions defined in the runtime. This validation includes checks for:

  • Correct nonce - ensures the transaction is sequentially valid for the account
  • Sufficient funds - confirms the account can cover any associated transaction fees
  • Signature validity - verifies that the sender's signature matches the transaction data

After these checks, valid transactions are placed in the transaction pool on the local node, where they are queued for inclusion in a block. The pool regularly re-validates queued transactions to ensure they remain valid before being processed. To reach consensus, two-thirds of the nodes must agree on the order of the executed transactions and the resulting state change.

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#transaction-pool","title":"Transaction Pool","text":"

The transaction pool is responsible for managing valid transactions. It ensures that only transactions that pass initial validity checks are queued. Transactions that fail validation, expire, or become invalid for other reasons are removed from the pool.

The transaction pool organizes transactions into two queues:

  • Ready queue - transactions that are valid and ready to be included in a block
  • Future queue - transactions that are not yet valid but could be in the future, such as transactions with a nonce too high for the current state

Details on how the transaction pool validates transactions, including fee and signature handling, can be found in the validate_transaction method.
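
To inspect the pool on a running node, the author_pendingExtrinsics RPC returns every transaction currently queued. A minimal sketch using @polkadot/api, assuming a local node:

import { ApiPromise, WsProvider } from '@polkadot/api';\n\nconst api = await ApiPromise.create({\n  provider: new WsProvider('ws://127.0.0.1:9944'), // assumed local node\n});\n\n// Every extrinsic currently waiting in this node's pool\nconst pending = await api.rpc.author.pendingExtrinsics();\nconsole.log(`${pending.length} transaction(s) in the pool`);\npending.forEach((xt) => console.log(xt.toHuman()));\n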

"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#invalid-transactions","title":"Invalid Transactions","text":"

If a transaction is invalid, for example, due to an invalid signature or insufficient funds, it is rejected and won't be added to the block. Invalid transactions might be rejected for reasons such as:

  • The transaction has already been included in a block
  • The transaction's signature does not match the sender
  • The transaction is too large to fit in the current block
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#transaction-ordering-and-priority","title":"Transaction Ordering and Priority","text":"

When a node is selected as the next block author, it prioritizes transactions based on weight, length, and tip amount. The goal is to fill the block with high-priority transactions without exceeding its maximum size or computational limits. Transactions are ordered as follows:

  • Inherents first - inherent transactions, such as block timestamp updates, are always placed first
  • Nonce-based ordering - transactions from the same account are ordered by their nonce
  • Fee-based ordering - among transactions with the same nonce or priority level, those with higher fees are prioritized
"},{"location":"polkadot-protocol/basics/blocks-transactions-fees/transactions/#transaction-execution","title":"Transaction Execution","text":"

Once a block author selects transactions from the pool, the transactions are executed in priority order. As each transaction is processed, the state changes are written directly to the chain's storage. It's important to note that these changes are not cached, meaning a failed transaction won't revert earlier state changes, which could leave the block in an inconsistent state.

Events are also written to storage. Runtime logic should not emit an event before performing the associated actions, because if the transaction fails after the event has been emitted, the event will not be reverted.

Additional Information

Watch Seminar: Lifecycle of a transaction for a video overview of the lifecycle of transactions and the types of transactions that exist.

"},{"location":"polkadot-protocol/onchain-governance/","title":"On-Chain Governance","text":"

Polkadot's on-chain governance system, OpenGov, enables decentralized decision-making across the network. It empowers stakeholders to propose, vote on, and enact changes with transparency and efficiency. This system ensures that governance is both flexible and inclusive, allowing developers to integrate custom governance solutions and mechanisms within the network. Understanding how OpenGov functions is crucial for anyone looking to engage with Polkadot\u2019s decentralized ecosystem, whether you\u2019re proposing upgrades, managing referenda, or exploring voting structures.

At the core of Polkadot\u2019s governance system are three key pallets: Preimage, Referenda, and Conviction Voting. These components enable flexible, decentralized decision-making, providing developers with the tools to create tailored governance solutions. This modular approach ensures governance remains dynamic, secure, and adaptable, fostering deeper participation and alignment with the network\u2019s goals. By leveraging these pallets, developers can build custom governance models that shape the evolution of the Polkadot ecosystem.

"},{"location":"polkadot-protocol/onchain-governance/#start-building-governance-solutions","title":"Start Building Governance Solutions","text":"

To develop solutions related to Polkadot's governance system, it\u2019s essential to understand three key pallets:

  • Preimage - stores and manages the content or the detailed information of a referendum proposal before it is voted on
  • Referenda - manages the lifecycle of a referendum, including proposal submission, voting, and execution. Once a referendum is proposed and voted on, it can be enacted if it passes the required threshold
  • Conviction Voting - manages the voting power based on the \"conviction\" or commitment of voters, providing a more flexible and nuanced voting mechanism
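
All three pallets are exposed through standard storage queries. The following sketch reads a referendum's state and an account's conviction votes with @polkadot/api; the referendum index, track ID, and account are hypothetical placeholders:

import { ApiPromise, WsProvider } from '@polkadot/api';\n\nconst api = await ApiPromise.create({\n  provider: new WsProvider('wss://rpc.polkadot.io'),\n});\n\n// Referenda pallet: lifecycle state of referendum 123 (hypothetical index)\nconst info = await api.query.referenda.referendumInfoFor(123);\nconsole.log(info.toHuman());\n\n// Conviction Voting pallet: an account's votes on track 0 (Root)\nconst who = '5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY'; // Alice (placeholder)\nconst voting = await api.query.convictionVoting.votingFor(who, 0);\nconsole.log(voting.toHuman());\n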
"},{"location":"polkadot-protocol/onchain-governance/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"polkadot-protocol/onchain-governance/origins-tracks/","title":"Origins and Tracks","text":""},{"location":"polkadot-protocol/onchain-governance/origins-tracks/#introduction","title":"Introduction","text":"

Polkadot's OpenGov system empowers decentralized decision-making and active community participation by tailoring the governance process to the impact of proposed changes. Through a system of origins and tracks, OpenGov ensures that every referendum receives the appropriate scrutiny, balancing security, inclusivity, and efficiency.

This guide will help you understand the role of origins in classifying proposals by privilege and priority. You will learn how tracks guide proposals through tailored stages like voting, confirmation, and enactment and how to select the correct origin for your referendum to align with community expectations and network governance.

Origins and tracks are vital in streamlining the governance workflow and maintaining Polkadot's resilience and adaptability.

"},{"location":"polkadot-protocol/onchain-governance/origins-tracks/#origins","title":"Origins","text":"

Origins are the foundation of Polkadot's OpenGov governance system. They categorize proposals by privilege and define their decision-making rules. Each origin corresponds to a specific level of importance and risk, guiding how referendums progress through the governance process.

  • High-privilege origins like Root Origin govern critical network changes, such as core software upgrades
  • Lower-privilege origins like Small Spender handle minor requests, such as community project funding under 10,000 DOT

Proposers select an origin based on the nature of their referendum. Origins determine parameters like approval thresholds, required deposits, and timeframes for voting and confirmation. Each origin is paired with a track, which acts as a roadmap for the proposal's lifecycle, including preparation, voting, and enactment.

OpenGov Origins

Explore the Polkadot OpenGov Origins page for a detailed list of origins and their associated parameters.

"},{"location":"polkadot-protocol/onchain-governance/origins-tracks/#tracks","title":"Tracks","text":"

Tracks define a referendum's journey from submission to enactment, tailoring governance parameters to the impact of proposed changes. Each track operates independently and includes several key stages:

  • Preparation - time for community discussion before voting begins
  • Voting - period for token holders to cast their votes
  • Decision - finalization of results and determination of the proposal's outcome
  • Confirmation - period to verify sustained community support before enactment
  • Enactment - final waiting period before the proposal takes effect

Tracks customize these stages with parameters like decision deposit requirements, voting durations, and approval thresholds, ensuring proposals from each origin receive the required scrutiny and process. For example, a runtime upgrade in the Root Origin track will have longer timeframes and stricter thresholds than a treasury request in the Small Spender track.

"},{"location":"polkadot-protocol/onchain-governance/origins-tracks/#additional-resources","title":"Additional Resources","text":"
  • Visit Origins and Tracks Info for a list of origins and tracks for Polkadot and Kusama including associated parameters

  • See Approval and Support for a deeper dive into the approval and support system

"},{"location":"polkadot-protocol/onchain-governance/overview/","title":"On-Chain Governance","text":""},{"location":"polkadot-protocol/onchain-governance/overview/#introduction","title":"Introduction","text":"

Polkadot\u2019s governance system exemplifies decentralized decision-making, empowering its community of stakeholders to shape the network\u2019s future through active participation. The latest evolution, OpenGov, builds on Polkadot\u2019s foundation by providing a more inclusive and efficient governance model.

This guide will explain the principles and structure of OpenGov and walk you through its key components, such as Origins, Tracks, and Delegation. You will learn about improvements over earlier governance systems, including streamlined voting processes and enhanced stakeholder participation.

With OpenGov, Polkadot achieves a flexible, scalable, and democratic governance framework that allows multiple proposals to proceed simultaneously, ensuring the network evolves in alignment with its community's needs.

"},{"location":"polkadot-protocol/onchain-governance/overview/#governance-evolution","title":"Governance Evolution","text":"

Polkadot\u2019s governance journey began with Governance V1, a system that proved effective in managing treasury funds and protocol upgrades. However, it faced limitations, such as:

  • Slow voting cycles, causing delays in decision-making
  • Inflexibility in handling multiple referendums, restricting scalability

To address these challenges, Polkadot introduced OpenGov, a governance model designed for greater inclusivity, efficiency, and scalability. OpenGov replaces the centralized structures of Governance V1, such as the Council and Technical Committee, with a fully decentralized and dynamic framework.

For a full comparison of the historic and current governance models, visit the Gov1 vs. Polkadot OpenGov section of the Polkadot Wiki.

"},{"location":"polkadot-protocol/onchain-governance/overview/#opengov-key-features","title":"OpenGov Key Features","text":"

OpenGov transforms Polkadot\u2019s governance into a decentralized, stakeholder-driven model, eliminating centralized decision-making bodies like the Council. Key enhancements include:

  • Decentralization - shifts all decision-making power to the public, ensuring a more democratic process
  • Enhanced delegation - allows users to delegate their votes to trusted experts across specific governance tracks
  • Simultaneous referendums - multiple proposals can progress at once, enabling faster decision-making
  • Polkadot Technical Fellowship - a broad, community-driven group replacing the centralized Technical Committee

This new system ensures Polkadot governance remains agile and inclusive, even as the ecosystem grows.

"},{"location":"polkadot-protocol/onchain-governance/overview/#origins-and-tracks","title":"Origins and Tracks","text":"

In OpenGov, origins and tracks are central to managing proposals and votes.

  • Origin - determines the authority level of a proposal (e.g., Treasury, Root), which in turn decides the track of all referendums from that origin
  • Track - defines the procedural flow of a proposal, such as voting duration, approval thresholds, and enactment timelines

Developers must be aware that referendums from different origins and tracks will take varying amounts of time to reach approval and enactment. The Polkadot Technical Fellowship has the option to shorten this timeline by whitelisting a proposal and allowing it to be enacted through the Whitelist Caller origin.

Visit Origins and Tracks Info for details on current origins and tracks, associated terminology, and parameters.

"},{"location":"polkadot-protocol/onchain-governance/overview/#referendums","title":"Referendums","text":"

In OpenGov, anyone can submit a referendum, fostering an open and participatory system. The timeline for a referendum depends on the privilege level of the origin, with more significant changes offering more time for community voting and participation before enactment.

The timeline for an individual referendum includes four distinct periods:

  • Lead-in - a minimum amount of time to allow for community participation, available room in the origin, and payment of the decision deposit. Voting is open during this period
  • Decision - voting continues
  • Confirmation - referendum must meet approval and support criteria during entire period to avoid rejection
  • Enactment - changes approved by the referendum are executed
"},{"location":"polkadot-protocol/onchain-governance/overview/#vote-on-referendums","title":"Vote on Referendums","text":"

Voters can vote with their tokens on each referendum. Polkadot uses a voluntary token locking mechanism, called conviction voting, as a way for voters to increase their voting power. A token holder signals they have a stronger preference for approving a proposal based upon their willingness to lock up tokens. Longer voluntary token locks are seen as a signal of continual approval and translate to increased voting weight.
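
As a sketch, a conviction-weighted vote is cast through the convictionVoting pallet. The referendum index, balance, and signing account below are hypothetical placeholders:

import { ApiPromise, WsProvider } from '@polkadot/api';\nimport { Keyring } from '@polkadot/keyring';\n\nconst api = await ApiPromise.create({\n  provider: new WsProvider('wss://rpc.polkadot.io'),\n});\nconst voter = new Keyring({ type: 'sr25519' }).addFromUri('//Alice'); // placeholder account\n\n// Vote aye on referendum 123 (hypothetical), locking 10 DOT at 2x conviction\nawait api.tx.convictionVoting\n  .vote(123, {\n    Standard: {\n      vote: { aye: true, conviction: 'Locked2x' },\n      balance: 100_000_000_000n, // 10 DOT at 10 decimals\n    },\n  })\n  .signAndSend(voter);\n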

See Voting on a Referendum for a deeper look at conviction voting and related token locks.

"},{"location":"polkadot-protocol/onchain-governance/overview/#delegate-voting-power","title":"Delegate Voting Power","text":"

The OpenGov system also supports multi-role delegations, allowing token holders to assign their voting power on different tracks to entities with expertise in those areas.

For example, if a token holder lacks the technical knowledge to evaluate proposals on the Root track, they can delegate their voting power for that track to an expert they trust to vote in the best interest of the network. This ensures informed decision-making across tracks while maintaining flexibility for token holders.
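
A minimal sketch of per-track delegation with @polkadot/api; the track ID, delegate address, conviction, and amount are hypothetical:

import { ApiPromise, WsProvider } from '@polkadot/api';\nimport { Keyring } from '@polkadot/keyring';\n\nconst api = await ApiPromise.create({\n  provider: new WsProvider('wss://rpc.polkadot.io'),\n});\nconst delegator = new Keyring({ type: 'sr25519' }).addFromUri('//Alice'); // placeholder account\n\n// Delegate 50 DOT of voting power on track 0 (Root) at 1x conviction\nconst expert = '5FHneW46xGXgs5mUiveU4sbTyGBzmstUspZC92UhjJM694ty'; // hypothetical delegate\nawait api.tx.convictionVoting\n  .delegate(0, expert, 'Locked1x', 500_000_000_000n)\n  .signAndSend(delegator);\n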

Visit Multirole Delegation for more details on delegating voting power.

"},{"location":"polkadot-protocol/onchain-governance/overview/#cancel-a-referendum","title":"Cancel a Referendum","text":"

Polkadot OpenGov has two origins for rejecting ongoing referendums:

  • Referendum Canceller - cancels an active referendum when non-malicious errors occur and refunds the deposits to the originators
  • Referendum Killer - used for urgent, malicious cases, this origin instantly terminates an active referendum and slashes deposits

See Cancelling, Killing, and Blacklisting for additional information on rejecting referendums.

"},{"location":"polkadot-protocol/onchain-governance/overview/#additional-resources","title":"Additional Resources","text":"
  • Democracy pallet - handles administration of general stakeholder voting
  • Gov2: Polkadot\u2019s Next Generation of Decentralised Governance - Medium article by Gavin Wood
  • Polkadot Direction - Matrix Element client
  • Polkassembly - OpenGov dashboard and UI
  • Polkadot.js Apps Governance - overview of active referendums
"},{"location":"tutorials/","title":"Tutorials","text":"

Welcome to the Polkadot Tutorials hub! Whether you\u2019re building parachains, integrating system chains, or developing decentralized applications, these step-by-step guides are designed to help you achieve your goals efficiently and effectively. Each guide links to relevant sections of the Polkadot documentation for developers who want to explore specific topics in greater depth.

Not sure where to start? Check out the highlighted tutorials below!

"},{"location":"tutorials/#get-started","title":"Get StartedSpin Up a SolochainRun a Local Relay ChainFork a Live Chain with ChopsticksOpen an XCM Channel","text":"

Learn how to compile and launch a local blockchain node using Polkadot SDK. Launch, run, and interact with a pre-configured node template.

This tutorial will guide you through preparing a relay chain so that you can connect a test parachain node to it for local testing.

Learn how to fork live Polkadot SDK chains with Chopsticks. Configure forks, replay blocks, test XCM, and interact programmatically or via UI.

Learn how to open HRMP channels between parachains on Polkadot. Discover the step-by-step process for establishing uni- and bidirectional communication.

"},{"location":"tutorials/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"tutorials/interoperability/","title":"Cross-Chain Interoperability Tutorials","text":"

This section introduces you to the core interoperability solutions within the Polkadot ecosystem through practical, hands-on tutorials. These resources are designed to help you master cross-chain communication techniques, from setting up messaging channels between parachains to leveraging the advanced features of Polkadot's XCM protocol.

By following these guides, you\u2019ll gain the skills needed to implement seamless integration and interaction across diverse blockchains, unlocking the full potential of Polkadot's interconnected network.

"},{"location":"tutorials/interoperability/#xcm-cross-consensus-messaging","title":"XCM (Cross-Consensus Messaging)","text":"

XCM provides a secure and trustless framework that facilitates communication between parachains, relay chains, and external blockchains, enabling asset transfers, data sharing, and complex cross-chain workflows.

"},{"location":"tutorials/interoperability/#for-parachain-integrators","title":"For Parachain Integrators","text":"

Learn to establish and use cross-chain communication channels:

  • Opening HRMP Channels Between Parachains - set up uni- and bidirectional messaging channels between parachains
  • Opening HRMP Channels with System Parachains - establish communication channels with system parachains using optimized XCM messages
"},{"location":"tutorials/interoperability/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"tutorials/interoperability/#additional-resources","title":"Additional ResourcesLearn about Polkadot's InteroperabilityExplore Comprehensive XCM Guides","text":"

Explore the importance of interoperability in the Polkadot ecosystem, covering XCM, bridges, and cross-chain communication.

Looking for comprehensive guides and technical resources on XCM? Explore foundational concepts, advanced configuration, and best practices for building cross-chain solutions using XCM.

"},{"location":"tutorials/interoperability/xcm-channels/","title":"Tutorials for Managing XCM Channels","text":"

Establishing XCM channels is essential to unlocking Polkadot's native interoperability. Before bridging assets or sending cross-chain contract calls, the necessary XCM channels must be established.

These tutorials guide you through the process of setting up Horizontal Relay-routed Message Passing (HRMP) channels for cross-chain messaging. Learn how to configure unidirectional channels between parachains and the simplified single-message process for bidirectional channels with system parachains like Asset Hub.

"},{"location":"tutorials/interoperability/xcm-channels/#understand-the-process-of-opening-channels","title":"Understand the Process of Opening Channels","text":"

Each parachain starts with two default unidirectional XCM channels: an upward channel for sending messages to the relay chain, and a downward channel for receiving messages. These channels are implicitly available.

To enable communication between parachains, explicit HRMP channels must be established by registering them on the relay chain. This process requires a deposit to cover the costs associated with storing message queues on the relay chain. The deposit amount depends on the specific relay chain\u2019s parameters.
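
The relevant deposit values are part of the relay chain's active host configuration and can be read directly. A short sketch using @polkadot/api, assuming a relay chain endpoint:

import { ApiPromise, WsProvider } from '@polkadot/api';\n\nconst api = await ApiPromise.create({\n  provider: new WsProvider('wss://rpc.polkadot.io'), // any relay chain endpoint\n});\n\n// The active host configuration includes hrmpSenderDeposit,\n// hrmpRecipientDeposit, and the HRMP channel size limits\nconst config = await api.query.configuration.activeConfig();\nconsole.log(config.toHuman());\n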

"},{"location":"tutorials/interoperability/xcm-channels/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"tutorials/interoperability/xcm-channels/#additional-resources","title":"Additional ResourcesReview HRMP Configurations and Extrinsics","text":"

Learn about the configurable parameters that govern HRMP channel behavior and the dispatchable extrinsics used to manage them.

"},{"location":"tutorials/interoperability/xcm-channels/para-to-para/","title":"Opening HRMP Channels Between Parachains","text":""},{"location":"tutorials/interoperability/xcm-channels/para-to-para/#introduction","title":"Introduction","text":"

To establish communication channels between parachains on the Polkadot network using the Horizontal Relay-routed Message Passing (HRMP) protocol, the following steps are required:

  1. Channel request - the parachain that wants to open an HRMP channel must make a request to the parachain it wishes to have an open channel with
  2. Channel acceptance - the other parachain must then accept this request to complete the channel establishment

This process results in a unidirectional HRMP channel, where messages can flow in only one direction between the two parachains.

An additional HRMP channel must be established in the opposite direction to enable bidirectional communication. This requires repeating the request and acceptance process but with the parachains reversing their roles.

Once both unidirectional channels are established, the parachains can send messages back and forth freely through the bidirectional HRMP communication channel.

"},{"location":"tutorials/interoperability/xcm-channels/para-to-para/#prerequisites","title":"Prerequisites","text":"

Before proceeding, ensure you meet the following requirements:

  • Blockchain network with a relay chain and at least two connected parachains
  • Wallet with sufficient funds to execute transactions on the participant chains
"},{"location":"tutorials/interoperability/xcm-channels/para-to-para/#procedure-for-initiating-hrmp-channel-setup","title":"Procedure for Initiating HRMP Channel Setup","text":"

This example will demonstrate how to open a channel between parachain 2500 and parachain 2600, using Rococo Local as the relay chain.

"},{"location":"tutorials/interoperability/xcm-channels/para-to-para/#fund-sender-sovereign-account","title":"Fund Sender Sovereign Account","text":"

The sovereign account for parachain 2500 on the relay chain must be funded so it can take care of any XCM transact fees.

Use Polkadot.js Apps UI to connect to the relay chain and transfer funds from your account to the parachain 2500 sovereign account.

Calculating Parachain Sovereign Account

To generate the sovereign account address for a parachain, you'll need to follow these steps:

  1. Determine if the parachain is an \"up/down\" chain (parent or child) or a \"sibling\" chain:

    • Up/down chains use the prefix 0x70617261 (which decodes to b\"para\")

    • Sibling chains use the prefix 0x7369626c (which decodes to b\"sibl\")

  2. Calculate the u32 scale encoded value of the parachain ID:

    • Parachain 2500 would be encoded as c4090000
  3. Combine the prefix and parachain ID encoding to form the full sovereign account address:

    The sovereign account of parachain 2500 in relay chain will be 0x70617261c4090000000000000000000000000000000000000000000000000000 and the SS58 format of this address is 5Ec4AhPSY2GEE4VoHUVheqv5wwq2C1HMKa7c9fVJ1WKivX1Y

To perform this conversion, you can also use the \"Para ID\" to Address section in Substrate Utilities.
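
The same derivation can be scripted. Below is a sketch using @polkadot/util and @polkadot/util-crypto that follows the prefix-plus-encoded-ID scheme described above:

import { bnToU8a, stringToU8a, u8aConcat } from '@polkadot/util';\nimport { encodeAddress } from '@polkadot/util-crypto';\n\n// Sovereign account of a parachain as seen from the relay chain\nfunction paraSovereignAccount(paraId: number): string {\n  const prefix = stringToU8a('para'); // 0x70617261; use 'sibl' between siblings\n  const encodedId = bnToU8a(paraId, { bitLength: 32, isLe: true });\n  // Pad the 8 prefix+ID bytes with zeros to a 32-byte account ID\n  const accountId = u8aConcat(prefix, encodedId, new Uint8Array(24));\n  return encodeAddress(accountId, 42); // generic Substrate SS58 prefix\n}\n\nconsole.log(paraSovereignAccount(2500));\n// 5Ec4AhPSY2GEE4VoHUVheqv5wwq2C1HMKa7c9fVJ1WKivX1Y\n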

"},{"location":"tutorials/interoperability/xcm-channels/para-to-para/#create-channel-opening-extrinsic","title":"Create Channel Opening Extrinsic","text":"
  1. In Polkadot.js Apps, connect to the relay chain, navigate to the Developer dropdown and select the Extrinsics option

  2. Construct an hrmpInitOpenChannel extrinsic call

    1. Select the hrmp pallet
    2. Choose the hrmpInitOpenChannel extrinsic
    3. Fill in the parameters
      • recipient - parachain ID of the target chain (in this case, 2600)
      • proposedMaxCapacity - max number of messages that can be pending in the channel at once
      • proposedMaxMessageSize - max message size that could be put into the channel
    4. Copy the encoded call data. The encoded call data for opening a channel with parachain 2600 is 0x3c00280a00000800000000001000.
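
The encoded call data can also be produced programmatically. A sketch with @polkadot/api connected to the relay chain; note that pallet and call indices, and therefore the resulting hex, depend on the runtime:

import { ApiPromise, WsProvider } from '@polkadot/api';\n\nconst api = await ApiPromise.create({\n  provider: new WsProvider('ws://127.0.0.1:9944'), // relay chain (assumed)\n});\n\n// hrmpInitOpenChannel(recipient, proposedMaxCapacity, proposedMaxMessageSize)\nconst call = api.tx.hrmp.hrmpInitOpenChannel(2600, 8, 1048576);\nconsole.log(call.method.toHex()); // e.g. 0x3c00280a00000800000000001000\n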
"},{"location":"tutorials/interoperability/xcm-channels/para-to-para/#crafting-and-submitting-the-xcm-message-from-the-sender","title":"Crafting and Submitting the XCM Message from the Sender","text":"

To initiate the HRMP channel opening process, you need to create an XCM message that includes the encoded hrmpInitOpenChannel call data from the previous step. This message will be sent from your parachain to the relay chain.

This example uses the sudo pallet to dispatch the extrinsic. Verify the XCM configuration of the parachain you're working with and ensure you're using an origin with the necessary privileges to execute the polkadotXcm.send extrinsic.

The XCM message should contain the following instructions:

  • WithdrawAsset - withdraws assets from the origin's ownership and places them in the Holding Register
  • BuyExecution - pays for the execution of the current message using the assets in the Holding Register
  • Transact - execute the encoded transaction call
  • RefundSurplus - increases the Refunded Weight Register to the value of the Surplus Weight Register, attempting to reclaim any excess fees paid via BuyExecution
  • DepositAsset - subtracts assets from the Holding Register and deposits equivalent on-chain assets under the specified beneficiary's ownership

Note

For more detailed information about XCM's functionality, complexities, and instruction set, refer to the xcm-format documentation.

In essence, this process withdraws funds from the parachain's sovereign account to the XCVM Holding Register, then uses these funds to purchase execution time for the XCM Transact instruction, executes Transact, refunds any unused execution time and deposits any remaining funds into a specified account.
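
For reference, the full message can be assembled and dispatched programmatically. The following @polkadot/api sketch mirrors the instruction sequence above; the endpoint, fee amount, and weight values are assumptions, and the exact type structure varies by runtime and XCM version, so treat this as a sketch rather than a drop-in script:

import { ApiPromise, WsProvider } from '@polkadot/api';\nimport { Keyring } from '@polkadot/keyring';\n\n// Connect to parachain 2500 (assumed local endpoint)\nconst api = await ApiPromise.create({\n  provider: new WsProvider('ws://127.0.0.1:9944'),\n});\nconst sudoKey = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');\n\n// Relay chain native asset, as seen by the relay chain executor\nconst fees = {\n  id: { Concrete: { parents: 0, interior: 'Here' } },\n  fun: { Fungible: 1_000_000_000_000n }, // amount is illustrative\n};\n\n// Sovereign account of parachain 2500 (see previous section)\nconst sovereign =\n  '0x70617261c4090000000000000000000000000000000000000000000000000000';\n\nconst message = {\n  V3: [\n    { WithdrawAsset: [fees] },\n    { BuyExecution: { fees, weightLimit: 'Unlimited' } },\n    {\n      Transact: {\n        originKind: 'Native',\n        requireWeightAtMost: { refTime: 1_000_000_000n, proofSize: 65536n },\n        call: { encoded: '0x3c00280a00000800000000001000' }, // hrmpInitOpenChannel\n      },\n    },\n    'RefundSurplus',\n    {\n      DepositAsset: {\n        assets: { Wild: { AllCounted: 1 } },\n        beneficiary: {\n          parents: 0,\n          interior: { X1: { AccountId32: { network: null, id: sovereign } } },\n        },\n      },\n    },\n  ],\n};\n\n// Send the message up to the relay chain\nconst dest = { V3: { parents: 1, interior: 'Here' } };\nawait api.tx.sudo\n  .sudo(api.tx.polkadotXcm.send(dest, message))\n  .signAndSend(sudoKey);\n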

To send the XCM message to the relay chain, connect to parachain 2500 in Polkadot.js Apps. Fill in the required parameters as shown in the image below, ensuring that you:

  1. Replace the call field with your encoded hrmpInitOpenChannel call data from the previous step
  2. Use the correct beneficiary information
  3. Click the Submit Transaction button to dispatch the XCM message to the relay chain

Note

The exact process and parameters for submitting this XCM message may vary depending on your specific parachain and relay chain configurations. Always refer to the most current documentation for your particular network setup.

After submitting the XCM message to initiate the HRMP channel opening, you should verify that the request was successful. Follow these steps to check the status of your channel request:

  1. Using Polkadot.js Apps, connect to the relay chain and navigate to the Developer dropdown, then select the Chain state option

  2. Query the HRMP open channel requests

    1. Select hrmp
    2. Choose the hrmpOpenChannelRequests call
    3. Click the + button to execute the query
    4. Check the status of all pending channel requests

If your channel request was successful, you should see an entry for your parachain ID in the list of open channel requests. This confirms that your request has been properly registered on the relay chain and is awaiting acceptance by the target parachain.
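
The same check can be scripted by iterating the hrmpOpenChannelRequests storage map, as in this @polkadot/api sketch:

import { ApiPromise, WsProvider } from '@polkadot/api';\n\nconst api = await ApiPromise.create({\n  provider: new WsProvider('ws://127.0.0.1:9944'), // relay chain (assumed)\n});\n\n// Each entry is keyed by an HrmpChannelId { sender, recipient }\nconst requests = await api.query.hrmp.hrmpOpenChannelRequests.entries();\nfor (const [key, value] of requests) {\n  console.log(key.args[0].toHuman(), value.toHuman());\n}\n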

"},{"location":"tutorials/interoperability/xcm-channels/para-to-para/#procedure-for-accepting-hrmp-channel","title":"Procedure for Accepting HRMP Channel","text":"

For the channel to be fully established, the target parachain must accept the channel request by submitting an XCM message to the relay chain.

"},{"location":"tutorials/interoperability/xcm-channels/para-to-para/#fund-receiver-sovereign-account","title":"Fund Receiver Sovereign Account","text":"

Before proceeding, ensure that the sovereign account of parachain 2600 on the relay chain is funded. This account will be responsible for covering any XCM transact fees. To fund the account, follow the same process described in the previous section, Fund Sender Sovereign Account.

"},{"location":"tutorials/interoperability/xcm-channels/para-to-para/#create-channel-accepting-extrinsic","title":"Create Channel Accepting Extrinsic","text":"
  1. In Polkadot.js Apps, connect to the relay chain, navigate to the Developer dropdown and select the Extrinsics option

  2. Construct an hrmpAcceptOpenChannel extrinsic call

    1. Select the hrmp pallet
    2. Choose the hrmpAcceptOpenChannel extrinsic
    3. Fill in the parameters:
      • sender - parachain ID of the requesting chain (in this case, 2500)
    4. Copy the encoded call data. The encoded call data for accepting a channel with parachain 2500 should be 0x3c01c4090000.
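
As with the opening call, the acceptance call data can be generated programmatically; since pallet and call indices are runtime-specific, the resulting hex may differ on other networks:

import { ApiPromise, WsProvider } from '@polkadot/api';\n\nconst api = await ApiPromise.create({\n  provider: new WsProvider('ws://127.0.0.1:9944'), // relay chain (assumed)\n});\n\n// hrmpAcceptOpenChannel(sender)\nconst accept = api.tx.hrmp.hrmpAcceptOpenChannel(2500);\nconsole.log(accept.method.toHex()); // e.g. 0x3c01c4090000\n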
"},{"location":"tutorials/interoperability/xcm-channels/para-to-para/#crafting-and-submitting-the-xcm-message-from-the-receiver","title":"Crafting and Submitting the XCM Message from the Receiver","text":"

To accept the HRMP channel opening, you need to create and submit an XCM message that includes the encoded hrmpAcceptOpenChannel call data from the previous step. This process is similar to the one described in the previous section, Crafting and Submitting the XCM Message, with a few key differences:

  • Use the encoded call data for hrmpAcceptOpenChannel obtained in Step 2 of this section
  • In the last XCM instruction (DepositAsset), set the beneficiary to parachain 2600's sovereign account to receive any surplus funds

To send the XCM message to the relay chain, connect to parachain 2600 in Polkadot.js Apps. Fill in the required parameters as shown in the image below, ensuring that you:

  1. Replace the call field with your encoded hrmpAcceptOpenChannel call data from the previous step
  2. Use the correct beneficiary information
  3. Click the Submit Transaction button to dispatch the XCM message to the relay chain

After submitting the XCM message to accept the HRMP channel opening, verify that the channel has been set up correctly.

  1. Using Polkadot.js Apps, connect to the relay chain and navigate to the Developer dropdown, then select the Chain state option

  2. Query the HRMP channels

    1. Select hrmp
    2. Choose the hrmpChannels call
    3. Click the + button to execute the query
    4. Check the status of the opened channel

If the channel has been successfully established, you should see the channel details in the query results.

By following these steps, you will have successfully accepted the HRMP channel request and established a unidirectional channel between the two parachains.

Note

Remember that for full bidirectional communication, you'll need to repeat this process in the opposite direction, with parachain 2600 initiating a channel request to parachain 2500.

"},{"location":"tutorials/interoperability/xcm-channels/para-to-system/","title":"Opening HRMP Channels with System Parachains","text":""},{"location":"tutorials/interoperability/xcm-channels/para-to-system/#introduction","title":"Introduction","text":"

While establishing Horizontal Relay-routed Message Passing (HRMP) channels between regular parachains involves a two-step request and acceptance procedure, opening channels with system parachains follows a more straightforward approach.

System parachains are specialized chains that provide core functionality to the Polkadot network. Examples include Asset Hub for cross-chain asset transfers and Bridge Hub for connecting to external networks. Given their critical role, establishing communication channels with these system parachains has been optimized for efficiency and ease of use.

Any parachain can establish a bidirectional channel with a system chain through a single operation, requiring just one XCM message from the parachain to the relay chain.

"},{"location":"tutorials/interoperability/xcm-channels/para-to-system/#prerequisites","title":"Prerequisites","text":"

To successfully complete this process, you'll need to have the following in place:

  • Access to a blockchain network consisting of:
    • A relay chain
    • A parachain
    • An Asset Hub system chain
  • A wallet containing enough funds to cover transaction fees on each of the participating chains
"},{"location":"tutorials/interoperability/xcm-channels/para-to-system/#procedure-for-establishing-hrmp-channel","title":"Procedure for Establishing HRMP Channel","text":"

This guide demonstrates opening an HRMP channel between parachain 2500 and system chain Asset Hub (parachain 1000) on the Rococo Local relay chain.

"},{"location":"tutorials/interoperability/xcm-channels/para-to-system/#fund-parachain-sovereign-account","title":"Fund Parachain Sovereign Account","text":"

The sovereign account for parachain 2500 on the relay chain must be funded so it can take care of any XCM transact fees.

Use Polkadot.js Apps UI to connect to the relay chain and transfer funds from your account to the parachain 2500 sovereign account.

Calculating Parachain Sovereign Account

To generate the sovereign account address for a parachain, you'll need to follow these steps:

  1. Determine if the parachain is an \"up/down\" chain (parent or child) or a \"sibling\" chain:

    • Up/down chains use the prefix 0x70617261 (which decodes to b\"para\")

    • Sibling chains use the prefix 0x7369626c (which decodes to b\"sibl\")

  2. Calculate the u32 scale encoded value of the parachain ID:

    • Parachain 2500 would be encoded as c4090000
  3. Combine the prefix and parachain ID encoding to form the full sovereign account address:

    The sovereign account of parachain 2500 in relay chain will be 0x70617261c4090000000000000000000000000000000000000000000000000000 and the SS58 format of this address is 5Ec4AhPSY2GEE4VoHUVheqv5wwq2C1HMKa7c9fVJ1WKivX1Y

To perform this conversion, you can also use the \"Para ID\" to Address section in Substrate Utilities.

"},{"location":"tutorials/interoperability/xcm-channels/para-to-system/#create-establish-channel-with-system-extrinsic","title":"Create Establish Channel with System Extrinsic","text":"
  1. In Polkadot.js Apps, connect to the relay chain, navigate to the Developer dropdown and select the Extrinsics option

  2. Construct an establish_channel_with_system extrinsic call

    1. Select the hrmp pallet
    2. Choose the establish_channel_with_system extrinsic
    3. Fill in the parameters:
      • target_system_chain - parachain ID of the target system chain (in this case, 1000)
    4. Copy the encoded call data. The encoded call data for establishing a channel with system parachain 1000 should be 0x3c0ae8030000.
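
This call data can likewise be generated programmatically, provided the connected relay chain runtime includes the establish_channel_with_system extrinsic:

import { ApiPromise, WsProvider } from '@polkadot/api';\n\nconst api = await ApiPromise.create({\n  provider: new WsProvider('ws://127.0.0.1:9944'), // relay chain (assumed)\n});\n\n// establishChannelWithSystem(targetSystemChain)\nconst call = api.tx.hrmp.establishChannelWithSystem(1000);\nconsole.log(call.method.toHex()); // e.g. 0x3c0ae8030000\n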
"},{"location":"tutorials/interoperability/xcm-channels/para-to-system/#crafting-and-submitting-the-xcm-message","title":"Crafting and Submitting the XCM Message","text":"

Connect to parachain 2500 using Polkadot.js Apps to send the XCM message to the relay chain. Input the necessary parameters as illustrated in the image below. Make sure to:

  1. Insert your previously encoded establish_channel_with_system call data into the call field
  2. Provide beneficiary details
  3. Dispatch the XCM message to the relay chain by clicking the Submit Transaction button

Note

The exact process and parameters for submitting this XCM message may vary depending on your specific parachain and relay chain configurations. Always refer to the most current documentation for your particular network setup.

After successfully submitting the XCM message to the relay chain, two HRMP channels should be created, establishing bidirectional communication between parachain 2500 and system chain 1000. To verify this, follow these steps:

  1. Using Polkadot.js Apps, connect to the relay chain and navigate to the Developer dropdown, then select Chain state

  2. Query the HRMP channels

    1. Select hrmp from the options
    2. Choose the hrmpChannels call
    3. Click the + button to execute the query
  3. Examine the query results. You should see output similar to the following:

    [\n    [\n        [\n            {\n                \"sender\": 1000,\n                \"recipient\": 2500\n            }\n        ],\n        {\n            \"maxCapacity\": 8,\n            \"maxTotalSize\": 8192,\n            \"maxMessageSize\": 1048576,\n            \"msgCount\": 0,\n            \"totalSize\": 0,\n            \"mqcHead\": null,\n            \"senderDeposit\": 0,\n            \"recipientDeposit\": 0\n        }\n    ],\n    [\n        [\n            {\n                \"sender\": 2500,\n                \"recipient\": 1000\n            }\n        ],\n        {\n            \"maxCapacity\": 8,\n            \"maxTotalSize\": 8192,\n            \"maxMessageSize\": 1048576,\n            \"msgCount\": 0,\n            \"totalSize\": 0,\n            \"mqcHead\": null,\n            \"senderDeposit\": 0,\n            \"recipientDeposit\": 0\n        }\n    ]\n]\n

The output confirms the successful establishment of two HRMP channels:

  • From chain 1000 (system chain) to chain 2500 (parachain)
  • From chain 2500 (parachain) to chain 1000 (system chain)

This bidirectional channel enables direct communication between the system chain and the parachain, allowing for cross-chain message passing.

"},{"location":"tutorials/interoperability/xcm-transfers/","title":"XCM Transfers","text":"

Discover comprehensive tutorials that guide you through performing asset transfers between distinct consensus systems. These tutorials leverage XCM (Cross-Consensus Messaging) technology, which enables cross-chain communication and asset exchanges across different blockchain networks. Whether you're working within the same ecosystem or bridging multiple systems, XCM ensures secure, efficient, and interoperable solutions.

By mastering XCM-based transfers, you'll unlock new possibilities for building cross-chain applications and expanding blockchain utility. Learn the methods, tools, and best practices for testing XCM-powered transfers, ensuring your systems achieve robust interoperability.

"},{"location":"tutorials/interoperability/xcm-transfers/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/","title":"From Relay Chain to Parachain","text":""},{"location":"tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/#introduction","title":"Introduction","text":"

Cross-Consensus Messaging (XCM) facilitates asset transfers both within the same consensus system and between different ones, such as between a relay chain and its parachains. For cross-system transfers, two main methods are available:

  • Asset teleportation - a simple and efficient method involving only the source and destination chains, ideal for systems with a high level of trust
  • Reserve-backed transfers - involve a trusted reserve that holds the real assets and mints derivative tokens to track ownership. This method is suited to systems with lower trust levels

In this tutorial, you will learn how to perform a reserve-backed transfer of DOT between a relay chain (Polkadot) and a parachain (Astar).

"},{"location":"tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/#prerequisites","title":"Prerequisites","text":"

When adapting this tutorial for other chains, before you can send messages between different consensus systems, you must first open HRMP channels. For detailed guidance, refer to the XCM Channels article.

This tutorial uses Chopsticks to fork a relay chain and a parachain connected via HRMP channels. For more details on this setup, see the XCM Testing section on the Chopsticks page.

"},{"location":"tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/#setup","title":"Setup","text":"

To simulate XCM operations between different consensus systems, start by forking the network with the following command:

chopsticks xcm -r polkadot -p astar\n
After executing this command, the relay chain and parachain will expose the following WebSocket endpoints:

  • Polkadot (relay chain) - ws://localhost:8001
  • Astar (parachain) - ws://localhost:8000

You can perform the reserve-backed transfer using either the Polkadot.js Apps interface or the Polkadot API, depending on your preference. Both methods provide the same functionality to facilitate asset transfers between the relay chain and parachain.

"},{"location":"tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/#using-polkadotjs-apps","title":"Using Polkadot.js Apps","text":"

Open two browser tabs and connect to these endpoints using the Polkadot.js Apps interface:

a. Add the custom endpoint for each chain

b. Click Switch to connect to the respective network

This reserve-backed transfer method facilitates asset transfers from a local chain to a destination chain by trusting a third party called a reserve to store the real assets. Fees on the destination chain are deducted from the asset specified in the assets vector at the fee_asset_item index, covering up to the specified weight_limit. The operation fails if the required weight exceeds this limit, potentially putting the transferred assets at risk.

The following steps outline how to execute a reserve-backed transfer from the Polkadot relay chain to the Astar parachain.

"},{"location":"tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/#from-the-relay-chain-perspective","title":"From the Relay Chain Perspective","text":"
  1. Navigate to the Extrinsics page

    1. Click on the Developer tab from the top navigation bar
    2. Select Extrinsics from the dropdown

  2. Select xcmPallet

  3. Select the limitedReservedAssetTransfer extrinsic from the dropdown list

  4. Fill out the required fields:

    1. dest - specifies the destination context for the assets. Commonly set to [Parent, Parachain(..)] for parachain-to-parachain transfers or [Parachain(..)] for relay chain-to-parachain transfers. In this case, since the transfer is from a relay chain to a parachain, the destination (Location) is the following:

      { parents: 0, interior: { X1: [{ Parachain: 2006 }] } }\n
    2. beneficiary - defines the recipient of the assets within the destination context, typically represented as an AccountId32 value. This example uses the following account present in the destination chain:

      X2mE9hCGX771c3zzV6tPa8U2cDz4U4zkqUdmBrQn83M3cm7\n
    3. assets - lists the assets to be withdrawn, including those designated for fee payment on the destination chain

    4. feeAssetItem - indicates the index of the asset within the assets list to be used for paying fees
    5. weightLimit - specifies the weight limit, if applicable, for the fee payment on the remote chain
    6. Click on the Submit Transaction button to send the transaction

After submitting the transaction, verify that the xcmPallet.FeesPaid and xcmPallet.Sent events have been emitted:

"},{"location":"tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/#from-the-parachain-perspective","title":"From the Parachain Perspective","text":"

After submitting the transaction from the relay chain, confirm its success by checking the parachain's events. Look for the assets.Issued event, which verifies that the assets have been issued to the destination as expected:

"},{"location":"tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/#using-papi","title":"Using PAPI","text":"

To programmatically execute the reserve-backed asset transfer between the relay chain and the parachain, you can use Polkadot API (PAPI). PAPI is a robust toolkit that simplifies interactions with Polkadot-based chains. For this project, you'll first need to set up your environment, install necessary dependencies, and create a script to handle the transfer process.

  1. Start by creating a folder for your project:

    mkdir reserve-backed-asset-transfer\ncd reserve-backed-asset-transfer\n

  2. Initialize a Node.js project and install the required dependencies. Execute the following commands:

    npm init\nnpm install polkadot-api @polkadot-labs/hdkd @polkadot-labs/hdkd-helpers\n
  3. To enable static, type-safe APIs for interacting with the Polkadot and Astar chains, add their metadata to your project using PAPI:

    npx papi add dot -n polkadot\nnpx papi add astar -w wss://rpc.astar.network\n

    Note

    • dot and astar are arbitrary names you assign to the chains, allowing you to access their metadata information
    • The first command uses the well-known Polkadot chain, while the second connects to the Astar chain using its WebSocket endpoint
  4. Create an index.js file and insert the following code to configure the clients and handle the asset transfer

    // Import necessary modules from Polkadot API and helpers\nimport {\n  astar, // Astar chain metadata\n  dot, // Polkadot chain metadata\n  XcmVersionedLocation,\n  XcmVersionedAssets,\n  XcmV3Junction,\n  XcmV3Junctions,\n  XcmV3WeightLimit,\n  XcmV3MultiassetFungibility,\n  XcmV3MultiassetAssetId,\n} from '@polkadot-api/descriptors';\nimport { createClient } from 'polkadot-api';\nimport { sr25519CreateDerive } from '@polkadot-labs/hdkd';\nimport {\n  DEV_PHRASE,\n  entropyToMiniSecret,\n  mnemonicToEntropy,\n  ss58Decode,\n} from '@polkadot-labs/hdkd-helpers';\nimport { getPolkadotSigner } from 'polkadot-api/signer';\nimport { getWsProvider } from 'polkadot-api/ws-provider/web';\nimport { withPolkadotSdkCompat } from 'polkadot-api/polkadot-sdk-compat';\nimport { Binary } from 'polkadot-api';\n\n// Create Polkadot client using WebSocket provider for Polkadot chain\nconst polkadotClient = createClient(\n  withPolkadotSdkCompat(getWsProvider('ws://127.0.0.1:8001')),\n);\nconst dotApi = polkadotClient.getTypedApi(dot);\n\n// Create Astar client using WebSocket provider for Astar chain\nconst astarClient = createClient(\n  withPolkadotSdkCompat(getWsProvider('ws://localhost:8000')),\n);\nconst astarApi = astarClient.getTypedApi(astar);\n\n// Create keypair for Alice using dev phrase to sign transactions\nconst miniSecret = entropyToMiniSecret(mnemonicToEntropy(DEV_PHRASE));\nconst derive = sr25519CreateDerive(miniSecret);\nconst aliceKeyPair = derive('//Alice');\nconst alice = getPolkadotSigner(\n  aliceKeyPair.publicKey,\n  'Sr25519',\n  aliceKeyPair.sign,\n);\n\n// Define recipient (Dave) address on Astar chain\nconst daveAddress = 'X2mE9hCGX771c3zzV6tPa8U2cDz4U4zkqUdmBrQn83M3cm7';\nconst davePublicKey = ss58Decode(daveAddress)[0];\nconst idBenef = Binary.fromBytes(davePublicKey);\n\n// Define Polkadot Asset ID on Astar chain (example)\nconst polkadotAssetId = 340282366920938463463374607431768211455n;\n\n// Fetch asset balance of recipient (Dave) before transaction\nlet assetMetadata = await astarApi.query.Assets.Account.getValue(\n  polkadotAssetId,\n  daveAddress,\n);\nconsole.log('Asset balance before tx:', assetMetadata?.balance ?? 0);\n\n// Prepare and submit transaction to transfer assets from Polkadot to Astar\nconst tx = dotApi.tx.XcmPallet.limited_reserve_transfer_assets({\n  dest: XcmVersionedLocation.V3({\n    parents: 0,\n    interior: XcmV3Junctions.X1(\n      XcmV3Junction.Parachain(2006), // Destination is the Astar parachain\n    ),\n  }),\n  beneficiary: XcmVersionedLocation.V3({\n    parents: 0,\n    interior: XcmV3Junctions.X1(\n      XcmV3Junction.AccountId32({\n        // Beneficiary address on Astar\n        network: undefined,\n        id: idBenef,\n      }),\n    ),\n  }),\n  assets: XcmVersionedAssets.V3([\n    {\n      id: XcmV3MultiassetAssetId.Concrete({\n        parents: 0,\n        interior: XcmV3Junctions.Here(), // Asset from the sender's location\n      }),\n      fun: XcmV3MultiassetFungibility.Fungible(120000000000), // Asset amount to transfer\n    },\n  ]),\n  fee_asset_item: 0, // Asset used to pay transaction fees\n  weight_limit: XcmV3WeightLimit.Unlimited(), // No weight limit on transaction\n});\n\n// Sign and submit the transaction\ntx.signSubmitAndWatch(alice).subscribe({\n  next: async (event) => {\n    if (event.type === 'finalized') {\n      console.log('Transaction completed successfully');\n    }\n  },\n  error: console.error,\n  complete() {\n    polkadotClient.destroy(); // Clean up after transaction\n  },\n});\n\n// Wait for transaction to complete\nawait new Promise((resolve) => setTimeout(resolve, 20000));\n\n// Fetch asset balance of recipient (Dave) after transaction\nassetMetadata = await astarApi.query.Assets.Account.getValue(\n  polkadotAssetId,\n  daveAddress,\n);\nconsole.log('Asset balance after tx:', assetMetadata?.balance ?? 0);\n\n// Exit the process\nprocess.exit(0);\n

    Note

    To use this script with real-world blockchains, you'll need to update the WebSocket endpoint to the appropriate one, replace the Alice account with a valid account, and ensure the account has sufficient funds to cover transaction fees.

  5. Execute the script

    node index.js\n
  6. Check the terminal output. If the operation is successful, you should see the following message:

    node index.js\nAsset balance before tx: 0\nTransaction completed successfully\nAsset balance after tx: 119999114907n\n

"},{"location":"tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/#additional-resources","title":"Additional Resources","text":"

You can perform these operations using the Asset Transfer API for an alternative approach. Refer to the Asset Transfer API guide in the documentation for more details.

"},{"location":"tutorials/polkadot-sdk/","title":"Polkadot SDK Tutorials","text":"

The Polkadot SDK is a versatile framework for building custom blockchains, whether as standalone networks or as part of the Polkadot ecosystem. With its modular design and extensible tools, libraries, and runtime components, the SDK simplifies the process of creating parachains, system chains, and solochains.

Ready to create a parachain from the ground up? Start with the tutorials highlighted in the Build and Deploy a Parachain section.

"},{"location":"tutorials/polkadot-sdk/#build-and-deploy-a-parachain","title":"Build and Deploy a Parachain","text":"

Follow these key milestones to guide you through parachain development. Each step links to detailed tutorials for a deeper dive into each stage:

  • Install the Polkadot SDK - set up the necessary tools to begin building on Polkadot. This step will get your environment ready for parachain development

  • Start Developing Your Own Parachain - kickstart your development by setting up a local solochain. This tutorial will lay the foundation for building and customizing your own parachain within the Polkadot ecosystem

  • Prepare Your Parachain for Deployment - follow these steps to set up a local relay chain environment and connect your parachain, getting it ready for deployment on the Polkadot network

"},{"location":"tutorials/polkadot-sdk/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"tutorials/polkadot-sdk/#additional-resources","title":"Additional ResourcesView the Polkadot SDK Source Code","text":"

Check out the Polkadot SDK repository on GitHub to explore the source code and stay updated on the latest releases.

"},{"location":"tutorials/polkadot-sdk/parachains/","title":"Tutorials for Building Parachains with the Polkadot SDK","text":"

The Polkadot SDK enables you to build custom blockchains that can operate either independently or as part of the Polkadot network. These tutorials guide you through two main development paths: building a standalone chain (solochain) or creating a parachain that connects to Polkadot.

"},{"location":"tutorials/polkadot-sdk/parachains/#local-development","title":"Local Development","text":"

Start by learning the fundamentals through these local development tutorials:

  • Launch a Local Solochain - compile and run your first blockchain node
  • Connect Multiple Nodes - use predefined accounts to create a basic network
  • Spin Up Your Nodes - set up a network with custom validators and Aura consensus
  • Upgrade a Running Network - perform forkless runtime upgrades to add features
"},{"location":"tutorials/polkadot-sdk/parachains/#parachain-development","title":"Parachain Development","text":"

Ready to connect your parachain to Polkadot? Follow these tutorials to build and deploy a parachain:

  • Prepare a Relay Chain - set up a local relay chain for testing
  • Prepare a Parachain - configure and connect your parachain to the relay chain
  • Acquire a TestNet Slot - deploy your parachain to the Paseo TestNet
"},{"location":"tutorials/polkadot-sdk/parachains/#key-takeaways","title":"Key Takeaways","text":"

Through these tutorials, you'll gain practical experience with:

  • Node operation and network setup
  • Chain configuration and consensus
  • Runtime development and upgrades
  • Parachain deployment and management

Each tutorial builds upon previous concepts while providing flexibility to focus on your specific development goals, whether that's building a standalone chain or a fully integrated parachain.

"},{"location":"tutorials/polkadot-sdk/parachains/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/","title":"Connect to a Relay Chain","text":"

Ready to connect your parachain to Polkadot? Take the next step in your parachain development journey by learning how to connect your custom chain to a relay chain. These tutorials will guide you through the core processes of parachain integration, covering:

  • Relay chain setup and configuration
  • Registering and acquiring a parachain slot
  • Preparing the genesis state and runtime
  • Configuring collator nodes for network operation
  • Deploying your parachain to a TestNet

Each tutorial is designed to build on foundational concepts, offering a clear and structured progression from local development to seamless integration with Polkadot\u2019s public network. Whether you\u2019re aiming to test locally or deploy on TestNet, these guides will ensure you\u2019re equipped with the skills to succeed.

"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/acquire-a-testnet-slot/","title":"Acquire a TestNet Slot","text":""},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/acquire-a-testnet-slot/#introduction","title":"Introduction","text":"

This tutorial demonstrates deploying a parachain on a public test network like the Paseo network. Public TestNets have a higher bar to entry than a private network but represent an essential step in preparing a parachain project to move into a production network.

"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/acquire-a-testnet-slot/#prerequisites","title":"Prerequisites","text":"

Before you start, you need to have the following prerequisites:

  • You know how to generate and modify chain specification files as described in the Generate Chain Specs section
  • You know how to generate and store keys as described in the Spin Your Own Nodes tutorial
  • You have completed the Prepare a Local Relay Chain and the Prepare a Local Parachain tutorials on your local computer
"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/acquire-a-testnet-slot/#get-started-with-an-account-and-tokens","title":"Get Started with an Account and Tokens","text":"

To perform any action on Paseo, you need PAS tokens, which can be requested from the Polkadot Faucet. Also, to store the tokens, you must have access to a Substrate-compatible digital currency wallet. Development keys and accounts should never hold assets of actual value and should not be used for production. Many options are available for holding digital currency\u2014including hardware wallets and browser-based applications\u2014and some are more reputable than others. You should do your own research before selecting one.

However, you can use the Polkadot.js Apps interface to get you started for testing purposes.

To prepare an account, follow these steps:

  1. Open the Polkadot.js Apps interface and connect to the Paseo network

  2. Navigate to the Accounts section

    1. Click on the Accounts tab in the top menu
    2. Select the Accounts option from the dropdown menu

  3. Copy the address of the account you want to use for the parachain deployment

  4. Visit the Polkadot Faucet and paste the copied address in the input field. Ensure that the network is set to Paseo and click on the Get some PASs button

    After a few seconds, you will receive 100 PAS tokens in your account.
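
If you want to confirm the balance programmatically rather than in the Apps UI, a minimal sketch with @polkadot/api might look like this; the endpoint and address placeholders are assumptions you must replace:

import { ApiPromise, WsProvider } from '@polkadot/api';\n\nasync function main() {\n  // Replace with the Paseo RPC endpoint you use and the account you funded from the faucet\n  const api = await ApiPromise.create({ provider: new WsProvider('wss://INSERT_PASEO_ENDPOINT') });\n  const { data } = await api.query.system.account('INSERT_YOUR_ADDRESS');\n  console.log('Free balance:', data.free.toHuman());\n  await api.disconnect();\n}\nmain().catch(console.error);\n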

"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/acquire-a-testnet-slot/#reserve-a-parachain-identifier","title":"Reserve a Parachain Identifier","text":"

You must reserve a parachain identifier before registering a parathread on Paseo. The steps are similar to the ones you followed in Prepare a Local Parachain to reserve an identifier on the local relay chain. However, for the public TestNet, you'll be assigned the next available identifier.

To reserve a parachain identifier, follow these steps:

  1. Navigate to the Parachains section

    1. Click on the Network tab in the top menu
    2. Select the Parachains option from the dropdown menu

  2. Register a parathread

    1. Select the Parathreads tab
    2. Click on the + ParaId button

  3. Review the transaction and click on the + Submit button

    In this example, the next available parachain identifier is 4508.

  4. After submitting the transaction, you can navigate to the Explorer tab and check the list of recent events for successful registrar.Reserved
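
If you prefer to reserve the identifier from code instead of the Apps UI, a minimal, non-authoritative sketch with @polkadot/api follows. The endpoint and secret URI are placeholders, and registrar.reserve assigns the next free identifier to the signing account:

import { ApiPromise, WsProvider, Keyring } from '@polkadot/api';\n\nasync function main() {\n  const api = await ApiPromise.create({ provider: new WsProvider('wss://INSERT_PASEO_ENDPOINT') });\n  const keyring = new Keyring({ type: 'sr25519' });\n  const signer = keyring.addFromUri('INSERT_YOUR_SECRET_URI'); // use the funded account, not a dev key\n  const unsub = await api.tx.registrar.reserve().signAndSend(signer, ({ status, events }) => {\n    if (status.isInBlock) {\n      // The registrar.Reserved event carries the ParaID you were assigned\n      events.forEach(({ event }) => console.log(`${event.section}.${event.method}: ${event.data.toString()}`));\n      unsub();\n    }\n  });\n}\nmain().catch(console.error);\n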

"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/acquire-a-testnet-slot/#modify-the-chain-specification-file","title":"Modify the Chain Specification File","text":"

The files required to register a parachain must specify the correct relay chain to connect to and the parachain identifier you have been assigned. To make these changes, you must build and modify the chain specification file for your parachain. In this tutorial, the relay chain is paseo, and the parachain identifier is 4508.

To modify the chain specification:

  1. Generate the plain text chain specification for the parachain template node by running the following command:

    ./target/release/parachain-template-node build-spec \\\n  --disable-default-bootnode > plain-parachain-chainspec.json\n
  2. Open the plain text chain specification for the parachain template node in a text editor

  3. Set relay_chain to paseo and para_id to the identifier you've been assigned. For example, if your reserved identifier is 4508, set the para_id field to 4508:

    \"...\": \"...\",\n\"relay_chain\": \"paseo\",\n\"para_id\": 4508,\n        \"...\": {}\n    }\n}\n
  4. Set the parachainId to the parachain identifier that you previously reserved:

    {\n    \"...\": \"...\",\n\"genesis\": {\n    \"runtime\": {\n        \"...\": {},\n        \"parachainInfo\": {\n            \"parachainId\": 4508\n        },\n        },\n        \"...\": {}\n    }\n}\n
  5. Add the public key for your account to the session keys section. Each configured session key will require a running collator:

    {\n    \"...\": \"...\",\n\"genesis\": {\n    \"runtime\": {\n        \"...\": {},\n            \"session\": {\n                \"keys\": [\n                    [\n                        \"5HErbKmL5JmUKDVsH1aGyXTGZb4i9iaNsFhSgkNDr8qp2Dvj\",\n                        \"5HErbKmL5JmUKDVsH1aGyXTGZb4i9iaNsFhSgkNDr8qp2Dvj\",\n                        {\n                            \"aura\": \"5HErbKmL5JmUKDVsH1aGyXTGZb4i9iaNsFhSgkNDr8qp2Dvj\"\n                        }\n                    ]\n                ]\n            }\n        },\n        \"...\": {}\n    }\n}\n
  6. Save your changes and close the plain text chain specification file

  7. Generate a raw chain specification file from the modified chain specification file:

    ./target/release/parachain-template-node build-spec \\\n  --chain plain-parachain-chainspec.json \\\n  --disable-default-bootnode \\\n  --raw > raw-parachain-chainspec.json\n

    After running the command, you will see the following output:

    ./target/release/parachain-template-node build-spec --chain plain-parachain-chainspec.json --disable-default-bootnode --raw > raw-parachain-chainspec.json 2024-09-11 09:48:15 Building chain spec 2024-09-11 09:48:15 assembling new collators for new session 0 at #0 2024-09-11 09:48:15 assembling new collators for new session 1 at #0

"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/acquire-a-testnet-slot/#export-required-files","title":"Export Required Files","text":"

To prepare the parachain collator to be registered on Paseo, follow these steps:

  1. Export the Wasm runtime for the parachain by running a command similar to the following:

    ./target/release/parachain-template-node export-genesis-wasm \\\n  --chain raw-parachain-chainspec.json para-4508-wasm\n
  2. Export the genesis state for the parachain by running a command similar to the following:

    ./target/release/parachain-template-node export-genesis-state \\\n  --chain raw-parachain-chainspec.json para-4508-state\n
"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/acquire-a-testnet-slot/#start-the-collator-node","title":"Start the Collator Node","text":"

The collator's ports must be publicly accessible and discoverable so that parachain nodes can peer with Paseo validator nodes and produce blocks. You can specify the ports with the --port command-line option. For example, you can start the collator with a command similar to the following:

./target/release/parachain-template-node --collator \\\n  --chain raw-parachain-chainspec.json \\\n  --base-path /tmp/parachain/pubs-demo \\\n  --port 50333 \\\n  --rpc-port 8855 \\\n  -- \\\n  --execution wasm \\\n  --chain paseo \\\n  --port 50343 \\\n  --rpc-port 9988\n

In this example, the first --port setting specifies the port for the collator node, and the second --port specifies the port for the embedded relay chain node. The first --rpc-port setting specifies the port you can use to connect to the collator, and the second --rpc-port specifies the port for connecting to the embedded relay chain.

"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/acquire-a-testnet-slot/#obtain-coretime","title":"Obtain Coretime","text":"

With your parachain collator operational, the next step is acquiring coretime. This is essential for ensuring your parachain's security through the relay chain. Agile Coretime enhances Polkadot's resource management, offering developers greater economic adaptability. Once you have configured your parachain, you can follow two paths:

  • Bulk coretime is purchased via the Broker pallet on the respective coretime system parachain. You can purchase bulk coretime on the coretime chain and assign the purchased core to the registered ParaID
  • On-demand coretime is ordered via the OnDemandAssignment pallet, which is located on the respective relay chain

For more information on coretime, refer to the Coretime documentation.
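
As a rough sketch of the on-demand path only: you can place an order from code, but note that the pallet and call names below (onDemandAssignmentProvider.placeOrderAllowDeath) are an assumption that varies across relay chain runtime versions, so verify them against your target chain's metadata before use:

import { ApiPromise, WsProvider, Keyring } from '@polkadot/api';\n\nasync function main() {\n  const api = await ApiPromise.create({ provider: new WsProvider('wss://INSERT_RELAY_CHAIN_ENDPOINT') });\n  const keyring = new Keyring({ type: 'sr25519' });\n  const signer = keyring.addFromUri('INSERT_YOUR_SECRET_URI');\n  // maxAmount caps what you are willing to pay for the core; 10000000000 = 1 unit at 10 decimals (assumed)\n  const tx = api.tx.onDemandAssignmentProvider.placeOrderAllowDeath('10000000000', 4508);\n  await tx.signAndSend(signer, ({ status }) => {\n    if (status.isInBlock) console.log('On-demand order included in block', status.asInBlock.toHex());\n  });\n}\nmain().catch(console.error);\n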

"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-parachain/","title":"Prepare a Parachain","text":""},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-parachain/#introduction","title":"Introduction","text":"

This tutorial illustrates reserving a parachain identifier with a local relay chain and connecting a local parachain to that relay chain. By completing this tutorial, you will accomplish the following objectives:

  • Compile a local parachain node
  • Reserve a unique identifier with the local relay chain for the parachain to use
  • Configure a chain specification for the parachain
  • Export the runtime and genesis state for the parachain
  • Start the local parachain and see that it connects to the local relay chain
"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-parachain/#prerequisites","title":"Prerequisites","text":"

Before you begin, ensure that you have the following prerequisites:

  • Configured a local relay chain with two validators as described in the Prepare a Relay Chain tutorial
  • You are aware that parachain versions and dependencies are tightly coupled with the version of the relay chain they connect to and know the software version you used to configure the relay chain
"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-parachain/#build-the-parachain-template","title":"Build the Parachain Template","text":"

This tutorial uses the Polkadot SDK Parachain Template to illustrate launching a parachain that connects to a local relay chain. The parachain template is similar to the Solochain Template used in development. You can also use the parachain template as the starting point for developing a custom parachain project.

To build the parachain template, follow these steps:

  1. Clone the polkadot-sdk-parachain-template repository

    git clone https://github.com/paritytech/polkadot-sdk-parachain-template.git\n

    Note

    Ensure that you clone the correct branch of the repository that matches the version of the relay chain you are connecting to.

  2. Change the directory to the cloned repository

    cd polkadot-sdk-parachain-template\n
  3. Build the parachain template collator

    cargo build --release\n

    Note

    Depending on your system\u2019s performance, compiling the node can take a few minutes.

"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-parachain/#reserve-a-parachain-identifier","title":"Reserve a Parachain Identifier","text":"

Every parachain must reserve a unique identifier, known as a ParaID, to connect to its specific relay chain. Each relay chain manages its own set of unique identifiers for the parachains that connect to it. The identifier is called a ParaID because the same identifier can identify a slot occupied by either a parachain or a parathread.

Note that you must have an account with sufficient funds to reserve a slot on a relay chain. You can determine the number of tokens a specific relay chain requires by checking the ParaDeposit configuration in the paras_registrar pallet for that relay chain. The following example shows a ParaDeposit requirement of 40 native tokens:

parameter_types! {\n    pub const ParaDeposit: Balance = 40 * UNITS;\n}\n\nimpl paras_registrar::Config for Runtime {\n    type RuntimeOrigin = RuntimeOrigin;\n    type RuntimeEvent = RuntimeEvent;\n    type Currency = Balances;\n    type OnSwap = (Crowdloan, Slots);\n    type ParaDeposit = ParaDeposit;\n    type DataDepositPerByte = DataDepositPerByte;\n    type WeightInfo = weights::runtime_common_paras_registrar::WeightInfo<Runtime>;\n}\n

Each relay chain allocates identifiers by incrementing from 2000 for all chains that aren't system parachains. System parachains use a different method to allocate slot identifiers.
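
You can also read both values from a running relay chain node over RPC. A minimal sketch with @polkadot/api, assuming the local relay chain node from the Prepare a Relay Chain tutorial is reachable at ws://localhost:9944:

import { ApiPromise, WsProvider } from '@polkadot/api';\n\nasync function main() {\n  const api = await ApiPromise.create({ provider: new WsProvider('ws://localhost:9944') });\n  // ParaDeposit is exposed as a constant on the registrar pallet\n  console.log('ParaDeposit:', api.consts.registrar.paraDeposit.toHuman());\n  // The next identifier the registrar will assign (2000 on a fresh local relay chain)\n  const nextFree = await api.query.registrar.nextFreeParaId();\n  console.log('Next free ParaID:', nextFree.toHuman());\n  await api.disconnect();\n}\nmain().catch(console.error);\n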

To reserve a parachain identifier, follow these steps:

  1. Ensure your local relay chain validators are running. For further information, refer to the Prepare a Relay Chain tutorial

  2. Connect to a local relay chain node using the Polkadot.js Apps interface. If you have followed the Prepare a Relay Chain tutorial, you can access the Polkadot.js Apps interface at ws://localhost:9944

  3. Navigate to the Parachains section

    1. Click on the Network tab
    2. Select Parachains from the dropdown menu

  4. Register a parathread

    1. Select the Parathreads tab
    2. Click on the + ParaId button

  5. Fill in the required fields and click on the + Submit button

    Note

    The account used to reserve the identifier will be the account charged for the transaction and the origin account for the parathread associated with the identifier.

  6. After submitting the transaction, you can navigate to the Explorer tab and check the list of recent events for successful registrar.Reserved

You are now ready to prepare the chain specification and generate the files required for your parachain to connect to the relay chain using the reserved identifier (paraId 2000).

"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-parachain/#modify-the-default-chain-specification","title":"Modify the Default Chain Specification","text":"

To register your parachain with the local relay chain, you must modify the default chain specification to use your reserved parachain identifier.

To modify the default chain specification, follow these steps:

  1. Generate the plain text chain specification for the parachain template node by running the following command

    ./target/release/parachain-template-node build-spec \\\n  --disable-default-bootnode > plain-parachain-chainspec.json\n
  2. Open the plain text chain specification for the parachain template node in a text editor

  3. Set the para_id to the parachain identifier that you previously reserved. For example, if your reserved identifier is 2000, set the para_id field to 2000:

    \"...\": \"...\",\n\"relay_chain\": \"rococo-local\",\n\"para_id\": 2000,\n\"genesis\": {\n        \"...\": {}\n    }\n}\n
  4. Set the parachainId to the parachain identifier that you previously reserved. For example, if your reserved identifier is 2000, set the parachainId field to 2000

    \"...\": \"...\",\n    \"genesis\": {\n        \"runtime\": {\n            \"...\": {},\n            \"parachainInfo\": {\n                \"parachainId\": 2000\n            }\n        },\n        \"...\": {}\n    }\n}\n
  5. If you're completing this tutorial at the same time as anyone else on the same local network, you need an additional step to prevent accidentally peering with their nodes. Find the following line and add characters to make your protocolId unique

    \"...\": \"...\",\n\"protocolId\": \"template-local\",\n\"genesis\": {\n        \"...\": {}\n    }\n}\n
  6. Save your changes and close the plain text chain specification file

  7. Generate a raw chain specification file from the modified chain specification file by running the following command

    ./target/release/parachain-template-node build-spec \\\n  --chain plain-parachain-chainspec.json \\\n  --disable-default-bootnode \\\n  --raw > raw-parachain-chainspec.json\n

    After running the command, you will see the following output:

    ./target/release/parachain-template-node build-spec \\ --chain plain-parachain-chainspec.json \\ --disable-default-bootnode \\ --raw > raw-parachain-chainspec.json 2024-09-10 14:34:58 Building chain spec 2024-09-10 14:34:59 assembling new collators for new session 0 at #0 2024-09-10 14:34:59 assembling new collators for new session 1 at #0

"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-parachain/#prepare-the-parachain-collator","title":"Prepare the Parachain Collator","text":"

With the local relay chain running and the raw chain specification for the parachain template updated, you can start the parachain collator node and export information about its runtime and genesis state.

To prepare the parachain collator to be registered:

  1. Export the Wasm runtime for the parachain

    The relay chain needs the parachain-specific runtime validation logic to validate parachain blocks. You can export the Wasm runtime for a parachain collator node by running a command similar to the following:

    ./target/release/parachain-template-node export-genesis-wasm \\\n  --chain raw-parachain-chainspec.json para-2000-wasm\n
  2. Generate a parachain genesis state

    To register a parachain, the relay chain needs to know the genesis state of the parachain. You can export the entire genesis state\u2014hex-encoded\u2014to a file by running a command similar to the following:

    ./target/release/parachain-template-node export-genesis-state \\\n  --chain raw-parachain-chainspec.json para-2000-genesis-state\n

    After running the command, you will see the following output:

    ./target/release/parachain-template-node export-genesis-state \\ --chain raw-parachain-chainspec.json para-2000-genesis-state 2024-09-10 14:41:13 \ud83d\udd28 Initializing Genesis block/state (state: 0xb089\u20261830, header-hash: 0x6b0b\u2026bd69)

    Note

    You should note that the runtime and state you export must be for the genesis block. You can't connect a parachain with any previous state to a relay chain. All parachains must start from block 0 on the relay chain. See Convert a Solo Chain for details on how the parachain template was created and how to convert the chain logic\u2014not its history or state migrations\u2014to a parachain.

  3. Start a collator node with a command similar to the following

    ./target/release/parachain-template-node \\\n  --charlie \\\n  --collator \\\n  --force-authoring \\\n  --chain raw-parachain-chainspec.json \\\n  --base-path /tmp/charlie-parachain/ \\\n  --unsafe-force-node-key-generation \\\n  --port 40333 \\\n  --rpc-port 8844 \\\n  -- \\\n  --chain INSERT_RELAY_CHAIN_PATH/local-raw-spec.json \\\n  --port 30333 \\\n  --rpc-port 9946\n

    Note

    Ensure that you replace INSERT_RELAY_CHAIN_PATH with the path to the raw chain specification for the local relay chain.

    After running the command, you will see the following output:

    ./target/release/parachain-template-node \\ --charlie \\ --collator \\ --force-authoring \\ --chain raw-parachain-chainspec.json \\ --base-path /tmp/charlie-parachain/ \\ --unsafe-force-node-key-generation \\ --port 40333 \\ --rpc-port 8844 \\ -- \\ --chain INSERT_RELAY_CHAIN_PATH/local-raw-spec.json \\ --port 30333 \\ --rpc-port 9946 2024-09-10 16:26:30 [Parachain] PoV size { header: 0.21875kb, extrinsics: 3.6103515625kb, storage_proof: 3.150390625kb } 2024-09-10 16:26:30 [Parachain] Compressed PoV size: 6.150390625kb 2024-09-10 16:26:33 [Relaychain] \ud83d\udca4 Idle (2 peers), best: #1729 (0x3aa4\u2026cb6b), finalized #1726 (0xff7a\u20264352), \u2b07 9.1kiB/s \u2b06 3.8kiB/s

"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-parachain/#register-with-the-local-relay-chain","title":"Register With the Local Relay Chain","text":"

With the local relay chain and collator node running, you can register the parachain on the local relay chain. In a live public network, registration typically involves a parachain auction. For this tutorial and local testing, you can use a Sudo transaction through the Polkadot.js Apps interface. A Sudo transaction lets you bypass the steps required to acquire a parachain or parathread slot, and it must be executed on the relay chain. A programmatic alternative is sketched after the steps below.

To register the parachain, follow these steps:

  1. Validate that your local relay chain validators are running
  2. Navigate to the Sudo tab in the Polkadot.js Apps interface

    1. Click on the Developer tab
    2. Select Sudo from the dropdown menu

  3. Submit a transaction with Sudo privileges

    1. Select the paraSudoWrapper pallet
    2. Click on the sudoScheduleParaInitialize extrinsic from the list of available extrinsics

  4. Fill in the required fields

    1. id - type the parachain identifier you reserved
    2. genesisHead - click the file upload button and select the para-2000-genesis-state file you exported
    3. validationCode - click the file upload button and select the para-2000-wasm file you exported
    4. paraKind - select Yes if you are registering a parachain or No if you are registering a parathread

    5. Click on the Submit Transaction button

  5. After submitting the transaction, you can navigate to the Explorer tab and check the list of recent events for successful paras.PvfCheckAccepted

    After the parachain is initialized, you can see it in the Parachains section of the Polkadot.js Apps interface

  6. Click Network, select Parachains, and wait for a new epoch to start
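
For reference, the same registration can be submitted from code rather than through the Apps UI. A minimal sketch with @polkadot/api, assuming the exported files sit in the current directory, the relay chain node is at ws://localhost:9944, and Alice is the sudo account (the default on the local test chain spec):

import { ApiPromise, WsProvider, Keyring } from '@polkadot/api';\nimport { readFileSync } from 'node:fs';\n\nasync function main() {\n  const api = await ApiPromise.create({ provider: new WsProvider('ws://localhost:9944') });\n  const keyring = new Keyring({ type: 'sr25519' });\n  const alice = keyring.addFromUri('//Alice'); // sudo key on the local test chain (assumed default)\n  // The exported genesis state and Wasm files are plain hex strings\n  const genesisHead = readFileSync('para-2000-genesis-state', 'utf8').trim();\n  const validationCode = readFileSync('para-2000-wasm', 'utf8').trim();\n  const call = api.tx.paraSudoWrapper.sudoScheduleParaInitialize(2000, {\n    genesisHead,\n    validationCode,\n    paraKind: true, // true registers a parachain, false a parathread\n  });\n  await api.tx.sudo.sudo(call).signAndSend(alice, ({ status }) => {\n    if (status.isInBlock) console.log('Registration included in block', status.asInBlock.toHex());\n  });\n}\nmain().catch(console.error);\n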

The relay chain tracks the latest block\u2014the head\u2014of each parachain. When a relay chain block is finalized, the parachain blocks that have completed the validation process are also finalized. This is how Polkadot achieves pooled, shared security for its parachains.
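
You can observe this tracking directly by querying the paras.heads storage item on the relay chain. A minimal sketch with @polkadot/api, assuming the local relay chain node at ws://localhost:9944:

import { ApiPromise, WsProvider } from '@polkadot/api';\n\nasync function main() {\n  const api = await ApiPromise.create({ provider: new WsProvider('ws://localhost:9944') });\n  // The relay chain stores the most recent head data for each registered ParaID\n  const head = await api.query.paras.heads(2000);\n  console.log('Para 2000 head:', head.toHuman());\n  await api.disconnect();\n}\nmain().catch(console.error);\n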

After the parachain connects to the relay chain in the next epoch and finalizes its first block, you can see information about it in the Polkadot/Substrate Portal.

The terminal where the parachain is running also displays details similar to the following:

... [Relaychain] \ud83d\udca4 Idle (2 peers), best: #90 (0x5f73\u20261ccf), finalized #87 (0xeb50\u202668ea), \u2b07 1.4kiB/s \u2b06 1.1kiB/s [Parachain] \ud83d\udca4 Idle (0 peers), best: #0 (0x3626\u2026fef3), finalized #0 (0x3626\u2026fef3), \u2b07 1.2kiB/s \u2b06 0.7kiB/s [Relaychain] \ud83d\udca4 Idle (2 peers), best: #90 (0x5f73\u20261ccf), finalized #88 (0xd43c\u2026c3e6), \u2b07 0.7kiB/s \u2b06 0.5kiB/s [Parachain] \ud83d\udca4 Idle (0 peers), best: #0 (0x3626\u2026fef3), finalized #0 (0x3626\u2026fef3), \u2b07 1.0kiB/s \u2b06 0.6kiB/s [Relaychain] \ud83d\udc76 New epoch 9 launching at block 0x1c93\u20264aa9 (block slot 281848325 >= start slot 281848325) [Relaychain] \ud83d\udc76 Next epoch starts at slot 281848335 [Relaychain] \u2728 Imported #91 (0x1c93\u20264aa9) [Parachain] Starting collation. relay_parent=0x1c936289cfe15fabaa369f7ae5d73050581cb12b75209c11976afcf07f6a4aa9 at=0x36261113c31019d4b2a1e27d062e186f46da0e8f6786177dc7b35959688ffef3 [Relaychain] \ud83d\udca4 Idle (2 peers), best: #91 (0x1c93\u20264aa9), finalized #88 (0xd43c\u2026c3e6), \u2b07 1.2kiB/s \u2b06 0.7kiB/s [Parachain] \ud83d\udca4 Idle (0 peers), best: #0 (0x3626\u2026fef3), finalized #0 (0x3626\u2026fef3), \u2b07 0.2kiB/s \u2b06 37 B/s"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-parachain/#resetting-the-blockchain-state","title":"Resetting the Blockchain State","text":"

The parachain collator you connected to the relay chain in this tutorial contains all of the blockchain data for the parachain. There's only one node in this parachain network, so any transactions you submit are only stored on this node. Relay chains don't store any parachain state. The relay chain only stores header information for the parachains that connect to it.

For testing purposes, you might want to purge the blockchain state to start over periodically. However, you should remember that if you purge the chain state or manually delete the database, you won\u2019t be able to recover the data or restore the chain state. If you want to preserve data, you should ensure you have a copy before you purge the parachain state.

If you want to start over with a clean environment for testing, you should completely remove the chain state for the local relay chain nodes and the parachain.

To reset the blockchain state, follow these steps:

  1. In the terminal where the parachain template node is running, press Control-C

  2. Purge the parachain collator state by running the following command

    ./target/release/parachain-template-node purge-chain \\\n  --chain raw-parachain-chainspec.json\n
  3. In the terminal where either the alice validator node or the bob validator node is running, press Control-C

  4. Purge the local relay chain state by running the following command

    ./target/release/polkadot purge-chain \\\n  --chain local-raw-spec.json\n

After purging the chain state, you can restart the local relay chain and parachain collator nodes to begin with a clean environment.

Note

Note that to reset the network state and allow all the nodes to sync after the reset, each of them needs to purge their databases. Otherwise, the nodes won't be able to sync with each other effectively.

Now that you have successfully connected a parachain to a relay chain, you can explore more advanced features and functionalities of parachains, such as:

  • Opening HRMP Channels
  • Transfer Assets Between Parachains
"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-relay-chain/","title":"Prepare a Relay Chain","text":""},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-relay-chain/#introduction","title":"Introduction","text":"

This tutorial illustrates how to configure and spin up a local relay chain. The local relay chain is needed to set up a local testing environment to which a test parachain node can connect. Setting up a local relay chain is a crucial step in parachain development. It allows developers to test their parachains in a controlled environment, simulating the interaction between a parachain and the relay chain without needing a live network. This local setup facilitates faster development cycles and easier debugging.

The scope of this tutorial includes:

  • Installing necessary components for a local relay chain
  • Configuring the relay chain settings
  • Starting and running the local relay chain
  • Verifying the relay chain is operational
"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-relay-chain/#prerequisites","title":"Prerequisites","text":"

Before diving into this tutorial, it's recommended that you have a basic understanding of how adding trusted nodes works in Polkadot. For further information about this process, refer to the Spin Your Own Nodes tutorial.

To complete this tutorial, ensure that you have:

  • Installed Rust and the Rust toolchain. Refer to the Installation guide for step-by-step instructions on setting up your development environment
  • Completed Launch a Local Solochain tutorial and know how to compile and run a Polkadot SDK-based node
"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-relay-chain/#build-a-local-relay-chain","title":"Build a Local Relay Chain","text":"

To build a local relay chain, follow these steps:

  1. Clone the most recent release branch of the Polkadot SDK repository to prepare a stable working environment:

    git clone --depth 1 --branch polkadot-stable2407-2 \\\nhttps://github.com/paritytech/polkadot-sdk.git\n

    Note

    The branch polkadot-stable2407-2 is used in this tutorial since it contains the latest stable release of the Polkadot SDK at the time of writing. You can find the latest release of the Polkadot SDK on the Releases tab of the polkadot-sdk GitHub repository.

    Note

    Note that the --depth 1 flag is used to clone only the latest commit of the branch, which speeds up the cloning process.

  2. Change the directory to the Polkadot SDK repository:

    cd polkadot-sdk\n
  3. Build the relay chain node:

    cargo build --release\n

    Note

    Depending on your machine's specifications, the build process may take some time.

  4. Verify that the node is built correctly:

    ./target/release/polkadot --version\n

If command-line help is displayed, the node is ready to configure.

"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-relay-chain/#relay-chain-configuration","title":"Relay Chain Configuration","text":"

Every Substrate-based chain requires a chain specification. The relay chain's chain specification provides the same configuration settings as the chain specification for other networks. Many of the chain specification file settings are critical for network operations. For example, the chain specification identifies peers participating in the network, keys for validators, bootnode addresses, and other information.

"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-relay-chain/#sample-chain-configuration","title":"Sample Chain Configuration","text":"

The local relay chain uses a sample chain specification file with two validator relay chain nodes\u2014Alice and Bob\u2014as authorities for this tutorial. Because a relay chain must have at least one more validator node running than the total number of connected parachain collators, you can only use the chain specification from this tutorial for a local relay chain network with a single parachain.

If you want to connect two parachains with a single collator each, you must run three or more relay chain validator nodes. To set up a local test network for two or more parachains, you must modify the chain specification and hard-code additional validators.

"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-relay-chain/#plain-and-raw-chain-specification","title":"Plain and Raw Chain Specification","text":"

The chain specification file is available in two formats: a JSON file in plain text and a JSON file in SCALE-encoded raw format.

You can read and edit the plain text version of the chain specification file. However, the chain specification file must be converted to the SCALE-encoded raw format before you can use it to start a node. For information about converting a chain specification to the raw format, see Customize a Chain Specification.

The sample chain specification is only valid for a single parachain with two validator nodes. If you add other validators, add additional parachains to your relay chain, or want to use custom account keys instead of the predefined account, you'll need to create a custom chain specification file.

If you're completing this tutorial at the same time as anyone else on the same local network, you must download and modify the plain sample relay chain spec to prevent accidentally peering with their nodes. Find the following line in the plain chain spec and add characters to make the protocolId field unique:

\"protocolId\": \"dot\",\n
"},{"location":"tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-relay-chain/#start-the-relay-chain-node","title":"Start the Relay Chain Node","text":"

Before starting block production for a parachain, you need to start a relay chain for it to connect to.

To start the validator nodes, follow these steps:

  1. Generate the chain specification file in the plain text format and use it to create the raw chain specification file. Save the raw chain specification file in a local working directory

    1. Generate the plain text chain specification file:

      ./target/release/polkadot build-spec \\\n  --chain rococo-local-testnet > /tmp/plain-local-chainspec.json\n

      Note

      Note that the network values are set to defaults when generating the chain specification file with the build-spec subcommand. For production networks, you can customize the network values by editing the chain specification file.

    2. Convert the plain text chain specification file to the raw format:

      ./target/release/polkadot build-spec \\\n  --chain /tmp/plain-local-chainspec.json \\\n  --raw > /tmp/raw-local-chainspec.json\n
  2. Start the first validator using the alice account by running the following command:

    ./target/release/polkadot \\\n  --alice \\\n  --validator \\\n  --base-path /tmp/alice \\\n  --chain /tmp/raw-local-chainspec.json \\\n  --port 30333 \\\n  --rpc-port 9944 \\\n  --insecure-validator-i-know-what-i-do \\\n  --force-authoring\n

    This command uses /tmp/raw-local-chainspec.json as the location of the sample chain specification file. Ensure the --chain command-line option specifies the path to your generated raw chain specification. This command also uses the default values for the peer-to-peer port (--port) and the RPC port (--rpc-port). The values are explicitly included here as a reminder to always check these settings. After the node starts, no other nodes on the same local machine can use these ports.

  3. Review log messages as the node starts and take note of the Local node identity value. This value is the node's peer ID, which you need to connect the parachain to the relay chain:

    2024-09-09 13:49:58 Parity Polkadot 2024-09-09 13:49:58 \u270c\ufe0f version 1.15.2-d6f482d5593 2024-09-09 13:49:58 \u2764\ufe0f by Parity Technologies <admin@parity.io>, 2017-2024 2024-09-09 13:49:58 \ud83d\udccb Chain specification: Rococo Local Testnet 2024-09-09 13:49:58 \ud83c\udff7 Node name: Alice 2024-09-09 13:49:58 \ud83d\udc64 Role: AUTHORITY 2024-09-09 13:49:58 \ud83d\udcbe Database: RocksDb at /tmp/relay/alice/chains/rococo_local_testnet/db/full 2024-09-09 13:49:59 \ud83c\udff7 Local node identity is: 12D3KooWG393uX82rR3QgDkZpb7U8StzuRx9BQUXCvWsP1ctgygp 2024-09-09 13:49:59 Running libp2p network backend ...

    Note

    You need to specify this identifier to enable other nodes to connect. In this case, the Local node identity is 12D3KooWG393uX82rR3QgDkZpb7U8StzuRx9BQUXCvWsP1ctgygp.

  4. Open a new terminal and start the second validator using the bob account. The command is similar to the command used to start the first node, with a few crucial differences:

    ./target/release/polkadot \\\n  --bob \\\n  --validator \\\n  --base-path /tmp/bob \\\n  --chain /tmp/raw-local-chainspec.json \\\n  --port 30334 \\\n  --rpc-port 9945\n

    Notice that this command uses a different base path (/tmp/bob), validator key (--bob), and ports (30334 and 9945).

    Because both validators are running on a single local computer, it isn't necessary to specify the --bootnodes command-line option with the first node's IP address and peer identifier. The --bootnodes option is required to connect to nodes that are outside the local network or that aren't identified in the chain specification file.

    If you don't see the relay chain producing blocks, try disabling your firewall or restarting the node with the --bootnodes command-line option set to the address of Alice's node. Adding the --bootnodes option looks like this (using the node identity of Alice's node):

    --bootnodes \\\n  /ip4/127.0.0.1/tcp/30333/p2p/12D3KooWG393uX82rR3QgDkZpb7U8StzuRx9BQUXCvWsP1ctgygp\n
  5. Verify that the relay chain nodes are running by checking the logs for each node. The logs should show that the nodes are connected and producing blocks. For example, Bob's logs will be displayed as follows:

    ... 2024-09-10 13:29:38 \ud83c\udfc6 Imported #55 (0xad6a\u2026567c \u2192 0xecae\u2026ad12) 2024-09-10 13:29:38 \ud83d\udca4 Idle (1 peers), best: #55 (0xecae\u2026ad12), finalized #0 (0x1cac\u2026618d), \u2b07 2.0kiB/s \u2b06 1.6kiB/s ...

Once the relay chain nodes are running, you can proceed to the next tutorial to set up a test parachain node and connect it to the relay chain.

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/","title":"Build a Solochain","text":"

This section provides practical, step-by-step tutorials for building and managing your own local solochain using the Polkadot SDK. A solochain is a standalone blockchain, typically used for testing, experimentation, or creating a custom blockchain that doesn't require integration with the Polkadot relay chain. You can customize your solochain to fit your exact needs. The following tutorials guide you through a complete workflow, from launching a single node to managing a network of validators.

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/#key-takeaways","title":"Key Takeaways","text":"

By following along with these tutorials, you'll gain comprehensive experience with launching and managing blockchain nodes, including:

  • Node compilation and deployment
  • Network configuration and peer connectivity
  • Validator authorization and key management
  • Runtime upgrades and network maintenance
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/connect-multiple-nodes/","title":"Connect Predefined Default Nodes","text":""},{"location":"tutorials/polkadot-sdk/parachains/local-chain/connect-multiple-nodes/#introduction","title":"Introduction","text":"

This tutorial introduces you to the process of initiating a private blockchain network with a set of default authorized validators (Alice and Bob). If you prefer, you can also launch a private blockchain with your own validator accounts.

The Polkadot SDK Solochain Template implements an authority consensus model to regulate block production. In this model, the creation of blocks is restricted to a predefined list of authorized accounts, known as \"authorities,\" who operate in a round-robin fashion.

To demonstrate this concept, you'll simulate a network environment using two nodes running on a single computer, each configured with different accounts and keys. Throughout this tutorial, you'll gain practical insight into the functionality of the authority consensus model by observing how these two predefined accounts, serving as authorities, enable the nodes to produce blocks.

By completing this tutorial, you will accomplish the following objectives:

  • Start a blockchain node using a predefined account
  • Learn the key command-line options used to start a node
  • Determine if a node is running and producing blocks
  • Connect a second node to a running network
  • Verify peer computers produce and finalize blocks
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/connect-multiple-nodes/#prerequisites","title":"Prerequisites","text":"

Before proceeding, ensure you have the following prerequisites in place:

  • Installed and configured Rust on your system. Refer to the Installation guide for detailed instructions on installing Rust and setting up your development environment
  • Completed the Launch a Local Solochain guide and have the Polkadot SDK Solochain Template installed on your local machine
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/connect-multiple-nodes/#start-the-first-blockchain-node","title":"Start the First Blockchain Node","text":"

This tutorial demonstrates the fundamentals of a private network using a predefined chain specification called local and two preconfigured user accounts. You'll simulate a private network by running two nodes on a single local computer, using accounts named Alice and Bob.

Follow these steps to start your first blockchain node:

  1. Navigate to the root directory where you compiled the Polkadot SDK Solochain Template

  2. Clear any existing chain data by executing the following:

    ./target/release/solochain-template-node purge-chain --base-path /tmp/alice --chain local\n

    When prompted to confirm, type y and press Enter. This step ensures a clean start for your new network

  3. Launch the first blockchain node using the Alice account:

    ./target/release/solochain-template-node \\\n--base-path /tmp/alice \\\n--chain local \\\n--alice \\\n--port 30333 \\\n--rpc-port 9945 \\\n--node-key 0000000000000000000000000000000000000000000000000000000000000001 \\\n--validator\n

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/connect-multiple-nodes/#review-the-command-line-options","title":"Review the Command-Line Options","text":"

Before proceeding, examine the key command-line options used to start the node:

  • --base-path - specifies the directory for storing all chain-related data
  • --chain - defines the chain specification to use
  • --alice - adds the predefined keys for the Alice account to the node's keystore. This account is used for block production and finalization
  • --port - sets the listening port for peer-to-peer (p2p) traffic. Different ports are necessary when running multiple nodes on the same machine
  • --rpc-port - specifies the port for incoming JSON-RPC traffic via WebSocket and HTTP
  • --node-key - defines the Ed25519 secret key for libp2p networking
  • --validator - enables this node to participate in block production and finalization for the network

For a comprehensive overview of all available command-line options for the node template, you can access the built-in help documentation. Execute the following command in your terminal:

./target/release/solochain-template-node --help\n
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/connect-multiple-nodes/#review-the-node-messages","title":"Review the Node Messages","text":"

Upon successful node startup, the terminal displays messages detailing network operations and information relevant to the running node. This output includes details about the chain specification, system data, network status, and other crucial parameters. You should see output similar to this:

./target/release/solochain-template-node \\ --base-path /tmp/alice \\ --chain local \\ --alice \\ --port 30333 \\ --rpc-port 9945 \\ --node-key 0000000000000000000000000000000000000000000000000000000000000001 \\ --validator 2024-09-10 08:35:43 Substrate Node 2024-09-10 08:35:43 \u270c\ufe0f version 0.1.0-8599efc46ae 2024-09-10 08:35:43 \u2764\ufe0f by Parity Technologies <admin@parity.io>, 2017-2024 2024-09-10 08:35:43 \ud83d\udccb Chain specification: Local Testnet 2024-09-10 08:35:43 \ud83c\udff7 Node name: Alice 2024-09-10 08:35:43 \ud83d\udc64 Role: AUTHORITY 2024-09-10 08:35:43 \ud83d\udcbe Database: RocksDb at /tmp/alice/chains/local_testnet/db/full 2024-09-10 08:35:43 \ud83d\udd28 Initializing Genesis block/state (state: 0x074c\u202627bd, header-hash: 0x850f\u2026951f) 2024-09-10 08:35:43 \ud83d\udc74 Loading GRANDPA authority set from genesis on what appears to be first startup. 2024-09-10 08:35:43 Using default protocol ID \"sup\" because none is configured in the chain specs 2024-09-10 08:35:43 \ud83c\udff7 Local node identity is: 12D3KooWEyoppNCUx8Yx66oV9fJnriXwCcXwDDUA2kj6vnc6iDEp 2024-09-10 08:35:43 Running libp2p network backend 2024-09-10 08:35:43 \ud83d\udcbb Operating system: macos 2024-09-10 08:35:43 \ud83d\udcbb CPU architecture: aarch64 2024-09-10 08:35:43 \ud83d\udce6 Highest known block at #0 2024-09-10 08:35:43 \u303d\ufe0f Prometheus exporter started at 127.0.0.1:9615 2024-09-10 08:35:43 Running JSON-RPC server: addr=127.0.0.1:9945, allowed origins=[\"http://localhost:*\", \"http://127.0.0.1:*\", \"https://localhost:*\", \"https://127.0.0.1:*\", \"https://polkadot.js.org\"] 2024-09-10 08:35:48 \ud83d\udca4 Idle (0 peers), best: #0 (0x850f\u2026951f), finalized #0 (0x850f\u2026951f), \u2b07 0 \u2b06 0

Pay particular attention to the following key messages:

  • Genesis block initialization:

    2024-09-10 08:35:43 \ud83d\udd28 Initializing Genesis block/state (state: 0x074c\u202627bd, header-hash: 0x850f\u2026951f)\n

    This message identifies the initial state or genesis block used by the node. When starting subsequent nodes, ensure these values match.

  • Node identity:

    2024-09-10 08:35:43 \ud83c\udff7  Local node identity is: 12D3KooWEyoppNCUx8Yx66oV9fJnriXwCcXwDDUA2kj6vnc6iDEp\n

    This string uniquely identifies the node. It's determined by the --node-key used to start the node with the Alice account. Use this identifier when connecting additional nodes to the network.

  • Network status:

    2024-09-10 08:35:48 \ud83d\udca4 Idle (0 peers), best: #0 (0x850f\u2026951f), finalized #0 (0x850f\u2026951f), \u2b07 0 \u2b06 0\n

    This message indicates that:

    • No other nodes are currently in the network
    • No blocks are being produced
    • Block production will commence once another node joins the network
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/connect-multiple-nodes/#add-a-second-node-to-the-network","title":"Add a Second Node to the Network","text":"

After successfully running the first node with the Alice account keys, you can expand the network by adding a second node using the Bob account. This process involves connecting to the existing network using the running node as a reference point. The commands are similar to those used for the first node, with some key differences to ensure proper network integration.

To add a node to the running blockchain:

  1. Open a new terminal shell on your computer

  2. Navigate to the root directory where you compiled the Polkadot SDK Solochain Template

  3. Clear any existing chain data for the new node:

    ./target/release/solochain-template-node purge-chain --base-path /tmp/bob --chain local -y\n

    Note

    The -y flag automatically confirms the operation without prompting.

  4. Start the second local blockchain node using the Bob account:

    ./target/release/solochain-template-node \\\n--base-path /tmp/bob \\\n--chain local \\\n--bob \\\n--port 30334 \\\n--rpc-port 9946 \\\n--node-key 0000000000000000000000000000000000000000000000000000000000000002 \\\n--validator \\\n--bootnodes /ip4/127.0.0.1/tcp/30333/p2p/12D3KooWEyoppNCUx8Yx66oV9fJnriXwCcXwDDUA2kj6vnc6iDEp\n

    Key differences in this command:

    • Unique paths and ports - to avoid conflicts on the same machine, different values are used for:

      • --base-path - set to /tmp/bob
      • --port - set to 30334
      • --rpc-port - set to 9946
    • Bootnode specification - the --bootnodes option is crucial for network discovery:

      • Format - /ip4/127.0.0.1/tcp/30333/p2p/12D3KooWEyoppNCUx8Yx66oV9fJnriXwCcXwDDUA2kj6vnc6iDEp
      • Components:
        • ip4 - indicates IPv4 format
        • 127.0.0.1 - IP address of the running node (localhost in this case)
        • tcp - specifies TCP for peer-to-peer communication
        • 30333 - port number for peer-to-peer TCP traffic
        • 12D3KooWEyoppNCUx8Yx66oV9fJnriXwCcXwDDUA2kj6vnc6iDEp - unique identifier of the Alice node
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/connect-multiple-nodes/#verify-blocks-are-produced-and-finalized","title":"Verify Blocks are Produced and Finalized","text":"

After starting the second node, both nodes should connect as peers and commence block production.

Follow these steps to verify that blocks are being produced and finalized (a programmatic check is sketched after these steps):

  1. Observe the output in the terminal of the first node (Alice):

    ... 2024-09-10 09:04:57 discovered: 12D3KooWHdiAxVd8uMQR1hGWXccidmfCwLqcMpGwR6QcTP6QRMuD /ip4/192.168.1.4/tcp/30334 2024-09-10 09:04:58 \ud83d\udca4 Idle (0 peers), best: #0 (0x850f\u2026951f), finalized #0 (0x850f\u2026951f), \u2b07 0.3kiB/s \u2b06 0.3kiB/s 2024-09-10 09:05:00 \ud83d\ude4c Starting consensus session on top of parent 0x850ffab4827cb0297316cbf01fc7c2afb954c5124f366f25ea88bfd19ede951f (#0) 2024-09-10 09:05:00 \ud83c\udf81 Prepared block for proposing at 1 (2 ms) [hash: 0xe21a305e6647b0b0c6c73ba31a49ae422809611387fadb7785f68d0a1db0b52d; parent_hash: 0x850f\u2026951f; extrinsics (1): [0x0c18\u202608d8] 2024-09-10 09:05:00 \ud83d\udd16 Pre-sealed block for proposal at 1. Hash now 0x75bbb026db82a4d6ff88b96f952a29e15dac2b7df24d4cb95510945e2bede82d, previously 0xe21a305e6647b0b0c6c73ba31a49ae422809611387fadb7785f68d0a1db0b52d. 2024-09-10 09:05:00 \ud83c\udfc6 Imported #1 (0x850f\u2026951f \u2192 0x75bb\u2026e82d) 2024-09-10 09:05:03 \ud83d\udca4 Idle (1 peers), best: #1 (0x75bb\u2026e82d), finalized #0 (0x850f\u2026951f), \u2b07 0.7kiB/s \u2b06 0.8kiB/s 2024-09-10 09:05:06 \ud83c\udfc6 Imported #2 (0x75bb\u2026e82d \u2192 0x774d\u2026a176) 2024-09-10 09:05:08 \ud83d\udca4 Idle (1 peers), best: #2 (0x774d\u2026a176), finalized #0 (0x850f\u2026951f), \u2b07 0.6kiB/s \u2b06 0.5kiB/s 2024-09-10 09:05:12 \ud83d\ude4c Starting consensus session on top of parent 0x774dec6bff7a27c38e21106a5a7428ae5d50b991f39cda7c0aa3c0c9322da176 (#2) 2024-09-10 09:05:12 \ud83c\udf81 Prepared block for proposing at 3 (0 ms) [hash: 0x10bb4589a7a13bac657219a9ff06dcef8d55e46a4275aa287a966b5648a6d486; parent_hash: 0x774d\u2026a176; extrinsics (1): [0xdcd4\u2026b5ec] 2024-09-10 09:05:12 \ud83d\udd16 Pre-sealed block for proposal at 3. Hash now 0x01e080f4b8421c95d0033aac7310b36972fdeef7c6025f8a153c436c1bb214ee, previously 0x10bb4589a7a13bac657219a9ff06dcef8d55e46a4275aa287a966b5648a6d486. 2024-09-10 09:05:12 \ud83c\udfc6 Imported #3 (0x774d\u2026a176 \u2192 0x01e0\u202614ee) 2024-09-10 09:05:13 \ud83d\udca4 Idle (1 peers), best: #3 (0x01e0\u202614ee), finalized #0 (0x850f\u2026951f), \u2b07 0.6kiB/s \u2b06 0.6kiB/s 2024-09-10 09:05:18 \ud83c\udfc6 Imported #4 (0x01e0\u202614ee \u2192 0xe176\u20260430) 2024-09-10 09:05:18 \ud83d\udca4 Idle (1 peers), best: #4 (0xe176\u20260430), finalized #1 (0x75bb\u2026e82d), \u2b07 0.6kiB/s \u2b06 0.6kiB/s

    Key information in this output:

    • Second node discovery - discovered: 12D3KooWHdiAxVd8uMQR1hGWXccidmfCwLqcMpGwR6QcTP6QRMuD
    • Peer count - 1 peers
    • Block production - best: #4 (0xe176\u20260430)
    • Block finalization - finalized #1 (0x75bb\u2026e82d)
  2. Check the terminal of the second node (Bob) for similar output

  3. Shut down one node using Control-C in its terminal. Observe the remaining node's output:

    2024-09-10 09:10:03 \ud83d\udca4 Idle (1 peers), best: #51 (0x0dd6\u2026e763), finalized #49 (0xb70a\u20261fc0), \u2b07 0.7kiB/s \u2b06 0.6kiB/s 2024-09-10 09:10:08 \ud83d\udca4 Idle (0 peers), best: #52 (0x2c40\u2026a50e), finalized #49 (0xb70a\u20261fc0), \u2b07 0.3kiB/s \u2b06 0.3kiB/s

    Note that the peer count drops to zero, and block production stops.

  4. Shut down the second node using Control-C in its terminal

  5. Clean up chain state from the simulated network by using the purge-chain subcommand:

    • For Alice's node:
      ./target/release/solochain-template-node purge-chain \\\n--base-path /tmp/alice \\\n--chain local \\\n-y\n
    • For Bob's node:
      ./target/release/solochain-template-node purge-chain \\\n--base-path /tmp/bob \\\n--chain local \\\n-y\n
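
While the two nodes are running (before shutting them down in steps 3 and 4), you can also confirm peering and finalization programmatically. A minimal sketch with @polkadot/api, assuming Alice's node is reachable on its RPC port 9945:

import { ApiPromise, WsProvider } from '@polkadot/api';\n\nasync function main() {\n  const api = await ApiPromise.create({ provider: new WsProvider('ws://localhost:9945') });\n  const health = await api.rpc.system.health();\n  console.log('Connected peers:', health.peers.toNumber());\n  const finalizedHash = await api.rpc.chain.getFinalizedHead();\n  const header = await api.rpc.chain.getHeader(finalizedHash);\n  console.log('Finalized block:', header.number.toNumber());\n  await api.disconnect();\n}\nmain().catch(console.error);\n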
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/launch-a-local-solochain/","title":"Launch a Local Solochain","text":""},{"location":"tutorials/polkadot-sdk/parachains/local-chain/launch-a-local-solochain/#introduction","title":"Introduction","text":"

Polkadot SDK offers a versatile and extensible blockchain development framework, enabling you to create custom blockchains tailored to your specific application or business requirements.

This tutorial guides you through compiling and launching a standalone blockchain node using the Polkadot SDK Solochain Template. You'll create a fully functional chain that operates independently, without connections to a relay chain or parachain.

The node template provides a pre-configured, functional single-node blockchain you can run in your local development environment. It includes several key components, such as user accounts and account balances.

These predefined elements allow you to experiment with common blockchain operations without requiring initial template modifications. In this tutorial, you will:

  • Build and start a local blockchain node using the node template
  • Explore how to use a front-end interface to:
    • View information about blockchain activity
    • Submit a transaction

By the end of this tutorial, you'll have a working local solochain and understand how to interact with it, setting the foundation for further customization and development.

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/launch-a-local-solochain/#prerequisites","title":"Prerequisites","text":"

To get started with the node template, you'll need to have the following set up on your development machine first:

  • Rust installation - the node template is written in Rust, so you'll need to have it installed and configured on your system. Refer to the Installation guide for step-by-step instructions on setting up your development environment
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/launch-a-local-solochain/#compile-a-node","title":"Compile a Node","text":"

The Polkadot SDK Solochain Template provides a ready-to-use development environment for building using the Polkadot SDK. Follow these steps to compile the node:

  1. Clone the node template repository:

    git clone -b v0.0.2 https://github.com/paritytech/polkadot-sdk-solochain-template\n

    Note

    This tutorial uses version v0.0.2 of the Polkadot SDK Solochain Template. Make sure you're using the correct version to match these instructions.

  2. Navigate to the root of the node template directory:

    cd polkadot-sdk-solochain-template\n

  3. Compile the node template:

    cargo build --release\n

    Note

    Initial compilation may take several minutes, depending on your machine specifications. Always use the --release flag to build optimized, production-ready artifacts.

  4. Upon successful compilation, you should see output similar to: cargo build --release Compiling solochain-template-node Finished release profile [optimized] target(s) in 27.12s

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/launch-a-local-solochain/#start-the-local-node","title":"Start the Local Node","text":"

After successfully compiling your node, you can run it and produce blocks. This process will start your local blockchain and allow you to interact with it. Follow these steps to launch your node in development mode:

  1. In the terminal where you compiled your node, start it in development mode:

    ./target/release/solochain-template-node --dev\n
    The --dev option does the following:

    • Specifies that the node runs using the predefined development chain specification
    • Deletes all active data (keys, blockchain database, networking information) when stopped
    • Ensures a clean working state each time you restart the node
  2. Verify that your node is running by reviewing the terminal output. You should see something similar to: ./target/release/solochain-template-node --dev 2024-09-09 08:32:42 Substrate Node 2024-09-09 08:32:42 \u270c\ufe0f version 0.1.0-8599efc46ae 2024-09-09 08:32:42 \u2764\ufe0f by Parity Technologies <admin@parity.io>, 2017-2024 2024-09-09 08:32:42 \ud83d\udccb Chain specification: Development 2024-09-09 08:32:42 \ud83c\udff7 Node name: light-boundary-7850 2024-09-09 08:32:42 \ud83d\udc64 Role: AUTHORITY 2024-09-09 08:32:42 \ud83d\udcbe Database: RocksDb at /var/folders/x0/xl_kjddj3ql3bx7752yr09hc0000gn/T/substrate0QH9va/chains/dev/db/full 2024-09-09 08:32:42 \ud83d\udd28 Initializing Genesis block/state (state: 0xc2a0\u202616ba, header-hash: 0x0eef\u2026935d) 2024-09-09 08:32:42 \ud83d\udc74 Loading GRANDPA authority set from genesis on what appears to be first startup. 2024-09-09 08:32:42 Using default protocol ID \"sup\" because none is configured in the chain specs 2024-09-09 08:32:42 \ud83c\udff7 Local node identity is: 12D3KooWPhdUzf66di1SuuRFgjkFs6X8jm3Uj2ss5ri31WuVAbgt 2024-09-09 08:32:42 Running libp2p network backend 2024-09-09 08:32:42 \ud83d\udcbb Operating system: macos 2024-09-09 08:32:42 \ud83d\udcbb CPU architecture: aarch64 2024-09-09 08:32:42 \ud83d\udce6 Highest known block at #0 2024-09-09 08:32:42 \u303d\ufe0f Prometheus exporter started at 127.0.0.1:9615 2024-09-09 08:32:42 Running JSON-RPC server: addr=127.0.0.1:9944, allowed origins=[\"*\"] 2024-09-09 08:32:47 \ud83d\udca4 Idle (0 peers), best: #0 (0x0eef\u2026935d), finalized #0 (0x0eef\u2026935d), \u2b07 0 \u2b06 0 2024-09-09 08:32:48 \ud83d\ude4c Starting consensus session on top of parent 0x0eef4a08ef90cc04d01864514dc5cb2bd822314309b770b49b0177f920ed935d (#0) 2024-09-09 08:32:48 \ud83c\udf81 Prepared block for proposing at 1 (1 ms) [hash: 0xc14630b76907550bef9037dcbfafa2b25c8dc763495f30d9e36ad4b93b673b36; parent_hash: 0x0eef\u2026935d; extrinsics (1): [0xbcd8\u20265132] 2024-09-09 08:32:48 \ud83d\udd16 Pre-sealed block for proposal at 1. Hash now 0xcb3d2f28bc73807dac5cf19fcfb2ac6d7e922756da9d41ca0c9dadbd0e45265b, previously 0xc14630b76907550bef9037dcbfafa2b25c8dc763495f30d9e36ad4b93b673b36. 2024-09-09 08:32:48 \ud83c\udfc6 Imported #1 (0x0eef\u2026935d \u2192 0xcb3d\u2026265b) ...

  3. Confirm that your blockchain is producing new blocks by checking whether the number after finalized is increasing:

    ...\n2024-09-09 08:32:47 \ud83d\udca4 Idle (0 peers), best: #0 (0x0eef\u2026935d), finalized #0 (0x0eef\u2026935d), \u2b07 0 \u2b06 0\n...\n2024-09-09 08:32:52 \ud83d\udca4 Idle (0 peers), best: #1 (0xcb3d\u2026265b), finalized #0 (0x0eef\u2026935d), \u2b07 0 \u2b06 0\n...\n2024-09-09 08:32:57 \ud83d\udca4 Idle (0 peers), best: #2 (0x16d7\u2026083f), finalized #0 (0x0eef\u2026935d), \u2b07 0 \u2b06 0\n...\n2024-09-09 08:33:02 \ud83d\udca4 Idle (0 peers), best: #3 (0xe6a4\u20262cc4), finalized #1 (0xcb3d\u2026265b), \u2b07 0 \u2b06 0\n...

    Note

    The details of the log output will be explored in a later tutorial. For now, knowing that your node is running and producing blocks is sufficient.

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/launch-a-local-solochain/#interact-with-the-node","title":"Interact with the Node","text":"

When running the template node, it's accessible by default at:

ws://localhost:9944\n
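Recent templates serve both WebSocket and HTTP JSON-RPC traffic on the same port, so you can also run a quick health check from the command line. A minimal sketch using the standard system_health RPC method, assuming the default port:

curl -s -H \"Content-Type: application/json\" \\\n-d '{\"id\":1, \"jsonrpc\":\"2.0\", \"method\":\"system_health\", \"params\":[]}' \\\nhttp://localhost:9944\n

If the node is running, the response reports its peer count and sync status.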
To interact with your node using the Polkadot.js Apps interface, follow these steps:

  1. Open Polkadot.js Apps in your web browser and click the network icon in the top left corner

  2. Connect to your local node:

    1. Scroll to the bottom and select Development
    2. Choose Custom
    3. Enter ws://localhost:9944 in the input field
    4. Click the Switch button

  3. Verify connection:

    • Once connected, you should see solochain-template-runtime in the top left corner
    • The interface will display information about your local blockchain

You are now connected to your local node and can interact with it through the Polkadot.js Apps interface. This tool enables you to explore blocks, execute transactions, and interact with your blockchain's features. For in-depth guidance on using the interface effectively, refer to the Polkadot.js Guides available on the Polkadot Wiki.

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/launch-a-local-solochain/#stop-the-node","title":"Stop the Node","text":"

When you're done exploring your local node, you can stop it to remove any state changes you've made. Since you started the node with the --dev option, stopping the node will purge all persistent block data, allowing you to start fresh the next time.

To stop the local node:

  1. Return to the terminal window where the node output is displayed
  2. Press Control-C to stop the running process
  3. Verify that your terminal returns to the prompt in the polkadot-sdk-solochain-template directory
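Note that this automatic purge only happens because --dev stores the database in a temporary directory. If you later start the node with an explicit --base-path, its data persists across restarts; in that case, you can delete the chain data with the standard purge-chain subcommand. A minimal sketch, using a hypothetical /tmp/my-node base path:

./target/release/solochain-template-node purge-chain \\\n--base-path /tmp/my-node \\\n--dev\n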
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/spin-your-nodes/","title":"Spin Your Own Nodes","text":""},{"location":"tutorials/polkadot-sdk/parachains/local-chain/spin-your-nodes/#introduction","title":"Introduction","text":"

This tutorial guides you through launching a private blockchain network with a small, trusted set of validators. In decentralized networks, consensus ensures that nodes agree on the state of the data at any given time. The Polkadot SDK Solochain Template uses Aura (Authority Round), a proof of authority consensus mechanism where a fixed set of trusted validators produces blocks in a round-robin fashion. This approach offers an easy way to launch a standalone blockchain with a predefined list of validators.

You'll learn how to generate keys, create a custom chain specification, and start a two-node blockchain network using the Aura consensus mechanism.

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/spin-your-nodes/#prerequisites","title":"Prerequisites","text":"

Before starting this tutorial, ensure you have:

  • Installed and configured Rust on your system. For detailed instructions on installing Rust and setting up your development environment, refer to the Installation guide
  • Completed the Launch a Local Solochain tutorial and have the Polkadot SDK Solochain Template installed on your local machine
  • Experience using predefined accounts to start nodes on a single computer, as described in the Connect Multiple Nodes guide
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/spin-your-nodes/#generate-an-account-and-keys","title":"Generate an Account and Keys","text":"

Unlike in the Connect Multiple Nodes tutorial, where you used predefined accounts and keys to start peer nodes, this tutorial requires you to generate unique secret keys for your validator nodes. In a real blockchain network, it's crucial that each participant generates and manages their own unique set of keys.

This process of generating your own keys serves several important purposes:

  • It enhances the security of your network by ensuring that each node has its own unique cryptographic identity
  • It simulates a more realistic blockchain environment where participants don't share key information
  • It helps you understand the process of key generation, which is a fundamental skill in blockchain operations

There are a couple of Polkadot Wiki articles that may help you better understand the different signing algorithms used in this tutorial. See the Keypairs and Signing section to learn about the sr25519 and ed25519 signing algorithms. Refer to the Keys section to learn more about the different types of keys used in the ecosystem.

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/spin-your-nodes/#key-generation-options","title":"Key Generation Options","text":"

There are several ways to generate keys:

  • solochain-template-node key subcommand - the most straightforward method for developers working directly with the node. The key subcommand generates keys directly from your node's command-line interface, ensures compatibility with your chain, and is convenient for quick setup and testing
  • subkey - a powerful standalone utility designed specifically for Polkadot SDK-based chains. It offers advanced options for key generation, including support for different key types such as ed25519 and sr25519, and allows fine-grained control over the key generation process (see the example after this list)
  • Third-party key generation utilities - various tools developed by the community
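For example, if you have subkey installed, generating a standalone Sr25519 key pair looks like the following. The output fields mirror those of the node's key subcommand shown in the next section:

subkey generate --scheme Sr25519\n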
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/spin-your-nodes/#generate-local-keys-with-the-node-template","title":"Generate Local Keys with the Node Template","text":"

Best practices for key generation:

  • Whenever possible, use an air-gapped computer (one that has never been connected to the internet) when generating keys for a production blockchain
  • If an air-gapped computer is not an option, disconnect from the internet before generating keys for any public or private blockchain not under your control

For this tutorial, however, you'll use the solochain-template-node command-line options to generate random keys locally while remaining connected to the internet. This method is suitable for learning and testing purposes.

Follow these steps to generate your keys:

  1. Navigate to the root directory where you compiled the node template

  2. Generate a random secret phrase and Sr25519 keys. Enter a password when prompted:

    ./target/release/solochain-template-node key generate \\\n--scheme Sr25519 \\\n--password-interactive\n

    The command will output information about the generated keys similar to the following:

    ./target/release/solochain-template-node key generate \\\n--scheme Sr25519 \\\n--password-interactive\nKey password:\nSecret phrase: digital width rely long insect blind usual name oyster easy steak spend\nNetwork ID: substrate\nSecret seed: 0xc52405d0b45dd856cbf1371f3b33fbde20cb76bf6ee440d12ea15f7ed17cca0a\nPublic key (hex): 0xea23fa399c6bd91af3d7ea2d0ad46c48aff818b285342d9aaf15b3172270e914\nAccount ID: 0xea23fa399c6bd91af3d7ea2d0ad46c48aff818b285342d9aaf15b3172270e914\nPublic key (SS58): 5HMhkSHpD4XcibjbU9ZiGemLpnsTUzLsG5JhQJQEcxp3KJaW\nSS58 Address: 5HMhkSHpD4XcibjbU9ZiGemLpnsTUzLsG5JhQJQEcxp3KJaW\n

    Protect Your Keys

    Never share your secret phrase or private keys. If exposed, someone could:

    • Impersonate you on the network
    • Steal all funds associated with the account
    • Perform transactions on your behalf
    • Potentially compromise your entire blockchain identity

    Note the Sr25519 public key for the account (SS58 format). This key will be used for producing blocks with Aura. In this example, the Sr25519 public key for the account is 5HMhkSHpD4XcibjbU9ZiGemLpnsTUzLsG5JhQJQEcxp3KJaW.

  3. Use the generated secret phrase to derive keys using the Ed25519 signature scheme.

    ./target/release/solochain-template-node key inspect \\\n--scheme Ed25519 \\\n--password-interactive \\\n\"INSERT_SECRET_PHRASE\"\n

    When prompted for a Key password, enter the same password you used in the previous step

    Note

    Replace INSERT_SECRET_PHRASE with the secret phrase generated in step 2.

    The command will output information about the generated keys similar to the following:

    ./target/release/solochain-template-node key inspect \\\n--scheme Ed25519 \\\n--password-interactive \\\n\"digital width rely long insect blind usual name oyster easy steak spend\"\nKey password:\nSecret phrase: digital width rely long insect blind usual name oyster easy steak spend\nNetwork ID: substrate\nSecret seed: 0xc52405d0b45dd856cbf1371f3b33fbde20cb76bf6ee440d12ea15f7ed17cca0a\nPublic key (hex): 0xc9c2cd111f98f2bf78bab6787449fc007dd7f2a5d02f099919f7fb50ade97dd6\nAccount ID: 0xc9c2cd111f98f2bf78bab6787449fc007dd7f2a5d02f099919f7fb50ade97dd6\nPublic key (SS58): 5GdFMFbXy24uz8mFZroFUgdBkY2pq6igBNGAq9tsBfEZRSzP\nSS58 Address: 5GdFMFbXy24uz8mFZroFUgdBkY2pq6igBNGAq9tsBfEZRSzP\n

    The Ed25519 key you've generated is crucial for block finalization using the grandpa consensus algorithm. The Ed25519 public key for the account is 5GdFMFbXy24uz8mFZroFUgdBkY2pq6igBNGAq9tsBfEZRSzP.

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/spin-your-nodes/#generate-a-second-set-of-keys","title":"Generate a Second Set of Keys","text":"

In this tutorial, the private network will consist of two nodes, meaning you'll need two distinct sets of keys. You have several options for generating this second set of keys:

  • Use the keys from one of the predefined accounts
  • Follow the steps from the previous section, but use a different identity on your local machine to create a new key pair
  • Derive a child key pair to simulate a second identity on your local machine (see the derivation sketch after the key list below)

For this tutorial, the second set of keys will be:

  • Sr25519 (for Aura) - 5Df9bvnbqKNR8S1W2Uj5XSpJCKUomyymwCGf6WHKyoo3GDev
  • Ed25519 (for Grandpa) - 5DJRQQWEaJart5yQnA6gnKLYKHLdpX6V4vHgzAYfNPT2NNuW
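If you choose the derivation route, a hard child pair can be derived from the secret phrase generated earlier by appending a derivation path such as //node02 (the path name is arbitrary). A minimal sketch using the node's key inspect subcommand:

./target/release/solochain-template-node key inspect \\\n--scheme Sr25519 \\\n--password-interactive \\\n\"INSERT_SECRET_PHRASE//node02\"\n

Run the same command with --scheme Ed25519 to obtain the matching grandpa key for the derived identity.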
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/spin-your-nodes/#create-a-custom-chain-specification","title":"Create a Custom Chain Specification","text":"

After generating key pairs for your blockchain, the next step is creating a custom chain specification. You will share this specification with trusted validators participating in your network.

To enable others to participate in your blockchain, ensure that each participant generates their own key pair. Once you collect the keys from all network participants, you can create a custom chain specification to replace the default local one.

In this tutorial, you'll modify the local chain specification to create a custom version for a two-node network. The same process can be used to add more nodes if you have the necessary keys.

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/spin-your-nodes/#steps-to-create-a-custom-chain-specification","title":"Steps to Create a Custom Chain Specification","text":"
  1. Open a terminal and navigate to the root directory of your compiled node template

  2. Export the local chain specification:

    ./target/release/solochain-template-node build-spec \\\n--disable-default-bootnode \\\n--chain local > customSpec.json\n
  3. Preview the customSpec.json file:

    • Preview first fields:

      head customSpec.json\n
      head customSpec.json
      \n{\n    \"name\": \"Local Testnet\",\n    \"id\": \"local_testnet\",\n    \"chainType\": \"Local\",\n    \"bootNodes\": [],\n    \"telemetryEndpoints\": null,\n    \"protocolId\": null,\n    \"properties\": null,\n    \"codeSubstitutes\": { ... },\n    \"genesis\": { ... }\n}\n

    • Preview last fields:

      tail -n 78 customSpec.json\n
      tail -n 78 customSpec.json
      \n{\n    \"patch\": {\n        \"aura\": {\n            \"authorities\": [\n                \"5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY\",\n                \"5FHneW46xGXgs5mUiveU4sbTyGBzmstUspZC92UhjJM694ty\"\n            ]\n        },\n        \"balances\": {\n            \"balances\": [\n                [\n                    \"5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY\",\n                    1152921504606846976\n                ],\n                [\n                    \"5FHneW46xGXgs5mUiveU4sbTyGBzmstUspZC92UhjJM694ty\",\n                    1152921504606846976\n                ],\n                [\n                    \"5FLSigC9HGRKVhB9FiEo4Y3koPsNmBmLJbpXg2mp1hXcS59Y\",\n                    1152921504606846976\n                ],\n                [\n                    \"5DAAnrj7VHTznn2AWBemMuyBwZWs6FNFjdyVXUeYum3PTXFy\",\n                    1152921504606846976\n                ],\n                [\n                    \"5HGjWAeFDfFCWPsjFQdVV2Msvz2XtMktvgocEZcCj68kUMaw\",\n                    1152921504606846976\n                ],\n                [\n                    \"5CiPPseXPECbkjWCa6MnjNokrgYjMqmKndv2rSnekmSK2DjL\",\n                    1152921504606846976\n                ],\n                [\n                    \"5GNJqTPyNqANBkUVMN1LPPrxXnFouWXoe2wNSmmEoLctxiZY\",\n                    1152921504606846976\n                ],\n                [\n                    \"5HpG9w8EBLe5XCrbczpwq5TSXvedjrBGCwqxK1iQ7qUsSWFc\",\n                    1152921504606846976\n                ],\n                [\n                    \"5Ck5SLSHYac6WFt5UZRSsdJjwmpSZq85fd5TRNAdZQVzEAPT\",\n                    1152921504606846976\n                ],\n                [\n                    \"5HKPmK9GYtE1PSLsS1qiYU9xQ9Si1NcEhdeCq9sw5bqu4ns8\",\n                    1152921504606846976\n                ],\n                [\n                    \"5FCfAonRZgTFrTd9HREEyeJjDpT397KMzizE6T3DvebLFE7n\",\n                    1152921504606846976\n                ],\n                [\n                    \"5CRmqmsiNFExV6VbdmPJViVxrWmkaXXvBrSX8oqBT8R9vmWk\",\n                    1152921504606846976\n                ]\n            ]\n        },\n        \"grandpa\": {\n            \"authorities\": [\n                [\n                    \"5FA9nQDVg267DEd8m1ZypXLBnvN7SFxYwV7ndqSYGiN9TTpu\",\n                    1\n                ],\n                [\n                    \"5GoNkf6WdbxCFnPdAnYYQyCjAKPJgLNxXwPjwTh6DGg6gN3E\",\n                    1\n                ]\n            ]\n        },\n        \"sudo\": {\n            \"key\": \"5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY\"\n        }\n    }\n}  \n    

      This command will display fields that include configuration details for pallets, such as sudo and balances, as well as the validator settings for the Aura and Grandpa keys.

  4. Edit customSpec.json:

    1. Update the name field:

      \"name\": \"My Custom Testnet\",\n

    2. Add the Sr25519 address of each validator to the authorities array in the aura field to specify the nodes with authority to create blocks:

      \"aura\": {\n  \"authorities\": [\n    \"5HMhkSHpD4XcibjbU9ZiGemLpnsTUzLsG5JhQJQEcxp3KJaW\",\n    \"5Df9bvnbqKNR8S1W2Uj5XSpJCKUomyymwCGf6WHKyoo3GDev\"\n  ]\n},\n
    3. Add the Ed25519 address of each validator to the authorities array in the grandpa field to specify the nodes with authority to finalize blocks. Include a voting weight (typically 1) for each validator to define its voting power:

      \"grandpa\": {\n  \"authorities\": [\n    [\n      \"5GdFMFbXy24uz8mFZroFUgdBkY2pq6igBNGAq9tsBfEZRSzP\",\n      1\n    ],\n    [\n      \"5DJRQQWEaJart5yQnA6gnKLYKHLdpX6V4vHgzAYfNPT2NNuW\",\n      1\n    ]\n  ]\n},\n
  5. Save and close customSpec.json
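Before continuing, you can confirm that your edits landed in the right place. A quick sketch using jq, assuming the template nests the genesis patch under genesis.runtimeGenesis.patch as shown in the preview above:

jq '.genesis.runtimeGenesis.patch.aura.authorities, .genesis.runtimeGenesis.patch.grandpa.authorities' customSpec.json\n

Both arrays should list the addresses you generated, not the default development accounts.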

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/spin-your-nodes/#convert-chain-specification-to-raw-format","title":"Convert Chain Specification to Raw Format","text":"

After creating your custom chain specification, the next crucial step is converting it to a raw format. This process is essential because the raw format includes encoded storage keys that nodes use to reference data in their local storage. By distributing a raw chain specification, you ensure that each node in the network stores data using the same storage keys, which is vital for maintaining data integrity and facilitating network synchronization.

To convert your chain specification to the raw format, follow these steps:

  1. Navigate to the root directory where you compiled the node template

  2. Run the following command to convert the customSpec.json chain specification to the raw format and save it as customSpecRaw.json:

    ./target/release/solochain-template-node build-spec \\\n--chain=customSpec.json \\\n--raw \\\n--disable-default-bootnode > customSpecRaw.json\n
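You can verify the conversion by checking that the genesis state is now expressed as hex-encoded storage key/value pairs instead of the readable patch. A sketch using jq, assuming the raw spec stores them under genesis.raw.top:

jq -r '.genesis.raw.top | keys_unsorted | .[0:3][]' customSpecRaw.json\n

The output should be a few long 0x-prefixed storage keys; these are the encoded storage keys described above.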
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/spin-your-nodes/#add-keys-to-the-keystore","title":"Add Keys to the Keystore","text":"

To enable block production and finalization, you need to add two types of keys to the keystore for each node in the network:

  • aura authority keys for block production
  • grandpa authority keys for block finalization

Follow these steps for each node in your network:

  1. Open a terminal and navigate to the root directory where you compiled the node template

  2. Insert the aura secret key:

    ./target/release/solochain-template-node key insert \\\n--base-path /tmp/node01 \\\n--chain customSpecRaw.json \\\n--scheme Sr25519 \\\n--suri \"INSERT_SECRET_PHRASE\" \\\n--password-interactive \\\n--key-type aura\n

    Note

    Replace INSERT_SECRET_PHRASE with the secret phrase or seed you generated earlier. When prompted, enter the password you used to generate the keys.

  3. Insert the grandpa secret key:

    ./target/release/solochain-template-node key insert \\\n--base-path /tmp/node01 \\\n--chain customSpecRaw.json \\\n--scheme Ed25519 \\\n--suri \"INSERT_SECRET_PHRASE\" \\\n--password-interactive \\\n--key-type gran\n

    Note

    Use the same secret phrase or seed and password as in step 2.

  4. Verify that your keys are in the keystore by running the following command:

    ls /tmp/node01/chains/local_testnet/keystore\n

    You should see output similar to:

    ls /tmp/node01/chains/local_testnet/keystore\n61757261ea23fa399c6bd91af3d7ea2d0ad46c48aff818b285342d9aaf15b3172270e914\n6772616ec9c2cd111f98f2bf78bab6787449fc007dd7f2a5d02f099919f7fb50ade97dd6\n
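    Each file name is the hex-encoded key type followed by the hex-encoded public key, so the two entries above correspond to your aura and gran keys. You can verify the key-type prefixes from the command line, for example:

    echo -n 61757261 | xxd -r -p   # prints aura\necho -n 6772616e | xxd -r -p   # prints gran\n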

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/spin-your-nodes/#start-the-first-node","title":"Start the First Node","text":"

Before starting the first node, it's crucial to generate a network key. This key ensures that the node's identity remains consistent, allowing other nodes to connect to it reliably as a bootnode for synchronization.

To generate a network key, run the following command:

./target/release/solochain-template-node key \\\ngenerate-node-key --base-path /tmp/node01\n

Note

This command generates a network key and stores it in the same base path used for storing the aura and grandpa keys.
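If you need to recover this node's peer ID later (for example, to construct the bootnode address used when adding more nodes), you can derive it from the stored key with the inspect-node-key subcommand. A sketch assuming the key file sits under the chain's network directory; the exact path may vary by version:

./target/release/solochain-template-node key inspect-node-key \\\n--file /tmp/node01/chains/local_testnet/network/secret_ed25519\n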

After generating the network key, start the first node using your custom chain specification with the following command:

./target/release/solochain-template-node \\\n--base-path /tmp/node01 \\\n--chain ./customSpecRaw.json \\\n--port 30333 \\\n--rpc-port 9945 \\\n--validator \\\n--name MyNode01 \\\n--password-interactive\n

Upon execution, you should see output similar to the following:

./target/release/solochain-template-node \\\n--base-path /tmp/node01 \\\n--chain ./customSpecRaw.json \\\n--port 30333 \\\n--rpc-port 9945 \\\n--validator \\\n--name MyNode01 \\\n--password-interactive\n2024-09-12 11:18:46 Substrate Node\n2024-09-12 11:18:46 \u270c\ufe0f version 0.1.0-8599efc46ae\n2024-09-12 11:18:46 \u2764\ufe0f by Parity Technologies <admin@parity.io>, 2017-2024\n2024-09-12 11:18:46 \ud83d\udccb Chain specification: My Custom Testnet\n2024-09-12 11:18:46 \ud83c\udff7 Node name: MyNode01\n2024-09-12 11:18:46 \ud83d\udc64 Role: AUTHORITY\n2024-09-12 11:18:46 \ud83d\udcbe Database: RocksDb at /tmp/node01/chains/local_testnet/db/full\n2024-09-12 11:18:46 Using default protocol ID \"sup\" because none is configured in the chain specs\n2024-09-12 11:18:46 \ud83c\udff7 Local node identity is: 12D3KooWSbaPxmb2tWLgkQVoJdxzpBPTd9dQPmKiJfsvtP753Rg1\n2024-09-12 11:18:46 Running libp2p network backend\n2024-09-12 11:18:46 \ud83d\udcbb Operating system: macos\n2024-09-12 11:18:46 \ud83d\udcbb CPU architecture: aarch64\n2024-09-12 11:18:46 \ud83d\udce6 Highest known block at #0\n2024-09-12 11:18:46 \u303d\ufe0f Prometheus exporter started at 127.0.0.1:9615\n2024-09-12 11:18:46 Running JSON-RPC server: addr=127.0.0.1:9945, allowed origins=[\"http://localhost:*\", \"http://127.0.0.1:*\", \"https://localhost:*\", \"https://127.0.0.1:*\", \"https://polkadot.js.org\"]\n2024-09-12 11:18:51 \ud83d\udca4 Idle (0 peers), best: #0 (0x850f\u2026951f), finalized #0 (0x850f\u2026951f), \u2b07 0 \u2b06 0

After starting the first node, you'll notice:

  • The node is running with the custom chain specification (\"My Custom Testnet\")
  • The local node identity is displayed (12D3KooWSbaPxmb2tWLgkQVoJdxzpBPTd9dQPmKiJfsvtP753Rg1 in this example). This identity is crucial for other nodes to connect to this one
  • The node is currently idle with 0 peers, as it's the only node in the network at this point
  • No blocks are being produced. Block production will commence once another node joins the network
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/spin-your-nodes/#add-more-nodes","title":"Add More Nodes","text":"

Block finalization requires votes from at least two-thirds of the grandpa validators. In this example network, configured with two validators, block finalization can only start after the second node has been added.

Before starting additional nodes, ensure you've properly configured their keys as described in the Add Keys to the Keystore section. For this node, the keys should be stored under the /tmp/node02 base path.

To add a second validator to the private network, run the following command:

./target/release/solochain-template-node \\\n--base-path /tmp/node02 \\\n--chain ./customSpecRaw.json \\\n--port 30334 \\\n--rpc-port 9946 \\\n--validator \\\n--name MyNode02 \\\n--bootnodes /ip4/127.0.0.1/tcp/30333/p2p/12D3KooWSbaPxmb2tWLgkQVoJdxzpBPTd9dQPmKiJfsvtP753Rg1 \\\n--unsafe-force-node-key-generation \\\n--password-interactive\n

Key points about this command:

  • It uses a different base-path and name to identify this as the second validator
  • The --chain option specifies the same chain specification file used for the first node
  • The --bootnodes option is crucial. It should contain the local node identifier from the first node in the network
  • The --unsafe-force-node-key-generation parameter forces the generation of a new node key if one doesn't exist. For non-bootnode validators (like this second node and any subsequent nodes), it's less critical if the key changes because they won't be used as bootnodes. However, for consistency and best practices, it's recommended to generate and maintain a stable node key for all validators once the network is set up

After both nodes have added their keys to their respective keystores (under /tmp/node01 and /tmp/node02) and are running, you should see:

  • The same genesis block and state root hashes on both nodes
  • Each node showing one peer
  • Block proposals being produced
  • After a few seconds, new blocks being finalized on both nodes

If successful, you should see logs similar to the following on both nodes:

2024-09-12 15:37:05 \ud83d\udca4 Idle (0 peers), best: #0 (0x8af7\u202653fd), finalized #0 (0x8af7\u202653fd), \u2b07 0 \u2b06 0\n2024-09-12 15:37:08 discovered: 12D3KooWMaL5zqYiMnVikaYCGF65fKekSPqXGgyz92eRcqcnfpey /ip4/192.168.1.2/tcp/30334\n2024-09-12 15:37:10 \ud83d\udca4 Idle (1 peers), best: #0 (0x8af7\u202653fd), finalized #0 (0x8af7\u202653fd), \u2b07 0.6kiB/s \u2b06 0.6kiB/s\n2024-09-12 15:37:12 \ud83d\ude4c Starting consensus session on top of parent 0x8af7c72457d437486fe697b4a11ef42b26c8b4448836bdb2220495aea39f53fd (#0)\n2024-09-12 15:37:12 \ud83c\udf81 Prepared block for proposing at 1 (6 ms) [hash: 0xb97cb3a4a62f0cb320236469d8e1e13227a15138941f3c9819b6b78f91986262; parent_hash: 0x8af7\u202653fd; extrinsics (1): [0x1ef4\u2026eecb]\n2024-09-12 15:37:12 \ud83d\udd16 Pre-sealed block for proposal at 1. Hash now 0x05115677207265f22c6d428fb00b65a0e139c866c975913431ddefe291124f04, previously 0xb97cb3a4a62f0cb320236469d8e1e13227a15138941f3c9819b6b78f91986262.\n2024-09-12 15:37:12 \ud83c\udfc6 Imported #1 (0x8af7\u202653fd \u2192 0x0511\u20264f04)\n2024-09-12 15:37:15 \ud83d\udca4 Idle (1 peers), best: #1 (0x0511\u20264f04), finalized #0 (0x8af7\u202653fd), \u2b07 0.5kiB/s \u2b06 0.6kiB/s\n2024-09-12 15:37:18 \ud83c\udfc6 Imported #2 (0x0511\u20264f04 \u2192 0x17a7\u2026a1fd)\n2024-09-12 15:37:20 \ud83d\udca4 Idle (1 peers), best: #2 (0x17a7\u2026a1fd), finalized #0 (0x8af7\u202653fd), \u2b07 0.6kiB/s \u2b06 0.5kiB/s\n2024-09-12 15:37:24 \ud83d\ude4c Starting consensus session on top of parent 0x17a77a8799bd58c7b82ca6a1e3322b38e7db574ee6c92fbcbc26bbe5214da1fd (#2)\n2024-09-12 15:37:24 \ud83c\udf81 Prepared block for proposing at 3 (1 ms) [hash: 0x74d78266b1ac2514050ced3f34fbf98a28c6a2856f49dbe8b44686440a45f879; parent_hash: 0x17a7\u2026a1fd; extrinsics (1): [0xe35f\u20268d48]\n2024-09-12 15:37:24 \ud83d\udd16 Pre-sealed block for proposal at 3. Hash now 0x12cc1e9492988cfd3ffe4a6eb3186b1abb351a12a97809f7bae4a7319e177dee, previously 0x74d78266b1ac2514050ced3f34fbf98a28c6a2856f49dbe8b44686440a45f879.\n2024-09-12 15:37:24 \ud83c\udfc6 Imported #3 (0x17a7\u2026a1fd \u2192 0x12cc\u20267dee)\n2024-09-12 15:37:25 \ud83d\udca4 Idle (1 peers), best: #3 (0x12cc\u20267dee), finalized #1 (0x0511\u20264f04), \u2b07 0.5kiB/s \u2b06 0.6kiB/s

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/","title":"Upgrade a Running Network","text":""},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#introduction","title":"Introduction","text":"

One of the key advantages of the Polkadot SDK development framework is its support for forkless upgrades to the blockchain runtime, which forms the core logic of the chain. Unlike many other blockchains, where introducing new features or improving existing ones often requires a hard fork, Polkadot SDK enables seamless upgrades even when introducing breaking changes\u2014without disrupting the network's operation.

Polkadot SDK's design incorporates the runtime directly into the blockchain's state, allowing participants to upgrade the runtime by calling the set_code function within a transaction. This mechanism ensures that updates are validated using the blockchain's consensus and cryptographic guarantees, allowing runtime logic to be updated or extended without forking the chain or requiring a new blockchain client.

In this tutorial, you'll learn how to upgrade the runtime of a Polkadot SDK-based blockchain without stopping the network or creating a fork.

You'll make the following changes to a running network node's runtime:

  • Increase the spec_version
  • Add the Utility pallet
  • Increase the minimum balance for network accounts

By the end of this tutorial, you\u2019ll have the skills to upgrade the runtime and submit a transaction to deploy the modified runtime on a live network.

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#prerequisites","title":"Prerequisites","text":"

Before starting this tutorial, ensure you meet the following requirements:

  • Installed and configured Rust on your system. Refer to the Installation guide for detailed instructions on installing Rust and setting up your development environment
  • Completed the Launch a Local Solochain tutorial and have the Polkadot SDK Solochain Template installed on your machine
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#start-the-node","title":"Start the Node","text":"

To demonstrate how to update a running node, you first need to start the local node with the current runtime.

  1. Navigate to the root directory where you compiled the Polkadot SDK Solochain Template

  2. Start the local node in development mode by running the following command:

    ./target/release/solochain-template-node --dev\n

    Note

    Keep the node running throughout this tutorial. You can modify and re-compile the runtime without stopping or restarting the node.

  3. Connect to your node using the same steps outlined in the Interact with the Node section. Once connected, you\u2019ll notice the node template is using the default version, 100, displayed in the upper left

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#modify-the-runtime","title":"Modify the Runtime","text":""},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#add-the-utility-pallet-to-the-dependencies","title":"Add the Utility Pallet to the Dependencies","text":"

First, you'll update the Cargo.toml file to include the Utility pallet as a dependency for the runtime. Follow these steps:

  1. Open the runtime/Cargo.toml file and locate the [dependencies] section. Add the Utility pallet by inserting the following line:

    pallet-utility = { version = \"37.0.0\", default-features = false }\n

    Your [dependencies] section should now look something like this:

    [dependencies]\ncodec = { features = [\"derive\"], workspace = true }\nscale-info = { features = [\"derive\", \"serde\"], workspace = true }\nframe-support = { features = [\"experimental\"], workspace = true }\n...\npallet-utility = { version = \"37.0.0\", default-features = false }\n
  2. In the [features] section, add the Utility pallet to the std feature list by including:

    [features]\ndefault = [\"std\"]\nstd = [\n    \"codec/std\",\n    \"scale-info/std\",\n    \"frame-executive/std\",\n    ...\n    \"pallet-utility/std\",\n]\n

  3. Save the changes and close the Cargo.toml file

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#update-the-runtime-configuration","title":"Update the Runtime Configuration","text":"

You'll now modify the runtime/src/lib.rs file to integrate the Utility pallet and make other necessary changes. In this section, you'll configure the Utility pallet by implementing its Config trait, update the runtime macro to include the new pallet, adjust the EXISTENTIAL_DEPOSIT value, and increment the runtime version.

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#configure-the-utility-pallet","title":"Configure the Utility Pallet","text":"

To configure the Utility pallet, take the following steps:

  1. Implement the Config trait for the Utility pallet:

    ...\n/// Configure the pallet-template in pallets/template\nimpl pallet_template::Config for Runtime {\n    ...\n}\n\n// Add here after all the other pallets implementations\nimpl pallet_utility::Config for Runtime {\n    type RuntimeEvent = RuntimeEvent;\n    type RuntimeCall = RuntimeCall;\n    type PalletsOrigin = OriginCaller;\n    type WeightInfo = pallet_utility::weights::SubstrateWeight<Runtime>;\n}\n...\n
  2. Locate the #[frame_support::runtime] macro and add the Utility pallet:

     // Create the runtime by composing the FRAME pallets that were previously configured\n #[frame_support::runtime]\n mod runtime {\n     ...\n     // Include the custom logic from the pallet-template in the runtime\n     #[runtime::pallet_index(7)]\n     pub type TemplateModule = pallet_template;\n\n     #[runtime::pallet_index(8)]\n     pub type Utility = pallet_utility;\n     ...\n }\n
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#update-existential-deposit-value","title":"Update Existential Deposit Value","text":"

To update the EXISTENTIAL_DEPOSIT in the Balances pallet, locate the constant and set the value to 1000:

...\n/// Existential deposit\npub const EXISTENTIAL_DEPOSIT: u128 = 1000;\n...\n

Note

This change increases the minimum balance required for accounts to remain active. Note that existing accounts holding balances between the old value (500) and the new value (1000) are not removed automatically; removing them would require a storage migration. See Storage Migration for details.

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#update-runtime-version","title":"Update Runtime Version","text":"

Locate the runtime_version macro and increment the spec_version field from 100 to 101:

#[sp_version::runtime_version]\npub const VERSION: RuntimeVersion = RuntimeVersion {\n    spec_name: create_runtime_str!(\"solochain-template-runtime\"),\n    impl_name: create_runtime_str!(\"solochain-template-runtime\"),\n    authoring_version: 1,\n    spec_version: 101,\n    impl_version: 1,\n    apis: RUNTIME_API_VERSIONS,\n    transaction_version: 1,\n    state_version: 1,\n};\n
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#recompile-the-runtime","title":"Recompile the Runtime","text":"

Once you've made all the necessary changes, recompile the runtime by running:

cargo build --release\n

The build artifacts will be output to the target/release directory. The Wasm build artifacts can be found in the target/release/wbuild/solochain-template-runtime directory. You should see the following files:

  • solochain_template_runtime.compact.compressed.wasm
  • solochain_template_runtime.compact.wasm
  • solochain_template_runtime.wasm
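You can inspect the generated artifacts and their sizes from the command line:

ls -lh ./target/release/wbuild/solochain-template-runtime/\n

The solochain_template_runtime.compact.compressed.wasm file is typically the artifact submitted in the upgrade transaction that follows.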
"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#execute-the-runtime-upgrade","title":"Execute the Runtime Upgrade","text":"

Now that you've generated the Wasm artifact for your modified runtime, it's time to upgrade the running network. This process involves submitting a transaction to load the new runtime logic.

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#understand-runtime-upgrades","title":"Understand Runtime Upgrades","text":""},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#authorization-with-sudo","title":"Authorization with Sudo","text":"

In production networks, runtime upgrades typically require community approval through governance. For this tutorial, the Sudo pallet will be used to simplify the process. The Sudo pallet allows a designated account (usually Alice in development environments) to perform privileged operations, including runtime upgrades.

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#resource-accounting","title":"Resource Accounting","text":"

Runtime upgrades use the set_code extrinsic, which is designed to consume an entire block's resources. This design prevents other transactions from executing on different runtime versions within the same block. The set_code extrinsic is classified as an Operational call, one of the variants of the DispatchClass enum. This classification means it:

  • Can use a block's entire weight limit
  • Receives maximum priority
  • Is exempt from transaction fees

To bypass these resource accounting safeguards, this tutorial uses the sudo_unchecked_weight extrinsic, which lets you specify a weight of zero for the call. With a zero weight, the runtime skips the weight checks entirely, so the upgrade cannot be rejected for exceeding the block's weight limit.

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#perform-the-upgrade","title":"Perform the Upgrade","text":"

Follow these steps to update your network with the new runtime:

  1. Open Polkadot.js Apps in your web browser and make sure you are connected to your local node

  2. Navigate to the Developer dropdown and select the Extrinsics option

  3. Construct the set_code extrinsic call:

    1. Select the sudo pallet
    2. Choose the sudoUncheckedWeight extrinsic
    3. Select the system pallet
    4. Choose the setCode extrinsic
    5. Fill in the parameters:

      • code - the new runtime code

        Note

        You can click the file upload toggle to upload a file instead of copying the hex string value.

      • weight - leave both parameters set to the default value of 0

    6. Click on Submit Transaction

  4. Review the transaction details and click Sign and Submit to confirm the transaction

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#verify-the-upgrade","title":"Verify the Upgrade","text":""},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#runtime-version-change","title":"Runtime Version Change","text":"

Verify that the runtime version of your blockchain has been updated successfully. Follow these steps to ensure the upgrade was applied:

  1. Navigate to the Network dropdown and select the Explorer option

  2. After the transaction is included in a block, check:

    1. There has been a successful sudo.Sudid event
    2. The indicator shows that the runtime version is now 101
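You can also confirm the upgrade over JSON-RPC using the standard state_getRuntimeVersion method, assuming the node is still running on the default development port. The specVersion field in the response should now read 101:

curl -s -H \"Content-Type: application/json\" \\\n-d '{\"id\":1, \"jsonrpc\":\"2.0\", \"method\":\"state_getRuntimeVersion\", \"params\":[]}' \\\nhttp://localhost:9944\n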

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#utility-pallet-addition","title":"Utility Pallet Addition","text":"

In the Extrinsics section, you should see that the Utility pallet has been added as an option.

"},{"location":"tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/#existential-deposit-update","title":"Existential Deposit Update","text":"

Check the updated existential deposit value on your blockchain. Follow these steps to query and verify the new value:

  1. Navigate to the Developer dropdown and select the Chain State option

  2. Query the existential deposit value

    1. Click on the Constants tab
    2. Select the balances pallet
    3. Choose the existentialDeposit constant
    4. Click the + button to execute the query
    5. Check the existential deposit value

"},{"location":"tutorials/polkadot-sdk/system-chains/","title":"System Chains Tutorials","text":"

In this section, you'll gain hands-on experience building solutions that integrate with system chains on Polkadot using the Polkadot SDK. System chains like the Asset Hub provide essential infrastructure for enabling cross-chain interoperability and asset management across the Polkadot ecosystem.

Through these tutorials, you'll learn how to leverage these system chains to enhance the functionality and security of your blockchain applications.

"},{"location":"tutorials/polkadot-sdk/system-chains/#for-parachain-integrators","title":"For Parachain Integrators","text":"

Enhance cross-chain interoperability and expand your parachain\u2019s functionality:

  • Register your parachain's asset on Asset Hub - connect your parachain\u2019s assets to Asset Hub as a foreign asset using XCM, enabling seamless cross-chain transfers and integration
"},{"location":"tutorials/polkadot-sdk/system-chains/#for-developers-leveraging-system-chains","title":"For Developers Leveraging System Chains","text":"

Unlock new possibilities by tapping into Polkadot\u2019s system chains:

  • Register a new asset on Asset Hub - create and customize assets directly on Asset Hub (local assets) with parameters like metadata, minimum balances, and more

  • Convert Assets - use Asset Hub's AMM functionality to swap between different assets, provide liquidity to pools, and manage LP tokens

"},{"location":"tutorials/polkadot-sdk/system-chains/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/","title":"Asset Hub Tutorials","text":""},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/#benefits-of-asset-hub","title":"Benefits of Asset Hub","text":"

Polkadot SDK-based relay chains focus on security and consensus, leaving asset management to an external component, such as a system chain. The Asset Hub is one example of a system chain and is vital to managing tokens which aren't native to the Polkadot ecosystem. Developers opting to integrate with Asset Hub can expect the following benefits:

  • Support for non-native on-chain assets - create and manage your own tokens or NFTs with Polkadot ecosystem compatibility available out of the box
  • Lower transaction fees - approximately 1/10th of the cost of using the relay chain
  • Reduced deposit requirements - approximately 1/100th of the deposit required for the relay chain
  • Payment of fees with non-native assets - no need to buy native tokens for gas, increasing flexibility for developers and users
"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/#get-started","title":"Get Started","text":"

Through these tutorials, you'll learn how to manage cross-chain assets, including:

  • Asset registration and configuration
  • Cross-chain asset representation
  • Liquidity pool creation and management
  • Asset swapping and conversion
  • Transaction parameter optimization
"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/#additional-resources","title":"Additional ResourcesLearn More About Asset Hub","text":"

Explore the fundamentals of Asset Hub, including managing on-chain assets, foreign asset integration, and using XCM for cross-chain asset transfers.

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/","title":"Convert Assets on Asset Hub","text":""},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/#introduction","title":"Introduction","text":"

Asset Conversion is an Automated Market Maker (AMM) utilizing Uniswap V2 logic and implemented as a pallet on Polkadot's Asset Hub. For more details about this feature, please visit the Asset Conversion on Asset Hub wiki page.

This guide will provide detailed information about the key functionalities offered by the Asset Conversion pallet on Asset Hub, including:

  • Creating a liquidity pool
  • Adding liquidity to a pool
  • Swapping assets
  • Withdrawing liquidity from a pool
"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/#prerequisites","title":"Prerequisites","text":"

Before converting assets on Asset Hub, you must ensure you have:

  • Access to the Polkadot.js Apps interface and a connection with the intended blockchain
  • A funded wallet containing the assets you wish to convert and enough available funds to cover the transaction fees
  • An asset registered on Asset Hub that you want to convert. If you haven't created an asset on Asset Hub yet, refer to the Register a Local Asset or Register a Foreign Asset documentation to create an asset.
"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/#creating-a-liquidity-pool","title":"Creating a Liquidity Pool","text":"

If an asset on Asset Hub does not have an existing liquidity pool, the first step is to create one.

The asset conversion pallet provides the createPool extrinsic, which creates a new, empty liquidity pool along with a new LP token asset.

Note

A testing token with the asset ID 1112 and the name PPM was created for this example.

As stated in the Test Environment Setup section, this tutorial is based on the assumption that you have an instance of Polkadot Asset Hub running locally. Therefore, the demo liquidity pool will be created between DOT and PPM tokens. However, the same steps can be applied to any other asset on Asset Hub.

From the Asset Hub perspective, the Multilocation that identifies the PPM token is the following:

{\n  parents: 0,\n  interior: {\n    X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]\n  }\n}\n

Note

The PalletInstance value of 50 represents the Assets pallet on Asset Hub. The GeneralIndex value of 1112 is the PPM asset's asset ID.

To create the liquidity pool, you can follow these steps:

  1. Navigate to the Extrinsics section on the Polkadot.js Apps interface

    1. Select Developer from the top menu
    2. Click on Extrinsics from the dropdown menu

  2. Choose the AssetConversion pallet and click on the createPool extrinsic

    1. Select the AssetConversion pallet
    2. Choose the createPool extrinsic from the list of available extrinsics

  3. Fill in the required fields:

    1. asset1 - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

      {\n  parents: 0,\n  interior: 'Here'\n}\n
    2. asset2 - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

      {\n  parents: 0,\n  interior: {\n    X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]\n  }\n}\n
    3. Click on Submit Transaction to create the liquidity pool

Signing and submitting the transaction triggers the creation of the liquidity pool. To verify the new pool's creation, check the Explorer section on the Polkadot.js Apps interface and ensure that the PoolCreated event was emitted.

In this example, the lpToken ID created for the pool is 19. This ID is essential for identifying the liquidity pool and its associated LP tokens.

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/#adding-liquidity-to-a-pool","title":"Adding Liquidity to a Pool","text":"

The addLiquidity extrinsic allows users to provide liquidity to a pool of two assets. Users specify their preferred amounts for both assets and minimum acceptable quantities. The function determines the best asset contribution, which may vary from the amounts desired but won't fall below the specified minimums. Providers receive liquidity tokens representing their pool portion in return for their contribution.

To add liquidity to a pool, follow these steps:

  1. Navigate to the Extrinsics section on the Polkadot.js Apps interface

    1. Select Developer from the top menu
    2. Click on Extrinsics from the dropdown menu

  2. Choose the assetConversion pallet and click on the addLiquidity extrinsic

    1. Select the assetConversion pallet
    2. Choose the addLiquidity extrinsic from the list of available extrinsics

  3. Fill in the required fields:

    1. asset1 - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

      {\n  parents: 0,\n  interior: 'Here'\n}\n
    2. asset2 - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

      {\n  parents: 0,\n  interior: {\n    X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]\n  }\n}\n
    3. amount1Desired - the amount of the first asset that will be contributed to the pool

    4. amount2Desired - the quantity of the second asset intended for pool contribution
    5. amount1Min - the minimum amount of the first asset that will be contributed
    6. amount2Min - the lowest acceptable quantity of the second asset for contribution
    7. mintTo - the account to which the liquidity tokens will be minted
    8. Click on Submit Transaction to add liquidity to the pool

    Warning

    Ensure that the appropriate amount of tokens provided has been minted previously and is available in your account before adding liquidity to the pool.

    In this case, the liquidity provided to the pool is between DOT tokens and PPM tokens with the asset ID 1112 on Polkadot Asset Hub. The intention is to provide liquidity for 1 DOT token (u128 value of 10000000000, as DOT has 10 decimals) and 1 PPM token (u128 value of 10000000000, as PPM also has 10 decimals).

Signing and submitting the transaction adds liquidity to the pool. To verify the liquidity addition, check the Explorer section on the Polkadot.js Apps interface and ensure that the LiquidityAdded event was emitted.

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/#swapping-assets","title":"Swapping Assets","text":""},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/#swapping-from-an-exact-amount-of-tokens","title":"Swapping From an Exact Amount of Tokens","text":"

The asset conversion pallet's swapExactTokensForTokens extrinsic lets users exchange an exact quantity of one asset for another in a designated liquidity pool, guaranteeing they receive at least a predetermined minimum amount of the second asset. This makes trading more predictable, since users can exchange assets knowing they are assured a minimum return.

To swap assets for an exact amount of tokens, follow these steps:

  1. Navigate to the Extrinsics section on the Polkadot.js Apps interface

    1. Select Developer from the top menu
    2. Click on Extrinsics from the dropdown menu

  2. Choose the AssetConversion pallet and click on the swapExactTokensForTokens extrinsic

    1. Select the AssetConversion pallet
    2. Choose the swapExactTokensForTokens extrinsic from the list of available extrinsics

  3. Fill in the required fields:

    1. path:Vec<StagingXcmV3MultiLocation> - an array of Multilocations representing the path of the swap. The first and last elements of the array are the input and output assets, respectively. In this case, the path consists of two elements:

      • 0: StagingXcmV3MultiLocation - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

        {\n  parents: 0,\n  interior: 'Here'\n}\n
      • 1: StagingXcmV3MultiLocation - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

        {\n  parents: 0,\n  interior: {\n    X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]\n  }\n}\n
    2. amountIn - the exact amount of the first asset that will be swapped

    3. amountOutMin - the minimum amount of the second asset that the user is willing to accept
    4. sendTo - the account to which the swapped assets will be sent
    5. keepAlive - a boolean value that determines whether the sender's account should be kept alive (protected from being reaped) after the swap
    6. Click on Submit Transaction to swap assets for an exact amount of tokens

    Warning

    Ensure that the tokens being swapped have been minted previously and are available in your account before executing the swap.

    In this case, the intention is to swap exactly 0.01 DOT token (u128 value of 100000000, as DOT has 10 decimals) for a minimum of 0.04 PPM token (u128 value of 400000000, as PPM also has 10 decimals).

Signing and submitting the transaction will execute the swap. To verify execution, check the Explorer section on the Polkadot.js Apps interface and make sure that the SwapExecuted event was emitted.

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/#swapping-to-an-exact-amount-of-tokens","title":"Swapping To an Exact Amount of Tokens","text":"

Conversely, the Asset Conversion pallet comes with a function that allows users to trade a variable amount of one asset to acquire a precise quantity of another. It ensures that users stay within a set maximum of the initial asset to obtain the desired amount of the second asset. This provides a method to control transaction costs while achieving the intended result.

To swap assets to receive an exact amount of tokens, follow these steps:

  1. Navigate to the Extrinsics section on the Polkadot.js Apps interface

    1. Select Developer from the top menu
    2. Click on Extrinsics from the dropdown menu

  2. Choose the AssetConversion pallet and click on the swapTokensForExactTokens extrinsic:

    1. Select the AssetConversion pallet
    2. Choose the swapTokensForExactTokens extrinsic from the list of available extrinsics

  3. Fill in the required fields:

    1. path:Vec<StagingXcmV3MultiLocation> - an array of Multilocations representing the path of the swap. The first and last elements of the array are the input and output assets, respectively. In this case, the path consists of two elements:

      • 0: StagingXcmV3MultiLocation - the Multilocation of the first asset in the pool. In this case, it is the PPM token, which the following Multilocation represents:

        {\n  parents: 0,\n  interior: {\n    X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]\n  }\n}\n
      • 1: StagingXcmV3MultiLocation - the second asset's Multilocation within the pool. This refers to the DOT token, which the following Multilocation identifies:

        {\n  parents: 0,\n  interior: 'Here'\n}\n
    2. amountOut - the exact amount of the second asset that the user wants to receive

    3. amountInMax - the maximum amount of the first asset that the user is willing to swap
    4. sendTo - the account to which the swapped assets will be sent
    5. keepAlive - a boolean value that determines whether the sender's account should be kept alive (protected from being reaped) after the swap
    6. Click on Submit Transaction to swap assets for an exact amount of tokens

    Warning

    Before swapping assets, ensure that the tokens provided have been minted previously and are available in your account.

    In this case, the intention is to spend at most 0.04 PPM token (u128 value of 400000000, as PPM has ten decimals) to receive exactly 0.01 DOT token (u128 value of 100000000, as DOT also has ten decimals).

Signing and submitting the transaction will execute the swap. To verify execution, check the Explorer section on the Polkadot.js Apps interface and make sure that the SwapExecuted event was emitted.

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/#withdrawing-liquidity-from-a-pool","title":"Withdrawing Liquidity from a Pool","text":"

The Asset Conversion pallet provides the removeLiquidity extrinsic to remove liquidity from a pool. This function allows users to withdraw the liquidity they offered from a pool, returning the original assets. When calling this function, users specify the number of liquidity tokens (representing their share in the pool) they wish to burn. They also set minimum acceptable amounts for the assets they expect to receive back. This mechanism ensures that users can control the minimum value they receive, protecting against unfavorable price movements during the withdrawal process.

To withdraw liquidity from a pool, follow these steps:

  1. Navigate to the Extrinsics section on the Polkadot.js Apps interface

    1. Select Developer from the top menu
    2. Click on Extrinsics from the dropdown menu

  2. Choose the AssetConversion pallet and click on the removeLiquidity extrinsic

    1. Select the AssetConversion pallet
    2. Choose the removeLiquidity extrinsic from the list of available extrinsics

  3. Fill in the required fields:

    1. asset1 - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

      {\n  parents: 0,\n  interior: 'Here'\n}\n
    2. asset2 - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

      {\n  parents: 0,\n  interior: {\n    X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]\n  }\n}\n
    3. lpTokenBurn - the number of liquidity tokens to burn

    4. amount1MinReceived - the minimum amount of the first asset that the user expects to receive
    5. amount2MinReceived - the minimum quantity of the second asset the user expects to receive
    6. withdrawTo - the account to which the withdrawn assets will be sent
    7. Click on Submit Transaction to withdraw liquidity from the pool

    Warning

    Ensure that the tokens provided have been minted previously and are available in your account before withdrawing liquidity from the pool.

    In this case, the intention is to burn 0.05 liquidity tokens, expecting to receive at least 0.004 DOT token (u128 value of 40000000, as DOT has 10 decimals) and 0.04 PPM token (u128 value of 400000000, as PPM also has 10 decimals).

Signing and submitting the transaction will initiate the withdrawal of liquidity from the pool. To verify the withdrawal, check the Explorer section on the Polkadot.js Apps interface and ensure that the LiquidityRemoved event was emitted.

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/#test-environment-setup","title":"Test Environment Setup","text":"

To test the Asset Conversion pallet, you can set up a local test environment to simulate different scenarios. This guide uses Chopsticks to spin up an instance of Polkadot Asset Hub. For further details on using Chopsticks, please refer to the Chopsticks documentation.

To set up a local test environment, execute the following command:

npx @acala-network/chopsticks \\\n--config=https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot-asset-hub.yml\n

Note

This command initiates a lazy fork of Polkadot Asset Hub, including the most recent block information from the network. For Kusama Asset Hub testing, simply replace polkadot-asset-hub.yml with kusama-asset-hub.yml in the command.

You now have a local Asset Hub instance up and running, ready for you to test various asset conversion procedures. The process here mirrors what you'd do on MainNet. After completing a transaction on TestNet, you can apply the same steps to convert assets on MainNet.

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-asset/","title":"Register a Foreign Asset on Asset Hub","text":""},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-asset/#introduction","title":"Introduction","text":"

As outlined in the Asset Hub Overview, Asset Hub supports two categories of assets: local and foreign. Local assets are created on the Asset Hub system parachain and are identified by integer IDs. On the other hand, foreign assets, which originate outside of Asset Hub, are recognized by Multilocations.

When registering a foreign asset on Asset Hub, it's essential to note that the process involves communication between two parachains: Asset Hub is the destination chain where the asset is registered, while the parachain the asset originates from is the source. Communication between the two parachains is facilitated by the Cross-Chain Message Passing (XCMP) protocol.

This guide will take you through the process of registering a foreign asset on the Asset Hub parachain.

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-asset/#prerequisites","title":"Prerequisites","text":"

The Asset Hub parachain is one of the system parachains on a relay chain, such as Polkadot or Kusama. To interact with these parachains, you can use the Polkadot.js Apps interface for:

  • Polkadot Asset Hub
  • Kusama Asset Hub

For testing purposes, you can also interact with the Asset Hub instance on the following test networks:

  • Paseo Asset Hub

Before you start, ensure that you have:

  • Access to the Polkadot.js Apps interface, connected to the desired chain
  • A parachain that supports the XCMP protocol to interact with the Asset Hub parachain
  • A funded wallet to pay for the transaction fees and subsequent registration of the foreign asset

This guide will use Polkadot, its local Asset Hub instance, and the Astar parachain (ID 2006), as stated in the Test Environment Setup section. However, the process is the same for other relay chains and their respective Asset Hub parachains, regardless of the network you are using or which parachain owns the foreign asset.

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-asset/#steps-to-register-a-foreign-asset","title":"Steps to Register a Foreign Asset","text":""},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-asset/#asset-hub","title":"Asset Hub","text":"
  1. Open the Polkadot.js Apps interface and connect to the Asset Hub parachain using the network selector in the top left corner

    • Testing foreign asset registration is recommended on TestNet before proceeding to MainNet. If you haven't set up a local testing environment yet, consult the Environment setup guide. After setting up, connect to the Local Node (Chopsticks) at ws://127.0.0.1:8000
    • For live network operations, connect to the Asset Hub parachain. You can choose either Polkadot or Kusama Asset Hub from the dropdown menu, selecting your preferred RPC provider
  2. Navigate to the Extrinsics page

    1. Click on the Developer tab from the top navigation bar
    2. Select Extrinsics from the dropdown

  3. Select the Foreign Assets pallet

    1. Select the foreignAssets pallet from the dropdown list
    2. Choose the create extrinsic

  4. Fill out the required fields and click on the copy icon to copy the encoded call data to your clipboard. The fields to be filled are:

    • id - as this is a foreign asset, the ID is represented by a Multilocation that reflects the asset's origin. In this case, the asset's Multilocation, expressed relative to Asset Hub (where the source chain appears as a sibling parachain), is:

      { parents: 1, interior: { X1: [{ Parachain: 2006 }] } }\n
    • admin - refers to the account that will be the admin of this asset. This account will be able to manage the asset, including updating its metadata. As the registered asset corresponds to a native asset of the source parachain, the admin account should be the sovereign account of the source parachain

      Obtain the sovereign account

      The sovereign account can be obtained through Substrate Utilities.

      Ensure that Sibling is selected and that the Para ID corresponds to the source parachain. In this case, since the guide follows the test setup stated in the Test Environment Setup section, the Para ID is 2006.

    • minBalance - the minimum balance required to hold this asset

    Encoded call data

    If you want an example of the encoded call data, you can copy the following:

    0x3500010100591f007369626cd6070000000000000000000000000000000000000000000000000000a0860100000000000000000000000000\n
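You can also derive the admin account and build the encoded call data programmatically. The sketch below assumes the standard sibling sovereign account derivation (the ASCII prefix sibl followed by the para ID as a little-endian u32, zero-padded to 32 bytes, visible as 7369626c in the encoded call data above) and reuses a minBalance of 100000:

```js
import { ApiPromise, WsProvider } from '@polkadot/api';
import { stringToU8a, bnToU8a } from '@polkadot/util';
import { encodeAddress } from '@polkadot/util-crypto';

const paraId = 2006;

// Sibling sovereign account: "sibl" ++ para ID (u32 LE), zero-padded to 32 bytes
const sovereign = new Uint8Array(32);
sovereign.set(stringToU8a('sibl'), 0);
sovereign.set(bnToU8a(paraId, { bitLength: 32, isLe: true }), 4);

async function buildCreateCall() {
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:8000'), // local Asset Hub fork
  });
  const id = { parents: 1, interior: { X1: [{ Parachain: paraId }] } };
  const tx = api.tx.foreignAssets.create(id, encodeAddress(sovereign), 100_000);
  console.log('admin (sovereign account):', encodeAddress(sovereign));
  console.log('encoded call data:', tx.method.toHex());
}
```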

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-asset/#source-parachain","title":"Source Parachain","text":"
  1. Navigate to the Developer > Extrinsics section
  2. Create the extrinsic to register the foreign asset through XCM

    1. Paste the encoded call data copied in the previous step
    2. Click the Submit Transaction button

    This XCM call withdraws DOT from the parachain's sibling (sovereign) account on Asset Hub and uses it to buy execution. The Transact instruction is dispatched with Xcm as the origin kind and carries the hex-encoded call that creates the foreign asset on Asset Hub for the specified asset Multilocation. Any surplus is refunded, and the remaining funds are deposited back into the sibling account.

    Warning

    Note that the sovereign account on the Asset Hub parachain must have a sufficient balance to cover the XCM BuyExecution instruction. If the account does not have enough balance, the transaction will fail.

    Example of the encoded call data

    If you want to have the whole XCM call ready to be copied, go to the Developer > Extrinsics > Decode section and paste the following hex-encoded call data:

    0x6300330003010100a10f030c000400010000070010a5d4e81300010000070010a5d4e80006030700b4f13501419ce03500010100591f007369626cd607000000000000000000000000000000000000000000000000000000000000000000000000000000000000\n

    Be sure to replace the encoded call data with the one you copied in the previous step.

After the transaction is successfully executed, the foreign asset will be registered on the Asset Hub parachain.
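For reference, the XCM message described above can also be assembled with the Polkadot.js API. Treat this as a rough sketch rather than a drop-in transaction: it assumes the source parachain exposes pallet-xcm as polkadotXcm, an XCM v3 runtime, and Asset Hub at para ID 1000; the fee and weight values are illustrative, encodedCall is the create call data from the previous step, and sovereign is the 32-byte sibling account:

```js
// Submit on the source parachain (e.g., the local Astar fork)
const dest = { V3: { parents: 1, interior: { X1: { Parachain: 1000 } } } };
const fee = {
  id: { Concrete: { parents: 1, interior: 'Here' } }, // DOT, from Asset Hub's perspective
  fun: { Fungible: 1_000_000_000_000 },               // illustrative fee amount
};
const message = {
  V3: [
    { WithdrawAsset: [fee] },
    { BuyExecution: { fees: fee, weightLimit: 'Unlimited' } },
    {
      Transact: {
        originKind: 'Xcm', // XCM as the origin kind
        requireWeightAtMost: { refTime: 1_000_000_000, proofSize: 50_000 }, // illustrative
        call: { encoded: encodedCall }, // foreignAssets.create call data
      },
    },
    'RefundSurplus',
    {
      DepositAsset: {
        assets: { Wild: { AllCounted: 1 } },
        beneficiary: {
          parents: 0,
          interior: { X1: { AccountId32: { id: sovereign } } }, // back to the sibling account
        },
      },
    },
  ],
};
await api.tx.polkadotXcm.send(dest, message).signAndSend(signer);
```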

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-asset/#asset-registration-verification","title":"Asset Registration Verification","text":"

To confirm that a foreign asset has been successfully accepted and registered on the Asset Hub parachain, navigate to the Network > Explorer section of the Polkadot.js Apps interface for Asset Hub and look for the event emitted when the registration message is processed. The event's success field indicates whether the asset registration was successful.

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-asset/#test-environment-setup","title":"Test Environment Setup","text":"

To test the foreign asset registration process before deploying it on a live network, you can set up a local parachain environment. This guide uses Chopsticks to simulate that process. For more information on using Chopsticks, please refer to the Chopsticks documentation.

To set up a test environment, run the following command:

npx @acala-network/chopsticks xcm \\\n--r polkadot \\\n--p polkadot-asset-hub \\\n--p astar\n

Note

The above command will create a lazy fork of Polkadot as the relay chain, its Asset Hub instance, and the Astar parachain. The xcm parameter enables communication through the XCMP protocol between the relay chain and the parachains, allowing the registration of foreign assets on Asset Hub. For further information on using the XCMP protocol with Chopsticks, refer to the XCM Testing section of the Chopsticks documentation.

After executing the command, the terminal will display output indicating the Polkadot relay chain, the Polkadot Asset Hub, and the Astar parachain are running locally and connected through XCM. You can access them individually via the Polkadot.js Apps interface.

  • Polkadot Relay Chain
  • Polkadot Asset Hub
  • Astar Parachain
"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/register-local-asset/","title":"Register a Local Asset on Asset Hub","text":""},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/register-local-asset/#introduction","title":"Introduction","text":"

As detailed in the Asset Hub Overview page, Asset Hub accommodates two types of assets: local and foreign. Local assets are those that were created in Asset Hub and are identifiable by an integer ID. On the other hand, foreign assets originate from a sibling parachain and are identified by a Multilocation.

This guide will take you through the steps of registering a local asset on the Asset Hub parachain.

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/register-local-asset/#prerequisites","title":"Prerequisites","text":"

Before you begin, ensure you have access to the Polkadot.js Apps interface and a funded wallet with DOT or KSM.

  • For Polkadot Asset Hub, you would need a deposit of 10 DOT and around 0.201 DOT for the metadata
  • For Kusama Asset Hub, the deposit is 0.1 KSM and around 0.000669 KSM for the metadata

You need to ensure that your Asset Hub account balance is slightly higher than the sum of those two deposits, so it can cover both the required deposits and the transaction fees.
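The exact amounts are runtime constants of the assets pallet, so you can read them from the chain you are connected to. A minimal sketch with the Polkadot.js API:

```js
import { ApiPromise, WsProvider } from '@polkadot/api';

async function printDeposits() {
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:8000'), // local Asset Hub fork
  });
  console.log('asset deposit:', api.consts.assets.assetDeposit.toHuman());
  console.log('metadata deposit (base):', api.consts.assets.metadataDepositBase.toHuman());
  console.log('metadata deposit (per byte):', api.consts.assets.metadataDepositPerByte.toHuman());
  await api.disconnect();
}

printDeposits().catch(console.error);
```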

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/register-local-asset/#steps-to-register-a-local-asset","title":"Steps to Register a Local Asset","text":"

To register a local asset on the Asset Hub parachain, follow these steps:

  1. Open the Polkadot.js Apps interface and connect to the Asset Hub parachain using the network selector in the top left corner

    • You may prefer to test local asset registration on TestNet before registering the asset on a MainNet hub. If you still need to set up a local testing environment, review the Environment setup section for instructions. Once the local environment is set up, connect to the Local Node (Chopsticks) available on ws://127.0.0.1:8000
    • For the live network, connect to the Asset Hub parachain. Either Polkadot or Kusama Asset Hub can be selected from the dropdown list, choosing the desired RPC provider
  2. Click on the Network tab on the top navigation bar and select Assets from the dropdown list

  3. Examine the registered asset IDs, which are displayed in the assets column. This step is crucial to ensure that the asset ID you are about to register is unique

  4. Once you have confirmed that the asset ID is unique, click on the Create button on the top right corner of the page

  5. Fill in the required fields in the Create Asset form:

    1. creator account - the account to be used for creating this asset and setting up the initial metadata
    2. asset name - the descriptive name of the asset you are registering
    3. asset symbol - the symbol that will be used to represent the asset
    4. asset decimals - the number of decimal places for this token, with a maximum of 20 allowed through the user interface
    5. minimum balance - the minimum balance an account must hold in this asset, specified in the units and decimals configured above
    6. asset ID - the ID selected for the asset. It must not match an already-existing asset ID
    7. Click on the Next button

  6. Choose the accounts for the roles listed below:

    1. admin account - the account designated for continuous administration of the token
    2. issuer account - the account that will be used for issuing this token
    3. freezer account - the account that will be used for performing token freezing operations
    4. Click on the Create button

  7. Click on the Sign and Submit button to complete the asset registration process
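The same registration can be scripted by batching the calls that back the Create Asset form. The following is a minimal sketch with the Polkadot.js API; creator, admin, issuer, and freezer are hypothetical keypair and address placeholders, and the asset ID and metadata values are illustrative:

```js
import { ApiPromise, WsProvider } from '@polkadot/api';

async function registerLocalAsset(creator, admin, issuer, freezer) {
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:8000'), // local Asset Hub fork
  });
  const id = 1112; // must not collide with an existing asset ID
  const tx = api.tx.utility.batchAll([
    api.tx.assets.create(id, admin, 100_000),             // minBalance in base units
    api.tx.assets.setMetadata(id, 'My Token', 'PPM', 10), // name, symbol, decimals
    api.tx.assets.setTeam(id, issuer, admin, freezer),    // ongoing role accounts
  ]);
  await tx.signAndSend(creator);
}
```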

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/register-local-asset/#verify-asset-registration","title":"Verify Asset Registration","text":"

After completing these steps, the asset will be successfully registered. You can now view your asset listed on the Assets section of the Polkadot.js Apps interface.

Note

Take into consideration that the Assets section's link may differ depending on the network you are using. For the local environment, enter ws://127.0.0.1:8000 into the Custom Endpoint field.

In this way, you have successfully registered a local asset on the Asset Hub parachain.

For an in-depth explanation of Asset Hub and its features, please refer to the Polkadot Wiki page on Asset Hub.

"},{"location":"tutorials/polkadot-sdk/system-chains/asset-hub/register-local-asset/#test-setup-environment","title":"Test Setup Environment","text":"

You can set up a local parachain environment to test the asset registration process before deploying it on the live network. This guide uses Chopsticks to simulate that process. For further information on using Chopsticks, refer to the Chopsticks documentation.

To set up a test environment, execute the following command:

npx @acala-network/chopsticks \\\n--config=https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot-asset-hub.yml\n

Note

The above command will spawn a lazy fork of Polkadot Asset Hub with the latest block data from the network. If you need to test Kusama Asset Hub, replace polkadot-asset-hub.yml with kusama-asset-hub.yml in the command.

An Asset Hub instance is now running locally, and you can proceed with the asset registration process. Note that the local registration process does not differ from the live network process. Once you have a successful TestNet transaction, you can use the same steps to register the asset on MainNet.

"},{"location":"tutorials/polkadot-sdk/testing/","title":"Blockchain Testing Tutorials","text":"

Polkadot offers specialized, purpose-built tools that make it simple to create realistic testing environments, particularly for cross-chain interactions. With them, developers can quickly spin up test networks that accurately simulate real-world scenarios. The tutorials in this section teach you to create controlled testing environments for Polkadot SDK development.

"},{"location":"tutorials/polkadot-sdk/testing/#get-started","title":"Get Started","text":"

Through these tutorials, you'll learn important testing techniques including:

  • Setting up local test environments
  • Spawning ephemeral testing networks
  • Forking live chains for testing
  • Simulating cross-chain interactions
  • Debugging blockchain behavior

Each tutorial provides step-by-step guidance for using these tools effectively in your development workflow.

"},{"location":"tutorials/polkadot-sdk/testing/#in-this-section","title":"In This Section","text":"

:::INSERT_IN_THIS_SECTION:::

"},{"location":"tutorials/polkadot-sdk/testing/fork-live-chains/","title":"Fork a Chain with Chopsticks","text":""},{"location":"tutorials/polkadot-sdk/testing/fork-live-chains/#introduction","title":"Introduction","text":"

Chopsticks is an innovative tool that simplifies the process of forking live Polkadot SDK chains. This guide provides step-by-step instructions to configure and fork chains, enabling developers to:

  • Replay blocks for state analysis
  • Test cross-chain messaging (XCM)
  • Simulate blockchain environments for debugging and experimentation

With support for both configuration files and CLI commands, Chopsticks offers flexibility for diverse development workflows. Whether you're testing locally or exploring complex blockchain scenarios, Chopsticks empowers you to gain deeper insights and accelerate application development.

For additional support and information, please reach out through GitHub Issues.

Note

Chopsticks uses the Smoldot light client, which only supports the native Polkadot SDK API. As a result, Ethereum JSON-RPC calls are not supported, and tools like MetaMask cannot connect to Chopsticks-based forks.

"},{"location":"tutorials/polkadot-sdk/testing/fork-live-chains/#prerequisites","title":"Prerequisites","text":"

To follow this tutorial, ensure you have completed the following:

  • Installed Chopsticks - if you still need to do so, see the Install Chopsticks guide for assistance
  • Reviewed Configure Chopsticks - and understand how forked chains are configured
"},{"location":"tutorials/polkadot-sdk/testing/fork-live-chains/#configuration-file","title":"Configuration File","text":"

To run Chopsticks using a configuration file, utilize the --config flag. You can use a raw GitHub URL, a path to a local file, or simply the chain's name. The following commands all look different, but they load the same polkadot configuration:

GitHub URLLocal File PathChain Name
npx @acala-network/chopsticks \\\n--config=https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot.yml\n
npx @acala-network/chopsticks --config=configs/polkadot.yml\n
npx @acala-network/chopsticks --config=polkadot\n

Regardless of which method you choose from the preceding examples, you'll see an output similar to the following:

npx @acala-network/chopsticks --config=polkadot [18:38:26.155] INFO: Loading config file https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot.yml app: \"chopsticks\" chopsticks::executor TRACE: Calling Metadata_metadata chopsticks::executor TRACE: Completed Metadata_metadata [18:38:28.186] INFO: Polkadot RPC listening on port 8000 app: \"chopsticks\"

Note

If using a file path, make sure you've downloaded the Polkadot configuration file, or have created your own.
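If you write your own file, a minimal configuration might look like the following sketch. The key names shown are common Chopsticks options, but consult the Chopsticks documentation for the full schema:

```yaml
# my-polkadot.yml - minimal sketch of a Chopsticks configuration
endpoint: wss://polkadot-rpc.dwellir.com # live chain to fork
port: 8000                               # local RPC port to listen on
db: ./db.sqlite                          # cache fetched state between runs
```

You would then run it with npx @acala-network/chopsticks --config=my-polkadot.yml.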

"},{"location":"tutorials/polkadot-sdk/testing/fork-live-chains/#create-a-fork","title":"Create a Fork","text":"

Once you've configured Chopsticks, use the following command to fork Polkadot at block 100:

npx @acala-network/chopsticks \\\n--endpoint wss://polkadot-rpc.dwellir.com \\\n--block 100\n

If the fork is successful, you will see output similar to the following:

npx @acala-network/chopsticks \\ --endpoint wss://polkadot-rpc.dwellir.com \\ --block 100 [19:12:21.023] INFO: Polkadot RPC listening on port 8000 app: \"chopsticks\"

Access the running Chopsticks fork using the default address.

ws://localhost:8000\n
"},{"location":"tutorials/polkadot-sdk/testing/fork-live-chains/#interact-with-a-fork","title":"Interact with a Fork","text":"

You can interact with the forked chain using various libraries such as Polkadot.js and its user interface, Polkadot.js Apps.

"},{"location":"tutorials/polkadot-sdk/testing/fork-live-chains/#use-polkadotjs-apps","title":"Use Polkadot.js Apps","text":"

To interact with Chopsticks via the hosted user interface, visit Polkadot.js Apps and follow these steps:

  1. Select the network icon in the top left corner

  2. Scroll to the bottom and select Development

  3. Choose Custom
  4. Enter ws://localhost:8000 in the input field
  5. Select the Switch button

You should now be connected to your local fork and can interact with it as you would with a real chain.

"},{"location":"tutorials/polkadot-sdk/testing/fork-live-chains/#use-polkadotjs-library","title":"Use Polkadot.js Library","text":"

For programmatic interaction, you can use the Polkadot.js library. The following is a basic example:

import { ApiPromise, WsProvider } from '@polkadot/api';\n\nasync function connectToFork() {\n  const wsProvider = new WsProvider('ws://localhost:8000');\n  const api = await ApiPromise.create({ provider: wsProvider });\n  await api.isReady;\n\n  // Now you can use 'api' to interact with your fork\n  console.log(`Connected to chain: ${await api.rpc.system.chain()}`);\n}\n\nconnectToFork();\n
"},{"location":"tutorials/polkadot-sdk/testing/fork-live-chains/#replay-blocks","title":"Replay Blocks","text":"

Chopsticks allows you to replay specific blocks from a chain, which is useful for debugging and analyzing state changes. You can use the parameters in the Configuration section to set up the chain configuration, and then use the run-block subcommand with the following additional options:

  • output-path - path to print output
  • html - generate HTML with storage diff
  • open - open generated HTML

For example, the command to replay block 1000 from Polkadot and save the output to a JSON file would be as follows:

npx @acala-network/chopsticks run-block  \\\n--endpoint wss://polkadot-rpc.dwellir.com  \\\n--output-path ./polkadot-output.json  \\\n--block 1000\n
Output file content
{\n    \"Call\": {\n        \"result\": \"0xba754e7478944d07a1f7e914422b4d973b0855abeb6f81138fdca35beb474b44a10f6fc59a4d90c3b78e38fac100fc6adc6f9e69a07565ec8abce6165bd0d24078cc7bf34f450a2cc7faacc1fa1e244b959f0ed65437f44208876e1e5eefbf8dd34c040642414245b501030100000083e2cc0f00000000d889565422338aa58c0fd8ebac32234149c7ce1f22ac2447a02ef059b58d4430ca96ba18fbf27d06fe92ec86d8b348ef42f6d34435c791b952018d0a82cae40decfe5faf56203d88fdedee7b25f04b63f41f23da88c76c876db5c264dad2f70c\",\n        \"storageDiff\": [\n            [\n                \"0x0b76934f4cc08dee01012d059e1b83eebbd108c4899964f707fdaffb82636065\",\n                \"0x00\"\n            ],\n            [\n                \"0x1cb6f36e027abb2091cfb5110ab5087f0323475657e0890fbdbf66fb24b4649e\",\n                null\n            ],\n            [\n                \"0x1cb6f36e027abb2091cfb5110ab5087f06155b3cd9a8c9e5e9a23fd5dc13a5ed\",\n                \"0x83e2cc0f00000000\"\n            ],\n            [\n                \"0x1cb6f36e027abb2091cfb5110ab5087ffa92de910a7ce2bd58e99729c69727c1\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef702a5c1b19ab7a04f536c519aca4983ac\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef70a98fdbe9ce6c55837576c60c7af3850\",\n                \"0x02000000\"\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef734abf5cb34d6244378cddbf18e849d96\",\n                \"0xc03b86ae010000000000000000000000\"\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef780d41e5e16056765bc8461851072c9d7\",\n                \"0x080000000000000080e36a09000000000200000001000000000000ca9a3b00000000020000\"\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef78a42f33323cb5ced3b44dd825fda9fcc\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef799e7f93fc6a98f0874fd057f111c4d2d\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef7a44704b568d21667356a5a050c118746d366e7fe86e06375e7030000\",\n                \"0xba754e7478944d07a1f7e914422b4d973b0855abeb6f81138fdca35beb474b44\"\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef7a86da5a932684f199539836fcb8c886f\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef7b06c3320c6ac196d813442e270868d63\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef7bdc0bd303e9855813aa8a30d4efc5112\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef7df1daeb8986837f21cc5d17596bb78d15153cb1f00942ff401000000\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef7df1daeb8986837f21cc5d17596bb78d1b4def25cfda6ef3a00000000\",\n                null\n            ],\n            [\n                \"0x26aa394eea5630e07c48ae0c9558cef7ff553b5a9862a516939d82b3d3d8661a\",\n                null\n            ],\n            [\n                \"0x2b06af9719ac64d755623cda8ddd9b94b1c371ded9e9c565e89ba783c4d5f5f9b4def25cfda6ef3a000000006f3d6b177c8acbd8dc9974cdb3cebfac4d31333c30865ff66c35c1bf898df5c5dd2924d3280e7201\",\n                \"0x9b000000\"\n            ],\n            [\"0x3a65787472696e7369635f696e646578\", null],\n            [\n                
\"0x3f1467a096bcd71a5b6a0c8155e208103f2edf3bdf381debe331ab7446addfdc\",\n                \"0x550057381efedcffffffffffffffffff\"\n            ],\n            [\n                \"0x3fba98689ebed1138735e0e7a5a790ab0f41321f75df7ea5127be2db4983c8b2\",\n                \"0x00\"\n            ],\n            [\n                \"0x3fba98689ebed1138735e0e7a5a790ab21a5051453bd3ae7ed269190f4653f3b\",\n                \"0x080000\"\n            ],\n            [\n                \"0x3fba98689ebed1138735e0e7a5a790abb984cfb497221deefcefb70073dcaac1\",\n                \"0x00\"\n            ],\n            [\n                \"0x5f3e4907f716ac89b6347d15ececedca80cc6574281671b299c1727d7ac68cabb4def25cfda6ef3a00000000\",\n                \"0x204e0000183887050ecff59f58658b3df63a16d03a00f92890f1517f48c2f6ccd215e5450e380e00005809fd84af6483070acbb92378e3498dbc02fb47f8e97f006bb83f60d7b2b15d980d000082104c22c383925323bf209d771dec6e1388285abe22c22d50de968467e0bb6ce00b000088ee494d719d68a18aade04903839ea37b6be99552ceceb530674b237afa9166480d0000dc9974cdb3cebfac4d31333c30865ff66c35c1bf898df5c5dd2924d3280e72011c0c0000e240d12c7ad07bb0e7785ee6837095ddeebb7aef84d6ed7ea87da197805b343a0c0d0000\"\n            ],\n            [\n                \"0xae394d879ddf7f99595bc0dd36e355b5bbd108c4899964f707fdaffb82636065\",\n                null\n            ],\n            [\n                \"0xbd2a529379475088d3e29a918cd478721a39ec767bd5269111e6492a1675702a\",\n                \"0x4501407565175cfbb5dca18a71e2433f838a3d946ef532c7bff041685db1a7c13d74252fffe343a960ef84b15187ea0276687d8cb3168aeea5202ea6d651cb646517102b81ff629ee6122430db98f2cadf09db7f298b49589b265dae833900f24baa8fb358d87e12f3e9f7986a9bf920c2fb48ce29886199646d2d12c6472952519463e80b411adef7e422a1595f1c1af4b5dd9b30996fba31fa6a30bd94d2022d6b35c8bc5a8a51161d47980bf4873e01d15afc364f8939a6ce5a09454ab7f2dd53bf4ee59f2c418e85aa6eb764ad218d0097fb656900c3bdd859771858f87bf7f06fc9b6db154e65d50d28e8b2374898f4f519517cd0bedc05814e0f5297dc04beb307b296a93cc14d53afb122769dfd402166568d8912a4dff9c2b1d4b6b34d811b40e5f3763e5f3ab5cd1da60d75c0ff3c12bcef3639f5f792a85709a29b752ffd1233c2ccae88ed3364843e2fa92bdb49021ee36b36c7cdc91b3e9ad32b9216082b6a2728fccd191a5cd43896f7e98460859ca59afbf7c7d93cd48da96866f983f5ff8e9ace6f47ee3e6c6edb074f578efbfb0907673ebca82a7e1805bc5c01cd2fa5a563777feeb84181654b7b738847c8e48d4f575c435ad798aec01631e03cf30fe94016752b5f087f05adf1713910767b7b0e6521013be5370776471191641c282fdfe7b7ccf3b2b100a83085cd3af2b0ad4ab3479448e71fc44ff987ec3a26be48161974b507fb3bc8ad23838f2d0c54c9685de67dc6256e71e739e9802d0e6e3b456f6dca75600bc04a19b3cc1605784f46595bfb10d5e077ce9602ae3820436166aa1905a7686b31a32d6809686462bc9591c0bc82d9e49825e5c68352d76f1ac6e527d8ac02db3213815080afad4c2ecb95b0386e3e9ab13d4f538771dac70d3059bd75a33d0b9b581ec33bb16d0e944355d4718daccb35553012adfcdacb1c5200a2aec3756f6ad5a2beffd30018c439c1b0c4c0f86dbf19d0ad59b1c9efb7fe90906febdb9001af1e7e15101089c1ab648b199a40794d30fe387894db25e614b23e833291a604d07eec2ade461b9b139d51f9b7e88475f16d6d23de6fe7831cc1dbba0da5efb22e3b26cd2732f45a2f9a5d52b6d6eaa38782357d9ae374132d647ef60816d5c98e6959f8858cfa674c8b0d340a8f607a68398a91b3a965585cc91e46d600b1310b8f59c65b7c19e9d14864a83c4ad6fa4ba1f75bba754e7478944d07a1f7e914422b4d973b0855abeb6f81138fdca35beb474b44c7736fc3ab2969878810153aa3c93fc08c99c478ed1bb57f647d3eb02f25cee122c70424643f4b106a7643acaa630a5c4ac39364c3cb14453055170c01b44e8b1ef007c7727494411958932ae8b3e0f80d67eec8e94dd2ff7bbe8c9e51ba7e27d50bd9f52cbaf9742edecb6c8af1aaf3e7c31542f7d946b52e0c37d194b3dd13c3fddd39db0749755c7044b3db1143a027ad4283
45d930afcefc0d03c3a0217147900bdea1f5830d826f7e75ecd1c4e2bc8fd7de3b35c6409acae1b2215e9e4fd7e360d6825dc712cbf9d87ae0fd4b349b624d19254e74331d66a39657da81e73d7b13adc1e5efa8efd65aa32c1a0a0315913166a590ae551c395c476116156cf9d872fd863893edb41774f33438161f9b973e3043f819d087ba18a0f1965e189012496b691f342f7618fa9db74e8089d4486c8bd1993efd30ff119976f5cc0558e29b417115f60fd8897e13b6de1a48fbeee38ed812fd267ae25bffea0caa71c09309899b34235676d5573a8c3cf994a3d7f0a5dbd57ab614c6caf2afa2e1a860c6307d6d9341884f1b16ef22945863335bb4af56e5ef5e239a55dbd449a4d4d3555c8a3ec5bd3260f88cabca88385fe57920d2d2dfc5d70812a8934af5691da5b91206e29df60065a94a0a8178d118f1f7baf768d934337f570f5ec68427506391f51ab4802c666cc1749a84b5773b948fcbe460534ed0e8d48a15c149d27d67deb8ea637c4cc28240ee829c386366a0b1d6a275763100da95374e46528a0adefd4510c38c77871e66aeda6b6bfd629d32af9b2fad36d392a1de23a683b7afd13d1e3d45dad97c740106a71ee308d8d0f94f6771164158c6cd3715e72ccfbc49a9cc49f21ead8a3c5795d64e95c15348c6bf8571478650192e52e96dd58f95ec2c0fb4f2ccc05b0ab749197db8d6d1c6de07d6e8cb2620d5c308881d1059b50ffef3947c273eaed7e56c73848e0809c4bd93619edd9fd08c8c5c88d5f230a55d2c6a354e5dd94440e7b5bf99326cf4a112fe843e7efdea56e97af845761d98f40ed2447bd04a424976fcf0fe0a0c72b97619f85cf431fe4c3aa6b3a4f61df8bc1179c11e77783bfedb7d374bd1668d0969333cb518bd20add8329462f2c9a9f04d150d60413fdd27271586405fd85048481fc2ae25b6826cb2c947e4231dc7b9a0d02a9a03f88460bced3fef5d78f732684bd218a1954a4acfc237d79ccf397913ab6864cd8a07e275b82a8a72520624738368d1c5f7e0eaa2b445cf6159f2081d3483618f7fc7b16ec4e6e4d67ab5541bcda0ca1af40efd77ef8653e223191448631a8108c5e50e340cd405767ecf932c1015aa8856b834143dc81fa0e8b9d1d8c32278fca390f2ff08181df0b74e2d13c9b7b1d85543416a0dae3a77530b9cd1366213fcf3cd12a9cd3ae0a006d6b29b5ffc5cdc1ab24343e2ab882abfd719892fca5bf2134731332c5d3bef6c6e4013d84a853cb03d972146b655f0f8541bcd36c3c0c8a775bb606edfe50d07a5047fd0fe01eb125e83673930bc89e91609fd6dfe97132679374d3de4a0b3db8d3f76f31bed53e247da591401d508d65f9ee01d3511ee70e3644f3ab5d333ca7dbf737fe75217b4582d50d98b5d59098ea11627b7ed3e3e6ee3012eadd326cf74ec77192e98619427eb0591e949bf314db0fb932ed8be58258fb4f08e0ccd2cd18b997fb5cf50c90d5df66a9f3bb203bd22061956128b800e0157528d45c7f7208c65d0592ad846a711fa3c5601d81bb318a45cc1313b122d4361a7d7a954645b04667ff3f81d3366109772a41f66ece09eb93130abe04f2a51bb30e767dd37ec6ee6a342a4969b8b342f841193f4f6a9f0fac4611bc31b6cab1d25262feb31db0b8889b6f8d78be23f033994f2d3e18e00f3b0218101e1a7082782aa3680efc8502e1536c30c8c336b06ae936e2bcf9bbfb20dd514ed2867c03d4f44954867c97db35677d30760f37622b85089cc5d182a89e29ab0c6b9ef18138b16ab91d59c2312884172afa4874e6989172014168d3ed8db3d9522d6cbd631d581d166787c93209bec845d112e0cbd825f6df8b64363411270921837cfb2f9e7f2e74cdb9cd0d2b02058e5efd9583e2651239654b887ea36ce9537c392fc5dfca8c5a0facbe95b87dfc4232f229bd12e67937d32b7ffae2e837687d2d292c08ff6194a2256b17254748857c7e3c871c3fff380115e6f7faf435a430edf9f8a589f6711720cfc5cec6c8d0d94886a39bb9ac6c50b2e8ef6cf860415192ca4c1c3aaa97d36394021a62164d5a63975bcd84b8e6d74f361c17101e3808b4d8c31d1ee1a5cf3a2feda1ca2c0fd5a50edc9d95e09fb5158c9f9b0eb5e2c90a47deb0459cea593201ae7597e2e9245aa5848680f546256f3\"\n            ],\n            [\n                \"0xd57bce545fb382c34570e5dfbf338f5e326d21bc67a4b34023d577585d72bfd7\",\n                null\n            ],\n            [\n                \"0xd57bce545fb382c34570e5dfbf338f5ea36180b5cfb9f6541f8849df92a6ec93\",\n                \"0x00\"\n            ],\n            [\n                \"0xd57bce545fb382c34570e5dfbf338f5ebddf84c5eb23e6f53af725880d8ffe90\",\n                null\n            ],\n         
   [\n                \"0xd5c41b52a371aa36c9254ce34324f2a53b996bb988ea8ee15bad3ffd2f68dbda\",\n                \"0x00\"\n            ],\n            [\n                \"0xf0c365c3cf59d671eb72da0e7a4113c49f1f0515f462cdcf84e0f1d6045dfcbb\",\n                \"0x50defc5172010000\"\n            ],\n            [\n                \"0xf0c365c3cf59d671eb72da0e7a4113c4bbd108c4899964f707fdaffb82636065\",\n                null\n            ],\n            [\n                \"0xf68f425cf5645aacb2ae59b51baed90420d49a14a763e1cbc887acd097f92014\",\n                \"0x9501800300008203000082030000840300008503000086030000870300008703000089030000890300008b0300008b0300008d0300008d0300008f0300008f0300009103000092030000920300009403000094030000960300009603000098030000990300009a0300009b0300009b0300009d0300009d0300009f0300009f030000a1030000a2030000a3030000a4030000a5030000a6030000a6030000a8030000a8030000aa030000ab030000ac030000ad030000ae030000af030000b0030000b1030000b1030000b3030000b3030000b5030000b6030000b7030000b8030000b9030000ba030000ba030000bc030000bc030000be030000be030000c0030000c1030000c2030000c2030000c4030000c5030000c5030000c7030000c7030000c9030000c9030000cb030000cc030000cd030000ce030000cf030000d0030000d0030000d2030000d2030000d4030000d4030000d6030000d7030000d8030000d9030000da030000db030000db030000dd030000dd030000df030000e0030000e1030000e2030000e3030000e4030000e4030000\"\n            ],\n            [\n                \"0xf68f425cf5645aacb2ae59b51baed9049b58374218f48eaf5bc23b7b3e7cf08a\",\n                \"0xb3030000\"\n            ],\n            [\n                \"0xf68f425cf5645aacb2ae59b51baed904b97380ce5f4e70fbf9d6b5866eb59527\",\n                \"0x9501800300008203000082030000840300008503000086030000870300008703000089030000890300008b0300008b0300008d0300008d0300008f0300008f0300009103000092030000920300009403000094030000960300009603000098030000990300009a0300009b0300009b0300009d0300009d0300009f0300009f030000a1030000a2030000a3030000a4030000a5030000a6030000a6030000a8030000a8030000aa030000ab030000ac030000ad030000ae030000af030000b0030000b1030000b1030000b3030000b3030000b5030000b6030000b7030000b8030000b9030000ba030000ba030000bc030000bc030000be030000be030000c0030000c1030000c2030000c2030000c4030000c5030000c5030000c7030000c7030000c9030000c9030000cb030000cc030000cd030000ce030000cf030000d0030000d0030000d2030000d2030000d4030000d4030000d6030000d7030000d8030000d9030000da030000db030000db030000dd030000dd030000df030000e0030000e1030000e2030000e3030000e4030000e4030000\"\n            ]\n        ],\n        \"offchainStorageDiff\": [],\n        \"runtimeLogs\": []\n    }\n}\n
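To inspect the same storage diff in a browser instead of a JSON file, you can combine the html and open options listed above:

```bash
npx @acala-network/chopsticks run-block \
  --endpoint wss://polkadot-rpc.dwellir.com \
  --block 1000 \
  --html \
  --open
```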
"},{"location":"tutorials/polkadot-sdk/testing/fork-live-chains/#xcm-testing","title":"XCM Testing","text":"

To test XCM (Cross-Consensus Messaging) messages between networks, you can fork multiple parachains and a relay chain locally using Chopsticks' xcm subcommand, which takes the following options:

  • relaychain - relay chain config file
  • parachain - parachain config file

For example, to fork Moonbeam, Astar, and Polkadot enabling XCM between them, you can use the following command:

npx @acala-network/chopsticks xcm \\\n--r polkadot \\\n--p moonbeam \\\n--p astar\n

After running it, you should see output similar to the following:

npx @acala-network/chopsticks xcm \\ --r polkadot \\ --p moonbeam \\ --p astar [13:46:07.901] INFO: Loading config file https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/moonbeam.yml app: \"chopsticks\" [13:46:12.631] INFO: Moonbeam RPC listening on port 8000 app: \"chopsticks\" [13:46:12.632] INFO: Loading config file https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/astar.yml app: \"chopsticks\" chopsticks::executor TRACE: Calling Metadata_metadata chopsticks::executor TRACE: Completed Metadata_metadata [13:46:23.669] INFO: Astar RPC listening on port 8001 app: \"chopsticks\" [13:46:25.144] INFO (xcm): Connected parachains [2004,2006] app: \"chopsticks\" [13:46:25.144] INFO: Loading config file https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot.yml app: \"chopsticks\" chopsticks::executor TRACE: Calling Metadata_metadata chopsticks::executor TRACE: Completed Metadata_metadata [13:46:53.320] INFO: Polkadot RPC listening on port 8002 app: \"chopsticks\" [13:46:54.038] INFO (xcm): Connected relaychain 'Polkadot' with parachain 'Moonbeam' app: \"chopsticks\" [13:46:55.028] INFO (xcm): Connected relaychain 'Polkadot' with parachain 'Astar' app: \"chopsticks\"

Now you can interact with your forked chains using the ports specified in the output.
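For example, you can verify all three forks programmatically by connecting to each port from the output above (Moonbeam on 8000, Astar on 8001, Polkadot on 8002):

```js
import { ApiPromise, WsProvider } from '@polkadot/api';

async function main() {
  for (const port of [8000, 8001, 8002]) {
    const api = await ApiPromise.create({
      provider: new WsProvider(`ws://localhost:${port}`),
    });
    console.log(`Port ${port}: ${await api.rpc.system.chain()}`);
    await api.disconnect();
  }
}

main().catch(console.error);
```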

"},{"location":"tutorials/polkadot-sdk/testing/spawn-basic-chain/","title":"Spawn a Basic Chain with Zombienet","text":""},{"location":"tutorials/polkadot-sdk/testing/spawn-basic-chain/#introduction","title":"Introduction","text":"

Zombienet simplifies blockchain development by enabling developers to create temporary, customizable networks for testing and validation. These ephemeral chains are ideal for experimenting with configurations, debugging applications, and validating functionality in a controlled environment.

In this guide, you'll learn how to define a basic network configuration file, spawn a blockchain network using Zombienet's CLI, and interact with nodes and monitor network activity using tools like Polkadot.js Apps and Prometheus.

By the end of this tutorial, you'll be equipped to deploy and test your own blockchain networks, paving the way for more advanced setups and use cases.

"},{"location":"tutorials/polkadot-sdk/testing/spawn-basic-chain/#prerequisites","title":"Prerequisites","text":"

To successfully complete this tutorial, you must ensure you've first:

  • Installed Zombienet
  • Reviewed the information in Configure Zombienet and understand how to customize a spawned network
"},{"location":"tutorials/polkadot-sdk/testing/spawn-basic-chain/#define-the-network","title":"Define the Network","text":"

Zombienet uses a configuration file to define the ephemeral network that will be spawned. Follow these steps to create and define the configuration file:

  1. Create a file named spawn-a-basic-network.toml
    touch spawn-a-basic-network.toml\n
  2. Add the following code to the file you just created: spawn-a-basic-network.toml
    [settings]\ntimeout = 120\n\n[relaychain]\n\n[[relaychain.nodes]]\nname = \"alice\"\nvalidator = true\n\n[[relaychain.nodes]]\nname = \"bob\"\nvalidator = true\n\n[[parachains]]\nid = 100\n\n[parachains.collator]\nname = \"collator01\"\n

This configuration file defines a network with the following chains:

  • relaychain - with two nodes named alice and bob
  • parachain - with a collator named collator01

The settings section also defines a timeout of 120 seconds for the network to be considered ready.

"},{"location":"tutorials/polkadot-sdk/testing/spawn-basic-chain/#spawn-the-network","title":"Spawn the Network","text":"

To spawn the network, run the following command:

zombienet -p native spawn spawn-a-basic-network.toml\n

This command will spawn the network defined in the spawn-a-basic-network.toml configuration file. The -p native flag specifies that the network will be spawned using the native provider.

If successful, you will see the following output:

zombienet -p native spawn spawn-a-basic-network.toml Network launched \ud83d\ude80\ud83d\ude80 Namespace zombie-75a01b93c92d571f6198a67bcb380fcd Provider native Node Information Name alice Direct Link https://polkadot.js.org/apps/?rpc=ws://127.0.0.1:55308#explorer Prometheus Link http://127.0.0.1:55310/metrics Log Cmd tail -f /tmp/zombie-794af21178672e1ff32c612c3c7408dc_-2397036-6717MXDxcS55/alice.log Node Information Name bob Direct Link https://polkadot.js.org/apps/?rpc=ws://127.0.0.1:55312#explorer Prometheus Link http://127.0.0.1:50634/metrics Log Cmd tail -f /tmp/zombie-794af21178672e1ff32c612c3c7408dc_-2397036-6717MXDxcS55/bob.log Node Information Name collator01 Direct Link https://polkadot.js.org/apps/?rpc=ws://127.0.0.1:55316#explorer Prometheus Link http://127.0.0.1:55318/metrics Log Cmd tail -f /tmp/zombie-794af21178672e1ff32c612c3c7408dc_-2397036-6717MXDxcS55/collator01.log Parachain ID 100 ChainSpec Path /tmp/zombie-794af21178672e1ff32c612c3c7408dc_-2397036-6717MXDxcS55/100-rococo-local.json

Note

If the IPs and ports aren't explicitly defined in the configuration file, they may change each time the network is started, causing the links provided in the output to differ from the example.

"},{"location":"tutorials/polkadot-sdk/testing/spawn-basic-chain/#interact-with-the-spawned-network","title":"Interact with the Spawned Network","text":"

After the network is launched, you can interact with it using Polkadot.js Apps. To do so, open your browser and use the links listed as Direct Link in the output.

"},{"location":"tutorials/polkadot-sdk/testing/spawn-basic-chain/#connect-to-the-nodes","title":"Connect to the Nodes","text":"

Use port 55308 to interact with the same alice node used for this tutorial. Ports can change from spawn to spawn, so be sure to locate the link in the output when spawning your own node to ensure you are accessing the correct port.

If you want to interact with the nodes more programmatically, you can also use the Polkadot.js API. For example, the following code snippet shows how to connect to the alice node using the Polkadot.js API and log some information about the chain and node:

import { ApiPromise, WsProvider } from '@polkadot/api';\n\nasync function main() {\n  const wsProvider = new WsProvider('ws://127.0.0.1:55308');\n  const api = await ApiPromise.create({ provider: wsProvider });\n\n  // Retrieve the chain & node information via rpc calls\n  const [chain, nodeName, nodeVersion] = await Promise.all([\n    api.rpc.system.chain(),\n    api.rpc.system.name(),\n    api.rpc.system.version(),\n  ]);\n\n  console.log(\n    `You are connected to chain ${chain} using ${nodeName} v${nodeVersion}`,\n  );\n}\n\nmain()\n  .catch(console.error)\n  .finally(() => process.exit());\n

Both methods allow you to interact easily with the network and its nodes.

"},{"location":"tutorials/polkadot-sdk/testing/spawn-basic-chain/#check-metrics","title":"Check Metrics","text":"

You can also check the metrics of the nodes by accessing the links provided in the output as Prometheus Link. Prometheus is a monitoring and alerting toolkit that collects metrics from the nodes. Opening one of these links, for example Bob's, displays that node's metrics in a web interface.

"},{"location":"tutorials/polkadot-sdk/testing/spawn-basic-chain/#check-logs","title":"Check Logs","text":"

To view individual node logs, locate the Log Cmd command in Zombienet's startup output. For example, to see what the alice node is doing, find the log command that references alice.log in its file path. Note that Zombienet will show you the correct path for your instance when it starts up, so use that path rather than copying from the example below:

tail -f  /tmp/zombie-794af21178672e1ff32c612c3c7408dc_-2397036-6717MXDxcS55/alice.log\n

After running this command, you will see the logs of the alice node in real-time, which can be useful for debugging purposes. The logs of the bob and collator01 nodes can be checked similarly.

"}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml new file mode 100644 index 00000000..db4640f5 --- /dev/null +++ b/sitemap.xml @@ -0,0 +1,688 @@ + + + + https://docs.polkadot.com/ + 2024-12-11 + daily + + + https://docs.polkadot.com/LICENSE/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/development-pathways/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/networks/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/interoperability/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/interoperability/intro-to-xcm/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/interoperability/send-messages/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/interoperability/test-and-debug/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/interoperability/xcm-channels/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/interoperability/xcm-config/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/customize-parachain/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/customize-parachain/add-existing-pallets/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/customize-parachain/add-smart-contract-functionality/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/customize-parachain/benchmarking/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/customize-parachain/make-custom-pallet/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/customize-parachain/overview/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/customize-parachain/pallet-testing/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/deployment/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/deployment/build-deterministic-runtime/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/deployment/generate-chain-specs/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/deployment/obtain-coretime/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/get-started/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/get-started/build-custom-parachains/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/get-started/deploy-parachain-to-polkadot/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/get-started/install-polkadot-sdk/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/get-started/intro-polkadot-sdk/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/maintenance/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/maintenance/runtime-metrics-monitoring/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/maintenance/runtime-upgrades/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/maintenance/storage-migrations/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/testing/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/testing/runtime/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/parachains/testing/setup/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/smart-contracts/ + 2024-12-11 + daily + + + 
https://docs.polkadot.com/develop/smart-contracts/overview/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/smart-contracts/wasm-ink/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/smart-contracts/evm/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/smart-contracts/evm/native-evm-contracts/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/smart-contracts/evm/parachain-contracts/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/api-libraries/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/api-libraries/papi/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/api-libraries/polkadot-js-api/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/api-libraries/py-substrate-interface/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/api-libraries/sidecar/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/integrations/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/integrations/indexers/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/integrations/oracles/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/integrations/wallets/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/interoperability/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/interoperability/xcm-tools/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/interoperability/asset-transfer-api/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/interoperability/asset-transfer-api/overview/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/interoperability/asset-transfer-api/reference/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/parachains/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/parachains/fork-chains/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/parachains/fork-chains/chopsticks/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/parachains/fork-chains/chopsticks/get-started/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/parachains/spawn-chains/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/parachains/spawn-chains/zombienet/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/parachains/spawn-chains/zombienet/get-started/ + 2024-12-11 + daily + + + https://docs.polkadot.com/develop/toolkit/parachains/spawn-chains/zombienet/write-tests/ + 2024-12-11 + daily + + + https://docs.polkadot.com/images/ + 2024-12-11 + daily + + + https://docs.polkadot.com/infrastructure/ + 2024-12-11 + daily + + + https://docs.polkadot.com/infrastructure/running-a-node/ + 2024-12-11 + daily + + + https://docs.polkadot.com/infrastructure/running-a-node/setup-bootnode/ + 2024-12-11 + daily + + + https://docs.polkadot.com/infrastructure/running-a-node/setup-full-node/ + 2024-12-11 + daily + + + https://docs.polkadot.com/infrastructure/running-a-validator/ + 2024-12-11 + daily + + + https://docs.polkadot.com/infrastructure/running-a-validator/requirements/ + 2024-12-11 + daily + + + https://docs.polkadot.com/infrastructure/running-a-validator/onboarding-and-offboarding/ + 2024-12-11 + daily + + + https://docs.polkadot.com/infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/ + 2024-12-11 + daily + 
+ + https://docs.polkadot.com/infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/ + 2024-12-11 + daily + + + https://docs.polkadot.com/infrastructure/running-a-validator/operational-tasks/ + 2024-12-11 + daily + + + https://docs.polkadot.com/infrastructure/running-a-validator/operational-tasks/general-management/ + 2024-12-11 + daily + + + https://docs.polkadot.com/infrastructure/running-a-validator/operational-tasks/pause-validating/ + 2024-12-11 + daily + + + https://docs.polkadot.com/infrastructure/running-a-validator/operational-tasks/upgrade-your-node/ + 2024-12-11 + daily + + + https://docs.polkadot.com/infrastructure/staking-mechanics/ + 2024-12-11 + daily + + + https://docs.polkadot.com/infrastructure/staking-mechanics/offenses-and-slashes/ + 2024-12-11 + daily + + + https://docs.polkadot.com/infrastructure/staking-mechanics/rewards-payout/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/glossary/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/architecture/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/architecture/parachains/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/architecture/parachains/consensus/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/architecture/parachains/overview/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/architecture/polkadot-chain/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/architecture/polkadot-chain/agile-coretime/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/architecture/polkadot-chain/overview/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/architecture/polkadot-chain/pos-consensus/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/architecture/system-chains/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/architecture/system-chains/asset-hub/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/architecture/system-chains/bridge-hub/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/architecture/system-chains/coretime/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/architecture/system-chains/overview/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/basics/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/basics/accounts/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/basics/chain-data/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/basics/cryptography/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/basics/data-encoding/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/basics/interoperability/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/basics/networks/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/basics/randomness/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/basics/blocks-transactions-fees/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/basics/blocks-transactions-fees/blocks/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/basics/blocks-transactions-fees/fees/ + 2024-12-11 + daily + + + 
https://docs.polkadot.com/polkadot-protocol/basics/blocks-transactions-fees/transactions/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/onchain-governance/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/onchain-governance/origins-tracks/ + 2024-12-11 + daily + + + https://docs.polkadot.com/polkadot-protocol/onchain-governance/overview/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/interoperability/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/interoperability/xcm-channels/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/interoperability/xcm-channels/para-to-para/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/interoperability/xcm-channels/para-to-system/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/interoperability/xcm-transfers/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/connect-to-relay-chain/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/connect-to-relay-chain/acquire-a-testnet-slot/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-parachain/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-relay-chain/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/local-chain/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/local-chain/connect-multiple-nodes/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/local-chain/launch-a-local-solochain/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/local-chain/spin-your-nodes/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/local-chain/upgrade-a-running-network/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/system-chains/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/system-chains/asset-hub/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-asset/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/system-chains/asset-hub/register-local-asset/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/testing/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/testing/fork-live-chains/ + 2024-12-11 + daily + + + https://docs.polkadot.com/tutorials/polkadot-sdk/testing/spawn-basic-chain/ + 2024-12-11 + daily + + \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz new file mode 100644 index 00000000..199a57d9 Binary files /dev/null and b/sitemap.xml.gz differ diff --git a/tutorials/index.html b/tutorials/index.html new file mode 100644 index 00000000..11555427 --- /dev/null +++ b/tutorials/index.html @@ -0,0 +1,4917 @@ + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + Tutorials | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + + + +
+
+
+ + + + + + +
+
+
+ + + + + +
+
+
+ + + + + +
+
+
+ + +
+ +
+ + + + + + + + + + + +

Tutorials

+

Welcome to the Polkadot Tutorials hub! Whether you’re building parachains, integrating system chains, or developing decentralized applications, these step-by-step guides are designed to help you achieve your goals efficiently and effectively. Each guide links to relevant sections of the Polkadot documentation for developers who want to explore specific topics in greater depth.

+

Not sure where to start? Check out the highlighted tutorials below!

+

Get Started

+ + +

In This Section

+

+

+

+ + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/tutorials/interoperability/index.html b/tutorials/interoperability/index.html new file mode 100644 index 00000000..3c244f86 --- /dev/null +++ b/tutorials/interoperability/index.html @@ -0,0 +1,4956 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Interoperability Tutorials | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + + + +
+
+
+ + + + + + +
+
+
+ + + + + +
+
+
+ + + + + +
+
+
+ + +
+ +
+ + + + + + + + + + + +

Cross-Chain Interoperability Tutorials

+

This section introduces you to the core interoperability solutions within the Polkadot ecosystem through practical, hands-on tutorials. These resources are designed to help you master cross-chain communication techniques, from setting up messaging channels between parachains to leveraging the advanced features of Polkadot's XCM protocol.

+

By following these guides, you’ll gain the skills needed to implement seamless integration and interaction across diverse blockchains, unlocking the full potential of Polkadot's interconnected network.

+

XCM (Cross-Consensus Messaging)

+

XCM provides a secure and trustless framework that facilitates communication between parachains, relay chains, and external blockchains, enabling asset transfers, data sharing, and complex cross-chain workflows.

+

For Parachain Integrators

+

Learn to establish and use cross-chain communication channels:

+ +

In This Section

+

+

+

+

Additional Resources

+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/tutorials/interoperability/xcm-channels/index.html b/tutorials/interoperability/xcm-channels/index.html new file mode 100644 index 00000000..d7e9bbb0 --- /dev/null +++ b/tutorials/interoperability/xcm-channels/index.html @@ -0,0 +1,4954 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Tutorials for Managing XCM Channels | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + + + +
+
+
+ + + + + + +
+
+
+ + + + + +
+
+ +
+
+ + +
+ +
+ + + + + + + + + + + +

Tutorials for Managing XCM Channels

+

Establishing XCM channels is essential to unlocking Polkadot's native interoperability. Before bridging assets or sending cross-chain contract calls, the necessary XCM channels must be established.

+

These tutorials guide you through the process of setting up Horizontal Relay-routed Message Passing (HRMP) channels for cross-chain messaging. Learn how to configure unidirectional channels between parachains and the simplified single-message process for bidirectional channels with system parachains like Asset Hub.

+

Understand the Process of Opening Channels

+

Each parachain starts with two default unidirectional XCM channels: an upward channel for sending messages to the relay chain, and a downward channel for receiving messages. These channels are implicitly available.

+

To enable communication between parachains, explicit HRMP channels must be established by registering them on the relay chain. This process requires a deposit to cover the costs associated with storing message queues on the relay chain. The deposit amount depends on the specific relay chain’s parameters.
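If you want to check the exact deposit a given relay chain requires before opening a channel, you can read its active host configuration from a running node. The following is a minimal sketch using the polkadot-js API (@polkadot/api); the endpoint is a placeholder for your relay chain node, and the field names are those exposed by that library:

// Sketch: read HRMP channel parameters from a relay chain's active
// host configuration.
import { ApiPromise, WsProvider } from '@polkadot/api';

const api = await ApiPromise.create({
  provider: new WsProvider('ws://localhost:9944'), // placeholder endpoint
});

const config = await api.query.configuration.activeConfig();
console.log('Sender deposit:', config.hrmpSenderDeposit.toString());
console.log('Recipient deposit:', config.hrmpRecipientDeposit.toString());
console.log('Max capacity:', config.hrmpChannelMaxCapacity.toString());
console.log('Max message size:', config.hrmpChannelMaxMessageSize.toString());

await api.disconnect();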

+

In This Section

+

+

+

+

Additional Resources

+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/tutorials/interoperability/xcm-channels/para-to-para/index.html b/tutorials/interoperability/xcm-channels/para-to-para/index.html new file mode 100644 index 00000000..d9cf8bb3 --- /dev/null +++ b/tutorials/interoperability/xcm-channels/para-to-para/index.html @@ -0,0 +1,5209 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Opening HRMP Channels Between Parachains | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + + + + +
+ +
+ + + + +

Opening HRMP Channels Between Parachains

+

Introduction

+

To establish communication channels between parachains on the Polkadot network using the Horizontal Relay-routed Message Passing (HRMP) protocol, the following steps are required:

+
    +
  1. Channel request - the parachain that wants to open an HRMP channel must make a request to the parachain it wishes to have an open channel with
  2. +
  3. Channel acceptance - the other parachain must then accept this request to complete the channel establishment
  4. +
+

This process results in a unidirectional HRMP channel, where messages can flow in only one direction between the two parachains.

+

An additional HRMP channel must be established in the opposite direction to enable bidirectional communication. This requires repeating the request and acceptance process but with the parachains reversing their roles.

+

Once both unidirectional channels are established, the parachains can send messages back and forth freely through the bidirectional HRMP communication channel.

+

Prerequisites

+

Before proceeding, ensure you meet the following requirements:

+
    +
  • Blockchain network with a relay chain and at least two connected parachains
  • +
  • Wallet with sufficient funds to execute transactions on the participant chains
  • +
+

Procedure for Initiating HRMP Channel Setup

+

This example will demonstrate how to open a channel between parachain 2500 and parachain 2600, using Rococo Local as the relay chain.

+

Fund Sender Sovereign Account

+ +

The sovereign account for parachain 2500 on the relay chain must be funded so it can take care of any XCM transact fees.

+

Use Polkadot.js Apps UI to connect to the relay chain and transfer funds from your account to the parachain 2500 sovereign account. +

+
+Calculating Parachain Sovereign Account +

To generate the sovereign account address for a parachain, you'll need to follow these steps:

+
    +
  1. +

    Determine if the parachain is an "up/down" chain (parent or child) or a "sibling" chain:

    +
      +
    • +

      Up/down chains use the prefix 0x70617261 (which decodes to b"para")

      +
    • +
    • +

      Sibling chains use the prefix 0x7369626c (which decodes to b"sibl")

      +
    • +
    +
  2. +
  3. +

    Calculate the u32 SCALE-encoded value of the parachain ID:

    +
      +
    • Parachain 2500 would be encoded as c4090000
    • +
    +
  4. +
  5. +

    Combine the prefix and parachain ID encoding to form the full sovereign account address:

    +

    The sovereign account of parachain 2500 on the relay chain will be 0x70617261c4090000000000000000000000000000000000000000000000000000, and the SS58 format of this address is 5Ec4AhPSY2GEE4VoHUVheqv5wwq2C1HMKa7c9fVJ1WKivX1Y

    +
  6. +
+

To perform this conversion, you can also use the "Para ID" to Address section in Substrate Utilities.
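If you prefer to derive the address programmatically, the following minimal JavaScript sketch reproduces the hex form of the calculation above (the SS58 encoding step is omitted; use Substrate Utilities or an SS58-capable library for that):

// Sketch: derive a parachain's sovereign account (hex form) on the
// relay chain. Pass 'sibl' instead of 'para' for sibling parachains.
function sovereignAccountHex(paraId, prefix = 'para') {
  const prefixHex = Buffer.from(prefix, 'ascii').toString('hex'); // 'para' -> 70617261
  const idBytes = Buffer.alloc(4);
  idBytes.writeUInt32LE(paraId); // u32 SCALE encoding is little-endian
  // Pad with trailing zeros to a 32-byte AccountId
  return '0x' + (prefixHex + idBytes.toString('hex')).padEnd(64, '0');
}

console.log(sovereignAccountHex(2500));
// 0x70617261c4090000000000000000000000000000000000000000000000000000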

+
+

Create Channel Opening Extrinsic

+
    +
  1. +

    In Polkadot.js Apps, connect to the relay chain, navigate to the Developer dropdown and select the Extrinsics option

    +

    +
  2. +
  3. +

    Construct an hrmpInitOpenChannel extrinsic call

    +
      +
    1. Select the hrmp pallet
    2. +
    3. Choose the hrmpInitOpenChannel extrinsic
    4. +
    5. Fill in the parameters
        +
      • recipient - parachain ID of the target chain (in this case, 2600)
      • +
      • proposedMaxCapacity - max number of messages that can be pending in the channel at once
      • +
      • proposedMaxMessageSize - max message size that could be put into the channel
      • +
      +
    6. +
    7. Copy the encoded call data. The encoded call data for opening a channel with parachain 2600 is 0x3c00280a00000800000000001000 (see the decoding sketch after this list).
    8. +
    +
  4. +
+
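If you want to sanity-check the encoded call data before sending it, you can decode it by hand. The sketch below assumes the layout implied by the encoding above: one byte for the pallet index (0x3c for hrmp on this runtime), one byte for the call index, then the SCALE-encoded u32 parameters in little-endian order:

// Sketch: decode the hrmpInitOpenChannel call data by hand.
const data = '3c00280a00000800000000001000';
const u32le = (hex) => parseInt(hex.match(/../g).reverse().join(''), 16);

console.log('pallet index:', parseInt(data.slice(0, 2), 16)); // 60 (hrmp)
console.log('call index:', parseInt(data.slice(2, 4), 16)); // 0 (hrmpInitOpenChannel)
console.log('recipient:', u32le(data.slice(4, 12))); // 2600
console.log('proposedMaxCapacity:', u32le(data.slice(12, 20))); // 8
console.log('proposedMaxMessageSize:', u32le(data.slice(20, 28))); // 4096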

Crafting and Submitting the XCM Message from the Sender

+

To initiate the HRMP channel opening process, you need to create an XCM message that includes the encoded hrmpInitOpenChannel call data from the previous step. This message will be sent from your parachain to the relay chain.

+

This example uses the sudo pallet to dispatch the extrinsic. Verify the XCM configuration of the parachain you're working with and ensure you're using an origin with the necessary privileges to execute the polkadotXcm.send extrinsic.

+

The XCM message should contain the following instructions:

+
    +
  • WithdrawAsset - withdraws assets from the origin's ownership and places them in the Holding Register
  • +
  • BuyExecution - pays for the execution of the current message using the assets in the Holding Register
  • +
  • Transact - execute the encoded transaction call
  • +
  • RefundSurplus - increases the Refunded Weight Register to the value of the Surplus Weight Register, attempting to reclaim any excess fees paid via BuyExecution
  • +
  • DepositAsset - subtracts assets from the Holding Register and deposits equivalent on-chain assets under the specified beneficiary's ownership
  • +
+
+

Note

+

For more detailed information about XCM's functionality, complexities, and instruction set, refer to the xcm-format documentation.

+
+

In essence, this process withdraws funds from the parachain's sovereign account into the XCVM Holding Register, uses them to purchase execution time for the XCM Transact instruction, executes Transact, refunds any unused execution time, and deposits any remaining funds into a specified account.
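As an illustration of that flow, below is a rough sketch of the V3 message you would submit via polkadotXcm.send, with dest set to the relay chain ({ parents: 1, interior: 'Here' }). The amounts, weights, beneficiary, and origin kind are placeholders that depend on your parachain's XCM configuration, not tested values:

// Rough sketch of the XCM program sent via polkadotXcm.send.
const message = {
  V3: [
    // Move funds from the sovereign account into the Holding Register
    {
      WithdrawAsset: [
        {
          id: { Concrete: { parents: 0, interior: 'Here' } },
          fun: { Fungible: 1_000_000_000_000n }, // placeholder amount
        },
      ],
    },
    // Pay for execution with the withdrawn funds
    {
      BuyExecution: {
        fees: {
          id: { Concrete: { parents: 0, interior: 'Here' } },
          fun: { Fungible: 1_000_000_000_000n }, // placeholder amount
        },
        weightLimit: 'Unlimited',
      },
    },
    // Execute the encoded hrmpInitOpenChannel call on the relay chain
    {
      Transact: {
        originKind: 'Native', // depends on your parachain's XCM config
        requireWeightAtMost: { refTime: 1_000_000_000n, proofSize: 65_536n }, // placeholder
        call: { encoded: '0x3c00280a00000800000000001000' },
      },
    },
    // Reclaim any overpaid execution fees
    'RefundSurplus',
    // Return leftover funds to the chosen beneficiary
    {
      DepositAsset: {
        assets: { Wild: { AllCounted: 1 } },
        beneficiary: {
          parents: 0,
          interior: { X1: { AccountId32: { id: 'INSERT_BENEFICIARY_KEY' } } },
        },
      },
    },
  ],
};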

+

To send the XCM message to the relay chain, connect to parachain 2500 in Polkadot.js Apps. Fill in the required parameters as shown in the image below, ensuring that you:

+
    +
  1. Replace the call field with your encoded hrmpInitOpenChannel call data from the previous step
  2. +
  3. Use the correct beneficiary information
  4. +
  5. Click the Submit Transaction button to dispatch the XCM message to the relay chain
  6. +
+

+
+

Note

+

The exact process and parameters for submitting this XCM message may vary depending on your specific parachain and relay chain configurations. Always refer to the most current documentation for your particular network setup.

+
+

After submitting the XCM message to initiate the HRMP channel opening, you should verify that the request was successful. Follow these steps to check the status of your channel request:

+
    +
  1. +

    Using Polkadot.js Apps, connect to the relay chain and navigate to the Developer dropdown, then select the Chain state option

    +

    +
  2. +
  3. +

    Query the HRMP open channel requests

    +
      +
    1. Select hrmp
    2. +
    3. Choose the hrmpOpenChannelRequests call
    4. +
    5. Click the + button to execute the query
    6. +
    7. Check the status of all pending channel requests
    8. +
    +

    +
  4. +
+

If your channel request was successful, you should see an entry for your parachain ID in the list of open channel requests. This confirms that your request has been properly registered on the relay chain and is awaiting acceptance by the target parachain.
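You can also perform this check programmatically. A minimal sketch with the polkadot-js API (the endpoint is a placeholder for your relay chain node):

// Sketch: list pending HRMP open-channel requests on the relay chain.
import { ApiPromise, WsProvider } from '@polkadot/api';

const api = await ApiPromise.create({
  provider: new WsProvider('ws://localhost:9944'), // placeholder endpoint
});

const requests = await api.query.hrmp.hrmpOpenChannelRequests.entries();
for (const [key, request] of requests) {
  const [channelId] = key.args; // { sender, recipient }
  console.log(channelId.toJSON(), request.toJSON());
}

await api.disconnect();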

+

Procedure for Accepting HRMP Channel

+

For the channel to be fully established, the target parachain must accept the channel request by submitting an XCM message to the relay chain.

+

Fund Receiver Sovereign Account

+

Before proceeding, ensure that the sovereign account of parachain 2600 on the relay chain is funded. This account will be responsible for covering any XCM Transact fees. To fund the account, follow the same process described in the previous section, Fund Sovereign Account.

+

Create Channel Accepting Extrinsic

+
    +
  1. +

    In Polkadot.js Apps, connect to the relay chain, navigate to the Developer dropdown and select the Extrinsics option

    +

    +
  2. +
  3. +

    Construct an hrmpAcceptOpenChannel extrinsic call

    +
      +
    1. Select the hrmp pallet
    2. +
    3. Choose the hrmpAcceptOpenChannel extrinsic
    4. +
    5. Fill in the parameters:
        +
      • sender - parachain ID of the requesting chain (in this case, 2500)
      • +
      +
    6. +
    7. Copy the encoded call data + +The encoded call data for accepting a channel with parachain 2500 should be 0x3c01c4090000
    8. +
    +
  4. +
+

Crafting and Submitting the XCM Message from the Receiver

+

To accept the HRMP channel opening, you need to create and submit an XCM message that includes the encoded hrmpAcceptOpenChannel call data from the previous step. This process is similar to the one described in the previous section, Crafting and Submitting the XCM Message, with a few key differences:

+
    +
  • Use the encoded call data for hrmpAcceptOpenChannel obtained in Step 2 of this section
  • +
  • In the last XCM instruction (DepositAsset), set the beneficiary to parachain 2600's sovereign account to receive any surplus funds
  • +
+

To send the XCM message to the relay chain, connect to parachain 2600 in Polkadot.js Apps. Fill in the required parameters as shown in the image below, ensuring that you:

+
    +
  1. Replace the call field with your encoded hrmpAcceptOpenChannel call data from the previous step
  2. +
  3. Use the correct beneficiary information
  4. +
  5. Click the Submit Transaction button to dispatch the XCM message to the relay chain
  6. +
+

+

After submitting the XCM message to accept the HRMP channel opening, verify that the channel has been set up correctly.

+
    +
  1. +

    Using Polkadot.js Apps, connect to the relay chain and navigate to the Developer dropdown, then select the Chain state option

    +

    +
  2. +
  3. +

    Query the HRMP channels

    +
      +
    1. Select hrmp
    2. +
    3. Choose the hrmpChannels call
    4. +
    5. Click the + button to execute the query
    6. +
    7. Check the status of the opened channel
    8. +
    +

    +
  4. +
+

If the channel has been successfully established, you should see the channel details in the query results.

+

By following these steps, you will have successfully accepted the HRMP channel request and established a unidirectional channel between the two parachains.

+
+

Note

+

Remember that for full bidirectional communication, you'll need to repeat this process in the opposite direction, with parachain 2600 initiating a channel request to parachain 2500.

+
+
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/tutorials/interoperability/xcm-channels/para-to-system/index.html b/tutorials/interoperability/xcm-channels/para-to-system/index.html new file mode 100644 index 00000000..cf87363e --- /dev/null +++ b/tutorials/interoperability/xcm-channels/para-to-system/index.html @@ -0,0 +1,5092 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Opening HRMP Channels with System Parachains | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + + + + +
+ +
+ + + + +

Opening HRMP Channels with System Parachains

+

Introduction

+

While establishing Horizontal Relay-routed Message Passing (HRMP) channels between regular parachains involves a two-step request and acceptance procedure, opening channels with system parachains follows a more straightforward approach.

+

System parachains are specialized chains that provide core functionality to the Polkadot network. Examples include Asset Hub for cross-chain asset transfers and Bridge Hub for connecting to external networks. Given their critical role, establishing communication channels with these system parachains has been optimized for efficiency and ease of use.

+

Any parachain can establish a bidirectional channel with a system chain through a single operation, requiring just one XCM message from the parachain to the relay chain.

+

Prerequisites

+

To successfully complete this process, you'll need to have the following in place:

+
    +
  • Access to a blockchain network consisting of:
      +
    • A relay chain
    • +
    • A parachain
    • +
    • An Asset Hub system chain
    • +
    +
  • +
  • A wallet containing enough funds to cover transaction fees on each of the participating chains
  • +
+

Procedure for Establishing HRMP Channel

+

This guide demonstrates opening an HRMP channel between parachain 2500 and system chain Asset Hub (parachain 1000) on the Rococo Local relay chain.

+

Fund Parachain Sovereign Account

+ +

The sovereign account for parachain 2500 on the relay chain must be funded so it can take care of any XCM transact fees.

+

Use Polkadot.js Apps UI to connect to the relay chain and transfer funds from your account to the parachain 2500 sovereign account.

+

+
+Calculating Parachain Sovereign Account +

To generate the sovereign account address for a parachain, you'll need to follow these steps:

+
    +
  1. +

    Determine if the parachain is an "up/down" chain (parent or child) or a "sibling" chain:

    +
      +
    • +

      Up/down chains use the prefix 0x70617261 (which decodes to b"para")

      +
    • +
    • +

      Sibling chains use the prefix 0x7369626c (which decodes to b"sibl")

      +
    • +
    +
  2. +
  3. +

    Calculate the u32 SCALE-encoded value of the parachain ID:

    +
      +
    • Parachain 2500 would be encoded as c4090000
    • +
    +
  4. +
  5. +

    Combine the prefix and parachain ID encoding to form the full sovereign account address:

    +

    The sovereign account of parachain 2500 on the relay chain will be 0x70617261c4090000000000000000000000000000000000000000000000000000, and the SS58 format of this address is 5Ec4AhPSY2GEE4VoHUVheqv5wwq2C1HMKa7c9fVJ1WKivX1Y

    +
  6. +
+

To perform this conversion, you can also use the "Para ID" to Address section in Substrate Utilities.

+
+

Create Establish Channel with System Extrinsic

+
    +
  1. +

    In Polkadot.js Apps, connect to the relay chain, navigate to the Developer dropdown and select the Extrinsics option

    +

    +
  2. +
  3. +

    Construct an establish_channel_with_system extrinsic call

    +
      +
    1. Select the hrmp pallet
    2. +
    3. Choose the establish_channel_with_system extrinsic
    4. +
    5. Fill in the parameters:
        +
      • target_system_chain - parachain ID of the target system chain (in this case, 1000)
      • +
      +
    6. +
    7. Copy the encoded call data + +The encoded call data for establishing a channel with system parachain 1000 should be 0x3c0ae8030000
    8. +
    +
  4. +
+

Crafting and Submitting the XCM Message

+

Connect to parachain 2500 using Polkadot.js Apps to send the XCM message to the relay chain. Input the necessary parameters as illustrated in the image below. Make sure to:

+
    +
  1. Insert your previously encoded establish_channel_with_system call data into the call field
  2. +
  3. Provide beneficiary details
  4. +
  5. Dispatch the XCM message to the relay chain by clicking the Submit Transaction button +
  6. +
+
+

Note

+

The exact process and parameters for submitting this XCM message may vary depending on your specific parachain and relay chain configurations. Always refer to the most current documentation for your particular network setup.

+
+

After successfully submitting the XCM message to the relay chain, two HRMP channels should be created, establishing bidirectional communication between parachain 2500 and system chain 1000. To verify this, follow these steps:

+
    +
  1. +

    Using Polkadot.js Apps, connect to the relay chain and navigate to the Developer dropdown, then select Chain state +

    +
  2. +
  3. +

    Query the HRMP channels

    +
      +
    1. Select hrmp from the options
    2. +
    3. Choose the hrmpChannels call
    4. +
    5. Click the + button to execute the query +
    6. +
    +
  4. +
  5. +

    Examine the query results. You should see output similar to the following: +

    [
    +    [
    +        [
    +            {
    +                "sender": 1000,
    +                "recipient": 2500
    +            }
    +        ],
    +        {
    +            "maxCapacity": 8,
    +            "maxTotalSize": 8192,
    +            "maxMessageSize": 1048576,
    +            "msgCount": 0,
    +            "totalSize": 0,
    +            "mqcHead": null,
    +            "senderDeposit": 0,
    +            "recipientDeposit": 0
    +        }
    +    ],
    +    [
    +        [
    +            {
    +                "sender": 2500,
    +                "recipient": 1000
    +            }
    +        ],
    +        {
    +            "maxCapacity": 8,
    +            "maxTotalSize": 8192,
    +            "maxMessageSize": 1048576,
    +            "msgCount": 0,
    +            "totalSize": 0,
    +            "mqcHead": null,
    +            "senderDeposit": 0,
    +            "recipientDeposit": 0
    +        }
    +    ]
    +]
    +

    +
  6. +
+

The output confirms the successful establishment of two HRMP channels:

+
    +
  • From chain 1000 (system chain) to chain 2500 (parachain)
  • +
  • From chain 2500 (parachain) to chain 1000 (system chain)
  • +
+

This bidirectional channel enables direct communication between the system chain and the parachain, allowing for cross-chain message passing.

+
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/index.html b/tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/index.html new file mode 100644 index 00000000..c40c654a --- /dev/null +++ b/tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/index.html @@ -0,0 +1,5222 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + XCM Transfers from Relay Chain to Parachain | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + + + + +
+ +
+ + + + +

From Relay Chain to Parachain

+

Introduction

+

Cross-Consensus Messaging (XCM) facilitates asset transfers both within the same consensus system and between different ones, such as between a relay chain and its parachains. For cross-system transfers, two main methods are available:

+
    +
  • Asset teleportation - a simple and efficient method involving only the source and destination chains, ideal for systems with a high level of trust
  • +
  • Reserve-backed transfers - involve a trusted reserve that holds the real assets and mints derivative tokens to track ownership. This method is suited for systems with lower trust levels
  • +
+

In this tutorial, you will learn how to perform a reserve-backed transfer of DOT between a relay chain (Polkadot) and a parachain (Astar).

+

Prerequisites

+

When adapting this tutorial for other chains, note that before you can send messages between different consensus systems, you must first open HRMP channels. For detailed guidance, refer to the XCM Channels article.

+

This tutorial uses Chopsticks to fork a relay chain and a parachain connected via HRMP channels. For more details on this setup, see the XCM Testing section on the Chopsticks page.

+

Setup

+

To simulate XCM operations between different consensus systems, start by forking the network with the following command:

+

chopsticks xcm -r polkadot -p astar
+
+After executing this command, the relay chain and parachain will expose the following WebSocket endpoints:

+ + + + + + + + + + + + + + + + + +
Chain                    WebSocket Endpoint
Polkadot (relay chain)   ws://localhost:8001
Astar (parachain)        ws://localhost:8000
+

You can perform the reserve-backed transfer using either the Polkadot.js Apps interface or the Polkadot API, depending on your preference. Both methods provide the same functionality to facilitate asset transfers between the relay chain and parachain.

+

Using Polkadot.js Apps

+

Open two browser tabs and connect to these endpoints using the Polkadot.js Apps interface:

+

a. Add the custom endpoint for each chain

+

b. Click Switch to connect to the respective network

+

+

This reserve-backed transfer method facilitates asset transfers from a local chain to a destination chain by trusting a third party called a reserve to store the real assets. Fees on the destination chain are deducted from the asset specified in the assets vector at the fee_asset_item index, covering up to the specified weight_limit. The operation fails if the required weight exceeds this limit, potentially putting the transferred assets at risk.

+

The following steps outline how to execute a reserve-backed transfer from the Polkadot relay chain to the Astar parachain.

+

From the Relay Chain Perspective

+
    +
  1. +

    Navigate to the Extrinsics page

    +
      +
    1. Click on the Developer tab from the top navigation bar
    2. +
    3. Select Extrinsics from the dropdown
    4. +
    +

    +
  2. +
  3. +

    Select xcmPallet

    +

    +
  4. +
  5. +

    Select the limitedReservedAssetTransfer extrinsic from the dropdown list

    +

    +
  6. +
  7. +

    Fill out the required fields:

    +
      +
    1. +

      dest - specifies the destination context for the assets. Commonly set to [Parent, Parachain(..)] for parachain-to-parachain transfers or [Parachain(..)] for relay chain-to-parachain transfers. In this case, since the transfer is from a relay chain to a parachain, the destination (Location) is the following:

      +
      { parents: 0, interior: { X1: [{ Parachain: 2006 }] } }
      +
      +
    2. +
    3. +

      beneficiary - defines the recipient of the assets within the destination context, typically represented as an AccountId32 value. This example uses the following account present in the destination chain:

      +
      X2mE9hCGX771c3zzV6tPa8U2cDz4U4zkqUdmBrQn83M3cm7
      +
      +
    4. +
    5. +

      assets - lists the assets to be withdrawn, including those designated for fee payment on the destination chain

      +
    6. +
    7. feeAssetItem - indicates the index of the asset within the assets list to be used for paying fees
    8. +
    9. weightLimit - specifies the weight limit, if applicable, for the fee payment on the remote chain
    10. +
    11. +

      Click on the Submit Transaction button to send the transaction

      +

      +
    12. +
    +
  8. +
+

After submitting the transaction, verify that the xcmPallet.FeesPaid and xcmPallet.Sent events have been emitted:

+

+

From the Parachain Perspective

+

After submitting the transaction from the relay chain, confirm its success by checking the parachain's events. Look for the assets.Issued event, which verifies that the assets have been issued to the destination as expected:
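Instead of checking the events manually in the UI, you can subscribe to them from a script. A minimal sketch with the polkadot-js API, pointed at the forked Astar endpoint from the setup above:

// Sketch: watch for the assets.Issued event on the parachain side.
import { ApiPromise, WsProvider } from '@polkadot/api';

const api = await ApiPromise.create({
  provider: new WsProvider('ws://localhost:8000'), // Astar fork from the Chopsticks setup
});

await api.query.system.events((events) => {
  for (const { event } of events) {
    if (event.section === 'assets' && event.method === 'Issued') {
      console.log('Assets issued:', event.data.toHuman());
    }
  }
});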

+

+

Using PAPI

+

To programmatically execute the reserve-backed asset transfer between the relay chain and the parachain, you can use Polkadot API (PAPI). PAPI is a robust toolkit that simplifies interactions with Polkadot-based chains. For this project, you'll first need to set up your environment, install necessary dependencies, and create a script to handle the transfer process.

+
    +
  1. +

    Start by creating a folder for your project: +

    mkdir reserve-backed-asset-transfer
+cd reserve-backed-asset-transfer
    +

    +
  2. +
  3. +

    Initialize a Node.js project and install the required dependencies. Execute the following commands:

    +
    npm init
    +npm install polkadot-api @polkadot-labs/hdkd @polkadot-labs/hdkd-helpers
    +
    +
  4. +
  5. +

    To enable static, type-safe APIs for interacting with the Polkadot and Astar chains, add their metadata to your project using PAPI:

    +
    npx papi add dot -n polkadot
    +npx papi add astar -w wss://rpc.astar.network
    +
    +
    +

    Note

    +
      +
    • dot and astar are arbitrary names you assign to the chains, allowing you to access their metadata information
    • +
    • The first command uses the well-known Polkadot chain, while the second connects to the Astar chain using its WebSocket endpoint
    • +
    +
    +
  6. +
  7. +

    Create a index.js file and insert the following code to configure the clients and handle the asset transfer

    +
    // Import necessary modules from Polkadot API and helpers
    +import {
    +  astar, // Astar chain metadata
    +  dot, // Polkadot chain metadata
    +  XcmVersionedLocation,
    +  XcmVersionedAssets,
    +  XcmV3Junction,
    +  XcmV3Junctions,
    +  XcmV3WeightLimit,
    +  XcmV3MultiassetFungibility,
    +  XcmV3MultiassetAssetId,
    +} from '@polkadot-api/descriptors';
    +import { createClient } from 'polkadot-api';
    +import { sr25519CreateDerive } from '@polkadot-labs/hdkd';
    +import {
    +  DEV_PHRASE,
    +  entropyToMiniSecret,
    +  mnemonicToEntropy,
    +  ss58Decode,
    +} from '@polkadot-labs/hdkd-helpers';
    +import { getPolkadotSigner } from 'polkadot-api/signer';
    +import { getWsProvider } from 'polkadot-api/ws-provider/web';
    +import { withPolkadotSdkCompat } from 'polkadot-api/polkadot-sdk-compat';
    +import { Binary } from 'polkadot-api';
    +
    +// Create Polkadot client using WebSocket provider for Polkadot chain
    +const polkadotClient = createClient(
    +  withPolkadotSdkCompat(getWsProvider('ws://127.0.0.1:8001')),
    +);
    +const dotApi = polkadotClient.getTypedApi(dot);
    +
    +// Create Astar client using WebSocket provider for Astar chain
    +const astarClient = createClient(
    +  withPolkadotSdkCompat(getWsProvider('ws://localhost:8000')),
    +);
    +const astarApi = astarClient.getTypedApi(astar);
    +
    +// Create keypair for Alice using dev phrase to sign transactions
    +const miniSecret = entropyToMiniSecret(mnemonicToEntropy(DEV_PHRASE));
    +const derive = sr25519CreateDerive(miniSecret);
    +const aliceKeyPair = derive('//Alice');
    +const alice = getPolkadotSigner(
    +  aliceKeyPair.publicKey,
    +  'Sr25519',
    +  aliceKeyPair.sign,
    +);
    +
    +// Define recipient (Dave) address on Astar chain
    +const daveAddress = 'X2mE9hCGX771c3zzV6tPa8U2cDz4U4zkqUdmBrQn83M3cm7';
    +const davePublicKey = ss58Decode(daveAddress)[0];
    +const idBenef = Binary.fromBytes(davePublicKey);
    +
    +// Define Polkadot Asset ID on Astar chain (example)
    +const polkadotAssetId = 340282366920938463463374607431768211455n;
    +
    +// Fetch asset balance of recipient (Dave) before transaction
    +let assetMetadata = await astarApi.query.Assets.Account.getValue(
    +  polkadotAssetId,
    +  daveAddress,
    +);
    +console.log('Asset balance before tx:', assetMetadata?.balance ?? 0);
    +
    +// Prepare and submit transaction to transfer assets from Polkadot to Astar
    +const tx = dotApi.tx.XcmPallet.limited_reserve_transfer_assets({
    +  dest: XcmVersionedLocation.V3({
    +    parents: 0,
    +    interior: XcmV3Junctions.X1(
    +      XcmV3Junction.Parachain(2006), // Destination is the Astar parachain
    +    ),
    +  }),
    +  beneficiary: XcmVersionedLocation.V3({
    +    parents: 0,
    +    interior: XcmV3Junctions.X1(
    +      XcmV3Junction.AccountId32({
    +        // Beneficiary address on Astar
    +        network: undefined,
    +        id: idBenef,
    +      }),
    +    ),
    +  }),
    +  assets: XcmVersionedAssets.V3([
    +    {
    +      id: XcmV3MultiassetAssetId.Concrete({
    +        parents: 0,
    +        interior: XcmV3Junctions.Here(), // Asset from the sender's location
    +      }),
    +      fun: XcmV3MultiassetFungibility.Fungible(120000000000), // Asset amount to transfer
    +    },
    +  ]),
    +  fee_asset_item: 0, // Asset used to pay transaction fees
    +  weight_limit: XcmV3WeightLimit.Unlimited(), // No weight limit on transaction
    +});
    +
    +// Sign and submit the transaction
    +tx.signSubmitAndWatch(alice).subscribe({
    +  next: async (event) => {
    +    if (event.type === 'finalized') {
    +      console.log('Transaction completed successfully');
    +    }
    +  },
    +  error: console.error,
    +  complete() {
    +    polkadotClient.destroy(); // Clean up after transaction
    +  },
    +});
    +
    +// Wait for transaction to complete
    +await new Promise((resolve) => setTimeout(resolve, 20000));
    +
    +// Fetch asset balance of recipient (Dave) after transaction
    +assetMetadata = await astarApi.query.Assets.Account.getValue(
    +  polkadotAssetId,
    +  daveAddress,
    +);
    +console.log('Asset balance after tx:', assetMetadata?.balance ?? 0);
    +
    +// Exit the process
    +process.exit(0);
    +
    +
    +

    Note

    +

    To use this script with real-world blockchains, you'll need to update the WebSocket endpoint to the appropriate one, replace the Alice account with a valid account, and ensure the account has sufficient funds to cover transaction fees.

    +
    +
  8. +
  9. +

    Execute the script

    +
    node index.js
    +
    +
  10. +
  11. +

    Check the terminal output. If the operation is successful, you should see the following message:

    +

node index.js
Asset balance before tx: 0
Transaction completed successfully
Asset balance after tx: 119999114907n

    +
  12. +
+

Additional Resources

+

You can perform these operations using the Asset Transfer API for an alternative approach. Refer to the Asset Transfer API guide in the documentation for more details.

+
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/tutorials/interoperability/xcm-transfers/index.html b/tutorials/interoperability/xcm-transfers/index.html new file mode 100644 index 00000000..31fa16d3 --- /dev/null +++ b/tutorials/interoperability/xcm-transfers/index.html @@ -0,0 +1,4903 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + XCM Transfers | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + + + +
+
+
+ + + + + + +
+
+
+ + + + + +
+
+
+ + + + + +
+
+
+ + +
+ +
+ + + + + + + + + + + +

XCM Transfers

+

Discover comprehensive tutorials that guide you through performing asset transfers between distinct consensus systems. These tutorials leverage XCM (Cross-Consensus Messaging) technology, which enables cross-chain communication and asset exchanges across different blockchain networks. Whether you're working within the same ecosystem or bridging multiple systems, XCM ensures secure, efficient, and interoperable solutions.

+

By mastering XCM-based transfers, you'll unlock new possibilities for building cross-chain applications and expanding blockchain utility. Learn the methods, tools, and best practices for testing XCM-powered transfers, ensuring your systems achieve robust interoperability.

+

In This Section

+

+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+

+ + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/tutorials/polkadot-sdk/index.html b/tutorials/polkadot-sdk/index.html new file mode 100644 index 00000000..c6e211ac --- /dev/null +++ b/tutorials/polkadot-sdk/index.html @@ -0,0 +1,4959 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Polkadot SDK Tutorials | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + + + +
+
+
+ + + + + + +
+
+
+ + + + + +
+
+
+ + + + + +
+
+
+ + +
+ +
+ + + + + + + + + + + +

Polkadot SDK Tutorials

+

The Polkadot SDK is a versatile framework for building custom blockchains, whether as standalone networks or as part of the Polkadot ecosystem. With its modular design and extensible tools, libraries, and runtime components, the SDK simplifies the process of creating parachains, system chains, and solochains.

+

Ready to create a parachain from the ground up? Start with the tutorials highlighted in the Build and Deploy a Parachain section.

+

Build and Deploy a Parachain

+

Follow these key milestones to guide you through parachain development. Each step links to detailed tutorials for a deeper dive into each stage:

+
    +
  • +

    Install the Polkadot SDK - set up the necessary tools to begin building on Polkadot. This step will get your environment ready for parachain development

    +
  • +
  • +

    Start Developing Your Own Parachain - kickstart your development by setting up a local solochain. This tutorial will lay the foundation for building and customizing your own parachain within the Polkadot ecosystem

    +
  • +
  • +

    Prepare Your Parachain for Deployment - follow these steps to set up a local relay chain environment and connect your parachain, getting it ready for deployment on the Polkadot network

    +
  • +
+

In This Section

+

+

+

+

Additional Resources

+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/tutorials/polkadot-sdk/parachains/connect-to-relay-chain/acquire-a-testnet-slot/index.html b/tutorials/polkadot-sdk/parachains/connect-to-relay-chain/acquire-a-testnet-slot/index.html new file mode 100644 index 00000000..0faf6215 --- /dev/null +++ b/tutorials/polkadot-sdk/parachains/connect-to-relay-chain/acquire-a-testnet-slot/index.html @@ -0,0 +1,5156 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Acquire a TestNet Slot | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + + + + +
+ +
+ + + + +

Acquire a TestNet Slot

+

Introduction

+

This tutorial demonstrates deploying a parachain on a public test network like the Paseo network. Public TestNets have a higher bar to entry than a private network but represent an essential step in preparing a parachain project to move into a production network.

+

Prerequisites

+

Before you start, you need to have the following prerequisites:

+ +

Get Started with an Account and Tokens

+

To perform any action on Paseo, you need PAS tokens, which can be requested from the Polkadot Faucet. Also, to store the tokens, you must have access to a Substrate-compatible digital currency wallet. Development keys and accounts should never hold assets of actual value and should not be used for production. Many options are available for holding digital currency—including hardware wallets and browser-based applications—and some are more reputable than others. You should do your own research before selecting one.

+

However, you can use the Polkadot.js Apps interface to get you started for testing purposes.

+

To prepare an account, follow these steps:

+
    +
  1. +

    Open the Polkadot.js Apps interface and connect to the Paseo network

    +

    +
  2. +
  3. +

    Navigate to the Accounts section

    +
      +
    1. Click on the Accounts tab in the top menu
    2. +
    3. Select the Accounts option from the dropdown menu
    4. +
    +

    +
  4. +
  5. +

    Copy the address of the account you want to use for the parachain deployment

    +

    +
  6. +
  7. +

    Visit the Polkadot Faucet and paste the copied address in the input field. Ensure that the network is set to Paseo and click on the Get some PASs button

    +

    +

    After a few seconds, you will receive 100 PAS tokens in your account.

    +
  8. +
+

Reserve a Parachain Identifier

+

You must reserve a parachain identifier before registering a parathread on Paseo. The steps are similar to the ones you followed in Prepare a Local Parachain to reserve an identifier on the local relay chain. However, for the public TestNet, you'll be assigned the next available identifier.

+

To reserve a parachain identifier, follow these steps:

+
    +
  1. +

    Navigate to the Parachains section

    +
      +
    1. Click on the Network tab in the top menu
    2. +
    3. Select the Parachains option from the dropdown menu
    4. +
    +

    +
  2. +
  3. +

    Register a parathread

    +
      +
    1. Select the Parathreads tab
    2. +
    3. Click on the + ParaId button
    4. +
    +

    +
  4. +
  5. +

    Review the transaction and click on the + Submit button

    +

    +

    For this case, the next available parachain identifier is 4508.

    +
  6. +
  7. +

    After submitting the transaction, you can navigate to the Explorer tab and check the list of recent events for successful registrar.Reserved

    +

    +
  8. +
+

Modify the Chain Specification File

+

The files required to register a parachain must specify the correct relay chain to connect to and the parachain identifier you have been assigned. To make these changes, you must build and modify the chain specification file for your parachain. In this tutorial, the relay chain is paseo, and the parachain identifier is 4508.

+

To modify the chain specification:

+
    +
  1. +

    Generate the plain text chain specification for the parachain template node by running the following command:

    +
    ./target/release/parachain-template-node build-spec \
    +  --disable-default-bootnode > plain-parachain-chainspec.json
    +
    +
  2. +
  3. +

    Open the plain text chain specification for the parachain template node in a text editor

    +
  4. +
  5. +

    Set relay_chain to paseo and para_id to the identifier you've been assigned. For example, if your reserved identifier is 4508, set the para_id field to 4508:

    +
    "...": "...",
    +"relay_chain": "paseo",
    +"para_id": 4508,
    +        "...": {}
    +    }
    +}
    +
    +
  6. +
  7. +

    Set the parachainId to the parachain identifier that you previously reserved:

    +
{
+    "...": "...",
+    "genesis": {
+        "runtime": {
+            "...": {},
+            "parachainInfo": {
+                "parachainId": 4508
+            }
+        },
+        "...": {}
+    }
+}
    +
    +
  8. +
  9. +

    Add the public key for your account to the session keys section. Each configured session key will require a running collator:

    +
{
+    "...": "...",
+    "genesis": {
+        "runtime": {
+            "...": {},
+            "session": {
+                "keys": [
+                    [
+                        "5HErbKmL5JmUKDVsH1aGyXTGZb4i9iaNsFhSgkNDr8qp2Dvj",
+                        "5HErbKmL5JmUKDVsH1aGyXTGZb4i9iaNsFhSgkNDr8qp2Dvj",
+                        {
+                            "aura": "5HErbKmL5JmUKDVsH1aGyXTGZb4i9iaNsFhSgkNDr8qp2Dvj"
+                        }
+                    ]
+                ]
+            }
+        },
+        "...": {}
+    }
+}
    +
    +
  10. +
  11. +

    Save your changes and close the plain text chain specification file

    +
  12. +
  13. +

    Generate a raw chain specification file from the modified chain specification file:

    +
    ./target/release/parachain-template-node build-spec \
    +  --chain plain-parachain-chainspec.json \
    +  --disable-default-bootnode \
    +  --raw > raw-parachain-chainspec.json
    +
    +

    After running the command, you will see the following output:

    +

./target/release/parachain-template-node build-spec --chain plain-parachain-chainspec.json --disable-default-bootnode --raw > raw-parachain-chainspec.json

2024-09-11 09:48:15 Building chain spec
2024-09-11 09:48:15 assembling new collators for new session 0 at #0
2024-09-11 09:48:15 assembling new collators for new session 1 at #0

    +
  14. +
+

Export Required Files

+

To prepare the parachain collator to be registered on Paseo, follow these steps:

+
    +
  1. +

    Export the Wasm runtime for the parachain by running a command similar to the following:

    +
    ./target/release/parachain-template-node export-genesis-wasm \
    +  --chain raw-parachain-chainspec.json para-4508-wasm
    +
    +
  2. +
  3. +

    Export the genesis state for the parachain by running a command similar to the following:

    +
    ./target/release/parachain-template-node export-genesis-state \
    +  --chain raw-parachain-chainspec.json para-4508-state
    +
    +
  4. +
+

Start the Collator Node

+

You must have the ports for the collator publicly accessible and discoverable to enable parachain nodes to peer with Paseo validator nodes to produce blocks. You can specify the ports with the --port command-line option. For example, you can start the collator with a command similar to the following:

+
./target/release/parachain-template-node --collator \
+  --chain raw-parachain-chainspec.json \
+  --base-path /tmp/parachain/pubs-demo \
+  --port 50333 \
+  --rpc-port 8855 \
+  -- \
+  --execution wasm \
+  --chain paseo \
+  --port 50343 \
+  --rpc-port 9988
+
+

In this example, the first --port setting specifies the port for the collator node and the second --port specifies the embedded relay chain node port. The first --rpc-port setting specifies the port you can use to connect to the collator. The second --rpc-port specifies the port for connecting to the embedded relay chain.

+

Obtain Coretime

+

With your parachain collator operational, the next step is acquiring coretime. This is essential for ensuring your parachain's security through the relay chain. Agile Coretime enhances Polkadot's resource management, offering developers greater economic adaptability. Once you have configured your parachain, you can follow two paths:

+
    +
  • Bulk coretime is purchased via the Broker pallet on the respective coretime system parachain. You can purchase bulk coretime on the coretime chain and assign the purchased core to the registered ParaID
  • +
  • On-demand coretime is ordered via the OnDemandAssignment pallet, which is located on the respective relay chain (see the sketch after this list)
  • +
+

For more information on coretime, refer to the Coretime documentation.
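For the on-demand path, a minimal sketch using the polkadot-js API is shown below. It assumes the relay chain exposes the on-demand pallet as onDemandAssignmentProvider with a placeOrderAllowDeath call; verify the pallet and call names for your target relay chain in Polkadot.js Apps before using it, and treat the endpoint, seed, and amount as placeholders:

// Sketch: place an on-demand coretime order for the registered ParaID.
import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

const api = await ApiPromise.create({
  provider: new WsProvider('wss://INSERT_RELAY_CHAIN_ENDPOINT'),
});
const account = new Keyring({ type: 'sr25519' }).addFromUri('//INSERT_SEED');

const maxAmount = 1_000_000_000_000n; // max fee you are willing to pay (placeholder)
const paraId = 4508; // the ParaID reserved earlier

await api.tx.onDemandAssignmentProvider
  .placeOrderAllowDeath(maxAmount, paraId)
  .signAndSend(account, ({ status }) => {
    if (status.isInBlock) console.log('Order included in block');
  });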

+
+ + + +
+ + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/tutorials/polkadot-sdk/parachains/connect-to-relay-chain/index.html b/tutorials/polkadot-sdk/parachains/connect-to-relay-chain/index.html new file mode 100644 index 00000000..5ade595b --- /dev/null +++ b/tutorials/polkadot-sdk/parachains/connect-to-relay-chain/index.html @@ -0,0 +1,4968 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Connect to a Relay Chain | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + +
+
+
+ + + + + +
+
+
+ + +
+ +
+ + + + + + + + + + + +

Connect to a Relay Chain

+

Ready to connect your parachain to Polkadot? Take the next step in your parachain development journey by learning how to connect your custom chain to a relay chain. These tutorials will guide you through the core processes of parachain integration, covering:

+
    +
  • Relay chain setup and configuration
  • +
  • Registering and acquiring a parachain slot
  • +
  • Preparing the genesis state and runtime
  • +
  • Configuring collator nodes for network operation
  • +
  • Deploying your parachain to a TestNet
  • +
+

Each tutorial is designed to build on foundational concepts, offering a clear and structured progression from local development to seamless integration with Polkadot’s public network. Whether you’re aiming to test locally or deploy on TestNet, these guides will ensure you’re equipped with the skills to succeed.

+

In This Section

+

+

+

+ + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-parachain/index.html b/tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-parachain/index.html new file mode 100644 index 00000000..56c6288c --- /dev/null +++ b/tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-parachain/index.html @@ -0,0 +1,5313 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Prepare a Parachain | Polkadot Developer Docs + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + + + +
+ +
+ +
+ + + + +
+
+ + + +
+
+
+ + + + + + +
+
+
+ + + + + + +
+ +
+ + + + +

Prepare a Parachain

+

Introduction

+

This tutorial illustrates reserving a parachain identifier with a local relay chain and connecting a local parachain to that relay chain. By completing this tutorial, you will accomplish the following objectives:

+
    +
  • Compile a local parachain node
  • +
  • Reserve a unique identifier with the local relay chain for the parachain to use
  • +
  • Configure a chain specification for the parachain
  • +
  • Export the runtime and genesis state for the parachain
  • +
  • Start the local parachain and see that it connects to the local relay chain
  • +
+

Prerequisites

+

Before you begin, ensure that you have the following prerequisites:

+
    +
  • Configured a local relay chain with two validators as described in the Prepare a Relay Chain tutorial
  • +
  • You are aware that parachain versions and dependencies are tightly coupled with the version of the relay chain they connect to, and you know the software version used to configure the relay chain
  • +
+

Build the Parachain Template

+

This tutorial uses the Polkadot SDK Parachain Template to illustrate launching a parachain that connects to a local relay chain. The parachain template is similar to the Solochain Template used in development. You can also use the parachain template as the starting point for developing a custom parachain project.

+

To build the parachain template, follow these steps:

+
    +
  1. +

    Clone the appropriate branch of the polkadot-sdk-parachain-template repository

    +
    git clone https://github.com/paritytech/polkadot-sdk-parachain-template.git
    +
    +
    +

    Note

    +

    Ensure that you clone the correct branch of the repository that matches the version of the relay chain you are connecting to.

    +
    +
  2. +
  3. +

    Change the directory to the cloned repository

    +
    cd polkadot-sdk-parachain-template
    +
    +
  4. +
  5. +

    Build the parachain template collator

    +
    cargo build --release
    +
    +
    +

    Note

    +

    Depending on your system’s performance, compiling the node can take a few minutes.

    +
    +
  6. +
+

Reserve a Parachain Identifier

+

Every parachain must reserve a unique ParaID identifier to connect to its specific relay chain. Each relay chain manages its own set of unique identifiers for the parachains that connect to it. The identifier is called a ParaID because the same identifier can be used to identify a slot occupied by a parachain or a parathread.

+

Note that you must have an account with sufficient funds to reserve a slot on a relay chain. You can determine the number of tokens a specific relay chain requires by checking the ParaDeposit configuration in the paras_registrar pallet for that relay chain. The following example shows a ParaDeposit requirement of 40 native tokens:

+
parameter_types! {
+    pub const ParaDeposit: Balance = 40 * UNITS;
+}
+
+impl paras_registrar::Config for Runtime {
+    type RuntimeOrigin = RuntimeOrigin;
+    type RuntimeEvent = RuntimeEvent;
+    type Currency = Balances;
+    type OnSwap = (Crowdloan, Slots);
+    type ParaDeposit = ParaDeposit;
+    type DataDepositPerByte = DataDepositPerByte;
+    type WeightInfo = weights::runtime_common_paras_registrar::WeightInfo<Runtime>;
+}
+
+

Each relay chain allocates identifiers by incrementing from 2000 for all chains that aren't system parachains. System parachains use a different method to allocate slot identifiers.
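You can also read the ParaDeposit value from a running relay chain node rather than the runtime source. A minimal sketch with the polkadot-js API (the endpoint is a placeholder for your relay chain node):

// Sketch: read the ParaDeposit constant from a live relay chain.
import { ApiPromise, WsProvider } from '@polkadot/api';

const api = await ApiPromise.create({
  provider: new WsProvider('ws://localhost:9944'), // placeholder endpoint
});
console.log('ParaDeposit:', api.consts.registrar.paraDeposit.toHuman());
await api.disconnect();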

+

To reserve a parachain identifier, follow these steps:

+
    +
  1. +

    Ensure your local relay chain validators are running. For further information, refer to the Prepare a Relay Chain tutorial

    +
  2. +
  3. +

    Connect to a local relay chain node using the Polkadot.js Apps interface. If you have followed the Prepare a Relay Chain tutorial, you can access the Polkadot.js Apps interface at ws://localhost:9944

    +

    +
  4. +
  5. +

    Navigate to the Parachains section

    +
      +
    1. Click on the Network tab
    2. +
    3. Select Parachains from the dropdown menu
    4. +
    +

    +
  6. +
  7. +

    Register a parathread

    +
      +
    1. Select the Parathreads tab
    2. +
    3. Click on the + ParaId button
    4. +
    +

    +
  8. +
  9. +

    Fill in the required fields and click on the + Submit button

    +

    +
    +

    Note

    +

    The account used to reserve the identifier will be the account charged for the transaction and the origin account for the parathread associated with the identifier.

    +
    +
  10. +
  11. +

    After submitting the transaction, you can navigate to the Explorer tab and check the list of recent events for successful registrar.Reserved

    +

    +
  12. +
+

You are now ready to prepare the chain specification and generate the files required for your parachain to connect to the relay chain using the reserved identifier (paraId 2000).

+

Modify the Default Chain Specification

+

To register your parachain with the local relay chain, you must modify the default chain specification to use your reserved parachain identifier.

+

To modify the default chain specification, follow these steps:

+
    +
  1. +

    Generate the plain text chain specification for the parachain template node by running the following command

    +
    ./target/release/parachain-template-node build-spec \
    +  --disable-default-bootnode > plain-parachain-chainspec.json
    +
    +
  2. +
  3. +

    Open the plain text chain specification for the parachain template node in a text editor

    +
  4. +
  5. +

    Set the para_id to the parachain identifier that you previously reserved. For example, if your reserved identifier is 2000, set the para_id field to 2000:

    +
    "...": "...",
    +"relay_chain": "rococo-local",
    +"para_id": 2000,
    +"genesis": {
    +        "...": {}
    +    }
    +}
    +
    +
  6. +
  7. +

    Set the parachainId to the parachain identifier that you previously reserved. For example, if your reserved identifier is 2000, set the parachainId field to 2000

    +
    "...": "...",
    +    "genesis": {
    +        "runtime": {
    +            "...": {},
    +            "parachainInfo": {
    +                "parachainId": 2000
    +            }
    +        },
    +        "...": {}
    +    }
    +}
    +
    +
  8. +
  9. +

    If you complete this tutorial at the same time as anyone else on the same local network, an additional step is needed to prevent accidentally peering with their nodes. Find the following line and add characters to make your protocolId unique

    +
    "...": "...",
    +"protocolId": "template-local",
    +"genesis": {
    +        "...": {}
    +    }
    +}
    +
    +
  10. +
  11. +

    Save your changes and close the plain text chain specification file

    +
  12. +
  13. +

    Generate a raw chain specification file from the modified chain specification file by running the following command

    +
    ./target/release/parachain-template-node build-spec \
    +  --chain plain-parachain-chainspec.json \
    +  --disable-default-bootnode \
    +  --raw > raw-parachain-chainspec.json
    +
    +

    After running the command, you will see the following output:

    +

    + ./target/release/parachain-template-node build-spec \ + --chain plain-parachain-chainspec.json \ + --disable-default-bootnode \ + --raw > raw-parachain-chainspec.json + + 2024-09-10 14:34:58 Building chain spec + 2024-09-10 14:34:59 assembling new collators for new session 0 at #0 + 2024-09-10 14:34:59 assembling new collators for new session 1 at #0 + +

    +
  14. +
+
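If you prefer to script these edits rather than make them by hand, the same changes can be applied with jq. This is a minimal sketch, not part of the original tutorial: it assumes jq is installed, that the field layout matches the examples above, and that the "-myname" suffix is only illustrative:

    # Patch the reserved identifier and a unique protocolId into the plain chain spec.
    jq '.para_id = 2000
        | .genesis.runtime.parachainInfo.parachainId = 2000
        | .protocolId = (.protocolId + "-myname")' \
      plain-parachain-chainspec.json > patched-parachain-chainspec.json

If you take this route, pass patched-parachain-chainspec.json to the build-spec --chain option above instead of the hand-edited file.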

Prepare the Parachain Collator

With the local relay chain running and the raw chain specification for the parachain template updated, you can start the parachain collator node and export information about its runtime and genesis state.

To prepare the parachain collator to be registered:

1. Export the Wasm runtime for the parachain

    The relay chain needs the parachain-specific runtime validation logic to validate parachain blocks. You can export the Wasm runtime for a parachain collator node by running a command similar to the following:

    ./target/release/parachain-template-node export-genesis-wasm \
      --chain raw-parachain-chainspec.json para-2000-wasm

2. Generate a parachain genesis state

    To register a parachain, the relay chain needs to know the genesis state of the parachain. You can export the entire genesis state—hex-encoded—to a file by running a command similar to the following:

    ./target/release/parachain-template-node export-genesis-state \
      --chain raw-parachain-chainspec.json para-2000-genesis-state

    After running the command, you will see output similar to the following:

    2024-09-10 14:41:13 🔨 Initializing Genesis block/state (state: 0xb089…1830, header-hash: 0x6b0b…bd69)

    Note

    The runtime and state you export must be for the genesis block. You can't connect a parachain with any previous state to a relay chain. All parachains must start from block 0 on the relay chain. See Convert a Solo Chain for details on how the parachain template was created and how to convert the chain logic—not its history or state migrations—to a parachain.

3. Start a collator node with a command similar to the following:

    ./target/release/parachain-template-node \
      --charlie \
      --collator \
      --force-authoring \
      --chain raw-parachain-chainspec.json \
      --base-path /tmp/charlie-parachain/ \
      --unsafe-force-node-key-generation \
      --port 40333 \
      --rpc-port 8844 \
      -- \
      --chain INSERT_RELAY_CHAIN_PATH/local-raw-spec.json \
      --port 30333 \
      --rpc-port 9946

    Note

    Ensure that you replace INSERT_RELAY_CHAIN_PATH with the path to the raw chain specification for the local relay chain.

    After running the command, you will see output similar to the following:

    2024-09-10 16:26:30 [Parachain] PoV size { header: 0.21875kb, extrinsics: 3.6103515625kb, storage_proof: 3.150390625kb }
    2024-09-10 16:26:30 [Parachain] Compressed PoV size: 6.150390625kb
    2024-09-10 16:26:33 [Relaychain] 💤 Idle (2 peers), best: #1729 (0x3aa4…cb6b), finalized #1726 (0xff7a…4352), ⬇ 9.1kiB/s ⬆ 3.8kiB/s
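Before moving on to registration, it can help to confirm that the export commands actually produced the two artifacts. This quick check is optional and not part of the original tutorial; the file names follow the examples above, and ls and head are standard utilities:

    # Both files should exist and be non-empty, hex-encoded blobs.
    ls -lh para-2000-wasm para-2000-genesis-state
    head -c 32 para-2000-genesis-state; echo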

Register With the Local Relay Chain

With the local relay chain and collator node running, you can register the parachain on the local relay chain. In a live public network, registration typically involves a parachain auction. For this tutorial and local testing, you can use a Sudo transaction and the Polkadot.js Apps interface. A Sudo transaction lets you bypass the steps required to acquire a parachain or parathread slot. This transaction must be executed on the relay chain.

To register the parachain, follow these steps:

1. Validate that your local relay chain validators are running

2. Navigate to the Sudo tab in the Polkadot.js Apps interface

    1. Click on the Developer tab
    2. Select Sudo from the dropdown menu

3. Submit a transaction with Sudo privileges

    1. Select the paraSudoWrapper pallet
    2. Click on the sudoScheduleParaInitialize extrinsic from the list of available extrinsics

4. Fill in the required fields

    1. id - type the parachain identifier you reserved
    2. genesisHead - click the file upload button and select the para-2000-genesis-state file you exported
    3. validationCode - click the file upload button and select the para-2000-wasm file you exported
    4. paraKind - select Yes if you are registering a parachain or No if you are registering a parathread
    5. Click on the Submit Transaction button

5. After submitting the transaction, you can navigate to the Explorer tab and check the list of recent events for a successful paras.PvfCheckAccepted

    After the parachain is initialized, you can see it in the Parachains section of the Polkadot.js Apps interface

6. Click Network, select Parachains, and wait for a new epoch to start

The relay chain tracks the latest block—the head—of each parachain. When a relay chain block is finalized, the parachain blocks that have completed the validation process are also finalized. This is how Polkadot achieves pooled, shared security for its parachains.

After the parachain connects to the relay chain in the next epoch and finalizes its first block, you can see information about it in the Polkadot/Substrate Portal.

The terminal where the parachain is running also displays details similar to the following:

    ...
    [Relaychain] 💤 Idle (2 peers), best: #90 (0x5f73…1ccf), finalized #87 (0xeb50…68ea), ⬇ 1.4kiB/s ⬆ 1.1kiB/s
    [Parachain] 💤 Idle (0 peers), best: #0 (0x3626…fef3), finalized #0 (0x3626…fef3), ⬇ 1.2kiB/s ⬆ 0.7kiB/s
    [Relaychain] 💤 Idle (2 peers), best: #90 (0x5f73…1ccf), finalized #88 (0xd43c…c3e6), ⬇ 0.7kiB/s ⬆ 0.5kiB/s
    [Parachain] 💤 Idle (0 peers), best: #0 (0x3626…fef3), finalized #0 (0x3626…fef3), ⬇ 1.0kiB/s ⬆ 0.6kiB/s
    [Relaychain] 👶 New epoch 9 launching at block 0x1c93…4aa9 (block slot 281848325 >= start slot 281848325)
    [Relaychain] 👶 Next epoch starts at slot 281848335
    [Relaychain] ✨ Imported #91 (0x1c93…4aa9)
    [Parachain] Starting collation. relay_parent=0x1c936289cfe15fabaa369f7ae5d73050581cb12b75209c11976afcf07f6a4aa9 at=0x36261113c31019d4b2a1e27d062e186f46da0e8f6786177dc7b35959688ffef3
    [Relaychain] 💤 Idle (2 peers), best: #91 (0x1c93…4aa9), finalized #88 (0xd43c…c3e6), ⬇ 1.2kiB/s ⬆ 0.7kiB/s
    [Parachain] 💤 Idle (0 peers), best: #0 (0x3626…fef3), finalized #0 (0x3626…fef3), ⬇ 0.2kiB/s ⬆ 37 B/s

Resetting the Blockchain State

The parachain collator you connected to the relay chain in this tutorial contains all of the blockchain data for the parachain. There's only one node in this parachain network, so any transactions you submit are only stored on this node. Relay chains don't store any parachain state. The relay chain only stores header information for the parachains that connect to it.

For testing purposes, you might want to periodically purge the blockchain state to start over. However, remember that if you purge the chain state or manually delete the database, you won't be able to recover the data or restore the chain state. If you want to preserve data, ensure you have a copy before you purge the parachain state.

If you want to start over with a clean environment for testing, you should completely remove the chain state for the local relay chain nodes and the parachain.

To reset the blockchain state, follow these steps:

1. In the terminal where the parachain template node is running, press Control-C

2. Purge the parachain collator state by running the following command, using the same base path the collator was started with:

    ./target/release/parachain-template-node purge-chain \
      --base-path /tmp/charlie-parachain/ \
      --chain raw-parachain-chainspec.json

3. In the terminal where either the alice validator node or the bob validator node is running, press Control-C

4. Purge the local relay chain state by running the following command, using the same base path the validator was started with (for example, /tmp/alice or /tmp/bob):

    ./target/release/polkadot purge-chain \
      --base-path /tmp/alice \
      --chain local-raw-spec.json

After purging the chain state, you can restart the local relay chain and parachain collator nodes to begin with a clean environment.

Note

To reset the network state and allow all the nodes to sync after the reset, each of them needs to purge their databases. Otherwise, the nodes won't be able to sync with each other effectively.
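As a convenience, you can purge every node in one pass. This is a sketch under the assumptions used throughout these tutorials (base paths /tmp/charlie-parachain/, /tmp/alice, and /tmp/bob, and the chain spec file names shown above); the -y flag skips the confirmation prompt:

    # Purge the parachain collator and both relay chain validators so the
    # whole local network can restart from genesis.
    ./target/release/parachain-template-node purge-chain \
      --base-path /tmp/charlie-parachain/ \
      --chain raw-parachain-chainspec.json -y

    for who in alice bob; do
      ./target/release/polkadot purge-chain \
        --base-path /tmp/$who \
        --chain local-raw-spec.json -y
    done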

Now that you have successfully connected a parachain to a relay chain, you can explore more advanced features and functionalities of parachains.
diff --git a/tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-relay-chain/index.html b/tutorials/polkadot-sdk/parachains/connect-to-relay-chain/prepare-relay-chain/index.html
new file mode 100644
index 00000000..01d0b7b0

Prepare a Relay Chain

Introduction

This tutorial illustrates how to configure and spin up a local relay chain. The local relay chain is needed to set up a local testing environment to which a test parachain node can connect. Setting up a local relay chain is a crucial step in parachain development. It allows developers to test their parachains in a controlled environment, simulating the interaction between a parachain and the relay chain without needing a live network. This local setup facilitates faster development cycles and easier debugging.

The scope of this tutorial includes:

• Installing necessary components for a local relay chain
• Configuring the relay chain settings
• Starting and running the local relay chain
• Verifying the relay chain is operational

Prerequisites

Before diving into this tutorial, it's recommended that you have a basic understanding of how adding trusted nodes works in Polkadot. For further information about this process, refer to the Spin Your Nodes tutorial.

To complete this tutorial, ensure that you have:

• Installed Rust and the Rust toolchain. Refer to the Installation guide for step-by-step instructions on setting up your development environment
• Completed the Launch a Local Solochain tutorial and know how to compile and run a Polkadot SDK-based node

Build a Local Relay Chain

To build a local relay chain, follow these steps:

1. Clone the most recent release branch of the Polkadot SDK repository to prepare a stable working environment:

    git clone --depth 1 --branch polkadot-stable2407-2 \
    https://github.com/paritytech/polkadot-sdk.git

    Note

    The branch polkadot-stable2407-2 is used in this tutorial since it contains the latest stable release of the Polkadot SDK at the time of writing. You can find the latest release of the Polkadot SDK on the Releases tab of the Polkadot SDK GitHub repository.

    Note

    The --depth 1 flag is used to clone only the latest commit of the branch, which speeds up the cloning process.

2. Change the directory to the Polkadot SDK repository:

    cd polkadot-sdk

3. Build the relay chain node:

    cargo build --release

    Note

    Depending on your machine's specifications, the build process may take some time.

4. Verify that the node is built correctly:

    ./target/release/polkadot --version

If the version information is displayed, the node is ready to configure.

Relay Chain Configuration

Every Substrate-based chain requires a chain specification. The relay chain's chain specification provides the same types of configuration settings as the chain specification for other networks. Many of the chain specification file settings are critical for network operations. For example, the chain specification identifies peers participating in the network, keys for validators, bootnode addresses, and other information.

Sample Chain Configuration

For this tutorial, the local relay chain uses a sample chain specification file with two validator relay chain nodes—Alice and Bob—as authorities. Because a relay chain must have at least one more validator node running than the total number of connected parachain collators, you can only use the chain specification from this tutorial for a local relay chain network with a single parachain.

If you wanted to connect two parachains with a single collator each, you would need to run three or more relay chain validator nodes. To set up a local test network for two or more parachains, you must modify the chain specification and hard-code additional validators.

Plain and Raw Chain Specification

The chain specification file is available in two formats: a JSON file in plain text and a JSON file in SCALE-encoded raw format.

You can read and edit the plain text version of the chain specification file. However, the chain specification file must be converted to the SCALE-encoded raw format before you can use it to start a node. For information about converting a chain specification to the raw format, see Customize a Chain Specification.

The sample chain specification is only valid for a single parachain with two validator nodes. If you add other validators, add additional parachains to your relay chain, or want to use custom account keys instead of the predefined accounts, you'll need to create a custom chain specification file.

If you are completing this tutorial at the same time as anyone on the same local network, you must download and modify the plain sample relay chain spec to prevent accidentally peering with their nodes. Find the following line in the plain chain spec and add characters to make the protocolId field unique:

"protocolId": "dot",
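If you'd rather script this change than edit the file by hand, a minimal sketch using jq (assuming jq is installed, the "-myname" suffix is only illustrative, and the plain spec lives at the /tmp/plain-local-chainspec.json path used in the next section):

    # Give the local relay chain spec a unique protocolId to avoid accidental peering.
    jq '.protocolId = "dot-myname"' \
      /tmp/plain-local-chainspec.json > /tmp/unique-plain-chainspec.json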

Start the Relay Chain Node

Before starting block production for a parachain, you need to start a relay chain for it to connect to.

To start the validator nodes, follow these steps:

1. Generate the chain specification file in the plain text format and use it to create the raw chain specification file. Save the raw chain specification file in a local working directory

    1. Generate the plain text chain specification file:

        ./target/release/polkadot build-spec \
          --chain rococo-local-testnet > /tmp/plain-local-chainspec.json

        Note

        The network values are set to the defaults when generating the chain specification file with build-spec. For production networks, you can customize the network values by editing the chain specification file.

    2. Convert the plain text chain specification file to the raw format:

        ./target/release/polkadot build-spec \
          --chain /tmp/plain-local-chainspec.json \
          --raw > /tmp/raw-local-chainspec.json

2. Start the first validator using the alice account by running the following command:

    ./target/release/polkadot \
      --alice \
      --validator \
      --base-path /tmp/alice \
      --chain /tmp/raw-local-chainspec.json \
      --port 30333 \
      --rpc-port 9944 \
      --insecure-validator-i-know-what-i-do \
      --force-authoring

    This command uses /tmp/raw-local-chainspec.json as the location of the sample chain specification file. Ensure the --chain command-line option specifies the path to your generated raw chain specification. This command also uses the default values for the peer-to-peer port (--port) and the RPC port (--rpc-port). The values are explicitly included here as a reminder to always check these settings. After the node starts, no other nodes on the same local machine can use these ports.

3. Review log messages as the node starts and take note of the Local node identity value. This value is the node's peer ID, which you need to connect the parachain to the relay chain:

    2024-09-09 13:49:58 Parity Polkadot
    2024-09-09 13:49:58 ✌️ version 1.15.2-d6f482d5593
    2024-09-09 13:49:58 ❤️ by Parity Technologies <admin@parity.io>, 2017-2024
    2024-09-09 13:49:58 📋 Chain specification: Rococo Local Testnet
    2024-09-09 13:49:58 🏷 Node name: Alice
    2024-09-09 13:49:58 👤 Role: AUTHORITY
    2024-09-09 13:49:58 💾 Database: RocksDb at /tmp/relay/alice/chains/rococo_local_testnet/db/full
    2024-09-09 13:49:59 🏷 Local node identity is: 12D3KooWG393uX82rR3QgDkZpb7U8StzuRx9BQUXCvWsP1ctgygp
    2024-09-09 13:49:59 Running libp2p network backend
    ...

    Note

    You need to specify this identifier to enable other nodes to connect. In this case, the Local node identity is 12D3KooWG393uX82rR3QgDkZpb7U8StzuRx9BQUXCvWsP1ctgygp.

4. Open a new terminal and start the second validator using the bob account. The command is similar to the command used to start the first node, with a few crucial differences:

    ./target/release/polkadot \
      --bob \
      --validator \
      --base-path /tmp/bob \
      --chain /tmp/raw-local-chainspec.json \
      --port 30334 \
      --rpc-port 9945

    Notice that this command uses a different base path (/tmp/bob), validator key (--bob), and ports (30334 and 9945).

    Because both validators are running on a single local computer, it isn't necessary to specify the --bootnodes command-line option with the first node's IP address and peer identifier. The --bootnodes option is required to connect nodes outside the local network or nodes not identified in the chain specification file.

    If you don't see the relay chain producing blocks, try disabling your firewall or adding the --bootnodes command-line option with the address of Alice's node. Adding the option looks like this (with the node identity of Alice's node):

    --bootnodes \
      /ip4/127.0.0.1/tcp/30333/p2p/12D3KooWG393uX82rR3QgDkZpb7U8StzuRx9BQUXCvWsP1ctgygp

5. Verify that the relay chain nodes are running by checking the logs for each node. The logs should show that the nodes are connected and producing blocks. For example, Bob's logs will be displayed as follows:

    ...
    2024-09-10 13:29:38 🏆 Imported #55 (0xad6a…567c → 0xecae…ad12)
    2024-09-10 13:29:38 💤 Idle (1 peers), best: #55 (0xecae…ad12), finalized #0 (0x1cac…618d), ⬇ 2.0kiB/s ⬆ 1.6kiB/s
    ...
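With both validators running, you can also verify the chain programmatically instead of reading logs. This is a minimal sketch rather than part of the original tutorial; it assumes curl is installed and uses Alice's RPC port 9944 from the command above:

    # Ask the relay chain for the hash of its latest finalized block.
    curl -s -H "Content-Type: application/json" \
      -d '{"jsonrpc":"2.0","id":1,"method":"chain_getFinalizedHead","params":[]}' \
      http://127.0.0.1:9944

A JSON response containing a block hash indicates the node is answering RPC requests and finalizing blocks.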

Once the relay chain nodes are running, you can proceed to the next tutorial to set up a test parachain node and connect it to the relay chain.

diff --git a/tutorials/polkadot-sdk/parachains/index.html b/tutorials/polkadot-sdk/parachains/index.html
new file mode 100644
index 00000000..9330dc49

Tutorials for Building Parachains with the Polkadot SDK

The Polkadot SDK enables you to build custom blockchains that can operate either independently or as part of the Polkadot network. These tutorials guide you through two main development paths: building a standalone chain (solochain) or creating a parachain that connects to Polkadot.

Local Development

Start by learning the fundamentals through the local development tutorials in this section.

Parachain Development

Ready to connect your parachain to Polkadot? The parachain tutorials in this section walk you through building and deploying a parachain.

Key Takeaways

Through these tutorials, you'll gain practical experience with:

• Node operation and network setup
• Chain configuration and consensus
• Runtime development and upgrades
• Parachain deployment and management

Each tutorial builds upon previous concepts while providing flexibility to focus on your specific development goals, whether that's building a standalone chain or a fully integrated parachain.

diff --git a/tutorials/polkadot-sdk/parachains/local-chain/connect-multiple-nodes/index.html b/tutorials/polkadot-sdk/parachains/local-chain/connect-multiple-nodes/index.html
new file mode 100644
index 00000000..1d79207b

Connect Predefined Default Nodes

Introduction

This tutorial introduces you to the process of initiating a private blockchain network with a set of default authorized validators (Alice and Bob). If you prefer, you can also launch a private blockchain with your own validator accounts.

The Polkadot SDK Solochain Template implements an authority consensus model to regulate block production. In this model, the creation of blocks is restricted to a predefined list of authorized accounts, known as "authorities," who operate in a round-robin fashion.

To demonstrate this concept, you'll simulate a network environment using two nodes running on a single computer, each configured with different accounts and keys. Throughout this tutorial, you'll gain practical insight into the functionality of the authority consensus model by observing how these two predefined accounts, serving as authorities, enable the nodes to produce blocks.

By completing this tutorial, you will accomplish the following objectives:

• Start a blockchain node using a predefined account
• Learn the key command-line options used to start a node
• Determine if a node is running and producing blocks
• Connect a second node to a running network
• Verify peer computers produce and finalize blocks

Prerequisites

Before proceeding, ensure you have compiled the Polkadot SDK Solochain Template and can run it locally, as described in the Launch a Local Solochain tutorial.

Start the First Blockchain Node

This tutorial demonstrates the fundamentals of a private network using a predefined chain specification called local and two preconfigured user accounts. You'll simulate a private network by running two nodes on a single local computer, using accounts named Alice and Bob.

Follow these steps to start your first blockchain node:

1. Navigate to the root directory where you compiled the Polkadot SDK Solochain Template

2. Clear any existing chain data by executing the following:

    ./target/release/solochain-template-node purge-chain --base-path /tmp/alice --chain local

    When prompted to confirm, type y and press Enter. This step ensures a clean start for your new network

3. Launch the first blockchain node using the Alice account:

    ./target/release/solochain-template-node \
    --base-path /tmp/alice \
    --chain local \
    --alice \
    --port 30333 \
    --rpc-port 9945 \
    --node-key 0000000000000000000000000000000000000000000000000000000000000001 \
    --validator

Review the Command-Line Options

Before proceeding, examine the key command-line options used to start the node:

• --base-path - specifies the directory for storing all chain-related data
• --chain - defines the chain specification to use
• --alice - adds the predefined keys for the Alice account to the node's keystore. This account is used for block production and finalization
• --port - sets the listening port for peer-to-peer (p2p) traffic. Different ports are necessary when running multiple nodes on the same machine
• --rpc-port - specifies the port for incoming JSON-RPC traffic via WebSocket and HTTP
• --node-key - defines the Ed25519 secret key for libp2p networking
• --validator - enables this node to participate in block production and finalization for the network

For a comprehensive overview of all available command-line options for the node template, you can access the built-in help documentation. Execute the following command in your terminal:

./target/release/solochain-template-node --help

Review the Node Messages

Upon successful node startup, the terminal displays messages detailing network operations and information relevant to the running node. This output includes details about the chain specification, system data, network status, and other crucial parameters. You should see output similar to this:

    2024-09-10 08:35:43 Substrate Node
    2024-09-10 08:35:43 ✌️ version 0.1.0-8599efc46ae
    2024-09-10 08:35:43 ❤️ by Parity Technologies <admin@parity.io>, 2017-2024
    2024-09-10 08:35:43 📋 Chain specification: Local Testnet
    2024-09-10 08:35:43 🏷 Node name: Alice
    2024-09-10 08:35:43 👤 Role: AUTHORITY
    2024-09-10 08:35:43 💾 Database: RocksDb at /tmp/alice/chains/local_testnet/db/full
    2024-09-10 08:35:43 🔨 Initializing Genesis block/state (state: 0x074c…27bd, header-hash: 0x850f…951f)
    2024-09-10 08:35:43 👴 Loading GRANDPA authority set from genesis on what appears to be first startup.
    2024-09-10 08:35:43 Using default protocol ID "sup" because none is configured in the chain specs
    2024-09-10 08:35:43 🏷 Local node identity is: 12D3KooWEyoppNCUx8Yx66oV9fJnriXwCcXwDDUA2kj6vnc6iDEp
    2024-09-10 08:35:43 Running libp2p network backend
    2024-09-10 08:35:43 💻 Operating system: macos
    2024-09-10 08:35:43 💻 CPU architecture: aarch64
    2024-09-10 08:35:43 📦 Highest known block at #0
    2024-09-10 08:35:43 〽️ Prometheus exporter started at 127.0.0.1:9615
    2024-09-10 08:35:43 Running JSON-RPC server: addr=127.0.0.1:9945, allowed origins=["http://localhost:*", "http://127.0.0.1:*", "https://localhost:*", "https://127.0.0.1:*", "https://polkadot.js.org"]
    2024-09-10 08:35:48 💤 Idle (0 peers), best: #0 (0x850f…951f), finalized #0 (0x850f…951f), ⬇ 0 ⬆ 0

Pay particular attention to the following key messages:

• Genesis block initialization:

    2024-09-10 08:35:43 🔨 Initializing Genesis block/state (state: 0x074c…27bd, header-hash: 0x850f…951f)

    This message identifies the initial state or genesis block used by the node. When starting subsequent nodes, ensure these values match.

• Node identity:

    2024-09-10 08:35:43 🏷 Local node identity is: 12D3KooWEyoppNCUx8Yx66oV9fJnriXwCcXwDDUA2kj6vnc6iDEp

    This string uniquely identifies the node. It's determined by the --node-key used to start the node with the Alice account. Use this identifier when connecting additional nodes to the network.

• Network status:

    2024-09-10 08:35:48 💤 Idle (0 peers), best: #0 (0x850f…951f), finalized #0 (0x850f…951f), ⬇ 0 ⬆ 0

    This message indicates that:

    • No other nodes are currently in the network
    • No blocks are being produced
    • Block production will commence once another node joins the network
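Before adding a peer, you can also confirm the node is reachable over RPC. This is a minimal sketch, not part of the original tutorial; it assumes curl is available and uses port 9945 from the --rpc-port option above:

    # Query the node's health; the response reports the current peer count.
    curl -s -H "Content-Type: application/json" \
      -d '{"jsonrpc":"2.0","id":1,"method":"system_health","params":[]}' \
      http://127.0.0.1:9945

At this stage the response should show "peers": 0, matching the Idle (0 peers) log line.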

Add a Second Node to the Network

After successfully running the first node with the Alice account keys, you can expand the network by adding a second node using the Bob account. This process involves connecting to the existing network using the running node as a reference point. The commands are similar to those used for the first node, with some key differences to ensure proper network integration.

To add a node to the running blockchain:

1. Open a new terminal shell on your computer

2. Navigate to the root directory where you compiled the Polkadot SDK Solochain Template

3. Clear any existing chain data for the new node:

    ./target/release/solochain-template-node purge-chain --base-path /tmp/bob --chain local -y

    Note

    The -y flag automatically confirms the operation without prompting.

4. Start the second local blockchain node using the Bob account:

    ./target/release/solochain-template-node \
    --base-path /tmp/bob \
    --chain local \
    --bob \
    --port 30334 \
    --rpc-port 9946 \
    --node-key 0000000000000000000000000000000000000000000000000000000000000002 \
    --validator \
    --bootnodes /ip4/127.0.0.1/tcp/30333/p2p/12D3KooWEyoppNCUx8Yx66oV9fJnriXwCcXwDDUA2kj6vnc6iDEp

    Key differences in this command:

    • Unique paths and ports - to avoid conflicts on the same machine, different values are used for:

        • --base-path - set to /tmp/bob
        • --port - set to 30334
        • --rpc-port - set to 9946

    • Bootnode specification - the --bootnodes option is crucial for network discovery:

        • Format - /ip4/127.0.0.1/tcp/30333/p2p/12D3KooWEyoppNCUx8Yx66oV9fJnriXwCcXwDDUA2kj6vnc6iDEp
        • Components:
            • ip4 - indicates IPv4 format
            • 127.0.0.1 - IP address of the running node (localhost in this case)
            • tcp - specifies TCP for peer-to-peer communication
            • 30333 - port number for peer-to-peer TCP traffic
            • 12D3KooWEyoppNCUx8Yx66oV9fJnriXwCcXwDDUA2kj6vnc6iDEp - unique identifier of the Alice node

Verify Blocks are Produced and Finalized

After starting the second node, both nodes should connect as peers and commence block production.

Follow these steps to verify that blocks are being produced and finalized:

1. Observe the output in the terminal of the first node (Alice):

    ...
    2024-09-10 09:04:57 discovered: 12D3KooWHdiAxVd8uMQR1hGWXccidmfCwLqcMpGwR6QcTP6QRMuD /ip4/192.168.1.4/tcp/30334
    2024-09-10 09:04:58 💤 Idle (0 peers), best: #0 (0x850f…951f), finalized #0 (0x850f…951f), ⬇ 0.3kiB/s ⬆ 0.3kiB/s
    2024-09-10 09:05:00 🙌 Starting consensus session on top of parent 0x850ffab4827cb0297316cbf01fc7c2afb954c5124f366f25ea88bfd19ede951f (#0)
    2024-09-10 09:05:00 🎁 Prepared block for proposing at 1 (2 ms) [hash: 0xe21a305e6647b0b0c6c73ba31a49ae422809611387fadb7785f68d0a1db0b52d; parent_hash: 0x850f…951f; extrinsics (1): [0x0c18…08d8]
    2024-09-10 09:05:00 🔖 Pre-sealed block for proposal at 1. Hash now 0x75bbb026db82a4d6ff88b96f952a29e15dac2b7df24d4cb95510945e2bede82d, previously 0xe21a305e6647b0b0c6c73ba31a49ae422809611387fadb7785f68d0a1db0b52d.
    2024-09-10 09:05:00 🏆 Imported #1 (0x850f…951f → 0x75bb…e82d)
    2024-09-10 09:05:03 💤 Idle (1 peers), best: #1 (0x75bb…e82d), finalized #0 (0x850f…951f), ⬇ 0.7kiB/s ⬆ 0.8kiB/s
    2024-09-10 09:05:06 🏆 Imported #2 (0x75bb…e82d → 0x774d…a176)
    2024-09-10 09:05:08 💤 Idle (1 peers), best: #2 (0x774d…a176), finalized #0 (0x850f…951f), ⬇ 0.6kiB/s ⬆ 0.5kiB/s
    2024-09-10 09:05:12 🙌 Starting consensus session on top of parent 0x774dec6bff7a27c38e21106a5a7428ae5d50b991f39cda7c0aa3c0c9322da176 (#2)
    2024-09-10 09:05:12 🎁 Prepared block for proposing at 3 (0 ms) [hash: 0x10bb4589a7a13bac657219a9ff06dcef8d55e46a4275aa287a966b5648a6d486; parent_hash: 0x774d…a176; extrinsics (1): [0xdcd4…b5ec]
    2024-09-10 09:05:12 🔖 Pre-sealed block for proposal at 3. Hash now 0x01e080f4b8421c95d0033aac7310b36972fdeef7c6025f8a153c436c1bb214ee, previously 0x10bb4589a7a13bac657219a9ff06dcef8d55e46a4275aa287a966b5648a6d486.
    2024-09-10 09:05:12 🏆 Imported #3 (0x774d…a176 → 0x01e0…14ee)
    2024-09-10 09:05:13 💤 Idle (1 peers), best: #3 (0x01e0…14ee), finalized #0 (0x850f…951f), ⬇ 0.6kiB/s ⬆ 0.6kiB/s
    2024-09-10 09:05:18 🏆 Imported #4 (0x01e0…14ee → 0xe176…0430)
    2024-09-10 09:05:18 💤 Idle (1 peers), best: #4 (0xe176…0430), finalized #1 (0x75bb…e82d), ⬇ 0.6kiB/s ⬆ 0.6kiB/s

    Key information in this output:

    • Second node discovery - discovered: 12D3KooWHdiAxVd8uMQR1hGWXccidmfCwLqcMpGwR6QcTP6QRMuD
    • Peer count - 1 peers
    • Block production - best: #4 (0xe176…0430)
    • Block finalization - finalized #1 (0x75bb…e82d)

2. Check the terminal of the second node (Bob) for similar output

3. Shut down one node using Control-C in its terminal. Observe the remaining node's output:

    2024-09-10 09:10:03 💤 Idle (1 peers), best: #51 (0x0dd6…e763), finalized #49 (0xb70a…1fc0), ⬇ 0.7kiB/s ⬆ 0.6kiB/s
    2024-09-10 09:10:08 💤 Idle (0 peers), best: #52 (0x2c40…a50e), finalized #49 (0xb70a…1fc0), ⬇ 0.3kiB/s ⬆ 0.3kiB/s

    Note that the peer count drops to zero, and block production stops.

4. Shut down the second node using Control-C in its terminal

5. Clean up chain state from the simulated network by using the purge-chain subcommand:

    • For Alice's node:

        ./target/release/solochain-template-node purge-chain \
        --base-path /tmp/alice \
        --chain local \
        -y

    • For Bob's node:

        ./target/release/solochain-template-node purge-chain \
        --base-path /tmp/bob \
        --chain local \
        -y
diff --git a/tutorials/polkadot-sdk/parachains/local-chain/index.html b/tutorials/polkadot-sdk/parachains/local-chain/index.html
new file mode 100644
index 00000000..1b221897

Build a Solochain

This section provides practical, step-by-step tutorials for building and managing your own local solochain using the Polkadot SDK. A solochain is a standalone blockchain, typically used for testing, experimentation, or creating a custom blockchain that doesn't require integration with the Polkadot relay chain. You can customize your solochain to fit your exact needs. The following tutorials guide you through a complete workflow, from launching a single node to managing a network of validators.

Key Takeaways

By following along with these tutorials, you'll gain comprehensive experience with launching and managing blockchain nodes, including:

• Node compilation and deployment
• Network configuration and peer connectivity
• Validator authorization and key management
• Runtime upgrades and network maintenance
diff --git a/tutorials/polkadot-sdk/parachains/local-chain/launch-a-local-solochain/index.html b/tutorials/polkadot-sdk/parachains/local-chain/launch-a-local-solochain/index.html
new file mode 100644
index 00000000..d648189f

Launch a Local Solochain

Introduction

Polkadot SDK offers a versatile and extensible blockchain development framework, enabling you to create custom blockchains tailored to your specific application or business requirements.

This tutorial guides you through compiling and launching a standalone blockchain node using the Polkadot SDK Solochain Template. You'll create a fully functional chain that operates independently, without connections to a relay chain or parachain.

The node template provides a pre-configured, functional single-node blockchain you can run in your local development environment. It includes several key components, such as user accounts and account balances.

These predefined elements allow you to experiment with common blockchain operations without requiring initial template modifications. In this tutorial, you will:

• Build and start a local blockchain node using the node template
• Explore how to use a front-end interface to:
    • View information about blockchain activity
    • Submit a transaction

By the end of this tutorial, you'll have a working local solochain and understand how to interact with it, setting the foundation for further customization and development.

Prerequisites

To get started with the node template, you'll need to have the following set up on your development machine first:

• Rust installation - the node template is written in Rust, so you'll need to have it installed and configured on your system. Refer to the Installation guide for step-by-step instructions on setting up your development environment

Compile a Node

The Polkadot SDK Solochain Template provides a ready-to-use development environment for building with the Polkadot SDK. Follow these steps to compile the node:

1. Clone the node template repository:

    git clone -b v0.0.2 https://github.com/paritytech/polkadot-sdk-solochain-template

    Note

    This tutorial uses version v0.0.2 of the Polkadot SDK Solochain Template. Make sure you're using the correct version to match these instructions.

2. Navigate to the root of the node template directory:

    cd polkadot-sdk-solochain-template

3. Compile the node template:

    cargo build --release

    Note

    Initial compilation may take several minutes, depending on your machine specifications. Always use the --release flag to build optimized, production-ready artifacts.

4. Upon successful compilation, you should see output similar to:

    cargo build --release
       Compiling solochain-template-node
        Finished release profile [optimized] target(s) in 27.12s
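As a quick check that the build produced a working binary, you can ask it for its version information before starting it (this mirrors the verification step used for the relay chain node elsewhere in these tutorials):

    ./target/release/solochain-template-node --version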

Start the Local Node

After successfully compiling your node, you can run it and produce blocks. This process will start your local blockchain and allow you to interact with it. Follow these steps to launch your node in development mode:

1. In the terminal where you compiled your node, start it in development mode:

    ./target/release/solochain-template-node --dev

    The --dev option does the following:

    • Specifies that the node runs using the predefined development chain specification
    • Deletes all active data (keys, blockchain database, networking information) when stopped
    • Ensures a clean working state each time you restart the node

2. Verify that your node is running by reviewing the terminal output. You should see something similar to:

    2024-09-09 08:32:42 Substrate Node
    2024-09-09 08:32:42 ✌️ version 0.1.0-8599efc46ae
    2024-09-09 08:32:42 ❤️ by Parity Technologies <admin@parity.io>, 2017-2024
    2024-09-09 08:32:42 📋 Chain specification: Development
    2024-09-09 08:32:42 🏷 Node name: light-boundary-7850
    2024-09-09 08:32:42 👤 Role: AUTHORITY
    2024-09-09 08:32:42 💾 Database: RocksDb at /var/folders/x0/xl_kjddj3ql3bx7752yr09hc0000gn/T/substrate0QH9va/chains/dev/db/full
    2024-09-09 08:32:42 🔨 Initializing Genesis block/state (state: 0xc2a0…16ba, header-hash: 0x0eef…935d)
    2024-09-09 08:32:42 👴 Loading GRANDPA authority set from genesis on what appears to be first startup.
    2024-09-09 08:32:42 Using default protocol ID "sup" because none is configured in the chain specs
    2024-09-09 08:32:42 🏷 Local node identity is: 12D3KooWPhdUzf66di1SuuRFgjkFs6X8jm3Uj2ss5ri31WuVAbgt
    2024-09-09 08:32:42 Running libp2p network backend
    2024-09-09 08:32:42 💻 Operating system: macos
    2024-09-09 08:32:42 💻 CPU architecture: aarch64
    2024-09-09 08:32:42 📦 Highest known block at #0
    2024-09-09 08:32:42 〽️ Prometheus exporter started at 127.0.0.1:9615
    2024-09-09 08:32:42 Running JSON-RPC server: addr=127.0.0.1:9944, allowed origins=["*"]
    2024-09-09 08:32:47 💤 Idle (0 peers), best: #0 (0x0eef…935d), finalized #0 (0x0eef…935d), ⬇ 0 ⬆ 0
    2024-09-09 08:32:48 🙌 Starting consensus session on top of parent 0x0eef4a08ef90cc04d01864514dc5cb2bd822314309b770b49b0177f920ed935d (#0)
    2024-09-09 08:32:48 🎁 Prepared block for proposing at 1 (1 ms) [hash: 0xc14630b76907550bef9037dcbfafa2b25c8dc763495f30d9e36ad4b93b673b36; parent_hash: 0x0eef…935d; extrinsics (1): [0xbcd8…5132]
    2024-09-09 08:32:48 🔖 Pre-sealed block for proposal at 1. Hash now 0xcb3d2f28bc73807dac5cf19fcfb2ac6d7e922756da9d41ca0c9dadbd0e45265b, previously 0xc14630b76907550bef9037dcbfafa2b25c8dc763495f30d9e36ad4b93b673b36.
    2024-09-09 08:32:48 🏆 Imported #1 (0x0eef…935d → 0xcb3d…265b)
    ...

3. Confirm that your blockchain is producing new blocks by checking if the number after finalized is increasing:

    ...
    2024-09-09 08:32:47 💤 Idle (0 peers), best: #0 (0x0eef…935d), finalized #0 (0x0eef…935d), ⬇ 0 ⬆ 0
    ...
    2024-09-09 08:32:52 💤 Idle (0 peers), best: #1 (0xcb3d…265b), finalized #0 (0x0eef…935d), ⬇ 0 ⬆ 0
    ...
    2024-09-09 08:32:57 💤 Idle (0 peers), best: #2 (0x16d7…083f), finalized #0 (0x0eef…935d), ⬇ 0 ⬆ 0
    ...
    2024-09-09 08:33:02 💤 Idle (0 peers), best: #3 (0xe6a4…2cc4), finalized #1 (0xcb3d…265b), ⬇ 0 ⬆ 0
    ...

    Note

    The details of the log output will be explored in a later tutorial. For now, knowing that your node is running and producing blocks is sufficient.
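If you prefer to check block production programmatically rather than reading logs, you can query the node's JSON-RPC endpoint. This is a minimal sketch, not part of the original tutorial; it assumes curl is installed and uses the default port 9944 shown in the log output above:

    # Fetch the latest block header; the "number" field is hex-encoded and
    # should increase between calls while the node is producing blocks.
    curl -s -H "Content-Type: application/json" \
      -d '{"jsonrpc":"2.0","id":1,"method":"chain_getHeader","params":[]}' \
      http://127.0.0.1:9944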

Interact with the Node

When running the template node, it's accessible by default at:

ws://localhost:9944

To interact with your node using the Polkadot.js Apps interface, follow these steps:

1. Open Polkadot.js Apps in your web browser and click the network icon in the top left corner

2. Connect to your local node:

    1. Scroll to the bottom and select Development
    2. Choose Custom
    3. Enter ws://localhost:9944 in the input field
    4. Click the Switch button

3. Verify connection:

    • Once connected, you should see solochain-template-runtime in the top left corner
    • The interface will display information about your local blockchain

You are now connected to your local node and can interact with it through the Polkadot.js Apps interface. This tool enables you to explore blocks, execute transactions, and interact with your blockchain's features. For in-depth guidance on using the interface effectively, refer to the Polkadot.js Guides available on the Polkadot Wiki.

Stop the Node

When you're done exploring your local node, you can stop it to remove any state changes you've made. Since you started the node with the --dev option, stopping the node will purge all persistent block data, allowing you to start fresh the next time.

To stop the local node:

1. Return to the terminal window where the node output is displayed
2. Press Control-C to stop the running process
3. Verify that your terminal returns to the prompt in the polkadot-sdk-solochain-template directory
diff --git a/tutorials/polkadot-sdk/parachains/local-chain/spin-your-nodes/index.html b/tutorials/polkadot-sdk/parachains/local-chain/spin-your-nodes/index.html
new file mode 100644
index 00000000..53d3fc07

Spin Your Own Nodes

Introduction

This tutorial guides you through launching a private blockchain network with a small, trusted set of validators. In decentralized networks, consensus ensures that nodes agree on the state of the data at any given time. The Polkadot SDK Solochain Template uses Aura (Authority Round), a proof of authority consensus mechanism where a fixed set of trusted validators produces blocks in a round-robin fashion. This approach offers an easy way to launch a standalone blockchain with a predefined list of validators.

You'll learn how to generate keys, create a custom chain specification, and start a two-node blockchain network using the Aura consensus mechanism.

Prerequisites

Before starting this tutorial, ensure you have compiled the Polkadot SDK Solochain Template and completed the Connect Multiple Nodes tutorial, so you are comfortable starting and connecting peer nodes.

Generate an Account and Keys

Unlike in the Connect Multiple Nodes tutorial, where you used predefined accounts and keys to start peer nodes, this tutorial requires you to generate unique secret keys for your validator nodes. It's crucial to understand that in a real blockchain network, each participant is responsible for generating and managing their own unique set of keys.

This process of generating your own keys serves several important purposes:

• It enhances the security of your network by ensuring that each node has its own unique cryptographic identity
• It simulates a more realistic blockchain environment where participants don't share key information
• It helps you understand the process of key generation, which is a fundamental skill in blockchain operations

There are a couple of Polkadot Wiki articles that may help you better understand the different signing algorithms used in this tutorial. See the Keypairs and Signing section to learn about the sr25519 and ed25519 signing algorithms. Refer to the Keys section to learn more about the different types of keys used in the ecosystem.

Key Generation Options

There are several ways you can generate keys. The available methods are:

• solochain-template-node key subcommand - the most straightforward method for developers working directly with the node is to use the integrated key generation feature. Using the key subcommand, you can generate keys directly from your node's command-line interface. This method ensures compatibility with your chain and is convenient for quick setup and testing
• subkey - a powerful standalone utility specifically designed for Polkadot SDK-based chains. It offers advanced options for key generation, including support for different key types such as ed25519 and sr25519, and allows fine-grained control over the key generation process (see the sketch after this list)
• Third-party key generation utilities - various tools developed by the community
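As a sketch of the standalone route (assuming you have subkey installed; it is not required for the rest of this tutorial, and the commands mirror the node's key subcommand used below):

    # Generate an Sr25519 keypair for Aura with the standalone subkey tool.
    subkey generate --scheme sr25519

    # Re-inspect the same secret phrase as Ed25519 to obtain the GRANDPA key.
    subkey inspect --scheme ed25519 "INSERT_SECRET_PHRASE"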

Generate Local Keys with the Node Template

Best practices for key generation:

• Whenever possible, use an air-gapped computer (one that has never been connected to the internet) when generating keys for a production blockchain
• If an air-gapped computer is not an option, disconnect from the internet before generating keys for any public or private blockchain not under your control

For this tutorial, however, you'll use the solochain-template-node command-line options to generate random keys locally while remaining connected to the internet. This method is suitable for learning and testing purposes.

Follow these steps to generate your keys:
1. Navigate to the root directory where you compiled the node template

2. Generate a random secret phrase and Sr25519 keys. Enter a password when prompted:

    ./target/release/solochain-template-node key generate \
    --scheme Sr25519 \
    --password-interactive

    The command will output information about the generated keys similar to the following:

    Key password:
    Secret phrase: digital width rely long insect blind usual name oyster easy steak spend
    Network ID: substrate
    Secret seed: 0xc52405d0b45dd856cbf1371f3b33fbde20cb76bf6ee440d12ea15f7ed17cca0a
    Public key (hex): 0xea23fa399c6bd91af3d7ea2d0ad46c48aff818b285342d9aaf15b3172270e914
    Account ID: 0xea23fa399c6bd91af3d7ea2d0ad46c48aff818b285342d9aaf15b3172270e914
    Public key (SS58): 5HMhkSHpD4XcibjbU9ZiGemLpnsTUzLsG5JhQJQEcxp3KJaW
    SS58 Address: 5HMhkSHpD4XcibjbU9ZiGemLpnsTUzLsG5JhQJQEcxp3KJaW

    Protect Your Keys

    Never share your secret phrase or private keys. If exposed, someone could:

    • Impersonate you on the network
    • Steal all funds associated with the account
    • Perform transactions on your behalf
    • Potentially compromise your entire blockchain identity

    Note the Sr25519 public key for the account (SS58 format). This key will be used for producing blocks with Aura. In this example, the Sr25519 public key for the account is 5HMhkSHpD4XcibjbU9ZiGemLpnsTUzLsG5JhQJQEcxp3KJaW.

3. Use the generated secret phrase to derive keys using the Ed25519 signature scheme:

    ./target/release/solochain-template-node key inspect \
    --scheme Ed25519 \
    --password-interactive \
    "INSERT_SECRET_PHRASE"

    When prompted for a Key password, enter the same password you used in the previous step

    Note

    Replace INSERT_SECRET_PHRASE with the secret phrase generated in step 2.

    The command will output information about the generated keys similar to the following:

    Key password:
    Secret phrase: digital width rely long insect blind usual name oyster easy steak spend
    Network ID: substrate
    Secret seed: 0xc52405d0b45dd856cbf1371f3b33fbde20cb76bf6ee440d12ea15f7ed17cca0a
    Public key (hex): 0xc9c2cd111f98f2bf78bab6787449fc007dd7f2a5d02f099919f7fb50ade97dd6
    Account ID: 0xc9c2cd111f98f2bf78bab6787449fc007dd7f2a5d02f099919f7fb50ade97dd6
    Public key (SS58): 5GdFMFbXy24uz8mFZroFUgdBkY2pq6igBNGAq9tsBfEZRSzP
    SS58 Address: 5GdFMFbXy24uz8mFZroFUgdBkY2pq6igBNGAq9tsBfEZRSzP

    The Ed25519 key you've generated is crucial for block finalization using the grandpa consensus algorithm. The Ed25519 public key for the account is 5GdFMFbXy24uz8mFZroFUgdBkY2pq6igBNGAq9tsBfEZRSzP.

Generate a Second Set of Keys

In this tutorial, the private network will consist of two nodes, meaning you'll need two distinct sets of keys. You have several options for generating this second set of keys:

• Use the keys from one of the predefined accounts
• Follow the steps from the previous section, but use a different identity on your local machine to create a new key pair
• Derive a child key pair to simulate a second identity on your local machine (see the sketch after this list)
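A minimal sketch of the derivation option (the //node02 path segment is only illustrative; any hard derivation path appended to your secret phrase works):

    # Derive a second Sr25519 identity from the same secret phrase.
    ./target/release/solochain-template-node key inspect \
      --scheme Sr25519 "INSERT_SECRET_PHRASE//node02"

    # Derive the matching Ed25519 identity for GRANDPA.
    ./target/release/solochain-template-node key inspect \
      --scheme Ed25519 "INSERT_SECRET_PHRASE//node02"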

For this tutorial, the second set of keys will be:

• Sr25519 (for Aura) - 5Df9bvnbqKNR8S1W2Uj5XSpJCKUomyymwCGf6WHKyoo3GDev
• Ed25519 (for Grandpa) - 5DJRQQWEaJart5yQnA6gnKLYKHLdpX6V4vHgzAYfNPT2NNuW

Create a Custom Chain Specification

After generating key pairs for your blockchain, the next step is creating a custom chain specification. You will share this specification with trusted validators participating in your network.

To enable others to participate in your blockchain, ensure that each participant generates their own key pair. Once you collect the keys from all network participants, you can create a custom chain specification to replace the default local one.

In this tutorial, you'll modify the local chain specification to create a custom version for a two-node network. The same process can be used to add more nodes if you have the necessary keys.

Steps to Create a Custom Chain Specification

  1. Open a terminal and navigate to the root directory of your compiled node template

  2. Export the local chain specification:

         ./target/release/solochain-template-node build-spec \
         --disable-default-bootnode \
         --chain local > customSpec.json

  3. Preview the customSpec.json file:

       • Preview the first fields:

             head customSpec.json

             {
                 "name": "Local Testnet",
                 "id": "local_testnet",
                 "chainType": "Local",
                 "bootNodes": [],
                 "telemetryEndpoints": null,
                 "protocolId": null,
                 "properties": null,
                 "codeSubstitutes": { ... },
                 "genesis": { ... }
             }

       • Preview the last fields:

             tail -n 78 customSpec.json

             {
                 "patch": {
                     "aura": {
                         "authorities": [
                             "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY",
                             "5FHneW46xGXgs5mUiveU4sbTyGBzmstUspZC92UhjJM694ty"
                         ]
                     },
                     "balances": {
                         "balances": [
                             ["5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY", 1152921504606846976],
                             ["5FHneW46xGXgs5mUiveU4sbTyGBzmstUspZC92UhjJM694ty", 1152921504606846976],
                             ["5FLSigC9HGRKVhB9FiEo4Y3koPsNmBmLJbpXg2mp1hXcS59Y", 1152921504606846976],
                             ["5DAAnrj7VHTznn2AWBemMuyBwZWs6FNFjdyVXUeYum3PTXFy", 1152921504606846976],
                             ["5HGjWAeFDfFCWPsjFQdVV2Msvz2XtMktvgocEZcCj68kUMaw", 1152921504606846976],
                             ["5CiPPseXPECbkjWCa6MnjNokrgYjMqmKndv2rSnekmSK2DjL", 1152921504606846976],
                             ["5GNJqTPyNqANBkUVMN1LPPrxXnFouWXoe2wNSmmEoLctxiZY", 1152921504606846976],
                             ["5HpG9w8EBLe5XCrbczpwq5TSXvedjrBGCwqxK1iQ7qUsSWFc", 1152921504606846976],
                             ["5Ck5SLSHYac6WFt5UZRSsdJjwmpSZq85fd5TRNAdZQVzEAPT", 1152921504606846976],
                             ["5HKPmK9GYtE1PSLsS1qiYU9xQ9Si1NcEhdeCq9sw5bqu4ns8", 1152921504606846976],
                             ["5FCfAonRZgTFrTd9HREEyeJjDpT397KMzizE6T3DvebLFE7n", 1152921504606846976],
                             ["5CRmqmsiNFExV6VbdmPJViVxrWmkaXXvBrSX8oqBT8R9vmWk", 1152921504606846976]
                         ]
                     },
                     "grandpa": {
                         "authorities": [
                             ["5FA9nQDVg267DEd8m1ZypXLBnvN7SFxYwV7ndqSYGiN9TTpu", 1],
                             ["5GoNkf6WdbxCFnPdAnYYQyCjAKPJgLNxXwPjwTh6DGg6gN3E", 1]
                         ]
                     },
                     "sudo": {
                         "key": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY"
                     }
                 }
             }

         This command displays fields that include configuration details for pallets, such as sudo and balances, as well as the validator settings for the Aura and Grandpa keys.

  4. Edit customSpec.json:

       1. Update the name field:

              "name": "My Custom Testnet",

       2. Add the Sr25519 address for each validator to the authorities array of the aura field to specify the nodes with authority to create blocks:

              "aura": {
                "authorities": [
                  "5HMhkSHpD4XcibjbU9ZiGemLpnsTUzLsG5JhQJQEcxp3KJaW",
                  "5Df9bvnbqKNR8S1W2Uj5XSpJCKUomyymwCGf6WHKyoo3GDev"
                ]
              },

       3. Add the Ed25519 address for each validator to the authorities array of the grandpa field to specify the nodes with the authority to finalize blocks. Include a voting weight (typically 1) for each validator to define its voting power:

              "grandpa": {
                "authorities": [
                  ["5GdFMFbXy24uz8mFZroFUgdBkY2pq6igBNGAq9tsBfEZRSzP", 1],
                  ["5DJRQQWEaJart5yQnA6gnKLYKHLdpX6V4vHgzAYfNPT2NNuW", 1]
                ]
              },

  5. Save and close customSpec.json
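Because the file is edited by hand, it's worth confirming it is still valid JSON before moving on. A minimal check, assuming jq is installed (the .name path matches the preview above):

    jq '.name' customSpec.json
    # expected output: "My Custom Testnet"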

Convert Chain Specification to Raw Format

After creating your custom chain specification, the next crucial step is converting it to a raw format. This process is essential because the raw format includes encoded storage keys that nodes use to reference data in their local storage. By distributing a raw chain specification, you ensure that each node in the network stores data using the same storage keys, which is vital for maintaining data integrity and facilitating network synchronization.

To convert your chain specification to the raw format, follow these steps:

  1. Navigate to the root directory where you compiled the node template

  2. Run the following command to convert the customSpec.json chain specification to the raw format and save it as customSpecRaw.json:

         ./target/release/solochain-template-node build-spec \
         --chain=customSpec.json \
         --raw \
         --disable-default-bootnode > customSpecRaw.json
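To spot-check the conversion, the raw file should now contain hex-encoded storage keys under its genesis section instead of the readable patch fields shown earlier. A quick look at the start of the file:

    head -c 300 customSpecRaw.json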

Add Keys to the Keystore

To enable block production and finalization, you need to add two types of keys to the keystore for each node in the network:

  • aura authority keys for block production
  • grandpa authority keys for block finalization

Follow these steps for each node in your network:

  1. Open a terminal and navigate to the root directory where you compiled the node template

  2. Insert the aura secret key:

         ./target/release/solochain-template-node key insert \
         --base-path /tmp/node01 \
         --chain customSpecRaw.json \
         --scheme Sr25519 \
         --suri "INSERT_SECRET_PHRASE" \
         --password-interactive \
         --key-type aura

     Note

     Replace INSERT_SECRET_PHRASE with the secret phrase or seed you generated earlier. When prompted, enter the password you used to generate the keys.

  3. Insert the grandpa secret key:

         ./target/release/solochain-template-node key insert \
         --base-path /tmp/node01 \
         --chain customSpecRaw.json \
         --scheme Ed25519 \
         --suri "INSERT_SECRET_PHRASE" \
         --password-interactive \
         --key-type gran

     Note

     Use the same secret phrase or seed and password as in step 2.

  4. Verify that your keys are in the keystore by running the following command:

         ls /tmp/node01/chains/local_testnet/keystore

     You should see output similar to:

         61757261ea23fa399c6bd91af3d7ea2d0ad46c48aff818b285342d9aaf15b3172270e914
         6772616ec9c2cd111f98f2bf78bab6787449fc007dd7f2a5d02f099919f7fb50ade97dd6

     Each file name is the hex-encoded key type followed by the public key: 61757261 is "aura" and 6772616e is "gran", so the second entry ends with the Ed25519 public key generated earlier.
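Repeating the same two key insert commands for the second node is just a matter of switching the base path and the secret phrase. A sketch, assuming the second node's keys live under /tmp/node02 and INSERT_NODE02_SECRET_PHRASE stands in for the second mnemonic:

    ./target/release/solochain-template-node key insert \
    --base-path /tmp/node02 \
    --chain customSpecRaw.json \
    --scheme Sr25519 \
    --suri "INSERT_NODE02_SECRET_PHRASE" \
    --password-interactive \
    --key-type aura

    ./target/release/solochain-template-node key insert \
    --base-path /tmp/node02 \
    --chain customSpecRaw.json \
    --scheme Ed25519 \
    --suri "INSERT_NODE02_SECRET_PHRASE" \
    --password-interactive \
    --key-type gran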

Start the First Node

Before starting the first node, it's crucial to generate a network key. This key ensures that the node's identity remains consistent, allowing other nodes to connect to it reliably as a bootnode for synchronization.

To generate a network key, run the following command:

    ./target/release/solochain-template-node key \
    generate-node-key --base-path /tmp/node01

Note

This command generates a network key and stores it in the same base path used for storing the aura and grandpa keys.
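If you want to confirm where the key landed, it is written under the chain's network directory inside the base path. A quick check, assuming the local_testnet chain ID from the spec above:

    ls /tmp/node01/chains/local_testnet/network
    # secret_ed25519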

After generating the network key, start the first node using your custom chain specification with the following command:

    ./target/release/solochain-template-node \
    --base-path /tmp/node01 \
    --chain ./customSpecRaw.json \
    --port 30333 \
    --rpc-port 9945 \
    --validator \
    --name MyNode01 \
    --password-interactive

Upon execution, you should see output similar to the following:

    2024-09-12 11:18:46 Substrate Node
    2024-09-12 11:18:46 ✌️ version 0.1.0-8599efc46ae
    2024-09-12 11:18:46 ❤️ by Parity Technologies <admin@parity.io>, 2017-2024
    2024-09-12 11:18:46 📋 Chain specification: My Custom Testnet
    2024-09-12 11:18:46 🏷 Node name: MyNode01
    2024-09-12 11:18:46 👤 Role: AUTHORITY
    2024-09-12 11:18:46 💾 Database: RocksDb at /tmp/node01/chains/local_testnet/db/full
    2024-09-12 11:18:46 Using default protocol ID "sup" because none is configured in the chain specs
    2024-09-12 11:18:46 🏷 Local node identity is: 12D3KooWSbaPxmb2tWLgkQVoJdxzpBPTd9dQPmKiJfsvtP753Rg1
    2024-09-12 11:18:46 Running libp2p network backend
    2024-09-12 11:18:46 💻 Operating system: macos
    2024-09-12 11:18:46 💻 CPU architecture: aarch64
    2024-09-12 11:18:46 📦 Highest known block at #0
    2024-09-12 11:18:46 〽️ Prometheus exporter started at 127.0.0.1:9615
    2024-09-12 11:18:46 Running JSON-RPC server: addr=127.0.0.1:9945, allowed origins=["http://localhost:*", "http://127.0.0.1:*", "https://localhost:*", "https://127.0.0.1:*", "https://polkadot.js.org"]
    2024-09-12 11:18:51 💤 Idle (0 peers), best: #0 (0x850f…951f), finalized #0 (0x850f…951f), ⬇ 0 ⬆ 0

After starting the first node, you'll notice:

  • The node is running with the custom chain specification ("My Custom Testnet")
  • The local node identity is displayed (12D3KooWSbaPxmb2tWLgkQVoJdxzpBPTd9dQPmKiJfsvtP753Rg1 in this example). This identity is crucial for other nodes to connect to this one
  • The node is currently idle with 0 peers, as it's the only node in the network at this point
  • No blocks are being produced. Block production will commence once another node joins the network

Add More Nodes

Block finalization requires at least two-thirds of the validators to participate. In this example network, configured with two validators, block finalization can only start after the second node has been added.

Before starting additional nodes, ensure you've properly configured their keys as described in the Add Keys to the Keystore section. For this node, the keys should be stored under the /tmp/node02 base path.

To add a second validator to the private network, run the following command:

    ./target/release/solochain-template-node \
    --base-path /tmp/node02 \
    --chain ./customSpecRaw.json \
    --port 30334 \
    --rpc-port 9946 \
    --validator \
    --name MyNode02 \
    --bootnodes /ip4/127.0.0.1/tcp/30333/p2p/12D3KooWSbaPxmb2tWLgkQVoJdxzpBPTd9dQPmKiJfsvtP753Rg1 \
    --unsafe-force-node-key-generation \
    --password-interactive

Key points about this command:

  • It uses a different base-path and name to identify this as the second validator
  • The --chain option specifies the same chain specification file used for the first node
  • The --bootnodes option is crucial. It must contain the local node identity printed by the first node at startup
  • The --unsafe-force-node-key-generation parameter forces the generation of a new node key if one doesn't exist. For non-bootnode validators such as this second node, a changing key is less critical because other nodes don't dial them as bootnodes. For consistency and as a best practice, however, it's recommended to generate and maintain a stable node key for every validator once the network is set up
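If you prefer to give the second node a stable identity from the start, you can generate its network key explicitly (as for the first node) and read back the resulting peer ID. A sketch, assuming the default key location used in this tutorial:

    ./target/release/solochain-template-node key generate-node-key --base-path /tmp/node02
    ./target/release/solochain-template-node key inspect-node-key \
    --file /tmp/node02/chains/local_testnet/network/secret_ed25519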

Once both nodes have added their keys to their respective keystores (under /tmp/node01 and /tmp/node02) and are running, you should see:

  • The same genesis block and state root hashes on both nodes
  • Each node showing one peer
  • Block proposals being produced
  • After a few seconds, new blocks being finalized on both nodes

If successful, you should see logs similar to the following on both nodes:

    2024-09-12 15:37:05 💤 Idle (0 peers), best: #0 (0x8af7…53fd), finalized #0 (0x8af7…53fd), ⬇ 0 ⬆ 0
    2024-09-12 15:37:08 discovered: 12D3KooWMaL5zqYiMnVikaYCGF65fKekSPqXGgyz92eRcqcnfpey /ip4/192.168.1.2/tcp/30334
    2024-09-12 15:37:10 💤 Idle (1 peers), best: #0 (0x8af7…53fd), finalized #0 (0x8af7…53fd), ⬇ 0.6kiB/s ⬆ 0.6kiB/s
    2024-09-12 15:37:12 🙌 Starting consensus session on top of parent 0x8af7c72457d437486fe697b4a11ef42b26c8b4448836bdb2220495aea39f53fd (#0)
    2024-09-12 15:37:12 🎁 Prepared block for proposing at 1 (6 ms) [hash: 0xb97cb3a4a62f0cb320236469d8e1e13227a15138941f3c9819b6b78f91986262; parent_hash: 0x8af7…53fd; extrinsics (1): [0x1ef4…eecb]
    2024-09-12 15:37:12 🔖 Pre-sealed block for proposal at 1. Hash now 0x05115677207265f22c6d428fb00b65a0e139c866c975913431ddefe291124f04, previously 0xb97cb3a4a62f0cb320236469d8e1e13227a15138941f3c9819b6b78f91986262.
    2024-09-12 15:37:12 🏆 Imported #1 (0x8af7…53fd → 0x0511…4f04)
    2024-09-12 15:37:15 💤 Idle (1 peers), best: #1 (0x0511…4f04), finalized #0 (0x8af7…53fd), ⬇ 0.5kiB/s ⬆ 0.6kiB/s
    2024-09-12 15:37:18 🏆 Imported #2 (0x0511…4f04 → 0x17a7…a1fd)
    2024-09-12 15:37:20 💤 Idle (1 peers), best: #2 (0x17a7…a1fd), finalized #0 (0x8af7…53fd), ⬇ 0.6kiB/s ⬆ 0.5kiB/s
    2024-09-12 15:37:24 🙌 Starting consensus session on top of parent 0x17a77a8799bd58c7b82ca6a1e3322b38e7db574ee6c92fbcbc26bbe5214da1fd (#2)
    2024-09-12 15:37:24 🎁 Prepared block for proposing at 3 (1 ms) [hash: 0x74d78266b1ac2514050ced3f34fbf98a28c6a2856f49dbe8b44686440a45f879; parent_hash: 0x17a7…a1fd; extrinsics (1): [0xe35f…8d48]
    2024-09-12 15:37:24 🔖 Pre-sealed block for proposal at 3. Hash now 0x12cc1e9492988cfd3ffe4a6eb3186b1abb351a12a97809f7bae4a7319e177dee, previously 0x74d78266b1ac2514050ced3f34fbf98a28c6a2856f49dbe8b44686440a45f879.
    2024-09-12 15:37:24 🏆 Imported #3 (0x17a7…a1fd → 0x12cc…7dee)
    2024-09-12 15:37:25 💤 Idle (1 peers), best: #3 (0x12cc…7dee), finalized #1 (0x0511…4f04), ⬇ 0.5kiB/s ⬆ 0.6kiB/s

Upgrade a Running Network

Introduction

One of the key advantages of the Polkadot SDK development framework is its support for forkless upgrades to the blockchain runtime, which forms the core logic of the chain. Unlike many other blockchains, where introducing new features or improving existing ones often requires a hard fork, the Polkadot SDK enables seamless upgrades even when introducing breaking changes, without disrupting the network's operation.

Polkadot SDK's design incorporates the runtime directly into the blockchain's state, allowing participants to upgrade the runtime by calling the set_code function within a transaction. This mechanism ensures that updates are validated using the blockchain's consensus and cryptographic guarantees, allowing runtime logic to be updated or extended without forking the chain or requiring a new blockchain client.

In this tutorial, you'll learn how to upgrade the runtime of a Polkadot SDK-based blockchain without stopping the network or creating a fork.

You'll make the following changes to a running network node's runtime:

  • Increase the spec_version
  • Add the Utility pallet
  • Increase the minimum balance for network accounts

By the end of this tutorial, you'll have the skills to upgrade the runtime and submit a transaction to deploy the modified runtime on a live network.

Prerequisites

Before starting this tutorial, ensure you meet the following requirements:

  • You have compiled the Polkadot SDK Solochain Template node and can run it locally

Start the Node

To demonstrate how to update a running node, you first need to start the local node with the current runtime.

  1. Navigate to the root directory where you compiled the Polkadot SDK Solochain Template

  2. Start the local node in development mode by running the following command:

         ./target/release/solochain-template-node --dev

     Note

     Keep the node running throughout this tutorial. You can modify and re-compile the runtime without stopping or restarting the node.

  3. Connect to your node using the same steps outlined in the Interact with the Node section. Once connected, you'll notice the node template is using the default version, 100, displayed in the upper left

Modify the Runtime

Add the Utility Pallet to the Dependencies

First, you'll update the Cargo.toml file to include the Utility pallet as a dependency for the runtime. Follow these steps:

  1. Open the runtime/Cargo.toml file and locate the [dependencies] section. Add the Utility pallet by inserting the following line:

         pallet-utility = { version = "37.0.0", default-features = false }

     Your [dependencies] section should now look something like this:

         [dependencies]
         codec = { features = ["derive"], workspace = true }
         scale-info = { features = ["derive", "serde"], workspace = true }
         frame-support = { features = ["experimental"], workspace = true }
         ...
         pallet-utility = { version = "37.0.0", default-features = false }

  2. In the [features] section, add the Utility pallet to the std feature list by including:

         [features]
         default = ["std"]
         std = [
             "codec/std",
             "scale-info/std",
             "frame-executive/std",
             ...
             "pallet-utility/std",
         ]

  3. Save the changes and close the Cargo.toml file

Update the Runtime Configuration

You'll now modify the runtime/src/lib.rs file to integrate the Utility pallet and make other necessary changes. In this section, you'll configure the Utility pallet by implementing its Config trait, update the runtime macro to include the new pallet, adjust the EXISTENTIAL_DEPOSIT value, and increment the runtime version.

Configure the Utility Pallet

To configure the Utility pallet, take the following steps:

  1. Implement the Config trait for the Utility pallet:

         ...
         /// Configure the pallet-template in pallets/template
         impl pallet_template::Config for Runtime {
             ...
         }

         // Add here, after all the other pallet implementations
         impl pallet_utility::Config for Runtime {
             type RuntimeEvent = RuntimeEvent;
             type RuntimeCall = RuntimeCall;
             type PalletsOrigin = OriginCaller;
             type WeightInfo = pallet_utility::weights::SubstrateWeight<Runtime>;
         }
         ...

  2. Locate the #[frame_support::runtime] macro and add the Utility pallet:

         // Create the runtime by composing the FRAME pallets that were previously configured
         #[frame_support::runtime]
         mod runtime {
             ...
             // Include the custom logic from the pallet-template in the runtime
             #[runtime::pallet_index(7)]
             pub type TemplateModule = pallet_template;

             #[runtime::pallet_index(8)]
             pub type Utility = pallet_utility;
             ...
         }

Update Existential Deposit Value

To update the EXISTENTIAL_DEPOSIT in the Balances pallet, locate the constant and set the value to 1000:

    ...
    /// Existential deposit
    pub const EXISTENTIAL_DEPOSIT: u128 = 1000;
    ...

Note

This change increases the minimum balance required for accounts to remain active. It does not retroactively remove accounts whose balances fall between the old value (500) and the new value (1000); removing them would require a storage migration. See Storage Migration for details.

Update Runtime Version

Locate the runtime_version macro and increment the spec_version field from 100 to 101:

    #[sp_version::runtime_version]
    pub const VERSION: RuntimeVersion = RuntimeVersion {
        spec_name: create_runtime_str!("solochain-template-runtime"),
        impl_name: create_runtime_str!("solochain-template-runtime"),
        authoring_version: 1,
        spec_version: 101,
        impl_version: 1,
        apis: RUNTIME_API_VERSIONS,
        transaction_version: 1,
        state_version: 1,
    };

Recompile the Runtime

Once you've made all the necessary changes, recompile the runtime by running:

    cargo build --release

The build artifacts will be output to the target/release directory. The Wasm build artifacts can be found in the target/release/wbuild/solochain-template-runtime directory. You should see the following files:

  • solochain_template_runtime.compact.compressed.wasm
  • solochain_template_runtime.compact.wasm
  • solochain_template_runtime.wasm
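A quick way to confirm the artifacts were produced and to compare their sizes (the compact.compressed variant is typically the one uploaded on-chain):

    ls -lh ./target/release/wbuild/solochain-template-runtime/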

Execute the Runtime Upgrade

Now that you've generated the Wasm artifact for your modified runtime, it's time to upgrade the running network. This process involves submitting a transaction to load the new runtime logic.

Understand Runtime Upgrades

Authorization with Sudo

In production networks, runtime upgrades typically require community approval through governance. For this tutorial, the Sudo pallet will be used to simplify the process. The Sudo pallet allows a designated account (usually Alice in development environments) to perform privileged operations, including runtime upgrades.

Resource Accounting

Runtime upgrades use the set_code extrinsic, which is designed to consume an entire block's resources. This design prevents other transactions from executing on different runtime versions within the same block. The set_code extrinsic is classified as an Operational call, one of the variants of the DispatchClass enum. This classification means it:

  • Can use a block's entire weight limit
  • Receives maximum priority
  • Is exempt from transaction fees

To bypass resource accounting safeguards, the sudo_unchecked_weight extrinsic will be used. This allows you to specify a weight of zero, ensuring the upgrade process has unlimited time to complete.

Perform the Upgrade

Follow these steps to update your network with the new runtime:

  1. Open Polkadot.js Apps in your web browser and make sure you are connected to your local node

  2. Navigate to the Developer dropdown and select the Extrinsics option

  3. Construct the set_code extrinsic call:

       1. Select the sudo pallet
       2. Choose the sudoUncheckedWeight extrinsic
       3. Select the system pallet
       4. Choose the setCode extrinsic
       5. Fill in the parameters:

            • code - the new runtime code

              Note

              You can click the file upload toggle to upload a file instead of copying the hex string value.

            • weight - leave both parameters set to the default value of 0

       6. Click on Submit Transaction

  4. Review the transaction details and click Sign and Submit to confirm the transaction

Verify the Upgrade

Runtime Version Change

Verify that the runtime version of your blockchain has been updated successfully. Follow these steps to ensure the upgrade was applied:

  1. Navigate to the Network dropdown and select the Explorer option

  2. After the transaction is included in a block, check:

       1. There has been a successful sudo.Sudid event
       2. The indicator shows that the runtime version is now 101
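As a programmatic cross-check, you can also query the runtime version over the node's RPC. A sketch, assuming the default development RPC port 9944:

    curl -s -H "Content-Type: application/json" \
      -d '{"id":1,"jsonrpc":"2.0","method":"state_getRuntimeVersion","params":[]}' \
      http://localhost:9944
    # the response should report "specVersion": 101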

Utility Pallet Addition

In the Extrinsics section, you should see that the Utility pallet has been added as an option.

Existential Deposit Update

Check the updated existential deposit value on your blockchain. Follow these steps to query and verify the new value:

  1. Navigate to the Developer dropdown and select the Chain State option

  2. Query the existential deposit value:

       1. Click on the Constants tab
       2. Select the balances pallet
       3. Choose the existentialDeposit constant
       4. Click the + button to execute the query
       5. Check the existential deposit value

Convert Assets on Asset Hub

Introduction

Asset Conversion is an Automated Market Maker (AMM) utilizing Uniswap V2 logic and implemented as a pallet on Polkadot's Asset Hub. For more details about this feature, please visit the Asset Conversion on Asset Hub wiki page.
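For intuition, Uniswap V2-style pools price swaps off the constant-product invariant: the product of the two reserves stays (approximately) constant across a swap. A toy calculation with made-up reserves, ignoring the pool's LP fee (python3 one-liner used purely for the arithmetic):

    # pool holds 1,000 units of asset A and 4,000 units of asset B; swap in 10 A
    python3 -c "x, y, dx = 1000, 4000, 10; print(y - (x * y) / (x + dx))"
    # ≈ 39.6 units of B out, slightly less than the 40 a fixed 1:4 rate would give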

This guide will provide detailed information about the key functionalities offered by the Asset Conversion pallet on Asset Hub, including:

  • Creating a liquidity pool
  • Adding liquidity to a pool
  • Swapping assets
  • Withdrawing liquidity from a pool

Prerequisites

Before converting assets on Asset Hub, you must ensure you have:

  • Access to the Polkadot.js Apps interface and a connection with the intended blockchain
  • A funded wallet containing the assets you wish to convert and enough available funds to cover the transaction fees
  • An asset registered on Asset Hub that you want to convert. If you haven't created an asset on Asset Hub yet, refer to the Register a Local Asset or Register a Foreign Asset documentation

Creating a Liquidity Pool

If an asset on Asset Hub does not have an existing liquidity pool, the first step is to create one.

The asset conversion pallet provides the createPool extrinsic to create a new liquidity pool, which results in an empty pool and a new LP token asset.

Note

A testing token with the asset ID 1112 and the name PPM was created for this example.

As stated in the Test Environment Setup section, this tutorial assumes you have an instance of Polkadot Asset Hub running locally. Therefore, the demo liquidity pool will be created between DOT and PPM tokens. However, the same steps can be applied to any other asset on Asset Hub.

From the Asset Hub perspective, the Multilocation that identifies the PPM token is the following:

    {
      parents: 0,
      interior: {
        X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
      }
    }

Note

The PalletInstance value of 50 represents the Assets pallet on Asset Hub. The GeneralIndex value of 1112 is the PPM asset's asset ID.

To create the liquidity pool, you can follow these steps:

  1. Navigate to the Extrinsics section on the Polkadot.js Apps interface:

       1. Select Developer from the top menu
       2. Click on Extrinsics from the dropdown menu

     Extrinsics Section

  2. Choose the AssetConversion pallet and click on the createPool extrinsic:

       1. Select the AssetConversion pallet
       2. Choose the createPool extrinsic from the list of available extrinsics

     Create Pool Extrinsic

  3. Fill in the required fields:

       1. asset1 - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

              {
                parents: 0,
                interior: 'Here'
              }

       2. asset2 - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

              {
                parents: 0,
                interior: {
                  X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
                }
              }

       3. Click on Submit Transaction to create the liquidity pool

     Create Pool Fields

Signing and submitting the transaction triggers the creation of the liquidity pool. To verify the new pool's creation, check the Explorer section on the Polkadot.js Apps interface and ensure that the PoolCreated event was emitted.

Pool Created Event

As the preceding image shows, the lpToken ID created for this pool is 19. This ID is essential to identify the liquidity pool and associated LP tokens.

Adding Liquidity to a Pool

The addLiquidity extrinsic allows users to provide liquidity to a pool of two assets. Users specify their preferred amounts for both assets and minimum acceptable quantities. The function determines the best asset contribution, which may vary from the amounts desired but won't fall below the specified minimums. Providers receive liquidity tokens representing their pool portion in return for their contribution.

To add liquidity to a pool, follow these steps:

  1. Navigate to the Extrinsics section on the Polkadot.js Apps interface:

       1. Select Developer from the top menu
       2. Click on Extrinsics from the dropdown menu

     Extrinsics Section

  2. Choose the assetConversion pallet and click on the addLiquidity extrinsic:

       1. Select the assetConversion pallet
       2. Choose the addLiquidity extrinsic from the list of available extrinsics

     Add Liquidity Extrinsic

  3. Fill in the required fields:

       1. asset1 - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

              {
                parents: 0,
                interior: 'Here'
              }

       2. asset2 - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

              {
                parents: 0,
                interior: {
                  X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
                }
              }

       3. amount1Desired - the amount of the first asset that will be contributed to the pool
       4. amount2Desired - the quantity of the second asset intended for pool contribution
       5. amount1Min - the minimum amount of the first asset that will be contributed
       6. amount2Min - the lowest acceptable quantity of the second asset for contribution
       7. mintTo - the account to which the liquidity tokens will be minted
       8. Click on Submit Transaction to add liquidity to the pool

     Add Liquidity Fields

     Warning

     Ensure that the appropriate amount of tokens provided has been minted previously and is available in your account before adding liquidity to the pool.

     In this case, the liquidity provided to the pool is between DOT tokens and PPM tokens with the asset ID 1112 on Polkadot Asset Hub. The intention is to provide liquidity for 1 DOT token (u128 value of 10000000000, as it has 10 decimals) and 1 PPM token (u128 value of 10000000000, as it also has 10 decimals). See the conversion check after these steps.
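On-chain amounts are always integers denominated in the asset's smallest unit, so the u128 value is simply tokens multiplied by 10^decimals. A quick sanity check for the figures above, using python3 one-liners for the arithmetic (both assets use 10 decimals here):

    python3 -c "print(int(1 * 10**10))"     # 1 token    -> 10000000000
    python3 -c "print(int(0.01 * 10**10))"  # 0.01 token -> 100000000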

Signing and submitting the transaction adds liquidity to the pool. To verify the liquidity addition, check the Explorer section on the Polkadot.js Apps interface and ensure that the LiquidityAdded event was emitted.

Liquidity Added Event

Swapping Assets

Swapping From an Exact Amount of Tokens

The asset conversion pallet enables users to exchange a specific quantity of one asset for another in a designated liquidity pool. It guarantees the user will receive at least a predetermined minimum amount of the second asset, which makes trading more predictable and lets users exchange assets knowing they are assured a minimum return.

To swap assets for an exact amount of tokens, follow these steps:

  1. Navigate to the Extrinsics section on the Polkadot.js Apps interface:

       1. Select Developer from the top menu
       2. Click on Extrinsics from the dropdown menu

     Extrinsics Section

  2. Choose the AssetConversion pallet and click on the swapExactTokensForTokens extrinsic:

       1. Select the AssetConversion pallet
       2. Choose the swapExactTokensForTokens extrinsic from the list of available extrinsics

     Swap From Exact Tokens Extrinsic

  3. Fill in the required fields:

       1. path: Vec<StagingXcmV3MultiLocation> - an array of Multilocations representing the path of the swap. The first and last elements of the array are the input and output assets, respectively. In this case, the path consists of two elements:

            • 0: StagingXcmV3MultiLocation - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

                  {
                    parents: 0,
                    interior: 'Here'
                  }

            • 1: StagingXcmV3MultiLocation - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

                  {
                    parents: 0,
                    interior: {
                      X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
                    }
                  }

       2. amountIn - the exact amount of the first asset to swap
       3. amountOutMin - the minimum amount of the second asset that the user expects to receive
       4. sendTo - the account to which the swapped assets will be sent
       5. keepAlive - a boolean that determines whether the sender's account must be kept alive (protected from being reaped) after the swap
       6. Click on Submit Transaction to swap an exact amount of tokens

     Swap From Exact Tokens Fields

     Warning

     Ensure that the appropriate amount of tokens provided has been minted previously and is available in your account before submitting the swap.

     In this case, the intention is to swap 0.01 DOT token (u128 value of 100000000, as it has 10 decimals) for at least 0.04 PPM token (u128 value of 400000000, as it also has 10 decimals).

Signing and submitting the transaction will execute the swap. To verify execution, check the Explorer section on the Polkadot.js Apps interface and make sure that the SwapExecuted event was emitted.

Swap From Exact Tokens Event

Swapping To an Exact Amount of Tokens

Conversely, the Asset Conversion pallet comes with a function that allows users to trade a variable amount of one asset to acquire a precise quantity of another. It ensures that users stay within a set maximum of the initial asset to obtain the desired amount of the second asset. This provides a method to control transaction costs while achieving the intended result.

To swap assets for an exact amount of tokens, follow these steps:

  1. Navigate to the Extrinsics section on the Polkadot.js Apps interface:

       1. Select Developer from the top menu
       2. Click on Extrinsics from the dropdown menu

     Extrinsics Section

  2. Choose the AssetConversion pallet and click on the swapTokensForExactTokens extrinsic:

       1. Select the AssetConversion pallet
       2. Choose the swapTokensForExactTokens extrinsic from the list of available extrinsics

     Swap Tokens For Exact Tokens Extrinsic

  3. Fill in the required fields:

       1. path: Vec<StagingXcmV3MultiLocation> - an array of Multilocations representing the path of the swap. The first and last elements of the array are the input and output assets, respectively. In this case, the path consists of two elements:

            • 0: StagingXcmV3MultiLocation - the Multilocation of the first asset in the pool. In this case, it is the PPM token, which the following Multilocation represents:

                  {
                    parents: 0,
                    interior: {
                      X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
                    }
                  }

            • 1: StagingXcmV3MultiLocation - the second asset's Multilocation within the pool. This refers to the DOT token, which the following Multilocation identifies:

                  {
                    parents: 0,
                    interior: 'Here'
                  }

       2. amountOut - the exact amount of the second asset that the user wants to receive
       3. amountInMax - the maximum amount of the first asset that the user is willing to swap
       4. sendTo - the account to which the swapped assets will be sent
       5. keepAlive - a boolean that determines whether the sender's account must be kept alive (protected from being reaped) after the swap
       6. Click on Submit Transaction to swap assets for an exact amount of tokens

     Swap Tokens For Exact Tokens Fields

     Warning

     Before swapping assets, ensure that the tokens provided have been minted previously and are available in your account.

     In this case, the intention is to spend at most 0.04 PPM token (u128 value of 400000000) to receive exactly 0.01 DOT token (u128 value of 100000000); both assets use ten decimals.

Signing and submitting the transaction will execute the swap. To verify execution, check the Explorer section on the Polkadot.js Apps interface and make sure that the SwapExecuted event was emitted.

Swap Tokens For Exact Tokens Event

Withdrawing Liquidity from a Pool

The Asset Conversion pallet provides the removeLiquidity extrinsic to remove liquidity from a pool. This function allows users to withdraw the liquidity they offered from a pool, returning the original assets. When calling this function, users specify the number of liquidity tokens (representing their share in the pool) they wish to burn. They also set minimum acceptable amounts for the assets they expect to receive back. This mechanism ensures that users can control the minimum value they receive, protecting against unfavorable price movements during the withdrawal process.

To withdraw liquidity from a pool, follow these steps:

  1. Navigate to the Extrinsics section on the Polkadot.js Apps interface:

       1. Select Developer from the top menu
       2. Click on Extrinsics from the dropdown menu

     Extrinsics Section

  2. Choose the AssetConversion pallet and click on the removeLiquidity extrinsic:

       1. Select the AssetConversion pallet
       2. Choose the removeLiquidity extrinsic from the list of available extrinsics

     Remove Liquidity Extrinsic

  3. Fill in the required fields:

       1. asset1 - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

              {
                parents: 0,
                interior: 'Here'
              }

       2. asset2 - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

              {
                parents: 0,
                interior: {
                  X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }]
                }
              }

       3. lpTokenBurn - the number of liquidity tokens to burn
       4. amount1MinReceived - the minimum amount of the first asset that the user expects to receive
       5. amount2MinReceived - the minimum quantity of the second asset the user expects to receive
       6. withdrawTo - the account to which the withdrawn assets will be sent
       7. Click on Submit Transaction to withdraw liquidity from the pool

     Remove Liquidity Fields

     Warning

     Ensure that the tokens provided have been minted previously and are available in your account before withdrawing liquidity from the pool.

     In this case, the intention is to burn 0.05 liquidity tokens, expecting to receive at least 0.004 DOT token (u128 value of 40000000, as it has 10 decimals) and 0.04 PPM token (u128 value of 400000000, as it also has 10 decimals).

Signing and submitting the transaction will initiate the withdrawal of liquidity from the pool. To verify the withdrawal, check the Explorer section on the Polkadot.js Apps interface and ensure that the LiquidityRemoved event was emitted.

Remove Liquidity Event

Test Environment Setup

To test the Asset Conversion pallet, you can set up a local test environment to simulate different scenarios. This guide uses Chopsticks to spin up an instance of Polkadot Asset Hub. For further details on using Chopsticks, please refer to the Chopsticks documentation.

To set up a local test environment, execute the following command:

    npx @acala-network/chopsticks \
    --config=https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot-asset-hub.yml

Note

This command initiates a lazy fork of Polkadot Asset Hub, including the most recent block information from the network. For Kusama Asset Hub testing, simply switch out polkadot-asset-hub.yml with kusama-asset-hub.yml in the command.

You now have a local Asset Hub instance up and running, ready for you to test various asset conversion procedures. The process here mirrors what you'd do on MainNet. After completing a transaction on TestNet, you can apply the same steps to convert assets on MainNet.

Asset Hub Tutorials

Benefits of Asset Hub

Polkadot SDK-based relay chains focus on security and consensus, leaving asset management to an external component, such as a system chain. The Asset Hub is one example of a system chain and is vital to managing tokens which aren't native to the Polkadot ecosystem. Developers opting to integrate with Asset Hub can expect the following benefits:

  • Support for non-native on-chain assets - create and manage your own tokens or NFTs with Polkadot ecosystem compatibility available out of the box
  • Lower transaction fees - approximately 1/10th of the cost of using the relay chain
  • Reduced deposit requirements - approximately 1/100th of the deposit required for the relay chain
  • Payment of fees with non-native assets - no need to buy native tokens for gas, increasing flexibility for developers and users

Get Started

Through these tutorials, you'll learn how to manage cross-chain assets, including:

  • Asset registration and configuration
  • Cross-chain asset representation
  • Liquidity pool creation and management
  • Asset swapping and conversion
  • Transaction parameter optimization

In This Section

  • Register a Local Asset
  • Register a Foreign Asset on Asset Hub
  • Convert Assets on Asset Hub

Register a Foreign Asset on Asset Hub

Introduction

As outlined in the Asset Hub Overview, Asset Hub supports two categories of assets: local and foreign. Local assets are created on the Asset Hub system parachain and are identified by integer IDs. On the other hand, foreign assets, which originate outside of Asset Hub, are recognized by Multilocations.

When registering a foreign asset on Asset Hub, it's essential to note that the process involves communication between two parachains. The Asset Hub parachain will be the destination of the foreign asset, while the source parachain will be the origin of the asset. The communication between the two parachains is facilitated by the Cross-Chain Message Passing (XCMP) protocol.

This guide will take you through the process of registering a foreign asset on the Asset Hub parachain.

Prerequisites

The Asset Hub parachain is one of the system parachains on a relay chain, such as Polkadot or Kusama. To interact with these parachains, you can use the Polkadot.js Apps interface. For testing purposes, you can also interact with an Asset Hub instance on a test network.

Before you start, ensure that you have:

  • Access to the Polkadot.js Apps interface, and you are connected to the desired chain
  • A parachain that supports the XCMP protocol to interact with the Asset Hub parachain
  • A funded wallet to pay for the transaction fees and subsequent registration of the foreign asset

This guide will use Polkadot, its local Asset Hub instance, and the Astar parachain (ID 2006), as stated in the Test Environment Setup section. However, the process is the same for other relay chains and their respective Asset Hub parachains, regardless of the network you are using and the parachain owner of the foreign asset.

Steps to Register a Foreign Asset

Asset Hub

  1. Open the Polkadot.js Apps interface and connect to the Asset Hub parachain using the network selector in the top left corner:

       • Testing foreign asset registration is recommended on TestNet before proceeding to MainNet. If you haven't set up a local testing environment yet, consult the Environment setup guide. After setting up, connect to the Local Node (Chopsticks) at ws://127.0.0.1:8000
       • For live network operations, connect to the Asset Hub parachain. You can choose either Polkadot or Kusama Asset Hub from the dropdown menu, selecting your preferred RPC provider

  2. Navigate to the Extrinsics page:

       1. Click on the Developer tab from the top navigation bar
       2. Select Extrinsics from the dropdown

     Access to Developer Extrinsics section

  3. Select the Foreign Assets pallet:

       1. Select the foreignAssets pallet from the dropdown list
       2. Choose the create extrinsic

     Select the Foreign Asset pallet

  4. Fill out the required fields and click on the copy icon to copy the encoded call data to your clipboard. The fields to be filled are:

       • id - as this is a foreign asset, the ID will be represented by a Multilocation that reflects its origin. For this case, the Multilocation of the asset will be from the source parachain perspective:

             { parents: 1, interior: { X1: [{ Parachain: 2006 }] } }

       • admin - refers to the account that will be the admin of this asset. This account will be able to manage the asset, including updating its metadata. As the registered asset corresponds to a native asset of the source parachain, the admin account should be the sovereign account of the source parachain (a derivation sketch follows these steps)

         Obtain the sovereign account

         The sovereign account can be obtained through Substrate Utilities. Ensure that Sibling is selected and that the Para ID corresponds to the source parachain. In this case, since the guide follows the test setup stated in the Test Environment Setup section, the Para ID is 2006.

         Get parachain sovereign account

       • minBalance - the minimum balance required to hold this asset

     Fill out the required fields

     Encoded call data

     If you want an example of the encoded call data, you can copy the following:

         0x3500010100591f007369626cd6070000000000000000000000000000000000000000000000000000a0860100000000000000000000000000
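As referenced above, the sibling sovereign account is deterministic, so you can cross-check the Substrate Utilities result yourself. Sibling parachain sovereign accounts are built from the ASCII prefix "sibl" followed by the para ID as a little-endian u32, zero-padded to 32 bytes. A sketch for Para ID 2006, using a python3 one-liner for the byte arithmetic:

    python3 -c "print('0x' + (b'sibl' + (2006).to_bytes(4, 'little')).hex().ljust(64, '0'))"
    # 0x7369626cd6070000000000000000000000000000000000000000000000000000

Note how the same 7369626cd6070000 byte string appears inside the example encoded call data above.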

Source Parachain

  1. Navigate to the Developer > Extrinsics section

  2. Create the extrinsic to register the foreign asset through XCM:

       1. Paste the encoded call data copied in the previous step
       2. Click the Submit Transaction button

     Register foreign asset through XCM

     This XCM call withdraws DOT from the parachain's sibling (sovereign) account to buy execution, then dispatches the hex-encoded create call for the specified foreign asset Multilocation on Asset Hub, with XCM as the origin kind. Any surplus is refunded, and the remaining assets are deposited back into the sibling account.

     Warning

     Note that the sovereign account on the Asset Hub parachain must have a sufficient balance to cover the XCM BuyExecution instruction. If the account does not have enough balance, the transaction will fail.

     Example of the encoded call data

     If you want to have the whole XCM call ready to be copied, go to the Developer > Extrinsics > Decode section and paste the following hex-encoded call data:

         0x6300330003010100a10f030c000400010000070010a5d4e81300010000070010a5d4e80006030700b4f13501419ce03500010100591f007369626cd607000000000000000000000000000000000000000000000000000000000000000000000000000000000000

     Ensure to replace the encoded call data with the one you copied in the previous step.

After the transaction is successfully executed, the foreign asset will be registered on the Asset Hub parachain.

Asset Registration Verification

To confirm that a foreign asset has been successfully accepted and registered on the Asset Hub parachain, you can navigate to the Network > Explorer section of the Polkadot.js Apps interface for Asset Hub. You should be able to see an event that includes the following details:

Asset registration event

In the image above, the success field indicates whether the asset registration was successful.

Test Environment Setup

To test the foreign asset registration process before deploying it on a live network, you can set up a local parachain environment. This guide uses Chopsticks to simulate that process. For more information on using Chopsticks, please refer to the Chopsticks documentation.

To set up a test environment, run the following command:

    npx @acala-network/chopsticks xcm \
    -r polkadot \
    -p polkadot-asset-hub \
    -p astar

Note

The above command will create a lazy fork of Polkadot as the relay chain, its Asset Hub instance, and the Astar parachain. The xcm parameter enables communication through the XCMP protocol between the relay chain and the parachains, allowing the registration of foreign assets on Asset Hub. For further information on the Chopsticks usage of the XCMP protocol, refer to the XCM Testing section of the Chopsticks documentation.

After executing the command, the terminal will display output indicating the Polkadot relay chain, the Polkadot Asset Hub, and the Astar parachain are running locally and connected through XCM. You can access them individually via the Polkadot.js Apps interface.


Register a Local Asset on Asset Hub

Introduction

As detailed in the Asset Hub Overview page, Asset Hub accommodates two types of assets: local and foreign. Local assets are those that were created in Asset Hub and are identifiable by an integer ID. On the other hand, foreign assets originate from a sibling parachain and are identified by a Multilocation.

This guide will take you through the steps of registering a local asset on the Asset Hub parachain.

Prerequisites

Before you begin, ensure you have access to the Polkadot.js Apps interface and a funded wallet with DOT or KSM.

  • For Polkadot Asset Hub, you will need a deposit of 10 DOT and around 0.201 DOT for the metadata
  • For Kusama Asset Hub, the deposit is 0.1 KSM and around 0.000669 KSM for the metadata

Ensure that your Asset Hub account balance is slightly more than the sum of those two deposits, so it can cover the required deposits as well as the transaction fees.

Steps to Register a Local Asset

+

To register a local asset on the Asset Hub parachain, follow these steps:

  1. Open the Polkadot.js Apps interface and connect to the Asset Hub parachain using the network selector in the top left corner

       • You may prefer to test local asset registration on TestNet before registering the asset on a MainNet hub. If you still need to set up a local testing environment, review the Test Setup Environment section below for instructions. Once the local environment is set up, connect to the Local Node (Chopsticks) available on ws://127.0.0.1:8000
       • For the live network, connect to the Asset Hub parachain. Either Polkadot or Kusama Asset Hub can be selected from the dropdown list, choosing the desired RPC provider

  2. Click on the Network tab on the top navigation bar and select Assets from the dropdown list

     Access to Asset Hub through Polkadot.js

  3. Examine all the registered asset IDs, which are displayed in the assets column. This step is crucial to ensure that the asset ID you are about to register is unique

     Asset IDs on Asset Hub

  4. Once you have confirmed that the asset ID is unique, click on the Create button in the top right corner of the page

     Create a new asset

  5. Fill in the required fields in the Create Asset form:

       1. creator account - the account to be used for creating this asset and setting up the initial metadata
       2. asset name - the descriptive name of the asset you are registering
       3. asset symbol - the symbol that will be used to represent the asset
       4. asset decimals - the number of decimal places for this token, with a maximum of 20 allowed through the user interface
       5. minimum balance - the minimum balance for the asset, specified in the units and decimals given above
       6. asset ID - the selected ID for the asset; it must not match an already-existing asset ID
       7. Click on the Next button

     Create Asset Form

  6. Choose the accounts for the roles listed below:

       1. admin account - the account designated for continuous administration of the token
       2. issuer account - the account that will be used for issuing this token
       3. freezer account - the account that will be used for performing token freezing operations
       4. Click on the Create button

     Admin, Issuer, Freezer accounts

  7. Click on the Sign and Submit button to complete the asset registration process

     Sign and Submit
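If you prefer to script this flow rather than click through the UI, the equivalent extrinsics can be batched with Polkadot.js. The following is a minimal sketch, not the canonical procedure: it assumes a local Chopsticks fork at ws://127.0.0.1:8000, the development account Alice, and a hypothetical asset ID of 12345678, and it uses the assets pallet's create, setMetadata, and setTeam calls:

import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function registerLocalAsset() {
  // Assumes a local Chopsticks fork of Asset Hub (see Test Setup Environment)
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:8000'),
  });
  const alice = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');

  const ASSET_ID = 12345678; // hypothetical; must not collide with an existing ID
  const MIN_BALANCE = 1_000_000_000; // in base units of the new asset

  // create reserves the deposit, setMetadata names the asset, and
  // setTeam assigns the issuer, admin, and freezer roles
  const registration = api.tx.utility.batchAll([
    api.tx.assets.create(ASSET_ID, alice.address, MIN_BALANCE),
    api.tx.assets.setMetadata(ASSET_ID, 'My Example Token', 'MET', 12),
    api.tx.assets.setTeam(ASSET_ID, alice.address, alice.address, alice.address),
  ]);

  await registration.signAndSend(alice, ({ status }) => {
    if (status.isInBlock) {
      console.log(`Included in block ${status.asInBlock.toString()}`);
    }
  });
}

registerLocalAsset().catch(console.error);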

Verify Asset Registration


After completing these steps, the asset will be successfully registered. You can now view your asset listed on the Assets section of the Polkadot.js Apps interface.


Asset listed on Polkadot.js Apps

Note

The Assets section's link may differ depending on the network you are using. For the local environment, enter ws://127.0.0.1:8000 into the Custom Endpoint field.
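You can also confirm the registration programmatically by reading the asset's details and metadata back from storage. A minimal sketch, reusing the hypothetical asset ID and local endpoint from the sketch above:

import { ApiPromise, WsProvider } from '@polkadot/api';

async function verifyAsset() {
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:8000'),
  });

  const ASSET_ID = 12345678; // hypothetical ID used during registration
  const details = await api.query.assets.asset(ASSET_ID);
  const metadata = await api.query.assets.metadata(ASSET_ID);

  console.log('Details:', details.toHuman());
  console.log('Metadata:', metadata.toHuman());
  await api.disconnect();
}

verifyAsset().catch(console.error);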

In this way, you have successfully registered a local asset on the Asset Hub parachain.


For an in-depth explanation of Asset Hub and its features, please refer to the Polkadot Wiki page on Asset Hub.


Test Setup Environment


You can set up a local parachain environment to test the asset registration process before deploying it on the live network. This guide uses Chopsticks to simulate that process. For further information on Chopsticks usage, refer to the Chopsticks documentation.


To set up a test environment, execute the following command:

npx @acala-network/chopsticks \
--config=https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot-asset-hub.yml

Note

The above command will spawn a lazy fork of Polkadot Asset Hub with the latest block data from the network. If you need to test Kusama Asset Hub, replace polkadot-asset-hub.yml with kusama-asset-hub.yml in the command.

An Asset Hub instance is now running locally, and you can proceed with the asset registration process. Note that the local registration process does not differ from the live network process. Once you have a successful TestNet transaction, you can use the same steps to register the asset on MainNet.

diff --git a/tutorials/polkadot-sdk/system-chains/index.html b/tutorials/polkadot-sdk/system-chains/index.html

System Chains Tutorials


In this section, you'll gain hands-on experience building solutions that integrate with system chains on Polkadot using the Polkadot SDK. System chains like the Asset Hub provide essential infrastructure for enabling cross-chain interoperability and asset management across the Polkadot ecosystem.


Through these tutorials, you'll learn how to leverage these system chains to enhance the functionality and security of your blockchain applications.


For Parachain Integrators


Enhance cross-chain interoperability and expand your parachain's functionality.

For Developers Leveraging System Chains


Unlock new possibilities by tapping into Polkadot’s system chains:

  • Register a new asset on Asset Hub - create and customize assets directly on Asset Hub (local assets) with parameters like metadata, minimum balances, and more
  • Convert Assets - use Asset Hub's AMM functionality to swap between different assets, provide liquidity to pools, and manage LP tokens

In This Section

diff --git a/tutorials/polkadot-sdk/testing/fork-live-chains/index.html b/tutorials/polkadot-sdk/testing/fork-live-chains/index.html

Fork a Chain with Chopsticks


Introduction


Chopsticks is an innovative tool that simplifies the process of forking live Polkadot SDK chains. This guide provides step-by-step instructions to configure and fork chains, enabling developers to:

  • Replay blocks for state analysis
  • Test cross-chain messaging (XCM)
  • Simulate blockchain environments for debugging and experimentation

With support for both configuration files and CLI commands, Chopsticks offers flexibility for diverse development workflows. Whether you're testing locally or exploring complex blockchain scenarios, Chopsticks empowers developers to gain deeper insights and accelerate application development.


For additional support and information, please reach out through GitHub Issues.

Note

Chopsticks uses the Smoldot light client, which only supports the native Polkadot SDK API. As a result, Ethereum JSON-RPC calls are not supported, and tools like MetaMask cannot connect to Chopsticks-based forks.

Prerequisites


To follow this tutorial, ensure you have Node.js installed, since the examples below run Chopsticks via npx.

Configuration File


To run Chopsticks using a configuration file, utilize the --config flag. You can pass a raw GitHub URL, a path to a local file, or simply the chain's name. The following commands look different, but they all use the Polkadot configuration in the same way:

npx @acala-network/chopsticks \
--config=https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot.yml

npx @acala-network/chopsticks --config=configs/polkadot.yml

npx @acala-network/chopsticks --config=polkadot

Regardless of which method you choose from the preceding examples, you'll see an output similar to the following:

npx @acala-network/chopsticks --config=polkadot
[18:38:26.155] INFO: Loading config file https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot.yml
    app: "chopsticks"
chopsticks::executor TRACE: Calling Metadata_metadata
chopsticks::executor TRACE: Completed Metadata_metadata
[18:38:28.186] INFO: Polkadot RPC listening on port 8000
    app: "chopsticks"

Note

If using a file path, make sure you've downloaded the Polkadot configuration file, or have created your own.
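If you do create your own, a minimal configuration might look like the following sketch. The keys mirror those used in the published Chopsticks configs; the endpoint, port, and database path here are illustrative values, not requirements:

# my-polkadot.yml - illustrative minimal Chopsticks configuration
endpoint: wss://polkadot-rpc.dwellir.com   # live chain to fork
port: 8000                                 # local RPC port to expose
db: ./db.sqlite                            # cache fetched state between runs
mock-signature-host: true                  # accept mock signatures for testing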

Create a Fork


Once you've configured Chopsticks, use the following command to fork Polkadot at block 100:

npx @acala-network/chopsticks \
--endpoint wss://polkadot-rpc.dwellir.com \
--block 100

If the fork is successful, you will see output similar to the following:

npx @acala-network/chopsticks \
    --endpoint wss://polkadot-rpc.dwellir.com \
    --block 100
[19:12:21.023] INFO: Polkadot RPC listening on port 8000
    app: "chopsticks"

Access the running Chopsticks fork using the default address.

ws://localhost:8000

Interact with a Fork


You can interact with the forked chain using various libraries such as Polkadot.js and its user interface, Polkadot.js Apps.


Use Polkadot.js Apps


To interact with Chopsticks via the hosted user interface, visit Polkadot.js Apps and follow these steps:

  1. Select the network icon in the top left corner
  2. Scroll to the bottom and select Development
  3. Choose Custom
  4. Enter ws://localhost:8000 in the input field
  5. Select the Switch button

You should now be connected to your local fork and can interact with it as you would with a real chain.


Use Polkadot.js Library


For programmatic interaction, you can use the Polkadot.js library. The following is a basic example:

import { ApiPromise, WsProvider } from '@polkadot/api';

async function connectToFork() {
  // Connect to the local Chopsticks fork exposed on port 8000
  const wsProvider = new WsProvider('ws://localhost:8000');
  const api = await ApiPromise.create({ provider: wsProvider });
  await api.isReady;

  // Now you can use 'api' to interact with your fork
  console.log(`Connected to chain: ${await api.rpc.system.chain()}`);
}

connectToFork();

Replay Blocks


Chopsticks allows you to replay specific blocks from a chain, which is useful for debugging and analyzing state changes. You can use the parameters in the Configuration File section to set up the chain configuration, and then use the run-block subcommand with the following additional options:

  • output-path - path to print output
  • html - generate HTML with storage diff
  • open - open generated HTML

For example, the command to replay block 1000 from Polkadot and save the output to a JSON file would be as follows:

npx @acala-network/chopsticks run-block \
--endpoint wss://polkadot-rpc.dwellir.com \
--output-path ./polkadot-output.json \
--block 1000
Output file content:

{
    "Call": {
        "result": "0xba754e7478944d07a1f7e914422b4d973b0855abeb6f81138fdca35beb474b44a10f6fc59a4d90c3b78e38fac100fc6adc6f9e69a07565ec8abce6165bd0d24078cc7bf34f450a2cc7faacc1fa1e244b959f0ed65437f44208876e1e5eefbf8dd34c040642414245b501030100000083e2cc0f00000000d889565422338aa58c0fd8ebac32234149c7ce1f22ac2447a02ef059b58d4430ca96ba18fbf27d06fe92ec86d8b348ef42f6d34435c791b952018d0a82cae40decfe5faf56203d88fdedee7b25f04b63f41f23da88c76c876db5c264dad2f70c",
        "storageDiff": [
            [
                "0x0b76934f4cc08dee01012d059e1b83eebbd108c4899964f707fdaffb82636065",
                "0x00"
            ],
            [
                "0x1cb6f36e027abb2091cfb5110ab5087f0323475657e0890fbdbf66fb24b4649e",
                null
            ],
            [
                "0x1cb6f36e027abb2091cfb5110ab5087f06155b3cd9a8c9e5e9a23fd5dc13a5ed",
                "0x83e2cc0f00000000"
            ],
            [
                "0x1cb6f36e027abb2091cfb5110ab5087ffa92de910a7ce2bd58e99729c69727c1",
                null
            ],
            [
                "0x26aa394eea5630e07c48ae0c9558cef702a5c1b19ab7a04f536c519aca4983ac",
                null
            ],
            [
                "0x26aa394eea5630e07c48ae0c9558cef70a98fdbe9ce6c55837576c60c7af3850",
                "0x02000000"
            ],
            [
                "0x26aa394eea5630e07c48ae0c9558cef734abf5cb34d6244378cddbf18e849d96",
                "0xc03b86ae010000000000000000000000"
            ],
            [
                "0x26aa394eea5630e07c48ae0c9558cef780d41e5e16056765bc8461851072c9d7",
                "0x080000000000000080e36a09000000000200000001000000000000ca9a3b00000000020000"
            ],
            [
                "0x26aa394eea5630e07c48ae0c9558cef78a42f33323cb5ced3b44dd825fda9fcc",
                null
            ],
            [
                "0x26aa394eea5630e07c48ae0c9558cef799e7f93fc6a98f0874fd057f111c4d2d",
                null
            ],
            [
                "0x26aa394eea5630e07c48ae0c9558cef7a44704b568d21667356a5a050c118746d366e7fe86e06375e7030000",
                "0xba754e7478944d07a1f7e914422b4d973b0855abeb6f81138fdca35beb474b44"
            ],
            [
                "0x26aa394eea5630e07c48ae0c9558cef7a86da5a932684f199539836fcb8c886f",
                null
            ],
            [
                "0x26aa394eea5630e07c48ae0c9558cef7b06c3320c6ac196d813442e270868d63",
                null
            ],
            [
                "0x26aa394eea5630e07c48ae0c9558cef7bdc0bd303e9855813aa8a30d4efc5112",
                null
            ],
            [
                "0x26aa394eea5630e07c48ae0c9558cef7df1daeb8986837f21cc5d17596bb78d15153cb1f00942ff401000000",
                null
            ],
            [
                "0x26aa394eea5630e07c48ae0c9558cef7df1daeb8986837f21cc5d17596bb78d1b4def25cfda6ef3a00000000",
                null
            ],
            [
                "0x26aa394eea5630e07c48ae0c9558cef7ff553b5a9862a516939d82b3d3d8661a",
                null
            ],
            [
                "0x2b06af9719ac64d755623cda8ddd9b94b1c371ded9e9c565e89ba783c4d5f5f9b4def25cfda6ef3a000000006f3d6b177c8acbd8dc9974cdb3cebfac4d31333c30865ff66c35c1bf898df5c5dd2924d3280e7201",
                "0x9b000000"
            ],
            ["0x3a65787472696e7369635f696e646578", null],
            [
                "0x3f1467a096bcd71a5b6a0c8155e208103f2edf3bdf381debe331ab7446addfdc",
                "0x550057381efedcffffffffffffffffff"
            ],
            [
                "0x3fba98689ebed1138735e0e7a5a790ab0f41321f75df7ea5127be2db4983c8b2",
                "0x00"
            ],
            [
                "0x3fba98689ebed1138735e0e7a5a790ab21a5051453bd3ae7ed269190f4653f3b",
                "0x080000"
            ],
            [
                "0x3fba98689ebed1138735e0e7a5a790abb984cfb497221deefcefb70073dcaac1",
                "0x00"
            ],
            [
                "0x5f3e4907f716ac89b6347d15ececedca80cc6574281671b299c1727d7ac68cabb4def25cfda6ef3a00000000",
                "0x204e0000183887050ecff59f58658b3df63a16d03a00f92890f1517f48c2f6ccd215e5450e380e00005809fd84af6483070acbb92378e3498dbc02fb47f8e97f006bb83f60d7b2b15d980d000082104c22c383925323bf209d771dec6e1388285abe22c22d50de968467e0bb6ce00b000088ee494d719d68a18aade04903839ea37b6be99552ceceb530674b237afa9166480d0000dc9974cdb3cebfac4d31333c30865ff66c35c1bf898df5c5dd2924d3280e72011c0c0000e240d12c7ad07bb0e7785ee6837095ddeebb7aef84d6ed7ea87da197805b343a0c0d0000"
            ],
            [
                "0xae394d879ddf7f99595bc0dd36e355b5bbd108c4899964f707fdaffb82636065",
                null
            ],
            [
                "0xbd2a529379475088d3e29a918cd478721a39ec767bd5269111e6492a1675702a",
                "0x4501407565175cfbb5dca18a71e2433f838a3d946ef532c7bff041685db1a7c13d74252fffe343a960ef84b15187ea0276687d8cb3168aeea5202ea6d651cb646517102b81ff629ee6122430db98f2cadf09db7f298b49589b265dae833900f24baa8fb358d87e12f3e9f7986a9bf920c2fb48ce29886199646d2d12c6472952519463e80b411adef7e422a1595f1c1af4b5dd9b30996fba31fa6a30bd94d2022d6b35c8bc5a8a51161d47980bf4873e01d15afc364f8939a6ce5a09454ab7f2dd53bf4ee59f2c418e85aa6eb764ad218d0097fb656900c3bdd859771858f87bf7f06fc9b6db154e65d50d28e8b2374898f4f519517cd0bedc05814e0f5297dc04beb307b296a93cc14d53afb122769dfd402166568d8912a4dff9c2b1d4b6b34d811b40e5f3763e5f3ab5cd1da60d75c0ff3c12bcef3639f5f792a85709a29b752ffd1233c2ccae88ed3364843e2fa92bdb49021ee36b36c7cdc91b3e9ad32b9216082b6a2728fccd191a5cd43896f7e98460859ca59afbf7c7d93cd48da96866f983f5ff8e9ace6f47ee3e6c6edb074f578efbfb0907673ebca82a7e1805bc5c01cd2fa5a563777feeb84181654b7b738847c8e48d4f575c435ad798aec01631e03cf30fe94016752b5f087f05adf1713910767b7b0e6521013be5370776471191641c282fdfe7b7ccf3b2b100a83085cd3af2b0ad4ab3479448e71fc44ff987ec3a26be48161974b507fb3bc8ad23838f2d0c54c9685de67dc6256e71e739e9802d0e6e3b456f6dca75600bc04a19b3cc1605784f46595bfb10d5e077ce9602ae3820436166aa1905a7686b31a32d6809686462bc9591c0bc82d9e49825e5c68352d76f1ac6e527d8ac02db3213815080afad4c2ecb95b0386e3e9ab13d4f538771dac70d3059bd75a33d0b9b581ec33bb16d0e944355d4718daccb35553012adfcdacb1c5200a2aec3756f6ad5a2beffd30018c439c1b0c4c0f86dbf19d0ad59b1c9efb7fe90906febdb9001af1e7e15101089c1ab648b199a40794d30fe387894db25e614b23e833291a604d07eec2ade461b9b139d51f9b7e88475f16d6d23de6fe7831cc1dbba0da5efb22e3b26cd2732f45a2f9a5d52b6d6eaa38782357d9ae374132d647ef60816d5c98e6959f8858cfa674c8b0d340a8f607a68398a91b3a965585cc91e46d600b1310b8f59c65b7c19e9d14864a83c4ad6fa4ba1f75bba754e7478944d07a1f7e914422b4d973b0855abeb6f81138fdca35beb474b44c7736fc3ab2969878810153aa3c93fc08c99c478ed1bb57f647d3eb02f25cee122c70424643f4b106a7643acaa630a5c4ac39364c3cb14453055170c01b44e8b1ef007c7727494411958932ae8b3e0f80d67eec8e94dd2ff7bbe8c9e51ba7e27d50bd9f52cbaf9742edecb6c8af1aaf3e7c31542f7d946b52e0c37d194b3dd13c3fddd39db0749755c7044b3db1143a027ad428345d930afcefc0d03c3a0217147900bdea1f5830d826f7e75ecd1c4e2bc8fd7de3b35c6409acae1b2215e9e4fd7e360d6825dc712cbf9d87ae0fd4b349b624d19254e74331d66a39657da81e73d7b13adc1e5efa8efd65aa32c1a0a0315913166a590ae551c395c476116156cf9d872fd863893edb41774f33438161f9b973e3043f819d087ba18a0f1965e189012496b691f342f7618fa9db74e8089d4486c8bd1993efd30ff119976f5cc0558e29b417115f60fd8897e13b6de1a48fbeee38ed812fd267ae25bffea0caa71c09309899b34235676d5573a8c3cf994a3d7f0a5dbd57ab614c6caf2afa2e1a860c6307d6d9341884f1b16ef22945863335bb4af56e5ef5e239a55dbd449a4d4d3555c8a3ec5bd3260f88cabca88385fe57920d2d2dfc5d70812a8934af5691da5b91206e29df60065a94a0a8178d118f1f7baf768d934337f570f5ec68427506391f51ab4802c666cc1749a84b5773b948fcbe460534ed0e8d48a15c149d27d67deb8ea637c4cc28240ee829c386366a0b1d6a275763100da95374e46528a0adefd4510c38c77871e66aeda6b6bfd629d32af9b2fad36d392a1de23a683b7afd13d1e3d45dad97c740106a71ee308d8d0f94f6771164158c6cd3715e72ccfbc49a9cc49f21ead8a3c5795d64e95c15348c6bf8571478650192e52e96dd58f95ec2c0fb4f2ccc05b0ab749197db8d6d1c6de07d6e8cb2620d5c308881d1059b50ffef3947c273eaed7e56c73848e0809c4bd93619edd9fd08c8c5c88d5f230a55d2c6a354e5dd94440e7b5bf99326cf4a112fe843e7efdea56e97af845761d98f40ed2447bd04a424976fcf0fe0a0c72b97619f85cf431fe4c3aa6b3a4f61df8bc1179c11e77783bfedb7d374bd1668d0969333cb518bd20add8329462f2c9a9f04d150d60413fdd27271586405fd85048481fc2ae25b6826cb2c947e4231dc7b9a0d02a9a03f88460bced3fef5d78f732684bd218a1954a4acfc237d79ccf397913ab6864cd8a07e275b82a8a72520624738368d1c5f7e0eaa2b445cf6159f2081d3483618f7fc7b16ec4e6e4d67ab5541bcda0ca1af40efd77ef8653e223191448631a8108c5e50e340cd405767ecf932c1015aa8856b834143dc81fa0e8b9d1d8c32278fca390f2ff08181df0b74e2d13c9b7b1d85543416a0dae3a77530b9cd1366213fcf3cd12a9cd3ae0a006d6b29b5ffc5cdc1ab24343e2ab882abfd719892fca5bf2134731332c5d3bef6c6e4013d84a853cb03d972146b655f0f8541bcd36c3c0c8a775bb606edfe50d07a5047fd0fe01eb125e83673930bc89e91609fd6dfe97132679374d3de4a0b3db8d3f76f31bed53e247da591401d508d65f9ee01d3511ee70e3644f3ab5d333ca7dbf737fe75217b4582d50d98b5d59098ea11627b7ed3e3e6ee3012eadd326cf74ec77192e98619427eb0591e949bf314db0fb932ed8be58258fb4f08e0ccd2cd18b997fb5cf50c90d5df66a9f3bb203bd22061956128b800e0157528d45c7f7208c65d0592ad846a711fa3c5601d81bb318a45cc1313b122d4361a7d7a954645b04667ff3f81d3366109772a41f66ece09eb93130abe04f2a51bb30e767dd37ec6ee6a342a4969b8b342f841193f4f6a9f0fac4611bc31b6cab1d25262feb31db0b8889b6f8d78be23f033994f2d3e18e00f3b0218101e1a7082782aa3680efc8502e1536c30c8c336b06ae936e2bcf9bbfb20dd514ed2867c03d4f44954867c97db35677d30760f37622b85089cc5d182a89e29ab0c6b9ef18138b16ab91d59c2312884172afa4874e6989172014168d3ed8db3d9522d6cbd631d581d166787c93209bec845d112e0cbd825f6df8b64363411270921837cfb2f9e7f2e74cdb9cd0d2b02058e5efd9583e2651239654b887ea36ce9537c392fc5dfca8c5a0facbe95b87dfc4232f229bd12e67937d32b7ffae2e837687d2d292c08ff6194a2256b17254748857c7e3c871c3fff380115e6f7faf435a430edf9f8a589f6711720cfc5cec6c8d0d94886a39bb9ac6c50b2e8ef6cf860415192ca4c1c3aaa97d36394021a62164d5a63975bcd84b8e6d74f361c17101e3808b4d8c31d1ee1a5cf3a2feda1ca2c0fd5a50edc9d95e09fb5158c9f9b0eb5e2c90a47deb0459cea593201ae7597e2e9245aa5848680f546256f3"
            ],
            [
                "0xd57bce545fb382c34570e5dfbf338f5e326d21bc67a4b34023d577585d72bfd7",
                null
            ],
            [
                "0xd57bce545fb382c34570e5dfbf338f5ea36180b5cfb9f6541f8849df92a6ec93",
                "0x00"
            ],
            [
                "0xd57bce545fb382c34570e5dfbf338f5ebddf84c5eb23e6f53af725880d8ffe90",
                null
            ],
            [
                "0xd5c41b52a371aa36c9254ce34324f2a53b996bb988ea8ee15bad3ffd2f68dbda",
                "0x00"
            ],
            [
                "0xf0c365c3cf59d671eb72da0e7a4113c49f1f0515f462cdcf84e0f1d6045dfcbb",
                "0x50defc5172010000"
            ],
            [
                "0xf0c365c3cf59d671eb72da0e7a4113c4bbd108c4899964f707fdaffb82636065",
                null
            ],
            [
                "0xf68f425cf5645aacb2ae59b51baed90420d49a14a763e1cbc887acd097f92014",
                "0x9501800300008203000082030000840300008503000086030000870300008703000089030000890300008b0300008b0300008d0300008d0300008f0300008f0300009103000092030000920300009403000094030000960300009603000098030000990300009a0300009b0300009b0300009d0300009d0300009f0300009f030000a1030000a2030000a3030000a4030000a5030000a6030000a6030000a8030000a8030000aa030000ab030000ac030000ad030000ae030000af030000b0030000b1030000b1030000b3030000b3030000b5030000b6030000b7030000b8030000b9030000ba030000ba030000bc030000bc030000be030000be030000c0030000c1030000c2030000c2030000c4030000c5030000c5030000c7030000c7030000c9030000c9030000cb030000cc030000cd030000ce030000cf030000d0030000d0030000d2030000d2030000d4030000d4030000d6030000d7030000d8030000d9030000da030000db030000db030000dd030000dd030000df030000e0030000e1030000e2030000e3030000e4030000e4030000"
            ],
            [
                "0xf68f425cf5645aacb2ae59b51baed9049b58374218f48eaf5bc23b7b3e7cf08a",
                "0xb3030000"
            ],
            [
                "0xf68f425cf5645aacb2ae59b51baed904b97380ce5f4e70fbf9d6b5866eb59527",
                "0x9501800300008203000082030000840300008503000086030000870300008703000089030000890300008b0300008b0300008d0300008d0300008f0300008f0300009103000092030000920300009403000094030000960300009603000098030000990300009a0300009b0300009b0300009d0300009d0300009f0300009f030000a1030000a2030000a3030000a4030000a5030000a6030000a6030000a8030000a8030000aa030000ab030000ac030000ad030000ae030000af030000b0030000b1030000b1030000b3030000b3030000b5030000b6030000b7030000b8030000b9030000ba030000ba030000bc030000bc030000be030000be030000c0030000c1030000c2030000c2030000c4030000c5030000c5030000c7030000c7030000c9030000c9030000cb030000cc030000cd030000ce030000cf030000d0030000d0030000d2030000d2030000d4030000d4030000d6030000d7030000d8030000d9030000da030000db030000db030000dd030000dd030000df030000e0030000e1030000e2030000e3030000e4030000e4030000"
            ]
        ],
        "offchainStorageDiff": [],
        "runtimeLogs": []
    }
}
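Because the output is plain JSON, it's easy to post-process. For example, this short sketch (assuming the ./polkadot-output.json path used above) counts how many storage keys the replayed block touched:

import { readFileSync } from 'node:fs';

// Load the run-block output produced above (path assumed from the example)
const output = JSON.parse(readFileSync('./polkadot-output.json', 'utf8'));

// Each storageDiff entry is a [key, newValue] pair; a null value means
// the key was removed from storage while executing the block
const diff = output.Call.storageDiff;
const removals = diff.filter(([, value]) => value === null).length;

console.log(`Block changed ${diff.length} storage keys (${removals} removals)`);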

XCM Testing


To test XCM (Cross-Consensus Messaging) messages between networks, you can fork multiple parachains and a relay chain locally using Chopsticks. The xcm subcommand accepts the following options:

  • relaychain - relay chain config file
  • parachain - parachain config file

For example, to fork Moonbeam, Astar, and Polkadot enabling XCM between them, you can use the following command:

npx @acala-network/chopsticks xcm \
--r polkadot \
--p moonbeam \
--p astar

After running it, you should see output similar to the following:

npx @acala-network/chopsticks xcm \
    --r polkadot \
    --p moonbeam \
    --p astar
[13:46:07.901] INFO: Loading config file https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/moonbeam.yml
    app: "chopsticks"
[13:46:12.631] INFO: Moonbeam RPC listening on port 8000
    app: "chopsticks"
[13:46:12.632] INFO: Loading config file https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/astar.yml
    app: "chopsticks"
chopsticks::executor TRACE: Calling Metadata_metadata
chopsticks::executor TRACE: Completed Metadata_metadata
[13:46:23.669] INFO: Astar RPC listening on port 8001
    app: "chopsticks"
[13:46:25.144] INFO (xcm): Connected parachains [2004,2006]
    app: "chopsticks"
[13:46:25.144] INFO: Loading config file https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot.yml
    app: "chopsticks"
chopsticks::executor TRACE: Calling Metadata_metadata
chopsticks::executor TRACE: Completed Metadata_metadata
[13:46:53.320] INFO: Polkadot RPC listening on port 8002
    app: "chopsticks"
[13:46:54.038] INFO (xcm): Connected relaychain 'Polkadot' with parachain 'Moonbeam'
    app: "chopsticks"
[13:46:55.028] INFO (xcm): Connected relaychain 'Polkadot' with parachain 'Astar'
    app: "chopsticks"

Now you can interact with your forked chains using the ports specified in the output.
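For example, using the ports from the output above (Moonbeam on 8000, Astar on 8001, Polkadot on 8002), a short Polkadot.js sketch can confirm that all three forks are live; adjust the ports if your own output differs:

import { ApiPromise, WsProvider } from '@polkadot/api';

// Ports taken from the example output above; check your own spawn output
const endpoints = [
  ['Moonbeam', 'ws://localhost:8000'],
  ['Astar', 'ws://localhost:8001'],
  ['Polkadot', 'ws://localhost:8002'],
];

async function main() {
  for (const [label, url] of endpoints) {
    const api = await ApiPromise.create({ provider: new WsProvider(url) });
    const header = await api.rpc.chain.getHeader();
    console.log(`${label}: at block #${header.number.toString()}`);
    await api.disconnect();
  }
}

main().catch(console.error);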

diff --git a/tutorials/polkadot-sdk/testing/index.html b/tutorials/polkadot-sdk/testing/index.html

Blockchain Testing Tutorials


Polkadot offers specialized tools that make it simple to create realistic testing environments, particularly for cross-chain interactions. These purpose-built tools enable developers to quickly spin up test networks that accurately simulate real-world scenarios. Learn to create controlled testing environments using powerful tools designed for Polkadot SDK development.


Get Started


Through these tutorials, you'll learn important testing techniques including:

  • Setting up local test environments
  • Spawning ephemeral testing networks
  • Forking live chains for testing
  • Simulating cross-chain interactions
  • Debugging blockchain behavior

Each tutorial provides step-by-step guidance for using these tools effectively in your development workflow.


In This Section

diff --git a/tutorials/polkadot-sdk/testing/spawn-basic-chain/index.html b/tutorials/polkadot-sdk/testing/spawn-basic-chain/index.html

Spawn a Basic Chain with Zombienet


Introduction


Zombienet simplifies blockchain development by enabling developers to create temporary, customizable networks for testing and validation. These ephemeral chains are ideal for experimenting with configurations, debugging applications, and validating functionality in a controlled environment.


In this guide, you'll learn how to define a basic network configuration file, spawn a blockchain network using Zombienet's CLI, interact with nodes, and monitor network activity using tools like Polkadot.js Apps and Prometheus.


By the end of this tutorial, you'll be equipped to deploy and test your own blockchain networks, paving the way for more advanced setups and use cases.


Prerequisites


To successfully complete this tutorial, ensure you've first installed Zombienet.

Define the Network


Zombienet uses a configuration file to define the ephemeral network that will be spawned. Follow these steps to create and define the configuration file:

  1. Create a file named spawn-a-basic-network.toml:

     touch spawn-a-basic-network.toml

  2. Add the following code to the file you just created:

     # spawn-a-basic-network.toml
     [settings]
     timeout = 120

     [relaychain]

     [[relaychain.nodes]]
     name = "alice"
     validator = true

     [[relaychain.nodes]]
     name = "bob"
     validator = true

     [[parachains]]
     id = 100

     [parachains.collator]
     name = "collator01"

This configuration file defines a network with the following chains:

  • relaychain - with two nodes named alice and bob
  • parachain - with a collator named collator01

The settings section also defines a timeout of 120 seconds for the network to be ready.


Spawn the Network


To spawn the network, run the following command:

zombienet -p native spawn spawn-a-basic-network.toml

This command will spawn the network defined in the spawn-a-basic-network.toml configuration file. The -p native flag specifies that the network will be spawned using the native provider.


If successful, you will see the following output:

zombienet -p native spawn spawn-a-basic-network.toml

Network launched 🚀🚀

Namespace: zombie-75a01b93c92d571f6198a67bcb380fcd
Provider: native

Node Information
Name: alice
Direct Link: https://polkadot.js.org/apps/?rpc=ws://127.0.0.1:55308#explorer
Prometheus Link: http://127.0.0.1:55310/metrics
Log Cmd: tail -f /tmp/zombie-794af21178672e1ff32c612c3c7408dc_-2397036-6717MXDxcS55/alice.log

Node Information
Name: bob
Direct Link: https://polkadot.js.org/apps/?rpc=ws://127.0.0.1:55312#explorer
Prometheus Link: http://127.0.0.1:50634/metrics
Log Cmd: tail -f /tmp/zombie-794af21178672e1ff32c612c3c7408dc_-2397036-6717MXDxcS55/bob.log

Node Information
Name: collator01
Direct Link: https://polkadot.js.org/apps/?rpc=ws://127.0.0.1:55316#explorer
Prometheus Link: http://127.0.0.1:55318/metrics
Log Cmd: tail -f /tmp/zombie-794af21178672e1ff32c612c3c7408dc_-2397036-6717MXDxcS55/collator01.log

Parachain ID: 100
ChainSpec Path: /tmp/zombie-794af21178672e1ff32c612c3c7408dc_-2397036-6717MXDxcS55/100-rococo-local.json
Note

If the IPs and ports aren't explicitly defined in the configuration file, they may change each time the network is started, causing the links provided in the output to differ from the example.

Interact with the Spawned Network


After the network is launched, you can interact with it using Polkadot.js Apps. To do so, open your browser and use the Direct Link URLs listed in the output.


Connect to the Nodes


Use port 55308 to interact with the same alice node used for this tutorial. Ports can change from spawn to spawn, so be sure to locate the link in your own output to ensure you are accessing the correct port.


If you want to interact with the nodes more programmatically, you can also use the Polkadot.js API. For example, the following code snippet shows how to connect to the alice node using the Polkadot.js API and log some information about the chain and node:

import { ApiPromise, WsProvider } from '@polkadot/api';

async function main() {
  // Port 55308 matches alice's Direct Link in the example output
  const wsProvider = new WsProvider('ws://127.0.0.1:55308');
  const api = await ApiPromise.create({ provider: wsProvider });

  // Retrieve the chain & node information via rpc calls
  const [chain, nodeName, nodeVersion] = await Promise.all([
    api.rpc.system.chain(),
    api.rpc.system.name(),
    api.rpc.system.version(),
  ]);

  console.log(
    `You are connected to chain ${chain} using ${nodeName} v${nodeVersion}`,
  );
}

main()
  .catch(console.error)
  .finally(() => process.exit());

Both methods allow you to interact easily with the network and its nodes.


Check Metrics


You can also check the metrics of the nodes by accessing the links provided in the output as Prometheus Link. Prometheus is a monitoring and alerting toolkit that collects metrics from the nodes. By accessing the provided links, you can see the metrics of the nodes in a web interface. So, for example, the following image shows the Prometheus metrics for Bob's node from the Zombienet test:

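You can also query a metrics endpoint from the command line. For instance, using bob's Prometheus link from the example output above (your port will differ), the following filters for the node's reported block height via the standard substrate_block_height metric:

# Port 50634 is bob's Prometheus port in the example output above;
# substitute the port shown in your own spawn output.
curl -s http://127.0.0.1:50634/metrics | grep substrate_block_height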

Check Logs


To view individual node logs, locate the Log Cmd command in Zombienet's startup output. For example, to see what the alice node is doing, find the log command that references alice.log in its file path. Note that Zombienet will show you the correct path for your instance when it starts up, so use that path rather than copying from the example below:

tail -f /tmp/zombie-794af21178672e1ff32c612c3c7408dc_-2397036-6717MXDxcS55/alice.log

After running this command, you will see the logs of the alice node in real-time, which can be useful for debugging purposes. The logs of the bob and collator01 nodes can be checked similarly.

diff --git a/variables.yml b/variables.yml

# Variables that can be reused should be added to this file
dependencies:
  open_zeppelin:
    repository_url: https://github.com/OpenZeppelin/polkadot-runtime-templates
    version: v1.0.0
  chopsticks:
    repository_url: https://github.com/AcalaNetwork/chopsticks
    version: 0.13.1
  zombienet:
    repository_url: https://github.com/paritytech/zombienet
    version: v1.3.106
    architecture: macos-arm64
  asset_transfer_api:
    repository_url: https://github.com/paritytech/asset-transfer-api
    version: v0.3.1
  polkadot_sdk_solochain_template:
    repository_url: https://github.com/paritytech/polkadot-sdk-solochain-template
    version: v0.0.2
  srtool:
    repository_url: https://github.com/paritytech/srtool
    version: v0.16.0
    docker_image_name: paritytech/srtool
    docker_image_version: 1.62.0