diff --git a/.nojekyll b/.nojekyll
new file mode 100644
index 00000000..e69de29b
diff --git a/404.html b/404.html
new file mode 100644
index 00000000..4eada336
--- /dev/null
+++ b/404.html
@@ -0,0 +1,1518 @@
[1,518 added lines of generated Material for MkDocs HTML: the themed "AI on OpenShift" error page whose only visible text is "404 - Not found"; the markup was lost in extraction and is not reproduced here]
\ No newline at end of file
diff --git a/CNAME b/CNAME
new file mode 100644
index 00000000..9f3a0846
--- /dev/null
+++ b/CNAME
@@ -0,0 +1 @@
+ai-on-openshift.io
\ No newline at end of file
diff --git a/assets/ai-on-openshift-title.svg b/assets/ai-on-openshift-title.svg
new file mode 100644
index 00000000..ffb9735d
--- /dev/null
+++ b/assets/ai-on-openshift-title.svg
@@ -0,0 +1,697 @@
[697 added lines of SVG markup for the site title graphic; the element content was lost in extraction and is not reproduced here]
diff --git a/assets/home-robot.png b/assets/home-robot.png
new file mode 100644
index 00000000..0e7123cb
Binary files /dev/null and b/assets/home-robot.png differ
diff --git a/assets/images/favicon.png b/assets/images/favicon.png
new file mode 100644
index 00000000..1cf13b9f
Binary files /dev/null and b/assets/images/favicon.png differ
diff --git a/assets/images/social/demos/credit-card-fraud-detection-mlflow/credit-card-fraud.png b/assets/images/social/demos/credit-card-fraud-detection-mlflow/credit-card-fraud.png
new file mode 100644
index 00000000..66105f6f
Binary files /dev/null and b/assets/images/social/demos/credit-card-fraud-detection-mlflow/credit-card-fraud.png differ
diff --git a/assets/images/social/demos/financial-fraud-detection/financial-fraud-detection.png b/assets/images/social/demos/financial-fraud-detection/financial-fraud-detection.png
new file mode 100644
index 00000000..72742fbc
Binary files /dev/null and b/assets/images/social/demos/financial-fraud-detection/financial-fraud-detection.png differ
diff --git a/assets/images/social/demos/llm-chat-doc/llm-chat-doc.png b/assets/images/social/demos/llm-chat-doc/llm-chat-doc.png
new file mode 100644
index 00000000..c149efa1
Binary files /dev/null and b/assets/images/social/demos/llm-chat-doc/llm-chat-doc.png differ
diff --git a/assets/images/social/demos/retail-object-detection/retail-object-detection.png b/assets/images/social/demos/retail-object-detection/retail-object-detection.png
new file mode 100644
index 00000000..a3ca2654
Binary files /dev/null and b/assets/images/social/demos/retail-object-detection/retail-object-detection.png differ
diff --git a/assets/images/social/demos/robotics-edge/robotics-edge.png b/assets/images/social/demos/robotics-edge/robotics-edge.png
new file mode 100644
index 00000000..907e7511
Binary files /dev/null and b/assets/images/social/demos/robotics-edge/robotics-edge.png differ
diff --git a/assets/images/social/demos/smart-city/smart-city.png b/assets/images/social/demos/smart-city/smart-city.png
new file mode 100644
index 00000000..4d6e7451
Binary files /dev/null and b/assets/images/social/demos/smart-city/smart-city.png differ
diff --git a/assets/images/social/demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow.png b/assets/images/social/demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow.png
new file mode 100644
index 00000000..775aebb8
Binary files /dev/null and b/assets/images/social/demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow.png differ
diff --git a/assets/images/social/demos/water-pump-failure-prediction/water-pump-failure-prediction.png b/assets/images/social/demos/water-pump-failure-prediction/water-pump-failure-prediction.png
new file mode 100644
index 00000000..e947998b
Binary files /dev/null and b/assets/images/social/demos/water-pump-failure-prediction/water-pump-failure-prediction.png differ
diff --git a/assets/images/social/demos/xray-pipeline/xray-pipeline.png b/assets/images/social/demos/xray-pipeline/xray-pipeline.png
new file mode 100644
index 00000000..329a6e88
Binary files /dev/null and b/assets/images/social/demos/xray-pipeline/xray-pipeline.png differ
diff --git a/assets/images/social/demos/yolov5-training-serving/yolov5-training-serving.png b/assets/images/social/demos/yolov5-training-serving/yolov5-training-serving.png
new file mode 100644
index 00000000..c095c08c
Binary files /dev/null and b/assets/images/social/demos/yolov5-training-serving/yolov5-training-serving.png differ
diff --git a/assets/images/social/getting-started/opendatahub.png b/assets/images/social/getting-started/opendatahub.png
new file mode 100644
index 00000000..0afe5ac3
Binary files /dev/null and b/assets/images/social/getting-started/opendatahub.png differ
diff --git a/assets/images/social/getting-started/openshift-data-science.png b/assets/images/social/getting-started/openshift-data-science.png
new file mode 100644
index 00000000..3fe41154
Binary files /dev/null and b/assets/images/social/getting-started/openshift-data-science.png differ
diff --git a/assets/images/social/getting-started/openshift.png b/assets/images/social/getting-started/openshift.png
new file mode 100644
index 00000000..af0909c3
Binary files /dev/null and b/assets/images/social/getting-started/openshift.png differ
diff --git a/assets/images/social/getting-started/why-this-site.png b/assets/images/social/getting-started/why-this-site.png
new file mode 100644
index 00000000..8260faeb
Binary files /dev/null and b/assets/images/social/getting-started/why-this-site.png differ
diff --git a/assets/images/social/index.png b/assets/images/social/index.png
new file mode 100644
index 00000000..26bb281a
Binary files /dev/null and b/assets/images/social/index.png differ
diff --git a/assets/images/social/odh-rhods/configuration.png b/assets/images/social/odh-rhods/configuration.png
new file mode 100644
index 00000000..5e59b39e
Binary files /dev/null and b/assets/images/social/odh-rhods/configuration.png differ
diff --git a/assets/images/social/odh-rhods/custom-notebooks.png b/assets/images/social/odh-rhods/custom-notebooks.png
new file mode 100644
index 00000000..edf15bbf
Binary files /dev/null and b/assets/images/social/odh-rhods/custom-notebooks.png differ
diff --git a/assets/images/social/odh-rhods/custom-runtime-triton.png b/assets/images/social/odh-rhods/custom-runtime-triton.png
new file mode 100644
index 00000000..a7da897b
Binary files /dev/null and b/assets/images/social/odh-rhods/custom-runtime-triton.png differ
diff --git a/assets/images/social/odh-rhods/nvidia-gpus.png b/assets/images/social/odh-rhods/nvidia-gpus.png
new file mode 100644
index 00000000..58c58abc
Binary files /dev/null and b/assets/images/social/odh-rhods/nvidia-gpus.png differ
diff --git a/assets/images/social/odh-rhods/openshift-group-management.png b/assets/images/social/odh-rhods/openshift-group-management.png
new file mode 100644
index 00000000..373c1731
Binary files /dev/null and b/assets/images/social/odh-rhods/openshift-group-management.png differ
diff --git a/assets/images/social/patterns/bucket-notifications/bucket-notifications.png b/assets/images/social/patterns/bucket-notifications/bucket-notifications.png
new file mode 100644
index 00000000..f614c9e6
Binary files /dev/null and b/assets/images/social/patterns/bucket-notifications/bucket-notifications.png differ
diff --git a/assets/images/social/patterns/kafka/kafka-to-object-storage/kafka-to-object-storage.png b/assets/images/social/patterns/kafka/kafka-to-object-storage/kafka-to-object-storage.png
new file mode 100644
index 00000000..f500469f
Binary files /dev/null and b/assets/images/social/patterns/kafka/kafka-to-object-storage/kafka-to-object-storage.png differ
diff --git a/assets/images/social/patterns/kafka/kafka-to-serverless/kafka-to-serverless.png b/assets/images/social/patterns/kafka/kafka-to-serverless/kafka-to-serverless.png
new file mode 100644
index 00000000..8e3f4ea2
Binary files /dev/null and b/assets/images/social/patterns/kafka/kafka-to-serverless/kafka-to-serverless.png differ
diff --git a/assets/images/social/patterns/starproxy/starproxy.png b/assets/images/social/patterns/starproxy/starproxy.png
new file mode 100644
index 00000000..1000e7dd
Binary files /dev/null and b/assets/images/social/patterns/starproxy/starproxy.png differ
diff --git a/assets/images/social/tools-and-applications/airflow/airflow.png b/assets/images/social/tools-and-applications/airflow/airflow.png
new file mode 100644
index 00000000..6a68bd1a
Binary files /dev/null and b/assets/images/social/tools-and-applications/airflow/airflow.png differ
diff --git a/assets/images/social/tools-and-applications/apache-nifi/apache-nifi.png b/assets/images/social/tools-and-applications/apache-nifi/apache-nifi.png
new file mode 100644
index 00000000..d6910c88
Binary files /dev/null and b/assets/images/social/tools-and-applications/apache-nifi/apache-nifi.png differ
diff --git a/assets/images/social/tools-and-applications/apache-spark/apache-spark.png b/assets/images/social/tools-and-applications/apache-spark/apache-spark.png
new file mode 100644
index 00000000..39ad4d17
Binary files /dev/null and b/assets/images/social/tools-and-applications/apache-spark/apache-spark.png differ
diff --git a/assets/images/social/tools-and-applications/minio/minio.png b/assets/images/social/tools-and-applications/minio/minio.png
new file mode 100644
index 00000000..78cbfe5d
Binary files /dev/null and b/assets/images/social/tools-and-applications/minio/minio.png differ
diff --git a/assets/images/social/tools-and-applications/mlflow/mlflow.png b/assets/images/social/tools-and-applications/mlflow/mlflow.png
new file mode 100644
index 00000000..207118fe
Binary files /dev/null and b/assets/images/social/tools-and-applications/mlflow/mlflow.png differ
diff --git a/assets/images/social/tools-and-applications/rclone/rclone.png b/assets/images/social/tools-and-applications/rclone/rclone.png
new file mode 100644
index 00000000..f1838a52
Binary files /dev/null and b/assets/images/social/tools-and-applications/rclone/rclone.png differ
diff --git a/assets/images/social/tools-and-applications/riva/riva.png b/assets/images/social/tools-and-applications/riva/riva.png
new file mode 100644
index 00000000..c8b3ba8e
Binary files /dev/null and b/assets/images/social/tools-and-applications/riva/riva.png differ
diff --git a/assets/images/social/whats-new/whats-new.png b/assets/images/social/whats-new/whats-new.png
new file mode 100644
index 00000000..e9e1d736
Binary files /dev/null and b/assets/images/social/whats-new/whats-new.png differ
diff --git a/assets/javascripts/bundle.aecac24b.min.js b/assets/javascripts/bundle.aecac24b.min.js
new file mode 100644
index 00000000..464603d8
--- /dev/null
+++ b/assets/javascripts/bundle.aecac24b.min.js
@@ -0,0 +1,29 @@
[29 added lines of minified JavaScript: the generated Material for MkDocs theme bundle, wrapping the focus-visible polyfill, clipboard.js v2.0.11 (MIT, © Zeno Rocha), escape-html (MIT), and RxJS-based wiring for search, instant navigation, tooltips and annotations, content tabs, Mermaid rendering, palette switching, and the version selector; the minified payload is truncated in this excerpt and not reproduced]
x,n=Ko(e.parentElement).pipe(M(Boolean)),i=e.parentElement,s=W(":scope > :first-child",e),a=W(":scope > :last-child",e);We("search").subscribe(l=>a.setAttribute("role",l?"list":"presentation")),o.pipe(ne(r),$r(t.pipe($e(Mt)))).subscribe(([{items:l},{value:f}])=>{switch(l.length){case 0:s.textContent=f.length?be("search.result.none"):be("search.result.placeholder");break;case 1:s.textContent=be("search.result.one");break;default:let u=tr(l.length);s.textContent=be("search.result.other",u)}});let c=o.pipe(w(()=>a.innerHTML=""),E(({items:l})=>_(j(...l.slice(0,10)),j(...l.slice(10)).pipe(Le(4),Ir(n),E(([f])=>f)))),m(fn),le());return c.subscribe(l=>a.appendChild(l)),c.pipe(se(l=>{let f=ce("details",l);return typeof f=="undefined"?L:h(f,"toggle").pipe(Y(o),m(()=>f))})).subscribe(l=>{l.open===!1&&l.offsetTop<=i.scrollTop&&i.scrollTo({top:l.offsetTop})}),t.pipe(M(pr),m(({data:l})=>l)).pipe(w(l=>o.next(l)),A(()=>o.complete()),m(l=>R({ref:e},l)))}function Ia(e,{query$:t}){return t.pipe(m(({value:r})=>{let o=pe();return o.hash="",r=r.replace(/\s+/g,"+").replace(/&/g,"%26").replace(/=/g,"%3D"),o.search=`q=${r}`,{url:o}}))}function Yn(e,t){let r=new x,o=r.pipe(Z(),re(!0));return r.subscribe(({url:n})=>{e.setAttribute("data-clipboard-text",e.href),e.href=`${n}`}),h(e,"click").pipe(Y(o)).subscribe(n=>n.preventDefault()),Ia(e,t).pipe(w(n=>r.next(n)),A(()=>r.complete()),m(n=>R({ref:e},n)))}function Bn(e,{worker$:t,keyboard$:r}){let o=new x,n=Ee("search-query"),i=_(h(n,"keydown"),h(n,"focus")).pipe(Se(ae),m(()=>n.value),X());return o.pipe(Ge(i),m(([{suggest:a},c])=>{let p=c.split(/([\s-]+)/);if(a!=null&&a.length&&p[p.length-1]){let l=a[a.length-1];l.startsWith(p[p.length-1])&&(p[p.length-1]=l)}else p.length=0;return p})).subscribe(a=>e.innerHTML=a.join("").replace(/\s/g," ")),r.pipe(M(({mode:a})=>a==="search")).subscribe(a=>{switch(a.type){case"ArrowRight":e.innerText.length&&n.selectionStart===n.value.length&&(n.value=e.innerText);break}}),t.pipe(M(pr),m(({data:a})=>a)).pipe(w(a=>o.next(a)),A(()=>o.complete()),m(()=>({ref:e})))}function Gn(e,{index$:t,keyboard$:r}){let o=me();try{let n=zn(o.search,t),i=Ee("search-query",e),s=Ee("search-result",e);h(e,"click").pipe(M(({target:c})=>c instanceof Element&&!!c.closest("a"))).subscribe(()=>Ke("search",!1)),r.pipe(M(({mode:c})=>c==="search")).subscribe(c=>{let p=Re();switch(c.type){case"Enter":if(p===i){let l=new Map;for(let f of q(":first-child [href]",s)){let u=f.firstElementChild;l.set(f,parseFloat(u.getAttribute("data-md-score")))}if(l.size){let[[f]]=[...l].sort(([,u],[,d])=>d-u);f.click()}c.claim()}break;case"Escape":case"Tab":Ke("search",!1),i.blur();break;case"ArrowUp":case"ArrowDown":if(typeof p=="undefined")i.focus();else{let l=[i,...q(":not(details) > [href], summary, details[open] [href]",s)],f=Math.max(0,(Math.max(0,l.indexOf(p))+l.length+(c.type==="ArrowUp"?-1:1))%l.length);l[f].focus()}c.claim();break;default:i!==Re()&&i.focus()}}),r.pipe(M(({mode:c})=>c==="global")).subscribe(c=>{switch(c.type){case"f":case"s":case"/":i.focus(),i.select(),c.claim();break}});let a=Kn(i,{worker$:n});return _(a,Qn(s,{worker$:n,query$:a})).pipe(qe(...oe("search-share",e).map(c=>Yn(c,{query$:a})),...oe("search-suggest",e).map(c=>Bn(c,{worker$:n,keyboard$:r}))))}catch(n){return e.hidden=!0,Ve}}function Jn(e,{index$:t,location$:r}){return B([t,r.pipe(V(pe()),M(o=>!!o.searchParams.get("h")))]).pipe(m(([o,n])=>Vn(o.config)(n.searchParams.get("h"))),m(o=>{var s;let n=new Map,i=document.createNodeIterator(e,NodeFilter.SHOW_TEXT);for(let 
a=i.nextNode();a;a=i.nextNode())if((s=a.parentElement)!=null&&s.offsetHeight){let c=a.textContent,p=o(c);p.length>c.length&&n.set(a,p)}for(let[a,c]of n){let{childNodes:p}=T("span",null,c);a.replaceWith(...Array.from(p))}return{ref:e,nodes:n}}))}function Fa(e,{viewport$:t,main$:r}){let o=e.closest(".md-grid"),n=o.offsetTop-o.parentElement.offsetTop;return B([r,t]).pipe(m(([{offset:i,height:s},{offset:{y:a}}])=>(s=s+Math.min(n,Math.max(0,a-i))-n,{height:s,locked:a>=i+n})),X((i,s)=>i.height===s.height&&i.locked===s.locked))}function Kr(e,o){var n=o,{header$:t}=n,r=eo(n,["header$"]);let i=W(".md-sidebar__scrollwrap",e),{y:s}=Je(i);return H(()=>{let a=new x,c=a.pipe(Z(),re(!0)),p=a.pipe(Ce(0,Oe));return p.pipe(ne(t)).subscribe({next([{height:l},{height:f}]){i.style.height=`${l-2*s}px`,e.style.top=`${f}px`},complete(){i.style.height="",e.style.top=""}}),p.pipe($e()).subscribe(()=>{for(let l of q(".md-nav__link--active[href]",e)){if(!l.clientHeight)continue;let f=l.closest(".md-sidebar__scrollwrap");if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:d}=he(f);f.scrollTo({top:u-d/2})}}}),ge(q("label[tabindex]",e)).pipe(se(l=>h(l,"click").pipe(Se(ae),m(()=>l),Y(c)))).subscribe(l=>{let f=W(`[id="${l.htmlFor}"]`);W(`[aria-labelledby="${l.id}"]`).setAttribute("aria-expanded",`${f.checked}`)}),Fa(e,r).pipe(w(l=>a.next(l)),A(()=>a.complete()),m(l=>R({ref:e},l)))})}function Xn(e,t){if(typeof t!="undefined"){let r=`https://api.github.com/repos/${e}/${t}`;return St(Ue(`${r}/releases/latest`).pipe(de(()=>L),m(o=>({version:o.tag_name})),He({})),Ue(r).pipe(de(()=>L),m(o=>({stars:o.stargazers_count,forks:o.forks_count})),He({}))).pipe(m(([o,n])=>R(R({},o),n)))}else{let r=`https://api.github.com/users/${e}`;return Ue(r).pipe(m(o=>({repositories:o.public_repos})),He({}))}}function Zn(e,t){let r=`https://${e}/api/v4/projects/${encodeURIComponent(t)}`;return Ue(r).pipe(de(()=>L),m(({star_count:o,forks_count:n})=>({stars:o,forks:n})),He({}))}function ei(e){let t=e.match(/^.+github\.com\/([^/]+)\/?([^/]+)?/i);if(t){let[,r,o]=t;return Xn(r,o)}if(t=e.match(/^.+?([^/]*gitlab[^/]+)\/(.+?)\/?$/i),t){let[,r,o]=t;return Zn(r,o)}return L}var ja;function Wa(e){return ja||(ja=H(()=>{let t=__md_get("__source",sessionStorage);if(t)return j(t);if(oe("consent").length){let o=__md_get("__consent");if(!(o&&o.github))return L}return ei(e.href).pipe(w(o=>__md_set("__source",o,sessionStorage)))}).pipe(de(()=>L),M(t=>Object.keys(t).length>0),m(t=>({facts:t})),J(1)))}function ti(e){let t=W(":scope > :last-child",e);return H(()=>{let r=new x;return r.subscribe(({facts:o})=>{t.appendChild(un(o)),t.classList.add("md-source__repository--active")}),Wa(e).pipe(w(o=>r.next(o)),A(()=>r.complete()),m(o=>R({ref:e},o)))})}function Ua(e,{viewport$:t,header$:r}){return ye(document.body).pipe(E(()=>ar(e,{header$:r,viewport$:t})),m(({offset:{y:o}})=>({hidden:o>=10})),ee("hidden"))}function ri(e,t){return H(()=>{let r=new x;return r.subscribe({next({hidden:o}){e.hidden=o},complete(){e.hidden=!1}}),(te("navigation.tabs.sticky")?j({hidden:!1}):Ua(e,t)).pipe(w(o=>r.next(o)),A(()=>r.complete()),m(o=>R({ref:e},o)))})}function Na(e,{viewport$:t,header$:r}){let o=new Map,n=q("[href^=\\#]",e);for(let a of n){let c=decodeURIComponent(a.hash.substring(1)),p=ce(`[id="${c}"]`);typeof p!="undefined"&&o.set(a,p)}let i=r.pipe(ee("height"),m(({height:a})=>{let c=Ee("main"),p=W(":scope > :first-child",c);return a+.8*(p.offsetTop-c.offsetTop)}),le());return ye(document.body).pipe(ee("height"),E(a=>H(()=>{let c=[];return 
j([...o].reduce((p,[l,f])=>{for(;c.length&&o.get(c[c.length-1]).tagName>=f.tagName;)c.pop();let u=f.offsetTop;for(;!u&&f.parentElement;)f=f.parentElement,u=f.offsetTop;let d=f.offsetParent;for(;d;d=d.offsetParent)u+=d.offsetTop;return p.set([...c=[...c,l]].reverse(),u)},new Map))}).pipe(m(c=>new Map([...c].sort(([,p],[,l])=>p-l))),Ge(i),E(([c,p])=>t.pipe(kr(([l,f],{offset:{y:u},size:d})=>{let v=u+d.height>=Math.floor(a.height);for(;f.length;){let[,b]=f[0];if(b-p=u&&!v)f=[l.pop(),...f];else break}return[l,f]},[[],[...c]]),X((l,f)=>l[0]===f[0]&&l[1]===f[1])))))).pipe(m(([a,c])=>({prev:a.map(([p])=>p),next:c.map(([p])=>p)})),V({prev:[],next:[]}),Le(2,1),m(([a,c])=>a.prev.length{let i=new x,s=i.pipe(Z(),re(!0));if(i.subscribe(({prev:a,next:c})=>{for(let[p]of c)p.classList.remove("md-nav__link--passed"),p.classList.remove("md-nav__link--active");for(let[p,[l]]of a.entries())l.classList.add("md-nav__link--passed"),l.classList.toggle("md-nav__link--active",p===a.length-1)}),te("toc.follow")){let a=_(t.pipe(ke(1),m(()=>{})),t.pipe(ke(250),m(()=>"smooth")));i.pipe(M(({prev:c})=>c.length>0),Ge(o.pipe(Se(ae))),ne(a)).subscribe(([[{prev:c}],p])=>{let[l]=c[c.length-1];if(l.offsetHeight){let f=zo(l);if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:d}=he(f);f.scrollTo({top:u-d/2,behavior:p})}}})}return te("navigation.tracking")&&t.pipe(Y(s),ee("offset"),ke(250),je(1),Y(n.pipe(je(1))),Tt({delay:250}),ne(i)).subscribe(([,{prev:a}])=>{let c=pe(),p=a[a.length-1];if(p&&p.length){let[l]=p,{hash:f}=new URL(l.href);c.hash!==f&&(c.hash=f,history.replaceState({},"",`${c}`))}else c.hash="",history.replaceState({},"",`${c}`)}),Na(e,{viewport$:t,header$:r}).pipe(w(a=>i.next(a)),A(()=>i.complete()),m(a=>R({ref:e},a)))})}function Da(e,{viewport$:t,main$:r,target$:o}){let n=t.pipe(m(({offset:{y:s}})=>s),Le(2,1),m(([s,a])=>s>a&&a>0),X()),i=r.pipe(m(({active:s})=>s));return B([i,n]).pipe(m(([s,a])=>!(s&&a)),X(),Y(o.pipe(je(1))),re(!0),Tt({delay:250}),m(s=>({hidden:s})))}function ni(e,{viewport$:t,header$:r,main$:o,target$:n}){let i=new x,s=i.pipe(Z(),re(!0));return i.subscribe({next({hidden:a}){e.hidden=a,a?(e.setAttribute("tabindex","-1"),e.blur()):e.removeAttribute("tabindex")},complete(){e.style.top="",e.hidden=!0,e.removeAttribute("tabindex")}}),r.pipe(Y(s),ee("height")).subscribe(({height:a})=>{e.style.top=`${a+16}px`}),h(e,"click").subscribe(a=>{a.preventDefault(),window.scrollTo({top:0})}),Da(e,{viewport$:t,main$:o,target$:n}).pipe(w(a=>i.next(a)),A(()=>i.complete()),m(a=>R({ref:e},a)))}function ii({document$:e,tablet$:t}){e.pipe(E(()=>q(".md-toggle--indeterminate")),w(r=>{r.indeterminate=!0,r.checked=!1}),se(r=>h(r,"change").pipe(Rr(()=>r.classList.contains("md-toggle--indeterminate")),m(()=>r))),ne(t)).subscribe(([r,o])=>{r.classList.remove("md-toggle--indeterminate"),o&&(r.checked=!1)})}function Va(){return/(iPad|iPhone|iPod)/.test(navigator.userAgent)}function ai({document$:e}){e.pipe(E(()=>q("[data-md-scrollfix]")),w(t=>t.removeAttribute("data-md-scrollfix")),M(Va),se(t=>h(t,"touchstart").pipe(m(()=>t)))).subscribe(t=>{let r=t.scrollTop;r===0?t.scrollTop=1:r+t.offsetHeight===t.scrollHeight&&(t.scrollTop=r-1)})}function si({viewport$:e,tablet$:t}){B([We("search"),t]).pipe(m(([r,o])=>r&&!o),E(r=>j(r).pipe(ze(r?400:100))),ne(e)).subscribe(([r,{offset:{y:o}}])=>{if(r)document.body.setAttribute("data-md-scrolllock",""),document.body.style.top=`-${o}px`;else{let 
n=-1*parseInt(document.body.style.top,10);document.body.removeAttribute("data-md-scrolllock"),document.body.style.top="",n&&window.scrollTo(0,n)}})}Object.entries||(Object.entries=function(e){let t=[];for(let r of Object.keys(e))t.push([r,e[r]]);return t});Object.values||(Object.values=function(e){let t=[];for(let r of Object.keys(e))t.push(e[r]);return t});typeof Element!="undefined"&&(Element.prototype.scrollTo||(Element.prototype.scrollTo=function(e,t){typeof e=="object"?(this.scrollLeft=e.left,this.scrollTop=e.top):(this.scrollLeft=e,this.scrollTop=t)}),Element.prototype.replaceWith||(Element.prototype.replaceWith=function(...e){let t=this.parentNode;if(t){e.length===0&&t.removeChild(this);for(let r=e.length-1;r>=0;r--){let o=e[r];typeof o=="string"?o=document.createTextNode(o):o.parentNode&&o.parentNode.removeChild(o),r?t.insertBefore(this.previousSibling,o):t.replaceChild(o,this)}}}));function za(){return location.protocol==="file:"?ht(`${new URL("search/search_index.js",Qr.base)}`).pipe(m(()=>__index),J(1)):Ue(new URL("search/search_index.json",Qr.base))}document.documentElement.classList.remove("no-js");document.documentElement.classList.add("js");var nt=Uo(),_t=Bo(),gt=Jo(_t),Yr=Yo(),Te=nn(),lr=Fr("(min-width: 960px)"),pi=Fr("(min-width: 1220px)"),li=Xo(),Qr=me(),mi=document.forms.namedItem("search")?za():Ve,Br=new x;Fn({alert$:Br});var Gr=new x;te("navigation.instant")&&Wn({location$:_t,viewport$:Te,progress$:Gr}).subscribe(nt);var ci;((ci=Qr.version)==null?void 0:ci.provider)==="mike"&&qn({document$:nt});_(_t,gt).pipe(ze(125)).subscribe(()=>{Ke("drawer",!1),Ke("search",!1)});Yr.pipe(M(({mode:e})=>e==="global")).subscribe(e=>{switch(e.type){case"p":case",":let t=ce("link[rel=prev]");typeof t!="undefined"&&ot(t);break;case"n":case".":let r=ce("link[rel=next]");typeof r!="undefined"&&ot(r);break;case"Enter":let o=Re();o instanceof HTMLLabelElement&&o.click()}});ii({document$:nt,tablet$:lr});ai({document$:nt});si({viewport$:Te,tablet$:lr});var Xe=kn(Ee("header"),{viewport$:Te}),Lt=nt.pipe(m(()=>Ee("main")),E(e=>Rn(e,{viewport$:Te,header$:Xe})),J(1)),qa=_(...oe("consent").map(e=>cn(e,{target$:gt})),...oe("dialog").map(e=>Cn(e,{alert$:Br})),...oe("header").map(e=>Hn(e,{viewport$:Te,header$:Xe,main$:Lt})),...oe("palette").map(e=>Pn(e)),...oe("progress").map(e=>In(e,{progress$:Gr})),...oe("search").map(e=>Gn(e,{index$:mi,keyboard$:Yr})),...oe("source").map(e=>ti(e))),Ka=H(()=>_(...oe("announce").map(e=>sn(e)),...oe("content").map(e=>An(e,{viewport$:Te,target$:gt,print$:li})),...oe("content").map(e=>te("search.highlight")?Jn(e,{index$:mi,location$:_t}):L),...oe("header-title").map(e=>$n(e,{viewport$:Te,header$:Xe})),...oe("sidebar").map(e=>e.getAttribute("data-md-type")==="navigation"?jr(pi,()=>Kr(e,{viewport$:Te,header$:Xe,main$:Lt})):jr(lr,()=>Kr(e,{viewport$:Te,header$:Xe,main$:Lt}))),...oe("tabs").map(e=>ri(e,{viewport$:Te,header$:Xe})),...oe("toc").map(e=>oi(e,{viewport$:Te,header$:Xe,main$:Lt,target$:gt})),...oe("top").map(e=>ni(e,{viewport$:Te,header$:Xe,main$:Lt,target$:gt})))),fi=nt.pipe(E(()=>Ka),qe(qa),J(1));fi.subscribe();window.document$=nt;window.location$=_t;window.target$=gt;window.keyboard$=Yr;window.viewport$=Te;window.tablet$=lr;window.screen$=pi;window.print$=li;window.alert$=Br;window.progress$=Gr;window.component$=fi;})(); +//# sourceMappingURL=bundle.aecac24b.min.js.map + diff --git a/assets/javascripts/bundle.aecac24b.min.js.map b/assets/javascripts/bundle.aecac24b.min.js.map new file mode 100644 index 00000000..b1534de5 --- /dev/null +++ 
b/assets/javascripts/bundle.aecac24b.min.js.map @@ -0,0 +1,7 @@ +{ + "version": 3, + "sources": ["node_modules/focus-visible/dist/focus-visible.js", "node_modules/clipboard/dist/clipboard.js", "node_modules/escape-html/index.js", "src/templates/assets/javascripts/bundle.ts", "node_modules/rxjs/node_modules/tslib/tslib.es6.js", "node_modules/rxjs/src/internal/util/isFunction.ts", "node_modules/rxjs/src/internal/util/createErrorClass.ts", "node_modules/rxjs/src/internal/util/UnsubscriptionError.ts", "node_modules/rxjs/src/internal/util/arrRemove.ts", "node_modules/rxjs/src/internal/Subscription.ts", "node_modules/rxjs/src/internal/config.ts", "node_modules/rxjs/src/internal/scheduler/timeoutProvider.ts", "node_modules/rxjs/src/internal/util/reportUnhandledError.ts", "node_modules/rxjs/src/internal/util/noop.ts", "node_modules/rxjs/src/internal/NotificationFactories.ts", "node_modules/rxjs/src/internal/util/errorContext.ts", "node_modules/rxjs/src/internal/Subscriber.ts", "node_modules/rxjs/src/internal/symbol/observable.ts", "node_modules/rxjs/src/internal/util/identity.ts", "node_modules/rxjs/src/internal/util/pipe.ts", "node_modules/rxjs/src/internal/Observable.ts", "node_modules/rxjs/src/internal/util/lift.ts", "node_modules/rxjs/src/internal/operators/OperatorSubscriber.ts", "node_modules/rxjs/src/internal/scheduler/animationFrameProvider.ts", "node_modules/rxjs/src/internal/util/ObjectUnsubscribedError.ts", "node_modules/rxjs/src/internal/Subject.ts", "node_modules/rxjs/src/internal/scheduler/dateTimestampProvider.ts", "node_modules/rxjs/src/internal/ReplaySubject.ts", "node_modules/rxjs/src/internal/scheduler/Action.ts", "node_modules/rxjs/src/internal/scheduler/intervalProvider.ts", "node_modules/rxjs/src/internal/scheduler/AsyncAction.ts", "node_modules/rxjs/src/internal/Scheduler.ts", "node_modules/rxjs/src/internal/scheduler/AsyncScheduler.ts", "node_modules/rxjs/src/internal/scheduler/async.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameAction.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameScheduler.ts", "node_modules/rxjs/src/internal/scheduler/animationFrame.ts", "node_modules/rxjs/src/internal/observable/empty.ts", "node_modules/rxjs/src/internal/util/isScheduler.ts", "node_modules/rxjs/src/internal/util/args.ts", "node_modules/rxjs/src/internal/util/isArrayLike.ts", "node_modules/rxjs/src/internal/util/isPromise.ts", "node_modules/rxjs/src/internal/util/isInteropObservable.ts", "node_modules/rxjs/src/internal/util/isAsyncIterable.ts", "node_modules/rxjs/src/internal/util/throwUnobservableError.ts", "node_modules/rxjs/src/internal/symbol/iterator.ts", "node_modules/rxjs/src/internal/util/isIterable.ts", "node_modules/rxjs/src/internal/util/isReadableStreamLike.ts", "node_modules/rxjs/src/internal/observable/innerFrom.ts", "node_modules/rxjs/src/internal/util/executeSchedule.ts", "node_modules/rxjs/src/internal/operators/observeOn.ts", "node_modules/rxjs/src/internal/operators/subscribeOn.ts", "node_modules/rxjs/src/internal/scheduled/scheduleObservable.ts", "node_modules/rxjs/src/internal/scheduled/schedulePromise.ts", "node_modules/rxjs/src/internal/scheduled/scheduleArray.ts", "node_modules/rxjs/src/internal/scheduled/scheduleIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleAsyncIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleReadableStreamLike.ts", "node_modules/rxjs/src/internal/scheduled/scheduled.ts", "node_modules/rxjs/src/internal/observable/from.ts", "node_modules/rxjs/src/internal/observable/of.ts", 
"node_modules/rxjs/src/internal/observable/throwError.ts", "node_modules/rxjs/src/internal/util/EmptyError.ts", "node_modules/rxjs/src/internal/util/isDate.ts", "node_modules/rxjs/src/internal/operators/map.ts", "node_modules/rxjs/src/internal/util/mapOneOrManyArgs.ts", "node_modules/rxjs/src/internal/util/argsArgArrayOrObject.ts", "node_modules/rxjs/src/internal/util/createObject.ts", "node_modules/rxjs/src/internal/observable/combineLatest.ts", "node_modules/rxjs/src/internal/operators/mergeInternals.ts", "node_modules/rxjs/src/internal/operators/mergeMap.ts", "node_modules/rxjs/src/internal/operators/mergeAll.ts", "node_modules/rxjs/src/internal/operators/concatAll.ts", "node_modules/rxjs/src/internal/observable/concat.ts", "node_modules/rxjs/src/internal/observable/defer.ts", "node_modules/rxjs/src/internal/observable/fromEvent.ts", "node_modules/rxjs/src/internal/observable/fromEventPattern.ts", "node_modules/rxjs/src/internal/observable/timer.ts", "node_modules/rxjs/src/internal/observable/merge.ts", "node_modules/rxjs/src/internal/observable/never.ts", "node_modules/rxjs/src/internal/util/argsOrArgArray.ts", "node_modules/rxjs/src/internal/operators/filter.ts", "node_modules/rxjs/src/internal/observable/zip.ts", "node_modules/rxjs/src/internal/operators/audit.ts", "node_modules/rxjs/src/internal/operators/auditTime.ts", "node_modules/rxjs/src/internal/operators/bufferCount.ts", "node_modules/rxjs/src/internal/operators/catchError.ts", "node_modules/rxjs/src/internal/operators/scanInternals.ts", "node_modules/rxjs/src/internal/operators/combineLatest.ts", "node_modules/rxjs/src/internal/operators/combineLatestWith.ts", "node_modules/rxjs/src/internal/operators/debounceTime.ts", "node_modules/rxjs/src/internal/operators/defaultIfEmpty.ts", "node_modules/rxjs/src/internal/operators/take.ts", "node_modules/rxjs/src/internal/operators/ignoreElements.ts", "node_modules/rxjs/src/internal/operators/mapTo.ts", "node_modules/rxjs/src/internal/operators/delayWhen.ts", "node_modules/rxjs/src/internal/operators/delay.ts", "node_modules/rxjs/src/internal/operators/distinctUntilChanged.ts", "node_modules/rxjs/src/internal/operators/distinctUntilKeyChanged.ts", "node_modules/rxjs/src/internal/operators/throwIfEmpty.ts", "node_modules/rxjs/src/internal/operators/endWith.ts", "node_modules/rxjs/src/internal/operators/finalize.ts", "node_modules/rxjs/src/internal/operators/first.ts", "node_modules/rxjs/src/internal/operators/merge.ts", "node_modules/rxjs/src/internal/operators/mergeWith.ts", "node_modules/rxjs/src/internal/operators/repeat.ts", "node_modules/rxjs/src/internal/operators/sample.ts", "node_modules/rxjs/src/internal/operators/scan.ts", "node_modules/rxjs/src/internal/operators/share.ts", "node_modules/rxjs/src/internal/operators/shareReplay.ts", "node_modules/rxjs/src/internal/operators/skip.ts", "node_modules/rxjs/src/internal/operators/skipUntil.ts", "node_modules/rxjs/src/internal/operators/startWith.ts", "node_modules/rxjs/src/internal/operators/switchMap.ts", "node_modules/rxjs/src/internal/operators/takeUntil.ts", "node_modules/rxjs/src/internal/operators/takeWhile.ts", "node_modules/rxjs/src/internal/operators/tap.ts", "node_modules/rxjs/src/internal/operators/throttle.ts", "node_modules/rxjs/src/internal/operators/throttleTime.ts", "node_modules/rxjs/src/internal/operators/withLatestFrom.ts", "node_modules/rxjs/src/internal/operators/zip.ts", "node_modules/rxjs/src/internal/operators/zipWith.ts", "src/templates/assets/javascripts/browser/document/index.ts", 
"src/templates/assets/javascripts/browser/element/_/index.ts", "src/templates/assets/javascripts/browser/element/focus/index.ts", "src/templates/assets/javascripts/browser/element/offset/_/index.ts", "src/templates/assets/javascripts/browser/element/offset/content/index.ts", "src/templates/assets/javascripts/utilities/h/index.ts", "src/templates/assets/javascripts/utilities/round/index.ts", "src/templates/assets/javascripts/browser/script/index.ts", "src/templates/assets/javascripts/browser/element/size/_/index.ts", "src/templates/assets/javascripts/browser/element/size/content/index.ts", "src/templates/assets/javascripts/browser/element/visibility/index.ts", "src/templates/assets/javascripts/browser/toggle/index.ts", "src/templates/assets/javascripts/browser/keyboard/index.ts", "src/templates/assets/javascripts/browser/location/_/index.ts", "src/templates/assets/javascripts/browser/location/hash/index.ts", "src/templates/assets/javascripts/browser/media/index.ts", "src/templates/assets/javascripts/browser/request/index.ts", "src/templates/assets/javascripts/browser/viewport/offset/index.ts", "src/templates/assets/javascripts/browser/viewport/size/index.ts", "src/templates/assets/javascripts/browser/viewport/_/index.ts", "src/templates/assets/javascripts/browser/viewport/at/index.ts", "src/templates/assets/javascripts/browser/worker/index.ts", "src/templates/assets/javascripts/_/index.ts", "src/templates/assets/javascripts/components/_/index.ts", "src/templates/assets/javascripts/components/announce/index.ts", "src/templates/assets/javascripts/components/consent/index.ts", "src/templates/assets/javascripts/components/content/annotation/_/index.ts", "src/templates/assets/javascripts/templates/tooltip/index.tsx", "src/templates/assets/javascripts/templates/annotation/index.tsx", "src/templates/assets/javascripts/templates/clipboard/index.tsx", "src/templates/assets/javascripts/templates/search/index.tsx", "src/templates/assets/javascripts/templates/source/index.tsx", "src/templates/assets/javascripts/templates/tabbed/index.tsx", "src/templates/assets/javascripts/templates/table/index.tsx", "src/templates/assets/javascripts/templates/version/index.tsx", "src/templates/assets/javascripts/components/content/annotation/list/index.ts", "src/templates/assets/javascripts/components/content/annotation/block/index.ts", "src/templates/assets/javascripts/components/content/code/_/index.ts", "src/templates/assets/javascripts/components/content/details/index.ts", "src/templates/assets/javascripts/components/content/mermaid/index.css", "src/templates/assets/javascripts/components/content/mermaid/index.ts", "src/templates/assets/javascripts/components/content/table/index.ts", "src/templates/assets/javascripts/components/content/tabs/index.ts", "src/templates/assets/javascripts/components/content/_/index.ts", "src/templates/assets/javascripts/components/dialog/index.ts", "src/templates/assets/javascripts/components/header/_/index.ts", "src/templates/assets/javascripts/components/header/title/index.ts", "src/templates/assets/javascripts/components/main/index.ts", "src/templates/assets/javascripts/components/palette/index.ts", "src/templates/assets/javascripts/components/progress/index.ts", "src/templates/assets/javascripts/integrations/clipboard/index.ts", "src/templates/assets/javascripts/integrations/sitemap/index.ts", "src/templates/assets/javascripts/integrations/instant/index.ts", "src/templates/assets/javascripts/integrations/search/highlighter/index.ts", 
"src/templates/assets/javascripts/integrations/search/worker/message/index.ts", "src/templates/assets/javascripts/integrations/search/worker/_/index.ts", "src/templates/assets/javascripts/integrations/version/index.ts", "src/templates/assets/javascripts/components/search/query/index.ts", "src/templates/assets/javascripts/components/search/result/index.ts", "src/templates/assets/javascripts/components/search/share/index.ts", "src/templates/assets/javascripts/components/search/suggest/index.ts", "src/templates/assets/javascripts/components/search/_/index.ts", "src/templates/assets/javascripts/components/search/highlight/index.ts", "src/templates/assets/javascripts/components/sidebar/index.ts", "src/templates/assets/javascripts/components/source/facts/github/index.ts", "src/templates/assets/javascripts/components/source/facts/gitlab/index.ts", "src/templates/assets/javascripts/components/source/facts/_/index.ts", "src/templates/assets/javascripts/components/source/_/index.ts", "src/templates/assets/javascripts/components/tabs/index.ts", "src/templates/assets/javascripts/components/toc/index.ts", "src/templates/assets/javascripts/components/top/index.ts", "src/templates/assets/javascripts/patches/indeterminate/index.ts", "src/templates/assets/javascripts/patches/scrollfix/index.ts", "src/templates/assets/javascripts/patches/scrolllock/index.ts", "src/templates/assets/javascripts/polyfills/index.ts"], + "sourcesContent": ["(function (global, factory) {\n typeof exports === 'object' && typeof module !== 'undefined' ? factory() :\n typeof define === 'function' && define.amd ? define(factory) :\n (factory());\n}(this, (function () { 'use strict';\n\n /**\n * Applies the :focus-visible polyfill at the given scope.\n * A scope in this case is either the top-level Document or a Shadow Root.\n *\n * @param {(Document|ShadowRoot)} scope\n * @see https://github.com/WICG/focus-visible\n */\n function applyFocusVisiblePolyfill(scope) {\n var hadKeyboardEvent = true;\n var hadFocusVisibleRecently = false;\n var hadFocusVisibleRecentlyTimeout = null;\n\n var inputTypesAllowlist = {\n text: true,\n search: true,\n url: true,\n tel: true,\n email: true,\n password: true,\n number: true,\n date: true,\n month: true,\n week: true,\n time: true,\n datetime: true,\n 'datetime-local': true\n };\n\n /**\n * Helper function for legacy browsers and iframes which sometimes focus\n * elements like document, body, and non-interactive SVG.\n * @param {Element} el\n */\n function isValidFocusTarget(el) {\n if (\n el &&\n el !== document &&\n el.nodeName !== 'HTML' &&\n el.nodeName !== 'BODY' &&\n 'classList' in el &&\n 'contains' in el.classList\n ) {\n return true;\n }\n return false;\n }\n\n /**\n * Computes whether the given element should automatically trigger the\n * `focus-visible` class being added, i.e. 
whether it should always match\n * `:focus-visible` when focused.\n * @param {Element} el\n * @return {boolean}\n */\n function focusTriggersKeyboardModality(el) {\n var type = el.type;\n var tagName = el.tagName;\n\n if (tagName === 'INPUT' && inputTypesAllowlist[type] && !el.readOnly) {\n return true;\n }\n\n if (tagName === 'TEXTAREA' && !el.readOnly) {\n return true;\n }\n\n if (el.isContentEditable) {\n return true;\n }\n\n return false;\n }\n\n /**\n * Add the `focus-visible` class to the given element if it was not added by\n * the author.\n * @param {Element} el\n */\n function addFocusVisibleClass(el) {\n if (el.classList.contains('focus-visible')) {\n return;\n }\n el.classList.add('focus-visible');\n el.setAttribute('data-focus-visible-added', '');\n }\n\n /**\n * Remove the `focus-visible` class from the given element if it was not\n * originally added by the author.\n * @param {Element} el\n */\n function removeFocusVisibleClass(el) {\n if (!el.hasAttribute('data-focus-visible-added')) {\n return;\n }\n el.classList.remove('focus-visible');\n el.removeAttribute('data-focus-visible-added');\n }\n\n /**\n * If the most recent user interaction was via the keyboard;\n * and the key press did not include a meta, alt/option, or control key;\n * then the modality is keyboard. Otherwise, the modality is not keyboard.\n * Apply `focus-visible` to any current active element and keep track\n * of our keyboard modality state with `hadKeyboardEvent`.\n * @param {KeyboardEvent} e\n */\n function onKeyDown(e) {\n if (e.metaKey || e.altKey || e.ctrlKey) {\n return;\n }\n\n if (isValidFocusTarget(scope.activeElement)) {\n addFocusVisibleClass(scope.activeElement);\n }\n\n hadKeyboardEvent = true;\n }\n\n /**\n * If at any point a user clicks with a pointing device, ensure that we change\n * the modality away from keyboard.\n * This avoids the situation where a user presses a key on an already focused\n * element, and then clicks on a different element, focusing it with a\n * pointing device, while we still think we're in keyboard modality.\n * @param {Event} e\n */\n function onPointerDown(e) {\n hadKeyboardEvent = false;\n }\n\n /**\n * On `focus`, add the `focus-visible` class to the target if:\n * - the target received focus as a result of keyboard navigation, or\n * - the event target is an element that will likely require interaction\n * via the keyboard (e.g. 
a text box)\n * @param {Event} e\n */\n function onFocus(e) {\n // Prevent IE from focusing the document or HTML element.\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (hadKeyboardEvent || focusTriggersKeyboardModality(e.target)) {\n addFocusVisibleClass(e.target);\n }\n }\n\n /**\n * On `blur`, remove the `focus-visible` class from the target.\n * @param {Event} e\n */\n function onBlur(e) {\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (\n e.target.classList.contains('focus-visible') ||\n e.target.hasAttribute('data-focus-visible-added')\n ) {\n // To detect a tab/window switch, we look for a blur event followed\n // rapidly by a visibility change.\n // If we don't see a visibility change within 100ms, it's probably a\n // regular focus change.\n hadFocusVisibleRecently = true;\n window.clearTimeout(hadFocusVisibleRecentlyTimeout);\n hadFocusVisibleRecentlyTimeout = window.setTimeout(function() {\n hadFocusVisibleRecently = false;\n }, 100);\n removeFocusVisibleClass(e.target);\n }\n }\n\n /**\n * If the user changes tabs, keep track of whether or not the previously\n * focused element had .focus-visible.\n * @param {Event} e\n */\n function onVisibilityChange(e) {\n if (document.visibilityState === 'hidden') {\n // If the tab becomes active again, the browser will handle calling focus\n // on the element (Safari actually calls it twice).\n // If this tab change caused a blur on an element with focus-visible,\n // re-apply the class when the user switches back to the tab.\n if (hadFocusVisibleRecently) {\n hadKeyboardEvent = true;\n }\n addInitialPointerMoveListeners();\n }\n }\n\n /**\n * Add a group of listeners to detect usage of any pointing devices.\n * These listeners will be added when the polyfill first loads, and anytime\n * the window is blurred, so that they are active when the window regains\n * focus.\n */\n function addInitialPointerMoveListeners() {\n document.addEventListener('mousemove', onInitialPointerMove);\n document.addEventListener('mousedown', onInitialPointerMove);\n document.addEventListener('mouseup', onInitialPointerMove);\n document.addEventListener('pointermove', onInitialPointerMove);\n document.addEventListener('pointerdown', onInitialPointerMove);\n document.addEventListener('pointerup', onInitialPointerMove);\n document.addEventListener('touchmove', onInitialPointerMove);\n document.addEventListener('touchstart', onInitialPointerMove);\n document.addEventListener('touchend', onInitialPointerMove);\n }\n\n function removeInitialPointerMoveListeners() {\n document.removeEventListener('mousemove', onInitialPointerMove);\n document.removeEventListener('mousedown', onInitialPointerMove);\n document.removeEventListener('mouseup', onInitialPointerMove);\n document.removeEventListener('pointermove', onInitialPointerMove);\n document.removeEventListener('pointerdown', onInitialPointerMove);\n document.removeEventListener('pointerup', onInitialPointerMove);\n document.removeEventListener('touchmove', onInitialPointerMove);\n document.removeEventListener('touchstart', onInitialPointerMove);\n document.removeEventListener('touchend', onInitialPointerMove);\n }\n\n /**\n * When the polfyill first loads, assume the user is in keyboard modality.\n * If any event is received from a pointing device (e.g. 
mouse, pointer,\n * touch), turn off keyboard modality.\n * This accounts for situations where focus enters the page from the URL bar.\n * @param {Event} e\n */\n function onInitialPointerMove(e) {\n // Work around a Safari quirk that fires a mousemove on whenever the\n // window blurs, even if you're tabbing out of the page. \u00AF\\_(\u30C4)_/\u00AF\n if (e.target.nodeName && e.target.nodeName.toLowerCase() === 'html') {\n return;\n }\n\n hadKeyboardEvent = false;\n removeInitialPointerMoveListeners();\n }\n\n // For some kinds of state, we are interested in changes at the global scope\n // only. For example, global pointer input, global key presses and global\n // visibility change should affect the state at every scope:\n document.addEventListener('keydown', onKeyDown, true);\n document.addEventListener('mousedown', onPointerDown, true);\n document.addEventListener('pointerdown', onPointerDown, true);\n document.addEventListener('touchstart', onPointerDown, true);\n document.addEventListener('visibilitychange', onVisibilityChange, true);\n\n addInitialPointerMoveListeners();\n\n // For focus and blur, we specifically care about state changes in the local\n // scope. This is because focus / blur events that originate from within a\n // shadow root are not re-dispatched from the host element if it was already\n // the active element in its own scope:\n scope.addEventListener('focus', onFocus, true);\n scope.addEventListener('blur', onBlur, true);\n\n // We detect that a node is a ShadowRoot by ensuring that it is a\n // DocumentFragment and also has a host property. This check covers native\n // implementation and polyfill implementation transparently. If we only cared\n // about the native implementation, we could just check if the scope was\n // an instance of a ShadowRoot.\n if (scope.nodeType === Node.DOCUMENT_FRAGMENT_NODE && scope.host) {\n // Since a ShadowRoot is a special kind of DocumentFragment, it does not\n // have a root element to add a class to. So, we add this attribute to the\n // host element instead:\n scope.host.setAttribute('data-js-focus-visible', '');\n } else if (scope.nodeType === Node.DOCUMENT_NODE) {\n document.documentElement.classList.add('js-focus-visible');\n document.documentElement.setAttribute('data-js-focus-visible', '');\n }\n }\n\n // It is important to wrap all references to global window and document in\n // these checks to support server-side rendering use cases\n // @see https://github.com/WICG/focus-visible/issues/199\n if (typeof window !== 'undefined' && typeof document !== 'undefined') {\n // Make the polyfill helper globally available. 
This can be used as a signal\n // to interested libraries that wish to coordinate with the polyfill for e.g.,\n // applying the polyfill to a shadow root:\n window.applyFocusVisiblePolyfill = applyFocusVisiblePolyfill;\n\n // Notify interested libraries of the polyfill's presence, in case the\n // polyfill was loaded lazily:\n var event;\n\n try {\n event = new CustomEvent('focus-visible-polyfill-ready');\n } catch (error) {\n // IE11 does not support using CustomEvent as a constructor directly:\n event = document.createEvent('CustomEvent');\n event.initCustomEvent('focus-visible-polyfill-ready', false, false, {});\n }\n\n window.dispatchEvent(event);\n }\n\n if (typeof document !== 'undefined') {\n // Apply the polyfill to the global document, so that no JavaScript\n // coordination is required to use the polyfill in the top-level document:\n applyFocusVisiblePolyfill(document);\n }\n\n})));\n", "/*!\n * clipboard.js v2.0.11\n * https://clipboardjs.com/\n *\n * Licensed MIT \u00A9 Zeno Rocha\n */\n(function webpackUniversalModuleDefinition(root, factory) {\n\tif(typeof exports === 'object' && typeof module === 'object')\n\t\tmodule.exports = factory();\n\telse if(typeof define === 'function' && define.amd)\n\t\tdefine([], factory);\n\telse if(typeof exports === 'object')\n\t\texports[\"ClipboardJS\"] = factory();\n\telse\n\t\troot[\"ClipboardJS\"] = factory();\n})(this, function() {\nreturn /******/ (function() { // webpackBootstrap\n/******/ \tvar __webpack_modules__ = ({\n\n/***/ 686:\n/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n\n// EXPORTS\n__webpack_require__.d(__webpack_exports__, {\n \"default\": function() { return /* binding */ clipboard; }\n});\n\n// EXTERNAL MODULE: ./node_modules/tiny-emitter/index.js\nvar tiny_emitter = __webpack_require__(279);\nvar tiny_emitter_default = /*#__PURE__*/__webpack_require__.n(tiny_emitter);\n// EXTERNAL MODULE: ./node_modules/good-listener/src/listen.js\nvar listen = __webpack_require__(370);\nvar listen_default = /*#__PURE__*/__webpack_require__.n(listen);\n// EXTERNAL MODULE: ./node_modules/select/src/select.js\nvar src_select = __webpack_require__(817);\nvar select_default = /*#__PURE__*/__webpack_require__.n(src_select);\n;// CONCATENATED MODULE: ./src/common/command.js\n/**\n * Executes a given operation type.\n * @param {String} type\n * @return {Boolean}\n */\nfunction command(type) {\n try {\n return document.execCommand(type);\n } catch (err) {\n return false;\n }\n}\n;// CONCATENATED MODULE: ./src/actions/cut.js\n\n\n/**\n * Cut action wrapper.\n * @param {String|HTMLElement} target\n * @return {String}\n */\n\nvar ClipboardActionCut = function ClipboardActionCut(target) {\n var selectedText = select_default()(target);\n command('cut');\n return selectedText;\n};\n\n/* harmony default export */ var actions_cut = (ClipboardActionCut);\n;// CONCATENATED MODULE: ./src/common/create-fake-element.js\n/**\n * Creates a fake textarea element with a value.\n * @param {String} value\n * @return {HTMLElement}\n */\nfunction createFakeElement(value) {\n var isRTL = document.documentElement.getAttribute('dir') === 'rtl';\n var fakeElement = document.createElement('textarea'); // Prevent zooming on iOS\n\n fakeElement.style.fontSize = '12pt'; // Reset box model\n\n fakeElement.style.border = '0';\n fakeElement.style.padding = '0';\n fakeElement.style.margin = '0'; // Move element out of screen horizontally\n\n fakeElement.style.position = 'absolute';\n fakeElement.style[isRTL ? 
'right' : 'left'] = '-9999px'; // Move element to the same position vertically\n\n var yPosition = window.pageYOffset || document.documentElement.scrollTop;\n fakeElement.style.top = \"\".concat(yPosition, \"px\");\n fakeElement.setAttribute('readonly', '');\n fakeElement.value = value;\n return fakeElement;\n}\n;// CONCATENATED MODULE: ./src/actions/copy.js\n\n\n\n/**\n * Create fake copy action wrapper using a fake element.\n * @param {String} target\n * @param {Object} options\n * @return {String}\n */\n\nvar fakeCopyAction = function fakeCopyAction(value, options) {\n var fakeElement = createFakeElement(value);\n options.container.appendChild(fakeElement);\n var selectedText = select_default()(fakeElement);\n command('copy');\n fakeElement.remove();\n return selectedText;\n};\n/**\n * Copy action wrapper.\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @return {String}\n */\n\n\nvar ClipboardActionCopy = function ClipboardActionCopy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n var selectedText = '';\n\n if (typeof target === 'string') {\n selectedText = fakeCopyAction(target, options);\n } else if (target instanceof HTMLInputElement && !['text', 'search', 'url', 'tel', 'password'].includes(target === null || target === void 0 ? void 0 : target.type)) {\n // If input type doesn't support `setSelectionRange`. Simulate it. https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputElement/setSelectionRange\n selectedText = fakeCopyAction(target.value, options);\n } else {\n selectedText = select_default()(target);\n command('copy');\n }\n\n return selectedText;\n};\n\n/* harmony default export */ var actions_copy = (ClipboardActionCopy);\n;// CONCATENATED MODULE: ./src/actions/default.js\nfunction _typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { _typeof = function _typeof(obj) { return typeof obj; }; } else { _typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return _typeof(obj); }\n\n\n\n/**\n * Inner function which performs selection from either `text` or `target`\n * properties and then executes copy or cut operations.\n * @param {Object} options\n */\n\nvar ClipboardActionDefault = function ClipboardActionDefault() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n // Defines base properties passed from constructor.\n var _options$action = options.action,\n action = _options$action === void 0 ? 'copy' : _options$action,\n container = options.container,\n target = options.target,\n text = options.text; // Sets the `action` to be performed which can be either 'copy' or 'cut'.\n\n if (action !== 'copy' && action !== 'cut') {\n throw new Error('Invalid \"action\" value, use either \"copy\" or \"cut\"');\n } // Sets the `target` property using an element that will be have its content copied.\n\n\n if (target !== undefined) {\n if (target && _typeof(target) === 'object' && target.nodeType === 1) {\n if (action === 'copy' && target.hasAttribute('disabled')) {\n throw new Error('Invalid \"target\" attribute. Please use \"readonly\" instead of \"disabled\" attribute');\n }\n\n if (action === 'cut' && (target.hasAttribute('readonly') || target.hasAttribute('disabled'))) {\n throw new Error('Invalid \"target\" attribute. 
You can\\'t cut text from elements with \"readonly\" or \"disabled\" attributes');\n }\n } else {\n throw new Error('Invalid \"target\" value, use a valid Element');\n }\n } // Define selection strategy based on `text` property.\n\n\n if (text) {\n return actions_copy(text, {\n container: container\n });\n } // Defines which selection strategy based on `target` property.\n\n\n if (target) {\n return action === 'cut' ? actions_cut(target) : actions_copy(target, {\n container: container\n });\n }\n};\n\n/* harmony default export */ var actions_default = (ClipboardActionDefault);\n;// CONCATENATED MODULE: ./src/clipboard.js\nfunction clipboard_typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { clipboard_typeof = function _typeof(obj) { return typeof obj; }; } else { clipboard_typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return clipboard_typeof(obj); }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nfunction _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } }\n\nfunction _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); return Constructor; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function\"); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, writable: true, configurable: true } }); if (superClass) _setPrototypeOf(subClass, superClass); }\n\nfunction _setPrototypeOf(o, p) { _setPrototypeOf = Object.setPrototypeOf || function _setPrototypeOf(o, p) { o.__proto__ = p; return o; }; return _setPrototypeOf(o, p); }\n\nfunction _createSuper(Derived) { var hasNativeReflectConstruct = _isNativeReflectConstruct(); return function _createSuperInternal() { var Super = _getPrototypeOf(Derived), result; if (hasNativeReflectConstruct) { var NewTarget = _getPrototypeOf(this).constructor; result = Reflect.construct(Super, arguments, NewTarget); } else { result = Super.apply(this, arguments); } return _possibleConstructorReturn(this, result); }; }\n\nfunction _possibleConstructorReturn(self, call) { if (call && (clipboard_typeof(call) === \"object\" || typeof call === \"function\")) { return call; } return _assertThisInitialized(self); }\n\nfunction _assertThisInitialized(self) { if (self === void 0) { throw new ReferenceError(\"this hasn't been initialised - super() hasn't been called\"); } return self; }\n\nfunction _isNativeReflectConstruct() { if (typeof Reflect === \"undefined\" || !Reflect.construct) return false; if (Reflect.construct.sham) return false; if (typeof Proxy === \"function\") return true; try { Date.prototype.toString.call(Reflect.construct(Date, [], function () {})); return true; } catch (e) { return false; } }\n\nfunction _getPrototypeOf(o) { _getPrototypeOf = Object.setPrototypeOf ? 
Object.getPrototypeOf : function _getPrototypeOf(o) { return o.__proto__ || Object.getPrototypeOf(o); }; return _getPrototypeOf(o); }\n\n\n\n\n\n\n/**\n * Helper function to retrieve attribute value.\n * @param {String} suffix\n * @param {Element} element\n */\n\nfunction getAttributeValue(suffix, element) {\n var attribute = \"data-clipboard-\".concat(suffix);\n\n if (!element.hasAttribute(attribute)) {\n return;\n }\n\n return element.getAttribute(attribute);\n}\n/**\n * Base class which takes one or more elements, adds event listeners to them,\n * and instantiates a new `ClipboardAction` on each click.\n */\n\n\nvar Clipboard = /*#__PURE__*/function (_Emitter) {\n _inherits(Clipboard, _Emitter);\n\n var _super = _createSuper(Clipboard);\n\n /**\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n * @param {Object} options\n */\n function Clipboard(trigger, options) {\n var _this;\n\n _classCallCheck(this, Clipboard);\n\n _this = _super.call(this);\n\n _this.resolveOptions(options);\n\n _this.listenClick(trigger);\n\n return _this;\n }\n /**\n * Defines if attributes would be resolved using internal setter functions\n * or custom functions that were passed in the constructor.\n * @param {Object} options\n */\n\n\n _createClass(Clipboard, [{\n key: \"resolveOptions\",\n value: function resolveOptions() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n this.action = typeof options.action === 'function' ? options.action : this.defaultAction;\n this.target = typeof options.target === 'function' ? options.target : this.defaultTarget;\n this.text = typeof options.text === 'function' ? options.text : this.defaultText;\n this.container = clipboard_typeof(options.container) === 'object' ? options.container : document.body;\n }\n /**\n * Adds a click event listener to the passed trigger.\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n */\n\n }, {\n key: \"listenClick\",\n value: function listenClick(trigger) {\n var _this2 = this;\n\n this.listener = listen_default()(trigger, 'click', function (e) {\n return _this2.onClick(e);\n });\n }\n /**\n * Defines a new `ClipboardAction` on each click event.\n * @param {Event} e\n */\n\n }, {\n key: \"onClick\",\n value: function onClick(e) {\n var trigger = e.delegateTarget || e.currentTarget;\n var action = this.action(trigger) || 'copy';\n var text = actions_default({\n action: action,\n container: this.container,\n target: this.target(trigger),\n text: this.text(trigger)\n }); // Fires an event based on the copy operation result.\n\n this.emit(text ? 
'success' : 'error', {\n action: action,\n text: text,\n trigger: trigger,\n clearSelection: function clearSelection() {\n if (trigger) {\n trigger.focus();\n }\n\n window.getSelection().removeAllRanges();\n }\n });\n }\n /**\n * Default `action` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultAction\",\n value: function defaultAction(trigger) {\n return getAttributeValue('action', trigger);\n }\n /**\n * Default `target` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultTarget\",\n value: function defaultTarget(trigger) {\n var selector = getAttributeValue('target', trigger);\n\n if (selector) {\n return document.querySelector(selector);\n }\n }\n /**\n * Allow fire programmatically a copy action\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @returns Text copied.\n */\n\n }, {\n key: \"defaultText\",\n\n /**\n * Default `text` lookup function.\n * @param {Element} trigger\n */\n value: function defaultText(trigger) {\n return getAttributeValue('text', trigger);\n }\n /**\n * Destroy lifecycle.\n */\n\n }, {\n key: \"destroy\",\n value: function destroy() {\n this.listener.destroy();\n }\n }], [{\n key: \"copy\",\n value: function copy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n return actions_copy(target, options);\n }\n /**\n * Allow fire programmatically a cut action\n * @param {String|HTMLElement} target\n * @returns Text cutted.\n */\n\n }, {\n key: \"cut\",\n value: function cut(target) {\n return actions_cut(target);\n }\n /**\n * Returns the support of the given action, or all actions if no action is\n * given.\n * @param {String} [action]\n */\n\n }, {\n key: \"isSupported\",\n value: function isSupported() {\n var action = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : ['copy', 'cut'];\n var actions = typeof action === 'string' ? 
[action] : action;\n var support = !!document.queryCommandSupported;\n actions.forEach(function (action) {\n support = support && !!document.queryCommandSupported(action);\n });\n return support;\n }\n }]);\n\n return Clipboard;\n}((tiny_emitter_default()));\n\n/* harmony default export */ var clipboard = (Clipboard);\n\n/***/ }),\n\n/***/ 828:\n/***/ (function(module) {\n\nvar DOCUMENT_NODE_TYPE = 9;\n\n/**\n * A polyfill for Element.matches()\n */\nif (typeof Element !== 'undefined' && !Element.prototype.matches) {\n var proto = Element.prototype;\n\n proto.matches = proto.matchesSelector ||\n proto.mozMatchesSelector ||\n proto.msMatchesSelector ||\n proto.oMatchesSelector ||\n proto.webkitMatchesSelector;\n}\n\n/**\n * Finds the closest parent that matches a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @return {Function}\n */\nfunction closest (element, selector) {\n while (element && element.nodeType !== DOCUMENT_NODE_TYPE) {\n if (typeof element.matches === 'function' &&\n element.matches(selector)) {\n return element;\n }\n element = element.parentNode;\n }\n}\n\nmodule.exports = closest;\n\n\n/***/ }),\n\n/***/ 438:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar closest = __webpack_require__(828);\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction _delegate(element, selector, type, callback, useCapture) {\n var listenerFn = listener.apply(this, arguments);\n\n element.addEventListener(type, listenerFn, useCapture);\n\n return {\n destroy: function() {\n element.removeEventListener(type, listenerFn, useCapture);\n }\n }\n}\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element|String|Array} [elements]\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction delegate(elements, selector, type, callback, useCapture) {\n // Handle the regular Element usage\n if (typeof elements.addEventListener === 'function') {\n return _delegate.apply(null, arguments);\n }\n\n // Handle Element-less usage, it defaults to global delegation\n if (typeof type === 'function') {\n // Use `document` as the first parameter, then apply arguments\n // This is a short way to .unshift `arguments` without running into deoptimizations\n return _delegate.bind(null, document).apply(null, arguments);\n }\n\n // Handle Selector-based usage\n if (typeof elements === 'string') {\n elements = document.querySelectorAll(elements);\n }\n\n // Handle Array-like based usage\n return Array.prototype.map.call(elements, function (element) {\n return _delegate(element, selector, type, callback, useCapture);\n });\n}\n\n/**\n * Finds closest match and invokes callback.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Function}\n */\nfunction listener(element, selector, type, callback) {\n return function(e) {\n e.delegateTarget = closest(e.target, selector);\n\n if (e.delegateTarget) {\n callback.call(element, e);\n }\n }\n}\n\nmodule.exports = delegate;\n\n\n/***/ }),\n\n/***/ 879:\n/***/ (function(__unused_webpack_module, exports) {\n\n/**\n * Check if argument is a HTML element.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.node = function(value) {\n return value !== undefined\n && 
value instanceof HTMLElement\n && value.nodeType === 1;\n};\n\n/**\n * Check if argument is a list of HTML elements.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.nodeList = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return value !== undefined\n && (type === '[object NodeList]' || type === '[object HTMLCollection]')\n && ('length' in value)\n && (value.length === 0 || exports.node(value[0]));\n};\n\n/**\n * Check if argument is a string.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.string = function(value) {\n return typeof value === 'string'\n || value instanceof String;\n};\n\n/**\n * Check if argument is a function.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.fn = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return type === '[object Function]';\n};\n\n\n/***/ }),\n\n/***/ 370:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar is = __webpack_require__(879);\nvar delegate = __webpack_require__(438);\n\n/**\n * Validates all params and calls the right\n * listener function based on its target type.\n *\n * @param {String|HTMLElement|HTMLCollection|NodeList} target\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listen(target, type, callback) {\n if (!target && !type && !callback) {\n throw new Error('Missing required arguments');\n }\n\n if (!is.string(type)) {\n throw new TypeError('Second argument must be a String');\n }\n\n if (!is.fn(callback)) {\n throw new TypeError('Third argument must be a Function');\n }\n\n if (is.node(target)) {\n return listenNode(target, type, callback);\n }\n else if (is.nodeList(target)) {\n return listenNodeList(target, type, callback);\n }\n else if (is.string(target)) {\n return listenSelector(target, type, callback);\n }\n else {\n throw new TypeError('First argument must be a String, HTMLElement, HTMLCollection, or NodeList');\n }\n}\n\n/**\n * Adds an event listener to a HTML element\n * and returns a remove listener function.\n *\n * @param {HTMLElement} node\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNode(node, type, callback) {\n node.addEventListener(type, callback);\n\n return {\n destroy: function() {\n node.removeEventListener(type, callback);\n }\n }\n}\n\n/**\n * Add an event listener to a list of HTML elements\n * and returns a remove listener function.\n *\n * @param {NodeList|HTMLCollection} nodeList\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNodeList(nodeList, type, callback) {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.addEventListener(type, callback);\n });\n\n return {\n destroy: function() {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.removeEventListener(type, callback);\n });\n }\n }\n}\n\n/**\n * Add an event listener to a selector\n * and returns a remove listener function.\n *\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenSelector(selector, type, callback) {\n return delegate(document.body, selector, type, callback);\n}\n\nmodule.exports = listen;\n\n\n/***/ }),\n\n/***/ 817:\n/***/ (function(module) {\n\nfunction select(element) {\n var selectedText;\n\n if (element.nodeName === 'SELECT') {\n element.focus();\n\n selectedText = element.value;\n }\n else if (element.nodeName === 'INPUT' || element.nodeName 
=== 'TEXTAREA') {\n var isReadOnly = element.hasAttribute('readonly');\n\n if (!isReadOnly) {\n element.setAttribute('readonly', '');\n }\n\n element.select();\n element.setSelectionRange(0, element.value.length);\n\n if (!isReadOnly) {\n element.removeAttribute('readonly');\n }\n\n selectedText = element.value;\n }\n else {\n if (element.hasAttribute('contenteditable')) {\n element.focus();\n }\n\n var selection = window.getSelection();\n var range = document.createRange();\n\n range.selectNodeContents(element);\n selection.removeAllRanges();\n selection.addRange(range);\n\n selectedText = selection.toString();\n }\n\n return selectedText;\n}\n\nmodule.exports = select;\n\n\n/***/ }),\n\n/***/ 279:\n/***/ (function(module) {\n\nfunction E () {\n // Keep this empty so it's easier to inherit from\n // (via https://github.com/lipsmack from https://github.com/scottcorgan/tiny-emitter/issues/3)\n}\n\nE.prototype = {\n on: function (name, callback, ctx) {\n var e = this.e || (this.e = {});\n\n (e[name] || (e[name] = [])).push({\n fn: callback,\n ctx: ctx\n });\n\n return this;\n },\n\n once: function (name, callback, ctx) {\n var self = this;\n function listener () {\n self.off(name, listener);\n callback.apply(ctx, arguments);\n };\n\n listener._ = callback\n return this.on(name, listener, ctx);\n },\n\n emit: function (name) {\n var data = [].slice.call(arguments, 1);\n var evtArr = ((this.e || (this.e = {}))[name] || []).slice();\n var i = 0;\n var len = evtArr.length;\n\n for (i; i < len; i++) {\n evtArr[i].fn.apply(evtArr[i].ctx, data);\n }\n\n return this;\n },\n\n off: function (name, callback) {\n var e = this.e || (this.e = {});\n var evts = e[name];\n var liveEvents = [];\n\n if (evts && callback) {\n for (var i = 0, len = evts.length; i < len; i++) {\n if (evts[i].fn !== callback && evts[i].fn._ !== callback)\n liveEvents.push(evts[i]);\n }\n }\n\n // Remove event from queue to prevent memory leak\n // Suggested by https://github.com/lazd\n // Ref: https://github.com/scottcorgan/tiny-emitter/commit/c6ebfaa9bc973b33d110a84a307742b7cf94c953#commitcomment-5024910\n\n (liveEvents.length)\n ? 
e[name] = liveEvents\n : delete e[name];\n\n return this;\n }\n};\n\nmodule.exports = E;\nmodule.exports.TinyEmitter = E;\n\n\n/***/ })\n\n/******/ \t});\n/************************************************************************/\n/******/ \t// The module cache\n/******/ \tvar __webpack_module_cache__ = {};\n/******/ \t\n/******/ \t// The require function\n/******/ \tfunction __webpack_require__(moduleId) {\n/******/ \t\t// Check if module is in cache\n/******/ \t\tif(__webpack_module_cache__[moduleId]) {\n/******/ \t\t\treturn __webpack_module_cache__[moduleId].exports;\n/******/ \t\t}\n/******/ \t\t// Create a new module (and put it into the cache)\n/******/ \t\tvar module = __webpack_module_cache__[moduleId] = {\n/******/ \t\t\t// no module.id needed\n/******/ \t\t\t// no module.loaded needed\n/******/ \t\t\texports: {}\n/******/ \t\t};\n/******/ \t\n/******/ \t\t// Execute the module function\n/******/ \t\t__webpack_modules__[moduleId](module, module.exports, __webpack_require__);\n/******/ \t\n/******/ \t\t// Return the exports of the module\n/******/ \t\treturn module.exports;\n/******/ \t}\n/******/ \t\n/************************************************************************/\n/******/ \t/* webpack/runtime/compat get default export */\n/******/ \t!function() {\n/******/ \t\t// getDefaultExport function for compatibility with non-harmony modules\n/******/ \t\t__webpack_require__.n = function(module) {\n/******/ \t\t\tvar getter = module && module.__esModule ?\n/******/ \t\t\t\tfunction() { return module['default']; } :\n/******/ \t\t\t\tfunction() { return module; };\n/******/ \t\t\t__webpack_require__.d(getter, { a: getter });\n/******/ \t\t\treturn getter;\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/define property getters */\n/******/ \t!function() {\n/******/ \t\t// define getter functions for harmony exports\n/******/ \t\t__webpack_require__.d = function(exports, definition) {\n/******/ \t\t\tfor(var key in definition) {\n/******/ \t\t\t\tif(__webpack_require__.o(definition, key) && !__webpack_require__.o(exports, key)) {\n/******/ \t\t\t\t\tObject.defineProperty(exports, key, { enumerable: true, get: definition[key] });\n/******/ \t\t\t\t}\n/******/ \t\t\t}\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/hasOwnProperty shorthand */\n/******/ \t!function() {\n/******/ \t\t__webpack_require__.o = function(obj, prop) { return Object.prototype.hasOwnProperty.call(obj, prop); }\n/******/ \t}();\n/******/ \t\n/************************************************************************/\n/******/ \t// module exports must be returned from runtime so entry inlining is disabled\n/******/ \t// startup\n/******/ \t// Load entry module and return exports\n/******/ \treturn __webpack_require__(686);\n/******/ })()\n.default;\n});", "/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n 
var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? html + str.substring(lastIndex, index)\n : html;\n}\n", "/*\n * Copyright (c) 2016-2023 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport \"focus-visible\"\n\nimport {\n EMPTY,\n NEVER,\n Observable,\n Subject,\n defer,\n delay,\n filter,\n map,\n merge,\n mergeWith,\n shareReplay,\n switchMap\n} from \"rxjs\"\n\nimport { configuration, feature } from \"./_\"\nimport {\n at,\n getActiveElement,\n getOptionalElement,\n requestJSON,\n setLocation,\n setToggle,\n watchDocument,\n watchKeyboard,\n watchLocation,\n watchLocationTarget,\n watchMedia,\n watchPrint,\n watchScript,\n watchViewport\n} from \"./browser\"\nimport {\n getComponentElement,\n getComponentElements,\n mountAnnounce,\n mountBackToTop,\n mountConsent,\n mountContent,\n mountDialog,\n mountHeader,\n mountHeaderTitle,\n mountPalette,\n mountProgress,\n mountSearch,\n mountSearchHiglight,\n mountSidebar,\n mountSource,\n mountTableOfContents,\n mountTabs,\n watchHeader,\n watchMain\n} from \"./components\"\nimport {\n SearchIndex,\n setupClipboardJS,\n setupInstantNavigation,\n setupVersionSelector\n} from \"./integrations\"\nimport {\n patchIndeterminate,\n patchScrollfix,\n patchScrolllock\n} from \"./patches\"\nimport \"./polyfills\"\n\n/* ----------------------------------------------------------------------------\n * Functions - @todo refactor\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch search index\n *\n * @returns Search index observable\n */\nfunction fetchSearchIndex(): Observable {\n if (location.protocol === \"file:\") {\n return watchScript(\n `${new URL(\"search/search_index.js\", config.base)}`\n )\n .pipe(\n // @ts-ignore - @todo fix typings\n map(() => __index),\n shareReplay(1)\n )\n } else {\n return requestJSON(\n new URL(\"search/search_index.json\", config.base)\n )\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Application\n * 
------------------------------------------------------------------------- */\n\n/* Yay, JavaScript is available */\ndocument.documentElement.classList.remove(\"no-js\")\ndocument.documentElement.classList.add(\"js\")\n\n/* Set up navigation observables and subjects */\nconst document$ = watchDocument()\nconst location$ = watchLocation()\nconst target$ = watchLocationTarget(location$)\nconst keyboard$ = watchKeyboard()\n\n/* Set up media observables */\nconst viewport$ = watchViewport()\nconst tablet$ = watchMedia(\"(min-width: 960px)\")\nconst screen$ = watchMedia(\"(min-width: 1220px)\")\nconst print$ = watchPrint()\n\n/* Retrieve search index, if search is enabled */\nconst config = configuration()\nconst index$ = document.forms.namedItem(\"search\")\n ? fetchSearchIndex()\n : NEVER\n\n/* Set up Clipboard.js integration */\nconst alert$ = new Subject()\nsetupClipboardJS({ alert$ })\n\n/* Set up progress indicator */\nconst progress$ = new Subject()\n\n/* Set up instant navigation, if enabled */\nif (feature(\"navigation.instant\"))\n setupInstantNavigation({ location$, viewport$, progress$ })\n .subscribe(document$)\n\n/* Set up version selector */\nif (config.version?.provider === \"mike\")\n setupVersionSelector({ document$ })\n\n/* Always close drawer and search on navigation */\nmerge(location$, target$)\n .pipe(\n delay(125)\n )\n .subscribe(() => {\n setToggle(\"drawer\", false)\n setToggle(\"search\", false)\n })\n\n/* Set up global keyboard handlers */\nkeyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\")\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Go to previous page */\n case \"p\":\n case \",\":\n const prev = getOptionalElement(\"link[rel=prev]\")\n if (typeof prev !== \"undefined\")\n setLocation(prev)\n break\n\n /* Go to next page */\n case \"n\":\n case \".\":\n const next = getOptionalElement(\"link[rel=next]\")\n if (typeof next !== \"undefined\")\n setLocation(next)\n break\n\n /* Expand navigation, see https://bit.ly/3ZjG5io */\n case \"Enter\":\n const active = getActiveElement()\n if (active instanceof HTMLLabelElement)\n active.click()\n }\n })\n\n/* Set up patches */\npatchIndeterminate({ document$, tablet$ })\npatchScrollfix({ document$ })\npatchScrolllock({ viewport$, tablet$ })\n\n/* Set up header and main area observable */\nconst header$ = watchHeader(getComponentElement(\"header\"), { viewport$ })\nconst main$ = document$\n .pipe(\n map(() => getComponentElement(\"main\")),\n switchMap(el => watchMain(el, { viewport$, header$ })),\n shareReplay(1)\n )\n\n/* Set up control component observables */\nconst control$ = merge(\n\n /* Consent */\n ...getComponentElements(\"consent\")\n .map(el => mountConsent(el, { target$ })),\n\n /* Dialog */\n ...getComponentElements(\"dialog\")\n .map(el => mountDialog(el, { alert$ })),\n\n /* Header */\n ...getComponentElements(\"header\")\n .map(el => mountHeader(el, { viewport$, header$, main$ })),\n\n /* Color palette */\n ...getComponentElements(\"palette\")\n .map(el => mountPalette(el)),\n\n /* Progress bar */\n ...getComponentElements(\"progress\")\n .map(el => mountProgress(el, { progress$ })),\n\n /* Search */\n ...getComponentElements(\"search\")\n .map(el => mountSearch(el, { index$, keyboard$ })),\n\n /* Repository information */\n ...getComponentElements(\"source\")\n .map(el => mountSource(el))\n)\n\n/* Set up content component observables */\nconst content$ = defer(() => merge(\n\n /* Announcement bar */\n ...getComponentElements(\"announce\")\n .map(el => mountAnnounce(el)),\n\n /* Content 
*/\n ...getComponentElements(\"content\")\n .map(el => mountContent(el, { viewport$, target$, print$ })),\n\n /* Search highlighting */\n ...getComponentElements(\"content\")\n .map(el => feature(\"search.highlight\")\n ? mountSearchHiglight(el, { index$, location$ })\n : EMPTY\n ),\n\n /* Header title */\n ...getComponentElements(\"header-title\")\n .map(el => mountHeaderTitle(el, { viewport$, header$ })),\n\n /* Sidebar */\n ...getComponentElements(\"sidebar\")\n .map(el => el.getAttribute(\"data-md-type\") === \"navigation\"\n ? at(screen$, () => mountSidebar(el, { viewport$, header$, main$ }))\n : at(tablet$, () => mountSidebar(el, { viewport$, header$, main$ }))\n ),\n\n /* Navigation tabs */\n ...getComponentElements(\"tabs\")\n .map(el => mountTabs(el, { viewport$, header$ })),\n\n /* Table of contents */\n ...getComponentElements(\"toc\")\n .map(el => mountTableOfContents(el, {\n viewport$, header$, main$, target$\n })),\n\n /* Back-to-top button */\n ...getComponentElements(\"top\")\n .map(el => mountBackToTop(el, { viewport$, header$, main$, target$ }))\n))\n\n/* Set up component observables */\nconst component$ = document$\n .pipe(\n switchMap(() => content$),\n mergeWith(control$),\n shareReplay(1)\n )\n\n/* Subscribe to all components */\ncomponent$.subscribe()\n\n/* ----------------------------------------------------------------------------\n * Exports\n * ------------------------------------------------------------------------- */\n\nwindow.document$ = document$ /* Document observable */\nwindow.location$ = location$ /* Location subject */\nwindow.target$ = target$ /* Location target observable */\nwindow.keyboard$ = keyboard$ /* Keyboard observable */\nwindow.viewport$ = viewport$ /* Viewport observable */\nwindow.tablet$ = tablet$ /* Media tablet observable */\nwindow.screen$ = screen$ /* Media screen observable */\nwindow.print$ = print$ /* Media print observable */\nwindow.alert$ = alert$ /* Alert subject */\nwindow.progress$ = progress$ /* Progress indicator subject */\nwindow.component$ = component$ /* Component observable */\n", "/*! *****************************************************************************\r\nCopyright (c) Microsoft Corporation.\r\n\r\nPermission to use, copy, modify, and/or distribute this software for any\r\npurpose with or without fee is hereby granted.\r\n\r\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH\r\nREGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY\r\nAND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,\r\nINDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM\r\nLOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR\r\nOTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR\r\nPERFORMANCE OF THIS SOFTWARE.\r\n***************************************************************************** */\r\n/* global Reflect, Promise */\r\n\r\nvar extendStatics = function(d, b) {\r\n extendStatics = Object.setPrototypeOf ||\r\n ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||\r\n function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; };\r\n return extendStatics(d, b);\r\n};\r\n\r\nexport function __extends(d, b) {\r\n if (typeof b !== \"function\" && b !== null)\r\n throw new TypeError(\"Class extends value \" + String(b) + \" is not a constructor or null\");\r\n extendStatics(d, b);\r\n function __() { this.constructor = d; }\r\n d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __());\r\n}\r\n\r\nexport var __assign = function() {\r\n __assign = Object.assign || function __assign(t) {\r\n for (var s, i = 1, n = arguments.length; i < n; i++) {\r\n s = arguments[i];\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p];\r\n }\r\n return t;\r\n }\r\n return __assign.apply(this, arguments);\r\n}\r\n\r\nexport function __rest(s, e) {\r\n var t = {};\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)\r\n t[p] = s[p];\r\n if (s != null && typeof Object.getOwnPropertySymbols === \"function\")\r\n for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {\r\n if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))\r\n t[p[i]] = s[p[i]];\r\n }\r\n return t;\r\n}\r\n\r\nexport function __decorate(decorators, target, key, desc) {\r\n var c = arguments.length, r = c < 3 ? target : desc === null ? desc = Object.getOwnPropertyDescriptor(target, key) : desc, d;\r\n if (typeof Reflect === \"object\" && typeof Reflect.decorate === \"function\") r = Reflect.decorate(decorators, target, key, desc);\r\n else for (var i = decorators.length - 1; i >= 0; i--) if (d = decorators[i]) r = (c < 3 ? d(r) : c > 3 ? d(target, key, r) : d(target, key)) || r;\r\n return c > 3 && r && Object.defineProperty(target, key, r), r;\r\n}\r\n\r\nexport function __param(paramIndex, decorator) {\r\n return function (target, key) { decorator(target, key, paramIndex); }\r\n}\r\n\r\nexport function __metadata(metadataKey, metadataValue) {\r\n if (typeof Reflect === \"object\" && typeof Reflect.metadata === \"function\") return Reflect.metadata(metadataKey, metadataValue);\r\n}\r\n\r\nexport function __awaiter(thisArg, _arguments, P, generator) {\r\n function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }\r\n return new (P || (P = Promise))(function (resolve, reject) {\r\n function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }\r\n function rejected(value) { try { step(generator[\"throw\"](value)); } catch (e) { reject(e); } }\r\n function step(result) { result.done ? 
resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }\r\n step((generator = generator.apply(thisArg, _arguments || [])).next());\r\n });\r\n}\r\n\r\nexport function __generator(thisArg, body) {\r\n var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g;\r\n return g = { next: verb(0), \"throw\": verb(1), \"return\": verb(2) }, typeof Symbol === \"function\" && (g[Symbol.iterator] = function() { return this; }), g;\r\n function verb(n) { return function (v) { return step([n, v]); }; }\r\n function step(op) {\r\n if (f) throw new TypeError(\"Generator is already executing.\");\r\n while (_) try {\r\n if (f = 1, y && (t = op[0] & 2 ? y[\"return\"] : op[0] ? y[\"throw\"] || ((t = y[\"return\"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;\r\n if (y = 0, t) op = [op[0] & 2, t.value];\r\n switch (op[0]) {\r\n case 0: case 1: t = op; break;\r\n case 4: _.label++; return { value: op[1], done: false };\r\n case 5: _.label++; y = op[1]; op = [0]; continue;\r\n case 7: op = _.ops.pop(); _.trys.pop(); continue;\r\n default:\r\n if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }\r\n if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }\r\n if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }\r\n if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }\r\n if (t[2]) _.ops.pop();\r\n _.trys.pop(); continue;\r\n }\r\n op = body.call(thisArg, _);\r\n } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }\r\n if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };\r\n }\r\n}\r\n\r\nexport var __createBinding = Object.create ? (function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } });\r\n}) : (function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n o[k2] = m[k];\r\n});\r\n\r\nexport function __exportStar(m, o) {\r\n for (var p in m) if (p !== \"default\" && !Object.prototype.hasOwnProperty.call(o, p)) __createBinding(o, m, p);\r\n}\r\n\r\nexport function __values(o) {\r\n var s = typeof Symbol === \"function\" && Symbol.iterator, m = s && o[s], i = 0;\r\n if (m) return m.call(o);\r\n if (o && typeof o.length === \"number\") return {\r\n next: function () {\r\n if (o && i >= o.length) o = void 0;\r\n return { value: o && o[i++], done: !o };\r\n }\r\n };\r\n throw new TypeError(s ? 
\"Object is not iterable.\" : \"Symbol.iterator is not defined.\");\r\n}\r\n\r\nexport function __read(o, n) {\r\n var m = typeof Symbol === \"function\" && o[Symbol.iterator];\r\n if (!m) return o;\r\n var i = m.call(o), r, ar = [], e;\r\n try {\r\n while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value);\r\n }\r\n catch (error) { e = { error: error }; }\r\n finally {\r\n try {\r\n if (r && !r.done && (m = i[\"return\"])) m.call(i);\r\n }\r\n finally { if (e) throw e.error; }\r\n }\r\n return ar;\r\n}\r\n\r\n/** @deprecated */\r\nexport function __spread() {\r\n for (var ar = [], i = 0; i < arguments.length; i++)\r\n ar = ar.concat(__read(arguments[i]));\r\n return ar;\r\n}\r\n\r\n/** @deprecated */\r\nexport function __spreadArrays() {\r\n for (var s = 0, i = 0, il = arguments.length; i < il; i++) s += arguments[i].length;\r\n for (var r = Array(s), k = 0, i = 0; i < il; i++)\r\n for (var a = arguments[i], j = 0, jl = a.length; j < jl; j++, k++)\r\n r[k] = a[j];\r\n return r;\r\n}\r\n\r\nexport function __spreadArray(to, from, pack) {\r\n if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) {\r\n if (ar || !(i in from)) {\r\n if (!ar) ar = Array.prototype.slice.call(from, 0, i);\r\n ar[i] = from[i];\r\n }\r\n }\r\n return to.concat(ar || Array.prototype.slice.call(from));\r\n}\r\n\r\nexport function __await(v) {\r\n return this instanceof __await ? (this.v = v, this) : new __await(v);\r\n}\r\n\r\nexport function __asyncGenerator(thisArg, _arguments, generator) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var g = generator.apply(thisArg, _arguments || []), i, q = [];\r\n return i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i;\r\n function verb(n) { if (g[n]) i[n] = function (v) { return new Promise(function (a, b) { q.push([n, v, a, b]) > 1 || resume(n, v); }); }; }\r\n function resume(n, v) { try { step(g[n](v)); } catch (e) { settle(q[0][3], e); } }\r\n function step(r) { r.value instanceof __await ? Promise.resolve(r.value.v).then(fulfill, reject) : settle(q[0][2], r); }\r\n function fulfill(value) { resume(\"next\", value); }\r\n function reject(value) { resume(\"throw\", value); }\r\n function settle(f, v) { if (f(v), q.shift(), q.length) resume(q[0][0], q[0][1]); }\r\n}\r\n\r\nexport function __asyncDelegator(o) {\r\n var i, p;\r\n return i = {}, verb(\"next\"), verb(\"throw\", function (e) { throw e; }), verb(\"return\"), i[Symbol.iterator] = function () { return this; }, i;\r\n function verb(n, f) { i[n] = o[n] ? function (v) { return (p = !p) ? { value: __await(o[n](v)), done: n === \"return\" } : f ? f(v) : v; } : f; }\r\n}\r\n\r\nexport function __asyncValues(o) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var m = o[Symbol.asyncIterator], i;\r\n return m ? m.call(o) : (o = typeof __values === \"function\" ? 
__values(o) : o[Symbol.iterator](), i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i);\r\n function verb(n) { i[n] = o[n] && function (v) { return new Promise(function (resolve, reject) { v = o[n](v), settle(resolve, reject, v.done, v.value); }); }; }\r\n function settle(resolve, reject, d, v) { Promise.resolve(v).then(function(v) { resolve({ value: v, done: d }); }, reject); }\r\n}\r\n\r\nexport function __makeTemplateObject(cooked, raw) {\r\n if (Object.defineProperty) { Object.defineProperty(cooked, \"raw\", { value: raw }); } else { cooked.raw = raw; }\r\n return cooked;\r\n};\r\n\r\nvar __setModuleDefault = Object.create ? (function(o, v) {\r\n Object.defineProperty(o, \"default\", { enumerable: true, value: v });\r\n}) : function(o, v) {\r\n o[\"default\"] = v;\r\n};\r\n\r\nexport function __importStar(mod) {\r\n if (mod && mod.__esModule) return mod;\r\n var result = {};\r\n if (mod != null) for (var k in mod) if (k !== \"default\" && Object.prototype.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);\r\n __setModuleDefault(result, mod);\r\n return result;\r\n}\r\n\r\nexport function __importDefault(mod) {\r\n return (mod && mod.__esModule) ? mod : { default: mod };\r\n}\r\n\r\nexport function __classPrivateFieldGet(receiver, state, kind, f) {\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a getter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot read private member from an object whose class did not declare it\");\r\n return kind === \"m\" ? f : kind === \"a\" ? f.call(receiver) : f ? f.value : state.get(receiver);\r\n}\r\n\r\nexport function __classPrivateFieldSet(receiver, state, value, kind, f) {\r\n if (kind === \"m\") throw new TypeError(\"Private method is not writable\");\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a setter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot write private member to an object whose class did not declare it\");\r\n return (kind === \"a\" ? f.call(receiver, value) : f ? f.value = value : state.set(receiver, value)), value;\r\n}\r\n", "/**\n * Returns true if the object is a function.\n * @param value The value to check\n */\nexport function isFunction(value: any): value is (...args: any[]) => any {\n return typeof value === 'function';\n}\n", "/**\n * Used to create Error subclasses until the community moves away from ES5.\n *\n * This is because compiling from TypeScript down to ES5 has issues with subclassing Errors\n * as well as other built-in types: https://github.com/Microsoft/TypeScript/issues/12123\n *\n * @param createImpl A factory function to create the actual constructor implementation. 
The returned\n * function should be a named function that calls `_super` internally.\n */\nexport function createErrorClass(createImpl: (_super: any) => any): T {\n const _super = (instance: any) => {\n Error.call(instance);\n instance.stack = new Error().stack;\n };\n\n const ctorFunc = createImpl(_super);\n ctorFunc.prototype = Object.create(Error.prototype);\n ctorFunc.prototype.constructor = ctorFunc;\n return ctorFunc;\n}\n", "import { createErrorClass } from './createErrorClass';\n\nexport interface UnsubscriptionError extends Error {\n readonly errors: any[];\n}\n\nexport interface UnsubscriptionErrorCtor {\n /**\n * @deprecated Internal implementation detail. Do not construct error instances.\n * Cannot be tagged as internal: https://github.com/ReactiveX/rxjs/issues/6269\n */\n new (errors: any[]): UnsubscriptionError;\n}\n\n/**\n * An error thrown when one or more errors have occurred during the\n * `unsubscribe` of a {@link Subscription}.\n */\nexport const UnsubscriptionError: UnsubscriptionErrorCtor = createErrorClass(\n (_super) =>\n function UnsubscriptionErrorImpl(this: any, errors: (Error | string)[]) {\n _super(this);\n this.message = errors\n ? `${errors.length} errors occurred during unsubscription:\n${errors.map((err, i) => `${i + 1}) ${err.toString()}`).join('\\n ')}`\n : '';\n this.name = 'UnsubscriptionError';\n this.errors = errors;\n }\n);\n", "/**\n * Removes an item from an array, mutating it.\n * @param arr The array to remove the item from\n * @param item The item to remove\n */\nexport function arrRemove(arr: T[] | undefined | null, item: T) {\n if (arr) {\n const index = arr.indexOf(item);\n 0 <= index && arr.splice(index, 1);\n }\n}\n", "import { isFunction } from './util/isFunction';\nimport { UnsubscriptionError } from './util/UnsubscriptionError';\nimport { SubscriptionLike, TeardownLogic, Unsubscribable } from './types';\nimport { arrRemove } from './util/arrRemove';\n\n/**\n * Represents a disposable resource, such as the execution of an Observable. A\n * Subscription has one important method, `unsubscribe`, that takes no argument\n * and just disposes the resource held by the subscription.\n *\n * Additionally, subscriptions may be grouped together through the `add()`\n * method, which will attach a child Subscription to the current Subscription.\n * When a Subscription is unsubscribed, all its children (and its grandchildren)\n * will be unsubscribed as well.\n *\n * @class Subscription\n */\nexport class Subscription implements SubscriptionLike {\n /** @nocollapse */\n public static EMPTY = (() => {\n const empty = new Subscription();\n empty.closed = true;\n return empty;\n })();\n\n /**\n * A flag to indicate whether this Subscription has already been unsubscribed.\n */\n public closed = false;\n\n private _parentage: Subscription[] | Subscription | null = null;\n\n /**\n * The list of registered finalizers to execute upon unsubscription. Adding and removing from this\n * list occurs in the {@link #add} and {@link #remove} methods.\n */\n private _finalizers: Exclude[] | null = null;\n\n /**\n * @param initialTeardown A function executed first as part of the finalization\n * process that is kicked off when {@link #unsubscribe} is called.\n */\n constructor(private initialTeardown?: () => void) {}\n\n /**\n * Disposes the resources held by the subscription. 
May, for instance, cancel\n * an ongoing Observable execution or cancel any other type of work that\n * started when the Subscription was created.\n * @return {void}\n */\n unsubscribe(): void {\n let errors: any[] | undefined;\n\n if (!this.closed) {\n this.closed = true;\n\n // Remove this from it's parents.\n const { _parentage } = this;\n if (_parentage) {\n this._parentage = null;\n if (Array.isArray(_parentage)) {\n for (const parent of _parentage) {\n parent.remove(this);\n }\n } else {\n _parentage.remove(this);\n }\n }\n\n const { initialTeardown: initialFinalizer } = this;\n if (isFunction(initialFinalizer)) {\n try {\n initialFinalizer();\n } catch (e) {\n errors = e instanceof UnsubscriptionError ? e.errors : [e];\n }\n }\n\n const { _finalizers } = this;\n if (_finalizers) {\n this._finalizers = null;\n for (const finalizer of _finalizers) {\n try {\n execFinalizer(finalizer);\n } catch (err) {\n errors = errors ?? [];\n if (err instanceof UnsubscriptionError) {\n errors = [...errors, ...err.errors];\n } else {\n errors.push(err);\n }\n }\n }\n }\n\n if (errors) {\n throw new UnsubscriptionError(errors);\n }\n }\n }\n\n /**\n * Adds a finalizer to this subscription, so that finalization will be unsubscribed/called\n * when this subscription is unsubscribed. If this subscription is already {@link #closed},\n * because it has already been unsubscribed, then whatever finalizer is passed to it\n * will automatically be executed (unless the finalizer itself is also a closed subscription).\n *\n * Closed Subscriptions cannot be added as finalizers to any subscription. Adding a closed\n * subscription to a any subscription will result in no operation. (A noop).\n *\n * Adding a subscription to itself, or adding `null` or `undefined` will not perform any\n * operation at all. (A noop).\n *\n * `Subscription` instances that are added to this instance will automatically remove themselves\n * if they are unsubscribed. Functions and {@link Unsubscribable} objects that you wish to remove\n * will need to be removed manually with {@link #remove}\n *\n * @param teardown The finalization logic to add to this subscription.\n */\n add(teardown: TeardownLogic): void {\n // Only add the finalizer if it's not undefined\n // and don't add a subscription to itself.\n if (teardown && teardown !== this) {\n if (this.closed) {\n // If this subscription is already closed,\n // execute whatever finalizer is handed to it automatically.\n execFinalizer(teardown);\n } else {\n if (teardown instanceof Subscription) {\n // We don't add closed subscriptions, and we don't add the same subscription\n // twice. Subscription unsubscribe is idempotent.\n if (teardown.closed || teardown._hasParent(this)) {\n return;\n }\n teardown._addParent(this);\n }\n (this._finalizers = this._finalizers ?? 
[]).push(teardown);\n }\n }\n }\n\n /**\n * Checks to see if a this subscription already has a particular parent.\n * This will signal that this subscription has already been added to the parent in question.\n * @param parent the parent to check for\n */\n private _hasParent(parent: Subscription) {\n const { _parentage } = this;\n return _parentage === parent || (Array.isArray(_parentage) && _parentage.includes(parent));\n }\n\n /**\n * Adds a parent to this subscription so it can be removed from the parent if it\n * unsubscribes on it's own.\n *\n * NOTE: THIS ASSUMES THAT {@link _hasParent} HAS ALREADY BEEN CHECKED.\n * @param parent The parent subscription to add\n */\n private _addParent(parent: Subscription) {\n const { _parentage } = this;\n this._parentage = Array.isArray(_parentage) ? (_parentage.push(parent), _parentage) : _parentage ? [_parentage, parent] : parent;\n }\n\n /**\n * Called on a child when it is removed via {@link #remove}.\n * @param parent The parent to remove\n */\n private _removeParent(parent: Subscription) {\n const { _parentage } = this;\n if (_parentage === parent) {\n this._parentage = null;\n } else if (Array.isArray(_parentage)) {\n arrRemove(_parentage, parent);\n }\n }\n\n /**\n * Removes a finalizer from this subscription that was previously added with the {@link #add} method.\n *\n * Note that `Subscription` instances, when unsubscribed, will automatically remove themselves\n * from every other `Subscription` they have been added to. This means that using the `remove` method\n * is not a common thing and should be used thoughtfully.\n *\n * If you add the same finalizer instance of a function or an unsubscribable object to a `Subscription` instance\n * more than once, you will need to call `remove` the same number of times to remove all instances.\n *\n * All finalizer instances are removed to free up memory upon unsubscription.\n *\n * @param teardown The finalizer to remove from this subscription\n */\n remove(teardown: Exclude): void {\n const { _finalizers } = this;\n _finalizers && arrRemove(_finalizers, teardown);\n\n if (teardown instanceof Subscription) {\n teardown._removeParent(this);\n }\n }\n}\n\nexport const EMPTY_SUBSCRIPTION = Subscription.EMPTY;\n\nexport function isSubscription(value: any): value is Subscription {\n return (\n value instanceof Subscription ||\n (value && 'closed' in value && isFunction(value.remove) && isFunction(value.add) && isFunction(value.unsubscribe))\n );\n}\n\nfunction execFinalizer(finalizer: Unsubscribable | (() => void)) {\n if (isFunction(finalizer)) {\n finalizer();\n } else {\n finalizer.unsubscribe();\n }\n}\n", "import { Subscriber } from './Subscriber';\nimport { ObservableNotification } from './types';\n\n/**\n * The {@link GlobalConfig} object for RxJS. It is used to configure things\n * like how to react on unhandled errors.\n */\nexport const config: GlobalConfig = {\n onUnhandledError: null,\n onStoppedNotification: null,\n Promise: undefined,\n useDeprecatedSynchronousErrorHandling: false,\n useDeprecatedNextContext: false,\n};\n\n/**\n * The global configuration object for RxJS, used to configure things\n * like how to react on unhandled errors. Accessible via {@link config}\n * object.\n */\nexport interface GlobalConfig {\n /**\n * A registration point for unhandled errors from RxJS. These are errors that\n * cannot were not handled by consuming code in the usual subscription path. 
For\n * example, if you have this configured, and you subscribe to an observable without\n * providing an error handler, errors from that subscription will end up here. This\n * will _always_ be called asynchronously on another job in the runtime. This is because\n * we do not want errors thrown in this user-configured handler to interfere with the\n * behavior of the library.\n */\n onUnhandledError: ((err: any) => void) | null;\n\n /**\n * A registration point for notifications that cannot be sent to subscribers because they\n * have completed, errored or have been explicitly unsubscribed. By default, next, complete\n * and error notifications sent to stopped subscribers are noops. However, sometimes callers\n * might want a different behavior. For example, with sources that attempt to report errors\n * to stopped subscribers, a caller can configure RxJS to throw an unhandled error instead.\n * This will _always_ be called asynchronously on another job in the runtime. This is because\n * we do not want errors thrown in this user-configured handler to interfere with the\n * behavior of the library.\n */\n onStoppedNotification: ((notification: ObservableNotification, subscriber: Subscriber) => void) | null;\n\n /**\n * The promise constructor used by default for {@link Observable#toPromise toPromise} and {@link Observable#forEach forEach}\n * methods.\n *\n * @deprecated As of version 8, RxJS will no longer support this sort of injection of a\n * Promise constructor. If you need a Promise implementation other than native promises,\n * please polyfill/patch Promise as you see appropriate. Will be removed in v8.\n */\n Promise?: PromiseConstructorLike;\n\n /**\n * If true, turns on synchronous error rethrowing, which is a deprecated behavior\n * in v6 and higher. This behavior enables bad patterns like wrapping a subscribe\n * call in a try/catch block. It also enables producer interference, a nasty bug\n * where a multicast can be broken for all observers by a downstream consumer with\n * an unhandled error. DO NOT USE THIS FLAG UNLESS IT'S NEEDED TO BUY TIME\n * FOR MIGRATION REASONS.\n *\n * @deprecated As of version 8, RxJS will no longer support synchronous throwing\n * of unhandled errors. All errors will be thrown on a separate call stack to prevent bad\n * behaviors described above. Will be removed in v8.\n */\n useDeprecatedSynchronousErrorHandling: boolean;\n\n /**\n * If true, enables an as-of-yet undocumented feature from v5: The ability to access\n * `unsubscribe()` via `this` context in `next` functions created in observers passed\n * to `subscribe`.\n *\n * This is being removed because the performance was severely problematic, and it could also cause\n * issues when types other than POJOs are passed to subscribe as subscribers, as they will likely have\n * their `this` context overwritten.\n *\n * @deprecated As of version 8, RxJS will no longer support altering the\n * context of next functions provided as part of an observer to Subscribe. Instead,\n * you will have access to a subscription or a signal or token that will allow you to do things like\n * unsubscribe and test closed status. 
Will be removed in v8.\n */\n useDeprecatedNextContext: boolean;\n}\n", "import type { TimerHandle } from './timerHandle';\ntype SetTimeoutFunction = (handler: () => void, timeout?: number, ...args: any[]) => TimerHandle;\ntype ClearTimeoutFunction = (handle: TimerHandle) => void;\n\ninterface TimeoutProvider {\n setTimeout: SetTimeoutFunction;\n clearTimeout: ClearTimeoutFunction;\n delegate:\n | {\n setTimeout: SetTimeoutFunction;\n clearTimeout: ClearTimeoutFunction;\n }\n | undefined;\n}\n\nexport const timeoutProvider: TimeoutProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n setTimeout(handler: () => void, timeout?: number, ...args) {\n const { delegate } = timeoutProvider;\n if (delegate?.setTimeout) {\n return delegate.setTimeout(handler, timeout, ...args);\n }\n return setTimeout(handler, timeout, ...args);\n },\n clearTimeout(handle) {\n const { delegate } = timeoutProvider;\n return (delegate?.clearTimeout || clearTimeout)(handle as any);\n },\n delegate: undefined,\n};\n", "import { config } from '../config';\nimport { timeoutProvider } from '../scheduler/timeoutProvider';\n\n/**\n * Handles an error on another job either with the user-configured {@link onUnhandledError},\n * or by throwing it on that new job so it can be picked up by `window.onerror`, `process.on('error')`, etc.\n *\n * This should be called whenever there is an error that is out-of-band with the subscription\n * or when an error hits a terminal boundary of the subscription and no error handler was provided.\n *\n * @param err the error to report\n */\nexport function reportUnhandledError(err: any) {\n timeoutProvider.setTimeout(() => {\n const { onUnhandledError } = config;\n if (onUnhandledError) {\n // Execute the user-configured error handler.\n onUnhandledError(err);\n } else {\n // Throw so it is picked up by the runtime's uncaught error mechanism.\n throw err;\n }\n });\n}\n", "/* tslint:disable:no-empty */\nexport function noop() { }\n", "import { CompleteNotification, NextNotification, ErrorNotification } from './types';\n\n/**\n * A completion object optimized for memory use and created to be the\n * same \"shape\" as other notifications in v8.\n * @internal\n */\nexport const COMPLETE_NOTIFICATION = (() => createNotification('C', undefined, undefined) as CompleteNotification)();\n\n/**\n * Internal use only. Creates an optimized error notification that is the same \"shape\"\n * as other notifications.\n * @internal\n */\nexport function errorNotification(error: any): ErrorNotification {\n return createNotification('E', undefined, error) as any;\n}\n\n/**\n * Internal use only. Creates an optimized next notification that is the same \"shape\"\n * as other notifications.\n * @internal\n */\nexport function nextNotification(value: T) {\n return createNotification('N', value, undefined) as NextNotification;\n}\n\n/**\n * Ensures that all notifications created internally have the same \"shape\" in v8.\n *\n * TODO: This is only exported to support a crazy legacy test in `groupBy`.\n * @internal\n */\nexport function createNotification(kind: 'N' | 'E' | 'C', value: any, error: any) {\n return {\n kind,\n value,\n error,\n };\n}\n", "import { config } from '../config';\n\nlet context: { errorThrown: boolean; error: any } | null = null;\n\n/**\n * Handles dealing with errors for super-gross mode. 
Creates a context, in which\n * any synchronously thrown errors will be passed to {@link captureError}. Which\n * will record the error such that it will be rethrown after the call back is complete.\n * TODO: Remove in v8\n * @param cb An immediately executed function.\n */\nexport function errorContext(cb: () => void) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n const isRoot = !context;\n if (isRoot) {\n context = { errorThrown: false, error: null };\n }\n cb();\n if (isRoot) {\n const { errorThrown, error } = context!;\n context = null;\n if (errorThrown) {\n throw error;\n }\n }\n } else {\n // This is the general non-deprecated path for everyone that\n // isn't crazy enough to use super-gross mode (useDeprecatedSynchronousErrorHandling)\n cb();\n }\n}\n\n/**\n * Captures errors only in super-gross mode.\n * @param err the error to capture\n */\nexport function captureError(err: any) {\n if (config.useDeprecatedSynchronousErrorHandling && context) {\n context.errorThrown = true;\n context.error = err;\n }\n}\n", "import { isFunction } from './util/isFunction';\nimport { Observer, ObservableNotification } from './types';\nimport { isSubscription, Subscription } from './Subscription';\nimport { config } from './config';\nimport { reportUnhandledError } from './util/reportUnhandledError';\nimport { noop } from './util/noop';\nimport { nextNotification, errorNotification, COMPLETE_NOTIFICATION } from './NotificationFactories';\nimport { timeoutProvider } from './scheduler/timeoutProvider';\nimport { captureError } from './util/errorContext';\n\n/**\n * Implements the {@link Observer} interface and extends the\n * {@link Subscription} class. While the {@link Observer} is the public API for\n * consuming the values of an {@link Observable}, all Observers get converted to\n * a Subscriber, in order to provide Subscription-like capabilities such as\n * `unsubscribe`. Subscriber is a common type in RxJS, and crucial for\n * implementing operators, but it is rarely used as a public API.\n *\n * @class Subscriber\n */\nexport class Subscriber extends Subscription implements Observer {\n /**\n * A static factory for a Subscriber, given a (potentially partial) definition\n * of an Observer.\n * @param next The `next` callback of an Observer.\n * @param error The `error` callback of an\n * Observer.\n * @param complete The `complete` callback of an\n * Observer.\n * @return A Subscriber wrapping the (partially defined)\n * Observer represented by the given arguments.\n * @nocollapse\n * @deprecated Do not use. Will be removed in v8. There is no replacement for this\n * method, and there is no reason to be creating instances of `Subscriber` directly.\n * If you have a specific use case, please file an issue.\n */\n static create(next?: (x?: T) => void, error?: (e?: any) => void, complete?: () => void): Subscriber {\n return new SafeSubscriber(next, error, complete);\n }\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n protected isStopped: boolean = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n protected destination: Subscriber | Observer; // this `any` is the escape hatch to erase extra type param (e.g. R)\n\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n * There is no reason to directly create an instance of Subscriber. 
This type is exported for typings reasons.\n */\n constructor(destination?: Subscriber | Observer) {\n super();\n if (destination) {\n this.destination = destination;\n // Automatically chain subscriptions together here.\n // if destination is a Subscription, then it is a Subscriber.\n if (isSubscription(destination)) {\n destination.add(this);\n }\n } else {\n this.destination = EMPTY_OBSERVER;\n }\n }\n\n /**\n * The {@link Observer} callback to receive notifications of type `next` from\n * the Observable, with a value. The Observable may call this method 0 or more\n * times.\n * @param {T} [value] The `next` value.\n * @return {void}\n */\n next(value?: T): void {\n if (this.isStopped) {\n handleStoppedNotification(nextNotification(value), this);\n } else {\n this._next(value!);\n }\n }\n\n /**\n * The {@link Observer} callback to receive notifications of type `error` from\n * the Observable, with an attached `Error`. Notifies the Observer that\n * the Observable has experienced an error condition.\n * @param {any} [err] The `error` exception.\n * @return {void}\n */\n error(err?: any): void {\n if (this.isStopped) {\n handleStoppedNotification(errorNotification(err), this);\n } else {\n this.isStopped = true;\n this._error(err);\n }\n }\n\n /**\n * The {@link Observer} callback to receive a valueless notification of type\n * `complete` from the Observable. Notifies the Observer that the Observable\n * has finished sending push-based notifications.\n * @return {void}\n */\n complete(): void {\n if (this.isStopped) {\n handleStoppedNotification(COMPLETE_NOTIFICATION, this);\n } else {\n this.isStopped = true;\n this._complete();\n }\n }\n\n unsubscribe(): void {\n if (!this.closed) {\n this.isStopped = true;\n super.unsubscribe();\n this.destination = null!;\n }\n }\n\n protected _next(value: T): void {\n this.destination.next(value);\n }\n\n protected _error(err: any): void {\n try {\n this.destination.error(err);\n } finally {\n this.unsubscribe();\n }\n }\n\n protected _complete(): void {\n try {\n this.destination.complete();\n } finally {\n this.unsubscribe();\n }\n }\n}\n\n/**\n * This bind is captured here because we want to be able to have\n * compatibility with monoid libraries that tend to use a method named\n * `bind`. 
In particular, a library called Monio requires this.\n */\nconst _bind = Function.prototype.bind;\n\nfunction bind any>(fn: Fn, thisArg: any): Fn {\n return _bind.call(fn, thisArg);\n}\n\n/**\n * Internal optimization only, DO NOT EXPOSE.\n * @internal\n */\nclass ConsumerObserver implements Observer {\n constructor(private partialObserver: Partial>) {}\n\n next(value: T): void {\n const { partialObserver } = this;\n if (partialObserver.next) {\n try {\n partialObserver.next(value);\n } catch (error) {\n handleUnhandledError(error);\n }\n }\n }\n\n error(err: any): void {\n const { partialObserver } = this;\n if (partialObserver.error) {\n try {\n partialObserver.error(err);\n } catch (error) {\n handleUnhandledError(error);\n }\n } else {\n handleUnhandledError(err);\n }\n }\n\n complete(): void {\n const { partialObserver } = this;\n if (partialObserver.complete) {\n try {\n partialObserver.complete();\n } catch (error) {\n handleUnhandledError(error);\n }\n }\n }\n}\n\nexport class SafeSubscriber extends Subscriber {\n constructor(\n observerOrNext?: Partial> | ((value: T) => void) | null,\n error?: ((e?: any) => void) | null,\n complete?: (() => void) | null\n ) {\n super();\n\n let partialObserver: Partial>;\n if (isFunction(observerOrNext) || !observerOrNext) {\n // The first argument is a function, not an observer. The next\n // two arguments *could* be observers, or they could be empty.\n partialObserver = {\n next: (observerOrNext ?? undefined) as (((value: T) => void) | undefined),\n error: error ?? undefined,\n complete: complete ?? undefined,\n };\n } else {\n // The first argument is a partial observer.\n let context: any;\n if (this && config.useDeprecatedNextContext) {\n // This is a deprecated path that made `this.unsubscribe()` available in\n // next handler functions passed to subscribe. This only exists behind a flag\n // now, as it is *very* slow.\n context = Object.create(observerOrNext);\n context.unsubscribe = () => this.unsubscribe();\n partialObserver = {\n next: observerOrNext.next && bind(observerOrNext.next, context),\n error: observerOrNext.error && bind(observerOrNext.error, context),\n complete: observerOrNext.complete && bind(observerOrNext.complete, context),\n };\n } else {\n // The \"normal\" path. 
Just use the partial observer directly.\n partialObserver = observerOrNext;\n }\n }\n\n // Wrap the partial observer to ensure it's a full observer, and\n // make sure proper error handling is accounted for.\n this.destination = new ConsumerObserver(partialObserver);\n }\n}\n\nfunction handleUnhandledError(error: any) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n captureError(error);\n } else {\n // Ideal path, we report this as an unhandled error,\n // which is thrown on a new call stack.\n reportUnhandledError(error);\n }\n}\n\n/**\n * An error handler used when no error handler was supplied\n * to the SafeSubscriber -- meaning no error handler was supplied\n * to the `subscribe` call on our observable.\n * @param err The error to handle\n */\nfunction defaultErrorHandler(err: any) {\n throw err;\n}\n\n/**\n * A handler for notifications that cannot be sent to a stopped subscriber.\n * @param notification The notification being sent\n * @param subscriber The stopped subscriber\n */\nfunction handleStoppedNotification(notification: ObservableNotification, subscriber: Subscriber) {\n const { onStoppedNotification } = config;\n onStoppedNotification && timeoutProvider.setTimeout(() => onStoppedNotification(notification, subscriber));\n}\n\n/**\n * The observer used as a stub for subscriptions where the user did not\n * pass any arguments to `subscribe`. Comes with the default error handling\n * behavior.\n */\nexport const EMPTY_OBSERVER: Readonly> & { closed: true } = {\n closed: true,\n next: noop,\n error: defaultErrorHandler,\n complete: noop,\n};\n", "/**\n * Symbol.observable or a string \"@@observable\". Used for interop\n *\n * @deprecated We will no longer be exporting this symbol in upcoming versions of RxJS.\n * Instead polyfill and use Symbol.observable directly *or* use https://www.npmjs.com/package/symbol-observable\n */\nexport const observable: string | symbol = (() => (typeof Symbol === 'function' && Symbol.observable) || '@@observable')();\n", "/**\n * This function takes one parameter and just returns it. Simply put,\n * this is like `(x: T): T => x`.\n *\n * ## Examples\n *\n * This is useful in some cases when using things like `mergeMap`\n *\n * ```ts\n * import { interval, take, map, range, mergeMap, identity } from 'rxjs';\n *\n * const source$ = interval(1000).pipe(take(5));\n *\n * const result$ = source$.pipe(\n * map(i => range(i)),\n * mergeMap(identity) // same as mergeMap(x => x)\n * );\n *\n * result$.subscribe({\n * next: console.log\n * });\n * ```\n *\n * Or when you want to selectively apply an operator\n *\n * ```ts\n * import { interval, take, identity } from 'rxjs';\n *\n * const shouldLimit = () => Math.random() < 0.5;\n *\n * const source$ = interval(1000);\n *\n * const result$ = source$.pipe(shouldLimit() ? 
take(5) : identity);\n *\n * result$.subscribe({\n * next: console.log\n * });\n * ```\n *\n * @param x Any value that is returned by this function\n * @returns The value passed as the first parameter to this function\n */\nexport function identity(x: T): T {\n return x;\n}\n", "import { identity } from './identity';\nimport { UnaryFunction } from '../types';\n\nexport function pipe(): typeof identity;\nexport function pipe(fn1: UnaryFunction): UnaryFunction;\nexport function pipe(fn1: UnaryFunction, fn2: UnaryFunction): UnaryFunction;\nexport function pipe(fn1: UnaryFunction, fn2: UnaryFunction, fn3: UnaryFunction): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction,\n fn9: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction,\n fn9: UnaryFunction,\n ...fns: UnaryFunction[]\n): UnaryFunction;\n\n/**\n * pipe() can be called on one or more functions, each of which can take one argument (\"UnaryFunction\")\n * and uses it to return a value.\n * It returns a function that takes one argument, passes it to the first UnaryFunction, and then\n * passes the result to the next one, passes that result to the next one, and so on. \n */\nexport function pipe(...fns: Array>): UnaryFunction {\n return pipeFromArray(fns);\n}\n\n/** @internal */\nexport function pipeFromArray(fns: Array>): UnaryFunction {\n if (fns.length === 0) {\n return identity as UnaryFunction;\n }\n\n if (fns.length === 1) {\n return fns[0];\n }\n\n return function piped(input: T): R {\n return fns.reduce((prev: any, fn: UnaryFunction) => fn(prev), input as any);\n };\n}\n", "import { Operator } from './Operator';\nimport { SafeSubscriber, Subscriber } from './Subscriber';\nimport { isSubscription, Subscription } from './Subscription';\nimport { TeardownLogic, OperatorFunction, Subscribable, Observer } from './types';\nimport { observable as Symbol_observable } from './symbol/observable';\nimport { pipeFromArray } from './util/pipe';\nimport { config } from './config';\nimport { isFunction } from './util/isFunction';\nimport { errorContext } from './util/errorContext';\n\n/**\n * A representation of any set of values over any amount of time. This is the most basic building block\n * of RxJS.\n *\n * @class Observable\n */\nexport class Observable implements Subscribable {\n /**\n * @deprecated Internal implementation detail, do not use directly. 
Will be made internal in v8.\n */\n source: Observable | undefined;\n\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n */\n operator: Operator | undefined;\n\n /**\n * @constructor\n * @param {Function} subscribe the function that is called when the Observable is\n * initially subscribed to. This function is given a Subscriber, to which new values\n * can be `next`ed, or an `error` method can be called to raise an error, or\n * `complete` can be called to notify of a successful completion.\n */\n constructor(subscribe?: (this: Observable, subscriber: Subscriber) => TeardownLogic) {\n if (subscribe) {\n this._subscribe = subscribe;\n }\n }\n\n // HACK: Since TypeScript inherits static properties too, we have to\n // fight against TypeScript here so Subject can have a different static create signature\n /**\n * Creates a new Observable by calling the Observable constructor\n * @owner Observable\n * @method create\n * @param {Function} subscribe? the subscriber function to be passed to the Observable constructor\n * @return {Observable} a new observable\n * @nocollapse\n * @deprecated Use `new Observable()` instead. Will be removed in v8.\n */\n static create: (...args: any[]) => any = (subscribe?: (subscriber: Subscriber) => TeardownLogic) => {\n return new Observable(subscribe);\n };\n\n /**\n * Creates a new Observable, with this Observable instance as the source, and the passed\n * operator defined as the new observable's operator.\n * @method lift\n * @param operator the operator defining the operation to take on the observable\n * @return a new observable with the Operator applied\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n * If you have implemented an operator using `lift`, it is recommended that you create an\n * operator by simply returning `new Observable()` directly. See \"Creating new operators from\n * scratch\" section here: https://rxjs.dev/guide/operators\n */\n lift(operator?: Operator): Observable {\n const observable = new Observable();\n observable.source = this;\n observable.operator = operator;\n return observable;\n }\n\n subscribe(observerOrNext?: Partial> | ((value: T) => void)): Subscription;\n /** @deprecated Instead of passing separate callback arguments, use an observer argument. Signatures taking separate callback arguments will be removed in v8. Details: https://rxjs.dev/deprecations/subscribe-arguments */\n subscribe(next?: ((value: T) => void) | null, error?: ((error: any) => void) | null, complete?: (() => void) | null): Subscription;\n /**\n * Invokes an execution of an Observable and registers Observer handlers for notifications it will emit.\n *\n * Use it when you have all these Observables, but still nothing is happening.\n *\n * `subscribe` is not a regular operator, but a method that calls Observable's internal `subscribe` function. It\n * might be for example a function that you passed to Observable's constructor, but most of the time it is\n * a library implementation, which defines what will be emitted by an Observable, and when it will be emitted. This means\n * that calling `subscribe` is actually the moment when Observable starts its work, not when it is created, as is often\n * thought.\n *\n * Apart from starting the execution of an Observable, this method allows you to listen for values\n * that an Observable emits, as well as for when it completes or errors. 
You can achieve this in two\n * of the following ways.\n *\n * The first way is to create an object that implements the {@link Observer} interface. It should have methods\n * defined by that interface, but note that it should be just a regular JavaScript object, which you can create\n * yourself in any way you want (ES6 class, classic function constructor, object literal, etc.). In particular, do\n * not attempt to use any RxJS implementation details to create Observers - you don't need them. Remember also\n * that your object does not have to implement all methods. If you find yourself creating a method that doesn't\n * do anything, you can simply omit it. Note, however, that if the `error` method is not provided and an error happens,\n * it will be thrown asynchronously. Errors thrown asynchronously cannot be caught using `try`/`catch`. Instead,\n * use the {@link onUnhandledError} configuration option or use a runtime handler (like `window.onerror` or\n * `process.on('error')`) to be notified of unhandled errors. Because of this, it's recommended that you provide\n * an `error` method to avoid missing thrown errors.\n *\n * The second way is to give up on the Observer object altogether and simply provide callback functions in place of its methods.\n * This means you can provide three functions as arguments to `subscribe`, where the first function is the equivalent\n * of a `next` method, the second of an `error` method and the third of a `complete` method. Just as in the case of an Observer,\n * if you do not need to listen for something, you can omit a function by passing `undefined` or `null`,\n * since `subscribe` recognizes these functions by where they were placed in the function call. When it comes\n * to the `error` function, as with an Observer, if not provided, errors emitted by an Observable will be thrown asynchronously.\n *\n * You can, however, subscribe with no parameters at all. This may be the case where you're not interested in terminal events\n * and you have also handled emissions internally by using operators (e.g. using `tap`).\n *\n * Whichever style of calling `subscribe` you use, it returns a Subscription object in both cases.\n * This object allows you to call `unsubscribe` on it, which in turn will stop the work that an Observable does and will clean\n * up all resources that an Observable used. Note that cancelling a subscription will not call the `complete` callback\n * provided to the `subscribe` function, which is reserved for a regular completion signal that comes from an Observable.\n *\n * Remember that callbacks provided to `subscribe` are not guaranteed to be called asynchronously.\n * It is an Observable itself that decides when these functions will be called. For example {@link of}\n * by default emits all its values synchronously. 
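\n * For instance (an illustrative sketch; it assumes nothing beyond the documented synchronous behavior of `of`):\n *\n * ```ts\n * import { of } from 'rxjs';\n *\n * console.log('before subscribe');\n * of(1, 2).subscribe(value => console.log(value));\n * console.log('after subscribe');\n *\n * // Logs: 'before subscribe', 1, 2, 'after subscribe', because `of` emits during the subscribe call.\n * ```\n *\n * 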
Always check documentation for how given Observable\n * will behave when subscribed and if its default behavior can be modified with a `scheduler`.\n *\n * #### Examples\n *\n * Subscribe with an {@link guide/observer Observer}\n *\n * ```ts\n * import { of } from 'rxjs';\n *\n * const sumObserver = {\n * sum: 0,\n * next(value) {\n * console.log('Adding: ' + value);\n * this.sum = this.sum + value;\n * },\n * error() {\n * // We actually could just remove this method,\n * // since we do not really care about errors right now.\n * },\n * complete() {\n * console.log('Sum equals: ' + this.sum);\n * }\n * };\n *\n * of(1, 2, 3) // Synchronously emits 1, 2, 3 and then completes.\n * .subscribe(sumObserver);\n *\n * // Logs:\n * // 'Adding: 1'\n * // 'Adding: 2'\n * // 'Adding: 3'\n * // 'Sum equals: 6'\n * ```\n *\n * Subscribe with functions ({@link deprecations/subscribe-arguments deprecated})\n *\n * ```ts\n * import { of } from 'rxjs'\n *\n * let sum = 0;\n *\n * of(1, 2, 3).subscribe(\n * value => {\n * console.log('Adding: ' + value);\n * sum = sum + value;\n * },\n * undefined,\n * () => console.log('Sum equals: ' + sum)\n * );\n *\n * // Logs:\n * // 'Adding: 1'\n * // 'Adding: 2'\n * // 'Adding: 3'\n * // 'Sum equals: 6'\n * ```\n *\n * Cancel a subscription\n *\n * ```ts\n * import { interval } from 'rxjs';\n *\n * const subscription = interval(1000).subscribe({\n * next(num) {\n * console.log(num)\n * },\n * complete() {\n * // Will not be called, even when cancelling subscription.\n * console.log('completed!');\n * }\n * });\n *\n * setTimeout(() => {\n * subscription.unsubscribe();\n * console.log('unsubscribed!');\n * }, 2500);\n *\n * // Logs:\n * // 0 after 1s\n * // 1 after 2s\n * // 'unsubscribed!' after 2.5s\n * ```\n *\n * @param {Observer|Function} observerOrNext (optional) Either an observer with methods to be called,\n * or the first of three possible handlers, which is the handler for each value emitted from the subscribed\n * Observable.\n * @param {Function} error (optional) A handler for a terminal event resulting from an error. If no error handler is provided,\n * the error will be thrown asynchronously as unhandled.\n * @param {Function} complete (optional) A handler for a terminal event resulting from successful completion.\n * @return {Subscription} a subscription reference to the registered handlers\n * @method subscribe\n */\n subscribe(\n observerOrNext?: Partial> | ((value: T) => void) | null,\n error?: ((error: any) => void) | null,\n complete?: (() => void) | null\n ): Subscription {\n const subscriber = isSubscriber(observerOrNext) ? observerOrNext : new SafeSubscriber(observerOrNext, error, complete);\n\n errorContext(() => {\n const { operator, source } = this;\n subscriber.add(\n operator\n ? // We're dealing with a subscription in the\n // operator chain to one of our lifted operators.\n operator.call(subscriber, source)\n : source\n ? // If `source` has a value, but `operator` does not, something that\n // had intimate knowledge of our API, like our `Subject`, must have\n // set it. 
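\n // For example, `asObservable()` and the `AnonymousSubject` constructor (see Subject)\n // assign `source` directly.\n // 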
We're going to just call `_subscribe` directly.\n this._subscribe(subscriber)\n : // In all other cases, we're likely wrapping a user-provided initializer\n // function, so we need to catch errors and handle them appropriately.\n this._trySubscribe(subscriber)\n );\n });\n\n return subscriber;\n }\n\n /** @internal */\n protected _trySubscribe(sink: Subscriber): TeardownLogic {\n try {\n return this._subscribe(sink);\n } catch (err) {\n // We don't need to return anything in this case,\n // because it's just going to try to `add()` to a subscription\n // above.\n sink.error(err);\n }\n }\n\n /**\n * Used as a NON-CANCELLABLE means of subscribing to an observable, for use with\n * APIs that expect promises, like `async/await`. You cannot unsubscribe from this.\n *\n * **WARNING**: Only use this with observables you *know* will complete. If the source\n * observable does not complete, you will end up with a promise that is hung up, and\n * potentially all of the state of an async function hanging out in memory. To avoid\n * this situation, look into adding something like {@link timeout}, {@link take},\n * {@link takeWhile}, or {@link takeUntil} amongst others.\n *\n * #### Example\n *\n * ```ts\n * import { interval, take } from 'rxjs';\n *\n * const source$ = interval(1000).pipe(take(4));\n *\n * async function getTotal() {\n * let total = 0;\n *\n * await source$.forEach(value => {\n * total += value;\n * console.log('observable -> ' + value);\n * });\n *\n * return total;\n * }\n *\n * getTotal().then(\n * total => console.log('Total: ' + total)\n * );\n *\n * // Expected:\n * // 'observable -> 0'\n * // 'observable -> 1'\n * // 'observable -> 2'\n * // 'observable -> 3'\n * // 'Total: 6'\n * ```\n *\n * @param next a handler for each value emitted by the observable\n * @return a promise that either resolves on observable completion or\n * rejects with the handled error\n */\n forEach(next: (value: T) => void): Promise;\n\n /**\n * @param next a handler for each value emitted by the observable\n * @param promiseCtor a constructor function used to instantiate the Promise\n * @return a promise that either resolves on observable completion or\n * rejects with the handled error\n * @deprecated Passing a Promise constructor will no longer be available\n * in upcoming versions of RxJS. This is because it adds weight to the library, for very\n * little benefit. If you need this functionality, it is recommended that you either\n * polyfill Promise, or you create an adapter to convert the returned native promise\n * to whatever promise implementation you wanted. 
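\n * For example (a rough, hypothetical sketch; `MyPromise` stands in for any Promise/A+ constructor\n * and `source$` for any observable):\n *\n * ```ts\n * const asMyPromise = <T>(promise: Promise<T>): MyPromise<T> =>\n * new MyPromise<T>((resolve, reject) => promise.then(resolve, reject));\n *\n * const done = asMyPromise(source$.forEach(value => console.log(value)));\n * ```\n *\n * 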
Will be removed in v8.\n */\n forEach(next: (value: T) => void, promiseCtor: PromiseConstructorLike): Promise;\n\n forEach(next: (value: T) => void, promiseCtor?: PromiseConstructorLike): Promise {\n promiseCtor = getPromiseCtor(promiseCtor);\n\n return new promiseCtor((resolve, reject) => {\n const subscriber = new SafeSubscriber({\n next: (value) => {\n try {\n next(value);\n } catch (err) {\n reject(err);\n subscriber.unsubscribe();\n }\n },\n error: reject,\n complete: resolve,\n });\n this.subscribe(subscriber);\n }) as Promise;\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): TeardownLogic {\n return this.source?.subscribe(subscriber);\n }\n\n /**\n * An interop point defined by the es7-observable spec https://github.com/zenparsing/es-observable\n * @method Symbol.observable\n * @return {Observable} this instance of the observable\n */\n [Symbol_observable]() {\n return this;\n }\n\n /* tslint:disable:max-line-length */\n pipe(): Observable;\n pipe(op1: OperatorFunction): Observable;\n pipe(op1: OperatorFunction, op2: OperatorFunction): Observable;\n pipe(op1: OperatorFunction, op2: OperatorFunction, op3: OperatorFunction): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction,\n op9: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction,\n op9: OperatorFunction,\n ...operations: OperatorFunction[]\n ): Observable;\n /* tslint:enable:max-line-length */\n\n /**\n * Used to stitch together functional operators into a chain.\n * @method pipe\n * @return {Observable} the Observable result of all of the operators having\n * been called in the order they were passed in.\n *\n * ## Example\n *\n * ```ts\n * import { interval, filter, map, scan } from 'rxjs';\n *\n * interval(1000)\n * .pipe(\n * filter(x => x % 2 === 0),\n * map(x => x + x),\n * scan((acc, x) => acc + x)\n * )\n * .subscribe(x => console.log(x));\n * ```\n */\n pipe(...operations: OperatorFunction[]): Observable {\n return pipeFromArray(operations)(this);\n }\n\n /* tslint:disable:max-line-length */\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(): Promise;\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. 
Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(PromiseCtor: typeof Promise): Promise;\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(PromiseCtor: PromiseConstructorLike): Promise;\n /* tslint:enable:max-line-length */\n\n /**\n * Subscribe to this Observable and get a Promise resolving on\n * `complete` with the last emission (if any).\n *\n * **WARNING**: Only use this with observables you *know* will complete. If the source\n * observable does not complete, you will end up with a promise that is hung up, and\n * potentially all of the state of an async function hanging out in memory. To avoid\n * this situation, look into adding something like {@link timeout}, {@link take},\n * {@link takeWhile}, or {@link takeUntil} amongst others.\n *\n * @method toPromise\n * @param [promiseCtor] a constructor function used to instantiate\n * the Promise\n * @return A Promise that resolves with the last value emit, or\n * rejects on an error. If there were no emissions, Promise\n * resolves with undefined.\n * @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise\n */\n toPromise(promiseCtor?: PromiseConstructorLike): Promise {\n promiseCtor = getPromiseCtor(promiseCtor);\n\n return new promiseCtor((resolve, reject) => {\n let value: T | undefined;\n this.subscribe(\n (x: T) => (value = x),\n (err: any) => reject(err),\n () => resolve(value)\n );\n }) as Promise;\n }\n}\n\n/**\n * Decides between a passed promise constructor from consuming code,\n * A default configured promise constructor, and the native promise\n * constructor and returns it. If nothing can be found, it will throw\n * an error.\n * @param promiseCtor The optional promise constructor to passed by consuming code\n */\nfunction getPromiseCtor(promiseCtor: PromiseConstructorLike | undefined) {\n return promiseCtor ?? config.Promise ?? Promise;\n}\n\nfunction isObserver(value: any): value is Observer {\n return value && isFunction(value.next) && isFunction(value.error) && isFunction(value.complete);\n}\n\nfunction isSubscriber(value: any): value is Subscriber {\n return (value && value instanceof Subscriber) || (isObserver(value) && isSubscription(value));\n}\n", "import { Observable } from '../Observable';\nimport { Subscriber } from '../Subscriber';\nimport { OperatorFunction } from '../types';\nimport { isFunction } from './isFunction';\n\n/**\n * Used to determine if an object is an Observable with a lift function.\n */\nexport function hasLift(source: any): source is { lift: InstanceType['lift'] } {\n return isFunction(source?.lift);\n}\n\n/**\n * Creates an `OperatorFunction`. 
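\n * As a minimal sketch of how it is used (the `double` operator below is hypothetical, for\n * illustration only, and also relies on the `createOperatorSubscriber` helper):\n *\n * ```ts\n * const double = operate<number, number>((source, subscriber) => {\n * source.subscribe(createOperatorSubscriber(subscriber, (value) => subscriber.next(value * 2)));\n * });\n *\n * // of(1, 2, 3).pipe(double) would emit 2, 4, 6.\n * ```\n *\n * 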
It is used to define operators throughout the library in a concise way.\n * @param init The logic to connect the liftedSource to the subscriber at the moment of subscription.\n */\nexport function operate(\n init: (liftedSource: Observable, subscriber: Subscriber) => (() => void) | void\n): OperatorFunction {\n return (source: Observable) => {\n if (hasLift(source)) {\n return source.lift(function (this: Subscriber, liftedSource: Observable) {\n try {\n return init(liftedSource, this);\n } catch (err) {\n this.error(err);\n }\n });\n }\n throw new TypeError('Unable to lift unknown Observable type');\n };\n}\n", "import { Subscriber } from '../Subscriber';\n\n/**\n * Creates an instance of an `OperatorSubscriber`.\n * @param destination The downstream subscriber.\n * @param onNext Handles next values, only called if this subscriber is not stopped or closed. Any\n * error that occurs in this function is caught and sent to the `error` method of this subscriber.\n * @param onError Handles errors from the subscription; any errors that occur in this handler are caught\n * and sent to the `destination` error handler.\n * @param onComplete Handles completion notification from the subscription. Any errors that occur in\n * this handler are sent to the `destination` error handler.\n * @param onFinalize Additional teardown logic here. This will only be called on teardown if the\n * subscriber itself is not already closed. This is called after all other teardown logic is executed.\n */\nexport function createOperatorSubscriber(\n destination: Subscriber,\n onNext?: (value: T) => void,\n onComplete?: () => void,\n onError?: (err: any) => void,\n onFinalize?: () => void\n): Subscriber {\n return new OperatorSubscriber(destination, onNext, onComplete, onError, onFinalize);\n}\n\n/**\n * A generic helper for allowing operators to be created with a Subscriber and\n * use closures to capture necessary state from the operator function itself.\n */\nexport class OperatorSubscriber extends Subscriber {\n /**\n * Creates an instance of an `OperatorSubscriber`.\n * @param destination The downstream subscriber.\n * @param onNext Handles next values, only called if this subscriber is not stopped or closed. Any\n * error that occurs in this function is caught and sent to the `error` method of this subscriber.\n * @param onError Handles errors from the subscription; any errors that occur in this handler are caught\n * and sent to the `destination` error handler.\n * @param onComplete Handles completion notification from the subscription. Any errors that occur in\n * this handler are sent to the `destination` error handler.\n * @param onFinalize Additional finalization logic here. This will only be called on finalization if the\n * subscriber itself is not already closed. This is called after all other finalization logic is executed.\n * @param shouldUnsubscribe An optional check to see if an unsubscribe call should truly unsubscribe.\n * NOTE: This currently **ONLY** exists to support the strange behavior of {@link groupBy}, where unsubscription\n * from the resulting observable does not actually disconnect from the source if there are active subscriptions\n * to any grouped observable. 
(DO NOT EXPOSE OR USE EXTERNALLY!!!)\n */\n constructor(\n destination: Subscriber,\n onNext?: (value: T) => void,\n onComplete?: () => void,\n onError?: (err: any) => void,\n private onFinalize?: () => void,\n private shouldUnsubscribe?: () => boolean\n ) {\n // It's important - for performance reasons - that all of this class's\n // members are initialized and that they are always initialized in the same\n // order. This will ensure that all OperatorSubscriber instances have the\n // same hidden class in V8. This, in turn, will help keep the number of\n // hidden classes involved in property accesses within the base class as\n // low as possible. If the number of hidden classes involved exceeds four,\n // the property accesses will become megamorphic and performance penalties\n // will be incurred - i.e. inline caches won't be used.\n //\n // The reasons for ensuring all instances have the same hidden class are\n // further discussed in this blog post from Benedikt Meurer:\n // https://benediktmeurer.de/2018/03/23/impact-of-polymorphism-on-component-based-frameworks-like-react/\n super(destination);\n this._next = onNext\n ? function (this: OperatorSubscriber, value: T) {\n try {\n onNext(value);\n } catch (err) {\n destination.error(err);\n }\n }\n : super._next;\n this._error = onError\n ? function (this: OperatorSubscriber, err: any) {\n try {\n onError(err);\n } catch (err) {\n // Send any errors that occur down stream.\n destination.error(err);\n } finally {\n // Ensure finalization.\n this.unsubscribe();\n }\n }\n : super._error;\n this._complete = onComplete\n ? function (this: OperatorSubscriber) {\n try {\n onComplete();\n } catch (err) {\n // Send any errors that occur down stream.\n destination.error(err);\n } finally {\n // Ensure finalization.\n this.unsubscribe();\n }\n }\n : super._complete;\n }\n\n unsubscribe() {\n if (!this.shouldUnsubscribe || this.shouldUnsubscribe()) {\n const { closed } = this;\n super.unsubscribe();\n // Execute additional teardown if we have any and we didn't already do so.\n !closed && this.onFinalize?.();\n }\n }\n}\n", "import { Subscription } from '../Subscription';\n\ninterface AnimationFrameProvider {\n schedule(callback: FrameRequestCallback): Subscription;\n requestAnimationFrame: typeof requestAnimationFrame;\n cancelAnimationFrame: typeof cancelAnimationFrame;\n delegate:\n | {\n requestAnimationFrame: typeof requestAnimationFrame;\n cancelAnimationFrame: typeof cancelAnimationFrame;\n }\n | undefined;\n}\n\nexport const animationFrameProvider: AnimationFrameProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n schedule(callback) {\n let request = requestAnimationFrame;\n let cancel: typeof cancelAnimationFrame | undefined = cancelAnimationFrame;\n const { delegate } = animationFrameProvider;\n if (delegate) {\n request = delegate.requestAnimationFrame;\n cancel = delegate.cancelAnimationFrame;\n }\n const handle = request((timestamp) => {\n // Clear the cancel function. 
The request has been fulfilled, so\n // attempting to cancel the request upon unsubscription would be\n // pointless.\n cancel = undefined;\n callback(timestamp);\n });\n return new Subscription(() => cancel?.(handle));\n },\n requestAnimationFrame(...args) {\n const { delegate } = animationFrameProvider;\n return (delegate?.requestAnimationFrame || requestAnimationFrame)(...args);\n },\n cancelAnimationFrame(...args) {\n const { delegate } = animationFrameProvider;\n return (delegate?.cancelAnimationFrame || cancelAnimationFrame)(...args);\n },\n delegate: undefined,\n};\n", "import { createErrorClass } from './createErrorClass';\n\nexport interface ObjectUnsubscribedError extends Error {}\n\nexport interface ObjectUnsubscribedErrorCtor {\n /**\n * @deprecated Internal implementation detail. Do not construct error instances.\n * Cannot be tagged as internal: https://github.com/ReactiveX/rxjs/issues/6269\n */\n new (): ObjectUnsubscribedError;\n}\n\n/**\n * An error thrown when an action is invalid because the object has been\n * unsubscribed.\n *\n * @see {@link Subject}\n * @see {@link BehaviorSubject}\n *\n * @class ObjectUnsubscribedError\n */\nexport const ObjectUnsubscribedError: ObjectUnsubscribedErrorCtor = createErrorClass(\n (_super) =>\n function ObjectUnsubscribedErrorImpl(this: any) {\n _super(this);\n this.name = 'ObjectUnsubscribedError';\n this.message = 'object unsubscribed';\n }\n);\n", "import { Operator } from './Operator';\nimport { Observable } from './Observable';\nimport { Subscriber } from './Subscriber';\nimport { Subscription, EMPTY_SUBSCRIPTION } from './Subscription';\nimport { Observer, SubscriptionLike, TeardownLogic } from './types';\nimport { ObjectUnsubscribedError } from './util/ObjectUnsubscribedError';\nimport { arrRemove } from './util/arrRemove';\nimport { errorContext } from './util/errorContext';\n\n/**\n * A Subject is a special type of Observable that allows values to be\n * multicasted to many Observers. Subjects are like EventEmitters.\n *\n * Every Subject is an Observable and an Observer. You can subscribe to a\n * Subject, and you can call next to feed values as well as error and complete.\n */\nexport class Subject extends Observable implements SubscriptionLike {\n closed = false;\n\n private currentObservers: Observer[] | null = null;\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n observers: Observer[] = [];\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n isStopped = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n hasError = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n thrownError: any = null;\n\n /**\n * Creates a \"subject\" by basically gluing an observer to an observable.\n *\n * @nocollapse\n * @deprecated Recommended you do not use. Will be removed at some point in the future. Plans for replacement still under discussion.\n */\n static create: (...args: any[]) => any = (destination: Observer, source: Observable): AnonymousSubject => {\n return new AnonymousSubject(destination, source);\n };\n\n constructor() {\n // NOTE: This must be here to obscure Observable's constructor.\n super();\n }\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. 
*/\n lift(operator: Operator): Observable {\n const subject = new AnonymousSubject(this, this);\n subject.operator = operator as any;\n return subject as any;\n }\n\n /** @internal */\n protected _throwIfClosed() {\n if (this.closed) {\n throw new ObjectUnsubscribedError();\n }\n }\n\n next(value: T) {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n if (!this.currentObservers) {\n this.currentObservers = Array.from(this.observers);\n }\n for (const observer of this.currentObservers) {\n observer.next(value);\n }\n }\n });\n }\n\n error(err: any) {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n this.hasError = this.isStopped = true;\n this.thrownError = err;\n const { observers } = this;\n while (observers.length) {\n observers.shift()!.error(err);\n }\n }\n });\n }\n\n complete() {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n this.isStopped = true;\n const { observers } = this;\n while (observers.length) {\n observers.shift()!.complete();\n }\n }\n });\n }\n\n unsubscribe() {\n this.isStopped = this.closed = true;\n this.observers = this.currentObservers = null!;\n }\n\n get observed() {\n return this.observers?.length > 0;\n }\n\n /** @internal */\n protected _trySubscribe(subscriber: Subscriber): TeardownLogic {\n this._throwIfClosed();\n return super._trySubscribe(subscriber);\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n this._throwIfClosed();\n this._checkFinalizedStatuses(subscriber);\n return this._innerSubscribe(subscriber);\n }\n\n /** @internal */\n protected _innerSubscribe(subscriber: Subscriber) {\n const { hasError, isStopped, observers } = this;\n if (hasError || isStopped) {\n return EMPTY_SUBSCRIPTION;\n }\n this.currentObservers = null;\n observers.push(subscriber);\n return new Subscription(() => {\n this.currentObservers = null;\n arrRemove(observers, subscriber);\n });\n }\n\n /** @internal */\n protected _checkFinalizedStatuses(subscriber: Subscriber) {\n const { hasError, thrownError, isStopped } = this;\n if (hasError) {\n subscriber.error(thrownError);\n } else if (isStopped) {\n subscriber.complete();\n }\n }\n\n /**\n * Creates a new Observable with this Subject as the source. You can do this\n * to create custom Observer-side logic of the Subject and conceal it from\n * code that uses the Observable.\n * @return {Observable} Observable that the Subject casts to\n */\n asObservable(): Observable {\n const observable: any = new Observable();\n observable.source = this;\n return observable;\n }\n}\n\n/**\n * @class AnonymousSubject\n */\nexport class AnonymousSubject extends Subject {\n constructor(\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n public destination?: Observer,\n source?: Observable\n ) {\n super();\n this.source = source;\n }\n\n next(value: T) {\n this.destination?.next?.(value);\n }\n\n error(err: any) {\n this.destination?.error?.(err);\n }\n\n complete() {\n this.destination?.complete?.();\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n return this.source?.subscribe(subscriber) ?? 
EMPTY_SUBSCRIPTION;\n }\n}\n", "import { TimestampProvider } from '../types';\n\ninterface DateTimestampProvider extends TimestampProvider {\n delegate: TimestampProvider | undefined;\n}\n\nexport const dateTimestampProvider: DateTimestampProvider = {\n now() {\n // Use the variable rather than `this` so that the function can be called\n // without being bound to the provider.\n return (dateTimestampProvider.delegate || Date).now();\n },\n delegate: undefined,\n};\n", "import { Subject } from './Subject';\nimport { TimestampProvider } from './types';\nimport { Subscriber } from './Subscriber';\nimport { Subscription } from './Subscription';\nimport { dateTimestampProvider } from './scheduler/dateTimestampProvider';\n\n/**\n * A variant of {@link Subject} that \"replays\" old values to new subscribers by emitting them when they first subscribe.\n *\n * `ReplaySubject` has an internal buffer that will store a specified number of values that it has observed. Like `Subject`,\n * `ReplaySubject` \"observes\" values by having them passed to its `next` method. When it observes a value, it will store that\n * value for a time determined by the configuration of the `ReplaySubject`, as passed to its constructor.\n *\n * When a new subscriber subscribes to the `ReplaySubject` instance, it will synchronously emit all values in its buffer in\n * a First-In-First-Out (FIFO) manner. The `ReplaySubject` will also complete, if it has observed completion; and it will\n * error if it has observed an error.\n *\n * There are two main configuration items to be concerned with:\n *\n * 1. `bufferSize` - This will determine how many items are stored in the buffer, defaults to infinite.\n * 2. `windowTime` - The amount of time to hold a value in the buffer before removing it from the buffer.\n *\n * Both configurations may exist simultaneously. So if you would like to buffer a maximum of 3 values, as long as the values\n * are less than 2 seconds old, you could do so with a `new ReplaySubject(3, 2000)`.\n *\n * ### Differences with BehaviorSubject\n *\n * `BehaviorSubject` is similar to `new ReplaySubject(1)`, with a couple of exceptions:\n *\n * 1. `BehaviorSubject` comes \"primed\" with a single value upon construction.\n * 2. `ReplaySubject` will replay values, even after observing an error, where `BehaviorSubject` will not.\n *\n * @see {@link Subject}\n * @see {@link BehaviorSubject}\n * @see {@link shareReplay}\n */\nexport class ReplaySubject extends Subject {\n private _buffer: (T | number)[] = [];\n private _infiniteTimeWindow = true;\n\n /**\n * @param bufferSize The size of the buffer to replay on subscription\n * @param windowTime The amount of time the buffered items will stay buffered\n * @param timestampProvider An object with a `now()` method that provides the current timestamp. 
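\n * A small, test-oriented sketch (the virtual clock here is hypothetical, not something RxJS provides):\n *\n * ```ts\n * let fakeTime = 0;\n * const virtualClock = { now: () => fakeTime };\n * const replayed = new ReplaySubject<number>(3, 2000, virtualClock);\n * ```\n *\n * 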
This is used to\n * calculate the amount of time something has been buffered.\n */\n constructor(\n private _bufferSize = Infinity,\n private _windowTime = Infinity,\n private _timestampProvider: TimestampProvider = dateTimestampProvider\n ) {\n super();\n this._infiniteTimeWindow = _windowTime === Infinity;\n this._bufferSize = Math.max(1, _bufferSize);\n this._windowTime = Math.max(1, _windowTime);\n }\n\n next(value: T): void {\n const { isStopped, _buffer, _infiniteTimeWindow, _timestampProvider, _windowTime } = this;\n if (!isStopped) {\n _buffer.push(value);\n !_infiniteTimeWindow && _buffer.push(_timestampProvider.now() + _windowTime);\n }\n this._trimBuffer();\n super.next(value);\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n this._throwIfClosed();\n this._trimBuffer();\n\n const subscription = this._innerSubscribe(subscriber);\n\n const { _infiniteTimeWindow, _buffer } = this;\n // We use a copy here, so reentrant code does not mutate our array while we're\n // emitting it to a new subscriber.\n const copy = _buffer.slice();\n for (let i = 0; i < copy.length && !subscriber.closed; i += _infiniteTimeWindow ? 1 : 2) {\n subscriber.next(copy[i] as T);\n }\n\n this._checkFinalizedStatuses(subscriber);\n\n return subscription;\n }\n\n private _trimBuffer() {\n const { _bufferSize, _timestampProvider, _buffer, _infiniteTimeWindow } = this;\n // If we don't have an infinite buffer size, and we're over the length,\n // use splice to truncate the old buffer values off. Note that we have to\n // double the size for instances where we're not using an infinite time window\n // because we're storing the values and the timestamps in the same array.\n const adjustedBufferSize = (_infiniteTimeWindow ? 1 : 2) * _bufferSize;\n _bufferSize < Infinity && adjustedBufferSize < _buffer.length && _buffer.splice(0, _buffer.length - adjustedBufferSize);\n\n // Now, if we're not in an infinite time window, remove all values where the time is\n // older than what is allowed.\n if (!_infiniteTimeWindow) {\n const now = _timestampProvider.now();\n let last = 0;\n // Search the array for the first timestamp that isn't expired and\n // truncate the buffer up to that point.\n for (let i = 1; i < _buffer.length && (_buffer[i] as number) <= now; i += 2) {\n last = i;\n }\n last && _buffer.splice(0, last + 1);\n }\n }\n}\n", "import { Scheduler } from '../Scheduler';\nimport { Subscription } from '../Subscription';\nimport { SchedulerAction } from '../types';\n\n/**\n * A unit of work to be executed in a `scheduler`. An action is typically\n * created from within a {@link SchedulerLike} and an RxJS user does not need to concern\n * themselves about creating and manipulating an Action.\n *\n * ```ts\n * class Action extends Subscription {\n * new (scheduler: Scheduler, work: (state?: T) => void);\n * schedule(state?: T, delay: number = 0): Subscription;\n * }\n * ```\n *\n * @class Action\n */\nexport class Action extends Subscription {\n constructor(scheduler: Scheduler, work: (this: SchedulerAction, state?: T) => void) {\n super();\n }\n /**\n * Schedules this action on its parent {@link SchedulerLike} for execution. May be passed\n * some context object, `state`. 
May happen at some point in the future,\n * according to the `delay` parameter, if specified.\n * @param {T} [state] Some contextual data that the `work` function uses when\n * called by the Scheduler.\n * @param {number} [delay] Time to wait before executing the work, where the\n * time unit is implicit and defined by the Scheduler.\n * @return {void}\n */\n public schedule(state?: T, delay: number = 0): Subscription {\n return this;\n }\n}\n", "import type { TimerHandle } from './timerHandle';\ntype SetIntervalFunction = (handler: () => void, timeout?: number, ...args: any[]) => TimerHandle;\ntype ClearIntervalFunction = (handle: TimerHandle) => void;\n\ninterface IntervalProvider {\n setInterval: SetIntervalFunction;\n clearInterval: ClearIntervalFunction;\n delegate:\n | {\n setInterval: SetIntervalFunction;\n clearInterval: ClearIntervalFunction;\n }\n | undefined;\n}\n\nexport const intervalProvider: IntervalProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n setInterval(handler: () => void, timeout?: number, ...args) {\n const { delegate } = intervalProvider;\n if (delegate?.setInterval) {\n return delegate.setInterval(handler, timeout, ...args);\n }\n return setInterval(handler, timeout, ...args);\n },\n clearInterval(handle) {\n const { delegate } = intervalProvider;\n return (delegate?.clearInterval || clearInterval)(handle as any);\n },\n delegate: undefined,\n};\n", "import { Action } from './Action';\nimport { SchedulerAction } from '../types';\nimport { Subscription } from '../Subscription';\nimport { AsyncScheduler } from './AsyncScheduler';\nimport { intervalProvider } from './intervalProvider';\nimport { arrRemove } from '../util/arrRemove';\nimport { TimerHandle } from './timerHandle';\n\nexport class AsyncAction extends Action {\n public id: TimerHandle | undefined;\n public state?: T;\n // @ts-ignore: Property has no initializer and is not definitely assigned\n public delay: number;\n protected pending: boolean = false;\n\n constructor(protected scheduler: AsyncScheduler, protected work: (this: SchedulerAction, state?: T) => void) {\n super(scheduler, work);\n }\n\n public schedule(state?: T, delay: number = 0): Subscription {\n if (this.closed) {\n return this;\n }\n\n // Always replace the current state with the new state.\n this.state = state;\n\n const id = this.id;\n const scheduler = this.scheduler;\n\n //\n // Important implementation note:\n //\n // Actions only execute once by default, unless rescheduled from within the\n // scheduled callback. This allows us to implement single and repeat\n // actions via the same code path, without adding API surface area, as well\n // as mimic traditional recursion but across asynchronous boundaries.\n //\n // However, JS runtimes and timers distinguish between intervals achieved by\n // serial `setTimeout` calls vs. a single `setInterval` call. An interval of\n // serial `setTimeout` calls can be individually delayed, which delays\n // scheduling the next `setTimeout`, and so on. `setInterval` attempts to\n // guarantee the interval callback will be invoked more precisely to the\n // interval period, regardless of load.\n //\n // Therefore, we use `setInterval` to schedule single and repeat actions.\n // If the action reschedules itself with the same delay, the interval is not\n // canceled. 
If the action doesn't reschedule, or reschedules with a\n // different delay, the interval will be canceled after scheduled callback\n // execution.\n //\n if (id != null) {\n this.id = this.recycleAsyncId(scheduler, id, delay);\n }\n\n // Set the pending flag indicating that this action has been scheduled, or\n // has recursively rescheduled itself.\n this.pending = true;\n\n this.delay = delay;\n // If this action already has an async id, don't request a new one.\n this.id = this.id ?? this.requestAsyncId(scheduler, this.id, delay);\n\n return this;\n }\n\n protected requestAsyncId(scheduler: AsyncScheduler, _id?: TimerHandle, delay: number = 0): TimerHandle {\n return intervalProvider.setInterval(scheduler.flush.bind(scheduler, this), delay);\n }\n\n protected recycleAsyncId(_scheduler: AsyncScheduler, id?: TimerHandle, delay: number | null = 0): TimerHandle | undefined {\n // If this action is rescheduled with the same delay time, don't clear the interval id.\n if (delay != null && this.delay === delay && this.pending === false) {\n return id;\n }\n // Otherwise, if the action's delay time is different from the current delay,\n // or the action has been rescheduled before it's executed, clear the interval id.\n if (id != null) {\n intervalProvider.clearInterval(id);\n }\n\n return undefined;\n }\n\n /**\n * Immediately executes this action and the `work` it contains.\n * @return {any}\n */\n public execute(state: T, delay: number): any {\n if (this.closed) {\n return new Error('executing a cancelled action');\n }\n\n this.pending = false;\n const error = this._execute(state, delay);\n if (error) {\n return error;\n } else if (this.pending === false && this.id != null) {\n // Dequeue if the action didn't reschedule itself. Don't call\n // unsubscribe(), because the action could reschedule later.\n // For example:\n // ```\n // scheduler.schedule(function doWork(counter) {\n // /* ... I'm a busy worker bee ... */\n // var originalAction = this;\n // /* wait 100ms before rescheduling the action */\n // setTimeout(function () {\n // originalAction.schedule(counter + 1);\n // }, 100);\n // }, 1000);\n // ```\n this.id = this.recycleAsyncId(this.scheduler, this.id, null);\n }\n }\n\n protected _execute(state: T, _delay: number): any {\n let errored: boolean = false;\n let errorValue: any;\n try {\n this.work(state);\n } catch (e) {\n errored = true;\n // HACK: Since code elsewhere is relying on the \"truthiness\" of the\n // return here, we can't have it return \"\" or 0 or false.\n // TODO: Clean this up when we refactor schedulers mid-version-8 or so.\n errorValue = e ? e : new Error('Scheduled action threw falsy error');\n }\n if (errored) {\n this.unsubscribe();\n return errorValue;\n }\n }\n\n unsubscribe() {\n if (!this.closed) {\n const { id, scheduler } = this;\n const { actions } = scheduler;\n\n this.work = this.state = this.scheduler = null!;\n this.pending = false;\n\n arrRemove(actions, this);\n if (id != null) {\n this.id = this.recycleAsyncId(scheduler, id, null);\n }\n\n this.delay = null!;\n super.unsubscribe();\n }\n }\n}\n", "import { Action } from './scheduler/Action';\nimport { Subscription } from './Subscription';\nimport { SchedulerLike, SchedulerAction } from './types';\nimport { dateTimestampProvider } from './scheduler/dateTimestampProvider';\n\n/**\n * An execution context and a data structure to order tasks and schedule their\n * execution. 
Provides a notion of (potentially virtual) time, through the\n * `now()` getter method.\n *\n * Each unit of work in a Scheduler is called an `Action`.\n *\n * ```ts\n * class Scheduler {\n * now(): number;\n * schedule(work, delay?, state?): Subscription;\n * }\n * ```\n *\n * @class Scheduler\n * @deprecated Scheduler is an internal implementation detail of RxJS, and\n * should not be used directly. Rather, create your own class and implement\n * {@link SchedulerLike}. Will be made internal in v8.\n */\nexport class Scheduler implements SchedulerLike {\n public static now: () => number = dateTimestampProvider.now;\n\n constructor(private schedulerActionCtor: typeof Action, now: () => number = Scheduler.now) {\n this.now = now;\n }\n\n /**\n * A getter method that returns a number representing the current time\n * (at the time this function was called) according to the scheduler's own\n * internal clock.\n * @return {number} A number that represents the current time. May or may not\n * have a relation to wall-clock time. May or may not refer to a time unit\n * (e.g. milliseconds).\n */\n public now: () => number;\n\n /**\n * Schedules a function, `work`, for execution. May happen at some point in\n * the future, according to the `delay` parameter, if specified. May be passed\n * some context object, `state`, which will be passed to the `work` function.\n *\n * The given arguments will be processed and stored as an Action object in a\n * queue of actions.\n *\n * @param {function(state: ?T): ?Subscription} work A function representing a\n * task, or some unit of work to be executed by the Scheduler.\n * @param {number} [delay] Time to wait before executing the work, where the\n * time unit is implicit and defined by the Scheduler itself.\n * @param {T} [state] Some contextual data that the `work` function uses when\n * called by the Scheduler.\n * @return {Subscription} A subscription in order to be able to unsubscribe\n * from the scheduled work.\n */\n public schedule(work: (this: SchedulerAction, state?: T) => void, delay: number = 0, state?: T): Subscription {\n return new this.schedulerActionCtor(this, work).schedule(state, delay);\n }\n}\n", "import { Scheduler } from '../Scheduler';\nimport { Action } from './Action';\nimport { AsyncAction } from './AsyncAction';\nimport { TimerHandle } from './timerHandle';\n\nexport class AsyncScheduler extends Scheduler {\n public actions: Array> = [];\n /**\n * A flag to indicate whether the Scheduler is currently executing a batch of\n * queued actions.\n * @type {boolean}\n * @internal\n */\n public _active: boolean = false;\n /**\n * An internal ID used to track the latest asynchronous task such as those\n * coming from `setTimeout`, `setInterval`, `requestAnimationFrame`, and\n * others.\n * @type {any}\n * @internal\n */\n public _scheduled: TimerHandle | undefined;\n\n constructor(SchedulerAction: typeof Action, now: () => number = Scheduler.now) {\n super(SchedulerAction, now);\n }\n\n public flush(action: AsyncAction): void {\n const { actions } = this;\n\n if (this._active) {\n actions.push(action);\n return;\n }\n\n let error: any;\n this._active = true;\n\n do {\n if ((error = action.execute(action.state, action.delay))) {\n break;\n }\n } while ((action = actions.shift()!)); // exhaust the scheduler queue\n\n this._active = false;\n\n if (error) {\n while ((action = actions.shift()!)) {\n action.unsubscribe();\n }\n throw error;\n }\n }\n}\n", "import { AsyncAction } from './AsyncAction';\nimport { AsyncScheduler } from 
'./AsyncScheduler';\n\n/**\n *\n * Async Scheduler\n *\n * Schedules a task as if you used `setTimeout(task, duration)`.\n *\n * The `async` scheduler schedules tasks asynchronously, by putting them on the JavaScript\n * event loop queue. It is best used to delay tasks in time or to schedule tasks repeating\n * at intervals.\n *\n * If you just want to \"defer\" a task, that is, to perform it right after the currently\n * executing synchronous code ends (commonly achieved by `setTimeout(deferredTask, 0)`),\n * the {@link asapScheduler} scheduler is a better choice.\n *\n * ## Examples\n * Use the async scheduler to delay a task\n * ```ts\n * import { asyncScheduler } from 'rxjs';\n *\n * const task = () => console.log('it works!');\n *\n * asyncScheduler.schedule(task, 2000);\n *\n * // After 2 seconds logs:\n * // \"it works!\"\n * ```\n *\n * Use the async scheduler to repeat a task at intervals\n * ```ts\n * import { asyncScheduler } from 'rxjs';\n *\n * function task(state) {\n * console.log(state);\n * this.schedule(state + 1, 1000); // `this` references currently executing Action,\n * // which we reschedule with new state and delay\n * }\n *\n * asyncScheduler.schedule(task, 3000, 0);\n *\n * // Logs:\n * // 0 after 3s\n * // 1 after 4s\n * // 2 after 5s\n * // 3 after 6s\n * ```\n */\n\nexport const asyncScheduler = new AsyncScheduler(AsyncAction);\n\n/**\n * @deprecated Renamed to {@link asyncScheduler}. Will be removed in v8.\n */\nexport const async = asyncScheduler;\n", "import { AsyncAction } from './AsyncAction';\nimport { AnimationFrameScheduler } from './AnimationFrameScheduler';\nimport { SchedulerAction } from '../types';\nimport { animationFrameProvider } from './animationFrameProvider';\nimport { TimerHandle } from './timerHandle';\n\nexport class AnimationFrameAction extends AsyncAction {\n constructor(protected scheduler: AnimationFrameScheduler, protected work: (this: SchedulerAction, state?: T) => void) {\n super(scheduler, work);\n }\n\n protected requestAsyncId(scheduler: AnimationFrameScheduler, id?: TimerHandle, delay: number = 0): TimerHandle {\n // If delay is greater than 0, request as an async action.\n if (delay !== null && delay > 0) {\n return super.requestAsyncId(scheduler, id, delay);\n }\n // Push the action to the end of the scheduler queue.\n scheduler.actions.push(this);\n // If an animation frame has already been requested, don't request another\n // one. If an animation frame hasn't been requested yet, request one. Return\n // the current animation frame request id.\n return scheduler._scheduled || (scheduler._scheduled = animationFrameProvider.requestAnimationFrame(() => scheduler.flush(undefined)));\n }\n\n protected recycleAsyncId(scheduler: AnimationFrameScheduler, id?: TimerHandle, delay: number = 0): TimerHandle | undefined {\n // If delay exists and is greater than 0, or if the delay is null (the\n // action wasn't rescheduled) but was originally scheduled as an async\n // action, then recycle as an async action.\n if (delay != null ? 
delay > 0 : this.delay > 0) {\n return super.recycleAsyncId(scheduler, id, delay);\n }\n // If the scheduler queue has no remaining actions with the same async id,\n // cancel the requested animation frame and set the scheduled flag to\n // undefined so the next AnimationFrameAction will request its own.\n const { actions } = scheduler;\n if (id != null && actions[actions.length - 1]?.id !== id) {\n animationFrameProvider.cancelAnimationFrame(id as number);\n scheduler._scheduled = undefined;\n }\n // Return undefined so the action knows to request a new async id if it's rescheduled.\n return undefined;\n }\n}\n", "import { AsyncAction } from './AsyncAction';\nimport { AsyncScheduler } from './AsyncScheduler';\n\nexport class AnimationFrameScheduler extends AsyncScheduler {\n public flush(action?: AsyncAction): void {\n this._active = true;\n // The async id that effects a call to flush is stored in _scheduled.\n // Before executing an action, it's necessary to check the action's async\n // id to determine whether it's supposed to be executed in the current\n // flush.\n // Previous implementations of this method used a count to determine this,\n // but that was unsound, as actions that are unsubscribed - i.e. cancelled -\n // are removed from the actions array and that can shift actions that are\n // scheduled to be executed in a subsequent flush into positions at which\n // they are executed within the current flush.\n const flushId = this._scheduled;\n this._scheduled = undefined;\n\n const { actions } = this;\n let error: any;\n action = action || actions.shift()!;\n\n do {\n if ((error = action.execute(action.state, action.delay))) {\n break;\n }\n } while ((action = actions[0]) && action.id === flushId && actions.shift());\n\n this._active = false;\n\n if (error) {\n while ((action = actions[0]) && action.id === flushId && actions.shift()) {\n action.unsubscribe();\n }\n throw error;\n }\n }\n}\n", "import { AnimationFrameAction } from './AnimationFrameAction';\nimport { AnimationFrameScheduler } from './AnimationFrameScheduler';\n\n/**\n *\n * Animation Frame Scheduler\n *\n * Perform task when `window.requestAnimationFrame` would fire\n *\n * When `animationFrame` scheduler is used with delay, it will fall back to {@link asyncScheduler} scheduler\n * behaviour.\n *\n * Without delay, `animationFrame` scheduler can be used to create smooth browser animations.\n * It makes sure scheduled task will happen just before next browser content repaint,\n * thus performing animations as efficiently as possible.\n *\n * ## Example\n * Schedule div height animation\n * ```ts\n * // html:
\n * import { animationFrameScheduler } from 'rxjs';\n *\n * const div = document.querySelector('div');\n *\n * animationFrameScheduler.schedule(function(height) {\n * div.style.height = height + \"px\";\n *\n * this.schedule(height + 1); // `this` references currently executing Action,\n * // which we reschedule with new state\n * }, 0, 0);\n *\n * // You will see a div element growing in height\n * ```\n */\n\nexport const animationFrameScheduler = new AnimationFrameScheduler(AnimationFrameAction);\n\n/**\n * @deprecated Renamed to {@link animationFrameScheduler}. Will be removed in v8.\n */\nexport const animationFrame = animationFrameScheduler;\n", "import { Observable } from '../Observable';\nimport { SchedulerLike } from '../types';\n\n/**\n * A simple Observable that emits no items to the Observer and immediately\n * emits a complete notification.\n *\n * Just emits 'complete', and nothing else.\n *\n * ![](empty.png)\n *\n * A simple Observable that only emits the complete notification. It can be used\n * for composing with other Observables, such as in a {@link mergeMap}.\n *\n * ## Examples\n *\n * Log complete notification\n *\n * ```ts\n * import { EMPTY } from 'rxjs';\n *\n * EMPTY.subscribe({\n * next: () => console.log('Next'),\n * complete: () => console.log('Complete!')\n * });\n *\n * // Outputs\n * // Complete!\n * ```\n *\n * Emit the number 7, then complete\n *\n * ```ts\n * import { EMPTY, startWith } from 'rxjs';\n *\n * const result = EMPTY.pipe(startWith(7));\n * result.subscribe(x => console.log(x));\n *\n * // Outputs\n * // 7\n * ```\n *\n * Map and flatten only odd numbers to the sequence `'a'`, `'b'`, `'c'`\n *\n * ```ts\n * import { interval, mergeMap, of, EMPTY } from 'rxjs';\n *\n * const interval$ = interval(1000);\n * const result = interval$.pipe(\n * mergeMap(x => x % 2 === 1 ? of('a', 'b', 'c') : EMPTY),\n * );\n * result.subscribe(x => console.log(x));\n *\n * // Results in the following to the console:\n * // x is equal to the count on the interval, e.g. (0, 1, 2, 3, ...)\n * // x will occur every 1000ms\n * // if x % 2 is equal to 1, print a, b, c (each on its own)\n * // if x % 2 is not equal to 1, nothing will be output\n * ```\n *\n * @see {@link Observable}\n * @see {@link NEVER}\n * @see {@link of}\n * @see {@link throwError}\n */\nexport const EMPTY = new Observable((subscriber) => subscriber.complete());\n\n/**\n * @param scheduler A {@link SchedulerLike} to use for scheduling\n * the emission of the complete notification.\n * @deprecated Replaced with the {@link EMPTY} constant or {@link scheduled} (e.g. `scheduled([], scheduler)`). Will be removed in v8.\n */\nexport function empty(scheduler?: SchedulerLike) {\n return scheduler ? emptyScheduled(scheduler) : EMPTY;\n}\n\nfunction emptyScheduled(scheduler: SchedulerLike) {\n return new Observable((subscriber) => scheduler.schedule(() => subscriber.complete()));\n}\n", "import { SchedulerLike } from '../types';\nimport { isFunction } from './isFunction';\n\nexport function isScheduler(value: any): value is SchedulerLike {\n return value && isFunction(value.schedule);\n}\n", "import { SchedulerLike } from '../types';\nimport { isFunction } from './isFunction';\nimport { isScheduler } from './isScheduler';\n\nfunction last(arr: T[]): T | undefined {\n return arr[arr.length - 1];\n}\n\nexport function popResultSelector(args: any[]): ((...args: unknown[]) => unknown) | undefined {\n return isFunction(last(args)) ? 
args.pop() : undefined;\n}\n\nexport function popScheduler(args: any[]): SchedulerLike | undefined {\n return isScheduler(last(args)) ? args.pop() : undefined;\n}\n\nexport function popNumber(args: any[], defaultValue: number): number {\n return typeof last(args) === 'number' ? args.pop()! : defaultValue;\n}\n", "export const isArrayLike = ((x: any): x is ArrayLike => x && typeof x.length === 'number' && typeof x !== 'function');", "import { isFunction } from \"./isFunction\";\n\n/**\n * Tests to see if the object is \"thennable\".\n * @param value the object to test\n */\nexport function isPromise(value: any): value is PromiseLike {\n return isFunction(value?.then);\n}\n", "import { InteropObservable } from '../types';\nimport { observable as Symbol_observable } from '../symbol/observable';\nimport { isFunction } from './isFunction';\n\n/** Identifies an input as being Observable (but not necessary an Rx Observable) */\nexport function isInteropObservable(input: any): input is InteropObservable {\n return isFunction(input[Symbol_observable]);\n}\n", "import { isFunction } from './isFunction';\n\nexport function isAsyncIterable(obj: any): obj is AsyncIterable {\n return Symbol.asyncIterator && isFunction(obj?.[Symbol.asyncIterator]);\n}\n", "/**\n * Creates the TypeError to throw if an invalid object is passed to `from` or `scheduled`.\n * @param input The object that was passed.\n */\nexport function createInvalidObservableTypeError(input: any) {\n // TODO: We should create error codes that can be looked up, so this can be less verbose.\n return new TypeError(\n `You provided ${\n input !== null && typeof input === 'object' ? 'an invalid object' : `'${input}'`\n } where a stream was expected. You can provide an Observable, Promise, ReadableStream, Array, AsyncIterable, or Iterable.`\n );\n}\n", "export function getSymbolIterator(): symbol {\n if (typeof Symbol !== 'function' || !Symbol.iterator) {\n return '@@iterator' as any;\n }\n\n return Symbol.iterator;\n}\n\nexport const iterator = getSymbolIterator();\n", "import { iterator as Symbol_iterator } from '../symbol/iterator';\nimport { isFunction } from './isFunction';\n\n/** Identifies an input as being an Iterable */\nexport function isIterable(input: any): input is Iterable {\n return isFunction(input?.[Symbol_iterator]);\n}\n", "import { ReadableStreamLike } from '../types';\nimport { isFunction } from './isFunction';\n\nexport async function* readableStreamLikeToAsyncGenerator(readableStream: ReadableStreamLike): AsyncGenerator {\n const reader = readableStream.getReader();\n try {\n while (true) {\n const { value, done } = await reader.read();\n if (done) {\n return;\n }\n yield value!;\n }\n } finally {\n reader.releaseLock();\n }\n}\n\nexport function isReadableStreamLike(obj: any): obj is ReadableStreamLike {\n // We don't want to use instanceof checks because they would return\n // false for instances from another Realm, like an + +

MLOps with Red Hat OpenShift

+

Red Hat OpenShift includes key capabilities to enable machine learning operations (MLOps) in a consistent way across datacenters, public cloud computing, and edge computing.

+

By applying DevOps and GitOps principles, organizations automate and simplify the iterative process of integrating ML models into software development processes, production rollout, monitoring, retraining, and redeployment for continued prediction accuracy.

+
+

What is an ML lifecycle?

+

A multi-phase process that harnesses the power of large volumes and varieties of data, abundant compute, and open source machine learning tools to build intelligent applications.

+

At a high level, there are four steps in the lifecycle:

+
    +
  1. Gather and prepare data to make sure the input data is complete and of high quality
  2. +
  3. Develop the model, including training, testing, and selecting the model with the highest prediction accuracy
  4. +
  5. Integrate the models into the application development process, and serve them for inferencing
  6. +
  7. Monitor and manage the models, to measure business performance and address potential production data drift
  8. +
+

On this site, you will find recipes, patterns, and demos for the various AI/ML tools and applications used throughout those steps.

+

Why use containers and Kubernetes for your machine learning initiatives?

+

Containers and Kubernetes are key to accelerating the ML lifecycle, as these technologies provide data scientists with the much-needed agility, flexibility, portability, and scalability to train, test, and deploy ML models.

+

Red Hat® OpenShift® is the industry's leading container and Kubernetes hybrid cloud platform. It provides all of these benefits, and, through its integrated DevOps capabilities (e.g. OpenShift Pipelines, OpenShift GitOps, and Red Hat Quay) and its integration with hardware accelerators, it enables better collaboration between data scientists and software developers, and accelerates the rollout of intelligent applications across the hybrid cloud (data center, edge, and public clouds).

+ + + + + + + + + + + + + + + + + + + + + +
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/getting-started/why-this-site/index.html b/getting-started/why-this-site/index.html new file mode 100644 index 00000000..72987ae7 --- /dev/null +++ b/getting-started/why-this-site/index.html @@ -0,0 +1,1584 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Why this site? - AI on OpenShift + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

Why this site?

+

As data scientists and engineers, we can easily find detailed documentation on the tools and libraries we use. But what about end-to-end data pipeline solutions that involve multiple products? Unfortunately, those resources are harder to come by: open source communities often don't have the resources to create and maintain them. But don't worry, that's where this website comes in!

+

We've created a one-stop shop for data practitioners to find recipes, reusable patterns, and actionable demos for building AI/ML solutions on OpenShift. And the best part? It's a community-driven resource site! So feel free to ask questions, make feature requests, file issues, and even submit PRs to help us improve the content. Together, we can make data pipeline solutions easier to find and implement.

+

Happy Robot

+ + + + + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/google32e0ce0d0ebf499d.html b/google32e0ce0d0ebf499d.html new file mode 100644 index 00000000..1342c299 --- /dev/null +++ b/google32e0ce0d0ebf499d.html @@ -0,0 +1 @@ +google-site-verification: google32e0ce0d0ebf499d.html \ No newline at end of file diff --git a/index.html b/index.html new file mode 100644 index 00000000..b0482656 --- /dev/null +++ b/index.html @@ -0,0 +1,1856 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + AI on OpenShift - AI on OpenShift + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + + + + + +
+
+
+
+ +
+
+ +

The one-stop shop for Data Science and Data Engineering on OpenShift!

+ + Get started + +
+
+
+
+ + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ +
+
+ + +
+ + + +
+ +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/index.md.bck b/index.md.bck new file mode 100644 index 00000000..93ba37b0 --- /dev/null +++ b/index.md.bck @@ -0,0 +1,10 @@ +--- +hide: + - navigation +--- + +Welcome to your one-stop shop for installation recipes, patterns, demos for various AI/ML tools and applications used in Data Science and Data Engineering projects running on OpenShift! + +More than often, Data Scientists and Data Engineers get wonderful documentation for a specific application, library or product. But they do not often get actionable documentation and sample code on how to compose end-to-end solutions from multiple products. + +The AI on OpenShift site aims at jumpstarting digital transformation productivity on the Red Hat OpenShift platform by combining practical, reusable patterns into use cases with end-to-end real world illustrations. diff --git a/odh-rhods/cm-mlflow-enable.yaml b/odh-rhods/cm-mlflow-enable.yaml new file mode 100644 index 00000000..4f178327 --- /dev/null +++ b/odh-rhods/cm-mlflow-enable.yaml @@ -0,0 +1,7 @@ +kind: ConfigMap +apiVersion: v1 +metadata: + name: mlflow-enable + namespace: redhat-ods-applications +data: + validation_result: 'true' \ No newline at end of file diff --git a/odh-rhods/configuration/index.html b/odh-rhods/configuration/index.html new file mode 100644 index 00000000..6041f5bf --- /dev/null +++ b/odh-rhods/configuration/index.html @@ -0,0 +1,1820 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Dashboard configuration - AI on OpenShift + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

ODH and RHODS Configuration

+

Standard configuration

+

As an administrator of ODH/RHODS, you have access to different settings through the Settings menu on the dashboard:

+

Settings

+

Custom notebook images

+

This is where you can import other notebook images. You will find resources on available custom images and learn how to create your own in the Custom Notebooks section.

+

To import a new image, follow these steps (a sketch of the object created behind the scenes follows at the end of this section).

+
    +
  • Click on Import image.
  • +
+

Import

+
    +
  • Enter the full address of your container image, set a name (this is what will appear in the launcher), and a description.
  • +
+

Import

+
    +
  • In the bottom section, add information regarding the software and packages present in this image. This is purely informative.
  • +
+

Import

+
    +
  • Your image is now listed and enabled. You can hide it without removing it by simply disabling it.
  • +
+

Images

+
    +
  • It is now available in the launcher, as well as in the Data Science Projects.
  • +
+

Image Launcher

+
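Under the hood, each imported image is stored as an ImageStream in the redhat-ods-applications namespace (for RHODS). The sketch below is indicative only: the name and image address are hypothetical placeholders, and the exact labels and annotations may vary between versions, so inspect an imported image on your own cluster to confirm.

apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: my-custom-notebook            # hypothetical name
  namespace: redhat-ods-applications  # use the ODH applications namespace for ODH
  labels:
    opendatahub.io/notebook-image: "true"   # marks the ImageStream as a notebook image
spec:
  lookupPolicy:
    local: true
  tags:
    - name: latest
      from:
        kind: DockerImage
        name: quay.io/example/my-custom-notebook:latest   # hypothetical image address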

Cluster settings

+

In this panel, you can adjust:

+
    +
  • The default size of the volumes created for new users.
  • +
  • Whether you want to stop idle notebooks and, if so, after how much time.
  • +
+
+

Note

+

This feature currently looks at running Jupyter kernels, like a Python notebook. If you are only using a Terminal, or another IDE window like VSCode or RStudio from the custom images, this activity is not detected and your Pod can be stopped without notice after the set delay.

+
+
    +
  • Whether you allow usage data to be collected and reported.
  • +
  • Whether you want to add a toleration to the notebook pods to allow them to be scheduled on tainted nodes. That feature is really useful if you want to dedicate specific worker nodes to running notebooks: tainting them will prevent other workloads from running on them. Of course, the notebook pods then need the matching toleration, which is what this setting adds (see the sketch after this section).
  • +
+

Cluster settings

+
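As an illustration, assuming you tainted your dedicated worker nodes with a hypothetical key named notebooks-only (e.g. oc adm taint nodes <node> notebooks-only=true:NoSchedule) and configured that key in the cluster settings, the notebook pods should end up carrying a toleration similar to this sketch:

tolerations:
  - key: notebooks-only    # the key configured in the Cluster settings panel
    operator: Exists
    effect: NoSchedule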

User management

+

In this panel, you can edit who has access to RHODS by defining the "Data Science user groups", and who has access to the Settings by defining the "Data Science administrator groups".

+

User management

+

Advanced configuration

+

Dashboard configuration

+

The main configuration of RHODS or ODH is done through a Custom Resource (CR) of type odhdashboardconfigs.opendatahub.io.

+
    +
  • To get access to it, from your OpenShift console, navigate to Home->API Explorer, and filter for OdhDashboardConfig:
  • +
+

API explorer

+
    +
  • Click on OdhDashboardConfig and in the Instances tab, click on odh-dashboard-config:
  • +
+

Instance

+
    +
  • You can now view and edit the YAML file to modify the configuration:
  • +
+

Edit YAML

+

In the spec section, the following items are of interest (a trimmed sketch follows the list):

+
    +
  • dashboardConfig: The different toggles will allow you to activate/deactivate certain features. For example, you may want to hide Model Serving for your users or prevent them from importing custom images.
  • +
  • notebookSizes: This is where you can fully customize the sizes of the notebooks. You can modify the resources and add or remove sizes from the default configuration as needed.
  • +
  • modelServerSizes: This setting operates on the same concept as the previous setting but for model servers.
  • +
+
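Here is a trimmed, indicative sketch of such a resource. The toggle names and size values below are illustrative assumptions, so check the schema of your own odh-dashboard-config instance in the API Explorer before editing:

apiVersion: opendatahub.io/v1alpha
kind: OdhDashboardConfig
metadata:
  name: odh-dashboard-config
  namespace: redhat-ods-applications
spec:
  dashboardConfig:
    disableModelServing: false      # example toggle: set to true to hide Model Serving
    disableBYONImageStream: false   # example toggle: set to true to prevent custom image imports
  notebookSizes:
    - name: Small
      resources:
        requests:
          cpu: '1'
          memory: 8Gi
        limits:
          cpu: '2'
          memory: 8Gi
  modelServerSizes:
    - name: Small
      resources:
        requests:
          cpu: '1'
          memory: 4Gi
        limits:
          cpu: '2'
          memory: 8Gi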

Adding a custom application

+

Let's say you have installed another application in your cluster and want to make it available through the dashboard. That's easy! A tile is, in fact, represented by a custom resource (CR) of type OdhApplication.

+

In this example, we will add a tile to access the MLFlow UI (see the MLFlow installation instructions to test it).

+
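For reference, the fields that matter most in such a tile definition, trimmed from the mlflow-tile.yaml example used below, look like this:

apiVersion: dashboard.opendatahub.io/v1
kind: OdhApplication
metadata:
  name: mlflow
  namespace: redhat-ods-applications
spec:
  displayName: MLflow
  provider: MLflow
  support: third party support
  route: mlflow-server       # name of the Route CR exposing the UI
  routeNamespace: mlflow     # namespace where that Route lives
  enable:
    validationConfigMap: mlflow-enable   # ConfigMap used to mark the tile as enabled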
    +
  • The file mlflow-tile.yaml provides you with an example of how to create the tile.
  • +
  • Edit this file to set the route (the name of the Route CR) and routeNamespace parameters to where the UI is accessible. In this example, they are mlflow-server (route name) and mlflow (namespace). Apply this file to create the resource.
  • +
  • Wait 1-2 minutes for the change to take effect. Your tile is now available in the Explore view (bottom left):
  • +
+

Explore tile

+
    +
  • However, it is not yet enabled. To enable this tile, click on it in the Explore view, then click the "Enable" button at the top of the description. You can also create a ConfigMap from the file cm-mlflow-enable.yaml.
  • +
  • Wait another 1-2 minutes, and your tile is now ready to use in the Enabled view:
  • +
+

Enabled tile

+ + + + + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/odh-rhods/custom-notebooks/index.html b/odh-rhods/custom-notebooks/index.html new file mode 100644 index 00000000..d588647d --- /dev/null +++ b/odh-rhods/custom-notebooks/index.html @@ -0,0 +1,1868 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Custom notebooks - AI on OpenShift + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

Custom Notebooks

+

Custom notebook images are useful if you want to add libraries that you often use, or that you require at a specific version different from the one provided in the base images. They are also useful if you need OS packages or applications that you cannot install on the fly in your running environment.

+

Image source and Pre-built images

+

In the opendatahub-io-contrib/workbench-images repository, you will find the source code as well as pre-built images for a lot of use cases. A few of the available images are:

+
    +
  • Base and CUDA-enabled images for different "lines" of OS: UBI8, UBI9, and CentOS Stream 9.
  • +
  • Jupyter images enhanced with:
      +
    • specific libraries like OptaPy or MONAI,
    • +
    • integrated applications like Spark,
    • +
    • other IDEs like VSCode or RStudio
    • +
    +
  • +
  • VSCode
  • +
  • RStudio
  • +
+

All those images are constantly and automatically updated and rebuilt to include the latest patches and fixes, and new releases are published regularly to provide new versions of the libraries and applications.

+

Building your own images

+

In the repository above, you will find many examples from the source code to help you understand how to create your own image. Here are a few rules, tips and examples to help you.

+

Rules

+
    +
  • On OpenShift, every container in a standard namespace (unless you modify the security settings) runs as a user with a random user id (uid) and the group id (gid) 0. Therefore, all the folders you want to write to, and all the files you want to (temporarily) modify in your image, must be accessible by this user. The best practice is to set the ownership to 1001:0 (user "default", group "0").
  • +
  • If you can't or don't want to do that, another solution is to set permissions that allow writing by any user, like 775.
  • +
  • When launching a notebook from Applications->Enabled, the "personal" volume of a user is mounted at /opt/app-root/src. This is not configurable, so make sure to build your images with this default location for the data that you want persisted.
  • +
+

How-tos

+

Install Python packages

+
    +
  • Start from a base image of your choice. Normally it's already running under user 1001, so no need to change it.
  • +
  • Copy your Pipfile.lock or your requirements.txt
  • +
  • Install your packages
  • +
+

Example:

+
FROM BASE_IMAGE
+
+# Copying custom packages
+COPY Pipfile.lock ./
+
+# Install packages and cleanup
+# (all commands are chained to minimize layer size)
+RUN echo "Installing softwares and packages" && \
+    # Install Python packages \
+    micropipenv install && \
+    rm -f ./Pipfile.lock
+    # Fix permissions to support pip in Openshift environments \
+    chmod -R g+w /opt/app-root/lib/python3.9/site-packages && \
+    fix-permissions /opt/app-root -P
+
+WORKDIR /opt/app-root/src
+
+ENTRYPOINT ["start-notebook.sh"]
+
+

In this example, the fix-permissions script (present in all standard images and custom images from the opendatahub-contrib repo) fixes any bad ownership or rights that may be present.

+

Install an OS package

+
    +
  • If you have to install OS packages and Python packages, it's better to start with the OS packages.
  • +
  • In your Containerfile/Dockerfile, switch to user 0, install your package(s), then switch back to user 1001. Example:
  • +
+
USER 0
+
+RUN INSTALL_PKGS="java-11-openjdk java-11-openjdk-devel" && \
+    yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS && \
+    yum -y clean all --enablerepo='*'
+
+USER 1001
+
+

Tips and tricks

+

Enabling CodeReady Builder (CRB) and EPEL

+

CRB and EPEL are repositories providing packages absent from a standard RHEL or UBI installation. They are useful and required to be able to install specific software (RStudio, I'm looking at you...).

+
    +
  • Enabling EPEL on UBI9-based images (CRB is now enabled by default on UBI9 images):
  • +
+
RUN yum install -y https://download.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
+
+
    +
  • Enabling CRB and EPEL on Centos Stream 9-based images:
  • +
+
RUN yum install -y yum-utils && \
+    yum-config-manager --enable crb && \
+    yum install -y https://download.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
+
+

Minimizing image size

+

A container image uses a "layered" filesystem. Every time your file contains a COPY or a RUN command, a new layer is created. Nothing is ever deleted: removing a file simply "masks" it in the next layer. Therefore, you must be very careful when you create your Containerfile/Dockerfile.

+
    +
  • If you start from an image that is constantly updated, like ubi9/python-39 from the Red Hat Catalog, don't do a yum update. It will only fetch new metadata, update a few files that may not have any impact, and leave you with a bigger image.
  • +
  • Rebuild your images from scratch often instead; don't do a yum update on top of a previous version.
  • +
  • Group your RUN commands as much as you can, adding && \ at the end of each line to chain your commands.
  • +
  • If you need to compile something to build an image, use the multi-stage builds approach: build the library or application in an intermediate container image, then copy the result into your final image. Otherwise, all the build artefacts will persist in your image...
  • +
+ + + + + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/odh-rhods/custom-runtime-triton/index.html b/odh-rhods/custom-runtime-triton/index.html new file mode 100644 index 00000000..b2c15c88 --- /dev/null +++ b/odh-rhods/custom-runtime-triton/index.html @@ -0,0 +1,1960 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Custom Serving Runtime (Triton) - AI on OpenShift + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + +

Deploying and using a Custom Serving Runtime in ODH/RHODS

+

Although these instructions were tested mostly using RHODS (Red Hat OpenShift Data Science), they apply to ODH (Open Data Hub) as well.

+

Before you start

+

This document will guide you through the broad steps necessary to deploy a custom Serving Runtime in order to serve a model using the Triton Runtime (NVIDIA Triton Inference Server).

+

While RHODS supports your ability to add your own runtime, it does not support the runtimes themselves. Therefore, it is up to you to configure, adjust and maintain your custom runtimes.

+

This document assumes some familiarity with RHODS.

+

The sources used to create this document are mostly:

+ +

Adding the custom triton runtime

+
    +
  1. Log in to your OpenShift Data Science instance with a user who is part of the RHODS admin group.
      +
    1. (by default, cluster-admins and dedicated admins are).
    2. +
    +
  2. +
  3. +

    Navigate to the Settings menu, then Serving Runtimes

    +

    alt_text

    +
  4. +
  5. +

    Click on the Add Serving Runtime button:

    +

    alt_text

    +
  6. +
  7. +

    Click on Start from scratch and in the window that opens up, paste the following YAML: +

    # Copyright 2021 IBM Corporation
    +#
    +# Licensed under the Apache License, Version 2.0 (the "License");
    +# you may not use this file except in compliance with the License.
    +# You may obtain a copy of the License at
    +#
    +#     http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing, software
    +# distributed under the License is distributed on an "AS IS" BASIS,
    +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    +# See the License for the specific language governing permissions and
    +# limitations under the License.
    +apiVersion: serving.kserve.io/v1alpha1
    +# kind: ClusterServingRuntime     ## changed by EG
    +kind: ServingRuntime
    +metadata:
    +  name: triton-23.05-20230804
    +  labels:
    +    name: triton-23.05-20230804
    +  annotations:
    +    maxLoadingConcurrency: "2"
    +    openshift.io/display-name: "Triton runtime 23.05 - added on 20230804 - with /dev/shm"
    +spec:
    +  supportedModelFormats:
    +    - name: keras
    +      version: "2" # 2.6.0
    +      autoSelect: true
    +    - name: onnx
    +      version: "1" # 1.5.3
    +      autoSelect: true
    +    - name: pytorch
    +      version: "1" # 1.8.0a0+17f8c32
    +      autoSelect: true
    +    - name: tensorflow
    +      version: "1" # 1.15.4
    +      autoSelect: true
    +    - name: tensorflow
    +      version: "2" # 2.3.1
    +      autoSelect: true
    +    - name: tensorrt
    +      version: "7" # 7.2.1
    +      autoSelect: true
    +
    +  protocolVersions:
    +    - grpc-v2
    +  multiModel: true
    +
    +  grpcEndpoint: "port:8085"
    +  grpcDataEndpoint: "port:8001"
    +
    +  volumes:
    +    - name: shm
    +      emptyDir:
    +        medium: Memory
    +        sizeLimit: 2Gi
    +  containers:
    +    - name: triton
    +      # image: tritonserver-2:replace   ## changed by EG
    +      image: nvcr.io/nvidia/tritonserver:23.05-py3
    +      command: [/bin/sh]
    +      args:
    +        - -c
    +        - 'mkdir -p /models/_triton_models;
    +          chmod 777 /models/_triton_models;
    +          exec tritonserver
    +          "--model-repository=/models/_triton_models"
    +          "--model-control-mode=explicit"
    +          "--strict-model-config=false"
    +          "--strict-readiness=false"
    +          "--allow-http=true"
    +          "--allow-sagemaker=false"
    +          '
    +      volumeMounts:
    +        - name: shm
    +          mountPath: /dev/shm
    +      resources:
    +        requests:
    +          cpu: 500m
    +          memory: 1Gi
    +        limits:
    +          cpu: "5"
    +          memory: 1Gi
    +      livenessProbe:
    +        # the server is listening only on 127.0.0.1, so an httpGet probe sent
    +        # from the kubelet running on the node cannot connect to the server
    +        # (not even with the Host header or host field)
    +        # exec a curl call to have the request originate from localhost in the
    +        # container
    +        exec:
    +          command:
    +            - curl
    +            - --fail
    +            - --silent
    +            - --show-error
    +            - --max-time
    +            - "9"
    +            - http://localhost:8000/v2/health/live
    +        initialDelaySeconds: 5
    +        periodSeconds: 30
    +        timeoutSeconds: 10
    +  builtInAdapter:
    +    serverType: triton
    +    runtimeManagementPort: 8001
    +    memBufferBytes: 134217728
    +    modelLoadingTimeoutMillis: 90000
    +

    +
  8. +
  9. You will likely want to update the name, as well as other parameters.
  10. +
  11. Click Add
  12. +
  13. +

    Confirm the new Runtime is in the list, and re-order the list as needed (the order chosen here is the order in which users will see these choices).

    +

    alt_text

    +
  14. +
+

Creating a project

+
    +
  • Create a new Data Science Project
  • +
  • In this example, the project is called fraud
  • +
+

Creating a model server

+
    +
  1. In your project, scroll down to the "Models and Model Servers" section
  2. +
  3. +

    Click on Configure server

    +

    alt_text

    +
  4. +
  5. +

    Fill out the details:

    +

    alt_text

    +

    alt_text

    +
  6. +
  7. +

    Click Configure

    +
  8. +
+

Deploying a model into it

+
    +
  1. If you don't have any model files handy, you can grab a copy of this file and upload it to your Object Storage of choice.
  2. +
  3. +

    Click on Deploy Model

    +

    alt_text

    +
  4. +
  5. +

    Choose a model name and a framework:

    +

    alt_text

    +
  6. +
  7. +

    Then create a new data connection containing the details of where your model is stored in Object Storage:

    +

    alt_text

    +
  8. +
  9. +

    After a little while, you should see the following:

    +

    alt_text

    +
  10. +
+

Validating the model

+
    +
  1. If you've used the model mentioned earlier in this document, you can run the following command from a Linux prompt: +
    function val-model {
    +    myhost="$1"
    +    echo "validating host $myhost"
    +    time curl -X POST -k "${myhost}" -d '{"inputs": [{ "name": "dense_input", "shape": [1, 7], "datatype": "FP32", "data": [57.87785658389723,0.3111400080477545,1.9459399775518593,1.0,1.0,0.0,0.0]}]}' | jq
    +}
    +
    +val-model "https://fraud-model-fraud.apps.mycluster.openshiftapps.com/v2/models/fraud-model/infer"
    +
  2. +
  3. Change the host to match the address for your model.
  4. +
  5. You should see an output similar to: +
    {
    +  "model_name": "fraud-model__isvc-c1529f9667",
    +  "model_version": "1",
    +  "outputs": [
    +    {
    +      "name": "dense_3",
    +      "datatype": "FP32",
    +      "shape": [
    +        1,
    +        1
    +      ],
    +      "data": [
    +        0.86280495
    +      ]
    +    }
    +  ]
    +}
    +
  6. +
+

Extra considerations for disconnected environments

+

The YAML included in this file references the following NVIDIA Triton image: nvcr.io/nvidia/tritonserver:23.05-py3

+

Ensure that this image is properly mirrored into the mirror registry.

+
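If you mirror with oc-mirror, one way to include the image is through the additionalImages section of your ImageSetConfiguration. This is only a sketch under that assumption: the storage registry URL is a hypothetical placeholder, and other mirroring workflows are equally valid.

kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  registry:
    imageURL: registry.example.com/mirror/oc-mirror-metadata   # hypothetical mirror registry
mirror:
  additionalImages:
    - name: nvcr.io/nvidia/tritonserver:23.05-py3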

Also, update the YAML definition as needed so it points to the image address in your mirror registry.

+ +

Each of the activities performed via the user interface creates a Kubernetes object inside your OpenShift cluster (an illustrative sketch follows the list):

+
    +
  • The addition of a new runtime creates a template in the redhat-ods-applications namespace.
  • +
  • Each model server is defined as a ServingRuntime
  • +
  • Each model is defined as an InferenceService
  • +
  • Each Data Connection is stored as a Secret
  • +
+ + + + + + + + +
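To illustrate, here is a rough sketch of what the InferenceService behind the model deployed above could look like. The names reuse this document's fraud example, the data connection secret name and storage path are hypothetical, and the exact schema depends on your ODH/RHODS version, so inspect a deployed model on your own cluster for the authoritative shape.

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: fraud-model
  namespace: fraud
  annotations:
    serving.kserve.io/deploymentMode: ModelMesh   # multi-model serving
spec:
  predictor:
    model:
      modelFormat:
        name: onnx
      runtime: triton-23.05-20230804        # the custom runtime added earlier
      storage:
        key: aws-connection-my-storage      # hypothetical Data Connection secret
        path: models/fraud/                 # hypothetical path inside the bucket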
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/odh-rhods/img-triton/ServingRuntimes.png b/odh-rhods/img-triton/ServingRuntimes.png new file mode 100644 index 00000000..0c250fed Binary files /dev/null and b/odh-rhods/img-triton/ServingRuntimes.png differ diff --git a/odh-rhods/img-triton/add.serving.runtime.png b/odh-rhods/img-triton/add.serving.runtime.png new file mode 100644 index 00000000..1b311e6b Binary files /dev/null and b/odh-rhods/img-triton/add.serving.runtime.png differ diff --git a/odh-rhods/img-triton/card.fraud.detection.onnx b/odh-rhods/img-triton/card.fraud.detection.onnx new file mode 100644 index 00000000..6082eff9 Binary files /dev/null and b/odh-rhods/img-triton/card.fraud.detection.onnx differ diff --git a/odh-rhods/img-triton/configure.server.png b/odh-rhods/img-triton/configure.server.png new file mode 100644 index 00000000..c5b2d08b Binary files /dev/null and b/odh-rhods/img-triton/configure.server.png differ diff --git a/odh-rhods/img-triton/data.connection.png b/odh-rhods/img-triton/data.connection.png new file mode 100644 index 00000000..a53c7f06 Binary files /dev/null and b/odh-rhods/img-triton/data.connection.png differ diff --git a/odh-rhods/img-triton/deploy.model.png b/odh-rhods/img-triton/deploy.model.png new file mode 100644 index 00000000..6d14b27f Binary files /dev/null and b/odh-rhods/img-triton/deploy.model.png differ diff --git a/odh-rhods/img-triton/model.deployed.png b/odh-rhods/img-triton/model.deployed.png new file mode 100644 index 00000000..4be89449 Binary files /dev/null and b/odh-rhods/img-triton/model.deployed.png differ diff --git a/odh-rhods/img-triton/model.name.framework.png b/odh-rhods/img-triton/model.name.framework.png new file mode 100644 index 00000000..d65361cd Binary files /dev/null and b/odh-rhods/img-triton/model.name.framework.png differ diff --git a/odh-rhods/img-triton/runtimes.list.png b/odh-rhods/img-triton/runtimes.list.png new file mode 100644 index 00000000..ace386e9 Binary files /dev/null and b/odh-rhods/img-triton/runtimes.list.png differ diff --git a/odh-rhods/img-triton/server.details.01.png b/odh-rhods/img-triton/server.details.01.png new file mode 100644 index 00000000..2be34a7e Binary files /dev/null and b/odh-rhods/img-triton/server.details.01.png differ diff --git a/odh-rhods/img-triton/server.details.02.png b/odh-rhods/img-triton/server.details.02.png new file mode 100644 index 00000000..61cf9964 Binary files /dev/null and b/odh-rhods/img-triton/server.details.02.png differ diff --git a/odh-rhods/img/api-explorer.png b/odh-rhods/img/api-explorer.png new file mode 100644 index 00000000..ce181352 Binary files /dev/null and b/odh-rhods/img/api-explorer.png differ diff --git a/odh-rhods/img/api_notebook.png b/odh-rhods/img/api_notebook.png new file mode 100644 index 00000000..b9c03f48 Binary files /dev/null and b/odh-rhods/img/api_notebook.png differ diff --git a/odh-rhods/img/cluster-policy.png b/odh-rhods/img/cluster-policy.png new file mode 100644 index 00000000..b20e119d Binary files /dev/null and b/odh-rhods/img/cluster-policy.png differ diff --git a/odh-rhods/img/cluster-settings.png b/odh-rhods/img/cluster-settings.png new file mode 100644 index 00000000..e3d1fd59 Binary files /dev/null and b/odh-rhods/img/cluster-settings.png differ diff --git a/odh-rhods/img/custom-image-launcher.png b/odh-rhods/img/custom-image-launcher.png new file mode 100644 index 00000000..49afeb94 Binary files /dev/null and b/odh-rhods/img/custom-image-launcher.png differ diff --git 
a/odh-rhods/img/custom-images-list.png b/odh-rhods/img/custom-images-list.png new file mode 100644 index 00000000..e5c2f29b Binary files /dev/null and b/odh-rhods/img/custom-images-list.png differ diff --git a/odh-rhods/img/edit-yaml.png b/odh-rhods/img/edit-yaml.png new file mode 100644 index 00000000..fcfa279f Binary files /dev/null and b/odh-rhods/img/edit-yaml.png differ diff --git a/odh-rhods/img/enabled-tile.png b/odh-rhods/img/enabled-tile.png new file mode 100644 index 00000000..c554bc70 Binary files /dev/null and b/odh-rhods/img/enabled-tile.png differ diff --git a/odh-rhods/img/explore-tile.png b/odh-rhods/img/explore-tile.png new file mode 100644 index 00000000..6817b244 Binary files /dev/null and b/odh-rhods/img/explore-tile.png differ diff --git a/odh-rhods/img/import-1.png b/odh-rhods/img/import-1.png new file mode 100644 index 00000000..bfedf63d Binary files /dev/null and b/odh-rhods/img/import-1.png differ diff --git a/odh-rhods/img/import-2.png b/odh-rhods/img/import-2.png new file mode 100644 index 00000000..bcfe3974 Binary files /dev/null and b/odh-rhods/img/import-2.png differ diff --git a/odh-rhods/img/import.png b/odh-rhods/img/import.png new file mode 100644 index 00000000..095bdc80 Binary files /dev/null and b/odh-rhods/img/import.png differ diff --git a/odh-rhods/img/instance.png b/odh-rhods/img/instance.png new file mode 100644 index 00000000..5f89cc52 Binary files /dev/null and b/odh-rhods/img/instance.png differ diff --git a/odh-rhods/img/notebook_instances.png b/odh-rhods/img/notebook_instances.png new file mode 100644 index 00000000..f312d1ad Binary files /dev/null and b/odh-rhods/img/notebook_instances.png differ diff --git a/odh-rhods/img/settings.png b/odh-rhods/img/settings.png new file mode 100644 index 00000000..657b02c6 Binary files /dev/null and b/odh-rhods/img/settings.png differ diff --git a/odh-rhods/img/update.rhods-users.png b/odh-rhods/img/update.rhods-users.png new file mode 100644 index 00000000..50ba33e5 Binary files /dev/null and b/odh-rhods/img/update.rhods-users.png differ diff --git a/odh-rhods/img/user-management.png b/odh-rhods/img/user-management.png new file mode 100644 index 00000000..fc1914fd Binary files /dev/null and b/odh-rhods/img/user-management.png differ diff --git a/odh-rhods/mlflow-tile.yaml b/odh-rhods/mlflow-tile.yaml new file mode 100644 index 00000000..1c4ac302 --- /dev/null +++ b/odh-rhods/mlflow-tile.yaml @@ -0,0 +1,40 @@ +apiVersion: dashboard.opendatahub.io/v1 +kind: OdhApplication +metadata: + annotations: + opendatahub.io/categories: 'Data management,Data preprocessing,Model training' + name: mlflow + namespace: redhat-ods-applications + labels: + app: odh-dashboard + app.kubernetes.io/part-of: odh-dashboard +spec: + enable: + validationConfigMap: mlflow-enable + img: >- + + + + + getStartedLink: 'https://mlflow.org/docs/latest/quickstart.html' + route: mlflow-server + routeNamespace: mlflow + displayName: MLflow + kfdefApplications: [] + support: third party support + csvName: '' + provider: MLflow + docsLink: 'https://mlflow.org/docs/latest/index.html' + quickStart: '' + getStartedMarkDown: >- + # MLFlow + + MLflow is an open source platform for managing the end-to-end machine learning lifecycle. It tackles four primary functions: + + - Tracking experiments to record and compare parameters and results (MLflow Tracking). + - Packaging ML code in a reusable, reproducible form in order to share with other data scientists or transfer to production (MLflow Projects). 
+ - Managing and deploying models from a variety of ML libraries to a variety of model serving and inference platforms (MLflow Models). + - Providing a central model store to collaboratively manage the full lifecycle of an MLflow Model, including model versioning, stage transitions, and annotations (MLflow Model Registry). + description: >- + MLflow is an open source platform for managing the end-to-end machine learning lifecycle. + category: Self-managed diff --git a/odh-rhods/nvidia-gpus/index.html b/odh-rhods/nvidia-gpus/index.html new file mode 100644 index 00000000..fd70f492 --- /dev/null +++ b/odh-rhods/nvidia-gpus/index.html @@ -0,0 +1,2085 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + NVIDIA GPUs - AI on OpenShift + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + +

Working with NVIDIA GPUs

+

Using NVIDIA GPUs on OpenShift

+

How does this work?

+

Support for NVIDIA GPUs can easily be set up on OpenShift. Basically, it involves installing two different operators.

+

The Node Feature Discovery operator will "discover" your cards from a hardware perspective and appropriately label the relevant nodes with this information.

+

Then the NVIDIA GPU operator will install the necessary drivers and tooling to those nodes. It will also integrate into Kubernetes so that when a Pod requires GPU resources it will be scheduled on the right node, and make sure that the containers are "injected" with the right drivers, configurations and tools to properly use the GPU.

+

So from a user perspective, the only thing you have to worry about is asking for GPU resources when defining your pods, with something like:

+
spec:
+  containers:
+  - name: app
+    image: ...
+    resources:
+      requests:
+        memory: "64Mi"
+        cpu: "250m"
+        nvidia.com/gpu: 2
+      limits:
+        memory: "128Mi"
+        cpu: "500m"
+
+

But don't worry, OpenShift Data Science and Open Data Hub take care of this part for you when you launch notebooks, workbenches, model servers, or pipeline runtimes!

+

Installation

+

Here is the documentation you can follow:

+ +

Advanced configuration

+

Working with taints

+

In many cases, you will want to restrict access to GPUs, or be able to provide a choice between different types of GPUs: simply stating "I want a GPU" is not enough. Also, if you want to make sure that only the Pods requiring GPUs end up on GPU-enabled nodes (and not other Pods that just land there at random, because that's how Kubernetes works...), you're in the right place!

+

The only supported method at the moment to achieve this is to taint nodes, then apply tolerations on the Pods depending on where you want them scheduled. If you don't pay close attention though when applying taints on Nodes, you may end up with the NVIDIA drivers not installed on those nodes...

+

In this case you must:

+
    +
  • +

    Apply the taints you need to your Nodes or MachineSets, for example:

    +
    apiVersion: machine.openshift.io/v1beta1
    +kind: MachineSet
    +metadata:
    +  ...
    +spec:
    +  replicas: 1
    +  selector:
    +    ...
    +  template:
    +    ...
    +    spec:
    +      ...
    +      taints:
    +        - key: restrictedaccess
    +          value: "yes"
    +          effect: NoSchedule
    +
    +
  • +
  • +

    Apply the relevant toleration to the NVIDIA Operator.

    +
      +
    • +

      In the nvidia-gpu-operator namespace, get to the Installed Operator menu, open the NVIDIA GPU Operator settings, get to the ClusterPolicy tab, and edit the ClusterPolicy.

      +

      Cluster Policy

      +
    • +
    • +

      Edit the YAML, and add the toleration in the daemonset section:

      +
      apiVersion: nvidia.com/v1
      +kind: ClusterPolicy
      +metadata:
      +  ...
      +  name: gpu-cluster-policy
      +spec:
      +  vgpuDeviceManager: ...
      +  migManager: ...
      +  operator: ...
      +  dcgm: ...
      +  gfd: ...
      +  dcgmExporter: ...
      +  cdi: ...
      +  driver: ...
      +  devicePlugin: ...
      +  mig: ...
      +  sandboxDevicePlugin: ...
      +  validator: ...
      +  nodeStatusExporter: ...
      +  daemonsets:
      +    ...
      +    tolerations:
      +      - effect: NoSchedule
      +        key: restrictedaccess
      +        operator: Exists
      +  sandboxWorkloads: ...
      +  gds: ...
      +  vgpuManager: ...
      +  vfioManager: ...
      +  toolkit: ...
      +...
      +
      +
    • +
    +
  • +
+

That's it, the operator is now able to deploy all the NVIDIA tooling on the nodes, even if they have the restrictedaccess taint. Repeat the procedure for any other taint you want to apply to your nodes.

+
+

Note

+

The first taint that you want to apply on GPU nodes is nvidia.com/gpu. This is the standard taint for which the NVIDIA Operator has a built-in toleration, so no need to add it. Likewise, Notebooks, Workbenches or other components from ODH/RHODS that request GPUs will already have this toleration in place. For other Pods you schedule yourself, or using Pipelines, you should make sure the toleration is also applied. Doing this will ensure that only Pods really requiring GPUs are scheduled on those nodes.

+

You can of course apply many different taints at the same time. You simply have to apply the matching toleration on the NVIDIA GPU Operator, as well as on the Pods that need to run there (see the sketch below).

+
+
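For instance, here is a minimal sketch of a Pod you schedule yourself on nodes tainted with nvidia.com/gpu; the name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload                # hypothetical name
spec:
  restartPolicy: Never
  tolerations:
    - key: nvidia.com/gpu           # matches the standard GPU taint
      operator: Exists
      effect: NoSchedule
  containers:
    - name: app
      image: nvcr.io/nvidia/cuda:12.2.0-base-ubi8   # hypothetical image
      command: ['nvidia-smi']
      resources:
        limits:
          nvidia.com/gpu: 1         # ensures scheduling on a GPU-enabled node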

Time Slicing (GPU sharing)

+

Do you want to share GPUs between different Pods? Time Slicing is one of the solutions you can use!

+

The NVIDIA GPU Operator enables oversubscription of GPUs through a set of extended options for the NVIDIA Kubernetes Device Plugin. GPU time-slicing enables workloads that are scheduled on oversubscribed GPUs to interleave with one another.

+

This mechanism for enabling time-slicing of GPUs in Kubernetes enables a system administrator to define a set of replicas for a GPU, each of which can be handed out independently to a pod to run workloads on. Unlike Multi-Instance GPU (MIG), there is no memory or fault-isolation between replicas, but for some workloads this is better than not being able to share at all. Internally, GPU time-slicing is used to multiplex workloads from replicas of the same underlying GPU.

+

Full reference

+

Configuration

+

This is a simple example of how to quickly set up Time Slicing on your OpenShift cluster. Here, we have a MachineSet that can provide nodes with one T4 card each, which we want to be "seen" as 4 different cards so that multiple Pods requiring GPUs can be launched, even if we only have one node of this type. A way to verify the result follows the steps below.

+
    +
  • +

    Create the ConfigMap that will define how we want to slice our GPU:

    +
    kind: ConfigMap
    +apiVersion: v1
    +metadata:
    +  name: time-slicing-config
    +  namespace: nvidia-gpu-operator
    +data:
    +  tesla-t4: |-
    +    version: v1
    +    sharing:
    +      timeSlicing:
    +        resources:
    +        - name: nvidia.com/gpu
    +          replicas: 4
    +
    +
    +

    Note

    +
      +
    • The ConfigMap has to be called time-slicing-config and must be created in the nvidia-gpu-operator namespace.
    • +
    • You can add many different resources with different configurations. You simply have to provide the corresponding Node label that has been applied by the operator, for example name: nvidia.com/mig-1g.5gb / replicas: 2 if you have a MIG configuration applied to a Node with an A100.
    • +
    • You can modify the value of replicas to present fewer or more GPUs. Be warned though: all the Pods on this node will share the GPU memory, with no reservation. The more slices you create, the higher the risk of OOM (out of memory) errors if your Pods are memory-hungry (even just one of them!).
    • +
    +
    +
  • +
  • +

    Modify the ClusterPolicy called gpu-cluster-policy (accessible from the NVIDIA Operator view in the nvidia-gpu-operator namespace) to point to this configuration, and optionally add a default configuration (in case your nodes are not labelled correctly; see below)

    +
    apiVersion: nvidia.com/v1
    +kind: ClusterPolicy
    +metadata:
    +  ...
    +  name: gpu-cluster-policy
    +spec:
    +  ...
    +  devicePlugin:
    +    config:
    +      default: tesla-t4
    +      name: time-slicing-config
    +  ...
    +
    +
  • +
  • +

    Apply a label to your MachineSet for the specific slicing configuration you want to use on it:

    +
    apiVersion: machine.openshift.io/v1beta1
    +kind: MachineSet
    +metadata:
    +spec:
    +  template:
    +    spec:
    +      metadata:
    +        labels:
    +          nvidia.com/device-plugin.config: tesla-t4
    +
    +
  • +
+
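To verify that the slicing took effect, the Node backing this MachineSet should now advertise the replicas as schedulable GPUs. Here is a sketch of the relevant excerpt of the Node object (as seen with oc get node <node> -o yaml), assuming the 4-replica configuration above:

status:
  capacity:
    nvidia.com/gpu: '4'      # one physical T4 presented as 4 slices
  allocatable:
    nvidia.com/gpu: '4'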

Autoscaler and GPUs

+

As they are expensive, GPUs are good candidates to put behind an Autoscaler. But because of this, there are some subtleties if you want everything to go smoothly.

+

Configuration

+
+

Warning

+

For the autoscaler to work properly with GPUs, you have to set a specific label on the MachineSet. It will help the Autoscaler figure out (in fact, simulate) what it is allowed to do. This is especially true if you have different MachineSets that feature different types of GPUs.

+

As per the article referenced above, the GPU type you set through the label cannot be nvidia.com/gpu (as you will sometimes find in the standard documentation), because it's not a valid label value. Therefore, for autoscaling purposes only, you should give the type a specific name made of letters, numbers and dashes only, like Tesla-T4-SHARED in this example.

+
+
    +
  • +

    Edit the MachineSet configuration to add the label that the Autoscaler will expect:

    +
    apiVersion: machine.openshift.io/v1beta1
    +kind: MachineSet
    +...
    +spec:
    +  ...
    +  template:
    +    ...
    +    spec:
    +      metadata:
    +        labels:
    +          cluster-api/accelerator: Tesla-T4-SHARED
    +
    +
  • +
  • +

    Create your ClusterAutoscaler configuration (example):

    +
    apiVersion: autoscaling.openshift.io/v1
    +kind: ClusterAutoscaler
    +metadata:
    +  name: "default"
    +spec:
    +  logVerbosity: 4
    +  maxNodeProvisionTime: 15m
    +  podPriorityThreshold: -10
    +  resourceLimits:
    +    gpus:
    +      - type: Tesla-T4-SHARED
    +        min: 0
    +        max: 8
    +  scaleDown:
    +    enabled: true
    +    delayAfterAdd: 20m
    +    delayAfterDelete: 5m
    +    delayAfterFailure: 30s
    +    unneededTime: 5m
    +
    +
    +

    Note

    +

    The delayAfterAdd parameter has to be set higher than the standard value, as the NVIDIA tooling can take a long time to deploy, 10-15 minutes.

    +
    +
  • +
  • +

    Create the MachineSet Autoscaler:

    +
    apiVersion: autoscaling.openshift.io/v1beta1
    +kind: MachineAutoscaler
    +metadata:
    +  name: machineset-name
    +  namespace: "openshift-machine-api"
    +spec:
    +  minReplicas: 1
    +  maxReplicas: 2
    +  scaleTargetRef:
    +    apiVersion: machine.openshift.io/v1beta1
    +    kind: MachineSet
    +    name: machineset-name
    +
    +
  • +
+

Scaling to zero

+

As GPUs are expensive resources, you may want to scale down your MachineSet to zero to save on resources. This will however require some more configuration than just setting the minimum size to zero...

+

First, some background to help you understand and enable you to solve issues that may arise. You can skip the whole explanation, but it's worth it, so please bear with me.

+

When you request resources that aren't available, the Autoscaler looks at all the available MachineAutoscalers, with their corresponding MachineSets. But how does it know which one to use? Well, it will first simulate the provisioning of a Node from each MachineSet, and see if it would fit the request. Of course, if there is already at least one Node available from a given MachineSet, the simulation is bypassed, as the Autoscaler already knows what it will get. If there are several fitting MachineSets to choose from, the default and only "Expander" available for now in OpenShift to make the decision is random. So it simply picks one totally at random.

+

That's all perfect and everything, but for GPUs, if you don't start the Node for real, the Autoscaler doesn't know what's in it! So that's where we have to help it with a small hint.

+
    +
  • +

    Set this annotation manually if it's not there. It will stick after the first scale up though, along with some other annotations the Autoscaler will add, thanks to its newly discovered knowledge.

    +
    apiVersion: machine.openshift.io/v1beta1
    +kind: MachineSet
    +metadata:
    +  annotations:
    +    machine.openshift.io/GPU: "1"
    +
    +
  • +
+

Now to the other issue that may happen if you are in an environment with multiple Availability Zones (AZ)...

+

Although you can set the AZ when you define a MachineSet, and have all the Nodes spawned properly in it, the Autoscaler simulator is not that clever. It will simply pick a Zone at random. If this is not the one where you want/need your Pod to run, this will be a problem...

+

For example, you may already have a Persistent Volume (PV) attached to your Notebook. If your storage does not support AZ-spanning (like AWS EBS volumes), your PV is bound to a specific AZ. If the Simulator creates a virtual Node in a different AZ, there will be a mismatch, your Pod will not be schedulable on this Node, and the Autoscaler will (wrongly) conclude that it cannot use this MachineSet for a scale up!

+

Here again, we have to give a hint to the Autoscaler to what the Node will look like in the end.

+
    +
  • +

    In your MachineSet, in the labels that will be added to the Node, add information regarding the topology of the Node, as well as for the volumes that may be attached to it. For example:

    +
    apiVersion: machine.openshift.io/v1beta1
    +kind: MachineSet
    +metadata:
    +spec:
    +  template:
    +    spec:
    +      metadata:
    +        labels:
    +          ...
    +          topology.kubernetes.io/zone: us-east-2a
    +          topology.ebs.csi.aws.com/zone: us-east-2a
    +
    +
  • +
+

With this, the simulated Node will be at the right place, and the Autoscaler will consider the MachineSet valid for scale up!

+

Reference material:

+ + + + + + + + + +
+
+ + +
+ + + +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/odh-rhods/openshift-group-management/index.html b/odh-rhods/openshift-group-management/index.html new file mode 100644 index 00000000..5e809682 --- /dev/null +++ b/odh-rhods/openshift-group-management/index.html @@ -0,0 +1,1760 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + OpenShift Group Management - AI on OpenShift + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

OpenShift Group Management


In the Red Hat OpenShift Documentation, there are instructions on how to configure a specific list of RHODS Administrators and RHODS Users.

However, if the list of users keeps changing, the membership of the group called rhods-users will have to be updated frequently. By default, in OpenShift, only OpenShift admins can edit group membership. Being a RHODS Admin does not confer those admin privileges, and so it would fall to the OpenShift admin to administer that list.

The instructions on this page show how the OpenShift Admin can create these groups in such a way that any member of the group rhods-admins can edit the users listed in the group rhods-users. This makes the RHODS Admins more self-sufficient, without giving them unneeded access.

For expediency, these instructions use the oc CLI, but the same result can be achieved using the OpenShift Web Console. We will assume that the user setting this up has admin privileges on the cluster.


Creating the groups

Here, we will create the groups mentioned above. Note that you can alter those names if you want, but you will then need to carry the same alterations throughout the instructions.

  1. To create the groups:
     oc adm groups new rhods-users
     oc adm groups new rhods-admins
  2. The above may complain about the group(s) already existing.
  3. To confirm both groups exist:
     oc get groups | grep rhods
  4. That should return:
     bash-4.4 ~ $ oc get groups | grep rhods
     rhods-admins
     rhods-users
  5. Both groups now exist.

Creating ClusterRole and ClusterRoleBinding

  1. This will create a Cluster Role and a Cluster Role Binding:
     oc apply -f - <<EOF
     apiVersion: rbac.authorization.k8s.io/v1
     kind: ClusterRole
     metadata:
       name: update-rhods-users
     rules:
       - apiGroups: ["user.openshift.io"]
         resources: ["groups"]
         resourceNames: ["rhods-users"]
         verbs: ["update", "patch", "get"]
     ---
     kind: ClusterRoleBinding
     apiVersion: rbac.authorization.k8s.io/v1
     metadata:
       name: rhods-admin-can-update-rhods-users
     subjects:
       - kind: Group
         apiGroup: rbac.authorization.k8s.io
         name: rhods-admins
     roleRef:
       apiGroup: rbac.authorization.k8s.io
       kind: ClusterRole
       name: update-rhods-users
     EOF
  2. To confirm they were both successfully created, run:
     oc get ClusterRole,ClusterRoleBinding | grep 'update\-rhods'
  3. You should see:
     bash-4.4 ~ $ oc get ClusterRole,ClusterRoleBinding | grep 'update\-rhods'
     clusterrole.rbac.authorization.k8s.io/update-rhods-users
     clusterrolebinding.rbac.authorization.k8s.io/rhods-admin-can-update-rhods-users
  4. You are pretty much done. You now just need to validate that things worked.

Add some users as rhods-admins


To confirm this works, add a user to the rhods-admins group. In my example, I'll add user1.
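
For instance, as the cluster admin, one way to do this from the CLI (user1 being a placeholder username):

oc adm groups add-users rhods-admins user1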


Capture the URL needed to edit the rhods-users group

Since people who are not cluster admins won't be able to browse the list of groups, capture the URL that allows them to control the membership of rhods-users.


It should look similar to:


https://console-openshift-console.apps.<thecluster>/k8s/cluster/user.openshift.io~v1~Group/rhods-users


Ensure that rhods-admins are now able to edit rhods-users


Ask someone in the rhods-admins group to confirm that it works for them. (Remember to provide them with the URL to do so).
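
With the ClusterRole above, a member of rhods-admins can also edit the membership from the CLI instead of the Web Console. A sketch, assuming the group's users list already exists and user2 is the user to add:

oc patch group rhods-users --type='json' -p '[{"op":"add","path":"/users/-","value":"user2"}]'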

They should be able to do so and successfully save their changes.

diff --git a/patterns/bucket-notifications/bucket-notifications/index.html b/patterns/bucket-notifications/bucket-notifications/index.html

Bucket Notifications

Description

The Rados Gateway (RGW) component of Ceph provides Object Storage through an S3-compatible API on all Ceph implementations: OpenShift Data Foundation and its upstream version Rook-Ceph, Red Hat Ceph Storage, plain Ceph, etc.

Bucket notifications provide a mechanism for sending information from the RGW when certain events happen on a bucket. Currently, notifications can be sent to HTTP, AMQP 0.9.1 and Kafka endpoints.

From a data engineering point of view, bucket notifications allow you to create an event-driven architecture, where messages (instead of simple log entries) are sent to various processing components or event buses whenever something happens on the object storage (object creation, deletion,...), with many fine-grained settings available.

Use cases

Application taking actions on the objects

As part of an event-driven architecture, this pattern can be used to trigger an application to perform an action following the storage event. An example could be the automated processing of a new image that has just been uploaded to the object storage (analysis, resizing, etc.). Paired with Serverless functions, this becomes a pretty efficient architecture compared to having an application constantly monitoring or polling the storage, or having to implement this triggering process in the application interacting with the storage. This loosely-coupled architecture also gives much more agility for updates and technology evolution.

External monitoring systems

The events sent by the RGW are simple messages containing all the metadata relevant to the event and the object, so they can be an excellent source of information for a monitoring system: for example, if you want to keep a trace or send an alert whenever a file of a specific type, or with a specific name, is uploaded to or deleted from the storage.

Implementation examples

This pattern is implemented in the XRay pipeline demo.

How does it work?

Characteristics

  • Notifications are sent directly from the RGW on which the event happened to an external endpoint.
  • Pluggable endpoint architecture:
    • HTTP/S
    • AMQP 0.9.1
    • Kafka
    • Knative

Data Model

  • Topics contain the definition of a specific endpoint in "push mode".
  • Notifications tie topics to buckets, and may also include filter definitions on the events.

Data model

Configuration

This configuration shows how to create a notification that will send a message (event) to a Kafka topic when a new object is created in a bucket.


Requirements

  • Access to a Ceph/ODF/RHCS installation with the RGW deployed.
  • Endpoint address (URL) for the RGW.
  • Credentials to connect to the RGW:
    • AWS_ACCESS_KEY_ID
    • AWS_SECRET_ACCESS_KEY

Note


As Ceph implements an S3-compatible API to access Object Storage, the standard naming for variables or procedures used with S3 has been retained, to stay coherent with examples, demos or documentation related to S3. Hence the AWS prefix in the previous variables.


Topic Creation

A topic is the definition of a specific endpoint. It must be created first.

Method 1: "RAW" configuration

As everything is done through the RGW API, you can query it directly. To be fair, this method is almost never used (unless there is no SDK or S3 tool for your environment), but it gives a good understanding of the process.

Example for a Kafka Endpoint:

POST
Action=CreateTopic
&Name=my-topic
&push-endpoint=kafka://my-kafka-broker.my-net:9999
&Attributes.entry.1.key=verify-ssl
&Attributes.entry.1.value=true
&Attributes.entry.2.key=kafka-ack-level
&Attributes.entry.2.value=broker
&Attributes.entry.3.key=use-ssl
&Attributes.entry.3.value=true
&Attributes.entry.4.key=OpaqueData
&Attributes.entry.4.value=https://s3-proxy.my-zone.my-net

Note


The authentication part is not detailed here as the mechanism is pretty convoluted, but it is directly implemented in most API development tools, like Postman.


The full reference for the REST API for bucket notifications is available here.

Method 2: Python + AWS SDK

As the creator of the S3 API, AWS provides SDKs for the main languages to interact with it. Thanks to this compatibility, you can use those SDKs to interact with Ceph in the same way. For Python, the library used to interact with AWS services is called boto3.

Example for a Kafka Endpoint:

import boto3
import botocore

# endpoint_url, aws_access_key_id and aws_secret_access_key are the RGW
# connection parameters listed in the Requirements section
sns = boto3.client('sns',
                endpoint_url = endpoint_url,
                aws_access_key_id = aws_access_key_id,
                aws_secret_access_key = aws_secret_access_key,
                region_name = 'default',
                config=botocore.client.Config(signature_version = 's3'))

attributes = {}
attributes['push-endpoint'] = 'kafka://my-cluster-kafka-bootstrap:9092'
attributes['kafka-ack-level'] = 'broker'

topic_arn = sns.create_topic(Name='my-topic', Attributes=attributes)['TopicArn']

Notification Configuration

The notification configuration will "tie" a bucket with a topic.

Method 1: "RAW" configuration

As previously, you can directly query the RGW REST API. This is done through an XML-formatted payload sent with a PUT command.

Example for a Kafka Endpoint:

PUT /my-bucket?notification HTTP/1.1

<NotificationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <TopicConfiguration>
        <Id>my-notification</Id>
        <Topic>my-topic</Topic>
        <Event>s3:ObjectCreated:*</Event>
        <Event>s3:ObjectRemoved:DeleteMarkerCreated</Event>
    </TopicConfiguration>
    <TopicConfiguration>
...
    </TopicConfiguration>
</NotificationConfiguration>

Again, the full reference for the REST API for bucket notifications is available here.

Method 2: Python + AWS SDK

Example for a Kafka Endpoint:

import boto3
import botocore

s3 = boto3.client('s3',
                endpoint_url = endpoint_url,
                aws_access_key_id = aws_access_key_id,
                aws_secret_access_key = aws_secret_access_key,
                region_name = 'default',
                config=botocore.client.Config(signature_version = 's3'))

bucket_notifications_configuration = {
            "TopicConfigurations": [
                {
                    "Id": 'my-id',
                    "TopicArn": 'arn:aws:sns:s3a::my-topic',
                    "Events": ["s3:ObjectCreated:*"]
                }
            ]
        }

# bucket_name is the name of the bucket this notification applies to
s3.put_bucket_notification_configuration(Bucket = bucket_name,
        NotificationConfiguration=bucket_notifications_configuration)

Filters

Although a notification is specific to a bucket (and you can have multiple configurations on one bucket), you may not want it to apply to all the objects in this bucket. For example, you may want to send an event when an image is uploaded, but not do anything when it's another type of file. You can do this with filters! And not only on the filename, but also on the tags associated with it in its metadata.

Filter examples, on keys or tags:

<Filter>
    <S3Key>
        <FilterRule>
         <Name>regex</Name>
         <Value>[0-9a-zA-Z\._-]+\.(png|gif|jp[e]?g)</Value>
        </FilterRule>
    </S3Key>
    <S3Tags>
        <FilterRule>
            <Name>Project</Name><Value>Blue</Value>
        </FilterRule>
        <FilterRule>
            <Name>Classification</Name><Value>Confidential</Value>
        </FilterRule>
    </S3Tags>
</Filter>

Events

The notifications sent to the endpoints are called events, and they are structured like this:

Event example:

{"Records":[
    {
        "eventVersion":"2.1",
        "eventSource":"ceph:s3",
        "awsRegion":"us-east-1",
        "eventTime":"2019-11-22T13:47:35.124724Z",
        "eventName":"ObjectCreated:Put",
        "userIdentity":{
            "principalId":"tester"
        },
        "requestParameters":{
            "sourceIPAddress":""
        },
        "responseElements":{
            "x-amz-request-id":"503a4c37-85eb-47cd-8681-2817e80b4281.5330.903595",
            "x-amz-id-2":"14d2-zone1-zonegroup1"
        },
        "s3":{
            "s3SchemaVersion":"1.0",
            "configurationId":"mynotif1",
            "bucket":{
                "name":"mybucket1",
                "ownerIdentity":{
                    "principalId":"tester"
                },
                "arn":"arn:aws:s3:us-east-1::mybucket1",
                "id":"503a4c37-85eb-47cd-8681-2817e80b4281.5332.38"
            },
            "object":{
                "key":"myimage1.jpg",
                "size":"1024",
                "eTag":"37b51d194a7513e45b56f6524f2d51f2",
                "versionId":"",
                "sequencer": "F7E6D75DC742D108",
                "metadata":[],
                "tags":[]
            }
        },
        "eventId":"",
        "opaqueData":"me@example.com"
    }
]}
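
On the consuming side, these events are plain JSON messages. As an illustration, here is a minimal consumer sketch; it assumes a Kafka endpoint configured as above, the kafka-python package, and broker/topic names that you would adapt:

import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    'my-topic',
    bootstrap_servers='my-cluster-kafka-bootstrap:9092',
    value_deserializer=lambda v: json.loads(v.decode('utf-8')),
)

for message in consumer:
    # A single notification can carry several records
    for record in message.value['Records']:
        print(record['eventName'],
              record['s3']['bucket']['name'],
              record['s3']['object']['key'])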

diff --git a/patterns/bucket-notifications/img/data-model.png b/patterns/bucket-notifications/img/data-model.png
Binary files /dev/null and b/patterns/bucket-notifications/img/data-model.png differ

diff --git a/patterns/kafka/kafka-to-object-storage/deployment/secor.yaml b/patterns/kafka/kafka-to-object-storage/deployment/secor.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: secor
  name: secor
  namespace: NAMESPACE
spec:
  replicas: 1
  selector:
    matchLabels:
      app: secor
  strategy: {}
  template:
    metadata:
      labels:
        app: secor
    spec:
      containers:
        - image: quay.io/rh-data-services/secor:0.29-hdp2.9.2_latest
          name: secor-0-29-hadoop-2-9-2
          imagePullPolicy: Always
          env:
            - name: ZOOKEEPER_PATH
              value: "/"
            - name: ZOOKEEPER_QUORUM
              value: "zoo-entrance:2181"
            - name: KAFKA_SEED_BROKER_HOST
              value: "YOUR_KAFKA-kafka-brokers"
            - name: KAFKA_SEED_BROKER_PORT
              value: "9092"
            - name: AWS_ACCESS_KEY
              value: YOUR_KEY
            - name: AWS_SECRET_KEY
              value: YOUR_SECRET
            - name: AWS_ENDPOINT
              value: YOUR_ENDPOINT
            - name: AWS_PATH_STYLE_ACCESS
              value: "true"
            - name: SECOR_S3_BUCKET
              value: YOUR_BUCKET
            - name: SECOR_GROUP
              value: "raw_logs"
            # - name: SECOR_S3_PATH
            #   value: "kafka-messages"
            - name: KAFKA_OFFSETS_STORAGE
              value: "zookeeper"
            - name: SECOR_MAX_FILE_SECONDS
              value: "10"
            - name: SECOR_MAX_FILE_BYTES
              value: "10000"
            - name: SECOR_UPLOAD_MANAGER
              value: "com.pinterest.secor.uploader.S3UploadManager"
            - name: SECOR_MESSAGE_PARSER
              # value: "com.pinterest.secor.parser.OffsetMessageParser"
              value: "com.pinterest.secor.parser.JsonMessageParser"
            - name: DEBUG
              value: "True"
            - name: SECOR_KAFKA_TOPIC_FILTER
              value: "my_topic"
            - name: SECOR_WRITER_FACTORY
              value: "com.pinterest.secor.io.impl.JsonORCFileReaderWriterFactory"
            - name: SECOR_COMPRESSION_CODEC
              value: ""
            - name: SECOR_FILE_EXTENSION
              value: ""
            - name: PARTITIONER_GRANULARITY_HOUR
              value: "false"
            - name: PARTITIONER_GRANULARITY_MINUTE
              value: "false"
            - name: KAFKA_USE_TIMESTAMP
              value: "true"
            - name: SECOR_FILE_WRITER_DELIMITER
              value: ""
            - name: SECOR_ORC_MESSAGE_SCHEMA
              value: ''
          volumeMounts:
            - name: "local-mount"
              mountPath: "/mnt/secor_data/message_logs/partition"
      volumes:
        - name: local-mount
          emptyDir: {}

diff --git a/patterns/kafka/kafka-to-object-storage/deployment/zookeeper-entrance.yaml b/patterns/kafka/kafka-to-object-storage/deployment/zookeeper-entrance.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zoo-entrance
  namespace: NAMESPACE
  labels:
    app: zoo-entrance
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zoo-entrance
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: zoo-entrance
    spec:
      containers:
        - name: zoo-entrance
          image: "quay.io/rh-data-services/zoo-entrance:latest"
          command:
            - /opt/stunnel/stunnel_run.sh
          ports:
            - containerPort: 2181
              name: zoo
              protocol: TCP
          env:
            - name: LOG_LEVEL
              value: notice
            - name: STRIMZI_ZOOKEEPER_CONNECT
              value: "YOUR_KAFKA-zookeeper-client:2181"
          imagePullPolicy: Always
          livenessProbe:
            exec:
              command:
                - /opt/stunnel/stunnel_healthcheck.sh
                - "2181"
            failureThreshold: 3
            initialDelaySeconds: 15
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command:
                - /opt/stunnel/stunnel_healthcheck.sh
                - "2181"
            failureThreshold: 3
            initialDelaySeconds: 15
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          volumeMounts:
            - mountPath: /etc/cluster-operator-certs/
              name: cluster-operator-certs
            - mountPath: /etc/cluster-ca-certs/
              name: cluster-ca-certs
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      volumes:
        - name: cluster-operator-certs
          secret:
            defaultMode: 288
            secretName: YOUR_KAFKA-cluster-operator-certs
        - name: cluster-ca-certs
          secret:
            defaultMode: 288
            secretName: YOUR_KAFKA-cluster-ca-cert
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zoo-entrance
  name: zoo-entrance
  namespace: NAMESPACE
spec:
  ports:
    - name: zoo
      port: 2181
      protocol: TCP
      targetPort: 2181
  selector:
    app: zoo-entrance
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    app: zoo-entrance
  name: zoo-entrance
  namespace: NAMESPACE
spec:
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: zoo-entrance
      ports:
        - port: 2181
          protocol: TCP
  podSelector:
    matchLabels:
      strimzi.io/name: YOUR_KAFKA-zookeeper
  policyTypes:
    - Ingress

diff --git a/patterns/kafka/kafka-to-object-storage/img/kafka-secor.png b/patterns/kafka/kafka-to-object-storage/img/kafka-secor.png
Binary files /dev/null and b/patterns/kafka/kafka-to-object-storage/img/kafka-secor.png differ

diff --git a/patterns/kafka/kafka-to-object-storage/kafka-to-object-storage/index.html b/patterns/kafka/kafka-to-object-storage/kafka-to-object-storage/index.html

Kafka to Object Storage

Description

Kafka is a distributed event streaming platform, great for storing hot, relevant data. Depending on the retention policy, it can hold data for quite a while, but it is not designed for long-term storage. This is where we need a mechanism to move data from Kafka to object storage.
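
As a side note, retention is configured per topic. A sketch of what this looks like with a Strimzi/AMQ Streams KafkaTopic, where the topic and cluster names are illustrative assumptions:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: YOUR_KAFKA
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: "604800000"  # keep messages for 7 days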

Use Cases

Long term retention of data

As Kafka is not really suited for long-term retention, persisting data to an object store allows you to keep it for further use, backup or archival purposes. Depending on the solution you use, you can also transform or format your data while storing it, which will ease further retrieval.

Move data to Central Data Lake

A production Kafka environment may not be the best place to run analytics or do model training. Transferring or copying the data to a central data lake allows you to decouple those two aspects (production and analytics), bringing peace of mind and further capabilities to the data consumers.

Implementation examples

This pattern is implemented in the Smart City demo.

Configuration Using Secor

This pattern implements the Secor Kafka Consumer. It can be used to consume messages from a Kafka topic and store them in S3-compatible Object Buckets.

Secor is a service persisting Kafka logs to Amazon S3, Google Cloud Storage, Microsoft Azure Blob Storage and Openstack Swift. Its key features are: strong consistency, fault tolerance, load distribution, horizontal scalability, output partitioning, configurable upload policies, monitoring, customizability, event transformation.

Kafka with Secor

Prerequisites

Bucket

An S3-compatible bucket, with its access key and secret key.

ZooKeeper Entrance

Secor needs to connect directly to ZooKeeper to keep track of some data. If you have a secured installation of ZooKeeper, as when you deploy Kafka using Strimzi or AMQ Streams, you need to deploy a ZooKeeper Entrance, a special proxy to ZooKeeper that will allow this direct connection.


Note


The deployment file is based on a Strimzi or AMQ Streams deployment of Kafka. If your configuration is different, you may have to adapt some of the parameters.


Deployment:

  • In the file deployment/zookeeper-entrance.yaml, replace:
    • the occurrences of 'NAMESPACE' with the namespace where the Kafka cluster is;
    • the occurrences of 'YOUR_KAFKA' with the name of your Kafka cluster.
  • Apply the modified file to deploy ZooKeeper Entrance (see the example command after this list).
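
For instance, one way to do the replacement and the deployment in a single pass (the namespace and cluster name are assumptions to adapt):

sed -e 's/NAMESPACE/my-kafka-namespace/g' -e 's/YOUR_KAFKA/my-cluster/g' \
    deployment/zookeeper-entrance.yaml | oc apply -f -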

Deployment

Secor
  • In the file deployment/secor.yaml, replace:
    • the occurrences of 'NAMESPACE' with the namespace where the Kafka cluster is;
    • the occurrences of 'YOUR_KAFKA' with the name of your Kafka cluster;
    • the parameters YOUR_KEY, YOUR_SECRET, YOUR_ENDPOINT and YOUR_BUCKET with the values corresponding to the bucket where you want to store the data;
    • all the other Secor parameters you may want to adjust or add, depending on the processing you want to do with the data: output format, aggregation,... Full instructions are available here.
  • Apply the modified file to deploy Secor.

diff --git a/patterns/kafka/kafka-to-serverless/deployment/01_knative_serving_eventing_kafka_setup.yaml b/patterns/kafka/kafka-to-serverless/deployment/01_knative_serving_eventing_kafka_setup.yaml

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
---
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
---
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  source:
    enabled: true

diff --git a/patterns/kafka/kafka-to-serverless/deployment/02_knative_service.yaml b/patterns/kafka/kafka-to-serverless/deployment/02_knative_service.yaml

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter
  namespace: YOUR_NAMESPACE
spec:
  template:
    spec:
      containers:
        - image: quay.io/rhdevelopers/knative-tutorial-greeter:quarkus

diff --git a/patterns/kafka/kafka-to-serverless/deployment/03_knative_kafka_source.yaml b/patterns/kafka/kafka-to-serverless/deployment/03_knative_kafka_source.yaml

apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  consumerGroup: "knative-consumer-group"
  bootstrapServers:
    - YOUR_BOOTSTRAP.YOUR_NAMESPACE.svc:9092
  topics:
    - example_topic
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: greeter

diff --git a/patterns/kafka/kafka-to-serverless/img/eda.png b/patterns/kafka/kafka-to-serverless/img/eda.png
Binary files /dev/null and b/patterns/kafka/kafka-to-serverless/img/eda.png differ

diff --git a/patterns/kafka/kafka-to-serverless/kafka-to-serverless/index.html b/patterns/kafka/kafka-to-serverless/kafka-to-serverless/index.html

Kafka to Serverless

Description

This pattern describes how to use AMQ Streams (Kafka) as an event source for OpenShift Serverless (Knative). You will learn how to implement Knative Eventing to trigger a Knative Serving function when a message is posted to a Kafka topic (the event).

Knative & OpenShift Serverless

Knative is an open source project that helps to deploy and manage modern serverless workloads on Kubernetes. Red Hat OpenShift Serverless is an enterprise-grade serverless offering, based on Knative, that provides developers with a complete set of tools to build, deploy, and manage serverless applications on OpenShift Container Platform.


Knative consists of 3 primary components:

  • Build - A flexible approach to building source code into containers.
  • Serving - Enables rapid deployment and automatic scaling of containers through a request-driven model for serving workloads based on demand.
  • Eventing - An infrastructure for consuming and producing events to stimulate applications. Applications can be triggered by a variety of sources, such as events from your own applications, cloud services from multiple providers, Software-as-a-Service (SaaS) systems, and Red Hat AMQ Streams.

EDA (Event Driven Architecture)

Event-Driven Architecture (EDA) is a way of designing applications and services to respond to real-time information based on the sending and receiving of information about individual events. EDA uses events to trigger and communicate between decoupled services and is common in modern applications built with microservices.

EDA


Use Cases

  • Develop an event-driven architecture with serverless applications.
  • Serverless business logic processing that is capable of automated scale-up and scale-down to zero.

Implementation examples

This pattern is implemented in the XRay Pipeline Demo.

Deployment example


Requirements

  • Red Hat OpenShift Container Platform
  • Red Hat AMQ Streams or Strimzi: the operator should be installed and a Kafka cluster must be created
  • Red Hat OpenShift Serverless: the operator must be installed

Part 1: Set up Knative

Once the Red Hat OpenShift Serverless operator has been installed, we can create the KnativeServing, KnativeEventing and KnativeKafka instances.


Step 1: Create required Knative instances

oc create -f 01_knative_serving_eventing_kafka_setup.yaml

Note


Those instances can also be deployed through the OpenShift Console if you prefer to use a UI. In this case, follow the Serverless deployment instructions (this section and the following ones).


Step 2: Verify Knative Instances

oc get po -n knative-serving
oc get po -n knative-eventing
  • The Pod with prefix kafka-controller-manager represents the Knative Kafka Event Source.

Part 2: Knative Serving


Knative Serving is your serverless business logic that you would like to execute based on the event generated by Kafka.

For example purposes, we are using a simple greeter service here. Depending on your use case, you will replace that with your own business logic.


Step 1: Create Knative Serving

  • From the deployment folder, in the YAML file 02_knative_service.yaml, replace the placeholder YOUR_NAMESPACE with your namespace, and apply the file to create the Knative Service:

oc create -f 02_knative_service.yaml

Step 2: Verify Knative Serving

oc get ksvc

Part 3: Knative Eventing


Knative Eventing enables developers to use an event-driven architecture with serverless applications. An event-driven architecture is based on the concept of decoupled relationships between event producers that create events, and event sinks, or consumers, that receive them.


Step 1: Kafka topic

  • Create a Kafka topic where the events will be sent. In this example, the topic will be example_topic (a sketch of one way to create it follows).
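
A minimal sketch using a Strimzi/AMQ Streams KafkaTopic; the namespace and cluster name are assumptions, and spec.topicName carries the underscore since Kubernetes object names cannot contain one:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: example-topic
  namespace: YOUR_NAMESPACE
  labels:
    strimzi.io/cluster: YOUR_KAFKA
spec:
  topicName: example_topic
  partitions: 1
  replicas: 1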

Step 2: Create Knative Eventing

  • To create the Knative Eventing part, we need to create a Kafka Event Source. Before you apply the following YAML file, 03_knative_kafka_source.yaml, please make sure to edit the namespace and bootstrapServers to match your Kafka cluster. Also make sure to use the correct Knative Service (serving) that you created in the previous step (greeter in this example).

oc create -f 03_knative_kafka_source.yaml

Step 3: Verify Knative Eventing

oc get kafkasource

At this point, as soon as new messages are received in the Kafka topic example_topic, Knative Eventing will trigger the Knative Service greeter to execute the business logic, giving you an event-driven serverless application running on OpenShift Container Platform.


Part 4: Testing

  • Optional: to view the logs of Knative Serving, you can install stern to follow them from the CLI, or use the OpenShift Web Console:

oc get ksvc
stern --selector=serving.knative.dev/service=greeter -c user-container
  • Launch a temporary Kafka CLI (kafkacat) in a new terminal:

oc run kafkacat -i -t --image debezium/tooling --restart=Never
  • From the kafkacat container shell, generate Kafka messages in the topic example_topic of your Kafka cluster. Here we are generating Kafka messages following the CloudEvents (CE) specification:

for i in {1..50} ; do sleep 10 ; \
echo '{"message":"Hello Red Hat"}' | kafkacat -P -b core-kafka-kafka-bootstrap -t example_topic \
  -H "content-type=application/json" \
  -H "ce-id=CE-001" \
  -H "ce-source=/kafkamessage" \
  -H "ce-type=dev.knative.kafka.event" \
  -H "ce-specversion=1.0" \
  -H "ce-time=2018-04-05T03:56:24Z"
done ;

The above command generates one Kafka message every 10 seconds, 50 times in total. Knative Eventing will pick up the messages and invoke the greeter Knative Service, which you can verify in the logs of Knative Serving.

diff --git a/patterns/starproxy/img/starproxy-diagram.png b/patterns/starproxy/img/starproxy-diagram.png
Binary files /dev/null and b/patterns/starproxy/img/starproxy-diagram.png differ

diff --git a/patterns/starproxy/starproxy/index.html b/patterns/starproxy/starproxy/index.html

Starburst/Trino Proxy

What it is

Starproxy is a fully HTTP-compliant proxy that is designed to sit between clients and a Trino/Starburst cluster. The motivation for developing a solution like this is laid out in some prior art on the subject.

The most attractive items to us are probably:

  • Enabling host-based security
  • Detecting "bad" queries and blocking/deprioritizing them with custom rules
  • Load balancing across regions

How it works

First and foremost, starproxy is an HTTP proxy implemented in Rust using a combination of axum and hyper.


Diagram

  1. Parse the query AST, then check a variety of rules:
     • inbound CIDR rule checking
     • checking for predicates in queries
     • identifying select * queries with no limit, among other rules
  2. If rules are violated, they can be associated with actions, like tagging the query as low priority. This is done by modifying the request headers and injecting special tags. Rules can also outright block requests by returning error status codes to the client directly. (A simplified sketch of this kind of rule checking follows.)
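
To make the idea concrete, here is an illustrative sketch of such rule checks. This is not the actual starproxy code (which is written in Rust and works on a parsed query AST); it is a deliberately simplified Python illustration, with an assumed CIDR allow-list:

import ipaddress

ALLOWED_CIDRS = [ipaddress.ip_network("10.0.0.0/8")]  # assumed allow-list

def check_query(client_ip: str, sql: str) -> list:
    """Return the list of rule violations for an incoming query."""
    violations = []
    if not any(ipaddress.ip_address(client_ip) in net for net in ALLOWED_CIDRS):
        violations.append("inbound CIDR check failed")
    normalized = " ".join(sql.lower().split())
    # Naive textual check; the real implementation inspects the AST
    if "select *" in normalized and "limit" not in normalized:
        violations.append("select * with no limit")
    return violations

print(check_query("192.168.1.10", "SELECT * FROM big_table"))
# ['inbound CIDR check failed', 'select * with no limit']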

diff --git a/search/search_index.json b/search/search_index.json

Credit Card Fraud Detection Demo using MLFlow and Red Hat OpenShift Data Science

Info

The full source and instructions for this demo are available in this repo

"},{"location":"demos/credit-card-fraud-detection-mlflow/credit-card-fraud/#demo-description-architecture","title":"Demo Description & Architecture","text":"

The goal of this demo is to demonstrate how RHODS and MLFlow can be used together to build an end-to-end MLOps platform where we can:

  • Build and train models in RHODS
  • Track and store those models with MLFlow
  • Serve a model stored in MLFlow using RHODS Model Serving (or MLFlow serving)
  • Deploy a model application in OpenShift that sends data to the served model and displays the prediction

The architecture looks like this:

Description of each component:

  • Data Set: The data set contains the data used for training and evaluating the model we will build in this demo.
  • RHODS Notebook: We will build and train the model using a Jupyter Notebook running in RHODS.
  • MLFlow Experiment tracking: We use MLFlow to track the parameters and metrics (such as accuracy, loss, etc) of a model training run. These runs can be grouped under different "experiments", making it easy to keep track of the runs.
  • MLFlow Model registry: As we track the experiment we also store the trained model through MLFlow so we can easily version it and assign a stage to it (for example Staging, Production, Archive).
  • S3 (ODF): This is where the models are stored and what the MLFlow model registry interfaces with. We use ODF (OpenShift Data Foundation) according to the MLFlow guide, but it can be replaced with another solution.
  • RHODS Model Serving: We recommend using RHODS Model Serving for serving the model. It's based on ModelMesh and allows us to easily send requests to an endpoint for getting predictions.
  • Application interface: This is the interface used to run predictions with the model. In our case, we will build a visual interface (interactive app) using Gradio and let it load the model from the MLFlow model registry.

The model we will build is a Credit Card Fraud Detection model, which predicts if a credit card usage is fraudulent or not depending on a few parameters such as: distance from home and last transaction, purchase price compared to median, if it's from a retailer that already has been purchased from before, if the PIN number is used and if it's an online order or not.

"},{"location":"demos/credit-card-fraud-detection-mlflow/credit-card-fraud/#deploying-the-demo","title":"Deploying the demo","text":""},{"location":"demos/credit-card-fraud-detection-mlflow/credit-card-fraud/#pre-requisites","title":"Pre-requisites","text":"
  • Have Red Hat OpenShift Data Science (RHODS) running in a cluster

Note

Note: You can use Open Data Hub instead of RHODS, but some instructions and screenshots may not apply

  • Have MLFlow running in a cluster
"},{"location":"demos/credit-card-fraud-detection-mlflow/credit-card-fraud/#11-mlflow-route-through-the-visual-interface","title":"1.1: MLFlow Route through the visual interface","text":"

Start by finding your route to MLFlow. You will need it to send any data to MLFlow.

  • Go to the OpenShift Console as a Developer
  • Select your mlflow project
  • Press Topology
  • Press the mlflow-server circle
    • While you are at it, you can also press the little "Open URL" button in the top right corner of the circle to open up the MLFlow UI in a new tab - we will need it later.
  • Go to the Resources tab
  • Press mlflow-server under Services
  • Look at the Hostname and mlflow-server Port.

Note

This route and port only work internally in the cluster.

"},{"location":"demos/credit-card-fraud-detection-mlflow/credit-card-fraud/#12-get-the-mlflow-route-using-command-line","title":"1.2: Get the MLFlow Route using command-line","text":"

Alternatively, you can use the OC command to get the hostname through: oc get svc mlflow-server -n mlflow -o go-template --template='{{.metadata.name}}.{{.metadata.namespace}}.svc.cluster.local{{println}}'

The port you will find with: oc get svc mlflow-server -n mlflow -o yaml
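
If you prefer not to scan the full YAML output, a jsonpath query along these lines should also work (assuming the service exposes a single port):

oc get svc mlflow-server -n mlflow -o jsonpath='{.spec.ports[0].port}'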

"},{"location":"demos/credit-card-fraud-detection-mlflow/credit-card-fraud/#2-create-a-rhods-workbench","title":"2: Create a RHODS workbench","text":"

Start by opening up RHODS by clicking on the 9 square symbol in the top menu and choosing "Red Hat OpenShift Data Science".

Then create a new Data Science project (see image), this is where we will build and train our model. This will also create a namespace in OpenShift which is where we will be running our application after the model is done. I'm calling my project 'Credit Card Fraud', feel free to call yours something different but be aware that some things further down in the demo may change.

After the project has been created, create a workbench where we can run Jupyter. There are a few important settings here that we need to set:

  • Name: Credit Fraud Model
  • Notebook Image: Standard Data Science
  • Deployment Size: Small
  • Environment Variable: Add a new one that's a Config Map -> Key/value and enter
    • Key: MLFLOW_ROUTE
    • Value: http://<route-to-mlflow>:<port>, replacing <route-to-mlflow> and <port> with the route and port that we found in step one. In my case it is http://mlflow-server.mlflow.svc.cluster.local:8080.
  • Cluster Storage: Create new persistent storage - I call it "Credit Fraud Storage" and set the size to 20GB.

Press Create Workbench and wait for it to start - status should say "Running" and you should be able to press the Open link.

Open the workbench and login if needed.

"},{"location":"demos/credit-card-fraud-detection-mlflow/credit-card-fraud/#3-train-the-model","title":"3: Train the model","text":"

When inside the workbench (Jupyter), we are going to clone a GitHub repository which contains everything we need to train (and run) our model. You can clone the GitHub repository by pressing the GitHub button in the left side menu (see image), then select "Clone a Repository" and enter this GitHub URL: https://github.com/red-hat-data-services/credit-fraud-detection-demo

Open up the folder that was added (credit-fraud-detection-demo). It contains:

  • Data for training and evaluating the model.
  • A notebook (model.ipynb) inside the model folder with a Deep Neural Network model we will train.
  • An application (model_application.py) inside the application folder that will fetch the trained model from MLFlow and run a prediction on it whenever it gets any user input.

The model.ipynb is what we are going to use for building and training the model, so open that up and take a look inside; there is documentation outlining what each cell does. What is particularly interesting for this demo are the last two cells.

The second to last cell contains the code for setting up MLFlow tracking:

mlflow.set_tracking_uri(MLFLOW_ROUTE)
mlflow.set_experiment("DNN-credit-card-fraud")
mlflow.tensorflow.autolog(registered_model_name="DNN-credit-card-fraud")

mlflow.set_tracking_uri(MLFLOW_ROUTE) just points to where we should send our MLFlow data. mlflow.set_experiment("DNN-credit-card-fraud") tells MLFlow that we want to create an experiment, and what we are going to call it. In this case I call it "DNN-credit-card-fraud" as we are building a Deep Neural Network. mlflow.tensorflow.autolog(registered_model_name="DNN-credit-card-fraud") enables autologging of a bunch of variables (such as accuracy, loss, etc) so we don't manually have to track them. It also automatically uploads the model to MLFlow after the training completes. Here we name the model the same as the experiment.

Then in the last cell we have our training code:

with mlflow.start_run():
    epochs = 2
    history = model.fit(X_train, y_train, epochs=epochs, \
                        validation_data=(scaler.transform(X_val),y_val), \
                        verbose = True, class_weight = class_weights)

    y_pred_temp = model.predict(scaler.transform(X_test))

    threshold = 0.995

    y_pred = np.where(y_pred_temp > threshold, 1,0)
    c_matrix = confusion_matrix(y_test,y_pred)
    ax = sns.heatmap(c_matrix, annot=True, cbar=False, cmap='Blues')
    ax.set_xlabel("Prediction")
    ax.set_ylabel("Actual")
    ax.set_title('Confusion Matrix')
    plt.show()

    t_n, f_p, f_n, t_p = c_matrix.ravel()
    mlflow.log_metric("tn", t_n)
    mlflow.log_metric("fp", f_p)
    mlflow.log_metric("fn", f_n)
    mlflow.log_metric("tp", t_p)

    model_proto,_ = tf2onnx.convert.from_keras(model)
    mlflow.onnx.log_model(model_proto, "models")

with mlflow.start_run(): is used to tell MLFlow that we are starting a run, and we wrap our training code with it to define exactly what code belongs to the "run". Most of the rest of the code in this cell is normal model training and evaluation code, but at the bottom we can see how we send some custom metrics to MLFlow through mlflow.log_metric and then convert the model to ONNX. This is because ONNX is one of the standard formats for RHODS Model Serving, which we will use later.

Now run all the cells in the notebook from top to bottom, either by clicking Shift-Enter on every cell, or by going to Run->Run All Cells in the very top menu. If everything is set up correctly, it will train the model and push both the run and the model to MLFlow. The run is a record with metrics of how the run went, while the model is the actual TensorFlow and ONNX model which we will later use for inference. You may see some warnings in the last cell related to MLFlow; as long as you see a final progress bar for the model being pushed to MLFlow, you are fine:
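
If you want to double-check from code that the run was recorded, a quick sketch like this may help; it assumes the MLFLOW_ROUTE variable from the notebook and a recent MLFlow version that supports the experiment_names argument:

import mlflow

mlflow.set_tracking_uri(MLFLOW_ROUTE)
runs = mlflow.search_runs(experiment_names=["DNN-credit-card-fraud"])
print(runs[["run_id", "status"]])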

"},{"location":"demos/credit-card-fraud-detection-mlflow/credit-card-fraud/#4-view-the-model-in-mlflow","title":"4: View the model in MLFlow","text":"

Let's take a look at how it looks inside MLFlow now that we have trained the model. If you opened the MLFlow UI in a new tab in step 1.1, then just swap over to that tab, otherwise follow these steps:

  • Go to the OpenShift Console
  • Make sure you are in Developer view in the left menu
  • Go to Topology in the left menu
  • At the top left, change your project to "mlflow" (or whatever you called it when installing the MLFlow operator in pre-requisites)
  • Press the "Open URL" icon in the top right of the MLFlow circle in the topology map

When inside the MLFlow interface you should see your new experiment in the left menu. Click on it to see all the runs under that experiment name, there should only be a single run from the model we just trained. You can now click on the row in the Created column to get more information about the run and how to use the model from MLFlow.

We will need the Full Path of the model in the next section when we are going to serve it, so keep this open.

"},{"location":"demos/credit-card-fraud-detection-mlflow/credit-card-fraud/#5-serve-the-model","title":"5: Serve the model","text":"

Note

You can either serve the model using RHODS Model Serving or use the model straight from MLFlow. We will show here how to serve it with RHODS Model Serving, as that scales better for large applications and load. At the bottom of this section, we'll go through what it would look like to use MLFlow instead.

To start, go to your RHODS Project and click "Add data connection". This data connection connects us to a storage we can load our models from.

Here we need to fill out a few details. These are all assuming that you set up MLFlow according to this guide and have it connected to ODF. If that's not the case then enter the relevant details for your use case.

  • Name: mlflow-connection
  • AWS_ACCESS_KEY_ID: Run oc get secrets mlflow-server -n mlflow -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d' in your command prompt, in my case it's nB0z01i0PwD9PMSISQ2W
  • AWS_SECRET_ACCESS_KEY: Run oc get secrets mlflow-server -n mlflow -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d' in your command prompt, in my case it's FLgEJmGQm5CdRQRnXc8jVFcc+QDpM1lcrGpiPBzI.

Note

In my case the cluster and storage have already been shut down; don't share these values in normal cases.

  • AWS_S3_ENDPOINT: Run oc get configmap mlflow-server -n mlflow -o yaml | grep BUCKET_HOS in your command prompt, in my case it's http://s3.openshift-storage.svc
  • AWS_DEFAULT_REGION: Where the cluster is being run
  • AWS_S3_BUCKET: Run oc get obc -n mlflow -o yaml | grep bucketName in your command prompt, in my case it's mlflow-server-576a6525-cc5b-46cb-95f3-62c3986846df

Then press "Add data connection". Here's an example of how it can look:

Then we will configure a model server, which will serve our models.

Just check the 'Make deployed available via an external route' checkbox and then press "Configure" at the bottom.

Finally, we will deploy the model. To do that, press the "Deploy model" button, which is in the same place that "Configure Model" was before. We need to fill out a few settings here:

  • Name: credit card fraud
  • Model framework: onnx-1 - Since we saved the model as ONNX in the model training section
  • Model location:
    • Name: mlflow-connection
    • Folder path: This is the full path we can see in the MLFlow interface from the end of the previous section. In my case it's 1/b86481027f9b4b568c9efa3adc01929f/artifacts/models/. Beware that we only need the last part, which looks something like: 1/..../artifacts/models/

Press Deploy and wait for it to complete. It will show a green checkmark when done. You can see the status here:

Click on "Internal Service" in the same row to see the endpoints; we will need those when we deploy the model application.

[Optional] MLFlow Serving:

This section is optional

This section explains how to use MLFlow Serving instead of RHODS Model Serving. We recommend using RHODS Model Serving as it scales better. However, if you quickly want to get a model up and running for testing, this would be an easy way.

To use MLFlow serving, simply deploy an application which loads the model straight from MLFlow. You can find the model application code for using MLFlow serving in the "application_mlflow_serving" folder in the GitHub repository you cloned in step 3.

If you look inside model_application_mlflow_serve.py, you are going to see a few particularly important lines of code:

# Get a few environment variables. These are so we can:
# - get data from MLFlow
# - Set server name and port for Gradio
MLFLOW_ROUTE = os.getenv("MLFLOW_ROUTE")
...

# Connect to MLFlow using the route.
mlflow.set_tracking_uri(MLFLOW_ROUTE)

# Specify what model and version we want to load, and then load it.
model_name = "DNN-credit-card-fraud"
model_version = 1
model = mlflow.pyfunc.load_model(
    model_uri=f"models:/{model_name}/{model_version}"
)

Here is where we set up everything that's needed for loading the model from MLFlow. The environment variable MLFLOW_ROUTE is set in the Dockerfile. You can also see that we specifically load version 1 of the model called "DNN-credit-card-fraud" from MLFlow. This makes sense since we only ran the model once, but it is easy to change if any other version or model should go into production.

Follow the steps of the next section to see how to deploy an application, but when given the choice for "Context dir" and "Environment variables (runtime only)", use these settings instead:

  • Context dir: "/model_application_mlflow_serve"
  • Environment variables (runtime only) fields:
    • Name: MLFLOW_ROUTE
    • Value: The MLFlow route from step one (http://mlflow-server.mlflow.svc.cluster.local:8080 for example)
"},{"location":"demos/credit-card-fraud-detection-mlflow/credit-card-fraud/#6-deploy-the-model-application","title":"6: Deploy the model application","text":"

The model application is a visual interface for interacting with the model. You can use it to send data to the model and get a prediction of whether a transaction is fraudulent or not. You can find the model application code in the "application" folder in the GitHub repository you cloned in step 3.

If you look inside model_application.py, you will see two particularly important lines of code:

# Get a few environment variables. These are so we:
# - Know what endpoint we should request
# - Set server name and port for Gradio
URL = os.getenv("INFERENCE_ENDPOINT") <----------
...

    response = requests.post(URL, json=payload, headers=headers)  <----------

This is what we use to send a request to our RHODS Model Server with some data we want it to run a prediction on.
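
To illustrate what such a request looks like, here is a hypothetical sketch following the KServe v2 inference protocol used by RHODS Model Serving. The input name, shape and feature values are placeholder assumptions that depend on the exported ONNX model, and the URL is the example endpoint from further below:

import requests

URL = "http://modelmesh-serving.credit-card-fraud:8008/v2/models/credit-card-fraud/infer"

payload = {
    "inputs": [
        {
            "name": "dense_input",  # assumed input tensor name; check your model's metadata
            "shape": [1, 5],
            "datatype": "FP32",
            "data": [0.3, 0.2, 0.8, 0.0, 1.0],
        }
    ]
}

response = requests.post(URL, json=payload)
print(response.json())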

We are going to deploy the application with OpenShift by pointing to the GitHub repository. It will pull down the folder, automatically build a container image based on the Dockerfile, and publish it.

To do this, go to the OpenShift Console and make sure you are in Developer view and have selected the credit-card-fraud project. Then press "+Add" in the left menu and select Import from Git.

In the "Git Repo URL" enter: https://github.com/red-hat-data-services/credit-fraud-detection-demo (this is the same repository we pulled into RHODS earlier). Then press "Show advanced Git options" and set "Context dir" to "/application". Finally, at the very bottom, click the blue "Deployment" link:

Set these values in the Environment variables (runtime only) fields:

  • Name: INFERENCE_ENDPOINT
  • Value: In the RHODS projects interface (from the previous section), copy the "restURL" and add /v2/models/credit-card-fraud/infer to the end if it's not already there. For example: http://modelmesh-serving.credit-card-fraud:8008/v2/models/credit-card-fraud/infer

Your full settings page should look something like this:

Press Create to start deploying the application.

You should now see three objects in your topology map, one for the Workbench we created earlier, one for the model serving, and one for the application we just added. When the circle of your deployment turns dark blue it means that it has finished deploying.

If you want more details on how the deployment is going, you can press the circle and look at Resources in the right menu that opens up. There you can see how the build is going and what's happening to the pod. The application will be ready when the build is complete and the pod is "Running".

When the application has been deployed you can press the "Open URL" button to open up the interface in a new tab.

Congratulations, you now have an application running your AI model!

Try entering a few values and see if it predicts it as a credit fraud or not. You can select one of the examples at the bottom of the application page.

"},{"location":"demos/financial-fraud-detection/financial-fraud-detection/","title":"Financial Fraud Detection","text":"

Info

The full source and instructions for this demo are available on this repo

This demo shows how to use OpenShift Data Science to train and test a relatively simplistic fraud detection model. In exploring this content, you will become familiar with the OpenShift Data Science offering and common workflows to use with it.

"},{"location":"demos/llm-chat-doc/llm-chat-doc/","title":"LLMs, Chatbots, Talk with your Documentation","text":"

Info

All source files and examples used in this article are available on this repo!

LLMs (Large Language Models) are the subject of the day. And of course, you can definitely work with them on OpenShift with ODH or RHODS, from creating a simple Chatbot, or using them as simple APIs to summarize or translate texts, to deploying a full application that will allow you to quickly query your documentation or knowledge base in natural language.

You will find on this page instructions and examples on how to set up the different elements that are needed for those different use cases, as well as fully implemented and ready-to-use applications.

"},{"location":"demos/llm-chat-doc/llm-chat-doc/#context-and-definitions","title":"Context and definitions","text":"

Many people are only beginning to discover those technologies. After all, it has been less than a year since the general public is aware of them, and many related technologies, tools or applications are only a few months, even weeks (and sometimes days!) old. So here are a few definitions of the different terms that will be used in this article.

  • LLM: A Large Language Model (LLM) is a sophisticated artificial intelligence system designed for natural language processing. It leverages deep learning techniques to understand and generate human-like text. LLMs use vast datasets to learn language patterns, enabling tasks like text generation, translation, summarization, and more. These models are versatile and can be fine-tuned for specific applications, like chatbots or content creation. LLMs have wide-ranging potential in various industries, from customer support and content generation to research and education, but their use also raises concerns about ethics, bias, and data privacy, necessitating responsible deployment and ongoing research.
  • Fine-tuning: Fine-tuning in the context of Large Language Models (LLMs) is a process of adapting a pre-trained, general-purpose model to perform specific tasks or cater to particular applications. It involves training the model on a narrower dataset related to the desired task, allowing it to specialize and improve performance. Fine-tuning customizes the LLM's capabilities for tasks like sentiment analysis, question answering, or chatbots. This process involves adjusting hyperparameters, data preprocessing, and possibly modifying the model architecture. Fine-tuning enables LLMs to be more effective and efficient in specific domains, extending their utility across various applications while preserving their initial language understanding capabilities.
  • RAG: RAG, or Retrieval-Augmented Generation, is a framework in natural language processing. It combines two key components: retrieval and generation. Retrieval involves selecting relevant information from a vast knowledge base, like the internet, and generation pertains to creating human-like text. RAG models employ a retriever to fetch context and facts related to a specific query or topic and a generator, often a language model, to produce coherent responses. This approach enhances the quality and relevance of generated text, making it useful for tasks like question answering, content summarization, and information synthesis, offering a versatile solution for leveraging external knowledge in AI-powered language understanding and production.
  • Embeddings: Embeddings refer to a technique in natural language processing and machine learning where words, phrases, or entities are represented as multi-dimensional vectors in a continuous vector space. These vectors capture semantic relationships and similarities between words based on their context and usage. Embeddings are created through unsupervised learning, often using models like Word2Vec or GloVe, which transform words into fixed-length numerical representations. These representations enable machines to better understand and process language, as similar words have closer vector representations, allowing algorithms to learn contextual associations. Embeddings are foundational in tasks like text classification, sentiment analysis, machine translation, and recommendation systems.
  • Vector Database: A vector database is a type of database designed to efficiently store and manage vector data, which represents information as multidimensional arrays or vectors. Unlike traditional relational databases, which organize data in structured tables, vector databases excel at handling unstructured or semi-structured data. They are well-suited for applications in data science, machine learning, and spatial data analysis, as they enable efficient storage, retrieval, and manipulation of high-dimensional data points. Vector databases play a crucial role in various fields, such as recommendation systems, image processing, natural language processing, and geospatial analysis, by facilitating complex mathematical operations on vector data for insights and decision-making.
  • Quantization: Model quantization is a technique in machine learning and deep learning aimed at reducing the computational and memory requirements of neural networks. It involves converting high-precision model parameters (usually 32-bit floating-point values) into lower precision formats (typically 8-bit integers or even binary values). This process helps in compressing the model, making it more lightweight and faster to execute on hardware with limited resources, such as edge devices or mobile phones. Quantization can result in some loss of model accuracy, but it's a trade-off that balances efficiency with performance, enabling the deployment of deep learning models in resource-constrained environments without significant sacrifices in functionality.

Fun fact: all those definitions were generated by an LLM...
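To make the Embeddings definition a bit more concrete, here is a minimal sketch using Langchain's HuggingFaceEmbeddings wrapper (the same one used later in this walkthrough); the sentences are just examples:

from langchain.embeddings import HuggingFaceEmbeddings
import numpy as np

# Defaults to the sentence-transformers/all-mpnet-base-v2 model
embeddings = HuggingFaceEmbeddings()

v1 = embeddings.embed_query("The cat sits on the mat")
v2 = embeddings.embed_query("A feline is resting on a rug")

# Cosine similarity: semantically close sentences score close to 1
similarity = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
print(f"Similarity: {similarity:.3f}")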

Do you want to know more?

Here are a few articles worth reading:

  • Best article ever: A jargon-free explanation of how AI large language models work
  • Understanding LLama2 and its architecture
  • RAG vs Fine-Tuning, which is best?
"},{"location":"demos/llm-chat-doc/llm-chat-doc/#llm-serving","title":"LLM Serving","text":"

LLM Serving is not a trivial task, at least in a production environment...

  • LLMs are usually huge (several GBs, tens of GBs...) and require GPU(s) with enough memory if you want decent accuracy and performance. Granted, you can run smaller models on home hardware with good results, but that's not the subject here. After all, we are on OpenShift, so more in a large organization environment than in an enthusiastic programmer's basement!
  • A served LLM will generally be used by multiple applications and users simultaneously. Since you can't just throw resources at it and scale your infrastructure easily (because of the previous point), you will want to optimize response time, for example by batching, caching, or buffering queries. Those are special operations that have to be handled specifically.
  • When you load an LLM, there are parameters you will want to tweak at load time, so a "generic" loader is not the best-suited solution.
"},{"location":"demos/llm-chat-doc/llm-chat-doc/#llm-serving-solutions","title":"LLM Serving solutions","text":"

Fortunately, we have different solutions to handle LLM Serving:

  • Caikit-TGIS-Serving is a solution already available in ODH, soon to be included in RHODS, specially designed to serve LLMs. You will find all installation instructions on its repo.
  • Hugging Face Text Generation Inference is another solution that you can deploy on OpenShift following those installation instructions.

What are the differences between the two?

  • At the moment, the Caikit+TGIS stack installation may be a little bit more complicated, requiring different operators, configuration, certificate generation...
  • Also, at the moment, Caikit+TGIS has a gRPC interface only, which makes it more complicated to use, especially with other tools and SDKs that may not have integration with it.
  • HF TGI, while easier and providing a REST interface, comes with a caveat: its special license does not allow you to use it for a business that would provide on-demand LLM endpoints. You can totally use it for your own chatbots, even commercially (meaning the chatbots will be used by customers). But you cannot use it to make a business of simply hosting and serving LLMs.
"},{"location":"demos/llm-chat-doc/llm-chat-doc/#which-model-to-use","title":"Which model to use?","text":"

In this section we will assume that you want to work with a \"local\" open source model, and not consume a commercial one through an API, like OpenAI's ChatGPT or Anthropic's Claude.

There are literally hundreds of thousands of models, almost all of them available on the Hugging Face site. If you don't know what this site is, you can think of it as what Quay or DockerHub are for containers: a big repository of models and datasets ready to download and use. Of course Hugging Face (the company) also creates code and provides hosting capabilities,... but that's another story.

So which model to choose will depend on several factors:

  • How good the model actually is. Several benchmarks have been published, as well as constantly updated rankings.
  • The dataset it was trained on. Was it curated or just raw data from anywhere? Does it contain NSFW material? And of course, what is the license? (Some datasets are provided for research or non-commercial use only.)
  • The license of the model itself. Some are fully open source, some only claim to be... They may be free to use in most cases, but come with restrictions attached (looking at you, Llama2...).
  • The size of the model. Unfortunately that may be the most restrictive point for your choice. The model simply must fit on the hardware you have at your disposal, or the amount of money you are willing to pay.

Currently, a good model with interesting performance for a relatively small size is Mistral-7B. Fully Open Source with an Apache 2.0 license, it will fit in an unquantized version on about 22GB of VRAM, which is perfect for an A10G card.

"},{"location":"demos/llm-chat-doc/llm-chat-doc/#llm-consumption","title":"LLM Consumption","text":"

Once served, consuming an LLM is pretty straightforward, as at the end of the day it's only an API call.

  • For Caikit+TGIS you will find here a notebook example on how to connect and use the gRPC interface.
  • As HF TGI provides a REST interface, its usage is more straightforward. Here is the full API Swagger doc (also available when you deploy the server yourself).
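For example, a minimal REST call to an HF TGI endpoint only takes a few lines of Python (the URL below is a placeholder for your own Route or Service address):

import requests

# Placeholder: replace with the address of your HF TGI deployment
url = "https://my-tgi-route.apps.cluster.example.com/generate"

payload = {
    "inputs": "Explain what OpenShift is in one sentence.",
    "parameters": {"max_new_tokens": 100, "temperature": 0.1},
}

response = requests.post(url, json=payload)
print(response.json()["generated_text"])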

However, for easier consumption and integration with other tools, a few libraries/SDKs are available to streamline the process. They will allow you to easily connect to Vector Databases or Search Agents, chain multiple models, tweak parameters,... in a few lines of code. The two main libraries at the time of this writing are Langchain and Haystack.

In the LLM on OpenShift repo, you will find several notebooks and full UI examples that will show you how to use those libraries with both Caikit+TGIS and HF-TGI to create your own Chatbot!

"},{"location":"demos/llm-chat-doc/llm-chat-doc/#rag-chatbot-full-walkthrough","title":"RAG Chatbot Full Walkthrough","text":"

Although the available code is normally pretty well documented, especially the notebooks, giving a full overview will surely help you understand how all of the different elements fit together.

For this walkthrough we will be using this application, which is a RAG-based Chatbot that will use a Redis database as the vector store, Hugging Face Text Generation Inference for LLM serving, Langchain as the \"glue\" between those components, and Gradio as the UI engine.

"},{"location":"demos/llm-chat-doc/llm-chat-doc/#requirements","title":"Requirements","text":"
  • An OpenShift cluster with RHODS or ODH deployed.
  • A node with a GPU card. For the model we will use, 24GB memory on the GPU (VRAM) is necessary. If you have less than that you can either use quantization when loading the model, use an already quantized model (results may vary as they are not all compatible with the model server), or choose another compatible smaller model.
  • If you don't want to manually install the different requirements in the notebook environment (mostly Langchain and its dependencies), which can take time, you can directly import this custom workbench image into your RHODS/ODH environment: quay.io/opendatahub-contrib/workbench-images:cuda-jupyter-langchain-c9s-py311_2023c_latest. It comes pre-installed with Langchain and many other LLM-related tools. If you don't know how to do this, see the instructions here.
"},{"location":"demos/llm-chat-doc/llm-chat-doc/#model-serving","title":"Model Serving","text":"

Deploy an HF-TGI instance following the instructions available here.

The model we want to use is Mistral-7B-Instruct as it has been specially fine-tuned for chat interactions. Our deployment must therefore be modified by changing the environment parameters as follows:

env:
  - name: MODEL_ID
    value: mistralai/Mistral-7B-Instruct-v0.1
  - name: MAX_INPUT_LENGTH
    value: '1024'
  - name: MAX_TOTAL_TOKENS
    value: '2048'
  - name: HUGGINGFACE_HUB_CACHE
    value: /models-cache
  - name: PORT
    value: '3000'
  - name: HOST
    value: 0.0.0.0

What has changed compared to the original deployment is:

  • The MODEL_ID, now mistralai/Mistral-7B-Instruct-v0.1
  • QUANTIZATION has been removed. Again, this depends on your VRAM availability.

Once the model is deployed, you can test it as indicated in the instructions on the repo:
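For instance, a quick smoke test against the /generate endpoint could look like this (the route name is hypothetical, adapt it to your deployment):

curl -X POST https://your-hf-tgi-route/generate \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "What is the capital of France?", "parameters": {"max_new_tokens": 20}}'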

"},{"location":"demos/llm-chat-doc/llm-chat-doc/#vector-store","title":"Vector Store","text":""},{"location":"demos/llm-chat-doc/llm-chat-doc/#redis-deployment","title":"Redis deployment","text":"

For our RAG we will need a Vector Database to store the Embeddings of the different documents. In this example we are using Redis.

Deployment instructions are available here.

After you follow those instructions you should have a Database ready to be populated with documents.

"},{"location":"demos/llm-chat-doc/llm-chat-doc/#document-ingestion","title":"Document ingestion","text":"

In this notebook you will find detailed instructions on how to ingest different types of documents: PDFs first, then Web pages.

The examples are based on RHODS documentation, but of course we encourage you to use your own documentation. After all that's the purpose of all of this!
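At its core, the ingestion follows the classic Langchain pattern: load, split, embed, store. Here is a condensed sketch of the idea (the notebook is more complete; the folder name, Redis URL and index name below are placeholders):

from langchain.document_loaders import PyPDFDirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Redis

# Load all the PDFs from a local folder, then split them into overlapping chunks
docs = PyPDFDirectoryLoader("rhods-doc").load()
splits = RecursiveCharacterTextSplitter(chunk_size=1024, chunk_overlap=40).split_documents(docs)

# Compute an embedding for each chunk and store chunks + vectors in Redis
embeddings = HuggingFaceEmbeddings()
rds = Redis.from_documents(
    splits,
    embeddings,
    redis_url="redis://my-redis.redis.svc.cluster.local:6379",
    index_name="docs",
)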

This other notebook will allow you to execute simple queries against your Vector Store to make sure it works alright.

"},{"location":"demos/llm-chat-doc/llm-chat-doc/#testing","title":"Testing","text":"

Now let's put all of this together!

This notebook requires only information about your Model Server (the Inference URL) and about your Vector store.

  • It will first initialize a connection to the vector database (embeddings are necessary for the Retriever to \"understand\" what is stored in the database):
embeddings = HuggingFaceEmbeddings()
rds = Redis.from_existing_index(
    embeddings,
    redis_url=redis_url,
    index_name=index_name,
    schema=schema_name
)
  • A prompt template is then defined. You can see that we will give it specific instructions on how the model must answer. This is necessary if you want to keep it focused on its task and not say anything that may not be appropriate (on top of getting you fired!). The format of this prompt is originally the one used for Llama2, but Mistral uses the same one. You may have to adapt this format if you use another model.
template=\"\"\"<s>[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant.\nYou will be given a question you need to answer, and a context to provide you with information. You must answer the question based as much as possible on this context.\nAlways answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\nQuestion: {question}\nContext: {context} [/INST]\n\"\"\"\n
  • Now we will define the LLM connection itself. As you can see, there are many parameters you can set that will modify how the model answers. Details on those parameters are available here.
llm = HuggingFaceTextGenInference(
    inference_server_url=inference_server_url,
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.1,
    repetition_penalty=1.175,
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()]
)
  • And finally we can tie it all together with a specific chain, RetrievalQA:
qa_chain = RetrievalQA.from_chain_type(llm,
                                       retriever=rds.as_retriever(search_type="similarity", search_kwargs={"k": 4, "distance_threshold": 0.5}),
                                       chain_type_kwargs={"prompt": QA_CHAIN_PROMPT},
                                       return_source_documents=True)
  • That's it! We can now use this chain to send queries. The retriever will look for relevant documents in the Vector Store, their content will be injected automatically in the prompt, and the LLM will try to create a valid answer based on its own knowledge and this content:
question = \"How can I work with GPU and taints?\"\nresult = qa_chain({\"query\": question})\n
  • The last cell in the notebook will simply filter for duplicates in the sources that were returned in the result, and display them:
def remove_duplicates(input_list):
    unique_list = []
    for item in input_list:
        if item.metadata['source'] not in unique_list:
            unique_list.append(item.metadata['source'])
    return unique_list

results = remove_duplicates(result['source_documents'])

for s in results:
    print(s)
"},{"location":"demos/llm-chat-doc/llm-chat-doc/#application","title":"Application","text":"

Notebooks are great and all, but they're not what you want to show to your users. I hope...

So here is a simple UI you can put around the same code we used in the notebooks.

The deployment is already explained in the repo and pretty straightforward, as the application will only "consume" the same Vector Store and LLM Serving we used in the notebooks. However, I will point out a few specifics:

  • As you should have noticed on the document ingestion part, a schema has been created for your index when you imported the first documents. This schema must be included in a ConfigMap that will be mounted in the Pod at runtime. This allows for a more generic Pod image that will work with any schema you will define (there are many things you can do here, like adding metadata, but that's a story for another time...).
  • Don't forget to put your Inference Server and Redis information in the environment variables of the Deployment! This one is scaled down to zero initially to give you time to do it properly, so don't forget to scale it up before opening an issue because the deployment does not start...

Some info on the code itself (app.py):

  • load_dotenv, along with the env.example file (once renamed .env) will allow you to develop locally.
  • As normally your Redis server won't be exposed externally to OpenShift, if you want to develop locally you may want to open a tunnel to it with oc port-forward pod-name 14155:14155 (replace with the name of the Redis Pod where the Service is connected and the ports used). You can use the same technique for the LLM endpoint if you have not exposed it as a route.
  • The class QueueCallback was necessary because the HuggingFaceTextGenInference library used to query the model does not return an iterator in the format Langchain expects (at the time of this writing). So instead, this implementation of the callback functions for the LLM puts the new tokens in a Queue (L43) that is then continuously read from (L78), with the content being yielded for display; a simplified sketch of this pattern follows this list. This is a little bit convoluted, but the whole stack is still in full development, so sometimes you have to be creative...
  • Gradio configuration is pretty straightforward through the ChatInterface component, only hiding some buttons, adding an avatar image for the bot,...
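Here is a simplified sketch of that queue-based streaming pattern. It only illustrates the idea; the actual implementation in app.py is more elaborate:

from queue import Queue, Empty

from langchain.callbacks.base import BaseCallbackHandler

class QueueCallback(BaseCallbackHandler):
    """Callback handler that puts each newly generated token into a queue."""

    def __init__(self, q: Queue):
        self.q = q

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.q.put(token)

    def on_llm_end(self, *args, **kwargs) -> None:
        self.q.put(None)  # sentinel value: generation is finished

def stream(q: Queue):
    """Generator consumed by the UI: yields tokens as they arrive."""
    while True:
        try:
            token = q.get(timeout=30)
            if token is None:
                break
            yield token
        except Empty:
            break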

Here is what your RAG-based Chatbot should look like (the application title can be tweaked through an environment variable):

"},{"location":"demos/retail-object-detection/retail-object-detection/","title":"Object Detection in Retail","text":"

Info

The full source and instructions for this demo are available in this repo

In this demo, you can see how to build an intelligent application that gives a customer the ability to find merchandise discounts, for shirts, as they browse clothing in a department store.

You can download the related presentation.

"},{"location":"demos/robotics-edge/robotics-edge/","title":"Robotics at the Edge","text":"

Info

The full source and instructions for this demo are coming soon!

"},{"location":"demos/smart-city/smart-city/","title":"Smart City, an Edge-to-Core Data Story","text":"

Info

The full source and instructions for this demo are available in this repo

In this demo, we show how to implement this scenario:

  • Using a trained ML model, licence plates are recognized at toll locations.
  • Data (plate number, location, timestamp) is sent from the toll locations (edge) to the core using Kafka mirroring to handle communication issues and recovery.
  • Incoming data is screened in real time to trigger alerts for wanted vehicles (Amber Alert).
  • Data is aggregated and stored into object storage.
  • A central database contains other information coming from the licence registry system: car model, color, etc.
  • Data analysis leveraging Presto and Superset is done against the stored data.

This demo is showcased in this video.

"},{"location":"demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/","title":"Telecom Customer Churn using Airflow and Red Hat OpenShift Data Science","text":"

Info

The full source and instructions for this demo are available in this repo

"},{"location":"demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/#demo-description","title":"Demo description","text":"

The goal of this demo is to demonstrate how Red Hat OpenShift Data Science (RHODS) and Airflow can be used together to build an easy-to-manage pipeline. To do that, we will show how to build and deploy an Airflow pipeline, mainly with Elyra, but with some tips if you want to build it manually. In the end, you will have a pipeline that:

  • Loads some data
  • Trains two different models
  • Evaluates which model is best
  • Saves that model to S3

Hint

You can expand on this demo by loading the pushed model into MLFlow, or automatically deploying it into some application, like in the Credit Card Fraud Demo

The models we build are used to predict customer churn for a telecom company using structured data. The data contains fields such as: whether the customer is a senior citizen, whether they have a partner, their tenure, etc.

"},{"location":"demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/#deploying-the-demo","title":"Deploying the demo","text":""},{"location":"demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/#pre-requisites","title":"Pre-requisites","text":"
  • Fork this git repository into a GitHub or GitLab repo (the demo shows steps for GitHub, but either works): https://github.com/red-hat-data-services/telecom-customer-churn-airflow
  • Have Airflow running in a cluster and point Airflow to the cloned git repository.
  • Have access to some S3 storage (this guide uses ODF with a bucket created in the namespace \"airflow\").
  • Have Red Hat OpenShift Data Science (RHODS) running in a cluster. Make sure you have admin access in RHODS, or know someone who does.

Note

You can use Open Data Hub instead of RHODS, but some instructions and screenshots may not apply.

"},{"location":"demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/#1-open-up-airflow","title":"1: Open up Airflow","text":"

You can find the route to the Airflow console with this command: oc get route -n airflow

Enter it in the browser and you will see something like this:

Keep that open in a tab as we will come back to Airflow later on.

"},{"location":"demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/#2-add-elyra-as-a-custom-notebook-image","title":"2: Add Elyra as a Custom Notebook Image","text":"

It's possible to build pipelines by creating an Airflow DAG script in Python. Another, arguably simpler, method is to use Elyra to visually build out the pipeline and then submit it to Airflow. Most of this demo revolves around using Elyra together with Airflow, but at the very end there is a bonus section on how to use Airflow independently.

To get access to Elyra, we will simply import it as a custom notebook image. Start by opening up RHODS by clicking on the 9-square symbol in the top menu and choosing \"Red Hat OpenShift Data Science\".

Then go to Settings -> Notebook Images and press \"Import new image\". If you can't see Settings then you are lacking sufficient access. Ask your admin to add this image instead.

Under Repository enter: quay.io/eformat/elyra-base:0.2.1 and then name it something like Elyra.

"},{"location":"demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/#3-create-a-rhods-workbench","title":"3: Create a RHODS workbench","text":"

A workbench in RHODS lets us spin up and down notebooks as needed and bundle them under Projects, which is a great way to get easy access to compute resources and keep track of your work. Start by creating a new Data Science project (see image). I'm calling my project 'Telecom Customer Churn', feel free to call yours something different but be aware that some things further down in the demo may change.

After the project has been created, create a workbench where we can run Jupyter. There are a few important settings here that we need to set:

  • Name: Customer Churn
  • Notebook Image: Elyra
  • Deployment Size: Small
  • Environment Variables: Secret -> AWS with your AWS details

Press Create Workbench and wait for it to start - status should say \"Running\" and you should be able to press the Open link.

Open the workbench and login if needed.

"},{"location":"demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/#4-load-a-git-repository","title":"4: Load a Git repository","text":"

When inside the workbench (Jupyter), we are going to clone a GitHub repository that contains everything we need to build our DAG. You can clone the GitHub repository by pressing the GitHub button in the left side menu (see image), then select \"Clone a Repository\" and enter your GitHub URL (Your forked version of this: https://github.com/red-hat-data-services/telecom-customer-churn-airflow)

The notebooks we will use are inside the include/notebooks folder; there should be 5 in total: 4 for building the pipeline and 1 for verifying that everything worked. They all run standard Python code, which is the beauty of Airflow combined with Elyra: there is no need to worry about additional syntax.

"},{"location":"demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/#5-configure-elyra-to-work-with-airflow","title":"5: Configure Elyra to work with Airflow","text":"

Before we can build and run any DAGs through Elyra, we first need to configure Elyra to talk with our Airflow instance. There are two ways to configure this, either visually or through the terminal. Choose one for each section. If you want to do it through the terminal, open the terminal like this:

"},{"location":"demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/#51-create-a-runtime-image","title":"5.1 Create a Runtime Image","text":"

We will start by configuring a Runtime Image; this is the image we will use to run each node in our pipeline. Open Runtime Images on the left-hand side of the screen.

"},{"location":"demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/#511-create-the-runtime-image-visually","title":"5.1.1 Create the Runtime Image visually","text":"

Press the plus icon next to the Runtime Images title to start creating a new Runtime Image. There are only three fields we need to worry about here:

  • Display name: airflow-runner
  • Image Name: quay.io/eformat/airflow-runner:2.5.1
  • Image Pull Policy: Always

"},{"location":"demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/#512-create-the-runtime-image-via-the-terminal","title":"5.1.2 Create the Runtime Image via the terminal","text":"

Execute this in the terminal:

mkdir -p ~/.local/share/jupyter/metadata/runtime-images/
cat << EOF > ~/.local/share/jupyter/metadata/runtime-images/airflow-runner.json
{
  "display_name": "airflow-runner",
  "metadata": {
    "tags": [],
    "display_name": "airflow-runner",
    "image_name": "quay.io/eformat/airflow-runner:2.5.1",
    "pull_policy": "Always"
  },
  "schema_name": "runtime-image"
}
EOF

Refresh and you should see airflow-runner appear in the Runtime Images.

"},{"location":"demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/#52-create-a-runtime","title":"5.2 Create a Runtime","text":"

Now we just need a Runtime configuration, which is what Elyra will use to save the DAG (in our Git repo), connect to Airflow and run the pipeline. Just like with the Runtime image, we can configure this visually or via the terminal.

Open Runtimes on the left-hand side of the screen.

"},{"location":"demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/#521-configure-the-runtime-visually","title":"5.2.1 Configure the Runtime visually","text":"

Press the plus icon next to the title, select \"New Apache Airflow runtime configuration\" and enter these fields:

General settings:

  • Display Name: airflow

Airflow settings:

  • Apache Airflow UI Endpoint: run oc get route -n airflow to get the route
  • Apache Airflow User Namespace: airflow

Github/GitLabs settings:

  • Git type: GITHUB or GITLAB, depending on where you stored the repository
  • GitHub or GitLab server API Endpoint: https://api.github.com or your GitLab endpoint
  • GitHub or GitLab DAG Repository: Your repository (red-hat-data-services/telecom-customer-churn-airflow in my case)
  • GitHub or GitLab DAG Repository Branch: Your branch (main in my case)
  • Personal Access Token: A personal access token for pushing to the repository

Cloud Object Storage settings: These completely depend on where and how you set up your S3 storage. If you created a bucket from ODF then it will look similar to this:

  • Cloud Object Storage Endpoint: http://s3.openshift-storage.svc
  • Cloud Object Storage Bucket Name: The name of your bucket (airflow-storage-729b10d1-f44d-451d-badb-fbd140418763 in my case)
  • Cloud Object Storage Authentication Type: KUBERNETES_SECRET
  • Cloud Object Storage Credentials Secret: The name of the secret containing your access and secret keys (in my case airflow-storage, which is the name I gave the Object Bucket Claim)
  • Cloud Object Storage Username: your AWS_ACCESS_KEY_ID
  • Cloud Object Storage Password: your AWS_SECRET_ACCESS_KEY
"},{"location":"demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/#522-configure-the-runtime-via-the-terminal","title":"5.2.2 Configure the Runtime via the terminal","text":"

If you prefer doing this through the terminal, then execute this in the terminal and replace any variables with their values (see the visual section for hints):

mkdir -p ~/.local/share/jupyter/metadata/runtimes
cat << EOF >  ~/.local/share/jupyter/metadata/runtimes/airflow.json
{
  "display_name": "airflow",
  "metadata": {
     "tags": [],
     "display_name": "airflow",
     "user_namespace": "airflow",
     "git_type": "GITHUB",
     "github_api_endpoint": "https://${GIT_SERVER}",
     "api_endpoint": "${AIRFLOW_ROUTE}",
     "github_repo": "${GIT_REPO}",
     "github_branch": "main",
     "github_repo_token": "${GIT_TOKEN}",
     "cos_auth_type": "KUBERNETES_SECRET",
     "cos_endpoint": "${STORAGE_ENDPOINT}",
     "cos_bucket": "${STORAGE_BUCKET}",
     "cos_secret": "airflow-storage",
     "cos_username": "${AWS_ACCESS_KEY_ID}",
     "cos_password": "${AWS_SECRET_ACCESS_KEY}",
     "runtime_type": "APACHE_AIRFLOW"
  },
  "schema_name": "airflow"
}
EOF

(cos_secret is the name of the secret holding your storage credentials, airflow-storage in this example.)

Refresh and you should see airflow appear in the Runtimes.

"},{"location":"demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/#6-create-a-dag-with-elyra","title":"6. Create a DAG with Elyra","text":"

Now that we have a runtime and runtime image defined, we can build and run the pipeline. You can also find this pipeline in /dags/train_and_compare_models.pipeline if you prefer to just open an existing one.

To start creating a new pipeline, open up the launcher (click on the plus next to a notebook tab if you don't have it open), and press the \"Apache Airflow Pipeline Editor\".

Now drag the Notebooks in the correct order and connect them up with each other. You can find the Notebooks in /included/notebooks and the correct order is: process_data -> model_gradient_boost & model_randomforest -> compare_and_push. These are their functions:

  • process_data.ipynb: Downloads data from GitHub that we will use to train the models. Then processes it, splits it into training and testing partitions and finally pushes it to S3.
  • model_gradient_boost.ipynb: Fetches the processed data from S3 and uses it to train the model and evaluate it to get a test accuracy. Then pushes the model and the accompanying accuracy to S3.
  • model_randomforest.ipynb: Fetches the processed data from S3 and uses it to train the model and evaluate it to get a test accuracy. Then pushes the model and the accompanying accuracy to S3.
  • compare_and_push.ipynb: Downloads the models and their accuracies from S3, does a simple compare on which performs better, and pushes that model under the name \"best_model\" to S3.

After the notebooks are added, we need to go through each of them and change their Runtime Image to the airflow-runner image we created earlier.

We also need to set some environment variables so that the Airflow nodes get access to the bucket name and endpoint when running, without hard-coding them in the notebooks. These details are already added to the Airflow Runtime we set up before, but at run time it only passes along the Kubernetes secret, which contains AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

Add these two environment variables (both should be the same as you entered in section 5.2):

  • Endpoint:
    • Name: AWS_S3_ENDPOINT
    • Value: http://s3.openshift-storage.svc (or similar endpoint address)
  • Bucket name:
    • Name: AWS_S3_BUCKET
    • Value: The name of your bucket (airflow-storage-729b10d1-f44d-451d-badb-fbd140418763 in my case)
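For reference, here is how those values can then be consumed inside one of the notebooks. This is a hedged sketch with boto3; the actual notebooks may differ:

import os
import boto3

# The endpoint and bucket come from the environment variables set in Elyra;
# the credentials come from the Kubernetes secret passed along by the runtime.
s3 = boto3.client(
    "s3",
    endpoint_url=os.environ["AWS_S3_ENDPOINT"],
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)

s3.upload_file("best_model.pkl", os.environ["AWS_S3_BUCKET"], "models/best_model.pkl")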

Press Run to start the pipeline:

You can now go to the Airflow UI to see the progress. If you have closed the tab then refer to section 1.

In Airflow you will see a DAG called train_and_compare_models with some numbers next to it. Click on it and open the Graph tab.

If all the nodes are dark green, the run has completed successfully.

We can now also confirm that the trained model was saved in our bucket by going back to the RHODS notebook and running the notebook test_airflow_success.ipynb. If all went well it should print the model, its type and its accuracy.

And that's how you can use Airflow together with RHODS to create a pipeline!

"},{"location":"demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/#bonus-section-use-an-airflow-dag-file","title":"Bonus section: Use an Airflow DAG file","text":"

Instead of building a pipeline through notebooks in Elyra, we can of course build and use an Airflow DAG directly. You can develop individual methods (data processing, model training, etc.) in RHODS notebooks and then pull them all together in a DAG Python file. This is a more segmented way for a Data Scientist to work than with Elyra, but it is still very possible within OpenShift and provides some more flexibility.

I have created a simple test_dag.py just to show what it can look like. You can find it in the /dags folder. Then it's up to you what operators you want to run, which secrets you want to load, etc. For inspiration, you can open up the automatically created Elyra DAG we just ran. To do that, go into the DAG and press Code:

Some notes if you wish to manually build a similar DAG:

  • Make sure to add the environment variables
  • Don't hardcode secrets into the DAG, but rather reference a Kubernetes secret. For example:
secrets=[
    Secret("env", "AWS_ACCESS_KEY_ID", "airflow-storage", "AWS_ACCESS_KEY_ID"),
    Secret(
        "env", "AWS_SECRET_ACCESS_KEY", "airflow-storage", "AWS_SECRET_ACCESS_KEY"
    ),
]
  • The image that is being used for the KubernetesPodOperator is quay.io/eformat/airflow-runner:2.5.1
  • If you want to run notebooks manually, look at the Papermill Operator
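Putting those notes together, a minimal hand-written DAG could look like the following sketch. It is written against Airflow 2.5 and the image above, and is not the actual test_dag.py; names and values are illustrative:

from datetime import datetime

from airflow import DAG
from airflow.kubernetes.secret import Secret
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

# Reference the Kubernetes secret instead of hardcoding credentials
secrets = [
    Secret("env", "AWS_ACCESS_KEY_ID", "airflow-storage", "AWS_ACCESS_KEY_ID"),
    Secret("env", "AWS_SECRET_ACCESS_KEY", "airflow-storage", "AWS_SECRET_ACCESS_KEY"),
]

with DAG(dag_id="manual_example", start_date=datetime(2023, 1, 1), schedule=None) as dag:
    process_data = KubernetesPodOperator(
        task_id="process_data",
        name="process-data",
        namespace="airflow",
        image="quay.io/eformat/airflow-runner:2.5.1",
        cmds=["python", "-c", "print('replace me with the real processing code')"],
        secrets=secrets,
        env_vars={
            "AWS_S3_ENDPOINT": "http://s3.openshift-storage.svc",
            "AWS_S3_BUCKET": "my-bucket-name",  # placeholder bucket name
        },
    )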
"},{"location":"demos/water-pump-failure-prediction/water-pump-failure-prediction/","title":"Water Pump Failure Prediction","text":"

Info

The full source for this demo is available in this repo. Look in the workshop folder for the full instructions.

This demo shows how to detect anomalies in sensor data. The web app allows you to broadcast various sources of data in real time.

"},{"location":"demos/xray-pipeline/xray-pipeline/","title":"XRay Analysis Automated Pipeline","text":"

Info

The full source and instructions for this demo are available in this repo

In this demo, we implement an automated data pipeline for chest Xray analysis:

  • Chest Xrays are ingested into an object store based on Ceph.
  • The object store sends notifications to a Kafka topic.
  • A KNative Eventing listener on the topic triggers a KNative Serving function.
  • An ML-trained model running in a container makes a pneumonia risk assessment for incoming images.
  • A Grafana dashboard displays the pipeline in real time, along with incoming, processed, and anonymized images, as well as full metrics.

This pipeline is showcased in this video (slides are also here).

"},{"location":"demos/yolov5-training-serving/yolov5-training-serving/","title":"Model Training and Serving - YOLOv5","text":"

In this tutorial, we're going to see how you can customize YOLOv5, an object detection model, to recognize specific objects in pictures, and how to deploy and use this model.

"},{"location":"demos/yolov5-training-serving/yolov5-training-serving/#yolo-and-yolov5","title":"YOLO and YOLOv5","text":"

YOLO (You Only Look Once) is a popular object detection and image segmentation model developed by Joseph Redmon and Ali Farhadi at the University of Washington. The first version of YOLO was released in 2015 and quickly gained popularity due to its high speed and accuracy.

YOLOv2 was released in 2016 and improved upon the original model by incorporating batch normalization, anchor boxes, and dimension clusters. YOLOv3 was released in 2018 and further improved the model's performance by using a more efficient backbone network, adding a feature pyramid, and making use of focal loss.

In 2020, YOLOv4 was released which introduced a number of innovations such as the use of Mosaic data augmentation, a new anchor-free detection head, and a new loss function.

In 2021, Ultralytics released YOLOv5, which further improved the model's performance and added new features such as support for panoptic segmentation and object tracking.

YOLO has been widely used in a variety of applications, including autonomous vehicles, security and surveillance, and medical imaging. It has also been used to win several competitions, such as the COCO Object Detection Challenge and the DOTA Object Detection Challenge.

"},{"location":"demos/yolov5-training-serving/yolov5-training-serving/#model-training","title":"Model training","text":"

YOLOv5 has already been trained to recognize some objects. Here we are going to use a technique called Transfer Learning to adjust YOLOv5 to recognize a custom set of images.

"},{"location":"demos/yolov5-training-serving/yolov5-training-serving/#transfer-learning","title":"Transfer Learning","text":"

Transfer learning is a machine learning technique in which a model trained on one task is repurposed or adapted to another related task. Instead of training a new model from scratch, transfer learning allows the use of a pre-trained model as a starting point, which can significantly reduce the amount of data and computing resources needed for training.

The idea behind transfer learning is that the knowledge gained by a model while solving one task can be applied to a new task, provided that the two tasks are similar in some way. By leveraging pre-trained models, transfer learning has become a powerful tool for solving a wide range of problems in various domains, including natural language processing, computer vision, and speech recognition.

Ultralytics have fully integrated the transfer learning process in YOLOv5, making it easy for us to do. Let's go!

"},{"location":"demos/yolov5-training-serving/yolov5-training-serving/#environment-and-prerequisites","title":"Environment and prerequisites","text":"
  • This training should be done in a Data Science Project to be able to modify the Workbench configuration (see the /dev/shm issue below).
  • YOLOv5 is using PyTorch, so in RHODS it's better to start with a notebook image already including this library, rather than having to install it afterwards.
  • PyTorch internally uses shared memory (/dev/shm) to exchange data between its internal worker processes. However, default container engine configurations limit this memory to the bare minimum, which can make the process exhaust it and crash. The solution is to manually increase this memory by mounting a volume with enough space at that location. This problem will be fixed in an upcoming version. Meanwhile, you can use this procedure.
  • Finally, a GPU is strongly recommended for this type of training.
"},{"location":"demos/yolov5-training-serving/yolov5-training-serving/#data-preparation","title":"Data Preparation","text":"

To train the model we will of course need some data: in this case, a sufficient number of images for the various classes we want to recognize, along with their labels and the definitions of the bounding boxes for the objects we want to detect.

In this example we will use images from Google's Open Images. We will work with 3 classes: Bicycle, Car and Traffic sign.

We have selected only a few classes in this example to speed up the process, but of course feel free to adapt and choose the ones you want.

For this first step:

  • If not already done, create your Data Science Project.
  • Create a Workbench of type PyTorch, with at least 8Gi of memory, 1 GPU and 20GB of storage.
  • Apply this procedure to increase shared memory.
  • Start the workbench.
  • Clone the repository https://github.com/rh-aiservices-bu/yolov5-transfer-learning, open the notebook 01-data_preparation.ipynb and follow the instructions.

Once you have completed the whole notebook, the Dataset is ready for training!

"},{"location":"demos/yolov5-training-serving/yolov5-training-serving/#training","title":"Training","text":"

In this example, we will do the training with the smallest base model available to save some time. Of course you can change this base model and adapt the various hyperparameters of the training to improve the result.

For this second step, from the same workbench environment, open the notebook 02-model_training.ipynb and follow the instructions.

Warning

The amount of memory you have assigned to your Workbench has a great impact on the batch size you will be able to work with, independently of the size of your GPU. For example, a batch size of 128 will barely fit in a Pod with 8Gi of memory. The higher the better, until it breaks... which you will find out soon enough anyway, after the first 1-2 epochs.

Note

During the training, you can launch and access Tensorboard by:

  • Opening a Terminal tab in Jupyter
  • Launching Tensorboard from this terminal with tensorboard --logdir yolov5/runs/train
  • Accessing Tensorboard in your browser using the same Route as your notebook, but replacing the .../lab/... part with .../proxy/6006/. Example: https://yolov5-yolo.apps.cluster-address/notebook/yolo/yolov5/proxy/6006/

Once you have completed the whole notebook, you have a model that is able to recognize the three different classes on a given image.

"},{"location":"demos/yolov5-training-serving/yolov5-training-serving/#model-serving","title":"Model Serving","text":"

We are going to serve a YOLOv5 model using the ONNX format, a general purpose open format built to represent machine learning models. RHODS Model Serving includes the OpenVino serving runtime that accepts two formats for models: OpenVino IR, its own format, and ONNX.

Note

Many files and code we are going to use, especially the ones from the utils and models folders, come directly from the YOLOv5 repository. They include many utilities and functions needed for image pre-processing and post-processing. We kept only what is needed, rearranged in a way that is easier to follow within notebooks. YOLOv5 includes many different tools and CLI commands that are worth learning, so don't hesitate to have a look at it directly.

"},{"location":"demos/yolov5-training-serving/yolov5-training-serving/#environment-and-prerequisites_1","title":"Environment and prerequisites","text":"
  • YOLOv5 is using PyTorch, so in RHODS it's better to start with a notebook image already including this library, rather than having to install it afterwards.
  • Although it is not strictly necessary, as in this example we won't use the model we trained in the previous section, the same environment can totally be reused.
"},{"location":"demos/yolov5-training-serving/yolov5-training-serving/#converting-a-yolov5-model-to-onnx","title":"Converting a YOLOv5 model to ONNX","text":"

YOLOv5 is based on PyTorch. So base YOLOv5 models, or the ones you retrain using this framework, will come in the form of a model.pt file. We will first need to convert it to the ONNX format.

  • From your workbench, clone the repository https://github.com/rh-aiservices-bu/yolov5-model-serving.
  • Open the notebook 01-yolov5_to_onnx.ipynb and follow the instructions.
  • The notebook will guide you through all the steps for the conversion. If you don't want to do it at this time, you can also find in this repo the original YOLOv5 \"nano\" model, yolov5n.pt, and its already converted ONNX version, yolov5n.onnx.
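For reference, the conversion ultimately boils down to YOLOv5's own export script, something along these lines (check the notebook for the exact arguments used):

# Run from inside the cloned YOLOv5 repository
python export.py --weights yolov5n.pt --include onnx --imgsz 640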

Once converted, you can save/upload your ONNX model to the storage you will use in your Data Connection on RHODS. At the moment it has to be an S3-compatible object storage, and the model must be in its own folder (not at the root of the bucket).

"},{"location":"demos/yolov5-training-serving/yolov5-training-serving/#serving-the-model","title":"Serving the model","text":"

Here we can use the standard configuration path for RHODS Model Serving:

  • Create a Data Connection to the storage where you saved your model. In this example we don't need to expose an external Route, but of course you can. In this case though, you won't be able to directly see the internal gRPC and REST endpoints in the RHODS UI, you will have to get them from the Network->Services panel in the OpenShift Console.
  • Create a Model Server, then deploy the model using the ONNX format.

Note

You can find full detailed versions of this procedure in this Learning Path or in the RHODS documentation.

"},{"location":"demos/yolov5-training-serving/yolov5-training-serving/#grpc-connection","title":"gRPC connection","text":"

With the gRPC interface of the model server, you have access to different Services. They are described, along with their format, in the grpc_predict_v2.proto file.

There is a lot of important information in this file: how to query the service, how to format the data,... This is really important, as the data format is not something you can "invent", and it is not exactly the same as for the REST interface (!).

This proto file, which is a service description meant to be used with any programming language, has already been converted to usable Python modules defining objects and classes to be used to interact with the service: grpc_predict_v2_pb2.py and grpc_predict_v2_pb2_grpc.py. If you want to learn more about this, the conversion can be done using the protoc tool.

You can use the notebook 02-grpc.ipynb to connect to the interface and test some of the services. You will see that many \"possible\" services from ModelMesh are unfortunately simply not implemented with the OpenVino backend at the time of this writing. But at least ModelMetadata will give some information on the formats we have to use for inputs and outputs when doing the inference.
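In practice, querying the ModelMetadata service looks roughly like this. The service address and model name below are assumptions you must adapt to your own deployment:

import grpc
import grpc_predict_v2_pb2
import grpc_predict_v2_pb2_grpc

# Assumption: the internal ModelMesh Service, reachable from inside the cluster on its gRPC port
channel = grpc.insecure_channel("modelmesh-serving.my-project.svc.cluster.local:8033")
stub = grpc_predict_v2_pb2_grpc.GRPCInferenceServiceStub(channel)

# "yolo" is a placeholder for the name you gave your deployed model
request = grpc_predict_v2_pb2.ModelMetadataRequest(name="yolo")
metadata = stub.ModelMetadata(request)
print(metadata)  # input/output tensor names, shapes and datatypes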

"},{"location":"demos/yolov5-training-serving/yolov5-training-serving/#consuming-the-model-over-grpc","title":"Consuming the model over gRPC","text":"

In the 03-remote_inference_grpc.ipynb notebook, you will find a full example on how to query the grpc endpoint to make an inference. It is backed by the file remote_infer_grpc.py, where most of the relevant code is:

  • Image preprocessing on L35: reads the image and transforms it into a proper numpy array.
  • gRPC request content building on L44: transforms the array into the expected input shape (refer to the model metadata obtained in the previous notebook), then flattens it as expected by ModelMesh.
  • gRPC call on L58.
  • Response processing on L73: reshapes the response from a flat array to the expected output shape (refer to the model metadata obtained in the previous notebook), runs NMS to remove overlapping boxes, and draws the boxes from the results.

The notebook gives the example for one image, as well as the processing of several ones from the images folder. This allows for a small benchmark on processing/inference time.

"},{"location":"demos/yolov5-training-serving/yolov5-training-serving/#consuming-the-model-over-rest","title":"Consuming the model over REST","text":"

In the 04-remote_inference_rest.ipynb notebook, you will find a full example on how to query the REST endpoint to make an inference. It is backed by the file remote_infer_rest.py, where most of the relevant code is:

  • Image preprocessing on L30: reads the image and transforms it into a proper numpy array.
  • Payload building on L39: transforms the array into the expected input shape (refer to the model metadata obtained in the previous notebook).
  • REST call on L54.
  • Response processing on L60: reshapes the response from a flat array to the expected output shape (refer to the model metadata obtained in the previous notebook), runs NMS to remove overlapping boxes, and draws the boxes from the results.

The notebook gives the example for one image, as well as the processing of several ones from the images folder. This allows for a small benchmark on processing/inference time.
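As an illustration, the REST call follows the KServe v2 inference protocol. Here is a hedged sketch; the host, model name, and tensor name/shape must match the metadata of your own model:

import numpy as np
import requests

url = "https://my-model-route/v2/models/yolo/infer"  # placeholder route and model name

img = np.random.rand(1, 3, 640, 640).astype(np.float32)  # stand-in for a preprocessed image

payload = {
    "inputs": [{
        "name": "images",                # must match the model metadata
        "shape": list(img.shape),
        "datatype": "FP32",
        "data": img.flatten().tolist(),  # the server expects a flat array
    }]
}

response = requests.post(url, json=payload)
output = response.json()["outputs"][0]  # reshape according to the output metadata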

"},{"location":"demos/yolov5-training-serving/yolov5-training-serving/#grpc-vs-rest","title":"gRPC vs REST","text":"

Here are a few elements to help you choose between the two available interfaces to query your model:

  • REST is easier to implement: it is a much better-known protocol for most people, and it involves a little bit less programming. There is no need to create a connection, instantiate objects,... So it's often easier to use.
  • If you want to query the model directly from outside OpenShift, REST is the interface exposed by default. You can expose gRPC too, but it's kind of difficult right now.
  • gRPC is wwwwwaaaayyyyy faster than REST. With the exact same model serving instance, as shown in the notebooks, inferences are about 30x faster. That is huge when you have scores of images to process.
"},{"location":"getting-started/opendatahub/","title":"What is Open Data Hub?","text":"

Open Data Hub (ODH) is an open source project that provides open source AI tools for running large and distributed AI workloads on the OpenShift Container Platform. Currently, the Open Data Hub project provides open source tools for distributed AI and Machine Learning (ML) workflows, a Jupyter Notebook development environment, and monitoring. The Open Data Hub project roadmap offers a view on the new tools and integrations the project developers are planning to add.

Included in the Open Data Hub core deployment are several open source components, which can be individually enabled. They include:

  • Jupyter Notebooks
  • ODH Dashboard
  • Data Science Pipelines
  • Model Mesh Serving

Want to know more?

"},{"location":"getting-started/openshift-data-science/","title":"OpenShift Data Science","text":""},{"location":"getting-started/openshift-data-science/#what-is-red-hat-openshift-data-science","title":"What is Red Hat OpenShift Data Science?","text":"

Red Hat\u00ae OpenShift\u00ae Data Science is a managed cloud service that IT operations teams can enable for data scientists and developers of intelligent applications. It provides a fully supported environment in which to rapidly develop, train, and test machine learning (ML) models in the public cloud before deploying in production.

Documentation for Managed RHODS

Documentation for Self-Managed RHODS

"},{"location":"getting-started/openshift-data-science/#accelerate-your-data-science","title":"Accelerate your data science","text":"

Red Hat OpenShift Data Science provides a fully managed cloud service environment on Red Hat OpenShift Service on AWS or Red Hat OpenShift Dedicated.

Red Hat OpenShift Data Science allows organizations to quickly build and deploy artificial intelligence (AI)/ML models by integrating open source tooling with commercial partner applications.

The ML models built in Red Hat OpenShift Data Science are easily portable to other platforms, allowing teams to deploy them in production, on containers, whether on-premise, at the edge or in the public cloud.

Want to know more?

"},{"location":"getting-started/openshift/","title":"OpenShift and AI","text":""},{"location":"getting-started/openshift/#what-is-red-hat-openshift","title":"What is Red Hat OpenShift?","text":"

Red Hat OpenShift brings together tested and trusted services to reduce the friction of developing, modernizing, deploying, running, and managing applications. Built on Kubernetes, it delivers a consistent experience across public cloud, on-premise, hybrid cloud, or edge architecture. Choose a self-managed or fully managed solution. No matter how you run it, OpenShift helps teams focus on the work that matters.

Want to know more?

"},{"location":"getting-started/openshift/#why-ai-on-openshift","title":"Why AI on OpenShift?","text":"

AI/ML on OpenShift accelerates AI/ML workflows and the delivery of AI-powered intelligent applications.

"},{"location":"getting-started/openshift/#mlops-with-red-hat-openshift","title":"MLOps with Red Hat OpenShift","text":"

Red Hat OpenShift includes key capabilities to enable machine learning operations (MLOps) in a consistent way across datacenters, public cloud computing, and edge computing.

By applying DevOps and GitOps principles, organizations automate and simplify the iterative process of integrating ML models into software development processes, production rollout, monitoring, retraining, and redeployment for continued prediction accuracy.

Learn more

"},{"location":"getting-started/openshift/#what-is-a-ml-lifecycle","title":"What is a ML lifecycle?","text":"

A multi-phase process to harness the power of large volumes and a variety of data, abundant compute, and open source machine learning tools to build intelligent applications.

At a high level, there are four steps in the lifecycle:

  1. Gather and prepare data to make sure the input data is complete, and of high quality
  2. Develop model, including training, testing, and selection of the model with the highest prediction accuracy
  3. Integrate models in application development process, and inferencing
  4. Model monitoring and management, to measure business performance and address potential production data drift

On this site, you will find recipes, patterns, demos for various AI/ML tools and applications used through those steps.

"},{"location":"getting-started/openshift/#why-use-containers-and-kubernetes-for-your-machine-learning-initiatives","title":"Why use containers and Kubernetes for your machine learning initiatives?","text":"

Containers and Kubernetes are key to accelerating the ML lifecycle as these technologies provide data scientists the much needed agility, flexibility, portability, and scalability to train, test, and deploy ML models.

Red Hat\u00ae OpenShift\u00ae is the industry's leading containers and Kubernetes hybrid cloud platform. It provides all these benefits, and through the integrated DevOps capabilities (e.g. OpenShift Pipelines, OpenShift GitOps, and Red Hat Quay) and integration with hardware accelerators, it enables better collaboration between data scientists and software developers, and accelerates the roll out of intelligent applications across hybrid cloud (data center, edge, and public clouds).

"},{"location":"getting-started/why-this-site/","title":"Why this site?","text":"

As data scientists and engineers, it's easy to find detailed documentation on the tools and libraries we use. But what about end-to-end data pipeline solutions that involve multiple products? Unfortunately, those resources can be harder to come by. Open source communities often don't have the resources to create and maintain them. But don't worry, that's where this website comes in!

We've created a one-stop-shop for data practitioners to find recipes, reusable patterns, and actionable demos for building AI/ML solutions on OpenShift. And the best part? It's a community-driven resource site! So, feel free to ask questions, make feature requests, file issues, and even submit PRs to help us improve the content. Together, we can make data pipeline solutions easier to find and implement.

"},{"location":"odh-rhods/configuration/","title":"ODH and RHODS Configuration","text":""},{"location":"odh-rhods/configuration/#standard-configuration","title":"Standard configuration","text":"

As an administrator of ODH/RHODS, you have access to different settings through the Settings menu on the dashboard:

"},{"location":"odh-rhods/configuration/#custom-notebook-images","title":"Custom notebook images","text":"

This is where you can import other notebook images. You will find resources on available custom images and learn how to create your own in the Custom Notebooks section.

To import a new image, follow these steps:

  • Click on import image.

  • Enter the full address of your container, set a name (this is what will appear in the launcher), and a description.

  • On the bottom part, add information regarding the software and the packages that are present in this image. This is purely informative.

  • Your image is now listed and enabled. You can hide it without removing it by simply disabling it.

  • It is now available in the launcher, as well as in the Data Science Projects.

"},{"location":"odh-rhods/configuration/#cluster-settings","title":"Cluster settings","text":"

In this panel, you can adjust:

  • The default size of the volumes created for new users.
  • Whether you want to stop idle notebooks and, if so, after how much time.

Note

This feature currently looks at running Jupyter kernels, like a Python notebook. If you are only using a Terminal, or another IDE window like VSCode or RStudio from the custom images, this activity is not detected and your Pod can be stopped without notice after the set delay.

  • Whether you allow usage data to be collected and reported.
  • Whether you want to add a toleration to the notebook pods so that they can be scheduled on tainted nodes. This feature is really useful if you want to dedicate specific worker nodes to running notebooks: tainting them prevents other workloads from running on them, while the toleration added here lets the notebook pods in (a sketch of the resulting pairing follows this list).
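
For illustration, here is a minimal sketch of the resulting pairing, assuming restrictedaccess was chosen as the toleration key in the dashboard (the taint is applied by you on the nodes; the toleration is added automatically to the notebook pods):

# Taint applied by you on the dedicated worker nodes\nspec:\n  taints:\n    - key: restrictedaccess\n      value: \"yes\"\n      effect: NoSchedule\n---\n# Toleration added automatically to the notebook pods\nspec:\n  tolerations:\n    - key: restrictedaccess\n      operator: Exists\n      effect: NoSchedule\n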

"},{"location":"odh-rhods/configuration/#user-management","title":"User management","text":"

In this panel, you can edit who has access to RHODS by defining the \"Data Science user groups\", and who has access to the Settings by defining the \"Data Science administrator groups\".
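
Behind the scenes, these settings end up in the OdhDashboardConfig resource described in the next section. A minimal sketch of the relevant fields, assuming the default group names:

spec:\n  groupsConfig:\n    adminGroups: rhods-admins\n    allowedGroups: rhods-users\n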

"},{"location":"odh-rhods/configuration/#advanced-configuration","title":"Advanced configuration","text":""},{"location":"odh-rhods/configuration/#dashboard-configuration","title":"Dashboard configuration","text":"

The main configuration of RHODS or ODH is done through a Custom Resource (CR) of type odhdashboardconfigs.opendatahub.io.

  • To get access to it, from your OpenShift console, navigate to Home->API Explorer, and filter for OdhDashboardConfig:

  • Click on OdhDashboardConfig and in the Instances tab, click on odh-dashboard-config:

  • You can now view and edit the YAML file to modify the configuration:

In the spec section, the following items are of interest:

  • dashboardConfig: The different toggles will allow you to activate/deactivate certain features. For example, you may want to hide Model Serving for your users or prevent them from importing custom images.
  • notebookSizes: This is where you can fully customize the sizes of the notebooks. You can modify the resources and add or remove sizes from the default configuration as needed (a sketch follows this list).
  • modelServerSizes: This setting operates on the same concept as the previous setting but for model servers.
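
For example, a minimal sketch of a custom entry in notebookSizes (the name and resource values are purely illustrative):

spec:\n  notebookSizes:\n    - name: Small\n      resources:\n        requests:\n          cpu: \"1\"\n          memory: 8Gi\n        limits:\n          cpu: \"2\"\n          memory: 8Gi\n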
"},{"location":"odh-rhods/configuration/#adding-a-custom-application","title":"Adding a custom application","text":"

Let's say you have installed another application in your cluster and want to make it available through the dashboard. That's easy! A tile is, in fact, represented by a custom resource (CR) of type OdhApplication.

In this example, we will add a tile to access the MLFlow UI (see the MLFlow installation instructions to test it).

  • The file mlflow-tile.yaml provides you with an example of how to create the tile.
  • Edit this file to set the route (the name of the Route CR) and routeNamespace parameters to where the UI is accessible. In this example, they are mlflow-server (route name) and mlflow (namespace). Apply this file to create the resource.
  • Wait 1-2 minutes for the change to take effect. Your tile is now available in the Explore view (bottom left):

  • However, it is not yet enabled. To enable this tile, click on it in the Explore view, then click the \"Enable\" button at the top of the description. You can also create a ConfigMap from the file cm-mlflow-enable.yaml.
  • Wait another 1-2 minutes, and your tile is now ready to use in the Enabled view:

"},{"location":"odh-rhods/custom-notebooks/","title":"Custom Notebooks","text":"

Custom notebook images are useful if you want to add libraries that you often use, or that you require at a specific version different from the one provided in the base images. They are also useful if you need OS packages or applications, which you cannot install on the fly in your running environment.

"},{"location":"odh-rhods/custom-notebooks/#image-source-and-pre-built-images","title":"Image source and Pre-built images","text":"

In the opendatahub-io-contrib/workbench-images repository, you will find the source code as well as pre-built images for a lot of use cases. A few of the available images are:

  • Base and CUDA-enabled images for different OS \"lines\": UBI8, UBI9, and CentOS Stream 9.
  • Jupyter images enhanced with:
    • specific libraries like OptaPy or Monai,
    • integrated applications like Spark.
  • Images providing other IDEs:
    • VSCode
    • RStudio

All those images are constantly and automatically updated and rebuilt with the latest patches and fixes, and new releases are made available regularly to provide new versions of the libraries or applications.

"},{"location":"odh-rhods/custom-notebooks/#building-your-own-images","title":"Building your own images","text":"

In the repository above, you will find many examples from the source code to help you understand how to create your own image. Here are a few rules, tips and examples to help you.

"},{"location":"odh-rhods/custom-notebooks/#rules","title":"Rules","text":"
  • On OpenShift, every container in a standard namespace (unless you modify the security settings) runs as a user with a random user id (uid) and the group id (gid) 0. Therefore, all the folders that you want to write in, and all the files you want to (temporarily) modify in your image, must be accessible by this user. The best practice is to set the ownership to 1001:0 (user \"default\", group \"0\").
  • If you can't or don't want to do that, another solution is to set permissions that work for any user, like 775.
  • When launching a notebook from Applications->Enabled, the \"personal\" volume of a user is mounted at /opt/app-root/src. This is not configurable, so make sure to build your images with this default location for the data that you want persisted.
"},{"location":"odh-rhods/custom-notebooks/#how-tos","title":"How-tos","text":""},{"location":"odh-rhods/custom-notebooks/#install-python-packages","title":"Install Python packages","text":"
  • Start from a base image of your choice. Normally it's already running under user 1001, so no need to change it.
  • Copy your Pipfile.lock or your requirements.txt
  • Install your packages

Example:

FROM BASE_IMAGE\n\n# Copying custom packages\nCOPY Pipfile.lock ./\n\n# Install packages and cleanup\n# (all commands are chained to minimize layer size)\nRUN echo \"Installing software and packages\" && \\\n    # Install Python packages \\\n    micropipenv install && \\\n    rm -f ./Pipfile.lock && \\\n    # Fix permissions to support pip in OpenShift environments \\\n    chmod -R g+w /opt/app-root/lib/python3.9/site-packages && \\\n    fix-permissions /opt/app-root -P\n\nWORKDIR /opt/app-root/src\n\nENTRYPOINT [\"start-notebook.sh\"]\n

In this example, the fix-permissions script (present in all standard images and custom images from the opendatahub-contrib repo) fixes any bad ownership or rights that may be present.

"},{"location":"odh-rhods/custom-notebooks/#install-an-os-package","title":"Install an OS package","text":"
  • If you have to install OS packages and Python packages, it's better to start with the OS.
  • In your Containerfile/Dockerfile, switch to user 0, install your package(s), then switch back to user 1001. Example:
USER 0\n\nRUN INSTALL_PKGS=\"java-11-openjdk java-11-openjdk-devel\" && \\\n    yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS && \\\n    yum -y clean all --enablerepo='*'\n\nUSER 1001\n
"},{"location":"odh-rhods/custom-notebooks/#tips-and-tricks","title":"Tips and tricks","text":""},{"location":"odh-rhods/custom-notebooks/#enabling-codeready-builder-crb-and-epel","title":"Enabling CodeReady Builder (CRB) and EPEL","text":"

CRB and EPEL are repositories providing packages absent from a standard RHEL or UBI installation. They are useful and required to be able to install specific software (RStudio, I'm looking at you...).

  • Enabling EPEL on UBI9-based images (on UBI9 images, CRB is now enabled by default):
RUN yum install -y https://download.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm\n
  • Enabling CRB and EPEL on Centos Stream 9-based images:
RUN yum install -y yum-utils && \\\n    yum-config-manager --enable crb && \\\n    yum install -y https://download.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm\n
"},{"location":"odh-rhods/custom-notebooks/#minimizing-image-size","title":"Minimizing image size","text":"

A container image uses a \"layered\" filesystem. Every COPY or RUN command in your file creates a new layer. Nothing is ever deleted: removing a file simply \"masks\" it in the next layer. Therefore you must be very careful when you create your Containerfile/Dockerfile.

  • If you start from an image that is constantly updated, like ubi9/python-39 from the Red Hat Catalog, don't do a yum update. This will only fetch new metadata, update a few files that may not have any impact, and get you a bigger image.
  • Rebuild your images from scratch often instead, and don't do a yum update on a previous version.
  • Group your RUN commands as much as you can, adding && \\ at the end of each line to chain your commands.
  • If you need to compile something while building an image, use the multi-stage builds approach: build the library or application in an intermediate container image, then copy the result to your final image. Otherwise, all the build artefacts will persist in your image (a sketch follows this list).
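
A minimal sketch of a multi-stage build (the base image is real, but my-heavy-package is a hypothetical package used for illustration):

FROM registry.access.redhat.com/ubi9/python-39 AS builder\n\nUSER 0\n\n# Build stage: compilers and build dependencies stay behind in this stage\n# (my-heavy-package is a hypothetical package used for illustration)\nRUN yum install -y gcc gcc-c++ && \\\n    pip wheel --wheel-dir /tmp/wheels my-heavy-package\n\nFROM registry.access.redhat.com/ubi9/python-39\n\n# Final stage: only the built wheels are copied over\nCOPY --from=builder /tmp/wheels /tmp/wheels\nRUN pip install --no-cache-dir /tmp/wheels/*.whl\n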
"},{"location":"odh-rhods/custom-runtime-triton/","title":"Deploying and using a Custom Serving Runtime in ODH/RHODS","text":"

Although these instructions were tested mostly using RHODS (Red Hat OpenShift Data Science), they apply to ODH (Open Data Hub) as well.

"},{"location":"odh-rhods/custom-runtime-triton/#before-you-start","title":"Before you start","text":"

This document will guide you through the broad steps necessary to deploy a custom Serving Runtime in order to serve a model using the Triton Runtime (NVIDIA Triton Inference Server).

While RHODS supports your ability to add your own runtime, it does not support the runtimes themselves. Therefore, it is up to you to configure, adjust and maintain your custom runtimes.

This document expects a bit of familiarity with RHODS.

The sources used to create this document are mostly:

  • https://github.com/kserve/modelmesh-serving/tree/main/config/runtimes
  • https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver
  • Official Red Hat OpenShift Data Science Documentation
"},{"location":"odh-rhods/custom-runtime-triton/#adding-the-custom-triton-runtime","title":"Adding the custom triton runtime","text":"
  1. Log in to OpenShift Data Science with a user who is part of the RHODS admin group (by default, cluster-admins and dedicated-admins are).
  2. Navigate to the Settings menu, then Serving Runtimes

  3. Click on the Add Serving Runtime button:

  4. Click on Start from scratch and in the window that opens up, paste the following YAML:

    # Copyright 2021 IBM Corporation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\napiVersion: serving.kserve.io/v1alpha1\n# kind: ClusterServingRuntime     ## changed by EG\nkind: ServingRuntime\nmetadata:\n  name: triton-23.05-20230804\n  labels:\n    name: triton-23.05-20230804\n  annotations:\n    maxLoadingConcurrency: \"2\"\n    openshift.io/display-name: \"Triton runtime 23.05 - added on 20230804 - with /dev/shm\"\nspec:\n  supportedModelFormats:\n    - name: keras\n      version: \"2\" # 2.6.0\n      autoSelect: true\n    - name: onnx\n      version: \"1\" # 1.5.3\n      autoSelect: true\n    - name: pytorch\n      version: \"1\" # 1.8.0a0+17f8c32\n      autoSelect: true\n    - name: tensorflow\n      version: \"1\" # 1.15.4\n      autoSelect: true\n    - name: tensorflow\n      version: \"2\" # 2.3.1\n      autoSelect: true\n    - name: tensorrt\n      version: \"7\" # 7.2.1\n      autoSelect: true\n\n  protocolVersions:\n    - grpc-v2\n  multiModel: true\n\n  grpcEndpoint: \"port:8085\"\n  grpcDataEndpoint: \"port:8001\"\n\n  volumes:\n    - name: shm\n      emptyDir:\n        medium: Memory\n        sizeLimit: 2Gi\n  containers:\n    - name: triton\n      # image: tritonserver-2:replace   ## changed by EG\n      image: nvcr.io/nvidia/tritonserver:23.05-py3\n      command: [/bin/sh]\n      args:\n        - -c\n        - 'mkdir -p /models/_triton_models;\n          chmod 777 /models/_triton_models;\n          exec tritonserver\n          \"--model-repository=/models/_triton_models\"\n          \"--model-control-mode=explicit\"\n          \"--strict-model-config=false\"\n          \"--strict-readiness=false\"\n          \"--allow-http=true\"\n          \"--allow-sagemaker=false\"\n          '\n      volumeMounts:\n        - name: shm\n          mountPath: /dev/shm\n      resources:\n        requests:\n          cpu: 500m\n          memory: 1Gi\n        limits:\n          cpu: \"5\"\n          memory: 1Gi\n      livenessProbe:\n        # the server is listening only on 127.0.0.1, so an httpGet probe sent\n        # from the kublet running on the node cannot connect to the server\n        # (not even with the Host header or host field)\n        # exec a curl call to have the request originate from localhost in the\n        # container\n        exec:\n          command:\n            - curl\n            - --fail\n            - --silent\n            - --show-error\n            - --max-time\n            - \"9\"\n            - http://localhost:8000/v2/health/live\n        initialDelaySeconds: 5\n        periodSeconds: 30\n        timeoutSeconds: 10\n  builtInAdapter:\n    serverType: triton\n    runtimeManagementPort: 8001\n    memBufferBytes: 134217728\n    modelLoadingTimeoutMillis: 90000\n

  5. You will likely want to update the name, as well as other parameters.
  6. Click Add
  7. Confirm the new Runtime is in the list, and re-order the list as needed (the order chosen here is the order in which users will see these choices).

"},{"location":"odh-rhods/custom-runtime-triton/#creating-a-project","title":"Creating a project","text":"
  • Create a new Data Science Project
  • In this example, the project is called fraud
"},{"location":"odh-rhods/custom-runtime-triton/#creating-a-model-server","title":"Creating a model server","text":"
  1. In your project, scroll down to the \"Models and Model Servers\" Section
  2. Click on Configure server

  3. Fill out the details:

  4. Click Configure

"},{"location":"odh-rhods/custom-runtime-triton/#deploying-a-model-into-it","title":"Deploying a model into it","text":"
  1. If you don't have any model files handy, you can grab a copy of this file and upload it to your Object Storage of choice.
  2. Click on Deploy Model

  3. Choose a model name and a framework:

  4. Then create a new data connection containing the details of where your model is stored in Object Storage:

  5. After a little while, you should see the following:

"},{"location":"odh-rhods/custom-runtime-triton/#validating-the-model","title":"Validating the model","text":"
  1. If you've used the model mentioned earlier in this document, you can run the following command from a Linux prompt:
    function val-model {\n    myhost=\"$1\"\n    echo \"validating host $myhost\"\n    time curl -X POST -k \"${myhost}\" -d '{\"inputs\": [{ \"name\": \"dense_input\", \"shape\": [1, 7], \"datatype\": \"FP32\", \"data\": [57.87785658389723,0.3111400080477545,1.9459399775518593,1.0,1.0,0.0,0.0]}]}' | jq\n}\n\nval-model \"https://fraud-model-fraud.apps.mycluster.openshiftapps.com/v2/models/fraud-model/infer\"\n
  2. Change the host to match the address for your model.
  3. You should see an output similar to:
    {\n  \"model_name\": \"fraud-model__isvc-c1529f9667\",\n  \"model_version\": \"1\",\n  \"outputs\": [\n    {\n      \"name\": \"dense_3\",\n      \"datatype\": \"FP32\",\n      \"shape\": [\n        1,\n        1\n      ],\n      \"data\": [\n        0.86280495\n      ]\n    }\n  ]\n}\n
"},{"location":"odh-rhods/custom-runtime-triton/#extra-considerations-for-disconnected-environments","title":"Extra considerations for Disconnected environments.","text":"

The YAML included in this document makes a reference to the following NVIDIA Triton image: nvcr.io/nvidia/tritonserver:23.05-py3

Ensure that this image is properly mirrored into the mirror registry.

Also, update the YAML definition as needed to point to the image address that matches the image registry.
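
For example, one way to do the mirroring with the oc CLI (the mirror registry address is a placeholder):

oc image mirror nvcr.io/nvidia/tritonserver:23.05-py3 \\\n    mirror-registry.example.com:8443/nvidia/tritonserver:23.05-py3\n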

"},{"location":"odh-rhods/custom-runtime-triton/#gitops-related-information","title":"GitOps related information","text":"

Each of the activities performed via the user interface creates a Kubernetes object inside your OpenShift cluster (the commands after the list below show a quick way to inspect them).

  • The addition of a new runtime creates a template in the redhat-ods-applications namespace.
  • Each model server is defined as a ServingRuntime
  • Each model is defined as an InferenceService
  • Each Data Connection is stored as a Secret
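
For example, using the namespaces from this document (redhat-ods-applications for the runtime template, fraud for the project):

# Runtime templates added through the dashboard\noc get templates -n redhat-ods-applications\n\n# Model servers, models and data connections of a project\noc get servingruntimes,inferenceservices,secrets -n fraud\n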
"},{"location":"odh-rhods/nvidia-gpus/","title":"Working with NVIDIA GPUs","text":""},{"location":"odh-rhods/nvidia-gpus/#using-nvidia-gpus-on-openshift","title":"Using NVIDIA GPUs on OpenShift","text":""},{"location":"odh-rhods/nvidia-gpus/#how-does-this-work","title":"How does this work?","text":"

Enabling NVIDIA GPUs on OpenShift is easy. Basically, it involves installing two different operators.

The Node Feature Discovery operator will \"discover\" your cards from a hardware perspective and appropriately label the relevant nodes with this information.

Then the NVIDIA GPU operator will install the necessary drivers and tooling to those nodes. It will also integrate into Kubernetes so that when a Pod requires GPU resources it will be scheduled on the right node, and make sure that the containers are \"injected\" with the right drivers, configurations and tools to properly use the GPU.

So from a user perspective, the only thing you have to worry about is asking for GPU resources when defining your pods, with something like:

spec:\n  containers:\n  - name: app\n    image: ...\n    resources:\n      requests:\n        memory: \"64Mi\"\n        cpu: \"250m\"\n        nvidia.com/gpu: 2\n      limits:\n        memory: \"128Mi\"\n        cpu: \"500m\"\n        nvidia.com/gpu: 2\n

But don't worry, OpenShift Data Science and Open Data Hub take care of this part for you when you launch notebooks, workbenches, model servers, or pipeline runtimes!

"},{"location":"odh-rhods/nvidia-gpus/#installation","title":"Installation","text":"

Here is the documentation you can follow:

  • OpenShift Data Science documentation
  • NVIDIA documentation (more detailed)
"},{"location":"odh-rhods/nvidia-gpus/#advanced-configuration","title":"Advanced configuration","text":""},{"location":"odh-rhods/nvidia-gpus/#working-with-taints","title":"Working with taints","text":"

In many cases, you will want to restrict access to GPUs, or be able to provide a choice between different types of GPUs: simply stating \"I want a GPU\" is not enough. Also, if you want to make sure that only the Pods requiring GPUs end up on GPU-enabled nodes (and not other Pods that just land there at random, because that's how Kubernetes works...), you're in the right place!

The only supported method at the moment to achieve this is to taint the nodes, then apply tolerations on the Pods depending on where you want them scheduled. Pay close attention when applying taints on Nodes, though: you may otherwise end up with the NVIDIA drivers not installed on those nodes...

In this case you must:

  • Apply the taints you need to your Nodes or MachineSets, for example:

    apiVersion: machine.openshift.io/v1beta1\nkind: MachineSet\nmetadata:\n  ...\nspec:\n  replicas: 1\n  selector:\n    ...\n  template:\n    ...\n    spec:\n      ...\n      taints:\n        - key: restrictedaccess\n          value: \"yes\"\n          effect: NoSchedule\n
  • Apply the relevant toleration to the NVIDIA Operator.

    • In the nvidia-gpu-operator namespace, get to the Installed Operator menu, open the NVIDIA GPU Operator settings, get to the ClusterPolicy tab, and edit the ClusterPolicy.

    • Edit the YAML, and add the toleration in the daemonset section:

      apiVersion: nvidia.com/v1\nkind: ClusterPolicy\nmetadata:\n  ...\n  name: gpu-cluster-policy\nspec:\n  vgpuDeviceManager: ...\n  migManager: ...\n  operator: ...\n  dcgm: ...\n  gfd: ...\n  dcgmExporter: ...\n  cdi: ...\n  driver: ...\n  devicePlugin: ...\n  mig: ...\n  sandboxDevicePlugin: ...\n  validator: ...\n  nodeStatusExporter: ...\n  daemonsets:\n    ...\n    tolerations:\n      - effect: NoSchedule\n        key: restrictedaccess\n        operator: Exists\n  sandboxWorkloads: ...\n  gds: ...\n  vgpuManager: ...\n  vfioManager: ...\n  toolkit: ...\n...\n

That's it, the operator is now able to deploy all the NVIDIA tooling on the nodes, even if they have the restrictedaccess taint. Repeat the procedure for any other taint you want to apply to your nodes.

Note

The first taint that you want to apply on GPU nodes is nvidia.com/gpu. This is the standard taint for which the NVIDIA Operator has a built-in toleration, so no need to add it. Likewise, Notebooks, Workbenches or other components from ODH/RHODS that request GPUs will already have this toleration in place. For other Pods you schedule yourself, or using Pipelines, you should make sure the toleration is also applied. Doing this will ensure that only Pods really requiring GPUs are scheduled on those nodes.

You can of course apply many different taints at the same time. You would simply have to apply the matching toleration on the NVIDIA GPU Operator, as well as on the Pods that need to run there.
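
For reference, here is what the toleration looks like on a Pod you schedule yourself, shown here as a minimal snippet for the standard nvidia.com/gpu taint mentioned in the note above:

spec:\n  tolerations:\n    - key: nvidia.com/gpu\n      operator: Exists\n      effect: NoSchedule\n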

"},{"location":"odh-rhods/nvidia-gpus/#time-slicing-gpu-sharing","title":"Time Slicing (GPU sharing)","text":"

Do you want to share GPUs between different Pods? Time Slicing is one of the solutions you can use!

The NVIDIA GPU Operator enables oversubscription of GPUs through a set of extended options for the NVIDIA Kubernetes Device Plugin. GPU time-slicing enables workloads that are scheduled on oversubscribed GPUs to interleave with one another.

This mechanism for enabling time-slicing of GPUs in Kubernetes enables a system administrator to define a set of replicas for a GPU, each of which can be handed out independently to a pod to run workloads on. Unlike Multi-Instance GPU (MIG), there is no memory or fault-isolation between replicas, but for some workloads this is better than not being able to share at all. Internally, GPU time-slicing is used to multiplex workloads from replicas of the same underlying GPU.

Full reference

"},{"location":"odh-rhods/nvidia-gpus/#configuration","title":"Configuration","text":"

This is a simple example of how to quickly set up Time Slicing on your OpenShift cluster. In this example, we have a MachineSet that can provide nodes with one T4 card each, which we want to be \"seen\" as 4 different cards so that multiple Pods requiring GPUs can be launched, even if we only have one node of this type.

  • Create the ConfigMap that will define how we want to slice our GPU:

    kind: ConfigMap\napiVersion: v1\nmetadata:\n  name: time-slicing-config\n  namespace: nvidia-gpu-operator\ndata:\n  tesla-t4: |-\n    version: v1\n    sharing:\n      timeSlicing:\n        resources:\n        - name: nvidia.com/gpu\n          replicas: 4\n

    Note

    • The ConfigMap has to be called time-slicing-config and must be created in the nvidia-gpu-operator namespace.
    • You can add many different resources with different configurations. You simply have to provide the corresponding Node label that has been applied by the operator, for example name: nvidia.com/mig-1g.5gb / replicas: 2 if you have a MIG configuration applied to a Node with an A100.
    • You can modify the value of replicas to present fewer/more GPUs. Be warned though: all the Pods on this node will share the GPU memory, with no reservation. The more slices you create, the greater the risk of OOM (out of memory) errors if your Pods are memory-hungry (even just one of them!).
  • Modify the ClusterPolicy called gpu-cluster-policy (accessible from the NVIDIA Operator view in the nvidia-gpu-operator namespace) to point to this configuration, and optionally add a default configuration (in case your nodes are not labelled correctly, see below):

    apiVersion: nvidia.com/v1\nkind: ClusterPolicy\nmetadata:\n  ...\n  name: gpu-cluster-policy\nspec:\n  ...\n  devicePlugin:\n    config:\n      default: tesla-t4\n      name: time-slicing-config\n  ...\n
  • Apply a label to your MachineSet for the specific slicing configuration you want to use on it:

    apiVersion: machine.openshift.io/v1beta1\nkind: MachineSet\nmetadata:\nspec:\n  template:\n    spec:\n      metadata:\n        labels:\n          nvidia.com/device-plugin.config: tesla-t4\n
"},{"location":"odh-rhods/nvidia-gpus/#autoscaler-and-gpus","title":"Autoscaler and GPUs","text":"

As they are expensive, GPUs are good candidates to put behind an Autoscaler. But there are some subtleties to take into account if you want everything to go smoothly.

"},{"location":"odh-rhods/nvidia-gpus/#configuration_1","title":"Configuration","text":"

Warning

For the autoscaler to work properly with GPUs, you have to set a specific label on the MachineSet. It will help the Autoscaler figure out (in fact, simulate) what it is allowed to do. This is especially true if you have different MachineSets that feature different types of GPUs.

As per the reference articles below, the GPU type you set through the label cannot be nvidia.com/gpu (as you will sometimes find in the standard documentation), because it's not a valid label value. Therefore, for autoscaling purposes only, you should give the type a specific name made of letters, numbers and dashes only, like Tesla-T4-SHARED in this example.

  • Edit the MachineSet configuration to add the label that the Autoscaler will expect:

    apiVersion: machine.openshift.io/v1beta1\nkind: MachineSet\n...\nspec:\n  ...\n  template:\n    ...\n    spec:\n      metadata:\n        labels:\n          cluster-api/accelerator: Tesla-T4-SHARED\n
  • Create your ClusterAutoscaler configuration (example):

    apiVersion: autoscaling.openshift.io/v1\nkind: ClusterAutoscaler\nmetadata:\n  name: \"default\"\nspec:\n  logVerbosity: 4\n  maxNodeProvisionTime: 15m\n  podPriorityThreshold: -10\n  resourceLimits:\n    gpus:\n      - type: Tesla-T4-SHARED\n        min: 0\n        max: 8\n  scaleDown:\n    enabled: true\n    delayAfterAdd: 20m\n    delayAfterDelete: 5m\n    delayAfterFailure: 30s\n    unneededTime: 5m\n

    Note

The delayAfterAdd parameter has to be set higher than the standard value, as the NVIDIA tooling can take a long time to deploy (10-15 minutes).

  • Create the MachineSet Autoscaler:

    apiVersion: autoscaling.openshift.io/v1beta1\nkind: MachineAutoscaler\nmetadata:\n  name: machineset-name\n  namespace: \"openshift-machine-api\"\nspec:\n  minReplicas: 1\n  maxReplicas: 2\n  scaleTargetRef:\n    apiVersion: machine.openshift.io/v1beta1\n    kind: MachineSet\n    name: machineset-name\n
"},{"location":"odh-rhods/nvidia-gpus/#scaling-to-zero","title":"Scaling to zero","text":"

As GPUs are expensive resources, you may want to scale down your MachineSet to zero to save on resources. This will however require some more configuration than just setting the minimum size to zero...

First, some background to help you understand and enable you to solve issues that may arise. You can skip the whole explanation, but it's worth it, so please bear with me.

When you request resources that aren't available, the Autoscaler looks at all the available MachineAutoscalers, with their corresponding MachineSets. But how does it know which one to use? It will first simulate the provisioning of a Node from each MachineSet, and see if it would fit the request. Of course, if there is already at least one Node available from a given MachineSet, the simulation is bypassed as the Autoscaler already knows what it will get. If several MachineSets fit, the default and only \"Expander\" available for now in OpenShift to make the decision is random. So it will simply pick one totally at random.

That's all well and good, but for GPUs, if you don't start the Node for real, the Autoscaler doesn't know what's in it! So that's where we have to help it with a small hint.

  • Set this annotation manually if it's not there. It will stick after the first scale up, along with some other annotations the Autoscaler will add thanks to its newly discovered knowledge.

    apiVersion: machine.openshift.io/v1beta1\nkind: MachineSet\nmetadata:\n  annotations:\n    machine.openshift.io/GPU: \"1\"\n

Now to the other issue that may happen if you are in an environment with multiple Availability Zones (AZ)...

Although you can set the AZ when you define a MachineSet and have all the Nodes properly spawned in it, the Autoscaler simulator is not that clever. It will simply pick a Zone at random. If this is not the one where you want/need your Pod to run, this will be a problem...

For example, you may already have a Persistent Volume (PV) attached to your Notebook. If your storage does not support AZ-spanning (like AWS EBS volumes), your PV is bound to a specific AZ. If the simulator creates a virtual Node in a different AZ, there will be a mismatch, your Pod would not be schedulable on this Node, and the Autoscaler will (wrongly) conclude that it cannot use this MachineSet for a scale up!

Here again, we have to give the Autoscaler a hint as to what the Node will look like in the end.

  • In your MachineSet, in the labels that will be added to the node, add information regarding the topology of the Node, as well as for the volumes that may be attached to it. For example:

    apiVersion: machine.openshift.io/v1beta1\nkind: MachineSet\nmetadata:\nspec:\n  template:\n    spec:\n      metadata:\n        labels:\n          ...\n          topology.kubernetes.io/zone: us-east-2a\n          topology.ebs.csi.aws.com/zone: us-east-2a\n

With this, the simulated Node will be at the right place, and the Autoscaler will consider the MachineSet valid for scale up!

Reference material:

  • https://cloud.redhat.com/blog/autoscaling-nvidia-gpus-on-red-hat-openshift
  • https://access.redhat.com/solutions/6055181
  • https://bugzilla.redhat.com/show_bug.cgi?id=1943194
"},{"location":"odh-rhods/openshift-group-management/","title":"OpenShift Group Management","text":"

In the Red Hat OpenShift Documentation, there are instructions on how to configure a specific list of RHODS Administrators and RHODS Users.

However, if the list of users keeps changing, the membership of the group called rhods-users will have to be updated frequently. By default, in OpenShift, only OpenShift admins can edit group membership. Being a RHODS Admin does not confer those admin privileges, and so it would fall to the OpenShift admin to administer that list.

The instructions in this page will show how the OpenShift Admin can create these groups in such a way that any member of the group rhods-admins can edit the users listed in the group rhods-users. This makes the RHODS Admins more self-sufficient, without giving them unneeded access.

For expediency, the instructions use the oc CLI, but the same can be achieved using the OpenShift Web Console. We will assume that the user setting this up has admin privileges on the cluster.

"},{"location":"odh-rhods/openshift-group-management/#creating-the-groups","title":"Creating the groups","text":"

Here, we will create the groups mentioned above. Note that you can alter those names if you want, but you will then need to carry the same alterations throughout the instructions.

  1. To create the groups:
    oc adm groups new rhods-users\noc adm groups new rhods-admins\n
  2. The above may complain about the group(s) already existing.
  3. To confirm both groups exist:
    oc get groups | grep rhods\n
  4. That should return:
    bash-4.4 ~ $ oc get groups | grep rhods\nrhods-admins\nrhods-users\n
  5. Both groups now exist
"},{"location":"odh-rhods/openshift-group-management/#creating-clusterrole-and-clusterrolebinding","title":"Creating ClusterRole and ClusterRoleBinding","text":"
  1. This will create a Cluster Role and a Cluster Role Binding:
    oc apply -f - <<EOF\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n  name: update-rhods-users\nrules:\n  - apiGroups: [\"user.openshift.io\"]\n    resources: [\"groups\"]\n    resourceNames: [\"rhods-users\"]\n    verbs: [\"update\", \"patch\", \"get\"]\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: rhods-admin-can-update-rhods-users\nsubjects:\n  - kind: Group\n    apiGroup: rbac.authorization.k8s.io\n    name: rhods-admins\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: update-rhods-users\nEOF\n
  2. To confirm they were both successfully created, run:
    oc get ClusterRole,ClusterRoleBinding  | grep 'update\\-rhods'\n
  3. You should see:
    bash-4.4 ~ $ oc get ClusterRole,ClusterRoleBinding  | grep 'update\\-rhods'\nclusterrole.rbac.authorization.k8s.io/update-rhods-users\nclusterrolebinding.rbac.authorization.k8s.io/rhods-admin-can-update-rhods-users\n
  4. You are pretty much done. You now just need to validate things worked.
"},{"location":"odh-rhods/openshift-group-management/#add-some-users-as-rhods-admins","title":"Add some users as rhods-admins","text":"

To confirm this works, add a user to the rhods-admins group. In my example, I'll add user1:
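
This can be done with a single command:

oc adm groups add-users rhods-admins user1\n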

"},{"location":"odh-rhods/openshift-group-management/#capture-the-url-needed-to-edit-the-rhods-users-group","title":"Capture the URL needed to edit the rhods-users group","text":"

Since people who are not cluster admins won't be able to browse the list of groups, capture the URL that allows them to control the membership of rhods-users.

It should look similar to:

https://console-openshift-console.apps.<thecluster>/k8s/cluster/user.openshift.io~v1~Group/rhods-users

"},{"location":"odh-rhods/openshift-group-management/#ensure-that-rhods-admins-are-now-able-to-edit-rhods-users","title":"Ensure that rhods-admins are now able to edit rhods-users","text":"

Ask someone in the rhods-admins group to confirm that it works for them. (Remember to provide them with the URL to do so).

They should be able to do so and successfully save their changes, as shown below:

"},{"location":"patterns/bucket-notifications/bucket-notifications/","title":"Bucket Notifications","text":""},{"location":"patterns/bucket-notifications/bucket-notifications/#description","title":"Description","text":"

The Rados Gateway (RGW) component of Ceph provides Object Storage through an S3-compatible API on all Ceph implementations: OpenShift Data Foundation and its upstream version Rook-Ceph, Red Hat Ceph Storage, plain Ceph, etc.

Bucket notifications provide a mechanism for sending information from the RGW when certain events happen on a bucket. Currently, notifications can be sent to HTTP, AMQP 0.9.1 and Kafka endpoints.

From a data engineering point of view, bucket notifications allow you to create an event-driven architecture, where messages (instead of simple log entries) can be sent to various processing components or event buses whenever something happens on the object storage: object creation, deletion, and so on, with many fine-grained settings available.

"},{"location":"patterns/bucket-notifications/bucket-notifications/#use-cases","title":"Use cases","text":""},{"location":"patterns/bucket-notifications/bucket-notifications/#application-taking-actions-on-the-objects","title":"Application taking actions on the objects","text":"

As part of an event-driven architecture, this pattern can be used to trigger an application to perform an action following the storage event. An example could be the automated processing of a new image that has just been uploaded to the object storage (analysis, resizing,...). Paired with Serverless functions, this becomes a pretty efficient architecture compared to having an application constantly monitoring or polling the storage, or to implementing this triggering process in the application interacting with the storage. This loosely-coupled architecture also gives much more agility for updates, technology evolution, and so on.

"},{"location":"patterns/bucket-notifications/bucket-notifications/#external-monitoring-systems","title":"External monitoring systems","text":"

The events sent by the RGW are simple messages containing all the metadata relevant to the event and the object. So it can be an excellent source of information for a monitoring system, for example if you want to keep a trace or send an alert whenever a file of a specific type, or with a specific name, is uploaded to or deleted from the storage.

"},{"location":"patterns/bucket-notifications/bucket-notifications/#implementations-examples","title":"Implementations examples","text":"

This pattern is implemented in the XRay pipeline demo

"},{"location":"patterns/bucket-notifications/bucket-notifications/#how-does-it-work","title":"How does it work?","text":""},{"location":"patterns/bucket-notifications/bucket-notifications/#characteristics","title":"Characteristics","text":"
  • Notifications are sent directly from the RGW on which the event happened to an external endpoint.
  • Pluggable endpoint architecture:
    • HTTP/S
    • AMQP 0.9.1
    • Kafka
    • Knative
"},{"location":"patterns/bucket-notifications/bucket-notifications/#data-model","title":"Data Model","text":"
  • Topics contain the definition of a specific endpoint in \"push mode\"
  • Notifications tie topics to buckets, and may also include a filter definition on the events
"},{"location":"patterns/bucket-notifications/bucket-notifications/#configuration","title":"Configuration","text":"

This configuration shows how to create a notification that will send a message (event) to a Kafka topic when a new object is created in a bucket.

"},{"location":"patterns/bucket-notifications/bucket-notifications/#requirements","title":"Requirements","text":"
  • Access to a Ceph/ODF/RHCS installation with the RGW deployed.
  • Endpoint address (URL) for the RGW.
  • Credentials to connect to the RGW:
    • AWS_ACCESS_KEY_ID
    • AWS_SECRET_ACCESS_KEY

Note

As Ceph implements an S3-compatible API to access Object Storage, the standard naming for variables or procedures used with S3 was retained to stay coherent with examples, demos, or documentation related to S3. Hence the AWS prefix in the previous variables.

"},{"location":"patterns/bucket-notifications/bucket-notifications/#topic-creation","title":"Topic Creation","text":"

A topic is the definition of a specific endpoint. It must be created first.

"},{"location":"patterns/bucket-notifications/bucket-notifications/#method-1-raw-configuration","title":"Method 1: \"RAW\" configuration","text":"

As everything is done through the RGW API, you can query it directly. To be fair, this method is almost never used (unless there is no SDK or S3 tool for your environment) but gives a good understanding of the process.

Example for a Kafka Endpoint:

POST\nAction=CreateTopic\n&Name=my-topic\n&push-endpoint=kafka://my-kafka-broker.my-net:9999\n&Attributes.entry.1.key=verify-ssl\n&Attributes.entry.1.value=true\n&Attributes.entry.2.key=kafka-ack-level\n&Attributes.entry.2.value=broker\n&Attributes.entry.3.key=use-ssl\n&Attributes.entry.3.value=true\n&Attributes.entry.4.key=OpaqueData\n&Attributes.entry.4.value=https://s3-proxy.my-zone.my-net\n

Note

The authentication part is not detailed here as the mechanism is pretty convoluted, but it is directly implemented in most API development tools, like Postman.

The full reference for the REST API for bucket notifications is available here.

"},{"location":"patterns/bucket-notifications/bucket-notifications/#method-2-python-aws-sdk","title":"Method 2: Python + AWS SDK","text":"

As the creator of the S3 API, AWS provides SDKs for the main languages to interact with it. Thanks to this compatibility, you can use those SDKs to interact with Ceph in the same way. For Python, the library to interact with AWS services is called boto3.

Example for a Kafka Endpoint:

import boto3\nimport botocore\n\nsns = boto3.client('sns',\n                endpoint_url = endpoint_url,\n                aws_access_key_id = aws_access_key_id,\n                aws_secret_access_key = aws_secret_access_key,\n                region_name='default',\n                config=botocore.client.Config(signature_version = 's3'))\n\nattributes = {}\nattributes['push-endpoint'] = 'kafka://my-cluster-kafka-bootstrap:9092'\nattributes['kafka-ack-level'] = 'broker'\n\ntopic_arn = sns.create_topic(Name='my-topic', Attributes=attributes)['TopicArn']\n
"},{"location":"patterns/bucket-notifications/bucket-notifications/#notification-configuration","title":"Notification Configuration","text":"

The notification configuration will \"tie\" a bucket with a topic.

"},{"location":"patterns/bucket-notifications/bucket-notifications/#method-1-raw-configuration_1","title":"Method 1: \"RAW\" configuration","text":"

As before, you can directly query the RGW REST API. This is done with an XML-formatted payload sent with a PUT command.

Example for a Kafka Endpoint:

PUT /my-bucket?notification HTTP/1.1\n\n<NotificationConfiguration xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">\n    <TopicConfiguration>\n        <Id>my-notification</Id>\n        <Topic>my-topic</Topic>\n        <Event>s3:ObjectCreated:*</Event>\n        <Event>s3:ObjectRemoved:DeleteMarkerCreated</Event>\n    </TopicConfiguration>\n    <TopicConfiguration>\n...\n    </TopicConfiguration>\n</NotificationConfiguration>\n

Again, the full reference for the REST API for bucket notifications is available here.

"},{"location":"patterns/bucket-notifications/bucket-notifications/#method-2-python-aws-sdk_1","title":"Method 2: Python + AWS SDK","text":"

Example for a Kafka Endpoint:

import boto3\nimport botocore\n\ns3 = boto3.client('s3',\n                endpoint_url = endpoint_url,\n                aws_access_key_id = aws_access_key_id,\n                aws_secret_access_key = aws_secret_access_key,\n                region_name = 'default',\n                config=botocore.client.Config(signature_version = 's3'))\n\nbucket_notifications_configuration = {\n            \"TopicConfigurations\": [\n                {\n                    \"Id\": 'my-id',\n                    \"TopicArn\": 'arn:aws:sns:s3a::my-topic',\n                    \"Events\": [\"s3:ObjectCreated:*\"]\n                }\n            ]\n        }\n\ns3.put_bucket_notification_configuration(Bucket = bucket_name,\n        NotificationConfiguration=bucket_notifications_configuration)\n
"},{"location":"patterns/bucket-notifications/bucket-notifications/#filters","title":"Filters","text":"

Although a notification is specific to a bucket (and you can have multiple configurations on one bucket), you may not want it to apply to all the objects in the bucket. For example, you may want to send an event when an image is uploaded, but do nothing if it's another type of file. You can do this with filters! And not only on the filename, but also on the tags associated with it in its metadata.

Filter examples, on keys or tags:

<Filter>\n    <S3Key>\n        <FilterRule>\n         <Name>regex</Name>\n         <Value>[0-9a-zA-Z\._-]+\.(png|gif|jp[e]?g)</Value>\n        </FilterRule>\n    </S3Key>\n    <S3Tags>\n        <FilterRule>\n            <Name>Project</Name><Value>Blue</Value>\n        </FilterRule>\n        <FilterRule>\n            <Name>Classification</Name><Value>Confidential</Value>\n        </FilterRule>\n    </S3Tags>\n</Filter>\n
"},{"location":"patterns/bucket-notifications/bucket-notifications/#events","title":"Events","text":"

The notifications sent to the endpoints are called events, and they are structured like this:

Event example:

{\"Records\":[\n    {\n        \"eventVersion\":\"2.1\",\n        \"eventSource\":\"ceph:s3\",\n        \"awsRegion\":\"us-east-1\",\n        \"eventTime\":\"2019-11-22T13:47:35.124724Z\",\n        \"eventName\":\"ObjectCreated:Put\",\n        \"userIdentity\":{\n            \"principalId\":\"tester\"\n        },\n        \"requestParameters\":{\n            \"sourceIPAddress\":\"\"\n        },\n        \"responseElements\":{\n            \"x-amz-request-id\":\"503a4c37-85eb-47cd-8681-2817e80b4281.5330.903595\",\n            \"x-amz-id-2\":\"14d2-zone1-zonegroup1\"\n        },\n        \"s3\":{\n            \"s3SchemaVersion\":\"1.0\",\n            \"configurationId\":\"mynotif1\",\n            \"bucket\":{\n                \"name\":\"mybucket1\",\n                \"ownerIdentity\":{\n                    \"principalId\":\"tester\"\n                },\n                \"arn\":\"arn:aws:s3:us-east-1::mybucket1\",\n                \"id\":\"503a4c37-85eb-47cd-8681-2817e80b4281.5332.38\"\n            },\n            \"object\":{\n                \"key\":\"myimage1.jpg\",\n                \"size\":\"1024\",\n                \"eTag\":\"37b51d194a7513e45b56f6524f2d51f2\",\n                \"versionId\":\"\",\n                \"sequencer\": \"F7E6D75DC742D108\",\n                \"metadata\":[],\n                \"tags\":[]\n            }\n        },\n        \"eventId\":\"\",\n        \"opaqueData\":\"me@example.com\"\n    }\n]}\n
"},{"location":"patterns/kafka/kafka-to-object-storage/kafka-to-object-storage/","title":"Kafka to Object Storage","text":""},{"location":"patterns/kafka/kafka-to-object-storage/kafka-to-object-storage/#description","title":"Description","text":"

Kafka is a distributed event stream processing system which is great for storing hot, relevant data. Depending on the retention policy, data can be kept around for a while, but Kafka is not suitable for long-term storage. This is why we need a mechanism to move data from Kafka to object storage.

"},{"location":"patterns/kafka/kafka-to-object-storage/kafka-to-object-storage/#use-cases","title":"Use Cases","text":""},{"location":"patterns/kafka/kafka-to-object-storage/kafka-to-object-storage/#long-term-retention-of-data","title":"Long term retention of data","text":"

As Kafka is not really suited for long-term retention of data, persisting it inside an object store will allow you to keep your data for further use, backup or archival purposes. Depending on the solution you use, you can also transform or format your data while storing it, which will ease further retrieval.

"},{"location":"patterns/kafka/kafka-to-object-storage/kafka-to-object-storage/#move-data-to-central-data-lake","title":"Move data to Central Data Lake","text":"

A production Kafka environment may not be the best place to run analytics or do model training. Transferring or copying the data to a central data lake will allow you to decouple those two aspects (production and analytics), bringing peace of mind and further capabilities to the data consumers.

"},{"location":"patterns/kafka/kafka-to-object-storage/kafka-to-object-storage/#implementations-examples","title":"Implementations examples","text":"

This pattern is implemented in the Smart City demo

"},{"location":"patterns/kafka/kafka-to-object-storage/kafka-to-object-storage/#configuration-using-secor","title":"Configuration Using Secor","text":"

This pattern implements the Secor Kafka Consumer. It can be used to consume messages from a Kafka topic and store them in S3-compatible object buckets.

Secor is a service persisting Kafka logs to Amazon S3, Google Cloud Storage, Microsoft Azure Blob Storage and Openstack Swift. Its key features are: strong consistency, fault tolerance, load distribution, horizontal scalability, output partitioning, configurable upload policies, monitoring, customizability, event transformation.

"},{"location":"patterns/kafka/kafka-to-object-storage/kafka-to-object-storage/#prerequisites","title":"Prerequisites","text":""},{"location":"patterns/kafka/kafka-to-object-storage/kafka-to-object-storage/#bucket","title":"Bucket","text":"

An S3-compatible bucket, with its access key and secret key.

"},{"location":"patterns/kafka/kafka-to-object-storage/kafka-to-object-storage/#zookeeper-entrance","title":"ZooKeeper Entrance","text":"

Secor needs to connect directly to ZooKeeper to keep track of some data. If you have a secured installation of ZooKeeper, as when you deploy Kafka using Strimzi or AMQ Streams, you need to deploy a ZooKeeper Entrance, a special proxy to ZooKeeper that will allow this direct connection.

Note

The deployment file is based on a Strimzi or AMQ Streams deployment of Kafka. If your configuration is different, you may have to adapt some of the parameters.

Deployment:

  • In the file deployment/zookeeper-entrance.yaml, replace:
    • the occurrences of 'NAMESPACE' by the namespace where the Kafka cluster is.
    • the occurrences of 'YOUR_KAFKA' by the name of your Kafka cluster.
    • the parameters YOUR_KEY, YOUR_SECRET, YOUR_ENDPOINT, YOUR_BUCKET with the values corresponding to the bucket where you want to store the data.
  • Apply the modified file to deploy ZooKeeper Entrance.
"},{"location":"patterns/kafka/kafka-to-object-storage/kafka-to-object-storage/#deployment","title":"Deployment","text":""},{"location":"patterns/kafka/kafka-to-object-storage/kafka-to-object-storage/#secor","title":"Secor","text":"
  • In the file deployment/secor.yaml, replace:
    • the occurrences of 'NAMESPACE' by the namespace where the Kafka cluster is.
    • the occurrences of 'YOUR_KAFKA' by the name of your Kafka cluster.
    • adjust all the other Secor parameters or add others depending on the processing you want to do with the data: output format, aggregation,... Full instructions are available here.
  • Apply the modified file to deploy Secor.
"},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/","title":"Kafka to Serverless","text":""},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#description","title":"Description","text":"

This pattern describes how to use AMQ Streams (Kafka) as an event source for OpenShift Serverless (Knative). You will learn how to implement Knative Eventing so that it can trigger a Knative Serving function when a message is posted to a Kafka Topic (Event).

"},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#knative-openshift-serverless","title":"Knative & OpenShift Serverless","text":"

Knative is an open source project that helps to deploy and manage modern serverless workloads on Kubernetes. Red Hat OpenShift Serverless is an enterprise-grade serverless offering, based on Knative, that provides developers with a complete set of tools to build, deploy, and manage serverless applications on OpenShift Container Platform.

Knative consists of 3 primary components:

  • Build - A flexible approach to building source code into containers.
  • Serving - Enables rapid deployment and automatic scaling of containers through a request-driven model for serving workloads based on demand.
  • Eventing - An infrastructure for consuming and producing events to stimulate applications. Applications can be triggered by a variety of sources, such as events from your own applications, cloud services from multiple providers, Software-as-a-Service (SaaS) systems, and Red Hat AMQ streams.
"},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#eda-event-driven-architecture","title":"EDA (Event Driven Architecture)","text":"

Event-Driven Architecture (EDA) is a way of designing applications and services to respond to real-time information based on the sending and receiving of information about individual events. EDA uses events to trigger and communicate between decoupled services and is common in modern applications built with microservices.

"},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#use-cases","title":"Use Cases","text":"
  • Develop an event-driven architecture with serverless applications.
  • Serverless Business logic processing that is capable of automated scale-up and scale-down to zero.
"},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#implementations-examples","title":"Implementations examples","text":"

This pattern is implemented in the XRay Pipeline Demo

"},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#deployment-example","title":"Deployment example","text":""},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#requirements","title":"Requirements","text":"
  • Red Hat OpenShift Container Platform
  • Red Hat AMQ Streams or Strimzi: the operator should be installed and a Kafka cluster must be created
  • Red Hat OpenShift Serverless: the operator must be installed
"},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#part-1-set-up-knative","title":"Part 1: Set up KNative","text":"

Once the Red Hat OpenShift Serverless operator has been installed, we can create KnativeServing, KnativeEventing and KnativeKafka instances.

"},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#step-1-create-required-knative-instances","title":"Step 1: Create required Knative instances","text":"
  • From the deployment folder, apply the YAML file 01_knative_serving_eventing_kafka_setup.yaml to create knative instances
oc create -f 01_knative_serving_eventing_kafka_setup.yaml\n

Note

Those instances can also be deployed through the OpenShift Console if you prefer to use a UI. In this case, follow the Serverless deployment instructions (this section and the following ones).

"},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#step-2-verify-knative-instances","title":"Step 2: Verify Knative Instances","text":"
oc get po -n knative-serving\noc get po -n knative-eventing\n
  • The pod with the prefix kafka-controller-manager represents the Knative Kafka Event Source.
"},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#part-2-knative-serving","title":"Part 2: Knative Serving","text":"

Knative Serving is your serverless business logic that you would like to execute based on the event generated by Kafka.

For the purpose of this example, we are using a simple greeter service here. Depending on your use case, you will replace that with your own business logic.

"},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#step-1-create-knative-serving","title":"Step 1: Create Knative Serving","text":"
  • From the deployment folder, in the YAML file 02_knative_service.yaml, replace the placeholder YOUR_NAMESPACE with your namespace, and apply the file to create knative serving.
oc create -f 02_knative_service.yaml\n
"},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#step-2-verify-knative-serving","title":"Step 2: Verify Knative Serving","text":"
oc get ksvc\n
"},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#part-3-knative-eventing","title":"Part 3: Knative Eventing","text":"

Knative Eventing enables developers to use an event-driven architecture with serverless applications. An event-driven architecture is based on the concept of decoupled relationships between event producers that create events, and event sinks, or consumers, that receive them.

"},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#step-1-kafka-topic","title":"Step 1: Kafka topic","text":"
  • Create a Kafka topic where the events will be sent. In this example, the topic will be example_topic (a sketch of a corresponding KafkaTopic resource follows).
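
If you deployed Kafka with Strimzi or AMQ Streams, this is a minimal sketch of such a KafkaTopic (YOUR_NAMESPACE and YOUR_KAFKA are placeholders, following the conventions used earlier on this site):

apiVersion: kafka.strimzi.io/v1beta2\nkind: KafkaTopic\nmetadata:\n  name: example-topic\n  namespace: YOUR_NAMESPACE\n  labels:\n    strimzi.io/cluster: YOUR_KAFKA\nspec:\n  # Kubernetes names cannot contain underscores, so the actual\n  # topic name is set explicitly here\n  topicName: example_topic\n  partitions: 1\n  replicas: 1\n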
"},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#step-2-create-knative-eventing","title":"Step 2: Create Knative Eventing","text":"
  • To create the Knative Eventing part, we need to create a Kafka Event Source. Before applying the YAML file 03_knative_kafka_source.yaml, make sure to edit namespace and bootstrapServers to match your Kafka cluster. Also make sure to use the correct Knative Service (serving) that you created in the previous step (greeter in this example). A sketch of such a source is shown after the command below.
oc create -f 03_knative_kafka_source.yaml\n
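
For reference, a minimal sketch of what the KafkaSource can look like (names, namespace, and bootstrap address are placeholders matching this example):

apiVersion: sources.knative.dev/v1beta1\nkind: KafkaSource\nmetadata:\n  name: kafka-source\n  namespace: YOUR_NAMESPACE\nspec:\n  bootstrapServers:\n    - my-cluster-kafka-bootstrap.YOUR_NAMESPACE.svc:9092\n  topics:\n    - example_topic\n  sink:\n    ref:\n      apiVersion: serving.knative.dev/v1\n      kind: Service\n      name: greeter\n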
"},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#step-3-verify-knative-eventing","title":"Step 3: Verify Knative Eventing","text":"
oc get kafkasource\n

At this point, as soon as new messages are received in the Kafka topic example_topic, Knative Eventing will trigger the Knative Service greeter to execute the business logic, giving you an event-driven serverless application running on OpenShift Container Platform.

"},{"location":"patterns/kafka/kafka-to-serverless/kafka-to-serverless/#part-4-testing","title":"Part 4: Testing","text":"
  • Optional: to view the logs of Knative Serving you can install stern to tail them from the CLI, or use the OpenShift Web Console.
oc get ksvc\nstern --selector=serving.knative.dev/service=greeter -c user-container\n
  • Launch a temporary Kafka CLI (kafkacat) in a new terminal
oc run kafkacat -i -t --image debezium/tooling --restart=Never\n
  • From the kafkacat container shell, generate Kafka messages in the topic example_topic of your Kafka cluster. Here we are generating Kafka messages that follow the CloudEvents (CE) specification.
for i in {1..50} ; do sleep 10 ; \\\necho '{\"message\":\"Hello Red Hat\"}' | kafkacat -P -b core-kafka-kafka-bootstrap -t example_topic \\\n  -H \"content-type=application/json\" \\\n  -H \"ce-id=CE-001\" \\\n  -H \"ce-source=/kafkamessage\"\\\n  -H \"ce-type=dev.knative.kafka.event\" \\\n  -H \"ce-specversion=1.0\" \\\n  -H \"ce-time=2018-04-05T03:56:24Z\"\ndone ;\n

The above command will generate 50 Kafka messages, one every 10 seconds. Knative Eventing will pick up the messages and invoke the greeter Knative Service, which you can verify from the logs of Knative Serving.

"},{"location":"patterns/starproxy/starproxy/","title":"Starburst/Trino Proxy","text":""},{"location":"patterns/starproxy/starproxy/#what-it-is","title":"What it is","text":"

Starproxy is a fully HTTP-compliant proxy designed to sit between clients and a Trino/Starburst cluster. The motivation for developing a solution like this is laid out in the prior art below:

  • Facebook Engineering Blog - Static Analysis
  • Strata Conference Talk
  • Uber Case Study - Prism

The most attractive items to us are probably:

  • Enabling host based security
  • Detecting \"bad\" queries and blocking/deprioritizing them with custom rules
  • Load balancing across regions
"},{"location":"patterns/starproxy/starproxy/#how-it-works","title":"How it works","text":"

First and foremost, starproxy is an HTTP proxy implemented in Rust using a combination of axum and hyper. It processes each request as follows:

  1. Parse the query AST, then check a variety of rules:

    • inbound CIDR rule checking
    • checking for predicates in queries
    • identifying select * queries with no limit, among other rules
  2. If rules are violated they can be associated with actions, like tagging the query as low priority. This is done by modifying the request headers and injecting special tags. Rules can also outright block requests by returning error status codes to the client directly.
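For context on the injection mechanism: a Trino query starts as a plain HTTP POST, so a proxy in the request path can rewrite headers before forwarding to the cluster. A hedged illustration with curl (X-Trino-User and X-Trino-Client-Tags are standard Trino protocol headers; the hostname and the low-priority tag value are hypothetical, and the exact tags starproxy injects are not documented here):

# A proxy between the client and this endpoint can add or rewrite headers
# such as X-Trino-Client-Tags before the request reaches the coordinator.
curl -s http://trino.example.com:8080/v1/statement \
  -H "X-Trino-User: alice" \
  -H "X-Trino-Client-Tags: low-priority" \
  -d "SELECT * FROM tpch.tiny.nation"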

"},{"location":"tools-and-applications/airflow/airflow/","title":"Apache Airflow","text":""},{"location":"tools-and-applications/airflow/airflow/#what-is-it","title":"What is it?","text":"

Apache Airflow is a platform created by the community to programmatically author, schedule and monitor workflows. It has become popular because of how easy it is to use and how extendable it is, covering a wide variety of tasks and allowing you to connect your workflows with virtually any technology. Since it's a Python framework it has also gathered a lot of interest from the Data Science field.

One important concept used in Airflow is the DAG (Directed Acyclic Graph). A DAG is a graph without any cycles; in other words, a node in your graph may never point back to a node higher up in your workflow. DAGs are used to model your workflows/pipelines, which essentially means that you are building and executing graphs when working with Airflow. You can read more about DAGs here: https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/dags.html

The key features of Airflow are:

  • Webserver: A user interface where you can see the status of your jobs, as well as inspect, trigger, and debug your DAGs and tasks. It also provides an interface to the metadata database and lets you read logs from the remote file store.
  • Scheduler: The Scheduler is a component that monitors and manages all your tasks and DAGs, it checks their status and triggers them in the correct order once their dependencies are complete.
  • Executor: Handles running your tasks once they are assigned by the scheduler. It can either run the tasks inside the scheduler process or push task execution out to workers. Airflow supports a variety of executors which you can choose between.
  • Metadata database: The metadata database is used by the executor, webserver, and scheduler to store state.

"},{"location":"tools-and-applications/airflow/airflow/#installing-apache-airflow-on-openshift","title":"Installing Apache Airflow on OpenShift","text":"

Airflow can be run as a pip package, through Docker, or via a Helm chart. The official Helm chart can be found here: https://airflow.apache.org/docs/apache-airflow/stable/installation/index.html#using-official-airflow-helm-chart

See OpenDataHub Airflow - Example Helm Values

A modified version of the Helm chart which can be installed on OpenShift 4.12: https://github.com/eformat/openshift-airflow
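If you start from the official chart, the basic installation typically looks like this (a sketch; on OpenShift you will most likely need adjusted values such as those linked above):

helm repo add apache-airflow https://airflow.apache.org
helm repo update
helm upgrade --install airflow apache-airflow/airflow \
  --namespace airflow --create-namespace \
  --values values.yaml   # e.g. OpenShift-adjusted values like the examples above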

"},{"location":"tools-and-applications/apache-nifi/apache-nifi/","title":"Apache NiFi","text":""},{"location":"tools-and-applications/apache-nifi/apache-nifi/#what-is-it","title":"What is it?","text":"

Apache NiFi is an open-source data integration tool that helps automate the flow of data between systems. It is designed to be easy to use and allows users to quickly and efficiently process, transmit, and securely distribute data. NiFi provides a web-based interface for monitoring and controlling data flows, as well as a library of processors for common data manipulation tasks such as filtering, routing, and transformation. It is highly configurable and can be used in a variety of scenarios including data ingestion, ETL, and dataflow management.

NiFi is a powerful tool to move data between systems and can handle real-time data with ease. It can be used in conjunction with other big data technologies such as Apache Kafka and Apache Spark to create a complete data pipeline. It supports a wide range of protocols and data formats, making it a versatile solution for any organization looking to manage and process large amounts of data.

"},{"location":"tools-and-applications/apache-nifi/apache-nifi/#installing-apache-nifi-on-openshift","title":"Installing Apache Nifi on OpenShift","text":"

The easiest way to install it is to follow the instructions available on the Nifi on OpenShift repo.

Contrary to other recipes or images you can find on the NiFi project, the container images available in this repo are all based on UBI8 and follow OpenShift guidelines and constraints, like running with minimal privileges.

Several deployment options are available:

  • Choice of the number of nodes to deploy,
  • Basic, OIDC or LDAP authentication.
"},{"location":"tools-and-applications/apache-spark/apache-spark/","title":"Apache Spark","text":""},{"location":"tools-and-applications/apache-spark/apache-spark/#what-is-it","title":"What is it?","text":"

Apache Spark is an open-source, distributed computing system used for big data processing. It can process large amounts of data quickly and efficiently, and handle both batch and streaming data. Spark uses the in-memory computing concept, which allows it to process data much faster than traditional disk-based systems.

Spark supports a wide range of programming languages including Java, Python, and Scala. It provides a number of high-level libraries and APIs, such as Spark SQL, Spark Streaming, and MLlib, that make it easy for developers to perform complex data processing tasks. Spark SQL allows for querying structured data using SQL and the DataFrame API, Spark Streaming allows for processing real-time data streams, and MLlib is a machine learning library for building and deploying machine learning models. Spark also supports graph processing and graph computation through GraphX and GraphFrames.

"},{"location":"tools-and-applications/apache-spark/apache-spark/#working-with-spark-on-openshift","title":"Working with Spark on OpenShift","text":"

Spark can be fully containerized, so a standalone Spark cluster can of course be installed on OpenShift. However, that sort of breaks the cloud-native approach of ephemeral workloads brought by Kubernetes. There are in fact many ways to work with Spark on OpenShift, either with the Spark-on-Kubernetes operator, or directly through PySpark or spark-submit commands.

In this Spark on OpenShift repository, you will find all the instructions to work with Spark on OpenShift.

It includes:

  • pre-built UBI-based Spark images including the drivers to work with S3 storage,
  • instructions and examples to build your own images (to include your own libraries for example),
  • instructions to deploy the Spark history server to gather your processing logs,
  • instructions to deploy the Spark on Kubernetes operator,
  • Prometheus and Grafana configuration to monitor your data processing and operator in real time,
  • instructions to work without the operator, from a Notebook or a Terminal, inside or outside the OpenShift cluster,
  • various examples to test your installation and the different methods.
"},{"location":"tools-and-applications/minio/minio/","title":"Minio","text":""},{"location":"tools-and-applications/minio/minio/#what-is-it","title":"What is it?","text":"

Minio is a high-performance, S3 compatible object store. It can be deployed on a wide variety of platforms, and it comes in multiple flavors.

"},{"location":"tools-and-applications/minio/minio/#why-this-guide","title":"Why this guide?","text":"

This guide is a quick way of deploying the community version of Minio in order to set up a fully standalone Object Store in an OpenShift cluster. This can then be used for various prototyping tasks that require Object Storage.

Note that nothing in this guide should be used in production-grade environments. Also, Minio is not included in RHODS, and Red Hat does not provide support for Minio.

"},{"location":"tools-and-applications/minio/minio/#pre-requisites","title":"Pre-requisites","text":"
  • Access to an OpenShift cluster
  • Namespace-level admin permissions, or permission to create your own project
"},{"location":"tools-and-applications/minio/minio/#deploying-minio-on-openshift","title":"Deploying Minio on OpenShift","text":""},{"location":"tools-and-applications/minio/minio/#create-a-data-science-project-optional","title":"Create a Data Science Project (Optional)","text":"

If you already have your own Data Science Project, or OpenShift project, you can skip this step.

  1. If your cluster already has Red Hat OpenShift Data Science installed, you can use the Dashboard Web Interface to create a Data Science project.
  2. Simply navigate to Data Science Projects
  3. And click Create Project
  4. Choose a name for your project (here, Showcase) and click Create:

  5. Make sure to make a note of the Resource name, in case it's different from the name.

"},{"location":"tools-and-applications/minio/minio/#log-on-to-your-project-in-openshift-console","title":"Log on to your project in OpenShift Console","text":"
  1. Go to your cluster's OpenShift Console:

  2. Make sure you use the Administrator view, not the developer view.

  3. Go to Workloads then Pods, and confirm the selected project is the right one

  4. You now have a project in which to deploy Minio

"},{"location":"tools-and-applications/minio/minio/#deploy-minio-in-your-project","title":"Deploy Minio in your project","text":"
  1. Click on the + (\"Import YAML\") button:

  2. Paste the following YAML in the box, but don't press ok yet!:

    ---\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: minio-pvc\nspec:\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 20Gi\n  volumeMode: Filesystem\n---\nkind: Secret\napiVersion: v1\nmetadata:\n  name: minio-secret\nstringData:\n  # change the username and password to your own values.\n  # ensure that the user is at least 3 characters long and the password at least 8\n  minio_root_user: minio\n  minio_root_password: minio123\n---\nkind: Deployment\napiVersion: apps/v1\nmetadata:\n  name: minio\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: minio\n  template:\n    metadata:\n      creationTimestamp: null\n      labels:\n        app: minio\n    spec:\n      volumes:\n        - name: data\n          persistentVolumeClaim:\n            claimName: minio-pvc\n      containers:\n        - resources:\n            limits:\n              cpu: 250m\n              memory: 1Gi\n            requests:\n              cpu: 20m\n              memory: 100Mi\n          readinessProbe:\n            tcpSocket:\n              port: 9000\n            initialDelaySeconds: 5\n            timeoutSeconds: 1\n            periodSeconds: 5\n            successThreshold: 1\n            failureThreshold: 3\n          terminationMessagePath: /dev/termination-log\n          name: minio\n          livenessProbe:\n            tcpSocket:\n              port: 9000\n            initialDelaySeconds: 30\n            timeoutSeconds: 1\n            periodSeconds: 5\n            successThreshold: 1\n            failureThreshold: 3\n          env:\n            - name: MINIO_ROOT_USER\n              valueFrom:\n                secretKeyRef:\n                  name: minio-secret\n                  key: minio_root_user\n            - name: MINIO_ROOT_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: minio-secret\n                  key: minio_root_password\n          ports:\n            - containerPort: 9000\n              protocol: TCP\n            - containerPort: 9090\n              protocol: TCP\n          imagePullPolicy: IfNotPresent\n          volumeMounts:\n            - name: data\n              mountPath: /data\n              subPath: minio\n          terminationMessagePolicy: File\n          image: >-\n            quay.io/minio/minio:RELEASE.2023-06-19T19-52-50Z\n          args:\n            - server\n            - /data\n            - --console-address\n            - :9090\n      restartPolicy: Always\n      terminationGracePeriodSeconds: 30\n      dnsPolicy: ClusterFirst\n      securityContext: {}\n      schedulerName: default-scheduler\n  strategy:\n    type: Recreate\n  revisionHistoryLimit: 10\n  progressDeadlineSeconds: 600\n---\nkind: Service\napiVersion: v1\nmetadata:\n  name: minio-service\nspec:\n  ipFamilies:\n    - IPv4\n  ports:\n    - name: api\n      protocol: TCP\n      port: 9000\n      targetPort: 9000\n    - name: ui\n      protocol: TCP\n      port: 9090\n      targetPort: 9090\n  internalTrafficPolicy: Cluster\n  type: ClusterIP\n  ipFamilyPolicy: SingleStack\n  sessionAffinity: None\n  selector:\n    app: minio\n---\nkind: Route\napiVersion: route.openshift.io/v1\nmetadata:\n  name: minio-api\nspec:\n  to:\n    kind: Service\n    name: minio-service\n    weight: 100\n  port:\n    targetPort: api\n  wildcardPolicy: None\n  tls:\n    termination: edge\n    insecureEdgeTerminationPolicy: Redirect\n---\nkind: Route\napiVersion: route.openshift.io/v1\nmetadata:\n  name: minio-ui\nspec:\n  to:\n    
kind: Service\n    name: minio-service\n    weight: 100\n  port:\n    targetPort: ui\n  wildcardPolicy: None\n  tls:\n    termination: edge\n    insecureEdgeTerminationPolicy: Redirect\n

  3. By default, the size of the storage is 20 GB (see line 11). Change it if you need to.

  4. If you want to, edit lines 21-22 to change the default user/password.
  5. Press Create.
  6. You should see:

  7. And there should now be a running minio pod:

  8. As well as two minio routes:

  9. The -api route is for programmatic access to Minio

  10. The -ui route is for browser-based access to Minio
  11. Your Minio Object Store is now deployed, but we still need to create at least one bucket in it, to make it useful.
"},{"location":"tools-and-applications/minio/minio/#creating-a-bucket-in-minio","title":"Creating a bucket in Minio","text":""},{"location":"tools-and-applications/minio/minio/#log-in-to-minio","title":"Log in to Minio","text":"
  1. Locate the minio-ui Route, and open its location URL in a web browser:
  2. When prompted, log in

    • if you kept the default values, then:
    • user: minio
    • pass: minio123

  3. You should now be logged into your Minio instance.

"},{"location":"tools-and-applications/minio/minio/#create-a-bucket","title":"Create a bucket","text":"
  1. Click on Create a Bucket

  2. Choose a name for your bucket (for example mybucket) and click Create Bucket:

  3. Repeat those steps to create as many buckets as you will need.

"},{"location":"tools-and-applications/minio/minio/#create-a-matching-data-connection-for-minio","title":"Create a matching Data Connection for Minio","text":"
  1. Back in RHODS, inside of your Data Science Project, click on Add data connection:

  2. Then, fill out the required fields to match your newly-deployed Minio Object Storage

  3. You now have a Data Connection that maps to your mybucket bucket in your Minio Instance.

  4. This data connection can be used, among other things:
    • In your Workbenches
    • For your Model Serving
    • For your Pipeline Server Configuration
"},{"location":"tools-and-applications/minio/minio/#notes-and-faq","title":"Notes and FAQ","text":"
  • As long as you are using the Route URLs, a Minio running in one namespace can be used by any other application, even running in another namespace, or even in another cluster altogether.
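For example, with the MinIO client (mc) or the AWS CLI you can point at the -api Route from anywhere (a sketch; the hostname is a hypothetical example and the credentials are the defaults from the Secret above):

# MinIO client
mc alias set myminio https://minio-api-showcase.apps.example.com minio minio123
mc ls myminio/mybucket

# AWS CLI (export AWS_ACCESS_KEY_ID=minio and AWS_SECRET_ACCESS_KEY=minio123 first)
aws --endpoint-url https://minio-api-showcase.apps.example.com s3 ls s3://mybucket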
"},{"location":"tools-and-applications/minio/minio/#uninstall-instructions","title":"Uninstall instructions:","text":"

This will completely remove Minio and all its content. Make sure you have a backup of the things you need before doing so!

  1. Track down those objects created earlier:

  2. Delete them all.

"},{"location":"tools-and-applications/mlflow/mlflow/","title":"MLFlow","text":""},{"location":"tools-and-applications/mlflow/mlflow/#what-is-it","title":"What is it?","text":"

MLflow is an open source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. MLflow currently offers four components: Tracking, Projects, Models, and Model Registry. Read more here: https://mlflow.org/

"},{"location":"tools-and-applications/mlflow/mlflow/#helm-installation-into-openshift-namespace","title":"Helm installation into OpenShift namespace","text":""},{"location":"tools-and-applications/mlflow/mlflow/#pre-requisites","title":"Pre-requisites","text":"
  • Install the \"Crunchy Postgres for Kubernetes\" operator (can be found in OperatorHub) - To store the MLFlow config
  • Install the \"OpenShift Data Foundation\" operator (can be found in OperatorHub) - To provide S3 storage for the experiments and models
"},{"location":"tools-and-applications/mlflow/mlflow/#install","title":"Install","text":"
<Create an OpenShift project, either through the OpenShift UI or 'oc new-project project-name'>\nhelm repo add strangiato https://strangiato.github.io/helm-charts/\nhelm repo update\n<Log in to the correct OpenShift project through 'oc project project-name'>\nhelm upgrade -i mlflow-server strangiato/mlflow-server\n
"},{"location":"tools-and-applications/mlflow/mlflow/#additional-options","title":"Additional Options","text":"

The MLFlow Server helm chart provides a number of customizable options when deploying MLFlow. These options can be configured using the --set flag with helm install or helm upgrade to set options directly on the command line or through a values.yaml file using the --values flag.

For a full list of configurable options, see the helm chart documentation:

https://github.com/strangiato/helm-charts/tree/main/charts/mlflow-server#values

"},{"location":"tools-and-applications/mlflow/mlflow/#opendatahub-dashboard-application-tile","title":"OpenDataHub Dashboard Application Tile","text":"

As discussed in the Dashboard Configuration, ODH/RHODS allows administrators to add a custom application tile for additional components on the cluster.

The MLFlow Server helm chart supports creation of the Dashboard Application tile as a configurable value. If MLFlow Server is installed in the same namespace as ODH/RHODS, you can install the dashboard tile by running the following command:

helm upgrade -i mlflow-server strangiato/mlflow-server \\\n    --set odhApplication.enabled=true\n

The MLFlow Server helm chart also supports installing the odhApplication object in a different namespace, if MLFlow Server is not installed in the same namespace as ODH/RHODS:

helm upgrade -i mlflow-server strangiato/mlflow-server \\\n    --set odhApplication.enabled=true \\\n    --set odhApplication.namespaceOverride=redhat-ods-applications\n

After enabling the odhApplication component, wait 1-2 minutes and the tile should appear in the Explorer view of the dashboard.

Note

This feature requires ODH v1.4.1 or newer

"},{"location":"tools-and-applications/mlflow/mlflow/#test-mlflow","title":"Test MLFlow","text":"
  • Go to the OpenShift Console and switch to Developer view.
  • Go to the Topology view and make sure that you are on the MLFlow project.
  • Check that the MLFlow circle is dark blue (this means it has finished deploying).
  • Press the \"External URL\" link in the top right corner of the MLFlow circle to open up the MLFlow UI.
  • Run helm test mlflow-server in your command prompt to test MLFlow. If successful, you should see a new experiment called \"helm-test\" show up in the MLFlow UI with 3 runs inside it.
"},{"location":"tools-and-applications/mlflow/mlflow/#adding-mlflow-to-training-code","title":"Adding MLFlow to Training Code","text":"
import mlflow\nfrom sklearn.linear_model import LogisticRegression\n\n# Set tracking URI\nmlflow.set_tracking_uri(\"https://<route-to-mlflow>\")\n\n# Setting the experiment\nmlflow.set_experiment(\"my-experiment\")\n\nif __name__ == \"__main__\":\n    # Enabling automatic logging for scikit-learn runs\n    mlflow.sklearn.autolog()\n\n    # Starting a logging run\n    with mlflow.start_run():\n        # train\n
"},{"location":"tools-and-applications/mlflow/mlflow/#source-code","title":"Source Code","text":"

MLFlow Server Source Code: https://github.com/strangiato/mlflow-server

MLFlow Server Helm Chart Source Code: https://github.com/strangiato/helm-charts/tree/main/charts/mlflow-server

"},{"location":"tools-and-applications/mlflow/mlflow/#demos","title":"Demos","text":"
  • Credit Card Fraud Detection pipeline using MLFlow together with RHODS: Demo
"},{"location":"tools-and-applications/rclone/rclone/","title":"Rclone","text":""},{"location":"tools-and-applications/rclone/rclone/#what-is-it","title":"What is it?","text":"

Rclone is a program to manage files on cloud storage. It is a feature-rich alternative to cloud vendors' web storage interfaces. Over 40 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.

Users call rclone \"The Swiss army knife of cloud storage\", and \"Technology indistinguishable from magic\".

Rclone really looks after your data. It preserves timestamps and verifies checksums at all times. Transfers over limited bandwidth, intermittent connections, or subject to quota can be restarted from the last good file transferred. You can check the integrity of your files. Where possible, rclone employs server-side transfers to minimize local bandwidth use, and transfers from one provider to another without using local disk.

Rclone is mature, open-source software originally inspired by rsync and written in Go. The friendly support community is familiar with varied use cases.

The implementation described here is a containerized version of Rclone to run on OpenShift, alongside or integrated within ODH/RHODS.

"},{"location":"tools-and-applications/rclone/rclone/#deployment","title":"Deployment","text":""},{"location":"tools-and-applications/rclone/rclone/#integrated-in-open-data-hub-or-openshift-data-science","title":"Integrated in Open Data Hub or OpenShift Data Science","text":"

Use this method if you want to use Rclone from the ODH/RHODS launcher or in a Data Science Project.

  • In the Cluster Settings menu, import the image quay.io/guimou/rclone-web-openshift:odh-rhods_latest. You can name it Rclone.
  • In your DSP project, create a new workbench using the Rclone image. You can keep the storage size minimal, as it's only there to store the configuration of the endpoints.

Tip

The minimal size allowed by the dashboard for a storage volume is currently 1GB, which is way more than what is required for the Rclone configuration. So you can also create a much smaller PVC manually in the namespace corresponding to your Data Science Project, for example 100MB or less, and select this volume when creating the workbench.
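For instance, such a smaller PVC could be created like this sketch (the name is a hypothetical choice; select this volume afterwards when creating the workbench):

oc apply -n <your-data-science-project-namespace> -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rclone-config        # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi         # much smaller than the dashboard minimum
EOF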

  • Launch Rclone from the link once it's deployed!
  • After the standard authentication, you end up on the Rclone Login page. There is nothing to enter, but I have not yet found how to bypass it. So simply click on \"Login\".
"},{"location":"tools-and-applications/rclone/rclone/#standalone-deployment","title":"Standalone deployment","text":"

Use this method if you want to use Rclone on its own in a namespace. You can still optionally make a shortcut appear in the ODH/RHODS dashboard.

  • Create a project/namespace for your installation.
  • Clone or head to this repo.
  • From the deploy folder, apply the different YAML files:
    • 01-pvc.yaml: creates a persistent volume to hold the configuration
    • 02-deployment.yaml: creates the deployment. Modify admin account and password if you want to restrict access. You should!
    • 03-service.yaml, 04-route.yaml: create the external access so that you can connect to the Web UI.
    • Optionally, to create a tile on the ODH/RHODS dashboard:
      • modify the 05-tile.yaml file with the address of the Route that was created previously (namespace and name of the Route object); a sketch of this file follows after this list.
      • the tile will appear under the available applications in the dashboard. Select it and click on \"Enable\" to make it appear in the \"Enabled\" menu.
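For orientation, the tile definition is an OdhApplication resource, and the part you modify is roughly this (a sketch based on the ODH dashboard CRD; the names marked as assumptions may differ, so treat the repository's 05-tile.yaml as authoritative):

apiVersion: dashboard.opendatahub.io/v1
kind: OdhApplication
metadata:
  name: rclone
  namespace: redhat-ods-applications   # assumption: the dashboard's namespace
spec:
  displayName: Rclone
  route: rclone-web                    # name of the Route created in 04-route.yaml (assumption)
  routeNamespace: rclone               # namespace where you deployed Rclone (assumption)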
"},{"location":"tools-and-applications/rclone/rclone/#configuration","title":"Configuration","text":"

In this example, we will create an S3 configuration that connects to a bucket on the MCG from OpenShift Data Foundation. So you must have created this bucket in advance and have all the information about it: endpoint, access and secret keys, bucket name.

  • In Rclone, click on \"Configs\" to create the new Remote.
  • Create new configuration, give it a name, and select \"Amazon S3 Compliant Storage Providers...\", which includes Ceph and MCG (even if not listed).
  • Enter the connection info. You only have to enter the Access key and Secret, as well as the Endpoint in \"Endpoint for S3 API\". This last value is automatically copied into other fields; that's normal.
  • Finalize the config by clicking on \"Next\" at the bottom.

Now that you have the Remote set up, you can go to the Explorer, select the Remote, and browse it!

"},{"location":"tools-and-applications/rclone/rclone/#usage-example","title":"Usage Example","text":"

In this simple example, we will transfer a dump sample from Wikipedia. Wikimedia publishes those dumps daily, and they are mirrored by different organizations. In a \"standard\" setup, loading this data into your object store would not be very practical, sometimes involving downloading it locally first and then pushing it to your storage.

This is how we can do it with Rclone.

  • Create your Bucket Remote as described in Configuration.
  • Create another remote of type \"HTTP\", and enter the address of one of the mirrors. Here I used https://dumps.wikimedia.your.org/wikidatawiki/.
  • Open the Explorer view, set it in dual-pane layout. In the first pane open your Bucket Remote, and in the other one the HTTP. This is what it will look like:
  • Browse to the folder you want, select a file or a folder, and simply drag and drop it from the Wikidump to your bucket. You can select a big one to make things more interesting!
  • Head for the dashboard where you will see the file transfer happening in the background.

That's it! Nothing to install, high-speed optimized transfers, and you could even run multiple transfers in the background...

"},{"location":"tools-and-applications/riva/riva/","title":"NVIDIA RIVA","text":"

NVIDIA\u00ae Riva is a GPU-accelerated SDK for building Speech AI applications that are customized for your use case and deliver real-time performance.

Riva offers pretrained speech models in NVIDIA NGC\u2122 that can be fine-tuned with NVIDIA NeMo on a custom data set, accelerating the development of domain-specific models by 10x.

Models can be easily exported, optimized, and deployed as a speech service on premises or in the cloud with a single command using Helm charts.

Riva\u2019s high-performance inference is powered by NVIDIA TensorRT\u2122 optimizations and served using the NVIDIA Triton\u2122 Inference Server, which are both part of the NVIDIA AI platform.

Riva services are available as gRPC-based microservices for low-latency streaming, as well as high-throughput offline use cases.

Riva is fully containerized and can easily scale to hundreds and thousands of parallel streams.

"},{"location":"tools-and-applications/riva/riva/#deployment","title":"Deployment","text":"

The guide to deploy Riva on Kubernetes has to be adapted for OpenShift. Here are the different steps.

"},{"location":"tools-and-applications/riva/riva/#prerequisites","title":"Prerequisites","text":"
  1. You have access and are logged into NVIDIA NGC. For step-by-step instructions, refer to the NGC Getting Started Guide. Specifically you will need your API Key from NVIDIA NGC.
  2. You have at least one worker node with an NVIDIA Volta\u2122, NVIDIA Turing\u2122, or an NVIDIA Ampere architecture-based GPU. For more information, refer to the Support Matrix.
  3. The Node Feature Discovery and the NVIDIA operators have been properly installed and configured on your OpenShift Cluster to enable your GPU(s). Full instructions here
  4. The Pod that will be deployed will consume about 10GB of RAM. Make sure you have enough resources on your node (on top of the GPU itself), and you don't have limits in place that would restrict this. GPU memory consumption will be about 12GB with all models loaded.
"},{"location":"tools-and-applications/riva/riva/#installation","title":"Installation","text":"

Included in the NGC Helm Repository is a chart designed to automate deployment to a Kubernetes cluster. This chart must be modified for OpenShift.

The Riva Speech AI Helm Chart deploys the ASR, NLP, and TTS services automatically. The Helm chart performs a number of functions:

  • Pulls Docker images from NGC for the Riva Speech AI server and utility containers for downloading and converting models.
  • Downloads the requested model artifacts from NGC as configured in the values.yaml file.
  • Generates the Triton Inference Server model repository.
  • Starts the Riva Speech AI server as configured in a Kubernetes pod.
  • Exposes the Riva Speech AI server as a configured service.

Examples of pretrained models are released with Riva for each of the services. The Helm chart comes preconfigured for downloading and deploying all of these models.

Installation Steps:

  1. Download the Helm chart

    export NGC_API_KEY=<your_api_key>\nhelm fetch https://helm.ngc.nvidia.com/nvidia/riva/charts/riva-api-2.11.0.tgz \\\n        --username=\\$oauthtoken --password=$NGC_API_KEY --untar\n
  2. Switch to the newly created folder, riva-api

  3. In the templates folder, modify the file deployment.yaml. For both the container riva-speech-api and the initContainer riva-model-init you must add the following security context information:

    securityContext:\n    allowPrivilegeEscalation: false\n    capabilities:\n      drop: [\"ALL\"]\n    seccompProfile:\n      type: \"RuntimeDefault\"\n    runAsNonRoot: true\n
  4. The file deployment.yaml should now look like this:

    ...\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: {{ template \"riva-server.fullname\" . }}\n  ...\nspec:\n  ...\n  template:\n    ...\n    spec:\n      containers:\n        - name: riva-speech-api\n          securityContext:\n            allowPrivilegeEscalation: false\n            capabilities:\n              drop: [\"ALL\"]\n            seccompProfile:\n              type: \"RuntimeDefault\"\n            runAsNonRoot: true\n          image: {{ $server_image }}\n          ...\n      initContainers:\n        - name: riva-model-init\n          securityContext:\n            allowPrivilegeEscalation: false\n            capabilities:\n              drop: [\"ALL\"]\n            seccompProfile:\n              type: \"RuntimeDefault\"\n            runAsNonRoot: true\n          image: {{ $servicemaker_image }}\n          ...\n
  5. At the root of riva-api, modify the file values.yaml:

    1. You will need to convert your API Key to a password value. In a Terminal run:

      echo -n $NGC_API_KEY | base64 -w0\n
    2. In the ngcCredentials section of values.yaml, enter the password you obtained above and your email.

    3. In the modelRepoGenerator section, for the modelDeployKey value, enter dGx0X2VuY29kZQ==. (This value is obtained from the command echo -n tlt_encode | base64 -w0.)
    4. In the persistentVolumeClaim section, set usePVC to true. This is very important, as it disables the hostPath storage configuration, which is not permitted by default on OpenShift.
    5. If you don't have a storageClass set as default, or want to use another one, enter the name of the class you want to use in storageClassName. Otherwise, leave this field empty and the default class will be used.
    6. Optionally, modify the storageSize.
    7. Leave the ingress section as is; we will create an OpenShift Route later.
    8. Optionally you can modify other values in the file to enable/disable certain models, or modify their configuration.
  6. Log into your OpenShift cluster from a Terminal, and create a project riva-api:

    oc new-project riva-api\n
  7. Move up one folder (so outside of the riva-api folder), and install NVIDIA Riva with the modified Helm chart:

    helm install riva-api riva-api\n

The deployment will now start.

Info

Beware that the deployment can be really long the first time, about 45 minutes if you have all the models and features selected. Containers and models have to be downloaded and configured. Please be patient...

"},{"location":"tools-and-applications/riva/riva/#usage","title":"Usage","text":"

The Helm chart automatically created a Service named riva-api in the namespace where you deployed it. If you followed this guide, this namespace should also be riva-api, so within the OpenShift cluster the API is accessible at riva-api.riva-api.svc.cluster.local.

Different ports are accessible:

  • http (8000): HTTP port of the Triton server.
  • grpc (8001): gRPC port of the Triton server.
  • metrics (8002): port for the metrics of the Triton server.
  • speech-grpc (50051): gRPC port of the Riva Speech service, which directly exposes the different services you can use. This is normally the one you will use.

If you want to use the API outside of the OpenShift cluster, you will have to create one or multiple Routes to those different endpoints.
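For example, the plain HTTP port of the Triton server can be exposed with an edge-terminated Route like this sketch (for the gRPC ports you additionally need an HTTP/2-capable route, so check your ingress controller configuration before relying on this for port 50051):

# Expose the Triton HTTP port (8000) outside the cluster
oc create route edge riva-http --service=riva-api --port=http -n riva-api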

"},{"location":"tools-and-applications/riva/riva/#example","title":"Example","text":"
  • On the same cluster where NVIDIA Riva is deployed, deploy RHODS or ODH and launch a Notebook (Standard DataScience is enough).
  • Clone the NVIDIA Riva tutorials repository at https://github.com/nvidia-riva/tutorials
  • Open a Terminal and install the client with pip install nvidia-riva-client:

(depending on the base image you used, this may yield errors that you can ignore most of the time).

  • In the tutorials folder, open the notebook asr-basics.ipynb.
  • In the cell that defines the uri of the API server, replace the default (localhost) with the address of the API server: riva-api.riva-api.svc.cluster.local

  • Run the notebook!

Note

In this example, only the first part of the notebook will work as only the English models have been deployed. You would have to adapt the configuration for other languages.

"},{"location":"whats-new/whats-new/","title":"What's new?","text":"

2023-08-01: Update to Spark documentation to include usage without the operator Tools and Applications->Apache Spark

2023-07-05: Add documentation on Time Slicing and Autoscaling for NVIDIA GPUs ODH/RHODS How-Tos->NVIDIA GPUs

2023-07-05: New example of how to configure a Custom Serving Runtime with Triton.

2023-07-03: New Minio tutorial on how to quickly deploy a simple Object Storage inside your OpenShift Project, for quick prototyping.

2023-06-30: New NVIDIA GPU installation documentation with Node tainting in ODH/RHODS How-Tos->NVIDIA GPUs

2023-06-02: NVIDIA Riva documentation in Tools and Applications->NVIDIA Riva

NVIDIA\u00ae Riva is a GPU-accelerated SDK for building Speech AI applications that are customized for your use case and deliver real-time performance.

2023-02-06: Rclone documentation in Tools and Applications->Rclone.

Rclone is a program to manage files on cloud storage. It is a feature-rich alternative to cloud vendors' web storage interfaces. Over 40 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.

2023-02-02: Addition of VSCode and RStudio images to custom workbenches.

2023-01-22: Addition of StarProxy to Patterns->Starburst/Trino Proxy.

Starproxy is a fully HTTP compliant proxy that is designed to sit between clients and a Trino/Starburst cluster.

"}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml new file mode 100644 index 00000000..5a4dbc9b --- /dev/null +++ b/sitemap.xml @@ -0,0 +1,163 @@ + + + + https://ai-on-openshift.io/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/demos/credit-card-fraud-detection-mlflow/credit-card-fraud/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/demos/financial-fraud-detection/financial-fraud-detection/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/demos/llm-chat-doc/llm-chat-doc/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/demos/retail-object-detection/retail-object-detection/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/demos/robotics-edge/robotics-edge/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/demos/smart-city/smart-city/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/demos/telecom-customer-churn-airflow/telecom-customer-churn-airflow/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/demos/water-pump-failure-prediction/water-pump-failure-prediction/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/demos/xray-pipeline/xray-pipeline/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/demos/yolov5-training-serving/yolov5-training-serving/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/getting-started/opendatahub/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/getting-started/openshift-data-science/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/getting-started/openshift/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/getting-started/why-this-site/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/odh-rhods/configuration/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/odh-rhods/custom-notebooks/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/odh-rhods/custom-runtime-triton/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/odh-rhods/nvidia-gpus/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/odh-rhods/openshift-group-management/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/patterns/bucket-notifications/bucket-notifications/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/patterns/kafka/kafka-to-object-storage/kafka-to-object-storage/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/patterns/kafka/kafka-to-serverless/kafka-to-serverless/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/patterns/starproxy/starproxy/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/tools-and-applications/airflow/airflow/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/tools-and-applications/apache-nifi/apache-nifi/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/tools-and-applications/apache-spark/apache-spark/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/tools-and-applications/minio/minio/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/tools-and-applications/mlflow/mlflow/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/tools-and-applications/rclone/rclone/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/tools-and-applications/riva/riva/ + 2023-10-25 + daily + + + https://ai-on-openshift.io/whats-new/whats-new/ + 2023-10-25 + daily + + \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz new file mode 100644 index 00000000..ad051a4c Binary files /dev/null and b/sitemap.xml.gz differ diff --git a/stylesheets/extra.css b/stylesheets/extra.css new file mode 100644 index 00000000..a586f24b --- /dev/null +++ b/stylesheets/extra.css @@ -0,0 +1,20 @@ +img { + border: 1px solid #cccccc; + 
transition: transform ease-in-out 0.3s; +} + +div.tx-hero__content img { + border: none; +} + +div.tx-hero__image img { + border: none; +} + +.noborder { + border: none; +} + +a.md-logo img { + border: none; +} \ No newline at end of file diff --git a/theme_override/home.html b/theme_override/home.html new file mode 100644 index 00000000..f386a875 --- /dev/null +++ b/theme_override/home.html @@ -0,0 +1,359 @@ + + +{% extends "main.html" %} +{% block tabs %} +{{ super() }} + + + +
+
+
+
+ +
+
+ +

The one-stop shop for Data Science and Data Engineering on OpenShift!

+ + Get started + +
+
+
+
+ + + + + + +{% endblock %} +{% block content %}{% endblock %} +{% block footer %}{% endblock %} diff --git a/theme_override/main.html b/theme_override/main.html new file mode 100644 index 00000000..b4120ffa --- /dev/null +++ b/theme_override/main.html @@ -0,0 +1,29 @@ + +{% extends "base.html" %} + + +{% block footer %} +
+ +
+{% endblock %} \ No newline at end of file diff --git a/tools-and-applications/airflow/airflow/index.html b/tools-and-applications/airflow/airflow/index.html new file mode 100644 index 00000000..847cd7dc --- /dev/null +++ b/tools-and-applications/airflow/airflow/index.html @@ -0,0 +1,1668 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Apache Airflow - AI on OpenShift + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/tools-and-applications/airflow/img/graph.png b/tools-and-applications/airflow/img/graph.png new file mode 100644 index 00000000..860fbff7 Binary files /dev/null and b/tools-and-applications/airflow/img/graph.png differ diff --git a/tools-and-applications/airflow/img/logo.png b/tools-and-applications/airflow/img/logo.png new file mode 100644 index 00000000..ff203ce8 Binary files /dev/null and b/tools-and-applications/airflow/img/logo.png differ diff --git a/tools-and-applications/apache-nifi/apache-nifi/index.html b/tools-and-applications/apache-nifi/apache-nifi/index.html new file mode 100644 index 00000000..cb011876 --- /dev/null +++ b/tools-and-applications/apache-nifi/apache-nifi/index.html @@ -0,0 +1,1660 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Apache NiFi - AI on OpenShift + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/tools-and-applications/apache-nifi/img/nifi-openshift.png b/tools-and-applications/apache-nifi/img/nifi-openshift.png new file mode 100644 index 00000000..3a988c06 Binary files /dev/null and b/tools-and-applications/apache-nifi/img/nifi-openshift.png differ diff --git a/tools-and-applications/apache-nifi/img/nifi-prview.png b/tools-and-applications/apache-nifi/img/nifi-prview.png new file mode 100644 index 00000000..440cb224 Binary files /dev/null and b/tools-and-applications/apache-nifi/img/nifi-prview.png differ diff --git a/tools-and-applications/apache-spark/apache-spark/index.html b/tools-and-applications/apache-spark/apache-spark/index.html new file mode 100644 index 00000000..0f8e37bf --- /dev/null +++ b/tools-and-applications/apache-spark/apache-spark/index.html @@ -0,0 +1,1663 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Apache Spark - AI on OpenShift + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/tools-and-applications/apache-spark/img/spark-logo.png b/tools-and-applications/apache-spark/img/spark-logo.png new file mode 100644 index 00000000..fdfdde7f Binary files /dev/null and b/tools-and-applications/apache-spark/img/spark-logo.png differ diff --git a/tools-and-applications/minio/img/add.connection.png b/tools-and-applications/minio/img/add.connection.png new file mode 100644 index 00000000..f1a9938c Binary files /dev/null and b/tools-and-applications/minio/img/add.connection.png differ diff --git a/tools-and-applications/minio/img/connection.details.png b/tools-and-applications/minio/img/connection.details.png new file mode 100644 index 00000000..7dacf531 Binary files /dev/null and b/tools-and-applications/minio/img/connection.details.png differ diff --git a/tools-and-applications/minio/img/create.bucket.01.png b/tools-and-applications/minio/img/create.bucket.01.png new file mode 100644 index 00000000..f2f01f18 Binary files /dev/null and b/tools-and-applications/minio/img/create.bucket.01.png differ diff --git a/tools-and-applications/minio/img/create.bucket.02.png b/tools-and-applications/minio/img/create.bucket.02.png new file mode 100644 index 00000000..2f214836 Binary files /dev/null and b/tools-and-applications/minio/img/create.bucket.02.png differ diff --git a/tools-and-applications/minio/img/create.project.png b/tools-and-applications/minio/img/create.project.png new file mode 100644 index 00000000..a06d9429 Binary files /dev/null and b/tools-and-applications/minio/img/create.project.png differ diff --git a/tools-and-applications/minio/img/import.yaml.png b/tools-and-applications/minio/img/import.yaml.png new file mode 100644 index 00000000..3f4f5a30 Binary files /dev/null and b/tools-and-applications/minio/img/import.yaml.png differ diff --git a/tools-and-applications/minio/img/minio.login.png b/tools-and-applications/minio/img/minio.login.png new file mode 100644 index 00000000..d48f0768 Binary files /dev/null and b/tools-and-applications/minio/img/minio.login.png differ diff --git a/tools-and-applications/minio/img/openshift.console.png b/tools-and-applications/minio/img/openshift.console.png new file mode 100644 index 00000000..0165635a Binary files /dev/null and b/tools-and-applications/minio/img/openshift.console.png differ diff --git a/tools-and-applications/minio/img/resources.created.png b/tools-and-applications/minio/img/resources.created.png new file mode 100644 index 00000000..5e972680 Binary files /dev/null and b/tools-and-applications/minio/img/resources.created.png differ diff --git a/tools-and-applications/minio/img/routes.png b/tools-and-applications/minio/img/routes.png new file mode 100644 index 00000000..d7e4b654 Binary files /dev/null and b/tools-and-applications/minio/img/routes.png differ diff --git a/tools-and-applications/minio/img/running.pod.png b/tools-and-applications/minio/img/running.pod.png new file mode 100644 index 00000000..f7cdd1d4 Binary files /dev/null and b/tools-and-applications/minio/img/running.pod.png differ diff --git a/tools-and-applications/minio/img/workloads.pods.png b/tools-and-applications/minio/img/workloads.pods.png new file mode 100644 index 00000000..c32bf3c1 Binary files /dev/null and b/tools-and-applications/minio/img/workloads.pods.png differ diff --git a/tools-and-applications/minio/minio/index.html b/tools-and-applications/minio/minio/index.html new file mode 100644 index 00000000..afa238b2 --- /dev/null +++ 
b/tools-and-applications/minio/minio/index.html @@ -0,0 +1,2127 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Minio - AI on OpenShift + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + +

Minio

+

What is it?

+

Minio is a high-performance, S3 compatible object store. It can be deployed on a wide variety of platforms, and it comes in multiple flavors.

+

Why this guide?

+

This guide is a very quick way of deploying the community version of Minio in order to quickly setup a fully standalone Object Store, in an OpenShift Cluster. This can then be used for various prototyping tasks that require Object Storage.

+

Note that nothing in this guide should be used in production-grade environments. Also, Minio is not included in RHODS, and Red Hat does not provide support for Minio.

+

Pre-requisites

+
    +
  • Access to an OpenShift cluster
  • +
  • Namespace-level admin permissions, or permission to create your own project
  • +
+

Deploying Minio on OpenShift

+

Create a Data Science Project (Optional)

+

If you already have your own Data Science Project, or OpenShift project, you can skip this step.

+
    +
  1. If your cluster already has Red Hat OpenShift Data Science installed, you can use the Dashboard Web Interface to create a Data Science project.
  2. +
  3. Simply navigate to Data Science Projects
  4. +
  5. And click Create Project
  6. +
  7. +

    Choose a name for your project (here, Showcase) and click Create:

    +

    alt_text

    +
  8. +
  9. +

    Make sure to make a note of the Resource name, in case it's different from the name.

    +
  10. +
+

Log on to your project in OpenShift Console

+
    +
  1. +

    Go to your cluster's OpenShift Console:

    +

    alt_text

    +
  2. +
  3. +

Make sure you use the Administrator view, not the Developer view.

    +
  4. +
  5. +

    Go to Workloads then Pods, and confirm the selected project is the right one

    +

    alt_text

    +
  6. +
  7. +

    You now have a project in which to deploy Minio

    +
  8. +
+

Deploy Minio in your project

+
    +
  1. +

    Click on the + ("Import YAML") button:

    +

    alt_text

    +
  2. +
  3. +

Paste the following YAML in the box, but don't click Create yet: +

    ---
    +kind: PersistentVolumeClaim
    +apiVersion: v1
    +metadata:
    +  name: minio-pvc
    +spec:
    +  accessModes:
    +    - ReadWriteOnce
    +  resources:
    +    requests:
    +      storage: 20Gi
    +  volumeMode: Filesystem
    +---
    +kind: Secret
    +apiVersion: v1
    +metadata:
    +  name: minio-secret
    +stringData:
    +  # change the username and password to your own values.
    +  # ensure that the user is at least 3 characters long and the password at least 8
    +  minio_root_user: minio
    +  minio_root_password: minio123
    +---
    +kind: Deployment
    +apiVersion: apps/v1
    +metadata:
    +  name: minio
    +spec:
    +  replicas: 1
    +  selector:
    +    matchLabels:
    +      app: minio
    +  template:
    +    metadata:
    +      creationTimestamp: null
    +      labels:
    +        app: minio
    +    spec:
    +      volumes:
    +        - name: data
    +          persistentVolumeClaim:
    +            claimName: minio-pvc
    +      containers:
    +        - resources:
    +            limits:
    +              cpu: 250m
    +              memory: 1Gi
    +            requests:
    +              cpu: 20m
    +              memory: 100Mi
    +          readinessProbe:
    +            tcpSocket:
    +              port: 9000
    +            initialDelaySeconds: 5
    +            timeoutSeconds: 1
    +            periodSeconds: 5
    +            successThreshold: 1
    +            failureThreshold: 3
    +          terminationMessagePath: /dev/termination-log
    +          name: minio
    +          livenessProbe:
    +            tcpSocket:
    +              port: 9000
    +            initialDelaySeconds: 30
    +            timeoutSeconds: 1
    +            periodSeconds: 5
    +            successThreshold: 1
    +            failureThreshold: 3
    +          env:
    +            - name: MINIO_ROOT_USER
    +              valueFrom:
    +                secretKeyRef:
    +                  name: minio-secret
    +                  key: minio_root_user
    +            - name: MINIO_ROOT_PASSWORD
    +              valueFrom:
    +                secretKeyRef:
    +                  name: minio-secret
    +                  key: minio_root_password
    +          ports:
    +            - containerPort: 9000
    +              protocol: TCP
    +            - containerPort: 9090
    +              protocol: TCP
    +          imagePullPolicy: IfNotPresent
    +          volumeMounts:
    +            - name: data
    +              mountPath: /data
    +              subPath: minio
    +          terminationMessagePolicy: File
    +          image: >-
    +            quay.io/minio/minio:RELEASE.2023-06-19T19-52-50Z
    +          args:
    +            - server
    +            - /data
    +            - --console-address
    +            - :9090
    +      restartPolicy: Always
    +      terminationGracePeriodSeconds: 30
    +      dnsPolicy: ClusterFirst
    +      securityContext: {}
    +      schedulerName: default-scheduler
    +  strategy:
    +    type: Recreate
    +  revisionHistoryLimit: 10
    +  progressDeadlineSeconds: 600
    +---
    +kind: Service
    +apiVersion: v1
    +metadata:
    +  name: minio-service
    +spec:
    +  ipFamilies:
    +    - IPv4
    +  ports:
    +    - name: api
    +      protocol: TCP
    +      port: 9000
    +      targetPort: 9000
    +    - name: ui
    +      protocol: TCP
    +      port: 9090
    +      targetPort: 9090
    +  internalTrafficPolicy: Cluster
    +  type: ClusterIP
    +  ipFamilyPolicy: SingleStack
    +  sessionAffinity: None
    +  selector:
    +    app: minio
    +---
    +kind: Route
    +apiVersion: route.openshift.io/v1
    +metadata:
    +  name: minio-api
    +spec:
    +  to:
    +    kind: Service
    +    name: minio-service
    +    weight: 100
    +  port:
    +    targetPort: api
    +  wildcardPolicy: None
    +  tls:
    +    termination: edge
    +    insecureEdgeTerminationPolicy: Redirect
    +---
    +kind: Route
    +apiVersion: route.openshift.io/v1
    +metadata:
    +  name: minio-ui
    +spec:
    +  to:
    +    kind: Service
    +    name: minio-service
    +    weight: 100
    +  port:
    +    targetPort: ui
    +  wildcardPolicy: None
    +  tls:
    +    termination: edge
    +    insecureEdgeTerminationPolicy: Redirect
    +

    +
  4. +
  5. +

By default, the storage size is 20 GB (see line 11 of the YAML). Change it if you need to.

    +
  6. +
  7. If you want to, edit lines 21-22 to change the default user/password.
  8. +
  9. Press Create.
  10. +
  11. +

    You should see:

    +

    alt_text

    +
  12. +
  13. +

    And there should now be a running minio pod:

    +

    alt_text

    +
  14. +
  15. +

    As well as two minio routes:

    +

    alt_text

    +
  16. +
  17. +

    The -api route is for programmatic access to Minio

    +
  18. +
  19. The -ui route is for browser-based access to Minio
  20. +
  21. Your Minio Object Store is now deployed, but we still need to create at least one bucket in it to make it useful. (A command-line sketch of the whole deployment follows this list.)
  22. +
+
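If you prefer the command line, here is a minimal sketch of the same deployment, assuming you saved the YAML above to a file named minio.yaml (the route names minio-api and minio-ui come from that YAML):

# Apply all the objects (PVC, Secret, Deployment, Service, Routes) at once
+oc apply -f minio.yaml -n <your-project>
+# Wait for the Minio pod to become ready
+oc rollout status deployment/minio -n <your-project>
+# Print the URL of each route
+oc get route minio-api minio-ui -n <your-project> \
+    -o jsonpath='{range .items[*]}{.metadata.name}{": https://"}{.spec.host}{"\n"}{end}'
+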

Creating a bucket in Minio

+

Log in to Minio

+
    +
  1. Locate the minio-ui Route, and open its location URL in a web browser:
  2. +
  3. +

    When prompted, log in

    +
      +
    • if you kept the default values, then:
    • +
    • user: minio
    • +
    • pass: minio123
    • +
    +

    alt_text

    +
  4. +
  5. +

    You should now be logged into your Minio instance.

    +
  6. +
+

Create a bucket

+
    +
  1. +

    Click on Create a Bucket

    +

    alt_text

    +
  2. +
  3. +

    Choose a name for your bucket (for example mybucket) and click Create Bucket:

    +

    alt_text

    +
  4. +
  5. +

Repeat those steps to create as many buckets as you need. (A command-line alternative is sketched after this list.)

    +
  6. +
+
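Buckets can also be created from a terminal with the MinIO client (mc), a sketch assuming the default credentials and the minio-api route from the deployment above:

# Point the MinIO client at your instance (the alias name "myminio" is arbitrary)
+mc alias set myminio https://$(oc get route minio-api -o jsonpath='{.spec.host}') minio minio123
+# Create a bucket, then list all buckets to verify
+mc mb myminio/mybucket
+mc ls myminio
+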

Create a matching Data Connection for Minio

+
    +
  1. +

Back in RHODS, inside your Data Science Project, click on Add data connection:

    +

    alt_text

    +
  2. +
  3. +

Then, fill out the required fields to match your newly deployed Minio object storage:

    +

    alt_text

    +
  4. +
  5. +

You now have a Data Connection that maps to your mybucket bucket in your Minio instance. (A quick way to verify it is sketched after this list.)

    +
  6. +
  7. This data connection can be used, among other things:
      +
    • In your Workbenches
    • +
    • For your Model Serving
    • +
    • For your Pipeline Server Configuration
    • +
    +
  8. +
+
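The data connection is materialized as a secret whose values are injected into your workbench as environment variables. A quick way to verify it from a workbench terminal, assuming the usual variable names (AWS_S3_ENDPOINT, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) and the aws CLI:

pip install awscli
+# List the bucket contents through the data connection values
+aws --endpoint-url "$AWS_S3_ENDPOINT" s3 ls s3://mybucket
+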

Notes and FAQ

+
    +
  • As long as you are using the Route URLs, a Minio instance running in one namespace can be used by any other application, even one running in another namespace or in another cluster altogether.
  • +
+

Uninstall instructions

+

This will completely remove Minio and all its content. Make sure you have a backup of anything you need before doing so!

+
    +
  1. +

Track down the objects created earlier:

    +

    alt_text

    +
  2. +
  3. +

Delete them all. (A command-line equivalent is sketched after this list.)

    +
  4. +
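A command-line equivalent, assuming the object names from the YAML used earlier:

# Routes, Service, Deployment, Secret, and finally the data volume
+oc delete route minio-api minio-ui
+oc delete service minio-service
+oc delete deployment minio
+oc delete secret minio-secret
+oc delete pvc minio-pvc
+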
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/tools-and-applications/mlflow/img/MLFlow_capabilities.png b/tools-and-applications/mlflow/img/MLFlow_capabilities.png new file mode 100644 index 00000000..241649e8 Binary files /dev/null and b/tools-and-applications/mlflow/img/MLFlow_capabilities.png differ diff --git a/tools-and-applications/mlflow/img/enabled-tile.png b/tools-and-applications/mlflow/img/enabled-tile.png new file mode 100644 index 00000000..c554bc70 Binary files /dev/null and b/tools-and-applications/mlflow/img/enabled-tile.png differ diff --git a/tools-and-applications/mlflow/mlflow/index.html b/tools-and-applications/mlflow/mlflow/index.html new file mode 100644 index 00000000..9aba1b9f --- /dev/null +++ b/tools-and-applications/mlflow/mlflow/index.html @@ -0,0 +1,1868 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + MLflow - AI on OpenShift + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

MLFlow

+

What is it?

+

MLflow is an open source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. MLflow currently offers four components: Tracking, Projects, Models, and the Model Registry. +The 4 capabilities of MLFlow. Source: https://mlflow.org/ +Read more here: https://mlflow.org/

+

Helm installation into OpenShift namespace

+

Pre-requisites

+
    +
  • Install the "Crunchy Postgres for Kubernetes" operator (can be found in OperatorHub) - To store the MLFlow config
  • +
  • Install the "OpenShift Data Foundation" operator (can be found in OperatorHub) - To provide S3 storage for the experiments and models
  • +
+

Install

+
# Create an OpenShift project, either through the OpenShift UI or with:
+oc new-project <project-name>
+helm repo add strangiato https://strangiato.github.io/helm-charts/
+helm repo update
+# Make sure you are working in the correct OpenShift project:
+oc project <project-name>
+helm upgrade -i mlflow-server strangiato/mlflow-server
+
+

Additional Options

+

The MLFlow Server helm chart provides a number of customizable options when deploying MLFlow. These options can be set directly on the command line using the --set flag with helm install or helm upgrade, or through a values.yaml file passed with the --values flag.

+

For a full list of configurable options, see the helm chart documentation:

+

https://github.com/strangiato/helm-charts/tree/main/charts/mlflow-server#values

+
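For example, a minimal sketch of the --values workflow, using the chart and release names from above:

# Dump the chart's default values, edit them, then upgrade the release
+helm show values strangiato/mlflow-server > values.yaml
+helm upgrade -i mlflow-server strangiato/mlflow-server --values values.yaml
+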
OpenDataHub Dashboard Application Tile
+

As discussed in the Dashboard Configuration, ODH/RHODS allows administrators to add a custom application tile for additional components on the cluster.

+

Enabled tile

+

The MLFlow Server helm chart supports creation of the Dashboard Application tile as a configurable value. If MLFlow Server is installed in the same namespace as ODH/RHODS, you can install the dashboard tile by running the following command:

+
helm upgrade -i mlflow-server strangiato/mlflow-server \
+    --set odhApplication.enabled=true
+
+

The MLFlow Server helm chart also supports installing the odhApplication object in a different namespace, if MLFlow Server is not installed in the same namespace as ODH/RHODS:

+
helm upgrade -i mlflow-server strangiato/mlflow-server \
+    --set odhApplication.enabled=true \
+    --set odhApplication.namespaceOverride=redhat-ods-applications
+
+

After enabling the odhApplication component, wait 1-2 minutes and the tile should appear in the Explorer view of the dashboard.

+
+

Note

+

This feature requires ODH v1.4.1 or newer

+
+

Test MLFlow

+
    +
  • Go to the OpenShift Console and switch to Developer view.
  • +
  • Go to the Topology view and make sure that you are on the MLFlow project.
  • +
  • Check that the MLFlow circle is dark blue (this means it has finished deploying).
  • +
  • Press the "External URL" link in the top right corner of the MLFlow circle to open up the MLFlow UI.
  • +
  • Run helm test mlflow-server in your command prompt to test MLFlow. If successful, you should see a new experiment called "helm-test" show up in the MLFlow UI with 3 runs inside it.
  • +
+

Adding MLFlow to Training Code

+
import mlflow
+from sklearn.linear_model import LogisticRegression
+
+# Set tracking URI
+mlflow.set_tracking_uri("https://<route-to-mlflow>")
+
+# Setting the experiment
+mlflow.set_experiment("my-experiment")
+
+if __name__ == "__main__":
+    # Enabling automatic logging for scikit-learn runs
+    mlflow.sklearn.autolog()
+
+    # Starting a logging run
+    with mlflow.start_run():
+        # train your model as usual; autologging records parameters, metrics and the model
+
+
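The tracking URI above is the OpenShift Route to the MLFlow server. A sketch of how to retrieve it, assuming the chart created a route named mlflow-server in the current project:

oc get route mlflow-server -o jsonpath='https://{.spec.host}{"\n"}'
+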

Source Code

+

MLFlow Server Source Code: +https://github.com/strangiato/mlflow-server

+

MLFlow Server Helm Chart Source Code: +https://github.com/strangiato/helm-charts/tree/main/charts/mlflow-server

+

Demos

+
    +
  • Credit Card Fraud Detection pipeline using MLFlow together with RHODS: Demo
  • +
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/tools-and-applications/rclone/img/config-step-1.png b/tools-and-applications/rclone/img/config-step-1.png new file mode 100644 index 00000000..5f084bda Binary files /dev/null and b/tools-and-applications/rclone/img/config-step-1.png differ diff --git a/tools-and-applications/rclone/img/config-step2.png b/tools-and-applications/rclone/img/config-step2.png new file mode 100644 index 00000000..83baed2b Binary files /dev/null and b/tools-and-applications/rclone/img/config-step2.png differ diff --git a/tools-and-applications/rclone/img/configs.png b/tools-and-applications/rclone/img/configs.png new file mode 100644 index 00000000..62075a72 Binary files /dev/null and b/tools-and-applications/rclone/img/configs.png differ diff --git a/tools-and-applications/rclone/img/example-2-panes.png b/tools-and-applications/rclone/img/example-2-panes.png new file mode 100644 index 00000000..71abe505 Binary files /dev/null and b/tools-and-applications/rclone/img/example-2-panes.png differ diff --git a/tools-and-applications/rclone/img/explorer-2.png b/tools-and-applications/rclone/img/explorer-2.png new file mode 100644 index 00000000..dc2b3921 Binary files /dev/null and b/tools-and-applications/rclone/img/explorer-2.png differ diff --git a/tools-and-applications/rclone/img/explorer.png b/tools-and-applications/rclone/img/explorer.png new file mode 100644 index 00000000..d33bde09 Binary files /dev/null and b/tools-and-applications/rclone/img/explorer.png differ diff --git a/tools-and-applications/rclone/img/import-rclone-image.png b/tools-and-applications/rclone/img/import-rclone-image.png new file mode 100644 index 00000000..3c4839d3 Binary files /dev/null and b/tools-and-applications/rclone/img/import-rclone-image.png differ diff --git a/tools-and-applications/rclone/img/launch.png b/tools-and-applications/rclone/img/launch.png new file mode 100644 index 00000000..ec1fc1c8 Binary files /dev/null and b/tools-and-applications/rclone/img/launch.png differ diff --git a/tools-and-applications/rclone/img/login.png b/tools-and-applications/rclone/img/login.png new file mode 100644 index 00000000..4b33b8b7 Binary files /dev/null and b/tools-and-applications/rclone/img/login.png differ diff --git a/tools-and-applications/rclone/img/rclone-logo.svg b/tools-and-applications/rclone/img/rclone-logo.svg new file mode 100644 index 00000000..35360ef4 --- /dev/null +++ b/tools-and-applications/rclone/img/rclone-logo.svg @@ -0,0 +1,45 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/tools-and-applications/rclone/img/transfer.png b/tools-and-applications/rclone/img/transfer.png new file mode 100644 index 00000000..aea3a047 Binary files /dev/null and b/tools-and-applications/rclone/img/transfer.png differ diff --git a/tools-and-applications/rclone/img/workbench-1.png b/tools-and-applications/rclone/img/workbench-1.png new file mode 100644 index 00000000..7e7dae47 Binary files /dev/null and b/tools-and-applications/rclone/img/workbench-1.png differ diff --git a/tools-and-applications/rclone/img/workbench-2.png b/tools-and-applications/rclone/img/workbench-2.png new file mode 100644 index 00000000..5590b082 Binary files /dev/null and b/tools-and-applications/rclone/img/workbench-2.png differ diff --git a/tools-and-applications/rclone/rclone/index.html b/tools-and-applications/rclone/rclone/index.html new file mode 100644 index 00000000..4cc66f3e --- /dev/null +++ 
b/tools-and-applications/rclone/rclone/index.html @@ -0,0 +1,1786 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Rclone - AI on OpenShift + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Rclone

+

What is it?

+


Rclone is a program to manage files on cloud storage. It is a feature-rich alternative to cloud vendors' web storage interfaces. Over 40 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.

+

Users call rclone "The Swiss army knife of cloud storage", and "Technology indistinguishable from magic".

+

Rclone really looks after your data. It preserves timestamps and verifies checksums at all times. Transfers over limited bandwidth, over intermittent connections, or subject to quota can be restarted from the last good file transferred. You can check the integrity of your files. Where possible, rclone employs server-side transfers to minimize local bandwidth use, and transfers from one provider to another without using local disk.

+

Rclone is mature, open-source software originally inspired by rsync and written in Go. The friendly support community is familiar with varied use cases.

+

The implementation described here is a containerized version of Rclone to run on OpenShift, alongside or integrated within ODH/RHODS.

+

Deployment

+

Integrated in Open Data Hub or OpenShift Data Science

+

Use this method if you want to use Rclone from the ODH/RHODS launcher or in a Data Science Project.

+
    +
  • In the Cluster Settings menu, import the image quay.io/guimou/rclone-web-openshift:odh-rhods_latest. You can name it Rclone. +Import image
  • +
  • In your DSP project, create a new workbench using the Rclone image. You can set the storage size to the minimum, as it's only there to store the configuration of the endpoints. +Workbench +Workbench
  • +
+
+

Tip

+

The minimal size allowed by the dashboard for a storage volume is currently 1GB, which is way more than what is required for the Rclone configuration. So you can also create a much smaller PVC manually in the namespace corresponding to your Data Science Project, for example 100MB or less, and select this volume when creating the workbench.

+
+
    +
  • Launch Rclone from the link once it's deployed! +Launch
  • +
  • After the standard authentication, you end up on the Rclone Login page. There is nothing to enter, and I have not yet found how to bypass it, so simply click on "Login". +Login
  • +
+

Standalone deployment

+

Use this method if you want to use Rclone on its own in a namespace. You can still optionally make a shortcut appear in the ODH/RHODS dashboard.

+
    +
  • Create a project/namespace for your installation.
  • +
  • Clone or head to this repo.
  • +
  • From the deploy folder, apply the different YAML files (a full sketch follows this list):
      +
    • 01-pvc.yaml: creates a persistent volume to hold the configuration
    • +
    • 02-deployment.yaml: creates the deployment. Modify admin account and password if you want to restrict access. You should!
    • +
    • 03-service.yaml, 04-route.yaml: create the external access so that you can connect to the Web UI.
    • +
    • Optionally, to create a tile on the ODH/RHODS dashboard:
        +
      • modify the 05-tile.yaml file with the address of the Route that was created previously (namespace and name of the Route object).
      • +
    • the tile will appear under the available applications in the dashboard. Select it and click on "Enable" to make it appear in the "Enabled" menu.
      • +
      +
    • +
    +
  • +
+
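Put together, a minimal sketch of the standalone deployment, using the file names from the repo's deploy folder:

oc new-project rclone
+oc apply -f 01-pvc.yaml -f 02-deployment.yaml -f 03-service.yaml -f 04-route.yaml
+# Optionally, after editing it with your Route's namespace and name:
+oc apply -f 05-tile.yaml
+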

Configuration

+

In this example, we will create an S3 configuration that connects to a bucket on the MCG from OpenShift Data Foundation. So you must have created this bucket in advance and have all the information about it: endpoint, access and secret keys, bucket name.

+
    +
  • In Rclone, click on "Configs" to create the new Remote. +Configs
  • +
  • Create new configuration, give it a name, and select "Amazon S3 Compliant Storage Providers...", which includes Ceph and MCG (even if not listed). +Step 1
  • +
  • Enter the connection info. You only have to enter the Access key and Secret, as well as the Endpoint in "Endpoint for S3 API". This last value is automatically copied into other fields; that's normal. +Step 2
  • +
  • Finalize the config by clicking on "Next" at the bottom. (A CLI equivalent is sketched after this list.)
  • +
+
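The same Remote can also be created non-interactively with the rclone CLI (recent versions accept key=value pairs), a sketch with hypothetical credential and endpoint values:

rclone config create mybucket-remote s3 \
+    provider=Other \
+    access_key_id=<ACCESS_KEY> \
+    secret_access_key=<SECRET_KEY> \
+    endpoint=https://<your-s3-endpoint>
+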

Now that you have the Remote set up, you can go on the Explorer, select the Remote, and browse it! +Explorer +Explorer

+

Usage Example

+

In this simple example, we will transfer a dump sample from Wikipedia. Wikimedia publishes those dumps daily, and they are mirrored by different organizations. In a "standard" setup, loading this data into your object store would not be very practical, sometimes involving downloading it locally first and then pushing it to your storage.

+

This is how we can do it with Rclone.

+
    +
  • Create your Bucket Remote as described in Configuration.
  • +
  • Create another remote of type "HTTP", and enter the address of one of the mirrors. Here I used https://dumps.wikimedia.your.org/wikidatawiki/.
  • +
  • Open the Explorer view, set it in dual-pane layout. In the first pane open your Bucket Remote, and in the other one the HTTP. This is what it will look like: +Explorer
  • +
  • Browse to the folder you want, select a file or a folder, and simply drag and drop it from the Wikidump to your bucket. You can select a big one to make things more interesting! (A scripted equivalent is sketched after this list.)
  • +
  • Head to the dashboard, where you will see the file transfer happening in the background. +Transfer
  • +
+
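The same transfer can be scripted with the rclone CLI, a sketch assuming the two Remotes created above are named wikidump (HTTP) and mybucket-remote (S3):

# Copy a dump file from the HTTP remote straight into the bucket
+rclone copy --progress wikidump:<path-to-dump-file> mybucket-remote:mybucket/
+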

That's it! Nothing to install, high-speed optimized transfers, and you can even run multiple transfers in the background...

+ + + + + + + + + + + + + \ No newline at end of file diff --git a/tools-and-applications/riva/img/api.png b/tools-and-applications/riva/img/api.png new file mode 100644 index 00000000..a1c3725a Binary files /dev/null and b/tools-and-applications/riva/img/api.png differ diff --git a/tools-and-applications/riva/img/asr.png b/tools-and-applications/riva/img/asr.png new file mode 100644 index 00000000..96f94463 Binary files /dev/null and b/tools-and-applications/riva/img/asr.png differ diff --git a/tools-and-applications/riva/img/nvidia-riva-client.png b/tools-and-applications/riva/img/nvidia-riva-client.png new file mode 100644 index 00000000..9fcfecc5 Binary files /dev/null and b/tools-and-applications/riva/img/nvidia-riva-client.png differ diff --git a/tools-and-applications/riva/riva/index.html b/tools-and-applications/riva/riva/index.html new file mode 100644 index 00000000..b0ae4cd6 --- /dev/null +++ b/tools-and-applications/riva/riva/index.html @@ -0,0 +1,1859 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + NVIDIA Riva - AI on OpenShift + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

NVIDIA RIVA

+

NVIDIA® Riva is a GPU-accelerated SDK for building Speech AI applications that are customized for your use case and deliver real-time performance.

+

Riva offers pretrained speech models in NVIDIA NGC™ that can be fine-tuned with NVIDIA NeMo on a custom dataset, accelerating the development of domain-specific models by 10x.

+

Models can be easily exported, optimized, and deployed as a speech service on premises or in the cloud with a single command using Helm charts.

+

Riva’s high-performance inference is powered by NVIDIA TensorRT™ optimizations and served using the NVIDIA Triton™ Inference Server, which are both part of the NVIDIA AI platform.

+

Riva services are available as gRPC-based microservices for low-latency streaming, as well as high-throughput offline use cases.

+

Riva is fully containerized and can easily scale to hundreds and thousands of parallel streams.

+

Deployment

+

The guide to deploy Riva on Kubernetes has to be adapted for OpenShift. Here are the different steps.

+

Prerequisites

+
    +
  1. You have access and are logged into NVIDIA NGC. For step-by-step instructions, refer to the NGC Getting Started Guide. Specifically you will need your API Key from NVIDIA NGC.
  2. +
  3. You have at least one worker node with an NVIDIA Volta™, NVIDIA Turing™, or an NVIDIA Ampere architecture-based GPU. For more information, refer to the Support Matrix.
  4. +
  5. The Node Feature Discovery and the NVIDIA operators have been properly installed and configured on your OpenShift cluster to enable your GPU(s). Full instructions here. (A quick verification is sketched after this list.)
  6. +
  7. The Pod that will be deployed will consume about 10GB of RAM. Make sure you have enough resources on your node (on top of the GPU itself), and you don't have limits in place that would restrict this. GPU memory consumption will be about 12GB with all models loaded.
  8. +
+
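A quick way to verify prerequisite 3 from a terminal, assuming the Node Feature Discovery and GPU Operator applied the usual nvidia.com/gpu labels:

# Nodes advertising an NVIDIA GPU
+oc get nodes -l nvidia.com/gpu.present=true
+# Allocatable GPUs on a given node
+oc describe node <gpu-node-name> | grep nvidia.com/gpu
+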

Installation

+

Included in the NGC Helm Repository is a chart designed to automate deployment to a Kubernetes cluster. This chart must be modified for OpenShift.

+

The Riva Speech AI Helm Chart deploys the ASR, NLP, and TTS services automatically. The Helm chart performs a number of functions:

+
    +
  • Pulls Docker images from NGC for the Riva Speech AI server and utility containers for downloading and converting models.
  • +
  • Downloads the requested model artifacts from NGC as configured in the values.yaml file.
  • +
  • Generates the Triton Inference Server model repository.
  • +
  • Starts the Riva Speech AI server as configured in a Kubernetes pod.
  • +
  • Exposes the Riva Speech AI server as a configured service.
  • +
+

Examples of pretrained models are released with Riva for each of the services. The Helm chart comes preconfigured for downloading and deploying all of these models.

+

Installation Steps:

+
    +
  1. +

    Download the Helm chart

    +
    export NGC_API_KEY=<your_api_key>
    +helm fetch https://helm.ngc.nvidia.com/nvidia/riva/charts/riva-api-2.11.0.tgz \
    +        --username=\$oauthtoken --password=$NGC_API_KEY --untar
    +
    + +
  2. +
  3. +

    Switch to the newly created folder, riva-api

    +
  4. +
  5. +

    In the templates folder, modify the file deployment.yaml. For both the container riva-speech-api and the initContainer riva-model-init you must add the following security context information:

    +
    securityContext:
    +    allowPrivilegeEscalation: false
    +    capabilities:
    +      drop: ["ALL"]
    +    seccompProfile:
    +      type: "RuntimeDefault"
    +    runAsNonRoot: true
    +
    + +
  6. +
  7. +

    The file deployment.yaml should now look like this:

    +
    ...
    +apiVersion: apps/v1
    +kind: Deployment
    +metadata:
    +  name: {{ template "riva-server.fullname" . }}
    +  ...
    +spec:
    +  ...
    +  template:
    +    ...
    +    spec:
    +      containers:
    +        - name: riva-speech-api
    +          securityContext:
    +            allowPrivilegeEscalation: false
    +            capabilities:
    +              drop: ["ALL"]
    +            seccompProfile:
    +              type: "RuntimeDefault"
    +            runAsNonRoot: true
    +          image: {{ $server_image }}
    +          ...
    +      initContainers:
    +        - name: riva-model-init
    +          securityContext:
    +            allowPrivilegeEscalation: false
    +            capabilities:
    +              drop: ["ALL"]
    +            seccompProfile:
    +              type: "RuntimeDefault"
    +            runAsNonRoot: true
    +          image: {{ $servicemaker_image }}
    +          ...
    +
    + +
  8. +
  9. +

    At the root of riva-api, modify the file values.yaml:

    +
      +
    1. +

      You will need to convert your API Key to a password value. In a Terminal run:

      +
      echo -n $NGC_API_KEY | base64 -w0
      +
      + +
    2. +
    3. +

In the ngcCredentials section of values.yaml, enter the password you obtained above and your email

      +
    4. +
    5. In the modelRepoGenerator section, for the modelDeployKey value, enter dGx0X2VuY29kZQ==. (This value is obtained from the command echo -n tlt_encode | base64 -w0.)
    6. +
    7. In the persistentVolumeClaim section, set usePVC to true. This is very important, as it disables the hostPath storage configuration, which is not permitted by default on OpenShift.
    8. +
    9. If you don't have a storageClass set as default, or want to use another one, enter the name of the class you want to use in storageClassName. Otherwise, leave this field empty and the default class will be used.
    10. +
    11. Optionally, modify the storageSize.
    12. +
    13. Leave the ingress section as is; we will create an OpenShift Route later.
    14. +
    15. Optionally you can modify other values in the file to enable/disable certain models, or modify their configuration.
    16. +
    +
  10. +
  11. +

    Log into your OpenShift cluster from a Terminal, and create a project riva-api:

    +
    oc new-project riva-api
    +
    + +
  12. +
  13. +

    Move up one folder (so outside of the riva-api folder), and install NVIDIA Riva with the modified Helm chart:

    +
    helm install riva-api riva-api
    +
    + +
  14. +
+

The deployment will now start.

+
+

Info

+

Beware that the first deployment can take a really long time, about 45 minutes if you have all the models and features selected. Containers and models have to be downloaded and configured. Please be patient...

+
+

Usage

+

The Helm chart automatically creates a Service, riva-api, in the namespace where you deployed it. If you followed this guide, the namespace is also riva-api, so within the OpenShift cluster the API is accessible at riva-api.riva-api.svc.cluster.local.

+

Different ports are accessible:

+
    +
  • http (8000): HTTP port of the Triton server.
  • +
  • grpc (8001): gRPC port of the Triton server.
  • +
  • metrics (8002): port for the metrics of the Triton server.
  • +
  • speech-grpc (50051): gRPC port of the Riva Speech service that directly exposes the different services you can use. This is normally the one you will use.
  • +
+

If you want to use the API outside of the OpenShift cluster, you will have to create one or multiple Routes to those different endpoints.

+
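For example, a minimal sketch of a Route to the gRPC endpoint, assuming edge TLS termination (gRPC through an OpenShift Route also requires HTTP/2 to be enabled on the router):

oc create route edge riva-speech --service=riva-api --port=speech-grpc -n riva-api
+oc get route riva-speech -n riva-api -o jsonpath='{.spec.host}{"\n"}'
+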

Example

+
    +
  • On the same cluster where NVIDIA Riva is deployed, deploy RHODS or ODH and launch a Notebook (the Standard Data Science image is enough).
  • +
  • Clone the NVIDIA Riva tutorials repository at https://github.com/nvidia-riva/tutorials
  • +
  • Open a Terminal and install the client with pip install nvidia-riva-client:
  • +
+

client

+

(depending on the base image you used, this may yield errors that you can ignore most of the time).

+
    +
  • In the tutorials folder, open the notebook asr-basics.ipynb.
  • +
  • In the cell that defines the uri of the API server, replace the default (localhost) with the address of the API server: riva-api.riva-api.svc.cluster.local
  • +
+

api

+
    +
  • Run the notebook!
  • +
+

asr

+
+

Note

+

In this example, only the first part of the notebook will work as only the English models have been deployed. You would have to adapt the configuration for other languages.

+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/whats-new/whats-new/index.html b/whats-new/whats-new/index.html new file mode 100644 index 00000000..80c041b8 --- /dev/null +++ b/whats-new/whats-new/index.html @@ -0,0 +1,1593 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + What's new? - AI on OpenShift + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

What's new?

+

2023-08-01: Update to Spark documentation to include usage without the operator Tools and Applications->Apache Spark

+

2023-07-05: Add documentation on Time Slicing and Autoscaling for NVIDIA GPUs ODH/RHODS How-Tos->NVIDIA GPUs

+

2023-07-05: New example of how to configure a Custom Serving Runtime with Triton.

+

2023-07-03: New Minio tutorial on how to quickly deploy a simple Object Storage inside your OpenShift Project, for quick prototyping.

+

2023-06-30: New NVIDIA GPU installation documentation with Node tainting in ODH/RHODS How-Tos->NVIDIA GPUs

+

2023-06-02: NVIDIA Riva documentation in Tools and Applications->NVIDIA Riva

+

NVIDIA® Riva is a GPU-accelerated SDK for building Speech AI applications that are customized for your use case and deliver real-time performance.

+

2023-02-06: Rclone documentation in Tools and Applications->Rclone.

+

Rclone is a program to manage files on cloud storage. It is a feature-rich alternative to cloud vendors' web storage interfaces. Over 40 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.

+
+

2023-02-02: Addition of VSCode and RStudio images to custom workbenches.

+
+

2023-01-22: Addition of StarProxy to Patterns->Starburst/Trino Proxy.

+

Starproxy is a fully HTTP compliant proxy that is designed to sit between clients and a Trino/Starburst cluster.

+ + + + + + + + + + + + + \ No newline at end of file
