diff --git a/7.2/Advanced/airgapped/index.html b/7.2/Advanced/airgapped/index.html index d6888c3..4fd9611 100644 --- a/7.2/Advanced/airgapped/index.html +++ b/7.2/Advanced/airgapped/index.html @@ -14,14 +14,14 @@ </span></code></pre></div> <p>If <code>--containerd-namespace</code> is not specified, images will be imported into the <code>k8s.io</code> namespace.</p> <div class="admonition note"> <p class=admonition-title><code>sudo</code> required</p> <p>Depending on how <code>containerd</code> has been installed and configured, running the above command may require <code>sudo</code>.</p> </div> <h4 id=mindthegap-import-to-an-internal-oci-registry>mindthegap - Import to an internal OCI Registry<a class=headerlink href=#mindthegap-import-to-an-internal-oci-registry title="Permanent link">¶</a></h4> <div class=highlight><span class=filename>mindthegap push bundle</span><pre><span></span><code><span id=__span-3-1><a id=__codelineno-3-1 name=__codelineno-3-1 href=#__codelineno-3-1></a>mindthegap<span class=w> </span>push<span class=w> </span>bundle<span class=w> </span>--bundle<span class=w> </span><path/to/bundle.tar><span class=w> </span><span class=se>\</span> </span><span id=__span-3-2><a id=__codelineno-3-2 name=__codelineno-3-2 href=#__codelineno-3-2></a>--to-registry<span class=w> </span><registry.address><span class=w> </span><span class=se>\</span> </span><span id=__span-3-3><a id=__codelineno-3-3 name=__codelineno-3-3 href=#__codelineno-3-3></a><span class=o>[</span>--to-registry-insecure-skip-tls-verify<span class=o>]</span> -</span></code></pre></div> </div> <div class=tabbed-block> <p>It is possible with <code>containerd</code> to pull images, save them and load them either into a Container Registry in the air gapped environment or directly into another <code>containerd</code> instance. </p> <p>If the target <code>containerd</code> is on a node running a Kubernetes Cluster then these images will be sourced by Kubernetes from the loaded images, via <abbr title="Container Runtime Interface">CRI</abbr>, with no requirement to pull them from an external source e.g.
a Registry or Mirror.</p> <div class="admonition note"> <p class=admonition-title><code>sudo</code> required</p> <p>Depending on how <code>containerd</code> has been installed and configured many of the example calls below may require running with <code>sudo</code></p> </div> <h3 id=containerd-using-containerd-to-pull-and-export-an-image>containerd - Using <code>containerd</code> to pull and export an image<a class=headerlink href=#containerd-using-containerd-to-pull-and-export-an-image title="Permanent link">¶</a></h3> <p>Similar to <code>docker pull</code> we can use <code>ctr image pull</code> so to pull the core Kinetica DB cpu based image</p> <div class=highlight><span class=filename>Pull a remote image (containerd)</span><pre><span></span><code><span id=__span-4-1><a id=__codelineno-4-1 name=__codelineno-4-1 href=#__codelineno-4-1></a>ctr<span class=w> </span>image<span class=w> </span>pull<span class=w> </span>docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2 -</span></code></pre></div> <p>We now need to export the pulled image as an archive to the local filesystem.</p> <div class=highlight><span class=filename>Export a local image (containerd)</span><pre><span></span><code><span id=__span-5-1><a id=__codelineno-5-1 name=__codelineno-5-1 href=#__codelineno-5-1></a>ctr<span class=w> </span>image<span class=w> </span><span class=nb>export</span><span class=w> </span>kinetica-k8s-cpu-v7.2.2-3.ga-2.tar<span class=w> </span><span class=se>\</span> -</span><span id=__span-5-2><a id=__codelineno-5-2 name=__codelineno-5-2 href=#__codelineno-5-2></a>docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2 -</span></code></pre></div> <p>We can now transfer this archive (<code>kinetica-k8s-cpu-v7.2.2-3.ga-2.tar</code>) to the Kubernetes Node inside the air-gapped environment.</p> <h3 id=containerd-using-containerd-to-import-an-image>containerd - Using <code>containerd</code> to import an image<a class=headerlink href=#containerd-using-containerd-to-import-an-image title="Permanent link">¶</a></h3> <p>Using <code>containerd</code> to import an image on to a Kubernetes Node on which a Kinetica Cluster is running.</p> <div class=highlight><span class=filename>Import the Images</span><pre><span></span><code><span id=__span-6-1><a id=__codelineno-6-1 name=__codelineno-6-1 href=#__codelineno-6-1></a>ctr<span class=w> </span>-n<span class=o>=</span>k8s.io<span class=w> </span>images<span class=w> </span>import<span class=w> </span>kinetica-k8s-cpu-v7.2.2-3.ga-2.tar -</span></code></pre></div> <div class="admonition warning"> <p class=admonition-title><code>-n=k8s.io</code></p> <p>It is possible to use <code>ctr images import kinetica-k8s-cpu-v7.2.2-3.ga-2.rc-3.tar</code> to import the image to <code>containerd</code>.</p> <p>However, in order for the image to be visible to the Kubernetes Cluster running on <code>containerd</code> it is necessary to add the parameter <code>-n=k8s.io</code>.</p> </div> <h3 id=containerd-verifying-the-image-is-available>containerd - Verifying the image is available<a class=headerlink href=#containerd-verifying-the-image-is-available title="Permanent link">¶</a></h3> <p>To verify the image is loaded into <code>containerd</code> on the node run the following on the node: -</p> <div class=highlight><span class=filename>Verify containerd Images</span><pre><span></span><code><span id=__span-7-1><a id=__codelineno-7-1 name=__codelineno-7-1 href=#__codelineno-7-1></a>ctr<span class=w> </span>image<span class=w> </span>ls +</span></code></pre></div> </div> <div class=tabbed-block> 
<p>It is possible with <code>containerd</code> to pull images, save them, and load them either into a Container Registry in the air-gapped environment or directly into another <code>containerd</code> instance.</p> <p>If the target <code>containerd</code> is on a node running a Kubernetes Cluster, then these images will be sourced by Kubernetes from the loaded images, via <abbr title="Container Runtime Interface">CRI</abbr>, with no requirement to pull them from an external source, e.g. a Registry or Mirror.</p> <div class="admonition note"> <p class=admonition-title><code>sudo</code> required</p> <p>Depending on how <code>containerd</code> has been installed and configured, many of the example calls below may require running with <code>sudo</code>.</p> </div> <h3 id=containerd-using-containerd-to-pull-and-export-an-image>containerd - Using <code>containerd</code> to pull and export an image<a class=headerlink href=#containerd-using-containerd-to-pull-and-export-an-image title="Permanent link">¶</a></h3> <p>Similar to <code>docker pull</code>, we can use <code>ctr image pull</code> to pull the core Kinetica DB CPU-based image.</p> <div class=highlight><span class=filename>Pull a remote image (containerd)</span><pre><span></span><code><span id=__span-4-1><a id=__codelineno-4-1 name=__codelineno-4-1 href=#__codelineno-4-1></a>ctr<span class=w> </span>image<span class=w> </span>pull<span class=w> </span>docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-5.ga-1 +</span></code></pre></div> <p>We now need to export the pulled image as an archive to the local filesystem.</p> <div class=highlight><span class=filename>Export a local image (containerd)</span><pre><span></span><code><span id=__span-5-1><a id=__codelineno-5-1 name=__codelineno-5-1 href=#__codelineno-5-1></a>ctr<span class=w> </span>image<span class=w> </span><span class=nb>export</span><span class=w> </span>kinetica-k8s-cpu-v7.2.2-5.ga-1.tar<span class=w> </span><span class=se>\</span> +</span><span id=__span-5-2><a id=__codelineno-5-2 name=__codelineno-5-2 href=#__codelineno-5-2></a>docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-5.ga-1 +</span></code></pre></div> <p>We can now transfer this archive (<code>kinetica-k8s-cpu-v7.2.2-5.ga-1.tar</code>) to the Kubernetes Node inside the air-gapped environment.</p> <h3 id=containerd-using-containerd-to-import-an-image>containerd - Using <code>containerd</code> to import an image<a class=headerlink href=#containerd-using-containerd-to-import-an-image title="Permanent link">¶</a></h3> <p>Using <code>containerd</code> to import an image onto a Kubernetes Node on which a Kinetica Cluster is running.</p> <div class=highlight><span class=filename>Import the Images</span><pre><span></span><code><span id=__span-6-1><a id=__codelineno-6-1 name=__codelineno-6-1 href=#__codelineno-6-1></a>ctr<span class=w> </span>-n<span class=o>=</span>k8s.io<span class=w> </span>images<span class=w> </span>import<span class=w> </span>kinetica-k8s-cpu-v7.2.2-5.ga-1.tar +</span></code></pre></div> <div class="admonition warning"> <p class=admonition-title><code>-n=k8s.io</code></p> <p>It is possible to use <code>ctr images import kinetica-k8s-cpu-v7.2.2-5.ga-1.tar</code> to import the image to <code>containerd</code>.</p> <p>However, in order for the image to be visible to the Kubernetes Cluster running on <code>containerd</code>, it is necessary to add the parameter <code>-n=k8s.io</code>.</p> </div>
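<p>Because <code>ctr</code> keeps images in per-namespace stores, it can be useful to list the namespaces present on the node before or after importing; a minimal sketch, assuming a stock <code>containerd</code> installation: -</p> <div class=highlight><span class=filename>List containerd namespaces</span><pre><span></span><code># list the image namespaces on this node; a Kubernetes node normally shows k8s.io
ctr namespaces ls
</code></pre></div>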
<h3 id=containerd-verifying-the-image-is-available>containerd - Verifying the image is available<a class=headerlink href=#containerd-verifying-the-image-is-available title="Permanent link">¶</a></h3> <p>To verify the image is loaded into <code>containerd</code>, run the following on the node: -</p> <div class=highlight><span class=filename>Verify containerd Images</span><pre><span></span><code><span id=__span-7-1><a id=__codelineno-7-1 name=__codelineno-7-1 href=#__codelineno-7-1></a>ctr<span class=w> </span>image<span class=w> </span>ls </span></code></pre></div> <p>To verify the image is visible to Kubernetes on the node, run the following: -</p> <div class=highlight><span class=filename>Verify CRI Images</span><pre><span></span><code><span id=__span-8-1><a id=__codelineno-8-1 name=__codelineno-8-1 href=#__codelineno-8-1></a>crictl<span class=w> </span>images -</span></code></pre></div> </div> <div class=tabbed-block> <p>It is possible with <code>docker</code> to pull images, save them and load them into an OCI Container Registry in the air gapped environment.</p> <div class=highlight><span class=filename>Pull a remote image (docker)</span><pre><span></span><code><span id=__span-9-1><a id=__codelineno-9-1 name=__codelineno-9-1 href=#__codelineno-9-1></a>docker<span class=w> </span>pull<span class=w> </span>--platformlinux/amd64<span class=w> </span>docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2 -</span></code></pre></div> <div class=highlight><span class=filename>Export a local image (docker)</span><pre><span></span><code><span id=__span-10-1><a id=__codelineno-10-1 name=__codelineno-10-1 href=#__codelineno-10-1></a>docker<span class=w> </span><span class=nb>export</span><span class=w> </span>--platformlinux/amd64<span class=w> </span>-o<span class=w> </span>kinetica-k8s-cpu-v7.2.2-3.ga-2.tar<span class=w> </span><span class=se>\</span> -</span><span id=__span-10-2><a id=__codelineno-10-2 name=__codelineno-10-2 href=#__codelineno-10-2></a>docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2 -</span></code></pre></div> <p>We can now transfer this archive (<code>kinetica-k8s-cpu-v7.2.2-3.ga-2.rc-3.tar</code>) to the Kubernetes Node inside the air-gapped environment.</p> <h3 id=docker-using-docker-to-import-an-image>docker - Using <code>docker</code> to import an image<a class=headerlink href=#docker-using-docker-to-import-an-image title="Permanent link">¶</a></h3> <p>Using <code>docker</code> to import an image on to a Kubernetes Node on which a Kinetica Cluster is running.</p> <div class=highlight><span class=filename>Import the Images</span><pre><span></span><code><span id=__span-11-1><a id=__codelineno-11-1 name=__codelineno-11-1 href=#__codelineno-11-1></a>docker<span class=w> </span>import<span class=w> </span>--platformlinux/amd64<span class=w> </span>kinetica-k8s-cpu-v7.2.2-3.ga-2.tar<span class=w> </span>registry:repository/kinetica-k8s-cpu:v7.2.0-3.rc-3 +</span></code></pre></div>
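<p>To pick the Kinetica image out of these listings, the output of either command can be filtered; a minimal sketch, assuming the image imported above: -</p> <div class=highlight><span class=filename>Filter for the Kinetica image</span><pre><span></span><code># the image reference pulled above contains "kinetica", so a simple grep finds it
ctr -n=k8s.io images ls | grep kinetica
crictl images | grep kinetica
</code></pre></div>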
</div> <div class=tabbed-block> <p>It is possible with <code>docker</code> to pull images, save them and load them into an OCI Container Registry in the air-gapped environment.</p> <div class=highlight><span class=filename>Pull a remote image (docker)</span><pre><span></span><code><span id=__span-9-1><a id=__codelineno-9-1 name=__codelineno-9-1 href=#__codelineno-9-1></a>docker<span class=w> </span>pull<span class=w> </span>--platform<span class=w> </span>linux/amd64<span class=w> </span>docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-5.ga-1 +</span></code></pre></div> <div class=highlight><span class=filename>Export a local image (docker)</span><pre><span></span><code><span id=__span-10-1><a id=__codelineno-10-1 name=__codelineno-10-1 href=#__codelineno-10-1></a>docker<span class=w> </span>save<span class=w> </span>-o<span class=w> </span>kinetica-k8s-cpu-v7.2.2-5.ga-1.tar<span class=w> </span><span class=se>\</span> +</span><span id=__span-10-2><a id=__codelineno-10-2 name=__codelineno-10-2 href=#__codelineno-10-2></a>docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-5.ga-1 +</span></code></pre></div> <p>We can now transfer this archive (<code>kinetica-k8s-cpu-v7.2.2-5.ga-1.tar</code>) to the Kubernetes Node inside the air-gapped environment.</p> <h3 id=docker-using-docker-to-import-an-image>docker - Using <code>docker</code> to import an image<a class=headerlink href=#docker-using-docker-to-import-an-image title="Permanent link">¶</a></h3> <p>Using <code>docker</code> to import an image onto a Kubernetes Node on which a Kinetica Cluster is running.</p> <div class=highlight><span class=filename>Import the Images</span><pre><span></span><code><span id=__span-11-1><a id=__codelineno-11-1 name=__codelineno-11-1 href=#__codelineno-11-1></a>docker<span class=w> </span>load<span class=w> </span>-i<span class=w> </span>kinetica-k8s-cpu-v7.2.2-5.ga-1.tar </span></code></pre></div>
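<p>Where the image ultimately needs to live in an internal OCI Registry rather than on a single node, it can be re-tagged and pushed after loading; a minimal sketch, assuming a hypothetical internal registry at <code>registry.internal.example</code>: -</p> <div class=highlight><span class=filename>Tag and push to an internal registry</span><pre><span></span><code># registry.internal.example is a placeholder for your internal registry address
docker tag docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-5.ga-1 registry.internal.example/kinetica/kinetica-k8s-cpu:v7.2.2-5.ga-1
docker push registry.internal.example/kinetica/kinetica-k8s-cpu:v7.2.2-5.ga-1
</code></pre></div>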
</div> </div> </div> <hr> </article> </div> <script>var tabs=__md_get("__tabs");if(Array.isArray(tabs))e:for(var set of document.querySelectorAll(".tabbed-set")){var tab,labels=set.querySelector(".tabbed-labels");for(tab of tabs)for(var label of labels.getElementsByTagName("label"))if(label.innerText.trim()===tab){var input=document.getElementById(label.htmlFor);input.checked=!0;continue e}}</script> <script>var target=document.getElementById(location.hash.slice(1));target&&target.name&&(target.checked=target.name.startsWith("__tabbed_"))</script> </div> <button type=button class="md-top md-icon" data-md-component=top hidden> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M13 20h-2V8l-5.5 5.5-1.42-1.42L12 4.16l7.92 7.92-1.42 1.42L13 8v12Z"/></svg> Back to top </button> </main> <footer class=md-footer> <nav class="md-footer__inner md-grid" aria-label=Footer> <a href=../ingress_configuration/ class="md-footer__link md-footer__link--prev" aria-label="Previous: Ingress Configuration"> <div class="md-footer__button md-icon"> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M20 11v2H8l5.5 5.5-1.42 1.42L4.16 12l7.92-7.92L13.5 5.5 8 11h12Z"/></svg> </div> <div class=md-footer__title> <span class=md-footer__direction> Previous </span> <div class=md-ellipsis> Ingress Configuration </div> </div> </a> <a href=../minio_s3_dev_test/ class="md-footer__link md-footer__link--next" aria-label="Next: S3 Storage for Dev/Test"> <div class=md-footer__title> <span class=md-footer__direction> Next </span> <div class=md-ellipsis> S3 Storage for Dev/Test </div> </div> <div class="md-footer__button md-icon"> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M4 11v2h12l-5.5 5.5 1.42 1.42L19.84 12l-7.92-7.92L10.5 5.5 16 11H4Z"/></svg> </div> </a> </nav> <div class="md-footer-meta md-typeset"> <div class="md-footer-meta__inner md-grid"> <div class=md-copyright> <div class=md-copyright__highlight> Copyright © 2016 - 2024 Kinetica DB Inc.
</div> Made with <a href=https://squidfunk.github.io/mkdocs-material/ target=_blank rel=noopener> Material for MkDocs </a> </div> </div> </div> </footer> </div> <div class=md-dialog data-md-component=dialog> <div class="md-dialog__inner md-typeset"></div> </div> <div class=md-progress data-md-component=progress role=progressbar></div> <script id=__config type=application/json>{"base": "../..", "features": ["announce.dismiss", "content.tooltips", "content.code.copy", "content.code.annotate", "content.tabs.link", "header.autohide", "navigation.expand", "navigation.footer", "navigation.indexes", "navigation.instant", "navigation.instant.progress", "navigation.prune", "navigation.sections", "navigation.top", "navigation.tracking", "search.highlight", "search.share", "search.suggest", "tags", "navigation.tabs", "navigation.tabs.sticky"], "search": "../../assets/javascripts/workers/search.b8dbb3d2.min.js", "translations": {"clipboard.copied": "Copied to clipboard", "clipboard.copy": "Copy to clipboard", "search.result.more.one": "1 more on this page", "search.result.more.other": "# more on this page", "search.result.none": "No matching documents", "search.result.one": "1 matching document", "search.result.other": "# matching documents", "search.result.placeholder": "Type to start searching", "search.result.term.missing": "Missing", "select.version": "Select version"}, "version": {"provider": "mike"}}</script> <script src=../../assets/javascripts/bundle.c8d2eff1.min.js></script> <script>document$.subscribe(() => {const lightbox = GLightbox({"touchNavigation": true, "loop": false, "zoomable": true, "draggable": true, "openEffect": "zoom", "closeEffect": "zoom", "slideEffect": "slide"});})</script></body> </html> \ No newline at end of file diff --git a/7.2/Advanced/kinetica_images_list_for_airgapped_environments/index.html b/7.2/Advanced/kinetica_images_list_for_airgapped_environments/index.html index 14449e7..37cecbd 100644 --- a/7.2/Advanced/kinetica_images_list_for_airgapped_environments/index.html +++ b/7.2/Advanced/kinetica_images_list_for_airgapped_environments/index.html @@ -7,4 +7,4 @@ .gdesc-inner { font-size: 0.75rem; } body[data-md-color-scheme="slate"] .gdesc-inner { background: var(--md-default-bg-color);} body[data-md-color-scheme="slate"] .gslide-title { color: var(--md-default-fg-color);} - body[data-md-color-scheme="slate"] .gslide-desc { color: var(--md-default-fg-color);}</style><script src=../../assets/javascripts/glightbox.min.js></script></head> <body dir=ltr data-md-color-scheme=default data-md-color-primary=indigo data-md-color-accent=indigo> <input class=md-toggle data-md-toggle=drawer type=checkbox id=__drawer autocomplete=off> <input class=md-toggle data-md-toggle=search type=checkbox id=__search autocomplete=off> <label class=md-overlay for=__drawer></label> <div data-md-component=skip> <a href=#required-container-images class=md-skip> Skip to content </a> </div> <div data-md-component=announce> </div> <div data-md-color-scheme=default data-md-component=outdated hidden> </div> <header class="md-header md-header--shadow md-header--lifted" data-md-component=header> <nav class="md-header__inner md-grid" aria-label=Header> <a href=https://www.kinetica.com title="Kinetica for Kubernetes" class="md-header__button md-logo" aria-label="Kinetica for Kubernetes" data-md-component=logo> <img src=../../assets/kinetica_logo.png alt=logo> </a> <label class="md-header__button md-icon" for=__drawer> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M3 6h18v2H3V6m0
5h18v2H3v-2m0 5h18v2H3v-2Z"/></svg> </label> <div class=md-header__title data-md-component=header-title> <div class=md-header__ellipsis> <div class=md-header__topic> <span class=md-ellipsis> Kinetica for Kubernetes </span> </div> <div class=md-header__topic data-md-component=header-topic> <span class=md-ellipsis> Kinetica images list for airgapped environments </span> </div> </div> </div> <form class=md-header__option data-md-component=palette> <input class=md-option data-md-color-media data-md-color-scheme=default data-md-color-primary=indigo data-md-color-accent=indigo aria-label="Switch to dark mode" type=radio name=__palette id=__palette_0> <label class="md-header__button md-icon" title="Switch to dark mode" for=__palette_1 hidden> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M17 6H7c-3.31 0-6 2.69-6 6s2.69 6 6 6h10c3.31 0 6-2.69 6-6s-2.69-6-6-6zm0 10H7c-2.21 0-4-1.79-4-4s1.79-4 4-4h10c2.21 0 4 1.79 4 4s-1.79 4-4 4zM7 9c-1.66 0-3 1.34-3 3s1.34 3 3 3 3-1.34 3-3-1.34-3-3-3z"/></svg> </label> <input class=md-option data-md-color-media data-md-color-scheme=slate data-md-color-primary=red data-md-color-accent=red aria-label="Switch to light mode" type=radio name=__palette id=__palette_1> <label class="md-header__button md-icon" title="Switch to light mode" for=__palette_0 hidden> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M17 7H7a5 5 0 0 0-5 5 5 5 0 0 0 5 5h10a5 5 0 0 0 5-5 5 5 0 0 0-5-5m0 8a3 3 0 0 1-3-3 3 3 0 0 1 3-3 3 3 0 0 1 3 3 3 3 0 0 1-3 3Z"/></svg> </label> </form> <script>var media,input,key,value,palette=__md_get("__palette");if(palette&&palette.color){"(prefers-color-scheme)"===palette.color.media&&(media=matchMedia("(prefers-color-scheme: light)"),input=document.querySelector(media.matches?"[data-md-color-media='(prefers-color-scheme: light)']":"[data-md-color-media='(prefers-color-scheme: dark)']"),palette.color.media=input.getAttribute("data-md-color-media"),palette.color.scheme=input.getAttribute("data-md-color-scheme"),palette.color.primary=input.getAttribute("data-md-color-primary"),palette.color.accent=input.getAttribute("data-md-color-accent"));for([key,value]of Object.entries(palette.color))document.body.setAttribute("data-md-color-"+key,value)}</script> <label class="md-header__button md-icon" for=__search> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M9.5 3A6.5 6.5 0 0 1 16 9.5c0 1.61-.59 3.09-1.56 4.23l.27.27h.79l5 5-1.5 1.5-5-5v-.79l-.27-.27A6.516 6.516 0 0 1 9.5 16 6.5 6.5 0 0 1 3 9.5 6.5 6.5 0 0 1 9.5 3m0 2C7 5 5 7 5 9.5S7 14 9.5 14 14 12 14 9.5 12 5 9.5 5Z"/></svg> </label> <div class=md-search data-md-component=search role=dialog> <label class=md-search__overlay for=__search></label> <div class=md-search__inner role=search> <form class=md-search__form name=search> <input type=text class=md-search__input name=query aria-label=Search placeholder=Search autocapitalize=off autocorrect=off autocomplete=off spellcheck=false data-md-component=search-query required> <label class="md-search__icon md-icon" for=__search> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M9.5 3A6.5 6.5 0 0 1 16 9.5c0 1.61-.59 3.09-1.56 4.23l.27.27h.79l5 5-1.5 1.5-5-5v-.79l-.27-.27A6.516 6.516 0 0 1 9.5 16 6.5 6.5 0 0 1 3 9.5 6.5 6.5 0 0 1 9.5 3m0 2C7 5 5 7 5 9.5S7 14 9.5 14 14 12 14 9.5 12 5 9.5 5Z"/></svg> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M20 11v2H8l5.5 5.5-1.42 1.42L4.16 12l7.92-7.92L13.5 5.5 8 11h12Z"/></svg> </label> <nav class=md-search__options aria-label=Search> <a 
href=javascript:void(0) class="md-search__icon md-icon" title=Share aria-label=Share data-clipboard data-clipboard-text data-md-component=search-share tabindex=-1> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M18 16.08c-.76 0-1.44.3-1.96.77L8.91 12.7c.05-.23.09-.46.09-.7 0-.24-.04-.47-.09-.7l7.05-4.11c.54.5 1.25.81 2.04.81a3 3 0 0 0 3-3 3 3 0 0 0-3-3 3 3 0 0 0-3 3c0 .24.04.47.09.7L8.04 9.81C7.5 9.31 6.79 9 6 9a3 3 0 0 0-3 3 3 3 0 0 0 3 3c.79 0 1.5-.31 2.04-.81l7.12 4.15c-.05.21-.08.43-.08.66 0 1.61 1.31 2.91 2.92 2.91 1.61 0 2.92-1.3 2.92-2.91A2.92 2.92 0 0 0 18 16.08Z"/></svg> </a> <button type=reset class="md-search__icon md-icon" title=Clear aria-label=Clear tabindex=-1> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M19 6.41 17.59 5 12 10.59 6.41 5 5 6.41 10.59 12 5 17.59 6.41 19 12 13.41 17.59 19 19 17.59 13.41 12 19 6.41Z"/></svg> </button> </nav> <div class=md-search__suggest data-md-component=search-suggest></div> </form> <div class=md-search__output> <div class=md-search__scrollwrap data-md-scrollfix> <div class=md-search-result data-md-component=search-result> <div class=md-search-result__meta> Initializing search </div> <ol class=md-search-result__list role=presentation></ol> </div> </div> </div> </div> </div> <div class=md-header__source> <a href=https://github.com/kineticadb/charts title="Go to repository" class=md-source data-md-component=source> <div class="md-source__icon md-icon"> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 496 512"><!-- Font Awesome Free 6.5.1 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) Copyright 2023 Fonticons, Inc.--><path d="M165.9 397.4c0 2-2.3 3.6-5.2 3.6-3.3.3-5.6-1.3-5.6-3.6 0-2 2.3-3.6 5.2-3.6 3-.3 5.6 1.3 5.6 3.6zm-31.1-4.5c-.7 2 1.3 4.3 4.3 4.9 2.6 1 5.6 0 6.2-2s-1.3-4.3-4.3-5.2c-2.6-.7-5.5.3-6.2 2.3zm44.2-1.7c-2.9.7-4.9 2.6-4.6 4.9.3 2 2.9 3.3 5.9 2.6 2.9-.7 4.9-2.6 4.6-4.6-.3-1.9-3-3.2-5.9-2.9zM244.8 8C106.1 8 0 113.3 0 252c0 110.9 69.8 205.8 169.5 239.2 12.8 2.3 17.3-5.6 17.3-12.1 0-6.2-.3-40.4-.3-61.4 0 0-70 15-84.7-29.8 0 0-11.4-29.1-27.8-36.6 0 0-22.9-15.7 1.6-15.4 0 0 24.9 2 38.6 25.8 21.9 38.6 58.6 27.5 72.9 20.9 2.3-16 8.8-27.1 16-33.7-55.9-6.2-112.3-14.3-112.3-110.5 0-27.5 7.6-41.3 23.6-58.9-2.6-6.5-11.1-33.3 2.6-67.9 20.9-6.5 69 27 69 27 20-5.6 41.5-8.5 62.8-8.5s42.8 2.9 62.8 8.5c0 0 48.1-33.6 69-27 13.7 34.7 5.2 61.4 2.6 67.9 16 17.7 25.8 31.5 25.8 58.9 0 96.5-58.9 104.2-114.8 110.5 9.2 7.9 17 22.9 17 46.4 0 33.7-.3 75.4-.3 83.6 0 6.5 4.6 14.4 17.3 12.1C428.2 457.8 496 362.9 496 252 496 113.3 383.5 8 244.8 8zM97.2 352.9c-1.3 1-1 3.3.7 5.2 1.6 1.6 3.9 2.3 5.2 1 1.3-1 1-3.3-.7-5.2-1.6-1.6-3.9-2.3-5.2-1zm-10.8-8.1c-.7 1.3.3 2.9 2.3 3.9 1.6 1 3.6.7 4.3-.7.7-1.3-.3-2.9-2.3-3.9-2-.6-3.6-.3-4.3.7zm32.4 35.6c-1.6 1.3-1 4.3 1.3 6.2 2.3 2.3 5.2 2.6 6.5 1 1.3-1.3.7-4.3-1.3-6.2-2.2-2.3-5.2-2.6-6.5-1zm-11.4-14.7c-1.6 1-1.6 3.6 0 5.9 1.6 2.3 4.3 3.3 5.6 2.3 1.6-1.3 1.6-3.9 0-6.2-1.4-2.3-4-3.3-5.6-2z"/></svg> </div> <div class=md-source__repository> kineticadb/charts </div> </a> </div> </nav> <nav class=md-tabs aria-label=Tabs data-md-component=tabs> <div class=md-grid> <ul class=md-tabs__list> <li class=md-tabs__item> <a href=../.. 
class=md-tabs__link> Home </a> </li> <li class=md-tabs__item> <a href=../../Setup/ class=md-tabs__link> Setup </a> </li> <li class=md-tabs__item> <a href=../ class=md-tabs__link> Advanced </a> </li> <li class=md-tabs__item> <a href=../../Operations/ class=md-tabs__link> Operations </a> </li> <li class=md-tabs__item> <a href=../../Administration/ class=md-tabs__link> Administration </a> </li> <li class=md-tabs__item> <a href=../../Architecture/ class=md-tabs__link> Architecture & Design </a> </li> <li class=md-tabs__item> <a href=../../Support/ class=md-tabs__link> Support </a> </li> <li class=md-tabs__item> <a href=../../Reference/ class=md-tabs__link> Reference </a> </li> <li class=md-tabs__item> <a href=../../tags/ class=md-tabs__link> Categories </a> </li> </ul> </div> </nav> </header> <div class=md-container data-md-component=container> <main class=md-main data-md-component=main> <div class="md-main__inner md-grid"> <div class="md-sidebar md-sidebar--primary" data-md-component=sidebar data-md-type=navigation> <div class=md-sidebar__scrollwrap> <div class=md-sidebar__inner> <nav class="md-nav md-nav--primary md-nav--lifted" aria-label=Navigation data-md-level=0> <label class=md-nav__title for=__drawer> <a href=https://www.kinetica.com title="Kinetica for Kubernetes" class="md-nav__button md-logo" aria-label="Kinetica for Kubernetes" data-md-component=logo> <img src=../../assets/kinetica_logo.png alt=logo> </a> Kinetica for Kubernetes </label> <div class=md-nav__source> <a href=https://github.com/kineticadb/charts title="Go to repository" class=md-source data-md-component=source> <div class="md-source__icon md-icon"> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 496 512"><!-- Font Awesome Free 6.5.1 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) Copyright 2023 Fonticons, Inc.--><path d="M165.9 397.4c0 2-2.3 3.6-5.2 3.6-3.3.3-5.6-1.3-5.6-3.6 0-2 2.3-3.6 5.2-3.6 3-.3 5.6 1.3 5.6 3.6zm-31.1-4.5c-.7 2 1.3 4.3 4.3 4.9 2.6 1 5.6 0 6.2-2s-1.3-4.3-4.3-5.2c-2.6-.7-5.5.3-6.2 2.3zm44.2-1.7c-2.9.7-4.9 2.6-4.6 4.9.3 2 2.9 3.3 5.9 2.6 2.9-.7 4.9-2.6 4.6-4.6-.3-1.9-3-3.2-5.9-2.9zM244.8 8C106.1 8 0 113.3 0 252c0 110.9 69.8 205.8 169.5 239.2 12.8 2.3 17.3-5.6 17.3-12.1 0-6.2-.3-40.4-.3-61.4 0 0-70 15-84.7-29.8 0 0-11.4-29.1-27.8-36.6 0 0-22.9-15.7 1.6-15.4 0 0 24.9 2 38.6 25.8 21.9 38.6 58.6 27.5 72.9 20.9 2.3-16 8.8-27.1 16-33.7-55.9-6.2-112.3-14.3-112.3-110.5 0-27.5 7.6-41.3 23.6-58.9-2.6-6.5-11.1-33.3 2.6-67.9 20.9-6.5 69 27 69 27 20-5.6 41.5-8.5 62.8-8.5s42.8 2.9 62.8 8.5c0 0 48.1-33.6 69-27 13.7 34.7 5.2 61.4 2.6 67.9 16 17.7 25.8 31.5 25.8 58.9 0 96.5-58.9 104.2-114.8 110.5 9.2 7.9 17 22.9 17 46.4 0 33.7-.3 75.4-.3 83.6 0 6.5 4.6 14.4 17.3 12.1C428.2 457.8 496 362.9 496 252 496 113.3 383.5 8 244.8 8zM97.2 352.9c-1.3 1-1 3.3.7 5.2 1.6 1.6 3.9 2.3 5.2 1 1.3-1 1-3.3-.7-5.2-1.6-1.6-3.9-2.3-5.2-1zm-10.8-8.1c-.7 1.3.3 2.9 2.3 3.9 1.6 1 3.6.7 4.3-.7.7-1.3-.3-2.9-2.3-3.9-2-.6-3.6-.3-4.3.7zm32.4 35.6c-1.6 1.3-1 4.3 1.3 6.2 2.3 2.3 5.2 2.6 6.5 1 1.3-1.3.7-4.3-1.3-6.2-2.2-2.3-5.2-2.6-6.5-1zm-11.4-14.7c-1.6 1-1.6 3.6 0 5.9 1.6 2.3 4.3 3.3 5.6 2.3 1.6-1.3 1.6-3.9 0-6.2-1.4-2.3-4-3.3-5.6-2z"/></svg> </div> <div class=md-source__repository> kineticadb/charts </div> </a> </div> <ul class=md-nav__list data-md-scrollfix> <li class=md-nav__item> <a href=../.. 
class=md-nav__link> <span class=md-ellipsis> Home </span> </a> </li> <li class="md-nav__item md-nav__item--pruned md-nav__item--nested"> <a href=../../Setup/ class=md-nav__link> <span class=md-ellipsis> Setup </span> <span class="md-nav__icon md-icon"></span> </a> </li> <li class=md-nav__item> <a href=../ class=md-nav__link> <span class=md-ellipsis> Advanced </span> </a> </li> <li class="md-nav__item md-nav__item--pruned md-nav__item--nested"> <a href=../../Operations/ class=md-nav__link> <span class=md-ellipsis> Operations </span> <span class="md-nav__icon md-icon"></span> </a> </li> <li class="md-nav__item md-nav__item--pruned md-nav__item--nested"> <a href=../../Administration/ class=md-nav__link> <span class=md-ellipsis> Administration </span> <span class="md-nav__icon md-icon"></span> </a> </li> <li class="md-nav__item md-nav__item--pruned md-nav__item--nested"> <a href=../../Architecture/ class=md-nav__link> <span class=md-ellipsis> Architecture & Design </span> <span class="md-nav__icon md-icon"></span> </a> </li> <li class="md-nav__item md-nav__item--pruned md-nav__item--nested"> <a href=../../Support/ class=md-nav__link> <span class=md-ellipsis> Support </span> <span class="md-nav__icon md-icon"></span> </a> </li> <li class="md-nav__item md-nav__item--pruned md-nav__item--nested"> <a href=../../Reference/ class=md-nav__link> <span class=md-ellipsis> Reference </span> <span class="md-nav__icon md-icon"></span> </a> </li> <li class=md-nav__item> <a href=../../tags/ class=md-nav__link> <span class=md-ellipsis> Categories </span> </a> </li> </ul> </nav> </div> </div> </div> <div class="md-sidebar md-sidebar--secondary" data-md-component=sidebar data-md-type=toc> <div class=md-sidebar__scrollwrap> <div class=md-sidebar__inner> <nav class="md-nav md-nav--secondary" aria-label="Table of contents"> <label class=md-nav__title for=__toc> <span class="md-nav__icon md-icon"></span> Table of contents </label> <ul class=md-nav__list data-md-component=toc data-md-scrollfix> <li class=md-nav__item> <a href=#required-container-images class=md-nav__link> <span class=md-ellipsis> Required Container Images </span> </a> <nav class=md-nav aria-label="Required Container Images"> <ul class=md-nav__list> <li class=md-nav__item> <a href=#dockerio-required-kinetica-images-for-all-installations class=md-nav__link> <span class=md-ellipsis> docker.io (Required Kinetica Images for All Installations) </span> </a> </li> <li class=md-nav__item> <a href=#nvcrio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu class=md-nav__link> <span class=md-ellipsis> nvcr.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu) </span> </a> </li> <li class=md-nav__item> <a href=#registryk8sio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu class=md-nav__link> <span class=md-ellipsis> registry.k8s.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu) </span> </a> </li> <li class=md-nav__item> <a href=#dockerio-required-supporting-images class=md-nav__link> <span class=md-ellipsis> docker.io (Required Supporting Images) </span> </a> </li> <li class=md-nav__item> <a href=#quayio-required-supporting-images class=md-nav__link> <span class=md-ellipsis> quay.io (Required Supporting Images) </span> </a> </li> </ul> </nav> </li> <li class=md-nav__item> <a href=#optional-container-images class=md-nav__link> <span class=md-ellipsis> Optional Container Images </span> </a> <nav class=md-nav aria-label="Optional Container Images"> <ul class=md-nav__list> <li 
class=md-nav__item> <a href=#quayio-optional-supporting-images class=md-nav__link> <span class=md-ellipsis> quay.io (Optional Supporting Images) </span> </a> </li> <li class=md-nav__item> <a href=#registryk8sio-optional-supporting-images class=md-nav__link> <span class=md-ellipsis> registry.k8s.io (Optional Supporting Images) </span> </a> </li> </ul> </nav> </li> <li class=md-nav__item> <a href=#which-kinetica-core-image-do-i-use class=md-nav__link> <span class=md-ellipsis> Which Kinetica Core Image do I use? </span> </a> </li> </ul> </nav> </div> </div> </div> <div class=md-content data-md-component=content> <article class="md-content__inner md-typeset"> <h1>Kinetica images list for airgapped environments</h1> <details class=info> <summary>Kinetica Images for an Air-Gapped Environment</summary> <p>If you are installing Kinetica with Helm in an air-gapped environment you will either need a Registry Proxy to pass the requests through or to download the images and push them to your internal Registry.</p> <p>For information on ways to transfer the files into an air-gapped environment <a href=../airgapped/ title="Ways to transfer images"><strong><em>See here</em></strong></a>.</p> <h3 id=required-container-images>Required Container Images<a class=headerlink href=#required-container-images title="Permanent link">¶</a></h3> <h4 id=dockerio-required-kinetica-images-for-all-installations>docker.io (Required Kinetica Images for All Installations)<a class=headerlink href=#dockerio-required-kinetica-images-for-all-installations title="Permanent link">¶</a></h4> <ul> <li>docker.io/kinetica/kinetica-k8s-operator:v7.2.2-3.ga-2<ul> <li>docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2 <strong>or</strong></li> <li>docker.io/kinetica/kinetica-k8s-cpu-avx512:v7.2.2-3.ga-2 <strong>or</strong></li> <li>docker.io/kinetica/kinetica-k8s-gpu:v7.2.2-3.ga-2</li> </ul> </li> <li>docker.io/kinetica/workbench-operator:v7.2.2-3.ga-2</li> <li>docker.io/kinetica/workbench:v7.2.2-3.ga-2</li> <li>docker.io/kinetica/kinetica-k8s-monitor:v7.2.2-3.ga-2</li> <li>docker.io/kinetica/busybox:v7.2.2-3.ga-2</li> <li>docker.io/kinetica/fluent-bit:v7.2.2-3.ga-2</li> <li>docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga</li> </ul> <h4 id=nvcrio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu>nvcr.io (Required Kinetica Images for GPU Installations using <code>kinetica-k8s-gpu</code>)<a class=headerlink href=#nvcrio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu title="Permanent link">¶</a></h4> <ul> <li>nvcr.io/nvidia/gpu-operator:v23.9.1</li> </ul> <h4 id=registryk8sio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu>registry.k8s.io (Required Kinetica Images for GPU Installations using <code>kinetica-k8s-gpu</code>)<a class=headerlink href=#registryk8sio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu title="Permanent link">¶</a></h4> <ul> <li>registry.k8s.io/nfd/node-feature-discovery:v0.14.2</li> </ul> <h4 id=dockerio-required-supporting-images>docker.io (Required Supporting Images)<a class=headerlink href=#dockerio-required-supporting-images title="Permanent link">¶</a></h4> <ul> <li>docker.io/bitnami/openldap:2.6.7</li> <li>docker.io/alpine/openssl:latest (used by bitnami/openldap)</li> <li>docker.io/otel/opentelemetry-collector-contrib:0.95.0</li> </ul> <h4 id=quayio-required-supporting-images>quay.io (Required Supporting Images)<a class=headerlink href=#quayio-required-supporting-images title="Permanent link">¶</a></h4> <ul> 
<li>quay.io/brancz/kube-rbac-proxy:v0.14.2</li> </ul> <h3 id=optional-container-images>Optional Container Images<a class=headerlink href=#optional-container-images title="Permanent link">¶</a></h3> <p>These images are only required if certain features are enabled as part of the Helm installation: -</p> <ul> <li>CertManager</li> <li>ingress-ninx</li> </ul> <h4 id=quayio-optional-supporting-images>quay.io (Optional Supporting Images)<a class=headerlink href=#quayio-optional-supporting-images title="Permanent link">¶</a></h4> <ul> <li>quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> <li>quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> <li>quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> </ul> <h4 id=registryk8sio-optional-supporting-images>registry.k8s.io (Optional Supporting Images)<a class=headerlink href=#registryk8sio-optional-supporting-images title="Permanent link">¶</a></h4> <ul> <li>registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing Ingress nGinx via Kinetica Helm Chart)</li> <li>registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c</li> </ul> <h3 id=which-kinetica-core-image-do-i-use>Which Kinetica Core Image do I use?<a class=headerlink href=#which-kinetica-core-image-do-i-use title="Permanent link">¶</a></h3> <table> <thead> <tr> <th style="text-align: left;">Container Image</th> <th style="text-align: center;">Intel (AMD64)</th> <th style="text-align: center;">Intel (AMD64 AVX512)</th> <th style="text-align: center;">Amd (AMD64)</th> <th style="text-align: center;">Graviton (aarch64)</th> <th style="text-align: center;">Apple Silicon (aarch64)</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">kinetica-k8s-cpu</td> <td style="text-align: center;"><span class=twemoji><svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M1 12C1 5.925 5.925 1 12 1s11 4.925 11 11-4.925 11-11 11S1 18.075 1 12Zm16.28-2.72a.751.751 0 0 0-.018-1.042.751.751 0 0 0-1.042-.018l-5.97 5.97-2.47-2.47a.751.751 0 0 0-1.042.018.751.751 0 0 0-.018 1.042l3 3a.75.75 0 0 0 1.06 0Z"/></svg></span></td> <td style="text-align: center;"><span class=twemoji><svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M1 12C1 5.925 5.925 1 12 1s11 4.925 11 11-4.925 11-11 11S1 18.075 1 12Zm16.28-2.72a.751.751 0 0 0-.018-1.042.751.751 0 0 0-1.042-.018l-5.97 5.97-2.47-2.47a.751.751 0 0 0-1.042.018.751.751 0 0 0-.018 1.042l3 3a.75.75 0 0 0 1.06 0Z"/></svg></span>(1)</td> <td style="text-align: center;"><span class=twemoji><svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M1 12C1 5.925 5.925 1 12 1s11 4.925 11 11-4.925 11-11 11S1 18.075 1 12Zm16.28-2.72a.751.751 0 0 0-.018-1.042.751.751 0 0 0-1.042-.018l-5.97 5.97-2.47-2.47a.751.751 0 0 0-1.042.018.751.751 0 0 0-.018 1.042l3 3a.75.75 0 0 0 1.06 0Z"/></svg></span></td> <td style="text-align: center;"><span class=twemoji><svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M1 12C1 5.925 5.925 1 12 1s11 4.925 11 11-4.925 11-11 11S1 18.075 1 12Zm16.28-2.72a.751.751 0 0 0-.018-1.042.751.751 0 0 0-1.042-.018l-5.97 5.97-2.47-2.47a.751.751 0 0 0-1.042.018.751.751 0 0 0-.018 1.042l3 3a.75.75 0 0 0 1.06 0Z"/></svg></span></td> <td style="text-align: center;"><span class=twemoji><svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path 
d="M1 12C1 5.925 5.925 1 12 1s11 4.925 11 11-4.925 11-11 11S1 18.075 1 12Zm16.28-2.72a.751.751 0 0 0-.018-1.042.751.751 0 0 0-1.042-.018l-5.97 5.97-2.47-2.47a.751.751 0 0 0-1.042.018.751.751 0 0 0-.018 1.042l3 3a.75.75 0 0 0 1.06 0Z"/></svg></span></td> </tr> <tr> <td style="text-align: left;">kinetica-k8s-cpu-avx512</td> <td style="text-align: center;"></td> <td style="text-align: center;"><span class=twemoji><svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M1 12C1 5.925 5.925 1 12 1s11 4.925 11 11-4.925 11-11 11S1 18.075 1 12Zm16.28-2.72a.751.751 0 0 0-.018-1.042.751.751 0 0 0-1.042-.018l-5.97 5.97-2.47-2.47a.751.751 0 0 0-1.042.018.751.751 0 0 0-.018 1.042l3 3a.75.75 0 0 0 1.06 0Z"/></svg></span></td> <td style="text-align: center;"></td> <td style="text-align: center;"></td> <td style="text-align: center;"></td> </tr> <tr> <td style="text-align: left;">kinetica-k8s-gpu</td> <td style="text-align: center;"><span class=twemoji><svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M1 12C1 5.925 5.925 1 12 1s11 4.925 11 11-4.925 11-11 11S1 18.075 1 12Zm16.28-2.72a.751.751 0 0 0-.018-1.042.751.751 0 0 0-1.042-.018l-5.97 5.97-2.47-2.47a.751.751 0 0 0-1.042.018.751.751 0 0 0-.018 1.042l3 3a.75.75 0 0 0 1.06 0Z"/></svg></span>(2)</td> <td style="text-align: center;"><span class=twemoji><svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M1 12C1 5.925 5.925 1 12 1s11 4.925 11 11-4.925 11-11 11S1 18.075 1 12Zm16.28-2.72a.751.751 0 0 0-.018-1.042.751.751 0 0 0-1.042-.018l-5.97 5.97-2.47-2.47a.751.751 0 0 0-1.042.018.751.751 0 0 0-.018 1.042l3 3a.75.75 0 0 0 1.06 0Z"/></svg></span>(2)</td> <td style="text-align: center;"><span class=twemoji><svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M1 12C1 5.925 5.925 1 12 1s11 4.925 11 11-4.925 11-11 11S1 18.075 1 12Zm16.28-2.72a.751.751 0 0 0-.018-1.042.751.751 0 0 0-1.042-.018l-5.97 5.97-2.47-2.47a.751.751 0 0 0-1.042.018.751.751 0 0 0-.018 1.042l3 3a.75.75 0 0 0 1.06 0Z"/></svg></span>(2)</td> <td style="text-align: center;"></td> <td style="text-align: center;"></td> </tr> </tbody> </table> <ol> <li>It is preferable on an Intel AVX512 enabled CPU to use the kinetica-k8s-cpu-avx512 container image</li> <li>With a supported nVidia GPU.</li> </ol> </details> <hr> </article> </div> <script>var tabs=__md_get("__tabs");if(Array.isArray(tabs))e:for(var set of document.querySelectorAll(".tabbed-set")){var tab,labels=set.querySelector(".tabbed-labels");for(tab of tabs)for(var label of labels.getElementsByTagName("label"))if(label.innerText.trim()===tab){var input=document.getElementById(label.htmlFor);input.checked=!0;continue e}}</script> <script>var target=document.getElementById(location.hash.slice(1));target&&target.name&&(target.checked=target.name.startsWith("__tabbed_"))</script> </div> <button type=button class="md-top md-icon" data-md-component=top hidden> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M13 20h-2V8l-5.5 5.5-1.42-1.42L12 4.16l7.92 7.92-1.42 1.42L13 8v12Z"/></svg> Back to top </button> </main> <footer class=md-footer> <div class="md-footer-meta md-typeset"> <div class="md-footer-meta__inner md-grid"> <div class=md-copyright> <div class=md-copyright__highlight> Copyright © 2016 - 2024 Kinetica DB Inc. 
</div> Made with <a href=https://squidfunk.github.io/mkdocs-material/ target=_blank rel=noopener> Material for MkDocs </a> </div> </div> </div> </footer> </div> <div class=md-dialog data-md-component=dialog> <div class="md-dialog__inner md-typeset"></div> </div> <div class=md-progress data-md-component=progress role=progressbar></div> <script id=__config type=application/json>{"base": "../..", "features": ["announce.dismiss", "content.tooltips", "content.code.copy", "content.code.annotate", "content.tabs.link", "header.autohide", "navigation.expand", "navigation.footer", "navigation.indexes", "navigation.instant", "navigation.instant.progress", "navigation.prune", "navigation.sections", "navigation.top", "navigation.tracking", "search.highlight", "search.share", "search.suggest", "tags", "navigation.tabs", "navigation.tabs.sticky"], "search": "../../assets/javascripts/workers/search.b8dbb3d2.min.js", "translations": {"clipboard.copied": "Copied to clipboard", "clipboard.copy": "Copy to clipboard", "search.result.more.one": "1 more on this page", "search.result.more.other": "# more on this page", "search.result.none": "No matching documents", "search.result.one": "1 matching document", "search.result.other": "# matching documents", "search.result.placeholder": "Type to start searching", "search.result.term.missing": "Missing", "select.version": "Select version"}, "version": {"provider": "mike"}}</script> <script src=../../assets/javascripts/bundle.c8d2eff1.min.js></script> <script>document$.subscribe(() => {const lightbox = GLightbox({"touchNavigation": true, "loop": false, "zoomable": true, "draggable": true, "openEffect": "zoom", "closeEffect": "zoom", "slideEffect": "slide"});})</script></body> </html> \ No newline at end of file + body[data-md-color-scheme="slate"] .gslide-desc { color: var(--md-default-fg-color);}</style><script src=../../assets/javascripts/glightbox.min.js></script></head> <body dir=ltr data-md-color-scheme=default data-md-color-primary=indigo data-md-color-accent=indigo> <input class=md-toggle data-md-toggle=drawer type=checkbox id=__drawer autocomplete=off> <input class=md-toggle data-md-toggle=search type=checkbox id=__search autocomplete=off> <label class=md-overlay for=__drawer></label> <div data-md-component=skip> <a href=#required-container-images class=md-skip> Skip to content </a> </div> <div data-md-component=announce> </div> <div data-md-color-scheme=default data-md-component=outdated hidden> </div> <header class="md-header md-header--shadow md-header--lifted" data-md-component=header> <nav class="md-header__inner md-grid" aria-label=Header> <a href=https://www.kinetica.com title="Kinetica for Kubernetes" class="md-header__button md-logo" aria-label="Kinetica for Kubernetes" data-md-component=logo> <img src=../../assets/kinetica_logo.png alt=logo> </a> <label class="md-header__button md-icon" for=__drawer> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M3 6h18v2H3V6m0 5h18v2H3v-2m0 5h18v2H3v-2Z"/></svg> </label> <div class=md-header__title data-md-component=header-title> <div class=md-header__ellipsis> <div class=md-header__topic> <span class=md-ellipsis> Kinetica for Kubernetes </span> </div> <div class=md-header__topic data-md-component=header-topic> <span class=md-ellipsis> Kinetica images list for airgapped environments </span> </div> </div> </div> <form class=md-header__option data-md-component=palette> <input class=md-option data-md-color-media data-md-color-scheme=default data-md-color-primary=indigo data-md-color-accent=indigo 
aria-label="Switch to dark mode" type=radio name=__palette id=__palette_0> <label class="md-header__button md-icon" title="Switch to dark mode" for=__palette_1 hidden> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M17 6H7c-3.31 0-6 2.69-6 6s2.69 6 6 6h10c3.31 0 6-2.69 6-6s-2.69-6-6-6zm0 10H7c-2.21 0-4-1.79-4-4s1.79-4 4-4h10c2.21 0 4 1.79 4 4s-1.79 4-4 4zM7 9c-1.66 0-3 1.34-3 3s1.34 3 3 3 3-1.34 3-3-1.34-3-3-3z"/></svg> </label> <input class=md-option data-md-color-media data-md-color-scheme=slate data-md-color-primary=red data-md-color-accent=red aria-label="Switch to light mode" type=radio name=__palette id=__palette_1> <label class="md-header__button md-icon" title="Switch to light mode" for=__palette_0 hidden> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M17 7H7a5 5 0 0 0-5 5 5 5 0 0 0 5 5h10a5 5 0 0 0 5-5 5 5 0 0 0-5-5m0 8a3 3 0 0 1-3-3 3 3 0 0 1 3-3 3 3 0 0 1 3 3 3 3 0 0 1-3 3Z"/></svg> </label> </form> <script>var media,input,key,value,palette=__md_get("__palette");if(palette&&palette.color){"(prefers-color-scheme)"===palette.color.media&&(media=matchMedia("(prefers-color-scheme: light)"),input=document.querySelector(media.matches?"[data-md-color-media='(prefers-color-scheme: light)']":"[data-md-color-media='(prefers-color-scheme: dark)']"),palette.color.media=input.getAttribute("data-md-color-media"),palette.color.scheme=input.getAttribute("data-md-color-scheme"),palette.color.primary=input.getAttribute("data-md-color-primary"),palette.color.accent=input.getAttribute("data-md-color-accent"));for([key,value]of Object.entries(palette.color))document.body.setAttribute("data-md-color-"+key,value)}</script> <label class="md-header__button md-icon" for=__search> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M9.5 3A6.5 6.5 0 0 1 16 9.5c0 1.61-.59 3.09-1.56 4.23l.27.27h.79l5 5-1.5 1.5-5-5v-.79l-.27-.27A6.516 6.516 0 0 1 9.5 16 6.5 6.5 0 0 1 3 9.5 6.5 6.5 0 0 1 9.5 3m0 2C7 5 5 7 5 9.5S7 14 9.5 14 14 12 14 9.5 12 5 9.5 5Z"/></svg> </label> <div class=md-search data-md-component=search role=dialog> <label class=md-search__overlay for=__search></label> <div class=md-search__inner role=search> <form class=md-search__form name=search> <input type=text class=md-search__input name=query aria-label=Search placeholder=Search autocapitalize=off autocorrect=off autocomplete=off spellcheck=false data-md-component=search-query required> <label class="md-search__icon md-icon" for=__search> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M9.5 3A6.5 6.5 0 0 1 16 9.5c0 1.61-.59 3.09-1.56 4.23l.27.27h.79l5 5-1.5 1.5-5-5v-.79l-.27-.27A6.516 6.516 0 0 1 9.5 16 6.5 6.5 0 0 1 3 9.5 6.5 6.5 0 0 1 9.5 3m0 2C7 5 5 7 5 9.5S7 14 9.5 14 14 12 14 9.5 12 5 9.5 5Z"/></svg> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M20 11v2H8l5.5 5.5-1.42 1.42L4.16 12l7.92-7.92L13.5 5.5 8 11h12Z"/></svg> </label> <nav class=md-search__options aria-label=Search> <a href=javascript:void(0) class="md-search__icon md-icon" title=Share aria-label=Share data-clipboard data-clipboard-text data-md-component=search-share tabindex=-1> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M18 16.08c-.76 0-1.44.3-1.96.77L8.91 12.7c.05-.23.09-.46.09-.7 0-.24-.04-.47-.09-.7l7.05-4.11c.54.5 1.25.81 2.04.81a3 3 0 0 0 3-3 3 3 0 0 0-3-3 3 3 0 0 0-3 3c0 .24.04.47.09.7L8.04 9.81C7.5 9.31 6.79 9 6 9a3 3 0 0 0-3 3 3 3 0 0 0 3 3c.79 0 1.5-.31 2.04-.81l7.12 4.15c-.05.21-.08.43-.08.66 0 1.61 1.31 2.91 2.92 2.91 1.61 0 2.92-1.3 2.92-2.91A2.92 
# Kinetica images list for airgapped environments

If you are installing Kinetica with Helm in an air-gapped environment you will either need a Registry Proxy to pass the requests through, or to download the images and push them to your internal Registry.

For information on ways to transfer the files into an air-gapped environment, [see here](../airgapped/ "Ways to transfer images").

### Required Container Images

#### docker.io (Required Kinetica Images for All Installations)

- docker.io/kinetica/kinetica-k8s-operator:v7.2.2-5.ga-1
    - docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-5.ga-1 **or**
    - docker.io/kinetica/kinetica-k8s-cpu-avx512:v7.2.2-5.ga-1 **or**
    - docker.io/kinetica/kinetica-k8s-gpu:v7.2.2-5.ga-1
- docker.io/kinetica/workbench-operator:v7.2.2-5.ga-1
- docker.io/kinetica/workbench:v7.2.2-5.ga-1
- docker.io/kinetica/kinetica-k8s-monitor:v7.2.2-5.ga-1
- docker.io/kinetica/busybox:v7.2.2-5.ga-1
- docker.io/kinetica/fluent-bit:v7.2.2-5.ga-1
- docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga

#### nvcr.io (Required Kinetica Images for GPU Installations using `kinetica-k8s-gpu`)

- nvcr.io/nvidia/gpu-operator:v23.9.1

#### registry.k8s.io (Required Kinetica Images for GPU Installations using `kinetica-k8s-gpu`)

- registry.k8s.io/nfd/node-feature-discovery:v0.14.2

#### docker.io (Required Supporting Images)

- docker.io/bitnami/openldap:2.6.7
- docker.io/alpine/openssl:latest (used by bitnami/openldap)
- docker.io/otel/opentelemetry-collector-contrib:0.95.0

#### quay.io (Required Supporting Images)
- quay.io/brancz/kube-rbac-proxy:v0.14.2

### Optional Container Images

These images are only required if certain features are enabled as part of the Helm installation:

- CertManager
- ingress-nginx

#### quay.io (Optional Supporting Images)

- quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager via the Kinetica Helm Chart)
- quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via the Kinetica Helm Chart)
- quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via the Kinetica Helm Chart)

#### registry.k8s.io (Optional Supporting Images)

- registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing ingress-nginx via the Kinetica Helm Chart)
- registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c

### Which Kinetica Core Image do I use?

| Container Image         | Intel (AMD64) | Intel (AMD64 AVX512) | AMD (AMD64) | Graviton (aarch64) | Apple Silicon (aarch64) |
| :---------------------- | :-----------: | :------------------: | :---------: | :----------------: | :---------------------: |
| kinetica-k8s-cpu        | ✓             | ✓ (1)                | ✓           | ✓                  | ✓                       |
| kinetica-k8s-cpu-avx512 |               | ✓                    |             |                    |                         |
| kinetica-k8s-gpu        | ✓ (2)         | ✓ (2)                | ✓ (2)       |                    |                         |

1. On an Intel CPU with AVX512 enabled, it is preferable to use the kinetica-k8s-cpu-avx512 container image.
2. With a supported NVIDIA GPU.
diff --git a/7.2/GettingStarted/preparation_and_prerequisites/index.html b/7.2/GettingStarted/preparation_and_prerequisites/index.html

...whilst the Kubernetes cluster nodes selected to host the Kinetica DB Pods require the label `app.kinetica.com/pool=compute-gpu`.

Label the Database Nodes:

```bash
kubectl label node k8snode2 app.kinetica.com/pool=compute-gpu
```

**Warning: On-Prem Kinetica SQLAssistant - Node Groups, GPU Counts & VRAM**

Running the Kinetica SQLAssistant locally requires additional GPUs in a separate Node Group labeled `app.kinetica.com/pool=compute-llm`. The On-Prem Kinetica LLM requires **40GB of GPU VRAM**, so GPUs are automatically allocated to the SQLAssistant pod in whatever number provides that 40GB, e.g. 1x A100 GPU or 2x A10G GPUs.

Label Kubernetes Nodes for LLM:

```bash
kubectl label node k8snode3 app.kinetica.com/pool=compute-llm
```

**Warning: Pods Not Scheduling**

If the Kubernetes nodes are not labeled, the Kinetica pods may fail to schedule and sit in a 'Pending' state.
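Before moving on to the chart install, it is worth confirming the labels landed on the nodes you expect. This is plain kubectl; the `-L` flag prints the value of the given label as an extra column:

```bash
# List nodes with their Kinetica pool label; an empty column means unlabeled.
kubectl get nodes -L app.kinetica.com/pool
```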
## Install the kinetica-operators chart

This chart installs the Kinetica K8s operators together with a default-configured database and Workbench UI.

### Add the Kinetica chart repository

Add the repo locally as *kinetica-operators*:

```bash
helm repo add kinetica-operators https://kineticadb.github.io/charts/latest
```

(Example animation: ../../images/helm_repo_add.gif, "Add the Kinetica Helm Repository to the local machine")
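After adding the repository, you can optionally refresh the local chart index and confirm the kinetica-operators charts are visible; both are standard Helm commands, shown here purely as a sanity check:

```bash
# Refresh local repository indexes, then list charts from the new repo.
helm repo update
helm search repo kinetica-operators
```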
### Obtain the default Helm values file

For the generic Kubernetes install, use the following values file without modification. Advanced users with specific requirements may need to adjust parameters in this file.

```bash
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.5/kinetica-operators/values.onPrem.k8s.yaml
```

### Determine the following prior to the chart install

**Info: Default Admin User** - The default admin user in the Helm chart is `kadmin`, but this is configurable. Non-ASCII characters and typographical symbols in the password must be escaped with a "\". For example, `--set dbAdminUser.password="MyPassword\!"`.

1. Obtain a LICENSE-KEY as described in the introduction above.
2. Choose a PASSWORD for the initial administrator user.
3. As the storage class name varies between K8s flavors, and there can be more than one, it must be prescribed in the chart installation. Obtain the DEFAULT-STORAGE-CLASS name with the command:

```bash
kubectl get sc -o name
```

(Example animation: ../../images/find_storage_class.gif, "List all the Storage Classes on the Kubernetes Cluster")

Use the name found after the "/". For example, in `storageclass.storage.k8s.io/local-path`, use "local-path" as the parameter.

**Warning: Amazon EKS** - If installing on Amazon EKS, [see here](../eks/#ebs-csi-driver).

#### Planning access to your Kinetica Cluster

**Existing Ingress Controller?**

If you have an existing Ingress Controller in your Kubernetes cluster and do not want Kinetica to install ingress-nginx to expose its endpoints, you can disable the ingress-nginx installation by editing the values.yaml and changing `install: true` to `install: false`:
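As a rough sketch of what that edit looks like, assuming a top-level block for the ingress-nginx sub-chart (the exact key nesting is an illustrative assumption; locate the actual ingress-nginx section in your downloaded values.onPrem.k8s.yaml):

```yaml
# Hypothetical nesting for illustration only.
ingressNginx:
  install: false  # was 'true'; 'false' skips installing ingress-nginx
```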
diff --git a/7.2/GettingStarted/quickstart/index.html b/7.2/GettingStarted/quickstart/index.html

Tags: Development, Getting Started, Installation

# Quickstart

For the quickstart we have examples for [Kind](https://kind.sigs.k8s.io "Kind Homepage") or [k3s](https://k3s.io "k3s Homepage").

- Kind - suitable for CPU-only installations.
- k3s - suitable for CPU or GPU installations.

**Note: Kubernetes >= 1.25** - The current version of the chart supports Kubernetes version 1.25 and above.

## Please select your target Kubernetes variant:
### Kind (Kubernetes in Docker, kind.sigs.k8s.io)

This installation in a Kind cluster is for trying out the operators and the database in a non-production environment.

**Note: CPU Only** - This method currently only supports installing a CPU version of the database. **Please contact [Kinetica Support](mailto:support@kinetica.com "Kinetica Support Email") to request a trial key.**

#### Create Kind Cluster 1.29

Create a new Kind Cluster:

```bash
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/kind.yaml
```
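The downloaded kind.yaml is a Kind cluster configuration file. Assuming the standard Kind CLI, creating the cluster from it looks like the sketch below; the cluster name is a placeholder, not something the chart mandates.

```bash
# Create a local Kubernetes cluster from the downloaded configuration;
# "kinetica" is an illustrative cluster name.
kind create cluster --name kinetica --config kind.yaml
```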
Install k3s 1.29 </span> </a> </li> <li class=md-nav__item> <a href=#k3s-install-kinetica-operators-including-a-sample-db-to-try-out class=md-nav__link> <span class=md-ellipsis> K3s - Install kinetica-operators including a sample db to try out </span> </a> </li> <li class=md-nav__item> <a href=#k3s-install-the-kinetica-operators-chart-cpu class=md-nav__link> <span class=md-ellipsis> K3S - Install the Kinetica-Operators Chart (CPU) </span> </a> </li> <li class=md-nav__item> <a href=#k3s-install-the-kinetica-operators-chart-gpu class=md-nav__link> <span class=md-ellipsis> K3S - Install the Kinetica-Operators Chart (GPU) </span> </a> </li> <li class=md-nav__item> <a href=#uninstall-k3s class=md-nav__link> <span class=md-ellipsis> Uninstall k3s </span> </a> </li> </ul> </nav> </li> </ul> </nav> </li> </ul> </nav> </li> <li class=md-nav__item> <a href=../preparation_and_prerequisites/ class=md-nav__link> <span class=md-ellipsis> Preparation & Prerequisites </span> </a> </li> <li class="md-nav__item md-nav__item--pruned md-nav__item--nested"> <a href=../installation/ class=md-nav__link> <span class=md-ellipsis> Installation </span> <span class="md-nav__icon md-icon"></span> </a> </li> <li class=md-nav__item> <a href=../eks/ class=md-nav__link> <span class=md-ellipsis> Amazon EKS </span> </a> </li> <li class=md-nav__item> <a href=../aks/ class=md-nav__link> <span class=md-ellipsis> Azure AKS </span> </a> </li> </ul> </nav> </li> <li class="md-nav__item md-nav__item--section md-nav__item--nested"> <input class="md-nav__toggle md-toggle md-toggle--indeterminate" type=checkbox id=__nav_2_3> <div class="md-nav__link md-nav__container"> <a href=../../Advanced/ class="md-nav__link "> <span class=md-ellipsis> Advanced Topics </span> </a> <label class="md-nav__link " for=__nav_2_3 id=__nav_2_3_label tabindex> <span class="md-nav__icon md-icon"></span> </label> </div> <nav class=md-nav data-md-level=2 aria-labelledby=__nav_2_3_label aria-expanded=false> <label class=md-nav__title for=__nav_2_3> <span class="md-nav__icon md-icon"></span> Advanced Topics </label> <ul class=md-nav__list data-md-scrollfix> <li class=md-nav__item> <a href=../../Advanced/alternative_charts/ class=md-nav__link> <span class=md-ellipsis> Alternative Charts </span> </a> </li> <li class=md-nav__item> <a href=../../Advanced/ingress_configuration/ class=md-nav__link> <span class=md-ellipsis> Ingress Configuration </span> <span class="md-status md-status--new" title="Recently added"> </span> </a> </li> <li class=md-nav__item> <a href=../../Advanced/airgapped/ class=md-nav__link> <span class=md-ellipsis> Air-Gapped Environments </span> </a> </li> <li class=md-nav__item> <a href=../../Advanced/minio_s3_dev_test/ class=md-nav__link> <span class=md-ellipsis> S3 Storage for Dev/Test </span> <span class="md-status md-status--new" title="Recently added"> </span> </a> </li> <li class=md-nav__item> <a href=../../Advanced/velero_backup_restore.md class=md-nav__link> <span class=md-ellipsis> Enabling Backup/Restore </span> </a> </li> <li class=md-nav__item> <a href=../../Advanced/kinetica_mac_arm_k8s/ class=md-nav__link> <span class=md-ellipsis> Kinetica DB on OS X (Arm64) </span> </a> </li> </ul> </nav> </li> </ul> </nav> </li> <li class=md-nav__item> <a href=../../Advanced/ class=md-nav__link> <span class=md-ellipsis> Advanced </span> </a> </li> <li class="md-nav__item md-nav__item--pruned md-nav__item--nested"> <a href=../../Operations/ class=md-nav__link> <span class=md-ellipsis> Operations </span> <span class="md-nav__icon md-icon"></span> 
</a> </li> <li class="md-nav__item md-nav__item--pruned md-nav__item--nested"> <a href=../../Administration/ class=md-nav__link> <span class=md-ellipsis> Administration </span> <span class="md-nav__icon md-icon"></span> </a> </li> <li class="md-nav__item md-nav__item--pruned md-nav__item--nested"> <a href=../../Architecture/ class=md-nav__link> <span class=md-ellipsis> Architecture & Design </span> <span class="md-nav__icon md-icon"></span> </a> </li> <li class="md-nav__item md-nav__item--pruned md-nav__item--nested"> <a href=../../Support/ class=md-nav__link> <span class=md-ellipsis> Support </span> <span class="md-nav__icon md-icon"></span> </a> </li> <li class="md-nav__item md-nav__item--pruned md-nav__item--nested"> <a href=../../Reference/ class=md-nav__link> <span class=md-ellipsis> Reference </span> <span class="md-nav__icon md-icon"></span> </a> </li> <li class=md-nav__item> <a href=../../tags/ class=md-nav__link> <span class=md-ellipsis> Categories </span> </a> </li> </ul> </nav> </div> </div> </div> <div class="md-sidebar md-sidebar--secondary" data-md-component=sidebar data-md-type=toc> <div class=md-sidebar__scrollwrap> <div class=md-sidebar__inner> <nav class="md-nav md-nav--secondary" aria-label="Table of contents"> <label class=md-nav__title for=__toc> <span class="md-nav__icon md-icon"></span> Table of contents </label> <ul class=md-nav__list data-md-component=toc data-md-scrollfix> <li class=md-nav__item> <a href=#please-select-your-target-kubernetes-variant class=md-nav__link> <span class=md-ellipsis> Please select your target Kubernetes variant: </span> </a> <nav class=md-nav aria-label="Please select your target Kubernetes variant:"> <ul class=md-nav__list> <li class=md-nav__item> <a href=#kind-kubernetes-in-docker-kindsigsk8sio class=md-nav__link> <span class=md-ellipsis> Kind (kubernetes in docker kind.sigs.k8s.io) </span> </a> <nav class=md-nav aria-label="Kind (kubernetes in docker kind.sigs.k8s.io)"> <ul class=md-nav__list> <li class=md-nav__item> <a href=#create-kind-cluster-129 class=md-nav__link> <span class=md-ellipsis> Create Kind Cluster 1.29 </span> </a> </li> <li class=md-nav__item> <a href=#kind-install-kinetica-operators-including-a-sample-db-to-try-out class=md-nav__link> <span class=md-ellipsis> Kind - Install kinetica-operators including a sample db to try out </span> </a> <nav class=md-nav aria-label="Kind - Install kinetica-operators including a sample db to try out"> <ul class=md-nav__list> <li class=md-nav__item> <a href=#kind-install-the-kinetica-operators-chart class=md-nav__link> <span class=md-ellipsis> Kind - Install the Kinetica-Operators Chart </span> </a> </li> </ul> </nav> </li> </ul> </nav> </li> <li class=md-nav__item> <a href=#k3s-k3sio class=md-nav__link> <span class=md-ellipsis> k3s (k3s.io) </span> </a> <nav class=md-nav aria-label="k3s (k3s.io)"> <ul class=md-nav__list> <li class=md-nav__item> <a href=#install-k3s-129 class=md-nav__link> <span class=md-ellipsis> Install k3s 1.29 </span> </a> </li> <li class=md-nav__item> <a href=#k3s-install-kinetica-operators-including-a-sample-db-to-try-out class=md-nav__link> <span class=md-ellipsis> K3s - Install kinetica-operators including a sample db to try out </span> </a> </li> <li class=md-nav__item> <a href=#k3s-install-the-kinetica-operators-chart-cpu class=md-nav__link> <span class=md-ellipsis> K3S - Install the Kinetica-Operators Chart (CPU) </span> </a> </li> <li class=md-nav__item> <a href=#k3s-install-the-kinetica-operators-chart-gpu class=md-nav__link> <span class=md-ellipsis> K3S 
<div class=md-content data-md-component=content> <article class="md-content__inner md-typeset"> <nav class=md-tags> <a href=../../tags/#development class=md-tag>Development</a> <a href=../../tags/#getting-started class=md-tag>Getting Started</a> <a href=../../tags/#installation class=md-tag>Installation</a> </nav> <h1 id=quickstart><span class=twemoji><svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M15 4a8 8 0 0 1 8 8 8 8 0 0 1-8 8 8 8 0 0 1-8-8 8 8 0 0 1 8-8m0 2a6 6 0 0 0-6 6 6 6 0 0 0 6 6 6 6 0 0 0 6-6 6 6 0 0 0-6-6m-1 2h1.5v3.78l2.33 2.33-1.06 1.06L14 12.4V8M2 18a1 1 0 0 1-1-1 1 1 0 0 1 1-1h3.83c.31.71.71 1.38 1.17 2H2m1-5a1 1 0 0 1-1-1 1 1 0 0 1 1-1h2.05L5 12l.05 1H3m1-5a1 1 0 0 1-1-1 1 1 0 0 1 1-1h3c-.46.62-.86 1.29-1.17 2H4Z"/></svg></span> Quickstart <span class=twemoji><svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M7.73 11.93c0 1.72-.02 1.83-.23 2.07-.19.17-.38.23-.76.23l-.51.01-.03-2.27-.02-2.27h.52c.35 0 .6.07.77.21.24.21.26.25.26 2.02M22 7.5v9c0 1.11-.89 2-2 2H4c-1.11 0-2-.89-2-2v-9c0-1.11.89-2 2-2h16c1.11 0 2 .89 2 2M8.93 11.73c-.03-1.84-.05-1.99-.29-2.39-.4-.68-.85-.84-2.36-.84H5v7h1.21c1.33 0 1.89-.17 2.29-.71.41-.53.5-.98.43-3.06m4.19-3.23h-1.48c-1.49 0-1.5 0-1.71.28S9.7 9.21 9.7 12v2.96l.27.27c.25.27.31.27 1.71.27h1.44v-1.19l-1.09-.04-1.1-.03V12.6l.68-.03.66-.04v-1.19h-1.39V9.7h2.24V8.5m5.88.06c0-.06-.3-.06-.66-.06l-.68.06-.59 2.35c-.38 1.48-.62 2.27-.67 2.13-.08-.27-1.14-4.44-1.14-4.49 0-.05-.31-.05-.68-.05h-.69l.41 1.55c.2.87.59 2.28.81 3.15.34 1.35.46 1.65.75 1.94.2.22.45.36.61.36.33 0 .76-.34.9-.73C17.5 14.5 19 8.69 19 8.56Z"/></svg></span><a class=headerlink href=#quickstart title="Permanent link">¶</a></h1> <p>For the quickstart we have examples for <a href=https://kind.sigs.k8s.io title="Kind Homepage">Kind</a> or <a href=https://k3s.io title="k3s Homepage">k3s</a>.</p> <ul> <li>Kind - is suitable for CPU only installations.</li> <li>k3s - is suitable for CPU or GPU installations.</li> </ul> <div class="admonition note"> <p class=admonition-title>Kubernetes >= 1.25</p> <p>The current version of the chart supports kubernetes version 1.25 and above.</p> </div> <h2 id=please-select-your-target-kubernetes-variant>Please select your target Kubernetes variant:<a class=headerlink href=#please-select-your-target-kubernetes-variant title="Permanent link">¶</a></h2> <div class="tabbed-set tabbed-alternate" data-tabs=1:2><input checked=checked id=kind name=__tabbed_1 type=radio><input id=k3s name=__tabbed_1 type=radio><div class=tabbed-labels><label for=kind>kind</label><label for=k3s><span class=twemoji><svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M21.46 2.172H2.54A2.548 2.548 0 0 0 0 4.712v14.575a2.548 2.548 0 0 0 2.54 2.54h18.92a2.548 2.548 0 0 0 2.54-2.54V4.713a2.548 2.548 0 0 0-2.54-2.54ZM10.14 16.465 5.524 19.15a1.235 1.235 0 1 1-1.242-2.137L8.9 14.33a1.235 1.235 0 1 1 1.241 2.136zm1.817-4.088h-.006a1.235 1.235 0 0 1-1.23-1.24l.023-5.32a1.236 1.236 0 0 1 1.236-1.23h.005a1.235 1.235 0 0 1 1.23 1.241l-.023 5.32a1.236 1.236 0 0 1-1.235 1.23zm8.17 6.32a1.235 1.235 0 0 1-1.688.453l-4.624-2.67a1.235 1.235 0 1 1 1.235-2.14l4.624 2.67a1.235 1.235 0 0 1 .452 1.688z"/></svg></span> k3s</label></div> <div class=tabbed-content> <div class=tabbed-block> <h3
id=kind-kubernetes-in-docker-kindsigsk8sio>Kind (kubernetes in docker kind.sigs.k8s.io)<a class=headerlink href=#kind-kubernetes-in-docker-kindsigsk8sio title="Permanent link">¶</a></h3> <p>This installation in a kind cluster is for trying out the operators and the database in a non-production environment.</p> <div class="admonition note"> <p class=admonition-title>CPU Only</p> <p>This method currently only supports installing a CPU version of the database.</p> <p><strong>Please contact <a href=mailto:support@kinetica.com title="Kinetica Support Email">Kinetica Support</a> to request a trial key.</strong></p> </div> <h4 id=create-kind-cluster-129>Create Kind Cluster 1.29<a class=headerlink href=#create-kind-cluster-129 title="Permanent link">¶</a></h4> <div class=highlight><span class=filename>Create a new Kind Cluster</span><pre><span></span><code><span id=__span-0-1><a id=__codelineno-0-1 name=__codelineno-0-1 href=#__codelineno-0-1></a>wget<span class=w> </span>https://raw.githubusercontent.com/kineticadb/charts/72.2.5/kinetica-operators/kind.yaml </span><span id=__span-0-2><a id=__codelineno-0-2 name=__codelineno-0-2 href=#__codelineno-0-2></a>kind<span class=w> </span>create<span class=w> </span>cluster<span class=w> </span>--name<span class=w> </span>kinetica<span class=w> </span>--config<span class=w> </span>kind.yaml </span></code></pre></div> <div class=highlight><span class=filename>List Kind clusters</span><pre><span></span><code><span id=__span-1-1><a id=__codelineno-1-1 name=__codelineno-1-1 href=#__codelineno-1-1></a>kind<span class=w> </span>get<span class=w> </span>clusters </span></code></pre></div> <div class="admonition tip"> <p class=admonition-title>Set Kubernetes Context</p> <p>Please set your Kubernetes Context to <code>kind-kinetica</code> before performing the following steps.</p> </div>
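<p>One way to do this is with <code>kubectl config</code>; <code>kind</code> prefixes cluster names with <code>kind-</code>, so the context for the cluster created above is <code>kind-kinetica</code>: -</p> <div class=highlight><span class=filename>Set the kubectl context</span><pre><code>kubectl config use-context kind-kinetica
kubectl cluster-info</code></pre></div>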
<h4 id=kind-install-kinetica-operators-including-a-sample-db-to-try-out>Kind - Install kinetica-operators including a sample db to try out<a class=headerlink href=#kind-install-kinetica-operators-including-a-sample-db-to-try-out title="Permanent link">¶</a></h4> <p>Review the values file <code>charts/kinetica-operators/values.onPrem.kind.yaml</code>. It installs the operators and a simple database with Workbench for a non-production try-out.</p> <p>It also creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.</p> <h5 id=kind-install-the-kinetica-operators-chart>Kind - Install the Kinetica-Operators Chart<a class=headerlink href=#kind-install-the-kinetica-operators-chart title="Permanent link">¶</a></h5> <div class=highlight><span class=filename>Add Kinetica Operators Chart Repo</span><pre><span></span><code><span id=__span-2-1><a id=__codelineno-2-1 name=__codelineno-2-1 href=#__codelineno-2-1></a>helm<span class=w> </span>repo<span class=w> </span>add<span class=w> </span>kinetica-operators<span class=w> </span>https://kineticadb.github.io/charts/latest </span></code></pre></div> <div class="admonition tip"> <p class=admonition-title>FQDN or Local Access</p> <p>By default we create an ingress pointing towards <code>local.kinetica</code>. If you have a domain pointing to your machine, replace/set the FQDN in the <code>values.yaml</code> with the correct domain name, or set it by adding <code>--set</code>.</p> <p>If your local machine does not have a domain name, add the following entry to your <code>/etc/hosts</code> file or equivalent.</p> <div class=highlight><span class=filename>Configure local access - /etc/hosts</span><pre><span></span><code><span id=__span-3-1><a id=__codelineno-3-1 name=__codelineno-3-1 href=#__codelineno-3-1></a><span class=m>127</span>.0.0.1<span class=w> </span>local.kinetica -</span></code></pre></div> </div> <div class=highlight><span class=filename>Get & install the Kinetica-Operators Chart</span><pre><span></span><code><span id=__span-4-1><a id=__codelineno-4-1 name=__codelineno-4-1 href=#__codelineno-4-1></a>wget<span class=w> </span>https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/values.onPrem.kind.yaml +</span></code></pre></div> </div> <div class=highlight><span class=filename>Get & install the Kinetica-Operators Chart</span><pre><span></span><code><span id=__span-4-1><a id=__codelineno-4-1 name=__codelineno-4-1 href=#__codelineno-4-1></a>wget<span class=w> </span>https://raw.githubusercontent.com/kineticadb/charts/72.2.5/kinetica-operators/values.onPrem.kind.yaml </span><span id=__span-4-2><a id=__codelineno-4-2 name=__codelineno-4-2 href=#__codelineno-4-2></a> </span><span id=__span-4-3><a id=__codelineno-4-3 name=__codelineno-4-3 href=#__codelineno-4-3></a>helm<span class=w> </span>-n<span class=w> </span>kinetica-system<span class=w> </span>upgrade<span class=w> </span>-i<span class=w> </span>kinetica-operators<span class=w> </span>kinetica-operators/kinetica-operators<span class=w> </span>--create-namespace<span class=w> </span>--values<span class=w> </span>values.onPrem.kind.yaml<span class=w> </span>--set<span class=w> </span>db.gpudbCluster.license<span class=o>=</span><span class=s2>"your_license_key"</span><span class=w> </span>--set<span class=w> </span>dbAdminUser.password<span class=o>=</span><span class=s2>"your_password"</span> </span></code></pre></div> <p>or if you have been asked by the Kinetica Support team to try a development version</p> <div class=highlight><span class=filename>Using a development version</span><pre><span></span><code><span id=__span-5-1><a id=__codelineno-5-1 name=__codelineno-5-1 href=#__codelineno-5-1></a>helm<span class=w> </span>search<span class=w> </span>repo<span class=w> </span>kinetica-operators<span class=w> </span>--devel<span class=w> </span>--versions </span><span id=__span-5-2><a id=__codelineno-5-2 name=__codelineno-5-2 href=#__codelineno-5-2></a> -</span><span id=__span-5-3><a id=__codelineno-5-3 name=__codelineno-5-3 href=#__codelineno-5-3></a>helm<span class=w> </span>-n<span class=w> </span>kinetica-system<span class=w> </span>upgrade<span class=w> </span>-i<span class=w> </span>kinetica-operators<span class=w> </span>kinetica-operators/kinetica-operators/<span class=w> </span>--create-namespace<span class=w> </span>--values<span class=w> </span>values.onPrem.kind.yaml<span class=w> </span>--set<span class=w> </span>db.gpudbCluster.license<span class=o>=</span><span class=s2>"your_license_key"</span><span class=w> </span>--set<span class=w> </span>dbAdminUser.password<span class=o>=</span><span class=s2>"your_password"</span><span class=w> </span>--devel<span class=w> </span>--version<span class=w> </span><span class=m>72</span>.2.3 +</span><span id=__span-5-3><a id=__codelineno-5-3 name=__codelineno-5-3 href=#__codelineno-5-3></a>helm<span class=w> </span>-n<span class=w> </span>kinetica-system<span class=w> </span>upgrade<span class=w> </span>-i<span class=w> </span>kinetica-operators<span class=w> </span>kinetica-operators/kinetica-operators<span class=w> </span>--create-namespace<span class=w> </span>--values<span class=w> </span>values.onPrem.kind.yaml<span class=w> </span>--set<span class=w> </span>db.gpudbCluster.license<span class=o>=</span><span class=s2>"your_license_key"</span><span class=w> </span>--set<span class=w> </span>dbAdminUser.password<span class=o>=</span><span class=s2>"your_password"</span><span class=w> </span>--devel<span class=w> </span>--version<span class=w> </span><span class=m>72</span>.2.5 </span></code></pre></div>
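<p>Once the chart install has begun you can follow progress by watching the deployed pods; a minimal check, assuming the namespaces used in this guide (<code>kinetica-system</code> for the operators and <code>gpudb</code> for the database cluster): -</p> <div class=highlight><span class=filename>Verify the installation</span><pre><code>helm -n kinetica-system list
kubectl -n kinetica-system get pods
kubectl -n gpudb get pods</code></pre></div>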
<div class="admonition success"> <p class=admonition-title>Accessing the Workbench</p> <p>You should be able to access the workbench at <a href=http://local.kinetica title="Workbench URL">http://local.kinetica</a></p> </div> </div> <div class=tabbed-block> <h3 id=k3s-k3sio>k3s (k3s.io)<a class=headerlink href=#k3s-k3sio title="Permanent link">¶</a></h3> <h4 id=install-k3s-129>Install k3s 1.29<a class=headerlink href=#install-k3s-129 title="Permanent link">¶</a></h4> <div class=highlight><span class=filename>Install k3s</span><pre><span></span><code><span id=__span-6-1><a id=__codelineno-6-1 name=__codelineno-6-1 href=#__codelineno-6-1></a>curl<span class=w> </span>-sfL<span class=w> </span>https://get.k3s.io<span class=w> </span><span class=p>|</span><span class=w> </span><span class=nv>INSTALL_K3S_EXEC</span><span class=o>=</span><span class=s2>"--disable=traefik --node-name kinetica-master --token 12345"</span><span class=w> </span><span class=nv>K3S_KUBECONFIG_OUTPUT</span><span class=o>=</span>~/.kube/config_k3s<span class=w> </span><span class=nv>K3S_KUBECONFIG_MODE</span><span class=o>=</span><span class=m>644</span><span class=w> </span><span class=nv>INSTALL_K3S_VERSION</span><span class=o>=</span>v1.29.2+k3s1<span class=w> </span>sh<span class=w> </span>- </span></code></pre></div> <p>Once installed we need to set the current Kubernetes context to point to the newly created k3s cluster.</p> <p>Select if you want local or remote access to the Kubernetes Cluster: -</p> <div class="tabbed-set tabbed-alternate" data-tabs=2:2><input checked=checked id=local-access name=__tabbed_2 type=radio><input id=remote-access name=__tabbed_2 type=radio><div class=tabbed-labels><label for=local-access>Local Access</label><label for=remote-access>Remote Access</label></div> <div class=tabbed-content> <div class=tabbed-block> <p>For local-only access to the cluster we can simply set the <code>KUBECONFIG</code> environment variable.</p> <div class=highlight><span class=filename>Set kubectl context</span><pre><span></span><code><span id=__span-7-1><a id=__codelineno-7-1 name=__codelineno-7-1 href=#__codelineno-7-1></a><span class=nb>export</span><span class=w> </span><span class=nv>KUBECONFIG</span><span class=o>=</span>/etc/rancher/k3s/k3s.yaml </span></code></pre></div> </div> <div class=tabbed-block> <p>For remote access, i.e. from outside the host/VM on which k3s is installed: -</p> <p>Copy <code>/etc/rancher/k3s/k3s.yaml</code> from the k3s server to <code>~/.kube/config</code> on the machine outside the cluster. 
Then edit the file and replace the value of the server field with the IP or name of your K3s server.</p> <div class=highlight><span class=filename>Copy the kube config and set the context</span><pre><span></span><code><span id=__span-8-1><a id=__codelineno-8-1 name=__codelineno-8-1 href=#__codelineno-8-1></a>sudo<span class=w> </span>chmod<span class=w> </span><span class=m>600</span><span class=w> </span>/etc/rancher/k3s/k3s.yaml @@ -28,13 +28,13 @@ </span><span id=__span-8-6><a id=__codelineno-8-6 name=__codelineno-8-6 href=#__codelineno-8-6></a><span class=nb>export</span><span class=w> </span><span class=nv>KUBECONFIG</span><span class=o>=</span>~/.kube/config </span></code></pre></div> </div> </div> </div>
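<p>Whichever access method you chose, a quick sanity check confirms <code>kubectl</code> is now pointing at the k3s cluster: -</p> <div class=highlight><span class=filename>Verify cluster access</span><pre><code>kubectl config current-context
kubectl get nodes</code></pre></div>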
<h4 id=k3s-install-kinetica-operators-including-a-sample-db-to-try-out>K3s - Install kinetica-operators including a sample db to try out<a class=headerlink href=#k3s-install-kinetica-operators-including-a-sample-db-to-try-out title="Permanent link">¶</a></h4> <p>Review the values file <code>charts/kinetica-operators/values.onPrem.k3s.yaml</code>. It installs the operators and a simple database with Workbench for a non-production try-out.</p> <div class="admonition tip"> <p class=admonition-title>FQDN or Local Access</p> <p>By default we create an ingress pointing towards <code>local.kinetica</code>. If you have a domain pointing to your machine, replace/set the FQDN in the <code>values.yaml</code> with the correct domain name, or set it by adding <code>--set</code>.</p> <p>If your local machine does not have a domain name, add the following entry to your <code>/etc/hosts</code> file or equivalent.</p> <div class=highlight><span class=filename>Configure local access - /etc/hosts</span><pre><span></span><code><span id=__span-9-1><a id=__codelineno-9-1 name=__codelineno-9-1 href=#__codelineno-9-1></a><span class=m>127</span>.0.0.1<span class=w> </span>local.kinetica </span></code></pre></div> </div> <h4 id=k3s-install-the-kinetica-operators-chart-cpu>K3S - Install the Kinetica-Operators Chart (CPU)<a class=headerlink href=#k3s-install-the-kinetica-operators-chart-cpu title="Permanent link">¶</a></h4> <div class=highlight><span class=filename>Add Kinetica Operators Chart Repo</span><pre><span></span><code><span id=__span-10-1><a id=__codelineno-10-1 name=__codelineno-10-1 href=#__codelineno-10-1></a>helm<span class=w> </span>repo<span class=w> </span>add<span class=w> </span>kinetica-operators<span class=w> </span>https://kineticadb.github.io/charts/latest -</span></code></pre></div> <div class=highlight><span class=filename>Download Template values.yaml</span><pre><span></span><code><span id=__span-11-1><a id=__codelineno-11-1 name=__codelineno-11-1 href=#__codelineno-11-1></a>wget<span class=w> </span>https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/values.onPrem.k3s.yaml +</span></code></pre></div> <div class=highlight><span class=filename>Download Template values.yaml</span><pre><span></span><code><span id=__span-11-1><a id=__codelineno-11-1 name=__codelineno-11-1 href=#__codelineno-11-1></a>wget<span class=w> </span>https://raw.githubusercontent.com/kineticadb/charts/72.2.5/kinetica-operators/values.onPrem.k3s.yaml </span><span id=__span-11-2><a id=__codelineno-11-2 name=__codelineno-11-2 href=#__codelineno-11-2></a> </span><span id=__span-11-3><a id=__codelineno-11-3 name=__codelineno-11-3 href=#__codelineno-11-3></a>helm<span class=w> </span>-n<span class=w> </span>kinetica-system<span class=w> </span>install<span class=w> </span>kinetica-operators<span class=w> </span>kinetica-operators/kinetica-operators<span class=w> </span>--create-namespace<span class=w> </span>--values<span class=w> </span>values.onPrem.k3s.yaml<span class=w> </span>--set<span class=w> </span>db.gpudbCluster.license<span class=o>=</span><span class=s2>"your_license_key"</span><span class=w> </span>--set<span class=w> </span>dbAdminUser.password<span class=o>=</span><span class=s2>"your_password"</span> </span></code></pre></div> <p>or if you have been asked by the Kinetica Support team to try a development version</p> <div class=highlight><span class=filename>Using a development version</span><pre><span></span><code><span id=__span-12-1><a id=__codelineno-12-1 name=__codelineno-12-1 href=#__codelineno-12-1></a>helm<span class=w> </span>search<span class=w> </span>repo<span class=w> </span>kinetica-operators<span class=w> </span>--devel<span class=w> </span>--versions </span><span id=__span-12-2><a id=__codelineno-12-2 name=__codelineno-12-2 href=#__codelineno-12-2></a> </span><span id=__span-12-3><a id=__codelineno-12-3 name=__codelineno-12-3 href=#__codelineno-12-3></a>helm<span class=w> </span>-n<span class=w> </span>kinetica-system<span class=w> </span>install<span class=w> </span>kinetica-operators<span class=w> </span>kinetica-operators/kinetica-operators<span class=w> </span>--create-namespace<span class=w> </span>--values<span class=w> </span>values.onPrem.k3s.yaml<span class=w> </span>--set<span class=w> </span>db.gpudbCluster.license<span class=o>=</span><span class=s2>"your_license_key"</span><span class=w> </span>--set<span class=w> </span>dbAdminUser.password<span class=o>=</span><span class=s2>"your_password"</span><span class=w> </span>--devel<span class=w> </span>--version<span class=w> </span><span class=m>7</span>.2.0-2.rc-2 -</span></code></pre></div> <h4 id=k3s-install-the-kinetica-operators-chart-gpu>K3S - Install the Kinetica-Operators Chart (GPU)<a class=headerlink href=#k3s-install-the-kinetica-operators-chart-gpu title="Permanent link">¶</a></h4> <p>If you wish to try out the GPU capabilities, you can use the following values file, provided you are in a nvidia gpu capable machine.</p> <div class=highlight><span class=filename>k3s GPU Installation</span><pre><span></span><code><span id=__span-13-1><a id=__codelineno-13-1 name=__codelineno-13-1 href=#__codelineno-13-1></a>wget<span class=w> </span>https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/values.onPrem.k3s.gpu.yaml +</span></code></pre></div> <h4 id=k3s-install-the-kinetica-operators-chart-gpu>K3S - Install the Kinetica-Operators Chart (GPU)<a class=headerlink href=#k3s-install-the-kinetica-operators-chart-gpu title="Permanent link">¶</a></h4> <p>If you wish to try out the GPU capabilities, you can use the following values file, provided you are on an NVIDIA GPU-capable machine.</p> <div class=highlight><span class=filename>k3s GPU Installation</span><pre><span></span><code><span id=__span-13-1><a id=__codelineno-13-1 name=__codelineno-13-1 href=#__codelineno-13-1></a>wget<span class=w> </span>https://raw.githubusercontent.com/kineticadb/charts/72.2.5/kinetica-operators/values.onPrem.k3s.gpu.yaml </span><span id=__span-13-2><a id=__codelineno-13-2 name=__codelineno-13-2 href=#__codelineno-13-2></a> </span><span id=__span-13-3><a id=__codelineno-13-3 name=__codelineno-13-3 href=#__codelineno-13-3></a>helm<span class=w> </span>-n<span class=w> </span>kinetica-system<span class=w> </span>install<span class=w> 
</span>kinetica-operators<span class=w> </span>kinetica-operators/kinetica-operators<span class=w> </span>--create-namespace<span class=w> </span>--values<span class=w> </span>values.onPrem.k3s.gpu.yaml<span class=w> </span>--set<span class=w> </span>db.gpudbCluster.license<span class=o>=</span><span class=s2>"your_license_key"</span><span class=w> </span>--set<span class=w> </span>dbAdminUser.password<span class=o>=</span><span class=s2>"your_password"</span> </span></code></pre></div> <div class="admonition success"> <p class=admonition-title>Accessing the Workbench</p> <p>You should be able to access the workbench at <a href=http://local.kinetica title="Workbench URL">http://local.kinetica</a></p> </div> <h4 id=uninstall-k3s>Uninstall k3s<a class=headerlink href=#uninstall-k3s title="Permanent link">¶</a></h4> <div class=highlight><span class=filename>uninstall k3s</span><pre><span></span><code><span id=__span-14-1><a id=__codelineno-14-1 name=__codelineno-14-1 href=#__codelineno-14-1></a>/usr/local/bin/k3s-uninstall.sh diff --git a/7.2/index.yaml b/7.2/index.yaml index c3e7a3b..a22ed6c 100644 --- a/7.2/index.yaml +++ b/7.2/index.yaml @@ -12,6 +12,31 @@ entries: - https://kineticadb.github.io/charts/7.2/genai-operator-72.2.3-dev.2.tgz version: 72.2.3-dev.2 kinetica-operators: + - apiVersion: v2 + appVersion: v7.2.2-5.ga-1 + created: "2024-12-11T18:22:37.734645706Z" + dependencies: + - name: openldap + repository: "" + - condition: certManager.install + name: cert-manager + repository: "" + - condition: ingressNginx.install + name: ingress-nginx + repository: "" + - condition: gpuOperator.install + name: gpu-operator + repository: "" + - condition: supportBundle.install + name: support-bundle + repository: "" + description: A Helm chart for deploying Kinetica Operators + digest: a7930cbea4fd4e72281ed34be82e772e2abf4a197c0a886f405a9cccf1f979cf + name: kinetica-operators + type: application + urls: + - https://kineticadb.github.io/charts/7.2/kinetica-operators-72.2.5.tgz + version: 72.2.5 - apiVersion: v2 appVersion: v7.2.2-5.rc-1 created: "2024-12-11T17:14:13.961857225Z" @@ -1944,4 +1969,4 @@ entries: urls: - https://kineticadb.github.io/charts/7.2/kinetica-operators-0.0.0.tgz version: 0.0.0 -generated: "2024-12-11T17:14:13.903630399Z" +generated: "2024-12-11T18:22:37.673713298Z" diff --git a/7.2/kinetica-operators-72.2.5.tgz b/7.2/kinetica-operators-72.2.5.tgz new file mode 100644 index 0000000..e86c711 Binary files /dev/null and b/7.2/kinetica-operators-72.2.5.tgz differ diff --git a/7.2/search/search_index.json b/7.2/search/search_index.json index 57e2a34..0c68f65 100644 --- a/7.2/search/search_index.json +++ b/7.2/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"<code>kineticadb/charts</code>","text":"<p>Accelerate your AI and analytics. Kinetica harnesses real-time data and the power of CPUs & GPUs for lightning-fast insights; it is uniquely designed for fast and flexible analytics on large volumes of changing data with incredible performance.</p> <p>Kinetica DB can be quickly installed into Kubernetes using Helm.</p> <ul> <li> <p> Set up in 15 minutes </p> <p>Install the Kinetica DB locally on <code>Kind</code> or <code>k3s</code> with <code>helm</code> to get up and running in minutes. Quickstart</p> </li> <li> <p> Prepare to Install</p> <p>What you need to know & do before beginning a production installation. 
Preparation and Prerequisites</p> </li> <li> <p> Production DB Installation</p> <p>Install the Kinetica DB with helm to get up and running quickly Installation</p> </li> <li> <p> Channel Your Inner Ninja</p> <p>Advanced Installation Topics which go beyond the basic installation. Advanced Topics</p> </li> <li> <p> Running and Managing the Platform</p> <p>Metrics, Monitoring, Logs and Telemetry Distribution. Operations</p> </li> <li> <p> Product Architecture</p> <p>The Modern Analytics Database Architected for Performance at Scale. Architecture</p> </li> <li> <p> Support</p> <p>Additional Help, Tutorials and Troubleshooting resources. Support</p> </li> <li> <p> Configuration in Detail</p> <p>Detailed reference material for the Helm Charts & Kinetica for Kubernetes CRDs. Reference Documentation</p> </li> </ul>"},{"location":"tags/","title":"Categories","text":"<p>Following is a list of relevant documentation categories:</p>"},{"location":"tags/#aks","title":"AKS","text":"<ul> <li>Azure AKS</li> </ul>"},{"location":"tags/#administration","title":"Administration","text":"<ul> <li>Administration</li> <li>Grant management</li> <li>Resource group management</li> <li>Role Management</li> <li>Schema management</li> <li>User Management</li> <li>Kinetica Cluster Grants Reference</li> <li>Kinetica Cluster Resource Groups Reference</li> <li>Kinetica Cluster Roles Reference</li> <li>Kinetica Cluster Schemas Reference</li> <li>Kinetica Cluster Users Reference</li> </ul>"},{"location":"tags/#advanced","title":"Advanced","text":"<ul> <li>Advanced</li> <li> Advanced Topics</li> <li>Air-Gapped Environments</li> <li>Alternative Charts</li> <li>Ingress Configuration</li> <li> <code>ingress-nginx</code> Ingress Configuration</li> <li>Kinetica DB on OS X (Arm64)</li> <li>Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations</li> <li>Bare Metal/VM Installation - <code>kubeadm</code></li> <li>S3 Storage for Dev/Test</li> <li> <code>nginx-ingress</code> Ingress Configuration</li> </ul>"},{"location":"tags/#architecture","title":"Architecture","text":"<ul> <li>Architecture</li> <li>Core Database Architecture</li> <li>Kubernetes Architecture</li> </ul>"},{"location":"tags/#configuration","title":"Configuration","text":"<ul> <li>Ingress Configuration</li> <li> <code>ingress-nginx</code> Ingress Configuration</li> <li> <code>nginx-ingress</code> Ingress Configuration</li> <li>How to change the Clusters FQDN</li> <li>OpenTelemetry</li> </ul>"},{"location":"tags/#development","title":"Development","text":"<ul> <li>Kinetica DB on OS X (Arm64)</li> <li>S3 Storage for Dev/Test</li> <li>Quickstart</li> </ul>"},{"location":"tags/#eks","title":"EKS","text":"<ul> <li>Amazon EKS</li> </ul>"},{"location":"tags/#getting-started","title":"Getting Started","text":"<ul> <li>Getting Started</li> <li>Azure AKS</li> <li>Amazon EKS</li> <li>Preparation & Prerequisites</li> <li>Quickstart</li> <li>Kinetica for Kubernetes Setup</li> </ul>"},{"location":"tags/#ingress","title":"Ingress","text":"<ul> <li>Ingress Configuration</li> <li> <code>ingress-nginx</code> Ingress Configuration</li> <li>Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations</li> <li> <code>nginx-ingress</code> Ingress Configuration</li> </ul>"},{"location":"tags/#installation","title":"Installation","text":"<ul> <li>Air-Gapped Environments</li> <li>Alternative Charts</li> <li>Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations</li> <li>Bare Metal/VM Installation - <code>kubeadm</code></li> <li>S3 Storage for Dev/Test</li> <li>Getting 
Started</li> <li>Kinetica for Kubernetes Installation</li> <li>CPU</li> <li>GPU</li> <li>Preparation & Prerequisites</li> <li>Quickstart</li> <li> Core DB CRDs</li> <li>Kinetica for Kubernetes Setup</li> </ul>"},{"location":"tags/#monitoring","title":"Monitoring","text":"<ul> <li>Logs</li> <li> Metrics Collection & Display</li> <li>OpenTelemetry</li> </ul>"},{"location":"tags/#operations","title":"Operations","text":"<ul> <li>Logs</li> <li> Metrics Collection & Display</li> <li>Operational Management</li> <li>Kinetica for Kubernetes Backup & Restore</li> <li>OpenTelemetry</li> <li>Kinetica for Kubernetes Data Rebalancing</li> <li>Kinetica for Kubernetes Suspend & Resume</li> <li>Kinetica Cluster Backups Reference</li> <li> Core DB CRDs</li> <li>Kinetica Cluster Restores Reference</li> </ul>"},{"location":"tags/#reference","title":"Reference","text":"<ul> <li>Reference Section</li> <li>Kinetica Database Configuration</li> <li>Kinetica Operators</li> <li>Kinetica Cluster Admins Reference</li> <li>Kinetica Cluster Backups Reference</li> <li>Kinetica Cluster Grants Reference</li> <li> Core DB CRDs</li> <li>Kinetica Cluster Resource Groups Reference</li> <li>Kinetica Cluster Restores Reference</li> <li>Kinetica Cluster Roles Reference</li> <li>Kinetica Cluster Schemas Reference</li> <li>Kinetica Cluster Users Reference</li> <li>Kinetica Clusters Reference</li> <li>Kinetica Workbench Reference</li> <li>Kinetica Workbench Configuration</li> </ul>"},{"location":"tags/#storage","title":"Storage","text":"<ul> <li>S3 Storage for Dev/Test</li> <li>Amazon EKS</li> </ul>"},{"location":"tags/#support","title":"Support","text":"<ul> <li>How to change the Clusters FQDN</li> <li>FAQ</li> <li>Help & Tutorials</li> <li>Creating Users, Roles, Schemas and other Kinetica DB Objects</li> <li>Support</li> <li>Troubleshooting</li> </ul>"},{"location":"Administration/","title":"Administration","text":"<ul> <li> <p> DB Clusters</p> <p>Core Kinetica Database Cluster Management.</p> <p> KineticaCluster</p> </li> <li> <p> DB Users</p> <p>Kinetica Database User Management.</p> <p> KineticaUser</p> </li> <li> <p> DB Roles</p> <p>Kinetica Database Role Management.</p> <p> KineticaRole</p> </li> <li> <p> DB Schemas</p> <p>Kinetica Database Schema Management.</p> <p> KineticaSchema</p> </li> <li> <p> DB Grants</p> <p>Kinetica Database Grant Management.</p> <p> KineticaGrant</p> </li> <li> <p> DB Resource Groups</p> <p>Kinetica Database Resource Group Management.</p> <p> KineticaResourceGroup</p> </li> <li> <p> DB Administration</p> <p>Kinetica Database Administration.</p> <p> KineticaAdmin</p> </li> <li> <p> DB Backups</p> <p>Kinetica Database Backup Management.</p> <p>Note</p> <p>This requires Velero to be installed on the Kubernetes Cluster.</p> <p> KineticaBackup</p> </li> <li> <p> DB Restore</p> <p>Kinetica Database Restoration.</p> <p>Note</p> <p>This requires Velero to be installed on the Kubernetes Cluster.</p> <p> KineticaRestore</p> </li> </ul> <p> Home</p>","tags":["Administration"]},{"location":"Administration/role_management/","title":"Role Management","text":"<p>Management of roles is done with the <code>KineticaRole</code> CRD. 
</p> <p>kubectl Usage</p> <p>From the <code>kubectl</code> command line they are referenced by <code>kineticaroles</code>, or by the short form <code>kr</code>.</p>","tags":["Administration"]},{"location":"Administration/role_management/#list-roles","title":"List Roles","text":"<p>To list the roles deployed to a Kinetica DB installation we can use the following from the command-line: -</p> <p><code>kubectl -n gpudb get kineticaroles</code> or <code>kubectl -n gpudb get kr</code></p> <p>where the namespace <code>-n gpudb</code> matches the namespace of the Kinetica DB installation.</p> <p>This outputs</p> Name Ring Name Role Resource Group Name LDAP DB db-users kinetica-k8s-sample db_users OK OK global-admins kinetica-k8s-sample global_admins OK OK","tags":["Administration"]},{"location":"Administration/role_management/#name","title":"Name","text":"<p>The name of the Kubernetes CR, i.e. the <code>metadata.name</code>; this is not necessarily the name of the role.</p>","tags":["Administration"]},{"location":"Administration/role_management/#ring-name","title":"Ring Name","text":"<p>The name of the <code>KineticaCluster</code> the role is created in.</p>","tags":["Administration"]},{"location":"Administration/role_management/#role-name","title":"Role Name","text":"<p>The name of the role as contained within LDAP & the DB.</p>","tags":["Administration"]},{"location":"Administration/role_management/#role-creation","title":"Role Creation","text":"test-role-2.yaml<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaRole\nmetadata:\n name: test-role-2\n namespace: gpudb\nspec:\n ringName: kineticacluster-sample\n role:\n name: \"test_role2\"\n</code></pre>
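<p>Apply the manifest and the operator will create the role in LDAP & the DB: -</p> Apply the Role CR<pre><code>kubectl apply -f test-role-2.yaml\n</code></pre>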
","tags":["Administration"]},{"location":"Administration/role_management/#role-deletion","title":"Role Deletion","text":"<p>To delete a role from the Kinetica Cluster simply delete the Role CR from Kubernetes: -</p> Delete Role<pre><code>kubectl -n gpudb delete kr test-role-2 \n</code></pre>","tags":["Administration"]},{"location":"Administration/user_management/","title":"User Management","text":"<p>Management of users is done with the <code>KineticaUser</code> CRD. </p> <p>kubectl Usage</p> <p>From the <code>kubectl</code> command line they are referenced by <code>kineticausers</code>, or by the short form <code>ku</code>.</p>","tags":["Administration"]},{"location":"Administration/user_management/#list-users","title":"List Users","text":"<p>To list the users deployed to a Kinetica DB installation we can use the following from the command-line: -</p> <p><code>kubectl -n gpudb get kineticausers</code> or <code>kubectl -n gpudb get ku</code></p> <p>where the namespace <code>-n gpudb</code> matches the namespace of the Kinetica DB installation.</p> <p>This outputs </p> Name Action Ring Name UID Last Name Given Name Display Name LDAP DB Reveal kadmin upsert kinetica-k8s-sample kadmin Account Admin Admin Account OK OK OK","tags":["Administration"]},{"location":"Administration/user_management/#name","title":"Name","text":"<p>The name of the Kubernetes CR, i.e. the <code>metadata.name</code>; this is not necessarily the name of the user.</p>","tags":["Administration"]},{"location":"Administration/user_management/#action","title":"Action","text":"<p>There are two actions possible on a <code>KineticaUser</code>. The first is <code>upsert</code> which is for user creation or modification. The second is <code>change-password</code> which shows when a user password reset has been performed.</p>","tags":["Administration"]},{"location":"Administration/user_management/#ring-name","title":"Ring Name","text":"<p>The name of the <code>KineticaCluster</code> the user is created in.</p>","tags":["Administration"]},{"location":"Administration/user_management/#uid","title":"UID","text":"<p>The unique user id to use in LDAP & the DB to reference this user.</p>","tags":["Administration"]},{"location":"Administration/user_management/#last-name","title":"Last Name","text":"<p>Last Name refers to last name or surname. </p> <p><code>sn</code> in LDAP terms.</p>","tags":["Administration"]},{"location":"Administration/user_management/#given-name","title":"Given Name","text":"<p>Given Name is the first name, also called Christian name. </p> <p><code>givenName</code> in LDAP terms.</p>","tags":["Administration"]},{"location":"Administration/user_management/#display-name","title":"Display Name","text":"<p>The name shown on any UI representation.</p>","tags":["Administration"]},{"location":"Administration/user_management/#ldap","title":"LDAP","text":"<p>Identifies if the user has been successfully created within LDAP. </p> <ul> <li>'' - if empty the user has not yet been created in LDAP</li> <li>'OK' - shows the user has been successfully created within LDAP</li> <li>'Failed' - shows there was a failure adding the user to LDAP</li> </ul>","tags":["Administration"]},{"location":"Administration/user_management/#db","title":"DB","text":"<p>Identifies if the user has been successfully created within the DB.</p> <ul> <li>'' - if empty the user has not yet been created in the DB</li> <li>'OK' - shows the user has been successfully created within the DB</li> <li>'Failed' - shows there was a failure adding the user to the DB</li> </ul>","tags":["Administration"]},{"location":"Administration/user_management/#reveal","title":"Reveal","text":"<p>Identifies if the user has been successfully created within Reveal.</p> <ul> <li>'' - if empty the user has not yet been created in Reveal</li> <li>'OK' - shows the user has been successfully created within Reveal</li> <li>'Failed' - shows there was a failure adding the user to Reveal</li> </ul>","tags":["Administration"]},{"location":"Administration/user_management/#user-creation","title":"User Creation","text":"<p>User creation requires two Kubernetes CRs to be submitted to Kubernetes and processed by the Kinetica DB Operator.</p> <ul> <li>User Secret (Password)</li> <li>Kinetica User</li> </ul> <p>Creation Sequence</p> <p>It is preferable to create the User Secret prior to creating the <code>KineticaUser</code>.</p> <p>Secret Deletion</p> <p>The User Secret will be deleted once the <code>KineticaUser</code> is created by the operator. 
The user's password will be stored in LDAP and not be present in Kubernetes.</p>","tags":["Administration"]},{"location":"Administration/user_management/#user-secret","title":"User Secret","text":"<p>In this example a user Fred Smith will be created.</p> fred-smith-secret.yaml<pre><code>apiVersion: v1\nkind: Secret\nmetadata:\n name: fred-smith-secret\n namespace: gpudb\nstringData:\n password: testpassword\n</code></pre> Create the User Password Secret<pre><code>kubectl apply -f fred-smith-secret.yaml\n</code></pre>","tags":["Administration"]},{"location":"Administration/user_management/#kineticauser","title":"<code>KineticaUser</code>","text":"user-fred-smith.yaml<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaUser\nmetadata:\n name: user-fred-smith\n namespace: gpudb\nspec:\n ringName: kineticacluster-sample\n uid: fred\n action: upsert\n reveal: true\n upsert:\n userPrincipalName: fred.smith@example.com\n givenName: Fred\n displayName: FredSmith\n lastName: Smith\n passwordSecret: fred-smith-secret\n</code></pre>
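<p>Submit the CR and the operator will create the user in LDAP, the DB and, as <code>reveal: true</code> is set here, Reveal: -</p> Apply the KineticaUser CR<pre><code>kubectl apply -f user-fred-smith.yaml\n</code></pre>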
","tags":["Administration"]},{"location":"Administration/user_management/#user-deletion","title":"User Deletion","text":"<p>To delete a user from the Kinetica Cluster simply delete the User CR from Kubernetes: -</p> Delete User<pre><code>kubectl -n gpudb delete ku user-fred-smith \n</code></pre>","tags":["Administration"]},{"location":"Administration/user_management/#change-password","title":"Change Password","text":"<p>To change a user's password we use the <code>change-password</code> action rather than the <code>upsert</code> action we used previously.</p> <p>Creation Sequence</p> <p>It is preferable to create the User Secret prior to creating the <code>KineticaUser</code>.</p> <p>Secret Deletion</p> <p>The User Secret will be deleted once the <code>KineticaUser</code> is created by the operator. The user's password will be stored in LDAP and not be present in Kubernetes.</p> fred-smith-change-pwd-secret.yaml<pre><code>apiVersion: v1\nkind: Secret\nmetadata:\n name: fred-smith-change-pwd-secret\n namespace: gpudb\nstringData:\n password: testpassword\n</code></pre> Create the User Password Secret<pre><code>kubectl apply -f fred-smith-change-pwd-secret.yaml\n</code></pre> user-fred-smith-change-password.yaml<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaUser\nmetadata:\n name: user-fred-smith-change-password\n namespace: gpudb\nspec:\n ringName: kineticacluster-sample\n uid: fred\n action: change-password\n changePassword:\n passwordSecret: fred-smith-change-pwd-secret\n</code></pre>","tags":["Administration"]},{"location":"Administration/user_management/#advanced-topics","title":"Advanced Topics","text":"","tags":["Administration"]},{"location":"Administration/user_management/#limit-user-resources","title":"Limit User Resources","text":"","tags":["Administration"]},{"location":"Administration/user_management/#data-limit","title":"Data Limit","text":"<p>KIFs user data size limit.</p> dataLimit<pre><code>spec:\n upsert:\n dataLimit: 10Gi\n</code></pre>","tags":["Administration"]},{"location":"Administration/user_management/#user-kifs-usage","title":"User Kifs Usage","text":"<p>Kifs Enablement</p> <p>In order to use the Kifs user features below there is a requirement that Kifs is enabled on the Kinetica DB.</p>","tags":["Administration"]},{"location":"Administration/user_management/#home-directory","title":"Home Directory","text":"<p>When creating a new user it is possible to create a 'home' directory for that user within the Kifs filesystem by using the <code>createHomeDirectory</code> option.</p> createHomeDirectory<pre><code>spec:\n upsert:\n createHomeDirectory: true\n</code></pre>","tags":["Administration"]},{"location":"Administration/user_management/#limit-directory-storage","title":"Limit Directory Storage","text":"<p>It is possible to limit the amount of Kifs file storage the user has by adding <code>kifsDataLimit</code> to the user creation yaml and setting the value to a Kubernetes Quantity e.g. <code>2Gi</code>.</p> kifsDataLimit<pre><code>spec:\n upsert:\n kifsDataLimit: 2Gi\n</code></pre>","tags":["Administration"]},{"location":"Advanced/","title":"Advanced Topics","text":"<ul> <li> <p> Find alternative chart versions </p> <p>How to use a pre-release or development Chart version if requested to by Kinetica Support. Alternative Charts</p> </li> <li> <p> Configuring Ingress Records </p> <p>How to expose Kinetica via Kubernetes Ingress. Ingress Configuration</p> </li> <li> <p> Air-Gapped Environments </p> <p>Specifics for installing Kinetica for Kubernetes in an Air-Gapped Environment Airgapped</p> </li> <li> <p> Using your own OpenTelemetry Collector</p> <p>How to configure Kinetica for Kubernetes to use your own OpenTelemetry collector. </p> <p> External OTEL</p> </li> <li> <p> Minio for Dev/Test S3 Storage </p> <p>Install Minio in order to enable S3 storage for Development.</p> <p> min.io</p> </li> <li> <p> Creating Resources with Kubernetes APIs </p> <p>Create Users, Roles, DB Schema etc. using Kubernetes CRs. Resources</p> </li> <li> <p> Kinetica on OS X (Apple Silicon) </p> <p>Install the Kinetica DB on a new Kubernetes 'production-like' cluster on Apple OS X (Apple Silicon) using UTM. 
Apple ARM64</p> </li> <li> <p> Bare Metal/VM Installation from Scratch </p> <p>Install the Kinetica DB on a new Kubernetes 'production-like' bare metal (or VMs) cluster via <code>kubeadm</code> using <code>cilium</code> Networking, <code>kube-vip</code> LoadBalancer. Bare Metal/VM Installation</p> </li> <li> <p> Software LoadBalancer </p> <p>Install a software Kubernetes CCM/LoadBalancer for bare metal or VM based Kubernetes Clusters. <code>kube-vip</code> LoadBalancer.</p> <p> Software LoadBalancer</p> </li> </ul>","tags":["Advanced"]},{"location":"Advanced/advanced_topics/","title":"Advanced Topics","text":"","tags":["Advanced"]},{"location":"Advanced/advanced_topics/#install-from-a-developmentpre-release-chart-version","title":"Install from a development/pre-release chart version","text":"<p>Find all alternative chart versions with:</p> Find alternative chart versions<pre><code>helm search repo kinetica-operators --devel --versions\n</code></pre> <p>Then append <code>--devel --version [CHART-DEVEL-VERSION]</code> to the end of the Helm install command. See here.</p>","tags":["Advanced"]},{"location":"Advanced/airgapped/","title":"Air-Gapped Environments","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#obtaining-the-kinetica-images","title":"Obtaining the Kinetica Images","text":"Kinetica Images for an Air-Gapped Environment <p>If you are installing Kinetica with Helm in an air-gapped environment you will either need a Registry Proxy to pass the requests through or to download the images and push them to your internal Registry.</p> <p>For information on ways to transfer the files into an air-gapped environment See here.</p> <p>Please select the method to transfer the images: -</p> mindthegap containerd docker <p>It is possible to use <code>mesosphere/mindthegap</code></p> <p>mindthegap</p> <p><code>mindthegap</code> provides utilities to manage air-gapped image bundles, both creating image bundles and seeding images from a bundle into an existing OCI registry or directly loading them to <code>containerd</code>.</p> <p>This makes it possible with <code>mindthegap</code> to</p> <ul> <li>create a single archive bundle of all the required images outside the air-gapped environment</li> <li>run <code>mindthegap</code> using the archive bundle on the Kubernetes Nodes to bulk load the images into <code>containerd</code> in a single command.</li> </ul> <p>Kinetica provides two <code>mindthegap</code> yaml files which list all the necessary images for Kinetica for Kubernetes.</p> <ul> <li>CPU only</li> <li> CPU & nVidia CUDA GPU</li> </ul>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#required-container-images","title":"Required Container Images","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#dockerio-required-kinetica-images-for-all-installations","title":"docker.io (Required Kinetica Images for All Installations)","text":"<ul> <li>docker.io/kinetica/kinetica-k8s-operator:{{kinetica_full_version}}<ul> <li>docker.io/kinetica/kinetica-k8s-cpu:{{kinetica_full_version}} or</li> <li>docker.io/kinetica/kinetica-k8s-cpu-avx512:{{kinetica_full_version}} or</li> <li>docker.io/kinetica/kinetica-k8s-gpu:{{kinetica_full_version}}</li> </ul> </li> <li>docker.io/kinetica/workbench-operator:{{kinetica_full_version}}</li> <li>docker.io/kinetica/workbench:{{kinetica_full_version}}</li> <li>docker.io/kinetica/kinetica-k8s-monitor:{{kinetica_full_version}}</li> <li>docker.io/kinetica/busybox:{{kinetica_full_version}}</li> 
<li>docker.io/kinetica/fluent-bit:{{kinetica_full_version}}</li> <li>docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga</li> </ul>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#nvcrio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"nvcr.io (Required Kinetica Images for GPU Installations using <code>kinetica-k8s-gpu</code>)","text":"<ul> <li>nvcr.io/nvidia/gpu-operator:v23.9.1</li> </ul>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#registryk8sio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"registry.k8s.io (Required Kinetica Images for GPU Installations using <code>kinetica-k8s-gpu</code>)","text":"<ul> <li>registry.k8s.io/nfd/node-feature-discovery:v0.14.2</li> </ul>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#dockerio-required-supporting-images","title":"docker.io (Required Supporting Images)","text":"<ul> <li>docker.io/bitnami/openldap:2.6.7</li> <li>docker.io/alpine/openssl:latest (used by bitnami/openldap)</li> <li>docker.io/otel/opentelemetry-collector-contrib:0.95.0</li> </ul>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#quayio-required-supporting-images","title":"quay.io (Required Supporting Images)","text":"<ul> <li>quay.io/brancz/kube-rbac-proxy:v0.14.2</li> </ul>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#optional-container-images","title":"Optional Container Images","text":"<p>These images are only required if certain features are enabled as part of the Helm installation: -</p> <ul> <li>CertManager</li> <li>ingress-nginx</li> </ul>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#quayio-optional-supporting-images","title":"quay.io (Optional Supporting Images)","text":"<ul> <li>quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> <li>quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> <li>quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> </ul>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#registryk8sio-optional-supporting-images","title":"registry.k8s.io (Optional Supporting Images)","text":"<ul> <li>registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing Ingress nGinx via Kinetica Helm Chart)</li> <li>registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c</li> </ul>
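<p>As a rough sketch, the required <code>docker.io</code> images above can be mirrored by hand with a simple loop. This is illustrative only: it assumes a CPU install and the <code>v7.2.2-3.ga-2</code> tag used in the examples below; swap in the GPU image and add the supporting images as appropriate for your installation.</p> Pull the required docker.io images (sketch)<pre><code># Pull each required Kinetica image for amd64 nodes\nfor img in \\\nkinetica/kinetica-k8s-operator:v7.2.2-3.ga-2 \\\nkinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2 \\\nkinetica/workbench-operator:v7.2.2-3.ga-2 \\\nkinetica/workbench:v7.2.2-3.ga-2 \\\nkinetica/kinetica-k8s-monitor:v7.2.2-3.ga-2 \\\nkinetica/busybox:v7.2.2-3.ga-2 \\\nkinetica/fluent-bit:v7.2.2-3.ga-2; do\n  docker pull --platform linux/amd64 \"docker.io/${img}\"\ndone\n</code></pre>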
<p>It is possible with <code>containerd</code> to pull images, save them and load them either into a Container Registry in the air-gapped environment or directly into another <code>containerd</code> instance. </p> <p>If the target <code>containerd</code> is on a node running a Kubernetes Cluster then these images will be sourced by Kubernetes from the loaded images, via CRI, with no requirement to pull them from an external source e.g. a Registry or Mirror.</p> <p><code>sudo</code> required</p> <p>Depending on how <code>containerd</code> has been installed and configured many of the example calls below may require running with <code>sudo</code></p> <p>It is possible with <code>docker</code> to pull images, save them and load them into an OCI Container Registry in the air-gapped environment.</p> Pull a remote image (docker)<pre><code>docker pull --platform linux/amd64 docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2\n</code></pre> Export a local image (docker)<pre><code>docker save -o kinetica-k8s-cpu-v7.2.2-3.ga-2.tar \\\ndocker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2\n</code></pre> <p>We can now transfer this archive (<code>kinetica-k8s-cpu-v7.2.2-3.ga-2.tar</code>) to the Kubernetes Node inside the air-gapped environment.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#which-kinetica-core-image-do-i-use","title":"Which Kinetica Core Image do I use?","text":"Container Image Intel (AMD64) Intel (AMD64 AVX512) Amd (AMD64) Graviton (aarch64) Apple Silicon (aarch64) kinetica-k8s-cpu (1) kinetica-k8s-cpu-avx512 kinetica-k8s-gpu (2) (2) (2) <ol> <li>It is preferable on an Intel AVX512 enabled CPU to use the kinetica-k8s-cpu-avx512 container image</li> <li>With a supported nVidia GPU.</li> </ol>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#install-mindthegap","title":"Install <code>mindthegap</code>","text":"Install mindthegap<pre><code>wget https://github.com/mesosphere/mindthegap/releases/download/v1.13.1/mindthegap_v1.13.1_linux_amd64.tar.gz\ntar zxvf mindthegap_v1.13.1_linux_amd64.tar.gz\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#mindthegap-create-the-bundle","title":"mindthegap - Create the Bundle","text":"mindthegap create image-bundle<pre><code>mindthegap create image-bundle --images-file mtg.yaml --platform linux/amd64\n</code></pre> <p>where <code>--images-file</code> is either the CPU or GPU Kinetica <code>mindthegap</code> yaml file.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#mindthegap-import-the-bundle","title":"mindthegap - Import the Bundle","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#mindthegap-import-to-containerd","title":"mindthegap - Import to <code>containerd</code>","text":"mindthegap import image-bundle<pre><code>mindthegap import image-bundle --image-bundle images.tar [--containerd-namespace k8s.io]\n</code></pre> <p>If <code>--containerd-namespace</code> is not specified, images will be imported into the <code>k8s.io</code> namespace. </p> <p><code>sudo</code> required</p> <p>Depending on how <code>containerd</code> has been installed and configured it may require running the above command with <code>sudo</code></p>
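<p>Putting the above together, an end-to-end sketch (it assumes the default <code>images.tar</code> bundle name and a reachable node named <code>k8snode1</code>; both are illustrative):</p> mindthegap end-to-end (sketch)<pre><code># outside the air-gapped environment\nmindthegap create image-bundle --images-file mtg.yaml --platform linux/amd64\n\n# transfer the bundle, e.g. via scp or removable media\nscp images.tar user@k8snode1:\n\n# on each Kubernetes Node inside the air-gapped environment\nmindthegap import image-bundle --image-bundle images.tar --containerd-namespace k8s.io\n</code></pre>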
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#mindthegap-import-to-an-internal-oci-registry","title":"mindthegap - Import to an internal OCI Registry","text":"mindthegap push bundle<pre><code>mindthegap push bundle --bundle <path/to/bundle.tar> \\\n--to-registry <registry.address> \\\n[--to-registry-insecure-skip-tls-verify]\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#containerd-using-containerd-to-pull-and-export-an-image","title":"containerd - Using <code>containerd</code> to pull and export an image","text":"<p>Similar to <code>docker pull</code> we can use <code>ctr image pull</code> to pull the core Kinetica DB CPU-based image</p> Pull a remote image (containerd)<pre><code>ctr image pull docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2\n</code></pre> <p>We now need to export the pulled image as an archive to the local filesystem.</p> Export a local image (containerd)<pre><code>ctr image export kinetica-k8s-cpu-v7.2.2-3.ga-2.tar \\\ndocker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2\n</code></pre> <p>We can now transfer this archive (<code>kinetica-k8s-cpu-v7.2.2-3.ga-2.tar</code>) to the Kubernetes Node inside the air-gapped environment.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#containerd-using-containerd-to-import-an-image","title":"containerd - Using <code>containerd</code> to import an image","text":"<p>Using <code>containerd</code> to import an image on to a Kubernetes Node on which a Kinetica Cluster is running.</p> Import the Images<pre><code>ctr -n=k8s.io images import kinetica-k8s-cpu-v7.2.2-3.ga-2.tar\n</code></pre> <p><code>-n=k8s.io</code></p> <p>It is possible to use <code>ctr images import kinetica-k8s-cpu-v7.2.2-3.ga-2.tar</code> to import the image to <code>containerd</code>.</p> <p>However, in order for the image to be visible to the Kubernetes Cluster running on <code>containerd</code> it is necessary to add the parameter <code>-n=k8s.io</code>.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#containerd-verifying-the-image-is-available","title":"containerd - Verifying the image is available","text":"<p>To verify the image is loaded into <code>containerd</code> on the node run the following on the node: -</p> Verify containerd Images<pre><code>ctr image ls\n</code></pre> <p>To verify the image is visible to Kubernetes on the node run the following: -</p> Verify CRI Images<pre><code>crictl images\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#docker-using-docker-to-import-an-image","title":"docker - Using <code>docker</code> to import an image","text":"<p>Using <code>docker</code> to load an image archive on to a Kubernetes Node on which a Kinetica Cluster is running.</p> Import the Images<pre><code>docker load -i kinetica-k8s-cpu-v7.2.2-3.ga-2.tar\n</code></pre>
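<p>If the image is instead destined for an internal OCI Registry, it can be retagged and pushed after loading. A sketch, where <code>registry.example.internal</code> stands in for your internal Registry address:</p> Tag and push to an internal Registry (docker)<pre><code># retag the loaded image for the internal Registry (hypothetical address)\ndocker tag docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2 \\\nregistry.example.internal/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2\n\n# push it into the internal Registry\ndocker push registry.example.internal/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2\n</code></pre>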
","tags":["Advanced","Installation"]},{"location":"Advanced/alternative_charts/","title":"Using Alternative Helm Charts","text":"<p>If requested by Kinetica Support you can search and use pre-release versions of the Kinetica Helm Charts.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/alternative_charts/#install-from-a-developmentpre-release-chart-version","title":"Install from a development/pre-release chart version","text":"<p>Find all alternative chart versions with:</p> Find alternative chart versions<pre><code>helm search repo kinetica-operators --devel --versions\n</code></pre> <p>Then append <code>--devel --version [CHART-DEVEL-VERSION]</code> to the end of the Helm install command.</p> Helm install kinetica-operators<pre><code>helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--devel \\\n--version [CHART-DEVEL-VERSION] \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/ingress_configuration/","title":"Ingress Configuration","text":"<ul> <li> <p> <code>ingress-nginx</code> Configuration</p> <p>How to enable Ingress with <code>ingress-nginx</code> for Kinetica DB.</p> <p> <code>ingress-nginx</code></p> </li> <li> <p> <code>nginx-ingress</code> Configuration</p> <p>How to enable Ingress with <code>nginx-ingress</code> for Kinetica DB.</p> <p> <code>nginx-ingress</code></p> </li> </ul>","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/","title":"<code>ingress-nginx</code> Ingress Configuration","text":"<p>To use an 'external' ingress-nginx controller i.e. not the one optionally installed by the Kinetica Operators Helm chart it is necessary to disable ingress in the <code>KineticaCluster</code> CR.</p> <p>The field <code>spec.ingressController: nginx</code> should be set to <code>spec.ingressController: none</code>.</p> <p>It is then necessary to create the required Ingress CRs by hand. Below is a list of the Ingress paths that need to be exposed along with sample ingress-nginx CRs.</p>","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#required-ingress-routes","title":"Required Ingress Routes","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#ingress-routes","title":"Ingress Routes","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#gadmin-paths","title":"GAdmin Paths","text":"Path Service Port <code>/gadmin</code> <code>cluster-name-gadmin-service</code> <code>gadmin</code> (8080/TCP) <code>/tableau</code> <code>cluster-name-gadmin-service</code> <code>gadmin</code> (8080/TCP) <code>/files</code> <code>cluster-name-gadmin-service</code> <code>gadmin</code> (8080/TCP) <p>where <code>cluster-name</code> is the name of the Kinetica Cluster i.e.
what is in the <code>.spec.gpudbCluster.clusterName</code> in the KineticaCluster CR.</p>","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#workbench-paths","title":"Workbench Paths","text":"Path Service Port <code>/</code> <code>workbench-workbench-service</code> <code>workbench-port</code> (8000/TCP)","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#db-rank-0-paths","title":"DB <code>rank-0</code> Paths","text":"Path Service Port <code>/cluster-145025b8(/gpudb-0(/.*|$))</code> <code>cluster-145025b8-rank0-service</code> <code>httpd</code> (8082/TCP) <code>/cluster-145025b8/gpudb-0/hostmanager(.*)</code> <code>cluster-145025b8-rank0-service</code> <code>hostmanager</code> (9300/TCP)","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#db-rank-n-paths","title":"DB <code>rank-N</code> Paths","text":"Path Service Port <code>/cluster-145025b8(/gpudb-N(/.*|$))</code> <code>cluster-145025b8-rank1-service</code> <code>httpd</code> (8082/TCP) <code>/cluster-145025b8/gpudb-N/hostmanager(.*)</code> <code>cluster-145025b8-rank1-service</code> <code>hostmanager</code> (9300/TCP)","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#reveal-paths","title":"Reveal Paths","text":"Path Service Port <code>/reveal</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/caravel</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/static</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/logout</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/resetmypassword</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/dashboardmodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/dashboardmodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/slicemodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/slicemodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/sliceaddview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/databaseview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/databaseasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/databasetablesasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/tablemodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/csstemplatemodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/csstemplatemodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/users</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/roles</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/userstatschartview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/permissions</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/viewmenus</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/permissionviews</code> <code>cluster-name-reveal-service</code> 
<code>reveal</code> (8088/TCP) <code>/accessrequestsmodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/accessrequestsmodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/logmodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/logmodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/userinfoeditview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/tablecolumninlineview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/sqlmetricinlineview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP)","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#example-ingress-crs","title":"Example Ingress CRs","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#example-gadmin-ingress-cr","title":"Example GAdmin Ingress CR","text":"Example GAdmin Ingress CR <p>Example GAdmin Ingress CR<pre><code>apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: cluster-name-gadmin-ingress #(1)!\n namespace: gpudb\nspec:\n ingressClassName: nginx\n tls:\n - hosts:\n - cluster-name.example.com #(1)!\n secretName: kinetica-tls\n rules:\n - host: cluster-name.example.com #(1)!\n http:\n paths:\n - path: /gadmin\n pathType: Prefix\n backend:\n service:\n name: cluster-name-gadmin-service #(1)!\n port:\n name: gadmin\n - path: /tableau\n pathType: Prefix\n backend:\n service:\n name: cluster-name-gadmin-service #(1)!\n port:\n name: gadmin\n - path: /files\n pathType: Prefix\n backend:\n service:\n name: cluster-name-gadmin-service #(1)!\n port:\n name: gadmin\n</code></pre> 1. 
where <code>cluster-name</code> is the name of the Kinetica Cluster</p>","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#example-rank-ingress-cr","title":"Example Rank Ingress CR","text":"Example Rank Ingress CR Example Rank Ingress CR<pre><code>apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: cluster-name-rank1-ingress\n namespace: gpudb\nspec:\n ingressClassName: nginx\n tls:\n - hosts:\n - cluster-name.example.com\n secretName: kinetica-tls\n rules:\n - host: cluster-name.example.com\n http:\n paths:\n - path: /cluster-name(/gpudb-1(/.*|$))\n pathType: Prefix\n backend:\n service:\n name: cluster-name-rank1-service\n port:\n name: httpd\n - path: /cluster-name/gpudb-1/hostmanager(.*)\n pathType: Prefix\n backend:\n service:\n name: cluster-name-rank1-service\n port:\n name: hostmanager\n</code></pre> <ol> <li>where <code>cluster-name</code> is the name of the Kinetica Cluster</li> </ol>","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#example-reveal-ingress-cr","title":"Example Reveal Ingress CR","text":"Example Reveal Ingress CR <p>Example Reveal Ingress CR<pre><code> apiVersion: networking.k8s.io/v1\n kind: Ingress\n metadata:\n name: cluster-name-reveal-ingress\n namespace: gpudb\n spec:\n ingressClassName: nginx\n tls:\n - hosts:\n - cluster-name.example.com\n secretName: kinetica-tls\n rules:\n - host: cluster-name.example.com\n http:\n paths:\n - path: /reveal\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /caravel\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /static\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /logout\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /resetmypassword\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /dashboardmodelview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /dashboardmodelviewasync\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /slicemodelview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /slicemodelviewasync\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /sliceaddview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /databaseview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /databaseasync\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /databasetablesasync\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /tablemodelview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /tablemodelviewasync\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /csstemplatemodelview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /csstemplatemodelviewasync\n pathType: Prefix\n 
backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /users\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /roles\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /userstatschartview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /permissions\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /viewmenus\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /permissionviews\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /accessrequestsmodelview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /accessrequestsmodelviewasync\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /logmodelview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /logmodelviewasync\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /userinfoeditview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /tablecolumninlineview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /sqlmetricinlineview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n</code></pre> 1. where <code>cluster-name</code> is the name of the Kinetica Cluster</p>","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#exposing-the-postgres-proxy-port","title":"Exposing the Postgres Proxy Port","text":"<p>In order to access Kinetica's Postgres functionality some TCP (not HTTP) ports need to be opened externally.</p> <p>For <code>ingress-nginx</code> a configuration file needs to be created to enable port 5432.</p> <p>tcp-services.yaml<pre><code>apiVersion: v1\nkind: ConfigMap\nmetadata:\n name: tcp-services\n namespace: kinetica-system # (1)!\ndata:\n '5432': gpudb/kinetica-k8s-sample-rank0-service:5432 #(2)!\n '9002': gpudb/kinetica-k8s-sample-rank0-service:9002 #(3)!\n</code></pre> 1. Change the namespace to the namespace your ingress-nginx controller is running in. e.g. <code>ingress-nginx</code> 2. This exposes the postgres proxy port on the default <code>5432</code> port. If you wish to change this to a non-standard port then it needs to be changed here but also in the Helm <code>values.yaml</code> to match. 3. This port is the Table Monitor port and should always be exposed alongside the Postgres Proxy.</p> <p>Note that for these TCP port mappings to take effect the <code>ingress-nginx</code> controller must also be started with its <code>--tcp-services-configmap</code> flag pointing at this ConfigMap.</p>","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_urls/","title":"Ingress urls","text":""},{"location":"Advanced/ingress_urls/#gadmin-paths","title":"GAdmin Paths","text":"Path Service Port <code>/gadmin</code> <code>cluster-name-gadmin-service</code> <code>gadmin</code> (8080/TCP) <code>/tableau</code> <code>cluster-name-gadmin-service</code> <code>gadmin</code> (8080/TCP) <code>/files</code> <code>cluster-name-gadmin-service</code> <code>gadmin</code> (8080/TCP) <p>where <code>cluster-name</code> is the name of the Kinetica Cluster i.e.
what is in the <code>.spec.gpudbCluster.clusterName</code> in the KineticaCluster CR.</p>"},{"location":"Advanced/ingress_urls/#workbench-paths","title":"Workbench Paths","text":"Path Service Port <code>/</code> <code>workbench-workbench-service</code> <code>workbench-port</code> (8000/TCP)"},{"location":"Advanced/ingress_urls/#db-rank-0-paths","title":"DB <code>rank-0</code> Paths","text":"Path Service Port <code>/cluster-145025b8(/gpudb-0(/.*|$))</code> <code>cluster-145025b8-rank0-service</code> <code>httpd</code> (8082/TCP) <code>/cluster-145025b8/gpudb-0/hostmanager(.*)</code> <code>cluster-145025b8-rank0-service</code> <code>hostmanager</code> (9300/TCP)"},{"location":"Advanced/ingress_urls/#db-rank-n-paths","title":"DB <code>rank-N</code> Paths","text":"Path Service Port <code>/cluster-145025b8(/gpudb-N(/.*|$))</code> <code>cluster-145025b8-rank1-service</code> <code>httpd</code> (8082/TCP) <code>/cluster-145025b8/gpudb-N/hostmanager(.*)</code> <code>cluster-145025b8-rank1-service</code> <code>hostmanager</code> (9300/TCP)"},{"location":"Advanced/ingress_urls/#reveal-paths","title":"Reveal Paths","text":"Path Service Port <code>/reveal</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/caravel</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/static</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/logout</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/resetmypassword</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/dashboardmodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/dashboardmodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/slicemodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/slicemodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/sliceaddview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/databaseview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/databaseasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/databasetablesasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/tablemodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/csstemplatemodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/csstemplatemodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/users</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/roles</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/userstatschartview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/permissions</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/viewmenus</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/permissionviews</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/accessrequestsmodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/accessrequestsmodelviewasync</code> <code>cluster-name-reveal-service</code> 
<code>reveal</code> (8088/TCP) <code>/logmodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/logmodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/userinfoeditview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/tablecolumninlineview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/sqlmetricinlineview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP)"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/","title":"Kinetica images list for airgapped environments","text":"Kinetica Images for an Air-Gapped Environment <p>If you are installing Kinetica with Helm in an air-gapped environment you will either need a Registry Proxy to pass the requests through or to download the images and push them to your internal Registry.</p> <p>For information on ways to transfer the files into an air-gapped environment See here.</p>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#required-container-images","title":"Required Container Images","text":""},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#dockerio-required-kinetica-images-for-all-installations","title":"docker.io (Required Kinetica Images for All Installations)","text":"<ul> <li>docker.io/kinetica/kinetica-k8s-operator:v7.2.2-3.ga-2<ul> <li>docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2 or</li> <li>docker.io/kinetica/kinetica-k8s-cpu-avx512:v7.2.2-3.ga-2 or</li> <li>docker.io/kinetica/kinetica-k8s-gpu:v7.2.2-3.ga-2</li> </ul> </li> <li>docker.io/kinetica/workbench-operator:v7.2.2-3.ga-2</li> <li>docker.io/kinetica/workbench:v7.2.2-3.ga-2</li> <li>docker.io/kinetica/kinetica-k8s-monitor:v7.2.2-3.ga-2</li> <li>docker.io/kinetica/busybox:v7.2.2-3.ga-2</li> <li>docker.io/kinetica/fluent-bit:v7.2.2-3.ga-2</li> <li>docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga</li> </ul>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#nvcrio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"nvcr.io (Required Kinetica Images for GPU Installations using <code>kinetica-k8s-gpu</code>)","text":"<ul> <li>nvcr.io/nvidia/gpu-operator:v23.9.1</li> </ul>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#registryk8sio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"registry.k8s.io (Required Kinetica Images for GPU Installations using <code>kinetica-k8s-gpu</code>)","text":"<ul> <li>registry.k8s.io/nfd/node-feature-discovery:v0.14.2</li> </ul>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#dockerio-required-supporting-images","title":"docker.io (Required Supporting Images)","text":"<ul> <li>docker.io/bitnami/openldap:2.6.7</li> <li>docker.io/alpine/openssl:latest (used by bitnami/openldap)</li> <li>docker.io/otel/opentelemetry-collector-contrib:0.95.0</li> </ul>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#quayio-required-supporting-images","title":"quay.io (Required Supporting Images)","text":"<ul> <li>quay.io/brancz/kube-rbac-proxy:v0.14.2</li> </ul>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#optional-container-images","title":"Optional Container Images","text":"<p>These images are only required if certain features are enabled as part of the Helm installation: -</p> <ul> <li>CertManager</li> <li>ingress-nginx</li>
</ul>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#quayio-optional-supporting-images","title":"quay.io (Optional Supporting Images)","text":"<ul> <li>quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> <li>quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> <li>quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> </ul>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#registryk8sio-optional-supporting-images","title":"registry.k8s.io (Optional Supporting Images)","text":"<ul> <li>registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing Ingress nGinx via Kinetica Helm Chart)</li> <li>registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c</li> </ul>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#which-kinetica-core-image-do-i-use","title":"Which Kinetica Core Image do I use?","text":"Container Image Intel (AMD64) Intel (AMD64 AVX512) Amd (AMD64) Graviton (aarch64) Apple Silicon (aarch64) kinetica-k8s-cpu (1) kinetica-k8s-cpu-avx512 kinetica-k8s-gpu (2) (2) (2) <ol> <li>It is preferable on an Intel AVX512 enabled CPU to use the kinetica-k8s-cpu-avx512 container image</li> <li>With a supported nVidia GPU.</li> </ol>"},{"location":"Advanced/kinetica_mac_arm_k8s/","title":"Kinetica DB on Kubernetes","text":"<p>This walkthrough will show how to install Kinetica DB on a Mac running OS X. The Kubernetes cluster will be running on VMs with Ubuntu Linux 22.04 ARM64. </p> <p>This solution is equivalent to a production bare metal installation and does not use Docker, Podman or QEMU but rather Apple native Virtualization.</p> <p>The Kubernetes cluster will consist of one Master node <code>k8smaster1</code> and two Worker nodes <code>k8snode1</code> & <code>k8snode2</code>.</p> <p>The virtualization platform is UTM. </p> <p>Obtain a Kinetica License Key</p> <p>A product license key will be required for install. Please contact Kinetica Support to request a trial key.</p> <p>Download and install UTM.</p>","tags":["Advanced","Development"]},{"location":"Advanced/kinetica_mac_arm_k8s/#create-the-vms","title":"Create the VMs","text":"","tags":["Advanced","Development"]},{"location":"Advanced/kinetica_mac_arm_k8s/#k8smaster1","title":"<code>k8smaster1</code>","text":"<p>For this walkthrough the master node will be 4 vCPU, 8 GB RAM & 40-64 GB disk.</p> <p>Start the creation of a new VM in UTM. Select <code>Virtualize</code></p> <p></p> <p>Select Linux as the VM OS.</p> <p></p> <p>On the Linux page - Select <code>Use Apple Virtualization</code> and an Ubuntu 22.04 (Arm64) ISO.</p> <p></p> <p>As this is the master Kubernetes node (VM) it can be smaller than the nodes hosting the Kinetica DB itself.</p> <p>Set the memory to 8 GB and the number of CPUs to 4.</p> <p></p> <p>Set the storage to between 40-64 GB.</p> <p></p> <p>This next step is optional if you wish to setup a shared folder between your Mac host & the Linux VM.</p> <p></p> <p>The final step to create the VM is a summary. Please check the values shown and hit <code>Save</code></p> <p></p> <p>You should now see your new VM in the left hand pane of the UTM UI.</p> <p></p> <p>Go ahead and click the button.</p> <p>Once the Ubuntu installer comes up follow the steps selecting whichever keyboard etc. 
you require.</p> <p>The only changes you need to make are: -</p> <ul> <li>Change the installation to <code>Ubuntu Server (minimized)</code></li> <li>Your server's name to <code>k8smaster1</code></li> <li>Enable OpenSSH server.</li> </ul> <p>and complete the installation.</p> <p>Reboot the VM, remove the ISO from the 'external' drive. Log in to the VM and get the VM's IP address with</p> Bash<pre><code>ip a\n</code></pre> <p>Make a note of the IP for later use.</p>","tags":["Advanced","Development"]},{"location":"Advanced/kinetica_mac_arm_k8s/#k8snode1-k8snode2","title":"<code>k8snode1</code> & <code>k8snode2</code>","text":"<p>Repeat the same process to provision one or two nodes depending on how much memory you have available on the Mac.</p> <p>You need to change the RAM size to 16 GB. You can leave the vCPU count at 4. The disk size depends on how much data you want to ingest; it should however be at least 4x RAM size.</p> <p>Once installed again log in to the VM and get the VM's IP address with</p> Bash<pre><code>ip a\n</code></pre> <p>Note</p> <p>Make a note of the IP(s) for later use.</p> <p>Your VMs are complete</p> <p>Continue installing your new VMs by following Bare Metal/VM Installation</p>","tags":["Advanced","Development"]},{"location":"Advanced/kube_vip_loadbalancer/","title":"Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations","text":"<p>For our example we are going to enable a Kubernetes based LoadBalancer to issue IP addresses to our Kubernetes Services of type <code>LoadBalancer</code> using <code>kube-vip</code>.</p> Ingress Service is pending <p>The <code>ingress-nginx-controller</code> is currently in the <code>pending</code> state as there is no CCM/LoadBalancer </p>","tags":["Advanced","Ingress","Installation"]},{"location":"Advanced/kube_vip_loadbalancer/#kube-vip","title":"<code>kube-vip</code>","text":"<p>We will install two components into our Kubernetes Cluster</p> <ul> <li>kube-vip-cloud-controller</li> <li>Kubernetes Load-Balancer Service</li> </ul>","tags":["Advanced","Ingress","Installation"]},{"location":"Advanced/kube_vip_loadbalancer/#kube-vip-cloud-controller","title":"kube-vip-cloud-controller","text":"<p>Quote</p> <p>The kube-vip cloud provider can be used to populate an IP address for Services of type LoadBalancer similar to what public cloud providers allow through a Kubernetes CCM.</p> Install the kube-vip CCM <p></p> Install the kube-vip CCM<pre><code>kubectl apply -f https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/main/manifest/kube-vip-cloud-controller.yaml\n</code></pre> <p>Now we need to setup the required RBAC permissions: -</p> Install the kube-vip RBAC <p></p> Install kube-vip RBAC<pre><code>kubectl apply -f https://kube-vip.io/manifests/rbac.yaml\n</code></pre> <p>The following ConfigMap will configure the <code>kube-vip-cloud-controller</code> to obtain IP addresses from the host network's DHCP server, i.e. the DHCP server on the physical network that the host machine or VM is connected to.</p> Install the kube-vip ConfigMap <p></p> Install the kube-vip ConfigMap<pre><code>apiVersion: v1\nkind: ConfigMap\nmetadata:\n name: kubevip\n namespace: kube-system\ndata:\n cidr-global: 0.0.0.0/32\n</code></pre> <p>It is possible to specify IP address ranges; see here.</p>
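<p>For example, to allocate from a fixed pool instead of DHCP, the same ConfigMap can carry a range key. An illustrative sketch (the address range is an assumption for this environment, using the kube-vip cloud provider's documented <code>range-global</code> key):</p> Create a range-based kubevip ConfigMap (sketch)<pre><code># hypothetical address range; adjust to a free block on your network\nkubectl -n kube-system create configmap kubevip \\\n--from-literal=range-global=192.168.2.200-192.168.2.210\n</code></pre>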
","tags":["Advanced","Ingress","Installation"]},{"location":"Advanced/kube_vip_loadbalancer/#kubernetes-load-balancer-service","title":"Kubernetes Load-Balancer Service","text":"Obtain the Master Node IP address & Interface name Obtain the Master Node IP address & Interface name<pre><code>ip a\n</code></pre> <p>In this example the IP address of the master node is <code>192.168.2.180</code> and the interface is <code>enp0s1</code>.</p> <p>We need to apply the <code>kube-vip</code> daemonset but first we need to create the configuration</p> Install the kube-vip Daemonset<pre><code>apiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n labels:\n app.kubernetes.io/name: kube-vip-ds\n app.kubernetes.io/version: v0.7.2\n name: kube-vip-ds\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/name: kube-vip-ds\n template:\n metadata:\n labels:\n app.kubernetes.io/name: kube-vip-ds\n app.kubernetes.io/version: v0.7.2\n spec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: node-role.kubernetes.io/master\n operator: Exists\n - matchExpressions:\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n containers:\n - args:\n - manager\n env:\n - name: vip_arp\n value: \"true\"\n - name: port\n value: \"6443\"\n - name: vip_interface\n value: enp0s1\n - name: vip_cidr\n value: \"32\"\n - name: dns_mode\n value: first\n - name: cp_enable\n value: \"true\"\n - name: cp_namespace\n value: kube-system\n - name: svc_enable\n value: \"true\"\n - name: svc_leasename\n value: plndr-svcs-lock\n - name: vip_leaderelection\n value: \"true\"\n - name: vip_leasename\n value: plndr-cp-lock\n - name: vip_leaseduration\n value: \"5\"\n - name: vip_renewdeadline\n value: \"3\"\n - name: vip_retryperiod\n value: \"1\"\n - name: address\n value: 192.168.2.180\n - name: prometheus_server\n value: :2112\n image: ghcr.io/kube-vip/kube-vip:v0.7.2\n imagePullPolicy: Always\n name: kube-vip\n resources: {}\n securityContext:\n capabilities:\n add:\n - NET_ADMIN\n - NET_RAW\n hostNetwork: true\n serviceAccountName: kube-vip\n tolerations:\n - effect: NoSchedule\n operator: Exists\n - effect: NoExecute\n operator: Exists\n updateStrategy: {}\n</code></pre> <p>Lines 5, 7, 12, 16, 38 and 62 need modifying to your environment.</p> Install the kube-vip Daemonset <p></p> <p>ARP or BGP</p> <p>The Daemonset above uses ARP to communicate with the network; it is also possible to use BGP. See Here</p> Example showing DHCP allocated external IP address to the Ingress Controller <p></p> <p>Our <code>ingress-nginx-controller</code> has been allocated the IP Address <code>192.168.2.194</code>. </p>
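<p>The allocation can be confirmed by checking that the Service is no longer <code>Pending</code> (this assumes the controller runs in the <code>ingress-nginx</code> namespace with the default Service name):</p> Check the allocated external IP<pre><code># EXTERNAL-IP should now show the allocated address\nkubectl -n ingress-nginx get svc ingress-nginx-controller\n</code></pre>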
<p>Ingress Access</p> <p>The <code>ingress-nginx-controller</code> requires the host FQDN to be present on user requests in order to know how to route the requests to the correct Kubernetes Service. Using the IP address in the URL will cause an error as ingress cannot select the correct service.</p> List Ingress <p></p> <p></p> <p>If you did not set the FQDN of the Kinetica Cluster to a DNS resolvable hostname add <code>local.kinetica</code> to your <code>/etc/hosts</code> file in order to be able to access the Kinetica URLs</p> Edit /etc/hosts <p></p> <p>Accessing the Workbench</p> <p>You should be able to access the workbench at http://local.kinetica</p>","tags":["Advanced","Ingress","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/","title":"Bare Metal/VM Installation - <code>kubeadm</code>","text":"<p>This walkthrough will show how to install Kinetica DB. For this example the Kubernetes cluster will be running on 3 VMs with Ubuntu Linux 22.04 (ARM64).</p> <p>This solution is equivalent to a production bare metal installation and does not use Docker, Podman or QEMU.</p> <p>The Kubernetes cluster requires 3 VMs consisting of one Master node <code>k8smaster1</code> and two Worker nodes <code>k8snode1</code> & <code>k8snode2</code>.</p> <p>Purple Example Boxes</p> <p>The purple boxes in the instructions below can be expanded for a screen recording of the commands & their results.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#kubernetes-node-installation","title":"Kubernetes Node Installation","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#setup-the-kubernetes-nodes","title":"Setup the Kubernetes Nodes","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#edit-etchosts","title":"Edit <code>/etc/hosts</code>","text":"<p>SSH into each of the nodes and run the following: -</p> Edit `/etc/hosts`<pre><code>sudo vi /etc/hosts\n\nx.x.x.x k8smaster1\nx.x.x.x k8snode1\nx.x.x.x k8snode2\n</code></pre> <p>where x.x.x.x is the IP Address of the corresponding node.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#disable-linux-swap","title":"Disable Linux Swap","text":"<p>Next we need to disable Swap on Linux: -</p> Disable Swap <p></p> Disable Swap<pre><code>sudo swapoff -a\n\nsudo vi /etc/fstab\n</code></pre> <p>comment out the swap entry in <code>/etc/fstab</code> on each node.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#linux-system-configuration-changes","title":"Linux System Configuration Changes","text":"<p>We are using containerd as the container runtime but in order to do so we need to make some system level changes on Linux.</p> Linux System Configuration Changes <p></p> Linux System Configuration Changes<pre><code>cat << EOF | sudo tee /etc/modules-load.d/containerd.conf\noverlay\nbr_netfilter\nEOF\n\nsudo modprobe overlay\n\nsudo modprobe br_netfilter\n\ncat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\nEOF\n\nsudo sysctl --system\n</code></pre>
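<p>Before moving on, the module and sysctl changes can be sanity-checked; a quick verification sketch:</p> Verify the containerd prerequisites<pre><code># confirm the br_netfilter module is loaded\nlsmod | grep br_netfilter\n\n# confirm the forwarding and bridge settings took effect\nsysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables\n</code></pre>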
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#container-runtime-installation","title":"Container Runtime Installation","text":"<p>Run on all nodes (VMs)</p> <p>Run the following commands, until advised not to, on all of the VMs you created.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-containerd","title":"Install <code>containerd</code>","text":"Install <code>containerd</code> Install `containerd`<pre><code>sudo apt update\n\nsudo apt install -y containerd\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#create-a-default-containerd-config","title":"Create a Default <code>containerd</code> Config","text":"Create a Default <code>containerd</code> Config Create a Default `containerd` Config<pre><code>sudo mkdir -p /etc/containerd\n\nsudo containerd config default | sudo tee /etc/containerd/config.toml\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#enable-system-cgroup","title":"Enable System CGroup","text":"<p>Change the SystemdCgroup value to true in the containerd configuration file and restart the service</p> Enable System CGroup <p></p> Enable System CGroup<pre><code>sudo sed -i 's/SystemdCgroup \\= false/SystemdCgroup \\= true/g' /etc/containerd/config.toml\n\nsudo systemctl restart containerd\nsudo systemctl enable containerd\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-pre-requisiteutility-packages","title":"Install Pre-requisite/Utility packages","text":"Install Pre-requisite/Utility packages Install Pre-requisite/Utility packages<pre><code>sudo apt update\n\nsudo apt install -y apt-transport-https ca-certificates curl gpg git\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#download-the-kubernetes-public-signing-key","title":"Download the Kubernetes public signing key","text":"Download the Kubernetes public signing key Download the Kubernetes public signing key<pre><code>curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#add-the-kubernetes-package-repository","title":"Add the Kubernetes Package Repository","text":"Add the Kubernetes Package Repository<pre><code>echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-the-kubernetes-installation-and-management-tools","title":"Install the Kubernetes Installation and Management Tools","text":"Install the Kubernetes Installation and Management Tools Install the Kubernetes Installation and Management Tools<pre><code>sudo apt update\n\nsudo apt install -y kubeadm=1.29.0-1.1 kubelet=1.29.0-1.1 kubectl=1.29.0-1.1 \n\nsudo apt-mark hold kubeadm kubelet kubectl\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#initialize-the-kubernetes-cluster","title":"Initialize the Kubernetes Cluster","text":"<p>Initialize the Kubernetes Cluster by using kubeadm on the <code>k8smaster1</code> control plane node.</p> <p>Note</p> <p>You will need an IP Address range for the Kubernetes Pods. This range is provided to <code>kubeadm</code> as part of the initialization. For our cluster of three nodes, given the default number of pods supported by a node (110) we need a CIDR of at least 330 distinct IP Addresses. Therefore, for this example we will use a <code>--pod-network-cidr</code> of <code>10.1.1.0/22</code> which allows for 1022 usable IPs.
The reason for this is each node will get <code>/24</code> of the <code>/22</code> total.</p> <p>The <code>apiserver-advertise-address</code> should be the IP Address of the <code>k8smaster1</code> VM.</p> Initialize the Kubernetes Cluster <p></p> Initialize the Kubernetes Cluster<pre><code>sudo kubeadm init --pod-network-cidr 10.1.1.0/22 --apiserver-advertise-address 192.168.2.180 --kubernetes-version 1.29.2\n</code></pre> <p>You should now deploy a pod network to the cluster. Run <code>kubectl apply -f [podnetwork].yaml</code> with one of the options listed at: Cluster Administration Addons</p> <p>Make a note of the portion of the shell output which gives the join command, as we will need it to add our worker nodes to the master.</p> <p>Copy the <code>kubeadm join</code> command</p> <p>Then you can join any number of worker nodes by running the following on each as root:</p> Copy the `kubeadm join` command<pre><code>kubeadm join 192.168.2.180:6443 --token wonuiv.v93rkizr6wvxwe6l \\\n--discovery-token-ca-cert-hash sha256:046ffa6303e6b281285a636e856b8e9e51d8c755248d9d013e15ae5c5f6bb127\n</code></pre>
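<p>The bootstrap token in the join command is short-lived (24 hours by default); if it has expired before the workers are ready to join, a fresh join command can be printed on the master node:</p> Regenerate the join command<pre><code>sudo kubeadm token create --print-join-command\n</code></pre>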
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#setup-kubeconfig","title":"Setup <code>kubeconfig</code>","text":"<p>Before we add the worker nodes we can setup the <code>kubeconfig</code> so we will be able to use <code>kubectl</code> going forwards.</p> Setup <code>kubeconfig</code> <p></p> Setup `kubeconfig`<pre><code>sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#connect-list-the-kubernetes-cluster-nodes","title":"Connect & List the Kubernetes Cluster Nodes","text":"<p>We can now run <code>kubectl</code> to connect to the Kubernetes API Server to display the nodes in the newly created Kubernetes Cluster.</p> Connect & List the Kubernetes Cluster Nodes <p></p> Connect & List the Kubernetes Cluster Nodes<pre><code>kubectl get nodes\n</code></pre> <p>STATUS = NotReady</p> <p>From the <code>kubectl</code> output the status of the <code>k8smaster1</code> node is showing as <code>NotReady</code> as we have yet to install the Kubernetes Network to the cluster.</p> <p>We will be installing <code>cilium</code> as that provider in a future step.</p> <p>Warning</p> <p>At this point we should complete the installations of the worker nodes to this same point before continuing.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#join-the-worker-nodes-to-the-cluster","title":"Join the Worker Nodes to the Cluster","text":"<p>Once installed we run the join on the worker nodes. Note that the command which was output from the <code>kubeadm init</code> needs to be run with <code>sudo</code></p> Join the Worker Nodes to the Cluster<pre><code>sudo kubeadm join 192.168.2.180:6443 --token wonuiv.v93rkizr6wvxwe6l \\\n --discovery-token-ca-cert-hash sha256:046ffa6303e6b281285a636e856b8e9e51d8c755248d9d013e15ae5c5f6bb127\n</code></pre> <code>kubectl get nodes</code> <p></p> <p>Now we can again run</p> `kubectl get nodes`<pre><code>kubectl get nodes\n</code></pre> <p>Now we can see all the nodes are present in the Kubernetes Cluster.</p> <p>Run on Head Node only</p> <p>From now on the following commands need to be run on the Master Node only.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-kubernetes-networking","title":"Install Kubernetes Networking","text":"<p>We now need to install a Kubernetes CNI (Container Network Interface) to enable the pod network.</p> <p>We will use Cilium as the CNI for our cluster.</p> Installing the Cilium CLI<pre><code>curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-arm64.tar.gz\nsudo tar xzvfC cilium-linux-arm64.tar.gz /usr/local/bin\nrm cilium-linux-arm64.tar.gz\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-cilium","title":"Install <code>cilium</code>","text":"<p>You can now install Cilium with the following command:</p> Install `cilium`<pre><code>cilium install\ncilium status \n</code></pre> <p>If <code>cilium status</code> shows errors you may need to wait until the Cilium pods have started.</p> <p>You can check progress with</p> Bash<pre><code>kubectl get po -A\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#check-cilium-status","title":"Check <code>cilium</code> Status","text":"<p>Once the Cilium pods are running we can check the status of Cilium again by using</p> Check <code>cilium</code> Status <p></p> Check `cilium` Status<pre><code>cilium status \n</code></pre> <p>We can now recheck the Kubernetes Cluster Nodes</p> <p></p> Bash<pre><code>kubectl get nodes\n</code></pre> <p>and they should have <code>Status Ready</code></p>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#kubernetes-node-preparation","title":"Kubernetes Node Preparation","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#label-kubernetes-nodes","title":"Label Kubernetes Nodes","text":"<p>Now we go ahead and label the nodes. Kinetica uses node labels in production clusters where there are separate 'node groups' configured so that the Kinetica Infrastructure pods are deployed on a smaller VM type and the DB itself is deployed on larger nodes or GPU-enabled nodes.</p> <p>If we were using a Cloud Provider Kubernetes these are synonymous with EKS Node Groups or AKS VMSS which would be created with the same two labels on two node groups.</p> Label Kubernetes Nodes<pre><code>kubectl label node k8snode1 app.kinetica.com/pool=infra\nkubectl label node k8snode2 app.kinetica.com/pool=compute\n</code></pre> <p>Additionally, in our case, as we have created a new cluster the 'role' of the worker nodes is not set so we can also set that.
In many cases the role is already set to <code>worker</code> but here we have some latitude.</p> <p></p> Bash<pre><code>kubectl label node k8snode1 kubernetes.io/role=kinetica-infra\nkubectl label node k8snode2 kubernetes.io/role=kinetica-compute\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-storage-class","title":"Install Storage Class","text":"<p>Install a local path provisioner storage class. In this case we are using the Rancher Local Path provisioner</p> Install Storage Class <p></p> Install Storage Class<pre><code>kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.26/deploy/local-path-storage.yaml\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#set-default-storage-class","title":"Set Default Storage Class","text":"Set Default Storage Class Set Default Storage Class<pre><code>kubectl patch storageclass local-path -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}'\n</code></pre> <p>Kubernetes Cluster Provision Complete</p> <p>Your base Kubernetes Cluster is now complete and ready to have the Kinetica DB installed on it using the Helm Chart.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-kinetica-for-kubernetes-using-helm","title":"Install Kinetica for Kubernetes using Helm","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#add-the-helm-repository","title":"Add the Helm Repository","text":"Add the Helm Repository Add the Helm Repository<pre><code>helm repo add kinetica-operators https://kineticadb.github.io/charts\nhelm repo update\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#download-a-starter-helm-valuesyaml","title":"Download a Starter Helm <code>values.yaml</code>","text":"<p>Now we need to obtain a starter <code>values.yaml</code> file to pass to our Helm install. We can download one from the <code>github.com/kineticadb/charts</code> repo.</p> Download a Starter Helm <code>values.yaml</code> <p></p> Download a Starter Helm `values.yaml`<pre><code> wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k8s.yaml\n</code></pre> <p>Obtain a Kinetica License Key</p> <p>A product license key will be required for install.
Please contact Kinetica Support to request a trial key.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#helm-install-kinetica","title":"Helm Install Kinetica","text":"Helm install kinetica-operators<pre><code>helm -n kinetica-system upgrade -i \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"local-path\"\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#monitor-kinetica-startup","title":"Monitor Kinetica Startup","text":"<p>After a few moments, follow the progression of the main database pod startup with:</p> Monitor the Kinetica installation progress<pre><code>kubectl -n gpudb get po gpudb-0 -w\n</code></pre> <p>Kinetica DB Provision Complete</p> <p>Once you see <code>gpudb-0 3/3 Running</code> the database is up and running.</p> <p>Software LoadBalancer</p> <p>If you require a software based LoadBalancer to allocate IP addresses to the Ingress Controller or exposed Kubernetes Services then see here</p> <p>This is usually apparent if your ingress or other Kubernetes Services with the type <code>LoadBalancer</code> are stuck in the <code>Pending</code> state.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/minio_s3_dev_test/","title":"Using Minio for S3 Storage in Dev/Test","text":"<p>If you require a new Minio installation</p> <p>Please follow the installation instructions found here to install the Minio Operator and to create your first tenant.</p>","tags":["Advanced","Development","Installation","Storage"]},{"location":"Advanced/minio_s3_dev_test/#create-minio-tenant","title":"Create Minio Tenant","text":"<p>In our example below we have created a tenant <code>kinetica</code> in the <code>gpudb</code> namespace using the Kinetica storage class <code>kinetica-k8s-sample-storageclass</code>.</p> <p></p> <p>or use the minio kubectl plugin</p> minio cli - create tenant<pre><code>kubectl minio tenant create kinetica --capacity 32Gi --servers 1 --volumes 1 --namespace gpudb --storage-class kinetica-k8s-sample-storageclass --disable-tls\n</code></pre> <p>Console Port Forward</p> <p>Forward the minio console for our newly created tenant</p> Bash<pre><code>kubectl port-forward service/kinetica-console -n gpudb 9443:9443\n</code></pre> <p>In that tenant we create a bucket <code>kinetica-cold-storage</code> and in that bucket we create the path <code>gpudb/cold-storage</code>.</p> <p></p> <p>Once you have a tenant up and running we can configure Kinetica for Kubernetes to use it as the DB Cold Storage tier.</p> <p>Backup/Restore Storage</p> <p>Minio can also be used as the S3 storage for Velero. This enables Backup/Restore functionality via the <code>KineticaBackup</code> & <code>KineticaRestore</code> CRs.</p>","tags":["Advanced","Development","Installation","Storage"]},{"location":"Advanced/minio_s3_dev_test/#configuring-kinetica-to-use-minio","title":"Configuring Kinetica to use Minio","text":"","tags":["Advanced","Development","Installation","Storage"]},{"location":"Advanced/minio_s3_dev_test/#cold-storage","title":"Cold Storage","text":"<p>In order to configure the Cold Storage Tier for the Database it is necessary to add a <code>coldStorageTier</code> to the <code>KineticaCluster</code> CR. As we are using S3 Buckets for storage we then require a <code>coldStorageS3</code> entry which allows us to set the <code>awsSecretAccessKey</code> & <code>awsAccessKeyId</code> which were generated when the tenant was created in Minio. </p>
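<p>The Service that the Minio Operator created can be confirmed with <code>kubectl</code> (this assumes the tenant was created in the <code>gpudb</code> namespace as above):</p> List the Minio Service<pre><code>kubectl -n gpudb get svc minio\n</code></pre>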
As we are using S3 Buckets for storage we then require a <code>coldStorageS3</code> entry which allows us to set the <code>awsSecretAccessKey</code> & <code>awsAccessKeyId</code> which were generated when the tenant was created in Minio. </p> <p>If we look in the <code>gpudb</code> namespace we can see that Minio created a Kubernetes service called <code>minio</code> exposed on port <code>80</code> (as we created the tenant with <code>--disable-tls</code>). </p> <p>In the <code>coldStorageS3</code> we need to add an <code>endpoint</code> field which contains the <code>minio</code> service name and the namespace <code>gpudb</code> i.e. <code>minio.gpudb.svc.cluster.local</code>.</p> KineticaCluster coldStorageTier S3 Configuration<pre><code>spec:\n gpudbCluster:\n config:\n tieredStorage:\n coldStorageTier:\n coldStorageType: s3\n coldStorageS3:\n basePath: gpudb/cold-storage/\n bucketName: kinetica-cold-storage\n endpoint: minio.gpudb.svc.cluster.local:80\n limit: \"32Gi\"\n useHttps: false\n useManagedCredentials: false\n useVirtualAddressing: false\n awsSecretAccessKey: 6rLaOOddP3KStwPDhf47XLHREPdBqdav\n awsAccessKeyId: VvlP5rHbQqzcYPHG\n tieredStrategy:\n default: VRAM 1, RAM 5, PERSIST 5, COLD0 10\n</code></pre>","tags":["Advanced","Development","Installation","Storage"]},{"location":"Advanced/nginx_ingress_config/","title":"<code>nginx-ingress</code> Ingress Configuration","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/nginx_ingress_config/#coming-soon","title":"Coming Soon","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Architecture/","title":"Architecture","text":"<p>Kinetica is a distributed, vectorized, memory-first, columnar database with tiered storage that is optimized for high speed and performance \u2013 particularly on streaming analytics and geospatial workloads.</p> <p>Kinetica has been uniquely designed for fast and flexible analytics on large volumes of changing data with incredible performance.</p> <ul> <li> <p> Kinetica Database Architecture</p> <p>Install the Kinetica DB with helm and get up and running in minutes Core Database Architecture</p> </li> <li> <p> Kinetica for Kubernetes Architecture</p> <p>Install the Kinetica DB with helm and get up and running in minutes Kubernetes Architecture</p> </li> </ul>","tags":["Architecture"]},{"location":"Architecture/db_architecture/","title":"Architecture","text":"<p>Kinetica is a distributed, vectorized, memory-first, columnar database with tiered storage that is optimized for high speed and performance \u2013 particularly on streaming analytics and geospatial workloads.</p> <p>Kinetica has been uniquely designed for fast and flexible analytics on large volumes of changing data with incredible performance.</p>","tags":["Architecture"]},{"location":"Architecture/db_architecture/#database-architecture","title":"Database Architecture","text":"","tags":["Architecture"]},{"location":"Architecture/db_architecture/#scale-out-architecture","title":"Scale-out Architecture","text":"<p>Kinetica has a distributed architecture that has been designed for data processing at scale. A standard cluster consists of identical nodes running on commodity hardware. A single node is chosen to be the head aggregation node.</p> <p> A cluster can be scaled up at any time to increase storage capacity and processing power, with near-linear scale processing improvements for most operations.
Sharding of data can be done automatically, or specified and optimized by the user.</p>","tags":["Architecture"]},{"location":"Architecture/db_architecture/#distributed-ingest-query","title":"Distributed Ingest & Query","text":"<p>Kinetica uses a shared-nothing data distribution across worker nodes. The head node receives a query and breaks it down into small tasks that can be spread across worker nodes. To avoid bottlenecks at the head node, ingestion can also be organized in parallel by all the worker nodes. Kinetica is able to distribute data client-side before sending it to designated worker nodes. This streamlines communication and processing time.</p> <p>For the client application, there is no need to be aware of how many nodes are in the cluster, where they are, or how the data is distributed across them!</p> <p></p>","tags":["Architecture"]},{"location":"Architecture/db_architecture/#column-oriented","title":"Column Oriented","text":"<p>Columnar data structures lend themselves to low-latency reads of data. But from a user's perspective, Kinetica behaves very similarly to a standard relational database \u2013 with tables of rows and columns, and it can be queried with SQL or through APIs. Available column types include the standard base types (int, long, float, double, string, & bytes), as well as numerous sub-types supporting date/time, geospatial, and other data forms.</p> <p></p>","tags":["Architecture"]},{"location":"Architecture/db_architecture/#vectorized-functions","title":"Vectorized Functions","text":"<p>Vectorization is Kinetica\u2019s secret sauce and the key feature that underpins its blazing fast performance.</p> <p>Advanced vectorized kernels are optimized to use vector-capable CPUs and GPUs for faster performance. The query engine automatically assigns tasks to the processor where they will be most performant. Aggregations, filters, window functions, joins and geospatial rendering are some of the capabilities that see performance improvements.</p> <p></p>","tags":["Architecture"]},{"location":"Architecture/db_architecture/#memory-first-tiered-storage","title":"Memory-First, Tiered Storage","text":"<p>Tiered storage makes it possible to optimize where data lives for performance and cost. Recent data (such as all data where the timestamp is within the last 2 weeks) can be held in-memory, while older data can be moved to disk, or even to external storage services.</p> <p>Kinetica operates on an entire data corpus by intelligently managing data across GPU memory, system memory, SIMD, disk / SSD, HDFS, and cloud storage like S3 for optimal performance.</p> <p>Kinetica can also query and process data stored in data lakes, joining it with data managed by Kinetica in highly parallelized queries.</p>","tags":["Architecture"]},{"location":"Architecture/db_architecture/#performant-key-value-lookup","title":"Performant Key-Value Lookup","text":"<p>Kinetica is able to generate distributed key-value lookups, from columnar data, for high performance and concurrency.
Sharding logic is embedded directly within client APIs, enabling linear scale-out as clients can look up data directly from the node where the data lives.</p>","tags":["Architecture"]},{"location":"Architecture/kinetica_for_kubernetes_architecture/","title":"Kubernetes Architecture","text":"","tags":["Architecture"]},{"location":"Architecture/kinetica_for_kubernetes_architecture/#coming-soon","title":"Coming Soon","text":"","tags":["Architecture"]},{"location":"GettingStarted/","title":"Getting Started","text":"<ul> <li> <p> Set up in 15 minutes (local install)</p> <p>Install the Kinetica DB locally on <code>Kind</code> or <code>k3s</code> with <code>helm</code> to get up and running in minutes (Dev/Test).</p> <p> Quickstart</p> </li> <li> <p> Prepare to Install</p> <p>What you need to know & do before beginning an installation.</p> <p> Preparation and Prerequisites</p> </li> <li> <p> Production Installation</p> <p>Install the Kinetica DB with helm to get up and running quickly (Production).</p> <p> Installation</p> </li> <li> <p> Channel Your Inner Ninja</p> <p>Advanced Installation Topics which go beyond the basic installation.</p> <p> Advanced Topics</p> </li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/aks/","title":"Azure AKS Specifics","text":"<p>This page covers any Microsoft Azure AKS cluster installation specifics.</p>","tags":["AKS","Getting Started"]},{"location":"GettingStarted/eks/","title":"Amazon EKS Specifics","text":"<p>This page covers any Amazon EKS Kubernetes cluster installation specifics.</p>","tags":["EKS","Getting Started","Storage"]},{"location":"GettingStarted/eks/#ebs-csi-driver","title":"EBS CSI driver","text":"<p>Warning</p> <p>Make sure you have enabled the ebs-csi driver in your EKS cluster. This is required for the default storage class to work.</p> <p>Please refer to this AWS documentation for more information.</p>","tags":["EKS","Getting Started","Storage"]},{"location":"GettingStarted/helm_repo_add/","title":"Helm repo add","text":"Add Kinetica Operators Chart Repo<pre><code>helm repo add kinetica-operators https://kineticadb.github.io/charts/latest\n</code></pre>"},{"location":"GettingStarted/installation/","title":"Kinetica for Kubernetes Installation","text":"<ul> <li> <p> CPU Only Installation </p> <p>Install the Kinetica DB to run on Intel, AMD or ARM CPUs with no GPU acceleration. CPU</p> </li> <li> <p> CPU & GPU Installation</p> <p>Install the Kinetica DB to run on nodes with NVIDIA GPU acceleration. Optionally enable Kinetica On-Prem SQLAssistant (LLM).
GPU</p> </li> </ul>","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/","title":"Installation - CPU Only","text":"<p>For managed Kubernetes solutions (AKS, EKS, GKE), OpenShift or on-prem (kubeadm) Kubernetes variants, follow this generic guide to install the Kinetica Operators, Database and Workbench.</p> <p>Preparation & Prerequisites</p> <p>Please make sure you have followed the Preparation & Prerequisites steps</p>","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/#install-the-helm-chart","title":"Install the helm chart","text":"<p>Run the following Helm install command after substituting values from Preparation & Prerequisites</p> Helm install kinetica-operators<pre><code>helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n</code></pre>","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/#check-installation-progress","title":"Check installation progress","text":"<p>After a few moments, follow the progression of the main database pod startup with:</p> Monitor the Kinetica installation progress<pre><code>kubectl -n gpudb get po gpudb-0 -w\n</code></pre> <p>until it reaches <code>\"gpudb-0 3/3 Running\"</code> at which point the database should be ready and all other software installed in the cluster. You may have to run this command in a different terminal if the <code>helm</code> command from step 4 has not yet returned to the system prompt. Once running, you can quit this kubectl watch command using Ctrl+C.</p> error no pod named gpudb-0 <p>If you receive an error message running <code>kubectl -n gpudb get po gpudb-0 -w</code> informing you that no pod named <code>gpudb-0</code> exists, please check that the OpenLDAP pod is running by running</p> Check OpenLDAP status<pre><code>kubectl -n gpudb get pods\nkubectl -n gpudb describe pod openldap-5f87f77c8b-trpmf\n</code></pre> <p>where the pod name <code>openldap-5f87f77c8b-trpmf</code> is that shown when running <code>kubectl -n gpudb get pods</code></p> <p>Validate if the pod is waiting for its Persistent Volume Claim/Persistent Volume to be created and bound to the pod.</p>","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/#accessing-the-kinetica-installation","title":"Accessing the Kinetica installation","text":"","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/#target-platform-specifics","title":"Target Platform Specifics","text":"cloudOpenShiftlocal - devbare metal - prod <p>If you are installing into a managed Kubernetes environment and the NGINX ingress controller that is installed as part of this install creates a LoadBalancer service, you may need to associate the LoadBalancer with the domain you plan to use.</p> <p>As of now, the kinetica-operator chart installs NGINX ingress controller.
So after the installation is complete, you may need to edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name.</p> <p>Option 1: Use the LoadBalancer domain Set your FQDN in Kinetica<pre><code>kubectl get svc -n kinetica-system\n# look at the loadbalancer dns name, copy it\n\nkubectl -n gpudb edit $(kubectl -n gpudb get kc -o name)\n# replace local.kinetica with the loadbalancer dns name\nkubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)\n# replace local.kinetica with the loadbalancer dns name\n# save and exit\n# you should be able to access the workbench from the loadbalancer dns name\n</code></pre></p> <p>Option 2: Use your custom domain Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.</p> <p>OpenShift Container Platform version 4 is supported. If you are installing on this flavor of Kubernetes, SecurityContextConstraints are required for some of the Kinetica components. To install these add the following set to the main Helm install kinetica-operators command above:</p> <pre><code>--set openshift=\"true\"\n</code></pre> <p>Note</p> <p>The defaultStorageClass must still be set for installation to proceed. Run <code>oc get sc</code> to determine available choices.</p> <p>If you are installing on a local machine which does not have a domain name, you can add the following entry to your <code>/etc/hosts</code> file or equivalent:</p> Configure local access - /etc/hosts<pre><code>127.0.0.1 local.kinetica\n</code></pre> <p>Note</p> <p>The default chart configuration points to <code>local.kinetica</code> but this is configurable.</p> <p>Installing on bare metal machines which do not have an external hardware loadbalancer requires an Ingress controller along with a software loadbalancer in order to be accessible. </p> <p>Kinetica for Kubernetes has been tested with kube-vip</p>","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/","title":"Installation - CPU with GPU Acceleration","text":"<p>For managed Kubernetes solutions (AKS, EKS, GKE), OpenShift or on-prem (kubeadm) Kubernetes variants, follow this generic guide to install the Kinetica Operators, Database and Workbench.</p> <p>Preparation & Prerequisites</p> <p>Please make sure you have followed the Preparation & Prerequisites steps</p>","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#install-via-the-kinetica-operators-helm-chart","title":"Install via the <code>kinetica-operators</code> Helm Chart","text":"","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#gpu-cluster-with-remote-sqlassistant","title":"GPU Cluster with Remote SQLAssistant","text":"<p>Run the following Helm install command after substituting values from section 3</p> Helm install kinetica-operators (No On-Prem SQLAssistant)<pre><code>helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n</code></pre>","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#gpu-cluster-with-on-prem-sqlassistant","title":"GPU Cluster with On-Prem SQLAssistant","text":"<p>or to enable SQLAssistant to be deployed and run 'On-Prem' i.e.
in the same cluster</p> Helm install kinetica-operators (With On-Prem SQLAssistant)<pre><code>helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\" \\\n--set db.gpudbCluster.config.ai.apiProvider=\"kineticallm\"\n</code></pre> <p>On-Prem Kinetica SQLAssistant - Nodes Groups, GPU Counts & VRAM Memory</p> <p>Running the Kinetica SQLAssistant locally requires additional GPUs to be available in a separate Node Group labeled <code>app.kinetica.com/pool=compute-llm</code>. The On-Prem Kinetica LLM requires 40GB of GPU VRAM; the number of GPUs automatically allocated to the SQLAssistant pod will therefore ensure that 40GB of VRAM is available, e.g. 1x A100 GPU or 2x A10G GPU. </p> Label Kubernetes Nodes for LLM<pre><code>kubectl label node k8snode3 app.kinetica.com/pool=compute-llm\n</code></pre>","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#check-installation-progress","title":"Check installation progress","text":"<p>After a few moments, follow the progression of the main database pod startup with:</p> Monitor the Kinetica installation progress<pre><code>kubectl -n gpudb get po gpudb-0 -w\n</code></pre> <p>until it reaches <code>\"gpudb-0 3/3 Running\"</code> at which point the database should be ready and all other software installed in the cluster. You may have to run this command in a different terminal if the <code>helm</code> command from step 4 has not yet returned to the system prompt. Once running, you can quit this kubectl watch command using Ctrl+C.</p> error no pod named gpudb-0 <p>If you receive an error message running <code>kubectl -n gpudb get po gpudb-0 -w</code> informing you that no pod named <code>gpudb-0</code> exists, please check that the OpenLDAP pod is running by running</p> Check OpenLDAP status<pre><code>kubectl -n gpudb get pods\nkubectl -n gpudb describe pod openldap-5f87f77c8b-trpmf\n</code></pre> <p>where the pod name <code>openldap-5f87f77c8b-trpmf</code> is that shown when running <code>kubectl -n gpudb get pods</code></p> <p>Validate if the pod is waiting for its Persistent Volume Claim/Persistent Volume to be created and bound to the pod.</p>","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#accessing-the-kinetica-installation","title":"Accessing the Kinetica installation","text":"","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#target-platform-specifics","title":"Target Platform Specifics","text":"cloudOpenShiftlocal - devbare metal - prod <p>If you are installing into a managed Kubernetes environment and the NGINX ingress controller that is installed as part of this install creates a LoadBalancer service, you may need to associate the LoadBalancer with the domain you plan to use.</p> <p>As of now, the kinetica-operator chart installs NGINX ingress controller.
So after the installation is complete, you may need to edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name.</p> <p>Option 1: Use the LoadBalancer domain Set your FQDN in Kinetica<pre><code>kubectl get svc -n kinetica-system\n# look at the loadbalancer dns name, copy it\n\nkubectl -n gpudb edit $(kubectl -n gpudb get kc -o name)\n# replace local.kinetica with the loadbalancer dns name\nkubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)\n# replace local.kinetica with the loadbalancer dns name\n# save and exit\n# you should be able to access the workbench from the loadbalancer dns name\n</code></pre></p> <p>Option 2: Use your custom domain Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.</p> <p>OpenShift Container Platform version 4 is supported. If you are installing on this flavor of Kubernetes, SecurityContextConstraints are required for some of the Kinetica components. To install these add the following set to the main Helm install kinetica-operators command above:</p> <pre><code>--set openshift=\"true\"\n</code></pre> <p>Note</p> <p>The defaultStorageClass must still be set for installation to proceed. Run <code>oc get sc</code> to determine available choices.</p> <p>If you are installing on a local machine which does not have a domain name, you can add the following entry to your <code>/etc/hosts</code> file or equivalent:</p> Configure local access - /etc/hosts<pre><code>127.0.0.1 local.kinetica\n</code></pre> <p>Note</p> <p>The default chart configuration points to <code>local.kinetica</code> but this is configurable.</p> <p>Installing on bare metal machines which do not have an external hardware loadbalancer requires an Ingress controller along with a software loadbalancer in order to be accessible.</p> <p>Kinetica for Kubernetes has been tested with kube-vip</p>","tags":["Installation"]},{"location":"GettingStarted/local_kinetica_etc_hosts/","title":"Local kinetica etc hosts","text":"<p>FQDN or Local Access</p> <p>By default we create an ingress pointing towards <code>local.kinetica</code>. If you have a domain pointing to your machine, replace/set the FQDN in the <code>values.yaml</code> with the correct domain name or by adding <code>--set</code>.</p> <p>If you are on a local machine which does not have a domain name, add the following entry to your <code>/etc/hosts</code> file or equivalent.</p> Configure local access - /etc/hosts<pre><code>127.0.0.1 local.kinetica\n</code></pre>"},{"location":"GettingStarted/note_additional_gpu_sqlassistant/","title":"Note additional gpu sqlassistant","text":"<p>On-Prem Kinetica SQLAssistant - Nodes Groups, GPU Counts & VRAM Memory</p> <p>Running the Kinetica SQLAssistant locally requires additional GPUs to be available in a separate Node Group labeled <code>app.kinetica.com/pool=compute-llm</code>. The On-Prem Kinetica LLM requires 40GB of GPU VRAM; the number of GPUs automatically allocated to the SQLAssistant pod will therefore ensure that 40GB of VRAM is available, e.g. 1x A100 GPU or 2x A10G GPU.
</p> Label Kubernetes Nodes for LLM<pre><code>kubectl label node k8snode3 app.kinetica.com/pool=compute-llm\n</code></pre>"},{"location":"GettingStarted/preparation_and_prerequisites/","title":"Preparation & Prerequisites","text":"<p>Checks & steps to ensure a smooth installation.</p> <p>Obtain a Kinetica License Key</p> <p>A product license key will be required for install. Please contact Kinetica Support to request a trial key.</p> <p>Failing to provide a license key at installation time will prevent the DB from starting.</p>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#preparation-and-prerequisites","title":"Preparation and prerequisites","text":"<p>Free Resources</p> <p>Your Kubernetes cluster version should be >= 1.22.x and have a minimum of 8 CPU, 8GB RAM and SSD or SATA 7200RPM hard drive(s) with 4X memory capacity.</p> GPU Support <p>For GPU enabled clusters the cards below have been tested in large-scale production environments and provide the best performance for the database.</p> GPU Driver P4/P40/P100 525.X (or higher) V100 525.X (or higher) T4 525.X (or higher) A10/A40/A100 525.X (or higher)","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#kubernetes-cluster-connectivity","title":"Kubernetes Cluster Connectivity","text":"<p>Installation requires Helm3 and access to an on-prem or CSP managed Kubernetes cluster, and the Kubernetes CLI kubectl.</p> <p>The context for the desired target cluster must be selected from your <code>~/.kube/config</code> file and set via the <code>KUBECONFIG</code> environment variable or <code>kubectl ctx</code> (if installed). Check to see if you have the correct context with,</p> show the current kubernetes context<pre><code>kubectl config current-context\n</code></pre> <p>and that you can access this cluster correctly with,</p> list kubernetes cluster nodes<pre><code>kubectl get nodes\n</code></pre> Get Nodes <p></p> <p>If you do not see a list of nodes for your K8s cluster, the helm installation will not work.
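If the wrong context is active you can switch to the correct one first (a sketch; <code>my-cluster</code> is an illustrative context name):</p> Switch kubectl context<pre><code>kubectl config get-contexts\nkubectl config use-context my-cluster\n</code></pre> <p>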
Please check your Kubernetes installation or access credentials (kubeconfig).</p> Kinetica Images for an Air-Gapped Environment <p>If you are installing Kinetica with Helm in an air-gapped environment you will either need a Registry Proxy to pass the requests through or to download the images and push them to your internal Registry.</p> <p>For information on ways to transfer the files into an air-gapped environment, see here.</p>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#required-container-images","title":"Required Container Images","text":"","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#dockerio-required-kinetica-images-for-all-installations","title":"docker.io (Required Kinetica Images for All Installations)","text":"<ul> <li>docker.io/kinetica/kinetica-k8s-operator:{{kinetica_full_version}}<ul> <li>docker.io/kinetica/kinetica-k8s-cpu:{{kinetica_full_version}} or</li> <li>docker.io/kinetica/kinetica-k8s-cpu-avx512:{{kinetica_full_version}} or</li> <li>docker.io/kinetica/kinetica-k8s-gpu:{{kinetica_full_version}}</li> </ul> </li> <li>docker.io/kinetica/workbench-operator:{{kinetica_full_version}}</li> <li>docker.io/kinetica/workbench:{{kinetica_full_version}}</li> <li>docker.io/kinetica/kinetica-k8s-monitor:{{kinetica_full_version}}</li> <li>docker.io/kinetica/busybox:{{kinetica_full_version}}</li> <li>docker.io/kinetica/fluent-bit:{{kinetica_full_version}}</li> <li>docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga</li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#nvcrio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"nvcr.io (Required Kinetica Images for GPU Installations using <code>kinetica-k8s-gpu</code>)","text":"<ul> <li>nvcr.io/nvidia/gpu-operator:v23.9.1</li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#registryk8sio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"registry.k8s.io (Required Kinetica Images for GPU Installations using <code>kinetica-k8s-gpu</code>)","text":"<ul> <li>registry.k8s.io/nfd/node-feature-discovery:v0.14.2</li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#dockerio-required-supporting-images","title":"docker.io (Required Supporting Images)","text":"<ul> <li>docker.io/bitnami/openldap:2.6.7</li> <li>docker.io/alpine/openssl:latest (used by bitnami/openldap)</li> <li>docker.io/otel/opentelemetry-collector-contrib:0.95.0</li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#quayio-required-supporting-images","title":"quay.io (Required Supporting Images)","text":"<ul> <li>quay.io/brancz/kube-rbac-proxy:v0.14.2</li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#optional-container-images","title":"Optional Container Images","text":"<p>These images are only required if certain features are enabled as part of the Helm installation: -</p> <ul> <li>CertManager</li> <li>ingress-nginx</li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#quayio-optional-supporting-images","title":"quay.io (Optional Supporting Images)","text":"<ul> <li>quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager
via Kinetica Helm Chart)</li> <li>quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> <li>quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#registryk8sio-optional-supporting-images","title":"registry.k8s.io (Optional Supporting Images)","text":"<ul> <li>registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing Ingress nGinx via Kinetica Helm Chart)</li> <li>registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c</li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#which-kinetica-core-image-do-i-use","title":"Which Kinetica Core Image do I use?","text":"Container Image Intel (AMD64) Intel (AMD64 AVX512) AMD (AMD64) Graviton (aarch64) Apple Silicon (aarch64) kinetica-k8s-cpu (1) kinetica-k8s-cpu-avx512 kinetica-k8s-gpu (2) (2) (2) <ol> <li>It is preferable on an Intel AVX512 enabled CPU to use the kinetica-k8s-cpu-avx512 container image</li> <li>With a supported NVIDIA GPU.</li> </ol>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#label-the-kubernetes-nodes","title":"Label the Kubernetes Nodes","text":"<p>Kinetica requires some of the Kubernetes Nodes to be labeled as it splits some of the components into different deployment 'pools'. This enables different physical node types to be present in the Kubernetes Cluster, allowing us to target which Kinetica components go where.</p> <p>e.g. for a GPU installation some nodes in the cluster will have GPUs and others are CPU only. We can put the DB on the GPU nodes and our infrastructure components on CPU only nodes.</p> cpu gpu <p>The Kubernetes cluster nodes selected to host the Kinetica infrastructure pods i.e. non-DB Pods require the following label <code>app.kinetica.com/pool=infra</code>.</p> <p></p> Label the Infrastructure Nodes<pre><code> kubectl label node k8snode1 app.kinetica.com/pool=infra\n</code></pre> <p>whilst the Kubernetes cluster nodes selected to host the Kinetica DB Pods require the following label <code>app.kinetica.com/pool=compute</code>.</p> Label the Database Nodes<pre><code> kubectl label node k8snode2 app.kinetica.com/pool=compute\n</code></pre> <p>The Kubernetes cluster nodes selected to host the Kinetica infrastructure pods i.e. non-DB Pods require the following label <code>app.kinetica.com/pool=infra</code>.</p> <p></p> Label the Infrastructure Nodes<pre><code> kubectl label node k8snode1 app.kinetica.com/pool=infra\n</code></pre> <p>whilst the Kubernetes cluster nodes selected to host the Kinetica DB Pods require the following label <code>app.kinetica.com/pool=compute-gpu</code>.</p> Label the Database Nodes<pre><code> kubectl label node k8snode2 app.kinetica.com/pool=compute-gpu\n</code></pre> <p>On-Prem Kinetica SQLAssistant - Nodes Groups, GPU Counts & VRAM Memory</p> <p>Running the Kinetica SQLAssistant locally requires additional GPUs to be available in a separate Node Group labeled <code>app.kinetica.com/pool=compute-llm</code>. The On-Prem Kinetica LLM requires 40GB of GPU VRAM; the number of GPUs automatically allocated to the SQLAssistant pod will therefore ensure that 40GB of VRAM is available, e.g. 1x A100 GPU or 2x A10G GPU.
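To verify how many GPUs a node actually exposes before labeling it, you can inspect the node's allocatable resources (a sketch; <code>k8snode3</code> matches the example below and the <code>nvidia.com/gpu</code> resource assumes the NVIDIA device plugin is installed):</p> Check allocatable GPUs on a node<pre><code>kubectl describe node k8snode3 | grep nvidia.com/gpu\n</code></pre> <p>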
</p> Label Kubernetes Nodes for LLM<pre><code>kubectl label node k8snode3 app.kinetica.com/pool=compute-llm\n</code></pre> <p>Pods Not Scheduling</p> <p>If the Kubernetes nodes are not labeled, Kinetica pods may not schedule and will sit in a 'Pending' state.</p>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#install-the-kinetica-operators-chart","title":"Install the kinetica-operators chart","text":"<p>This chart will install the Kinetica K8s operators together with a default configured database and workbench UI.</p>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#add-the-kinetica-chart-repository","title":"Add the Kinetica chart repository","text":"<p>Add the repo locally as kinetica-operators:</p> Helm repo add<pre><code>helm repo add kinetica-operators https://kineticadb.github.io/charts/latest\n</code></pre> Helm Repo Add <p></p>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#obtain-the-default-helm-values-file","title":"Obtain the default Helm values file","text":"<p>For the generic Kubernetes install use the following values file without modification. Advanced users with specific requirements may need to adjust parameters in this file.</p> Helm values.yaml download<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/values.onPrem.k8s.yaml\n</code></pre>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#determine-the-following-prior-to-the-chart-install","title":"Determine the following prior to the chart install","text":"<p>Default Admin User</p> <p>The default admin user in the Helm chart is <code>kadmin</code> but this is configurable. Non-ASCII characters and typographical symbols in the password must be escaped with a \"\\\". For example, <code>--set dbAdminUser.password=\"MyPassword\\!\"</code></p> <ol> <li>Obtain a LICENSE-KEY as described in the introduction above.</li> <li>Choose a PASSWORD for the initial administrator user</li> <li>As the storage class name varies between K8s flavors and/or there can be multiple, this must be prescribed in the chart installation. Obtain the DEFAULT-STORAGE-CLASS name with the command:</li> </ol> <p></p> Find the default storageclass<pre><code>kubectl get sc -o name \n</code></pre> List StorageClass <p></p> <p>Use the name found after the /. For example, in <code>storageclass.storage.k8s.io/local-path</code> use \"local-path\" as the parameter.</p> Amazon EKS <p>If installing on Amazon EKS, see here</p>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#planning-access-to-your-kinetica-cluster","title":"Planning access to your Kinetica Cluster","text":"Existing Ingress Controller?
<p>If you have an existing Ingress Controller in your Kubernetes cluster and do not want Kinetica to install an <code>ingress-nginx</code> to expose its endpoints, then you can disable <code>ingress-nginx</code> installation in the <code>values.yaml</code> by editing the file and setting <code>install: true</code> to <code>install: false</code>: -</p> YAML<pre><code>nodeSelector: {}\ntolerations: []\naffinity: {}\n\ningressNginx:\n install: false\n</code></pre>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/quickstart/","title":"Quickstart","text":"<p>For the quickstart we have examples for Kind or k3s.</p> <ul> <li>Kind - is suitable for CPU only installations.</li> <li>k3s - is suitable for CPU or GPU installations.</li> </ul> <p>Kubernetes >= 1.25</p> <p>The current version of the chart supports Kubernetes version 1.25 and above.</p>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#please-select-your-target-kubernetes-variant","title":"Please select your target Kubernetes variant:","text":"kind k3s <p>Default User</p> <p>Username as per the values file mentioned above is <code>kadmin</code> and password is <code>Kinetica1234!</code></p>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#kind-kubernetes-in-docker-kindsigsk8sio","title":"Kind (kubernetes in docker kind.sigs.k8s.io)","text":"<p>This installation in a kind cluster is for trying out the operators and the database in a non-production environment.</p> <p>CPU Only</p> <p>This method currently only supports installing a CPU version of the database.</p> <p>Please contact Kinetica Support to request a trial key.</p>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#create-kind-cluster-129","title":"Create Kind Cluster 1.29","text":"Create a new Kind Cluster<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/kind.yaml\nkind create cluster --name kinetica --config kind.yaml\n</code></pre> List Kind clusters<pre><code> kind get clusters\n</code></pre> <p>Set Kubernetes Context</p> <p>Please set your Kubernetes Context to <code>kind-kinetica</code> before performing the following steps. </p>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#kind-install-kinetica-operators-including-a-sample-db-to-try-out","title":"Kind - Install kinetica-operators including a sample db to try out","text":"<p>Review the values file charts/kinetica-operators/values.onPrem.kind.yaml. This installs the operators and a simple DB with Workbench for a non-production try out.</p> <p>As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.</p>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#kind-install-the-kinetica-operators-chart","title":"Kind - Install the Kinetica-Operators Chart","text":"Add Kinetica Operators Chart Repo<pre><code>helm repo add kinetica-operators https://kineticadb.github.io/charts/latest\n</code></pre> <p>FQDN or Local Access</p> <p>By default we create an ingress pointing towards <code>local.kinetica</code>.
If you have a domain pointing to your machine, replace/set the FQDN in the <code>values.yaml</code> with the correct domain name or by adding <code>--set</code>.</p> <p>If you are on a local machine which does not have a domain name, add the following entry to your <code>/etc/hosts</code> file or equivalent.</p> Configure local access - /etc/hosts<pre><code>127.0.0.1 local.kinetica\n</code></pre> Get & install the Kinetica-Operators Chart<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/values.onPrem.kind.yaml\n\nhelm -n kinetica-system upgrade -i kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n</code></pre> <p>or if you have been asked by the Kinetica Support team to try a development version</p> Using a development version<pre><code>helm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system upgrade -i kinetica-operators kinetica-operators/kinetica-operators/ --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 72.2.3\n</code></pre> <p>Accessing the Workbench</p> <p>You should be able to access the workbench at http://local.kinetica</p>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#k3s-k3sio","title":"k3s (k3s.io)","text":"","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#install-k3s-129","title":"Install k3s 1.29","text":"Install k3s<pre><code>curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable=traefik --node-name kinetica-master --token 12345\" K3S_KUBECONFIG_OUTPUT=~/.kube/config_k3s K3S_KUBECONFIG_MODE=644 INSTALL_K3S_VERSION=v1.29.2+k3s1 sh -\n</code></pre> <p>Once installed we need to set the current Kubernetes context to point to the newly created k3s cluster.</p> <p>Select if you want local or remote access to the Kubernetes Cluster: -</p> Local AccessRemote Access <p>For only local access to the cluster we can simply set the <code>KUBECONFIG</code> environment variable</p> Set kubectl context<pre><code>export KUBECONFIG=/etc/rancher/k3s/k3s.yaml\n</code></pre> <p>For remote access i.e. outside the host/VM k3s is installed on: -</p> <p>Copy <code>/etc/rancher/k3s/k3s.yaml</code> to <code>~/.kube/config</code> on your machine located outside the cluster. Then edit the file and replace the value of the server field with the IP or name of your K3s server.</p> Copy the kube config and set the context<pre><code>sudo chmod 600 /etc/rancher/k3s/k3s.yaml\nmkdir -p ~/.kube\nsudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config\nsudo chown \"${USER:=$(/usr/bin/logname)}:$USER\" ~/.kube/config\n# Edit the ~/.kube/config server field with the IP or name of your K3s server here\nexport KUBECONFIG=~/.kube/config\n</code></pre>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#k3s-install-kinetica-operators-including-a-sample-db-to-try-out","title":"K3s - Install kinetica-operators including a sample db to try out","text":"<p>Review the values file <code>charts/kinetica-operators/values.onPrem.k3s.yaml</code>.
This installs the operators and a simple DB with Workbench for a non-production try out.</p> <p>FQDN or Local Access</p> <p>By default we create an ingress pointing towards <code>local.kinetica</code>. If you have a domain pointing to your machine, replace/set the FQDN in the <code>values.yaml</code> with the correct domain name or by adding <code>--set</code>.</p> <p>If you are on a local machine which does not have a domain name, add the following entry to your <code>/etc/hosts</code> file or equivalent.</p> Configure local access - /etc/hosts<pre><code>127.0.0.1 local.kinetica\n</code></pre>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#k3s-install-the-kinetica-operators-chart-cpu","title":"K3S - Install the Kinetica-Operators Chart (CPU)","text":"Add Kinetica Operators Chart Repo<pre><code>helm repo add kinetica-operators https://kineticadb.github.io/charts/latest\n</code></pre> Download Template values.yaml<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/values.onPrem.k3s.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n</code></pre> <p>or if you have been asked by the Kinetica Support team to try a development version</p> Using a development version<pre><code>helm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n</code></pre>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#k3s-install-the-kinetica-operators-chart-gpu","title":"K3S - Install the Kinetica-Operators Chart (GPU)","text":"<p>If you wish to try out the GPU capabilities, you can use the following values file, provided you are on an NVIDIA GPU capable machine.</p> k3s GPU Installation<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/values.onPrem.k3s.gpu.yaml\n\nhelm -n kinetica-system install kinetica-operators charts/kinetica-operators/ --create-namespace --values values.onPrem.k3s.gpu.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n</code></pre> <p>Accessing the Workbench</p> <p>You should be able to access the workbench at http://local.kinetica</p>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#uninstall-k3s","title":"Uninstall k3s","text":"uninstall k3s<pre><code>/usr/local/bin/k3s-uninstall.sh\n</code></pre>","tags":["Development","Getting Started","Installation"]},{"location":"Help/changing_the_fqdn/","title":"How to change the Clusters FQDN","text":"","tags":["Configuration","Support"]},{"location":"Help/changing_the_fqdn/#coming-soon","title":"Coming Soon","text":"","tags":["Configuration","Support"]},{"location":"Help/faq/","title":"Frequently Asked Questions","text":"","tags":["Support"]},{"location":"Help/faq/#coming-soon","title":"Coming Soon","text":"","tags":["Support"]},{"location":"Help/help_and_tutorials/","title":"Help & Tutorials","text":"<ul> <li> <p> Tutorials</p> <p> Tutorials</p> </li> <li> <p> Help</p> <p> Help</p> </li>
</ul>","tags":["Support"]},{"location":"Help/help_and_tutorials/#coming-soon","title":"Coming Soon","text":"","tags":["Support"]},{"location":"Help/help_index/","title":"Creating Users, Roles, Schemas and other Kinetica DB Objects","text":"","tags":["Support"]},{"location":"Help/help_index/#coming-soon","title":"Coming Soon","text":"","tags":["Support"]},{"location":"Monitoring/logs/","title":"Log Collection & Display","text":"<p>It is possible to forward/server the Kinetica on Kubernetes logs via an OpenTelemetry [OTEL] collector.</p> <p>By default an OpenTelemetry Collector is deployed in the <code>kinetica-system</code> namespace as part of the Helm install of the the kinetica-operators Helm chart along with a Kubernetes ConfigMap to configure this collector. The ConfigMap is in the <code>kinetica-system</code> namespace and is called <code>otel-collector-conf</code>.</p> <p>Detailed <code>otel-collector-conf</code> setup</p> <p>For more details on the Kinetica installed OTEL Collector please see here.</p> <p>There are many supported mechanisms to expose the logs here is one possibility: -</p> <ul> <li><code>lokiexporter</code> - Exports data via HTTP to Loki.</li> </ul> <p>Tip</p> <p>For a full list of supported OTEL exporters, including those for GrafanaCloud, AWS, Azure, Logz.io, Splunk and many databases please see here</p>","tags":["Operations","Monitoring"]},{"location":"Monitoring/logs/#lokiexporter-otel-collector-exporter","title":"<code>lokiexporter</code> OTEL Collector Exporter","text":"<p>Exports data via HTTP to Loki.</p> Example Configuration<pre><code>exporters:\n loki:\n endpoint: https://loki.example.com:3100/loki/api/v1/push\n default_labels_enabled:\n exporter: false\n job: true\n</code></pre> <p>For full details on configuring the OTEL collector exporter <code>lokiexporter</code> see here.</p>","tags":["Operations","Monitoring"]},{"location":"Monitoring/metrics/","title":"Metrics Collection & Display","text":"<p>It is possible to forward/server the Kinetica on Kubernetes metrics via an OpenTelemetry [OTEL] collector. </p> <p>By default an OpenTelemetry Collector is deployed in the <code>kinetica-system</code> namespace as part of the Helm install of the the kinetica-operators Helm chart along with a Kubernetes ConfigMap to configure this collector. The ConfigMap is in the <code>kinetica-system</code> namespace and is called <code>otel-collector-conf</code>.</p> <p>Detailed <code>otel-collector-conf</code> setup</p> <p>For more details on the Kinetica installed OTEL Collector please see here.</p> <p>There are many supported mechanisms to expose the metrics here are a few possibilities: -</p> <ul> <li><code>prometheusremotewriteexporter</code> - Prometheus Remote Write Exporter sends OpenTelemetry metrics to Prometheus remote write compatible backends.</li> <li><code>prometheusexporter</code> - allows the metrics to be scraped by a Prometheus server</li> </ul> <p>Tip</p> <p>For a full list of supported OTEL exporters, including those for Grafana Cloud, AWS, Azure and many databases please see here</p>","tags":["Operations","Monitoring"]},{"location":"Monitoring/metrics/#prometheusremotewriteexporter-prometheus-otel-remote-write-exporter","title":"<code>prometheusremotewriteexporter</code> Prometheus OTEL Remote Write Exporter","text":"<p>prometheusremotewriteexporter OTEL Exporter</p> <p>Prometheus Remote Write Exporter sends OpenTelemetry metrics to Prometheus remote write compatible backends such as Cortex, Mimir, and Thanos. 
By default, this exporter requires TLS and offers queued retry capabilities.</p> <p>Warning</p> <p>Non-cumulative monotonic, histogram, and summary OTLP metrics are dropped by this exporter.</p> Example Configuration<pre><code>exporters:\n prometheusremotewrite:\n endpoint: \"https://my-cortex:7900/api/v1/push\"\n external_labels:\n label_name1: label_value1\n label_name2: label_value2\n</code></pre> <p>For full details on configuring the OTEL collector exporter <code>prometheusremotewriteexporter</code> see here.</p>","tags":["Operations","Monitoring"]},{"location":"Monitoring/metrics/#prometheusexporter-prometheus-otel-exporter","title":"<code>prometheusexporter</code> Prometheus OTEL Exporter","text":"<p>Exports data in the Prometheus format, which allows it to be scraped by a Prometheus server.</p> Example Configuration<pre><code>exporters:\n prometheus:\n endpoint: \"1.2.3.4:1234\"\n tls:\n ca_file: \"/path/to/ca.pem\"\n cert_file: \"/path/to/cert.pem\"\n key_file: \"/path/to/key.pem\"\n namespace: test-space\n const_labels:\n label1: value1\n \"another label\": spaced value\n send_timestamps: true\n metric_expiration: 180m\n enable_open_metrics: true\n add_metric_suffixes: false\n resource_to_telemetry_conversion:\n enabled: true\n</code></pre> <p>For full details on configuring the OTEL collector exporter <code>prometheusexporter</code> see here.</p>","tags":["Operations","Monitoring"]},{"location":"Operations/","title":"Operational Management","text":"<ul> <li> <p> Metrics</p> <p>Collecting and storing metrics as time series data. Metrics</p> </li> <li> <p> Logs</p> <p>Log aggregation. Logs</p> </li> <li> <p> Metric & Log Distribution</p> <p>Metrics & Logs can be distributed to other systems using OpenTelemetry. OpenTelemetry</p> </li> <li> <p> Backup & Restore</p> <p>Backup & Restore of the Kinetica DB. Backup & Restore</p> <p>Note</p> <p>This requires Velero to be installed on the Kubernetes Cluster.</p> </li> <li> <p> Reduce Costs</p> <p>Suspend & Resume Kinetica for Kubernetes. Suspend & Resume</p> </li> <li> <p> Database Rebalancing</p> <p>Kinetica for Kubernetes Data Sharding & Rebalancing. Rebalancing</p> </li> </ul>","tags":["Operations"]},{"location":"Operations/backup_and_restore/","title":"Kinetica for Kubernetes Backup & Restore","text":"<p>Kinetica for Kubernetes supports Backup & Restore of the installed Kinetica DB by leveraging Velero, which must be installed into the same Kubernetes cluster that the <code>kinetica-operators</code> Helm chart is deployed in.</p> <p>Velero</p> <p>Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes.
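A minimal installation sketch (illustrative flags; the AWS plugin, bucket name and credentials file are assumptions to adapt to your environment):</p> velero install (sketch)<pre><code>velero install \\\n--provider aws \\\n--plugins velero/velero-plugin-for-aws:v1.9.0 \\\n--bucket my-velero-backups \\\n--secret-file ./credentials-velero \\\n--backup-location-config region=us-east-1 \\\n--snapshot-location-config region=us-east-1\n</code></pre> <p>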
You can run Velero with a cloud provider or on-premises.</p> <p>For Velero installation please see here.</p> <p>Velero Installation</p> <p>The <code>kinetica-operators</code> Helm chart does not deploy Velero; it is a prerequisite and must be installed before Backup & Restore will work correctly.</p> <p>There are two ways to initiate a Backup or Restore: -</p> <ul> <li>Workbench Initiated</li> <li>Kubernetes CR Initiated</li> </ul> <p>Preferred Backup/Restore Mechanism</p> <p>The preferred way to Backup or Restore the Kinetica for Kubernetes DB instance is via Workbench.</p>","tags":["Operations"]},{"location":"Operations/backup_and_restore/#workbench-initiated-backup-or-restore","title":"Workbench Initiated Backup or Restore","text":"","tags":["Operations"]},{"location":"Operations/backup_and_restore/#home","title":"> Home","text":"<p>From the Workbench Home page</p> <p></p> <p>we need to select the <code>Manage</code> option from the toolbar.</p> <p></p>","tags":["Operations"]},{"location":"Operations/backup_and_restore/#manage-cluster-overview","title":"> Manage > Cluster > Overview","text":"<p>On the Cluster Overview page select the 'Snapshots' tab</p> <p></p>","tags":["Operations"]},{"location":"Operations/backup_and_restore/#manage-cluster-snapshots","title":"> Manage > Cluster > Snapshots","text":"","tags":["Operations"]},{"location":"Operations/backup_and_restore/#backup","title":"Backup","text":"<p>Select the 'Backup Now' button</p> <p></p> <p>and the backup will start and you will be able to see the progress</p> <p></p>","tags":["Operations"]},{"location":"Operations/backup_and_restore/#restore","title":"Restore","text":"","tags":["Operations"]},{"location":"Operations/backup_and_restore/#kubernetes-cr-initiated-backup-or-restore","title":"Kubernetes CR Initiated Backup or Restore","text":"<p>The Kinetica DB Operator supports two custom CRs </p> <ul> <li><code>KineticaClusterBackup</code></li> <li><code>KineticaClusterRestore</code></li> </ul> <p>which can be used to perform a Backup of the database and a Restore of Kinetica namespaces.</p>","tags":["Operations"]},{"location":"Operations/backup_and_restore/#kineticaclusterbackup-cr","title":"<code>KineticaClusterBackup</code> CR","text":"<p>Submission of a <code>KineticaClusterBackup</code> CR will trigger the Kinetica DB Operator to perform a backup of a Kinetica DB instance.</p> <p>Kinetica DB Offline</p> <p>To perform a database backup the Kinetica DB needs to be suspended so that Velero has access to the necessary disks. The DB will be stopped & restarted automatically by the Kinetica DB Operator as part of the backup process.</p> Example KineticaClusterBackup CR yaml<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaClusterBackup\nmetadata:\n name: kineticaclusterbackup-sample\n namespace: velero\nspec:\n includedNamespaces:\n - gpudb\n</code></pre> <p>The namespace of the backup CR should be different from the namespace the Kinetica DB is running in, i.e. not <code>gpudb</code>. We recommend using the namespace Velero is deployed into.</p> <p>Backup names are unique</p> <p>The name of each <code>KineticaClusterBackup</code> CR must be unique; we therefore suggest including the date + time of the backup in the CR name to ensure uniqueness.
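For example, one way to generate a dated name when applying the CR (a sketch; the <code>velero</code> namespace assumes Velero's default install namespace):</p> Apply a backup CR with a dated name<pre><code>kubectl apply -f - <<EOF\napiVersion: app.kinetica.com/v1\nkind: KineticaClusterBackup\nmetadata:\n name: gpudb-backup-$(date +%Y%m%d-%H%M)\n namespace: velero\nspec:\n includedNamespaces:\n - gpudb\nEOF\n</code></pre> <p>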
Kubernetes CR names have a strict naming format so the specified name must conform to those patterns.</p> <p>For a detailed description of the <code>KineticaClusterBackup</code> CRD see here</p>","tags":["Operations"]},{"location":"Operations/backup_and_restore/#kineticaclusterrestore-cr","title":"<code>KineticaClusterRestore</code> CR","text":"<p>The easiest way to perform a restore of Kinetica for Kubernetes is to simply delete the <code>gpudb</code> namespace from the Kubernetes cluster.</p> Delete the Kinetica DB<pre><code>kubectl delete ns gpudb\n</code></pre> <p>Kinetica DB Offline</p> <p>To perform a database restore the Kinetica DB needs to be suspended so that Velero has access to the necessary disks. The DB will be stopped & restarted automatically by the Kinetica DB Operator as part of the restore process.</p> Example KineticaClusterRestore CR yaml<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaClusterRestore\nmetadata:\n name: kineticaclusterrestore-sample\n namespace: velero\nspec:\n backupName: kineticaclusterbackup-sample\n</code></pre> <p>The namespace of the restore CR should be the same as the namespace the <code>KineticaClusterBackup</code> CR was placed in, i.e. not the namespace the Kinetica DB is running in.</p> <p>Restore names are unique</p> <p>The name of each <code>KineticaClusterRestore</code> CR must be unique; we therefore suggest including the date + time of the restore in the CR name to ensure uniqueness. Kubernetes CR names have a strict naming format so the specified name must conform to those patterns.</p> <p>For a detailed description of the <code>KineticaClusterRestore</code> CRD see here</p>","tags":["Operations"]},{"location":"Operations/otel/","title":"OTEL Integration for Metric & Log Distribution","text":"<p>Helm installed OTEL Collector</p> <p>By default an OpenTelemetry Collector is deployed in the <code>kinetica-system</code> namespace as part of the Helm install of the kinetica-operators Helm chart along with a Kubernetes ConfigMap to configure this collector. The ConfigMap is in the <code>kinetica-system</code> namespace and is called <code>otel-collector-conf</code>.</p> <p>The Kinetica DB Operators send information to an OpenTelemetry collector.
There are two choices: -</p> <ul> <li>install an OpenTelemetry collector with the Kinetica Operators Helm chart</li> <li>use an existing provisioned OpenTelemetry collector within the Kubernetes Cluster</li> </ul>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#install-an-opentelemetry-collector-with-the-kinetica-operators-helm-chart","title":"Install an OpenTelemetry collector with the Kinetica Operators Helm chart","text":"<p>To enable the Kinetica Operators Helm Chart to deploy an instance of the OpenTelemetry collector into the <code>kinetica-system</code> namespace you need to set the following configuration in the helm values: -</p>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#todo-add-helm-config-example-here","title":"TODO add Helm Config Example Here","text":"<p>A ConfigMap containing the OTEL collector configuration will be generated so that the necessary <code>receivers</code> and <code>processors</code> sections are correctly set up for a Kinetica DB Cluster.</p> <p>This configuration will: -</p> <p>receivers:</p> <ul> <li>configure a <code>syslog</code> receiver which will receive logs from the Kinetica DB pod.</li> <li>configure a <code>prometheus</code> receiver/scraper which will collect metrics from the Kinetica DB.</li> <li>configure an <code>otlp</code> receiver which will receive trace spans from the Kinetica Operators (Optional).</li> <li>configure the <code>hostmetrics</code> collection of host load & memory usage (Optional).</li> <li>configure the <code>k8s_events</code> collection of Kubernetes Events for the Kinetica namespaces (Optional).</li> </ul> <p>processors:</p> <ul> <li>configure attribute processing to set some useful values</li> <li>configure resource processing to set some useful values</li> </ul>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#syslog-configuration","title":"<code>syslog</code> Configuration","text":"<p>The OpenTelemetry <code>syslogreceiver</code> documentation can be found here.</p>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-receivers-configuration","title":"OTEL Receivers Configuration","text":"YAML<pre><code>receivers: \n syslog: \n tcp: \n listen_address: \"0.0.0.0:9601\" \n protocol: rfc5424 \n</code></pre>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-service-configuration","title":"OTEL Service Configuration","text":"<p>Tip</p> <p>In order to batch pushes of log data upstream you can use the following <code>processors</code> section in the OTEL configuration.</p> YAML<pre><code>processors: \n batch: \n</code></pre> YAML<pre><code>service: \n pipelines:\n logs:\n receivers: [syslog]\n processors: [resourcedetection, attributes, resource, batch]\n exporters: ...
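\n# e.g. exporters: [loki], assuming the loki exporter shown on the Logs page is configured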
# Requires configuring for your environment\n</code></pre>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otlp-configuration","title":"<code>otlp</code> Configuration","text":"<p>The default configuration opens both the OTEL gRPC & HTTP listeners.</p>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-receivers-configuration_1","title":"OTEL Receivers Configuration","text":"YAML<pre><code>receivers:\n otlp: \n protocols: \n grpc: \n endpoint: \"0.0.0.0:4317\" \n http: \n endpoint: \"0.0.0.0:4318\" \n</code></pre>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-service-configuration_1","title":"OTEL Service Configuration","text":"<p>Tip</p> <p>In order to batch pushes of trace data upstream you can use the following <code>processors</code> section in the OTEL configuration.</p> YAML<pre><code>processors: \n batch: \n</code></pre> YAML<pre><code>service:\n pipelines:\n traces:\n receivers: [otlp]\n processors: [batch]\n exporters: ... # Requires configuring for your environment\n</code></pre> <p>exporters</p> <p>The <code>exporters</code> will need to be manually configured for your specific environment e.g. forwarding logs/metrics to Grafana, Azure Monitor, AWS etc.</p> <p>Otherwise the data will 'disappear into the ether' and not be relayed upstream.</p>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#hostmetrics-configuration-optional","title":"<code>hostmetrics</code> Configuration (Optional)","text":"<p>The Host Metrics receiver generates metrics about the host system scraped from various sources. This is intended to be used when the collector is deployed as an agent.</p> <p>The OpenTelemetry <code>hostmetrics</code> documentation can be found here.</p>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-receivers-configuration_2","title":"OTEL Receivers Configuration","text":"<p>hostmetricsreceiver</p> <p>The OTEL <code>hostmetricsreceiver</code> requires that the running OTEL collector is the 'contrib' version.</p> YAML<pre><code>receivers:\n hostmetrics: \n # https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/hostmetricsreceiver \n scrapers: \n load: \n memory: \n</code></pre> Grafana <p>The <code>attributes</code> and <code>resource</code> processing enables finer-grained selection using Grafana queries.</p>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#k8s_events-configuration-optional","title":"<code>k8s_events</code> Configuration (Optional)","text":"<p>The Kubernetes Events receiver collects events from the Kubernetes API server. It collects all the new or updated events that come in from the specified namespaces. 
Below we are collecting events from the two default Kinetica namespaces: -</p> YAML<pre><code>receivers:\n k8s_events: \n namespaces: [kinetica-system, gpudb] \n</code></pre> <p>The OpenTelemetry <code>k8seventsreceiver</code> documentation can be found here.</p>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#use-an-existing-provisioned-opentelemetry-collector","title":"Use an existing provisioned OpenTelemetry Collector","text":"","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#coming-soon","title":"Coming Soon","text":"","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/rebalance/","title":"Kinetica for Kubernetes Data Rebalancing","text":"","tags":["Operations"]},{"location":"Operations/rebalance/#coming-soon","title":"Coming Soon","text":"","tags":["Operations"]},{"location":"Operations/suspend_resume/","title":"Kinetica for Kubernetes Suspend & Resume","text":"<p>It is possible to suspend Kinetica for Kubernetes, which spins down the DB.</p> <p>Infrastructure</p> <p>For each deployment of Kinetica for Kubernetes there are two distinct types of pods: -</p> <ul> <li>'Compute' pods containing the Kinetica DB along with the Stats Pod</li> <li>'Infra' pods containing the supporting apps, e.g. Workbench, OpenLDAP etc, and the Kinetica Operators.</li> </ul> <p>Whilst Kinetica for Kubernetes is in the <code>Suspended</code> state only the 'Compute' pods are scaled down. The 'Infra' pods remain running in order for Workbench to be able to log in, backup, restore and, in this case, resume the suspended system.</p> <p>There are three discrete ways to suspend and resume Kinetica for Kubernetes: -</p> <ul> <li>Manually from Workbench</li> <li>Auto-Suspend set in Workbench or from the Helm installation Chart.</li> <li>Manually using a Kubernetes CR (see the sketch below)</li> </ul>
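<p>For the CR-based route, a minimal sketch follows (an assumption based on the <code>KineticaClusterAdmin</code> CR described in the Reference section, whose <code>offline</code> block pauses/resumes the DB; the CR name and namespace are illustrative): -</p> Suspend via KineticaClusterAdmin CR<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaClusterAdmin\nmetadata:\n name: kineticaclusteradmin-suspend\n namespace: gpudb\nspec:\n # Name of the KineticaCluster to target\n kineticaClusterName: kinetica-cluster\n offline:\n  # true - take the DB offline; false - bring it back online\n  offline: true\n</code></pre>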
","tags":["Operations"]},{"location":"Operations/suspend_resume/#suspend-manually-from-workbench","title":"Suspend - Manually from Workbench","text":"","tags":["Operations"]},{"location":"Operations/suspend_resume/#suspend-auto-suspend","title":"Suspend - Auto-Suspend","text":"","tags":["Operations"]},{"location":"Operations/suspend_resume/#suspend-manually-using-a-kubernetes-cr","title":"Suspend - Manually using a Kubernetes CR","text":"","tags":["Operations"]},{"location":"Operators/k3s/","title":"Overview","text":"<p>Kinetica Operators can be installed in any on-prem Kubernetes cluster. This document provides instructions to install the operators in k3s. If you are on another distribution, you should be able to change the values file to suit your environment.</p> <p>You will need a license key for this to work. Please contact Kinetica Support.</p>"},{"location":"Operators/k3s/#kinetica-on-k3s-k3sio","title":"Kinetica on k3s (k3s.io)","text":"<p>The current version of the chart supports Kubernetes version 1.25 and above.</p>"},{"location":"Operators/k3s/#install-k3s-129","title":"Install k3s 1.29","text":"Bash<pre><code>curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable=traefik --node-name kinetica-master --token 12345\" K3S_KUBECONFIG_OUTPUT=~/.kube/config_k3s K3S_KUBECONFIG_MODE=644 INSTALL_K3S_VERSION=v1.29.2+k3s1 sh -\n</code></pre>"},{"location":"Operators/k3s/#k3s-install-kinetica-operators-including-a-sample-db-to-try-out","title":"K3s - Install kinetica-operators including a sample db to try out","text":"<p>Review the values file charts/kinetica-operators/values.onPrem.k3s.yaml. This values file installs the operators and a simple DB with Workbench, intended for a non-production trial.</p> <p>It creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.</p> <p>If you are on a local machine that does not have a domain name, add the following entry to your /etc/hosts file or equivalent.</p> Text Only<pre><code>127.0.0.1 local.kinetica\n</code></pre>"},{"location":"Operators/k3s/#k3s-install-the-kinetica-operators-chart","title":"K3s - Install the kinetica-operators chart","text":"Bash<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n</code></pre>"},{"location":"Operators/k3s/#k3s-install-the-kinetica-operators-chart-gpu-capable-machine","title":"K3s - Install the kinetica-operators chart (GPU Capable Machine)","text":"<p>If you wish to try out the GPU capabilities, you can use the following values file, provided you are on an NVIDIA GPU-capable machine.</p> Bash<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.gpu.yaml\n\nhelm -n kinetica-system install kinetica-operators charts/kinetica-operators/ --create-namespace --values values.onPrem.k3s.gpu.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n</code></pre> <p>You should be able to access the workbench at http://local.kinetica</p> <p>The username, as per the values file mentioned above, is kadmin and the password is Kinetica1234!</p>"},{"location":"Operators/k3s/#uninstall-k3s","title":"Uninstall k3s","text":"Bash<pre><code>/usr/local/bin/k3s-uninstall.sh\n</code></pre>"},{"location":"Operators/k8s/","title":"Overview","text":"<p>For managed Kubernetes solutions (AKS, EKS, GKE) or other on-prem K8s flavors, follow this generic guide to install the Kinetica Operators, Database and Workbench. A product license key will be required for install. Please contact Kinetica Support to request a trial key.</p>"},{"location":"Operators/k8s/#preparation-and-prerequisites","title":"Preparation and prerequisites","text":"<p>Installation requires Helm3 and access to an on-prem or CSP managed Kubernetes cluster. kubectl is optional but highly recommended. The context for the desired target cluster must be selected from your <code>~/.kube/config</code> file or set via the <code>KUBECONFIG</code> environment variable. Check to see if you have the correct context with,</p> Bash<pre><code>kubectl config current-context\n</code></pre> <p>and that you can access this cluster correctly with,</p> Bash<pre><code>kubectl get nodes\n</code></pre> <p>If you do not see a list of nodes for your K8s cluster, the Helm installation will not work. 
Please check your Kubernetes installation or access credentials (kubeconfig).</p>"},{"location":"Operators/k8s/#install-the-kinetica-operators-chart","title":"Install the kinetica-operators chart","text":"<p>This chart will install the Kinetica K8s operators together with a default configured database and workbench UI.</p> <p>If you are installing into a managed Kubernetes environment and the NGINX ingress controller that is installed as part of this install creates a LoadBalancer service, you may need to associate the LoadBalancer with the domain you plan to use.</p> <p>Alternatively, if you are installing on a local machine which does not have a domain name, you can add the following entry to your <code>/etc/hosts</code> file or equivalent:</p> Bash<pre><code>127.0.0.1 local.kinetica\n</code></pre> <p>Note that the default chart configuration points to <code>local.kinetica</code> but this is configurable.</p>"},{"location":"Operators/k8s/#1-add-the-kinetica-chart-repository","title":"1. Add the Kinetica chart repository","text":"<p>Add the repo locally as kinetica-operators:</p> Bash<pre><code>helm repo add kinetica-operators https://kineticadb.github.io/charts\n</code></pre>"},{"location":"Operators/k8s/#2-obtain-the-default-helm-values-file","title":"2. Obtain the default Helm values file","text":"<p>For the generic Kubernetes install use the following values file without modification. Advanced users with specific requirements may need to adjust parameters in this file.</p> Bash<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k8s.yaml\n</code></pre>"},{"location":"Operators/k8s/#3-determine-the-following-prior-to-the-chart-install","title":"3. Determine the following prior to the chart install","text":"<p>(a) Obtain a LICENSE-KEY as described in the introduction above. (b) Choose a PASSWORD for the initial administrator user (Note: the default in the chart for this user is <code>kadmin</code> but this is configurable). Non-ASCII characters and typographical symbols in the password must be escaped with a \"\\\". For example, <code>--set dbAdminUser.password=\"MyPassword\\!\"</code> (c) As the storage class name varies between K8s flavors, and there can be multiple, this must be prescribed in the chart installation. Obtain the DEFAULT-STORAGE-CLASS name with the command:</p> Bash<pre><code>kubectl get sc -o name \n</code></pre> <p>Use the name found after the \"/\". For example, in <code>\"storageclass.storage.k8s.io/TheName\"</code>, use \"TheName\" as the parameter.</p>
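<p>As a convenience, the name portion can be printed directly with a standard shell pipeline (an illustrative one-liner, not part of the chart): -</p> Bash<pre><code>kubectl get sc -o name | cut -d/ -f2\n</code></pre>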
"},{"location":"Operators/k8s/#4-install-the-helm-chart","title":"4. Install the helm chart","text":"<p>Run the following Helm install command after substituting values from section 3 above:</p> Bash<pre><code>helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n</code></pre>"},{"location":"Operators/k8s/#5-check-installation-progress","title":"5. Check installation progress","text":"<p>After a few moments, follow the progression of the main database pod startup with:</p> Bash<pre><code>kubectl -n gpudb get po gpudb-0 -w\n</code></pre> <p>until it reaches <code>\"gpudb-0 3/3 Running\"</code> at which point the database should be ready and all other software installed in the cluster. You may have to run this command in a different terminal if the <code>helm</code> command from step 4 has not yet returned to the system prompt. Once running, you can quit this kubectl watch command using ctrl-c.</p>"},{"location":"Operators/k8s/#6-accessing-the-kinetica-installation","title":"6. Accessing the Kinetica installation","text":""},{"location":"Operators/k8s/#optional-install-a-development-chart-version","title":"(Optional) Install a development chart version","text":"<p>Find all alternative chart versions with:</p> Bash<pre><code>helm search repo kinetica-operators --devel --versions\n</code></pre> <p>Then append <code>--devel --version [CHART-DEVEL-VERSION]</code> to the end of the Helm install command in section 4 above.</p>"},{"location":"Operators/k8s/#k8s-flavour-specific-notes","title":"K8s Flavour specific notes","text":""},{"location":"Operators/k8s/#eks","title":"EKS","text":""},{"location":"Operators/k8s/#ebs-csi-driver","title":"EBS CSI driver","text":"<p>Make sure you have enabled the ebs-csi driver in your EKS cluster. This is required for the default storage class to work. Please refer to this AWS documentation for more information.</p>"},{"location":"Operators/k8s/#ingress","title":"Ingress","text":"<p>As of now, the kinetica-operator chart installs the NGINX ingress controller. So after the installation is complete, you may need to edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name.</p>"},{"location":"Operators/k8s/#option-1-use-the-loadbalancer-domain","title":"Option 1: Use the LoadBalancer domain","text":"Bash<pre><code>kubectl get svc -n kinetica-system\n# look at the loadbalancer dns name, copy it\n\nkubectl -n gpudb edit $(kubectl -n gpudb get kc -o name)\n# replace local.kinetica with the loadbalancer dns name\nkubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)\n# replace local.kinetica with the loadbalancer dns name\n# save and exit\n# you should be able to access the workbench from the loadbalancer dns name\n</code></pre>"},{"location":"Operators/k8s/#option-1-use-your-custom-domain","title":"Option 2: Use your custom domain","text":"<p>Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.</p>"},{"location":"Operators/kind/","title":"Overview","text":"<p>This installation in a kind cluster is for trying out the operators and the database in a non-production environment. This method currently only supports installing a CPU version of the database.</p> <p>You will need a license key for this to work. Please contact Kinetica Support.</p>"},{"location":"Operators/kind/#kind-kubernetes-in-docker-kindsigsk8sio","title":"Kind (kubernetes in docker kind.sigs.k8s.io)","text":""},{"location":"Operators/kind/#create-kind-cluster-129","title":"Create Kind Cluster 1.29","text":"Bash<pre><code>kind create cluster --config charts/kinetica-operators/kind.yaml\n</code></pre>"},{"location":"Operators/kind/#kind-install-kinetica-operators-including-a-sample-db-to-try-out","title":"Kind - Install kinetica-operators including a sample db to try out","text":"<p>Review the values file charts/kinetica-operators/values.onPrem.kind.yaml. This values file installs the operators and a simple DB with Workbench, intended for a non-production trial.</p> <p>It creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.</p>
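<p>If you are on a local machine that does not have a domain name, add the following entry to your /etc/hosts file or equivalent (mirroring the k3s and generic K8s instructions above): -</p> Text Only<pre><code>127.0.0.1 local.kinetica\n</code></pre>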
"},{"location":"Operators/kind/#kind-install-the-kinetica-operators-chart","title":"Kind - Install the kinetica-operators chart","text":"Bash<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.kind.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n</code></pre> <p>You should be able to access the workbench at http://local.kinetica</p> <p>The username, as per the values file mentioned above, is kadmin and the password is Kinetica1234!</p>"},{"location":"Operators/kinetica-operators/","title":"Kinetica DB Operator Helm Charts","text":"<p>To install all the required operators in a single command, perform the following: -</p> Bash<pre><code>helm install -n kinetica-system \\\nkinetica-operators kinetica-operators/kinetica-operators --create-namespace\n</code></pre> <p>This will install all the Kubernetes Operators required into the <code>kinetica-system</code> namespace and create the namespace if it is not currently present.</p> <p>Note</p> <p>Depending on the target platform you are installing to, it may be necessary to supply an additional parameter pointing to a values file to successfully provision the DB.</p> Bash<pre><code>helm install -n kinetica-system -f values.yaml --set provider=aks \\\nkinetica-operators kinetica-operators/kinetica-operators --create-namespace\n</code></pre> <p>The command above uses a custom <code>values.yaml</code> for Helm and sets the install platform to Microsoft Azure AKS.</p> <p>Currently supported <code>providers</code> are: -</p> <ul> <li><code>aks</code> - Microsoft Azure AKS</li> <li><code>eks</code> - Amazon AWS EKS</li> <li><code>local</code> - Generic 'On-Prem' Kubernetes Clusters e.g. one deployed using <code>kubeadm</code></li> </ul>
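<p>For example, to target Amazon EKS rather than AKS, the same install pattern applies with the <code>eks</code> provider: -</p> Bash<pre><code>helm install -n kinetica-system -f values.yaml --set provider=eks \\\nkinetica-operators kinetica-operators/kinetica-operators --create-namespace\n</code></pre>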
<p>Example Helm <code>values.yaml</code> for different Cloud Providers/On-Prem installations: -</p> Azure AKSAmazon EKSOn-Prem values.yaml<pre><code>namespace: kinetica-system\n\ndb:\n serviceAccount: {}\n image:\n # Kinetica DB Operator installer image\n repository: \"registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator\"\n # Kinetica DB Operator installer image tag\n tag: \"\"\n\n parameters:\n # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to\n kubeconfig: \"\"\n # The storage class to use for PVCs\n storageClass: \"managed-premium\"\n\n storageClass:\n persist:\n # Workbench Operator Persistent Volume Storage Class\n provisioner: \"disk.csi.azure.com\"\n procs:\n # Workbench Operator Procs Volume Storage Class\n provisioner: \"disk.csi.azure.com\"\n cache:\n # Workbench Operator Cache Volume Storage Class\n provisioner: \"disk.csi.azure.com\"\n</code></pre> <p>15 <code>storageClass: \"managed-premium\"</code> - sets the appropriate <code>storageClass</code> for Microsoft Azure AKS Persistent Volume (PV)</p> <p>20 <code>provisioner: \"disk.csi.azure.com\"</code> - sets the appropriate disk provisioner for the DB (Persist) filesystem for Microsoft Azure</p> <p>23 <code>provisioner: \"disk.csi.azure.com\"</code> - sets the appropriate disk provisioner for the DB Procs filesystem for Microsoft Azure</p> <p>26 <code>provisioner: \"disk.csi.azure.com\"</code> - sets the appropriate disk provisioner for the DB Cache filesystem for Microsoft Azure</p> values.yaml<pre><code>namespace: kinetica-system\n\ndb:\n serviceAccount: {}\n image:\n # Kinetica DB Operator installer image\n repository: \"registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator\"\n # Kinetica DB Operator installer image tag\n tag: \"\"\n\n parameters:\n # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to\n kubeconfig: \"\"\n # The storage class to use for PVCs\n storageClass: \"gp2\"\n\n storageClass:\n persist:\n # Workbench Operator Persistent Volume Storage Class\n provisioner: \"kubernetes.io/aws-ebs\"\n procs:\n # Workbench Operator Procs Volume Storage Class\n provisioner: \"kubernetes.io/aws-ebs\"\n cache:\n # Workbench Operator Cache Volume Storage Class\n provisioner: \"kubernetes.io/aws-ebs\"\n</code></pre> <p>15 <code>storageClass: \"gp2\"</code> - sets the appropriate <code>storageClass</code> for Amazon EKS Persistent Volume (PV)</p> <p>20 <code>provisioner: \"kubernetes.io/aws-ebs\"</code> - sets the appropriate disk provisioner for the DB (Persist) filesystem for Amazon EKS</p> <p>23 <code>provisioner: \"kubernetes.io/aws-ebs\"</code> - sets the appropriate disk provisioner for the DB Procs filesystem for Amazon EKS</p> <p>26 <code>provisioner: \"kubernetes.io/aws-ebs\"</code> - sets the appropriate disk provisioner for the DB Cache filesystem for Amazon EKS</p> values.yaml<pre><code>namespace: kinetica-system\n\ndb:\n serviceAccount: {}\n image:\n # Kinetica DB Operator installer image\n repository: \"registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator\"\n # Kinetica DB Operator installer image tag\n tag: \"\"\n\n parameters:\n # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to\n kubeconfig: \"\"\n # the type of installation e.g. 
aks, eks, local\n environment: \"local\"\n # The storage class to use for PVCs\n storageClass: \"standard\"\n\n storageClass:\n procs: {}\n persist: {}\n cache: {}\n</code></pre> <p>15 <code>environment: \"local\"</code> - tells the DB Operator to deploy the DB as a 'local' instance to the Kubernetes Cluster</p> <p>17 <code>storageClass: \"standard\"</code> - sets the appropriate <code>storageClass</code> for the On-Prem Persistent Volume Provisioner</p> <p>storageClass</p> <p>The <code>storageClass</code> should be present in the target environment. </p> <p>A list of available <code>storageClass</code> can be obtained using: -</p> Bash<pre><code>kubectl get sc\n</code></pre>"},{"location":"Operators/kinetica-operators/#components","title":"Components","text":"<p>The <code>kinetica-operators</code> Helm Chart wraps the deployment of a number of sub-components: -</p> <ul> <li>Porter Operator</li> <li>Kinetica Database Operator</li> <li>Kinetica Workbench Operator</li> </ul> <p>Installation/Upgrading/Deletion of the Kinetica Operators is done via two CRs which leverage porter.sh as the orchestrator. The corresponding Porter Operator, DB Operator & Workbench Operator CRs are submitted by running the appropriate helm command i.e.</p> <ul> <li>install</li> <li>upgrade</li> <li>uninstall</li> </ul>"},{"location":"Operators/kinetica-operators/#porter-operator","title":"Porter Operator","text":""},{"location":"Operators/kinetica-operators/#database-operator","title":"Database Operator","text":"<p>The Kinetica DB Operator installation CR for the porter.sh operator is: -</p> YAML<pre><code>apiVersion: porter.sh/v1\nkind: Installation\nmetadata:\n annotations:\n meta.helm.sh/release-name: kinetica-operators\n meta.helm.sh/release-namespace: kinetica-system\n labels:\n app.kubernetes.io/instance: kinetica-operators\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/name: kinetica-operators\n app.kubernetes.io/version: 0.1.0\n helm.sh/chart: kinetica-operators-0.1.0\n installVersion: 0.38.10\n name: kinetica-operators-operator-install\n namespace: kinetica-system\nspec:\n action: install\n agentConfig:\n volumeSize: '0'\n parameters:\n environment: local\n storageclass: managed-premium\n reference: docker.io/kinetica/kinetica-k8s-operator:v7.1.9-7.rc3\n</code></pre>"},{"location":"Operators/kinetica-operators/#workbench-operator","title":"Workbench Operator","text":"<p>The Kinetica Workbench installation CR for the porter.sh operator is: -</p> YAML<pre><code>apiVersion: porter.sh/v1\nkind: Installation\nmetadata:\n annotations:\n meta.helm.sh/release-name: kinetica-operators\n meta.helm.sh/release-namespace: kinetica-system\n labels:\n app.kubernetes.io/instance: kinetica-operators\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/name: kinetica-operators\n app.kubernetes.io/version: 0.1.0\n helm.sh/chart: kinetica-operators-0.1.0\n installVersion: 0.38.10\n name: kinetica-operators-wb-operator-install\n namespace: kinetica-system\nspec:\n action: install\n agentConfig:\n volumeSize: '0'\n parameters:\n environment: local\n reference: docker.io/kinetica/workbench-operator:v7.1.9-7.rc3\n</code></pre>"},{"location":"Operators/kinetica-operators/#overriding-images-tags","title":"Overriding Images Tags","text":"Bash<pre><code>helm install -n kinetica-system kinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--set provider=aks \\\n--set dbOperator.image.tag=v7.1.9-7.rc3 \\\n--set dbOperator.image.repository=docker.io/kinetica/kinetica-k8s-operator \\\n--set 
wbOperator.image.repository=docker.io/kinetica/workbench-operator \\\n--set wbOperator.image.tag=v7.1.9-7.rc3\n</code></pre>"},{"location":"Reference/","title":"Reference Section","text":"<ul> <li> <p> Kinetica Operators Helm</p> <p>Kinetica Operators Helm charts & values file reference data. Charts</p> </li> <li> <p> Kinetica Core DB CRDs</p> <p>Kinetica DB Kubernetes CRD & ConfigMap reference data. Cluster CRDs</p> </li> <li> <p> Kinetica Workbench CRDs</p> <p>Kinetica Workbench Kubernetes CRD & ConfigMap reference data. Workbench</p> </li> </ul>","tags":["Reference"]},{"location":"Reference/database/","title":"Kinetica Database Configuration","text":"<ul> <li>kubectl (yaml)</li> </ul>","tags":["Reference"]},{"location":"Reference/database/#kineticacluster","title":"KineticaCluster","text":"<p>To deploy a new Database Instance into a Kubernetes cluster...</p> kubectl <p>Using kubectl, a CustomResource of type <code>KineticaCluster</code> is used to define a new Kinetica DB Cluster in a yaml file.</p> <p>The basic Group, Version, Kind or GVK to instantiate a Kinetica DB Cluster is as follows: -</p> kineticacluster.yaml<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\n</code></pre>","tags":["Reference"]},{"location":"Reference/database/#metadata","title":"Metadata","text":"<p>To which we add a <code>metadata:</code> block for the name of the DB CR along with the <code>namespace</code> into which we are targeting the installation of the DB cluster.</p> kineticacluster.yaml<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n name: my-kinetica-db-cr\n namespace: gpudb\nspec:\n</code></pre>","tags":["Reference"]},{"location":"Reference/database/#spec","title":"Spec","text":"<p>Under the <code>spec:</code> section of the KineticaCluster CR we have a number of sections supporting different aspects of the deployed DB cluster: -</p> <ul> <li>gpudbCluster</li> <li>autoSuspend</li> <li>gadmin</li> </ul>","tags":["Reference"]},{"location":"Reference/database/#gpudbcluster","title":"gpudbCluster","text":"<p>Configuration items specific to the DB itself.</p> kineticacluster.yaml - gpudbCluster<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n name: my-kinetica-db-cr\n namespace: gpudb\nspec:\n gpudbCluster:\n</code></pre>","tags":["Reference"]},{"location":"Reference/database/#gpudbcluster_1","title":"gpudbCluster","text":"cluster name & size<pre><code>clusterName: kinetica-cluster \nclusterSize: \n tshirtSize: M \n tshirtType: LargeCPU \nfqdn: kinetica-cluster.saas.kinetica.com\nhaRingName: default\nhasPools: false \n</code></pre> <p><code>1. clusterName</code> - the user defined name of the Kinetica DB Cluster</p> <p><code>2. clusterSize</code> - block that defines the number of DB Ranks to run</p> <p><code>3. tshirtSize</code> - sets the cluster size to a defined size based upon the t-shirt size. Valid sizes are: -</p> <ul> <li><code>XS</code> - 1 DB Rank</li> <li><code>S</code> - 2 DB Ranks</li> <li><code>M</code> - 4 DB Ranks</li> <li><code>L</code> - 8 DB Ranks</li> <li><code>XL</code> - 16 DB Ranks</li> <li><code>XXL</code> - 32 DB Ranks</li> <li><code>XXXL</code> - 64 DB Ranks</li> </ul> <p><code>4. tshirtType</code> - block that defines the type of DB Ranks to run: -</p> <ul> <li><code>SmallCPU</code> - </li> <li><code>LargeCPU</code> -</li> <li><code>SmallGPU</code> - </li> <li><code>LargeGPU</code> -</li> </ul> <p><code>5. fqdn</code> - The fully qualified URL for the DB cluster. 
Used on the Ingress records for any exposed services.</p> <p><code>6. haRingName</code> - Default: <code>default</code></p> <p><code>7. hasPools</code> - Whether to enable the separate node 'pools' for \"infra\", \"compute\" pod scheduling. Default: false +optional</p>","tags":["Reference"]},{"location":"Reference/database/#autosuspend","title":"autoSuspend","text":"<p>The DB Cluster autoSuspend section allows for the spinning down of the core DB Pods to release the underlying Kubernetes nodes and reduce infrastructure costs when the DB is not in use.</p> kineticacluster.yaml - autoSuspend<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n name: my-kinetica-db-cr\n namespace: gpudb\nspec:\n autoSuspend:\n enabled: false\n inactivityDuration: 1h0m0s\n</code></pre> <p><code>7.</code> the start of the <code>autoSuspend</code> definition</p> <p><code>8.</code> <code>enabled</code> - when set to <code>true</code>, auto suspend of the DB cluster is enabled; when set to <code>false</code>, no automatic suspending of the DB takes place. If omitted it defaults to <code>false</code></p> <p><code>9.</code> <code>inactivityDuration</code> - the duration after which, if no DB activity has taken place, the DB will be suspended</p> <p>Horizontal Pod Autoscaler</p> <p>In order for <code>autoSuspend</code> to work correctly the Kubernetes Horizontal Pod Autoscaler needs to be deployed to the cluster.</p>","tags":["Reference"]},{"location":"Reference/database/#gadmin","title":"gadmin","text":"<p>GAdmin - the Database Administration Console</p> kineticacluster.yaml - gadmin<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n name: my-kinetica-db-cr\n namespace: gpudb\nspec:\n gadmin:\n containerPort:\n containerPort: 8080\n name: gadmin\n protocol: TCP\n isEnabled: true\n</code></pre> <p><code>7.</code> <code>gadmin</code> configuration block definition</p> <p><code>8.</code> <code>containerPort</code> configuration block i.e. where <code>gadmin</code> is exposed on the DB Pod</p> <p><code>9.</code> <code>containerPort</code> the port number as an integer. Default: <code>8080</code></p> <p><code>10.</code> <code>name</code> the name of the port being exposed. Default: <code>gadmin</code></p> <p><code>11.</code> <code>protocol</code> network protocol used. Default: <code>TCP</code></p> <p><code>12.</code> <code>isEnabled</code> whether <code>gadmin</code> is exposed from the DB pod. 
Default: <code>true</code></p>","tags":["Reference"]},{"location":"Reference/database/#kineticauser","title":"KineticaUser","text":"","tags":["Reference"]},{"location":"Reference/database/#kineticagrant","title":"KineticaGrant","text":"","tags":["Reference"]},{"location":"Reference/database/#kineticaschema","title":"KineticaSchema","text":"","tags":["Reference"]},{"location":"Reference/database/#kineticaresourcegroup","title":"KineticaResourceGroup","text":"","tags":["Reference"]},{"location":"Reference/helm_kinetica_operators/","title":"Helm Chart Reference","text":"","tags":["Reference"]},{"location":"Reference/helm_kinetica_operators/#coming-soon","title":"Coming Soon","text":"","tags":["Reference"]},{"location":"Reference/kinetica_cluster_admins/","title":"Kinetica Cluster Admins Reference","text":"","tags":["Reference"]},{"location":"Reference/kinetica_cluster_admins/#full-kineticaclusteradmin-cr-structure","title":"Full KineticaClusterAdmin CR Structure","text":"kineticaclusteradmins.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterAdmin\nmetadata: {}\n# KineticaClusterAdminSpec defines the desired state of\n# KineticaClusterAdmin\nspec:\n # ForceDBStatus - Force a Status of the DB.\n forceDbStatus: string\n # Name - The name of the cluster to target.\n kineticaClusterName: string\n # Offline - Pause/Resume of the DB.\n offline:\n # Set to true if desired state is offline. The supported values are:\n # true false\n offline: false\n # Optional parameters. The default value is an empty map (\n # {} ). Supported Parameters: flush_to_disk Flush to disk when\n # going offline The supported values are: true false\n options: {}\n # Rebalance of the DB.\n rebalance:\n # Optional parameters. The default value is an empty map (\n # {} ). Supported Parameters: rebalance_sharded_data If true,\n # sharded data will be rebalanced approximately equally across the\n # cluster. Note that for clusters with large amounts of sharded\n # data, this data transfer could be time-consuming and result in\n # delayed query responses. The default value is true. The supported\n # values are: true false rebalance_unsharded_data If true,\n # unsharded data (a.k.a. randomly-sharded) will be rebalanced\n # approximately equally across the cluster. Note that for clusters\n # with large amounts of unsharded data, this data transfer could be\n # time-consuming and result in delayed query responses. The default\n # value is true. The supported values are: true false\n # table_includes Comma-separated list of unsharded table names\n # to rebalance. Not applicable to sharded tables because they are\n # always rebalanced. Cannot be used simultaneously with\n # table_excludes. This parameter is ignored if\n # rebalance_unsharded_data is false.\n # table_excludes Comma-separated list of unsharded table names\n # to not rebalance. 
Not applicable to sharded tables because they\n # are always rebalanced. Cannot be used simultaneously with\n # table_includes. This parameter is ignored if rebalance_\n # unsharded_data is false. aggressiveness Influences how much\n # data is moved at a time during rebalance. A higher aggressiveness\n # will complete the rebalance faster. A lower aggressiveness will\n # take longer but allow for better interleaving between the\n # rebalance and other queries. Valid values are constants from 1\n # (lowest) to 10 (highest). The default value is '1'.\n # compact_after_rebalance Perform compaction of deleted records\n # once the rebalance completes to reclaim memory and disk space.\n # Default is true, unless repair_incorrectly_sharded_data is set to\n # true. The default value is true. The supported values are: true\n # false compact_only If set to true, ignore rebalance options\n # and attempt to perform compaction of deleted records to reclaim\n # memory and disk space without rebalancing first. The default\n # value is false. The supported values are: true false\n # repair_incorrectly_sharded_data Scans for any data sharded\n # incorrectly and re-routes the data to the correct location. Only\n # necessary if /admin/verifydb reports an error in sharding\n # alignment. This can be done as part of a typical rebalance after\n # expanding the cluster or in a standalone fashion when it is\n # believed that data is sharded incorrectly somewhere in the\n # cluster. Compaction will not be performed by default when this is\n # enabled. If this option is set to true, the time necessary to\n # rebalance and the memory used by the rebalance may increase. The\n # default value is false. The supported values are: true false\n options: {}\n # RegenerateDBConfig - Force regenerate of DB ConfigMap. true -\n # restarts DB Pods after config generation false - writes new\n # configuration without restarting the DB Pods\n regenerateDBConfig:\n # Restart - Scales down the DB STS and back up once the DB\n # Configuration has been regenerated.\n restart: false\n# KineticaClusterAdminStatus defines the observed state of\n# KineticaClusterAdmin\nstatus:\n # Phase - The current phase/state of the Admin request\n phase: string\n # Processed - Indicates if the admin request has already been\n # processed. Avoids the request being rerun in the case the Operator\n # gets restarted.\n processed: false\n</code></pre>","tags":["Reference"]},{"location":"Reference/kinetica_cluster_backups/","title":"Kinetica Cluster Backups Reference","text":"","tags":["Reference","Operations"]},{"location":"Reference/kinetica_cluster_backups/#full-kineticaclusterbackup-cr-structure","title":"Full KineticaClusterBackup CR Structure","text":"kineticaclusterbackups.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. 
More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterBackup \nmetadata: {}\n# Fields specific to the linked backup engine\nprovider:\n # Name of the backup/restore provider. FOR INTERNAL USE ONLY.\n backupProvider: \"velero\"\n # Name of the backup in the linked BackupProvider. FOR INTERNAL USE\n # ONLY.\n linkedItemName: \"\"\n# BackupSpec defines the specification for a Velero backup.\nspec:\n # DefaultVolumesToRestic specifies whether restic should be used to\n # take a backup of all pod volumes by default.\n defaultVolumesToRestic: true\n # ExcludedNamespaces contains a list of namespaces that are not\n # included in the backup.\n excludedNamespaces: [\"string\"]\n # ExcludedResources is a slice of resource names that are not included\n # in the backup.\n excludedResources: [\"string\"]\n # Hooks represent custom behaviors that should be executed at\n # different phases of the backup.\n hooks:\n # Resources are hooks that should be executed when backing up\n # individual instances of a resource.\n resources:\n - excludedNamespaces: [\"string\"]\n # ExcludedResources specifies the resources to which this hook\n # spec does not apply.\n excludedResources: [\"string\"]\n # IncludedNamespaces specifies the namespaces to which this hook\n # spec applies. If empty, it applies to all namespaces.\n includedNamespaces: [\"string\"]\n # IncludedResources specifies the resources to which this hook\n # spec applies. If empty, it applies to all resources.\n includedResources: [\"string\"]\n # LabelSelector, if specified, filters the resources to which this\n # hook spec applies.\n labelSelector:\n # matchExpressions is a list of label selector requirements. The\n # requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of values.\n # Valid operators are In, NotIn, Exists and DoesNotExist.\n operator: string\n # values is an array of string values. If the operator is In\n # or NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array must\n # be empty. This array is replaced during a strategic merge\n # patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\", the\n # operator is \"In\", and the values array contains only \"value\".\n # The requirements are ANDed.\n matchLabels: {}\n # Name is the name of this hook.\n name: string\n # PostHooks is a list of BackupResourceHooks to execute after\n # storing the item in the backup. These are executed after\n # all \"additional items\" from item actions are processed.\n post:\n - exec:\n # Command is the command and arguments to execute.\n command: [\"string\"]\n # Container is the container in the pod where the command\n # should be executed. If not specified, the pod's first\n # container is used.\n container: string\n # OnError specifies how Velero should behave if it encounters\n # an error executing this hook.\n onError: string\n # Timeout defines the maximum amount of time Velero should\n # wait for the hook to complete before considering the\n # execution a failure.\n timeout: string\n # PreHooks is a list of BackupResourceHooks to execute prior to\n # storing the item in the backup. 
These are executed before\n # any \"additional items\" from item actions are processed.\n pre:\n - exec:\n # Command is the command and arguments to execute.\n command: [\"string\"]\n # Container is the container in the pod where the command\n # should be executed. If not specified, the pod's first\n # container is used.\n container: string\n # OnError specifies how Velero should behave if it encounters\n # an error executing this hook.\n onError: string\n # Timeout defines the maximum amount of time Velero should\n # wait for the hook to complete before considering the\n # execution a failure.\n timeout: string\n # IncludeClusterResources specifies whether cluster-scoped resources\n # should be included for consideration in the backup.\n includeClusterResources: true\n # IncludedNamespaces is a slice of namespace names to include objects\n # from. If empty, all namespaces are included.\n includedNamespaces: [\"string\"]\n # IncludedResources is a slice of resource names to include in the\n # backup. If empty, all resources are included.\n includedResources: [\"string\"]\n # LabelSelector is a metav1.LabelSelector to filter with when adding\n # individual objects to the backup. If empty or nil, all objects are\n # included. Optional.\n labelSelector:\n # matchExpressions is a list of label selector requirements. The\n # requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of values.\n # Valid operators are In, NotIn, Exists and DoesNotExist.\n operator: string\n # values is an array of string values. If the operator is In or\n # NotIn, the values array must be non-empty. If the operator is\n # Exists or DoesNotExist, the values array must be empty. This\n # array is replaced during a strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single {key,value} in\n # the matchLabels map is equivalent to an element of\n # matchExpressions, whose key field is \"key\", the operator is \"In\",\n # and the values array contains only \"value\". The requirements are\n # ANDed.\n matchLabels: {} metadata: labels: {}\n # OrderedResources specifies the backup order of resources of specific\n # Kind. The map key is the Kind name and value is a list of resource\n # names separated by commas. Each resource name has\n # format \"namespace/resourcename\". For cluster resources, simply\n # use \"resourcename\".\n orderedResources: {}\n # SnapshotVolumes specifies whether to take cloud snapshots of any\n # PV's referenced in the set of objects included in the Backup.\n snapshotVolumes: true\n # StorageLocation is a string containing the name of a\n # BackupStorageLocation where the backup should be stored.\n storageLocation: string\n # TTL is a time.Duration-parseable string describing how long the\n # Backup should be retained for.\n ttl: string\n # VolumeSnapshotLocations is a list containing names of\n # VolumeSnapshotLocations associated with this backup.\n volumeSnapshotLocations: [\"string\"] status:\n # ClusterSize the current number of ranks & type i.e. CPU or GPU of\n # the cluster when the backup took place.\n clusterSize:\n # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a\n # representation of the number of nodes in a simple to understand\n # T-Shirt size scheme. This indicates the size of the cluster i.e.\n # the number of nodes. It does not identify the size of the cloud\n # provider nodes. For node size see ClusterTypeEnum. 
Supported\n # Values are: - XS S M L XL XXL XXXL\n tshirtSize: string\n # ClusterTypeEnum - An Enum of the node types of a KineticaCluster\n # e.g. CPU, GPU along with the Cloud Provider node size e.g. size\n # of the VM.\n tshirtType: string coldTierBackup: string\n # CompletionTimestamp records the time a backup was completed.\n # Completion time is recorded even on failed backups. Completion time\n # is recorded before uploading the backup object. The server's time\n # is used for CompletionTimestamps\n completionTimestamp: string\n # Errors is a count of all error messages that were generated during\n # execution of the backup. The actual errors are in the backup's log\n # file in object storage.\n errors: 1\n # Expiration is when this Backup is eligible for garbage-collection.\n expiration: string\n # FormatVersion is the backup format version, including major, minor,\n # and patch version.\n formatVersion: string\n # Phase is the current state of the Backup.\n phase: string\n # Progress contains information about the backup's execution progress.\n # Note that this information is best-effort only -- if Velero fails\n # to update it during a backup for any reason, it may be\n # inaccurate/stale.\n progress:\n # ItemsBackedUp is the number of items that have actually been\n # written to the backup tarball so far.\n itemsBackedUp: 1\n # TotalItems is the total number of items to be backed up. This\n # number may change throughout the execution of the backup due to\n # plugins that return additional related items to back up, the\n # velero.io/exclude-from-backup label, and various other filters\n # that happen as items are processed.\n totalItems: 1\n # StartTimestamp records the time a backup was started. Separate from\n # CreationTimestamp, since that value changes on restores. The\n # server's time is used for StartTimestamps\n startTimestamp: string\n # ValidationErrors is a slice of all validation errors\n # (if applicable).\n validationErrors: [\"string\"]\n # Version is the backup format major version. Deprecated: Please see\n # FormatVersion\n version: 1\n # VolumeSnapshotsAttempted is the total number of attempted volume\n # snapshots for this backup.\n volumeSnapshotsAttempted: 1\n # VolumeSnapshotsCompleted is the total number of successfully\n # completed volume snapshots for this backup.\n volumeSnapshotsCompleted: 1\n # Warnings is a count of all warning messages that were generated\n # during execution of the backup. The actual warnings are in the\n # backup's log file in object storage.\n warnings: 1\n</code></pre>","tags":["Reference","Operations"]},{"location":"Reference/kinetica_cluster_grants/","title":"Kinetica Cluster Grants CRD Reference","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_grants/#full-kineticagrant-cr-structure","title":"Full KineticaGrant CR Structure","text":"kineticagrants.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. 
More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaGrant \nmetadata: {}\n# KineticaGrantSpec defines the desired state of KineticaGrant\nspec:\n # Grants system-level and/or table permissions to a user or role.\n addGrantAllOnSchemaRequest:\n # Name of the user or role that will be granted membership in input\n # parameter role. Must be an existing user or role.\n member: string\n # Optional parameters. The default value is an empty map ( {} ).\n options: {}\n # SchemaName - name of the schema on which to perform the Grant All\n schemaName: string\n # Grants system-level and/or table permissions to a user or role.\n addGrantPermissionRequest:\n # Optional parameters. The default value is an empty map ( {} ).\n options: {}\n # Permission to grant to the user or role. Supported\n # Values Description system_admin Full access to all data and\n # system functions. system_user_admin Access to administer users\n # and roles that do not have system_admin permission.\n # system_write Read and write access to all tables.\n # system_read Read-only access to all tables.\n systemPermission:\n # UID of the user or role to which the permission will be granted.\n # Must be an existing user or role.\n name: string\n # Optional parameters. The default value is an empty map (\n # {} ). Supported Parameters: resource_group Name of an existing\n # resource group to associate with this role.\n options: {}\n # Permission to grant to the user or role. Supported\n # Values Description table_admin Full read/write and\n # administrative access to the table. table_insert Insert access\n # to the table. table_update Update access to the table.\n # table_delete Delete access to the table. table_read Read access\n # to the table.\n permission: string\n # Permission to grant to the user or role. Supported\n # Values Description<br/> system_admin Full access to all data and\n # system functions.<br/> system_user_admin Access to administer\n # users and roles that do not have system_admin permission.<br/>\n # system_write Read and write access to all tables.<br/>\n # system_read Read-only access to all tables.<br/>\n tablePermissions:\n - filter_expression: \"\"\n # UID of the user or role to which the permission will be granted.\n # Must be an existing user or role.\n name: string\n # Optional parameters. The default value is an empty map (\n # {} ). Supported Parameters: resource_group Name of an existing\n # resource group to associate with this role.\n options: {}\n # Permission to grant to the user or role. Supported\n # Values Description table_admin Full read/write and\n # administrative access to the table. table_insert Insert access\n # to the table. table_update Update access to the table.\n # table_delete Delete access to the table. table_read Read access\n # to the table.\n permission: string\n # Name of the table for which the Permission is to be granted\n table_name: string\n # Grants membership in a role to a user or role.\n addGrantRoleRequest:\n # Name of the user or role that will be granted membership in input\n # parameter role. Must be an existing user or role.\n member: string\n # Optional parameters. The default value is an empty map ( {} ).\n options: {}\n # Name of the role in which membership will be granted. 
Must be an\n # existing role.\n role: string\n # Debug debug the call\n debug: false\n # RingName is the name of the kinetica ring that this user belongs\n # to.\n ringName: string\n# KineticaGrantStatus defines the observed state of KineticaGrant\nstatus:\n # DBStringResponse - The GPUdb server embeds the endpoint response\n # inside a standard response structure which contains status\n # information and the actual response to the query.\n db_response: data: string\n # This embedded JSON represents the result of the endpoint\n data_str: string\n # API Call Specific\n data_type: string\n # Empty if success or an error message\n message: string\n # 'OK' or 'ERROR'\n status: string \n ldap_response: string\n</code></pre>","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_reference/","title":"Core DB CRDs","text":"<ul> <li> <p> DB Clusters</p> <p>Core Kinetica Database Cluster Management CRD & sample CR.</p> <p> KineticaCluster</p> </li> <li> <p> DB Users</p> <p>Kinetica Database User Management CRD & sample CR.</p> <p> KineticaUser</p> </li> <li> <p> DB Roles</p> <p>Kinetica Database Role Management CRD & sample CR.</p> <p> KineticaRole</p> </li> <li> <p> DB Schemas</p> <p>Kinetica Database Schema Management CRD & sample CR.</p> <p> KineticaSchema</p> </li> <li> <p> DB Grants</p> <p>Kinetica Database Grant Management CRD & sample CR.</p> <p> KineticaGrant</p> </li> <li> <p> DB Resource Groups</p> <p>Kinetica Database Resource Group Management CRD & sample CR.</p> <p> KineticaResourceGroup</p> </li> <li> <p> DB Administration</p> <p>Kinetica Database Administration CRD & sample CR.</p> <p> KineticaAdmin</p> </li> <li> <p> DB Backups</p> <p>Kinetica Database Backup Management CRD & sample CR.</p> <p>Note</p> <p>This requires Velero to be installed on the Kubernetes Cluster.</p> <p> KineticaBackup</p> </li> <li> <p> DB Restore</p> <p>Kinetica Database Restore CRD & sample CR.</p> <p>Note</p> <p>This requires Velero to be installed on the Kubernetes Cluster.</p> <p> KineticaRestore</p> </li> </ul>","tags":["Reference","Installation","Operations"]},{"location":"Reference/kinetica_cluster_resource_groups/","title":"Kinetica Cluster Resource Groups CRD Reference","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_resource_groups/#full-kineticaresourcegroup-cr-structure","title":"Full KineticaResourceGroup CR Structure","text":"kineticaclusterresourcegroups.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. 
More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterResourceGroup \nmetadata: {}\n# KineticaClusterResourceGroupSpec defines the desired state of\n# KineticaClusterResourceGroup\nspec: \n db_create_resource_group_request:\n # AdjoiningResourceGroup -\n adjoining_resource_group: \"\"\n # Name - name of the DB ResourceGroup\n # https://docs.kinetica.com/7.1/azure/sql/resource_group/?search-highlight=resource+group#id-baea5b60-769c-5373-bff1-53f4f1ca5c21\n name: string\n # Options - DB Options used when creating the ResourceGroup\n options: {}\n # Ranking - Indicates the relative ranking among existing resource\n # groups where this new resource group will be placed. When using\n # before or after, specify which resource group this one will be\n # inserted before or after in input parameter\n # adjoining_resource_group. The supported values are: first last\n # before after\n ranking: \"\"\n # RingName is the name of the kinetica ring that this user belongs\n # to.\n ringName: string\n# KineticaClusterResourceGroupStatus defines the observed state of\n# KineticaClusterResourceGroup\nstatus: \n provisioned: string\n</code></pre>","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_restores/","title":"Kinetica Cluster Restores Reference","text":"","tags":["Reference","Operations"]},{"location":"Reference/kinetica_cluster_restores/#full-kineticaclusterrestore-cr-structure","title":"Full KineticaClusterRestore CR Structure","text":"kineticaclusterrestores.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterRestore \nmetadata: {}\n# RestoreSpec defines the specification for a Velero restore.\nspec:\n # BackupName is the unique name of the Velero backup to restore from.\n backupName: string\n # ExcludedNamespaces contains a list of namespaces that are not\n # included in the restore.\n excludedNamespaces: [\"string\"]\n # ExcludedResources is a slice of resource names that are not included\n # in the restore.\n excludedResources: [\"string\"]\n # IncludeClusterResources specifies whether cluster-scoped resources\n # should be included for consideration in the restore. If null,\n # defaults to true.\n includeClusterResources: true\n # IncludedNamespaces is a slice of namespace names to include objects\n # from. If empty, all namespaces are included.\n includedNamespaces: [\"string\"]\n # IncludedResources is a slice of resource names to include in the\n # restore. If empty, all resources in the backup are included.\n includedResources: [\"string\"]\n # LabelSelector is a metav1.LabelSelector to filter with when\n # restoring individual objects from the backup. If empty or nil, all\n # objects are included. Optional.\n labelSelector:\n # matchExpressions is a list of label selector requirements. 
","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_restores/","title":"Kinetica Cluster Restores Reference","text":"","tags":["Reference","Operations"]},{"location":"Reference/kinetica_cluster_restores/#full-kineticaclusterrestore-cr-structure","title":"Full KineticaClusterRestore CR Structure","text":"kineticaclusterrestores.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterRestore \nmetadata: {}\n# RestoreSpec defines the specification for a Velero restore.\nspec:\n # BackupName is the unique name of the Velero backup to restore from.\n backupName: string\n # ExcludedNamespaces contains a list of namespaces that are not\n # included in the restore.\n excludedNamespaces: [\"string\"]\n # ExcludedResources is a slice of resource names that are not included\n # in the restore.\n excludedResources: [\"string\"]\n # IncludeClusterResources specifies whether cluster-scoped resources\n # should be included for consideration in the restore. If null,\n # defaults to true.\n includeClusterResources: true\n # IncludedNamespaces is a slice of namespace names to include objects\n # from. If empty, all namespaces are included.\n includedNamespaces: [\"string\"]\n # IncludedResources is a slice of resource names to include in the\n # restore. If empty, all resources in the backup are included.\n includedResources: [\"string\"]\n # LabelSelector is a metav1.LabelSelector to filter with when\n # restoring individual objects from the backup. If empty or nil, all\n # objects are included. Optional.\n labelSelector:\n # matchExpressions is a list of label selector requirements. The\n # requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of values.\n # Valid operators are In, NotIn, Exists and DoesNotExist.\n operator: string\n # values is an array of string values. If the operator is In or\n # NotIn, the values array must be non-empty. If the operator is\n # Exists or DoesNotExist, the values array must be empty. This\n # array is replaced during a strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single {key,value} in\n # the matchLabels map is equivalent to an element of\n # matchExpressions, whose key field is \"key\", the operator is \"In\",\n # and the values array contains only \"value\". The requirements are\n # ANDed.\n matchLabels: {}\n # NamespaceMapping is a map of source namespace names to target\n # namespace names to restore into. Any source namespaces not included\n # in the map will be restored into namespaces of the same name.\n namespaceMapping: {}\n # RestorePVs specifies whether to restore all included PVs from\n # snapshot (via the cloudprovider).\n restorePVs: true\n # ScheduleName is the unique name of the Velero schedule to restore\n # from. If specified, and BackupName is empty, Velero will restore\n # from the most recent successful backup created from this schedule.\n scheduleName: string\nstatus:\n coldTierRestore: \"\"\n # CompletionTimestamp records the time the restore operation was\n # completed. Completion time is recorded even on failed restore. The\n # server's time is used for CompletionTimestamps\n completionTimestamp: string\n # Errors is a count of all error messages that were generated during\n # execution of the restore. The actual errors are stored in object\n # storage.\n errors: 1\n # FailureReason is an error that caused the entire restore to fail.\n failureReason: string\n # Phase is the current state of the Restore\n phase: string\n # Progress contains information about the restore's execution\n # progress. Note that this information is best-effort only -- if\n # Velero fails to update it during a restore for any reason, it may\n # be inaccurate/stale.\n progress:\n # ItemsRestored is the number of items that have actually been\n # restored so far\n itemsRestored: 1\n # TotalItems is the total number of items to be restored. This\n # number may change throughout the execution of the restore due to\n # plugins that return additional related items to restore\n totalItems: 1\n # StartTimestamp records the time the restore operation was started.\n # The server's time is used for StartTimestamps\n startTimestamp: string\n # ValidationErrors is a slice of all validation errors (if applicable)\n validationErrors: [\"string\"]\n # Warnings is a count of all warning messages that were generated\n # during execution of the restore. The actual warnings are stored in\n # object storage.\n warnings: 1\n</code></pre>
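 <p>As a minimal sketch (the backup name is an assumption), a KineticaClusterRestore CR restoring from a named Velero backup could look like:</p> YAML<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaClusterRestore\nmetadata:\n name: restore-sales-cluster\nspec:\n # Name of an existing Velero backup (assumed value)\n backupName: sales-cluster-backup-001\n # Restore included PVs from snapshot via the cloud provider\n restorePVs: true\n</code></pre>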
","tags":["Reference","Operations"]},{"location":"Reference/kinetica_cluster_roles/","title":"Kinetica Cluster Roles CRD","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_roles/#full-kineticarole-cr-structure","title":"Full KineticaRole CR Structure","text":"kineticaroles.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaRole \nmetadata: {}\n# KineticaRoleSpec defines the desired state of KineticaRole\nspec:\n # AlterRoleRequest Kinetica DB REST API Request Format Object.\n alter_role:\n # Action - Modification operation to be applied to the role.\n action: string\n # Role UID - Name of the role to be altered. Must be an existing\n # role.\n name: string\n # Optional parameters. The default value is an empty map ( {} ).\n options: {}\n # Value - The value of the modification, depending on input\n # parameter action.\n value: string\n # Debug - debug the call\n debug: false\n # RingName is the name of the kinetica ring that this user belongs\n # to.\n ringName: string\n # AddRoleRequest Kinetica DB REST API Request Format Object.\n role:\n # Role UID - name of the role to create\n name: string\n # Optional parameters. The default value is an empty map (\n # {} ). Supported Parameters: resource_group Name of an existing\n # resource group to associate with this role.\n options: {}\n # ResourceGroupName of an existing resource group to associate with\n # this role\n resourceGroupName: \"\"\n# KineticaRoleStatus defines the observed state of KineticaRole\nstatus:\n # DBStringResponse - The GPUdb server embeds the endpoint response\n # inside a standard response structure which contains status\n # information and the actual response to the query.\n db_response:\n data: string\n # This embedded JSON represents the result of the endpoint\n data_str: string\n # API Call Specific\n data_type: string\n # Empty if success or an error message\n message: string\n # 'OK' or 'ERROR'\n status: string\n ldap_response: string\n</code></pre>
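 <p>A minimal sketch (role and resource group names are assumptions) that creates a role linked to an existing resource group:</p> YAML<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaRole\nmetadata:\n name: analysts\nspec:\n role:\n # Assumed name of the role to create\n name: analysts\n # Must reference an existing resource group\n resourceGroupName: \"analytics_rg\"\n</code></pre>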
","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_schemas/","title":"Kinetica Cluster Schemas CRD Reference","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_schemas/#full-kinetica-cluster-schemas-cr-structure","title":"Full Kinetica Cluster Schemas CR Structure","text":"kineticaclusterschemas.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterSchema \nmetadata: {}\n# KineticaClusterSchemaSpec defines the desired state of\n# KineticaClusterSchema\nspec: \n db_create_schema_request:\n # Name - the name of the schema to create in the DB\n name: string\n # Optional parameters. The default value is an empty map (\n # {} ). Supported Parameters: \"max_cpu_concurrency\", \"max_data\"\n options: {}\n # RingName is the name of the kinetica ring that this user belongs\n # to.\n ringName: string\n# KineticaClusterSchemaStatus defines the observed state of\n# KineticaClusterSchema\nstatus: \n provisioned: string\n</code></pre>
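 <p>Again as an illustrative sketch (the schema name is an assumption):</p> YAML<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaClusterSchema\nmetadata:\n name: sales-schema\nspec:\n db_create_schema_request:\n # Assumed name of the schema to create in the DB\n name: sales\n</code></pre>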
","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_users/","title":"Kinetica Cluster Users CRD Reference","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_users/#full-kineticauser-cr-structure","title":"Full KineticaUser CR Structure","text":"kineticausers.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaUser\nmetadata: {}\n# KineticaUserSpec defines the desired state of KineticaUser\nspec:\n # Action contains a UserActionEnum field indicating whether it is\n # an Upsert or Change Password operation. For deletion delete the\n # KineticaUser CR and a finalizer will remove the user from LDAP.\n action: string\n # ChangePassword specific fields\n changePassword:\n # PasswordSecret - Not the actual user password but the name of a\n # Kubernetes Secret containing a Data element with a Password\n # attribute. The secret is removed on user creation. Must be in the\n # same namespace as the Kinetica Cluster. Must contain the\n # following fields: - oldPassword newPassword\n passwordSecret: string\n # Debug - debug the call\n debug: false\n # GroupID - Organisation or Team Id the user belongs to.\n groupId: string\n # Create the user in Reveal\n reveal: true\n # RingName is the name of the kinetica ring that this user belongs\n # to.\n ringName: string\n # UID is the username (not the UUID).\n uid: string\n # Upsert specific fields\n upsert:\n # CreateHomeDirectory - when true, a home directory in KiFS is\n # created for this user. The default value is true. The supported\n # values are: true false\n createHomeDirectory: true\n # DB Memory user data size limit\n dataLimit: \"10Gi\"\n # DisplayName\n displayName: string\n # GivenName is the first name, also called Christian name. givenName\n # in LDAP terms.\n givenName: string\n # KiFS user data size limit\n kifsDataLimit: \"2Gi\"\n # LastName refers to last name or surname. sn in LDAP terms.\n lastName: string\n # Options -\n options: {}\n # PasswordSecret - Not the actual user password but the name of a\n # Kubernetes Secret containing a Data element with a Password\n # attribute. The secret is removed on user creation. Must be in the\n # same namespace as the Kinetica Cluster.\n passwordSecret: string\n # UPN or UserPrincipalName - e.g. guyt@cp.com \n # Looks like an email address.\n userPrincipalName: string\n # UUID is the user unique UUID from the Control Plane.\n uuid: string\n# KineticaUserStatus defines the observed state of KineticaUser\nstatus:\n # DBStringResponse - The GPUdb server embeds the endpoint response\n # inside a standard response structure which contains status\n # information and the actual response to the query.\n db_response:\n data: string\n # This embedded JSON represents the result of the endpoint\n data_str: string\n # API Call Specific\n data_type: string\n # Empty if success or an error message\n message: string\n # 'OK' or 'ERROR'\n status: string\n ldap_response: string\n reveal_admin: string\n</code></pre>
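 <p>A minimal sketch of an upsert-style KineticaUser CR; the action value, user names and Secret name are assumptions:</p> YAML<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaUser\nmetadata:\n name: jdoe\nspec:\n # Assumed UserActionEnum value for a create/update operation\n action: upsert\n uid: jdoe\n upsert:\n givenName: Jane\n lastName: Doe\n # Existing Secret with a Data element containing a Password attribute\n passwordSecret: jdoe-password\n</code></pre>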
","tags":["Reference","Administration"]},{"location":"Reference/kinetica_clusters/","title":"Kinetica Clusters CRD Reference","text":"<p>This page covers the Kinetica Cluster Kubernetes CRD.</p>","tags":["Reference"]},{"location":"Reference/kinetica_clusters/#kubectl-cli-commands","title":"<code>kubectl</code> cli commands","text":"","tags":["Reference"]},{"location":"Reference/kinetica_clusters/#kubectl-n-_namespace_-get-kc","title":"<code>kubectl -n _namespace_ get kc</code>","text":"<p>Lists the <code>KineticaClusters</code> defined within the specified namespace to the console.</p> Bash<pre><code>kubectl -n _namespace_ get kc\n</code></pre>","tags":["Reference"]},{"location":"Reference/kinetica_clusters/#full-kineticacluster-cr-structure","title":"Full KineticaCluster CR Structure","text":"kineticaclusters.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaCluster \nmetadata: {}\n# KineticaClusterSpec defines the configuration for KineticaCluster DB\nspec:\n # An optional duration after which the database is stopped and DB\n # resources are freed\n autoSuspend: \n enabled: false\n # InactivityDuration - the duration for which the cluster should be\n # idle before auto-pausing the DB Cluster.\n inactivityDuration: \"1h\"\n # The platform infrastructure provider e.g. azure, aws, gcp, on-prem\n # etc.\n awsConfig:\n # ClusterName - AWS name of the EKS Cluster. NOTE: Marked as\n # optional but is mandatory\n clusterName: string\n # MarketplaceAppConfig - Amazon AWS specific DB Cluster\n # information.\n marketplaceApp:\n # KmsKeyId - Key for disk encryption. The full Amazon Resource\n # Name of the key to use when encrypting the volume. If none is\n # supplied but encrypted is true, a key is generated by AWS. See\n # AWS docs for valid ARN value.\n kmsKeyId: string\n # ProductCode - used to uniquely identify a product in AWS\n # Marketplace. The product code should be the same as the one\n # used during the publishing of a new product.\n productCode: \"1cmucncoyp9pi8xjdwqjimlf8\"\n # PublicKeyVersion - Public Key Version provided by AWS\n # Marketplace\n publicKeyVersion: 1\n # ParentResourceGroup - The resource group of the ManagedApp\n # itself.\n # ResourceId - Identifier of the resource against which usage is\n # emitted. Format is GUID (UUID).\n # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json\n # Optional; exactly one of ResourceId or ResourceUri must be\n # specified.\n resourceId: string\n # NodeGroups - List of NodeGroups for this cluster. MUST contain at\n # least one of the following keys: - \n # * none\n # * infra \n # * infra_public \n # * compute \n # * compute-gpu \n # * aaw_cpu \n # NOTE: Marked as optional but is mandatory\n nodeGroups: {}\n # OTELTracing - OpenTelemetry Tracing Specifics\n otelTracing:\n # Endpoint - Set the OpenTelemetry reporting Endpoint\n endpoint: \"\"\n # Key - KineticaCluster specific Key required to send Telemetry\n # information to the Cloud\n key: string\n # MaxBatchInterval - Telemetry Reporting Interval to use in seconds.\n maxBatchInterval: 10\n # MaxBatchSize - Telemetry Maximum Batch Size to send.\n maxBatchSize: 1024\n # The platform infrastructure provider e.g. azure, aws, gcp, on-prem\n # etc.\n azureConfig:\n # App Insights Specifics\n appInsights:\n # Endpoint - Override the default AppInsights reporting Endpoint\n endpoint: \"\"\n # Key - KineticaCluster specific Application Insights Key required\n # to send Telemetry information to the Azure Portal\n key: string\n # MaxBatchInterval - Telemetry Reporting Interval to use in seconds.\n maxBatchInterval: 10\n # MaxBatchSize - Telemetry Maximum Batch Size to send.\n maxBatchSize: 1024\n # AzureManagedAppConfig - Microsoft Azure specific DB Cluster\n # information.\n managedApp:\n # DiskEncryptionSetID - By default, managed disks use\n # platform-managed encryption keys. All managed disks, snapshots,\n # images, and data written to existing managed disks are\n # automatically encrypted-at-rest with platform-managed keys. You\n # can choose to manage encryption at the level of each managed\n # disk, with your own keys. When you specify a customer-managed\n # key, that key is used to protect and control access to the key\n # that encrypts your data. Customer-managed keys offer greater\n # flexibility to manage access controls.\n diskEncryptionSetId: string\n # PlanId - The Azure Marketplace Plan/Offer identifier selected by\n # the customer for this DB cluster e.g. 
BYOL, Pay-As-You-Go etc.\n planId: string\n # ParentResourceGroup - The resource group of the ManagedApp\n # itself.\n # ResourceId - Identifier of the resource against which usage is\n # emitted. Format is GUID (UUID).\n # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json\n # Optional; exactly one of ResourceId or ResourceUri must be\n # specified.\n resourceId: string\n # ResourceUri - Identifier of the managed app resource against\n # which usage is emitted\n # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json\n # Optional; exactly one of ResourceId or ResourceUri must be\n # specified.\n resourceUri: string\n # Tells the operator we want to run in Debug mode.\n debug: false\n # Identifies the type of Kubernetes deployment.\n deploymentType:\n # CloudRegionEnum - The target Kubernetes type to deploy to.\n # Supported Values are: - aws_useast_1 aws_useast_2 aws_uswest_1\n # az_useast_1 az_uswest_1\n region: string\n # DeploymentTypeEnum - The type of the Deployment. Supported Values\n # are: - Managed FreeSaaS DedicatedSaaS OnPrem\n type: string\n # The platform infrastructure provider e.g. azure, aws, gcp, on-prem\n # etc.\n devEditionConfig:\n # Host IPv4 address. Used by the KiND-based Developer Edition where\n # ingress paths are set to *. Provides qualified, routable URLs to\n # workbench.\n hostIpAddress: \"\"\n # The GAdmin Dashboard Configuration for the Kinetica Cluster.\n gadmin:\n # The port that GAdmin will be running on. It runs only on the head\n # node pod in the cluster. Default: 8080\n containerPort:\n # Number of port to expose on the pod's IP address. This must be a\n # valid port number, 0 < x < 65536.\n containerPort: 1\n # What host IP to bind the external port to.\n hostIP: string\n # Number of port to expose on the host. If specified, this must be\n # a valid port number, 0 < x < 65536. If HostNetwork is\n # specified, this must match ContainerPort. Most containers do\n # not need this.\n hostPort: 1\n # If specified, this must be an IANA_SVC_NAME and unique within\n # the pod. Each named port in a pod must have a unique name. Name\n # for the port that can be referred to by services.\n name: string\n # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n # to \"TCP\".\n protocol: \"TCP\"\n # The Ingress Endpoint that GAdmin will be running on.\n ingressPath:\n # backend defines the referenced service endpoint to which the\n # traffic will be forwarded.\n backend:\n # resource is an ObjectRef to another Kubernetes resource in the\n # namespace of the Ingress object. If resource is specified,\n # serviceName and servicePort must not be specified.\n resource:\n # APIGroup is the group for the resource being referenced. If\n # APIGroup is not specified, the specified Kind must be in\n # the core API group. 
For any other third-party types,\n # APIGroup is required.\n apiGroup: string\n # Kind is the type of resource being referenced\n kind: KineticaCluster\n # Name is the name of resource being referenced\n name: string\n # serviceName specifies the name of the referenced service.\n serviceName: string\n # servicePort Specifies the port of the referenced service.\n servicePort: \n # path is matched against the path of an incoming request.\n # Currently it can contain characters disallowed from the\n # conventional \"path\" part of a URL as defined by RFC 3986. Paths\n # must begin with a '/' and must be present when using PathType\n # with value \"Exact\" or \"Prefix\".\n path: string\n # pathType determines the interpretation of the path matching.\n # PathType can be one of the following values: * Exact: Matches\n # the URL path exactly. * Prefix: Matches based on a URL path\n # prefix split by '/'. Matching is done on a path element by\n # element basis. A path element refers is the list of labels in\n # the path split by the '/' separator. A request is a match for\n # path p if every p is an element-wise prefix of p of the request\n # path. Note that if the last element of the path is a substring\n # of the last element in request path, it is not a match\n # (e.g. /foo/bar matches /foo/bar/baz, but does not\n # match /foo/barbaz). * ImplementationSpecific: Interpretation of\n # the Path matching is up to the IngressClass. Implementations\n # can treat this as a separate PathType or treat it identically\n # to Prefix or Exact path types. Implementations are required to\n # support all path types. Defaults to ImplementationSpecific.\n pathType: string\n # Whether to enable the GAdmin Dashboard on the Cluster. Default:\n # true\n isEnabled: true\n # Gaia - gaia.properties configuration\n gaia: admin:\n # AdminLoginOnlyGpudbDown - When GPUdb is down, only allow admin\n # user to login\n admin_login_only_gpudb_down: true\n # Username - We do check for admin username in various places\n admin_username: \"admin\"\n # LoginAnimationEnabled - Display any animation in login page\n login_animation_enabled: true\n # AdminLoginOnlyGpudbDown - Convenience settings for dev mode\n login_bypass_enabled: false\n # RequireStrongPassword - Convenience settings for dev mode\n require_strong_password: true\n # SSLTruststorePasswordScript - Display any animation in login\n # page\n ssl_truststore_password_script: string\n # DemoSchema - Schema-related configuration\n demo_schema: \"demo\" gpudb:\n # DataFileStringNullValue - Table import/export null value string\n data_file_string_null_value: \"\\\\N\"\n gpudb_ext_url: \"http://127.0.0.1:8082/gpudb-0\"\n # URL - Current instance of gpudb, when running in HA mode change\n # this to load balancer endpoint\n gpudb_url: \"http://127.0.0.1:9191\"\n # LoggingLogFileName - Which file to use when displaying logging\n # on Cluster page.\n logging_log_file_name: \"gpudb.log\"\n # SampleRepoURL - Table import/export null value string\n sample_repo_url: \"//s3.amazonaws.com/kinetica-ce-data\" hm:\n gpudb_ext_hm_url: \"http://127.0.0.1:8082/gpudb-host-manager\"\n gpudb_hm_url: \"http://127.0.0.1:9300\" http:\n # ClientTimeout - Number of seconds for proxy request timeout\n http_client_timeout: 3600\n # ClientTimeoutV2 - Force override of previous default with 0 as\n # infinite timeout\n http_client_timeout_v2: 0\n # TomcatPathKey - Name of folder where Tomcat apps are installed\n tomcat_path_key: \"tomcat\"\n # WebappContext - Web App context\n webapp_context: \"gadmin\"\n # 
GAdminIsRemote - True if the gadmin application is running on a\n # remote machine (not on same node as gpudb). If running on a\n # remote machine the manage options will be disabled.\n is_remote: false\n # KAgentCLIPath - Schema-related configuration\n kagent_cli_path: \"/opt/gpudb/kagent/bin/kagent\"\n # KIO - KIO-related configuration\n kio: kio_log_file_path: \"/opt/gpudb/kitools/kio/logs/gadmin.log\"\n kio_log_level: \"DEBUG\" kio_log_size_limit: 10485760 kisql:\n # QueryResultsLimit - KiSQL limit on the number of results in each\n # query\n kisql_query_results_limit: 10000\n # QueryTimezone - KiSQL TimeZoneId setting for queries\n # (use \"system\" for local system time)\n kisql_query_timezone: \"GMT\" license:\n # Status - Stub for license manager\n status: \"ok\"\n # Type - Stub for license manager\n type: \"unlimited\"\n # MaxConcurrentUserSessions - Session management configuration\n max_concurrent_user_sessions: 0\n # PublicSchema - Schema-related configuration\n public_schema: \"ki_home\"\n # RevealDBInfoFile - Path to file containing Reveal DB location\n reveal_db_info_file: \"/opt/gpudb/connectors/reveal/var/REVEAL_DB_DIR\"\n # RootSchema - Schema-related configuration\n root_schema: \"root\" stats:\n # GraphanaURL -\n graphana_url: \"http://127.0.0.1:3000\"\n # GraphiteURL\n graphite_url: \"http://127.0.0.1:8181\"\n # StatsGrafanaURL - Port used to host the Grafana user interface\n # and embeddable metric dashboards in GAdmin. Note: If this value\n # is defaulted then it will be replaced by the name of the Stats\n # service if it is deployed & Grafana is enabled e.g.\n # cluster-1234.gpudb.svc.cluster.local\n stats_grafana_url: \"http://127.0.0.1:9091\"\n # https://github.com/kubernetes-sigs/controller-tools/issues/622 if we\n # want to set usePools as false, need to set defaults GPUDBCluster is\n # an instance of a Kinetica DB Cluster i.e. it's StatefulSet,\n # Service, Ingress, ConfigMap etc.\n gpudbCluster:\n # Affinity - is a group of affinity scheduling rules.\n affinity:\n # Describes node affinity scheduling rules for the pod.\n nodeAffinity:\n # The scheduler will prefer to schedule pods to nodes that\n # satisfy the affinity expressions specified by this field, but\n # it may choose a node that violates one or more of the\n # expressions. The node that is most preferred is the one with\n # the greatest sum of weights, i.e. for each node that meets\n # all of the scheduling requirements (resource request,\n # requiredDuringScheduling affinity expressions, etc.), compute\n # a sum by iterating through the elements of this field and\n # adding \"weight\" to the sum if the node matches the\n # corresponding matchExpressions; the node(s) with the highest\n # sum are the most preferred.\n preferredDuringSchedulingIgnoredDuringExecution:\n - preference:\n # A list of node selector requirements by node's labels.\n matchExpressions:\n - key: string\n # Represents a key's relationship to a set of values.\n # Valid operators are In, NotIn, Exists, DoesNotExist.\n # Gt, and Lt.\n operator: string\n # An array of string values. If the operator is In or\n # NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array\n # must be empty. If the operator is Gt or Lt, the values\n # array must have a single element, which will be\n # interpreted as an integer. 
This array is replaced\n # during a strategic merge patch.\n values: [\"string\"]\n # A list of node selector requirements by node's fields.\n matchFields:\n - key: string\n # Represents a key's relationship to a set of values.\n # Valid operators are In, NotIn, Exists, DoesNotExist.\n # Gt, and Lt.\n operator: string\n # An array of string values. If the operator is In or\n # NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array\n # must be empty. If the operator is Gt or Lt, the values\n # array must have a single element, which will be\n # interpreted as an integer. This array is replaced\n # during a strategic merge patch.\n values: [\"string\"]\n # Weight associated with matching the corresponding\n # nodeSelectorTerm, in the range 1-100.\n weight: 1\n # If the affinity requirements specified by this field are not\n # met at scheduling time, the pod will not be scheduled onto\n # the node. If the affinity requirements specified by this\n # field cease to be met at some point during pod execution\n # (e.g. due to an update), the system may or may not try to\n # eventually evict the pod from its node.\n requiredDuringSchedulingIgnoredDuringExecution:\n # Required. A list of node selector terms. The terms are\n # ORed.\n nodeSelectorTerms:\n - matchExpressions:\n - key: string\n # Represents a key's relationship to a set of values.\n # Valid operators are In, NotIn, Exists, DoesNotExist.\n # Gt, and Lt.\n operator: string\n # An array of string values. If the operator is In or\n # NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array\n # must be empty. If the operator is Gt or Lt, the values\n # array must have a single element, which will be\n # interpreted as an integer. This array is replaced\n # during a strategic merge patch.\n values: [\"string\"]\n # A list of node selector requirements by node's fields.\n matchFields:\n - key: string\n # Represents a key's relationship to a set of values.\n # Valid operators are In, NotIn, Exists, DoesNotExist.\n # Gt, and Lt.\n operator: string\n # An array of string values. If the operator is In or\n # NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array\n # must be empty. If the operator is Gt or Lt, the values\n # array must have a single element, which will be\n # interpreted as an integer. This array is replaced\n # during a strategic merge patch.\n values: [\"string\"]\n # Describes pod affinity scheduling rules (e.g. co-locate this pod\n # in the same node, zone, etc. as some other pod(s)).\n podAffinity:\n # The scheduler will prefer to schedule pods to nodes that\n # satisfy the affinity expressions specified by this field, but\n # it may choose a node that violates one or more of the\n # expressions. The node that is most preferred is the one with\n # the greatest sum of weights, i.e. for each node that meets\n # all of the scheduling requirements (resource request,\n # requiredDuringScheduling affinity expressions, etc.), compute\n # a sum by iterating through the elements of this field and\n # adding \"weight\" to the sum if the node has pods which matches\n # the corresponding podAffinityTerm; the node(s) with the\n # highest sum are the most preferred.\n preferredDuringSchedulingIgnoredDuringExecution:\n - podAffinityTerm:\n # A label query over a set of resources, in this case pods.\n labelSelector:\n # matchExpressions is a list of label selector\n # requirements. 
The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of\n # values. Valid operators are In, NotIn, Exists and\n # DoesNotExist.\n operator: string\n # values is an array of string values. If the operator\n # is In or NotIn, the values array must be non-empty.\n # If the operator is Exists or DoesNotExist, the values\n # array must be empty. This array is replaced during a\n # strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\",\n # the operator is \"In\", and the values array contains\n # only \"value\". The requirements are ANDed.\n matchLabels: {}\n # A label query over the set of namespaces that the term\n # applies to. The term is applied to the union of the\n # namespaces selected by this field and the ones listed in\n # the namespaces field. null selector and null or empty\n # namespaces list means \"this pod's namespace\". An empty\n # selector ({}) matches all namespaces.\n namespaceSelector:\n # matchExpressions is a list of label selector\n # requirements. The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of\n # values. Valid operators are In, NotIn, Exists and\n # DoesNotExist.\n operator: string\n # values is an array of string values. If the operator\n # is In or NotIn, the values array must be non-empty.\n # If the operator is Exists or DoesNotExist, the values\n # array must be empty. This array is replaced during a\n # strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\",\n # the operator is \"In\", and the values array contains\n # only \"value\". The requirements are ANDed.\n matchLabels: {}\n # namespaces specifies a static list of namespace names that\n # the term applies to. The term is applied to the union of\n # the namespaces listed in this field and the ones selected\n # by namespaceSelector. null or empty namespaces list and\n # null namespaceSelector means \"this pod's namespace\".\n namespaces: [\"string\"]\n # This pod should be co-located (affinity) or not\n # co-located (anti-affinity) with the pods matching the\n # labelSelector in the specified namespaces, where\n # co-located is defined as running on a node whose value of\n # the label with key topologyKey matches that of any node\n # on which any of the selected pods is running. Empty\n # topologyKey is not allowed.\n topologyKey: string\n # weight associated with matching the corresponding\n # podAffinityTerm, in the range 1-100.\n weight: 1\n # If the affinity requirements specified by this field are not\n # met at scheduling time, the pod will not be scheduled onto\n # the node. If the affinity requirements specified by this\n # field cease to be met at some point during pod execution\n # (e.g. due to a pod label update), the system may or may not\n # try to eventually evict the pod from its node. When there are\n # multiple elements, the lists of nodes corresponding to each\n # podAffinityTerm are intersected, i.e. 
all terms must be\n # satisfied.\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n # matchExpressions is a list of label selector requirements.\n # The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of\n # values. Valid operators are In, NotIn, Exists and\n # DoesNotExist.\n operator: string\n # values is an array of string values. If the operator is\n # In or NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array\n # must be empty. This array is replaced during a\n # strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\",\n # the operator is \"In\", and the values array contains\n # only \"value\". The requirements are ANDed.\n matchLabels: {}\n # A label query over the set of namespaces that the term\n # applies to. The term is applied to the union of the\n # namespaces selected by this field and the ones listed in\n # the namespaces field. null selector and null or empty\n # namespaces list means \"this pod's namespace\". An empty\n # selector ({}) matches all namespaces.\n namespaceSelector:\n # matchExpressions is a list of label selector requirements.\n # The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of\n # values. Valid operators are In, NotIn, Exists and\n # DoesNotExist.\n operator: string\n # values is an array of string values. If the operator is\n # In or NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array\n # must be empty. This array is replaced during a\n # strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\",\n # the operator is \"In\", and the values array contains\n # only \"value\". The requirements are ANDed.\n matchLabels: {}\n # namespaces specifies a static list of namespace names that\n # the term applies to. The term is applied to the union of\n # the namespaces listed in this field and the ones selected\n # by namespaceSelector. null or empty namespaces list and\n # null namespaceSelector means \"this pod's namespace\".\n namespaces: [\"string\"]\n # This pod should be co-located (affinity) or not co-located\n # (anti-affinity) with the pods matching the labelSelector in\n # the specified namespaces, where co-located is defined as\n # running on a node whose value of the label with key\n # topologyKey matches that of any node on which any of the\n # selected pods is running. Empty topologyKey is not\n # allowed.\n topologyKey: string\n # Describes pod anti-affinity scheduling rules (e.g. avoid putting\n # this pod in the same node, zone, etc. as some other pod(s)).\n podAntiAffinity:\n # The scheduler will prefer to schedule pods to nodes that\n # satisfy the anti-affinity expressions specified by this\n # field, but it may choose a node that violates one or more of\n # the expressions. The node that is most preferred is the one\n # with the greatest sum of weights, i.e. 
for each node that\n # meets all of the scheduling requirements (resource request,\n # requiredDuringScheduling anti-affinity expressions, etc.),\n # compute a sum by iterating through the elements of this field\n # and adding \"weight\" to the sum if the node has pods which\n # matches the corresponding podAffinityTerm; the node(s) with\n # the highest sum are the most preferred.\n preferredDuringSchedulingIgnoredDuringExecution:\n - podAffinityTerm:\n # A label query over a set of resources, in this case pods.\n labelSelector:\n # matchExpressions is a list of label selector\n # requirements. The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of\n # values. Valid operators are In, NotIn, Exists and\n # DoesNotExist.\n operator: string\n # values is an array of string values. If the operator\n # is In or NotIn, the values array must be non-empty.\n # If the operator is Exists or DoesNotExist, the values\n # array must be empty. This array is replaced during a\n # strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\",\n # the operator is \"In\", and the values array contains\n # only \"value\". The requirements are ANDed.\n matchLabels: {}\n # A label query over the set of namespaces that the term\n # applies to. The term is applied to the union of the\n # namespaces selected by this field and the ones listed in\n # the namespaces field. null selector and null or empty\n # namespaces list means \"this pod's namespace\". An empty\n # selector ({}) matches all namespaces.\n namespaceSelector:\n # matchExpressions is a list of label selector\n # requirements. The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of\n # values. Valid operators are In, NotIn, Exists and\n # DoesNotExist.\n operator: string\n # values is an array of string values. If the operator\n # is In or NotIn, the values array must be non-empty.\n # If the operator is Exists or DoesNotExist, the values\n # array must be empty. This array is replaced during a\n # strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\",\n # the operator is \"In\", and the values array contains\n # only \"value\". The requirements are ANDed.\n matchLabels: {}\n # namespaces specifies a static list of namespace names that\n # the term applies to. The term is applied to the union of\n # the namespaces listed in this field and the ones selected\n # by namespaceSelector. null or empty namespaces list and\n # null namespaceSelector means \"this pod's namespace\".\n namespaces: [\"string\"]\n # This pod should be co-located (affinity) or not\n # co-located (anti-affinity) with the pods matching the\n # labelSelector in the specified namespaces, where\n # co-located is defined as running on a node whose value of\n # the label with key topologyKey matches that of any node\n # on which any of the selected pods is running. 
Empty\n # topologyKey is not allowed.\n topologyKey: string\n # weight associated with matching the corresponding\n # podAffinityTerm, in the range 1-100.\n weight: 1\n # If the anti-affinity requirements specified by this field are\n # not met at scheduling time, the pod will not be scheduled\n # onto the node. If the anti-affinity requirements specified by\n # this field cease to be met at some point during pod\n # execution (e.g. due to a pod label update), the system may or\n # may not try to eventually evict the pod from its node. When\n # there are multiple elements, the lists of nodes corresponding\n # to each podAffinityTerm are intersected, i.e. all terms must\n # be satisfied.\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n # matchExpressions is a list of label selector requirements.\n # The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of\n # values. Valid operators are In, NotIn, Exists and\n # DoesNotExist.\n operator: string\n # values is an array of string values. If the operator is\n # In or NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array\n # must be empty. This array is replaced during a\n # strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\",\n # the operator is \"In\", and the values array contains\n # only \"value\". The requirements are ANDed.\n matchLabels: {}\n # A label query over the set of namespaces that the term\n # applies to. The term is applied to the union of the\n # namespaces selected by this field and the ones listed in\n # the namespaces field. null selector and null or empty\n # namespaces list means \"this pod's namespace\". An empty\n # selector ({}) matches all namespaces.\n namespaceSelector:\n # matchExpressions is a list of label selector requirements.\n # The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of\n # values. Valid operators are In, NotIn, Exists and\n # DoesNotExist.\n operator: string\n # values is an array of string values. If the operator is\n # In or NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array\n # must be empty. This array is replaced during a\n # strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\",\n # the operator is \"In\", and the values array contains\n # only \"value\". The requirements are ANDed.\n matchLabels: {}\n # namespaces specifies a static list of namespace names that\n # the term applies to. The term is applied to the union of\n # the namespaces listed in this field and the ones selected\n # by namespaceSelector. null or empty namespaces list and\n # null namespaceSelector means \"this pod's namespace\".\n namespaces: [\"string\"]\n # This pod should be co-located (affinity) or not co-located\n # (anti-affinity) with the pods matching the labelSelector in\n # the specified namespaces, where co-located is defined as\n # running on a node whose value of the label with key\n # topologyKey matches that of any node on which any of the\n # selected pods is running. 
Empty\n # topologyKey is not\n # allowed.\n topologyKey: string\n # Annotations - Annotations to be applied to the StatefulSet\n # DB pods.\n annotations: {}\n # The name of the cluster to form.\n clusterName: string\n # ClusterSize - the T-Shirt sizing of the Kinetica DB Cluster.\n clusterSize:\n # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a\n # representation of the number of nodes in a simple-to-understand\n # T-Shirt size scheme. This indicates the size of the cluster\n # i.e. the number of nodes. It does not identify the size of the\n # cloud provider nodes. For node size see ClusterTypeEnum.\n # Supported Values are: - XS S M L XL XXL XXXL\n tshirtSize: string\n # ClusterTypeEnum - An Enum of the node types of a KineticaCluster\n # e.g. CPU, GPU along with the Cloud Provider node size e.g. size\n # of the VM.\n tshirtType: string\n # Config Kinetica DB Configuration Object\n config:\n ai:\n apiKey: string\n # Provider - AI API provider type. The default is \"sqlgpt\"\n apiProvider: \"sqlgpt\"\n apiUrl: string\n # AlertManagerConfig\n alertManager:\n # AlertManager IP address (run on head node) default port\n # is \"2003\"\n ipAddress: \"${gaia.host0.address}\"\n port: 2003\n # AlertConfig\n alerts:\n alertDiskAbsolute: [integer]\n # Trigger an alert if available disk space on any given node\n # falls to or below a certain threshold, either absolute\n # (number of bytes) or percentage of total disk space. For\n # multiple thresholds, use a comma-delimited list of values.\n alertDiskPercentage: [1,5,10,20]\n # Trigger generic error message alerts, in cases of various\n # significant runtime errors.\n alertErrorMessages: true\n # Executable to run when an alert condition occurs. This\n # executable will only be run on **rank0** and does not need to\n # be present on other nodes.\n alertExe: \"\"\n # Trigger an alert whenever the status of a host or rank\n # changes.\n alertHostStatus: true\n # Optionally, filter host alerts for a comma-delimited list of\n # statuses. If a filter is empty, every host status change will\n # trigger an alert.\n alertHostStatusFilter: \"fatal_init_error\"\n # The maximum number of triggered alerts guaranteed to be stored\n # at any given time. When this number of alerts is exceeded,\n # older alerts may be discarded to stay within the limit.\n alertMaxStoredAlerts: 100\n alertMemoryAbsolute: [integer]\n # Trigger an alert if available memory on any given node falls\n # to or below a certain threshold, either absolute (number of\n # bytes) or percentage of total memory. For multiple\n # thresholds, use a comma-delimited list of values.\n alertMemoryPercentage: [1,5,10,20]\n # Trigger an alert if a CUDA error occurs on a rank.\n alertRankCudaError: true\n # Trigger alerts when the fallback allocator is employed; e.g.,\n # host memory is allocated because GPU allocation fails. NOTE:\n # To prevent a flooding of alerts, if a fallback allocator is\n # triggered in bursts, not every use will generate an alert.\n alertRankFallbackAllocator: true\n # Trigger an alert whenever the status of a rank changes.\n alertRankStatus: true\n # Optionally, filter rank alerts for a comma-delimited list of\n # statuses. 
If a filter is empty, every rank status change will\n # trigger an alert.\n alertRankStatusFilter:\n [\"fatal_init_error\",\"not_responding\",\"terminated\"]\n # Enable the alerting system.\n enableAlerts: true\n # Directory where the trace event and summary files are stored.\n # Must be a fully qualified path with sufficient free space for\n # required volume of data.\n traceDirectory: \"/tmp\"\n # The maximum number of trace events to be collected\n traceEventBufferSize: 1000000\n # Audit - This section controls the request auditor, which will\n # audit all requests received by the server in full or in part\n # based on the settings.\n audit:\n # Controls whether the body of each request is audited (in JSON\n # format). If 'enable_audit' is \"false\" this setting has no\n # effect. NOTE: For requests that insert data records, this\n # setting does not control the auditing of the records being\n # inserted, only the rest of the request body; see 'audit_data'\n # below to control this. audit_body = false\n body: false\n # Controls whether records being inserted are audited (in JSON\n # format) for requests that insert data records. If\n # either 'enable_audit' or 'audit_body' is \"false\", this\n # setting has no effect. NOTE: Enabling this setting during\n # bulk ingestion of data will rapidly produce very large audit\n # logs and may cause disk space exhaustion; use with caution.\n # audit_data = false\n data: false\n # Controls whether request auditing is enabled. If set\n # to \"true\", the following information is audited for every\n # request: Job ID, URI, User, and Client Address. The settings\n # below control whether additional information about each\n # request is also audited. If set to \"false\", all auditing is\n # disabled. enable_audit = false\n enable: false\n # Controls whether HTTP headers are audited for each request.\n # If 'enable_audit' is \"false\" this setting has no effect.\n # audit_headers = false\n headers: true\n # Controls whether the above audit settings can be altered at\n # runtime via the /alter/system/properties endpoint. In a\n # secure environment where auditing is required at all times,\n # this should be set to \"true\" to lock the settings to what is\n # set in this file. lock_audit = false\n lock: false\n # Controls whether response information is audited for each\n # request. If 'enable_audit' is \"false\" this setting has no\n # effect. audit_response = false\n response: false\n # EventConfig\n events:\n # Run a statistics server to collect information about Kinetica\n # and the machines it runs on.\n internal: true\n # Statistics server IP address (run on head node) default port\n # is \"2003\"\n ipAddress: \"${gaia.host0.address}\" port: 2003\n # Statistics server namespace - should be a machine identifier\n statsServerNamespace: \"gpudb\"\n # ExternalFilesConfig\n externalFiles:\n # Defines the directory from which external files can be loaded\n directory: \"/opt/gpudb/persist\"\n # # Parquet files compression type egress_parquet_compression =\n # snappy\n egressParquetCompression: \"snappy\"\n # Max file size (in MB) to allow saving to a single file. May be\n # overridden by target limitations. egress_single_file_max_size\n # = 100\n egressSingleFileMaxSize: \"100\"\n # Maximum number of simultaneous threads allocated to a given\n # external file read request, on each rank. 
Note that thread\n # allocation may also be limited by resource group limits, the\n # subtask_concurrency_limit setting, or system load.\n readerNumTasks: \"-1\"\n # GeneralConfig - the root of the gpudb.conf configuration in the\n # CRD\n general:\n # Timeout (in seconds) to wait for a rank to start during a\n # cluster event (ex: failover) event is considered failed.\n clusterEventTimeoutStartupRank: \"300\"\n # Enable (if \"true\") multiple kernels to run concurrently on the\n # same GPU\n concurrentKernelExecution: true\n # Time-to-live in minutes of non-protected tables before they\n # are automatically deleted from the database.\n defaultTTL: \"20\"\n # Disallow the /clear/table request to clear all tables.\n disableClearAll: true\n # Enable overlapped-equi-join filters\n enableOverlappedEquiJoin: true\n # Enable predicate-equi-join filter plan type\n enablePredicateEquiJoin: true\n # If \"true\" then all filter execution will be host-only\n # (i.e. CPU). This can be useful for high-concurrency\n # situations and when PCIe bandwidth is a limiting factor.\n forceHostFilterExecution: false\n # Maximum number of kernels that can be running at the same time\n # on a given GPU. Set to \"0\" for no limit. Only takes effect\n # if 'concurrent_kernel_execution' is \"true\"\n maxConcurrentKernels: \"0\"\n # Maximum number of records that data retrieval requests such\n # as /get/records and /aggregate/groupby will return per\n # request.\n maxGetRecordsSize: 20000\n # Set an optional executable command that will be run once when\n # Kinetica is ready for client requests. This can be used to\n # perform any initialization logic that needs to be run before\n # clients connect. It will be run as the \"gpudb\" user, so you\n # must ensure that any required permissions are set on the file\n # to allow it to be executed. If the command cannot be\n # executed or returns a non-zero error code, then Kinetica will\n # be stopped. Output from the startup script will be logged\n # to \"/opt/gpudb/core/logs/gpudb-on-start.log\" (and its dated\n # relatives). The \"gpudb_env.sh\" script is run directly before\n # the command, so the path will be set to include the supplied\n # Python runtime. Example: on_startup_script\n # = /home/gpudb/on-start.sh param1 param2 ...\n onStartupScript: \"\"\n # Size in bytes of the pinned memory pool per-rank process to\n # speed up copying data to the GPU. Set to \"0\" to disable.\n pinnedMemoryPoolSize: 2000000000\n # Tables and collections with these names will not be deleted\n # (comma separated).\n protectedSets: \"MASTER,_MASTER,_DATASOURCE\"\n # Timeout (in minutes) for filter-type requests\n requestTimeout: \"20\"\n # Timeout (in seconds) to wait for a rank to exit gracefully\n # before it is force-killed. Machines with slow disk drives may\n # require longer times and data may be lost if a drive is not\n # responsive.\n timeoutShutdownRank: \"300\"\n # Timeout (in seconds) to wait for each database subsystem to\n # exit gracefully before it is force-killed.\n timeoutShutdownSubsystem: \"20\"\n # Timeout (in seconds) to wait for each database subsystem to\n # startup. 
Subsystems include the Query Planner, Graph,\n # Stats, & HTTP servers, as well as external text-search\n # ranks.\n timeoutStartupSubsystem: \"60\"\n # GraphConfig\n graph:\n # Enable the graph server\n enable: false\n # List of GPU devices to be used by graph server The server\n # would ideally be run on a different node with dedicated GPU\n # (s)\n gpuList: \"\"\n # Specify where the graph server should be run, defaults to head\n # node\n ipAddress: \"${gaia.rank0_ip_address}\"\n # Maximum memory that can be used by the graph server, set\n # to \"0\" to disable memory restriction\n maxMemory: 0\n # Port used for responses from the graph server to the database\n # server\n pullPort: 8100\n # Port used for requests from the database server to the graph\n # server\n pushPort: 8099\n # Number of seconds the graph client will wait for a response\n # from the graph server\n timeout: 1200\n # HardwareConfig\n hardware:\n # Rank0HardwareConfig\n rank0:\n # Specify the GPU to use for all calculations on the HTTP\n # server node, **rank0**. NOTE: The **rank0** GPU may be\n # shared with another rank.\n gpu: 0\n # Set the head HTTP **rank0** numa node(s). If left empty,\n # there will be no thread affinity or preferred memory node.\n # The node list may be either a single node number or a\n # range; e.g., \"1-5,7,10\". If there will be many simultaneous\n # users, specify as many nodes as possible that won't overlap\n # the **rank1** to **rankN** worker numa nodes that the GPUs\n # are on. If there will be few simultaneous users and WMS\n # speed is important, choose the numa node the 'rank0.gpu' is\n # on.\n numaNode: ranks:\n - baseNumaNode: string\n # Set each worker rank's preferred data numa node for CPU\n # affinity and memory allocation.\n # The 'rank<#>.data_numa_node' is the node or nodes that data\n # intensive threads will run in and should be set to the same\n # numa node that the GPU specified by the\n # corresponding 'rank<#>.taskcalc_gpu' is on for best\n # performance. If the 'rank<#>.taskcalc_gpu' is specified\n # the 'rank<#>.data_numa_node' will be automatically set to\n # the node the GPU is attached to, otherwise there will be no\n # CPU thread affinity or preferred node for memory allocation\n # if not specified or left empty. The node list may be a\n # single node number or a range; e.g., \"1-5,7,10\".\n dataNumaNode: string\n # Set the GPU device for each worker rank to use. If no GPUs\n # are specified, each rank will round-robin the available\n # GPUs per host system. Add 'rank<#>.taskcalc_gpu' as needed\n # for the worker ranks, where *#* ranges from \"1\" to the\n # highest *rank #* among the 'rank<#>.host' parameters\n # Example setting the GPUs to use for ranks 1 and 2: \n # # rank1.taskcalc_gpu = 0 # rank2.taskcalc_gpu = 1\n taskCalcGPU: kafka:\n # Maximum number of records to be ingested in a single batch\n # kafka.batch_size = 1000\n batchSize: 1000\n # Maximum time (milliseconds) for each poll to get records from\n # kafka kafka.poll_timeout = 0\n pollTimeout: 1\n # Maximum wait time (seconds) to buffer records received from\n # kafka before ingestion kafka.wait_time = 30\n waitTime: 30\n # KifsConfig\n kifs:\n # KIFs user data size limit\n dataLimit: \"4Gi\"\n # sudo usermod -a -G gpudb_proc <user>\n enable: false\n # Parent directory of the mount point for the KiFS file system.\n # Must be a fully qualified path. The actual mount point will\n # be a subdirectory *mount* below this directory. 
Note that\n # this folder must have read, write and execute permissions for\n # the \"gpudb\" user and the \"gpudb_proc\" group, and it cannot be\n # a path on an NFS.\n mountPoint: \"/gpudb/kifs\" useManagedCredentials: true\n # Etcd *ETCDConfig `json:\"etcd,omitempty\"` HA HAConfig\n # `json:\"ha,omitempty\"`\n ml:\n # Enable the ML server.\n enable: false\n # NetworkConfig\n network:\n # HAAddress - An optional address to allow inter-cluster\n # communication with HA when 'address' is not routable between\n # clusters.\n HAAddress: string\n # CompressNetworkData - Enables compression of inter-node\n # network data transfers.\n compressNetworkData: false\n # EnableHTTPDProxy - Start an HTTP server as a proxy to handle\n # LDAP and/or Kerberos authentication. Each host will run an\n # HTTP server and access to each rank is available through\n # http://host:8082/gpudb-1, where port \"8082\" is defined\n # by 'httpd_proxy_port'. NOTE: HTTP external endpoints are not\n # affected by the 'use_https' parameter above. If you wish to\n # enable HTTPS, you must edit\n # the \"/opt/gpudb/httpd/conf/httpd.conf\" and setup HTTPS as per\n # the Apache httpd documentation at\n # https://httpd.apache.org/docs/2.2/\n enableHTTPDProxy: true\n # EnableWorkerHTTPServers - Enable worker HTTP servers; each\n # process runs its own server for multi-head ingest.\n enableWorkerHTTPServers: true\n # GlobalManagerLocalPubPort - ?\n globalManagerLocalPubPort: 5554\n # GlobalManagerPortOne - Internal communication ports - Host\n # manager status notification channel\n globalManagerPortOne: 5552\n # GlobalManagerPubPort - Host manager synchronization message\n # publishing channel port\n globalManagerPubPort: 5553\n # HeadIPAddress - Head HTTP server IP address. Set to the\n # publicly accessible IP address of the first\n # process, **rank0**.\n headIPAddress: \"172.20.0.10\"\n # HeadPort - Head HTTP server port to use\n # for 'head_ip_address'.\n headPort: 9191\n # HostManagerHTTPPort - HTTP port for web portal of the host\n # manager\n hostManagerHTTPPort: 9300\n # HTTPAllowOrigin - Value to return via\n # Access-Control-Allow-Origin HTTP header (for Cross-Origin\n # Resource Sharing). Set to empty to not return the header and\n # disallow CORS.\n httpAllowOrigin: \"*\"\n # HTTPKeepAlive - Keep HTTP connections alive between requests\n httpKeepAlive: false\n # HTTPDProxyPort - TCP port that the httpd auth proxy server\n # will listen on if 'enable_httpd_proxy' is \"true\".\n httpdProxyPort: 8082\n # HTTPDProxyUseHTTPS - Set to \"true\" if the httpd auth proxy\n # server is configured to use HTTPS.\n httpdProxyUseHTTPS: false\n # HTTPSCertFile - File containing the SSL certificate e.g.\n # cert.pem If required, a self-signed certificate(expires after\n # 10 years) can be generated via the command: e.g. cert.pem\n # openssl req -newkey rsa:2048 -new -nodes -x509 \\ -days\n # 3650 -keyout key.pem -out cert.pem\n httpsCertFile: \"\"\n # HTTPSKeyFile - File containing the SSL private Key e.g.\n # key.pem If required, a self-signed certificate (expires after\n # 10 years) can be generated via the command: openssl\n # req -newkey rsa:2048 -new -nodes -x509 \\ -days 3650 -keyout\n # key.pem -out cert.pem\n httpsKeyFile: \"\"\n # Rank0IPAddress - Internal use IP address of the head HTTP\n # server, **rank0**. 
  # Rank0IPAddress - Internal use IP address of the head HTTP
  # server, **rank0**. Set to either a second internal network
  # accessible by all ranks or to '${gaia.head_ip_address}'.
  rank0IPAddress: "${gaia.rank0.host}"
  ranks:
  - communicatorPort:
      # Number of port to expose on the pod's IP address. This
      # must be a valid port number, 0 < x < 65536.
      containerPort: 1
      # What host IP to bind the external port to.
      hostIP: string
      # Number of port to expose on the host. If specified, this
      # must be a valid port number, 0 < x < 65536. If
      # HostNetwork is specified, this must match ContainerPort.
      # Most containers do not need this.
      hostPort: 1
      # If specified, this must be an IANA_SVC_NAME and unique
      # within the pod. Each named port in a pod must have a
      # unique name. Name for the port that can be referred to by
      # services.
      name: string
      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
      # to "TCP".
      protocol: "TCP"
    # Specify the hosts to run each rank worker process in the
    # cluster. For a single machine system, use "127.0.0.1", but
    # if using two or more machines, a hostname or IP address
    # must be specified for each rank that is accessible from the
    # other ranks. See also 'head_ip_address'
    # and 'rank0_ip_address'.
    host: string
    # Optionally, specify the worker HTTP server ports. The
    # default is to use ('head_port' + *rank #*) for each worker
    # process where rank number is from "1" to number of ranks
    # in 'rank<#>.host' below.
    httpServerPort:
      # Number of port to expose on the pod's IP address. This
      # must be a valid port number, 0 < x < 65536.
      containerPort: 1
      # What host IP to bind the external port to.
      hostIP: string
      # Number of port to expose on the host. If specified, this
      # must be a valid port number, 0 < x < 65536. If
      # HostNetwork is specified, this must match ContainerPort.
      # Most containers do not need this.
      hostPort: 1
      # If specified, this must be an IANA_SVC_NAME and unique
      # within the pod. Each named port in a pod must have a
      # unique name. Name for the port that can be referred to by
      # services.
      name: string
      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
      # to "TCP".
      protocol: "TCP"
    # This is the Kubernetes pod IP address of the current rank,
    # which we need to populate in the operator. NOTE: Internal
    # attribute.
    podIP: string
    # Optionally, specify a public URL for each worker HTTP server
    # that clients should use to connect for multi-head
    # operations. NOTE: If specified for any ranks, a public URL
    # must be specified for all ranks.
    publicURL: "https://:8082/gpudb-{{.Rank}}"
    # Define the rank number of this rank.
    rank: 1
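  # Illustrative example (assumption, not part of the generated
  # reference): a two-worker layout for multi-head ingest; the
  # hostnames and URLs below are hypothetical.
  #
  #   ranks:
  #   - rank: 1
  #     host: "worker-0.example.internal"
  #     publicURL: "https://kinetica.example.com:8082/gpudb-1"
  #   - rank: 2
  #     host: "worker-1.example.internal"
  #     publicURL: "https://kinetica.example.com:8082/gpudb-2"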
  # SetMonitorPort - Set monitor ZMQ publisher server port ("-1"
  # to disable); uses the 'head_ip_address' interface.
  setMonitorPort: 9002
  # SetMonitorProxyPort - Set monitor ZMQ publisher internal proxy
  # server port ("-1" to disable); uses the 'head_ip_address'
  # interface. IMPORTANT: Disabling this port effectively
  # prevents worker nodes from publishing set monitor
  # notifications when multi-head ingest is enabled
  # (see 'enable_worker_http_servers').
  setMonitorProxyPort: 9003
  # SetMonitorQueueSize - Set monitor queue size
  setMonitorQueueSize: 1000
  # TriggerPort - Trigger ZMQ publisher server port ("-1" to
  # disable); uses the 'head_ip_address' interface.
  triggerPort: -1
  # UseHTTPS - Set to "true" to use HTTPS; if "true",
  # then 'https_key_file' and 'https_cert_file' must be provided.
  useHttps: false
# PersistenceConfig
persistence:
  # Removed in 7.2
  IndexDBFlushImmediate: true
  # DataLoadingSchema Startup data-loading scheme
  buildMaterializedViewsOnStart: "on_demand"
  # DataLoadingSchema Startup data-loading scheme
  buildPKIndexOnStart: "on_demand"
  # Target maximum data size for any one column in a chunk
  # (8 GB) (0 = disable). chunk_column_max_memory = 8192000000
  chunkColumnMaxMemory: 8192000000
  # Target maximum total data size for all columns in a chunk
  # (512 MB) (0 = disable).
  chunkMaxMemory: 512000000
  # Number of records per chunk ("0" disables chunking)
  chunkSize: 8000000
  # Determines whether to execute kernels on host (CPU) or device
  # (GPU). Possible values are:
  # * "default" : engine decides
  # * "host"    : execute only on the host
  # * "device"  : execute only on the device
  # * *<rows>*  : execute on the host if the chunked column
  #               contains the given number of *rows* or fewer;
  #               otherwise, execute on the device.
  executionMode: "device"
  # Removed in 7.2
  fsyncIndexDBImmediate: true
  # Removed in 7.2
  fsyncInodesImmediate: true
  # Removed in 7.2
  fsyncMetadataImmediate: true
  # Removed in 7.2
  fsyncOnInterval: true
  # Maximum number of open files for the IndexedDb object file
  # store. Removed in 7.2
  indexDBMaxOpenFiles:
  # Table of contents size for the IndexedDb object file store.
  # Removed in 7.2
  indexDBTOCSize:
  # Disable detection of sparse file support and use the full file
  # length, which may be an over-estimate of the actual usage in
  # the persist tier. Removed in 7.2
  indexDBTierByFileLength: false
  # Startup data-loading scheme:
  # * "always"    : load all the data into memory before
  #                 accepting requests
  # * "lazy"      : load the necessary data to start, but load
  #                 the remainder lazily
  # * "on_demand" : only load data as requests use it
  loadVectorsOnStart: "on_demand"
  # Removed in 7.2
  metadataFlushImmediate: true
  # Specify a base directory to store persistence data files.
  persistDirectory: "/opt/gpudb/persist"
  # Whether to use synchronous persistence file writing.
  # If "false", files will be written asynchronously. Removed in
  # 7.2
  persistSync: true
  # Duration in seconds for which persistence files will be
  # force-synced if out of sync, once per minute. NOTE: Files are
  # always opportunistically saved; this simply enforces a
  # maximum time a file can be out of date. Set to a very high
  # number to disable.
  persistSyncTime: 5
  # The maximum number of bytes in the shadow aggregate cache
  shadowAggSize: 100000000
  # Whether to enable chunk caching
  shadowCubeEnabled: true
  # The maximum number of bytes in the shadow filter cache
  shadowFilterSize: 100000000
  # Base directory to store hashed strings.
  smsDirectory: "${gaia.persist_directory}"
  # Maximum number of open files (per-TOM) for the SMS
  # (string) store.
  smsMaxOpenFiles: 128
  # Synchronous compression: compress vectors on set compression.
  synchronousCompression: false
  # Directory for GPUdb to use to store temporary files. Must be a
  # fully qualified path, have at least 100Mb of free space, and
  # have execute permission.
  tempDirectory: "${gaia.persist_directory}/tmp"
  # Base directory to store the text search index.
  textIndexDirectory: "${gaia.persist_directory}"
  # Enable checksum protection on the wal entries. New in 7.2
  walChecksum: true
  # Specifies how frequently wal entries are written with
  # background sync. New in 7.2
  walFlushFrequency: 60
  # Maximum size of each wal segment file. New in 7.2
  walMaxSegmentSize: 500000000
  # Approximate number of segment files to split the wal across. A
  # minimum of two is required. The size of the wal is limited by
  # segment_count * max_segment_size (per rank and per tom). Set
  # to 0 to remove a size limit on the wal itself, but still be
  # bounded by rank tier limits. Set to -1 to have the database
  # decide automatically per table. New in 7.2
  walSegmentCount:
  # Sync mode to use when persisting wal entries to disk:
  # * "none"       : Disable the wal
  # * "background" : Wal entries are periodically written instead
  #                  of immediately after each operation
  # * "flush"      : Protects entries in the event of a database
  #                  crash
  # * "fsync"      : Protects entries in the event of an OS crash
  # New in 7.2
  walSyncPolicy: "flush"
  # If true, any table that is found to be corrupt after replaying
  # its wal at startup will automatically be truncated so that
  # the table becomes operable. If false, the user will be
  # responsible for resolving the issue via SQL REPAIR TABLE or
  # similar. New in 7.2
  walTruncateCorruptTablesOnStart: true
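# Illustrative example (assumption, not part of the generated
# reference): trading some crash safety for ingest speed by moving
# the wal to background sync; the values are hypothetical.
#
#   persistence:
#     walSyncPolicy: "background"
#     walFlushFrequency: 30
#     walChecksum: true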
# PostgresProxy
postgresProxy:
  # Postgres Proxy Server - Start a Postgres(TCP) server as a
  # proxy to handle Postgres wire protocol messages.
  enablePostgresProxy: false
  # Set idle connection timeout in seconds. (default: "1200")
  idleConnectionTimeout: 1200
  # Set max number of queued server connections. (default: "1")
  maxQueuedConnections: 1
  # Set max number of server threads to spawn. (default: "64")
  maxThreads: 64
  # Set min number of server threads to spawn. (default: "2")
  minThreads: 2
  # TCP port that the postgres proxy server will listen on
  # if 'enable_postgres_proxy' is "true".
  port:
    # Number of port to expose on the pod's IP address. This must
    # be a valid port number, 0 < x < 65536.
    containerPort: 1
    # What host IP to bind the external port to.
    hostIP: string
    # Number of port to expose on the host. If specified, this
    # must be a valid port number, 0 < x < 65536. If HostNetwork
    # is specified, this must match ContainerPort. Most
    # containers do not need this.
    hostPort: 1
    # If specified, this must be an IANA_SVC_NAME and unique
    # within the pod. Each named port in a pod must have a unique
    # name. Name for the port that can be referred to by
    # services.
    name: string
    # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
    # to "TCP".
    protocol: "TCP"
  # Set to "true" to use SSL; if "true", then 'ssl_key_file'
  # and 'ssl_cert_file' must be provided.
  ssl: false
  # Files containing the SSL private key and the SSL certificate.
  # If required, a self-signed certificate (expires after 10
  # years) can be generated via the command:
  #   openssl req -newkey rsa:2048 -new -nodes -x509 \
  #     -days 3650 -keyout key.pem -out cert.pem
  sslCertFile: ""
  sslKeyFile: ""
# ProcessesConfig
processes:
  # Set the maximum number of threads per tom for table
  # initialization on startup
  initTablesNumThreadsPerTom: 8
  # Set the number of parallel calculation threads to use for data
  # processing; use "-1" to use the max number of threads
  # (not recommended)
  kernelOmpThreads: 3
  # The maximum number of web server threads to spawn
  maxHttpThreads: 512
  # Set the maximum number of threads (both workers and masters)
  # to be passed to TBB on initialization. Generally
  # speaking, 'max_tbb_threads_per_rank' - "1" TBB workers will
  # be created. Use "-1" for no limit.
  maxTbbThreadsPerRank: "-1"
  # The minimum number of web server threads to spawn
  minHttpThreads: 8
  # Set the number of parallel jobs to create for multi-child set
  # calculations; use "-1" to use the max number of threads
  # (not recommended)
  smOmpThreads: 2
  # Maximum number of simultaneous threads allocated to a given
  # request, on each rank. Note that thread allocation may also
  # be limited by resource group limits and/or system load.
  subtaskConcurrentyLimit: "-1"
  # Set the number of TaskCalculators per TOM, GPU data
  # processors.
  tcsPerTom: "-1"
  # Set the number of TOMs (data container shards) per rank
  tomsPerRank: 1
  # Set the number of TaskProcessors per TOM, CPU data
  # processors.
  tpsPerTom: "-1"
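# Illustrative example (assumption, not part of the generated
# reference): exposing the Postgres wire protocol on the
# conventional port; the port value below is hypothetical.
#
#   postgresProxy:
#     enablePostgresProxy: true
#     port:
#       containerPort: 5432
#       protocol: "TCP"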
# ProcsConfig
procs:
  # Directory where proc files are stored at runtime. Must be a
  # fully qualified path with execute permission. If not
  # specified, 'temp_directory' will be used.
  directory:
    # PersistentVolumeClaim is a user's request for and claim to
    # a persistent volume
    persistVolumeClaim:
      # APIVersion defines the versioned schema of this
      # representation of an object. Servers should convert
      # recognized schemas to the latest internal value, and may
      # reject unrecognized values. More info:
      # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
      apiVersion: app.kinetica.com/v1
      # Kind is a string value representing the REST resource
      # this object represents. Servers may infer this from the
      # endpoint the client submits requests to. Cannot be
      # updated. In CamelCase. More info:
      # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
      kind: KineticaCluster
      # Standard object's metadata. More info:
      # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
      metadata: {}
      # spec defines the desired characteristics of a volume
      # requested by a pod author. More info:
      # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
      spec:
        # accessModes contains the desired access modes the
        # volume should have. More info:
        # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
        accessModes: ["string"]
        # dataSource field can be used to specify either:
        # * An existing VolumeSnapshot object
        #   (snapshot.storage.k8s.io/VolumeSnapshot)
        # * An existing PVC (PersistentVolumeClaim)
        # If the provisioner or an external controller can
        # support the specified data source, it will create a new
        # volume based on the contents of the specified data
        # source. When the AnyVolumeDataSource feature gate is
        # enabled, dataSource contents will be copied to
        # dataSourceRef, and dataSourceRef contents will be
        # copied to dataSource when dataSourceRef.namespace is
        # not specified. If the namespace is specified, then
        # dataSourceRef will not be copied to dataSource.
        dataSource:
          # APIGroup is the group for the resource being
          # referenced. If APIGroup is not specified, the
          # specified Kind must be in the core API group. For
          # any other third-party types, APIGroup is required.
          apiGroup: string
          # Kind is the type of resource being referenced
          kind: KineticaCluster
          # Name is the name of resource being referenced
          name: string
        # dataSourceRef specifies the object from which to
        # populate the volume with data, if a non-empty volume is
        # desired. This may be any object from a non-empty API
        # group (non core object) or a PersistentVolumeClaim
        # object. When this field is specified, volume binding
        # will only succeed if the type of the specified object
        # matches some installed volume populator or dynamic
        # provisioner. This field will replace the functionality
        # of the dataSource field and as such if both fields are
        # non-empty, they must have the same value. For backwards
        # compatibility, when namespace isn't specified in
        # dataSourceRef, both fields (dataSource and
        # dataSourceRef) will be set to the same value
        # automatically if one of them is empty and the other is
        # non-empty. When namespace is specified in
        # dataSourceRef, dataSource isn't set to the same value
        # and must be empty. There are three important
        # differences between dataSource and dataSourceRef:
        # * While dataSource only allows two specific types of
        #   objects, dataSourceRef allows any non-core object, as
        #   well as PersistentVolumeClaim objects.
        # * While dataSource ignores disallowed values (dropping
        #   them), dataSourceRef preserves all values, and
        #   generates an error if a disallowed value is
        #   specified.
        # * While dataSource only allows local objects,
        #   dataSourceRef allows objects in any namespaces.
        # (Beta) Using this field requires the
        # AnyVolumeDataSource feature gate to be enabled.
        # (Alpha) Using the namespace field of dataSourceRef
        # requires the CrossNamespaceVolumeDataSource feature
        # gate to be enabled.
        dataSourceRef:
          # APIGroup is the group for the resource being
          # referenced. If APIGroup is not specified, the
          # specified Kind must be in the core API group. For
          # any other third-party types, APIGroup is required.
          apiGroup: string
          # Kind is the type of resource being referenced
          kind: KineticaCluster
          # Name is the name of resource being referenced
          name: string
          # Namespace is the namespace of resource being
          # referenced. Note that when a namespace is specified,
          # a gateway.networking.k8s.io/ReferenceGrant object is
          # required in the referent namespace to allow that
          # namespace's owner to accept the reference. See the
          # ReferenceGrant documentation for details. (Alpha)
          # This field requires the
          # CrossNamespaceVolumeDataSource feature gate to be
          # enabled.
          namespace: string
        # resources represents the minimum resources the volume
        # should have. If the RecoverVolumeExpansionFailure
        # feature is enabled, users are allowed to specify
        # resource requirements that are lower than the previous
        # value but must still be higher than the capacity
        # recorded in the status field of the claim. More info:
        # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
        resources:
          # Claims lists the names of resources, defined in
          # spec.resourceClaims, that are used by this container.
          # This is an alpha field and requires enabling the
          # DynamicResourceAllocation feature gate. This field is
          # immutable. It can only be set for containers.
          claims:
          - name: string
          # Limits describes the maximum amount of compute
          # resources allowed. More info:
          # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
          limits: {}
          # Requests describes the minimum amount of compute
          # resources required. If Requests is omitted for a
          # container, it defaults to Limits if that is
          # explicitly specified, otherwise to an
          # implementation-defined value. Requests cannot exceed
          # Limits. More info:
          # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
          requests: {}
        # selector is a label query over volumes to consider for
        # binding.
        selector:
          # matchExpressions is a list of label selector
          # requirements. The requirements are ANDed.
          matchExpressions:
          - key: string
            # operator represents a key's relationship to a set
            # of values. Valid operators are In, NotIn, Exists
            # and DoesNotExist.
            operator: string
            # values is an array of string values. If the
            # operator is In or NotIn, the values array must be
            # non-empty. If the operator is Exists or
            # DoesNotExist, the values array must be empty. This
            # array is replaced during a strategic merge patch.
            values: ["string"]
          # matchLabels is a map of {key,value} pairs. A single
          # {key,value} in the matchLabels map is equivalent to
          # an element of matchExpressions, whose key field
          # is "key", the operator is "In", and the values array
          # contains only "value". The requirements are ANDed.
          matchLabels: {}
        # storageClassName is the name of the StorageClass
        # required by the claim. More info:
        # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
        storageClassName: string
        # volumeMode defines what type of volume is required by
        # the claim. Value of Filesystem is implied when not
        # included in claim spec.
        volumeMode: string
        # volumeName is the binding reference to the
        # PersistentVolume backing this claim.
        volumeName: string
      # status represents the current information/status of a
      # persistent volume claim. Read-only. More info:
      # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
      status:
        # accessModes contains the actual access modes the volume
        # backing the PVC has. More info:
        # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
        accessModes: ["string"]
        # allocatedResources is the storage resource within
        # AllocatedResources tracks the capacity allocated to a
        # PVC. It may be larger than the actual capacity when a
        # volume expansion operation is requested. For storage
        # quota, the larger value from allocatedResources and
        # PVC.spec.resources is used. If allocatedResources is
        # not set, PVC.spec.resources alone is used for quota
        # calculation. If a volume expansion capacity request is
        # lowered, allocatedResources is only lowered if there
        # are no expansion operations in progress and if the
        # actual volume capacity is equal or lower than the
        # requested capacity. This is an alpha field and requires
        # enabling the RecoverVolumeExpansionFailure feature.
        allocatedResources: {}
        # capacity represents the actual resources of the
        # underlying volume.
        capacity: {}
        # conditions is the current Condition of persistent
        # volume claim. If the underlying persistent volume is
        # being resized then the Condition will be set
        # to 'ResizeStarted'.
        conditions:
        - lastProbeTime: string
          # lastTransitionTime is the time the condition
          # transitioned from one status to another.
          lastTransitionTime: string
          # message is the human-readable message indicating
          # details about last transition.
          message: string
          # reason is a unique, this should be a short, machine
          # understandable string that gives the reason for
          # condition's last transition. If it
          # reports "ResizeStarted" that means the underlying
          # persistent volume is being resized.
          reason: string
          status: string
          # PersistentVolumeClaimConditionType is a valid value
          # of PersistentVolumeClaimCondition.Type
          type: string
        # phase represents the current phase of
        # PersistentVolumeClaim.
        phase: string
        # resizeStatus stores the status of the resize operation.
        # ResizeStatus is not set by default, but when expansion
        # is complete resizeStatus is set to an empty string by
        # the resize controller or kubelet. This is an alpha
        # field and requires enabling the
        # RecoverVolumeExpansionFailure feature.
        resizeStatus: string
    # VolumeMount describes a mounting of a Volume within a
    # container.
    volumeMount:
      # Path within the container at which the volume should be
      # mounted. Must not contain ':'.
      mountPath: string
      # mountPropagation determines how mounts are propagated
      # from the host to the container and the other way around.
      # When not set, MountPropagationNone is used. This field is
      # beta in 1.10.
      mountPropagation: string
      # This must match the Name of a Volume.
      name: string
      # Mounted read-only if true, read-write otherwise (false or
      # unspecified). Defaults to false.
      readOnly: true
      # Path within the volume from which the container's volume
      # should be mounted. Defaults to "" (volume's root).
      subPath: string
      # Expanded path within the volume from which the
      # container's volume should be mounted. Behaves similarly
      # to SubPath but environment variable references
      # $(VAR_NAME) are expanded using the container's
      # environment. Defaults to "" (volume's root). SubPathExpr
      # and SubPath are mutually exclusive.
      subPathExpr: string
  # Enable procs (UDFs)
  enable: true
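# Illustrative example (assumption, not part of the generated
# reference): backing the UDF proc directory with a dedicated PVC;
# the storage class and size below are hypothetical.
#
#   procs:
#     enable: true
#     directory:
#       persistVolumeClaim:
#         spec:
#           accessModes: ["ReadWriteOnce"]
#           storageClassName: "standard"
#           resources:
#             requests:
#               storage: "10Gi"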
# SecurityConfig
security:
  # Automatically create accounts for externally-authenticated
  # users. If 'enable_external_authentication' is "false", this
  # setting has no effect. Note that accounts are not
  # automatically deleted if users are removed from the external
  # authentication provider and will be orphaned.
  autoCreateExternalUsers: false
  # Automatically add roles passed in via the "KINETICA_ROLES"
  # HTTP header to externally-authenticated users. Specified
  # roles that do not exist are ignored.
  # If 'enable_external_authentication' is "false", this setting
  # has no effect. IMPORTANT: DO NOT ENABLE unless the
  # authentication proxy is configured to block "KINETICA_ROLES"
  # HTTP headers passed in from clients.
  autoGrantExternalRoles: false
  # Comma-separated list of roles to revoke from
  # externally-authenticated users prior to granting roles passed
  # in via the "KINETICA_ROLES" HTTP header, or "*" to revoke all
  # roles. Preceding a role name with an "!" overrides the
  # revocation (e.g. "*,!foo" revokes all roles except "foo").
  # Leave blank to disable. If
  # either 'enable_external_authentication'
  # or 'auto_grant_external_roles' is "false", this setting has
  # no effect.
  autoRevokeExternalRoles: false
  # Enable authorization checks. When disabled, all requests will
  # be treated as the administrative user.
  enableAuthorization: true
  # Enable external (LDAP, Kerberos, etc.) authentication. User
  # IDs of externally-authenticated users must be passed in via
  # the "REMOTE_USER" HTTP header from the authentication proxy.
  # May be used in conjunction with the 'enable_httpd_proxy'
  # setting above for an integrated external authentication
  # solution. IMPORTANT: DO NOT ENABLE unless external access to
  # GPUdb ports has been blocked via firewall AND the
  # authentication proxy is configured to block "REMOTE_USER"
  # HTTP headers passed in from clients.
  enableExternalAuthentication: true
  # ExternalSecurity
  externalSecurity:
    # Ranger
    ranger:
      # AuthorizerAddress - The network URI for the
      # ranger_authorizer to start. The URI can be either TCP or
      # IPC. A TCP address is used to indicate a remote
      # ranger_authorizer which may run on other hosts. The IPC
      # address is for a local ranger_authorizer. Example
      # addresses for remote or TCP servers:
      #   tcp://127.0.0.1:9293
      #   tcp://HOST_IP:9293
      # Example address for local IPC servers:
      #   ipc:///tmp/gpudb-ranger-0
      # security.external.ranger_authorizer.address =
      #   ipc://${gaia.temp_directory}/gpudb-ranger-0
      authorizerAddress: "ipc://${gaia.temp_directory}/gpudb-ranger-0"
      # Remote debugger port used for the ranger_authorizer.
      # Setting the port to "0" disables remote debugging. NOTE:
      # The recommended port to use is "5005".
      # security.external.ranger_authorizer.remote_debug_port = 0
      authorizerRemoteDebugPort: 0
      # AuthorizerTimeout - Ranger Authorizer timeout in seconds
      # security.external.ranger_authorizer.timeout = 120
      authorizerTimeout: 120
      # CacheMinutes - Maximum minutes to hold on to data from
      # Ranger. security.external.ranger.cache_minutes = 60
      cacheMinutes: 60
      # Name of the service created on the Ranger Server to
      # manage this Kinetica instance.
      # security.external.ranger.service_name = kinetica
      name: "kinetica"
      # ExtURL - URL of the Ranger REST API, e.g.,
      # https://localhost:6080/. Leave blank for no Ranger
      # Server. security.external.ranger.url =
      url: string
  # The minimum allowable password length.
  minPasswordLength: 4
  # Require all users to be authenticated. Disable this to allow
  # users to access the database as the 'unauthenticated' user.
  # Useful for situations where the public needs to access the
  # data.
  requireAuthentication: true
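  # Illustrative example (assumption, not part of the generated
  # reference): external authentication behind the httpd auth
  # proxy, with accounts auto-created on first login.
  #
  #   security:
  #     enableExternalAuthentication: true
  #     autoCreateExternalUsers: true
  #     requireAuthentication: true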
  # UnifiedSecurityNamespace - Use a single namespace for internal
  # and external user IDs and role names. If false, external user
  # IDs must be prefixed with "@" to differentiate them from
  # internal user IDs and role names (except in the "REMOTE_USER"
  # HTTP header, where the "@" is omitted).
  # unified_security_namespace = true
  unifiedSecurityNamespace: true
# SQLConfig
sql:
  # SQLPlannerAddress is not included, as it is just the default,
  # always
  address: "ipc://${gaia.temp_directory}/gpudb-query-engine-0"
  # Enable the cost-based optimizer
  costBasedOptimization: false
  # Enable distributed joins
  distributedJoins: true
  # Enable distributed operations
  distributedOperations: true
  # Enable the Query Planner
  enablePlanner: true
  # Perform joins between only 2 tables at a time; default is all
  # tables involved in the operation at once
  forceBinaryJoins: false
  # Perform unions/intersections/exceptions between only 2 tables
  # at a time; default is all tables involved in the operation at
  # once
  forceBinarySetOps: false
  # Max parallel steps
  maxParallelSteps: 4
  # Max allowed view nesting levels. Valid range: 1-64
  maxViewNestingLevels: 16
  # TTL of the paging results table
  pagingTableTTL: 20
  # Enable parallel query evaluation
  parallelExecution: true
  # The maximum number of entries in the SQL plan cache. The
  # default is "4000" entries, but the configurable range
  # is "1" - "1000000". Plan caching will be disabled if the
  # value is set outside of that range.
  planCacheSize: 4000
  # The maximum memory for the query planner to use, in megabytes.
  plannerMaxMemory: 4096
  # The maximum stack size for the query planner threads to use,
  # in megabytes.
  plannerMaxStack: 6
  # Query planner timeout in seconds
  plannerTimeout: 120
  # Max query planner threads
  plannerWorkers: 16
  # Remote debugger port used for the query planner. Setting the
  # port to "0" disables remote debugging. NOTE: The recommended
  # port to use is "5005".
  remoteDebugPort: 5005
  # TTL of the query cache results table
  resultsCacheTTL: 60
  # Enable query results caching
  resultsCaching: true
  # Enable rule-based query rewrites
  ruleBasedOptimization: true
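# Illustrative example (assumption, not part of the generated
# reference): giving the planner more headroom on a large schema;
# the values below are hypothetical, not recommendations.
#
#   sql:
#     plannerMaxMemory: 8192
#     plannerTimeout: 300
#     planCacheSize: 10000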
# SQLEngineConfig
sqlEngine:
  # Enable the cost-based optimizer
  costBasedOptimization: false
  # Name of the default collection for user tables
  defaultSchema: ""
  # Enable distributed joins
  distributedJoins: true
  # Enable distributed operations
  distributedOperations: true
  # Perform joins between only 2 tables at a time; default is all
  # tables involved in the operation at once
  forceBinaryJoins: false
  # Perform unions/intersections/exceptions between only 2 tables
  # at a time; default is all tables involved in the operation at
  # once
  forceBinarySetOps: false
  # Max parallel steps
  maxParallelSteps: 4
  # Max allowed view nesting levels. Valid range: 1-64
  maxViewNestingLevels: 16
  # TTL of the paging results table
  pagingTableTTL: 20
  # Enable parallel query evaluation
  parallelExecution: true
  # The maximum number of entries in the SQL plan cache. The
  # default is "4000" entries, but the configurable range
  # is "1" - "1000000". Plan caching will be disabled if the
  # value is set outside of that range.
  planCacheSize: 4000
  # PlannerConfig
  planner:
    # Enable the Query Planner
    enablePlanner: true
    # The maximum memory for the query planner to use, in
    # megabytes.
    maxMemory: 4096
    # The maximum stack size for the query planner threads to
    # use, in megabytes.
    maxStack: 6
    # The network URI for the query planner to start. The URI can
    # be either TCP or IPC. A TCP address is used to indicate a
    # remote query planner which may run on other hosts. The IPC
    # address is for a local query planner. Examples for remote
    # or TCP servers:
    #   sql.planner.address = tcp://127.0.0.1:9293
    #   sql.planner.address = tcp://HOST_IP:9293
    # Example for local IPC servers:
    #   sql.planner.address = ipc:///tmp/gpudb-query-engine-0
    plannerAddress: "ipc:///tmp/gpudb-query-engine-0"
    # Remote debugger port used for the query planner. Setting
    # the port to "0" disables remote debugging. NOTE: The
    # recommended port to use is "5005".
    remoteDebugPort: 0
    # Query planner timeout in seconds
    timeout: 120
    # Max query planner threads
    workers: 16
  results:
    # TTL of the query cache results table
    cacheTTL: 60
    # Enable query results caching
    caching: true
  # Enable rule-based query rewrites
  ruleBasedOptimization: true
  # Name of the collection that will be used to store result
  # tables generated as part of query execution
  tempCollection: "__SQL_TEMP"
# StatisticsConfig
statistics:
  # system_metadata.stats_aggr_rowcount = 10000
  aggrRowCount: 10000
  # system_metadata.stats_aggr_time = 1
  aggrTime: 1
  # Run a statistics server to collect information about Kinetica
  # and the machines it runs on.
  enable: true
  # Statistics server IP address (run on the head node); the
  # default port is "2003".
  ipAddress: "${gaia.host0.address}"
  # Statistics server namespace - should be a machine identifier
  namespace: "gpudb"
  port: 2003
  # System metadata catalog settings
  # system_metadata.stats_retention_days = 21
  retentionDays: 21
# TextSearchConfig
textSearch:
  # Enable text search capability within the database.
  enableTextSearch: false
  # Number of text indices to start for each rank
  textIndicesPerTom: 2
  # Searcher refresh intervals - specifies the maximum delay
  # (in seconds) between writing to the text search index and
  # being able to search for the value just written. A value
  # of "0" ensures that writes to the index are immediately
  # available to be searched. A more nominal value of "100"
  # should improve ingest speed at the cost of some delay in
  # being able to text search newly added values.
  textSearcherRefreshInterval: 20
  # Use the production-capable external text server instead of
  # the lightweight internal server, which should only be used
  # for light testing. Note: The internal text server is
  # deprecated and may be removed in future versions.
  useExternalTextServer: true
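# Illustrative example (assumption, not part of the generated
# reference): enabling text search with a longer refresh interval
# to favor ingest speed over search freshness.
#
#   textSearch:
#     enableTextSearch: true
#     textSearcherRefreshInterval: 100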
tieredStorage:
  # Cold Storage Tiers can be used to extend the storage capacity
  # of the Persist Tier. Assign a tier strategy with cold storage
  # to objects that will be infrequently accessed, since they
  # will be moved as needed from the Persist Tier. The Cold
  # Storage Tier is typically a much larger capacity physical
  # disk or a cloud-based storage system which may not be as
  # performant as the Persist Tier storage. A default storage
  # limit and eviction thresholds can be set across all ranks for
  # a given Cold Storage Tier, while one or more ranks within a
  # Cold Storage Tier may be configured to override those
  # defaults. NOTE: If an object needs to be pulled out of cold
  # storage during a query, it may need to use the local persist
  # directory as a temporary swap space. This may trigger an
  # eviction of other persisted items to cold storage due to a
  # low disk space condition defined by the watermark settings
  # for the Persist Tier.
  coldStorageTier:
    # ColdStorageAzure
    coldStorageAzure:
      # 'base_path' : A base path based on the provider type for
      # this tier.
      basePath: string
      clientID: string
      clientSecret: string
      # 'connection_timeout' : Timeout in seconds for connecting
      # to this storage provider.
      connectionTimeout: "30"
      # 'base_path' : A base path based on the provider type for
      # this tier. BasePath string `json:"basePath,omitempty"`
      containerName: "/gpudb/cold_storage"
      # 'high_watermark' : Percentage used eviction threshold.
      # Once usage exceeds this value, evictions from this tier
      # will be scheduled in the background and continue until
      # the 'low_watermark' percentage usage is reached. Default
      # is "90", signifying a 90% memory usage threshold.
      highWatermark: 90
      # 'limit' : The maximum (bytes) per rank that can be
      # allocated across all resource groups.
      limit: "1Gi"
      # 'low_watermark' : Percentage used recovery threshold.
      # Once usage exceeds the 'high_watermark', evictions will
      # continue until usage falls below this recovery threshold.
      # Default is "80", signifying an 80% usage threshold.
      lowWatermark: 80
      name: string
      # A base directory to use as a space for this tier.
      path: "default"
      provisioner: "docker.io/hostpath"
      sasToken: string
      storageAccountKey: string
      storageAccountName: string
      tenantID: string
      useManagedCredentials: false
      # Kubernetes Persistent Volume Claim for this disk tier.
      volumeClaim:
        # APIVersion defines the versioned schema of this
        # representation of an object. Servers should convert
        # recognized schemas to the latest internal value, and
        # may reject unrecognized values. More info:
        # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
        apiVersion: app.kinetica.com/v1
        # Kind is a string value representing the REST resource
        # this object represents. Servers may infer this from the
        # endpoint the client submits requests to. Cannot be
        # updated. In CamelCase. More info:
        # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
        kind: KineticaCluster
        # Standard object's metadata. More info:
        # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
        metadata: {}
        # spec defines the desired characteristics of a volume
        # requested by a pod author. More info:
        # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
        spec:
          # accessModes contains the desired access modes the
          # volume should have. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
          accessModes: ["string"]
          # dataSource field can be used to specify either:
          # * An existing VolumeSnapshot object
          #   (snapshot.storage.k8s.io/VolumeSnapshot)
          # * An existing PVC (PersistentVolumeClaim)
          # If the provisioner or an external controller can
          # support the specified data source, it will create a
          # new volume based on the contents of the specified
          # data source. When the AnyVolumeDataSource feature
          # gate is enabled, dataSource contents will be copied
          # to dataSourceRef, and dataSourceRef contents will be
          # copied to dataSource when dataSourceRef.namespace is
          # not specified. If the namespace is specified, then
          # dataSourceRef will not be copied to dataSource.
          dataSource:
            # APIGroup is the group for the resource being
            # referenced. If APIGroup is not specified, the
            # specified Kind must be in the core API group. For
            # any other third-party types, APIGroup is required.
            apiGroup: string
            # Kind is the type of resource being referenced
            kind: KineticaCluster
            # Name is the name of resource being referenced
            name: string
          # dataSourceRef specifies the object from which to
          # populate the volume with data, if a non-empty volume
          # is desired. This may be any object from a non-empty
          # API group (non core object) or a
          # PersistentVolumeClaim object. When this field is
          # specified, volume binding will only succeed if the
          # type of the specified object matches some installed
          # volume populator or dynamic provisioner. This field
          # will replace the functionality of the dataSource
          # field and as such if both fields are non-empty, they
          # must have the same value. For backwards
          # compatibility, when namespace isn't specified in
          # dataSourceRef, both fields (dataSource and
          # dataSourceRef) will be set to the same value
          # automatically if one of them is empty and the other
          # is non-empty. When namespace is specified in
          # dataSourceRef, dataSource isn't set to the same value
          # and must be empty. There are three important
          # differences between dataSource and dataSourceRef:
          # * While dataSource only allows two specific types of
          #   objects, dataSourceRef allows any non-core object,
          #   as well as PersistentVolumeClaim objects.
          # * While dataSource ignores disallowed values
          #   (dropping them), dataSourceRef preserves all
          #   values, and generates an error if a disallowed
          #   value is specified.
          # * While dataSource only allows local objects,
          #   dataSourceRef allows objects in any namespaces.
          # (Beta) Using this field requires the
          # AnyVolumeDataSource feature gate to be enabled.
          # (Alpha) Using the namespace field of dataSourceRef
          # requires the CrossNamespaceVolumeDataSource feature
          # gate to be enabled.
          dataSourceRef:
            # APIGroup is the group for the resource being
            # referenced. If APIGroup is not specified, the
            # specified Kind must be in the core API group. For
            # any other third-party types, APIGroup is required.
            apiGroup: string
            # Kind is the type of resource being referenced
            kind: KineticaCluster
            # Name is the name of resource being referenced
            name: string
            # Namespace is the namespace of resource being
            # referenced. Note that when a namespace is
            # specified, a
            # gateway.networking.k8s.io/ReferenceGrant object is
            # required in the referent namespace to allow that
            # namespace's owner to accept the reference. See the
            # ReferenceGrant documentation for details. (Alpha)
            # This field requires the
            # CrossNamespaceVolumeDataSource feature gate to be
            # enabled.
            namespace: string
          # resources represents the minimum resources the volume
          # should have. If the RecoverVolumeExpansionFailure
          # feature is enabled, users are allowed to specify
          # resource requirements that are lower than the
          # previous value but must still be higher than the
          # capacity recorded in the status field of the claim.
          # More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
          resources:
            # Claims lists the names of resources, defined in
            # spec.resourceClaims, that are used by this
            # container. This is an alpha field and requires
            # enabling the DynamicResourceAllocation feature
            # gate. This field is immutable. It can only be set
            # for containers.
            claims:
            - name: string
            # Limits describes the maximum amount of compute
            # resources allowed. More info:
            # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
            limits: {}
            # Requests describes the minimum amount of compute
            # resources required. If Requests is omitted for a
            # container, it defaults to Limits if that is
            # explicitly specified, otherwise to an
            # implementation-defined value. Requests cannot
            # exceed Limits. More info:
            # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
            requests: {}
          # selector is a label query over volumes to consider
          # for binding.
          selector:
            # matchExpressions is a list of label selector
            # requirements. The requirements are ANDed.
            matchExpressions:
            - key: string
              # operator represents a key's relationship to a
              # set of values. Valid operators are In, NotIn,
              # Exists and DoesNotExist.
              operator: string
              # values is an array of string values. If the
              # operator is In or NotIn, the values array must
              # be non-empty. If the operator is Exists or
              # DoesNotExist, the values array must be empty.
              # This array is replaced during a strategic merge
              # patch.
              values: ["string"]
            # matchLabels is a map of {key,value} pairs. A
            # single {key,value} in the matchLabels map is
            # equivalent to an element of matchExpressions,
            # whose key field is "key", the operator is "In",
            # and the values array contains only "value". The
            # requirements are ANDed.
            matchLabels: {}
          # storageClassName is the name of the StorageClass
          # required by the claim. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
          storageClassName: string
          # volumeMode defines what type of volume is required
          # by the claim. Value of Filesystem is implied when
          # not included in claim spec.
          volumeMode: string
          # volumeName is the binding reference to the
          # PersistentVolume backing this claim.
          volumeName: string
        # status represents the current information/status of a
        # persistent volume claim. Read-only. More info:
        # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
        status:
          # accessModes contains the actual access modes the
          # volume backing the PVC has. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
          accessModes: ["string"]
          # allocatedResources is the storage resource within
          # AllocatedResources tracks the capacity allocated to
          # a PVC. It may be larger than the actual capacity
          # when a volume expansion operation is requested. For
          # storage quota, the larger value from
          # allocatedResources and PVC.spec.resources is used.
          # If allocatedResources is not set,
          # PVC.spec.resources alone is used for quota
          # calculation. If a volume expansion capacity request
          # is lowered, allocatedResources is only lowered if
          # there are no expansion operations in progress and if
          # the actual volume capacity is equal or lower than
          # the requested capacity. This is an alpha field and
          # requires enabling the RecoverVolumeExpansionFailure
          # feature.
          allocatedResources: {}
          # capacity represents the actual resources of the
          # underlying volume.
          capacity: {}
          # conditions is the current Condition of persistent
          # volume claim. If the underlying persistent volume is
          # being resized then the Condition will be set
          # to 'ResizeStarted'.
          conditions:
          - lastProbeTime: string
            # lastTransitionTime is the time the condition
            # transitioned from one status to another.
            lastTransitionTime: string
            # message is the human-readable message indicating
            # details about last transition.
            message: string
            # reason is a unique, this should be a short,
            # machine understandable string that gives the
            # reason for condition's last transition. If it
            # reports "ResizeStarted" that means the underlying
            # persistent volume is being resized.
            reason: string
            status: string
            # PersistentVolumeClaimConditionType is a valid
            # value of PersistentVolumeClaimCondition.Type
            type: string
          # phase represents the current phase of
          # PersistentVolumeClaim.
          phase: string
          # resizeStatus stores the status of the resize
          # operation. ResizeStatus is not set by default, but
          # when expansion is complete resizeStatus is set to an
          # empty string by the resize controller or kubelet.
          # This is an alpha field and requires enabling the
          # RecoverVolumeExpansionFailure feature.
          resizeStatus: string
      # 'wait_timeout' : Timeout in seconds for reading from or
      # writing to this storage provider.
      waitTimeout: "90"
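    # Illustrative example (assumption, not part of the generated
    # reference): an Azure cold-storage tier using managed
    # credentials; the account and container names, and the limit,
    # are hypothetical.
    #
    #   coldStorageAzure:
    #     storageAccountName: "mystorageaccount"
    #     containerName: "/gpudb/cold_storage"
    #     useManagedCredentials: true
    #     limit: "10Ti"
    #     highWatermark: 90
    #     lowWatermark: 80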
    # ColdStorageDisk
    coldStorageDisk:
      # 'base_path' : A base path based on the provider type for
      # this tier.
      basePath: string
      # 'connection_timeout' : Timeout in seconds for connecting
      # to this storage provider.
      connectionTimeout: "30"
      # 'high_watermark' : Percentage used eviction threshold.
      # Once usage exceeds this value, evictions from this tier
      # will be scheduled in the background and continue until
      # the 'low_watermark' percentage usage is reached. Default
      # is "90", signifying a 90% memory usage threshold.
      highWatermark: 90
      # 'limit' : The maximum (bytes) per rank that can be
      # allocated across all resource groups.
      limit: "1Gi"
      # 'low_watermark' : Percentage used recovery threshold.
      # Once usage exceeds the 'high_watermark', evictions will
      # continue until usage falls below this recovery threshold.
      # Default is "80", signifying an 80% usage threshold.
      lowWatermark: 80
      name: string
      # A base directory to use as a space for this tier.
      path: "default"
      provisioner: "docker.io/hostpath"
      # Kubernetes Persistent Volume Claim for this disk tier.
      volumeClaim:
        # APIVersion defines the versioned schema of this
        # representation of an object. Servers should convert
        # recognized schemas to the latest internal value, and
        # may reject unrecognized values. More info:
        # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
        apiVersion: app.kinetica.com/v1
        # Kind is a string value representing the REST resource
        # this object represents. Servers may infer this from the
        # endpoint the client submits requests to. Cannot be
        # updated. In CamelCase. More info:
        # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
        kind: KineticaCluster
        # Standard object's metadata. More info:
        # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
        metadata: {}
        # spec defines the desired characteristics of a volume
        # requested by a pod author. More info:
        # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
        spec:
          # accessModes contains the desired access modes the
          # volume should have. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
          accessModes: ["string"]
          # dataSource field can be used to specify either:
          # * An existing VolumeSnapshot object
          #   (snapshot.storage.k8s.io/VolumeSnapshot)
          # * An existing PVC (PersistentVolumeClaim)
          # If the provisioner or an external controller can
          # support the specified data source, it will create a
          # new volume based on the contents of the specified
          # data source. When the AnyVolumeDataSource feature
          # gate is enabled, dataSource contents will be copied
          # to dataSourceRef, and dataSourceRef contents will be
          # copied to dataSource when dataSourceRef.namespace is
          # not specified. If the namespace is specified, then
          # dataSourceRef will not be copied to dataSource.
          dataSource:
            # APIGroup is the group for the resource being
            # referenced. If APIGroup is not specified, the
            # specified Kind must be in the core API group. For
            # any other third-party types, APIGroup is required.
            apiGroup: string
            # Kind is the type of resource being referenced
            kind: KineticaCluster
            # Name is the name of resource being referenced
            name: string
          # dataSourceRef specifies the object from which to
          # populate the volume with data, if a non-empty volume
          # is desired. This may be any object from a non-empty
          # API group (non core object) or a
          # PersistentVolumeClaim object. When this field is
          # specified, volume binding will only succeed if the
          # type of the specified object matches some installed
          # volume populator or dynamic provisioner. This field
          # will replace the functionality of the dataSource
          # field and as such if both fields are non-empty, they
          # must have the same value. For backwards
          # compatibility, when namespace isn't specified in
          # dataSourceRef, both fields (dataSource and
          # dataSourceRef) will be set to the same value
          # automatically if one of them is empty and the other
          # is non-empty. When namespace is specified in
          # dataSourceRef, dataSource isn't set to the same value
          # and must be empty. There are three important
          # differences between dataSource and dataSourceRef:
          # * While dataSource only allows two specific types of
          #   objects, dataSourceRef allows any non-core object,
          #   as well as PersistentVolumeClaim objects.
          # * While dataSource ignores disallowed values
          #   (dropping them), dataSourceRef preserves all
          #   values, and generates an error if a disallowed
          #   value is specified.
          # * While dataSource only allows local objects,
          #   dataSourceRef allows objects in any namespaces.
          # (Beta) Using this field requires the
          # AnyVolumeDataSource feature gate to be enabled.
          # (Alpha) Using the namespace field of dataSourceRef
          # requires the CrossNamespaceVolumeDataSource feature
          # gate to be enabled.
          dataSourceRef:
            # APIGroup is the group for the resource being
            # referenced. If APIGroup is not specified, the
            # specified Kind must be in the core API group. For
            # any other third-party types, APIGroup is required.
            apiGroup: string
            # Kind is the type of resource being referenced
            kind: KineticaCluster
            # Name is the name of resource being referenced
            name: string
            # Namespace is the namespace of resource being
            # referenced. Note that when a namespace is
            # specified, a
            # gateway.networking.k8s.io/ReferenceGrant object is
            # required in the referent namespace to allow that
            # namespace's owner to accept the reference. See the
            # ReferenceGrant documentation for details. (Alpha)
            # This field requires the
            # CrossNamespaceVolumeDataSource feature gate to be
            # enabled.
            namespace: string
          # resources represents the minimum resources the volume
          # should have. If the RecoverVolumeExpansionFailure
          # feature is enabled, users are allowed to specify
          # resource requirements that are lower than the
          # previous value but must still be higher than the
          # capacity recorded in the status field of the claim.
          # More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
          resources:
            # Claims lists the names of resources, defined in
            # spec.resourceClaims, that are used by this
            # container. This is an alpha field and requires
            # enabling the DynamicResourceAllocation feature
            # gate. This field is immutable. It can only be set
            # for containers.
            claims:
            - name: string
            # Limits describes the maximum amount of compute
            # resources allowed. More info:
            # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
            limits: {}
            # Requests describes the minimum amount of compute
            # resources required. If Requests is omitted for a
            # container, it defaults to Limits if that is
            # explicitly specified, otherwise to an
            # implementation-defined value. Requests cannot
            # exceed Limits. More info:
            # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
            requests: {}
          # selector is a label query over volumes to consider
          # for binding.
          selector:
            # matchExpressions is a list of label selector
            # requirements. The requirements are ANDed.
            matchExpressions:
            - key: string
              # operator represents a key's relationship to a
              # set of values. Valid operators are In, NotIn,
              # Exists and DoesNotExist.
              operator: string
              # values is an array of string values. If the
              # operator is In or NotIn, the values array must
              # be non-empty. If the operator is Exists or
              # DoesNotExist, the values array must be empty.
              # This array is replaced during a strategic merge
              # patch.
              values: ["string"]
            # matchLabels is a map of {key,value} pairs. A
            # single {key,value} in the matchLabels map is
            # equivalent to an element of matchExpressions,
            # whose key field is "key", the operator is "In",
            # and the values array contains only "value". The
            # requirements are ANDed.
            matchLabels: {}
          # storageClassName is the name of the StorageClass
          # required by the claim. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
          storageClassName: string
          # volumeMode defines what type of volume is required
          # by the claim. Value of Filesystem is implied when
          # not included in claim spec.
          volumeMode: string
          # volumeName is the binding reference to the
          # PersistentVolume backing this claim.
          volumeName: string
        # status represents the current information/status of a
        # persistent volume claim. Read-only. More info:
        # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
        status:
          # accessModes contains the actual access modes the
          # volume backing the PVC has. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
          accessModes: ["string"]
          # allocatedResources is the storage resource within
          # AllocatedResources tracks the capacity allocated to
          # a PVC. It may be larger than the actual capacity
          # when a volume expansion operation is requested. For
          # storage quota, the larger value from
          # allocatedResources and PVC.spec.resources is used.
          # If allocatedResources is not set,
          # PVC.spec.resources alone is used for quota
          # calculation. If a volume expansion capacity request
          # is lowered, allocatedResources is only lowered if
          # there are no expansion operations in progress and if
          # the actual volume capacity is equal or lower than
          # the requested capacity. This is an alpha field and
          # requires enabling the RecoverVolumeExpansionFailure
          # feature.
          allocatedResources: {}
          # capacity represents the actual resources of the
          # underlying volume.
          capacity: {}
          # conditions is the current Condition of persistent
          # volume claim. If the underlying persistent volume is
          # being resized then the Condition will be set
          # to 'ResizeStarted'.
          conditions:
          - lastProbeTime: string
            # lastTransitionTime is the time the condition
            # transitioned from one status to another.
            lastTransitionTime: string
            # message is the human-readable message indicating
            # details about last transition.
            message: string
            # reason is a unique, this should be a short,
            # machine understandable string that gives the
            # reason for condition's last transition. If it
            # reports "ResizeStarted" that means the underlying
            # persistent volume is being resized.
            reason: string
            status: string
            # PersistentVolumeClaimConditionType is a valid
            # value of PersistentVolumeClaimCondition.Type
            type: string
          # phase represents the current phase of
          # PersistentVolumeClaim.
          phase: string
          # resizeStatus stores the status of the resize
          # operation. ResizeStatus is not set by default, but
          # when expansion is complete resizeStatus is set to an
          # empty string by the resize controller or kubelet.
          # This is an alpha field and requires enabling the
          # RecoverVolumeExpansionFailure feature.
          resizeStatus: string
      # 'wait_timeout' : Timeout in seconds for reading from or
      # writing to this storage provider.
      waitTimeout: "90"
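    # Illustrative example (assumption, not part of the generated
    # reference): a disk-backed cold-storage tier on a mounted
    # volume; the path and limit below are hypothetical.
    #
    #   coldStorageDisk:
    #     basePath: "/mnt/cold"
    #     limit: "4Ti"
    #     highWatermark: 90
    #     lowWatermark: 80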
# ColdStorageHDFS
coldStorageHDFS:
  # ColdStorageDisk
  default:
    # 'base_path' : A base path based on the provider type for this
    # tier.
    basePath: string
    # 'connection_timeout' : Timeout in seconds for connecting to
    # this storage provider.
    connectionTimeout: "30"
    # 'high_watermark' : Percentage used eviction threshold. Once
    # usage exceeds this value, evictions from this tier will be
    # scheduled in the background and continue until the
    # 'low_watermark' percentage usage is reached. Default is "90",
    # signifying a 90% usage threshold.
    highWatermark: 90
    # 'limit' : The maximum (bytes) per rank that can be allocated
    # across all resource groups.
    limit: "1Gi"
    # 'low_watermark' : Percentage used recovery threshold. Once
    # usage exceeds the 'high_watermark', evictions will continue
    # until usage falls below this recovery threshold. Default
    # is "80", signifying an 80% usage threshold.
    lowWatermark: 80
    name: string
    # A base directory to use as a space for this tier.
    path: "default"
    provisioner: "docker.io/hostpath"
    # Kubernetes Persistent Volume Claim for this disk tier. The
    # volumeClaim schema is identical to the PersistentVolumeClaim
    # schema documented earlier in this listing and is omitted here
    # for brevity.
    volumeClaim: {}
    # 'wait_timeout' : Timeout in seconds for reading from or writing
    # to this storage provider.
    waitTimeout: "90"
  # 'hdfs_kerberos_keytab' : The Kerberos keytab file used to
  # authenticate the "gpudb" Kerberos principal.
  kerberosKeytab: string
  # 'hdfs_principal' : The effective principal name to use when
  # connecting to the hadoop cluster.
  principal: string
  # 'hdfs_uri' : The host IP address & port for the hadoop
  # distributed file system. For example: hdfs://localhost:8020
  uri: string
  # 'hdfs_use_kerberos' : Set to "true" to enable Kerberos
  # authentication to an HDFS storage server. The credentials of the
  # principal are in the file specified by the 'hdfs_kerberos_keytab'
  # parameter. Note that Kerberos's *kinit* command will be run when
  # the database is started.
  useKerberos: true
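# --- Illustrative example (not part of the schema) ---
# A Kerberos-secured HDFS sketch; the URI, principal, and keytab path
# are hypothetical. With useKerberos set to true, kinit runs at
# database start using the keytab named in kerberosKeytab, per the
# field notes above.
# coldStorageHDFS:
#   uri: hdfs://namenode.example.com:8020
#   useKerberos: true
#   principal: gpudb@EXAMPLE.COM
#   kerberosKeytab: /opt/gpudb/kerberos/gpudb.keytab
# --- End example ---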
# ColdStorageS3
coldStorageS3:
  awsAccessKeyId: string
  awsRoleARN: string
  awsSecretAccessKey: string
  # 'base_path' : A base path based on the provider type for this
  # tier.
  basePath: string
  bucketName: string
  # 'connection_timeout' : Timeout in seconds for connecting to this
  # storage provider.
  connectionTimeout: "30"
  encryptionCustomerAlgorithm: string
  encryptionCustomerKey: string
  # EncryptionType - This is optional and valid values are sse-s3
  # (Encryption key is managed by Amazon S3) and sse-kms (Encryption
  # key is managed by AWS Key Management Service (kms)).
  encryptionType: string
  # Endpoint - s3_endpoint
  endpoint: string
  # 'high_watermark' : Percentage used eviction threshold. Once usage
  # exceeds this value, evictions from this tier will be scheduled in
  # the background and continue until the 'low_watermark' percentage
  # usage is reached. Default is "90", signifying a 90% usage
  # threshold.
  highWatermark: 90
  # KMSKeyID - This is optional and must be specified when the
  # encryption type is sse-kms.
  kmsKeyID: string
  # 'limit' : The maximum (bytes) per rank that can be allocated
  # across all resource groups.
  limit: "1Gi"
  # 'low_watermark' : Percentage used recovery threshold. Once usage
  # exceeds the 'high_watermark', evictions will continue until usage
  # falls below this recovery threshold. Default is "80", signifying
  # an 80% usage threshold.
  lowWatermark: 80
  name: string
  # A base directory to use as a space for this tier.
  path: "default"
  provisioner: "docker.io/hostpath"
  region: string
  useManagedCredentials: true
  # UseVirtualAddressing - 's3_use_virtual_addressing' : If true
  # (default), S3 endpoints will be constructed using the 'virtual'
  # style which includes the bucket name as part of the hostname. Set
  # to false to use the 'path' style which treats the bucket name as
  # if it is a path in the URI.
  useVirtualAddressing: true
  # Kubernetes Persistent Volume Claim for this disk tier. The
  # volumeClaim schema is identical to the PersistentVolumeClaim
  # schema documented earlier in this listing and is omitted here for
  # brevity.
  volumeClaim: {}
  # 'wait_timeout' : Timeout in seconds for reading from or writing
  # to this storage provider.
  waitTimeout: "90"
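# --- Illustrative example (not part of the schema) ---
# An S3 sketch using SSE-KMS server-side encryption; the bucket,
# region, and key ARN are hypothetical. kmsKeyID is only meaningful
# when encryptionType is sse-kms, and useManagedCredentials stands in
# for static access keys (e.g. via an IAM role).
# coldStorageS3:
#   bucketName: kinetica-cold-storage
#   region: us-east-1
#   encryptionType: sse-kms
#   kmsKeyID: arn:aws:kms:us-east-1:111122223333:key/example-key-id
#   useManagedCredentials: true
#   useVirtualAddressing: true
# --- End example ---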
# ColdStorageType - The storage provider type. Currently supports
# "none", "disk" (local/network storage), "hdfs" (Hadoop distributed
# filesystem), "s3" (Amazon S3 bucket), "azure_blob" (Microsoft Azure
# Blob Storage) and "gcs" (Google GCS Bucket).
coldStorageType: "none"
name: string
# The DiskCacheTier is used as temporary swap space for data that
# doesn't fit in RAM or VRAM. The disk should be as fast as or faster
# than the Persist Tier storage, since this tier is used as an
# intermediary cache between the RAM and Persist tiers.
diskCacheTier:
  # DiskTierStorageLimit
  default:
    # 'high_watermark' : Percentage used eviction threshold. Once
    # usage exceeds this value, evictions from this tier will be
    # scheduled in the background and continue until the
    # 'low_watermark' percentage usage is reached. Default is "90",
    # signifying a 90% usage threshold.
    highWatermark: 90
    # 'limit' : The maximum (bytes) per rank that can be allocated
    # across all resource groups.
    limit: "1Gi"
    # 'low_watermark' : Percentage used recovery threshold. Once
    # usage exceeds the 'high_watermark', evictions will continue
    # until usage falls below this recovery threshold. Default
    # is "80", signifying an 80% usage threshold.
    lowWatermark: 80
    name: string
    # A base directory to use as a space for this tier.
    path: "default"
    provisioner: "docker.io/hostpath"
    # Kubernetes Persistent Volume Claim for this disk tier. The
    # volumeClaim schema is identical to the PersistentVolumeClaim
    # schema documented earlier in this listing and is omitted here
    # for brevity.
    volumeClaim: {}
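# --- Illustrative example (not part of the schema) ---
# A disk-cache tier sketch showing the eviction thresholds described
# above: once usage of the (hypothetical) 10Gi limit passes 85%,
# background eviction runs until usage falls back below 70%.
# diskCacheTier:
#   default:
#     limit: "10Gi"
#     highWatermark: 85
#     lowWatermark: 70
#     path: /opt/gpudb/cache
# --- End example ---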
  defaultStorePersistentObjects: true
  ranks:
  - highWatermark: 90
    # 'limit' : The maximum (bytes) per rank that can be allocated
    # across all resource groups.
    limit: "1Gi"
    # 'low_watermark' : Percentage used recovery threshold. Once
    # usage exceeds the 'high_watermark', evictions will continue
    # until usage falls below this recovery threshold. Default
    # is "80", signifying an 80% usage threshold.
    lowWatermark: 80
    name: string
    # A base directory to use as a space for this tier.
    path: "default"
    provisioner: "docker.io/hostpath"
    # Kubernetes Persistent Volume Claim for this disk tier. The
    # volumeClaim schema is identical to the PersistentVolumeClaim
    # schema documented earlier in this listing and is omitted here
    # for brevity.
    volumeClaim: {}
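# --- Illustrative example (not part of the schema) ---
# A per-rank override sketch: the default block applies to every
# rank, while a ranks entry (the name "rank1" is hypothetical)
# overrides the limit and watermarks for a single rank.
# diskCacheTier:
#   default:
#     limit: "10Gi"
#   ranks:
#   - name: rank1
#     limit: "40Gi"
#     highWatermark: 90
#     lowWatermark: 80
# --- End example ---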
# GlobalTier Parameters
globalTier:
  # Co-locates all disks to a single disk, i.e. persist, cache, and
  # UDF will be on a single PVC.
  colocateDisks: true
  # Timeout in seconds for subsequent requests to wait on a locked
  # resource.
  concurrentWaitTimeout: 120
  # EncryptDataAtRest - Enable disk encryption of data at rest.
  encryptDataAtRest: true
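# --- Illustrative example (not part of the schema) ---
# A globalTier sketch restating the documented fields: co-locate the
# persist, cache, and UDF disks on one PVC, wait up to 120 seconds on
# locked resources, and encrypt data at rest.
# globalTier:
#   colocateDisks: true
#   concurrentWaitTimeout: 120
#   encryptDataAtRest: true
# --- End example ---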
The disk should be as fast or\n # faster than the Persist Tier storage since this tier is used\n # as an intermediary cache between the RAM and Persist Tiers.\n persistTier:\n # DiskTierStorageLimit\n default:\n # * 'high_watermark' : Percentage used eviction threshold.\n # Once usage exceeds this value, evictions from this\n # tier will be scheduled in the background and continue\n # until the 'low_watermark' percentage usage is reached.\n # Default is \"90\", signifying a 90% memory usage\n # threshold.\n highWatermark: 90\n # * 'limit' : The maximum (bytes) per rank that can\n # be allocated across all resource groups.\n limit: \"1Gi\"\n # * 'low_watermark' : Percentage used recovery threshold.\n # Once usage exceeds the 'high_watermark', evictions\n # will continue until usage falls below this recovery\n # threshold. Default is \"80\", signifying an 80% usage\n # threshold.\n lowWatermark: 80\n name: string\n # A base directory to use as a space for this tier.\n path: \"default\"\n provisioner: \"docker.io/hostpath\"\n # Kubernetes Persistent Volume Claim for this disk tier.\n volumeClaim:\n # APIVersion defines the versioned schema of this\n # representation of an object. Servers should convert\n # recognized schemas to the latest internal value, and\n # may reject unrecognized values. More info:\n # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n apiVersion: app.kinetica.com/v1\n # Kind is a string value representing the REST resource\n # this object represents. Servers may infer this from the\n # endpoint the client submits requests to. Cannot be\n # updated. In CamelCase. More info:\n # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n kind: KineticaCluster\n # Standard object's metadata. More info:\n # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n metadata: {}\n # spec defines the desired characteristics of a volume\n # requested by a pod author. More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n spec:\n # accessModes contains the desired access modes the\n # volume should have. More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n accessModes: [\"string\"]\n # dataSource field can be used to specify either: * An\n # existing VolumeSnapshot object\n # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n # existing PVC (PersistentVolumeClaim) If the\n # provisioner or an external controller can support the\n # specified data source, it will create a new volume\n # based on the contents of the specified data source.\n # When the AnyVolumeDataSource feature gate is enabled,\n # dataSource contents will be copied to dataSourceRef,\n # and dataSourceRef contents will be copied to\n # dataSource when dataSourceRef.namespace is not\n # specified. If the namespace is specified, then\n # dataSourceRef will not be copied to dataSource.\n dataSource:\n # APIGroup is the group for the resource being\n # referenced. If APIGroup is not specified, the\n # specified Kind must be in the core API group. For\n # any other third-party types, APIGroup is required.\n apiGroup: string\n # Kind is the type of resource being referenced\n kind: KineticaCluster\n # Name is the name of resource being referenced\n name: string\n # dataSourceRef specifies the object from which to\n # populate the volume with data, if a non-empty volume\n # is desired.
This may be any object from a non-empty\n # API group (non core object) or a\n # PersistentVolumeClaim object. When this field is\n # specified, volume binding will only succeed if the\n # type of the specified object matches some installed\n # volume populator or dynamic provisioner. This field\n # will replace the functionality of the dataSource\n # field and as such if both fields are non-empty, they\n # must have the same value. For backwards\n # compatibility, when namespace isn't specified in\n # dataSourceRef, both fields (dataSource and\n # dataSourceRef) will be set to the same value\n # automatically if one of them is empty and the other\n # is non-empty. When namespace is specified in\n # dataSourceRef, dataSource isn't set to the same value\n # and must be empty. There are three important\n # differences between dataSource and dataSourceRef: *\n # While dataSource only allows two specific types of\n # objects, dataSourceRef allows any non-core object, as\n # well as PersistentVolumeClaim objects. * While\n # dataSource ignores disallowed values (dropping them),\n # dataSourceRef preserves all values, and generates an\n # error if a disallowed value is specified. * While\n # dataSource only allows local objects, dataSourceRef\n # allows objects in any namespaces. (Beta) Using this\n # field requires the AnyVolumeDataSource feature gate\n # to be enabled. (Alpha) Using the namespace field of\n # dataSourceRef requires the\n # CrossNamespaceVolumeDataSource feature gate to be\n # enabled.\n dataSourceRef:\n # APIGroup is the group for the resource being\n # referenced. If APIGroup is not specified, the\n # specified Kind must be in the core API group. For\n # any other third-party types, APIGroup is required.\n apiGroup: string\n # Kind is the type of resource being referenced\n kind: KineticaCluster\n # Name is the name of resource being referenced\n name: string\n # Namespace is the namespace of resource being\n # referenced Note that when a namespace is specified,\n # a gateway.networking.k8s.io/ReferenceGrant object\n # is required in the referent namespace to allow that\n # namespace's owner to accept the reference. See the\n # ReferenceGrant documentation for details.\n # (Alpha) This field requires the\n # CrossNamespaceVolumeDataSource feature gate to be\n # enabled.\n namespace: string\n # resources represents the minimum resources the volume\n # should have. If RecoverVolumeExpansionFailure feature\n # is enabled users are allowed to specify resource\n # requirements that are lower than previous value but\n # must still be higher than capacity recorded in the\n # status field of the claim. More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n resources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this\n # container. This is an alpha field and requires\n # enabling the DynamicResourceAllocation feature\n # gate. This field is immutable. It can only be set\n # for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute\n # resources allowed. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute\n # resources required. If Requests is omitted for a\n # container, it defaults to Limits if that is\n # explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot\n # exceed Limits. 
More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n # selector is a label query over volumes to consider for\n # binding.\n selector:\n # matchExpressions is a list of label selector\n # requirements. The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set\n # of values. Valid operators are In, NotIn, Exists\n # and DoesNotExist.\n operator: string\n # values is an array of string values. If the\n # operator is In or NotIn, the values array must be\n # non-empty. If the operator is Exists or\n # DoesNotExist, the values array must be empty.\n # This array is replaced during a strategic merge\n # patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to\n # an element of matchExpressions, whose key field\n # is \"key\", the operator is \"In\", and the values\n # array contains only \"value\". The requirements are\n # ANDed.\n matchLabels: {}\n # storageClassName is the name of the StorageClass\n # required by the claim. More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n storageClassName: string\n # volumeMode defines what type of volume is required by\n # the claim. Value of Filesystem is implied when not\n # included in claim spec.\n volumeMode: string\n # volumeName is the binding reference to the\n # PersistentVolume backing this claim.\n volumeName: string\n # status represents the current information/status of a\n # persistent volume claim. Read-only. More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n status:\n # accessModes contains the actual access modes the\n # volume backing the PVC has. More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n accessModes: [\"string\"]\n # allocatedResources is the storage resource within\n # AllocatedResources tracks the capacity allocated to a\n # PVC. It may be larger than the actual capacity when a\n # volume expansion operation is requested. For storage\n # quota, the larger value from allocatedResources and\n # PVC.spec.resources is used. If allocatedResources is\n # not set, PVC.spec.resources alone is used for quota\n # calculation. If a volume expansion capacity request\n # is lowered, allocatedResources is only lowered if\n # there are no expansion operations in progress and if\n # the actual volume capacity is equal or lower than the\n # requested capacity. This is an alpha field and\n # requires enabling RecoverVolumeExpansionFailure\n # feature.\n allocatedResources: {}\n # capacity represents the actual resources of the\n # underlying volume.\n capacity: {}\n # conditions is the current Condition of persistent\n # volume claim. If underlying persistent volume is\n # being resized then the Condition will be set\n # to 'ResizeStarted'.\n conditions:\n - lastProbeTime: string\n # lastTransitionTime is the time the condition\n # transitioned from one status to another.\n lastTransitionTime: string\n # message is the human-readable message indicating\n # details about last transition.\n message: string\n # reason is a unique, this should be a short, machine\n # understandable string that gives the reason for\n # condition's last transition. 
If it\n # reports \"ResizeStarted\" that means the underlying\n # persistent volume is being resized.\n reason: string\n status: string\n # PersistentVolumeClaimConditionType is a valid value\n # of PersistentVolumeClaimCondition.Type\n type: string\n # phase represents the current phase of\n # PersistentVolumeClaim.\n phase: string\n # resizeStatus stores status of resize operation.\n # ResizeStatus is not set by default but when expansion\n # is complete resizeStatus is set to empty string by\n # resize controller or kubelet. This is an alpha field\n # and requires enabling RecoverVolumeExpansionFailure\n # feature.\n resizeStatus: string\n defaultStorePersistentObjects: true\n ranks:\n - highWatermark: 90\n # * 'limit' : The maximum (bytes) per rank that can\n # be allocated across all resource groups.\n limit: \"1Gi\"\n # * 'low_watermark' : Percentage used recovery threshold.\n # Once usage exceeds the 'high_watermark', evictions\n # will continue until usage falls below this recovery\n # threshold. Default is \"80\", signifying an 80% usage\n # threshold.\n lowWatermark: 80\n name: string\n # A base directory to use as a space for this tier.\n path: \"default\"\n provisioner: \"docker.io/hostpath\"\n # Kubernetes Persistent Volume Claim for this disk tier.\n volumeClaim:\n # APIVersion defines the versioned schema of this\n # representation of an object. Servers should convert\n # recognized schemas to the latest internal value, and\n # may reject unrecognized values. More info:\n # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n apiVersion: app.kinetica.com/v1\n # Kind is a string value representing the REST resource\n # this object represents. Servers may infer this from the\n # endpoint the client submits requests to. Cannot be\n # updated. In CamelCase. More info:\n # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n kind: KineticaCluster\n # Standard object's metadata. More info:\n # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n metadata: {}\n # spec defines the desired characteristics of a volume\n # requested by a pod author. More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n spec:\n # accessModes contains the desired access modes the\n # volume should have. More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n accessModes: [\"string\"]\n # dataSource field can be used to specify either: * An\n # existing VolumeSnapshot object\n # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n # existing PVC (PersistentVolumeClaim) If the\n # provisioner or an external controller can support the\n # specified data source, it will create a new volume\n # based on the contents of the specified data source.\n # When the AnyVolumeDataSource feature gate is enabled,\n # dataSource contents will be copied to dataSourceRef,\n # and dataSourceRef contents will be copied to\n # dataSource when dataSourceRef.namespace is not\n # specified. If the namespace is specified, then\n # dataSourceRef will not be copied to dataSource.\n dataSource:\n # APIGroup is the group for the resource being\n # referenced. If APIGroup is not specified, the\n # specified Kind must be in the core API group.
For\n # any other third-party types, APIGroup is required.\n apiGroup: string\n # Kind is the type of resource being referenced\n kind: KineticaCluster\n # Name is the name of resource being referenced\n name: string\n # dataSourceRef specifies the object from which to\n # populate the volume with data, if a non-empty volume\n # is desired. This may be any object from a non-empty\n # API group (non core object) or a\n # PersistentVolumeClaim object. When this field is\n # specified, volume binding will only succeed if the\n # type of the specified object matches some installed\n # volume populator or dynamic provisioner. This field\n # will replace the functionality of the dataSource\n # field and as such if both fields are non-empty, they\n # must have the same value. For backwards\n # compatibility, when namespace isn't specified in\n # dataSourceRef, both fields (dataSource and\n # dataSourceRef) will be set to the same value\n # automatically if one of them is empty and the other\n # is non-empty. When namespace is specified in\n # dataSourceRef, dataSource isn't set to the same value\n # and must be empty. There are three important\n # differences between dataSource and dataSourceRef: *\n # While dataSource only allows two specific types of\n # objects, dataSourceRef allows any non-core object, as\n # well as PersistentVolumeClaim objects. * While\n # dataSource ignores disallowed values (dropping them),\n # dataSourceRef preserves all values, and generates an\n # error if a disallowed value is specified. * While\n # dataSource only allows local objects, dataSourceRef\n # allows objects in any namespaces. (Beta) Using this\n # field requires the AnyVolumeDataSource feature gate\n # to be enabled. (Alpha) Using the namespace field of\n # dataSourceRef requires the\n # CrossNamespaceVolumeDataSource feature gate to be\n # enabled.\n dataSourceRef:\n # APIGroup is the group for the resource being\n # referenced. If APIGroup is not specified, the\n # specified Kind must be in the core API group. For\n # any other third-party types, APIGroup is required.\n apiGroup: string\n # Kind is the type of resource being referenced\n kind: KineticaCluster\n # Name is the name of resource being referenced\n name: string\n # Namespace is the namespace of resource being\n # referenced Note that when a namespace is specified,\n # a gateway.networking.k8s.io/ReferenceGrant object\n # is required in the referent namespace to allow that\n # namespace's owner to accept the reference. See the\n # ReferenceGrant documentation for details.\n # (Alpha) This field requires the\n # CrossNamespaceVolumeDataSource feature gate to be\n # enabled.\n namespace: string\n # resources represents the minimum resources the volume\n # should have. If RecoverVolumeExpansionFailure feature\n # is enabled users are allowed to specify resource\n # requirements that are lower than previous value but\n # must still be higher than capacity recorded in the\n # status field of the claim. More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n resources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this\n # container. This is an alpha field and requires\n # enabling the DynamicResourceAllocation feature\n # gate. This field is immutable. It can only be set\n # for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute\n # resources allowed. 
More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute\n # resources required. If Requests is omitted for a\n # container, it defaults to Limits if that is\n # explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot\n # exceed Limits. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n # selector is a label query over volumes to consider for\n # binding.\n selector:\n # matchExpressions is a list of label selector\n # requirements. The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set\n # of values. Valid operators are In, NotIn, Exists\n # and DoesNotExist.\n operator: string\n # values is an array of string values. If the\n # operator is In or NotIn, the values array must be\n # non-empty. If the operator is Exists or\n # DoesNotExist, the values array must be empty.\n # This array is replaced during a strategic merge\n # patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to\n # an element of matchExpressions, whose key field\n # is \"key\", the operator is \"In\", and the values\n # array contains only \"value\". The requirements are\n # ANDed.\n matchLabels: {}\n # storageClassName is the name of the StorageClass\n # required by the claim. More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n storageClassName: string\n # volumeMode defines what type of volume is required by\n # the claim. Value of Filesystem is implied when not\n # included in claim spec.\n volumeMode: string\n # volumeName is the binding reference to the\n # PersistentVolume backing this claim.\n volumeName: string\n # status represents the current information/status of a\n # persistent volume claim. Read-only. More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n status:\n # accessModes contains the actual access modes the\n # volume backing the PVC has. More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n accessModes: [\"string\"]\n # allocatedResources is the storage resource within\n # AllocatedResources tracks the capacity allocated to a\n # PVC. It may be larger than the actual capacity when a\n # volume expansion operation is requested. For storage\n # quota, the larger value from allocatedResources and\n # PVC.spec.resources is used. If allocatedResources is\n # not set, PVC.spec.resources alone is used for quota\n # calculation. If a volume expansion capacity request\n # is lowered, allocatedResources is only lowered if\n # there are no expansion operations in progress and if\n # the actual volume capacity is equal or lower than the\n # requested capacity. This is an alpha field and\n # requires enabling RecoverVolumeExpansionFailure\n # feature.\n allocatedResources: {}\n # capacity represents the actual resources of the\n # underlying volume.\n capacity: {}\n # conditions is the current Condition of persistent\n # volume claim. 
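# --- Illustrative example (not generated from the CRD; the storage\n # class name and size are assumptions): a volumeClaim spec requesting\n # 100Gi of a fast storage class, using the spec fields documented\n # above.\n # volumeClaim:\n #   spec:\n #     accessModes: [\"ReadWriteOnce\"]\n #     storageClassName: \"fast-ssd\"\n #     resources:\n #       requests:\n #         storage: \"100Gi\"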
If underlying persistent volume is\n # being resized then the Condition will be set\n # to 'ResizeStarted'.\n conditions:\n - lastProbeTime: string\n # lastTransitionTime is the time the condition\n # transitioned from one status to another.\n lastTransitionTime: string\n # message is the human-readable message indicating\n # details about last transition.\n message: string\n # reason is a unique, this should be a short, machine\n # understandable string that gives the reason for\n # condition's last transition. If it\n # reports \"ResizeStarted\" that means the underlying\n # persistent volume is being resized.\n reason: string\n status: string\n # PersistentVolumeClaimConditionType is a valid value\n # of PersistentVolumeClaimCondition.Type\n type: string\n # phase represents the current phase of\n # PersistentVolumeClaim.\n phase: string\n # resizeStatus stores status of resize operation.\n # ResizeStatus is not set by default but when expansion\n # is complete resizeStatus is set to empty string by\n # resize controller or kubelet. This is an alpha field\n # and requires enabling RecoverVolumeExpansionFailure\n # feature.\n resizeStatus: string\n # The RAMTier represents the RAM available for data storage per\n # rank. The RAM Tier is NOT used for small, non-data objects or\n # variables that are allocated and deallocated for program flow\n # control or used to store metadata or other similar\n # information; these continue to use either the stack or the\n # regular runtime memory allocator. This tier should be sized\n # on each machine such that there is sufficient RAM left over\n # to handle this overhead, as well as the needs of other\n # processes running on the same machine.\n ramTier:\n # The RAM Tier represents the RAM available for data storage\n # per rank. The RAM Tier is NOT used for small, non-data\n # objects or variables that are allocated and deallocated for\n # program flow control or used to store metadata or other\n # similar information; these continue to use either the stack\n # or the regular runtime memory allocator. This tier should\n # be sized on each machine such that there is sufficient RAM\n # left over to handle this overhead, as well as the needs of\n # other processes running on the same machine. A default\n # memory limit and eviction thresholds can be set across all\n # ranks, while one or more ranks may be configured to\n # override those defaults. The general format for RAM\n # settings: \n # # tier.ram.[default|rank<#>].<parameter> Valid *parameter*\n # names include: \n # * 'limit' : The maximum RAM (bytes) per rank that\n # can be allocated across all resource groups. Default\n # is -1, signifying no limit and ignore watermark\n # settings. * 'high_watermark' : RAM percentage used\n # eviction threshold. Once memory usage exceeds this\n # value, evictions from this tier will be scheduled in\n # the background and continue until the 'low_watermark'\n # percentage usage is reached. Default is \"90\",\n # signifying a 90% memory usage\n # threshold. * 'low_watermark' : RAM percentage used\n # recovery threshold.
Once memory usage exceeds\n # the 'high_watermark', evictions will continue until\n # memory usage falls below this recovery threshold.\n # Default is \"50\", signifying a 50% memory usage\n # threshold.\n default:\n # * 'high_watermark' : Percentage used eviction threshold.\n # Once usage exceeds this value, evictions from this\n # tier will be scheduled in the background and continue\n # until the 'low_watermark' percentage usage is reached.\n # Default is \"90\", signifying a 90% memory usage\n # threshold.\n highWatermark: 90\n # * 'limit' : The maximum (bytes) per rank that can\n # be allocated across all resource groups.\n limit: \"1Gi\"\n # * 'low_watermark' : Percentage used recovery threshold.\n # Once usage exceeds the 'high_watermark', evictions\n # will continue until usage falls below this recovery\n # threshold. Default is \"80\", signifying an 80% usage\n # threshold.\n lowWatermark: 80\n name: string\n # The maximum RAM (bytes) for processing data at rank 0.\n # Overrides the overall default RAM tier\n # limit. #tier.ram.rank0.limit = -1\n ranks:\n - highWatermark: 90\n # * 'limit' : The maximum (bytes) per rank that can\n # be allocated across all resource groups.\n limit: \"1Gi\"\n # * 'low_watermark' : Percentage used recovery threshold.\n # Once usage exceeds the 'high_watermark', evictions\n # will continue until usage falls below this recovery\n # threshold. Default is \"80\", signifying an 80% usage\n # threshold.\n lowWatermark: 80\n name: string\n tieredStrategy:\n # Default strategy to apply to tables or columns when one was\n # not provided during table creation. This strategy is also\n # applied to a resource group that does not specify one at time\n # of creation. The strategy is formed by chaining together the\n # tier types and their respective eviction priorities. Any\n # given tier may appear no more than once in the chain and the\n # priority must be in range \"1\" - \"10\", where \"1\" is the lowest\n # priority (first to be evicted) and \"9\" is the highest\n # priority (last to be evicted). A priority of \"10\" indicates\n # that an object is unevictable. Each tier's priority is in\n # relation to the priority of other objects in the same tier;\n # e.g., \"RAM 9, DISK2 1\" indicates that an object will be the\n # highest evictable priority among objects in the RAM Tier\n # (last evicted), but that it will be the lowest priority among\n # objects in the Disk Tier named 'disk2' (first evicted). Note\n # that since an object can only have one Disk Tier instance in\n # its strategy, the corresponding priority will only apply in\n # relation to other objects in Disk Tier instance 'disk2'. See\n # the Tiered Storage section for more information about tier\n # type names. Format: <tier1> <priority>, <tier2> <priority>,\n # <tier3> <priority>, ... Examples using a Disk Tier\n # named 'disk2' and a Cold Storage Tier 'cold0': vram 3, ram 5,\n # disk2 3, persist 10 vram 3, ram 5, disk2 3, persist 6, cold0\n # 10 tier_strategy.default = VRAM 1, RAM 5, PERSIST 5\n default: \"VRAM 1, RAM 5, PERSIST 5\"\n # Predicate evaluation interval (in minutes) - indicates the\n # interval at which the tier strategy predicates are evaluated\n predicateEvaluationInterval: 60\n video:\n # System default TTL for videos. Time-to-live (TTL) is the\n # number of minutes before a video will expire and be removed,\n # or -1 to disable. video_default_ttl = -1\n defaultTTL: \"-1\"\n # The maximum number of videos to allow on the system. Set to 0\n # to disable video rendering.
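# --- Illustrative example (not generated from the CRD; sizes and\n # priorities are assumptions): a RAM tier capped at 100Gi per rank,\n # evicting from the 90% watermark down to the 50% watermark, with a\n # default strategy that keeps persisted objects unevictable\n # (priority 10).\n # ramTier:\n #   default:\n #     limit: \"100Gi\"\n #     highWatermark: 90\n #     lowWatermark: 50\n # tieredStrategy:\n #   default: \"VRAM 1, RAM 5, PERSIST 10\"\n #   predicateEvaluationInterval: 60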
Set to -1 to allow an unlimited\n # number of videos. video_max_count = -1\n maxCount: \"-1\"\n # Directory where video files should be temporarily stored while\n # rendering. Only accessed by rank 0. video_temp_directory = $\n # {gaia.temp_directory}/gpudb-temp-videos\n tmpDir: \"${gaia.temp_directory}/gpudb-temp-videos\"\n # VisualizationConfig\n visualization:\n # Enable level-of-details rendering for fast interaction with\n # large WKT polygon data. Only available for the OpenGL\n # renderer (when 'enable_opengl_renderer' is \"true\").\n enableLODRendering: true\n # If \"true\", enable hardware-accelerated OpenGL renderer;\n # if \"false\", use the software-based Cairo renderer.\n enableOpenGLRenderer: true\n # If \"true\", enable Vector Tile Service (VTS) to support\n # client-side visualization of geospatial data. Enabling this\n # option increases memory usage on ingestion.\n enableVectorTileService: false\n # Longitude and latitude ranges of geospatial data for which\n # level-of-details representations are being generated. The\n # parameter order is: <min_longitude> <min_latitude>\n # <max_longitude> <max_latitude> The default values span over\n # the world, but the level-of-details rendering becomes more\n # efficient when the precise extent of geospatial data is\n # specified. kubebuilder:default:={ -180, -90, 180, 90 }\n lodDataExtent: [integer]\n # The extent to which shape data are pre-processed for\n # level-of-details rendering during data insert/load or\n # processed on-the-fly in rendering time. This is a trade-off\n # between speed and memory. The higher the value, the faster\n # level-of-details rendering is, but the more memory is used\n # for storing processed shape data. The maximum level is \"10\"\n # (most shape data are pre-processed) and the minimum level\n # is \"0\".\n lodPreProcessingLevel: 5\n # The number of subregions in horizontal and vertical geospatial\n # data extent. The default values of \"12 6\" divide the world\n # into subregions of 30 degree (lon.) x 30 degree (lat.)\n lodSubRegionNum: [12,6]\n # A base image resolution (width and height in pixels) at which\n # a subregion would be rendered in a global view spanning over\n # the whole dataset. Based on this resolution level-of-details\n # representations are generated for the polygons located in the\n # subregion.\n lodSubRegionResolution: [512,512]\n # Maximum heatmap size (in pixels) that can be generated. This\n # reserves 'max_heatmap_size' ^ 2 * 8 bytes of GPU memory\n # at **rank0**\n maxHeatmapSize: 3072\n # The maximum number of levels in the level-of-details\n # rendering. As the number increases, level-of-details\n # rendering becomes effective at higher zoom levels, but it may\n # increase memory usage for storing level-of-details\n # representations.\n maxLODLevel: 8\n # Input geometries are pre-processed upon ingestion for faster\n # vector tile generation. This parameter determines the\n # zoomlevel at which the vector tile pre-processing stops. A\n # vector tile request for a higher zoomlevel than this\n # parameter takes additional time because the vector tile needs\n # to be generated on the fly.\n maxVectorTileZoomLevel: 8\n # Input geometries are pre-processed upon ingestion for faster\n # vector tile generation. This parameter determines the\n # zoomlevel from which the vector tile pre-processing starts. 
A\n # vector tile request for a lower zoomlevel than this parameter\n # takes additional time because the vector tile needs to be\n # generated on the fly.\n minVectorTileZoomLevel: 1\n # The number of samples to use for antialiasing. Higher numbers\n # will improve image quality but require more GPU memory to\n # store the samples on worker ranks. This affects only the\n # OpenGL renderer. Value may be \"0\", \"4\", \"8\" or \"16\". When \"0\"\n # antialiasing is disabled. The default value is \"0\".\n openGLAntialiasingLevel: 1\n # Threshold number of points (per-TOM) at which point rendering\n # switches to fast mode.\n pointRenderThreshold: 100000\n # Single-precision coordinates are used for usual rendering\n # processes, but depending on the precision of geometry data\n # and use case, double precision processing may be required at\n # a high zoomlevel. Double precision rendering processes are\n # used from the zoomlevel specified by this parameter, which is\n # corresponding to a zoomlevel of TMS or Google map service.\n renderingPrecisionThreshold: 30\n # The image width/height (in pixels) of svg symbols cached in\n # the OpenGL symbol cache.\n symbolResolution: 100\n # The width/height (in pixels) of an OpenGL texture which caches\n # symbol images for OpenGL rendering.\n symbolTextureSize: 4000\n # Threshold for the number of points (per-TOM) after which\n # symbology rendering falls back to regular rendering\n symbologyRenderThreshold: 10000\n # The name of map tiler used for Vector Tile Service. \"google\"\n # and \"tms\" map tilers are supported currently. This parameter\n # should be matched with the map tiler of clients' vector tile\n # renderer.\n vectorTileMapTiler: \"google\"\n workbench:\n # Start the Workbench app on the head host when host manager is\n # started. enable_workbench = false\n enable: false\n # # HTTP server port for Workbench if enabled. workbench_port =\n # 8000\n port:\n # Number of port to expose on the pod's IP address. This must\n # be a valid port number, 0 < x < 65536.\n containerPort: 1\n # What host IP to bind the external port to.\n hostIP: string\n # Number of port to expose on the host. If specified, this\n # must be a valid port number, 0 < x < 65536. If HostNetwork\n # is specified, this must match ContainerPort. Most\n # containers do not need this.\n hostPort: 1\n # If specified, this must be an IANA_SVC_NAME and unique\n # within the pod. Each named port in a pod must have a unique\n # name. Name for the port that can be referred to by\n # services.\n name: string\n # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n # to \"TCP\".\n protocol: \"TCP\"\n # The fully qualified URL used on the Ingress records for any\n # exposed services. Completed by the Operator. DO NOT POPULATE\n # MANUALLY.\n fqdn: \"\"\n # The name of the parent HA Ring this cluster belongs to.\n haRingName: \"default\"\n # Whether to enable the separate node 'pools' for \"infra\", \"compute\"\n # pod scheduling. Default: false\n hasPools: true\n # The port the HostManager will be running in each pod in the\n # cluster. Default: 9300, TCP\n hostManagerPort:\n # Number of port to expose on the pod's IP address. This must be a\n # valid port number, 0 < x < 65536.\n containerPort: 1\n # What host IP to bind the external port to.\n hostIP: string\n # Number of port to expose on the host. If specified, this must be\n # a valid port number, 0 < x < 65536. If HostNetwork is\n # specified, this must match ContainerPort.
Most containers do\n # not need this.\n hostPort: 1\n # If specified, this must be an IANA_SVC_NAME and unique within\n # the pod. Each named port in a pod must have a unique name. Name\n # for the port that can be referred to by services.\n name: string\n # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n # to \"TCP\".\n protocol: \"TCP\"\n # Set the name of the container image to use.\n image: \"kinetica/kinetica-k8s-intel:v7.1.6.0\"\n # Set the policy for pulling container images.\n imagePullPolicy: \"IfNotPresent\"\n # ImagePullSecrets is an optional list of references to secrets in\n # the same gpudb-namespace to use for pulling any of the images\n # used by this PodSpec. If specified, these secrets will be passed\n # to individual puller implementations for them to use. For\n # example, in the case of docker, only DockerConfig type secrets\n # are honored.\n imagePullSecrets:\n - name: string\n # Labels - Pod labels to be applied to the Statefulset DB pods.\n labels: {}\n # The Ingress Endpoint that GAdmin will be running on.\n letsEncrypt:\n # Enable LetsEncrypt for Certificate generation.\n enabled: false\n # LetsEncryptEnvironment\n environment: \"staging\"\n # Set the Kinetica DB License.\n license: string\n # Periodic probe of container liveness. Container will be restarted\n # if the probe fails. Cannot be updated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n livenessProbe:\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. Minimum value is\n # 1.\n failureThreshold: 3\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 10\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 10\n # LoggerConfig Kinetica DB Logger Configuration Object Configure the\n # LOG4CPLUS logger for the DB. Field takes a string containing the\n # full configuration. If not specified a template file is used\n # during DB configuration generation.\n loggerConfig:\n configString: string\n # Metrics - DB Metrics scrape & forward configuration for\n # `fluent-bit`.\n metricsRegistryRepositoryTag:\n # Set the policy for pulling container images.\n imagePullPolicy: \"IfNotPresent\"\n # ImagePullSecrets is an optional list of references to secrets in\n # the same gpudb-namespace to use for pulling any of the images\n # used by this PodSpec. If specified, these secrets will be\n # passed to individual puller implementations for them to use.\n # For example, in the case of docker, only DockerConfig type\n # secrets are honored.\n imagePullSecrets:\n - name: string\n # The image registry & optional port containing the repository.\n registry: \"docker.io\"\n # The image repository path.\n repository: \"kineticadevcloud/\"\n # SemVer = Semantic Version for the Tag SemVer semver.Version\n semVer: string\n # The image sha.\n sha: \"\"\n # The image tag.\n tag: \"v7.1.5.2\"\n # Metrics - `fluent-bit` container requests/limits.\n metricsResources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this container. This is\n # an alpha field and requires enabling the\n # DynamicResourceAllocation feature gate. This field is\n # immutable.
It can only be set for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute resources\n # allowed. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute resources\n # required. If Requests is omitted for a container, it defaults\n # to Limits if that is explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot exceed Limits.\n # More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n # NodeSelector - NodeSelector to be applied to the DB Pods\n nodeSelector: {}\n # Do not use internal Operator field only.\n originalReplicas: 1\n # podManagementPolicy controls how pods are created during initial\n # scale up, when replacing pods on nodes, or when scaling down. The\n # default policy is `OrderedReady`, where pods are created in\n # increasing order (pod-0, then pod-1, etc) and the controller will\n # wait until each pod is ready before continuing. When scaling\n # down, the pods are removed in the opposite order. The alternative\n # policy is `Parallel` which will create pods in parallel to match\n # the desired scale without waiting, and on scale down will delete\n # all pods at once.\n podManagementPolicy: \"Parallel\"\n # Number of ranks per node as a uint16 i.e. 1-65535 ranks per node.\n # Default: 1\n ranksPerNode: 1\n # Periodic probe of container service readiness. Container will be\n # removed from service endpoints if the probe fails. Cannot be\n # updated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n readinessProbe:\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. Minimum value is\n # 1.\n failureThreshold: 3\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 10\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 10\n # The number of DB ranks i.e. replicas that the cluster will spin\n # up. Default: 3\n replicas: 3\n # Limit the resources a DB Pod can consume.\n resources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this container. This is\n # an alpha field and requires enabling the\n # DynamicResourceAllocation feature gate. This field is\n # immutable. It can only be set for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute resources\n # allowed. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute resources\n # required. If Requests is omitted for a container, it defaults\n # to Limits if that is explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot exceed Limits.\n # More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n # SecurityContext holds security configuration that will be applied\n # to a container. Some fields are present in both SecurityContext\n # and PodSecurityContext. 
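# --- Illustrative example (not generated from the CRD): a three-rank\n # cluster with one rank per node, creating its pods in parallel as\n # described for podManagementPolicy above.\n # replicas: 3\n # ranksPerNode: 1\n # podManagementPolicy: \"Parallel\"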
When both are set, the values in\n # SecurityContext take precedence.\n securityContext:\n # AllowPrivilegeEscalation controls whether a process can gain\n # more privileges than its parent process. This bool directly\n # controls if the no_new_privs flag will be set on the container\n # process. AllowPrivilegeEscalation is true always when the\n # container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note\n # that this field cannot be set when spec.os.name is windows.\n allowPrivilegeEscalation: true\n # The capabilities to add/drop when running containers. Defaults\n # to the default set of capabilities granted by the container\n # runtime. Note that this field cannot be set when spec.os.name\n # is windows.\n capabilities:\n # Added capabilities\n add: [\"string\"]\n # Removed capabilities\n drop: [\"string\"]\n # Run container in privileged mode. Processes in privileged\n # containers are essentially equivalent to root on the host.\n # Defaults to false. Note that this field cannot be set when\n # spec.os.name is windows.\n privileged: true\n # procMount denotes the type of proc mount to use for the\n # containers. The default is DefaultProcMount which uses the\n # container runtime defaults for readonly paths and masked paths.\n # This requires the ProcMountType feature flag to be enabled.\n # Note that this field cannot be set when spec.os.name is\n # windows.\n procMount: string\n # Whether this container has a read-only root filesystem. Default\n # is false. Note that this field cannot be set when spec.os.name\n # is windows.\n readOnlyRootFilesystem: true\n # The GID to run the entrypoint of the container process. Uses\n # runtime default if unset. May also be set in\n # PodSecurityContext. If set in both SecurityContext and\n # PodSecurityContext, the value specified in SecurityContext\n # takes precedence. Note that this field cannot be set when\n # spec.os.name is windows.\n runAsGroup: 1\n # Indicates that the container must run as a non-root user. If\n # true, the Kubelet will validate the image at runtime to ensure\n # that it does not run as UID 0 (root) and fail to start the\n # container if it does. If unset or false, no such validation\n # will be performed. May also be set in PodSecurityContext. If\n # set in both SecurityContext and PodSecurityContext, the value\n # specified in SecurityContext takes precedence.\n runAsNonRoot: true\n # The UID to run the entrypoint of the container process. Defaults\n # to user specified in image metadata if unspecified. May also be\n # set in PodSecurityContext. If set in both SecurityContext and\n # PodSecurityContext, the value specified in SecurityContext\n # takes precedence. Note that this field cannot be set when\n # spec.os.name is windows.\n runAsUser: 1\n # The SELinux context to be applied to the container. If\n # unspecified, the container runtime will allocate a random\n # SELinux context for each container. May also be set in\n # PodSecurityContext. If set in both SecurityContext and\n # PodSecurityContext, the value specified in SecurityContext\n # takes precedence. Note that this field cannot be set when\n # spec.os.name is windows.\n seLinuxOptions:\n # Level is SELinux level label that applies to the container.\n level: string\n # Role is a SELinux role label that applies to the container.\n role: string\n # Type is a SELinux type label that applies to the container.\n type: string\n # User is a SELinux user label that applies to the container.\n user: string\n # The seccomp options to use by this container. 
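# --- Illustrative example (not generated from the CRD; the UID/GID\n # values are assumptions): a restrictive securityContext running the\n # container as non-root with no privilege escalation and all\n # capabilities dropped.\n # securityContext:\n #   runAsNonRoot: true\n #   runAsUser: 1000\n #   runAsGroup: 1000\n #   allowPrivilegeEscalation: false\n #   capabilities:\n #     drop: [\"ALL\"]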
If seccomp options\n # are provided at both the pod & container level, the container\n # options override the pod options. Note that this field cannot\n # be set when spec.os.name is windows.\n seccompProfile:\n # localhostProfile indicates a profile defined in a file on the\n # node should be used. The profile must be preconfigured on the\n # node to work. Must be a descending path, relative to the\n # kubelet's configured seccomp profile location. Must only be\n # set if type is \"Localhost\".\n localhostProfile: string\n # type indicates which kind of seccomp profile will be applied.\n # Valid options are: Localhost - a profile defined in a file on\n # the node should be used. RuntimeDefault - the container\n # runtime default profile should be used. Unconfined - no\n # profile should be applied.\n type: string\n # The Windows specific settings applied to all containers. If\n # unspecified, the options from the PodSecurityContext will be\n # used. If set in both SecurityContext and PodSecurityContext,\n # the value specified in SecurityContext takes precedence. Note\n # that this field cannot be set when spec.os.name is linux.\n windowsOptions:\n # GMSACredentialSpec is where the GMSA admission webhook\n # (https://github.com/kubernetes-sigs/windows-gmsa) inlines the\n # contents of the GMSA credential spec named by the\n # GMSACredentialSpecName field.\n gmsaCredentialSpec: string\n # GMSACredentialSpecName is the name of the GMSA credential spec\n # to use.\n gmsaCredentialSpecName: string\n # HostProcess determines if a container should be run as a 'Host\n # Process' container. This field is alpha-level and will only\n # be honored by components that enable the\n # WindowsHostProcessContainers feature flag. Setting this field\n # without the feature flag will result in errors when\n # validating the Pod. All of a Pod's containers must have the\n # same effective HostProcess value (it is not allowed to have a\n # mix of HostProcess containers and non-HostProcess\n # containers). In addition, if HostProcess is true then\n # HostNetwork must also be set to true.\n hostProcess: true\n # The UserName in Windows to run the entrypoint of the container\n # process. Defaults to the user specified in image metadata if\n # unspecified. May also be set in PodSecurityContext. If set in\n # both SecurityContext and PodSecurityContext, the value\n # specified in SecurityContext takes precedence.\n runAsUserName: string\n # StartupProbe indicates that the Pod has successfully initialized.\n # If specified, no other probes are executed until this completes\n # successfully. If this probe fails, the Pod will be restarted,\n # just as if the livenessProbe failed. This can be used to provide\n # different probe parameters at the beginning of a Pod's lifecycle,\n # when it might take a long time to load data or warm a cache, than\n # during steady-state operation. This cannot be updated. This is an\n # alpha feature enabled by the StartupProbe feature flag. More\n # info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n startupProbe:\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. Minimum value is\n # 1.\n failureThreshold: 3\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 10\n # How often (in seconds) to perform the probe. 
Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 10\n # HostManagerMonitor is used to monitor the Kinetica DB Ranks. If a\n # rank is unavailable for the specified time(MaxRankFailureCount) the\n # cluster will be restarted.\n hostManagerMonitor:\n # The HostMonitor Port for the DB StartupProbe, ReadinessProbe and\n # Liveness probes. Default: 8888\n db_healthz_port:\n # Number of port to expose on the pod's IP address. This must be a\n # valid port number, 0 < x < 65536.\n containerPort: 1\n # What host IP to bind the external port to.\n hostIP: string\n # Number of port to expose on the host. If specified, this must be\n # a valid port number, 0 < x < 65536. If HostNetwork is\n # specified, this must match ContainerPort. Most containers do\n # not need this.\n hostPort: 1\n # If specified, this must be an IANA_SVC_NAME and unique within\n # the pod. Each named port in a pod must have a unique name. Name\n # for the port that can be referred to by services.\n name: string\n # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n # to \"TCP\".\n protocol: \"TCP\"\n # The HostMonitor Port for the DB StartupProbe, ReadinessProbe and\n # Liveness probes. Default: 8889\n hm_healthz_port:\n # Number of port to expose on the pod's IP address. This must be a\n # valid port number, 0 < x < 65536.\n containerPort: 1\n # What host IP to bind the external port to.\n hostIP: string\n # Number of port to expose on the host. If specified, this must be\n # a valid port number, 0 < x < 65536. If HostNetwork is\n # specified, this must match ContainerPort. Most containers do\n # not need this.\n hostPort: 1\n # If specified, this must be an IANA_SVC_NAME and unique within\n # the pod. Each named port in a pod must have a unique name. Name\n # for the port that can be referred to by services.\n name: string\n # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n # to \"TCP\".\n protocol: \"TCP\"\n # Periodic probe of container liveness. Container will be restarted\n # if the probe fails. Cannot be updated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n livenessProbe:\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. Minimum value is\n # 1.\n failureThreshold: 3\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 10\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 10\n # Set the name of the container image to use.\n monitorRegistryRepositoryTag:\n # Set the policy for pulling container images.\n imagePullPolicy: \"IfNotPresent\"\n # ImagePullSecrets is an optional list of references to secrets in\n # the same gpudb-namespace to use for pulling any of the images\n # used by this PodSpec. 
If specified, these secrets will be\n # passed to individual puller implementations for them to use.\n # For example, in the case of docker, only DockerConfig type\n # secrets are honored.\n imagePullSecrets:\n - name: string\n # The image registry & optional port containing the repository.\n registry: \"docker.io\"\n # The image repository path.\n repository: \"kineticadevcloud/\"\n # SemVer = Semantic Version for the Tag SemVer semver.Version\n semVer: string\n # The image sha.\n sha: \"\"\n # The image tag.\n tag: \"v7.1.5.2\"\n # Periodic probe of container service readiness. Container will be\n # removed from service endpoints if the probe fails. Cannot be\n # updated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n readinessProbe:\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. Minimum value is\n # 1.\n failureThreshold: 3\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 10\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 10\n # Allow for overriding resource requests/limits.\n resources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this container. This is\n # an alpha field and requires enabling the\n # DynamicResourceAllocation feature gate. This field is\n # immutable. It can only be set for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute resources\n # allowed. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute resources\n # required. If Requests is omitted for a container, it defaults\n # to Limits if that is explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot exceed Limits.\n # More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n # StartupProbe indicates that the Pod has successfully initialized.\n # If specified, no other probes are executed until this completes\n # successfully. If this probe fails, the Pod will be restarted,\n # just as if the livenessProbe failed. This can be used to provide\n # different probe parameters at the beginning of a Pod's lifecycle,\n # when it might take a long time to load data or warm a cache, than\n # during steady-state operation. This cannot be updated. This is an\n # alpha feature enabled by the StartupProbe feature flag. More\n # info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n startupProbe:\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. Minimum value is\n # 1.\n failureThreshold: 3\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 10\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 10\n # The platform infrastructure provider e.g. 
azure, aws, gcp, on-prem\n # etc.\n infra: \"on-prem\"\n # The Kubernetes Ingress Controller that will be used e.g.\n # ingress-nginx, Traefik, Ambassador, Gloo, Kong etc.\n ingressController: \"nginx\"\n # The LDAP server to connect to.\n ldap:\n # BaseDN - The root base LDAP Distinguished Name to use as the base\n # for the LDAP usage\n baseDN: \"dc=kinetica,dc=com\"\n # BindDN - The LDAP Distinguished Name to use for the LDAP\n # connectivity/data connectivity/bind\n bindDN: \"cn=admin,dc=kinetica,dc=com\"\n # Host - The name of the host to connect to. If IsInLocalK8S=true\n # then supply only the name e.g. `openldap`. Default: openldap\n host: \"openldap\"\n # IsInLocalK8S - Is the LDAP server co-located in the same K8s\n # cluster the operator is running in. Default: true\n isInLocalK8S: true\n # IsLDAPS - Use LDAPS instead of LDAP. Default: false\n isLDAPS: false\n # Namespace - The namespace the LDAP server resides in.\n # Default: openldap\n namespace: \"gpudb\"\n # Port - Defaults to LDAP Port 389 Default: 389\n port: 389\n # (An illustrative ldap example appears further below in this\n # section.)\n # Tells the operator to use Cloud Provider Pay As You Go\n # functionality.\n payAsYouGo: false\n # The Reveal Dashboard Configuration for the Kinetica Cluster.\n reveal:\n # The port that Reveal will be running on. It runs only on the head\n # node pod in the cluster. Default: 8080\n containerPort:\n # Number of port to expose on the pod's IP address. This must be a\n # valid port number, 0 < x < 65536.\n containerPort: 1\n # What host IP to bind the external port to.\n hostIP: string\n # Number of port to expose on the host. If specified, this must be\n # a valid port number, 0 < x < 65536. If HostNetwork is\n # specified, this must match ContainerPort. Most containers do\n # not need this.\n hostPort: 1\n # If specified, this must be an IANA_SVC_NAME and unique within\n # the pod. Each named port in a pod must have a unique name. Name\n # for the port that can be referred to by services.\n name: string\n # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n # to \"TCP\".\n protocol: \"TCP\"\n # The Ingress Endpoint that Reveal will be running on.\n ingressPath:\n # backend defines the referenced service endpoint to which the\n # traffic will be forwarded to.\n backend:\n # resource is an ObjectRef to another Kubernetes resource in the\n # namespace of the Ingress object. If resource is specified,\n # serviceName and servicePort must not be specified.\n resource:\n # APIGroup is the group for the resource being referenced. If\n # APIGroup is not specified, the specified Kind must be in\n # the core API group. For any other third-party types,\n # APIGroup is required.\n apiGroup: string\n # Kind is the type of resource being referenced\n kind: KineticaCluster\n # Name is the name of resource being referenced\n name: string\n # serviceName specifies the name of the referenced service.\n serviceName: string\n # servicePort Specifies the port of the referenced service.\n servicePort: \n # path is matched against the path of an incoming request.\n # Currently it can contain characters disallowed from the\n # conventional \"path\" part of a URL as defined by RFC 3986. Paths\n # must begin with a '/' and must be present when using PathType\n # with value \"Exact\" or \"Prefix\".\n path: string\n # pathType determines the interpretation of the path matching.\n # PathType can be one of the following values: * Exact: Matches\n # the URL path exactly. * Prefix: Matches based on a URL path\n # prefix split by '/'. Matching is done on a path element by\n # element basis.
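# --- Illustrative ldap example (not generated from the CRD; the DN\n # values are assumptions): connecting to an in-cluster OpenLDAP\n # service over plain LDAP using the ldap fields documented earlier\n # in this section.\n # ldap:\n #   host: \"openldap\"\n #   port: 389\n #   isInLocalK8S: true\n #   isLDAPS: false\n #   namespace: \"gpudb\"\n #   baseDN: \"dc=kinetica,dc=com\"\n #   bindDN: \"cn=admin,dc=kinetica,dc=com\"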
# The Stats server to deploy & connect to if required.\n stats:\n # AlertManager - AlertManager specific configuration.\n alertManager:\n # Set the arguments for the command within the container to run.\n args:\n ["-c","/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug\n --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090\n --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage\n --storage.tsdb.retention.time=7d --web.enable-lifecycle"]\n # Set the command within the container to run.\n command: ["/bin/sh"]\n # ConfigFile - Set the location of the Loki configuration file.\n configFile: "/opt/gpudb/kagent/stats/loki/loki.yml"\n # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a\n # ConfigMap\n configFileAsConfigMap: true\n # The port that Stats will be running on. It runs only on the head\n # node pod in the cluster. Default: 9091\n containerPort:\n # Number of port to expose on the pod's IP address. This must be\n # a valid port number, 0 < x < 65536.\n containerPort: 1\n # What host IP to bind the external port to.\n hostIP: string\n # Number of port to expose on the host. If specified, this must\n # be a valid port number, 0 < x < 65536. If HostNetwork is\n # specified, this must match ContainerPort. Most containers do\n # not need this.\n hostPort: 1\n # If specified, this must be an IANA_SVC_NAME and unique within\n # the pod. Each named port in a pod must have a unique name.\n # Name for the port that can be referred to by services.\n name: string\n # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n # to "TCP".\n protocol: "TCP"\n # List of environment variables to set in the container.\n env:\n - name: string\n # Variable references $(VAR_NAME) are expanded using the\n # previously defined environment variables in the container and\n # any service environment variables. If a variable cannot be\n # resolved, the reference in the input string will be\n # unchanged. Double $$ are reduced to a single $, which allows\n # for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will\n # produce the string literal "$(VAR_NAME)". Escaped references\n # will never be expanded, regardless of whether the variable\n # exists or not. Defaults to "".\n value: string\n # Source for the environment variable's value. Cannot be used if\n # value is not empty.\n valueFrom:\n # Selects a key of a ConfigMap.\n configMapKeyRef:\n # The key to select.\n key: string\n # Name of the referent. More info:\n # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n # TODO: Add other useful fields. 
apiVersion, kind, uid?\n name: string\n # Specify whether the ConfigMap or its key must be defined\n optional: true\n # Selects a field of the pod: supports metadata.name,\n # metadata.namespace, `metadata.labels\n # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,\n # spec.serviceAccountName, status.hostIP, status.podIP,\n # status.podIPs.\n fieldRef:\n # Version of the schema the FieldPath is written in terms\n # of, defaults to "v1".\n apiVersion: app.kinetica.com/v1\n # Path of the field to select in the specified API version.\n fieldPath: string\n # Selects a resource of the container: only resources limits\n # and requests (limits.cpu, limits.memory,\n # limits.ephemeral-storage, requests.cpu, requests.memory and\n # requests.ephemeral-storage) are currently supported.\n resourceFieldRef:\n # Container name: required for volumes, optional for env\n # vars\n containerName: string\n # Specifies the output format of the exposed resources,\n # defaults to "1"\n divisor: \n # Required: resource to select\n resource: string\n # Selects a key of a secret in the pod's namespace\n secretKeyRef:\n # The key of the secret to select from. Must be a valid\n # secret key.\n key: string\n # Name of the referent. More info:\n # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n # TODO: Add other useful fields. apiVersion, kind, uid?\n name: string\n # Specify whether the Secret or its key must be defined\n optional: true\n # Set the name of the container image to use.\n image:\n # Set the policy for pulling container images.\n imagePullPolicy: "IfNotPresent"\n # ImagePullSecrets is an optional list of references to secrets\n # in the same gpudb-namespace to use for pulling any of the\n # images used by this PodSpec. If specified, these secrets will\n # be passed to individual puller implementations for them to\n # use. For example, in the case of docker, only DockerConfig\n # type secrets are honored.\n imagePullSecrets:\n - name: string\n # The image registry & optional port containing the repository.\n registry: "docker.io"\n # The image repository path.\n repository: "kineticadevcloud/"\n # SemVer = Semantic Version for the Tag SemVer semver.Version\n semVer: string\n # The image sha.\n sha: ""\n # The image tag.\n tag: "v7.1.5.2"\n # Whether to enable the Stats Server on the Cluster. Default:\n # true\n isEnabled: true
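# --- Example (illustrative only, not part of the generated reference):\n # pointing the Stats AlertManager image at an internal registry for an\n # air-gapped install, using the `image` fields documented above. The\n # registry address is a placeholder; the tag shown is the reference\n # default.\n # image:\n #   imagePullPolicy: "IfNotPresent"\n #   registry: "registry.example.internal:5000"\n #   repository: "kineticadevcloud/"\n #   tag: "v7.1.5.2"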
# Periodic probe of container liveness. Container will be\n # restarted if the probe fails. Cannot be updated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n livenessProbe:\n # Exec specifies the action to take.\n exec:\n # Command is the command line to execute inside the container,\n # the working directory for the command is root ('/') in the\n # container's filesystem. The command is simply exec'd, it is\n # not run inside a shell, so traditional shell instructions\n # ('|', etc) won't work. To use a shell, you need to\n # explicitly call out to that shell. Exit status of 0 is\n # treated as live/healthy and non-zero is unhealthy.\n command: ["string"]\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. Minimum value\n # is 1.\n failureThreshold: 1\n # GRPC specifies an action involving a GRPC port.\n grpc:\n # Port number of the gRPC service. Number must be in the range\n # 1 to 65535.\n port: 1\n # Service is the name of the service to place in the gRPC\n # HealthCheckRequest\n # (see\n # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n # If this is not specified, the default behavior is defined\n # by gRPC.\n service: string\n # HTTPGet specifies the http request to perform.\n httpGet:\n # Host name to connect to, defaults to the pod IP. You\n # probably want to set "Host" in httpHeaders instead.\n host: string\n # Custom headers to set in the request. HTTP allows repeated\n # headers.\n httpHeaders:\n - name: string\n # The header field value\n value: string\n # Path to access on the HTTP server.\n path: string\n # Name or number of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Scheme to use for connecting to the host. Defaults to HTTP.\n scheme: string\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 1\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 1\n # Minimum consecutive successes for the probe to be considered\n # successful after having failed. Defaults to 1. Must be 1 for\n # liveness and startup. Minimum value is 1.\n successThreshold: 1\n # TCPSocket specifies an action involving a TCP port.\n tcpSocket:\n # Optional: Host name to connect to, defaults to the pod IP.\n host: string\n # Number or name of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Optional duration in seconds the pod needs to terminate\n # gracefully upon probe failure. The grace period is the\n # duration in seconds after the processes running in the pod\n # are sent a termination signal and the time when the processes\n # are forcibly halted with a kill signal. Set this value longer\n # than the expected cleanup time for your process. If this\n # value is nil, the pod's terminationGracePeriodSeconds will be\n # used. Otherwise, this value overrides the value provided by\n # the pod spec. Value must be non-negative integer. The value\n # zero indicates stop immediately via the kill signal\n # (no opportunity to shut down). This is a beta field and\n # requires enabling ProbeTerminationGracePeriod feature gate.\n # Minimum value is 1. spec.terminationGracePeriodSeconds is\n # used if unset.\n terminationGracePeriodSeconds: 1\n # Number of seconds after which the probe times out. Defaults to\n # 1 second. Minimum value is 1. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n timeoutSeconds: 1\n # Logs - Set the location of the Stats log files.\n logs: "/opt/gpudb/kagent/stats/logs"\n name: "stats"\n # Periodic probe of container service readiness. Container will be\n # removed from service endpoints if the probe fails. Cannot be\n # updated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n readinessProbe:\n # Exec specifies the action to take.\n exec:\n # Command is the command line to execute inside the container,\n # the working directory for the command is root ('/') in the\n # container's filesystem. The command is simply exec'd, it is\n # not run inside a shell, so traditional shell instructions\n # ('|', etc) won't work. 
To use a shell, you need to\n # explicitly call out to that shell. Exit status of 0 is\n # treated as live/healthy and non-zero is unhealthy.\n command: [\"string\"]\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. Minimum value\n # is 1.\n failureThreshold: 1\n # GRPC specifies an action involving a GRPC port.\n grpc:\n # Port number of the gRPC service. Number must be in the range\n # 1 to 65535.\n port: 1\n # Service is the name of the service to place in the gRPC\n # HealthCheckRequest\n # (see\n # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n # If this is not specified, the default behavior is defined\n # by gRPC.\n service: string\n # HTTPGet specifies the http request to perform.\n httpGet:\n # Host name to connect to, defaults to the pod IP. You\n # probably want to set \"Host\" in httpHeaders instead.\n host: string\n # Custom headers to set in the request. HTTP allows repeated\n # headers.\n httpHeaders:\n - name: string\n # The header field value\n value: string\n # Path to access on the HTTP server.\n path: string\n # Name or number of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Scheme to use for connecting to the host. Defaults to HTTP.\n scheme: string\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 1\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 1\n # Minimum consecutive successes for the probe to be considered\n # successful after having failed. Defaults to 1. Must be 1 for\n # liveness and startup. Minimum value is 1.\n successThreshold: 1\n # TCPSocket specifies an action involving a TCP port.\n tcpSocket:\n # Optional: Host name to connect to, defaults to the pod IP.\n host: string\n # Number or name of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Optional duration in seconds the pod needs to terminate\n # gracefully upon probe failure. The grace period is the\n # duration in seconds after the processes running in the pod\n # are sent a termination signal and the time when the processes\n # are forcibly halted with a kill signal. Set this value longer\n # than the expected cleanup time for your process. If this\n # value is nil, the pod's terminationGracePeriodSeconds will be\n # used. Otherwise, this value overrides the value provided by\n # the pod spec. Value must be non-negative integer. The value\n # zero indicates stop immediately via the kill signal\n # (no opportunity to shut down). This is a beta field and\n # requires enabling ProbeTerminationGracePeriod feature gate.\n # Minimum value is 1. spec.terminationGracePeriodSeconds is\n # used if unset.\n terminationGracePeriodSeconds: 1\n # Number of seconds after which the probe times out. Defaults to\n # 1 second. Minimum value is 1. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n timeoutSeconds: 1\n # Resource Requests & Limits for the Stats Pod.\n resources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this container. This is\n # an alpha field and requires enabling the\n # DynamicResourceAllocation feature gate. 
This field is\n # immutable. It can only be set for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute resources\n # allowed. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute resources\n # required. If Requests is omitted for a container, it defaults\n # to Limits if that is explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot exceed Limits.\n # More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n # StoragePath - Set the location of the AlertManager file\n # storage.\n storagePath: "/opt/gpudb/kagent/stats/storage/alertmanager/alertmanager"\n # WebConfigFile - Set the location of the AlertManager\n # alertmanager-web-config.yml.\n webConfigFile: "/opt/gpudb/kagent/stats/alertmanager/alertmanager-web-config.yml"\n # WebListenAddress - Set the address and port that the AlertManager\n # web server listens on.\n webListenAddress: "0.0.0.0:9089"\n # Grafana - Grafana specific configuration.\n grafana:\n # Set the arguments for the command within the container to run.\n args:\n ["-c","/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug\n --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090\n --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage\n --storage.tsdb.retention.time=7d --web.enable-lifecycle"]\n # Set the command within the container to run.\n command: ["/bin/sh"]\n # ConfigFile - Set the location of the Loki configuration file.\n configFile: "/opt/gpudb/kagent/stats/loki/loki.yml"\n # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a\n # ConfigMap\n configFileAsConfigMap: true\n # The port that Stats will be running on. It runs only on the head\n # node pod in the cluster. Default: 9091\n containerPort:\n # Number of port to expose on the pod's IP address. This must be\n # a valid port number, 0 < x < 65536.\n containerPort: 1\n # What host IP to bind the external port to.\n hostIP: string\n # Number of port to expose on the host. If specified, this must\n # be a valid port number, 0 < x < 65536. If HostNetwork is\n # specified, this must match ContainerPort. Most containers do\n # not need this.\n hostPort: 1\n # If specified, this must be an IANA_SVC_NAME and unique within\n # the pod. Each named port in a pod must have a unique name.\n # Name for the port that can be referred to by services.\n name: string\n # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n # to "TCP".\n protocol: "TCP"\n # List of environment variables to set in the container.\n env:\n - name: string\n # Variable references $(VAR_NAME) are expanded using the\n # previously defined environment variables in the container and\n # any service environment variables. If a variable cannot be\n # resolved, the reference in the input string will be\n # unchanged. Double $$ are reduced to a single $, which allows\n # for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will\n # produce the string literal "$(VAR_NAME)". Escaped references\n # will never be expanded, regardless of whether the variable\n # exists or not. Defaults to "".\n value: string\n # Source for the environment variable's value. Cannot be used if\n # value is not empty.\n valueFrom:\n # Selects a key of a ConfigMap.\n configMapKeyRef:\n # The key to select.\n key: string\n # Name of the referent. 
More info:\n # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n # TODO: Add other useful fields. apiVersion, kind, uid?\n name: string\n # Specify whether the ConfigMap or its key must be defined\n optional: true\n # Selects a field of the pod: supports metadata.name,\n # metadata.namespace, `metadata.labels\n # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,\n # spec.serviceAccountName, status.hostIP, status.podIP,\n # status.podIPs.\n fieldRef:\n # Version of the schema the FieldPath is written in terms\n # of, defaults to \"v1\".\n apiVersion: app.kinetica.com/v1\n # Path of the field to select in the specified API version.\n fieldPath: string\n # Selects a resource of the container: only resources limits\n # and requests (limits.cpu, limits.memory,\n # limits.ephemeral-storage, requests.cpu, requests.memory and\n # requests.ephemeral-storage) are currently supported.\n resourceFieldRef:\n # Container name: required for volumes, optional for env\n # vars\n containerName: string\n # Specifies the output format of the exposed resources,\n # defaults to \"1\"\n divisor: \n # Required: resource to select\n resource: string\n # Selects a key of a secret in the pod's namespace\n secretKeyRef:\n # The key of the secret to select from. Must be a valid\n # secret key.\n key: string\n # Name of the referent. More info:\n # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n # TODO: Add other useful fields. apiVersion, kind, uid?\n name: string\n # Specify whether the Secret or its key must be defined\n optional: true\n # HomePath - Set the location of the Grafana home directory.\n homePath: \"/opt/gpudb/kagent/stats/grafana\"\n # GraphiteHost - Host Address\n host: \"0.0.0.0\"\n # Set the name of the container image to use.\n image:\n # Set the policy for pulling container images.\n imagePullPolicy: \"IfNotPresent\"\n # ImagePullSecrets is an optional list of references to secrets\n # in the same gpudb-namespace to use for pulling any of the\n # images used by this PodSpec. If specified, these secrets will\n # be passed to individual puller implementations for them to\n # use. For example, in the case of docker, only DockerConfig\n # type secrets are honored.\n imagePullSecrets:\n - name: string\n # The image registry & optional port containing the repository.\n registry: \"docker.io\"\n # The image repository path.\n repository: \"kineticadevcloud/\"\n # SemVer = Semantic Version for the Tag SemVer semver.Version\n semVer: string\n # The image sha.\n sha: \"\"\n # The image tag.\n tag: \"v7.1.5.2\"\n # Whether to enable the Stats Server on the Cluster. Default:\n # true\n isEnabled: true\n # Periodic probe of container liveness. Container will be\n # restarted if the probe fails. Cannot be updated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n livenessProbe:\n # Exec specifies the action to take.\n exec:\n # Command is the command line to execute inside the container,\n # the working directory for the command is root ('/') in the\n # container's filesystem. The command is simply exec'd, it is\n # not run inside a shell, so traditional shell instructions\n # ('|', etc) won't work. To use a shell, you need to\n # explicitly call out to that shell. Exit status of 0 is\n # treated as live/healthy and non-zero is unhealthy.\n command: [\"string\"]\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. 
Minimum value\n # is 1.\n failureThreshold: 1\n # GRPC specifies an action involving a GRPC port.\n grpc:\n # Port number of the gRPC service. Number must be in the range\n # 1 to 65535.\n port: 1\n # Service is the name of the service to place in the gRPC\n # HealthCheckRequest\n # (see\n # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n # If this is not specified, the default behavior is defined\n # by gRPC.\n service: string\n # HTTPGet specifies the http request to perform.\n httpGet:\n # Host name to connect to, defaults to the pod IP. You\n # probably want to set "Host" in httpHeaders instead.\n host: string\n # Custom headers to set in the request. HTTP allows repeated\n # headers.\n httpHeaders:\n - name: string\n # The header field value\n value: string\n # Path to access on the HTTP server.\n path: string\n # Name or number of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Scheme to use for connecting to the host. Defaults to HTTP.\n scheme: string\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 1\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 1\n # Minimum consecutive successes for the probe to be considered\n # successful after having failed. Defaults to 1. Must be 1 for\n # liveness and startup. Minimum value is 1.\n successThreshold: 1\n # TCPSocket specifies an action involving a TCP port.\n tcpSocket:\n # Optional: Host name to connect to, defaults to the pod IP.\n host: string\n # Number or name of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Optional duration in seconds the pod needs to terminate\n # gracefully upon probe failure. The grace period is the\n # duration in seconds after the processes running in the pod\n # are sent a termination signal and the time when the processes\n # are forcibly halted with a kill signal. Set this value longer\n # than the expected cleanup time for your process. If this\n # value is nil, the pod's terminationGracePeriodSeconds will be\n # used. Otherwise, this value overrides the value provided by\n # the pod spec. Value must be non-negative integer. The value\n # zero indicates stop immediately via the kill signal\n # (no opportunity to shut down). This is a beta field and\n # requires enabling ProbeTerminationGracePeriod feature gate.\n # Minimum value is 1. spec.terminationGracePeriodSeconds is\n # used if unset.\n terminationGracePeriodSeconds: 1\n # Number of seconds after which the probe times out. Defaults to\n # 1 second. Minimum value is 1. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n timeoutSeconds: 1\n # Logs - Set the location of the Stats log files.\n logs: "/opt/gpudb/kagent/stats/logs"\n name: "stats"\n # Periodic probe of container service readiness. Container will be\n # removed from service endpoints if the probe fails. Cannot be\n # updated. 
More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n readinessProbe:\n # Exec specifies the action to take.\n exec:\n # Command is the command line to execute inside the container,\n # the working directory for the command is root ('/') in the\n # container's filesystem. The command is simply exec'd, it is\n # not run inside a shell, so traditional shell instructions\n # ('|', etc) won't work. To use a shell, you need to\n # explicitly call out to that shell. Exit status of 0 is\n # treated as live/healthy and non-zero is unhealthy.\n command: [\"string\"]\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. Minimum value\n # is 1.\n failureThreshold: 1\n # GRPC specifies an action involving a GRPC port.\n grpc:\n # Port number of the gRPC service. Number must be in the range\n # 1 to 65535.\n port: 1\n # Service is the name of the service to place in the gRPC\n # HealthCheckRequest\n # (see\n # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n # If this is not specified, the default behavior is defined\n # by gRPC.\n service: string\n # HTTPGet specifies the http request to perform.\n httpGet:\n # Host name to connect to, defaults to the pod IP. You\n # probably want to set \"Host\" in httpHeaders instead.\n host: string\n # Custom headers to set in the request. HTTP allows repeated\n # headers.\n httpHeaders:\n - name: string\n # The header field value\n value: string\n # Path to access on the HTTP server.\n path: string\n # Name or number of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Scheme to use for connecting to the host. Defaults to HTTP.\n scheme: string\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 1\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 1\n # Minimum consecutive successes for the probe to be considered\n # successful after having failed. Defaults to 1. Must be 1 for\n # liveness and startup. Minimum value is 1.\n successThreshold: 1\n # TCPSocket specifies an action involving a TCP port.\n tcpSocket:\n # Optional: Host name to connect to, defaults to the pod IP.\n host: string\n # Number or name of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Optional duration in seconds the pod needs to terminate\n # gracefully upon probe failure. The grace period is the\n # duration in seconds after the processes running in the pod\n # are sent a termination signal and the time when the processes\n # are forcibly halted with a kill signal. Set this value longer\n # than the expected cleanup time for your process. If this\n # value is nil, the pod's terminationGracePeriodSeconds will be\n # used. Otherwise, this value overrides the value provided by\n # the pod spec. Value must be non-negative integer. The value\n # zero indicates stop immediately via the kill signal\n # (no opportunity to shut down). This is a beta field and\n # requires enabling ProbeTerminationGracePeriod feature gate.\n # Minimum value is 1. spec.terminationGracePeriodSeconds is\n # used if unset.\n terminationGracePeriodSeconds: 1\n # Number of seconds after which the probe times out. 
Defaults to\n # 1 second. Minimum value is 1. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n timeoutSeconds: 1\n # Resource Requests & Limits for the Stats Pod.\n resources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this container. This is\n # an alpha field and requires enabling the\n # DynamicResourceAllocation feature gate. This field is\n # immutable. It can only be set for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute resources\n # allowed. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute resources\n # required. If Requests is omitted for a container, it defaults\n # to Limits if that is explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot exceed Limits.\n # More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n # Whether to enable the Stats Server on the Cluster. Default: true\n isEnabled: true\n # Loki - Loki specific configuration.\n loki:\n # Set the arguments for the command within the container to run.\n args:\n [\"-c\",\"/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug\n --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090\n --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage\n --storage.tsdb.retention.time=7d --web.enable-lifecycle\"]\n # Set the command within the container to run.\n command: [\"/bin/sh\"]\n # ConfigFile - Set the location of the Loki configuration file.\n configFile: \"/opt/gpudb/kagent/stats/loki/loki.yml\"\n # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a\n # ConfigMap\n configFileAsConfigMap: true\n # The port that Stats will be running on. It runs only on the head\n # node pod in the cluster. Default: 9091\n containerPort:\n # Number of port to expose on the pod's IP address. This must be\n # a valid port number, 0 < x < 65536.\n containerPort: 1\n # What host IP to bind the external port to.\n hostIP: string\n # Number of port to expose on the host. If specified, this must\n # be a valid port number, 0 < x < 65536. If HostNetwork is\n # specified, this must match ContainerPort. Most containers do\n # not need this.\n hostPort: 1\n # If specified, this must be an IANA_SVC_NAME and unique within\n # the pod. Each named port in a pod must have a unique name.\n # Name for the port that can be referred to by services.\n name: string\n # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n # to \"TCP\".\n protocol: \"TCP\"\n # List of environment variables to set in the container.\n env:\n - name: string\n # Variable references $(VAR_NAME) are expanded using the\n # previously defined environment variables in the container and\n # any service environment variables. If a variable cannot be\n # resolved, the reference in the input string will be\n # unchanged. Double $$ are reduced to a single $, which allows\n # for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will\n # produce the string literal \"$(VAR_NAME)\". Escaped references\n # will never be expanded, regardless of whether the variable\n # exists or not. Defaults to \"\".\n value: string\n # Source for the environment variable's value. 
Cannot be used if\n # value is not empty.\n valueFrom:\n # Selects a key of a ConfigMap.\n configMapKeyRef:\n # The key to select.\n key: string\n # Name of the referent. More info:\n # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n # TODO: Add other useful fields. apiVersion, kind, uid?\n name: string\n # Specify whether the ConfigMap or its key must be defined\n optional: true\n # Selects a field of the pod: supports metadata.name,\n # metadata.namespace, `metadata.labels\n # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,\n # spec.serviceAccountName, status.hostIP, status.podIP,\n # status.podIPs.\n fieldRef:\n # Version of the schema the FieldPath is written in terms\n # of, defaults to \"v1\".\n apiVersion: app.kinetica.com/v1\n # Path of the field to select in the specified API version.\n fieldPath: string\n # Selects a resource of the container: only resources limits\n # and requests (limits.cpu, limits.memory,\n # limits.ephemeral-storage, requests.cpu, requests.memory and\n # requests.ephemeral-storage) are currently supported.\n resourceFieldRef:\n # Container name: required for volumes, optional for env\n # vars\n containerName: string\n # Specifies the output format of the exposed resources,\n # defaults to \"1\"\n divisor: \n # Required: resource to select\n resource: string\n # Selects a key of a secret in the pod's namespace\n secretKeyRef:\n # The key of the secret to select from. Must be a valid\n # secret key.\n key: string\n # Name of the referent. More info:\n # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n # TODO: Add other useful fields. apiVersion, kind, uid?\n name: string\n # Specify whether the Secret or its key must be defined\n optional: true\n # ExpandEnv\n expandEnv: true\n # Set the name of the container image to use.\n image:\n # Set the policy for pulling container images.\n imagePullPolicy: \"IfNotPresent\"\n # ImagePullSecrets is an optional list of references to secrets\n # in the same gpudb-namespace to use for pulling any of the\n # images used by this PodSpec. If specified, these secrets will\n # be passed to individual puller implementations for them to\n # use. For example, in the case of docker, only DockerConfig\n # type secrets are honored.\n imagePullSecrets:\n - name: string\n # The image registry & optional port containing the repository.\n registry: \"docker.io\"\n # The image repository path.\n repository: \"kineticadevcloud/\"\n # SemVer = Semantic Version for the Tag SemVer semver.Version\n semVer: string\n # The image sha.\n sha: \"\"\n # The image tag.\n tag: \"v7.1.5.2\"\n # Whether to enable the Stats Server on the Cluster. Default:\n # true\n isEnabled: true\n # Periodic probe of container liveness. Container will be\n # restarted if the probe fails. Cannot be updated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n livenessProbe:\n # Exec specifies the action to take.\n exec:\n # Command is the command line to execute inside the container,\n # the working directory for the command is root ('/') in the\n # container's filesystem. The command is simply exec'd, it is\n # not run inside a shell, so traditional shell instructions\n # ('|', etc) won't work. To use a shell, you need to\n # explicitly call out to that shell. 
Exit status of 0 is\n # treated as live/healthy and non-zero is unhealthy.\n command: ["string"]\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. Minimum value\n # is 1.\n failureThreshold: 1\n # GRPC specifies an action involving a GRPC port.\n grpc:\n # Port number of the gRPC service. Number must be in the range\n # 1 to 65535.\n port: 1\n # Service is the name of the service to place in the gRPC\n # HealthCheckRequest\n # (see\n # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n # If this is not specified, the default behavior is defined\n # by gRPC.\n service: string\n # HTTPGet specifies the http request to perform.\n httpGet:\n # Host name to connect to, defaults to the pod IP. You\n # probably want to set "Host" in httpHeaders instead.\n host: string\n # Custom headers to set in the request. HTTP allows repeated\n # headers.\n httpHeaders:\n - name: string\n # The header field value\n value: string\n # Path to access on the HTTP server.\n path: string\n # Name or number of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Scheme to use for connecting to the host. Defaults to HTTP.\n scheme: string\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 1\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 1\n # Minimum consecutive successes for the probe to be considered\n # successful after having failed. Defaults to 1. Must be 1 for\n # liveness and startup. Minimum value is 1.\n successThreshold: 1\n # TCPSocket specifies an action involving a TCP port.\n tcpSocket:\n # Optional: Host name to connect to, defaults to the pod IP.\n host: string\n # Number or name of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Optional duration in seconds the pod needs to terminate\n # gracefully upon probe failure. The grace period is the\n # duration in seconds after the processes running in the pod\n # are sent a termination signal and the time when the processes\n # are forcibly halted with a kill signal. Set this value longer\n # than the expected cleanup time for your process. If this\n # value is nil, the pod's terminationGracePeriodSeconds will be\n # used. Otherwise, this value overrides the value provided by\n # the pod spec. Value must be non-negative integer. The value\n # zero indicates stop immediately via the kill signal\n # (no opportunity to shut down). This is a beta field and\n # requires enabling ProbeTerminationGracePeriod feature gate.\n # Minimum value is 1. spec.terminationGracePeriodSeconds is\n # used if unset.\n terminationGracePeriodSeconds: 1\n # Number of seconds after which the probe times out. Defaults to\n # 1 second. Minimum value is 1. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n timeoutSeconds: 1\n # Logs - Set the location of the Stats log files.\n logs: "/opt/gpudb/kagent/stats/logs"\n name: "stats"\n # Periodic probe of container service readiness. Container will be\n # removed from service endpoints if the probe fails. Cannot be\n # updated. 
More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n readinessProbe:\n # Exec specifies the action to take.\n exec:\n # Command is the command line to execute inside the container,\n # the working directory for the command is root ('/') in the\n # container's filesystem. The command is simply exec'd, it is\n # not run inside a shell, so traditional shell instructions\n # ('|', etc) won't work. To use a shell, you need to\n # explicitly call out to that shell. Exit status of 0 is\n # treated as live/healthy and non-zero is unhealthy.\n command: [\"string\"]\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. Minimum value\n # is 1.\n failureThreshold: 1\n # GRPC specifies an action involving a GRPC port.\n grpc:\n # Port number of the gRPC service. Number must be in the range\n # 1 to 65535.\n port: 1\n # Service is the name of the service to place in the gRPC\n # HealthCheckRequest\n # (see\n # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n # If this is not specified, the default behavior is defined\n # by gRPC.\n service: string\n # HTTPGet specifies the http request to perform.\n httpGet:\n # Host name to connect to, defaults to the pod IP. You\n # probably want to set \"Host\" in httpHeaders instead.\n host: string\n # Custom headers to set in the request. HTTP allows repeated\n # headers.\n httpHeaders:\n - name: string\n # The header field value\n value: string\n # Path to access on the HTTP server.\n path: string\n # Name or number of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Scheme to use for connecting to the host. Defaults to HTTP.\n scheme: string\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 1\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 1\n # Minimum consecutive successes for the probe to be considered\n # successful after having failed. Defaults to 1. Must be 1 for\n # liveness and startup. Minimum value is 1.\n successThreshold: 1\n # TCPSocket specifies an action involving a TCP port.\n tcpSocket:\n # Optional: Host name to connect to, defaults to the pod IP.\n host: string\n # Number or name of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Optional duration in seconds the pod needs to terminate\n # gracefully upon probe failure. The grace period is the\n # duration in seconds after the processes running in the pod\n # are sent a termination signal and the time when the processes\n # are forcibly halted with a kill signal. Set this value longer\n # than the expected cleanup time for your process. If this\n # value is nil, the pod's terminationGracePeriodSeconds will be\n # used. Otherwise, this value overrides the value provided by\n # the pod spec. Value must be non-negative integer. The value\n # zero indicates stop immediately via the kill signal\n # (no opportunity to shut down). This is a beta field and\n # requires enabling ProbeTerminationGracePeriod feature gate.\n # Minimum value is 1. spec.terminationGracePeriodSeconds is\n # used if unset.\n terminationGracePeriodSeconds: 1\n # Number of seconds after which the probe times out. 
Defaults to\n # 1 second. Minimum value is 1. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n timeoutSeconds: 1\n # Resource Requests & Limits for the Stats Pod.\n resources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this container. This is\n # an alpha field and requires enabling the\n # DynamicResourceAllocation feature gate. This field is\n # immutable. It can only be set for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute resources\n # allowed. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute resources\n # required. If Requests is omitted for a container, it defaults\n # to Limits if that is explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot exceed Limits.\n # More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n # Storage - Set the path of the Loki storage.\n storage: \"/opt/gpudb/kagent/stats/storage/loki-storage\"\n # Which vmss/node group etc. to use as the NodeSelector\n pool: \"compute\"\n # Prometheus - Prometheus specific configuration.\n prometheus:\n # Set the arguments for the command within the container to run.\n args:\n [\"-c\",\"/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug\n --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090\n --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage\n --storage.tsdb.retention.time=7d --web.enable-lifecycle\"]\n # Set the command within the container to run.\n command: [\"/bin/sh\"]\n # ConfigFile - Set the location of the Loki configuration file.\n configFile: \"/opt/gpudb/kagent/stats/loki/loki.yml\"\n # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a\n # ConfigMap\n configFileAsConfigMap: true\n # The port that Stats will be running on. It runs only on the head\n # node pod in the cluster. Default: 9091\n containerPort:\n # Number of port to expose on the pod's IP address. This must be\n # a valid port number, 0 < x < 65536.\n containerPort: 1\n # What host IP to bind the external port to.\n hostIP: string\n # Number of port to expose on the host. If specified, this must\n # be a valid port number, 0 < x < 65536. If HostNetwork is\n # specified, this must match ContainerPort. Most containers do\n # not need this.\n hostPort: 1\n # If specified, this must be an IANA_SVC_NAME and unique within\n # the pod. Each named port in a pod must have a unique name.\n # Name for the port that can be referred to by services.\n name: string\n # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n # to \"TCP\".\n protocol: \"TCP\"\n # List of environment variables to set in the container.\n env:\n - name: string\n # Variable references $(VAR_NAME) are expanded using the\n # previously defined environment variables in the container and\n # any service environment variables. If a variable cannot be\n # resolved, the reference in the input string will be\n # unchanged. Double $$ are reduced to a single $, which allows\n # for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will\n # produce the string literal \"$(VAR_NAME)\". Escaped references\n # will never be expanded, regardless of whether the variable\n # exists or not. Defaults to \"\".\n value: string\n # Source for the environment variable's value. 
Cannot be used if\n # value is not empty.\n valueFrom:\n # Selects a key of a ConfigMap.\n configMapKeyRef:\n # The key to select.\n key: string\n # Name of the referent. More info:\n # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n # TODO: Add other useful fields. apiVersion, kind, uid?\n name: string\n # Specify whether the ConfigMap or its key must be defined\n optional: true\n # Selects a field of the pod: supports metadata.name,\n # metadata.namespace, `metadata.labels\n # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,\n # spec.serviceAccountName, status.hostIP, status.podIP,\n # status.podIPs.\n fieldRef:\n # Version of the schema the FieldPath is written in terms\n # of, defaults to \"v1\".\n apiVersion: app.kinetica.com/v1\n # Path of the field to select in the specified API version.\n fieldPath: string\n # Selects a resource of the container: only resources limits\n # and requests (limits.cpu, limits.memory,\n # limits.ephemeral-storage, requests.cpu, requests.memory and\n # requests.ephemeral-storage) are currently supported.\n resourceFieldRef:\n # Container name: required for volumes, optional for env\n # vars\n containerName: string\n # Specifies the output format of the exposed resources,\n # defaults to \"1\"\n divisor: \n # Required: resource to select\n resource: string\n # Selects a key of a secret in the pod's namespace\n secretKeyRef:\n # The key of the secret to select from. Must be a valid\n # secret key.\n key: string\n # Name of the referent. More info:\n # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n # TODO: Add other useful fields. apiVersion, kind, uid?\n name: string\n # Specify whether the Secret or its key must be defined\n optional: true\n # Set the name of the container image to use.\n image:\n # Set the policy for pulling container images.\n imagePullPolicy: \"IfNotPresent\"\n # ImagePullSecrets is an optional list of references to secrets\n # in the same gpudb-namespace to use for pulling any of the\n # images used by this PodSpec. If specified, these secrets will\n # be passed to individual puller implementations for them to\n # use. For example, in the case of docker, only DockerConfig\n # type secrets are honored.\n imagePullSecrets:\n - name: string\n # The image registry & optional port containing the repository.\n registry: \"docker.io\"\n # The image repository path.\n repository: \"kineticadevcloud/\"\n # SemVer = Semantic Version for the Tag SemVer semver.Version\n semVer: string\n # The image sha.\n sha: \"\"\n # The image tag.\n tag: \"v7.1.5.2\"\n # Whether to enable the Stats Server on the Cluster. Default:\n # true\n isEnabled: true\n # Periodic probe of container liveness. Container will be\n # restarted if the probe fails. Cannot be updated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n livenessProbe:\n # Exec specifies the action to take.\n exec:\n # Command is the command line to execute inside the container,\n # the working directory for the command is root ('/') in the\n # container's filesystem. The command is simply exec'd, it is\n # not run inside a shell, so traditional shell instructions\n # ('|', etc) won't work. To use a shell, you need to\n # explicitly call out to that shell. Exit status of 0 is\n # treated as live/healthy and non-zero is unhealthy.\n command: [\"string\"]\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. 
Minimum value\n # is 1.\n failureThreshold: 1\n # GRPC specifies an action involving a GRPC port.\n grpc:\n # Port number of the gRPC service. Number must be in the range\n # 1 to 65535.\n port: 1\n # Service is the name of the service to place in the gRPC\n # HealthCheckRequest\n # (see\n # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n # If this is not specified, the default behavior is defined\n # by gRPC.\n service: string\n # HTTPGet specifies the http request to perform.\n httpGet:\n # Host name to connect to, defaults to the pod IP. You\n # probably want to set "Host" in httpHeaders instead.\n host: string\n # Custom headers to set in the request. HTTP allows repeated\n # headers.\n httpHeaders:\n - name: string\n # The header field value\n value: string\n # Path to access on the HTTP server.\n path: string\n # Name or number of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Scheme to use for connecting to the host. Defaults to HTTP.\n scheme: string\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 1\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 1\n # Minimum consecutive successes for the probe to be considered\n # successful after having failed. Defaults to 1. Must be 1 for\n # liveness and startup. Minimum value is 1.\n successThreshold: 1\n # TCPSocket specifies an action involving a TCP port.\n tcpSocket:\n # Optional: Host name to connect to, defaults to the pod IP.\n host: string\n # Number or name of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Optional duration in seconds the pod needs to terminate\n # gracefully upon probe failure. The grace period is the\n # duration in seconds after the processes running in the pod\n # are sent a termination signal and the time when the processes\n # are forcibly halted with a kill signal. Set this value longer\n # than the expected cleanup time for your process. If this\n # value is nil, the pod's terminationGracePeriodSeconds will be\n # used. Otherwise, this value overrides the value provided by\n # the pod spec. Value must be non-negative integer. The value\n # zero indicates stop immediately via the kill signal\n # (no opportunity to shut down). This is a beta field and\n # requires enabling ProbeTerminationGracePeriod feature gate.\n # Minimum value is 1. spec.terminationGracePeriodSeconds is\n # used if unset.\n terminationGracePeriodSeconds: 1\n # Number of seconds after which the probe times out. Defaults to\n # 1 second. Minimum value is 1. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n timeoutSeconds: 1\n # Set the Prometheus logging level.\n logLevel: "debug"\n # Logs - Set the location of the Stats log files.\n logs: "/opt/gpudb/kagent/stats/logs"\n name: "stats"\n # Periodic probe of container service readiness. Container will be\n # removed from service endpoints if the probe fails. Cannot be\n # updated. 
More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n readinessProbe:\n # Exec specifies the action to take.\n exec:\n # Command is the command line to execute inside the container,\n # the working directory for the command is root ('/') in the\n # container's filesystem. The command is simply exec'd, it is\n # not run inside a shell, so traditional shell instructions\n # ('|', etc) won't work. To use a shell, you need to\n # explicitly call out to that shell. Exit status of 0 is\n # treated as live/healthy and non-zero is unhealthy.\n command: [\"string\"]\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. Minimum value\n # is 1.\n failureThreshold: 1\n # GRPC specifies an action involving a GRPC port.\n grpc:\n # Port number of the gRPC service. Number must be in the range\n # 1 to 65535.\n port: 1\n # Service is the name of the service to place in the gRPC\n # HealthCheckRequest\n # (see\n # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n # If this is not specified, the default behavior is defined\n # by gRPC.\n service: string\n # HTTPGet specifies the http request to perform.\n httpGet:\n # Host name to connect to, defaults to the pod IP. You\n # probably want to set \"Host\" in httpHeaders instead.\n host: string\n # Custom headers to set in the request. HTTP allows repeated\n # headers.\n httpHeaders:\n - name: string\n # The header field value\n value: string\n # Path to access on the HTTP server.\n path: string\n # Name or number of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Scheme to use for connecting to the host. Defaults to HTTP.\n scheme: string\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 1\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 1\n # Minimum consecutive successes for the probe to be considered\n # successful after having failed. Defaults to 1. Must be 1 for\n # liveness and startup. Minimum value is 1.\n successThreshold: 1\n # TCPSocket specifies an action involving a TCP port.\n tcpSocket:\n # Optional: Host name to connect to, defaults to the pod IP.\n host: string\n # Number or name of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Optional duration in seconds the pod needs to terminate\n # gracefully upon probe failure. The grace period is the\n # duration in seconds after the processes running in the pod\n # are sent a termination signal and the time when the processes\n # are forcibly halted with a kill signal. Set this value longer\n # than the expected cleanup time for your process. If this\n # value is nil, the pod's terminationGracePeriodSeconds will be\n # used. Otherwise, this value overrides the value provided by\n # the pod spec. Value must be non-negative integer. The value\n # zero indicates stop immediately via the kill signal\n # (no opportunity to shut down). This is a beta field and\n # requires enabling ProbeTerminationGracePeriod feature gate.\n # Minimum value is 1. spec.terminationGracePeriodSeconds is\n # used if unset.\n terminationGracePeriodSeconds: 1\n # Number of seconds after which the probe times out. 
Defaults to\n # 1 second. Minimum value is 1. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n timeoutSeconds: 1\n # Resource Requests & Limits for the Stats Pod.\n resources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this container. This is\n # an alpha field and requires enabling the\n # DynamicResourceAllocation feature gate. This field is\n # immutable. It can only be set for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute resources\n # allowed. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute resources\n # required. If Requests is omitted for a container, it defaults\n # to Limits if that is explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot exceed Limits.\n # More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n # Set the location of the TSDB database.\n storageTSDBPath: "/opt/gpudb/kagent/stats/storage/prometheus-storage"\n # Set the time to hold data in the TSDB database.\n storageTSDBRetentionTime: "7d"\n # Timings - Prometheus Intervals & Timeouts\n timings:\n evaluationInterval: "30s"\n scrapeInterval: "30s"\n scrapeTimeout: "10s"\n # Whether to share a single PV for Loki, Prometheus & Grafana or\n # have a separate PV for each. Default: true\n sharedPV: true\n # Resource block specifically for use with SharedPV = true to set\n # storage `requests` & `limits`\n sharedPVResources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this container. This is\n # an alpha field and requires enabling the\n # DynamicResourceAllocation feature gate. This field is\n # immutable. It can only be set for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute resources\n # allowed. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute resources\n # required. If Requests is omitted for a container, it defaults\n # to Limits if that is explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot exceed Limits.\n # More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}
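# --- Example (illustrative only, not part of the generated reference):\n # tuning the Stats Prometheus retention and scrape cadence with the fields\n # documented above; the values shown are arbitrary examples.\n # prometheus:\n #   storageTSDBRetentionTime: "14d"\n #   timings:\n #     evaluationInterval: "30s"\n #     scrapeInterval: "60s"\n #     scrapeTimeout: "10s"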
Requests cannot exceed Limits.\n # More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n # Set the name of the container image to use.\n busybox:\n # Set the policy for pulling container images.\n imagePullPolicy: \"IfNotPresent\"\n # ImagePullSecrets is an optional list of references to secrets in\n # the same gpudb-namespace to use for pulling any of the images\n # used by this PodSpec. If specified, these secrets will be\n # passed to individual puller implementations for them to use.\n # For example, in the case of docker, only DockerConfig type\n # secrets are honored.\n imagePullSecrets:\n - name: string\n # The image registry & optional port containing the repository.\n registry: \"docker.io\"\n # The image repository path.\n repository: \"kineticadevcloud/\"\n # SemVer = Semantic Version for the Tag\n semVer: string\n # The image sha.\n sha: \"\"\n # The image tag.\n tag: \"v7.1.5.2\"\n # Set the name of the container image to use.\n socat:\n # Set the policy for pulling container images.\n imagePullPolicy: \"IfNotPresent\"\n # ImagePullSecrets is an optional list of references to secrets in\n # the same gpudb-namespace to use for pulling any of the images\n # used by this PodSpec. If specified, these secrets will be\n # passed to individual puller implementations for them to use.\n # For example, in the case of docker, only DockerConfig type\n # secrets are honored.\n imagePullSecrets:\n - name: string\n # The image registry & optional port containing the repository.\n registry: \"docker.io\"\n # The image repository path.\n repository: \"kineticadevcloud/\"\n # SemVer = Semantic Version for the Tag\n semVer: string\n # The image sha.\n sha: \"\"\n # The image tag.\n tag: \"v7.1.5.2\"\n # Set the resource requests/limits for the Socat Pod.\n socatResources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this container. This is\n # an alpha field and requires enabling the\n # DynamicResourceAllocation feature gate. This field is\n # immutable. It can only be set for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute resources\n # allowed. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute resources\n # required. If Requests is omitted for a container, it defaults\n # to Limits if that is explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot exceed Limits.\n # More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n# KineticaClusterStatus defines the observed state of KineticaCluster\nstatus:\n # CloudProvider the DB is deployed on\n cloudProvider: string\n # CloudRegion the DB is deployed on\n cloudRegion: string\n # ClusterSize the current number of ranks & type i.e. CPU or GPU of\n # the cluster\n clusterSize:\n # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a\n # representation of the number of nodes in a simple to understand\n # T-Shirt size scheme. This indicates the size of the cluster i.e.\n # the number of nodes. It does not identify the size of the cloud\n # provider nodes. For node size see ClusterTypeEnum. Supported\n # Values are: - XS S M L XL XXL XXXL\n tshirtSize: string\n # ClusterTypeEnum - An Enum of the node types of a KineticaCluster\n # e.g. 
CPU, GPU along with the Cloud Provider node size e.g. size\n # of the VM.\n tshirtType: string\n # The number of ranks (replicas) that the cluster was last run with\n currentReplicas: 0\n # The first start of a new cluster has completed.\n firstStartComplete: false\n # HostManagerStatusResponse - The contents of polling the HostManager\n # on port 9300 are added to the BR status field. This allows clients\n # to get the Host/Rank/Graph/ML status information.\n hmStatus:\n  cluster_leader: string\n  cluster_operation: string\n  graph:\n   status: string\n  graph_status: string\n  host_httpd_status: string\n  host_mode: string\n  host_num_gpus: string\n  host_pid: 1\n  host_stats_status: string\n  host_status: string\n  hostname: string\n  hosts:\n   graph_status: string\n   host_httpd_status: string\n   host_mode: string\n   host_pid: 1\n   host_stats_status: string\n   host_status: string\n   ml_status: string\n   query_planner_status: string\n   reveal_status: string\n  license_expiration: string\n  license_status: string\n  license_type: string\n  ml_status: string\n  query_planner_status: string\n  ranks:\n   mode: string\n   # Pid - The OS Process Id for the Rank.\n   pid: 1\n   status: string\n  reveal_status: string\n  system_idle_time: string\n  system_mode: string\n  system_rebalancing: 1\n  system_status: string\n  text:\n   status: string\n  version: string\n # The fully qualified Ingress routes.\n ingressUrls:\n  aaw: string\n  dbMonitor: string\n  files: string\n  gadmin: string\n  postgresProxy: string\n  ranks: {}\n  reveal: string\n # The fully qualified in-cluster Ingress routes.\n internalIngressUrls:\n  aaw: string\n  dbMonitor: string\n  files: string\n  gadmin: string\n  postgresProxy: string\n  ranks: {}\n  reveal: string\n # Identify FreeSaaS Cluster\n isFreeSaaS: false\n # HostOptions used during DB Cluster Scaling Functions\n options:\n  ram_limit: 1\n # OutstandingBilling - A list of hours not yet billed for. 
Will only\n # be present if the plan is Pay As You Go and the operator was unable\n # to send the billing information due to an issue with the cloud\n # provider's billing APIs.\n outstandingBillableHour:\n - billable: true\n   billed: true\n   billedAt: string\n   duration: string\n   end: string\n   start: string\n # The state or phase of the current DB installation\n phase: string\n</code></pre>","tags":["Reference"]},{"location":"Reference/kinetica_workbench/","title":"Workbench CRD Reference","text":"","tags":["Reference"]},{"location":"Reference/kinetica_workbench/#coming-soon","title":"Coming Soon","text":"","tags":["Reference"]},{"location":"Reference/workbench/","title":"Kinetica Workbench Configuration","text":"<ul> <li>kubectl (yaml)</li> <li>Helm Chart</li> </ul>","tags":["Reference"]},{"location":"Reference/workbench/#workbench","title":"Workbench","text":"kubectl <p>Using kubectl, a CustomResource of type <code>Workbench</code> is used to define a new Kinetica Workbench in a yaml file.</p> <p>The basic Group, Version, Kind or GVK to instantiate a Kinetica Workbench is as follows: -</p> Workbench GVK<pre><code>apiVersion: workbench.com.kinetica/v1\nkind: Workbench\n</code></pre>","tags":["Reference"]},{"location":"Reference/workbench/#metadata","title":"Metadata","text":"<p>to which we add a <code>metadata:</code> block for the name of the Workbench CR along with the <code>namespace</code> into which we are targeting the installation.</p> Workbench metadata<pre><code>apiVersion: workbench.com.kinetica/v1\nkind: Workbench\nmetadata:\n name: workbench-kinetica-cluster\n namespace: gpudb\n</code></pre> <p>The simplest valid Workbench CR looks as follows: -</p> workbench.yaml<pre><code>apiVersion: workbench.com.kinetica/v1\nkind: Workbench\nmetadata:\n name: workbench-kinetica-cluster\n namespace: gpudb\nspec:\n executeSqlLimit: 10000\n fqdn: kinetica-cluster.saas.kinetica.com\n image: kinetica/workbench:v7.1.9-8.rc1\n letsEncrypt:\n enabled: false\n userIdleTimeout: 60\n ingressController: nginx-ingress\n</code></pre>
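 <p>Assuming the above is saved as <code>workbench.yaml</code>, it can be submitted to Kubernetes with: -</p> Apply the Workbench CR<pre><code>kubectl apply -f workbench.yaml\n</code></pre> helm","tags":["Reference"]},{"location":"Setup/","title":"Kinetica for Kubernetes Setup","text":"<ul> <li> <p> Set up in 15 minutes </p> <p>Install the Kinetica DB locally on <code>Kind</code> or <code>k3s</code> with <code>helm</code> to get up and running in minutes. Quickstart</p> </li> <li> <p> Prepare to Install</p> <p>What you need to know & do before beginning a production installation. Preparation and Prerequisites</p> </li> <li> <p> Production DB Installation</p> <p>Install the Kinetica DB with helm to get up and running quickly Installation</p> </li> <li> <p> Channel Your Inner Ninja</p> <p>Advanced Installation Topics which go beyond the basic installation. Advanced Topics</p> </li> </ul>","tags":["Getting Started","Installation"]},{"location":"Support/","title":"Support","text":"<ul> <li> <p> Taking the next steps</p> <p>Further tutorials or help on configuring Kinetica in different environments. Help & Tutorials</p> </li> <li> <p> Locating Issues</p> <p>In the unlikely event you require information on how to troubleshoot your installation, help can be found here. Troubleshooting</p> </li> <li> <p> FAQ</p> <p>Frequently Asked Questions. 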
FAQ</p> </li> </ul>","tags":["Support"]},{"location":"Troubleshooting/troubleshooting/","title":"Troubleshooting","text":"","tags":["Support"]},{"location":"Troubleshooting/troubleshooting/#coming-soon","title":"Coming Soon","text":"","tags":["Support"]},{"location":"tags/","title":"Categories","text":"<p>Following is a list of relevant documentation categories:</p>"},{"location":"tags/#aks","title":"AKS","text":"<ul> <li>Azure AKS</li> </ul>"},{"location":"tags/#administration","title":"Administration","text":"<ul> <li>Administration</li> <li>Grant management</li> <li>Resource group management</li> <li>Role Management</li> <li>Schema management</li> <li>User Management</li> <li>Kinetica Cluster Grants Reference</li> <li>Kinetica Cluster Resource Groups Reference</li> <li>Kinetica Cluster Roles Reference</li> <li>Kinetica Cluster Schemas Reference</li> <li>Kinetica Cluster Users Reference</li> </ul>"},{"location":"tags/#advanced","title":"Advanced","text":"<ul> <li>Advanced</li> <li> Advanced Topics</li> <li>Air-Gapped Environments</li> <li>Alternative Charts</li> <li>Ingress Configuration</li> <li> <code>ingress-nginx</code> Ingress Configuration</li> <li>Kinetica DB on OS X (Arm64)</li> <li>Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations</li> <li>Bare Metal/VM Installation - <code>kubeadm</code></li> <li>S3 Storage for Dev/Test</li> <li> <code>nginx-ingress</code> Ingress Configuration</li> </ul>"},{"location":"tags/#architecture","title":"Architecture","text":"<ul> <li>Architecture</li> <li>Core Database Architecture</li> <li>Kubernetes Architecture</li> </ul>"},{"location":"tags/#configuration","title":"Configuration","text":"<ul> <li>Ingress Configuration</li> <li> <code>ingress-nginx</code> Ingress Configuration</li> <li> <code>nginx-ingress</code> Ingress Configuration</li> <li>How to change the Clusters FQDN</li> <li>OpenTelemetry</li> </ul>"},{"location":"tags/#development","title":"Development","text":"<ul> <li>Kinetica DB on OS X (Arm64)</li> <li>S3 Storage for Dev/Test</li> <li>Quickstart</li> </ul>"},{"location":"tags/#eks","title":"EKS","text":"<ul> <li>Amazon EKS</li> </ul>"},{"location":"tags/#getting-started","title":"Getting Started","text":"<ul> <li>Getting Started</li> <li>Azure AKS</li> <li>Amazon EKS</li> <li>Preparation & Prerequisites</li> <li>Quickstart</li> <li>Kinetica for Kubernetes Setup</li> </ul>"},{"location":"tags/#ingress","title":"Ingress","text":"<ul> <li>Ingress Configuration</li> <li> <code>ingress-nginx</code> Ingress Configuration</li> <li>Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations</li> <li> <code>nginx-ingress</code> Ingress Configuration</li> </ul>"},{"location":"tags/#installation","title":"Installation","text":"<ul> <li>Air-Gapped Environments</li> <li>Alternative Charts</li> <li>Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations</li> <li>Bare Metal/VM Installation - <code>kubeadm</code></li> <li>S3 Storage for Dev/Test</li> <li>Getting Started</li> <li>Kinetica for Kubernetes Installation</li> <li>CPU</li> <li>GPU</li> <li>Preparation & Prerequisites</li> <li>Quickstart</li> <li> Core DB CRDs</li> <li>Kinetica for Kubernetes Setup</li> </ul>"},{"location":"tags/#monitoring","title":"Monitoring","text":"<ul> <li>Logs</li> <li> Metrics Collection & Display</li> <li>OpenTelemetry</li> </ul>"},{"location":"tags/#operations","title":"Operations","text":"<ul> <li>Logs</li> <li> Metrics Collection & Display</li> <li>Operational Management</li> <li>Kinetica for Kubernetes Backup & Restore</li> 
<li>OpenTelemetry</li> <li>Kinetica for Kubernetes Data Rebalancing</li> <li>Kinetica for Kubernetes Suspend & Resume</li> <li>Kinetica Cluster Backups Reference</li> <li> Core DB CRDs</li> <li>Kinetica Cluster Restores Reference</li> </ul>"},{"location":"tags/#reference","title":"Reference","text":"<ul> <li>Reference Section</li> <li>Kinetica Database Configuration</li> <li>Kinetica Operators</li> <li>Kinetica Cluster Admins Reference</li> <li>Kinetica Cluster Backups Reference</li> <li>Kinetica Cluster Grants Reference</li> <li> Core DB CRDs</li> <li>Kinetica Cluster Resource Groups Reference</li> <li>Kinetica Cluster Restores Reference</li> <li>Kinetica Cluster Roles Reference</li> <li>Kinetica Cluster Schemas Reference</li> <li>Kinetica Cluster Users Reference</li> <li>Kinetica Clusters Reference</li> <li>Kinetica Workbench Reference</li> <li>Kinetica Workbench Configuration</li> </ul>"},{"location":"tags/#storage","title":"Storage","text":"<ul> <li>S3 Storage for Dev/Test</li> <li>Amazon EKS</li> </ul>"},{"location":"tags/#support","title":"Support","text":"<ul> <li>How to change the Clusters FQDN</li> <li>FAQ</li> <li>Help & Tutorials</li> <li>Creating Users, Roles, Schemas and other Kinetica DB Objects</li> <li>Support</li> <li>Troubleshooting</li> </ul>"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"<code>kineticadb/charts</code>","text":"<p>Accelerate your AI and analytics. Kinetica harnesses real-time data and the power of CPUs & GPUs for lightning-fast insights. It is uniquely designed for fast and flexible analytics on large volumes of changing data.</p> <p>Kinetica DB can be quickly installed into Kubernetes using Helm.</p> <ul> <li> <p> Set up in 15 minutes </p> <p>Install the Kinetica DB locally on <code>Kind</code> or <code>k3s</code> with <code>helm</code> to get up and running in minutes. Quickstart</p> </li> <li> <p> Prepare to Install</p> <p>What you need to know & do before beginning a production installation. Preparation and Prerequisites</p> </li> <li> <p> Production DB Installation</p> <p>Install the Kinetica DB with helm to get up and running quickly Installation</p> </li> <li> <p> Channel Your Inner Ninja</p> <p>Advanced Installation Topics which go beyond the basic installation. Advanced Topics</p> </li> <li> <p> Running and Managing the Platform</p> <p>Metrics, Monitoring, Logs and Telemetry Distribution. Operations</p> </li> <li> <p> Product Architecture</p> <p>The Modern Analytics Database Architected for Performance at Scale. Architecture</p> </li> <li> <p> Support</p> <p>Additional Help, Tutorials and Troubleshooting resources. Support</p> </li> <li> <p> Configuration in Detail</p> <p>Detailed reference material for the Helm Charts & Kinetica for Kubernetes CRDs. 
Reference Documentation</p> </li> </ul>"},{"location":"tags/","title":"Categories","text":"<p>Following is a list of relevant documentation categories:</p>"},{"location":"tags/#aks","title":"AKS","text":"<ul> <li>Azure AKS</li> </ul>"},{"location":"tags/#administration","title":"Administration","text":"<ul> <li>Administration</li> <li>Grant management</li> <li>Resource group management</li> <li>Role Management</li> <li>Schema management</li> <li>User Management</li> <li>Kinetica Cluster Grants Reference</li> <li>Kinetica Cluster Resource Groups Reference</li> <li>Kinetica Cluster Roles Reference</li> <li>Kinetica Cluster Schemas Reference</li> <li>Kinetica Cluster Users Reference</li> </ul>"},{"location":"tags/#advanced","title":"Advanced","text":"<ul> <li>Advanced</li> <li> Advanced Topics</li> <li>Air-Gapped Environments</li> <li>Alternative Charts</li> <li>Ingress Configuration</li> <li> <code>ingress-nginx</code> Ingress Configuration</li> <li>Kinetica DB on OS X (Arm64)</li> <li>Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations</li> <li>Bare Metal/VM Installation - <code>kubeadm</code></li> <li>S3 Storage for Dev/Test</li> <li> <code>nginx-ingress</code> Ingress Configuration</li> </ul>"},{"location":"tags/#architecture","title":"Architecture","text":"<ul> <li>Architecture</li> <li>Core Database Architecture</li> <li>Kubernetes Architecture</li> </ul>"},{"location":"tags/#configuration","title":"Configuration","text":"<ul> <li>Ingress Configuration</li> <li> <code>ingress-nginx</code> Ingress Configuration</li> <li> <code>nginx-ingress</code> Ingress Configuration</li> <li>How to change the Clusters FQDN</li> <li>OpenTelemetry</li> </ul>"},{"location":"tags/#development","title":"Development","text":"<ul> <li>Kinetica DB on OS X (Arm64)</li> <li>S3 Storage for Dev/Test</li> <li>Quickstart</li> </ul>"},{"location":"tags/#eks","title":"EKS","text":"<ul> <li>Amazon EKS</li> </ul>"},{"location":"tags/#getting-started","title":"Getting Started","text":"<ul> <li>Getting Started</li> <li>Azure AKS</li> <li>Amazon EKS</li> <li>Preparation & Prerequisites</li> <li>Quickstart</li> <li>Kinetica for Kubernetes Setup</li> </ul>"},{"location":"tags/#ingress","title":"Ingress","text":"<ul> <li>Ingress Configuration</li> <li> <code>ingress-nginx</code> Ingress Configuration</li> <li>Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations</li> <li> <code>nginx-ingress</code> Ingress Configuration</li> </ul>"},{"location":"tags/#installation","title":"Installation","text":"<ul> <li>Air-Gapped Environments</li> <li>Alternative Charts</li> <li>Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations</li> <li>Bare Metal/VM Installation - <code>kubeadm</code></li> <li>S3 Storage for Dev/Test</li> <li>Getting Started</li> <li>Kinetica for Kubernetes Installation</li> <li>CPU</li> <li>GPU</li> <li>Preparation & Prerequisites</li> <li>Quickstart</li> <li> Core DB CRDs</li> <li>Kinetica for Kubernetes Setup</li> </ul>"},{"location":"tags/#monitoring","title":"Monitoring","text":"<ul> <li>Logs</li> <li> Metrics Collection & Display</li> <li>OpenTelemetry</li> </ul>"},{"location":"tags/#operations","title":"Operations","text":"<ul> <li>Logs</li> <li> Metrics Collection & Display</li> <li>Operational Management</li> <li>Kinetica for Kubernetes Backup & Restore</li> <li>OpenTelemetry</li> <li>Kinetica for Kubernetes Data Rebalancing</li> <li>Kinetica for Kubernetes Suspend & Resume</li> <li>Kinetica Cluster Backups Reference</li> <li> Core DB CRDs</li> <li>Kinetica Cluster Restores 
Reference</li> </ul>"},{"location":"tags/#reference","title":"Reference","text":"<ul> <li>Reference Section</li> <li>Kinetica Database Configuration</li> <li>Kinetica Operators</li> <li>Kinetica Cluster Admins Reference</li> <li>Kinetica Cluster Backups Reference</li> <li>Kinetica Cluster Grants Reference</li> <li> Core DB CRDs</li> <li>Kinetica Cluster Resource Groups Reference</li> <li>Kinetica Cluster Restores Reference</li> <li>Kinetica Cluster Roles Reference</li> <li>Kinetica Cluster Schemas Reference</li> <li>Kinetica Cluster Users Reference</li> <li>Kinetica Clusters Reference</li> <li>Kinetica Workbench Reference</li> <li>Kinetica Workbench Configuration</li> </ul>"},{"location":"tags/#storage","title":"Storage","text":"<ul> <li>S3 Storage for Dev/Test</li> <li>Amazon EKS</li> </ul>"},{"location":"tags/#support","title":"Support","text":"<ul> <li>How to change the Clusters FQDN</li> <li>FAQ</li> <li>Help & Tutorials</li> <li>Creating Users, Roles, Schemas and other Kinetica DB Objects</li> <li>Support</li> <li>Troubleshooting</li> </ul>"},{"location":"Administration/","title":"Administration","text":"<ul> <li> <p> DB Clusters</p> <p>Core Kinetica Database Cluster Management.</p> <p> KineticaCluster</p> </li> <li> <p> DB Users</p> <p>Kinetica Database User Management.</p> <p> KineticaUser</p> </li> <li> <p> DB Roles</p> <p>Kinetica Database Role Management.</p> <p> KineticaRole</p> </li> <li> <p> DB Schemas</p> <p>Kinetica Database Schema Management.</p> <p> KineticaSchema</p> </li> <li> <p> DB Grants</p> <p>Kinetica Database Grant Management.</p> <p> KineticaGrant</p> </li> <li> <p> DB Resource Groups</p> <p>Kinetica Database Resource Group Management.</p> <p> KineticaResourceGroup</p> </li> <li> <p> DB Administration</p> <p>Kinetica Database Administration.</p> <p> KineticaAdmin</p> </li> <li> <p> DB Backups</p> <p>Kinetica Database Backup Management.</p> <p>Note</p> <p>This requires Velero to be installed on the Kubernetes Cluster.</p> <p> KineticaBackup</p> </li> <li> <p> DB Restore</p> <p>Kinetica Database Restoration.</p> <p>Note</p> <p>This requires Velero to be installed on the Kubernetes Cluster.</p> <p> KineticaRestore</p> </li> </ul> <p> Home</p>","tags":["Administration"]},{"location":"Administration/role_management/","title":"Role Management","text":"<p>Management of roles is done with the <code>KineticaRole</code> CRD. </p> <p>kubectl Usage</p> <p>From the <code>kubectl</code> command line, they are referenced by <code>kineticaroles</code>, or by the short form <code>kr</code>.</p>","tags":["Administration"]},{"location":"Administration/role_management/#list-roles","title":"List Roles","text":"<p>To list the roles deployed to a Kinetica DB installation we can use the following from the command-line: -</p> <p><code>kubectl -n gpudb get kineticaroles</code> or <code>kubectl -n gpudb get kr</code></p> <p>where the namespace <code>-n gpudb</code> matches the namespace of the Kinetica DB installation.</p> <p>This outputs</p> Name Ring Name Role Resource Group Name LDAP DB db-users kinetica-k8s-sample db_users OK OK global-admins kinetica-k8s-sample global_admins OK OK","tags":["Administration"]},{"location":"Administration/role_management/#name","title":"Name","text":"<p>The name of the Kubernetes CR i.e. 
the <code>metadata.name</code>; this is not necessarily the name of the role.</p>","tags":["Administration"]},{"location":"Administration/role_management/#ring-name","title":"Ring Name","text":"<p>The name of the <code>KineticaCluster</code> the role is created in.</p>","tags":["Administration"]},{"location":"Administration/role_management/#role-name","title":"Role Name","text":"<p>The name of the role as contained within LDAP & the DB.</p>","tags":["Administration"]},{"location":"Administration/role_management/#role-creation","title":"Role Creation","text":"test-role-2.yaml<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaRole\nmetadata:\n name: test-role-2\n namespace: gpudb\nspec:\n ringName: kineticacluster-sample\n role:\n name: \"test_role2\"\n</code></pre>
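 <p>Assuming the above is saved as <code>test-role-2.yaml</code>, applying it will have the operator create the role: -</p> Create the Role<pre><code>kubectl apply -f test-role-2.yaml\n</code></pre>","tags":["Administration"]},{"location":"Administration/role_management/#role-deletion","title":"Role Deletion","text":"<p>To delete a role from the Kinetica Cluster simply delete the Role CR from Kubernetes: -</p> Delete Role<pre><code>kubectl -n gpudb delete kr test-role-2 \n</code></pre>","tags":["Administration"]},{"location":"Administration/user_management/","title":"User Management","text":"<p>Management of users is done with the <code>KineticaUser</code> CRD. </p> <p>kubectl Usage</p> <p>From the <code>kubectl</code> command line, they are referenced by <code>kineticausers</code>, or by the short form <code>ku</code>.</p>","tags":["Administration"]},{"location":"Administration/user_management/#list-users","title":"List Users","text":"<p>To list the users deployed to a Kinetica DB installation we can use the following from the command-line: -</p> <p><code>kubectl -n gpudb get kineticausers</code> or <code>kubectl -n gpudb get ku</code></p> <p>where the namespace <code>-n gpudb</code> matches the namespace of the Kinetica DB installation.</p> <p>This outputs </p> Name Action Ring Name UID Last Name Given Name Display Name LDAP DB Reveal kadmin upsert kinetica-k8s-sample kadmin Account Admin Admin Account OK OK OK","tags":["Administration"]},{"location":"Administration/user_management/#name","title":"Name","text":"<p>The name of the Kubernetes CR i.e. the <code>metadata.name</code>; this is not necessarily the name of the user.</p>","tags":["Administration"]},{"location":"Administration/user_management/#action","title":"Action","text":"<p>There are two actions possible on a <code>KineticaUser</code>. The first is <code>upsert</code> which is for user creation or modification. The second is <code>change-password</code> which shows when a user password reset has been performed.</p>","tags":["Administration"]},{"location":"Administration/user_management/#ring-name","title":"Ring Name","text":"<p>The name of the <code>KineticaCluster</code> the user is created in.</p>","tags":["Administration"]},{"location":"Administration/user_management/#uid","title":"UID","text":"<p>The unique user id to use in LDAP & the DB to reference this user.</p>","tags":["Administration"]},{"location":"Administration/user_management/#last-name","title":"Last Name","text":"<p>Last Name refers to last name or surname. </p> <p><code>sn</code> in LDAP terms.</p>","tags":["Administration"]},{"location":"Administration/user_management/#given-name","title":"Given Name","text":"<p>Given Name is the first name, also called the Christian name. 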
</p> <p><code>givenName</code> in LDAP terms.</p>","tags":["Administration"]},{"location":"Administration/user_management/#display-name","title":"Display Name","text":"<p>The name shown on any UI representation.</p>","tags":["Administration"]},{"location":"Administration/user_management/#ldap","title":"LDAP","text":"<p>Identifies if the user has been successfully created within LDAP. </p> <ul> <li>'' - if empty the user has not yet been created in LDAP</li> <li>'OK' - shows the user has been successfully created within LDAP</li> <li>'Failed' - shows there was a failure adding the user to LDAP</li> </ul>","tags":["Administration"]},{"location":"Administration/user_management/#db","title":"DB","text":"<p>Identifies if the user has been successfully created within the DB.</p> <ul> <li>'' - if empty the user has not yet been created in the DB</li> <li>'OK' - shows the user has been successfully created within the DB</li> <li>'Failed' - shows there was a failure adding the user to the DB</li> </ul>","tags":["Administration"]},{"location":"Administration/user_management/#reveal","title":"Reveal","text":"<p>Identifies if the user has been successfully created within Reveal.</p> <ul> <li>'' - if empty the user has not yet been created in Reveal</li> <li>'OK' - shows the user has been successfully created within Reveal</li> <li>'Failed' - shows there was a failure adding the user to Reveal</li> </ul>","tags":["Administration"]},{"location":"Administration/user_management/#user-creation","title":"User Creation","text":"<p>User creation requires two Kubernetes CRs to be submitted to Kubernetes and processed by the Kinetica DB Operator.</p> <ul> <li>User Secret (Password)</li> <li>Kinetica User</li> </ul> <p>Creation Sequence</p> <p>It is preferable to create the User Secret prior to creating the <code>KineticaUser</code>.</p> <p>Secret Deletion</p> <p>The User Secret will be deleted once the <code>KineticaUser</code> is created by the operator. 
The user's password will be stored in LDAP and not be present in Kubernetes.</p>","tags":["Administration"]},{"location":"Administration/user_management/#user-secret","title":"User Secret","text":"<p>In this example a user Fred Smith will be created.</p> fred-smith-secret.yaml<pre><code>apiVersion: v1\nkind: Secret\nmetadata:\n name: fred-smith-secret\n namespace: gpudb\nstringData:\n password: testpassword\n</code></pre> Create the User Password Secret<pre><code>kubectl apply -f fred-smith-secret.yaml\n</code></pre>","tags":["Administration"]},{"location":"Administration/user_management/#kineticauser","title":"<code>KineticaUser</code>","text":"user-fred-smith.yaml<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaUser\nmetadata:\n name: user-fred-smith\n namespace: gpudb\nspec:\n ringName: kineticacluster-sample\n uid: fred\n action: upsert\n reveal: true\n upsert:\n userPrincipalName: fred.smith@example.com\n givenName: Fred\n displayName: FredSmith\n lastName: Smith\n passwordSecret: fred-smith-secret\n</code></pre>
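 <p>Assuming the above is saved as <code>user-fred-smith.yaml</code>, apply it and the operator will create the user: -</p> Create the User<pre><code>kubectl apply -f user-fred-smith.yaml\n</code></pre>","tags":["Administration"]},{"location":"Administration/user_management/#user-deletion","title":"User Deletion","text":"<p>To delete a user from the Kinetica Cluster simply delete the User CR from Kubernetes: -</p> Delete User<pre><code>kubectl -n gpudb delete ku user-fred-smith \n</code></pre>","tags":["Administration"]},{"location":"Administration/user_management/#change-password","title":"Change Password","text":"<p>To change a user's password we use the <code>change-password</code> action rather than the <code>upsert</code> action we used previously.</p> <p>Creation Sequence</p> <p>It is preferable to create the User Secret prior to creating the <code>KineticaUser</code>.</p> <p>Secret Deletion</p> <p>The User Secret will be deleted once the <code>KineticaUser</code> is created by the operator. 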
The user's password will be stored in LDAP and not be present in Kubernetes.</p> fred-smith-change-pwd-secret.yaml<pre><code>apiVersion: v1\nkind: Secret\nmetadata:\n name: fred-smith-change-pwd-secret\n namespace: gpudb\nstringData:\n password: testpassword\n</code></pre> Create the User Password Secret<pre><code>kubectl apply -f fred-smith-change-pwd-secret.yaml\n</code></pre> user-fred-smith-change-password.yaml<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaUser\nmetadata:\n name: user-fred-smith-change-password\n namespace: gpudb\nspec:\n ringName: kineticacluster-sample\n uid: fred\n action: change-password\n changePassword:\n passwordSecret: fred-smith-change-pwd-secret\n</code></pre>","tags":["Administration"]},{"location":"Administration/user_management/#advanced-topics","title":"Advanced Topics","text":"","tags":["Administration"]},{"location":"Administration/user_management/#limit-user-resources","title":"Limit User Resources","text":"","tags":["Administration"]},{"location":"Administration/user_management/#data-limit","title":"Data Limit","text":"<p>Kifs user data size limit.</p> dataLimit<pre><code>spec:\n upsert:\n dataLimit: 10Gi\n</code></pre>","tags":["Administration"]},{"location":"Administration/user_management/#user-kifs-usage","title":"User Kifs Usage","text":"<p>Kifs Enablement</p> <p>In order to use the Kifs user features below there is a requirement that Kifs is enabled on the Kinetica DB.</p>","tags":["Administration"]},{"location":"Administration/user_management/#home-directory","title":"Home Directory","text":"<p>When creating a new user it is possible to create a 'home' directory for that user within the Kifs filesystem by using the <code>createHomeDirectory</code> option.</p> createHomeDirectory<pre><code>spec:\n upsert:\n createHomeDirectory: true\n</code></pre>","tags":["Administration"]},{"location":"Administration/user_management/#limit-directory-storage","title":"Limit Directory Storage","text":"<p>It is possible to limit the amount of Kifs file storage the user has by adding <code>kifsDataLimit</code> to the user creation yaml and setting the value to a Kubernetes Quantity e.g. <code>2Gi</code>.</p> kifsDataLimit<pre><code>spec:\n upsert:\n kifsDataLimit: 2Gi\n</code></pre>
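 <p>These <code>spec.upsert</code> options can be combined in a single <code>KineticaUser</code>; a sketch with illustrative values: -</p> Combined Kifs options<pre><code>spec:\n upsert:\n createHomeDirectory: true\n dataLimit: 10Gi\n kifsDataLimit: 2Gi\n</code></pre>","tags":["Administration"]},{"location":"Advanced/","title":"Advanced Topics","text":"<ul> <li> <p> Find alternative chart versions </p> <p>How to use a pre-release or development Chart version if requested to by Kinetica Support. Alternative Charts</p> </li> <li> <p> Configuring Ingress Records </p> <p>How to expose Kinetica via Kubernetes Ingress. Ingress Configuration</p> </li> <li> <p> Air-Gapped Environments </p> <p>Specifics for installing Kinetica for Kubernetes in an Air-Gapped Environment Airgapped</p> </li> <li> <p> Using your own OpenTelemetry Collector</p> <p>How to configure Kinetica for Kubernetes to use your own OpenTelemetry collector. </p> <p> External OTEL</p> </li> <li> <p> Minio for Dev/Test S3 Storage </p> <p>Install Minio in order to enable S3 storage for Development.</p> <p> min.io</p> </li> <li> <p> Creating Resources with Kubernetes APIs </p> <p>Create Users, Roles, DB Schema etc. using Kubernetes CRs. Resources</p> </li> <li> <p> Kinetica on OS X (Apple Silicon) </p> <p>Install the Kinetica DB on a new Kubernetes 'production-like' cluster on Apple OS X (Apple Silicon) using UTM. 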
Apple ARM64</p> </li> <li> <p> Bare Metal/VM Installation from Scratch </p> <p>Install the Kinetica DB on a new Kubernetes 'production-like' bare metal (or VMs) cluster via <code>kubeadm</code> using <code>cilium</code> Networking, <code>kube-vip</code> LoadBalancer. Bare Metal/VM Installation</p> </li> <li> <p> Software LoadBalancer </p> <p>Install a software Kubernetes CCM/LoadBalancer for bare metal or VM based Kubernetes Clusters. <code>kube-vip</code> LoadBalancer.</p> <p> Software LoadBalancer</p> </li> </ul>","tags":["Advanced"]},{"location":"Advanced/advanced_topics/","title":"Advanced Topics","text":"","tags":["Advanced"]},{"location":"Advanced/advanced_topics/#install-from-a-developmentpre-release-chart-version","title":"Install from a development/pre-release chart version","text":"<p>Find all alternative chart versions with:</p> Find alternative chart versions<pre><code>helm search repo kinetica-operators --devel --versions\n</code></pre> <p>Then append <code>--devel --version [CHART-DEVEL-VERSION]</code> to the end of the Helm install command. See here.</p>","tags":["Advanced"]},{"location":"Advanced/airgapped/","title":"Air-Gapped Environments","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#obtaining-the-kinetica-images","title":"Obtaining the Kinetica Images","text":"Kinetica Images for an Air-Gapped Environment <p>If you are installing Kinetica with Helm in an air-gapped environment you will either need a Registry Proxy to pass the requests through or to download the images and push them to your internal Registry.</p> <p>For information on ways to transfer the files into an air-gapped environment See here.</p> <p>Please select the method to transfer the images: -</p> mindthegap containerd docker <p>It is possible to use <code>mesosphere/mindthegap</code>.</p> <p>mindthegap</p> <p><code>mindthegap</code> provides utilities to manage air-gapped image bundles, both creating image bundles and seeding images from a bundle into an existing OCI registry or directly loading them to <code>containerd</code>.</p> <p>This makes it possible with <code>mindthegap</code> to</p> <ul> <li>create a single archive bundle of all the required images outside the air-gapped environment</li> <li>run <code>mindthegap</code> using the archive bundle on the Kubernetes Nodes to bulk load the images into <code>containerd</code> in a single command.</li> </ul> <p>Kinetica provides two <code>mindthegap</code> yaml files which list all the necessary images for Kinetica for Kubernetes.</p> <ul> <li>CPU only</li> <li> CPU & nVidia CUDA GPU</li> </ul>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#required-container-images","title":"Required Container Images","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#dockerio-required-kinetica-images-for-all-installations","title":"docker.io (Required Kinetica Images for All Installations)","text":"<ul> <li>docker.io/kinetica/kinetica-k8s-operator:{{kinetica_full_version}}<ul> <li>docker.io/kinetica/kinetica-k8s-cpu:{{kinetica_full_version}} or</li> <li>docker.io/kinetica/kinetica-k8s-cpu-avx512:{{kinetica_full_version}} or</li> <li>docker.io/kinetica/kinetica-k8s-gpu:{{kinetica_full_version}}</li> </ul> </li> <li>docker.io/kinetica/workbench-operator:{{kinetica_full_version}}</li> <li>docker.io/kinetica/workbench:{{kinetica_full_version}}</li> <li>docker.io/kinetica/kinetica-k8s-monitor:{{kinetica_full_version}}</li> <li>docker.io/kinetica/busybox:{{kinetica_full_version}}</li> 
<li>docker.io/kinetica/fluent-bit:{{kinetica_full_version}}</li> <li>docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga</li> </ul>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#nvcrio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"nvcr.io (Required Kinetica Images for GPU Installations using <code>kinetica-k8s-gpu</code>)","text":"<ul> <li>nvcr.io/nvidia/gpu-operator:v23.9.1</li> </ul>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#registryk8sio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"registry.k8s.io (Required Kinetica Images for GPU Installations using <code>kinetica-k8s-gpu</code>)","text":"<ul> <li>registry.k8s.io/nfd/node-feature-discovery:v0.14.2</li> </ul>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#dockerio-required-supporting-images","title":"docker.io (Required Supporting Images)","text":"<ul> <li>docker.io/bitnami/openldap:2.6.7</li> <li>docker.io/alpine/openssl:latest (used by bitnami/openldap)</li> <li>docker.io/otel/opentelemetry-collector-contrib:0.95.0</li> </ul>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#quayio-required-supporting-images","title":"quay.io (Required Supporting Images)","text":"<ul> <li>quay.io/brancz/kube-rbac-proxy:v0.14.2</li> </ul>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#optional-container-images","title":"Optional Container Images","text":"<p>These images are only required if certain features are enabled as part of the Helm installation: -</p> <ul> <li>CertManager</li> <li>ingress-nginx</li> </ul>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#quayio-optional-supporting-images","title":"quay.io (Optional Supporting Images)","text":"<ul> <li>quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> <li>quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> <li>quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> </ul>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#registryk8sio-optional-supporting-images","title":"registry.k8s.io (Optional Supporting Images)","text":"<ul> <li>registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing Ingress nGinx via Kinetica Helm Chart)</li> <li>registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c</li> </ul> <p>It is possible with <code>containerd</code> to pull images, save them and load them either into a Container Registry in the air gapped environment or directly into another <code>containerd</code> instance. </p> <p>If the target <code>containerd</code> is on a node running a Kubernetes Cluster then these images will be sourced by Kubernetes from the loaded images, via CRI, with no requirement to pull them from an external source e.g. 
a Registry or Mirror.</p> <p><code>sudo</code> required</p> <p>Depending on how <code>containerd</code> has been installed and configured, many of the example calls below may require running with <code>sudo</code></p> <p>It is possible with <code>docker</code> to pull images, save them and load them into an OCI Container Registry in the air gapped environment.</p> Pull a remote image (docker)<pre><code>docker pull --platform linux/amd64 docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-5.ga-1\n</code></pre> Export a local image (docker)<pre><code>docker save -o kinetica-k8s-cpu-v7.2.2-5.ga-1.tar \\\ndocker.io/kinetica/kinetica-k8s-cpu:v7.2.2-5.ga-1\n</code></pre> <p>We can now transfer this archive (<code>kinetica-k8s-cpu-v7.2.2-5.ga-1.tar</code>) to the Kubernetes Node inside the air-gapped environment.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#which-kinetica-core-image-do-i-use","title":"Which Kinetica Core Image do I use?","text":"Container Image Intel (AMD64) Intel (AMD64 AVX512) Amd (AMD64) Graviton (aarch64) Apple Silicon (aarch64) kinetica-k8s-cpu (1) kinetica-k8s-cpu-avx512 kinetica-k8s-gpu (2) (2) (2) <ol> <li>It is preferable on an Intel AVX512 enabled CPU to use the kinetica-k8s-cpu-avx512 container image</li> <li>With a supported nVidia GPU.</li> </ol>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#install-mindthegap","title":"Install <code>mindthegap</code>","text":"Install mindthegap<pre><code>wget https://github.com/mesosphere/mindthegap/releases/download/v1.13.1/mindthegap_v1.13.1_linux_amd64.tar.gz\ntar zxvf mindthegap_v1.13.1_linux_amd64.tar.gz\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#mindthegap-create-the-bundle","title":"mindthegap - Create the Bundle","text":"mindthegap create image-bundle<pre><code>mindthegap create image-bundle --images-file mtg.yaml --platform linux/amd64\n</code></pre> <p>where <code>--images-file</code> is either the CPU or GPU Kinetica <code>mindthegap</code> yaml file.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#mindthegap-import-the-bundle","title":"mindthegap - Import the Bundle","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#mindthegap-import-to-containerd","title":"mindthegap - Import to <code>containerd</code>","text":"mindthegap import image-bundle<pre><code>mindthegap import image-bundle --image-bundle images.tar [--containerd-namespace k8s.io]\n</code></pre> <p>If <code>--containerd-namespace</code> is not specified, images will be imported into the <code>k8s.io</code> namespace. 
</p> <p><code>sudo</code> required</p> <p>Depending on how <code>containerd</code> has been installed and configured, it may require running the above command with <code>sudo</code></p>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#mindthegap-import-to-an-internal-oci-registry","title":"mindthegap - Import to an internal OCI Registry","text":"mindthegap push bundle<pre><code>mindthegap push bundle --bundle <path/to/bundle.tar> \\\n--to-registry <registry.address> \\\n[--to-registry-insecure-skip-tls-verify]\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#containerd-using-containerd-to-pull-and-export-an-image","title":"containerd - Using <code>containerd</code> to pull and export an image","text":"<p>Similar to <code>docker pull</code> we can use <code>ctr image pull</code> to pull the core Kinetica DB CPU based image</p> Pull a remote image (containerd)<pre><code>ctr image pull docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-5.ga-1\n</code></pre> <p>We now need to export the pulled image as an archive to the local filesystem.</p> Export a local image (containerd)<pre><code>ctr image export kinetica-k8s-cpu-v7.2.2-5.ga-1.tar \\\ndocker.io/kinetica/kinetica-k8s-cpu:v7.2.2-5.ga-1\n</code></pre> <p>We can now transfer this archive (<code>kinetica-k8s-cpu-v7.2.2-5.ga-1.tar</code>) to the Kubernetes Node inside the air-gapped environment.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#containerd-using-containerd-to-import-an-image","title":"containerd - Using <code>containerd</code> to import an image","text":"<p>Using <code>containerd</code> to import an image on to a Kubernetes Node on which a Kinetica Cluster is running.</p> Import the Images<pre><code>ctr -n=k8s.io images import kinetica-k8s-cpu-v7.2.2-5.ga-1.tar\n</code></pre> <p><code>-n=k8s.io</code></p> <p>It is possible to use <code>ctr images import kinetica-k8s-cpu-v7.2.2-5.ga-1.tar</code> to import the image to <code>containerd</code>.</p> <p>However, in order for the image to be visible to the Kubernetes Cluster running on <code>containerd</code> it is necessary to add the parameter <code>-n=k8s.io</code>.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#containerd-verifying-the-image-is-available","title":"containerd - Verifying the image is available","text":"<p>To verify the image is loaded into <code>containerd</code> on the node run the following on the node: -</p> Verify containerd Images<pre><code>ctr image ls\n</code></pre> <p>To verify the image is visible to Kubernetes on the node run the following: -</p> Verify CRI Images<pre><code>crictl images\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#docker-using-docker-to-import-an-image","title":"docker - Using <code>docker</code> to import an image","text":"<p>Using <code>docker</code> to import an image on to a Kubernetes Node on which a Kinetica Cluster is running. As the archive was created with <code>docker save</code>, it is loaded with <code>docker load</code> and can then be re-tagged for an internal registry: -</p> Import the Images<pre><code>docker load -i kinetica-k8s-cpu-v7.2.2-5.ga-1.tar\ndocker tag docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-5.ga-1 <registry.address>/kinetica/kinetica-k8s-cpu:v7.2.2-5.ga-1\n</code></pre>
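 <p>If the image is destined for an internal OCI Registry, the re-tagged image can then be pushed (the registry address is illustrative): -</p> Push the Image<pre><code>docker push <registry.address>/kinetica/kinetica-k8s-cpu:v7.2.2-5.ga-1\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/alternative_charts/","title":"Using Alternative Helm Charts","text":"<p>If requested by Kinetica Support you can search and use pre-release versions of the Kinetica Helm Charts.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/alternative_charts/#install-from-a-developmentpre-release-chart-version","title":"Install 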
from a development/pre-release chart version","text":"<p>Find all alternative chart versions with:</p> Find alternative chart versions<pre><code>helm search repo kinetica-operators --devel --versions\n</code></pre> <p>Then append <code>--devel --version [CHART-DEVEL-VERSION]</code> to the end of the Helm install command.</p> Helm install kinetica-operators<pre><code>helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--devel \\\n--version 72.0 \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/ingress_configuration/","title":"Ingress Configuration","text":"<ul> <li> <p> <code>ingress-nginx</code> Configuration</p> <p>How to enable Ingress with <code>ingress-nginx</code> for Kinetica DB.</p> <p> <code>ingress-nginx</code></p> </li> <li> <p> <code>nginx-ingress</code> Configuration</p> <p>How to enable Ingress with <code>nginx-ingress</code> for Kinetica DB.</p> <p> <code>nginx-ingress</code></p> </li> </ul>","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/","title":"<code>ingress-nginx</code> Ingress Configuration","text":"<p>To use an 'external' ingress-nginx controller, i.e. not the one optionally installed by the Kinetica Operators Helm chart, it is necessary to disable ingress in the <code>KineticaCluster</code> CR.</p> <p>The field <code>spec.ingressController: nginx</code> should be set to <code>spec.ingressController: none</code>.</p>
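 <p>A minimal sketch of the relevant <code>KineticaCluster</code> fragment: -</p> Disable built-in Ingress<pre><code>spec:\n ingressController: none\n</code></pre> <p>It is then necessary to create the required Ingress CRs by hand. Below is a list of the Ingress paths that need to be exposed along with sample ingress-nginx CRs.</p>","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#required-ingress-routes","title":"Required Ingress Routes","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#ingress-routes","title":"Ingress Routes","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#gadmin-paths","title":"GAdmin Paths","text":"Path Service Port <code>/gadmin</code> <code>cluster-name-gadmin-service</code> <code>gadmin</code> (8080/TCP) <code>/tableau</code> <code>cluster-name-gadmin-service</code> <code>gadmin</code> (8080/TCP) <code>/files</code> <code>cluster-name-gadmin-service</code> <code>gadmin</code> (8080/TCP) <p>where <code>cluster-name</code> is the name of the Kinetica Cluster i.e. 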
what is in the <code>.spec.gpudbCluster.clusterName</code> in the KineticaCluster CR.</p>","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#workbench-paths","title":"Workbench Paths","text":"Path Service Port <code>/</code> <code>workbench-workbench-service</code> <code>workbench-port</code> (8000/TCP)","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#db-rank-0-paths","title":"DB <code>rank-0</code> Paths","text":"Path Service Port <code>/cluster-145025b8(/gpudb-0(/.*|$))</code> <code>cluster-145025b8-rank0-service</code> <code>httpd</code> (8082/TCP) <code>/cluster-145025b8/gpudb-0/hostmanager(.*)</code> <code>cluster-145025b8-rank0-service</code> <code>hostmanager</code> (9300/TCP)","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#db-rank-n-paths","title":"DB <code>rank-N</code> Paths","text":"Path Service Port <code>/cluster-145025b8(/gpudb-N(/.*|$))</code> <code>cluster-145025b8-rank1-service</code> <code>httpd</code> (8082/TCP) <code>/cluster-145025b8/gpudb-N/hostmanager(.*)</code> <code>cluster-145025b8-rank1-service</code> <code>hostmanager</code> (9300/TCP)","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#reveal-paths","title":"Reveal Paths","text":"Path Service Port <code>/reveal</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/caravel</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/static</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/logout</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/resetmypassword</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/dashboardmodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/dashboardmodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/slicemodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/slicemodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/sliceaddview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/databaseview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/databaseasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/databasetablesasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/tablemodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/csstemplatemodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/csstemplatemodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/users</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/roles</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/userstatschartview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/permissions</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/viewmenus</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/permissionviews</code> <code>cluster-name-reveal-service</code> 
<code>reveal</code> (8088/TCP) <code>/accessrequestsmodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/accessrequestsmodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/logmodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/logmodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/userinfoeditview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/tablecolumninlineview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/sqlmetricinlineview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP)","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#example-ingress-crs","title":"Example Ingress CRs","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#example-gadmin-ingress-cr","title":"Example GAdmin Ingress CR","text":"Example GAdmin Ingress CR <p>Example GAdmin Ingress CR<pre><code>apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: cluster-name-gadmin-ingress #(1)!\n namespace: gpudb\nspec:\n ingressClassName: nginx\n tls:\n - hosts:\n - cluster-name.example.com #(1)!\n secretName: kinetica-tls\n rules:\n - host: cluster-name.example.com #(1)!\n http:\n paths:\n - path: /gadmin\n pathType: Prefix\n backend:\n service:\n name: cluster-name-gadmin-service #(1)!\n port:\n name: gadmin\n - path: /tableau\n pathType: Prefix\n backend:\n service:\n name: cluster-name-gadmin-service #(1)!\n port:\n name: gadmin\n - path: /files\n pathType: Prefix\n backend:\n service:\n name: cluster-name-gadmin-service #(1)!\n port:\n name: gadmin\n</code></pre> 1. 
where <code>cluster-name</code> is the name of the Kinetica Cluster</p>","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#example-rank-ingress-cr","title":"Example Rank Ingress CR","text":"Example Rank Ingress CR Example Rank Ingress CR<pre><code>apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: cluster-name-rank1-ingress\n namespace: gpudb\nspec:\n ingressClassName: nginx\n tls:\n - hosts:\n - cluster-name.example.com\n secretName: kinetica-tls\n rules:\n - host: cluster-name.example.com\n http:\n paths:\n - path: /cluster-name(/gpudb-1(/.*|$))\n pathType: Prefix\n backend:\n service:\n name: cluster-name-rank1-service\n port:\n name: httpd\n - path: /cluster-name/gpudb-1/hostmanager(.*)\n pathType: Prefix\n backend:\n service:\n name: cluster-name-rank1-service\n port:\n name: hostmanager\n</code></pre> <ol> <li>where <code>cluster-name</code> is the name of the Kinetica Cluster</li> </ol>","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#example-reveal-ingress-cr","title":"Example Reveal Ingress CR","text":"Example Reveal Ingress CR <p>Example Reveal Ingress CR<pre><code> apiVersion: networking.k8s.io/v1\n kind: Ingress\n metadata:\n name: cluster-name-reveal-ingress\n namespace: gpudb\n spec:\n ingressClassName: nginx\n tls:\n - hosts:\n - cluster-name.example.com\n secretName: kinetica-tls\n rules:\n - host: cluster-name.example.com\n http:\n paths:\n - path: /reveal\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /caravel\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /static\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /logout\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /resetmypassword\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /dashboardmodelview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /dashboardmodelviewasync\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /slicemodelview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /slicemodelviewasync\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /sliceaddview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /databaseview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /databaseasync\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /databasetablesasync\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /tablemodelview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /tablemodelviewasync\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /csstemplatemodelview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /csstemplatemodelviewasync\n pathType: Prefix\n 
backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /users\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /roles\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /userstatschartview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /permissions\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /viewmenus\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /permissionviews\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /accessrequestsmodelview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /accessrequestsmodelviewasync\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /logmodelview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /logmodelviewasync\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /userinfoeditview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /tablecolumninlineview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n - path: /sqlmetricinlineview\n pathType: Prefix\n backend:\n service:\n name: cluster-name-reveal-service\n port:\n name: reveal\n</code></pre> 1. where <code>cluster-name</code> is the name of the Kinetica Cluster</p>","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#exposing-the-postgres-proxy-port","title":"Exposing the Postgres Proxy Port","text":"<p>In order to access Kinetica's Postgres functionality some TCP (not HTTP) ports need to be open externally.</p> <p>For <code>ingress-nginx</code> a configuration file needs to be created to enable port 5432.</p> <p>tcp-services.yaml<pre><code>apiVersion: v1\nkind: ConfigMap\nmetadata:\n name: tcp-services\n namespace: kinetica-system # (1)!\ndata:\n '5432': gpudb/kinetica-k8s-sample-rank0-service:5432 #(2)!\n '9002': gpudb/kinetica-k8s-sample-rank0-service:9002 #(3)!\n</code></pre> 1. Change the namespace to the namespace your ingress-nginx controller is running in. e.g. <code>ingress-nginx</code> 2. This exposes the postgres proxy port on the default <code>5432</code> port. If you wish to change this to a non-standard port then it needs to be changed here but also in the Helm <code>values.yaml</code> to match. 3. This port is the Table Monitor port and should always be exposed alongside the Postgres Proxy.</p>","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_urls/","title":"Ingress urls","text":""},{"location":"Advanced/ingress_urls/#gadmin-paths","title":"GAdmin Paths","text":"Path Service Port <code>/gadmin</code> <code>cluster-name-gadmin-service</code> <code>gadmin</code> (8080/TCP) <code>/tableau</code> <code>cluster-name-gadmin-service</code> <code>gadmin</code> (8080/TCP) <code>/files</code> <code>cluster-name-gadmin-service</code> <code>gadmin</code> (8080/TCP) <p>where <code>cluster-name</code> is the name of the Kinetica Cluster i.e.
what is in the <code>.spec.gpudbCluster.clusterName</code> in the KineticaCluster CR.</p>"},{"location":"Advanced/ingress_urls/#workbench-paths","title":"Workbench Paths","text":"Path Service Port <code>/</code> <code>workbench-workbench-service</code> <code>workbench-port</code> (8000/TCP)"},{"location":"Advanced/ingress_urls/#db-rank-0-paths","title":"DB <code>rank-0</code> Paths","text":"Path Service Port <code>/cluster-145025b8(/gpudb-0(/.*|$))</code> <code>cluster-145025b8-rank0-service</code> <code>httpd</code> (8082/TCP) <code>/cluster-145025b8/gpudb-0/hostmanager(.*)</code> <code>cluster-145025b8-rank0-service</code> <code>hostmanager</code> (9300/TCP)"},{"location":"Advanced/ingress_urls/#db-rank-n-paths","title":"DB <code>rank-N</code> Paths","text":"Path Service Port <code>/cluster-145025b8(/gpudb-N(/.*|$))</code> <code>cluster-145025b8-rank1-service</code> <code>httpd</code> (8082/TCP) <code>/cluster-145025b8/gpudb-N/hostmanager(.*)</code> <code>cluster-145025b8-rank1-service</code> <code>hostmanager</code> (9300/TCP)"},{"location":"Advanced/ingress_urls/#reveal-paths","title":"Reveal Paths","text":"Path Service Port <code>/reveal</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/caravel</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/static</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/logout</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/resetmypassword</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/dashboardmodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/dashboardmodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/slicemodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/slicemodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/sliceaddview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/databaseview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/databaseasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/databasetablesasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/tablemodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/csstemplatemodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/csstemplatemodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/users</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/roles</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/userstatschartview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/permissions</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/viewmenus</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/permissionviews</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/accessrequestsmodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/accessrequestsmodelviewasync</code> <code>cluster-name-reveal-service</code> 
<code>reveal</code> (8088/TCP) <code>/logmodelview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/logmodelviewasync</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/userinfoeditview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/tablecolumninlineview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP) <code>/sqlmetricinlineview</code> <code>cluster-name-reveal-service</code> <code>reveal</code> (8088/TCP)"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/","title":"Kinetica images list for airgapped environments","text":"Kinetica Images for an Air-Gapped Environment <p>If you are installing Kinetica with Helm in an air-gapped environment you will either need a Registry Proxy to pass the requests through or to download the images and push them to your internal Registry (a mirroring sketch is shown below).</p> <p>For information on ways to transfer the files into an air-gapped environment see here.</p>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#required-container-images","title":"Required Container Images","text":""},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#dockerio-required-kinetica-images-for-all-installations","title":"docker.io (Required Kinetica Images for All Installations)","text":"<ul> <li>docker.io/kinetica/kinetica-k8s-operator:v7.2.2-5.ga-1<ul> <li>docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-5.ga-1 or</li> <li>docker.io/kinetica/kinetica-k8s-cpu-avx512:v7.2.2-5.ga-1 or</li> <li>docker.io/kinetica/kinetica-k8s-gpu:v7.2.2-5.ga-1</li> </ul> </li> <li>docker.io/kinetica/workbench-operator:v7.2.2-5.ga-1</li> <li>docker.io/kinetica/workbench:v7.2.2-5.ga-1</li> <li>docker.io/kinetica/kinetica-k8s-monitor:v7.2.2-5.ga-1</li> <li>docker.io/kinetica/busybox:v7.2.2-5.ga-1</li> <li>docker.io/kinetica/fluent-bit:v7.2.2-5.ga-1</li> <li>docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga</li> </ul>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#nvcrio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"nvcr.io (Required Kinetica Images for GPU Installations using <code>kinetica-k8s-gpu</code>)","text":"<ul> <li>nvcr.io/nvidia/gpu-operator:v23.9.1</li> </ul>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#registryk8sio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"registry.k8s.io (Required Kinetica Images for GPU Installations using <code>kinetica-k8s-gpu</code>)","text":"<ul> <li>registry.k8s.io/nfd/node-feature-discovery:v0.14.2</li> </ul>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#dockerio-required-supporting-images","title":"docker.io (Required Supporting Images)","text":"<ul> <li>docker.io/bitnami/openldap:2.6.7</li> <li>docker.io/alpine/openssl:latest (used by bitnami/openldap)</li> <li>docker.io/otel/opentelemetry-collector-contrib:0.95.0</li> </ul>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#quayio-required-supporting-images","title":"quay.io (Required Supporting Images)","text":"<ul> <li>quay.io/brancz/kube-rbac-proxy:v0.14.2</li> </ul>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#optional-container-images","title":"Optional Container Images","text":"<p>These images are only required if certain features are enabled as part of the Helm installation: -</p> <ul> <li>CertManager</li> <li>ingress-nginx</li> </ul>
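<p>As a minimal sketch of the download-and-push approach (assuming a hypothetical internal registry at <code>registry.internal:5000</code> and a host with docker access to both registries), the images can be pulled, re-tagged and pushed in a loop; extend the list to cover every image your installation requires: -</p> Mirror images to an internal registry (sketch)<pre><code># assumption: registry.internal:5000 is a placeholder for your internal registry\nREGISTRY=registry.internal:5000\n\nfor IMAGE in \\\n kinetica/kinetica-k8s-operator:v7.2.2-5.ga-1 \\\n kinetica/kinetica-k8s-cpu:v7.2.2-5.ga-1 \\\n kinetica/workbench-operator:v7.2.2-5.ga-1 \\\n kinetica/workbench:v7.2.2-5.ga-1; do\n docker pull docker.io/${IMAGE} # pull from the public registry\n docker tag docker.io/${IMAGE} ${REGISTRY}/${IMAGE} # re-tag for the internal registry\n docker push ${REGISTRY}/${IMAGE} # push into the air-gapped registry\ndone\n</code></pre>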
</ul>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#quayio-optional-supporting-images","title":"quay.io (Optional Supporting Images)","text":"<ul> <li>quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> <li>quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> <li>quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> </ul>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#registryk8sio-optional-supporting-images","title":"registry.k8s.io (Optional Supporting Images)","text":"<ul> <li>registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing Ingress nGinx via Kinetica Helm Chart)</li> <li>registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c</li> </ul>"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#which-kinetica-core-image-do-i-use","title":"Which Kinetica Core Image do I use?","text":"Container Image Intel (AMD64) Intel (AMD64 AVX512) Amd (AMD64) Graviton (aarch64) Apple Silicon (aarch64) kinetica-k8s-cpu (1) kinetica-k8s-cpu-avx512 kinetica-k8s-gpu (2) (2) (2) <ol> <li>It is preferable on an Intel AVX512 enabled CPU to use the kinetica-k8s-cpu-avx512 container image</li> <li>With a supported nVidia GPU.</li> </ol>"},{"location":"Advanced/kinetica_mac_arm_k8s/","title":"Kinetica DB on Kubernetes","text":"<p>This walkthrough will show how to install Kinetica DB on a Mac running OS X. The Kubernetes cluster will be running on VMs with Ubuntu Linux 22.04 ARM64. </p> <p>This solution is equivalent to a production bare metal installation and does not use Docker, Podman or QEMU but rather Apple native Virtualization.</p> <p>The Kubernetes cluster will consist of one Master node <code>k8smaster1</code> and two Worker nodes <code>k8snode1</code> & <code>k8snode2</code>.</p> <p>The virtualization platform is UTM. </p> <p>Obtain a Kinetica License Key</p> <p>A product license key will be required for install. Please contact Kinetica Support to request a trial key.</p> <p>Download and install UTM.</p>","tags":["Advanced","Development"]},{"location":"Advanced/kinetica_mac_arm_k8s/#create-the-vms","title":"Create the VMs","text":"","tags":["Advanced","Development"]},{"location":"Advanced/kinetica_mac_arm_k8s/#k8smaster1","title":"<code>k8smaster1</code>","text":"<p>For this walkthrough the master node will be 4 vCPU, 8 GB RAM & 40-64 GB disk.</p> <p>Start the creation of a new VM in UTM. Select <code>Virtualize</code></p> <p></p> <p>Select Linux as the VM OS.</p> <p></p> <p>On the Linux page - Select <code>Use Apple Virtualization</code> and an Ubuntu 22.04 (Arm64) ISO.</p> <p></p> <p>As this is the master Kubernetes node (VM) it can be smaller than the nodes hosting the Kinetica DB itself.</p> <p>Set the memory to 8 GB and the number of CPUs to 4.</p> <p></p> <p>Set the storage to between 40-64 GB.</p> <p></p> <p>This next step is optional if you wish to setup a shared folder between your Mac host & the Linux VM.</p> <p></p> <p>The final step to create the VM is a summary. Please check the values shown and hit <code>Save</code></p> <p></p> <p>You should now see your new VM in the left hand pane of the UTM UI.</p> <p></p> <p>Go ahead and click the button.</p> <p>Once the Ubuntu installer comes up follow the steps selecting whichever keyboard etc. 
you require.</p> <p>The only changes you need to make are: -</p> <ul> <li>Change the installation to <code>Ubuntu Server (minimized)</code></li> <li>Set your server's name to <code>k8smaster1</code></li> <li>Enable OpenSSH server.</li> </ul> <p>and complete the installation.</p> <p>Reboot the VM and remove the ISO from the 'external' drive. Log in to the VM and get the VM's IP address with</p> Bash<pre><code>ip a\n</code></pre> <p>Make a note of the IP for later use.</p>","tags":["Advanced","Development"]},{"location":"Advanced/kinetica_mac_arm_k8s/#k8snode1-k8snode2","title":"<code>k8snode1</code> & <code>k8snode2</code>","text":"<p>Repeat the same process to provision one or two nodes depending on how much memory you have available on the Mac.</p> <p>You need to change the RAM size to 16 GB. You can leave the vCPU count at 4. For the disk size, that depends on how much data you want to ingest; it should, however, be at least 4x the RAM size.</p> <p>Once installed, log in to the VM again and get the VM's IP address with</p> Bash<pre><code>ip a\n</code></pre> <p>Note</p> <p>Make a note of the IP(s) for later use.</p> <p>Your VMs are complete</p> <p>Continue installing your new VMs by following Bare Metal/VM Installation</p>","tags":["Advanced","Development"]},{"location":"Advanced/kube_vip_loadbalancer/","title":"Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations","text":"<p>For our example we are going to enable a Kubernetes based LoadBalancer to issue IP addresses to our Kubernetes Services of type <code>LoadBalancer</code> using <code>kube-vip</code>.</p> Ingress Service is pending <p>The <code>ingress-nginx-controller</code> is currently in the <code>pending</code> state as there is no CCM/LoadBalancer </p>","tags":["Advanced","Ingress","Installation"]},{"location":"Advanced/kube_vip_loadbalancer/#kube-vip","title":"<code>kube-vip</code>","text":"<p>We will install two components into our Kubernetes Cluster</p> <ul> <li>kube-vip-cloud-controller</li> <li>Kubernetes Load-Balancer Service</li> </ul>","tags":["Advanced","Ingress","Installation"]},{"location":"Advanced/kube_vip_loadbalancer/#kube-vip-cloud-controller","title":"kube-vip-cloud-controller","text":"<p>Quote</p> <p>The kube-vip cloud provider can be used to populate an IP address for Services of type LoadBalancer similar to what public cloud providers allow through a Kubernetes CCM.</p> Install the kube-vip CCM <p></p> Install the kube-vip CCM<pre><code>kubectl apply -f https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/main/manifest/kube-vip-cloud-controller.yaml\n</code></pre> <p>Now we need to set up the required RBAC permissions: -</p> Install the kube-vip RBAC <p></p> Install kube-vip RBAC<pre><code>kubectl apply -f https://kube-vip.io/manifests/rbac.yaml\n</code></pre> <p>The following ConfigMap will configure the <code>kube-vip-cloud-controller</code> to obtain IP addresses from the host network's DHCP server, i.e.
the DHCP on the physical network that the host machine or VM is connected to.</p> Install the kube-vip ConfigMap <p></p> Install the kube-vip ConfigMap<pre><code>apiVersion: v1\nkind: ConfigMap\nmetadata:\n name: kubevip\n namespace: kube-system\ndata:\n cidr-global: 0.0.0.0/32\n</code></pre> <p>It is possible to specify IP address ranges; see here.</p>","tags":["Advanced","Ingress","Installation"]},{"location":"Advanced/kube_vip_loadbalancer/#kubernetes-load-balancer-service","title":"Kubernetes Load-Balancer Service","text":"Obtain the Master Node IP address & Interface name Obtain the Master Node IP address & Interface name<pre><code>ip a\n</code></pre> <p>In this example the IP address of the master node is <code>192.168.2.180</code> and the network interface is <code>enp0s1</code>.</p> <p>We need to apply the <code>kube-vip</code> Daemonset, but first we need to create the configuration.</p> Install the kube-vip Daemonset<pre><code>apiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n labels:\n app.kubernetes.io/name: kube-vip-ds\n app.kubernetes.io/version: v0.7.2\n name: kube-vip-ds\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/name: kube-vip-ds\n template:\n metadata:\n labels:\n app.kubernetes.io/name: kube-vip-ds\n app.kubernetes.io/version: v0.7.2\n spec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: node-role.kubernetes.io/master\n operator: Exists\n - matchExpressions:\n - key: node-role.kubernetes.io/control-plane\n operator: Exists\n containers:\n - args:\n - manager\n env:\n - name: vip_arp\n value: \"true\"\n - name: port\n value: \"6443\"\n - name: vip_interface\n value: enp0s1\n - name: vip_cidr\n value: \"32\"\n - name: dns_mode\n value: first\n - name: cp_enable\n value: \"true\"\n - name: cp_namespace\n value: kube-system\n - name: svc_enable\n value: \"true\"\n - name: svc_leasename\n value: plndr-svcs-lock\n - name: vip_leaderelection\n value: \"true\"\n - name: vip_leasename\n value: plndr-cp-lock\n - name: vip_leaseduration\n value: \"5\"\n - name: vip_renewdeadline\n value: \"3\"\n - name: vip_retryperiod\n value: \"1\"\n - name: address\n value: 192.168.2.180\n - name: prometheus_server\n value: :2112\n image: ghcr.io/kube-vip/kube-vip:v0.7.2\n imagePullPolicy: Always\n name: kube-vip\n resources: {}\n securityContext:\n capabilities:\n add:\n - NET_ADMIN\n - NET_RAW\n hostNetwork: true\n serviceAccountName: kube-vip\n tolerations:\n - effect: NoSchedule\n operator: Exists\n - effect: NoExecute\n operator: Exists\n updateStrategy: {}\n</code></pre> <p>Lines 5, 7, 12, 16, 38 and 62 need modifying to your environment.</p> Install the kube-vip Daemonset <p></p> <p>ARP or BGP</p> <p>The Daemonset above uses ARP to communicate with the network; it is also possible to use BGP. See Here</p> Example showing DHCP allocated external IP address to the Ingress Controller <p></p> <p>Our <code>ingress-nginx-controller</code> has been allocated the IP Address <code>192.168.2.194</code>. </p> <p>Ingress Access</p> <p>The <code>ingress-nginx-controller</code> requires the host FQDN to be present on user requests in order to know how to route the requests to the correct Kubernetes Service. Using the IP address in the URL will cause an error as ingress cannot select the correct service.</p>
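<p>A quick way to test the routing (reusing the allocated address <code>192.168.2.194</code> and the <code>local.kinetica</code> FQDN from this example) is to let <code>curl</code> map the FQDN to the IP for a single request; a minimal sketch: -</p> Test ingress routing by FQDN (sketch)<pre><code># map the FQDN to the LoadBalancer IP for this request only\ncurl -k --resolve local.kinetica:443:192.168.2.194 https://local.kinetica/gadmin\n</code></pre>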
List Ingress <p></p> <p></p> <p>If you did not set the FQDN of the Kinetica Cluster to a DNS resolvable hostname, add <code>local.kinetica</code> to your <code>/etc/hosts</code> file in order to be able to access the Kinetica URLs</p> Edit /etc/hosts <p></p> <p>Accessing the Workbench</p> <p>You should be able to access the workbench at http://local.kinetica</p>","tags":["Advanced","Ingress","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/","title":"Bare Metal/VM Installation - <code>kubeadm</code>","text":"<p>This walkthrough will show how to install Kinetica DB. For this example the Kubernetes cluster will be running on 3 VMs with Ubuntu Linux 22.04 (ARM64).</p> <p>This solution is equivalent to a production bare metal installation and does not use Docker, Podman or QEMU.</p> <p>The Kubernetes cluster requires 3 VMs consisting of one Master node <code>k8smaster1</code> and two Worker nodes <code>k8snode1</code> & <code>k8snode2</code>.</p> <p>Purple Example Boxes</p> <p>The purple boxes in the instructions below can be expanded for a screen recording of the commands & their results.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#kubernetes-node-installation","title":"Kubernetes Node Installation","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#setup-the-kubernetes-nodes","title":"Setup the Kubernetes Nodes","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#edit-etchosts","title":"Edit <code>/etc/hosts</code>","text":"<p>SSH into each of the nodes and run the following: -</p> Edit `/etc/hosts`<pre><code>sudo vi /etc/hosts\n\nx.x.x.x k8smaster1\nx.x.x.x k8snode1\nx.x.x.x k8snode2\n</code></pre> <p>where x.x.x.x is the IP Address of the corresponding node.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#disable-linux-swap","title":"Disable Linux Swap","text":"<p>Next we need to disable Swap on Linux: -</p> Disable Swap <p></p> Disable Swap<pre><code>sudo swapoff -a\n\nsudo vi /etc/fstab\n</code></pre> <p>comment out the swap entry in <code>/etc/fstab</code> on each node.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#linux-system-configuration-changes","title":"Linux System Configuration Changes","text":"<p>We are using containerd as the container runtime but in order to do so we need to make some system level changes on Linux.</p> Linux System Configuration Changes <p></p> Linux System Configuration Changes<pre><code>cat << EOF | sudo tee /etc/modules-load.d/containerd.conf\noverlay\nbr_netfilter\nEOF\n\nsudo modprobe overlay\n\nsudo modprobe br_netfilter\n\ncat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\nEOF\n\nsudo sysctl --system\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#container-runtime-installation","title":"Container Runtime Installation","text":"<p>Run on all nodes (VMs)</p> <p>Run the following commands, until advised not to, on all of the VMs you created.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-containerd","title":"Install <code>containerd</code>","text":"Install
<code>containerd</code> Install `containerd`<pre><code>sudo apt update\n\nsudo apt install -y containerd\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#create-a-default-containerd-config","title":"Create a Default <code>containerd</code> Config","text":"Create a Default <code>containerd</code> Config Create a Default `containerd` Config<pre><code>sudo mkdir -p /etc/containerd\n\nsudo containerd config default | sudo tee /etc/containerd/config.toml\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#enable-system-cgroup","title":"Enable System CGroup","text":"<p>Change the SystemdCgroup value to true in the containerd configuration file and restart the service</p> Enable System CGroup <p></p> Enable System CGroup<pre><code>sudo sed -i 's/SystemdCgroup \\= false/SystemdCgroup \\= true/g' /etc/containerd/config.toml\n\nsudo systemctl restart containerd\nsudo systemctl enable containerd\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-pre-requisiteutility-packages","title":"Install Pre-requisite/Utility packages","text":"Install Pre-requisite/Utility packages Install Pre-requisite/Utility packages<pre><code>sudo apt update\n\nsudo apt install -y apt-transport-https ca-certificates curl gpg git\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#download-the-kubernetes-public-signing-key","title":"Download the Kubernetes public signing key","text":"Download the Kubernetes public signing key Download the Kubernetes public signing key<pre><code>curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#add-the-kubernetes-package-repository","title":"Add the Kubernetes Package Repository","text":"Add the Kubernetes Package Repository<pre><code>echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-the-kubernetes-installation-and-management-tools","title":"Install the Kubernetes Installation and Management Tools","text":"Install the Kubernetes Installation and Management Tools Install the Kubernetes Installation and Management Tools<pre><code>sudo apt update\n\nsudo apt install -y kubeadm=1.29.0-1.1 kubelet=1.29.0-1.1 kubectl=1.29.0-1.1 \n\nsudo apt-mark hold kubeadm kubelet kubectl\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#initialize-the-kubernetes-cluster","title":"Initialize the Kubernetes Cluster","text":"<p>Initialize the Kubernetes Cluster by using kubeadm on the <code>k8smaster1</code> control plane node.</p> <p>Note</p> <p>You will need an IP Address range for the Kubernetes Pods. This range is provided to <code>kubeadm</code> as part of the initialization. For our cluster of three nodes, given the default number of pods supported by a node (110) we need a CIDR of at least 330 distinct IP Addresses. Therefore, for this example we will use a <code>--pod-network-cidr</code> of <code>10.1.1.0/22</code> which allows for 1022 usable IPs.
The reason for this is each node will get <code>/24</code> of the <code>/22</code> total.</p> <p>The <code>apiserver-advertise-address</code> should be the IP Address of the <code>k8smaster1</code> VM.</p> Initialize the Kubernetes Cluster <p></p> Initialize the Kubernetes Cluster<pre><code>sudo kubeadm init --pod-network-cidr 10.1.1.0/22 --apiserver-advertise-address 192.168.2.180 --kubernetes-version 1.29.2\n</code></pre> <p>You should now deploy a pod network to the cluster. Run <code>kubectl apply -f [podnetwork].yaml</code> with one of the options listed at: Cluster Administration Addons</p> <p>Make a note of the portion of the shell output which gives the join command, which we will need in order to add our worker nodes to the cluster.</p> <p>Copy the <code>kubeadm join</code> command</p> <p>Then you can join any number of worker nodes by running the following on each as root:</p> Copy the `kubeadm join` command<pre><code>kubeadm join 192.168.2.180:6443 --token wonuiv.v93rkizr6wvxwe6l \\\n--discovery-token-ca-cert-hash sha256:046ffa6303e6b281285a636e856b8e9e51d8c755248d9d013e15ae5c5f6bb127\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#setup-kubeconfig","title":"Setup <code>kubeconfig</code>","text":"<p>Before we add the worker nodes we can set up the <code>kubeconfig</code> so we will be able to use <code>kubectl</code> going forward.</p> Setup <code>kubeconfig</code> <p></p> Setup `kubeconfig`<pre><code>mkdir -p $HOME/.kube\n\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#connect-list-the-kubernetes-cluster-nodes","title":"Connect & List the Kubernetes Cluster Nodes","text":"<p>We can now run <code>kubectl</code> to connect to the Kubernetes API Server to display the nodes in the newly created Kubernetes Cluster.</p> Connect & List the Kubernetes Cluster Nodes <p></p> Connect & List the Kubernetes Cluster Nodes<pre><code>kubectl get nodes\n</code></pre> <p>STATUS = NotReady</p> <p>From the <code>kubectl</code> output the status of the <code>k8smaster1</code> node is showing as <code>NotReady</code> as we have yet to install the Kubernetes Network to the cluster.</p> <p>We will be installing <code>cilium</code> as that provider in a future step.</p> <p>Warning</p> <p>At this point we should bring the installations of the worker nodes to this same point before continuing.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#join-the-worker-nodes-to-the-cluster","title":"Join the Worker Nodes to the Cluster","text":"<p>Once installed we run the join on the worker nodes. Note that the command which was output from the <code>kubeadm init</code> needs to be run with <code>sudo</code>.</p>
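<p>If the join command has been lost, or the bootstrap token has expired, a fresh join command can be generated on the master node; a minimal sketch: -</p> Regenerate the join command (sketch)<pre><code>sudo kubeadm token create --print-join-command\n</code></pre>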
Join the Worker Nodes to the Cluster<pre><code>sudo kubeadm join 192.168.2.180:6443 --token wonuiv.v93rkizr6wvxwe6l \\\n --discovery-token-ca-cert-hash sha256:046ffa6303e6b281285a636e856b8e9e51d8c755248d9d013e15ae5c5f6bb127\n</code></pre> <code>kubectl get nodes</code> <p></p> <p>Now we can again run</p> `kubectl get nodes`<pre><code>kubectl get nodes\n</code></pre> <p>Now we can see all the nodes are present in the Kubernetes Cluster.</p> <p>Run on Head Node only</p> <p>From now on, the following commands need to be run on the Master Node only.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-kubernetes-networking","title":"Install Kubernetes Networking","text":"<p>We now need to install a Kubernetes CNI (Container Network Interface) to enable the pod network.</p> <p>We will use Cilium as the CNI for our cluster.</p> Installing the Cilium CLI<pre><code>curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-arm64.tar.gz\nsudo tar xzvfC cilium-linux-arm64.tar.gz /usr/local/bin\nrm cilium-linux-arm64.tar.gz\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-cilium","title":"Install <code>cilium</code>","text":"<p>You can now install Cilium with the following command:</p> Install `cilium`<pre><code>cilium install\ncilium status \n</code></pre> <p>If <code>cilium status</code> shows errors you may need to wait until the Cilium pods have started.</p> <p>You can check progress with</p> Bash<pre><code>kubectl get po -A\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#check-cilium-status","title":"Check <code>cilium</code> Status","text":"<p>Once the Cilium pods are running we can check the status of Cilium again by using</p> Check <code>cilium</code> Status <p></p> Check `cilium` Status<pre><code>cilium status \n</code></pre> <p>We can now recheck the Kubernetes Cluster Nodes</p> <p></p> Bash<pre><code>kubectl get nodes\n</code></pre> <p>and they should have status <code>Ready</code></p>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#kubernetes-node-preparation","title":"Kubernetes Node Preparation","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#label-kubernetes-nodes","title":"Label Kubernetes Nodes","text":"<p>Now we go ahead and label the nodes. Kinetica uses node labels in production clusters where there are separate 'node groups' configured so that the Kinetica Infrastructure pods are deployed on a smaller VM type and the DB itself is deployed on larger nodes or GPU enabled nodes.</p> <p>If we were using a Cloud Provider Kubernetes these are synonymous with EKS Node Groups or AKS VMSS which would be created with the same two labels on two node groups.</p> Label Kubernetes Nodes<pre><code>kubectl label node k8snode1 app.kinetica.com/pool=infra\nkubectl label node k8snode2 app.kinetica.com/pool=compute\n</code></pre> <p>Additionally, in our case, as we have created a new cluster, the 'role' of the worker nodes is not set, so we can also set that.
In many cases the role is already set to <code>worker</code> but here we have some latitude.</p> <p></p> Bash<pre><code>kubectl label node k8snode1 kubernetes.io/role=kinetica-infra\nkubectl label node k8snode2 kubernetes.io/role=kinetica-compute\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-storage-class","title":"Install Storage Class","text":"<p>Install a local path provisioner storage class. In this case we are using the Rancher Local Path provisioner</p> Install Storage Class <p></p> Install Storage Class<pre><code>kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.26/deploy/local-path-storage.yaml\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#set-default-storage-class","title":"Set Default Storage Class","text":"Set Default Storage Class Set Default Storage Class<pre><code>kubectl patch storageclass local-path -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}'\n</code></pre> <p>Kubernetes Cluster Provision Complete</p> <p>Your base Kubernetes Cluster is now complete and ready to have the Kinetica DB installed on it using the Helm Chart.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-kinetica-for-kubernetes-using-helm","title":"Install Kinetica for Kubernetes using Helm","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#add-the-helm-repository","title":"Add the Helm Repository","text":"Add the Helm Repository Add the Helm Repository<pre><code>helm repo add kinetica-operators https://kineticadb.github.io/charts\nhelm repo update\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#download-a-starter-helm-valuesyaml","title":"Download a Starter Helm <code>values.yaml</code>","text":"<p>Now we need to obtain a starter <code>values.yaml</code> file to pass to our Helm install. We can download one from the <code>github.com/kineticadb/charts</code> repo.</p> Download a Starter Helm <code>values.yaml</code> <p></p> Download a Starter Helm `values.yaml`<pre><code> wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k8s.yaml\n</code></pre> <p>Obtain a Kinetica License Key</p> <p>A product license key will be required for install. Please contact Kinetica Support to request a trial key.</p>
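<p>Before running the install it is also worth confirming that the <code>local-path</code> StorageClass created earlier is present and marked <code>(default)</code>; a quick check: -</p> Verify the default StorageClass<pre><code>kubectl get sc\n</code></pre>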
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#helm-install-kinetica","title":"Helm Install Kinetica","text":"Helm install kinetica-operators<pre><code>helm -n kinetica-system upgrade -i \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"local-path\"\n</code></pre>","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#monitor-kinetica-startup","title":"Monitor Kinetica Startup","text":"<p>After a few moments, follow the progression of the main database pod startup with:</p> Monitor the Kinetica installation progress<pre><code>kubectl -n gpudb get po gpudb-0 -w\n</code></pre> <p>Kinetica DB Provision Complete</p> <p>Once you see <code>gpudb-0 3/3 Running</code> the database is up and running.</p> <p>Software LoadBalancer</p> <p>If you require a software based LoadBalancer to allocate IP addresses to the Ingress Controller or exposed Kubernetes Services then see here</p> <p>This is usually apparent if your ingress or other Kubernetes Services with the type <code>LoadBalancer</code> are stuck in the <code>Pending</code> state.</p>","tags":["Advanced","Installation"]},{"location":"Advanced/minio_s3_dev_test/","title":"Using Minio for S3 Storage in Dev/Test","text":"<p>If you require a new Minio installation</p> <p>Please follow the installation instructions found here to install the Minio Operator and to create your first tenant.</p>","tags":["Advanced","Development","Installation","Storage"]},{"location":"Advanced/minio_s3_dev_test/#create-minio-tenant","title":"Create Minio Tenant","text":"<p>In our example below we have created a tenant <code>kinetica</code> in the <code>gpudb</code> namespace using the Kinetica storage class <code>kinetica-k8s-sample-storageclass</code>.</p> <p></p> <p>or use the minio kubectl plugin</p> minio cli - create tenant<pre><code>kubectl minio tenant create kinetica --capacity 32Gi --servers 1 --volumes 1 --namespace gpudb --storage-class kinetica-k8s-sample-storageclass --disable-tls\n</code></pre> <p>Console Port Forward</p> <p>Forward the minio console for our newly created tenant</p> Bash<pre><code>kubectl port-forward service/kinetica-console -n gpudb 9443:9443\n</code></pre> <p>In that tenant we create a bucket <code>kinetica-cold-storage</code> and in that bucket we create the path <code>gpudb/cold-storage</code>.</p> <p></p> <p>Once you have a tenant up and running we can configure Kinetica for Kubernetes to use it as the DB Cold Storage tier.</p> <p>Backup/Restore Storage</p> <p>Minio can also be used as the S3 storage for Velero. This enables Backup/Restore functionality via the <code>KineticaBackup</code> & <code>KineticaRestore</code> CRs.</p>","tags":["Advanced","Development","Installation","Storage"]},{"location":"Advanced/minio_s3_dev_test/#configuring-kinetica-to-use-minio","title":"Configuring Kinetica to use Minio","text":"","tags":["Advanced","Development","Installation","Storage"]},{"location":"Advanced/minio_s3_dev_test/#cold-storage","title":"Cold Storage","text":"<p>In order to configure the Cold Storage Tier for the Database it is necessary to add a <code>coldStorageTier</code> to the <code>KineticaCluster</code> CR. As we are using S3 Buckets for storage we then require a <code>coldStorageS3</code> entry which allows us to set the <code>awsSecretAccessKey</code> & <code>awsAccessKeyId</code> which were generated when the tenant was created in Minio. </p> <p>If we look in the <code>gpudb</code> namespace we can see that Minio created a Kubernetes service called <code>minio</code> exposed on port <code>443</code>. </p> <p>In the <code>coldStorageS3</code> we need to add an <code>endpoint</code> field which contains the <code>minio</code> service name and the namespace <code>gpudb</code> i.e. <code>minio.gpudb.svc.cluster.local</code>.</p> KineticaCluster coldStorageTier S3 Configuration<pre><code>spec:\n gpudbCluster:\n config:\n tieredStorage:\n coldStorageTier:\n coldStorageType: s3\n coldStorageS3:\n basePath: gpudb/cold-storage/\n bucketName: kinetica-cold-storage\n endpoint: minio.gpudb.svc.cluster.local:80\n limit: \"32Gi\"\n useHttps: false\n useManagedCredentials: false\n useVirtualAddressing: false\n awsSecretAccessKey: 6rLaOOddP3KStwPDhf47XLHREPdBqdav\n awsAccessKeyId: VvlP5rHbQqzcYPHG\n tieredStrategy:\n default: VRAM 1, RAM 5, PERSIST 5, COLD0 10\n</code></pre>
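<p>As a minimal sketch of creating the bucket referenced above from the command line (assuming the MinIO Client <code>mc</code> is installed, a port-forward to the <code>minio</code> service on its service port, and the access/secret keys generated for the tenant): -</p> Create the cold storage bucket (sketch)<pre><code># assumption: forward local port 9000 to the minio service port\nkubectl port-forward service/minio -n gpudb 9000:80\n\n# register the tenant using the keys from the example above\nmc alias set kinetica http://localhost:9000 VvlP5rHbQqzcYPHG 6rLaOOddP3KStwPDhf47XLHREPdBqdav\n\n# create the bucket used by coldStorageS3\nmc mb kinetica/kinetica-cold-storage\n</code></pre>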
","tags":["Advanced","Development","Installation","Storage"]},{"location":"Advanced/nginx_ingress_config/","title":"<code>nginx-ingress</code> Ingress Configuration","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/nginx_ingress_config/#coming-soon","title":"Coming Soon","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Architecture/","title":"Architecture","text":"<p>Kinetica is a distributed, vectorized, memory-first, columnar database with tiered storage that is optimized for high speed and performance \u2013 particularly on streaming analytics and geospatial workloads.</p> <p>Kinetica has been uniquely designed for fast and flexible analytics on large volumes of changing data with incredible performance.</p> <ul> <li> <p> Kinetica Database Architecture</p> <p>Install the Kinetica DB with helm and get up and running in minutes Core Database Architecture</p> </li> <li> <p> Kinetica for Kubernetes Architecture</p> <p>Install the Kinetica DB with helm and get up and running in minutes Kubernetes Architecture</p> </li> </ul>","tags":["Architecture"]},{"location":"Architecture/db_architecture/","title":"Architecture","text":"<p>Kinetica is a distributed, vectorized, memory-first, columnar database with tiered storage that is optimized for high speed and performance \u2013 particularly on streaming analytics and geospatial workloads.</p> <p>Kinetica has been uniquely designed for fast and flexible analytics on large volumes of changing data with incredible performance.</p>","tags":["Architecture"]},{"location":"Architecture/db_architecture/#database-architecture","title":"Database Architecture","text":"","tags":["Architecture"]},{"location":"Architecture/db_architecture/#scale-out-architecture","title":"Scale-out Architecture","text":"<p>Kinetica has a distributed architecture that has been designed for data processing at scale. A standard cluster consists of identical nodes run on commodity hardware. A single node is chosen to be the head aggregation node.</p> <p> A cluster can be scaled up at any time to increase storage capacity and processing power, with near-linear scale processing improvements for most operations.
Sharding of data can be done automatically, or specified and optimized by the user.</p>","tags":["Architecture"]},{"location":"Architecture/db_architecture/#distributed-ingest-query","title":"Distributed Ingest & Query","text":"<p>Kinetica uses a shared-nothing data distribution across worker nodes. The head node receives a query and breaks it down into small tasks that can be spread across worker nodes. To avoid bottlenecks at the head node, ingestion can also be organized in parallel by all the worker nodes. Kinetica is able to distribute data client-side before sending it to designated worker nodes. This streamlines communication and processing time.</p> <p>For the client application, there is no need to be aware of how many nodes are in the cluster, where they are, or how the data is distributed across them!</p> <p></p>","tags":["Architecture"]},{"location":"Architecture/db_architecture/#column-oriented","title":"Column Oriented","text":"<p>Columnar data structures lend themselves to low-latency reads of data. But from a user's perspective, Kinetica behaves very similarly to a standard relational database \u2013 with tables of rows and columns and it can be queried with SQL or through APIs. Available column types include the standard base types (int, long, float, double, string, & bytes), as well as numerous sub-types supporting date/time, geospatial, and other data forms.</p> <p></p>","tags":["Architecture"]},{"location":"Architecture/db_architecture/#vectorized-functions","title":"Vectorized Functions","text":"<p>Vectorization is Kinetica\u2019s secret sauce and the key feature that underpins its blazing fast performance.</p> <p>Advanced vectorized kernels are optimized to use vectorized CPUs and GPUs for faster performance. The query engine automatically assigns tasks to the processor where they will be most performant. Aggregations, filters, window functions, joins and geospatial rendering are some of the capabilities that see performance improvements.</p> <p></p>","tags":["Architecture"]},{"location":"Architecture/db_architecture/#memory-first-tiered-storage","title":"Memory-First, Tiered Storage","text":"<p>Tiered storage makes it possible to optimize where data lives for performance and cost. Recent data (such as all data where the timestamp is within the last 2 weeks) can be held in-memory, while older data can be moved to disk, or even to external storage services.</p> <p>Kinetica operates on an entire data corpus by intelligently managing data across GPU memory, system memory, SIMD, disk / SSD, HDFS, and cloud storage like S3 for optimal performance.</p> <p>Kinetica can also query and process data stored in data lakes, joining it with data managed by Kinetica in highly parallelized queries.</p>","tags":["Architecture"]},{"location":"Architecture/db_architecture/#performant-key-value-lookup","title":"Performant Key-Value Lookup","text":"<p>Kinetica is able to generate distributed key-value lookups, from columnar data, for high-performance and concurrency. 
Sharding logic is embedded directly within client APIs enabling linear scale-out as clients can lookup data directly from the node where the data lives.</p>","tags":["Architecture"]},{"location":"Architecture/kinetica_for_kubernetes_architecture/","title":"Kubernetes Architecture","text":"","tags":["Architecture"]},{"location":"Architecture/kinetica_for_kubernetes_architecture/#coming-soon","title":"Coming Soon","text":"","tags":["Architecture"]},{"location":"GettingStarted/","title":"Getting Started","text":"<ul> <li> <p> Set up in 15 minutes (local install)</p> <p>Install the Kinetica DB locally on <code>Kind</code> or <code>k3s</code> with <code>helm</code> to get up and running in minutes (Dev/Test).</p> <p> Quickstart</p> </li> <li> <p> Prepare to Install</p> <p>What you need to know & do before beginning an installation.</p> <p> Preparation and Prerequisites</p> </li> <li> <p> Production Installation</p> <p>Install the Kinetica DB with helm to get up and running quickly (Production).</p> <p> Installation</p> </li> <li> <p> Channel Your Inner Ninja</p> <p>Advanced Installation Topics which go beyond the basic installation.</p> <p> Advanced Topics</p> </li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/aks/","title":"Azure AKS Specifics","text":"<p>This page covers any Microsoft Azure AKS cluster installation specifics.</p>","tags":["AKS","Getting Started"]},{"location":"GettingStarted/eks/","title":"Amazon EKS Specifics","text":"<p>This page covers any Amazon EKS kubernetes cluster installation specifics.</p>","tags":["EKS","Getting Started","Storage"]},{"location":"GettingStarted/eks/#ebs-csi-driver","title":"EBS CSI driver","text":"<p>Warning</p> <p>Make sure you have enabled the ebs-csi driver in your EKS cluster. This is required for the default storage class to work.</p> <p>Please refer to this AWS documentation for more information.</p>","tags":["EKS","Getting Started","Storage"]},{"location":"GettingStarted/helm_repo_add/","title":"Helm repo add","text":"Add Kinetica Operators Chart Repo<pre><code>helm repo add kinetica-operators https://kineticadb.github.io/charts/latest\n</code></pre>"},{"location":"GettingStarted/installation/","title":"Kinetica for Kubernetes Installation","text":"<ul> <li> <p> CPU Only Installation </p> <p>Install the Kinetica DB to run on Intel, AMD or ARM CPUs with no GPU acceleration. CPU</p> </li> <li> <p> CPU & GPU Installation</p> <p>Install the Kinetica DB to run on nodes with nVidia GPU acceleration. Optionally enable Kinetica On-Prem SQLAssistant (LLM). 
GPU</p> </li> </ul>","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/","title":"Installation - CPU Only","text":"<p>For managed Kubernetes solutions (AKS, EKS, GKE), OpenShift or on-prem (kubeadm) Kubernetes variants, follow this generic guide to install the Kinetica Operators, Database and Workbench.</p> <p>Preparation & Prerequisites</p> <p>Please make sure you have followed the Preparation & Prerequisites steps</p>","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/#install-the-helm-chart","title":"Install the helm chart","text":"<p>Run the following Helm install command after substituting values from Preparation & Prerequisites</p> Helm install kinetica-operators<pre><code>helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n</code></pre>","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/#check-installation-progress","title":"Check installation progress","text":"<p>After a few moments, follow the progression of the main database pod startup with:</p> Monitor the Kinetica installation progress<pre><code>kubectl -n gpudb get po gpudb-0 -w\n</code></pre> <p>until it reaches <code>\"gpudb-0 3/3 Running\"</code> at which point the database should be ready and all other software installed in the cluster. You may have to run this command in a different terminal if the <code>helm</code> command from step 4 has not yet returned to the system prompt. Once running, you can quit this kubectl watch command using Ctrl+C.</p> error no pod named gpudb-0 <p>If you receive an error message running <code>kubectl -n gpudb get po gpudb-0 -w</code> informing you that no pod named <code>gpudb-0</code> exists, please check that the OpenLDAP pod is running by running</p> Check OpenLDAP status<pre><code>kubectl -n gpudb get pods\nkubectl -n gpudb describe pod openldap-5f87f77c8b-trpmf\n</code></pre> <p>where the pod name <code>openldap-5f87f77c8b-trpmf</code> is that shown when running <code>kubectl -n gpudb get pods</code></p> <p>Validate if the pod is waiting for its Persistent Volume Claim/Persistent Volume to be created and bound to the pod.</p>","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/#accessing-the-kinetica-installation","title":"Accessing the Kinetica installation","text":"","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/#target-platform-specifics","title":"Target Platform Specifics","text":"cloud / OpenShift / local - dev / bare metal - prod <p>If you are installing into a managed Kubernetes environment and the NGINX ingress controller that is installed as part of this install creates a LoadBalancer service, you may need to associate the LoadBalancer with the domain you plan to use.</p> <p>As of now, the kinetica-operator chart installs NGINX ingress controller.
So after the installation is complete, you may need to edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name.</p> <p>Option 1: Use the LoadBalancer domain Set your FQDN in Kinetica<pre><code>kubectl get svc -n kinetica-system\n# look at the loadbalancer dns name, copy it\n\nkubectl -n gpudb edit $(kubectl -n gpudb get kc -o name)\n# replace local.kinetica with the loadbalancer dns name\nkubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)\n# replace local.kinetica with the loadbalancer dns name\n# save and exit\n# you should be able to access the workbench from the loadbalancer dns name\n</code></pre></p> <p>Option 2: Use your custom domain Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.</p> <p>OpenShift Container Platform version 4 is supported. If you are installing on this flavor of Kubernetes, SecurityContextConstraints are required for some of the Kinetica components. To install these add the following set to the main Helm install kinetica-operators command above:</p> <pre><code>--set openshift=\"true\"\n</code></pre> <p>Note</p> <p>The defaultStorageClass must still be set for installation to proceed. Run <code>oc get sc</code> to determine available choices.</p> <p>If you are installing on a local machine which does not have a domain name, you can add the following entry to your <code>/etc/hosts</code> file or equivalent:</p> Configure local access - /etc/hosts<pre><code>127.0.0.1 local.kinetica\n</code></pre> <p>Note</p> <p>The default chart configuration points to <code>local.kinetica</code> but this is configurable.</p> <p>Installing on bare metal machines which do not have an external hardware loadbalancer requires an Ingress controller along with a software loadbalancer in order to be accessible. </p> <p>Kinetica for Kubernetes has been tested with kube-vip</p>","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/","title":"Installation - CPU with GPU Acceleration","text":"<p>For managed Kubernetes solutions (AKS, EKS, GKE), OpenShift or on-prem (kubeadm) Kubernetes variants, follow this generic guide to install the Kinetica Operators, Database and Workbench.</p> <p>Preparation & Prerequisites</p> <p>Please make sure you have followed the Preparation & Prerequisites steps</p>","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#install-via-the-kinetica-operators-helm-chart","title":"Install via the <code>kinetica-operators</code> Helm Chart","text":"","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#gpu-cluster-with-remote-sqlassistant","title":"GPU Cluster with Remote SQLAssistant","text":"<p>Run the following Helm install command after substituting values from section 3</p> Helm install kinetica-operators (No On-Prem SQLAssistant)<pre><code>helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n</code></pre>","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#gpu-cluster-with-on-prem-sqlassistant","title":"GPU Cluster with On-Prem SQLAssistant","text":"<p>or to enable SQLAssistant to be deployed and run 'On-Prem' i.e.
in the same cluster</p> Helm install kinetica-operators (With On-Prem SQLAssistant)<pre><code>helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\" \\\n--set db.gpudbCluster.config.ai.apiProvider=\"kineticallm\"\n</code></pre> <p>On-Prem Kinetica SQLAssistant - Node Groups, GPU Counts & VRAM Memory</p> <p>Running the Kinetica SQLAssistant locally requires additional GPUs to be available in a separate Node Group labeled <code>app.kinetica.com/pool=compute-llm</code>. In order for the On-Prem Kinetica LLM to run it requires 40GB of GPU VRAM; therefore, the number of GPUs automatically allocated to the SQLAssistant pod will ensure that the 40GB VRAM is available, e.g. 1x A100 GPU or 2x A10G GPU. </p> Label Kubernetes Nodes for LLM<pre><code>kubectl label node k8snode3 app.kinetica.com/pool=compute-llm\n</code></pre>","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#check-installation-progress","title":"Check installation progress","text":"<p>After a few moments, follow the progression of the main database pod startup with:</p> Monitor the Kinetica installation progress<pre><code>kubectl -n gpudb get po gpudb-0 -w\n</code></pre> <p>until it reaches <code>\"gpudb-0 3/3 Running\"</code> at which point the database should be ready and all other software installed in the cluster. You may have to run this command in a different terminal if the <code>helm</code> command from step 4 has not yet returned to the system prompt. Once running, you can quit this kubectl watch command using Ctrl+C.</p> error no pod named gpudb-0 <p>If you receive an error message running <code>kubectl -n gpudb get po gpudb-0 -w</code> informing you that no pod named <code>gpudb-0</code> exists, please check that the OpenLDAP pod is running by running</p> Check OpenLDAP status<pre><code>kubectl -n gpudb get pods\nkubectl -n gpudb describe pod openldap-5f87f77c8b-trpmf\n</code></pre> <p>where the pod name <code>openldap-5f87f77c8b-trpmf</code> is that shown when running <code>kubectl -n gpudb get pods</code></p> <p>Validate if the pod is waiting for its Persistent Volume Claim/Persistent Volume to be created and bound to the pod.</p>","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#accessing-the-kinetica-installation","title":"Accessing the Kinetica installation","text":"","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#target-platform-specifics","title":"Target Platform Specifics","text":"cloud / OpenShift / local - dev / bare metal - prod <p>If you are installing into a managed Kubernetes environment and the NGINX ingress controller that is installed as part of this install creates a LoadBalancer service, you may need to associate the LoadBalancer with the domain you plan to use.</p> <p>As of now, the kinetica-operator chart installs NGINX ingress controller.
So after the installation is complete, you may need to edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name.</p> <p>Option 1: Use the LoadBalancer domain Set your FQDN in Kinetica<pre><code>kubectl get svc -n kinetica-system\n# look at the loadbalancer dns name, copy it\n\nkubectl -n gpudb edit $(kubectl -n gpudb get kc -o name)\n# replace local.kinetica with the loadbalancer dns name\nkubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)\n# replace local.kinetica with the loadbalancer dns name\n# save and exit\n# you should be able to access the workbench from the loadbalancer dns name\n</code></pre></p> <p>Option 2: Use your custom domain Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.</p> <p>OpenShift Container Platform version 4 is supported. If you are installing on this flavor of Kubernetes, SecurityContextConstraints are required for some of the Kinetica components. To install these add the following set to the main Helm install kinetica-operators command above:</p> <pre><code>--set openshift=\"true\"\n</code></pre> <p>Note</p> <p>The defaultStorageClass must still be set for installation to proceed. Run <code>oc get sc</code> to determine available choices.</p> <p>If you are installing on a local machine which does not have a domain name, you can add the following entry to your <code>/etc/hosts</code> file or equivalent:</p> Configure local access - /etc/hosts<pre><code>127.0.0.1 local.kinetica\n</code></pre> <p>Note</p> <p>The default chart configuration points to <code>local.kinetica</code> but this is configurable.</p> <p>Installing on bare metal machines which do not have an external hardware loadbalancer requires an Ingress controller along with a software loadbalancer in order to be accessible.</p> <p>Kinetica for Kubernetes has been tested with kube-vip</p>","tags":["Installation"]},{"location":"GettingStarted/local_kinetica_etc_hosts/","title":"Local kinetica etc hosts","text":"<p>FQDN or Local Access</p> <p>By default we create an ingress pointing towards <code>local.kinetica</code>. If you have a domain pointing to your machine, replace/set the FQDN in the <code>values.yaml</code> with the correct domain name or set it via <code>--set</code>.</p> <p>If you are on a local machine which does not have a domain name, add the following entry to your <code>/etc/hosts</code> file or equivalent.</p> Configure local access - /etc/hosts<pre><code>127.0.0.1 local.kinetica\n</code></pre>"},{"location":"GettingStarted/note_additional_gpu_sqlassistant/","title":"Note additional gpu sqlassistant","text":"<p>On-Prem Kinetica SQLAssistant - Node Groups, GPU Counts & VRAM</p> <p>Running the Kinetica SQLAssistant locally requires additional GPUs to be available in a separate Node Group labeled <code>app.kinetica.com/pool=compute-llm</code>. The On-Prem Kinetica LLM requires 40GB of GPU VRAM to run; the number of GPUs automatically allocated to the SQLAssistant pod will therefore ensure that 40GB of VRAM is available, e.g. 1x A100 GPU or 2x A10G GPU. 
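For context, a single A100 provides 40GB (or 80GB) of VRAM while an A10G provides 24GB, which is why one A100 or two A10Gs satisfy the 40GB requirement. 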
</p> Label Kubernetes Nodes for LLM<pre><code>kubectl label node k8snode3 app.kinetica.com/pool=compute-llm\n</code></pre>"},{"location":"GettingStarted/preparation_and_prerequisites/","title":"Preparation & Prerequisites","text":"<p>Checks & steps to ensure a smooth installation.</p> <p>Obtain a Kinetica License Key</p> <p>A product license key will be required for install. Please contact Kinetica Support to request a trial key.</p> <p>Failing to provide a license key at installation time will prevent the DB from starting.</p>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#preparation-and-prerequisites","title":"Preparation and prerequisites","text":"<p>Free Resources</p> <p>Your Kubernetes cluster version should be >= 1.22.x and have a minimum of 8 CPUs, 8GB RAM and SSD or SATA 7200RPM hard drive(s) with 4x the memory capacity.</p> GPU Support <p>For GPU enabled clusters the cards below have been tested in large-scale production environments and provide the best performance for the database.</p> GPU / Driver: P4/P40/P100: 525.X (or higher); V100: 525.X (or higher); T4: 525.X (or higher); A10/A40/A100: 525.X (or higher)","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#kubernetes-cluster-connectivity","title":"Kubernetes Cluster Connectivity","text":"<p>Installation requires Helm3, the Kubernetes CLI kubectl, and access to an on-prem or CSP managed Kubernetes cluster.</p> <p>The context for the desired target cluster must be selected from your <code>~/.kube/config</code> file and set via the <code>KUBECONFIG</code> environment variable or <code>kubectl ctx</code> (if installed). Check to see if you have the correct context with,</p> show the current kubernetes context<pre><code>kubectl config current-context\n</code></pre> <p>and that you can access this cluster correctly with,</p> list kubernetes cluster nodes<pre><code>kubectl get nodes\n</code></pre> Get Nodes <p></p> <p>If you do not see a list of nodes for your K8s cluster, the helm installation will not work. 
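If the wrong cluster is selected, you can list the available contexts and switch to the correct one, for example (the context name shown here is illustrative): Switch Kubernetes context<pre><code>kubectl config get-contexts\nkubectl config use-context my-target-cluster\n</code></pre> 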
Please check your Kubernetes installation or access credentials (kubeconfig).</p> Kinetica Images for an Air-Gapped Environment <p>If you are installing Kinetica with Helm in an air-gapped environment you will either need a Registry Proxy to pass the requests through or to download the images and push them to your internal Registry.</p> <p>For information on ways to transfer the files into an air-gapped environment, see here.</p>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#required-container-images","title":"Required Container Images","text":"","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#dockerio-required-kinetica-images-for-all-installations","title":"docker.io (Required Kinetica Images for All Installations)","text":"<ul> <li>docker.io/kinetica/kinetica-k8s-operator:{{kinetica_full_version}}<ul> <li>docker.io/kinetica/kinetica-k8s-cpu:{{kinetica_full_version}} or</li> <li>docker.io/kinetica/kinetica-k8s-cpu-avx512:{{kinetica_full_version}} or</li> <li>docker.io/kinetica/kinetica-k8s-gpu:{{kinetica_full_version}}</li> </ul> </li> <li>docker.io/kinetica/workbench-operator:{{kinetica_full_version}}</li> <li>docker.io/kinetica/workbench:{{kinetica_full_version}}</li> <li>docker.io/kinetica/kinetica-k8s-monitor:{{kinetica_full_version}}</li> <li>docker.io/kinetica/busybox:{{kinetica_full_version}}</li> <li>docker.io/kinetica/fluent-bit:{{kinetica_full_version}}</li> <li>docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga</li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#nvcrio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"nvcr.io (Required Kinetica Images for GPU Installations using <code>kinetica-k8s-gpu</code>)","text":"<ul> <li>nvcr.io/nvidia/gpu-operator:v23.9.1</li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#registryk8sio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"registry.k8s.io (Required Kinetica Images for GPU Installations using <code>kinetica-k8s-gpu</code>)","text":"<ul> <li>registry.k8s.io/nfd/node-feature-discovery:v0.14.2</li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#dockerio-required-supporting-images","title":"docker.io (Required Supporting Images)","text":"<ul> <li>docker.io/bitnami/openldap:2.6.7</li> <li>docker.io/alpine/openssl:latest (used by bitnami/openldap)</li> <li>docker.io/otel/opentelemetry-collector-contrib:0.95.0</li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#quayio-required-supporting-images","title":"quay.io (Required Supporting Images)","text":"<ul> <li>quay.io/brancz/kube-rbac-proxy:v0.14.2</li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#optional-container-images","title":"Optional Container Images","text":"<p>These images are only required if certain features are enabled as part of the Helm installation: -</p> <ul> <li>CertManager</li> <li>ingress-nginx</li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#quayio-optional-supporting-images","title":"quay.io (Optional Supporting Images)","text":"<ul> <li>quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager 
via Kinetica Helm Chart)</li> <li>quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> <li>quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)</li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#registryk8sio-optional-supporting-images","title":"registry.k8s.io (Optional Supporting Images)","text":"<ul> <li>registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing Ingress nGinx via Kinetica Helm Chart)</li> <li>registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c</li> </ul>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#which-kinetica-core-image-do-i-use","title":"Which Kinetica Core Image do I use?","text":"Container Image Intel (AMD64) Intel (AMD64 AVX512) AMD (AMD64) Graviton (aarch64) Apple Silicon (aarch64) kinetica-k8s-cpu (1) kinetica-k8s-cpu-avx512 kinetica-k8s-gpu (2) (2) (2) <ol> <li>It is preferable on an Intel AVX512 enabled CPU to use the kinetica-k8s-cpu-avx512 container image</li> <li>With a supported NVIDIA GPU.</li> </ol>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#label-the-kubernetes-nodes","title":"Label the Kubernetes Nodes","text":"<p>Kinetica requires some of the Kubernetes Nodes to be labeled as it splits some of the components into different deployment 'pools'. This enables different physical node types to be present in the Kubernetes Cluster allowing us to target which Kinetica components go where.</p> <p>e.g. for a GPU installation some nodes in the cluster will have GPUs and others are CPU only. We can put the DB on the GPU nodes and our infrastructure components on CPU only nodes.</p> cpu gpu <p>The Kubernetes cluster nodes selected to host the Kinetica infrastructure pods i.e. non-DB Pods require the following label <code>app.kinetica.com/pool=infra</code>.</p> <p></p> Label the Infrastructure Nodes<pre><code> kubectl label node k8snode1 app.kinetica.com/pool=infra\n</code></pre> <p>whilst the Kubernetes cluster nodes selected to host the Kinetica DB Pods require the following label <code>app.kinetica.com/pool=compute</code>.</p> Label the Database Nodes<pre><code> kubectl label node k8snode2 app.kinetica.com/pool=compute\n</code></pre> <p>The Kubernetes cluster nodes selected to host the Kinetica infrastructure pods i.e. non-DB Pods require the following label <code>app.kinetica.com/pool=infra</code>.</p> <p></p> Label the Infrastructure Nodes<pre><code> kubectl label node k8snode1 app.kinetica.com/pool=infra\n</code></pre> <p>whilst the Kubernetes cluster nodes selected to host the Kinetica DB Pods require the following label <code>app.kinetica.com/pool=compute-gpu</code>.</p> Label the Database Nodes<pre><code> kubectl label node k8snode2 app.kinetica.com/pool=compute-gpu\n</code></pre> <p>On-Prem Kinetica SQLAssistant - Node Groups, GPU Counts & VRAM</p> <p>Running the Kinetica SQLAssistant locally requires additional GPUs to be available in a separate Node Group labeled <code>app.kinetica.com/pool=compute-llm</code>. The On-Prem Kinetica LLM requires 40GB of GPU VRAM to run; the number of GPUs automatically allocated to the SQLAssistant pod will therefore ensure that 40GB of VRAM is available, e.g. 1x A100 GPU or 2x A10G GPU. 
</p> Label Kubernetes Nodes for LLM<pre><code>kubectl label node k8snode3 app.kinetica.com/pool=compute-llm\n</code></pre> <p>Pods Not Scheduling</p> <p>If the Kubernetes nodes are not labeled, Kinetica pods may fail to schedule and sit in a 'Pending' state.</p>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#install-the-kinetica-operators-chart","title":"Install the kinetica-operators chart","text":"<p>This chart will install the Kinetica K8s operators together with a default configured database and workbench UI.</p>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#add-the-kinetica-chart-repository","title":"Add the Kinetica chart repository","text":"<p>Add the repo locally as kinetica-operators:</p> Helm repo add<pre><code>helm repo add kinetica-operators https://kineticadb.github.io/charts/latest\n</code></pre> Helm Repo Add <p></p>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#obtain-the-default-helm-values-file","title":"Obtain the default Helm values file","text":"<p>For the generic Kubernetes install use the following values file without modification. Advanced users with specific requirements may need to adjust parameters in this file.</p> Helm values.yaml download<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/72.2.5/kinetica-operators/values.onPrem.k8s.yaml\n</code></pre>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#determine-the-following-prior-to-the-chart-install","title":"Determine the following prior to the chart install","text":"<p>Default Admin User</p> <p>The default admin user in the Helm chart is <code>kadmin</code> but this is configurable. Non-ASCII characters and typographical symbols in the password must be escaped with a \"\\\". For example, <code>--set dbAdminUser.password=\"MyPassword\\!\"</code></p> <ol> <li>Obtain a LICENSE-KEY as described in the introduction above.</li> <li>Choose a PASSWORD for the initial administrator user</li> <li>As the storage class name varies between K8s flavors and/or there can be multiple, this must be prescribed in the chart installation. Obtain the DEFAULT-STORAGE-CLASS name with the command:</li> </ol> <p></p> Find the default storageclass<pre><code>kubectl get sc -o name \n</code></pre> List StorageClass <p></p> <p>Use the name found after the /. For example, in <code>storageclass.storage.k8s.io/local-path</code> use \"local-path\" as the parameter.</p> Amazon EKS <p>If installing on Amazon EKS see here</p>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#planning-access-to-your-kinetica-cluster","title":"Planning access to your Kinetica Cluster","text":"Existing Ingress Controller? 
<p>If you have an existing Ingress Controller in your Kubernetes cluster and do not want Kinetica to install an <code>ingress-nginx</code> to expose its endpoints then you can disable <code>ingress-nginx</code> installation in the <code>values.yaml</code> by editing the file and changing <code>install: true</code> to <code>install: false</code>: -</p> YAML<pre><code>nodeSelector: {}\ntolerations: []\naffinity: {}\n\ningressNginx:\n  install: false\n</code></pre>","tags":["Getting Started","Installation"]},{"location":"GettingStarted/quickstart/","title":"Quickstart","text":"<p>For the quickstart we have examples for Kind or k3s.</p> <ul> <li>Kind - is suitable for CPU only installations.</li> <li>k3s - is suitable for CPU or GPU installations.</li> </ul> <p>Kubernetes >= 1.25</p> <p>The current version of the chart supports kubernetes version 1.25 and above.</p>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#please-select-your-target-kubernetes-variant","title":"Please select your target Kubernetes variant:","text":"kind k3s <p>Default User</p> <p>Username as per the values file mentioned above is <code>kadmin</code> and password is <code>Kinetica1234!</code></p>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#kind-kubernetes-in-docker-kindsigsk8sio","title":"Kind (kubernetes in docker kind.sigs.k8s.io)","text":"<p>This installation in a kind cluster is for trying out the operators and the database in a non-production environment.</p> <p>CPU Only</p> <p>This method currently only supports installing a CPU version of the database.</p> <p>Please contact Kinetica Support to request a trial key.</p>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#create-kind-cluster-129","title":"Create Kind Cluster 1.29","text":"Create a new Kind Cluster<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/72.2.5/kinetica-operators/kind.yaml\nkind create cluster --name kinetica --config kind.yaml\n</code></pre> List Kind clusters<pre><code> kind get clusters\n</code></pre> <p>Set Kubernetes Context</p> <p>Please set your Kubernetes Context to <code>kind-kinetica</code> before performing the following steps. </p>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#kind-install-kinetica-operators-including-a-sample-db-to-try-out","title":"Kind - Install kinetica-operators including a sample db to try out","text":"<p>Review the values file charts/kinetica-operators/values.onPrem.kind.yaml. This installs the operators and a simple DB with Workbench for a non-production try out.</p> <p>It creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.</p>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#kind-install-the-kinetica-operators-chart","title":"Kind - Install the Kinetica-Operators Chart","text":"Add Kinetica Operators Chart Repo<pre><code>helm repo add kinetica-operators https://kineticadb.github.io/charts/latest\n</code></pre> <p>FQDN or Local Access</p> <p>By default we create an ingress pointing towards <code>local.kinetica</code>. 
If you have a domain pointing to your machine, replace/set the FQDN in the <code>values.yaml</code> with the correct domain name or set it via <code>--set</code>.</p> <p>If you are on a local machine which does not have a domain name, add the following entry to your <code>/etc/hosts</code> file or equivalent.</p> Configure local access - /etc/hosts<pre><code>127.0.0.1 local.kinetica\n</code></pre> Get & install the Kinetica-Operators Chart<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/72.2.5/kinetica-operators/values.onPrem.kind.yaml\n\nhelm -n kinetica-system upgrade -i kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n</code></pre> <p>or if you have been asked by the Kinetica Support team to try a development version</p> Using a development version<pre><code>helm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system upgrade -i kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 72.2.5\n</code></pre> <p>Accessing the Workbench</p> <p>You should be able to access the workbench at http://local.kinetica</p>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#k3s-k3sio","title":"k3s (k3s.io)","text":"","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#install-k3s-129","title":"Install k3s 1.29","text":"Install k3s<pre><code>curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable=traefik --node-name kinetica-master --token 12345\" K3S_KUBECONFIG_OUTPUT=~/.kube/config_k3s K3S_KUBECONFIG_MODE=644 INSTALL_K3S_VERSION=v1.29.2+k3s1 sh -\n</code></pre> <p>Once installed we need to set the current Kubernetes context to point to the newly created k3s cluster.</p> <p>Select if you want local or remote access to the Kubernetes Cluster: -</p> Local AccessRemote Access <p>For only local access to the cluster we can simply set the <code>KUBECONFIG</code> environment variable</p> Set kubectl context<pre><code>export KUBECONFIG=/etc/rancher/k3s/k3s.yaml\n</code></pre> <p>For remote access i.e. outside the host/VM k3s is installed on: -</p> <p>Copy <code>/etc/rancher/k3s/k3s.yaml</code> to <code>~/.kube/config</code> on your machine located outside the cluster. Then edit the file and replace the value of the server field with the IP or name of your K3s server.</p> Copy the kube config and set the context<pre><code>sudo chmod 600 /etc/rancher/k3s/k3s.yaml\nmkdir -p ~/.kube\nsudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config\nsudo chown \"${USER:=$(/usr/bin/logname)}:$USER\" ~/.kube/config\n# Edit the ~/.kube/config server field with the IP or name of your K3s server here\nexport KUBECONFIG=~/.kube/config\n</code></pre>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#k3s-install-kinetica-operators-including-a-sample-db-to-try-out","title":"K3s - Install kinetica-operators including a sample db to try out","text":"<p>Review the values file <code>charts/kinetica-operators/values.onPrem.k3s.yaml</code>. 
This installs the operators and a simple DB with Workbench for a non-production try out.</p> <p>FQDN or Local Access</p> <p>By default we create an ingress pointing towards <code>local.kinetica</code>. If you have a domain pointing to your machine, replace/set the FQDN in the <code>values.yaml</code> with the correct domain name or set it via <code>--set</code>.</p> <p>If you are on a local machine which does not have a domain name, add the following entry to your <code>/etc/hosts</code> file or equivalent.</p> Configure local access - /etc/hosts<pre><code>127.0.0.1 local.kinetica\n</code></pre>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#k3s-install-the-kinetica-operators-chart-cpu","title":"K3S - Install the Kinetica-Operators Chart (CPU)","text":"Add Kinetica Operators Chart Repo<pre><code>helm repo add kinetica-operators https://kineticadb.github.io/charts/latest\n</code></pre> Download Template values.yaml<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/72.2.5/kinetica-operators/values.onPrem.k3s.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n</code></pre> <p>or if you have been asked by the Kinetica Support team to try a development version</p> Using a development version<pre><code>helm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n</code></pre>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#k3s-install-the-kinetica-operators-chart-gpu","title":"K3S - Install the Kinetica-Operators Chart (GPU)","text":"<p>If you wish to try out the GPU capabilities, you can use the following values file, provided you are on an NVIDIA GPU capable machine.</p> k3s GPU Installation<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/72.2.5/kinetica-operators/values.onPrem.k3s.gpu.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.gpu.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n</code></pre> <p>Accessing the Workbench</p> <p>You should be able to access the workbench at http://local.kinetica</p>","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#uninstall-k3s","title":"Uninstall k3s","text":"uninstall k3s<pre><code>/usr/local/bin/k3s-uninstall.sh\n</code></pre>","tags":["Development","Getting Started","Installation"]},{"location":"Help/changing_the_fqdn/","title":"How to change the Cluster's FQDN","text":"","tags":["Configuration","Support"]},{"location":"Help/changing_the_fqdn/#coming-soon","title":"Coming Soon","text":"","tags":["Configuration","Support"]},{"location":"Help/faq/","title":"Frequently Asked Questions","text":"","tags":["Support"]},{"location":"Help/faq/#coming-soon","title":"Coming Soon","text":"","tags":["Support"]},{"location":"Help/help_and_tutorials/","title":"Help & Tutorials","text":"<ul> <li> <p> Tutorials</p> <p> Tutorials</p> </li> <li> <p> Help</p> <p> Help</p> </li> 
</ul>","tags":["Support"]},{"location":"Help/help_and_tutorials/#coming-soon","title":"Coming Soon","text":"","tags":["Support"]},{"location":"Help/help_index/","title":"Creating Users, Roles, Schemas and other Kinetica DB Objects","text":"","tags":["Support"]},{"location":"Help/help_index/#coming-soon","title":"Coming Soon","text":"","tags":["Support"]},{"location":"Monitoring/logs/","title":"Log Collection & Display","text":"<p>It is possible to forward/serve the Kinetica on Kubernetes logs via an OpenTelemetry [OTEL] collector.</p> <p>By default an OpenTelemetry Collector is deployed in the <code>kinetica-system</code> namespace as part of the Helm install of the kinetica-operators Helm chart along with a Kubernetes ConfigMap to configure this collector. The ConfigMap is in the <code>kinetica-system</code> namespace and is called <code>otel-collector-conf</code>.</p> <p>Detailed <code>otel-collector-conf</code> setup</p> <p>For more details on the Kinetica installed OTEL Collector please see here.</p> <p>There are many supported mechanisms to expose the logs; here is one possibility: -</p> <ul> <li><code>lokiexporter</code> - Exports data via HTTP to Loki.</li> </ul> <p>Tip</p> <p>For a full list of supported OTEL exporters, including those for GrafanaCloud, AWS, Azure, Logz.io, Splunk and many databases please see here</p>","tags":["Operations","Monitoring"]},{"location":"Monitoring/logs/#lokiexporter-otel-collector-exporter","title":"<code>lokiexporter</code> OTEL Collector Exporter","text":"<p>Exports data via HTTP to Loki.</p> Example Configuration<pre><code>exporters:\n  loki:\n    endpoint: https://loki.example.com:3100/loki/api/v1/push\n    default_labels_enabled:\n      exporter: false\n      job: true\n</code></pre> <p>For full details on configuring the OTEL collector exporter <code>lokiexporter</code> see here.</p>","tags":["Operations","Monitoring"]},{"location":"Monitoring/metrics/","title":"Metrics Collection & Display","text":"<p>It is possible to forward/serve the Kinetica on Kubernetes metrics via an OpenTelemetry [OTEL] collector. </p> <p>By default an OpenTelemetry Collector is deployed in the <code>kinetica-system</code> namespace as part of the Helm install of the kinetica-operators Helm chart along with a Kubernetes ConfigMap to configure this collector. The ConfigMap is in the <code>kinetica-system</code> namespace and is called <code>otel-collector-conf</code>.</p> <p>Detailed <code>otel-collector-conf</code> setup</p> <p>For more details on the Kinetica installed OTEL Collector please see here.</p> <p>There are many supported mechanisms to expose the metrics; here are a few possibilities: -</p> <ul> <li><code>prometheusremotewriteexporter</code> - Prometheus Remote Write Exporter sends OpenTelemetry metrics to Prometheus remote write compatible backends.</li> <li><code>prometheusexporter</code> - allows the metrics to be scraped by a Prometheus server</li> </ul> <p>Tip</p> <p>For a full list of supported OTEL exporters, including those for Grafana Cloud, AWS, Azure and many databases please see here</p>","tags":["Operations","Monitoring"]},{"location":"Monitoring/metrics/#prometheusremotewriteexporter-prometheus-otel-remote-write-exporter","title":"<code>prometheusremotewriteexporter</code> Prometheus OTEL Remote Write Exporter","text":"<p>prometheusremotewriteexporter OTEL Exporter</p> <p>Prometheus Remote Write Exporter sends OpenTelemetry metrics to Prometheus remote write compatible backends such as Cortex, Mimir, and Thanos. 
By default, this exporter requires TLS and offers queued retry capabilities.</p> <p>Warning</p> <p>Non-cumulative monotonic, histogram, and summary OTLP metrics are dropped by this exporter.</p> Example Configuration<pre><code>exporters:\n  prometheusremotewrite:\n    endpoint: \"https://my-cortex:7900/api/v1/push\"\n    external_labels:\n      label_name1: label_value1\n      label_name2: label_value2\n</code></pre> <p>For full details on configuring the OTEL collector exporter <code>prometheusremotewriteexporter</code> see here.</p>","tags":["Operations","Monitoring"]},{"location":"Monitoring/metrics/#prometheusexporter-prometheus-otel-exporter","title":"<code>prometheusexporter</code> Prometheus OTEL Exporter","text":"<p>Exports data in the Prometheus format, which allows it to be scraped by a Prometheus server.</p> Example Configuration<pre><code>exporters:\n  prometheus:\n    endpoint: \"1.2.3.4:1234\"\n    tls:\n      ca_file: \"/path/to/ca.pem\"\n      cert_file: \"/path/to/cert.pem\"\n      key_file: \"/path/to/key.pem\"\n    namespace: test-space\n    const_labels:\n      label1: value1\n      \"another label\": spaced value\n    send_timestamps: true\n    metric_expiration: 180m\n    enable_open_metrics: true\n    add_metric_suffixes: false\n    resource_to_telemetry_conversion:\n      enabled: true\n</code></pre> <p>For full details on configuring the OTEL collector exporter <code>prometheusexporter</code> see here.</p>","tags":["Operations","Monitoring"]},{"location":"Operations/","title":"Operational Management","text":"<ul> <li> <p> Metrics</p> <p>Collecting and storing metrics as time series data. Metrics</p> </li> <li> <p> Logs</p> <p>Log aggregation. Logs</p> </li> <li> <p> Metric & Log Distribution</p> <p>Metrics & Logs can be distributed to other systems using OpenTelemetry. OpenTelemetry</p> </li> <li> <p> Backup & Restore</p> <p>Backup & Restore of the Kinetica DB. Backup & Restore</p> <p>Note</p> <p>This requires Velero to be installed on the Kubernetes Cluster.</p> </li> <li> <p> Reduce Costs</p> <p>Suspend & Resume Kinetica for Kubernetes. Suspend & Resume</p> </li> <li> <p> Database Rebalancing</p> <p>Kinetica for Kubernetes Data Sharding & Rebalancing. Rebalancing</p> </li> </ul>","tags":["Operations"]},{"location":"Operations/backup_and_restore/","title":"Kinetica for Kubernetes Backup & Restore","text":"<p>Kinetica for Kubernetes supports Backup & Restore of the installed Kinetica DB by leveraging Velero, which must be installed into the same Kubernetes cluster in which the <code>kinetica-operators</code> Helm chart is deployed.</p> <p>Velero</p> <p>Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. 
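You can verify whether Velero is already deployed by checking for its pods, e.g. (this assumes Velero's default <code>velero</code> namespace): Check for Velero<pre><code>kubectl get pods -n velero\n</code></pre> 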
You can run Velero with a cloud provider or on-premises.</p> <p>For Velero installation please see here.</p> <p>Velero Installation</p> <p>The <code>kinetica-operators</code> Helm chart does not deploy Velero; it is a prerequisite and must be installed separately before Backup & Restore will work correctly.</p> <p>There are two ways to initiate a Backup or Restore</p> <ul> <li>Workbench Initiated</li> <li>Kubernetes CR Initiated</li> </ul> <p>Preferred Backup/Restore Mechanism</p> <p>The preferred way to Backup or Restore the Kinetica for Kubernetes DB instance is via Workbench.</p>","tags":["Operations"]},{"location":"Operations/backup_and_restore/#workbench-initiated-backup-or-restore","title":"Workbench Initiated Backup or Restore","text":"","tags":["Operations"]},{"location":"Operations/backup_and_restore/#home","title":"> Home","text":"<p>From the Workbench Home page</p> <p></p> <p>we need to select the <code>Manage</code> option from the toolbar.</p> <p></p>","tags":["Operations"]},{"location":"Operations/backup_and_restore/#manage-cluster-overview","title":"> Manage > Cluster > Overview","text":"<p>On the Cluster Overview page select the 'Snapshots' tab</p> <p></p>","tags":["Operations"]},{"location":"Operations/backup_and_restore/#manage-cluster-snapshots","title":"> Manage > Cluster > Snapshots","text":"","tags":["Operations"]},{"location":"Operations/backup_and_restore/#backup","title":"Backup","text":"<p>Select the 'Backup Now' button</p> <p></p> <p>and the backup will start and you will be able to see the progress</p> <p></p>","tags":["Operations"]},{"location":"Operations/backup_and_restore/#restore","title":"Restore","text":"","tags":["Operations"]},{"location":"Operations/backup_and_restore/#kubernetes-cr-initiated-backup-or-restore","title":"Kubernetes CR Initiated Backup or Restore","text":"<p>The Kinetica DB Operator supports two custom CRs </p> <ul> <li><code>KineticaClusterBackup</code></li> <li><code>KineticaClusterRestore</code></li> </ul> <p>which can be used to perform a Backup of the database and a Restore of Kinetica namespaces.</p>","tags":["Operations"]},{"location":"Operations/backup_and_restore/#kineticaclusterbackup-cr","title":"<code>KineticaClusterBackup</code> CR","text":"<p>Submission of a <code>KineticaClusterBackup</code> CR will trigger the Kinetica DB Operator to perform a backup of a Kinetica DB instance.</p> <p>Kinetica DB Offline</p> <p>To perform a database backup the Kinetica DB needs to be suspended so that Velero has access to the necessary disks. The DB will be stopped & restarted automatically by the Kinetica DB Operator as part of the backup process.</p> Example KineticaClusterBackup CR yaml<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaClusterBackup\nmetadata:\n  name: kineticaclusterbackup-sample\n  namespace: gpudb\nspec:\n  includedNamespaces:\n  - gpudb\n</code></pre> <p>The namespace of the backup CR should be different from the namespace the Kinetica DB is running in, i.e. not <code>gpudb</code>. We recommend using the namespace Velero is deployed into.</p> <p>Backup names are unique</p> <p>The name of the <code>KineticaClusterBackup</code> CR must be unique; we therefore suggest including the date + time of the backup in the CR name to ensure uniqueness. 
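For example, a name such as <code>kineticaclusterbackup-20240115-1030</code> (an illustrative name only) encodes the date and time of the backup. 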
Kubernetes CR names have a strict naming format so the specified name must conform to those patterns.</p> <p>For a detailed description of the <code>KineticaClusterBackup</code> CRD see here</p>","tags":["Operations"]},{"location":"Operations/backup_and_restore/#kineticaclusterrestore-cr","title":"<code>KineticaClusterRestore</code> CR","text":"<p>The easiest way to perform a restore of Kinetica for Kubernetes is to simply delete the <code>gpudb</code> namespace from the Kubernetes cluster.</p> Delete the Kinetica DB<pre><code>kubectl delete ns gpudb\n</code></pre> <p>Kinetica DB Offline</p> <p>To perform a database restore the Kinetica DB needs to be suspended so that Velero has access to the necessary disks. The DB will be stopped & restarted automatically by the Kinetica DB Operator as part of the restore process.</p> Example KineticaClusterRestore CR yaml<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaClusterRestore\nmetadata:\n  name: kineticaclusterrestore-sample\n  namespace: gpudb\nspec:\n  backupName: kineticaclusterbackup-sample\n</code></pre> <p>The namespace of the restore CR should be the same as the namespace the <code>KineticaClusterBackup</code> CR was placed in, i.e. not the namespace the Kinetica DB is running in.</p> <p>Restore names are unique</p> <p>The name of the <code>KineticaClusterRestore</code> CR must be unique; we therefore suggest including the date + time of the restore process in the CR name to ensure uniqueness. Kubernetes CR names have a strict naming format so the specified name must conform to those patterns.</p> <p>For a detailed description of the <code>KineticaClusterRestore</code> CRD see here</p>","tags":["Operations"]},{"location":"Operations/otel/","title":"OTEL Integration for Metric & Log Distribution","text":"<p>Helm installed OTEL Collector</p> <p>By default an OpenTelemetry Collector is deployed in the <code>kinetica-system</code> namespace as part of the Helm install of the kinetica-operators Helm chart along with a Kubernetes ConfigMap to configure this collector. The ConfigMap is in the <code>kinetica-system</code> namespace and is called <code>otel-collector-conf</code>.</p> <p>The Kinetica DB Operators send information to an OpenTelemetry collector. 
There are two choices</p> <ul> <li>install an OpenTelemetry collector with the Kinetica Operators Helm chart</li> <li>use an existing provisioned OpenTelemetry collector within the Kubernetes Cluster</li> </ul>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#install-an-opentelemetry-collector-with-the-kinetica-operators-helm-chart","title":"Install an OpenTelemetry collector with the Kinetica Operators Helm chart","text":"<p>To enable the Kinetica Operators Helm Chart to deploy an instance of the OpenTelemetry collector into the <code>kinetica-system</code> namespace you need to set the following configuration in the helm values: -</p>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#todo-add-helm-config-example-here","title":"TODO add Helm Config Example Here","text":"<p>A ConfigMap containing the OTEL collector configuration will be generated so that the necessary <code>receivers</code> and <code>processors</code> sections are correctly set up for a Kinetica DB Cluster.</p> <p>This configuration will: -</p> <p>receivers:</p> <ul> <li>configure a <code>syslog</code> receiver which will receive logs from the Kinetica DB pod.</li> <li>configure a <code>prometheus</code> receiver/scraper which will collect metrics from the Kinetica DB.</li> <li>configure an <code>otlp</code> receiver which will receive trace spans from the Kinetica Operators (Optional).</li> <li>configure the <code>hostmetrics</code> collection of host load & memory usage (Optional).</li> <li>configure the <code>k8s_events</code> collection of Kubernetes Events for the Kinetica namespaces (Optional).</li> </ul> <p>processors:</p> <ul> <li>configure attribute processing to set some useful values</li> <li>configure resource processing to set some useful values</li> </ul>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#syslog-configuration","title":"<code>syslog</code> Configuration","text":"<p>The OpenTelemetry <code>syslogreceiver</code> documentation can be found here.</p>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-receivers-configuration","title":"OTEL Receivers Configuration","text":"YAML<pre><code>receivers:\n  syslog:\n    tcp:\n      listen_address: \"0.0.0.0:9601\"\n    protocol: rfc5424\n</code></pre>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-service-configuration","title":"OTEL Service Configuration","text":"<p>Tip</p> <p>In order to batch pushes of log data upstream you can use the following <code>processors section</code> in the OTEL configuration.</p> YAML<pre><code>processors:\n  batch:\n</code></pre> YAML<pre><code>service:\n  pipelines:\n    logs:\n      receivers: [syslog]\n      processors: [resourcedetection, attributes, resource, batch]\n      exporters: ... 
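      # e.g. exporters: [loki] (illustrative; the name must match an exporter\n      # defined in the top-level exporters section of the collector configuration)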
      # Requires configuring for your environment\n</code></pre>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otlp-configuration","title":"<code>otlp</code> Configuration","text":"<p>The default configuration opens both the OTEL gRPC & HTTP listeners.</p>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-receivers-configuration_1","title":"OTEL Receivers Configuration","text":"YAML<pre><code>receivers:\n  otlp:\n    protocols:\n      grpc:\n        endpoint: \"0.0.0.0:4317\"\n      http:\n        endpoint: \"0.0.0.0:4318\"\n</code></pre>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-service-configuration_1","title":"OTEL Service Configuration","text":"<p>Tip</p> <p>In order to batch pushes of trace data upstream you can use the following <code>processors section</code> in the OTEL configuration.</p> YAML<pre><code>processors:\n  batch:\n</code></pre> YAML<pre><code>service:\n  pipelines:\n    traces:\n      receivers: [otlp]\n      processors: [batch]\n      exporters: ... # Requires configuring for your environment\n</code></pre> <p>exporters</p> <p>The <code>exporters</code> will need to be manually configured to your specific environment e.g. forwarding logs/metrics to Grafana, Azure Monitor, AWS etc.</p> <p>Otherwise the data will 'disappear into the ether' and not be relayed upstream.</p>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#hostmetrics-configuration-optional","title":"<code>hostmetrics</code> Configuration (Optional)","text":"<p>The Host Metrics receiver generates metrics about the host system scraped from various sources. This is intended to be used when the collector is deployed as an agent.</p> <p>The OpenTelemetry <code>hostmetrics</code> documentation can be found here.</p>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-receivers-configuration_2","title":"OTEL Receivers Configuration","text":"<p>hostmetricsreceiver</p> <p>The OTEL <code>hostmetricsreceiver</code> requires that the running OTEL collector is the 'contrib' version.</p> YAML<pre><code>receivers:\n  hostmetrics:\n    # https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/hostmetricsreceiver\n    scrapers:\n      load:\n      memory:\n</code></pre> Grafana <p>The attributes and resource processing enables finer-grained selection using Grafana queries.</p>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#k8s_events-configuration-optional","title":"<code>k8s_events</code> Configuration (Optional)","text":"<p>The Kubernetes Events receiver collects events from the Kubernetes API server. It collects all the new or updated events that come in from the specified namespaces. 
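If no namespaces are specified, the receiver defaults to collecting events from all namespaces. 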
Below we are collecting events from the two default Kinetica namespaces: -</p> YAML<pre><code>receivers:\n  k8s_events:\n    namespaces: [kinetica-system, gpudb]\n</code></pre> <p>The OpenTelemetry <code>k8seventsreceiver</code> documentation can be found here.</p>","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#use-an-existing-provisioned-opentelemetry-collector","title":"Use an existing provisioned OpenTelemetry Collector","text":"","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#coming-soon","title":"Coming Soon","text":"","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/rebalance/","title":"Kinetica for Kubernetes Data Rebalancing","text":"","tags":["Operations"]},{"location":"Operations/rebalance/#coming-soon","title":"Coming Soon","text":"","tags":["Operations"]},{"location":"Operations/suspend_resume/","title":"Kinetica for Kubernetes Suspend & Resume","text":"<p>It is possible to suspend Kinetica for Kubernetes, which spins down the DB.</p> <p>Infrastructure</p> <p>For each deployment of Kinetica for Kubernetes there are two distinct types of pods: -</p> <ul> <li>'Compute' pods containing the Kinetica DB along with the Stats Pod</li> <li>'Infra' pods containing the supporting apps, e.g. Workbench, OpenLDAP etc, and the Kinetica Operators.</li> </ul> <p>Whilst Kinetica for Kubernetes is in the <code>Suspended</code> state only the 'Compute' pods are scaled down. The 'Infra' pods remain running in order for Workbench to be able to log in, backup, restore and, in this case, resume the suspended system.</p> <p>There are three discrete ways to suspend and resume Kinetica for Kubernetes: -</p> <ul> <li>Manually from Workbench</li> <li>Auto-Suspend set in Workbench or from the Helm installation Chart.</li> <li>Manually using a Kubernetes CR</li> </ul>","tags":["Operations"]},{"location":"Operations/suspend_resume/#suspend-manually-from-workbench","title":"Suspend - Manually from Workbench","text":"","tags":["Operations"]},{"location":"Operations/suspend_resume/#suspend-auto-suspend","title":"Suspend - Auto-Suspend","text":"","tags":["Operations"]},{"location":"Operations/suspend_resume/#suspend-manually-using-a-kubernetes-cr","title":"Suspend - Manually using a Kubernetes CR","text":"","tags":["Operations"]},{"location":"Operators/k3s/","title":"Overview","text":"<p>Kinetica Operators can be installed in any on-prem kubernetes cluster. This document provides instructions to install the operators in k3s. If you are on another distribution, you should be able to change the values file to suit your environment.</p> <p>You will need a license key for this to work. Please contact Kinetica Support.</p>"},{"location":"Operators/k3s/#kinetica-on-k3s-k3sio","title":"Kinetica on k3s (k3s.io)","text":"<p>The current version of the chart supports kubernetes version 1.25 and above.</p>"},{"location":"Operators/k3s/#install-k3s-129","title":"Install k3s 1.29","text":"Bash<pre><code>curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable=traefik --node-name kinetica-master --token 12345\" K3S_KUBECONFIG_OUTPUT=~/.kube/config_k3s K3S_KUBECONFIG_MODE=644 INSTALL_K3S_VERSION=v1.29.2+k3s1 sh -\n</code></pre>"},{"location":"Operators/k3s/#k3s-install-kinetica-operators-including-a-sample-db-to-try-out","title":"K3s - Install kinetica-operators including a sample db to try out","text":"<p>Review the values file charts/kinetica-operators/values.onPrem.k3s.yaml. 
This installs the operators and a simple DB with Workbench for a non-production try out.</p> <p>It creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.</p> <p>If you are on a local machine which does not have a domain name, add the following entry to your /etc/hosts file or equivalent.</p> Text Only<pre><code>127.0.0.1 local.kinetica\n</code></pre>"},{"location":"Operators/k3s/#k3s-install-the-kinetica-operators-chart","title":"K3s - Install the kinetica-operators chart","text":"Bash<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n</code></pre>"},{"location":"Operators/k3s/#k3s-install-the-kinetica-operators-chart-gpu-capable-machine","title":"K3s - Install the kinetica-operators chart (GPU Capable Machine)","text":"<p>If you wish to try out the GPU capabilities, you can use the following values file, provided you are on an NVIDIA GPU capable machine.</p> Bash<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.gpu.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.gpu.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n</code></pre> <p>You should be able to access the workbench at http://local.kinetica</p> <p>Username as per the values file mentioned above is kadmin and password is Kinetica1234!</p>"},{"location":"Operators/k8s/","title":"Overview","text":"<p>For managed Kubernetes solutions (AKS, EKS, GKE) or other on-prem K8s flavors, follow this generic guide to install the Kinetica Operators, Database and Workbench. A product license key will be required for install. Please contact Kinetica Support to request a trial key.</p>"},{"location":"Operators/k8s/#preparation-and-prerequisites","title":"Preparation and prerequisites","text":"<p>Installation requires Helm3 and access to an on-prem or CSP managed Kubernetes cluster. kubectl is optional but highly recommended. The context for the desired target cluster must be selected from your <code>~/.kube/config</code> file or set via the <code>KUBECONFIG</code> environment variable. Check to see if you have the correct context with,</p> Bash<pre><code>kubectl config current-context\n</code></pre> <p>and that you can access this cluster correctly with,</p> Bash<pre><code>kubectl get nodes\n</code></pre> <p>If you do not see a list of nodes for your K8s cluster, the helm installation will not work. 
Please check your Kubernetes installation or access credentials (kubeconfig).</p>"},{"location":"Operators/k8s/#install-the-kinetica-operators-chart","title":"Install the kinetica-operators chart","text":"<p>This chart will install the Kinetica K8s operators together with a default configured database and workbench UI.</p> <p>If you are installing into a managed Kubernetes environment and the NGINX ingress controller that is installed as part of this install creates a LoadBalancer service, you may need to associate the LoadBalancer with the domain you plan to use.</p> <p>Alternatively, if you are installing on a local machine which does not have a domain name, you can add the following entry to your <code>/etc/hosts</code> file or equivalent:</p> Bash<pre><code>127.0.0.1 local.kinetica\n</code></pre> <p>Note that the default chart configuration points to <code>local.kinetica</code> but this is configurable.</p>"},{"location":"Operators/k8s/#1-add-the-kinetica-chart-repository","title":"1. Add the Kinetica chart repository","text":"<p>Add the repo locally as kinetica-operators:</p> Bash<pre><code>helm repo add kinetica-operators https://kineticadb.github.io/charts\n</code></pre>"},{"location":"Operators/k8s/#2-obtain-the-default-helm-values-file","title":"2. Obtain the default Helm values file","text":"<p>For the generic Kubernetes install use the following values file without modification. Advanced users with specific requirements may need to adjust parameters in this file.</p> Bash<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k8s.yaml\n</code></pre>"},{"location":"Operators/k8s/#3-determine-the-following-prior-to-the-chart-install","title":"3. Determine the following prior to the chart install","text":"<p>(a) Obtain a LICENSE-KEY as described in the introduction above. (b) Choose a PASSWORD for the initial administrator user (Note: the default in the chart for this user is <code>kadmin</code> but this is configurable). Non-ASCII characters and typographical symbols in the password must be escaped with a \"\\\". For example, <code>--set dbAdminUser.password=\"MyPassword\\!\"</code> (c) As the storage class name varies between K8s flavors and/or there can be multiple, this must be prescribed in the chart installation. Obtain the DEFAULT-STORAGE-CLASS name with the command:</p> Bash<pre><code>kubectl get sc -o name \n</code></pre> <p>Use the name found after the /. For example, in <code>\"storageclass.storage.k8s.io/TheName\"</code> use \"TheName\" as the parameter.</p>"},{"location":"Operators/k8s/#4-install-the-helm-chart","title":"4. Install the helm chart","text":"<p>Run the following Helm install command after substituting values from section 3 above:</p> Bash<pre><code>helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n</code></pre>"},{"location":"Operators/k8s/#5-check-installation-progress","title":"5. Check installation progress","text":"<p>After a few moments, follow the progression of the main database pod startup with:</p> Bash<pre><code>kubectl -n gpudb get po gpudb-0 -w\n</code></pre> <p>until it reaches <code>\"gpudb-0 3/3 Running\"</code> at which point the database should be ready and all other software installed in the cluster. 
You may have to run this command in a different terminal if the <code>helm</code> command from step 4 has not yet returned to the system prompt. Once running, you can quit this kubectl watch command using Ctrl+C.</p>"},{"location":"Operators/k8s/#6-accessing-the-kinetica-installation","title":"6. Accessing the Kinetica installation","text":""},{"location":"Operators/k8s/#optional-install-a-development-chart-version","title":"(Optional) Install a development chart version","text":"<p>Find all alternative chart versions with:</p> Bash<pre><code>helm search repo kinetica-operators --devel --versions\n</code></pre> <p>Then append <code>--devel --version [CHART-DEVEL-VERSION]</code> to the end of the Helm install command in section 4 above.</p>"},{"location":"Operators/k8s/#k8s-flavour-specific-notes","title":"K8s Flavour specific notes","text":""},{"location":"Operators/k8s/#eks","title":"EKS","text":""},{"location":"Operators/k8s/#ebs-csi-driver","title":"EBS CSI driver","text":"<p>Make sure you have enabled the ebs-csi driver in your EKS cluster. This is required for the default storage class to work. Please refer to this AWS documentation for more information.</p>"},{"location":"Operators/k8s/#ingress","title":"Ingress","text":"<p>As of now, the kinetica-operator chart installs the NGINX ingress controller. So after the installation is complete, you may need to edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name.</p>"},{"location":"Operators/k8s/#option-1-use-the-loadbalancer-domain","title":"Option 1: Use the LoadBalancer domain","text":"Bash<pre><code>kubectl get svc -n kinetica-system\n# look at the loadbalancer dns name, copy it\n\nkubectl -n gpudb edit $(kubectl -n gpudb get kc -o name)\n# replace local.kinetica with the loadbalancer dns name\nkubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)\n# replace local.kinetica with the loadbalancer dns name\n# save and exit\n# you should be able to access the workbench from the loadbalancer dns name\n</code></pre>"},{"location":"Operators/k8s/#option-1-use-your-custom-domain","title":"Option 2: Use your custom domain","text":"<p>Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.</p>"},{"location":"Operators/kind/","title":"Overview","text":"<p>This installation in a kind cluster is for trying out the operators and the database in a non-production environment. This method currently only supports installing a CPU version of the database.</p> <p>You will need a license key for this to work. Please contact Kinetica Support.</p>"},{"location":"Operators/kind/#kind-kubernetes-in-docker-kindsigsk8sio","title":"Kind (kubernetes in docker kind.sigs.k8s.io)","text":""},{"location":"Operators/kind/#create-kind-cluster-129","title":"Create Kind Cluster 1.29","text":"Bash<pre><code>kind create cluster --config charts/kinetica-operators/kind.yaml\n</code></pre>"},{"location":"Operators/kind/#kind-install-kinetica-operators-including-a-sample-db-to-try-out","title":"Kind - Install kinetica-operators including a sample db to try out","text":"<p>Review the values file charts/kinetica-operators/values.onPrem.kind.yaml. This installs the operators and a simple DB with Workbench for a non-production try out.</p> <p>It creates an ingress pointing towards local.kinetica. 
,{"location":"Operators/kind/","title":"Overview","text":"<p>This installation in a kind cluster is for trying out the operators and the database in a non-production environment. This method currently only supports installing a CPU version of the database.</p> <p>You will need a license key for this to work. Please contact Kinetica Support.</p>"},{"location":"Operators/kind/#kind-kubernetes-in-docker-kindsigsk8sio","title":"Kind (kubernetes in docker kind.sigs.k8s.io)","text":""},{"location":"Operators/kind/#create-kind-cluster-129","title":"Create Kind Cluster 1.29","text":"Bash<pre><code>kind create cluster --config charts/kinetica-operators/kind.yaml\n</code></pre>"},{"location":"Operators/kind/#kind-install-kinetica-operators-including-a-sample-db-to-try-out","title":"Kind - Install kinetica-operators including a sample db to try out","text":"<p>Review the values file charts/kinetica-operators/values.onPrem.kind.yaml. It installs the operators together with a simple database and Workbench, suitable for a non-production trial.</p> <p>It creates an ingress pointing to local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.</p>"},{"location":"Operators/kind/#kind-install-the-kinetica-operators-chart","title":"Kind - Install the kinetica-operators chart","text":"Bash<pre><code>wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.kind.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n</code></pre> <p>You should be able to access the workbench at http://local.kinetica</p> <p>Per the values file mentioned above, the username is kadmin and the password is Kinetica1234!</p>"},{"location":"Operators/kinetica-operators/","title":"Kinetica DB Operator Helm Charts","text":"<p>To install all the required operators with a single command, perform the following: -</p> Bash<pre><code>helm install -n kinetica-system \\\nkinetica-operators kinetica-operators/kinetica-operators --create-namespace\n</code></pre> <p>This will install all the Kubernetes Operators required into the <code>kinetica-system</code> namespace and create the namespace if it is not currently present.</p>
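<p>To verify the operators have been deployed, list the pods in the <code>kinetica-system</code> namespace (a quick sanity check; the exact pod names will vary):</p> Bash<pre><code>kubectl -n kinetica-system get pods\n</code></pre>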
<p>Note</p> <p>Depending on the target platform you are installing to, it may be necessary to supply an additional parameter pointing to a values file to successfully provision the DB.</p> Bash<pre><code>helm install -n kinetica-system -f values.yaml --set provider=aks \\\nkinetica-operators kinetica-operators/kinetica-operators --create-namespace\n</code></pre> <p>The command above uses a custom <code>values.yaml</code> for helm and sets the install platform to Microsoft Azure AKS.</p> <p>Currently supported <code>providers</code> are: -</p> <ul> <li><code>aks</code> - Microsoft Azure AKS</li> <li><code>eks</code> - Amazon AWS EKS</li> <li><code>local</code> - Generic 'On-Prem' Kubernetes Clusters e.g. one deployed using <code>kubeadm</code></li> </ul> <p>Example Helm <code>values.yaml</code> for different Cloud Providers/On-Prem installations: -</p> Azure AKSAmazon EKSOn-Prem values.yaml<pre><code>namespace: kinetica-system\n\ndb:\n serviceAccount: {}\n image:\n # Kinetica DB Operator installer image\n repository: \"registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator\"\n # Kinetica DB Operator installer image tag\n tag: \"\"\n\n parameters:\n # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to\n kubeconfig: \"\"\n # The storage class to use for PVCs\n storageClass: \"managed-premium\"\n\n storageClass:\n persist:\n # Workbench Operator Persistent Volume Storage Class\n provisioner: \"disk.csi.azure.com\"\n procs:\n # Workbench Operator Procs Volume Storage Class\n provisioner: \"disk.csi.azure.com\"\n cache:\n # Workbench Operator Cache Volume Storage Class\n provisioner: \"disk.csi.azure.com\"\n</code></pre> <p>15 <code>storageClass: \"managed-premium\"</code> - sets the appropriate <code>storageClass</code> for Microsoft Azure AKS Persistent Volume (PV)</p> <p>20 <code>provisioner: \"disk.csi.azure.com\"</code> - sets the appropriate disk provisioner for the DB (Persist) filesystem for Microsoft Azure</p> <p>23 <code>provisioner: \"disk.csi.azure.com\"</code> - sets the appropriate disk provisioner for the DB Procs filesystem for Microsoft Azure</p> <p>26 <code>provisioner: \"disk.csi.azure.com\"</code> - sets the appropriate disk provisioner for the DB Cache filesystem for Microsoft Azure</p> values.yaml<pre><code>namespace: kinetica-system\n\ndb:\n serviceAccount: {}\n image:\n # Kinetica DB Operator installer image\n repository: \"registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator\"\n # Kinetica DB Operator installer image tag\n tag: \"\"\n\n parameters:\n # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to\n kubeconfig: \"\"\n # The storage class to use for PVCs\n storageClass: \"gp2\"\n\n storageClass:\n persist:\n # Workbench Operator Persistent Volume Storage Class\n provisioner: \"kubernetes.io/aws-ebs\"\n procs:\n # Workbench Operator Procs Volume Storage Class\n provisioner: \"kubernetes.io/aws-ebs\"\n cache:\n # Workbench Operator Cache Volume Storage Class\n provisioner: \"kubernetes.io/aws-ebs\"\n</code></pre> <p>15 <code>storageClass: \"gp2\"</code> - sets the appropriate <code>storageClass</code> for Amazon EKS Persistent Volume (PV)</p> <p>20 <code>provisioner: \"kubernetes.io/aws-ebs\"</code> - sets the appropriate disk provisioner for the DB (Persist) filesystem for Amazon EKS</p> <p>23 <code>provisioner: \"kubernetes.io/aws-ebs\"</code> - sets the appropriate disk provisioner for the DB Procs filesystem for Amazon EKS</p> <p>26 <code>provisioner: \"kubernetes.io/aws-ebs\"</code> - sets the appropriate disk provisioner for the DB Cache filesystem for Amazon EKS</p> values.yaml<pre><code>namespace: kinetica-system\n\ndb:\n serviceAccount: {}\n image:\n # Kinetica DB Operator installer image\n repository: \"registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator\"\n # Kinetica DB Operator installer image tag\n tag: \"\"\n\n parameters:\n # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to\n kubeconfig: \"\"\n # the type of installation e.g. 
aks, eks, local\n environment: \"local\"\n # The storage class to use for PVCs\n storageClass: \"standard\"\n\n storageClass:\n procs: {}\n persist: {}\n cache: {}\n</code></pre> <p>15 <code>environment: \"local\"</code> - tells the DB Operator to deploy the DB as a 'local' instance to the Kubernetes Cluster</p> <p>17 <code>storageClass: \"standard\"</code> - sets the appropriate <code>storageClass</code> for the On-Prem Persistent Volume Provisioner</p> <p>storageClass</p> <p>The <code>storageClass</code> should be present in the target environment.</p> <p>A list of the available <code>storageClass</code> options can be obtained using: -</p> Bash<pre><code>kubectl get sc\n</code></pre>"},{"location":"Operators/kinetica-operators/#components","title":"Components","text":"<p>The <code>kinetica-operators</code> Helm Chart wraps the deployment of a number of sub-components: -</p> <ul> <li>Porter Operator</li> <li>Kinetica Database Operator</li> <li>Kinetica Workbench Operator</li> </ul> <p>Installation/Upgrading/Deletion of the Kinetica Operators is done via two CRs which leverage porter.sh as the orchestrator. The corresponding Porter Operator, DB Operator & Workbench Operator CRs are submitted by running the appropriate helm command i.e.</p> <ul> <li>install</li> <li>upgrade</li> <li>uninstall</li> </ul>"},{"location":"Operators/kinetica-operators/#porter-operator","title":"Porter Operator","text":""},{"location":"Operators/kinetica-operators/#database-operator","title":"Database Operator","text":"<p>The Kinetica DB Operator installation CR for the porter.sh operator is: -</p> YAML<pre><code>apiVersion: porter.sh/v1\nkind: Installation\nmetadata:\n annotations:\n meta.helm.sh/release-name: kinetica-operators\n meta.helm.sh/release-namespace: kinetica-system\n labels:\n app.kubernetes.io/instance: kinetica-operators\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/name: kinetica-operators\n app.kubernetes.io/version: 0.1.0\n helm.sh/chart: kinetica-operators-0.1.0\n installVersion: 0.38.10\n name: kinetica-operators-operator-install\n namespace: kinetica-system\nspec:\n action: install\n agentConfig:\n volumeSize: '0'\n parameters:\n environment: local\n storageclass: managed-premium\n reference: docker.io/kinetica/kinetica-k8s-operator:v7.1.9-7.rc3\n</code></pre>"},{"location":"Operators/kinetica-operators/#workbench-operator","title":"Workbench Operator","text":"<p>The Kinetica Workbench installation CR for the porter.sh operator is: -</p> YAML<pre><code>apiVersion: porter.sh/v1\nkind: Installation\nmetadata:\n annotations:\n meta.helm.sh/release-name: kinetica-operators\n meta.helm.sh/release-namespace: kinetica-system\n labels:\n app.kubernetes.io/instance: kinetica-operators\n app.kubernetes.io/managed-by: Helm\n app.kubernetes.io/name: kinetica-operators\n app.kubernetes.io/version: 0.1.0\n helm.sh/chart: kinetica-operators-0.1.0\n installVersion: 0.38.10\n name: kinetica-operators-wb-operator-install\n namespace: kinetica-system\nspec:\n action: install\n agentConfig:\n volumeSize: '0'\n parameters:\n environment: local\n reference: docker.io/kinetica/workbench-operator:v7.1.9-7.rc3\n</code></pre>"},{"location":"Operators/kinetica-operators/#overriding-images-tags","title":"Overriding Images Tags","text":"Bash<pre><code>helm install -n kinetica-system kinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--set provider=aks \\\n--set dbOperator.image.tag=v7.1.9-7.rc3 \\\n--set dbOperator.image.repository=docker.io/kinetica/kinetica-k8s-operator \\\n--set wbOperator.image.repository=docker.io/kinetica/workbench-operator \\\n--set wbOperator.image.tag=v7.1.9-7.rc3\n</code></pre>"}
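<p>The same tag overrides can be applied to an existing installation via <code>helm upgrade</code> (a sketch; the tag values shown are illustrative):</p> Bash<pre><code>helm -n kinetica-system upgrade kinetica-operators kinetica-operators/kinetica-operators \\\n--set dbOperator.image.tag=v7.1.9-7.rc3 \\\n--set wbOperator.image.tag=v7.1.9-7.rc3\n</code></pre>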
,{"location":"Reference/","title":"Reference Section","text":"<ul> <li> <p> Kinetica Operators Helm</p> <p>Kinetica Operators Helm charts & values file reference data. Charts</p> </li> <li> <p> Kinetica Core DB CRDs</p> <p>Kinetica DB Kubernetes CRD & ConfigMap reference data. Cluster CRDs</p> </li> <li> <p> Kinetica Workbench CRDs</p> <p>Kinetica Workbench Kubernetes CRD & ConfigMap reference data. Workbench</p> </li> </ul>","tags":["Reference"]},{"location":"Reference/database/","title":"Kinetica Database Configuration","text":"<ul> <li>kubectl (yaml)</li> </ul>","tags":["Reference"]},{"location":"Reference/database/#kineticacluster","title":"KineticaCluster","text":"<p>To deploy a new Database Instance into a Kubernetes cluster...</p> kubectl <p>Using kubectl, a CustomResource of type <code>KineticaCluster</code> is used to define a new Kinetica DB Cluster in a yaml file.</p> <p>The basic Group, Version, Kind or GVK to instantiate a Kinetica DB Cluster is as follows: -</p> kineticacluster.yaml<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\n</code></pre>","tags":["Reference"]},{"location":"Reference/database/#metadata","title":"Metadata","text":"<p>to which we add a <code>metadata:</code> block for the name of the DB CR along with the <code>namespace</code> into which we are targeting the installation of the DB cluster.</p> kineticacluster.yaml<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n name: my-kinetica-db-cr\n namespace: gpudb\nspec:\n</code></pre>","tags":["Reference"]},{"location":"Reference/database/#spec","title":"Spec","text":"<p>Under the <code>spec:</code> section of the KineticaCluster CR we have a number of sections supporting different aspects of the deployed DB cluster: -</p> <ul> <li>gpudbCluster</li> <li>autoSuspend</li> <li>gadmin</li> </ul>","tags":["Reference"]},{"location":"Reference/database/#gpudbcluster","title":"gpudbCluster","text":"<p>Configuration items specific to the DB itself.</p> kineticacluster.yaml - gpudbCluster<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n name: my-kinetica-db-cr\n namespace: gpudb\nspec:\n gpudbCluster:\n</code></pre>","tags":["Reference"]},{"location":"Reference/database/#gpudbcluster_1","title":"gpudbCluster","text":"cluster name & size<pre><code>clusterName: kinetica-cluster \nclusterSize: \n tshirtSize: M \n tshirtType: LargeCPU \nfqdn: kinetica-cluster.saas.kinetica.com\nhaRingName: default\nhasPools: false \n</code></pre> <p><code>1. clusterName</code> - the user defined name of the Kinetica DB Cluster</p> <p><code>2. clusterSize</code> - block that defines the number of DB Ranks to run</p> <p><code>3. tshirtSize</code> - sets the cluster size to a defined size based upon the t-shirt size. Valid sizes are: -</p> <ul> <li><code>XS</code> - 1 DB Rank</li> <li><code>S</code> - 2 DB Ranks</li> <li><code>M</code> - 4 DB Ranks</li> <li><code>L</code> - 8 DB Ranks</li> <li><code>XL</code> - 16 DB Ranks</li> <li><code>XXL</code> - 32 DB Ranks</li> <li><code>XXXL</code> - 64 DB Ranks</li> </ul> <p><code>4. tshirtType</code> - block that defines the type of DB Ranks to run: -</p> <ul> <li><code>SmallCPU</code> - </li> <li><code>LargeCPU</code> -</li> <li><code>SmallGPU</code> - </li> <li><code>LargeGPU</code> -</li> </ul> <p><code>5. fqdn</code> - The fully qualified URL for the DB cluster. Used on the Ingress records for any exposed services.</p> <p><code>6. haRingName</code> - Default: <code>default</code></p> <p><code>7. hasPools</code> - Whether to enable the separate node 'pools' for \"infra\", \"compute\" pod scheduling. Default: false +optional</p>
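<p>Putting the pieces above together, a minimal sketch of a complete CR applied with kubectl (the name, namespace and sizing values are illustrative, taken from the annotated snippet above; this is not necessarily a complete production spec):</p> Bash<pre><code>kubectl apply -f - <<EOF\napiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n  name: my-kinetica-db-cr\n  namespace: gpudb\nspec:\n  gpudbCluster:\n    clusterName: kinetica-cluster\n    clusterSize:\n      tshirtSize: M\n      tshirtType: LargeCPU\nEOF\n</code></pre>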
","tags":["Reference"]},{"location":"Reference/database/#autosuspend","title":"autoSuspend","text":"<p>The DB Cluster autoSuspend section allows for the spinning down of the core DB Pods, releasing the underlying Kubernetes nodes to reduce infrastructure costs when the DB is not in use.</p> kineticacluster.yaml - autoSuspend<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n name: my-kinetica-db-cr\n namespace: gpudb\nspec:\n autoSuspend:\n enabled: false\n inactivityDuration: 1h0m0s\n</code></pre> <p><code>7.</code> the start of the <code>autoSuspend</code> definition</p> <p><code>8.</code> <code>enabled</code> - when set to <code>true</code>, auto suspend of the DB cluster is enabled; when set to <code>false</code>, no automatic suspending of the DB takes place. If omitted it defaults to <code>false</code></p> <p><code>9.</code> <code>inactivityDuration</code> - the duration of DB inactivity after which the DB will be suspended</p> <p>Horizontal Pod Autoscaler</p> <p>In order for <code>autoSuspend</code> to work correctly the Kubernetes Horizontal Pod Autoscaler needs to be deployed to the cluster.</p>","tags":["Reference"]},{"location":"Reference/database/#gadmin","title":"gadmin","text":"<p>GAdmin - the Database Administration Console</p> kineticacluster.yaml - gadmin<pre><code>apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n name: my-kinetica-db-cr\n namespace: gpudb\nspec:\n gadmin:\n containerPort:\n containerPort: 8080\n name: gadmin\n protocol: TCP\n isEnabled: true\n</code></pre> <p><code>7.</code> <code>gadmin</code> configuration block definition</p> <p><code>8.</code> <code>containerPort</code> - configuration block i.e. where <code>gadmin</code> is exposed on the DB Pod</p> <p><code>9.</code> <code>containerPort</code> - the port number as an integer. Default: <code>8080</code></p> <p><code>10.</code> <code>name</code> - the name of the port being exposed. Default: <code>gadmin</code></p> <p><code>11.</code> <code>protocol</code> - network protocol used. Default: <code>TCP</code></p> <p><code>12.</code> <code>isEnabled</code> - whether <code>gadmin</code> is exposed from the DB pod. Default: <code>true</code></p>
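<p>For a quick check without an ingress in place, GAdmin can be reached by port-forwarding to the head node pod (a sketch, assuming the default <code>gpudb</code> namespace and the <code>gpudb-0</code> head pod):</p> Bash<pre><code>kubectl -n gpudb port-forward pod/gpudb-0 8080:8080\n# GAdmin is then reachable at http://localhost:8080\n</code></pre>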
","tags":["Reference"]},{"location":"Reference/database/#kineticauser","title":"KineticaUser","text":"","tags":["Reference"]},{"location":"Reference/database/#kineticagrant","title":"KineticaGrant","text":"","tags":["Reference"]},{"location":"Reference/database/#kineticaschema","title":"KineticaSchema","text":"","tags":["Reference"]},{"location":"Reference/database/#kineticaresourcegroup","title":"KineticaResourceGroup","text":"","tags":["Reference"]},{"location":"Reference/helm_kinetica_operators/","title":"Helm Chart Reference","text":"","tags":["Reference"]},{"location":"Reference/helm_kinetica_operators/#coming-soon","title":"Coming Soon","text":"","tags":["Reference"]},{"location":"Reference/kinetica_cluster_admins/","title":"Kinetica Cluster Admins Reference","text":"","tags":["Reference"]},{"location":"Reference/kinetica_cluster_admins/#full-kineticaclusteradmin-cr-structure","title":"Full KineticaClusterAdmin CR Structure","text":"kineticaclusteradmins.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterAdmin\nmetadata: {}\n# KineticaClusterAdminSpec defines the desired state of\n# KineticaClusterAdmin\nspec:\n # ForceDBStatus - Force a Status of the DB.\n forceDbStatus: string\n # Name - The name of the cluster to target.\n kineticaClusterName: string\n # Offline - Pause/Resume of the DB.\n offline:\n # Set to true if desired state is offline. The supported values are:\n # true false\n offline: false\n # Optional parameters. The default value is an empty map (\n # {} ). Supported Parameters: flush_to_disk Flush to disk when\n # going offline The supported values are: true false\n options: {}\n # Rebalance of the DB.\n rebalance:\n # Optional parameters. The default value is an empty map (\n # {} ). Supported Parameters: rebalance_sharded_data If true,\n # sharded data will be rebalanced approximately equally across the\n # cluster. Note that for clusters with large amounts of sharded\n # data, this data transfer could be time-consuming and result in\n # delayed query responses. The default value is true. The supported\n # values are: true false rebalance_unsharded_data If true,\n # unsharded data (a.k.a. randomly-sharded) will be rebalanced\n # approximately equally across the cluster. Note that for clusters\n # with large amounts of unsharded data, this data transfer could be\n # time-consuming and result in delayed query responses. The default\n # value is true. The supported values are: true false\n # table_includes Comma-separated list of unsharded table names\n # to rebalance. Not applicable to sharded tables because they are\n # always rebalanced. Cannot be used simultaneously with\n # table_excludes. This parameter is ignored if\n # rebalance_unsharded_data is false.\n # table_excludes Comma-separated list of unsharded table names\n # to not rebalance. 
Not applicable to sharded tables because they\n # are always rebalanced. Cannot be used simultaneously with\n # table_includes. This parameter is ignored if rebalance_\n # unsharded_data is false. aggressiveness Influences how much\n # data is moved at a time during rebalance. A higher aggressiveness\n # will complete the rebalance faster. A lower aggressiveness will\n # take longer but allow for better interleaving between the\n # rebalance and other queries. Valid values are constants from 1\n # (lowest) to 10 (highest). The default value is '1'.\n # compact_after_rebalance Perform compaction of deleted records\n # once the rebalance completes to reclaim memory and disk space.\n # Default is true, unless repair_incorrectly_sharded_data is set to\n # true. The default value is true. The supported values are: true\n # false compact_only If set to true, ignore rebalance options\n # and attempt to perform compaction of deleted records to reclaim\n # memory and disk space without rebalancing first. The default\n # value is false. The supported values are: true false\n # repair_incorrectly_sharded_data Scans for any data sharded\n # incorrectly and re-routes the data to the correct location. Only\n # necessary if /admin/verifydb reports an error in sharding\n # alignment. This can be done as part of a typical rebalance after\n # expanding the cluster or in a standalone fashion when it is\n # believed that data is sharded incorrectly somewhere in the\n # cluster. Compaction will not be performed by default when this is\n # enabled. If this option is set to true, the time necessary to\n # rebalance and the memory used by the rebalance may increase. The\n # default value is false. The supported values are: true false\n options: {}\n # RegenerateDBConfig - Force regenerate of DB ConfigMap. true -\n # restarts DB Pods after config generation false - writes new\n # configuration without restarting the DB Pods\n regenerateDBConfig:\n # Restart - Scales down the DB STS and back up once the DB\n # Configuration has been regenerated.\n restart: false\n# KineticaClusterAdminStatus defines the observed state of\n# KineticaClusterAdmin\nstatus:\n # Phase - The current phase/state of the Admin request\n phase: string\n # Processed - Indicates if the admin request has already been\n # processed. Avoids the request being rerun in the case the Operator\n # gets restarted.\n processed: false\n</code></pre>","tags":["Reference"]},{"location":"Reference/kinetica_cluster_backups/","title":"Kinetica Cluster Backups Reference","text":"","tags":["Reference","Operations"]},{"location":"Reference/kinetica_cluster_backups/#full-kineticaclusterbackup-cr-structure","title":"Full KineticaClusterBackup CR Structure","text":"kineticaclusterbackups.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. 
More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterBackup \nmetadata: {}\n# Fields specific to the linked backup engine\nprovider:\n # Name of the backup/restore provider. FOR INTERNAL USE ONLY.\n backupProvider: \"velero\"\n # Name of the backup in the linked BackupProvider. FOR INTERNAL USE\n # ONLY.\n linkedItemName: \"\"\n# BackupSpec defines the specification for a Velero backup.\nspec:\n # DefaultVolumesToRestic specifies whether restic should be used to\n # take a backup of all pod volumes by default.\n defaultVolumesToRestic: true\n # ExcludedNamespaces contains a list of namespaces that are not\n # included in the backup.\n excludedNamespaces: [\"string\"]\n # ExcludedResources is a slice of resource names that are not included\n # in the backup.\n excludedResources: [\"string\"]\n # Hooks represent custom behaviors that should be executed at\n # different phases of the backup.\n hooks:\n # Resources are hooks that should be executed when backing up\n # individual instances of a resource.\n resources:\n - excludedNamespaces: [\"string\"]\n # ExcludedResources specifies the resources to which this hook\n # spec does not apply.\n excludedResources: [\"string\"]\n # IncludedNamespaces specifies the namespaces to which this hook\n # spec applies. If empty, it applies to all namespaces.\n includedNamespaces: [\"string\"]\n # IncludedResources specifies the resources to which this hook\n # spec applies. If empty, it applies to all resources.\n includedResources: [\"string\"]\n # LabelSelector, if specified, filters the resources to which this\n # hook spec applies.\n labelSelector:\n # matchExpressions is a list of label selector requirements. The\n # requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of values.\n # Valid operators are In, NotIn, Exists and DoesNotExist.\n operator: string\n # values is an array of string values. If the operator is In\n # or NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array must\n # be empty. This array is replaced during a strategic merge\n # patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\", the\n # operator is \"In\", and the values array contains only \"value\".\n # The requirements are ANDed.\n matchLabels: {}\n # Name is the name of this hook.\n name: string\n # PostHooks is a list of BackupResourceHooks to execute after\n # storing the item in the backup. These are executed after\n # all \"additional items\" from item actions are processed.\n post:\n - exec:\n # Command is the command and arguments to execute.\n command: [\"string\"]\n # Container is the container in the pod where the command\n # should be executed. If not specified, the pod's first\n # container is used.\n container: string\n # OnError specifies how Velero should behave if it encounters\n # an error executing this hook.\n onError: string\n # Timeout defines the maximum amount of time Velero should\n # wait for the hook to complete before considering the\n # execution a failure.\n timeout: string\n # PreHooks is a list of BackupResourceHooks to execute prior to\n # storing the item in the backup. 
These are executed before\n # any \"additional items\" from item actions are processed.\n pre:\n - exec:\n # Command is the command and arguments to execute.\n command: [\"string\"]\n # Container is the container in the pod where the command\n # should be executed. If not specified, the pod's first\n # container is used.\n container: string\n # OnError specifies how Velero should behave if it encounters\n # an error executing this hook.\n onError: string\n # Timeout defines the maximum amount of time Velero should\n # wait for the hook to complete before considering the\n # execution a failure.\n timeout: string\n # IncludeClusterResources specifies whether cluster-scoped resources\n # should be included for consideration in the backup.\n includeClusterResources: true\n # IncludedNamespaces is a slice of namespace names to include objects\n # from. If empty, all namespaces are included.\n includedNamespaces: [\"string\"]\n # IncludedResources is a slice of resource names to include in the\n # backup. If empty, all resources are included.\n includedResources: [\"string\"]\n # LabelSelector is a metav1.LabelSelector to filter with when adding\n # individual objects to the backup. If empty or nil, all objects are\n # included. Optional.\n labelSelector:\n # matchExpressions is a list of label selector requirements. The\n # requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of values.\n # Valid operators are In, NotIn, Exists and DoesNotExist.\n operator: string\n # values is an array of string values. If the operator is In or\n # NotIn, the values array must be non-empty. If the operator is\n # Exists or DoesNotExist, the values array must be empty. This\n # array is replaced during a strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single {key,value} in\n # the matchLabels map is equivalent to an element of\n # matchExpressions, whose key field is \"key\", the operator is \"In\",\n # and the values array contains only \"value\". The requirements are\n # ANDed.\n matchLabels: {} metadata: labels: {}\n # OrderedResources specifies the backup order of resources of specific\n # Kind. The map key is the Kind name and value is a list of resource\n # names separated by commas. Each resource name has\n # format \"namespace/resourcename\". For cluster resources, simply\n # use \"resourcename\".\n orderedResources: {}\n # SnapshotVolumes specifies whether to take cloud snapshots of any\n # PV's referenced in the set of objects included in the Backup.\n snapshotVolumes: true\n # StorageLocation is a string containing the name of a\n # BackupStorageLocation where the backup should be stored.\n storageLocation: string\n # TTL is a time.Duration-parseable string describing how long the\n # Backup should be retained for.\n ttl: string\n # VolumeSnapshotLocations is a list containing names of\n # VolumeSnapshotLocations associated with this backup.\n volumeSnapshotLocations: [\"string\"] status:\n # ClusterSize the current number of ranks & type i.e. CPU or GPU of\n # the cluster when the backup took place.\n clusterSize:\n # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a\n # representation of the number of nodes in a simple to understand\n # T-Shirt size scheme. This indicates the size of the cluster i.e.\n # the number of nodes. It does not identify the size of the cloud\n # provider nodes. For node size see ClusterTypeEnum. 
Supported\n # Values are: - XS S M L XL XXL XXXL\n tshirtSize: string\n # ClusterTypeEnum - An Enum of the node types of a KineticaCluster\n # e.g. CPU, GPU along with the Cloud Provider node size e.g. size\n # of the VM.\n tshirtType: string coldTierBackup: string\n # CompletionTimestamp records the time a backup was completed.\n # Completion time is recorded even on failed backups. Completion time\n # is recorded before uploading the backup object. The server's time\n # is used for CompletionTimestamps\n completionTimestamp: string\n # Errors is a count of all error messages that were generated during\n # execution of the backup. The actual errors are in the backup's log\n # file in object storage.\n errors: 1\n # Expiration is when this Backup is eligible for garbage-collection.\n expiration: string\n # FormatVersion is the backup format version, including major, minor,\n # and patch version.\n formatVersion: string\n # Phase is the current state of the Backup.\n phase: string\n # Progress contains information about the backup's execution progress.\n # Note that this information is best-effort only -- if Velero fails\n # to update it during a backup for any reason, it may be\n # inaccurate/stale.\n progress:\n # ItemsBackedUp is the number of items that have actually been\n # written to the backup tarball so far.\n itemsBackedUp: 1\n # TotalItems is the total number of items to be backed up. This\n # number may change throughout the execution of the backup due to\n # plugins that return additional related items to back up, the\n # velero.io/exclude-from-backup label, and various other filters\n # that happen as items are processed.\n totalItems: 1\n # StartTimestamp records the time a backup was started. Separate from\n # CreationTimestamp, since that value changes on restores. The\n # server's time is used for StartTimestamps\n startTimestamp: string\n # ValidationErrors is a slice of all validation errors\n # (if applicable).\n validationErrors: [\"string\"]\n # Version is the backup format major version. Deprecated: Please see\n # FormatVersion\n version: 1\n # VolumeSnapshotsAttempted is the total number of attempted volume\n # snapshots for this backup.\n volumeSnapshotsAttempted: 1\n # VolumeSnapshotsCompleted is the total number of successfully\n # completed volume snapshots for this backup.\n volumeSnapshotsCompleted: 1\n # Warnings is a count of all warning messages that were generated\n # during execution of the backup. The actual warnings are in the\n # backup's log file in object storage.\n warnings: 1\n</code></pre>","tags":["Reference","Operations"]},{"location":"Reference/kinetica_cluster_grants/","title":"Kinetica Cluster Grants CRD Reference","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_grants/#full-kineticagrant-cr-structure","title":"Full KineticaGrant CR Structure","text":"kineticagrants.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. 
More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaGrant \nmetadata: {}\n# KineticaGrantSpec defines the desired state of KineticaGrant\nspec:\n # Grants system-level and/or table permissions to a user or role.\n addGrantAllOnSchemaRequest:\n # Name of the user or role that will be granted membership in input\n # parameter role. Must be an existing user or role.\n member: string\n # Optional parameters. The default value is an empty map ( {} ).\n options: {}\n # SchemaName - name of the schema on which to perform the Grant All\n schemaName: string\n # Grants system-level and/or table permissions to a user or role.\n addGrantPermissionRequest:\n # Optional parameters. The default value is an empty map ( {} ).\n options: {}\n # Permission to grant to the user or role. Supported\n # Values Description system_admin Full access to all data and\n # system functions. system_user_admin Access to administer users\n # and roles that do not have system_admin permission.\n # system_write Read and write access to all tables.\n # system_read Read-only access to all tables.\n systemPermission:\n # UID of the user or role to which the permission will be granted.\n # Must be an existing user or role.\n name: string\n # Optional parameters. The default value is an empty map (\n # {} ). Supported Parameters: resource_group Name of an existing\n # resource group to associate with this role.\n options: {}\n # Permission to grant to the user or role. Supported\n # Values Description table_admin Full read/write and\n # administrative access to the table. table_insert Insert access\n # to the table. table_update Update access to the table.\n # table_delete Delete access to the table. table_read Read access\n # to the table.\n permission: string\n # Permission to grant to the user or role. Supported\n # Values Description<br/> system_admin Full access to all data and\n # system functions.<br/> system_user_admin Access to administer\n # users and roles that do not have system_admin permission.<br/>\n # system_write Read and write access to all tables.<br/>\n # system_read Read-only access to all tables.<br/>\n tablePermissions:\n - filter_expression: \"\"\n # UID of the user or role to which the permission will be granted.\n # Must be an existing user or role.\n name: string\n # Optional parameters. The default value is an empty map (\n # {} ). Supported Parameters: resource_group Name of an existing\n # resource group to associate with this role.\n options: {}\n # Permission to grant to the user or role. Supported\n # Values Description table_admin Full read/write and\n # administrative access to the table. table_insert Insert access\n # to the table. table_update Update access to the table.\n # table_delete Delete access to the table. table_read Read access\n # to the table.\n permission: string\n # Name of the table for which the Permission is to be granted\n table_name: string\n # Grants membership in a role to a user or role.\n addGrantRoleRequest:\n # Name of the user or role that will be granted membership in input\n # parameter role. Must be an existing user or role.\n member: string\n # Optional parameters. The default value is an empty map ( {} ).\n options: {}\n # Name of the role in which membership will be granted. 
Must be an\n # existing role.\n role: string\n # Debug debug the call\n debug: false\n # RingName is the name of the kinetica ring that this user belongs\n # to.\n ringName: string\n# KineticaGrantStatus defines the observed state of KineticaGrant\nstatus:\n # DBStringResponse - The GPUdb server embeds the endpoint response\n # inside a standard response structure which contains status\n # information and the actual response to the query.\n db_response: data: string\n # This embedded JSON represents the result of the endpoint\n data_str: string\n # API Call Specific\n data_type: string\n # Empty if success or an error message\n message: string\n # 'OK' or 'ERROR'\n status: string \n ldap_response: string\n</code></pre>","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_reference/","title":"Core DB CRDs","text":"<ul> <li> <p> DB Clusters</p> <p>Core Kinetica Database Cluster Management CRD & sample CR.</p> <p> KineticaCluster</p> </li> <li> <p> DB Users</p> <p>Kinetica Database User Management CRD & sample CR.</p> <p> KineticaUser</p> </li> <li> <p> DB Roles</p> <p>Kinetica Database Role Management CRD & sample CR.</p> <p> KineticaRole</p> </li> <li> <p> DB Schemas</p> <p>Kinetica Database Schema Management CRD & sample CR.</p> <p> KineticaSchema</p> </li> <li> <p> DB Grants</p> <p>Kinetica Database Grant Management CRD & sample CR.</p> <p> KineticaGrant</p> </li> <li> <p> DB Resource Groups</p> <p>Kinetica Database Resource Group Management CRD & sample CR.</p> <p> KineticaResourceGroup</p> </li> <li> <p> DB Administration</p> <p>Kinetica Database Administration CRD & sample CR.</p> <p> KineticaAdmin</p> </li> <li> <p> DB Backups</p> <p>Kinetica Database Backup Management CRD & sample CR.</p> <p>Note</p> <p>This requires Velero to be installed on the Kubernetes Cluster.</p> <p> KineticaBackup</p> </li> <li> <p> DB Restore</p> <p>Kinetica Database Restore CRD & sample CR.</p> <p>Note</p> <p>This requires Velero to be installed on the Kubernetes Cluster.</p> <p> KineticaRestore</p> </li> </ul>","tags":["Reference","Installation","Operations"]},{"location":"Reference/kinetica_cluster_resource_groups/","title":"Kinetica Cluster Resource Groups CRD Reference","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_resource_groups/#full-kineticaresourcegroup-cr-structure","title":"Full KineticaResourceGroup CR Structure","text":"kineticaclusterresourcegroups.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. 
More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterResourceGroup \nmetadata: {}\n# KineticaClusterResourceGroupSpec defines the desired state of\n# KineticaClusterResourceGroup\nspec: \n db_create_resource_group_request:\n # AdjoiningResourceGroup -\n adjoining_resource_group: \"\"\n # Name - name of the DB ResourceGroup\n # https://docs.kinetica.com/7.1/azure/sql/resource_group/?search-highlight=resource+group#id-baea5b60-769c-5373-bff1-53f4f1ca5c21\n name: string\n # Options - DB Options used when creating the ResourceGroup\n options: {}\n # Ranking - Indicates the relative ranking among existing resource\n # groups where this new resource group will be placed. When using\n # before or after, specify which resource group this one will be\n # inserted before or after in input parameter\n # adjoining_resource_group. The supported values are: first last\n # before after\n ranking: \"\"\n # RingName is the name of the kinetica ring that this user belongs\n # to.\n ringName: string\n# KineticaClusterResourceGroupStatus defines the observed state of\n# KineticaClusterResourceGroup\nstatus: \n provisioned: string\n</code></pre>","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_restores/","title":"Kinetica Cluster Restores Reference","text":"","tags":["Reference","Operations"]},{"location":"Reference/kinetica_cluster_restores/#full-kineticaclusterrestore-cr-structure","title":"Full KineticaClusterRestore CR Structure","text":"kineticaclusterrestores.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterRestore \nmetadata: {}\n# RestoreSpec defines the specification for a Velero restore.\nspec:\n # BackupName is the unique name of the Velero backup to restore from.\n backupName: string\n # ExcludedNamespaces contains a list of namespaces that are not\n # included in the restore.\n excludedNamespaces: [\"string\"]\n # ExcludedResources is a slice of resource names that are not included\n # in the restore.\n excludedResources: [\"string\"]\n # IncludeClusterResources specifies whether cluster-scoped resources\n # should be included for consideration in the restore. If null,\n # defaults to true.\n includeClusterResources: true\n # IncludedNamespaces is a slice of namespace names to include objects\n # from. If empty, all namespaces are included.\n includedNamespaces: [\"string\"]\n # IncludedResources is a slice of resource names to include in the\n # restore. If empty, all resources in the backup are included.\n includedResources: [\"string\"]\n # LabelSelector is a metav1.LabelSelector to filter with when\n # restoring individual objects from the backup. If empty or nil, all\n # objects are included. Optional.\n labelSelector:\n # matchExpressions is a list of label selector requirements. 
The\n # requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of values.\n # Valid operators are In, NotIn, Exists and DoesNotExist.\n operator: string\n # values is an array of string values. If the operator is In or\n # NotIn, the values array must be non-empty. If the operator is\n # Exists or DoesNotExist, the values array must be empty. This\n # array is replaced during a strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single {key,value} in\n # the matchLabels map is equivalent to an element of\n # matchExpressions, whose key field is \"key\", the operator is \"In\",\n # and the values array contains only \"value\". The requirements are\n # ANDed.\n matchLabels: {}\n # NamespaceMapping is a map of source namespace names to target\n # namespace names to restore into. Any source namespaces not included\n # in the map will be restored into namespaces of the same name.\n namespaceMapping: {}\n # RestorePVs specifies whether to restore all included PVs from\n # snapshot (via the cloudprovider).\n restorePVs: true\n # ScheduleName is the unique name of the Velero schedule to restore\n # from. If specified, and BackupName is empty, Velero will restore\n # from the most recent successful backup created from this schedule.\n scheduleName: string status: coldTierRestore: \"\"\n # CompletionTimestamp records the time the restore operation was\n # completed. Completion time is recorded even on failed restore. The\n # server's time is used for StartTimestamps\n completionTimestamp: string\n # Errors is a count of all error messages that were generated during\n # execution of the restore. The actual errors are stored in object\n # storage.\n errors: 1\n # FailureReason is an error that caused the entire restore to fail.\n failureReason: string\n # Phase is the current state of the Restore\n phase: string\n # Progress contains information about the restore's execution\n # progress. Note that this information is best-effort only -- if\n # Velero fails to update it during a restore for any reason, it may\n # be inaccurate/stale.\n progress:\n # ItemsRestored is the number of items that have actually been\n # restored so far\n itemsRestored: 1\n # TotalItems is the total number of items to be restored. This\n # number may change throughout the execution of the restore due to\n # plugins that return additional related items to restore\n totalItems: 1\n # StartTimestamp records the time the restore operation was started.\n # The server's time is used for StartTimestamps\n startTimestamp: string\n # ValidationErrors is a slice of all validation errors(if applicable)\n validationErrors: [\"string\"]\n # Warnings is a count of all warning messages that were generated\n # during execution of the restore. The actual warnings are stored in\n # object storage.\n warnings: 1\n</code></pre>","tags":["Reference","Operations"]},{"location":"Reference/kinetica_cluster_roles/","title":"Kinetica Cluster Roles CRD","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_roles/#full-kineticarole-cr-structure","title":"Full KineticaRole CR Structure","text":"kineticaroles.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. 
More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaRole \nmetadata: {}\n# KineticaRoleSpec defines the desired state of KineticaRole\nspec:\n # AlterRoleRequest Kinetica DB REST API Request Format Object.\n alter_role:\n # Action - Modification operation to be applied to the role.\n action: string\n # Role UID - Name of the role to be altered. Must be an existing\n # role.\n name: string\n # Optional parameters. The default value is an empty map ( {} ).\n options: {}\n # Value - The value of the modification, depending on input\n # parameter action.\n value: string\n # Debug debug the call\n debug: false\n # RingName is the name of the kinetica ring that this user belongs\n # to.\n ringName: string\n # AddRoleRequest Kinetica DB REST API Request Format Object.\n role:\n # User UID\n name: string\n # Optional parameters. The default value is an empty map (\n # {} ). Supported Parameters: resource_group Name of an existing\n # resource group to associate with this role.\n options: {}\n # ResourceGroupName of an existing resource group to associate with\n # this role\n resourceGroupName: \"\"\n# KineticaRoleStatus defines the observed state of KineticaRole\nstatus:\n # DBStringResponse - The GPUdb server embeds the endpoint response\n # inside a standard response structure which contains status\n # information and the actual response to the query.\n db_response: data: string\n # This embedded JSON represents the result of the endpoint\n data_str: string\n # API Call Specific\n data_type: string\n # Empty if success or an error message\n message: string\n # 'OK' or 'ERROR'\n status: string \n ldap_response: string\n</code></pre>","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_schemas/","title":"Kinetica Cluster Schemas CRD Reference","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_schemas/#full-kinetica-cluster-schemas-cr-structure","title":"Full Kinetica Cluster Schemas CR Structure","text":"kineticaclusterschemas.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterSchema \nmetadata: {}\n# KineticaClusterSchemaSpec defines the desired state of\n# KineticaClusterSchema\nspec: \n db_create_schema_request:\n # Name - the name of the resource group to create in the DB\n name: string\n # Optional parameters. The default value is an empty map (\n # {} ). 
Supported Parameters: \"max_cpu_concurrency\", \"max_data\"\n options: {}\n # RingName is the name of the kinetica ring that this user belongs\n # to.\n ringName: string\n# KineticaClusterSchemaStatus defines the observed state of\n# KineticaClusterSchema\nstatus: \n provisioned: string\n</code></pre>","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_users/","title":"Kinetica Cluster Users CRD Reference","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_users/#full-kineticauser-cr-structure","title":"Full KineticaUser CR Structure","text":"kineticausers.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaUser\nmetadata: {}\n# KineticaUserSpec defines the desired state of KineticaUser\nspec:\n # Action field contains UserActionEnum field indicating whether it is\n # an Upsert or Change Password operation. For deletion delete the\n # KineticaUser CR and a finalizer will remove the user from LDAP.\n action: string\n # ChangePassword specific fields\n changePassword:\n # PasswordSecret - Not the actual user password but the name of a\n # Kubernetes Secret containing a Data element with a Password\n # attribute. The secret is removed on user creation. Must be in the\n # same namespace as the Kinetica Cluster. Must contain the\n # following fields: - oldPassword newPassword\n passwordSecret: string\n # Debug debug the call\n debug: false\n # GroupID - Organisation or Team Id the user belongs to.\n groupId: string\n # Create the user in Reveal\n reveal: true\n # RingName is the name of the kinetica ring that this user belongs\n # to.\n ringName: string\n # UID is the username (not UUID UID).\n uid: string\n # Upsert specific fields\n upsert:\n # CreateHomeDirectory - when true, a home directory in KiFS is\n # created for this user The default value is true. The supported\n # values are: true false\n createHomeDirectory: true\n # DB Memory user data size limit\n dataLimit: \"10Gi\"\n # DisplayName\n displayName: string\n # GivenName is Firstname also called Christian name. givenName in\n # LDAP terms.\n givenName: string\n # KIFs user data size limit\n kifsDataLimit: \"2Gi\"\n # LastName refers to last name or surname. sn in LDAP terms.\n lastName: string\n # Options -\n options: {}\n # PasswordSecret - Not the actual user password but the name of a\n # Kubernetes Secret containing a Data element with a Password\n # attribute. The secret is removed on user creation. Must be in the\n # same namespace as the Kinetica Cluster.\n passwordSecret: string\n # UPN or UserPrincipalName - e.g. 
guyt@cp.com \n # Looks like an email address.\n userPrincipalName: string\n # UUID is the user unique UUID from the Control Plane.\n uuid: string\n# KineticaUserStatus defines the observed state of KineticaUser\nstatus:\n # DBStringResponse - The GPUdb server embeds the endpoint response\n # inside a standard response structure which contains status\n # information and the actual response to the query.\n db_response:\n data: string\n # This embedded JSON represents the result of the endpoint\n data_str: string\n # API Call Specific\n data_type: string\n # Empty if success or an error message\n message: string\n # 'OK' or 'ERROR'\n status: string \n ldap_response: string \n reveal_admin: string\n</code></pre>","tags":["Reference","Administration"]},{"location":"Reference/kinetica_clusters/","title":"Kinetica Clusters CRD Reference","text":"<p>This page covers the Kinetica Cluster Kubernetes CRD.</p>","tags":["Reference"]},{"location":"Reference/kinetica_clusters/#kubectl-cli-commands","title":"<code>kubectl</code> cli commands","text":"","tags":["Reference"]},{"location":"Reference/kinetica_clusters/#kubectl-n-_namespace_-get-kc","title":"<code>kubectl -n _namespace_ get kc</code>","text":"<p>Lists the <code>KineticaClusters</code> defined within the specified namespace to the console.</p> Bash<pre><code>kubectl -n _namespace_ get kc\n</code></pre>","tags":["Reference"]},{"location":"Reference/kinetica_clusters/#full-kineticacluster-cr-structure","title":"Full KineticaCluster CR Structure","text":"kineticaclusters.app.kinetica.com_sample.yaml<pre><code># APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaCluster \nmetadata: {}\n# KineticaClusterSpec defines the configuration for KineticaCluster DB\nspec:\n # An optional duration after which the database is stopped and DB\n # resources are freed\n autoSuspend: \n enabled: false\n # InactivityDuration - the duration which the cluster should be idle\n # before auto-pausing the DB Cluster.\n inactivityDuration: \"1h\"\n # The platform infrastructure provider e.g. azure, aws, gcp, on-prem\n # etc.\n awsConfig:\n # ClusterName - AWS name of the EKS Cluster. NOTE: Marked as\n # optional but is mandatory\n clusterName: string\n # MarketplaceAppConfig - Amazon AWS specific DB Cluster\n # information.\n marketplaceApp:\n # KmsKeyId - Key for disk encryption. The full Amazon Resource\n # Name of the key to use when encrypting the volume. If none is\n # supplied but encrypted is true, a key is generated by AWS. See\n # AWS docs for valid ARN value.\n kmsKeyId: string\n # ProductCode - used to uniquely identify a product in AWS\n # Marketplace. 
The product code should be the same as the one\n # used during the publishing of a new product.\n productCode: \"1cmucncoyp9pi8xjdwqjimlf8\"\n # PublicKeyVersion - Public Key Version provided by AWS\n # Marketplace\n publicKeyVersion: 1\n # ParentResourceGroup - The resource group of the ManagedApp\n # itself ParentResourceGroup string\n # `json:\"parentResourceGroup\"` ResourceId - Identifier of the\n # resource against which usage is emitted Format is GUID\n # (UUID)\n # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json\n # Optional only if that exactly of ResourceId or ResourceUri is\n # specified.\n resourceId: string\n # NodeGroups - List of NodeGroups for this cluster MUST contain at\n # least one of the following keys: - \n # * none\n # * infra \n # * infra_public \n # * compute \n # * compute-gpu \n # * aaw_cpu \n # NOTE: Marked as optional but is mandatory\n nodeGroups: {}\n # OTELTracing - OpenTelemetry Tracing Specifics\n otelTracing:\n # Endpoint - Set the OpenTelemetry reporting Endpoint\n endpoint: \"\"\n # Key - KineticaCluster specific Key required to send Telemetry\n # information to the Cloud\n key: string\n # MaxBatchSize - Telemetry Reporting Interval to use in seconds.\n maxBatchInterval: 10\n # MaxBatchSize - Telemetry Maximum Batch Size to send.\n maxBatchSize: 1024\n # The platform infrastructure provider e.g. azure, aws, gcp, on-prem\n # etc.\n azureConfig:\n # App Insights Specifics\n appInsights:\n # Endpoint - Override the default AppInsights reporting Endpoint\n endpoint: \"\"\n # Key - KineticaCluster specific Application Insights Key required\n # to send Telemetry information to the Azure Portal\n key: string\n # MaxBatchSize - Telemetry Reporting Interval to use in seconds.\n maxBatchInterval: 10\n # MaxBatchSize - Telemetry Maximum Batch Size to send.\n maxBatchSize: 1024\n # AzureManagedAppConfig - Microsoft Azure specific DB Cluster\n # information.\n managedApp:\n # DiskEncryptionSetID - By default, managed disks use\n # platform-managed encryption keys. All managed disks, snapshots,\n # images, and data written to existing managed disks are\n # automatically encrypted-at-rest with platform-managed keys. You\n # can choose to manage encryption at the level of each managed\n # disk, with your own keys. When you specify a customer-managed\n # key, that key is used to protect and control access to the key\n # that encrypts your data. Customer-managed keys offer greater\n # flexibility to manage access controls.\n diskEncryptionSetId: string\n # PlanId - The Azure Marketplace Plan/Offer identifier selected by\n # the customer for this DB cluster e.g. 
BYOL, Pay-As-You-Go etc.\n planId: string\n # ParentResourceGroup - The resource group of the ManagedApp\n # itself. ResourceId - Identifier of the resource against which\n # usage is emitted. Format is GUID (UUID).\n # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json\n # Optional - exactly one of ResourceId or ResourceUri must be\n # specified.\n resourceId: string\n # ResourceUri - Identifier of the managed app resource against\n # which usage is emitted\n # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json\n # Optional - exactly one of ResourceId or ResourceUri must be\n # specified.\n resourceUri: string\n # Tells the operator we want to run in Debug mode.\n debug: false\n # Identifies the type of Kubernetes deployment.\n deploymentType:\n # CloudRegionEnum - The target Kubernetes type to deploy to.\n # Supported Values are: - aws_useast_1 aws_useast_2 aws_uswest_1\n # az_useast_1 az_uswest_1\n region: string\n # DeploymentTypeEnum - The type of the Deployment. Supported Values\n # are: - Managed FreeSaaS DedicatedSaaS OnPrem\n type: string\n # The platform infrastructure provider e.g. azure, aws, gcp, on-prem\n # etc.\n devEditionConfig:\n # Host IPv4 address. Used by KiND based Developer Edition where\n # ingress paths are set to *. Provides qualified, routable URLs to\n # workbench.\n hostIpAddress: \"\"\n # The GAdmin Dashboard Configuration for the Kinetica Cluster.\n gadmin:\n # The port that GAdmin will be running on. It runs only on the head\n # node pod in the cluster. Default: 8080\n containerPort:\n # Number of port to expose on the pod's IP address. This must be a\n # valid port number, 0 < x < 65536.\n containerPort: 1\n # What host IP to bind the external port to.\n hostIP: string\n # Number of port to expose on the host. If specified, this must be\n # a valid port number, 0 < x < 65536. If HostNetwork is\n # specified, this must match ContainerPort. Most containers do\n # not need this.\n hostPort: 1\n # If specified, this must be an IANA_SVC_NAME and unique within\n # the pod. Each named port in a pod must have a unique name. Name\n # for the port that can be referred to by services.\n name: string\n # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n # to \"TCP\".\n protocol: \"TCP\"\n # The Ingress Endpoint that GAdmin will be running on.\n ingressPath:\n # backend defines the referenced service endpoint to which the\n # traffic will be forwarded to.\n backend:\n # resource is an ObjectRef to another Kubernetes resource in the\n # namespace of the Ingress object. If resource is specified,\n # serviceName and servicePort must not be specified.\n resource:\n # APIGroup is the group for the resource being referenced. If\n # APIGroup is not specified, the specified Kind must be in\n # the core API group. 
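\n # --- Illustrative sketch (editor's addition, not generated\n # sample output): using only fields shown in this reference, a\n # minimal GAdmin block might look like the following; the port\n # and path values are assumptions for illustration.\n # gadmin:\n # isEnabled: true\n # containerPort:\n # containerPort: 8080\n # name: gadmin\n # protocol: \"TCP\"\n # ingressPath:\n # path: /gadmin\n # pathType: Prefix\n # --- end sketch ---\n # 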
For any other third-party types,\n # APIGroup is required.\n apiGroup: string\n # Kind is the type of resource being referenced\n kind: KineticaCluster\n # Name is the name of resource being referenced\n name: string\n # serviceName specifies the name of the referenced service.\n serviceName: string\n # servicePort Specifies the port of the referenced service.\n servicePort: \n # path is matched against the path of an incoming request.\n # Currently it can contain characters disallowed from the\n # conventional \"path\" part of a URL as defined by RFC 3986. Paths\n # must begin with a '/' and must be present when using PathType\n # with value \"Exact\" or \"Prefix\".\n path: string\n # pathType determines the interpretation of the path matching.\n # PathType can be one of the following values: * Exact: Matches\n # the URL path exactly. * Prefix: Matches based on a URL path\n # prefix split by '/'. Matching is done on a path element by\n # element basis. A path element refers to the list of labels in\n # the path split by the '/' separator. A request is a match for\n # path p if every element of p is an element-wise prefix of the\n # request path. Note that if the last element of the path is a\n # substring of the last element in request path, it is not a match\n # (e.g. /foo/bar matches /foo/bar/baz, but does not\n # match /foo/barbaz). * ImplementationSpecific: Interpretation of\n # the Path matching is up to the IngressClass. Implementations\n # can treat this as a separate PathType or treat it identically\n # to Prefix or Exact path types. Implementations are required to\n # support all path types. Defaults to ImplementationSpecific.\n pathType: string\n # Whether to enable the GAdmin Dashboard on the Cluster. Default:\n # true\n isEnabled: true\n # Gaia - gaia.properties configuration\n gaia:\n admin:\n # AdminLoginOnlyGpudbDown - When GPUdb is down, only allow admin\n # user to login\n admin_login_only_gpudb_down: true\n # Username - We do check for admin username in various places\n admin_username: \"admin\"\n # LoginAnimationEnabled - Display any animation in login page\n login_animation_enabled: true\n # LoginBypassEnabled - Convenience setting for dev mode\n login_bypass_enabled: false\n # RequireStrongPassword - Convenience settings for dev mode\n require_strong_password: true\n # SSLTruststorePasswordScript - Script that supplies the SSL\n # truststore password\n ssl_truststore_password_script: string\n # DemoSchema - Schema-related configuration\n demo_schema: \"demo\"\n gpudb:\n # DataFileStringNullValue - Table import/export null value string\n data_file_string_null_value: \"\\\\N\"\n gpudb_ext_url: \"http://127.0.0.1:8082/gpudb-0\"\n # URL - Current instance of gpudb, when running in HA mode change\n # this to load balancer endpoint\n gpudb_url: \"http://127.0.0.1:9191\"\n # LoggingLogFileName - Which file to use when displaying logging\n # on Cluster page.\n logging_log_file_name: \"gpudb.log\"\n # SampleRepoURL - URL of the sample data repository\n sample_repo_url: \"//s3.amazonaws.com/kinetica-ce-data\"\n hm:\n gpudb_ext_hm_url: \"http://127.0.0.1:8082/gpudb-host-manager\"\n gpudb_hm_url: \"http://127.0.0.1:9300\"\n http:\n # ClientTimeout - Number of seconds for proxy request timeout\n http_client_timeout: 3600\n # ClientTimeoutV2 - Force override of previous default with 0 as\n # infinite timeout\n http_client_timeout_v2: 0\n # TomcatPathKey - Name of folder where Tomcat apps are installed\n tomcat_path_key: \"tomcat\"\n # WebappContext - Web App context\n webapp_context: \"gadmin\"\n # 
GAdminIsRemote - True if the gadmin application is running on a\n # remote machine (not on same node as gpudb). If running on a\n # remote machine the manage options will be disabled.\n is_remote: false\n # KAgentCLIPath - Schema-related configuration\n kagent_cli_path: \"/opt/gpudb/kagent/bin/kagent\"\n # KIO - KIO-related configuration\n kio: kio_log_file_path: \"/opt/gpudb/kitools/kio/logs/gadmin.log\"\n kio_log_level: \"DEBUG\" kio_log_size_limit: 10485760 kisql:\n # QueryResultsLimit - KiSQL limit on the number of results in each\n # query\n kisql_query_results_limit: 10000\n # QueryTimezone - KiSQL TimeZoneId setting for queries\n # (use \"system\" for local system time)\n kisql_query_timezone: \"GMT\" license:\n # Status - Stub for license manager\n status: \"ok\"\n # Type - Stub for license manager\n type: \"unlimited\"\n # MaxConcurrentUserSessions - Session management configuration\n max_concurrent_user_sessions: 0\n # PublicSchema - Schema-related configuration\n public_schema: \"ki_home\"\n # RevealDBInfoFile - Path to file containing Reveal DB location\n reveal_db_info_file: \"/opt/gpudb/connectors/reveal/var/REVEAL_DB_DIR\"\n # RootSchema - Schema-related configuration\n root_schema: \"root\" stats:\n # GraphanaURL -\n graphana_url: \"http://127.0.0.1:3000\"\n # GraphiteURL\n graphite_url: \"http://127.0.0.1:8181\"\n # StatsGrafanaURL - Port used to host the Grafana user interface\n # and embeddable metric dashboards in GAdmin. Note: If this value\n # is defaulted then it will be replaced by the name of the Stats\n # service if it is deployed & Grafana is enabled e.g.\n # cluster-1234.gpudb.svc.cluster.local\n stats_grafana_url: \"http://127.0.0.1:9091\"\n # https://github.com/kubernetes-sigs/controller-tools/issues/622 if we\n # want to set usePools as false, need to set defaults GPUDBCluster is\n # an instance of a Kinetica DB Cluster i.e. it's StatefulSet,\n # Service, Ingress, ConfigMap etc.\n gpudbCluster:\n # Affinity - is a group of affinity scheduling rules.\n affinity:\n # Describes node affinity scheduling rules for the pod.\n nodeAffinity:\n # The scheduler will prefer to schedule pods to nodes that\n # satisfy the affinity expressions specified by this field, but\n # it may choose a node that violates one or more of the\n # expressions. The node that is most preferred is the one with\n # the greatest sum of weights, i.e. for each node that meets\n # all of the scheduling requirements (resource request,\n # requiredDuringScheduling affinity expressions, etc.), compute\n # a sum by iterating through the elements of this field and\n # adding \"weight\" to the sum if the node matches the\n # corresponding matchExpressions; the node(s) with the highest\n # sum are the most preferred.\n preferredDuringSchedulingIgnoredDuringExecution:\n - preference:\n # A list of node selector requirements by node's labels.\n matchExpressions:\n - key: string\n # Represents a key's relationship to a set of values.\n # Valid operators are In, NotIn, Exists, DoesNotExist.\n # Gt, and Lt.\n operator: string\n # An array of string values. If the operator is In or\n # NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array\n # must be empty. If the operator is Gt or Lt, the values\n # array must have a single element, which will be\n # interpreted as an integer. 
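\n # --- Illustrative sketch (editor's addition): a required\n # nodeAffinity term assembled from the fields above;\n # kubernetes.io/arch is a standard node label, used here purely\n # as an example.\n # affinity:\n # nodeAffinity:\n # requiredDuringSchedulingIgnoredDuringExecution:\n # nodeSelectorTerms:\n # - matchExpressions:\n # - key: kubernetes.io/arch\n # operator: In\n # values: [\"amd64\"]\n # --- end sketch ---\n # 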
This array is replaced\n # during a strategic merge patch.\n values: [\"string\"]\n # A list of node selector requirements by node's fields.\n matchFields:\n - key: string\n # Represents a key's relationship to a set of values.\n # Valid operators are In, NotIn, Exists, DoesNotExist.\n # Gt, and Lt.\n operator: string\n # An array of string values. If the operator is In or\n # NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array\n # must be empty. If the operator is Gt or Lt, the values\n # array must have a single element, which will be\n # interpreted as an integer. This array is replaced\n # during a strategic merge patch.\n values: [\"string\"]\n # Weight associated with matching the corresponding\n # nodeSelectorTerm, in the range 1-100.\n weight: 1\n # If the affinity requirements specified by this field are not\n # met at scheduling time, the pod will not be scheduled onto\n # the node. If the affinity requirements specified by this\n # field cease to be met at some point during pod execution\n # (e.g. due to an update), the system may or may not try to\n # eventually evict the pod from its node.\n requiredDuringSchedulingIgnoredDuringExecution:\n # Required. A list of node selector terms. The terms are\n # ORed.\n nodeSelectorTerms:\n - matchExpressions:\n - key: string\n # Represents a key's relationship to a set of values.\n # Valid operators are In, NotIn, Exists, DoesNotExist.\n # Gt, and Lt.\n operator: string\n # An array of string values. If the operator is In or\n # NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array\n # must be empty. If the operator is Gt or Lt, the values\n # array must have a single element, which will be\n # interpreted as an integer. This array is replaced\n # during a strategic merge patch.\n values: [\"string\"]\n # A list of node selector requirements by node's fields.\n matchFields:\n - key: string\n # Represents a key's relationship to a set of values.\n # Valid operators are In, NotIn, Exists, DoesNotExist.\n # Gt, and Lt.\n operator: string\n # An array of string values. If the operator is In or\n # NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array\n # must be empty. If the operator is Gt or Lt, the values\n # array must have a single element, which will be\n # interpreted as an integer. This array is replaced\n # during a strategic merge patch.\n values: [\"string\"]\n # Describes pod affinity scheduling rules (e.g. co-locate this pod\n # in the same node, zone, etc. as some other pod(s)).\n podAffinity:\n # The scheduler will prefer to schedule pods to nodes that\n # satisfy the affinity expressions specified by this field, but\n # it may choose a node that violates one or more of the\n # expressions. The node that is most preferred is the one with\n # the greatest sum of weights, i.e. for each node that meets\n # all of the scheduling requirements (resource request,\n # requiredDuringScheduling affinity expressions, etc.), compute\n # a sum by iterating through the elements of this field and\n # adding \"weight\" to the sum if the node has pods which matches\n # the corresponding podAffinityTerm; the node(s) with the\n # highest sum are the most preferred.\n preferredDuringSchedulingIgnoredDuringExecution:\n - podAffinityTerm:\n # A label query over a set of resources, in this case pods.\n labelSelector:\n # matchExpressions is a list of label selector\n # requirements. 
The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of\n # values. Valid operators are In, NotIn, Exists and\n # DoesNotExist.\n operator: string\n # values is an array of string values. If the operator\n # is In or NotIn, the values array must be non-empty.\n # If the operator is Exists or DoesNotExist, the values\n # array must be empty. This array is replaced during a\n # strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\",\n # the operator is \"In\", and the values array contains\n # only \"value\". The requirements are ANDed.\n matchLabels: {}\n # A label query over the set of namespaces that the term\n # applies to. The term is applied to the union of the\n # namespaces selected by this field and the ones listed in\n # the namespaces field. null selector and null or empty\n # namespaces list means \"this pod's namespace\". An empty\n # selector ({}) matches all namespaces.\n namespaceSelector:\n # matchExpressions is a list of label selector\n # requirements. The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of\n # values. Valid operators are In, NotIn, Exists and\n # DoesNotExist.\n operator: string\n # values is an array of string values. If the operator\n # is In or NotIn, the values array must be non-empty.\n # If the operator is Exists or DoesNotExist, the values\n # array must be empty. This array is replaced during a\n # strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\",\n # the operator is \"In\", and the values array contains\n # only \"value\". The requirements are ANDed.\n matchLabels: {}\n # namespaces specifies a static list of namespace names that\n # the term applies to. The term is applied to the union of\n # the namespaces listed in this field and the ones selected\n # by namespaceSelector. null or empty namespaces list and\n # null namespaceSelector means \"this pod's namespace\".\n namespaces: [\"string\"]\n # This pod should be co-located (affinity) or not\n # co-located (anti-affinity) with the pods matching the\n # labelSelector in the specified namespaces, where\n # co-located is defined as running on a node whose value of\n # the label with key topologyKey matches that of any node\n # on which any of the selected pods is running. Empty\n # topologyKey is not allowed.\n topologyKey: string\n # weight associated with matching the corresponding\n # podAffinityTerm, in the range 1-100.\n weight: 1\n # If the affinity requirements specified by this field are not\n # met at scheduling time, the pod will not be scheduled onto\n # the node. If the affinity requirements specified by this\n # field cease to be met at some point during pod execution\n # (e.g. due to a pod label update), the system may or may not\n # try to eventually evict the pod from its node. When there are\n # multiple elements, the lists of nodes corresponding to each\n # podAffinityTerm are intersected, i.e. 
all terms must be\n # satisfied.\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n # matchExpressions is a list of label selector requirements.\n # The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of\n # values. Valid operators are In, NotIn, Exists and\n # DoesNotExist.\n operator: string\n # values is an array of string values. If the operator is\n # In or NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array\n # must be empty. This array is replaced during a\n # strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\",\n # the operator is \"In\", and the values array contains\n # only \"value\". The requirements are ANDed.\n matchLabels: {}\n # A label query over the set of namespaces that the term\n # applies to. The term is applied to the union of the\n # namespaces selected by this field and the ones listed in\n # the namespaces field. null selector and null or empty\n # namespaces list means \"this pod's namespace\". An empty\n # selector ({}) matches all namespaces.\n namespaceSelector:\n # matchExpressions is a list of label selector requirements.\n # The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of\n # values. Valid operators are In, NotIn, Exists and\n # DoesNotExist.\n operator: string\n # values is an array of string values. If the operator is\n # In or NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array\n # must be empty. This array is replaced during a\n # strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\",\n # the operator is \"In\", and the values array contains\n # only \"value\". The requirements are ANDed.\n matchLabels: {}\n # namespaces specifies a static list of namespace names that\n # the term applies to. The term is applied to the union of\n # the namespaces listed in this field and the ones selected\n # by namespaceSelector. null or empty namespaces list and\n # null namespaceSelector means \"this pod's namespace\".\n namespaces: [\"string\"]\n # This pod should be co-located (affinity) or not co-located\n # (anti-affinity) with the pods matching the labelSelector in\n # the specified namespaces, where co-located is defined as\n # running on a node whose value of the label with key\n # topologyKey matches that of any node on which any of the\n # selected pods is running. Empty topologyKey is not\n # allowed.\n topologyKey: string\n # Describes pod anti-affinity scheduling rules (e.g. avoid putting\n # this pod in the same node, zone, etc. as some other pod(s)).\n podAntiAffinity:\n # The scheduler will prefer to schedule pods to nodes that\n # satisfy the anti-affinity expressions specified by this\n # field, but it may choose a node that violates one or more of\n # the expressions. The node that is most preferred is the one\n # with the greatest sum of weights, i.e. 
for each node that\n # meets all of the scheduling requirements (resource request,\n # requiredDuringScheduling anti-affinity expressions, etc.),\n # compute a sum by iterating through the elements of this field\n # and adding \"weight\" to the sum if the node has pods which\n # matches the corresponding podAffinityTerm; the node(s) with\n # the highest sum are the most preferred.\n preferredDuringSchedulingIgnoredDuringExecution:\n - podAffinityTerm:\n # A label query over a set of resources, in this case pods.\n labelSelector:\n # matchExpressions is a list of label selector\n # requirements. The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of\n # values. Valid operators are In, NotIn, Exists and\n # DoesNotExist.\n operator: string\n # values is an array of string values. If the operator\n # is In or NotIn, the values array must be non-empty.\n # If the operator is Exists or DoesNotExist, the values\n # array must be empty. This array is replaced during a\n # strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\",\n # the operator is \"In\", and the values array contains\n # only \"value\". The requirements are ANDed.\n matchLabels: {}\n # A label query over the set of namespaces that the term\n # applies to. The term is applied to the union of the\n # namespaces selected by this field and the ones listed in\n # the namespaces field. null selector and null or empty\n # namespaces list means \"this pod's namespace\". An empty\n # selector ({}) matches all namespaces.\n namespaceSelector:\n # matchExpressions is a list of label selector\n # requirements. The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of\n # values. Valid operators are In, NotIn, Exists and\n # DoesNotExist.\n operator: string\n # values is an array of string values. If the operator\n # is In or NotIn, the values array must be non-empty.\n # If the operator is Exists or DoesNotExist, the values\n # array must be empty. This array is replaced during a\n # strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\",\n # the operator is \"In\", and the values array contains\n # only \"value\". The requirements are ANDed.\n matchLabels: {}\n # namespaces specifies a static list of namespace names that\n # the term applies to. The term is applied to the union of\n # the namespaces listed in this field and the ones selected\n # by namespaceSelector. null or empty namespaces list and\n # null namespaceSelector means \"this pod's namespace\".\n namespaces: [\"string\"]\n # This pod should be co-located (affinity) or not\n # co-located (anti-affinity) with the pods matching the\n # labelSelector in the specified namespaces, where\n # co-located is defined as running on a node whose value of\n # the label with key topologyKey matches that of any node\n # on which any of the selected pods is running. 
Empty\n # topologyKey is not allowed.\n topologyKey: string\n # weight associated with matching the corresponding\n # podAffinityTerm, in the range 1-100.\n weight: 1\n # If the anti-affinity requirements specified by this field are\n # not met at scheduling time, the pod will not be scheduled\n # onto the node. If the anti-affinity requirements specified by\n # this field cease to be met at some point during pod\n # execution (e.g. due to a pod label update), the system may or\n # may not try to eventually evict the pod from its node. When\n # there are multiple elements, the lists of nodes corresponding\n # to each podAffinityTerm are intersected, i.e. all terms must\n # be satisfied.\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n # matchExpressions is a list of label selector requirements.\n # The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of\n # values. Valid operators are In, NotIn, Exists and\n # DoesNotExist.\n operator: string\n # values is an array of string values. If the operator is\n # In or NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array\n # must be empty. This array is replaced during a\n # strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\",\n # the operator is \"In\", and the values array contains\n # only \"value\". The requirements are ANDed.\n matchLabels: {}\n # A label query over the set of namespaces that the term\n # applies to. The term is applied to the union of the\n # namespaces selected by this field and the ones listed in\n # the namespaces field. null selector and null or empty\n # namespaces list means \"this pod's namespace\". An empty\n # selector ({}) matches all namespaces.\n namespaceSelector:\n # matchExpressions is a list of label selector requirements.\n # The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set of\n # values. Valid operators are In, NotIn, Exists and\n # DoesNotExist.\n operator: string\n # values is an array of string values. If the operator is\n # In or NotIn, the values array must be non-empty. If the\n # operator is Exists or DoesNotExist, the values array\n # must be empty. This array is replaced during a\n # strategic merge patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to an\n # element of matchExpressions, whose key field is \"key\",\n # the operator is \"In\", and the values array contains\n # only \"value\". The requirements are ANDed.\n matchLabels: {}\n # namespaces specifies a static list of namespace names that\n # the term applies to. The term is applied to the union of\n # the namespaces listed in this field and the ones selected\n # by namespaceSelector. null or empty namespaces list and\n # null namespaceSelector means \"this pod's namespace\".\n namespaces: [\"string\"]\n # This pod should be co-located (affinity) or not co-located\n # (anti-affinity) with the pods matching the labelSelector in\n # the specified namespaces, where co-located is defined as\n # running on a node whose value of the label with key\n # topologyKey matches that of any node on which any of the\n # selected pods is running. 
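\n # --- Illustrative sketch (editor's addition): a required\n # anti-affinity term that keeps matching pods on separate nodes;\n # the app label is an assumption for illustration.\n # podAntiAffinity:\n # requiredDuringSchedulingIgnoredDuringExecution:\n # - labelSelector:\n # matchLabels:\n # app: kinetica-db\n # topologyKey: kubernetes.io/hostname\n # --- end sketch ---\n # 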
Empty topologyKey is not\n # allowed.\n topologyKey: string\n # Annotations - Annotations to be applied to the StatefulSet\n # DB pods.\n annotations: {}\n # The name of the cluster to form.\n clusterName: string\n # The T-Shirt size of the Kinetica DB Cluster.\n clusterSize:\n # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a\n # representation of the number of nodes in a simple to understand\n # T-Shirt size scheme. This indicates the size of the cluster\n # i.e. the number of nodes. It does not identify the size of the\n # cloud provider nodes. For node size see ClusterTypeEnum.\n # Supported Values are: - XS S M L XL XXL XXXL\n tshirtSize: string\n # ClusterTypeEnum - An Enum of the node types of a KineticaCluster\n # e.g. CPU, GPU along with the Cloud Provider node size e.g. size\n # of the VM.\n tshirtType: string\n # Config Kinetica DB Configuration Object\n config:\n ai:\n apiKey: string\n # Provider - AI API provider type. The default is \"sqlgpt\"\n apiProvider: \"sqlgpt\"\n apiUrl: string\n # AlertManagerConfig\n alertManager:\n # AlertManager IP address (run on head node) default port\n # is \"2003\"\n ipAddress: \"${gaia.host0.address}\"\n port: 2003\n # AlertConfig\n alerts:\n alertDiskAbsolute: [integer]\n # Trigger an alert if available disk space on any given node\n # falls to or below a certain threshold, either absolute\n # (number of bytes) or percentage of total disk space. For\n # multiple thresholds, use a comma-delimited list of values.\n alertDiskPercentage: [1,5,10,20]\n # Trigger generic error message alerts, in cases of various\n # significant runtime errors.\n alertErrorMessages: true\n # Executable to run when an alert condition occurs. This\n # executable will only be run on **rank0** and does not need to\n # be present on other nodes.\n alertExe: \"\"\n # Trigger an alert whenever the status of a host or rank\n # changes.\n alertHostStatus: true\n # Optionally, filter host alerts for a comma-delimited list of\n # statuses. If a filter is empty, every host status change will\n # trigger an alert.\n alertHostStatusFilter: \"fatal_init_error\"\n # The maximum number of triggered alerts guaranteed to be stored\n # at any given time. When this number of alerts is exceeded,\n # older alerts may be discarded to stay within the limit.\n alertMaxStoredAlerts: 100\n alertMemoryAbsolute: [integer]\n # Trigger an alert if available memory on any given node falls\n # to or below a certain threshold, either absolute (number of\n # bytes) or percentage of total memory. For multiple\n # thresholds, use a comma-delimited list of values.\n alertMemoryPercentage: [1,5,10,20]\n # Trigger an alert if a CUDA error occurs on a rank.\n alertRankCudaError: true\n # Trigger alerts when the fallback allocator is employed; e.g.,\n # host memory is allocated because GPU allocation fails. NOTE:\n # To prevent a flooding of alerts, if a fallback allocator is\n # triggered in bursts, not every use will generate an alert.\n alertRankFallbackAllocator: true\n # Trigger an alert whenever the status of a rank changes.\n alertRankStatus: true\n # Optionally, filter rank alerts for a comma-delimited list of\n # statuses. 
If a filter is empty, every rank status change will\n # trigger an alert.\n alertRankStatusFilter:\n [\"fatal_init_error\",\"not_responding\",\"terminated\"]\n # Enable the alerting system.\n enableAlerts: true\n # Directory where the trace event and summary files are stored.\n # Must be a fully qualified path with sufficient free space for\n # required volume of data.\n traceDirectory: \"/tmp\"\n # The maximum number of trace events to be collected\n traceEventBufferSize: 1000000\n # Audit - This section controls the request auditor, which will\n # audit all requests received by the server in full or in part\n # based on the settings.\n audit:\n # Controls whether the body of each request is audited (in JSON\n # format). If 'enable_audit' is \"false\" this setting has no\n # effect. NOTE: For requests that insert data records, this\n # setting does not control the auditing of the records being\n # inserted, only the rest of the request body; see 'audit_data'\n # below to control this. audit_body = false\n body: false\n # Controls whether records being inserted are audited (in JSON\n # format) for requests that insert data records. If\n # either 'enable_audit' or 'audit_body' is \"false\", this\n # setting has no effect. NOTE: Enabling this setting during\n # bulk ingestion of data will rapidly produce very large audit\n # logs and may cause disk space exhaustion; use with caution.\n # audit_data = false\n data: false\n # Controls whether request auditing is enabled. If set\n # to \"true\", the following information is audited for every\n # request: Job ID, URI, User, and Client Address. The settings\n # below control whether additional information about each\n # request is also audited. If set to \"false\", all auditing is\n # disabled. enable_audit = false\n enable: false\n # Controls whether HTTP headers are audited for each request.\n # If 'enable_audit' is \"false\" this setting has no effect.\n # audit_headers = false\n headers: true\n # Controls whether the above audit settings can be altered at\n # runtime via the /alter/system/properties endpoint. In a\n # secure environment where auditing is required at all times,\n # this should be set to \"true\" to lock the settings to what is\n # set in this file. lock_audit = false\n lock: false\n # Controls whether response information is audited for each\n # request. If 'enable_audit' is \"false\" this setting has no\n # effect. audit_response = false\n response: false\n # EventConfig\n events:\n # Run a statistics server to collect information about Kinetica\n # and the machines it runs on.\n internal: true\n # Statistics server IP address (run on head node) default port\n # is \"2003\"\n ipAddress: \"${gaia.host0.address}\" port: 2003\n # Statistics server namespace - should be a machine identifier\n statsServerNamespace: \"gpudb\"\n # ExternalFilesConfig\n externalFiles:\n # Defines the directory from which external files can be loaded\n directory: \"/opt/gpudb/persist\"\n # # Parquet files compression type egress_parquet_compression =\n # snappy\n egressParquetCompression: \"snappy\"\n # Max file size (in MB) to allow saving to a single file. May be\n # overridden by target limitations. egress_single_file_max_size\n # = 100\n egressSingleFileMaxSize: \"100\"\n # Maximum number of simultaneous threads allocated to a given\n # external file read request, on each rank. 
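\n # --- Illustrative sketch (editor's addition): an externalFiles\n # block using the fields in this section; values mirror the\n # documented defaults.\n # externalFiles:\n # directory: \"/opt/gpudb/persist\"\n # egressParquetCompression: \"snappy\"\n # egressSingleFileMaxSize: \"100\"\n # readerNumTasks: \"-1\"\n # --- end sketch ---\n # 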
Note that thread\n # allocation may also be limited by resource group limits, the\n # subtask_concurrency_limit setting, or system load.\n readerNumTasks: \"-1\"\n # GeneralConfig - the root of the gpudb.conf configuration in the\n # CRD\n general:\n # Timeout (in seconds) to wait for a rank to start during a\n # cluster event (e.g. failover) before the event is considered\n # failed.\n clusterEventTimeoutStartupRank: \"300\"\n # Enable (if \"true\") multiple kernels to run concurrently on the\n # same GPU\n concurrentKernelExecution: true\n # Time-to-live in minutes of non-protected tables before they\n # are automatically deleted from the database.\n defaultTTL: \"20\"\n # Disallow the /clear/table request to clear all tables.\n disableClearAll: true\n # Enable overlapped-equi-join filters\n enableOverlappedEquiJoin: true\n # Enable predicate-equi-join filter plan type\n enablePredicateEquiJoin: true\n # If \"true\" then all filter execution will be host-only\n # (i.e. CPU). This can be useful for high-concurrency\n # situations and when PCIe bandwidth is a limiting factor.\n forceHostFilterExecution: false\n # Maximum number of kernels that can be running at the same time\n # on a given GPU. Set to \"0\" for no limit. Only takes effect\n # if 'concurrent_kernel_execution' is \"true\"\n maxConcurrentKernels: \"0\"\n # Maximum number of records that data retrieval requests such\n # as /get/records and /aggregate/groupby will return per\n # request.\n maxGetRecordsSize: 20000\n # Set an optional executable command that will be run once when\n # Kinetica is ready for client requests. This can be used to\n # perform any initialization logic that needs to be run before\n # clients connect. It will be run as the \"gpudb\" user, so you\n # must ensure that any required permissions are set on the file\n # to allow it to be executed. If the command cannot be\n # executed or returns a non-zero error code, then Kinetica will\n # be stopped. Output from the startup script will be logged\n # to \"/opt/gpudb/core/logs/gpudb-on-start.log\" (and its dated\n # relatives). The \"gpudb_env.sh\" script is run directly before\n # the command, so the path will be set to include the supplied\n # Python runtime. Example: on_startup_script\n # = /home/gpudb/on-start.sh param1 param2 ...\n onStartupScript: \"\"\n # Size in bytes of the pinned memory pool per-rank process to\n # speed up copying data to the GPU. Set to \"0\" to disable.\n pinnedMemoryPoolSize: 2000000000\n # Tables and collections with these names will not be deleted\n # (comma separated).\n protectedSets: \"MASTER,_MASTER,_DATASOURCE\"\n # Timeout (in minutes) for filter-type requests\n requestTimeout: \"20\"\n # Timeout (in seconds) to wait for a rank to exit gracefully\n # before it is force-killed. Machines with slow disk drives may\n # require longer times and data may be lost if a drive is not\n # responsive.\n timeoutShutdownRank: \"300\"\n # Timeout (in seconds) to wait for each database subsystem to\n # exit gracefully before it is force-killed.\n timeoutShutdownSubsystem: \"20\"\n # Timeout (in seconds) to wait for each database subsystem to\n # startup. 
Subsystems include the Query Planner, Graph,\n # Stats, & HTTP servers, as well as external text-search\n # ranks.\n timeoutStartupSubsystem: \"60\"\n # GraphConfig\n graph:\n # Enable the graph server\n enable: false\n # List of GPU devices to be used by graph server The server\n # would ideally be run on a different node with dedicated GPU\n # (s)\n gpuList: \"\"\n # Specify where the graph server should be run, defaults to head\n # node\n ipAddress: \"${gaia.rank0_ip_address}\"\n # Maximum memory that can be used by the graph server, set\n # to \"0\" to disable memory restriction\n maxMemory: 0\n # Port used for responses from the graph server to the database\n # server\n pullPort: 8100\n # Port used for requests from the database server to the graph\n # server\n pushPort: 8099\n # Number of seconds the graph client will wait for a response\n # from the graph server\n timeout: 1200\n # HardwareConfig\n hardware:\n # Rank0HardwareConfig\n rank0:\n # Specify the GPU to use for all calculations on the HTTP\n # server node, **rank0**. NOTE: The **rank0** GPU may be\n # shared with another rank.\n gpu: 0\n # Set the head HTTP **rank0** numa node(s). If left empty,\n # there will be no thread affinity or preferred memory node.\n # The node list may be either a single node number or a\n # range; e.g., \"1-5,7,10\". If there will be many simultaneous\n # users, specify as many nodes as possible that won't overlap\n # the **rank1** to **rankN** worker numa nodes that the GPUs\n # are on. If there will be few simultaneous users and WMS\n # speed is important, choose the numa node the 'rank0.gpu' is\n # on.\n numaNode: ranks:\n - baseNumaNode: string\n # Set each worker rank's preferred data numa node for CPU\n # affinity and memory allocation.\n # The 'rank<#>.data_numa_node' is the node or nodes that data\n # intensive threads will run in and should be set to the same\n # numa node that the GPU specified by the\n # corresponding 'rank<#>.taskcalc_gpu' is on for best\n # performance. If the 'rank<#>.taskcalc_gpu' is specified\n # the 'rank<#>.data_numa_node' will be automatically set to\n # the node the GPU is attached to, otherwise there will be no\n # CPU thread affinity or preferred node for memory allocation\n # if not specified or left empty. The node list may be a\n # single node number or a range; e.g., \"1-5,7,10\".\n dataNumaNode: string\n # Set the GPU device for each worker rank to use. If no GPUs\n # are specified, each rank will round-robin the available\n # GPUs per host system. Add 'rank<#>.taskcalc_gpu' as needed\n # for the worker ranks, where *#* ranges from \"1\" to the\n # highest *rank #* among the 'rank<#>.host' parameters\n # Example setting the GPUs to use for ranks 1 and 2: \n # # rank1.taskcalc_gpu = 0 # rank2.taskcalc_gpu = 1\n taskCalcGPU: kafka:\n # Maximum number of records to be ingested in a single batch\n # kafka.batch_size = 1000\n batchSize: 1000\n # Maximum time (milliseconds) for each poll to get records from\n # kafka kafka.poll_timeout = 0\n pollTimeout: 1\n # Maximum wait time (seconds) to buffer records received from\n # kafka before ingestion kafka.wait_time = 30\n waitTime: 30\n # KifsConfig\n kifs:\n # KIFs user data size limit\n dataLimit: \"4Gi\"\n # sudo usermod -a -G gpudb_proc <user>\n enable: false\n # Parent directory of the mount point for the KiFS file system.\n # Must be a fully qualified path. The actual mount point will\n # be a subdirectory *mount* below this directory. 
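\n # --- Illustrative sketch (editor's addition): enabling KiFS\n # with the fields shown in this kifs section; values mirror the\n # documented defaults.\n # kifs:\n # enable: true\n # dataLimit: \"4Gi\"\n # mountPoint: \"/gpudb/kifs\"\n # useManagedCredentials: true\n # --- end sketch ---\n # 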
Note that\n # this folder must have read, write and execute permissions for\n # the \"gpudb\" user and the \"gpudb_proc\" group, and it cannot be\n # a path on an NFS.\n mountPoint: \"/gpudb/kifs\" useManagedCredentials: true\n # Etcd *ETCDConfig `json:\"etcd,omitempty\"` HA HAConfig\n # `json:\"ha,omitempty\"`\n ml:\n # Enable the ML server.\n enable: false\n # NetworkConfig\n network:\n # HAAddress - An optional address to allow inter-cluster\n # communication with HA when 'address' is not routable between\n # clusters.\n HAAddress: string\n # CompressNetworkData - Enables compression of inter-node\n # network data transfers.\n compressNetworkData: false\n # EnableHTTPDProxy - Start an HTTP server as a proxy to handle\n # LDAP and/or Kerberos authentication. Each host will run an\n # HTTP server and access to each rank is available through\n # http://host:8082/gpudb-1, where port \"8082\" is defined\n # by 'httpd_proxy_port'. NOTE: HTTP external endpoints are not\n # affected by the 'use_https' parameter above. If you wish to\n # enable HTTPS, you must edit\n # the \"/opt/gpudb/httpd/conf/httpd.conf\" and setup HTTPS as per\n # the Apache httpd documentation at\n # https://httpd.apache.org/docs/2.2/\n enableHTTPDProxy: true\n # EnableWorkerHTTPServers - Enable worker HTTP servers; each\n # process runs its own server for multi-head ingest.\n enableWorkerHTTPServers: true\n # GlobalManagerLocalPubPort - ?\n globalManagerLocalPubPort: 5554\n # GlobalManagerPortOne - Internal communication ports - Host\n # manager status notification channel\n globalManagerPortOne: 5552\n # GlobalManagerPubPort - Host manager synchronization message\n # publishing channel port\n globalManagerPubPort: 5553\n # HeadIPAddress - Head HTTP server IP address. Set to the\n # publicly accessible IP address of the first\n # process, **rank0**.\n headIPAddress: \"172.20.0.10\"\n # HeadPort - Head HTTP server port to use\n # for 'head_ip_address'.\n headPort: 9191\n # HostManagerHTTPPort - HTTP port for web portal of the host\n # manager\n hostManagerHTTPPort: 9300\n # HTTPAllowOrigin - Value to return via\n # Access-Control-Allow-Origin HTTP header (for Cross-Origin\n # Resource Sharing). Set to empty to not return the header and\n # disallow CORS.\n httpAllowOrigin: \"*\"\n # HTTPKeepAlive - Keep HTTP connections alive between requests\n httpKeepAlive: false\n # HTTPDProxyPort - TCP port that the httpd auth proxy server\n # will listen on if 'enable_httpd_proxy' is \"true\".\n httpdProxyPort: 8082\n # HTTPDProxyUseHTTPS - Set to \"true\" if the httpd auth proxy\n # server is configured to use HTTPS.\n httpdProxyUseHTTPS: false\n # HTTPSCertFile - File containing the SSL certificate e.g.\n # cert.pem If required, a self-signed certificate(expires after\n # 10 years) can be generated via the command: e.g. cert.pem\n # openssl req -newkey rsa:2048 -new -nodes -x509 \\ -days\n # 3650 -keyout key.pem -out cert.pem\n httpsCertFile: \"\"\n # HTTPSKeyFile - File containing the SSL private Key e.g.\n # key.pem If required, a self-signed certificate (expires after\n # 10 years) can be generated via the command: openssl\n # req -newkey rsa:2048 -new -nodes -x509 \\ -days 3650 -keyout\n # key.pem -out cert.pem\n httpsKeyFile: \"\"\n # Rank0IPAddress - Internal use IP address of the head HTTP\n # server, **rank0**. 
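\n # --- Illustrative sketch (editor's addition): a small network\n # block drawn from the fields above; values mirror the\n # documented defaults.\n # network:\n # enableHTTPDProxy: true\n # httpdProxyPort: 8082\n # headPort: 9191\n # httpKeepAlive: false\n # --- end sketch ---\n # 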
Set to either a second internal network\n # accessible by all ranks or to '${gaia.head_ip_address}'.\n rank0IPAddress: \"${gaia.rank0.host}\" ranks:\n - communicatorPort:\n # Number of port to expose on the pod's IP address. This\n # must be a valid port number, 0 < x < 65536.\n containerPort: 1\n # What host IP to bind the external port to.\n hostIP: string\n # Number of port to expose on the host. If specified, this\n # must be a valid port number, 0 < x < 65536. If\n # HostNetwork is specified, this must match ContainerPort.\n # Most containers do not need this.\n hostPort: 1\n # If specified, this must be an IANA_SVC_NAME and unique\n # within the pod. Each named port in a pod must have a\n # unique name. Name for the port that can be referred to by\n # services.\n name: string\n # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n # to \"TCP\".\n protocol: \"TCP\"\n # Specify the hosts to run each rank worker process in the\n # cluster. For a single machine system, use \"127.0.0.1\", but\n # if using two or more machines, a hostname or IP address\n # must be specified for each rank that is accessible from the\n # other ranks. See also 'head_ip_address'\n # and 'rank0_ip_address'.\n host: string\n # Optionally, specify the worker HTTP server ports. The\n # default is to use ('head_port' + *rank #*) for each worker\n # process where rank number is from \"1\" to number of ranks\n # in 'rank<#>.host' below.\n httpServerPort:\n # Number of port to expose on the pod's IP address. This\n # must be a valid port number, 0 < x < 65536.\n containerPort: 1\n # What host IP to bind the external port to.\n hostIP: string\n # Number of port to expose on the host. If specified, this\n # must be a valid port number, 0 < x < 65536. If\n # HostNetwork is specified, this must match ContainerPort.\n # Most containers do not need this.\n hostPort: 1\n # If specified, this must be an IANA_SVC_NAME and unique\n # within the pod. Each named port in a pod must have a\n # unique name. Name for the port that can be referred to by\n # services.\n name: string\n # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n # to \"TCP\".\n protocol: \"TCP\"\n # This is the Kubernetes pod IP Address of the current rank\n # which we need to populate in the operator. NOTE: Internal\n # Attribute\n podIP: string\n # Optionally, specify a public URL for each worker HTTP server\n # that clients should use to connect for multi-head\n # operations. NOTE: If specified for any ranks, a public URL\n # must be specified for all ranks.\n publicURL: \"https://:8082/gpudb-{{.Rank}}\"\n # Define the rank number of this rank.\n rank: 1\n # SetMonitorPort - Set monitor ZMQ publisher server port (-1 to\n # disable), uses the 'head_ip_address' interface.\n setMonitorPort: 9002\n # SetMonitorProxyPort - Set monitor ZMQ publisher internal proxy\n # server port (\"-1\" to disable), uses the 'head_ip_address'\n # interface. 
IMPORTANT: Disabling this port effectively\n # prevents worker nodes from publishing set monitor\n # notifications when multi-head ingest is enabled\n # (see 'enable_worker_http_servers').\n setMonitorProxyPort: 9003\n # SetMonitorQueueSize - Set monitor queue size\n setMonitorQueueSize: 1000\n # TriggerPort - Trigger ZMQ publisher server port (\"-1\" to\n # disable), uses the 'head_ip_address' interface.\n triggerPort: -1\n # UseHTTPS - Set to \"true\" to use HTTPS; if \"true\"\n # then 'https_key_file' and 'https_cert_file' must be provided\n useHttps: false\n # PersistenceConfig\n persistence:\n # Removed in 7.2\n IndexDBFlushImmediate: true\n # DataLoadingSchema Startup data-loading scheme\n buildMaterializedViewsOnStart: \"on_demand\"\n # DataLoadingSchema Startup data-loading scheme\n buildPKIndexOnStart: \"on_demand\"\n # Target maximum data size for any one column in a chunk\n # (512 MB) (0 = disable). chunk_max_memory = 8192000000\n chunkColumnMaxMemory: 8192000000\n # Target maximum total data size for all columns in a chunk\n # (8 GB) (0 = disable).\n chunkMaxMemory: 512000000\n # Number of records per chunk (\"0\" disables chunking)\n chunkSize: 8000000\n # Determines whether to execute kernels on host (CPU) or device\n # (GPU). Possible values are: \n # * \"default\" : engine decides * \"host\" : execute only\n # host * \"device\" : execute only device * *<rows>* :\n # execute on the host if chunked column contains the given\n # number of *rows* or fewer; otherwise, execute on device.\n executionMode: \"device\"\n # Removed in 7.2\n fsyncIndexDBImmediate: true\n # Removed in 7.2\n fsyncInodesImmediate: true\n # Removed in 7.2\n fsyncMetadataImmediate: true\n # Removed in 7.2\n fsyncOnInterval: true\n # Maximum number of open files for IndexedDb object file store.\n # Removed in 7.2\n indexDBMaxOpenFiles: \n # Table of contents size for IndexedDb object file store.\n # Removed in 7.2\n indexDBTOCSize: \n # Disable detection of sparse file support and use the full file\n # length which may be an over-estimate of the actual usage in\n # the persist tier. Removed in 7.2\n indexDBTierByFileLength: false\n # Startup data-loading scheme: \n # * \"always\" : load all the data into memory before\n # accepting requests * \"lazy\" : load the necessary\n # data to start, but load the remainder\n # lazily * \"on_demand\" : only load data as requests use it\n loadVectorsOnStart: \"on_demand\"\n # Removed in 7.2\n metadataFlushImmediate: true\n # Specify a base directory to store persistence data files.\n persistDirectory: \"/opt/gpudb/persist\"\n # Whether to use synchronous persistence file writing.\n # If \"false\", files will be written asynchronously. Removed in\n # 7.2\n persistSync: true\n # Duration in seconds, for which persistence files will be\n # force-synced if out of sync, once per minute. NOTE: Files are\n # always opportunistically saved; this simply enforces a\n # maximum time a file can be out of date. 
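\n # --- Illustrative sketch (editor's addition): a common\n # persistence tuning block using fields documented in this\n # section; values are assumptions for illustration.\n # persistence:\n # persistDirectory: \"/opt/gpudb/persist\"\n # loadVectorsOnStart: \"lazy\"\n # persistSyncTime: 5\n # --- end sketch ---\n # 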
Set to a very high\n # number to disable.\n persistSyncTime: 5\n # The maximum number of bytes in the shadow aggregate cache\n shadowAggSize: 100000000\n # Whether to enable chunk caching\n shadowCubeEnabled: true\n # The maximum number of bytes in the shadow filter cache\n shadowFilterSize: 100000000\n # Base directory to store hashed strings.\n smsDirectory: \"${gaia.persist_directory}\"\n # Maximum number of open files (per-TOM) for the SMS\n # (string) store.\n smsMaxOpenFiles: 128\n # Synchronous compression: compress vectors on set compression.\n synchronousCompression: false\n # Directory for GPUdb to use to store temporary files. Must be a\n # fully qualified path, have at least 100Mb of free space, and\n # execute permission.\n tempDirectory: \"${gaia.persist_directory}/tmp\"\n # Base directory to store the text search index.\n textIndexDirectory: \"${gaia.persist_directory}\"\n # Enable checksum protection on the wal entries. New in 7.2\n walChecksum: true\n # Specifies how frequently wal entries are written with\n # background sync. New in 7.2\n walFlushFrequency: 60\n # Maximum size of each wal segment file New in 7.2\n walMaxSegmentSize: 500000000\n # Approximate number of segment files to split the wal across. A\n # minimum of two is required. The size of the wal is limited by\n # segment_count * max_segment_size. (per rank and per tom) Set\n # to 0 to remove a size limit on the wal itself, but still be\n # bounded by rank tier limits. Set to -1 to have the database\n # decide automatically per table. New in 7.2\n walSegmentCount: \n # Sync mode to use when persisting wal entries to disk: \n # \"none\" : Disable the wal \"background\" : Wal entries are\n # periodically written instead of immediately after each\n # operation \"flush\" : Protects entries in the event of a\n # database crash \"fsync\" : Protects entries in the event\n # of an OS crash New in 7.2\n walSyncPolicy: \"flush\"\n # If true, any table that is found to be corrupt after replaying\n # its wal at startup will automatically be truncated so that\n # the table becomes operable. If false, the user will be\n # responsible for resolving the issue via sql REPAIR TABLE or\n # similar. New in 7.2\n walTruncateCorruptTablesOnStart: true\n # PostgresProxy\n postgresProxy:\n # Postgres Proxy Server Start an Postgres(TCP) server as a proxy\n # to handle postgres wire protocol messages.\n enablePostgresProxy: false\n # Set idle connection timeout in seconds. (default: \"1200\")\n idleConnectionTimeout: 1200\n # Set max number of queued server connections. (default: \"1\")\n maxQueuedConnections: 1\n # Set max number of server threads to spawn. (default: \"64\")\n maxThreads: 64\n # Set min number of server threads to spawn. (default: \"2\")\n minThreads: 2\n # TCP port that the postgres proxy server will listen on\n # if 'enable_postgres_proxy' is \"true\".\n port:\n # Number of port to expose on the pod's IP address. This must\n # be a valid port number, 0 < x < 65536.\n containerPort: 1\n # What host IP to bind the external port to.\n hostIP: string\n # Number of port to expose on the host. If specified, this\n # must be a valid port number, 0 < x < 65536. If HostNetwork\n # is specified, this must match ContainerPort. Most\n # containers do not need this.\n hostPort: 1\n # If specified, this must be an IANA_SVC_NAME and unique\n # within the pod. Each named port in a pod must have a unique\n # name. Name for the port that can be referred to by\n # services.\n name: string\n # Protocol for port. 
Must be UDP, TCP, or SCTP. Defaults\n # to \"TCP\".\n protocol: \"TCP\"\n # Set to \"true\" to use SSL; if \"true\" then 'ssl_key_file'\n # and 'ssl_cert_file' must be provided\n ssl: false\n sslCertFile: \"\"\n # Files containing the SSL private key and the SSL certificate.\n # If required, a self-signed certificate (expires after 10\n # years) can be generated via the command: openssl req -newkey\n # rsa:2048 -new -nodes -x509 \\ -days 3650 -keyout key.pem -out\n # cert.pem\n sslKeyFile: \"\"\n # ProcessesConfig\n processes:\n # Set the maximum number of threads per tom for table\n # initialization on startup\n initTablesNumThreadsPerTom: 8\n # Set the number of parallel calculation threads to use for data\n # processing; use \"-1\" to use the max number of threads\n # (not recommended)\n kernelOmpThreads: 3\n # The maximum number of web server threads to spawn\n maxHttpThreads: 512\n # Set the maximum number of threads (both workers and masters)\n # to be passed to TBB on initialization. Generally\n # speaking, 'max_tbb_threads_per_rank' - \"1\" TBB workers will\n # be created. Use \"-1\" for no limit.\n maxTbbThreadsPerRank: \"-1\"\n # The minimum number of web server threads to spawn\n minHttpThreads: 8\n # Set the number of parallel jobs to create for multi-child set\n # calculations; use \"-1\" to use the max number of threads\n # (not recommended)\n smOmpThreads: 2\n # Maximum number of simultaneous threads allocated to a given\n # request, on each rank. Note that thread allocation may also\n # be limited by resource group limits and/or system load.\n subtaskConcurrentyLimit: \"-1\"\n # Set the number of TaskCalculators per TOM, GPU data\n # processors.\n tcsPerTom: \"-1\"\n # Set the number of TOMs (data container shards) per rank\n tomsPerRank: 1\n # Set the number of TaskProcessors per TOM, CPU data\n # processors.\n tpsPerTom: \"-1\"\n # ProcsConfig\n procs:\n # Directory where proc files are stored at runtime. Must be a\n # fully qualified path with execute permission. If not\n # specified, 'temp_directory' will be used.\n directory:\n # PersistentVolumeClaim is a user's request for and claim to a\n # persistent volume\n persistVolumeClaim:\n # APIVersion defines the versioned schema of this\n # representation of an object. Servers should convert\n # recognized schemas to the latest internal value, and may\n # reject unrecognized values. More info:\n # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n apiVersion: app.kinetica.com/v1\n # Kind is a string value representing the REST resource this\n # object represents. Servers may infer this from the\n # endpoint the client submits requests to. Cannot be\n # updated. In CamelCase. More info:\n # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n kind: KineticaCluster\n # Standard object's metadata. More info:\n # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n metadata: {}\n # spec defines the desired characteristics of a volume\n # requested by a pod author. More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n spec:\n # accessModes contains the desired access modes the volume\n # should have. 
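\n # --- Illustrative sketch (editor's addition): a typical claim\n # for the procs directory built from the PVC fields above; the\n # storage class and size are assumptions.\n # persistVolumeClaim:\n # spec:\n # accessModes: [\"ReadWriteOnce\"]\n # resources:\n # requests:\n # storage: \"10Gi\"\n # storageClassName: \"standard\"\n # --- end sketch ---\n # 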
More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n accessModes: [\"string\"]\n # dataSource field can be used to specify either: * An\n # existing VolumeSnapshot object\n # (snapshot.storage.k8s.io/VolumeSnapshot) * An existing\n # PVC (PersistentVolumeClaim) If the provisioner or an\n # external controller can support the specified data\n # source, it will create a new volume based on the\n # contents of the specified data source. When the\n # AnyVolumeDataSource feature gate is enabled, dataSource\n # contents will be copied to dataSourceRef, and\n # dataSourceRef contents will be copied to dataSource\n # when dataSourceRef.namespace is not specified. If the\n # namespace is specified, then dataSourceRef will not be\n # copied to dataSource.\n dataSource:\n # APIGroup is the group for the resource being\n # referenced. If APIGroup is not specified, the\n # specified Kind must be in the core API group. For any\n # other third-party types, APIGroup is required.\n apiGroup: string\n # Kind is the type of resource being referenced\n kind: KineticaCluster\n # Name is the name of resource being referenced\n name: string\n # dataSourceRef specifies the object from which to\n # populate the volume with data, if a non-empty volume is\n # desired. This may be any object from a non-empty API\n # group (non core object) or a PersistentVolumeClaim\n # object. When this field is specified, volume binding\n # will only succeed if the type of the specified object\n # matches some installed volume populator or dynamic\n # provisioner. This field will replace the functionality\n # of the dataSource field and as such if both fields are\n # non-empty, they must have the same value. For backwards\n # compatibility, when namespace isn't specified in\n # dataSourceRef, both fields (dataSource and\n # dataSourceRef) will be set to the same value\n # automatically if one of them is empty and the other is\n # non-empty. When namespace is specified in\n # dataSourceRef, dataSource isn't set to the same value\n # and must be empty. There are three important\n # differences between dataSource and dataSourceRef: *\n # While dataSource only allows two specific types of\n # objects, dataSourceRef allows any non-core object, as\n # well as PersistentVolumeClaim objects. * While\n # dataSource ignores disallowed values (dropping them),\n # dataSourceRef preserves all values, and generates an\n # error if a disallowed value is specified. * While\n # dataSource only allows local objects, dataSourceRef\n # allows objects in any namespaces. (Beta) Using this\n # field requires the AnyVolumeDataSource feature gate to\n # be enabled. (Alpha) Using the namespace field of\n # dataSourceRef requires the\n # CrossNamespaceVolumeDataSource feature gate to be\n # enabled.\n dataSourceRef:\n # APIGroup is the group for the resource being\n # referenced. If APIGroup is not specified, the\n # specified Kind must be in the core API group. For any\n # other third-party types, APIGroup is required.\n apiGroup: string\n # Kind is the type of resource being referenced\n kind: KineticaCluster\n # Name is the name of resource being referenced\n name: string\n # Namespace is the namespace of resource being\n # referenced Note that when a namespace is specified, a\n # gateway.networking.k8s.io/ReferenceGrant object is\n # required in the referent namespace to allow that\n # namespace's owner to accept the reference. 
          namespace: string
        # resources represents the minimum resources the volume should
        # have. If RecoverVolumeExpansionFailure feature is enabled users
        # are allowed to specify resource requirements that are lower than
        # previous value but must still be higher than capacity recorded in
        # the status field of the claim. More info:
        # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
        resources:
          # Claims lists the names of resources, defined in
          # spec.resourceClaims, that are used by this container. This is
          # an alpha field and requires enabling the
          # DynamicResourceAllocation feature gate. This field is
          # immutable. It can only be set for containers.
          claims:
          - name: string
          # Limits describes the maximum amount of compute resources
          # allowed. More info:
          # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
          limits: {}
          # Requests describes the minimum amount of compute resources
          # required. If Requests is omitted for a container, it defaults
          # to Limits if that is explicitly specified, otherwise to an
          # implementation-defined value. Requests cannot exceed Limits.
          # More info:
          # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
          requests: {}
        # selector is a label query over volumes to consider for binding.
        selector:
          # matchExpressions is a list of label selector requirements. The
          # requirements are ANDed.
          matchExpressions:
          - key: string
            # operator represents a key's relationship to a set of values.
            # Valid operators are In, NotIn, Exists and DoesNotExist.
            operator: string
            # values is an array of string values. If the operator is In or
            # NotIn, the values array must be non-empty. If the operator is
            # Exists or DoesNotExist, the values array must be empty. This
            # array is replaced during a strategic merge patch.
            values: ["string"]
          # matchLabels is a map of {key,value} pairs. A single {key,value}
          # in the matchLabels map is equivalent to an element of
          # matchExpressions, whose key field is "key", the operator is
          # "In", and the values array contains only "value". The
          # requirements are ANDed.
          matchLabels: {}
        # storageClassName is the name of the StorageClass required by the
        # claim. More info:
        # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
        storageClassName: string
        # volumeMode defines what type of volume is required by the claim.
        # Value of Filesystem is implied when not included in claim spec.
        volumeMode: string
        # volumeName is the binding reference to the PersistentVolume
        # backing this claim.
        volumeName: string
      # status represents the current information/status of a persistent
      # volume claim. Read-only. More info:
      # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
      status:
        # accessModes contains the actual access modes the volume backing
        # the PVC has. More info:
        # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
        accessModes: ["string"]
        # allocatedResources is the storage resource within
        # AllocatedResources tracks the capacity allocated to a PVC. It may
        # be larger than the actual capacity when a volume expansion
        # operation is requested. For storage quota, the larger value from
        # allocatedResources and PVC.spec.resources is used. If
        # allocatedResources is not set, PVC.spec.resources alone is used
        # for quota calculation. If a volume expansion capacity request is
        # lowered, allocatedResources is only lowered if there are no
        # expansion operations in progress and if the actual volume
        # capacity is equal or lower than the requested capacity. This is
        # an alpha field and requires enabling RecoverVolumeExpansionFailure
        # feature.
        allocatedResources: {}
        # capacity represents the actual resources of the underlying
        # volume.
        capacity: {}
        # conditions is the current Condition of persistent volume claim.
        # If underlying persistent volume is being resized then the
        # Condition will be set to 'ResizeStarted'.
        conditions:
        - lastProbeTime: string
          # lastTransitionTime is the time the condition transitioned from
          # one status to another.
          lastTransitionTime: string
          # message is the human-readable message indicating details about
          # last transition.
          message: string
          # reason is a unique, short, machine-understandable string that
          # gives the reason for the condition's last transition. If it
          # reports "ResizeStarted" that means the underlying persistent
          # volume is being resized.
          reason: string
          status: string
          # PersistentVolumeClaimConditionType is a valid value of
          # PersistentVolumeClaimCondition.Type
          type: string
        # phase represents the current phase of PersistentVolumeClaim.
        phase: string
        # resizeStatus stores status of resize operation. ResizeStatus is
        # not set by default but when expansion is complete resizeStatus is
        # set to empty string by resize controller or kubelet. This is an
        # alpha field and requires enabling RecoverVolumeExpansionFailure
        # feature.
        resizeStatus: string
    # VolumeMount describes a mounting of a Volume within a container.
    volumeMount:
      # Path within the container at which the volume should be mounted.
      # Must not contain ':'.
      mountPath: string
      # mountPropagation determines how mounts are propagated from the host
      # to container and the other way around. When not set,
      # MountPropagationNone is used. This field is beta in 1.10.
      mountPropagation: string
      # This must match the Name of a Volume.
      name: string
      # Mounted read-only if true, read-write otherwise (false or
      # unspecified). Defaults to false.
      readOnly: true
      # Path within the volume from which the container's volume should be
      # mounted. Defaults to "" (volume's root).
      subPath: string
      # Expanded path within the volume from which the container's volume
      # should be mounted. Behaves similarly to SubPath but environment
      # variable references $(VAR_NAME) are expanded using the container's
      # environment. Defaults to "" (volume's root). SubPathExpr and
      # SubPath are mutually exclusive.
      subPathExpr: string
  # Enable procs (UDFs)
  enable: true
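# --- Illustrative sketch (not part of the schema): enabling UDFs with a
# dedicated runtime volume. Field names come from this reference; the
# storage class, size, and volume name are assumptions.
#
#   procs:
#     enable: true
#     directory:
#       persistVolumeClaim:
#         spec:
#           accessModes: ["ReadWriteOnce"]
#           storageClassName: standard
#           resources:
#             requests:
#               storage: 10Gi
#       volumeMount:
#         name: procs-volume
#         mountPath: /opt/gpudb/procs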
# SecurityConfig
security:
  # Automatically create accounts for externally-authenticated users. If
  # 'enable_external_authentication' is "false", this setting has no
  # effect. Note that accounts are not automatically deleted if users are
  # removed from the external authentication provider and will be orphaned.
  autoCreateExternalUsers: false
  # Automatically add roles passed in via the "KINETICA_ROLES" HTTP header
  # to externally-authenticated users. Specified roles that do not exist
  # are ignored. If 'enable_external_authentication' is "false", this
  # setting has no effect. IMPORTANT: DO NOT ENABLE unless the
  # authentication proxy is configured to block "KINETICA_ROLES" HTTP
  # headers passed in from clients.
  autoGrantExternalRoles: false
  # Comma-separated list of roles to revoke from externally-authenticated
  # users prior to granting roles passed in via the "KINETICA_ROLES" HTTP
  # header, or "*" to revoke all roles. Preceding a role name with an "!"
  # overrides the revocation (e.g. "*,!foo" revokes all roles except
  # "foo"). Leave blank to disable. If either
  # 'enable_external_authentication' or 'auto_grant_external_roles' is
  # "false", this setting has no effect.
  autoRevokeExternalRoles: false
  # Enable authorization checks. When disabled, all requests will be
  # treated as the administrative user.
  enableAuthorization: true
  # Enable external (LDAP, Kerberos, etc.) authentication. User IDs of
  # externally-authenticated users must be passed in via the "REMOTE_USER"
  # HTTP header from the authentication proxy. May be used in conjunction
  # with the 'enable_httpd_proxy' setting above for an integrated external
  # authentication solution. IMPORTANT: DO NOT ENABLE unless external
  # access to GPUdb ports has been blocked via firewall AND the
  # authentication proxy is configured to block "REMOTE_USER" HTTP headers
  # passed in from clients.
  enableExternalAuthentication: true
  # ExternalSecurity
  externalSecurity:
    # Ranger
    ranger:
      # AuthorizerAddress - The network URI for the ranger_authorizer to
      # start. The URI can be either TCP or IPC. TCP address is used to
      # indicate the remote ranger_authorizer which may run at other hosts.
      # The IPC address is for a local ranger_authorizer. Example addresses
      # for remote or TCP servers: tcp://127.0.0.1:9293 tcp://HOST_IP:9293
      # Example address for local IPC servers: ipc:///tmp/gpudb-ranger-0
      # security.external.ranger_authorizer.address =
      # ipc://${gaia.temp_directory}/gpudb-ranger-0
      authorizerAddress: "ipc://${gaia.temp_directory}/gpudb-ranger-0"
      # Remote debugger port used for the ranger_authorizer. Setting the
      # port to "0" disables remote debugging. NOTE: Recommended port to
      # use is "5005"
      # security.external.ranger_authorizer.remote_debug_port = 0
      authorizerRemoteDebugPort: 0
      # AuthorizerTimeout - Ranger Authorizer timeout in seconds
      # security.external.ranger_authorizer.timeout = 120
      authorizerTimeout: 120
      # CacheMinutes - Maximum minutes to hold on to data from Ranger
      # security.external.ranger.cache_minutes = 60
      cacheMinutes: 60
      # Name of the service created on the Ranger Server to manage this
      # Kinetica instance
      # security.external.ranger.service_name = kinetica
      name: "kinetica"
      # ExtURL - URL of Ranger REST API. E.g., https://localhost:6080/
      # Leave blank for no Ranger Server
      # security.external.ranger.url =
      url: string
  # The minimum allowable password length.
  minPasswordLength: 4
  # Require all users to be authenticated. Disable this to allow users to
  # access the database as the 'unauthenticated' user. Useful for
  # situations where the public needs to access the data.
  requireAuthentication: true
  # UnifiedSecurityNamespace - Use a single namespace for internal and
  # external user IDs and role names. If false, external user IDs must be
  # prefixed with "@" to differentiate them from internal user IDs and role
  # names (except in the "REMOTE_USER" HTTP header, where the "@" is
  # omitted). unified_security_namespace = true
  unifiedSecurityNamespace: true
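# --- Illustrative sketch (not part of the schema): a cluster fronted by an
# authenticating proxy, with auto-provisioned external users. Field names
# come from this reference; enabling these flags assumes the proxy really
# does strip "REMOTE_USER"/"KINETICA_ROLES" headers from client traffic.
#
#   security:
#     requireAuthentication: true
#     enableAuthorization: true
#     enableExternalAuthentication: true
#     autoCreateExternalUsers: true
#     autoGrantExternalRoles: true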
# SQLConfig
sql:
  # SQLPlannerAddress is not included as it is just default always
  address: "ipc://${gaia.temp_directory}/gpudb-query-engine-0"
  # Enable the cost-based optimizer
  costBasedOptimization: false
  # Enable distributed joins
  distributedJoins: true
  # Enable distributed operations
  distributedOperations: true
  # Enable Query Planner
  enablePlanner: true
  # Perform joins between only 2 tables at a time; default is all tables
  # involved in the operation at once
  forceBinaryJoins: false
  # Perform unions/intersections/exceptions between only 2 tables at a
  # time; default is all tables involved in the operation at once
  forceBinarySetOps: false
  # Max parallel steps
  maxParallelSteps: 4
  # Max allowed view nesting levels. Valid range (1-64)
  maxViewNestingLevels: 16
  # TTL of the paging results table
  pagingTableTTL: 20
  # Enable parallel query evaluation
  parallelExecution: true
  # The maximum number of entries in the SQL plan cache. The default is
  # "4000" entries, but the configurable range is "1" - "1000000". Plan
  # caching will be disabled if the value is set outside of that range.
  planCacheSize: 4000
  # The maximum memory for the query planner to use in Megabytes.
  plannerMaxMemory: 4096
  # The maximum stack size for the query planner threads to use in
  # Megabytes.
  plannerMaxStack: 6
  # Query planner timeout in seconds
  plannerTimeout: 120
  # Max Query planner threads
  plannerWorkers: 16
  # Remote debugger port used for the query planner. Setting the port to
  # "0" disables remote debugging. NOTE: Recommended port to use is "5005"
  remoteDebugPort: 5005
  # TTL of the query cache results table
  resultsCacheTTL: 60
  # Enable query results caching
  resultsCaching: true
  # Enable rule-based query rewrites
  ruleBasedOptimization: true
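# --- Illustrative sketch (not part of the schema): loosening planner limits
# for a deployment with heavy multi-way joins. Field names come from this
# reference; the values are assumptions, not recommendations.
#
#   sql:
#     enablePlanner: true
#     maxParallelSteps: 8
#     plannerMaxMemory: 8192
#     plannerTimeout: 300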
# SQLEngineConfig
sqlEngine:
  # Enable the cost-based optimizer
  costBasedOptimization: false
  # Name of default collection for user tables
  defaultSchema: ""
  # Enable distributed joins
  distributedJoins: true
  # Enable distributed operations
  distributedOperations: true
  # Perform joins between only 2 tables at a time; default is all tables
  # involved in the operation at once
  forceBinaryJoins: false
  # Perform unions/intersections/exceptions between only 2 tables at a
  # time; default is all tables involved in the operation at once
  forceBinarySetOps: false
  # Max parallel steps
  maxParallelSteps: 4
  # Max allowed view nesting levels. Valid range (1-64)
  maxViewNestingLevels: 16
  # TTL of the paging results table
  pagingTableTTL: 20
  # Enable parallel query evaluation
  parallelExecution: true
  # The maximum number of entries in the SQL plan cache. The default is
  # "4000" entries, but the configurable range is "1" - "1000000". Plan
  # caching will be disabled if the value is set outside of that range.
  planCacheSize: 4000
  # PlannerConfig
  planner:
    # Enable Query Planner
    enablePlanner: true
    # The maximum memory for the query planner to use in Megabytes.
    maxMemory: 4096
    # The maximum stack size for the query planner threads to use in
    # Megabytes.
    maxStack: 6
    # The network URI for the query planner to start. The URI can be either
    # TCP or IPC. TCP address is used to indicate the remote query planner
    # which may run at other hosts. The IPC address is for a local query
    # planner. Example for remote or TCP servers:
    #   sql.planner.address = tcp://127.0.0.1:9293
    #   sql.planner.address = tcp://HOST_IP:9293
    # Example for local IPC servers:
    #   sql.planner.address = ipc:///tmp/gpudb-query-engine-0
    plannerAddress: "ipc:///tmp/gpudb-query-engine-0"
    # Remote debugger port used for the query planner. Setting the port to
    # "0" disables remote debugging. NOTE: Recommended port to use is
    # "5005"
    remoteDebugPort: 0
    # Query planner timeout in seconds
    timeout: 120
    # Max Query planner threads
    workers: 16
  results:
    # TTL of the query cache results table
    cacheTTL: 60
    # Enable query results caching
    caching: true
  # Enable rule-based query rewrites
  ruleBasedOptimization: true
  # Name of collection that will be used to store result tables generated
  # as part of query execution
  tempCollection: "__SQL_TEMP"
# StatisticsConfig
statistics:
  # system_metadata.stats_aggr_rowcount = 10000
  aggrRowCount: 10000
  # system_metadata.stats_aggr_time = 1
  aggrTime: 1
  # Run a statistics server to collect information about Kinetica and the
  # machines it runs on.
  enable: true
  # Statistics server IP address (run on head node) default port is "2003"
  ipAddress: "${gaia.host0.address}"
  # Statistics server namespace - should be a machine identifier
  namespace: "gpudb"
  port: 2003
  # System metadata catalog settings
  # system_metadata.stats_retention_days = 21
  retentionDays: 21
# TextSearchConfig
textSearch:
  # Enable text search capability within the database.
  enableTextSearch: false
  # Number of text indices to start for each rank
  textIndicesPerTom: 2
  # Searcher refresh intervals - specifies the maximum delay (in seconds)
  # between writing to the text search index and being able to search for
  # the value just written. A value of "0" ensures that writes to the index
  # are immediately available to be searched. A more nominal value of "100"
  # should improve ingest speed at the cost of some delay in being able to
  # text search newly added values.
  textSearcherRefreshInterval: 20
  # Use the production capable external text server instead of a
  # lightweight internal server which should only be used for light
  # testing. Note: The internal text server is deprecated and may be
  # removed in future versions.
  useExternalTextServer: true
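# --- Illustrative sketch (not part of the schema): enabling the statistics
# server and text search together. Field names come from this reference;
# the retention and refresh values are assumptions.
#
#   statistics:
#     enable: true
#     retentionDays: 21
#   textSearch:
#     enableTextSearch: true
#     textSearcherRefreshInterval: 100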
tieredStorage:
  # Cold Storage Tiers can be used to extend the storage capacity of the
  # Persist Tier. Assign a tier strategy with cold storage to objects that
  # will be infrequently accessed since they will be moved as needed from
  # the Persist Tier. The Cold Storage Tier is typically a much larger
  # capacity physical disk or a cloud-based storage system which may not be
  # as performant as the Persist Tier storage. A default storage limit and
  # eviction thresholds can be set across all ranks for a given Cold
  # Storage Tier, while one or more ranks within a Cold Storage Tier may be
  # configured to override those defaults. NOTE: If an object needs to be
  # pulled out of cold storage during a query, it may need to use the local
  # persist directory as a temporary swap space. This may trigger an
  # eviction of other persisted items to cold storage due to low disk space
  # condition defined by the watermark settings for the Persist Tier.
  coldStorageTier:
    # ColdStorageAzure
    coldStorageAzure:
      # 'base_path' : A base path based on the provider type for this tier.
      basePath: string
      clientID: string
      clientSecret: string
      # 'connection_timeout' : Timeout in seconds for connecting to this
      # storage provider.
      connectionTimeout: "30"
      containerName: "/gpudb/cold_storage"
      # * 'high_watermark' : Percentage used eviction threshold. Once usage
      # exceeds this value, evictions from this tier will be scheduled in
      # the background and continue until the 'low_watermark' percentage
      # usage is reached. Default is "90", signifying a 90% memory usage
      # threshold.
      highWatermark: 90
      # * 'limit' : The maximum (bytes) per rank that can be allocated
      # across all resource groups.
      limit: "1Gi"
      # * 'low_watermark' : Percentage used recovery threshold. Once usage
      # exceeds the 'high_watermark', evictions will continue until usage
      # falls below this recovery threshold. Default is "80", signifying an
      # 80% usage threshold.
      lowWatermark: 80
      name: string
      # A base directory to use as a space for this tier.
      path: "default"
      provisioner: "docker.io/hostpath"
      sasToken: string
      storageAccountKey: string
      storageAccountName: string
      tenantID: string
      useManagedCredentials: false
      # Kubernetes Persistent Volume Claim for this disk tier.
      volumeClaim:
        # APIVersion defines the versioned schema of this representation of
        # an object. Servers should convert recognized schemas to the
        # latest internal value, and may reject unrecognized values. More
        # info:
        # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
        apiVersion: app.kinetica.com/v1
        # Kind is a string value representing the REST resource this object
        # represents. Servers may infer this from the endpoint the client
        # submits requests to. Cannot be updated. In CamelCase. More info:
        # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
        kind: KineticaCluster
        # Standard object's metadata. More info:
        # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
        metadata: {}
        # spec defines the desired characteristics of a volume requested by
        # a pod author. More info:
        # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
        spec:
          # accessModes contains the desired access modes the volume should
          # have. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
          accessModes: ["string"]
          # dataSource field can be used to specify either: * An existing
          # VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)
          # * An existing PVC (PersistentVolumeClaim) If the provisioner or
          # an external controller can support the specified data source,
          # it will create a new volume based on the contents of the
          # specified data source. When the AnyVolumeDataSource feature
          # gate is enabled, dataSource contents will be copied to
          # dataSourceRef, and dataSourceRef contents will be copied to
          # dataSource when dataSourceRef.namespace is not specified. If
          # the namespace is specified, then dataSourceRef will not be
          # copied to dataSource.
          dataSource:
            # APIGroup is the group for the resource being referenced. If
            # APIGroup is not specified, the specified Kind must be in the
            # core API group. For any other third-party types, APIGroup is
            # required.
            apiGroup: string
            # Kind is the type of resource being referenced
            kind: KineticaCluster
            # Name is the name of resource being referenced
            name: string
          # dataSourceRef specifies the object from which to populate the
          # volume with data, if a non-empty volume is desired. This may be
          # any object from a non-empty API group (non core object) or a
          # PersistentVolumeClaim object. When this field is specified,
          # volume binding will only succeed if the type of the specified
          # object matches some installed volume populator or dynamic
          # provisioner. This field will replace the functionality of the
          # dataSource field and as such if both fields are non-empty, they
          # must have the same value. For backwards compatibility, when
          # namespace isn't specified in dataSourceRef, both fields
          # (dataSource and dataSourceRef) will be set to the same value
          # automatically if one of them is empty and the other is
          # non-empty. When namespace is specified in dataSourceRef,
          # dataSource isn't set to the same value and must be empty. There
          # are three important differences between dataSource and
          # dataSourceRef: * While dataSource only allows two specific
          # types of objects, dataSourceRef allows any non-core object, as
          # well as PersistentVolumeClaim objects. * While dataSource
          # ignores disallowed values (dropping them), dataSourceRef
          # preserves all values, and generates an error if a disallowed
          # value is specified. * While dataSource only allows local
          # objects, dataSourceRef allows objects in any namespaces. (Beta)
          # Using this field requires the AnyVolumeDataSource feature gate
          # to be enabled. (Alpha) Using the namespace field of
          # dataSourceRef requires the CrossNamespaceVolumeDataSource
          # feature gate to be enabled.
          dataSourceRef:
            # APIGroup is the group for the resource being referenced. If
            # APIGroup is not specified, the specified Kind must be in the
            # core API group. For any other third-party types, APIGroup is
            # required.
            apiGroup: string
            # Kind is the type of resource being referenced
            kind: KineticaCluster
            # Name is the name of resource being referenced
            name: string
            # Namespace is the namespace of resource being referenced Note
            # that when a namespace is specified, a
            # gateway.networking.k8s.io/ReferenceGrant object is required
            # in the referent namespace to allow that namespace's owner to
            # accept the reference. See the ReferenceGrant documentation
            # for details. (Alpha) This field requires the
            # CrossNamespaceVolumeDataSource feature gate to be enabled.
            namespace: string
          # resources represents the minimum resources the volume should
          # have. If RecoverVolumeExpansionFailure feature is enabled users
          # are allowed to specify resource requirements that are lower
          # than previous value but must still be higher than capacity
          # recorded in the status field of the claim. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
          resources:
            # Claims lists the names of resources, defined in
            # spec.resourceClaims, that are used by this container. This is
            # an alpha field and requires enabling the
            # DynamicResourceAllocation feature gate. This field is
            # immutable. It can only be set for containers.
            claims:
            - name: string
            # Limits describes the maximum amount of compute resources
            # allowed. More info:
            # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
            limits: {}
            # Requests describes the minimum amount of compute resources
            # required. If Requests is omitted for a container, it defaults
            # to Limits if that is explicitly specified, otherwise to an
            # implementation-defined value. Requests cannot exceed Limits.
            # More info:
            # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
            requests: {}
          # selector is a label query over volumes to consider for binding.
          selector:
            # matchExpressions is a list of label selector requirements.
            # The requirements are ANDed.
            matchExpressions:
            - key: string
              # operator represents a key's relationship to a set of
              # values. Valid operators are In, NotIn, Exists and
              # DoesNotExist.
              operator: string
              # values is an array of string values. If the operator is In
              # or NotIn, the values array must be non-empty. If the
              # operator is Exists or DoesNotExist, the values array must
              # be empty. This array is replaced during a strategic merge
              # patch.
              values: ["string"]
            # matchLabels is a map of {key,value} pairs. A single
            # {key,value} in the matchLabels map is equivalent to an
            # element of matchExpressions, whose key field is "key", the
            # operator is "In", and the values array contains only "value".
            # The requirements are ANDed.
            matchLabels: {}
          # storageClassName is the name of the StorageClass required by
          # the claim. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
          storageClassName: string
          # volumeMode defines what type of volume is required by the
          # claim. Value of Filesystem is implied when not included in
          # claim spec.
          volumeMode: string
          # volumeName is the binding reference to the PersistentVolume
          # backing this claim.
          volumeName: string
        # status represents the current information/status of a persistent
        # volume claim. Read-only. More info:
        # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
        status:
          # accessModes contains the actual access modes the volume backing
          # the PVC has. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
          accessModes: ["string"]
          # allocatedResources is the storage resource within
          # AllocatedResources tracks the capacity allocated to a PVC. It
          # may be larger than the actual capacity when a volume expansion
          # operation is requested. For storage quota, the larger value
          # from allocatedResources and PVC.spec.resources is used. If
          # allocatedResources is not set, PVC.spec.resources alone is used
          # for quota calculation. If a volume expansion capacity request
          # is lowered, allocatedResources is only lowered if there are no
          # expansion operations in progress and if the actual volume
          # capacity is equal or lower than the requested capacity. This is
          # an alpha field and requires enabling
          # RecoverVolumeExpansionFailure feature.
          allocatedResources: {}
          # capacity represents the actual resources of the underlying
          # volume.
          capacity: {}
          # conditions is the current Condition of persistent volume claim.
          # If underlying persistent volume is being resized then the
          # Condition will be set to 'ResizeStarted'.
          conditions:
          - lastProbeTime: string
            # lastTransitionTime is the time the condition transitioned
            # from one status to another.
            lastTransitionTime: string
            # message is the human-readable message indicating details
            # about last transition.
            message: string
            # reason is a unique, short, machine-understandable string that
            # gives the reason for the condition's last transition. If it
            # reports "ResizeStarted" that means the underlying persistent
            # volume is being resized.
            reason: string
            status: string
            # PersistentVolumeClaimConditionType is a valid value of
            # PersistentVolumeClaimCondition.Type
            type: string
          # phase represents the current phase of PersistentVolumeClaim.
          phase: string
          # resizeStatus stores status of resize operation. ResizeStatus is
          # not set by default but when expansion is complete resizeStatus
          # is set to empty string by resize controller or kubelet. This is
          # an alpha field and requires enabling
          # RecoverVolumeExpansionFailure feature.
          resizeStatus: string
      # 'wait_timeout' : Timeout in seconds for reading from or writing to
      # this storage provider.
      waitTimeout: "90"
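    # --- Illustrative sketch (not part of the schema): an Azure-backed
    # cold tier. Field names come from this reference; the account,
    # container, and thresholds are assumptions.
    #
    #   tieredStorage:
    #     coldStorageTier:
    #       coldStorageAzure:
    #         storageAccountName: mystorageaccount
    #         containerName: "/gpudb/cold_storage"
    #         useManagedCredentials: true
    #         limit: "500Gi"
    #         highWatermark: 90
    #         lowWatermark: 80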
    # ColdStorageDisk
    coldStorageDisk:
      # 'base_path' : A base path based on the provider type for this tier.
      basePath: string
      # 'connection_timeout' : Timeout in seconds for connecting to this
      # storage provider.
      connectionTimeout: "30"
      # * 'high_watermark' : Percentage used eviction threshold. Once usage
      # exceeds this value, evictions from this tier will be scheduled in
      # the background and continue until the 'low_watermark' percentage
      # usage is reached. Default is "90", signifying a 90% memory usage
      # threshold.
      highWatermark: 90
      # * 'limit' : The maximum (bytes) per rank that can be allocated
      # across all resource groups.
      limit: "1Gi"
      # * 'low_watermark' : Percentage used recovery threshold. Once usage
      # exceeds the 'high_watermark', evictions will continue until usage
      # falls below this recovery threshold. Default is "80", signifying an
      # 80% usage threshold.
      lowWatermark: 80
      name: string
      # A base directory to use as a space for this tier.
      path: "default"
      provisioner: "docker.io/hostpath"
      # Kubernetes Persistent Volume Claim for this disk tier.
      volumeClaim:
        # APIVersion defines the versioned schema of this representation of
        # an object. Servers should convert recognized schemas to the
        # latest internal value, and may reject unrecognized values. More
        # info:
        # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
        apiVersion: app.kinetica.com/v1
        # Kind is a string value representing the REST resource this object
        # represents. Servers may infer this from the endpoint the client
        # submits requests to. Cannot be updated. In CamelCase. More info:
        # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
        kind: KineticaCluster
        # Standard object's metadata. More info:
        # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
        metadata: {}
        # spec defines the desired characteristics of a volume requested by
        # a pod author. More info:
        # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
        spec:
          # accessModes contains the desired access modes the volume should
          # have. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
          accessModes: ["string"]
          # dataSource field can be used to specify either: * An existing
          # VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)
          # * An existing PVC (PersistentVolumeClaim) If the provisioner or
          # an external controller can support the specified data source,
          # it will create a new volume based on the contents of the
          # specified data source. When the AnyVolumeDataSource feature
          # gate is enabled, dataSource contents will be copied to
          # dataSourceRef, and dataSourceRef contents will be copied to
          # dataSource when dataSourceRef.namespace is not specified. If
          # the namespace is specified, then dataSourceRef will not be
          # copied to dataSource.
          dataSource:
            # APIGroup is the group for the resource being referenced. If
            # APIGroup is not specified, the specified Kind must be in the
            # core API group. For any other third-party types, APIGroup is
            # required.
            apiGroup: string
            # Kind is the type of resource being referenced
            kind: KineticaCluster
            # Name is the name of resource being referenced
            name: string
          # dataSourceRef specifies the object from which to populate the
          # volume with data, if a non-empty volume is desired. This may be
          # any object from a non-empty API group (non core object) or a
          # PersistentVolumeClaim object. When this field is specified,
          # volume binding will only succeed if the type of the specified
          # object matches some installed volume populator or dynamic
          # provisioner. This field will replace the functionality of the
          # dataSource field and as such if both fields are non-empty, they
          # must have the same value. For backwards compatibility, when
          # namespace isn't specified in dataSourceRef, both fields
          # (dataSource and dataSourceRef) will be set to the same value
          # automatically if one of them is empty and the other is
          # non-empty. When namespace is specified in dataSourceRef,
          # dataSource isn't set to the same value and must be empty. There
          # are three important differences between dataSource and
          # dataSourceRef: * While dataSource only allows two specific
          # types of objects, dataSourceRef allows any non-core object, as
          # well as PersistentVolumeClaim objects. * While dataSource
          # ignores disallowed values (dropping them), dataSourceRef
          # preserves all values, and generates an error if a disallowed
          # value is specified. * While dataSource only allows local
          # objects, dataSourceRef allows objects in any namespaces. (Beta)
          # Using this field requires the AnyVolumeDataSource feature gate
          # to be enabled. (Alpha) Using the namespace field of
          # dataSourceRef requires the CrossNamespaceVolumeDataSource
          # feature gate to be enabled.
          dataSourceRef:
            # APIGroup is the group for the resource being referenced. If
            # APIGroup is not specified, the specified Kind must be in the
            # core API group. For any other third-party types, APIGroup is
            # required.
            apiGroup: string
            # Kind is the type of resource being referenced
            kind: KineticaCluster
            # Name is the name of resource being referenced
            name: string
            # Namespace is the namespace of resource being referenced Note
            # that when a namespace is specified, a
            # gateway.networking.k8s.io/ReferenceGrant object is required
            # in the referent namespace to allow that namespace's owner to
            # accept the reference. See the ReferenceGrant documentation
            # for details. (Alpha) This field requires the
            # CrossNamespaceVolumeDataSource feature gate to be enabled.
            namespace: string
          # resources represents the minimum resources the volume should
          # have. If RecoverVolumeExpansionFailure feature is enabled users
          # are allowed to specify resource requirements that are lower
          # than previous value but must still be higher than capacity
          # recorded in the status field of the claim. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
          resources:
            # Claims lists the names of resources, defined in
            # spec.resourceClaims, that are used by this container. This is
            # an alpha field and requires enabling the
            # DynamicResourceAllocation feature gate. This field is
            # immutable. It can only be set for containers.
            claims:
            - name: string
            # Limits describes the maximum amount of compute resources
            # allowed. More info:
            # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
            limits: {}
            # Requests describes the minimum amount of compute resources
            # required. If Requests is omitted for a container, it defaults
            # to Limits if that is explicitly specified, otherwise to an
            # implementation-defined value. Requests cannot exceed Limits.
            # More info:
            # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
            requests: {}
          # selector is a label query over volumes to consider for binding.
          selector:
            # matchExpressions is a list of label selector requirements.
            # The requirements are ANDed.
            matchExpressions:
            - key: string
              # operator represents a key's relationship to a set of
              # values. Valid operators are In, NotIn, Exists and
              # DoesNotExist.
              operator: string
              # values is an array of string values. If the operator is In
              # or NotIn, the values array must be non-empty. If the
              # operator is Exists or DoesNotExist, the values array must
              # be empty. This array is replaced during a strategic merge
              # patch.
              values: ["string"]
            # matchLabels is a map of {key,value} pairs. A single
            # {key,value} in the matchLabels map is equivalent to an
            # element of matchExpressions, whose key field is "key", the
            # operator is "In", and the values array contains only "value".
            # The requirements are ANDed.
            matchLabels: {}
          # storageClassName is the name of the StorageClass required by
          # the claim. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
          storageClassName: string
          # volumeMode defines what type of volume is required by the
          # claim. Value of Filesystem is implied when not included in
          # claim spec.
          volumeMode: string
          # volumeName is the binding reference to the PersistentVolume
          # backing this claim.
          volumeName: string
        # status represents the current information/status of a persistent
        # volume claim. Read-only. More info:
        # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
        status:
          # accessModes contains the actual access modes the volume backing
          # the PVC has. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
          accessModes: ["string"]
          # allocatedResources is the storage resource within
          # AllocatedResources tracks the capacity allocated to a PVC. It
          # may be larger than the actual capacity when a volume expansion
          # operation is requested. For storage quota, the larger value
          # from allocatedResources and PVC.spec.resources is used. If
          # allocatedResources is not set, PVC.spec.resources alone is used
          # for quota calculation. If a volume expansion capacity request
          # is lowered, allocatedResources is only lowered if there are no
          # expansion operations in progress and if the actual volume
          # capacity is equal or lower than the requested capacity. This is
          # an alpha field and requires enabling
          # RecoverVolumeExpansionFailure feature.
          allocatedResources: {}
          # capacity represents the actual resources of the underlying
          # volume.
          capacity: {}
          # conditions is the current Condition of persistent volume claim.
          # If underlying persistent volume is being resized then the
          # Condition will be set to 'ResizeStarted'.
          conditions:
          - lastProbeTime: string
            # lastTransitionTime is the time the condition transitioned
            # from one status to another.
            lastTransitionTime: string
            # message is the human-readable message indicating details
            # about last transition.
            message: string
            # reason is a unique, short, machine-understandable string that
            # gives the reason for the condition's last transition. If it
            # reports "ResizeStarted" that means the underlying persistent
            # volume is being resized.
            reason: string
            status: string
            # PersistentVolumeClaimConditionType is a valid value of
            # PersistentVolumeClaimCondition.Type
            type: string
          # phase represents the current phase of PersistentVolumeClaim.
          phase: string
          # resizeStatus stores status of resize operation. ResizeStatus is
          # not set by default but when expansion is complete resizeStatus
          # is set to empty string by resize controller or kubelet. This is
          # an alpha field and requires enabling
          # RecoverVolumeExpansionFailure feature.
          resizeStatus: string
      # 'wait_timeout' : Timeout in seconds for reading from or writing to
      # this storage provider.
      waitTimeout: "90"
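    # --- Illustrative sketch (not part of the schema): a disk-based cold
    # tier backed by a PVC. Field names come from this reference; the base
    # path, storage class, and size are assumptions.
    #
    #   tieredStorage:
    #     coldStorageTier:
    #       coldStorageDisk:
    #         basePath: /opt/gpudb/cold_storage
    #         limit: "500Gi"
    #         volumeClaim:
    #           spec:
    #             accessModes: ["ReadWriteOnce"]
    #             storageClassName: standard
    #             resources:
    #               requests:
    #                 storage: 500Gi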
    # ColdStorageGCS - Google Cloud Storage-specific *parameter* names:
    # * BucketName - 'gcs_bucket_name'
    # * ProjectID - 'gcs_project_id' (optional)
    # * AccountID - 'gcs_service_account_id' (optional)
    # * AccountPrivateKey - 'gcs_service_account_private_key' (optional)
    # * AccountKeys - 'gcs_service_account_keys' (optional)
    # NOTE: If the 'gcs_service_account_id',
    # 'gcs_service_account_private_key' and/or 'gcs_service_account_keys'
    # values are not specified, the Google Cloud Client Libraries will
    # attempt to find and use service account credentials from the
    # GOOGLE_APPLICATION_CREDENTIALS environment variable.
    coldStorageGCS:
      accountID: string
      accountKeys: string
      accountPrivateKey: string
      # 'base_path' : A base path based on the provider type for this tier.
      basePath: string
      bucketName: string
      # 'connection_timeout' : Timeout in seconds for connecting to this
      # storage provider.
      connectionTimeout: "30"
      # * 'high_watermark' : Percentage used eviction threshold. Once usage
      # exceeds this value, evictions from this tier will be scheduled in
      # the background and continue until the 'low_watermark' percentage
      # usage is reached. Default is "90", signifying a 90% memory usage
      # threshold.
      highWatermark: 90
      # * 'limit' : The maximum (bytes) per rank that can be allocated
      # across all resource groups.
      limit: "1Gi"
      # * 'low_watermark' : Percentage used recovery threshold. Once usage
      # exceeds the 'high_watermark', evictions will continue until usage
      # falls below this recovery threshold. Default is "80", signifying an
      # 80% usage threshold.
      lowWatermark: 80
      name: string
      # A base directory to use as a space for this tier.
      path: "default"
      projectID: string
      provisioner: "docker.io/hostpath"
      # Kubernetes Persistent Volume Claim for this disk tier.
      volumeClaim:
        # APIVersion defines the versioned schema of this representation of
        # an object. Servers should convert recognized schemas to the
        # latest internal value, and may reject unrecognized values. More
        # info:
        # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
        apiVersion: app.kinetica.com/v1
        # Kind is a string value representing the REST resource this object
        # represents. Servers may infer this from the endpoint the client
        # submits requests to. Cannot be updated. In CamelCase. More info:
        # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
        kind: KineticaCluster
        # Standard object's metadata. More info:
        # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
        metadata: {}
        # spec defines the desired characteristics of a volume requested by
        # a pod author. More info:
        # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
        spec:
          # accessModes contains the desired access modes the volume should
          # have. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
          accessModes: ["string"]
          # dataSource field can be used to specify either: * An existing
          # VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)
          # * An existing PVC (PersistentVolumeClaim) If the provisioner or
          # an external controller can support the specified data source,
          # it will create a new volume based on the contents of the
          # specified data source. When the AnyVolumeDataSource feature
          # gate is enabled, dataSource contents will be copied to
          # dataSourceRef, and dataSourceRef contents will be copied to
          # dataSource when dataSourceRef.namespace is not specified. If
          # the namespace is specified, then dataSourceRef will not be
          # copied to dataSource.
          dataSource:
            # APIGroup is the group for the resource being referenced. If
            # APIGroup is not specified, the specified Kind must be in the
            # core API group. For any other third-party types, APIGroup is
            # required.
            apiGroup: string
            # Kind is the type of resource being referenced
            kind: KineticaCluster
            # Name is the name of resource being referenced
            name: string
          # dataSourceRef specifies the object from which to populate the
          # volume with data, if a non-empty volume is desired. This may be
          # any object from a non-empty API group (non core object) or a
          # PersistentVolumeClaim object. When this field is specified,
          # volume binding will only succeed if the type of the specified
          # object matches some installed volume populator or dynamic
          # provisioner. This field will replace the functionality of the
          # dataSource field and as such if both fields are non-empty, they
          # must have the same value. For backwards compatibility, when
          # namespace isn't specified in dataSourceRef, both fields
          # (dataSource and dataSourceRef) will be set to the same value
          # automatically if one of them is empty and the other is
          # non-empty. When namespace is specified in dataSourceRef,
          # dataSource isn't set to the same value and must be empty.
          # There are three important differences between dataSource and
          # dataSourceRef: * While dataSource only allows two specific
          # types of objects, dataSourceRef allows any non-core object, as
          # well as PersistentVolumeClaim objects. * While dataSource
          # ignores disallowed values (dropping them), dataSourceRef
          # preserves all values, and generates an error if a disallowed
          # value is specified. * While dataSource only allows local
          # objects, dataSourceRef allows objects in any namespaces. (Beta)
          # Using this field requires the AnyVolumeDataSource feature gate
          # to be enabled. (Alpha) Using the namespace field of
          # dataSourceRef requires the CrossNamespaceVolumeDataSource
          # feature gate to be enabled.
          dataSourceRef:
            # APIGroup is the group for the resource being referenced. If
            # APIGroup is not specified, the specified Kind must be in the
            # core API group. For any other third-party types, APIGroup is
            # required.
            apiGroup: string
            # Kind is the type of resource being referenced
            kind: KineticaCluster
            # Name is the name of resource being referenced
            name: string
            # Namespace is the namespace of resource being referenced Note
            # that when a namespace is specified, a
            # gateway.networking.k8s.io/ReferenceGrant object is required
            # in the referent namespace to allow that namespace's owner to
            # accept the reference. See the ReferenceGrant documentation
            # for details. (Alpha) This field requires the
            # CrossNamespaceVolumeDataSource feature gate to be enabled.
            namespace: string
          # resources represents the minimum resources the volume should
          # have. If RecoverVolumeExpansionFailure feature is enabled users
          # are allowed to specify resource requirements that are lower
          # than previous value but must still be higher than capacity
          # recorded in the status field of the claim. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
          resources:
            # Claims lists the names of resources, defined in
            # spec.resourceClaims, that are used by this container. This is
            # an alpha field and requires enabling the
            # DynamicResourceAllocation feature gate. This field is
            # immutable. It can only be set for containers.
            claims:
            - name: string
            # Limits describes the maximum amount of compute resources
            # allowed. More info:
            # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
            limits: {}
            # Requests describes the minimum amount of compute resources
            # required. If Requests is omitted for a container, it defaults
            # to Limits if that is explicitly specified, otherwise to an
            # implementation-defined value. Requests cannot exceed Limits.
            # More info:
            # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
            requests: {}
          # selector is a label query over volumes to consider for binding.
          selector:
            # matchExpressions is a list of label selector requirements.
            # The requirements are ANDed.
            matchExpressions:
            - key: string
              # operator represents a key's relationship to a set of
              # values. Valid operators are In, NotIn, Exists and
              # DoesNotExist.
              operator: string
              # values is an array of string values. If the operator is In
              # or NotIn, the values array must be non-empty. If the
              # operator is Exists or DoesNotExist, the values array must
              # be empty. This array is replaced during a strategic merge
              # patch.
              values: ["string"]
            # matchLabels is a map of {key,value} pairs. A single
            # {key,value} in the matchLabels map is equivalent to an
            # element of matchExpressions, whose key field is "key", the
            # operator is "In", and the values array contains only "value".
            # The requirements are ANDed.
            matchLabels: {}
          # storageClassName is the name of the StorageClass required by
          # the claim. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
          storageClassName: string
          # volumeMode defines what type of volume is required by the
          # claim. Value of Filesystem is implied when not included in
          # claim spec.
          volumeMode: string
          # volumeName is the binding reference to the PersistentVolume
          # backing this claim.
          volumeName: string
        # status represents the current information/status of a persistent
        # volume claim. Read-only. More info:
        # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
        status:
          # accessModes contains the actual access modes the volume backing
          # the PVC has. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
          accessModes: ["string"]
          # allocatedResources is the storage resource within
          # AllocatedResources tracks the capacity allocated to a PVC. It
          # may be larger than the actual capacity when a volume expansion
          # operation is requested. For storage quota, the larger value
          # from allocatedResources and PVC.spec.resources is used. If
          # allocatedResources is not set, PVC.spec.resources alone is used
          # for quota calculation. If a volume expansion capacity request
          # is lowered, allocatedResources is only lowered if there are no
          # expansion operations in progress and if the actual volume
          # capacity is equal or lower than the requested capacity. This is
          # an alpha field and requires enabling
          # RecoverVolumeExpansionFailure feature.
          allocatedResources: {}
          # capacity represents the actual resources of the underlying
          # volume.
          capacity: {}
          # conditions is the current Condition of persistent volume claim.
          # If underlying persistent volume is being resized then the
          # Condition will be set to 'ResizeStarted'.
          conditions:
          - lastProbeTime: string
            # lastTransitionTime is the time the condition transitioned
            # from one status to another.
            lastTransitionTime: string
            # message is the human-readable message indicating details
            # about last transition.
            message: string
            # reason is a unique, short, machine-understandable string that
            # gives the reason for the condition's last transition. If it
            # reports "ResizeStarted" that means the underlying persistent
            # volume is being resized.
            reason: string
            status: string
            # PersistentVolumeClaimConditionType is a valid value of
            # PersistentVolumeClaimCondition.Type
            type: string
          # phase represents the current phase of PersistentVolumeClaim.
          phase: string
          # resizeStatus stores status of resize operation. ResizeStatus is
          # not set by default but when expansion is complete resizeStatus
          # is set to empty string by resize controller or kubelet.
          # This is an alpha field and requires enabling
          # RecoverVolumeExpansionFailure feature.
          resizeStatus: string
      # 'wait_timeout' : Timeout in seconds for reading from or writing to
      # this storage provider.
      waitTimeout: "90"
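    # --- Illustrative sketch (not part of the schema): a GCS-backed cold
    # tier relying on GOOGLE_APPLICATION_CREDENTIALS rather than explicit
    # service account fields. Field names come from this reference; the
    # bucket and project values are assumptions.
    #
    #   tieredStorage:
    #     coldStorageTier:
    #       coldStorageGCS:
    #         bucketName: my-kinetica-cold-storage
    #         projectID: my-gcp-project
    #         limit: "1Ti"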
    # ColdStorageHDFS
    coldStorageHDFS:
      # ColdStorageDisk
      default:
        # 'base_path' : A base path based on the provider type for this
        # tier.
        basePath: string
        # 'connection_timeout' : Timeout in seconds for connecting to this
        # storage provider.
        connectionTimeout: "30"
        # * 'high_watermark' : Percentage used eviction threshold. Once
        # usage exceeds this value, evictions from this tier will be
        # scheduled in the background and continue until the
        # 'low_watermark' percentage usage is reached. Default is "90",
        # signifying a 90% memory usage threshold.
        highWatermark: 90
        # * 'limit' : The maximum (bytes) per rank that can be allocated
        # across all resource groups.
        limit: "1Gi"
        # * 'low_watermark' : Percentage used recovery threshold. Once
        # usage exceeds the 'high_watermark', evictions will continue until
        # usage falls below this recovery threshold. Default is "80",
        # signifying an 80% usage threshold.
        lowWatermark: 80
        name: string
        # A base directory to use as a space for this tier.
        path: "default"
        provisioner: "docker.io/hostpath"
        # Kubernetes Persistent Volume Claim for this disk tier.
        volumeClaim:
          # APIVersion defines the versioned schema of this representation
          # of an object. Servers should convert recognized schemas to the
          # latest internal value, and may reject unrecognized values. More
          # info:
          # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
          apiVersion: app.kinetica.com/v1
          # Kind is a string value representing the REST resource this
          # object represents. Servers may infer this from the endpoint the
          # client submits requests to. Cannot be updated. In CamelCase.
          # More info:
          # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
          kind: KineticaCluster
          # Standard object's metadata. More info:
          # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
          metadata: {}
          # spec defines the desired characteristics of a volume requested
          # by a pod author. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
          spec:
            # accessModes contains the desired access modes the volume
            # should have. More info:
            # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
            accessModes: ["string"]
            # dataSource field can be used to specify either: * An existing
            # VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)
            # * An existing PVC (PersistentVolumeClaim) If the provisioner
            # or an external controller can support the specified data
            # source, it will create a new volume based on the contents of
            # the specified data source. When the AnyVolumeDataSource
            # feature gate is enabled, dataSource contents will be copied
            # to dataSourceRef, and dataSourceRef contents will be copied
            # to dataSource when dataSourceRef.namespace is not specified.
            # If the namespace is specified, then dataSourceRef will not be
            # copied to dataSource.
            dataSource:
              # APIGroup is the group for the resource being referenced. If
              # APIGroup is not specified, the specified Kind must be in
              # the core API group. For any other third-party types,
              # APIGroup is required.
              apiGroup: string
              # Kind is the type of resource being referenced
              kind: KineticaCluster
              # Name is the name of resource being referenced
              name: string
            # dataSourceRef specifies the object from which to populate the
            # volume with data, if a non-empty volume is desired. This may
            # be any object from a non-empty API group (non core object) or
            # a PersistentVolumeClaim object. When this field is specified,
            # volume binding will only succeed if the type of the specified
            # object matches some installed volume populator or dynamic
            # provisioner. This field will replace the functionality of the
            # dataSource field and as such if both fields are non-empty,
            # they must have the same value. For backwards compatibility,
            # when namespace isn't specified in dataSourceRef, both fields
            # (dataSource and dataSourceRef) will be set to the same value
            # automatically if one of them is empty and the other is
            # non-empty. When namespace is specified in dataSourceRef,
            # dataSource isn't set to the same value and must be empty.
            # There are three important differences between dataSource and
            # dataSourceRef: * While dataSource only allows two specific
            # types of objects, dataSourceRef allows any non-core object,
            # as well as PersistentVolumeClaim objects. * While dataSource
            # ignores disallowed values (dropping them), dataSourceRef
            # preserves all values, and generates an error if a disallowed
            # value is specified. * While dataSource only allows local
            # objects, dataSourceRef allows objects in any namespaces.
            # (Beta) Using this field requires the AnyVolumeDataSource
            # feature gate to be enabled. (Alpha) Using the namespace field
            # of dataSourceRef requires the CrossNamespaceVolumeDataSource
            # feature gate to be enabled.
            dataSourceRef:
              # APIGroup is the group for the resource being referenced. If
              # APIGroup is not specified, the specified Kind must be in
              # the core API group. For any other third-party types,
              # APIGroup is required.
              apiGroup: string
              # Kind is the type of resource being referenced
              kind: KineticaCluster
              # Name is the name of resource being referenced
              name: string
              # Namespace is the namespace of resource being referenced
              # Note that when a namespace is specified, a
              # gateway.networking.k8s.io/ReferenceGrant object is required
              # in the referent namespace to allow that namespace's owner
              # to accept the reference. See the ReferenceGrant
              # documentation for details. (Alpha) This field requires the
              # CrossNamespaceVolumeDataSource feature gate to be enabled.
              namespace: string
            # resources represents the minimum resources the volume should
            # have. If RecoverVolumeExpansionFailure feature is enabled
            # users are allowed to specify resource requirements that are
            # lower than previous value but must still be higher than
            # capacity recorded in the status field of the claim. More
            # info:
            # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
            resources:
              # Claims lists the names of resources, defined in
              # spec.resourceClaims, that are used by this container. This
              # is an alpha field and requires enabling the
              # DynamicResourceAllocation feature gate. This field is
              # immutable. It can only be set for containers.
              claims:
              - name: string
            # resources represents the minimum resources the volume
            # should have. If RecoverVolumeExpansionFailure feature is
            # enabled users are allowed to specify resource
            # requirements that are lower than previous value but must
            # still be higher than capacity recorded in the status
            # field of the claim. More info:
            # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
            resources:
              # Claims lists the names of resources, defined in
              # spec.resourceClaims, that are used by this container.
              # This is an alpha field and requires enabling the
              # DynamicResourceAllocation feature gate. This field is
              # immutable. It can only be set for containers.
              claims:
              - name: string
              # Limits describes the maximum amount of compute
              # resources allowed. More info:
              # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
              limits: {}
              # Requests describes the minimum amount of compute
              # resources required. If Requests is omitted for a
              # container, it defaults to Limits if that is explicitly
              # specified, otherwise to an implementation-defined
              # value. Requests cannot exceed Limits. More info:
              # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
              requests: {}
            # selector is a label query over volumes to consider for
            # binding.
            selector:
              # matchExpressions is a list of label selector
              # requirements. The requirements are ANDed.
              matchExpressions:
              - key: string
                # operator represents a key's relationship to a set of
                # values. Valid operators are In, NotIn, Exists and
                # DoesNotExist.
                operator: string
                # values is an array of string values. If the operator
                # is In or NotIn, the values array must be non-empty.
                # If the operator is Exists or DoesNotExist, the
                # values array must be empty. This array is replaced
                # during a strategic merge patch.
                values: ["string"]
              # matchLabels is a map of {key,value} pairs. A single
              # {key,value} in the matchLabels map is equivalent to an
              # element of matchExpressions, whose key field is "key",
              # the operator is "In", and the values array contains
              # only "value". The requirements are ANDed.
              matchLabels: {}
            # storageClassName is the name of the StorageClass
            # required by the claim. More info:
            # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
            storageClassName: string
            # volumeMode defines what type of volume is required by
            # the claim. Value of Filesystem is implied when not
            # included in claim spec.
            volumeMode: string
            # volumeName is the binding reference to the
            # PersistentVolume backing this claim.
            volumeName: string
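          # Illustrative sketch (not a default): a typical claim sets
          # accessModes, a capacity request and a StorageClass rather
          # than a selector; the class name here is hypothetical.
          #
          #   spec:
          #     accessModes: ["ReadWriteOnce"]
          #     resources:
          #       requests:
          #         storage: 10Gi
          #     storageClassName: fast-ssd
          #     volumeMode: Filesystem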
          # status represents the current information/status of a
          # persistent volume claim. Read-only. More info:
          # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
          status:
            # accessModes contains the actual access modes the volume
            # backing the PVC has. More info:
            # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
            accessModes: ["string"]
            # allocatedResources is the storage resource within
            # AllocatedResources tracks the capacity allocated to a
            # PVC. It may be larger than the actual capacity when a
            # volume expansion operation is requested. For storage
            # quota, the larger value from allocatedResources and
            # PVC.spec.resources is used. If allocatedResources is not
            # set, PVC.spec.resources alone is used for quota
            # calculation. If a volume expansion capacity request is
            # lowered, allocatedResources is only lowered if there are
            # no expansion operations in progress and if the actual
            # volume capacity is equal or lower than the requested
            # capacity. This is an alpha field and requires enabling
            # RecoverVolumeExpansionFailure feature.
            allocatedResources: {}
            # capacity represents the actual resources of the
            # underlying volume.
            capacity: {}
            # conditions is the current Condition of persistent volume
            # claim. If the underlying persistent volume is being
            # resized then the Condition will be set
            # to 'ResizeStarted'.
            conditions:
            - lastProbeTime: string
              # lastTransitionTime is the time the condition
              # transitioned from one status to another.
              lastTransitionTime: string
              # message is the human-readable message indicating
              # details about last transition.
              message: string
              # reason is a unique, short, machine-understandable
              # string that gives the reason for the condition's last
              # transition. If it reports "ResizeStarted", the
              # underlying persistent volume is being resized.
              reason: string
              status: string
              # PersistentVolumeClaimConditionType is a valid value of
              # PersistentVolumeClaimCondition.Type
              type: string
            # phase represents the current phase of
            # PersistentVolumeClaim.
            phase: string
            # resizeStatus stores status of resize operation.
            # ResizeStatus is not set by default but when expansion is
            # complete resizeStatus is set to empty string by resize
            # controller or kubelet. This is an alpha field and
            # requires enabling RecoverVolumeExpansionFailure feature.
            resizeStatus: string
        # 'wait_timeout' : Timeout in seconds for reading from or
        # writing to this storage provider.
        waitTimeout: "90"
      # 'hdfs_kerberos_keytab' : The Kerberos keytab file used to
      # authenticate the "gpudb" Kerberos principal.
      kerberosKeytab: string
      # 'hdfs_principal' : The effective principal name to use when
      # connecting to the hadoop cluster.
      principal: string
      # 'hdfs_uri' : The host IP address & port for the hadoop
      # distributed file system. For example: hdfs://localhost:8020
      uri: string
      # 'hdfs_use_kerberos' : Set to "true" to enable Kerberos
      # authentication to an HDFS storage server. The credentials of
      # the principal are in the file specified by the
      # 'hdfs_kerberos_keytab' parameter. Note that Kerberos's *kinit*
      # command will be run when the database is started.
      useKerberos: true
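    # Illustrative sketch (not a default): a Kerberos-secured HDFS
    # cold-storage tier assembled from the fields above. The URI,
    # principal and keytab path are placeholders.
    #
    #   coldStorageHDFS:
    #     default:
    #       basePath: kinetica/cold
    #       limit: "10Ti"
    #     uri: hdfs://namenode.example.internal:8020
    #     useKerberos: true
    #     principal: gpudb@EXAMPLE.INTERNAL
    #     kerberosKeytab: /opt/gpudb/kerberos/gpudb.keytab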
    # ColdStorageS3
    coldStorageS3:
      awsAccessKeyId: string
      awsRoleARN: string
      awsSecretAccessKey: string
      # 'base_path' : A base path based on the provider type for this
      # tier.
      basePath: string
      bucketName: string
      # 'connection_timeout' : Timeout in seconds for connecting to
      # this storage provider.
      connectionTimeout: "30"
      encryptionCustomerAlgorithm: string
      encryptionCustomerKey: string
      # EncryptionType - This is optional and valid values are sse-s3
      # (Encryption key is managed by Amazon S3) and sse-kms
      # (Encryption key is managed by AWS Key Management Service
      # (kms)).
      encryptionType: string
      # Endpoint - s3_endpoint
      endpoint: string
      # * 'high_watermark' : Percentage used eviction threshold. Once
      #   usage exceeds this value, evictions from this tier will be
      #   scheduled in the background and continue until the
      #   'low_watermark' percentage usage is reached. Default is
      #   "90", signifying a 90% memory usage threshold.
      highWatermark: 90
      # KMSKeyID - This is optional and must be specified when
      # encryption type is sse-kms.
      kmsKeyID: string
      # * 'limit' : The maximum (bytes) per rank that can be
      #   allocated across all resource groups.
      limit: "1Gi"
      # * 'low_watermark' : Percentage used recovery threshold. Once
      #   usage exceeds the 'high_watermark', evictions will continue
      #   until usage falls below this recovery threshold. Default is
      #   "80", signifying an 80% usage threshold.
      lowWatermark: 80
      name: string
      # A base directory to use as a space for this tier.
      path: "default"
      provisioner: "docker.io/hostpath"
      region: string
      useManagedCredentials: true
      # UseVirtualAddressing - 's3_use_virtual_addressing' : If true
      # (default), S3 endpoints will be constructed using the
      # 'virtual' style which includes the bucket name as part of the
      # hostname. Set to false to use the 'path' style which treats
      # the bucket name as if it is a path in the URI.
      useVirtualAddressing: true
      # Kubernetes Persistent Volume Claim for this disk tier. The
      # claim schema (spec/status) is identical to the volumeClaim
      # documented under coldStorageHDFS above.
      volumeClaim: {}
      # 'wait_timeout' : Timeout in seconds for reading from or
      # writing to this storage provider.
      waitTimeout: "90"
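    # Illustrative sketch (not a default): an S3 cold-storage tier
    # using a customer-managed KMS key; per the comments above,
    # 'sse-kms' requires kmsKeyID. Bucket, region and key ARN are
    # placeholders.
    #
    #   coldStorageS3:
    #     bucketName: kinetica-cold-tier
    #     region: us-east-1
    #     useManagedCredentials: true
    #     encryptionType: sse-kms
    #     kmsKeyID: arn:aws:kms:us-east-1:111122223333:key/placeholder
    #     useVirtualAddressing: true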
    # ColdStorageType The storage provider type. Currently supports
    # "none", "disk" (local/network storage), "hdfs" (Hadoop
    # distributed filesystem), "s3" (Amazon S3 bucket), "azure_blob"
    # (Microsoft Azure Blob Storage) and "gcs" (Google GCS Bucket).
    coldStorageType: "none"
    name: string
  # The DiskCacheTier is used as temporary swap space for data that
  # doesn't fit in RAM or VRAM. The disk should be as fast or faster
  # than the Persist Tier storage since this tier is used as an
  # intermediary cache between the RAM and Persist Tiers.
  diskCacheTier:
    # DiskTierStorageLimit
    default:
      # * 'high_watermark' : Percentage used eviction threshold. Once
      #   usage exceeds this value, evictions from this tier will be
      #   scheduled in the background and continue until the
      #   'low_watermark' percentage usage is reached. Default is
      #   "90", signifying a 90% memory usage threshold.
      highWatermark: 90
      # * 'limit' : The maximum (bytes) per rank that can be
      #   allocated across all resource groups.
      limit: "1Gi"
      # * 'low_watermark' : Percentage used recovery threshold. Once
      #   usage exceeds the 'high_watermark', evictions will continue
      #   until usage falls below this recovery threshold. Default is
      #   "80", signifying an 80% usage threshold.
      lowWatermark: 80
      name: string
      # A base directory to use as a space for this tier.
      path: "default"
      provisioner: "docker.io/hostpath"
      # Kubernetes Persistent Volume Claim for this disk tier. The
      # claim schema (spec/status) is identical to the volumeClaim
      # documented under coldStorageHDFS above.
      volumeClaim: {}
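    # Illustrative sketch (not a default): the watermark pair behaves
    # the same in every tier - crossing highWatermark starts
    # background eviction, which continues until usage falls to
    # lowWatermark. The storage class name below is hypothetical.
    #
    #   diskCacheTier:
    #     default:
    #       limit: "100Gi"
    #       highWatermark: 90
    #       lowWatermark: 80
    #       volumeClaim:
    #         spec:
    #           accessModes: ["ReadWriteOnce"]
    #           resources:
    #             requests:
    #               storage: 100Gi
    #           storageClassName: fast-ssd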
    defaultStorePersistentObjects: true
    ranks:
    - highWatermark: 90
      # * 'limit' : The maximum (bytes) per rank that can be
      #   allocated across all resource groups.
      limit: "1Gi"
      # * 'low_watermark' : Percentage used recovery threshold. Once
      #   usage exceeds the 'high_watermark', evictions will continue
      #   until usage falls below this recovery threshold. Default is
      #   "80", signifying an 80% usage threshold.
      lowWatermark: 80
      name: string
      # A base directory to use as a space for this tier.
      path: "default"
      provisioner: "docker.io/hostpath"
      # Kubernetes Persistent Volume Claim for this disk tier. The
      # claim schema (spec/status) is identical to the volumeClaim
      # documented under coldStorageHDFS above.
      volumeClaim: {}
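  # Illustrative sketch (not a default): entries under 'ranks' mirror
  # the 'default' block and override it for individual ranks; the
  # rank name below is hypothetical.
  #
  #   diskCacheTier:
  #     default:
  #       limit: "100Gi"
  #     ranks:
  #     - name: rank2
  #       limit: "200Gi"
  #       highWatermark: 95
  #       lowWatermark: 85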
  # GlobalTier Parameters
  globalTier:
    # Co-locates all disks to a single disk i.e. persist, cache and
    # UDF will be on a single PVC.
    colocateDisks: true
    # Timeout in seconds for subsequent requests to wait on a locked
    # resource
    concurrentWaitTimeout: 120
    # EncryptDataAtRest - Enable disk encryption of data at rest
    encryptDataAtRest: true
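  # Illustrative sketch (not a default): global tier settings that
  # colocate persist, cache and UDF data on a single PVC and enable
  # at-rest encryption.
  #
  #   globalTier:
  #     colocateDisks: true
  #     encryptDataAtRest: true
  #     concurrentWaitTimeout: 120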
  # The PersistTier is the durable, disk-based tier where data is
  # persisted so that it survives database restarts.
  persistTier:
    # DiskTierStorageLimit
    default:
      # * 'high_watermark' : Percentage used eviction threshold. Once
      #   usage exceeds this value, evictions from this tier will be
      #   scheduled in the background and continue until the
      #   'low_watermark' percentage usage is reached. Default is
      #   "90", signifying a 90% memory usage threshold.
      highWatermark: 90
      # * 'limit' : The maximum (bytes) per rank that can be
      #   allocated across all resource groups.
      limit: "1Gi"
      # * 'low_watermark' : Percentage used recovery threshold. Once
      #   usage exceeds the 'high_watermark', evictions will continue
      #   until usage falls below this recovery threshold. Default is
      #   "80", signifying an 80% usage threshold.
      lowWatermark: 80
      name: string
      # A base directory to use as a space for this tier.
      path: "default"
      provisioner: "docker.io/hostpath"
      # Kubernetes Persistent Volume Claim for this disk tier. The
      # claim schema (spec/status) is identical to the volumeClaim
      # documented under coldStorageHDFS above.
      volumeClaim: {}
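    # Illustrative sketch (not a default): a persist-tier default
    # backed by a dedicated claim; the storage class name below is
    # hypothetical.
    #
    #   persistTier:
    #     default:
    #       limit: "500Gi"
    #       volumeClaim:
    #         spec:
    #           accessModes: ["ReadWriteOnce"]
    #           resources:
    #             requests:
    #               storage: 500Gi
    #           storageClassName: durable-hdd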
    defaultStorePersistentObjects: true
    ranks:
    - highWatermark: 90
      # * 'limit' : The maximum (bytes) per rank that can be
      #   allocated across all resource groups.
      limit: "1Gi"
      # * 'low_watermark' : Percentage used recovery threshold. Once
      #   usage exceeds the 'high_watermark', evictions will continue
      #   until usage falls below this recovery threshold. Default is
      #   "80", signifying an 80% usage threshold.
      lowWatermark: 80
      name: string
      # A base directory to use as a space for this tier.
      path: "default"
      provisioner: "docker.io/hostpath"
      # Kubernetes Persistent Volume Claim for this disk tier. The
      # claim schema (spec/status) is identical to the volumeClaim
      # documented under coldStorageHDFS above.
      volumeClaim: {}
For\n # any other third-party types, APIGroup is required.\n apiGroup: string\n # Kind is the type of resource being referenced\n kind: KineticaCluster\n # Name is the name of resource being referenced\n name: string\n # dataSourceRef specifies the object from which to\n # populate the volume with data, if a non-empty volume\n # is desired. This may be any object from a non-empty\n # API group (non core object) or a\n # PersistentVolumeClaim object. When this field is\n # specified, volume binding will only succeed if the\n # type of the specified object matches some installed\n # volume populator or dynamic provisioner. This field\n # will replace the functionality of the dataSource\n # field and as such if both fields are non-empty, they\n # must have the same value. For backwards\n # compatibility, when namespace isn't specified in\n # dataSourceRef, both fields (dataSource and\n # dataSourceRef) will be set to the same value\n # automatically if one of them is empty and the other\n # is non-empty. When namespace is specified in\n # dataSourceRef, dataSource isn't set to the same value\n # and must be empty. There are three important\n # differences between dataSource and dataSourceRef: *\n # While dataSource only allows two specific types of\n # objects, dataSourceRef allows any non-core object, as\n # well as PersistentVolumeClaim objects. * While\n # dataSource ignores disallowed values (dropping them),\n # dataSourceRef preserves all values, and generates an\n # error if a disallowed value is specified. * While\n # dataSource only allows local objects, dataSourceRef\n # allows objects in any namespaces. (Beta) Using this\n # field requires the AnyVolumeDataSource feature gate\n # to be enabled. (Alpha) Using the namespace field of\n # dataSourceRef requires the\n # CrossNamespaceVolumeDataSource feature gate to be\n # enabled.\n dataSourceRef:\n # APIGroup is the group for the resource being\n # referenced. If APIGroup is not specified, the\n # specified Kind must be in the core API group. For\n # any other third-party types, APIGroup is required.\n apiGroup: string\n # Kind is the type of resource being referenced\n kind: KineticaCluster\n # Name is the name of resource being referenced\n name: string\n # Namespace is the namespace of resource being\n # referenced Note that when a namespace is specified,\n # a gateway.networking.k8s.io/ReferenceGrant object\n # is required in the referent namespace to allow that\n # namespace's owner to accept the reference. See the\n # ReferenceGrant documentation for details.\n # (Alpha) This field requires the\n # CrossNamespaceVolumeDataSource feature gate to be\n # enabled.\n namespace: string\n # resources represents the minimum resources the volume\n # should have. If RecoverVolumeExpansionFailure feature\n # is enabled users are allowed to specify resource\n # requirements that are lower than previous value but\n # must still be higher than capacity recorded in the\n # status field of the claim. More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n resources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this\n # container. This is an alpha field and requires\n # enabling the DynamicResourceAllocation feature\n # gate. This field is immutable. It can only be set\n # for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute\n # resources allowed. 
More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute\n # resources required. If Requests is omitted for a\n # container, it defaults to Limits if that is\n # explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot\n # exceed Limits. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n # selector is a label query over volumes to consider for\n # binding.\n selector:\n # matchExpressions is a list of label selector\n # requirements. The requirements are ANDed.\n matchExpressions:\n - key: string\n # operator represents a key's relationship to a set\n # of values. Valid operators are In, NotIn, Exists\n # and DoesNotExist.\n operator: string\n # values is an array of string values. If the\n # operator is In or NotIn, the values array must be\n # non-empty. If the operator is Exists or\n # DoesNotExist, the values array must be empty.\n # This array is replaced during a strategic merge\n # patch.\n values: [\"string\"]\n # matchLabels is a map of {key,value} pairs. A single\n # {key,value} in the matchLabels map is equivalent to\n # an element of matchExpressions, whose key field\n # is \"key\", the operator is \"In\", and the values\n # array contains only \"value\". The requirements are\n # ANDed.\n matchLabels: {}\n # storageClassName is the name of the StorageClass\n # required by the claim. More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n storageClassName: string\n # volumeMode defines what type of volume is required by\n # the claim. Value of Filesystem is implied when not\n # included in claim spec.\n volumeMode: string\n # volumeName is the binding reference to the\n # PersistentVolume backing this claim.\n volumeName: string\n # status represents the current information/status of a\n # persistent volume claim. Read-only. More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n status:\n # accessModes contains the actual access modes the\n # volume backing the PVC has. More info:\n # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n accessModes: [\"string\"]\n # allocatedResources is the storage resource within\n # AllocatedResources tracks the capacity allocated to a\n # PVC. It may be larger than the actual capacity when a\n # volume expansion operation is requested. For storage\n # quota, the larger value from allocatedResources and\n # PVC.spec.resources is used. If allocatedResources is\n # not set, PVC.spec.resources alone is used for quota\n # calculation. If a volume expansion capacity request\n # is lowered, allocatedResources is only lowered if\n # there are no expansion operations in progress and if\n # the actual volume capacity is equal or lower than the\n # requested capacity. This is an alpha field and\n # requires enabling RecoverVolumeExpansionFailure\n # feature.\n allocatedResources: {}\n # capacity represents the actual resources of the\n # underlying volume.\n capacity: {}\n # conditions is the current Condition of persistent\n # volume claim. 
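  # ------------------------------------------------------------------
  # Illustrative sketch (not part of the schema): a minimal volumeClaim
  # spec for a disk tier. The "standard" StorageClass and the 10Gi
  # request are assumptions for the example, not operator defaults.
  #
  #   volumeClaim:
  #     spec:
  #       accessModes: ["ReadWriteOnce"]
  #       storageClassName: "standard"
  #       resources:
  #         requests:
  #           storage: 10Gi
  # ------------------------------------------------------------------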
  # status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
  status:
    # accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
    accessModes: ["string"]
    # allocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
    allocatedResources: {}
    # capacity represents the actual resources of the underlying volume.
    capacity: {}
    # conditions is the current Condition of persistent volume claim. If the underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'.
    conditions:
    - lastProbeTime: string
      # lastTransitionTime is the time the condition transitioned from one status to another.
      lastTransitionTime: string
      # message is the human-readable message indicating details about last transition.
      message: string
      # reason is a unique, short, machine-understandable string that gives the reason for the condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized.
      reason: string
      status: string
      # PersistentVolumeClaimConditionType is a valid value of PersistentVolumeClaimCondition.Type
      type: string
    # phase represents the current phase of PersistentVolumeClaim.
    phase: string
    # resizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
    resizeStatus: string
# The RAM Tier represents the RAM available for data storage per rank. The RAM Tier is NOT used for small, non-data objects or variables that are allocated and deallocated for program flow control or used to store metadata or other similar information; these continue to use either the stack or the regular runtime memory allocator. This tier should be sized on each machine such that there is sufficient RAM left over to handle this overhead, as well as the needs of other processes running on the same machine.
ramTier:
  # A default memory limit and eviction thresholds can be set across all ranks, while one or more ranks may be configured to override those defaults. The general format for RAM settings: tier.ram.[default|rank<#>].<parameter> Valid *parameter* names include:
  # * 'limit' : The maximum RAM (bytes) per rank that can be allocated across all resource groups. Default is -1, signifying no limit and ignore watermark settings.
  # * 'high_watermark' : RAM percentage used eviction threshold. Once memory usage exceeds this value, evictions from this tier will be scheduled in the background and continue until the 'low_watermark' percentage usage is reached. Default is "90", signifying a 90% memory usage threshold.
  # * 'low_watermark' : RAM percentage used recovery threshold. Once memory usage exceeds the 'high_watermark', evictions will continue until memory usage falls below this recovery threshold. Default is "50", signifying a 50% memory usage threshold.
  default:
    # * 'high_watermark' : Percentage used eviction threshold. Once usage exceeds this value, evictions from this tier will be scheduled in the background and continue until the 'low_watermark' percentage usage is reached. Default is "90", signifying a 90% memory usage threshold.
    highWatermark: 90
    # * 'limit' : The maximum (bytes) per rank that can be allocated across all resource groups.
    limit: "1Gi"
    # * 'low_watermark' : Percentage used recovery threshold. Once usage exceeds the 'high_watermark', evictions will continue until usage falls below this recovery threshold. Default is "80", signifying an 80% usage threshold.
    lowWatermark: 80
    name: string
  # The maximum RAM (bytes) for processing data at rank 0. Overrides the overall default RAM tier limit. #tier.ram.rank0.limit = -1
  ranks:
  - highWatermark: 90
    # * 'limit' : The maximum (bytes) per rank that can be allocated across all resource groups.
    limit: "1Gi"
    # * 'low_watermark' : Percentage used recovery threshold. Once usage exceeds the 'high_watermark', evictions will continue until usage falls below this recovery threshold. Default is "80", signifying an 80% usage threshold.
    lowWatermark: 80
    name: string
tieredStrategy:
  # Default strategy to apply to tables or columns when one was not provided during table creation. This strategy is also applied to a resource group that does not specify one at time of creation. The strategy is formed by chaining together the tier types and their respective eviction priorities. Any given tier may appear no more than once in the chain and the priority must be in range "1" - "10", where "1" is the lowest priority (first to be evicted) and "9" is the highest priority (last to be evicted). A priority of "10" indicates that an object is unevictable. Each tier's priority is in relation to the priority of other objects in the same tier; e.g., "RAM 9, DISK2 1" indicates that an object will be the highest evictable priority among objects in the RAM Tier (last evicted), but that it will be the lowest priority among objects in the Disk Tier named 'disk2' (first evicted). Note that since an object can only have one Disk Tier instance in its strategy, the corresponding priority will only apply in relation to other objects in Disk Tier instance 'disk2'. See the Tiered Storage section for more information about tier type names. Format: <tier1> <priority>, <tier2> <priority>, <tier3> <priority>, ... Examples using a Disk Tier named 'disk2' and a Cold Storage Tier 'cold0': vram 3, ram 5, disk2 3, persist 10; vram 3, ram 5, disk2 3, persist 6, cold0 10. tier_strategy.default = VRAM 1, RAM 5, PERSIST 5
  default: "VRAM 1, RAM 5, PERSIST 5"
  # Predicate evaluation interval (in minutes) - indicates the interval at which the tier strategy predicates are evaluated
  predicateEvaluationInterval: 60
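# Illustrative sketch (assumed values, not schema defaults): a strategy
# that keeps hot data longest in RAM and makes persisted data
# unevictable, following the priority rules described above.
#
#   tieredStrategy:
#     default: "VRAM 1, RAM 9, PERSIST 10"
#     predicateEvaluationInterval: 30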
video:
  # System default TTL for videos. Time-to-live (TTL) is the number of minutes before a video will expire and be removed, or -1 to disable. video_default_ttl = -1
  defaultTTL: "-1"
  # The maximum number of videos to allow on the system. Set to 0 to disable video rendering. Set to -1 to allow an unlimited number of videos. video_max_count = -1
  maxCount: "-1"
  # Directory where video files should be temporarily stored while rendering. Only accessed by rank 0. video_temp_directory = ${gaia.temp_directory}/gpudb-temp-videos
  tmpDir: "${gaia.temp_directory}/gpudb-temp-videos"
# VisualizationConfig
visualization:
  # Enable level-of-details rendering for fast interaction with large WKT polygon data. Only available for the OpenGL renderer (when 'enable_opengl_renderer' is "true").
  enableLODRendering: true
  # If "true", enable hardware-accelerated OpenGL renderer; if "false", use the software-based Cairo renderer.
  enableOpenGLRenderer: true
  # If "true", enable Vector Tile Service (VTS) to support client-side visualization of geospatial data. Enabling this option increases memory usage on ingestion.
  enableVectorTileService: false
  # Longitude and latitude ranges of geospatial data for which level-of-details representations are being generated. The parameter order is: <min_longitude> <min_latitude> <max_longitude> <max_latitude> The default values span the world, but the level-of-details rendering becomes more efficient when the precise extent of geospatial data is specified. kubebuilder:default:={ -180, -90, 180, 90 }
  lodDataExtent: [integer]
  # The extent to which shape data are pre-processed for level-of-details rendering during data insert/load or processed on-the-fly at rendering time. This is a trade-off between speed and memory. The higher the value, the faster level-of-details rendering is, but the more memory is used for storing processed shape data. The maximum level is "10" (most shape data are pre-processed) and the minimum level is "0".
  lodPreProcessingLevel: 5
  # The number of subregions in horizontal and vertical geospatial data extent. The default values of "12 6" divide the world into subregions of 30 degree (lon.) x 30 degree (lat.)
  lodSubRegionNum: [12,6]
  # A base image resolution (width and height in pixels) at which a subregion would be rendered in a global view spanning the whole dataset. Based on this resolution, level-of-details representations are generated for the polygons located in the subregion.
  lodSubRegionResolution: [512,512]
  # Maximum heatmap size (in pixels) that can be generated. This reserves 'max_heatmap_size' ^ 2 * 8 bytes of GPU memory at **rank0**
  maxHeatmapSize: 3072
  # The maximum number of levels in the level-of-details rendering. As the number increases, level-of-details rendering becomes effective at higher zoom levels, but it may increase memory usage for storing level-of-details representations.
  maxLODLevel: 8
  # Input geometries are pre-processed upon ingestion for faster vector tile generation. This parameter determines the zoomlevel at which the vector tile pre-processing stops. A vector tile request for a higher zoomlevel than this parameter takes additional time because the vector tile needs to be generated on the fly.
  maxVectorTileZoomLevel: 8
  # Input geometries are pre-processed upon ingestion for faster vector tile generation. This parameter determines the zoomlevel from which the vector tile pre-processing starts. A vector tile request for a lower zoomlevel than this parameter takes additional time because the vector tile needs to be generated on the fly.
  minVectorTileZoomLevel: 1
  # The number of samples to use for antialiasing. Higher numbers will improve image quality but require more GPU memory to store the samples on worker ranks. This affects only the OpenGL renderer. Value may be "0", "4", "8" or "16". When "0" antialiasing is disabled. The default value is "0".
  openGLAntialiasingLevel: 1
  # Threshold number of points (per-TOM) at which point rendering switches to fast mode.
  pointRenderThreshold: 100000
  # Single-precision coordinates are used for usual rendering processes, but depending on the precision of geometry data and use case, double precision processing may be required at a high zoomlevel. Double precision rendering processes are used from the zoomlevel specified by this parameter, which corresponds to a zoomlevel of the TMS or Google map service.
  renderingPrecisionThreshold: 30
  # The image width/height (in pixels) of svg symbols cached in the OpenGL symbol cache.
  symbolResolution: 100
  # The width/height (in pixels) of an OpenGL texture which caches symbol images for OpenGL rendering.
  symbolTextureSize: 4000
  # Threshold for the number of points (per-TOM) after which symbology rendering falls back to regular rendering
  symbologyRenderThreshold: 10000
  # The name of map tiler used for Vector Tile Service. "google" and "tms" map tilers are supported currently. This parameter should match the map tiler of clients' vector tile renderer.
  vectorTileMapTiler: "google"
workbench:
  # Start the Workbench app on the head host when host manager is started. enable_workbench = false
  enable: false
  # HTTP server port for Workbench if enabled. workbench_port = 8000
  port:
    # Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536.
    containerPort: 1
    # What host IP to bind the external port to.
    hostIP: string
    # Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.
    hostPort: 1
    # If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services.
    name: string
    # Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
    protocol: "TCP"
# The fully qualified URL used on the Ingress records for any exposed services. Completed by the Operator. DO NOT POPULATE MANUALLY.
fqdn: ""
# The name of the parent HA Ring this cluster belongs to.
haRingName: "default"
# Whether to enable the separate node 'pools' for "infra", "compute" pod scheduling. Default: false
hasPools: true
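# Illustrative sketch (hypothetical values): enabling Workbench on its
# documented default port 8000; the port name "workbench" is an
# assumption for the example.
#
#   workbench:
#     enable: true
#     port:
#       containerPort: 8000
#       name: "workbench"
#       protocol: "TCP"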
# The port the HostManager will be running in each pod in the cluster. Default: 9300, TCP
hostManagerPort:
  # Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536.
  containerPort: 1
  # What host IP to bind the external port to.
  hostIP: string
  # Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.
  hostPort: 1
  # If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services.
  name: string
  # Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
  protocol: "TCP"
# Set the name of the container image to use.
image: "kinetica/kinetica-k8s-intel:v7.1.6.0"
# Set the policy for pulling container images.
imagePullPolicy: "IfNotPresent"
# ImagePullSecrets is an optional list of references to secrets in the same gpudb-namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. For example, in the case of docker, only DockerConfig type secrets are honored.
imagePullSecrets:
- name: string
# Labels - Pod labels to be applied to the Statefulset DB pods.
labels: {}
# The Ingress Endpoint that GAdmin will be running on.
letsEncrypt:
  # Enable LetsEncrypt for Certificate generation.
  enabled: false
  # LetsEncryptEnvironment
  environment: "staging"
# Set the Kinetica DB License.
license: string
# Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
livenessProbe:
  # Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.
  failureThreshold: 3
  # Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
  initialDelaySeconds: 10
  # How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
  periodSeconds: 10
# LoggerConfig Kinetica DB Logger Configuration Object. Configure the LOG4CPLUS logger for the DB. Field takes a string containing the full configuration. If not specified a template file is used during DB configuration generation.
loggerConfig:
  configString: string
# Metrics - DB Metrics scrape & forward configuration for `fluent-bit`.
metricsRegistryRepositoryTag:
  # Set the policy for pulling container images.
  imagePullPolicy: "IfNotPresent"
  # ImagePullSecrets is an optional list of references to secrets in the same gpudb-namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. For example, in the case of docker, only DockerConfig type secrets are honored.
  imagePullSecrets:
  - name: string
  # The image registry & optional port containing the repository.
  registry: "docker.io"
  # The image repository path.
  repository: "kineticadevcloud/"
  # SemVer = Semantic Version for the Tag SemVer semver.Version
  semVer: string
  # The image sha.
  sha: ""
  # The image tag.
  tag: "v7.1.5.2"
# Metrics - `fluent-bit` container requests/limits.
metricsResources:
  # Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers.
  claims:
  - name: string
  # Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
  limits: {}
  # Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
  requests: {}
# NodeSelector - NodeSelector to be applied to the DB Pods
nodeSelector: {}
# Internal Operator field; do not use.
originalReplicas: 1
# podManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down. The default policy is `OrderedReady`, where pods are created in increasing order (pod-0, then pod-1, etc) and the controller will wait until each pod is ready before continuing. When scaling down, the pods are removed in the opposite order. The alternative policy is `Parallel` which will create pods in parallel to match the desired scale without waiting, and on scale down will delete all pods at once.
podManagementPolicy: "Parallel"
# Number of ranks per node as a uint16 i.e. 1-65535 ranks per node. Default: 1
ranksPerNode: 1
# Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
readinessProbe:
  # Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.
  failureThreshold: 3
  # Number of seconds after the container has started before readiness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
  initialDelaySeconds: 10
  # How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
  periodSeconds: 10
# The number of DB ranks i.e. replicas that the cluster will spin up. Default: 3
replicas: 3
# Limit the resources a DB Pod can consume.
resources:
  # Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers.
  claims:
  - name: string
  # Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
  limits: {}
  # Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
  requests: {}
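# Illustrative sketch (sizes are assumptions): pinning the DB image
# shown elsewhere in this guide and bounding what each rank pod may
# consume.
#
#   image: "kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2"
#   imagePullPolicy: "IfNotPresent"
#   replicas: 3
#   ranksPerNode: 1
#   resources:
#     requests:
#       cpu: "4"
#       memory: "16Gi"
#     limits:
#       memory: "16Gi"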
# SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence.
securityContext:
  # AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows.
  allowPrivilegeEscalation: true
  # The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows.
  capabilities:
    # Added capabilities
    add: ["string"]
    # Removed capabilities
    drop: ["string"]
  # Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows.
  privileged: true
  # procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows.
  procMount: string
  # Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows.
  readOnlyRootFilesystem: true
  # The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
  runAsGroup: 1
  # Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
  runAsNonRoot: true
  # The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
  runAsUser: 1
  # The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
  seLinuxOptions:
    # Level is SELinux level label that applies to the container.
    level: string
    # Role is a SELinux role label that applies to the container.
    role: string
    # Type is a SELinux type label that applies to the container.
    type: string
    # User is a SELinux user label that applies to the container.
    user: string
  # The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows.
  seccompProfile:
    # localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost".
    localhostProfile: string
    # type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied.
    type: string
  # The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.
  windowsOptions:
    # GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field.
    gmsaCredentialSpec: string
    # GMSACredentialSpecName is the name of the GMSA credential spec to use.
    gmsaCredentialSpecName: string
    # HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.
    hostProcess: true
    # The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
    runAsUserName: string
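# Illustrative sketch (an assumption, not the shipped default): a
# locked-down securityContext of the kind the fields above allow.
#
#   securityContext:
#     allowPrivilegeEscalation: false
#     runAsNonRoot: true
#     runAsUser: 1000
#     readOnlyRootFilesystem: true
#     capabilities:
#       drop: ["ALL"]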
# StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. This is an alpha feature enabled by the StartupProbe feature flag. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
startupProbe:
  # Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.
  failureThreshold: 3
  # Number of seconds after the container has started before startup probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
  initialDelaySeconds: 10
  # How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
  periodSeconds: 10
# HostManagerMonitor is used to monitor the Kinetica DB Ranks. If a rank is unavailable for the specified time (MaxRankFailureCount) the cluster will be restarted.
hostManagerMonitor:
  # The HostMonitor Port for the DB StartupProbe, ReadinessProbe and Liveness probes. Default: 8888
  db_healthz_port:
    # Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536.
    containerPort: 1
    # What host IP to bind the external port to.
    hostIP: string
    # Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.
    hostPort: 1
    # If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services.
    name: string
    # Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
    protocol: "TCP"
  # The HostMonitor Port for the DB StartupProbe, ReadinessProbe and Liveness probes. Default: 8889
  hm_healthz_port:
    # Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536.
    containerPort: 1
    # What host IP to bind the external port to.
    hostIP: string
    # Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.
    hostPort: 1
    # If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services.
    name: string
    # Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
    protocol: "TCP"
  # Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
  livenessProbe:
    # Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.
    failureThreshold: 3
    # Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
    initialDelaySeconds: 10
    # How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
    periodSeconds: 10
  # Set the name of the container image to use.
  monitorRegistryRepositoryTag:
    # Set the policy for pulling container images.
    imagePullPolicy: "IfNotPresent"
    # ImagePullSecrets is an optional list of references to secrets in the same gpudb-namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. For example, in the case of docker, only DockerConfig type secrets are honored.
    imagePullSecrets:
    - name: string
    # The image registry & optional port containing the repository.
    registry: "docker.io"
    # The image repository path.
    repository: "kineticadevcloud/"
    # SemVer = Semantic Version for the Tag SemVer semver.Version
    semVer: string
    # The image sha.
    sha: ""
    # The image tag.
    tag: "v7.1.5.2"
  # Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
  readinessProbe:
    # Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.
    failureThreshold: 3
    # Number of seconds after the container has started before readiness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
    initialDelaySeconds: 10
    # How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
    periodSeconds: 10
  # Allow for overriding resource requests/limits.
  resources:
    # Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers.
    claims:
    - name: string
    # Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
    limits: {}
    # Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
    requests: {}
  # StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. This is an alpha feature enabled by the StartupProbe feature flag. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
  startupProbe:
    # Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.
    failureThreshold: 3
    # Number of seconds after the container has started before startup probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
    initialDelaySeconds: 10
    # How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
    periodSeconds: 10
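# Illustrative sketch (hypothetical values): wiring the HostManager
# monitor to its documented default health ports with a slower probe
# cadence.
#
#   hostManagerMonitor:
#     db_healthz_port:
#       containerPort: 8888
#       protocol: "TCP"
#     hm_healthz_port:
#       containerPort: 8889
#       protocol: "TCP"
#     livenessProbe:
#       failureThreshold: 3
#       initialDelaySeconds: 30
#       periodSeconds: 20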
# The platform infrastructure provider e.g. azure, aws, gcp, on-prem etc.
infra: "on-prem"
# The Kubernetes Ingress Controller the cluster will be running on e.g. ingress-nginx, Traefik, Ambassador, Gloo, Kong etc.
ingressController: "nginx"
# The LDAP server to connect to.
ldap:
  # BaseDN - The root base LDAP Distinguished Name to use as the base for the LDAP usage
  baseDN: "dc=kinetica,dc=com"
  # BindDN - The LDAP Distinguished Name to use for the LDAP connectivity/data connectivity/bind
  bindDN: "cn=admin,dc=kinetica,dc=com"
  # Host - The name of the host to connect to. If IsInLocalK8S=true then supply only the name e.g. `openldap` Default: openldap
  host: "openldap"
  # IsInLocalK8S - Is the LDAP server co-located in the same K8s cluster the operator is running in. Default: true
  isInLocalK8S: true
  # IsLDAPS - Use LDAPS instead of LDAP. Default: false
  isLDAPS: false
  # Namespace - The namespace the LDAP server resides in. Default: openldap
  namespace: "gpudb"
  # Port - Defaults to LDAP Port 389 Default: 389
  port: 389
# Tells the operator to use Cloud Provider Pay As You Go functionality.
payAsYouGo: false
# The Reveal Dashboard Configuration for the Kinetica Cluster.
reveal:
  # The port that Reveal will be running on. It runs only on the head node pod in the cluster. Default: 8080
  containerPort:
    # Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536.
    containerPort: 1
    # What host IP to bind the external port to.
    hostIP: string
    # Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.
    hostPort: 1
    # If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services.
    name: string
    # Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
    protocol: "TCP"
  # The Ingress Endpoint that Reveal will be running on.
  ingressPath:
    # backend defines the referenced service endpoint to which the traffic will be forwarded.
    backend:
      # resource is an ObjectRef to another Kubernetes resource in the namespace of the Ingress object. If resource is specified, serviceName and servicePort must not be specified.
      resource:
        # APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
        apiGroup: string
        # Kind is the type of resource being referenced
        kind: KineticaCluster
        # Name is the name of resource being referenced
        name: string
      # serviceName specifies the name of the referenced service.
      serviceName: string
      # servicePort specifies the port of the referenced service.
      servicePort:
    # path is matched against the path of an incoming request. Currently it can contain characters disallowed from the conventional "path" part of a URL as defined by RFC 3986. Paths must begin with a '/' and must be present when using PathType with value "Exact" or "Prefix".
    path: string
    # pathType determines the interpretation of the path matching. PathType can be one of the following values: * Exact: Matches the URL path exactly. * Prefix: Matches based on a URL path prefix split by '/'. Matching is done on a path element by element basis. A path element refers to the list of labels in the path split by the '/' separator. A request matches path p if every element of p is an element-wise prefix of the request path. Note that if the last element of the path is a substring of the last element in request path, it is not a match (e.g. /foo/bar matches /foo/bar/baz, but does not match /foo/barbaz). * ImplementationSpecific: Interpretation of the Path matching is up to the IngressClass. Implementations can treat this as a separate PathType or treat it identically to Prefix or Exact path types. Implementations are required to support all path types. Defaults to ImplementationSpecific.
    pathType: string
  # Whether to enable the Reveal Dashboard on the Cluster. Default: true
  isEnabled: true
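# Illustrative sketch: pointing the cluster at an in-cluster OpenLDAP
# service using the documented defaults above; only the namespace is
# changed as an example.
#
#   ldap:
#     host: "openldap"
#     port: 389
#     isInLocalK8S: true
#     isLDAPS: false
#     namespace: "openldap"
#     baseDN: "dc=kinetica,dc=com"
#     bindDN: "cn=admin,dc=kinetica,dc=com"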
# The Stats server to deploy & connect to if required.
stats:
  # AlertManager - AlertManager specific configuration.
  alertManager:
    # Set the arguments for the command within the container to run.
    args: ["-c","/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090 --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage --storage.tsdb.retention.time=7d --web.enable-lifecycle"]
    # Set the command within the container to run.
    command: ["/bin/sh"]
    # ConfigFile - Set the location of the Loki configuration file.
    configFile: "/opt/gpudb/kagent/stats/loki/loki.yml"
    # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a ConfigMap
    configFileAsConfigMap: true
    # The port that Stats will be running on. It runs only on the head node pod in the cluster. Default: 9091
    containerPort:
      # Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536.
      containerPort: 1
      # What host IP to bind the external port to.
      hostIP: string
      # Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.
      hostPort: 1
      # If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services.
      name: string
      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
      protocol: "TCP"
    # List of environment variables to set in the container.
    env:
    - name: string
      # Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "".
      value: string
      # Source for the environment variable's value. Cannot be used if value is not empty.
      valueFrom:
        # Selects a key of a ConfigMap.
        configMapKeyRef:
          # The key to select.
          key: string
          # Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
          name: string
          # Specify whether the ConfigMap or its key must be defined
          optional: true
        # Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
        fieldRef:
          # Version of the schema the FieldPath is written in terms of, defaults to "v1".
          apiVersion: app.kinetica.com/v1
          # Path of the field to select in the specified API version.
          fieldPath: string
        # Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
        resourceFieldRef:
          # Container name: required for volumes, optional for env vars
          containerName: string
          # Specifies the output format of the exposed resources, defaults to "1"
          divisor:
          # Required: resource to select
          resource: string
        # Selects a key of a secret in the pod's namespace
        secretKeyRef:
          # The key of the secret to select from. Must be a valid secret key.
          key: string
          # Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?
          name: string
          # Specify whether the Secret or its key must be defined
          optional: true
    # Set the name of the container image to use.
    image:
      # Set the policy for pulling container images.
      imagePullPolicy: "IfNotPresent"
      # ImagePullSecrets is an optional list of references to secrets in the same gpudb-namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. For example, in the case of docker, only DockerConfig type secrets are honored.
      imagePullSecrets:
      - name: string
      # The image registry & optional port containing the repository.
      registry: "docker.io"
      # The image repository path.
      repository: "kineticadevcloud/"
      # SemVer = Semantic Version for the Tag SemVer semver.Version
      semVer: string
      # The image sha.
      sha: ""
      # The image tag.
      tag: "v7.1.5.2"
    # Whether to enable the Stats Server on the Cluster. Default: true
    isEnabled: true
    # Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
    livenessProbe:
      # Exec specifies the action to take.
      exec:
        # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
        command: ["string"]
      # Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.
      failureThreshold: 1
      # GRPC specifies an action involving a GRPC port.
      grpc:
        # Port number of the gRPC service. Number must be in the range 1 to 65535.
        port: 1
        # Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC.
        service: string
      # HTTPGet specifies the http request to perform.
      httpGet:
        # Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.
        host: string
        # Custom headers to set in the request. HTTP allows repeated headers.
        httpHeaders:
        - name: string
          # The header field value
          value: string
        # Path to access on the HTTP server.
        path: string
        # Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
        port:
        # Scheme to use for connecting to the host. Defaults to HTTP.
        scheme: string
      # Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      initialDelaySeconds: 1
      # How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
      periodSeconds: 1
      # Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.
      successThreshold: 1
      # TCPSocket specifies an action involving a TCP port.
      tcpSocket:
        # Optional: Host name to connect to, defaults to the pod IP.
        host: string
        # Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
        port:
      # Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.
      terminationGracePeriodSeconds: 1
      # Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      timeoutSeconds: 1
    # Logs - Set the location of the stats log directory.
    logs: "/opt/gpudb/kagent/stats/logs"
    name: "stats"
    # Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
    readinessProbe:
      # Exec specifies the action to take.
      exec:
        # Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
        command: ["string"]
      # Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.
      failureThreshold: 1
      # GRPC specifies an action involving a GRPC port.
      grpc:
        # Port number of the gRPC service. Number must be in the range 1 to 65535.
        port: 1
        # Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC.
        service: string
      # HTTPGet specifies the http request to perform.
      httpGet:
        # Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.
        host: string
        # Custom headers to set in the request. HTTP allows repeated headers.
        httpHeaders:
        - name: string
          # The header field value
          value: string
        # Path to access on the HTTP server.
        path: string
        # Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
        port:
        # Scheme to use for connecting to the host. Defaults to HTTP.
        scheme: string
      # Number of seconds after the container has started before readiness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      initialDelaySeconds: 1
      # How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
      periodSeconds: 1
      # Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.
      successThreshold: 1
      # TCPSocket specifies an action involving a TCP port.
      tcpSocket:
        # Optional: Host name to connect to, defaults to the pod IP.
        host: string
        # Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
        port:
      # Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.
      terminationGracePeriodSeconds: 1
      # Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      timeoutSeconds: 1
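    # Illustrative sketch (path and port are assumptions, not a
    # documented endpoint): an httpGet readinessProbe of the shape the
    # schema above describes.
    #
    #   readinessProbe:
    #     httpGet:
    #       path: "/-/ready"
    #       port: 9089
    #       scheme: "HTTP"
    #     periodSeconds: 10
    #     failureThreshold: 3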
This field is\n # immutable. It can only be set for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute resources\n # allowed. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute resources\n # required. If Requests is omitted for a container, it defaults\n # to Limits if that is explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot exceed Limits.\n # More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n # StoragePath - Set the location of the AlertManager file\n # storage.\n storagePath: \"/opt/gpudb/kagent/stats/storage/alertmanager/alertmanager\"\n # WebConfigFile - Set the location of the AlertManager\n # alertmanager-web-config.yml.\n webConfigFile: \"/opt/gpudb/kagent/stats/alertmanager/alertmanager-web-config.yml\"\n # WebListenAddress - Set the location of the AlertManager\n # alertmanager-web-config.yml.\n webListenAddress: \"0.0.0.0:9089\"\n # Grafana - Grafana specific configuration.\n grafana:\n # Set the arguments for the command within the container to run.\n args:\n [\"-c\",\"/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug\n --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090\n --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage\n --storage.tsdb.retention.time=7d --web.enable-lifecycle\"]\n # Set the command within the container to run.\n command: [\"/bin/sh\"]\n # ConfigFile - Set the location of the Loki configuration file.\n configFile: \"/opt/gpudb/kagent/stats/loki/loki.yml\"\n # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a\n # ConfigMap\n configFileAsConfigMap: true\n # The port that Stats will be running on. It runs only on the head\n # node pod in the cluster. Default: 9091\n containerPort:\n # Number of port to expose on the pod's IP address. This must be\n # a valid port number, 0 < x < 65536.\n containerPort: 1\n # What host IP to bind the external port to.\n hostIP: string\n # Number of port to expose on the host. If specified, this must\n # be a valid port number, 0 < x < 65536. If HostNetwork is\n # specified, this must match ContainerPort. Most containers do\n # not need this.\n hostPort: 1\n # If specified, this must be an IANA_SVC_NAME and unique within\n # the pod. Each named port in a pod must have a unique name.\n # Name for the port that can be referred to by services.\n name: string\n # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n # to \"TCP\".\n protocol: \"TCP\"\n # List of environment variables to set in the container.\n env:\n - name: string\n # Variable references $(VAR_NAME) are expanded using the\n # previously defined environment variables in the container and\n # any service environment variables. If a variable cannot be\n # resolved, the reference in the input string will be\n # unchanged. Double $$ are reduced to a single $, which allows\n # for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will\n # produce the string literal \"$(VAR_NAME)\". Escaped references\n # will never be expanded, regardless of whether the variable\n # exists or not. Defaults to \"\".\n value: string\n # Source for the environment variable's value. Cannot be used if\n # value is not empty.\n valueFrom:\n # Selects a key of a ConfigMap.\n configMapKeyRef:\n # The key to select.\n key: string\n # Name of the referent. 
More info:\n # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n # TODO: Add other useful fields. apiVersion, kind, uid?\n name: string\n # Specify whether the ConfigMap or its key must be defined\n optional: true\n # Selects a field of the pod: supports metadata.name,\n # metadata.namespace, `metadata.labels\n # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,\n # spec.serviceAccountName, status.hostIP, status.podIP,\n # status.podIPs.\n fieldRef:\n # Version of the schema the FieldPath is written in terms\n # of, defaults to \"v1\".\n apiVersion: app.kinetica.com/v1\n # Path of the field to select in the specified API version.\n fieldPath: string\n # Selects a resource of the container: only resources limits\n # and requests (limits.cpu, limits.memory,\n # limits.ephemeral-storage, requests.cpu, requests.memory and\n # requests.ephemeral-storage) are currently supported.\n resourceFieldRef:\n # Container name: required for volumes, optional for env\n # vars\n containerName: string\n # Specifies the output format of the exposed resources,\n # defaults to \"1\"\n divisor: \n # Required: resource to select\n resource: string\n # Selects a key of a secret in the pod's namespace\n secretKeyRef:\n # The key of the secret to select from. Must be a valid\n # secret key.\n key: string\n # Name of the referent. More info:\n # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n # TODO: Add other useful fields. apiVersion, kind, uid?\n name: string\n # Specify whether the Secret or its key must be defined\n optional: true\n # HomePath - Set the location of the Grafana home directory.\n homePath: \"/opt/gpudb/kagent/stats/grafana\"\n # GraphiteHost - Host Address\n host: \"0.0.0.0\"\n # Set the name of the container image to use.\n image:\n # Set the policy for pulling container images.\n imagePullPolicy: \"IfNotPresent\"\n # ImagePullSecrets is an optional list of references to secrets\n # in the same gpudb-namespace to use for pulling any of the\n # images used by this PodSpec. If specified, these secrets will\n # be passed to individual puller implementations for them to\n # use. For example, in the case of docker, only DockerConfig\n # type secrets are honored.\n imagePullSecrets:\n - name: string\n # The image registry & optional port containing the repository.\n registry: \"docker.io\"\n # The image repository path.\n repository: \"kineticadevcloud/\"\n # SemVer = Semantic Version for the Tag SemVer semver.Version\n semVer: string\n # The image sha.\n sha: \"\"\n # The image tag.\n tag: \"v7.1.5.2\"\n # Whether to enable the Stats Server on the Cluster. Default:\n # true\n isEnabled: true\n # Periodic probe of container liveness. Container will be\n # restarted if the probe fails. Cannot be updated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n livenessProbe:\n # Exec specifies the action to take.\n exec:\n # Command is the command line to execute inside the container,\n # the working directory for the command is root ('/') in the\n # container's filesystem. The command is simply exec'd, it is\n # not run inside a shell, so traditional shell instructions\n # ('|', etc) won't work. To use a shell, you need to\n # explicitly call out to that shell. Exit status of 0 is\n # treated as live/healthy and non-zero is unhealthy.\n command: [\"string\"]\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. 
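For orientation, the image block fields documented above (registry, repository, tag/sha, pull policy, pull secrets) compose the full container image reference. A minimal sketch using the defaults shown in the reference; the pull-secret name is a hypothetical placeholder:
<pre><code># Illustrative image block; with these defaults the operator would
# resolve the reference from registry + repository + tag.
image:
  imagePullPolicy: "IfNotPresent"
  imagePullSecrets:
    - name: my-registry-secret   # hypothetical Secret in the gpudb namespace
  registry: "docker.io"
  repository: "kineticadevcloud/"
  tag: "v7.1.5.2"
</code></pre>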
Minimum value\n # is 1.\n failureThreshold: 1\n # GRPC specifies an action involving a GRPC port.\n grpc:\n # Port number of the gRPC service. Number must be in the range\n # 1 to 65535.\n port: 1\n # Service is the name of the service to place in the gRPC\n # HealthCheckRequest\n # (see\n # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n # If this is not specified, the default behavior is defined\n # by gRPC.\n service: string\n # HTTPGet specifies the http request to perform.\n httpGet:\n # Host name to connect to, defaults to the pod IP. You\n # probably want to set \"Host\" in httpHeaders instead.\n host: string\n # Custom headers to set in the request. HTTP allows repeated\n # headers.\n httpHeaders:\n - name: string\n # The header field value\n value: string\n # Path to access on the HTTP server.\n path: string\n # Name or number of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Scheme to use for connecting to the host. Defaults to HTTP.\n scheme: string\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 1\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 1\n # Minimum consecutive successes for the probe to be considered\n # successful after having failed. Defaults to 1. Must be 1 for\n # liveness and startup. Minimum value is 1.\n successThreshold: 1\n # TCPSocket specifies an action involving a TCP port.\n tcpSocket:\n # Optional: Host name to connect to, defaults to the pod IP.\n host: string\n # Number or name of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Optional duration in seconds the pod needs to terminate\n # gracefully upon probe failure. The grace period is the\n # duration in seconds after the processes running in the pod\n # are sent a termination signal and the time when the processes\n # are forcibly halted with a kill signal. Set this value longer\n # than the expected cleanup time for your process. If this\n # value is nil, the pod's terminationGracePeriodSeconds will be\n # used. Otherwise, this value overrides the value provided by\n # the pod spec. Value must be non-negative integer. The value\n # zero indicates stop immediately via the kill signal\n # (no opportunity to shut down). This is a beta field and\n # requires enabling ProbeTerminationGracePeriod feature gate.\n # Minimum value is 1. spec.terminationGracePeriodSeconds is\n # used if unset.\n terminationGracePeriodSeconds: 1\n # Number of seconds after which the probe times out. Defaults to\n # 1 second. Minimum value is 1. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n timeoutSeconds: 1\n # Logs - Set the location of the Loki configuration file.\n logs: \"/opt/gpudb/kagent/stats/logs\" name: \"stats\"\n # Periodic probe of container service readiness. Container will be\n # removed from service endpoints if the probe fails. Cannot be\n # updated. 
More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n readinessProbe:\n # Exec specifies the action to take.\n exec:\n # Command is the command line to execute inside the container,\n # the working directory for the command is root ('/') in the\n # container's filesystem. The command is simply exec'd, it is\n # not run inside a shell, so traditional shell instructions\n # ('|', etc) won't work. To use a shell, you need to\n # explicitly call out to that shell. Exit status of 0 is\n # treated as live/healthy and non-zero is unhealthy.\n command: [\"string\"]\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. Minimum value\n # is 1.\n failureThreshold: 1\n # GRPC specifies an action involving a GRPC port.\n grpc:\n # Port number of the gRPC service. Number must be in the range\n # 1 to 65535.\n port: 1\n # Service is the name of the service to place in the gRPC\n # HealthCheckRequest\n # (see\n # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n # If this is not specified, the default behavior is defined\n # by gRPC.\n service: string\n # HTTPGet specifies the http request to perform.\n httpGet:\n # Host name to connect to, defaults to the pod IP. You\n # probably want to set \"Host\" in httpHeaders instead.\n host: string\n # Custom headers to set in the request. HTTP allows repeated\n # headers.\n httpHeaders:\n - name: string\n # The header field value\n value: string\n # Path to access on the HTTP server.\n path: string\n # Name or number of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Scheme to use for connecting to the host. Defaults to HTTP.\n scheme: string\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 1\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 1\n # Minimum consecutive successes for the probe to be considered\n # successful after having failed. Defaults to 1. Must be 1 for\n # liveness and startup. Minimum value is 1.\n successThreshold: 1\n # TCPSocket specifies an action involving a TCP port.\n tcpSocket:\n # Optional: Host name to connect to, defaults to the pod IP.\n host: string\n # Number or name of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Optional duration in seconds the pod needs to terminate\n # gracefully upon probe failure. The grace period is the\n # duration in seconds after the processes running in the pod\n # are sent a termination signal and the time when the processes\n # are forcibly halted with a kill signal. Set this value longer\n # than the expected cleanup time for your process. If this\n # value is nil, the pod's terminationGracePeriodSeconds will be\n # used. Otherwise, this value overrides the value provided by\n # the pod spec. Value must be non-negative integer. The value\n # zero indicates stop immediately via the kill signal\n # (no opportunity to shut down). This is a beta field and\n # requires enabling ProbeTerminationGracePeriod feature gate.\n # Minimum value is 1. spec.terminationGracePeriodSeconds is\n # used if unset.\n terminationGracePeriodSeconds: 1\n # Number of seconds after which the probe times out. 
Defaults to\n # 1 second. Minimum value is 1. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n timeoutSeconds: 1\n # Resource Requests & Limits for the Stats Pod.\n resources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this container. This is\n # an alpha field and requires enabling the\n # DynamicResourceAllocation feature gate. This field is\n # immutable. It can only be set for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute resources\n # allowed. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute resources\n # required. If Requests is omitted for a container, it defaults\n # to Limits if that is explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot exceed Limits.\n # More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n # Whether to enable the Stats Server on the Cluster. Default: true\n isEnabled: true\n # Loki - Loki specific configuration.\n loki:\n # Set the arguments for the command within the container to run.\n args:\n [\"-c\",\"/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug\n --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090\n --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage\n --storage.tsdb.retention.time=7d --web.enable-lifecycle\"]\n # Set the command within the container to run.\n command: [\"/bin/sh\"]\n # ConfigFile - Set the location of the Loki configuration file.\n configFile: \"/opt/gpudb/kagent/stats/loki/loki.yml\"\n # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a\n # ConfigMap\n configFileAsConfigMap: true\n # The port that Stats will be running on. It runs only on the head\n # node pod in the cluster. Default: 9091\n containerPort:\n # Number of port to expose on the pod's IP address. This must be\n # a valid port number, 0 < x < 65536.\n containerPort: 1\n # What host IP to bind the external port to.\n hostIP: string\n # Number of port to expose on the host. If specified, this must\n # be a valid port number, 0 < x < 65536. If HostNetwork is\n # specified, this must match ContainerPort. Most containers do\n # not need this.\n hostPort: 1\n # If specified, this must be an IANA_SVC_NAME and unique within\n # the pod. Each named port in a pod must have a unique name.\n # Name for the port that can be referred to by services.\n name: string\n # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n # to \"TCP\".\n protocol: \"TCP\"\n # List of environment variables to set in the container.\n env:\n - name: string\n # Variable references $(VAR_NAME) are expanded using the\n # previously defined environment variables in the container and\n # any service environment variables. If a variable cannot be\n # resolved, the reference in the input string will be\n # unchanged. Double $$ are reduced to a single $, which allows\n # for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will\n # produce the string literal \"$(VAR_NAME)\". Escaped references\n # will never be expanded, regardless of whether the variable\n # exists or not. Defaults to \"\".\n value: string\n # Source for the environment variable's value. 
Cannot be used if\n # value is not empty.\n valueFrom:\n # Selects a key of a ConfigMap.\n configMapKeyRef:\n # The key to select.\n key: string\n # Name of the referent. More info:\n # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n # TODO: Add other useful fields. apiVersion, kind, uid?\n name: string\n # Specify whether the ConfigMap or its key must be defined\n optional: true\n # Selects a field of the pod: supports metadata.name,\n # metadata.namespace, `metadata.labels\n # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,\n # spec.serviceAccountName, status.hostIP, status.podIP,\n # status.podIPs.\n fieldRef:\n # Version of the schema the FieldPath is written in terms\n # of, defaults to \"v1\".\n apiVersion: app.kinetica.com/v1\n # Path of the field to select in the specified API version.\n fieldPath: string\n # Selects a resource of the container: only resources limits\n # and requests (limits.cpu, limits.memory,\n # limits.ephemeral-storage, requests.cpu, requests.memory and\n # requests.ephemeral-storage) are currently supported.\n resourceFieldRef:\n # Container name: required for volumes, optional for env\n # vars\n containerName: string\n # Specifies the output format of the exposed resources,\n # defaults to \"1\"\n divisor: \n # Required: resource to select\n resource: string\n # Selects a key of a secret in the pod's namespace\n secretKeyRef:\n # The key of the secret to select from. Must be a valid\n # secret key.\n key: string\n # Name of the referent. More info:\n # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n # TODO: Add other useful fields. apiVersion, kind, uid?\n name: string\n # Specify whether the Secret or its key must be defined\n optional: true\n # ExpandEnv\n expandEnv: true\n # Set the name of the container image to use.\n image:\n # Set the policy for pulling container images.\n imagePullPolicy: \"IfNotPresent\"\n # ImagePullSecrets is an optional list of references to secrets\n # in the same gpudb-namespace to use for pulling any of the\n # images used by this PodSpec. If specified, these secrets will\n # be passed to individual puller implementations for them to\n # use. For example, in the case of docker, only DockerConfig\n # type secrets are honored.\n imagePullSecrets:\n - name: string\n # The image registry & optional port containing the repository.\n registry: \"docker.io\"\n # The image repository path.\n repository: \"kineticadevcloud/\"\n # SemVer = Semantic Version for the Tag SemVer semver.Version\n semVer: string\n # The image sha.\n sha: \"\"\n # The image tag.\n tag: \"v7.1.5.2\"\n # Whether to enable the Stats Server on the Cluster. Default:\n # true\n isEnabled: true\n # Periodic probe of container liveness. Container will be\n # restarted if the probe fails. Cannot be updated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n livenessProbe:\n # Exec specifies the action to take.\n exec:\n # Command is the command line to execute inside the container,\n # the working directory for the command is root ('/') in the\n # container's filesystem. The command is simply exec'd, it is\n # not run inside a shell, so traditional shell instructions\n # ('|', etc) won't work. To use a shell, you need to\n # explicitly call out to that shell. 
Exit status of 0 is\n # treated as live/healthy and non-zero is unhealthy.\n command: [\"string\"]\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. Minimum value\n # is 1.\n failureThreshold: 1\n # GRPC specifies an action involving a GRPC port.\n grpc:\n # Port number of the gRPC service. Number must be in the range\n # 1 to 65535.\n port: 1\n # Service is the name of the service to place in the gRPC\n # HealthCheckRequest\n # (see\n # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n # If this is not specified, the default behavior is defined\n # by gRPC.\n service: string\n # HTTPGet specifies the http request to perform.\n httpGet:\n # Host name to connect to, defaults to the pod IP. You\n # probably want to set \"Host\" in httpHeaders instead.\n host: string\n # Custom headers to set in the request. HTTP allows repeated\n # headers.\n httpHeaders:\n - name: string\n # The header field value\n value: string\n # Path to access on the HTTP server.\n path: string\n # Name or number of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Scheme to use for connecting to the host. Defaults to HTTP.\n scheme: string\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 1\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 1\n # Minimum consecutive successes for the probe to be considered\n # successful after having failed. Defaults to 1. Must be 1 for\n # liveness and startup. Minimum value is 1.\n successThreshold: 1\n # TCPSocket specifies an action involving a TCP port.\n tcpSocket:\n # Optional: Host name to connect to, defaults to the pod IP.\n host: string\n # Number or name of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Optional duration in seconds the pod needs to terminate\n # gracefully upon probe failure. The grace period is the\n # duration in seconds after the processes running in the pod\n # are sent a termination signal and the time when the processes\n # are forcibly halted with a kill signal. Set this value longer\n # than the expected cleanup time for your process. If this\n # value is nil, the pod's terminationGracePeriodSeconds will be\n # used. Otherwise, this value overrides the value provided by\n # the pod spec. Value must be non-negative integer. The value\n # zero indicates stop immediately via the kill signal\n # (no opportunity to shut down). This is a beta field and\n # requires enabling ProbeTerminationGracePeriod feature gate.\n # Minimum value is 1. spec.terminationGracePeriodSeconds is\n # used if unset.\n terminationGracePeriodSeconds: 1\n # Number of seconds after which the probe times out. Defaults to\n # 1 second. Minimum value is 1. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n timeoutSeconds: 1\n # Logs - Set the location of the Loki configuration file.\n logs: \"/opt/gpudb/kagent/stats/logs\" name: \"stats\"\n # Periodic probe of container service readiness. Container will be\n # removed from service endpoints if the probe fails. Cannot be\n # updated. 
More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n readinessProbe:\n # Exec specifies the action to take.\n exec:\n # Command is the command line to execute inside the container,\n # the working directory for the command is root ('/') in the\n # container's filesystem. The command is simply exec'd, it is\n # not run inside a shell, so traditional shell instructions\n # ('|', etc) won't work. To use a shell, you need to\n # explicitly call out to that shell. Exit status of 0 is\n # treated as live/healthy and non-zero is unhealthy.\n command: [\"string\"]\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. Minimum value\n # is 1.\n failureThreshold: 1\n # GRPC specifies an action involving a GRPC port.\n grpc:\n # Port number of the gRPC service. Number must be in the range\n # 1 to 65535.\n port: 1\n # Service is the name of the service to place in the gRPC\n # HealthCheckRequest\n # (see\n # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n # If this is not specified, the default behavior is defined\n # by gRPC.\n service: string\n # HTTPGet specifies the http request to perform.\n httpGet:\n # Host name to connect to, defaults to the pod IP. You\n # probably want to set \"Host\" in httpHeaders instead.\n host: string\n # Custom headers to set in the request. HTTP allows repeated\n # headers.\n httpHeaders:\n - name: string\n # The header field value\n value: string\n # Path to access on the HTTP server.\n path: string\n # Name or number of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Scheme to use for connecting to the host. Defaults to HTTP.\n scheme: string\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 1\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 1\n # Minimum consecutive successes for the probe to be considered\n # successful after having failed. Defaults to 1. Must be 1 for\n # liveness and startup. Minimum value is 1.\n successThreshold: 1\n # TCPSocket specifies an action involving a TCP port.\n tcpSocket:\n # Optional: Host name to connect to, defaults to the pod IP.\n host: string\n # Number or name of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Optional duration in seconds the pod needs to terminate\n # gracefully upon probe failure. The grace period is the\n # duration in seconds after the processes running in the pod\n # are sent a termination signal and the time when the processes\n # are forcibly halted with a kill signal. Set this value longer\n # than the expected cleanup time for your process. If this\n # value is nil, the pod's terminationGracePeriodSeconds will be\n # used. Otherwise, this value overrides the value provided by\n # the pod spec. Value must be non-negative integer. The value\n # zero indicates stop immediately via the kill signal\n # (no opportunity to shut down). This is a beta field and\n # requires enabling ProbeTerminationGracePeriod feature gate.\n # Minimum value is 1. spec.terminationGracePeriodSeconds is\n # used if unset.\n terminationGracePeriodSeconds: 1\n # Number of seconds after which the probe times out. 
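The liveness and readiness probes above are standard Kubernetes probe blocks. A minimal sketch of an httpGet readiness probe, assuming the Prometheus web port (9090) from the args shown earlier; the path is an illustrative assumption, not a value taken from the reference:
<pre><code># Hypothetical readiness probe against the Prometheus web port.
readinessProbe:
  httpGet:
    path: /-/ready        # assumed endpoint
    port: 9090            # matches --web.listen-address=0.0.0.0:9090 above
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 10       # documented default
  failureThreshold: 3     # documented default
  timeoutSeconds: 1       # documented default
</code></pre>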
Defaults to\n # 1 second. Minimum value is 1. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n timeoutSeconds: 1\n # Resource Requests & Limits for the Stats Pod.\n resources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this container. This is\n # an alpha field and requires enabling the\n # DynamicResourceAllocation feature gate. This field is\n # immutable. It can only be set for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute resources\n # allowed. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute resources\n # required. If Requests is omitted for a container, it defaults\n # to Limits if that is explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot exceed Limits.\n # More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n # Storage - Set the path of the Loki storage.\n storage: \"/opt/gpudb/kagent/stats/storage/loki-storage\"\n # Which vmss/node group etc. to use as the NodeSelector\n pool: \"compute\"\n # Prometheus - Prometheus specific configuration.\n prometheus:\n # Set the arguments for the command within the container to run.\n args:\n [\"-c\",\"/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug\n --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090\n --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage\n --storage.tsdb.retention.time=7d --web.enable-lifecycle\"]\n # Set the command within the container to run.\n command: [\"/bin/sh\"]\n # ConfigFile - Set the location of the Loki configuration file.\n configFile: \"/opt/gpudb/kagent/stats/loki/loki.yml\"\n # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a\n # ConfigMap\n configFileAsConfigMap: true\n # The port that Stats will be running on. It runs only on the head\n # node pod in the cluster. Default: 9091\n containerPort:\n # Number of port to expose on the pod's IP address. This must be\n # a valid port number, 0 < x < 65536.\n containerPort: 1\n # What host IP to bind the external port to.\n hostIP: string\n # Number of port to expose on the host. If specified, this must\n # be a valid port number, 0 < x < 65536. If HostNetwork is\n # specified, this must match ContainerPort. Most containers do\n # not need this.\n hostPort: 1\n # If specified, this must be an IANA_SVC_NAME and unique within\n # the pod. Each named port in a pod must have a unique name.\n # Name for the port that can be referred to by services.\n name: string\n # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n # to \"TCP\".\n protocol: \"TCP\"\n # List of environment variables to set in the container.\n env:\n - name: string\n # Variable references $(VAR_NAME) are expanded using the\n # previously defined environment variables in the container and\n # any service environment variables. If a variable cannot be\n # resolved, the reference in the input string will be\n # unchanged. Double $$ are reduced to a single $, which allows\n # for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will\n # produce the string literal \"$(VAR_NAME)\". Escaped references\n # will never be expanded, regardless of whether the variable\n # exists or not. Defaults to \"\".\n value: string\n # Source for the environment variable's value. 
Cannot be used if\n # value is not empty.\n valueFrom:\n # Selects a key of a ConfigMap.\n configMapKeyRef:\n # The key to select.\n key: string\n # Name of the referent. More info:\n # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n # TODO: Add other useful fields. apiVersion, kind, uid?\n name: string\n # Specify whether the ConfigMap or its key must be defined\n optional: true\n # Selects a field of the pod: supports metadata.name,\n # metadata.namespace, `metadata.labels\n # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,\n # spec.serviceAccountName, status.hostIP, status.podIP,\n # status.podIPs.\n fieldRef:\n # Version of the schema the FieldPath is written in terms\n # of, defaults to \"v1\".\n apiVersion: app.kinetica.com/v1\n # Path of the field to select in the specified API version.\n fieldPath: string\n # Selects a resource of the container: only resources limits\n # and requests (limits.cpu, limits.memory,\n # limits.ephemeral-storage, requests.cpu, requests.memory and\n # requests.ephemeral-storage) are currently supported.\n resourceFieldRef:\n # Container name: required for volumes, optional for env\n # vars\n containerName: string\n # Specifies the output format of the exposed resources,\n # defaults to \"1\"\n divisor: \n # Required: resource to select\n resource: string\n # Selects a key of a secret in the pod's namespace\n secretKeyRef:\n # The key of the secret to select from. Must be a valid\n # secret key.\n key: string\n # Name of the referent. More info:\n # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n # TODO: Add other useful fields. apiVersion, kind, uid?\n name: string\n # Specify whether the Secret or its key must be defined\n optional: true\n # Set the name of the container image to use.\n image:\n # Set the policy for pulling container images.\n imagePullPolicy: \"IfNotPresent\"\n # ImagePullSecrets is an optional list of references to secrets\n # in the same gpudb-namespace to use for pulling any of the\n # images used by this PodSpec. If specified, these secrets will\n # be passed to individual puller implementations for them to\n # use. For example, in the case of docker, only DockerConfig\n # type secrets are honored.\n imagePullSecrets:\n - name: string\n # The image registry & optional port containing the repository.\n registry: \"docker.io\"\n # The image repository path.\n repository: \"kineticadevcloud/\"\n # SemVer = Semantic Version for the Tag SemVer semver.Version\n semVer: string\n # The image sha.\n sha: \"\"\n # The image tag.\n tag: \"v7.1.5.2\"\n # Whether to enable the Stats Server on the Cluster. Default:\n # true\n isEnabled: true\n # Periodic probe of container liveness. Container will be\n # restarted if the probe fails. Cannot be updated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n livenessProbe:\n # Exec specifies the action to take.\n exec:\n # Command is the command line to execute inside the container,\n # the working directory for the command is root ('/') in the\n # container's filesystem. The command is simply exec'd, it is\n # not run inside a shell, so traditional shell instructions\n # ('|', etc) won't work. To use a shell, you need to\n # explicitly call out to that shell. Exit status of 0 is\n # treated as live/healthy and non-zero is unhealthy.\n command: [\"string\"]\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. 
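The env list above follows the standard Kubernetes EnvVar shape, including the $(VAR_NAME) expansion and $$ escaping rules described in the comments. A minimal sketch pairing a literal value with a downward-API reference (the variable names are hypothetical):
<pre><code>env:
  - name: LOG_LEVEL                     # hypothetical variable
    value: "debug"
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace   # supported per the fieldRef comment above
</code></pre>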
Minimum value\n # is 1.\n failureThreshold: 1\n # GRPC specifies an action involving a GRPC port.\n grpc:\n # Port number of the gRPC service. Number must be in the range\n # 1 to 65535.\n port: 1\n # Service is the name of the service to place in the gRPC\n # HealthCheckRequest\n # (see\n # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n # If this is not specified, the default behavior is defined\n # by gRPC.\n service: string\n # HTTPGet specifies the http request to perform.\n httpGet:\n # Host name to connect to, defaults to the pod IP. You\n # probably want to set \"Host\" in httpHeaders instead.\n host: string\n # Custom headers to set in the request. HTTP allows repeated\n # headers.\n httpHeaders:\n - name: string\n # The header field value\n value: string\n # Path to access on the HTTP server.\n path: string\n # Name or number of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Scheme to use for connecting to the host. Defaults to HTTP.\n scheme: string\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 1\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 1\n # Minimum consecutive successes for the probe to be considered\n # successful after having failed. Defaults to 1. Must be 1 for\n # liveness and startup. Minimum value is 1.\n successThreshold: 1\n # TCPSocket specifies an action involving a TCP port.\n tcpSocket:\n # Optional: Host name to connect to, defaults to the pod IP.\n host: string\n # Number or name of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Optional duration in seconds the pod needs to terminate\n # gracefully upon probe failure. The grace period is the\n # duration in seconds after the processes running in the pod\n # are sent a termination signal and the time when the processes\n # are forcibly halted with a kill signal. Set this value longer\n # than the expected cleanup time for your process. If this\n # value is nil, the pod's terminationGracePeriodSeconds will be\n # used. Otherwise, this value overrides the value provided by\n # the pod spec. Value must be non-negative integer. The value\n # zero indicates stop immediately via the kill signal\n # (no opportunity to shut down). This is a beta field and\n # requires enabling ProbeTerminationGracePeriod feature gate.\n # Minimum value is 1. spec.terminationGracePeriodSeconds is\n # used if unset.\n terminationGracePeriodSeconds: 1\n # Number of seconds after which the probe times out. Defaults to\n # 1 second. Minimum value is 1. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n timeoutSeconds: 1\n # Set the Prometheus logging level.\n logLevel: \"debug\"\n # Logs - Set the location of the Loki configuration file.\n logs: \"/opt/gpudb/kagent/stats/logs\" name: \"stats\"\n # Periodic probe of container service readiness. Container will be\n # removed from service endpoints if the probe fails. Cannot be\n # updated. 
More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n readinessProbe:\n # Exec specifies the action to take.\n exec:\n # Command is the command line to execute inside the container,\n # the working directory for the command is root ('/') in the\n # container's filesystem. The command is simply exec'd, it is\n # not run inside a shell, so traditional shell instructions\n # ('|', etc) won't work. To use a shell, you need to\n # explicitly call out to that shell. Exit status of 0 is\n # treated as live/healthy and non-zero is unhealthy.\n command: [\"string\"]\n # Minimum consecutive failures for the probe to be considered\n # failed after having succeeded. Defaults to 3. Minimum value\n # is 1.\n failureThreshold: 1\n # GRPC specifies an action involving a GRPC port.\n grpc:\n # Port number of the gRPC service. Number must be in the range\n # 1 to 65535.\n port: 1\n # Service is the name of the service to place in the gRPC\n # HealthCheckRequest\n # (see\n # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n # If this is not specified, the default behavior is defined\n # by gRPC.\n service: string\n # HTTPGet specifies the http request to perform.\n httpGet:\n # Host name to connect to, defaults to the pod IP. You\n # probably want to set \"Host\" in httpHeaders instead.\n host: string\n # Custom headers to set in the request. HTTP allows repeated\n # headers.\n httpHeaders:\n - name: string\n # The header field value\n value: string\n # Path to access on the HTTP server.\n path: string\n # Name or number of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Scheme to use for connecting to the host. Defaults to HTTP.\n scheme: string\n # Number of seconds after the container has started before\n # liveness probes are initiated. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n initialDelaySeconds: 1\n # How often (in seconds) to perform the probe. Default to 10\n # seconds. Minimum value is 1.\n periodSeconds: 1\n # Minimum consecutive successes for the probe to be considered\n # successful after having failed. Defaults to 1. Must be 1 for\n # liveness and startup. Minimum value is 1.\n successThreshold: 1\n # TCPSocket specifies an action involving a TCP port.\n tcpSocket:\n # Optional: Host name to connect to, defaults to the pod IP.\n host: string\n # Number or name of the port to access on the container.\n # Number must be in the range 1 to 65535. Name must be an\n # IANA_SVC_NAME.\n port: \n # Optional duration in seconds the pod needs to terminate\n # gracefully upon probe failure. The grace period is the\n # duration in seconds after the processes running in the pod\n # are sent a termination signal and the time when the processes\n # are forcibly halted with a kill signal. Set this value longer\n # than the expected cleanup time for your process. If this\n # value is nil, the pod's terminationGracePeriodSeconds will be\n # used. Otherwise, this value overrides the value provided by\n # the pod spec. Value must be non-negative integer. The value\n # zero indicates stop immediately via the kill signal\n # (no opportunity to shut down). This is a beta field and\n # requires enabling ProbeTerminationGracePeriod feature gate.\n # Minimum value is 1. spec.terminationGracePeriodSeconds is\n # used if unset.\n terminationGracePeriodSeconds: 1\n # Number of seconds after which the probe times out. 
Defaults to\n # 1 second. Minimum value is 1. More info:\n # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n timeoutSeconds: 1\n # Resource Requests & Limits for the Stats Pod.\n resources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this container. This is\n # an alpha field and requires enabling the\n # DynamicResourceAllocation feature gate. This field is\n # immutable. It can only be set for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute resources\n # allowed. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute resources\n # required. If Requests is omitted for a container, it defaults\n # to Limits if that is explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot exceed Limits.\n # More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n # Set the location of the TSDB database.\n storageTSDBPath: \"/opt/gpudb/kagent/stats/storage/prometheus-storage\"\n # Set the time to hold data in the TSDB database.\n storageTSDBRetentionTime: \"7d\"\n # Timings - Prometheus Intervals & Timeouts\n timings: evaluationInterval: \"30s\" scrapeInterval: \"30s\"\n scrapeTimeout: \"10s\"\n # Whether to share a single PV for Loki, Prometheus & Grafana or\n # have a separate PV for each. Default: true\n sharedPV: true\n # Resource block specifically for use with SharedPV = true to set\n # storage `requests` & `limits`\n sharedPVResources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this container. This is\n # an alpha field and requires enabling the\n # DynamicResourceAllocation feature gate. This field is\n # immutable. It can only be set for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute resources\n # allowed. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute resources\n # required. If Requests is omitted for a container, it defaults\n # to Limits if that is explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot exceed Limits.\n # More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n # Supporting images like socat,busybox etc.\n supportingImages:\n # Set the resource requests/limits for the BusyBox Pod(s).\n busyBoxResources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this container. This is\n # an alpha field and requires enabling the\n # DynamicResourceAllocation feature gate. This field is\n # immutable. It can only be set for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute resources\n # allowed. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute resources\n # required. If Requests is omitted for a container, it defaults\n # to Limits if that is explicitly specified, otherwise to an\n # implementation-defined value. 
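As described above, sharedPV selects a single PersistentVolume shared by Loki, Prometheus & Grafana, and sharedPVResources then sizes that shared volume. A minimal sketch; the storage quantities are illustrative assumptions:
<pre><code># Hypothetical sizing for the shared stats PV.
sharedPV: true
sharedPVResources:
  requests:
    storage: "10Gi"   # assumed size
  limits:
    storage: "20Gi"   # assumed size
</code></pre>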
Requests cannot exceed Limits.\n # More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n # Set the name of the container image to use.\n busybox:\n # Set the policy for pulling container images.\n imagePullPolicy: \"IfNotPresent\"\n # ImagePullSecrets is an optional list of references to secrets in\n # the same gpudb-namespace to use for pulling any of the images\n # used by this PodSpec. If specified, these secrets will be\n # passed to individual puller implementations for them to use.\n # For example, in the case of docker, only DockerConfig type\n # secrets are honored.\n imagePullSecrets:\n - name: string\n # The image registry & optional port containing the repository.\n registry: \"docker.io\"\n # The image repository path.\n repository: \"kineticadevcloud/\"\n # SemVer = Semantic Version for the Tag SemVer semver.Version\n semVer: string\n # The image sha.\n sha: \"\"\n # The image tag.\n tag: \"v7.1.5.2\"\n # Set the name of the container image to use.\n socat:\n # Set the policy for pulling container images.\n imagePullPolicy: \"IfNotPresent\"\n # ImagePullSecrets is an optional list of references to secrets in\n # the same gpudb-namespace to use for pulling any of the images\n # used by this PodSpec. If specified, these secrets will be\n # passed to individual puller implementations for them to use.\n # For example, in the case of docker, only DockerConfig type\n # secrets are honored.\n imagePullSecrets:\n - name: string\n # The image registry & optional port containing the repository.\n registry: \"docker.io\"\n # The image repository path.\n repository: \"kineticadevcloud/\"\n # SemVer = Semantic Version for the Tag SemVer semver.Version\n semVer: string\n # The image sha.\n sha: \"\"\n # The image tag.\n tag: \"v7.1.5.2\"\n # Set the resource requests/limits for the Socat Pod.\n socatResources:\n # Claims lists the names of resources, defined in\n # spec.resourceClaims, that are used by this container. This is\n # an alpha field and requires enabling the\n # DynamicResourceAllocation feature gate. This field is\n # immutable. It can only be set for containers.\n claims:\n - name: string\n # Limits describes the maximum amount of compute resources\n # allowed. More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n limits: {}\n # Requests describes the minimum amount of compute resources\n # required. If Requests is omitted for a container, it defaults\n # to Limits if that is explicitly specified, otherwise to an\n # implementation-defined value. Requests cannot exceed Limits.\n # More info:\n # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n requests: {}\n# KineticaClusterStatus defines the observed state of KineticaCluster\nstatus:\n # CloudProvider the DB is deployed on\n cloudProvider: string\n # CloudRegion the DB is deployed on\n cloudRegion: string\n # ClusterSize the current number of ranks & type i.e. CPU or GPU of\n # the cluster\n clusterSize:\n # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a\n # representation of the number of nodes in a simple to understand\n # T-Shirt size scheme. This indicates the size of the cluster i.e.\n # the number of nodes. It does not identify the size of the cloud\n # provider nodes. For node size see ClusterTypeEnum. Supported\n # Values are: - XS S M L XL XXL XXXL\n tshirtSize: string\n # ClusterTypeEnum - An Enum of the node types of a KineticaCluster\n # e.g. 
CPU, GPU along with the Cloud Provider node size e.g. size\n # of the VM.\n tshirtType: string\n # The number of ranks (replicas) that the cluster was last run with\n currentReplicas: 0\n # The first start of a new cluster has completed.\n firstStartComplete: false\n # HostManagerStatusResponse - The contents of polling the HostManager\n # on port 9300 are added to the BR status field. This allows clients\n # to get the Host/Rank/Graph/ML status information.\n hmStatus: cluster_leader: string cluster_operation: string graph:\n status: string graph_status: string host_httpd_status: string\n host_mode: string host_num_gpus: string host_pid: 1\n host_stats_status: string host_status: string hostname: string hosts:\n graph_status: string host_httpd_status: string host_mode: string\n host_pid: 1 host_stats_status: string host_status: string ml_status:\n string query_planner_status: string reveal_status: string\n license_expiration: string license_status: string license_type:\n string ml_status: string query_planner_status: string ranks: mode:\n string\n # Pid - The OS Process Id for the Rank.\n pid: 1 status: string reveal_status: string system_idle_time:\n string system_mode: string system_rebalancing: 1 system_status:\n string text: status: string version: string\n # The fully qualified Ingress routes.\n ingressUrls: aaw: string dbMonitor: string files: string gadmin:\n string postgresProxy: string ranks: {} reveal: string\n # The fully qualified in-cluster Ingress routes.\n internalIngressUrls: aaw: string dbMonitor: string files: string\n gadmin: string postgresProxy: string ranks: {} reveal: string\n # Identify FreeSaaS Cluster\n isFreeSaaS: false\n # HostOptions used during DB Cluster Scaling Functions\n options: ram_limit: 1\n # OutstandingBilling - A list of hours not yet billed for. 
Will only\n # be present if the plan is Pay As You Go and the operator was unable\n # to send the billing information due to an issue with the cloud\n # provider's billing APIs.\n outstandingBillableHour:\n - billable: true billed: true billedAt: string duration: string end:\n string start: string\n # The state or phase of the current DB installation\n phase: string\n</code></pre>","tags":["Reference"]},{"location":"Reference/kinetica_workbench/","title":"Workbench CRD Reference","text":"","tags":["Reference"]},{"location":"Reference/kinetica_workbench/#coming-soon","title":"Coming Soon","text":"","tags":["Reference"]},{"location":"Reference/workbench/","title":"Kinetica Workbench Configuration","text":"<ul> <li>kubectl (yaml)</li> <li>Helm Chart</li> </ul>","tags":["Reference"]},{"location":"Reference/workbench/#workbench","title":"Workbench","text":"kubectl <p>Using kubectl a CustomResource of type <code>Workbench</code> is used to define a new Kinetica Workbench in a yaml file.</p> <p>The basic Group, Version, Kind or GVK to instantiate a Kinetica Workbench is as follows: -</p> Workbench GVK<pre><code>apiVersion: workbench.com.kinetica/v1\nkind: Workbench\n</code></pre>","tags":["Reference"]},{"location":"Reference/workbench/#metadata","title":"Metadata","text":"<p>to which we add a <code>metadata:</code> block for the name of the Workbench CR along with the <code>namespace</code> into which we are targeting the installation of the Workbench.</p> Workbench metadata<pre><code>apiVersion: workbench.com.kinetica/v1\nkind: Workbench\nmetadata:\n name: workbench-kinetica-cluster\n namespace: gpudb\n</code></pre> <p>The simplest valid Workbench CR looks as follows: -</p> workbench.yaml<pre><code>apiVersion: workbench.com.kinetica/v1\nkind: Workbench\nmetadata:\n name: workbench-kinetica-cluster\n namespace: gpudb\nspec:\n executeSqlLimit: 10000\n fqdn: kinetica-cluster.saas.kinetica.com\n image: kinetica/workbench:v7.1.9-8.rc1\n letsEncrypt:\n enabled: false\n userIdleTimeout: 60\n ingressController: nginx-ingress\n</code></pre> <p><code>1. fqdn</code> - the fully qualified domain name (FQDN) on which the Workbench is served</p> <p><code>2. image</code> - the Workbench container image to deploy</p> helm","tags":["Reference"]},{"location":"Setup/","title":"Kinetica for Kubernetes Setup","text":"<ul> <li> <p> Set up in 15 minutes </p> <p>Install the Kinetica DB locally on <code>Kind</code> or <code>k3s</code> with <code>helm</code> to get up and running in minutes. Quickstart</p> </li> <li> <p> Prepare to Install</p> <p>What you need to know & do before beginning a production installation. Preparation and Prerequisites</p> </li> <li> <p> Production DB Installation</p> <p>Install the Kinetica DB with helm to get up and running quickly. Installation</p> </li> <li> <p> Channel Your Inner Ninja</p> <p>Advanced Installation Topics which go beyond the basic installation. Advanced Topics</p> </li> </ul>","tags":["Getting Started","Installation"]},{"location":"Support/","title":"Support","text":"<ul> <li> <p> Taking the next steps</p> <p>Further tutorials or help on configuring Kinetica in different environments. Help & Tutorials</p> </li> <li> <p> Locating Issues</p> <p>In the unlikely event you require information on how to troubleshoot your installation, help can be found here. Troubleshooting</p> </li> <li> <p> FAQ</p> <p>Frequently Asked Questions. 
FAQ</p> </li> </ul>","tags":["Support"]},{"location":"Troubleshooting/troubleshooting/","title":"Troubleshooting","text":"","tags":["Support"]},{"location":"Troubleshooting/troubleshooting/#coming-soon","title":"Coming Soon","text":"","tags":["Support"]},{"location":"tags/","title":"Categories","text":"<p>Following is a list of relevant documentation categories:</p>"},{"location":"tags/#aks","title":"AKS","text":"<ul> <li>Azure AKS</li> </ul>"},{"location":"tags/#administration","title":"Administration","text":"<ul> <li>Administration</li> <li>Grant management</li> <li>Resource group management</li> <li>Role Management</li> <li>Schema management</li> <li>User Management</li> <li>Kinetica Cluster Grants Reference</li> <li>Kinetica Cluster Resource Groups Reference</li> <li>Kinetica Cluster Roles Reference</li> <li>Kinetica Cluster Schemas Reference</li> <li>Kinetica Cluster Users Reference</li> </ul>"},{"location":"tags/#advanced","title":"Advanced","text":"<ul> <li>Advanced</li> <li> Advanced Topics</li> <li>Air-Gapped Environments</li> <li>Alternative Charts</li> <li>Ingress Configuration</li> <li> <code>ingress-nginx</code> Ingress Configuration</li> <li>Kinetica DB on OS X (Arm64)</li> <li>Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations</li> <li>Bare Metal/VM Installation - <code>kubeadm</code></li> <li>S3 Storage for Dev/Test</li> <li> <code>nginx-ingress</code> Ingress Configuration</li> </ul>"},{"location":"tags/#architecture","title":"Architecture","text":"<ul> <li>Architecture</li> <li>Core Database Architecture</li> <li>Kubernetes Architecture</li> </ul>"},{"location":"tags/#configuration","title":"Configuration","text":"<ul> <li>Ingress Configuration</li> <li> <code>ingress-nginx</code> Ingress Configuration</li> <li> <code>nginx-ingress</code> Ingress Configuration</li> <li>How to change the Clusters FQDN</li> <li>OpenTelemetry</li> </ul>"},{"location":"tags/#development","title":"Development","text":"<ul> <li>Kinetica DB on OS X (Arm64)</li> <li>S3 Storage for Dev/Test</li> <li>Quickstart</li> </ul>"},{"location":"tags/#eks","title":"EKS","text":"<ul> <li>Amazon EKS</li> </ul>"},{"location":"tags/#getting-started","title":"Getting Started","text":"<ul> <li>Getting Started</li> <li>Azure AKS</li> <li>Amazon EKS</li> <li>Preparation & Prerequisites</li> <li>Quickstart</li> <li>Kinetica for Kubernetes Setup</li> </ul>"},{"location":"tags/#ingress","title":"Ingress","text":"<ul> <li>Ingress Configuration</li> <li> <code>ingress-nginx</code> Ingress Configuration</li> <li>Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations</li> <li> <code>nginx-ingress</code> Ingress Configuration</li> </ul>"},{"location":"tags/#installation","title":"Installation","text":"<ul> <li>Air-Gapped Environments</li> <li>Alternative Charts</li> <li>Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations</li> <li>Bare Metal/VM Installation - <code>kubeadm</code></li> <li>S3 Storage for Dev/Test</li> <li>Getting Started</li> <li>Kinetica for Kubernetes Installation</li> <li>CPU</li> <li>GPU</li> <li>Preparation & Prerequisites</li> <li>Quickstart</li> <li> Core DB CRDs</li> <li>Kinetica for Kubernetes Setup</li> </ul>"},{"location":"tags/#monitoring","title":"Monitoring","text":"<ul> <li>Logs</li> <li> Metrics Collection & Display</li> <li>OpenTelemetry</li> </ul>"},{"location":"tags/#operations","title":"Operations","text":"<ul> <li>Logs</li> <li> Metrics Collection & Display</li> <li>Operational Management</li> <li>Kinetica for Kubernetes Backup & Restore</li> 
<li>OpenTelemetry</li> <li>Kinetica for Kubernetes Data Rebalancing</li> <li>Kinetica for Kubernetes Suspend & Resume</li> <li>Kinetica Cluster Backups Reference</li> <li> Core DB CRDs</li> <li>Kinetica Cluster Restores Reference</li> </ul>"},{"location":"tags/#reference","title":"Reference","text":"<ul> <li>Reference Section</li> <li>Kinetica Database Configuration</li> <li>Kinetica Operators</li> <li>Kinetica Cluster Admins Reference</li> <li>Kinetica Cluster Backups Reference</li> <li>Kinetica Cluster Grants Reference</li> <li> Core DB CRDs</li> <li>Kinetica Cluster Resource Groups Reference</li> <li>Kinetica Cluster Restores Reference</li> <li>Kinetica Cluster Roles Reference</li> <li>Kinetica Cluster Schemas Reference</li> <li>Kinetica Cluster Users Reference</li> <li>Kinetica Clusters Reference</li> <li>Kinetica Workbench Reference</li> <li>Kinetica Workbench Configuration</li> </ul>"},{"location":"tags/#storage","title":"Storage","text":"<ul> <li>S3 Storage for Dev/Test</li> <li>Amazon EKS</li> </ul>"},{"location":"tags/#support","title":"Support","text":"<ul> <li>How to change the Clusters FQDN</li> <li>FAQ</li> <li>Help & Tutorials</li> <li>Creating Users, Roles, Schemas and other Kinetica DB Objects</li> <li>Support</li> <li>Troubleshooting</li> </ul>"}]} \ No newline at end of file diff --git a/7.2/sitemap.xml.gz b/7.2/sitemap.xml.gz index 00b72d0..cb9d526 100644 Binary files a/7.2/sitemap.xml.gz and b/7.2/sitemap.xml.gz differ
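Once a Workbench CR such as the workbench.yaml shown in the Kinetica Workbench Configuration section above has been written, it is applied like any other custom resource; a minimal sketch, reusing the name and namespace from that example:
<pre><code>kubectl -n gpudb apply -f workbench.yaml
# addressing the CR by its Kind name is an assumption about the CRD's resource names
kubectl -n gpudb get workbench workbench-kinetica-cluster
</code></pre>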