Commit cdb15b5
Deployed fd12123 to 7.2 with MkDocs 1.5.3 and mike 2.0.0
Don Stewart committed Apr 15, 2024
1 parent 1c53f95 commit cdb15b5
Showing 3 changed files with 4 additions and 11 deletions.
13 changes: 3 additions & 10 deletions 7.2/GettingStarted/installation_gpu/index.html
```shell
--set db.gpudbCluster.config.ai.apiProvider="kineticallm"
```

!!! warning "On-Prem Kinetica SQLAssistant - Node Groups, GPU Counts & VRAM Memory"

    Running the Kinetica SQLAssistant locally requires additional GPUs to be available in a separate node group labeled `app.kinetica.com/pool=compute-llm`. The On-Prem Kinetica LLM requires **40GB of GPU VRAM** (VRAM is, in principle, the same thing as CPU system RAM, but for use by the GPU), so the number of GPUs automatically allocated to the SQLAssistant pod will ensure that 40GB of VRAM is available, e.g. 1x A100 GPU or 2x A10G GPUs.

    ```shell title="Label Kubernetes nodes for the LLM"
    kubectl label node k8snode3 app.kinetica.com/pool=compute-llm
    ```
### Check installation progress

After a few moments, follow the progression of the main database pod startup with:

```shell title="Monitor the Kinetica installation progress"
kubectl -n gpudb get po gpudb-0 -w
```
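For reference, the watch output progresses roughly as follows while the pod's containers come up (illustrative values only; statuses, restart counts, and ages will vary):

```text
NAME      READY   STATUS              RESTARTS   AGE
gpudb-0   0/3     ContainerCreating   0          1m
gpudb-0   2/3     Running             0          4m
gpudb-0   3/3     Running             0          6m
```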
Wait until it reaches `gpudb-0 3/3 Running`, at which point the database should be ready and all other software installed in the cluster. You may have to run this command in a different terminal if the `helm` command from step 4 has not yet returned to the system prompt. Once running, you can quit this `kubectl` watch command with ++ctrl+c++.

??? failure "error: no pod named gpudb-0"

    If running `kubectl -n gpudb get po gpudb-0 -w` returns an error informing you that no pod named `gpudb-0` exists, check that the OpenLDAP pod is running:

    ```shell title="Check OpenLDAP status"
    kubectl -n gpudb get pods
    kubectl -n gpudb describe pod openldap-5f87f77c8b-trpmf
    ```

    where the pod name `openldap-5f87f77c8b-trpmf` is the one shown by `kubectl -n gpudb get pods`.

    Validate whether the pod is waiting for its Persistent Volume Claim/Persistent Volume to be created and bound to the pod.
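    A minimal way to check that, assuming the claims live in the same `gpudb` namespace:

    ```shell title="Check PVC/PV binding"
    # Claims should report STATUS "Bound"; "Pending" means no volume has been provisioned yet
    kubectl -n gpudb get pvc

    # PersistentVolumes are cluster-scoped, so no namespace flag here
    kubectl get pv
    ```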
### Accessing the Kinetica installation

## Target Platform Specifics

=== "cloud"

    If you are installing into a managed Kubernetes environment and the NGINX ingress controller installed as part of this install creates a LoadBalancer service, you may need to associate the LoadBalancer with the domain you plan to use.

    As of now, the kinetica-operator chart installs the NGINX ingress controller, so after the installation is complete you may need to edit the KineticaCluster Custom Resource and the Workbench Custom Resource with the correct domain name.

    Option 1: Use the LoadBalancer domain

    ```shell title="Set your FQDN in Kinetica"
    kubectl get svc -n kinetica-system
    # look at the loadbalancer dns name, copy it

    kubectl -n gpudb edit $(kubectl -n gpudb get kc -o name)
    ```
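    The value to copy is the `EXTERNAL-IP` of the ingress controller's LoadBalancer service. Illustrative output only (the service name `ingress-nginx-controller` and the hostname below are assumptions, not values from this install):

    ```text
    NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP                      PORT(S)
    ingress-nginx-controller   LoadBalancer   10.0.142.88    a1b2c3d4e5f6.elb.amazonaws.com   80:31090/TCP,443:31443/TCP
    ```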
2 changes: 1 addition & 1 deletion 7.2/search/search_index.json

Large diffs are not rendered by default.

Binary file modified 7.2/sitemap.xml.gz
Binary file not shown.
