diff --git a/7.2/Advanced/airgapped/index.html b/7.2/Advanced/airgapped/index.html index 0dd7bd1..05ada25 100644 --- a/7.2/Advanced/airgapped/index.html +++ b/7.2/Advanced/airgapped/index.html @@ -14,14 +14,14 @@

If --containerd-namespace is not specified, images will be imported into the k8s.io namespace.

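For example, to import a bundle explicitly into the k8s.io namespace (the bundle filename images.tar is assumed here), the call looks like this:

mindthegap import image-bundle
mindthegap import image-bundle --image-bundle images.tar --containerd-namespace k8s.io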
sudo required

Depending on how containerd has been installed and configured, it may be necessary to run the above command with sudo.

mindthegap - Import to an internal OCI Registry

mindthegap push bundle
mindthegap push bundle --bundle <path/to/bundle.tar> \
 --to-registry <registry.address> \
 [--to-registry-insecure-skip-tls-verify]
-

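As a concrete sketch, assuming the bundle created earlier is images.tar and an internal registry is reachable at registry.example.internal:5000 (a hypothetical address):

mindthegap push bundle (example)
mindthegap push bundle --bundle images.tar \
 --to-registry registry.example.internal:5000 \
 --to-registry-insecure-skip-tls-verify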
It is possible with containerd to pull images, save them, and load them either into a container registry in the air-gapped environment or directly into another containerd instance.

If the target containerd is on a node running a Kubernetes cluster, then Kubernetes will source these images from the loaded images, via CRI, with no requirement to pull them from an external source, e.g. a registry or mirror.

sudo required

Depending on how containerd has been installed and configured, many of the example calls below may require running with sudo.

containerd - Using containerd to pull and export an image

Similar to docker pull, we can use ctr image pull to pull the core Kinetica DB CPU-based image:

Pull a remote image (containerd)
ctr image pull docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-2.ga-4
-

We now need to export the pulled image as an archive to the local filesystem.

Export a local image (containerd)
ctr image export kinetica-k8s-cpu-v7.2.2-2.ga-4.tar \
-docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-2.ga-4
-

We can now transfer this archive (kinetica-k8s-cpu-v7.2.2-2.ga-4.tar) to the Kubernetes Node inside the air-gapped environment.

containerd - Using containerd to import an image

Using containerd to import an image onto a Kubernetes node on which a Kinetica Cluster is running.

Import the Images
ctr -n=k8s.io images import kinetica-k8s-cpu-v7.2.2-2.ga-4.tar
-

-n=k8s.io

It is possible to use ctr images import kinetica-k8s-cpu-v7.2.2-2.ga-4.tar to import the image to containerd.

However, in order for the image to be visible to the Kubernetes cluster running on containerd, it is necessary to add the parameter -n=k8s.io.

containerd - Verifying the image is available

To verify the image is loaded into containerd, run the following on the node:

Verify containerd Images
ctr image ls
+

It is possible with containerd to pull images, save them, and load them either into a container registry in the air-gapped environment or directly into another containerd instance.

If the target containerd is on a node running a Kubernetes cluster, then Kubernetes will source these images from the loaded images, via CRI, with no requirement to pull them from an external source, e.g. a registry or mirror.

sudo required

Depending on how containerd has been installed and configured, many of the example calls below may require running with sudo.

containerd - Using containerd to pull and export an image

Similar to docker pull, we can use ctr image pull to pull the core Kinetica DB CPU-based image:

Pull a remote image (containerd)
ctr image pull docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2
+

We now need to export the pulled image as an archive to the local filesystem.

Export a local image (containerd)
ctr image export kinetica-k8s-cpu-v7.2.2-3.ga-2.tar \
+docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2
+

We can now transfer this archive (kinetica-k8s-cpu-v7.2.2-3.ga-2.tar) to the Kubernetes Node inside the air-gapped environment.

containerd - Using containerd to import an image

Using containerd to import an image onto a Kubernetes node on which a Kinetica Cluster is running.

Import the Images
ctr -n=k8s.io images import kinetica-k8s-cpu-v7.2.2-3.ga-2.tar
+

-n=k8s.io

It is possible to use ctr images import kinetica-k8s-cpu-v7.2.2-3.ga-2.tar to import the image to containerd.

However, in order for the image to be visible to the Kubernetes cluster running on containerd, it is necessary to add the parameter -n=k8s.io.

containerd - Verifying the image is available

To verify the image is loaded into containerd, run the following on the node:

Verify containerd Images
ctr image ls
 

To verify the image is visible to Kubernetes on the node, run the following:

Verify CRI Images
crictl images
-

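Both listings can be long; filtering for the Kinetica images with a simple grep (assuming a standard shell) makes the check easier:

Filter for Kinetica Images
ctr -n k8s.io image ls | grep kinetica
crictl images | grep kinetica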
It is possible with docker to pull images, save them, and load them into an OCI Container Registry in the air-gapped environment.

Pull a remote image (docker)
docker pull --platform linux/amd64 docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-2.ga-4
-
Export a local image (docker)
docker save -o kinetica-k8s-cpu-v7.2.2-2.ga-4.tar \
-docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-2.ga-4
-

We can now transfer this archive (kinetica-k8s-cpu-v7.2.2-2.ga-4.tar) to the Kubernetes Node inside the air-gapped environment.

docker - Using docker to import an image

Using docker to import an image onto a Kubernetes node on which a Kinetica Cluster is running.

Import the Images
docker load -i kinetica-k8s-cpu-v7.2.2-2.ga-4.tar
+

It is possible with docker to pull images, save them, and load them into an OCI Container Registry in the air-gapped environment.

Pull a remote image (docker)
docker pull --platform linux/amd64 docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2
+
Export a local image (docker)
docker save -o kinetica-k8s-cpu-v7.2.2-3.ga-2.tar \
+docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2
+

We can now transfer this archive (kinetica-k8s-cpu-v7.2.2-3.ga-2.tar) to the Kubernetes Node inside the air-gapped environment.

docker - Using docker to import an image

Using docker to import an image onto a Kubernetes node on which a Kinetica Cluster is running.

Import the Images
docker load -i kinetica-k8s-cpu-v7.2.2-3.ga-2.tar
 
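To make the loaded image available to the internal OCI registry, re-tag it and push it. A minimal sketch, with <registry.address> and the kinetica repository as placeholders:

Tag & Push the Image (docker)
docker tag docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2 <registry.address>/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2
docker push <registry.address>/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2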

\ No newline at end of file diff --git a/7.2/GettingStarted/preparation_and_prerequisites/index.html b/7.2/GettingStarted/preparation_and_prerequisites/index.html index 3d80f59..2c1f57a 100644 --- a/7.2/GettingStarted/preparation_and_prerequisites/index.html +++ b/7.2/GettingStarted/preparation_and_prerequisites/index.html @@ -15,7 +15,7 @@

whilst the Kubernetes cluster nodes selected to host the Kinetica DB Pods require the following label app.kinetica.com/pool=compute-gpu.

Label the Database Nodes
    kubectl label node k8snode2 app.kinetica.com/pool=compute-gpu
 

On-Prem Kinetica SQLAssistant - Node Groups, GPU Counts & VRAM

Running the Kinetica SQLAssistant locally requires additional GPUs to be available in a separate Node Group labeled app.kinetica.com/pool=compute-llm. The On-Prem Kinetica LLM requires 40GB of GPU VRAM to run; the number of GPUs automatically allocated to the SQLAssistant pod will therefore ensure that 40GB of VRAM is available, e.g. 1x A100 GPU or 2x A10G GPU.

Label Kubernetes Nodes for LLM
kubectl label node k8snode3 app.kinetica.com/pool=compute-llm
 
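To confirm the labeled node exposes enough GPUs to satisfy the 40GB VRAM requirement, you can inspect its advertised GPU capacity. A sketch assuming the NVIDIA device plugin is installed and the node name k8snode3 from the example above:

Check GPU capacity on the node
kubectl get node k8snode3 -o jsonpath='{.status.capacity.nvidia\.com/gpu}'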

Pods Not Scheduling

If the Kubernetes nodes are not labeled, the Kinetica pods may fail to schedule and sit in a 'Pending' state.

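To diagnose this, list any Pending pods and inspect their scheduling events. A sketch assuming the gpudb namespace used elsewhere in this documentation (the pod name is a placeholder):

Find Pending Pods
kubectl -n gpudb get pods --field-selector=status.phase=Pending
kubectl -n gpudb describe pod <pod-name>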
Install the kinetica-operators chart

This chart will install the Kinetica K8s operators together with a default configured database and workbench UI.

Add the Kinetica chart repository

Add the repo locally as kinetica-operators:

Helm repo add
helm repo add kinetica-operators https://kineticadb.github.io/charts/latest
-
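If the repo was added previously, refresh your local index so the latest chart versions are visible (a standard Helm step):

Helm repo update
helm repo update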

Obtain the default Helm values file

For the generic Kubernetes install use the following values file without modification. Advanced users with specific requirements may need to adjust parameters in this file.

Helm values.yaml download
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.2/kinetica-operators/values.onPrem.k8s.yaml
+

Obtain the default Helm values file

For the generic Kubernetes install use the following values file without modification. Advanced users with specific requirements may need to adjust parameters in this file.

Helm values.yaml download
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/values.onPrem.k8s.yaml
 

Determine the following prior to the chart install

Default Admin User

The default admin user in the Helm chart is kadmin, but this is configurable. Non-ASCII characters and typographical symbols in the password must be escaped with a "\". For example, --set dbAdminUser.password="MyPassword\!"

  1. Obtain a LICENSE-KEY as described in the introduction above.
  2. Choose a PASSWORD for the initial administrator user.
  3. As the storage class name varies between K8s flavors and/or there can be multiple, this must be specified during the chart installation. Obtain the DEFAULT-STORAGE-CLASS name with the command:


Find the default storageclass
kubectl get sc -o name 
 

Use the name found after the /. For example, in storageclass.storage.k8s.io/local-path, use "local-path" as the parameter.
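The storage class name is then passed to the chart at install time via the global.defaultStorageClass parameter used in the install examples; a sketch using the local-path example above:

Set the default storage class
--set global.defaultStorageClass="local-path"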

Amazon EKS

If installing on Amazon EKS, see here.

Planning access to your Kinetica Cluster

Existing Ingress Controller?

If you have an existing Ingress Controller in your Kubernetes cluster and do not want Kinetica to install an ingress-nginx to expose its endpoints, then you can disable the ingress-nginx installation in the values.yaml by editing the file and changing install: true to install: false:

values.yaml excerpt
 nodeSelector: {}
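A minimal sketch of the resulting values.yaml fragment, assuming the flag sits under the ingressNginx key (as suggested by the chart's ingressNginx.install dependency condition):

ingressNginx:
  install: false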
diff --git a/7.2/GettingStarted/quickstart/index.html b/7.2/GettingStarted/quickstart/index.html
index cc50180..b9f3a18 100644
--- a/7.2/GettingStarted/quickstart/index.html
+++ b/7.2/GettingStarted/quickstart/index.html
@@ -7,17 +7,17 @@
     .gdesc-inner { font-size: 0.75rem; }
     body[data-md-color-scheme="slate"] .gdesc-inner { background: var(--md-default-bg-color);}
     body[data-md-color-scheme="slate"] .gslide-title { color: var(--md-default-fg-color);}
-    body[data-md-color-scheme="slate"] .gslide-desc { color: var(--md-default-fg-color);}     

Quickstart

For the quickstart we have examples for Kind or k3s.

  • Kind - is suitable for CPU only installations.
  • k3s - is suitable for CPU or GPU installations.

Kubernetes >= 1.25

The current version of the chart supports Kubernetes version 1.25 and above.

Please select your target Kubernetes variant:

Kind (kubernetes in docker kind.sigs.k8s.io)

This installation in a kind cluster is for trying out the operators and the database in a non-production environment.

CPU Only

This method currently only supports installing a CPU version of the database.

Please contact Kinetica Support to request a trial key.

Create Kind Cluster 1.29

Create a new Kind Cluster
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.2/kinetica-operators/kind.yaml
+    body[data-md-color-scheme="slate"] .gslide-desc { color: var(--md-default-fg-color);}      

Quickstart

For the quickstart we have examples for Kind or k3s.

  • Kind - is suitable for CPU only installations.
  • k3s - is suitable for CPU or GPU installations.

Kubernetes >= 1.25

The current version of the chart supports Kubernetes version 1.25 and above.

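To check what version your cluster reports before installing, a plain kubectl call is enough:

Check the Kubernetes version
kubectl version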
Please select your target Kubernetes variant:

Kind (kubernetes in docker kind.sigs.k8s.io)

This installation in a kind cluster is for trying out the operators and the database in a non-production environment.

CPU Only

This method currently only supports installing a CPU version of the database.

Please contact Kinetica Support to request a trial key.

Create Kind Cluster 1.29

Create a new Kind Cluster
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/kind.yaml
 kind create cluster --name kinetica --config kind.yaml
 
List Kind clusters
 kind get clusters
 

Set Kubernetes Context

Please set your Kubernetes Context to kind-kinetica before performing the following steps.

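A sketch of setting the context, using the kind-<cluster-name> context name that Kind derives automatically:

Set the kubectl context
kubectl config use-context kind-kinetica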
Kind - Install kinetica-operators including a sample db to try out

Review the values file charts/kinetica-operators/values.onPrem.kind.yaml. This installs the operators and a simple DB with a Workbench installation, for a non-production try-out.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.

Kind - Install the Kinetica-Operators Chart
Add Kinetica Operators Chart Repo
helm repo add kinetica-operators https://kineticadb.github.io/charts/latest
 

FQDN or Local Access

By default we create an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace/set the FQDN in the values.yaml with the correct domain name, or set it by adding --set.

If you are on a local machine which does not have a domain name, add the following entry to your /etc/hosts file or equivalent.

Configure local access - /etc/hosts
127.0.0.1  local.kinetica
-
Get & install the Kinetica-Operators Chart
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.2/kinetica-operators/values.onPrem.kind.yaml
+
Get & install the Kinetica-Operators Chart
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/values.onPrem.kind.yaml
 
 helm -n kinetica-system upgrade -i kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license="your_license_key" --set dbAdminUser.password="your_password"
 

or if you have been asked by the Kinetica Support team to try a development version

Using a development version
helm search repo kinetica-operators --devel --versions
 
-helm -n kinetica-system upgrade -i kinetica-operators kinetica-operators/kinetica-operators/ --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license="your_license_key" --set dbAdminUser.password="your_password" --devel --version 72.2.2
+helm -n kinetica-system upgrade -i kinetica-operators kinetica-operators/kinetica-operators/ --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license="your_license_key" --set dbAdminUser.password="your_password" --devel --version 72.2.3
 

Accessing the Workbench

You should be able to access the workbench at http://local.kinetica

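As a quick check that the ingress is answering (assuming the /etc/hosts entry added earlier), request the Workbench endpoint from the command line:

Check the Workbench endpoint
curl -I http://local.kinetica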
k3s (k3s.io)

Install k3s 1.29

Install k3s
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik  --node-name kinetica-master --token 12345" K3S_KUBECONFIG_OUTPUT=~/.kube/config_k3s K3S_KUBECONFIG_MODE=644 INSTALL_K3S_VERSION=v1.29.2+k3s1 sh -
 

Once installed, we need to set the current Kubernetes context to point to the newly created k3s cluster.

Select whether you want local or remote access to the Kubernetes cluster:

For local-only access to the cluster, we can simply set the KUBECONFIG environment variable:

Set kubectl context
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
 

For remote access, i.e. from outside the host/VM on which k3s is installed:

Copy /etc/rancher/k3s/k3s.yaml from the cluster host to ~/.kube/config on your machine located outside the cluster. Then edit the file and replace the value of the server field with the IP or name of your k3s server.

Copy the kube config and set the context
sudo chmod 600 /etc/rancher/k3s/k3s.yaml
@@ -28,13 +28,13 @@
 export KUBECONFIG=~/.kube/config
 
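A sketch of those remote-access steps, assuming SSH access to the k3s host and a placeholder address <k3s-server-ip>:

Copy & edit the kube config
scp user@<k3s-server-ip>:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i 's/127.0.0.1/<k3s-server-ip>/' ~/.kube/config
export KUBECONFIG=~/.kube/config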

K3s - Install kinetica-operators including a sample db to try out

Review the values file charts/kinetica-operators/values.onPrem.k3s.yaml. This installs the operators and a simple DB with a Workbench installation, for a non-production try-out.

FQDN or Local Access

By default we create an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace/set the FQDN in the values.yaml with the correct domain name, or set it by adding --set.

If you are on a local machine which does not have a domain name, add the following entry to your /etc/hosts file or equivalent.

Configure local access - /etc/hosts
127.0.0.1  local.kinetica
 

K3S - Install the Kinetica-Operators Chart (CPU)

Add Kinetica Operators Chart Repo
helm repo add kinetica-operators https://kineticadb.github.io/charts/latest
-
Download Template values.yaml
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.2/kinetica-operators/values.onPrem.k3s.yaml
+
Download Template values.yaml
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/values.onPrem.k3s.yaml
 
 helm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license="your_license_key" --set dbAdminUser.password="your_password"
 

or if you have been asked by the Kinetica Support team to try a development version

Using a development version
helm search repo kinetica-operators --devel --versions
 
 helm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license="your_license_key" --set dbAdminUser.password="your_password" --devel --version 7.2.0-2.rc-2
-

K3S - Install the Kinetica-Operators Chart (GPU)

If you wish to try out the GPU capabilities, you can use the following values file, provided you are on an NVIDIA GPU-capable machine.

k3s GPU Installation
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.2/kinetica-operators/values.onPrem.k3s.gpu.yaml
+

K3S - Install the Kinetica-Operators Chart (GPU)

If you wish to try out the GPU capabilities, you can use the following values file, provided you are on an NVIDIA GPU-capable machine.

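Before installing, it is worth confirming the machine can actually see its GPU; nvidia-smi (assuming the NVIDIA driver is already installed) should list the available devices:

Check for an NVIDIA GPU
nvidia-smi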
k3s GPU Installation
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/values.onPrem.k3s.gpu.yaml
 
 helm -n kinetica-system install kinetica-operators charts/kinetica-operators/ --create-namespace --values values.onPrem.k3s.gpu.yaml --set db.gpudbCluster.license="your_license_key" --set dbAdminUser.password="your_password"
 

Accessing the Workbench

You should be able to access the workbench at http://local.kinetica

Uninstall k3s

uninstall k3s
/usr/local/bin/k3s-uninstall.sh
diff --git a/7.2/index.yaml b/7.2/index.yaml
index 308d409..5425d8a 100644
--- a/7.2/index.yaml
+++ b/7.2/index.yaml
@@ -1,6 +1,31 @@
 apiVersion: v1
 entries:
   kinetica-operators:
+  - apiVersion: v2
+    appVersion: v7.2.2-3.ga-2
+    created: "2024-11-12T23:51:18.864504632Z"
+    dependencies:
+    - name: openldap
+      repository: ""
+    - condition: certManager.install
+      name: cert-manager
+      repository: ""
+    - condition: ingressNginx.install
+      name: ingress-nginx
+      repository: ""
+    - condition: gpuOperator.install
+      name: gpu-operator
+      repository: ""
+    - condition: supportBundle.install
+      name: support-bundle
+      repository: ""
+    description: A Helm chart for deploying Kinetica Operators
+    digest: 5f2bb53af4cb4407c8b1ab97171563bd4bd16c6c3c1d5d50fcfa13db7ec4e796
+    name: kinetica-operators
+    type: application
+    urls:
+    - https://kineticadb.github.io/charts/7.2/kinetica-operators-72.2.3.tgz
+    version: 72.2.3
   - apiVersion: v2
     appVersion: v7.2.2-3.rc-2
     created: "2024-11-07T21:37:32.618616005Z"
@@ -1858,4 +1883,4 @@ entries:
     urls:
     - https://kineticadb.github.io/charts/7.2/kinetica-operators-0.0.0.tgz
     version: 0.0.0
-generated: "2024-11-07T21:37:32.557417107Z"
+generated: "2024-11-12T23:51:18.80756507Z"
diff --git a/7.2/kinetica-operators-72.2.3.tgz b/7.2/kinetica-operators-72.2.3.tgz
new file mode 100644
index 0000000..d9b37ee
Binary files /dev/null and b/7.2/kinetica-operators-72.2.3.tgz differ
diff --git a/7.2/search/search_index.json b/7.2/search/search_index.json
index 02b2a8a..dc357a0 100644
--- a/7.2/search/search_index.json
+++ b/7.2/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"kineticadb/charts","text":"

Accelerate your AI and analytics. Kinetica harnesses real-time data and the power of CPUs & GPUs for lightning-fast insights due to it being uniquely designed for fast and flexible analytics on large volumes of changing data with incredible performance.

Kinetica DB can be quickly installed into Kubernetes using Helm.

  • Set up in 15 minutes

    Install the Kinetica DB locally on Kind or k3s with helm to get up and running in minutes. Quickstart

  • Prepare to Install

    What you need to know & do before beginning a production installation. Preparation and Prerequisites

  • Production DB Installation

    Install the Kinetica DB with helm to get up and running quickly Installation

  • Channel Your Inner Ninja

    Advanced Installation Topics which go beyond the basic installation. Advanced Topics

  • Running and Managing the Platform

    Metrics, Monitoring, Logs and Telemetry Distribution. Operations

  • Product Architecture

    The Modern Analytics Database Architected for Performance at Scale. Architecture

  • Support

    Additional Help, Tutorials and Troubleshooting resources. Support

  • Configuration in Detail

    Detailed reference material for the Helm Charts & Kinetica for Kubernetes CRDs. Reference Documentation

"},{"location":"tags/","title":"Categories","text":"

Following is a list of relevant documentation categories:

"},{"location":"tags/#aks","title":"AKS","text":"
  • Azure AKS
"},{"location":"tags/#administration","title":"Administration","text":"
  • Administration
  • Grant management
  • Resource group management
  • Role Management
  • Schema management
  • User Management
  • Kinetica Cluster Grants Reference
  • Kinetica Cluster Resource Groups Reference
  • Kinetica Cluster Roles Reference
  • Kinetica Cluster Schemas Reference
  • Kinetica Cluster Users Reference
"},{"location":"tags/#advanced","title":"Advanced","text":"
  • Advanced
  • Advanced Topics
  • Air-Gapped Environments
  • Alternative Charts
  • Ingress Configuration
  • ingress-nginx Ingress Configuration
  • Kinetica DB on OS X (Arm64)
  • Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations
  • Bare Metal/VM Installation - kubeadm
  • S3 Storage for Dev/Test
  • nginx-ingress Ingress Configuration
"},{"location":"tags/#architecture","title":"Architecture","text":"
  • Architecture
  • Core Database Architecture
  • Kubernetes Architecture
"},{"location":"tags/#configuration","title":"Configuration","text":"
  • Ingress Configuration
  • ingress-nginx Ingress Configuration
  • nginx-ingress Ingress Configuration
  • How to change the Clusters FQDN
  • OpenTelemetry
"},{"location":"tags/#development","title":"Development","text":"
  • Kinetica DB on OS X (Arm64)
  • S3 Storage for Dev/Test
  • Quickstart
"},{"location":"tags/#eks","title":"EKS","text":"
  • Amazon EKS
"},{"location":"tags/#getting-started","title":"Getting Started","text":"
  • Getting Started
  • Azure AKS
  • Amazon EKS
  • Preparation & Prerequisites
  • Quickstart
  • Kinetica for Kubernetes Setup
"},{"location":"tags/#ingress","title":"Ingress","text":"
  • Ingress Configuration
  • ingress-nginx Ingress Configuration
  • Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations
  • nginx-ingress Ingress Configuration
"},{"location":"tags/#installation","title":"Installation","text":"
  • Air-Gapped Environments
  • Alternative Charts
  • Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations
  • Bare Metal/VM Installation - kubeadm
  • S3 Storage for Dev/Test
  • Getting Started
  • Kinetica for Kubernetes Installation
  • CPU
  • GPU
  • Preparation & Prerequisites
  • Quickstart
  • Core DB CRDs
  • Kinetica for Kubernetes Setup
"},{"location":"tags/#monitoring","title":"Monitoring","text":"
  • Logs
  • Metrics Collection & Display
  • OpenTelemetry
"},{"location":"tags/#operations","title":"Operations","text":"
  • Logs
  • Metrics Collection & Display
  • Operational Management
  • Kinetica for Kubernetes Backup & Restore
  • OpenTelemetry
  • Kinetica for Kubernetes Data Rebalancing
  • Kinetica for Kubernetes Suspend & Resume
  • Kinetica Cluster Backups Reference
  • Core DB CRDs
  • Kinetica Cluster Restores Reference
"},{"location":"tags/#reference","title":"Reference","text":"
  • Reference Section
  • Kinetica Database Configuration
  • Kinetica Operators
  • Kinetica Cluster Admins Reference
  • Kinetica Cluster Backups Reference
  • Kinetica Cluster Grants Reference
  • Core DB CRDs
  • Kinetica Cluster Resource Groups Reference
  • Kinetica Cluster Restores Reference
  • Kinetica Cluster Roles Reference
  • Kinetica Cluster Schemas Reference
  • Kinetica Cluster Users Reference
  • Kinetica Clusters Reference
  • Kinetica Workbench Reference
  • Kinetica Workbench Configuration
"},{"location":"tags/#storage","title":"Storage","text":"
  • S3 Storage for Dev/Test
  • Amazon EKS
"},{"location":"tags/#support","title":"Support","text":"
  • How to change the Clusters FQDN
  • FAQ
  • Help & Tutorials
  • Creating Users, Roles, Schemas and other Kinetica DB Objects
  • Support
  • Troubleshooting
"},{"location":"Administration/","title":"Administration","text":"
  • DB Clusters

    Core Kinetica Database Cluster Management.

    KineticaCluster

  • DB Users

    Kinetica Database User Management.

    KineticaUser

  • DB Roles

    Kinetica Database Role Management.

    KineticaRole

  • DB Schemas

    Kinetica Database Schema Management.

    KineticaSchema

  • DB Grants

    Kinetica Database Grant Management.

    KineticaGrant

  • DB Resource Groups

    Kinetica Database Resource Group Management.

    KineticaResourceGroup

  • DB Administration

    Kinetica Database Administration.

    KineticaAdmin

  • DB Backups

    Kinetica Database Backup Management.

    Note

    This requires Velero to be installed on the Kubernetes Cluster.

    KineticaBackup

  • DB Restore

    Kinetica Database Restoration.

    Note

    This requires Velero to be installed on the Kubernetes Cluster.

    KineticaRestore

Home

","tags":["Administration"]},{"location":"Administration/role_management/","title":"Role Management","text":"

Management of roles is done with the KineticaRole CRD.

kubectl Usage

From the kubectl command line they are referenced by kineticaroles or the short form is kr.

","tags":["Administration"]},{"location":"Administration/role_management/#list-roles","title":"List Roles","text":"

To list the roles deployed to a Kinetica DB installation we can use the following from the command-line: -

kubectl -n gpudb get kineticaroles or kubectl -n gpudb get kr

where the namespace -n gpudb matches the namespace of the Kinetica DB installation.

This outputs

Name Ring Name Role Resource Group Name LDAP DB db-users kinetica-k8s-sample db_users OK OK global-admins kinetica-k8s-sample global_admins OK OK","tags":["Administration"]},{"location":"Administration/role_management/#name","title":"Name","text":"

The name of the Kubernetes CR i.e. the metadata.name this is not necessarily the name of the user.

","tags":["Administration"]},{"location":"Administration/role_management/#ring-name","title":"Ring Name","text":"

The name of the KineticaCluster the user is created in.

","tags":["Administration"]},{"location":"Administration/role_management/#role-name","title":"Role Name","text":"

The name of the role as contained with LDAP & the DB.

","tags":["Administration"]},{"location":"Administration/role_management/#role-creation","title":"Role Creation","text":"test-role-2.yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaRole\nmetadata:\n    name: test-role-2\n    namespace: gpudb\nspec:\n    ringName: kineticacluster-sample\n    role:\n        name: \"test_role2\"\n
","tags":["Administration"]},{"location":"Administration/role_management/#role-deletion","title":"Role Deletion","text":"

To delete a role from the Kinetica Cluster simply delete the Role CR from Kubernetes: -

Delete User
kubectl -n gpudb delete kr user-fred-smith \n
","tags":["Administration"]},{"location":"Administration/user_management/","title":"User Management","text":"

Management of users is done with the KineticaUser CRD.

kubectl Usage

From the kubectl command line they are referenced by kineticausers or the short form is ku.

","tags":["Administration"]},{"location":"Administration/user_management/#list-users","title":"List Users","text":"

To list the users deployed to a Kinetica DB installation we can use the following from the command-line: -

kubectl -n gpudb get kineticausers or kubectl -n gpudb get ku

where the namespace -n gpudb matches the namespace of the Kinetica DB installation.

This outputs

Name Action Ring Name UID Last Name Given Name Display Name LDAP DB Reveal kadmin upsert kinetica-k8s-sample kadmin Account Admin Admin Account OK OK OK","tags":["Administration"]},{"location":"Administration/user_management/#name","title":"Name","text":"

The name of the Kubernetes CR i.e. the metadata.name this is not necessarily the name of the user.

","tags":["Administration"]},{"location":"Administration/user_management/#action","title":"Action","text":"

There are two actions possible on a KineticaUser. The first is upsert which is for user creation or modification. The second is change-password which shows when a user password reset has been performed.

","tags":["Administration"]},{"location":"Administration/user_management/#ring-name","title":"Ring Name","text":"

The name of the KineticaCluster the user is created in.

","tags":["Administration"]},{"location":"Administration/user_management/#uid","title":"UID","text":"

The unique, user id to use in LDAP & the DB to reference this user.

","tags":["Administration"]},{"location":"Administration/user_management/#last-name","title":"Last Name","text":"

Last Name refers to last name or surname.

sn in LDAP terms.

","tags":["Administration"]},{"location":"Administration/user_management/#given-name","title":"Given Name","text":"

Given Name is the Firstname also called Christian name.

givenName in LDAP terms.

","tags":["Administration"]},{"location":"Administration/user_management/#display-name","title":"Display Name","text":"

The name shown on any UI representation.

","tags":["Administration"]},{"location":"Administration/user_management/#ldap","title":"LDAP","text":"

Identifies if the user has been successfully created within LDAP.

  • '' - if empty the user has not yet been created in LDAP
  • 'OK' - shows the user has been successfully created within LDAP
  • 'Failed' - shows there was a failure adding the user to LDAP
","tags":["Administration"]},{"location":"Administration/user_management/#db","title":"DB","text":"

Identifies if the user has been successfully created within the DB.

  • '' - if empty the user has not yet been created in the DB
  • 'OK' - shows the user has been successfully created within the DB
  • 'Failed' - shows there was a failure adding the user to the DB
","tags":["Administration"]},{"location":"Administration/user_management/#reveal","title":"Reveal","text":"

Identifies if the user has been successfully created within Reveal.

  • '' - if empty the user has not yet been created in Reveal
  • 'OK' - shows the user has been successfully created within Reveal
  • 'Failed' - shows there was a failure adding the user to Reveal
","tags":["Administration"]},{"location":"Administration/user_management/#user-creation","title":"User Creation","text":"

User creation requires two Kubernetes CRs to be submitted to Kubernetes and processed by the Kinetica DB Operator.

  • User Secret (Password)
  • Kinetica User

Creation Sequence

It is preferable to create the User Secret prior to creating the KineticaUser.

Secret Deletion

The User Secret will be deleted once the KineticaUser is created by the operator. The users password will be stored in LDAP and not be present in Kubernetes.

","tags":["Administration"]},{"location":"Administration/user_management/#user-secret","title":"User Secret","text":"

In this example a user Fred Smith will be created.

fred-smith-secret.yaml
apiVersion: v1\nkind: Secret\nmetadata:\n  name: fred-smith-secret\n  namespace: gpudb\nstringData:\n  password: testpassword\n
Create the User Password Secret
kubectl apply -f fred-smith-secret.yaml\n
","tags":["Administration"]},{"location":"Administration/user_management/#kineticauser","title":"KineticaUser","text":"user-fred-smith.yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaUser\nmetadata:\n  name: user-fred-smith\n  namespace: gpudb\nspec:\n  ringName: kineticacluster-sample\n  uid: fred\n  action: upsert\n  reveal: true\n  upsert:\n    userPrincipalName: fred.smith@example.com\n    givenName: Fred\n    displayName: FredSmith\n    lastName: Smith\n    passwordSecret: fred-smith-secret\n
","tags":["Administration"]},{"location":"Administration/user_management/#user-deletion","title":"User Deletion","text":"

To delete a user from the Kinetica Cluster simply delete the User CR from Kubernetes: -

Delete User
kubectl -n gpudb delete ku user-fred-smith \n
","tags":["Administration"]},{"location":"Administration/user_management/#change-password","title":"Change Password","text":"

To change a users password we use the change-password action rather than the upsert action we used previously.

Creation Sequence

It is preferable to create the User Secret prior to creating the KineticaUser.

Secret Deletion

The User Secret will be deleted once the KineticaUser is created by the operator. The users password will be stored in LDAP and not be present in Kubernetes.

fred-smith-change-pwd-secret.yaml
apiVersion: v1\nkind: Secret\nmetadata:\n  name: fred-smith-change-pwd-secret\n  namespace: gpudb\nstringData:\n  password: testpassword\n
Create the User Password Secret
kubectl apply -f fred-smith-change-pwd-secret.yaml\n
user-fred-smith-change-password.yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaUser\nmetadata:\n  name: user-fred-smith-change-password\n  namespace: gpudb\nspec:\n  ringName: kineticacluster-sample\n  uid: fred\n  action: change-password\n  changePassword:\n    passwordSecret: fred-smith-change-pwd-secret\n
","tags":["Administration"]},{"location":"Administration/user_management/#advanced-topics","title":"Advanced Topics","text":"","tags":["Administration"]},{"location":"Administration/user_management/#limit-user-resources","title":"Limit User Resources","text":"","tags":["Administration"]},{"location":"Administration/user_management/#data-limit","title":"Data Limit","text":"

KIFs user data size limit.

dataLimit
spec:\n  upsert:\n    dataLimit: 10Gi\n
","tags":["Administration"]},{"location":"Administration/user_management/#user-kifs-usage","title":"User Kifs Usage","text":"

Kifs Enablement

In order to use the Kifs user features below there is a requirement that Kifs is enabled on the Kinetica DB.

","tags":["Administration"]},{"location":"Administration/user_management/#home-directory","title":"Home Directory","text":"

When creating a new user it is possible to create that user a 'home' directory within the Kifs filesystem by using the createHomeDirectory option.

createHomeDirectory
spec:\n  upsert:\n    createHomeDirectory: true\n
","tags":["Administration"]},{"location":"Administration/user_management/#limit-directory-storage","title":"Limit Directory Storage","text":"

It is possible to limit the amount of Kifs file storage the user has by adding kifsDataLimit to the user creation yaml and setting the value to a Kubernetes Quantity e.g. 2Gi

kifsDataLimit
spec:\n  upsert:\n    kifsDataLimit: 2Gi\n
","tags":["Administration"]},{"location":"Advanced/","title":"Advanced Topics","text":"
  • Find alternative chart versions

    How to use pre-release or development Chart version if requested to by Kinetica Support. Alternative Charts

  • Configuring Ingress Records

    How to expose Kinetica via Kubernetes Ingress. Ingress Configuration

  • Air-Gapped Environments

    Specifics for installing Kinetica for Kubernetes in an Air-Gapped Environment Airgapped

  • Using your own OpenTelemetry Collector

    How to configure Kinetica for Kubernetes to use your open OpenTelemetry collector.

    External OTEL

  • Minio for Dev/Test S3 Storage

    Install Minio in order to enable S3 storage for Development.

    min.io

  • Creating Resources with Kubernetes APIs

    Create Users, Roles, DB Schema etc. using Kubernetes CRs. Resources

  • Kinetica on OS X (Apple Silicon)

    Install the Kinetica DB on a new Kubernetes 'production-like' cluster on Apple OS X (Apple Silicon) using UTM. Apple ARM64

  • Bare Metal/VM Installation from Scratch

    Install the Kinetica DB on a new Kubernetes 'production-like' bare metal (or VMs) cluster via kubeadm using cilium Networking, kube-vip LoadBalancer. Bare Metal/VM Installation

  • Software LoadBalancer

    Install a software Kubernetes CCM/LoadBalancer for bare metal or VM based Kubernetes Clusters. kube-vip LoadBalancer.

    Software LoadBalancer

","tags":["Advanced"]},{"location":"Advanced/advanced_topics/","title":"Advanced Topics","text":"","tags":["Advanced"]},{"location":"Advanced/advanced_topics/#install-from-a-developmentpre-release-chart-version","title":"Install from a development/pre-release chart version","text":"

Find all alternative chart versions with:

Find alternative chart versions
helm search repo kinetica-operators --devel --versions\n

Then append --devel --version [CHART-DEVEL-VERSION] to the end of the Helm install command. See here.

","tags":["Advanced"]},{"location":"Advanced/airgapped/","title":"Air-Gapped Environments","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#obtaining-the-kinetica-images","title":"Obtaining the Kinetica Images","text":"Kinetica Images for an Air-Gapped Environment

If you are installing Kinetica with Helm in an air-gapped environment you will either need a Registry Proxy to pass the requests through or to download the images and push them to your internal Registry.

For information on ways to transfer the files into an air-gapped environment See here.

Please select the method to transfer the images: -

mindthegap containerd docker

It is possible to use mesosphere/mindthegap

mindthegap

mindthegap provides utilities to manage air-gapped image bundles, both creating image bundles and seeding images from a bundle into an existing OCI registry or directly loading them to containerd.

This makes it possible with mindthegap to

  • create a single archive bundle of all the required images outside the air-gapped environment
  • run mindthegap using the archive bundle on the Kubernetes Nodes to bulk load the images into containerd in a single command.

Kinetica provides two mindthegap yaml files which list all the necessary images for Kinetica for Kubernetes.

  • CPU only
  • CPU & nVidia CUDA GPU
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#required-container-images","title":"Required Container Images","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#dockerio-required-kinetica-images-for-all-installations","title":"docker.io (Required Kinetica Images for All Installations)","text":"
  • docker.io/kinetica/kinetica-k8s-operator:v7.2.0-3.rc-3
    • docker.io/kinetica/kinetica-k8s-cpu:v7.2.0-3.rc-3 or
    • docker.io/kinetica/kinetica-k8s-cpu-avx512:v7.2.0-3.rc-3 or
    • docker.io/kinetica/kinetica-k8s-gpu:v7.2.0-3.rc-3
  • docker.io/kinetica/workbench-operator:v7.2.0-3.rc-3
  • docker.io/kinetica/workbench:v7.2.0-3.rc-3
  • docker.io/kinetica/kinetica-k8s-monitor:v7.2.0-3.rc-3
  • docker.io/kinetica/busybox:v7.2.0-3.rc-3
  • docker.io/kinetica/fluent-bit:v7.2.0-3.rc-3
  • docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#nvcrio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"nvcr.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)","text":"
  • nvcr.io/nvidia/gpu-operator:v23.9.1
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#registryk8sio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"registry.k8s.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)","text":"
  • registry.k8s.io/nfd/node-feature-discovery:v0.14.2
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#dockerio-required-supporting-images","title":"docker.io (Required Supporting Images)","text":"
  • docker.io/bitnami/openldap:2.6.7
  • docker.io/alpine/openssl:latest (used by bitnami/openldap)
  • docker.io/otel/opentelemetry-collector-contrib:0.95.0
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#quayio-required-supporting-images","title":"quay.io (Required Supporting Images)","text":"
  • quay.io/brancz/kube-rbac-proxy:v0.14.2
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#optional-container-images","title":"Optional Container Images","text":"

These images are only required if certain features are enabled as part of the Helm installation: -

  • CertManager
  • ingress-nginx
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#quayio-optional-supporting-images","title":"quay.io (Optional Supporting Images)","text":"
  • quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#registryk8sio-optional-supporting-images","title":"registry.k8s.io (Optional Supporting Images)","text":"
  • registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing Ingress nGinx via Kinetica Helm Chart)
  • registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c

It is possible with containerd to pull images, save them, and load them either into a Container Registry in the air-gapped environment or directly into another containerd instance.

If the target containerd is on a node running a Kubernetes Cluster then these images will be sourced by Kubernetes from the loaded images, via CRI, with no requirement to pull them from an external source e.g. a Registry or Mirror.

sudo required

Depending on how containerd has been installed and configured, many of the example calls below may require running with sudo.

It is possible with docker to pull images, save them, and load them into an OCI Container Registry in the air-gapped environment.

Pull a remote image (docker)
docker pull --platform linux/amd64 docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-2.ga-4\n
Export a local image (docker)
docker save -o kinetica-k8s-cpu-v7.2.2-2.ga-4.tar \\\ndocker.io/kinetica/kinetica-k8s-cpu:v7.2.2-2.ga-4\n

We can now transfer this archive (kinetica-k8s-cpu-v7.2.2-2.ga-4.tar) to the Kubernetes Node inside the air-gapped environment.

","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#which-kinetica-core-image-do-i-use","title":"Which Kinetica Core Image do I use?","text":"Container Image Intel (AMD64) Intel (AMD64 AVX512) Amd (AMD64) Graviton (aarch64) Apple Silicon (aarch64) kinetica-k8s-cpu (1) kinetica-k8s-cpu-avx512 kinetica-k8s-gpu (2) (2) (2)
  1. It is preferable on an Intel AVX512 enabled CPU to use the kinetica-k8s-cpu-avx512 container image
  2. With a supported nVidia GPU.
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#install-mindthegap","title":"Install mindthegap","text":"Install mindthegap
wget https://github.com/mesosphere/mindthegap/releases/download/v1.13.1/mindthegap_v1.13.1_linux_amd64.tar.gz\ntar zxvf mindthegap_v1.13.1_linux_amd64.tar.gz\n
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#mindthegap-create-the-bundle","title":"mindthegap - Create the Bundle","text":"mindthegap create image-bundle
mindthegap create image-bundle --images-file mtg.yaml --platform linux/amd64\n

where --images-file is either the CPU or GPU Kinetica mindthegap yaml file.

","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#mindthegap-import-the-bundle","title":"mindthegap - Import the Bundle","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#mindthegap-import-to-containerd","title":"mindthegap - Import to containerd","text":"mindthegap import image-bundle
mindthegap import image-bundle --image-bundle images.tar [--containerd-namespace k8s.io]\n

If --containerd-namespace is not specified, images will be imported into k8s.io namespace.

sudo required

Depending on how containerd has been installed and configured, it may be necessary to run the above command with sudo.

","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#mindthegap-import-to-an-internal-oci-registry","title":"mindthegap - Import to an internal OCI Registry","text":"mindthegap import image-bundle
mindthegap push bundle --bundle <path/to/bundle.tar> \\\n--to-registry <registry.address> \\\n[--to-registry-insecure-skip-tls-verify]\n
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#containerd-using-containerd-to-pull-and-export-an-image","title":"containerd - Using containerd to pull and export an image","text":"

Similar to docker pull, we can use ctr image pull to pull the core Kinetica DB CPU-based image

Pull a remote image (containerd)
ctr image pull docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-2.ga-4\n

We now need to export the pulled image as an archive to the local filesystem.

Export a local image (containerd)
ctr image export kinetica-k8s-cpu-v7.2.2-2.ga-4.tar \\\ndocker.io/kinetica/kinetica-k8s-cpu:v7.2.2-2.ga-4\n

We can now transfer this archive (kinetica-k8s-cpu-v7.2.2-2.ga-4.tar) to the Kubernetes Node inside the air-gapped environment.

","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#containerd-using-containerd-to-import-an-image","title":"containerd - Using containerd to import an image","text":"

Using containerd to import an image onto a Kubernetes Node on which a Kinetica Cluster is running.

Import the Images
ctr -n=k8s.io images import kinetica-k8s-cpu-v7.2.2-2.ga-4.tar\n

-n=k8s.io

It is possible to use ctr images import kinetica-k8s-cpu-v7.2.2-2.ga-4.tar to import the image to containerd.

However, in order for the image to be visible to the Kubernetes Cluster running on containerd it is necessary to add the parameter -n=k8s.io.

","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#containerd-verifying-the-image-is-available","title":"containerd - Verifying the image is available","text":"

To verify the image is loaded into containerd, run the following on the node:

Verify containerd Images
ctr image ls\n

To verify the image is visible to Kubernetes on the node, run the following:

Verify CRI Images
crictl images\n
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#docker-using-docker-to-import-an-image","title":"docker - Using docker to import an image","text":"

Using docker to import an image onto a Kubernetes Node on which a Kinetica Cluster is running.

Import the Images
docker load -i kinetica-k8s-cpu-v7.2.2-2.ga-4.tar\n
","tags":["Advanced","Installation"]},{"location":"Advanced/alternative_charts/","title":"Using Alternative Helm Charts","text":"

If requested by Kinetica Support you can search and use pre-release versions of the Kinetica Helm Charts.

","tags":["Advanced","Installation"]},{"location":"Advanced/alternative_charts/#install-from-a-developmentpre-release-chart-version","title":"Install from a development/pre-release chart version","text":"

Find all alternative chart versions with:

Find alternative chart versions
helm search repo kinetica-operators --devel --versions\n

Then append --devel --version [CHART-DEVEL-VERSION] to the end of the Helm install command.

Helm install kinetica-operators
helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--devel \\\n--version 72.0 \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n
","tags":["Advanced","Installation"]},{"location":"Advanced/ingress_configuration/","title":"Ingress Configuration","text":"
  • ingress-nginx Configuration

    How to enable Ingress with ingress-nginx for Kinetica DB.

    ingress-nginx

  • nginx-ingress Configuration

    How to enable Ingress with nginx-ingress for Kinetica DB.

    nginx-ingress

","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/","title":"ingress-nginx Ingress Configuration","text":"

To use an 'external' ingress-nginx controller i.e. not the one optionally installed by the Kinetica Operators Helm chart it is necessary to disable ingress in the KineticaCluster CR.

The field spec.ingressController: nginx should be set to spec.ingressController: none.

It is then necessary to create the required Ingress CRs by hand. Below is a list of the Ingress paths that need to be exposed along with sample ingress-nginx CRs.

","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#required-ingress-routes","title":"Required Ingress Routes","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#ingress-routes","title":"Ingress Routes","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#gadmin-paths","title":"GAdmin Paths","text":"Path Service Port /gadmin cluster-name-gadmin-service gadmin (8080/TCP) /tableau cluster-name-gadmin-service gadmin (8080/TCP) /files cluster-name^-gadmin-service gadmin (8080/TCP)

where cluster-name is the name of the Kinetica Cluster i.e. what is in the .spec.gpudbCluster.clusterName in the KineticaCluster CR.

","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#workbench-paths","title":"Workbench Paths","text":"Path Service Port / workbench-workbench-service workbench-port (8000/TCP)","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#db-rank-0-paths","title":"DB rank-0 Paths","text":"Path Service Port /cluster-145025b8(/gpudb-0(/.*|$)) cluster-145025b8-rank0-service httpd (8082/TCP) /cluster-145025b8/gpudb-0/hostmanager(.*) cluster-145025b8-rank0-service hostmanager (9300/TCP)","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#db-rank-n-paths","title":"DB rank-N Paths","text":"Path Service Port /cluster-145025b8(/gpudb-N(/.*|$)) cluster-145025b8-rank1-service httpd (8082/TCP) /cluster-145025b8/gpudb-N/hostmanager(.*) cluster-145025b8-rank1-service hostmanager (9300/TCP)","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#reveal-paths","title":"Reveal Paths","text":"Path Service Port /reveal cluster-name-reveal-service reveal (8088/TCP) /caravel cluster-name-reveal-service reveal (8088/TCP) /static cluster-name-reveal-service reveal (8088/TCP) /logout cluster-name-reveal-service reveal (8088/TCP) /resetmypassword cluster-name-reveal-service reveal (8088/TCP) /dashboardmodelview cluster-name-reveal-service reveal (8088/TCP) /dashboardmodelviewasync cluster-name-reveal-service reveal (8088/TCP) /slicemodelview cluster-name-reveal-service reveal (8088/TCP) /slicemodelviewasync cluster-name-reveal-service reveal (8088/TCP) /sliceaddview cluster-name-reveal-service reveal (8088/TCP) /databaseview cluster-name-reveal-service reveal (8088/TCP) /databaseasync cluster-name-reveal-service reveal (8088/TCP) /databasetablesasync cluster-name-reveal-service reveal (8088/TCP) /tablemodelview cluster-name-reveal-service reveal (8088/TCP) /csstemplatemodelview cluster-name-reveal-service reveal (8088/TCP) /csstemplatemodelviewasync cluster-name-reveal-service reveal (8088/TCP) /users cluster-name-reveal-service reveal (8088/TCP) /roles cluster-name-reveal-service reveal (8088/TCP) /userstatschartview cluster-name-reveal-service reveal (8088/TCP) /permissions cluster-name-reveal-service reveal (8088/TCP) /viewmenus cluster-name-reveal-service reveal (8088/TCP) /permissionviews cluster-name-reveal-service reveal (8088/TCP) /accessrequestsmodelview cluster-name-reveal-service reveal (8088/TCP) /accessrequestsmodelviewasync cluster-name-reveal-service reveal (8088/TCP) /logmodelview cluster-name-reveal-service reveal (8088/TCP) /logmodelviewasync cluster-name-reveal-service reveal (8088/TCP) /userinfoeditview cluster-name-reveal-service reveal (8088/TCP) /tablecolumninlineview cluster-name-reveal-service reveal (8088/TCP) /sqlmetricinlineview cluster-name-reveal-service reveal (8088/TCP)","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#example-ingress-crs","title":"Example Ingress CRs","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#example-gadmin-ingress-cr","title":"Example GAdmin Ingress CR","text":"Example GAdmin Ingress CR

Example GAdmin Ingress CR

apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name:  cluster-name-gadmin-ingress #(1)!\n  namespace: gpudb\nspec:\n  ingressClassName: nginx\n  tls:\n    - hosts:\n        - cluster-name.example.com #(1)!\n      secretName: kinetica-tls\n  rules:\n    - host: cluster-name.example.com #(1)!\n      http:\n        paths:\n          - path: /gadmin\n            pathType: Prefix\n            backend:\n              service:\n                name: cluster-name-gadmin-service #(1)!\n                port:\n                  name: gadmin\n          - path: /tableau\n            pathType: Prefix\n            backend:\n              service:\n                name: cluster-name-gadmin-service #(1)!\n                port:\n                  name: gadmin\n          - path: /files\n            pathType: Prefix\n            backend:\n              service:\n                name: cluster-name-gadmin-service #(1)!\n                port:\n                  name: gadmin\n
1. where cluster-name is the name of the Kinetica Cluster

","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#example-rank-ingress-cr","title":"Example Rank Ingress CR","text":"Example Rank Ingress CR Example Rank Ingress CR
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: cluster-name-rank1-ingress\n  namespace: gpudb\nspec:\n  ingressClassName: nginx\n  tls:\n    - hosts:\n        - cluster-name.example.com\n      secretName: kinetica-tls\n  rules:\n    - host: cluster-name.example.com\n      http:\n        paths:\n          - path: /cluster-name(/gpudb-1(/.*|$))\n            pathType: Prefix\n            backend:\n              service:\n                name: cluster-name-rank1-service\n                port:\n                  name: httpd\n          - path: /cluster-name/gpudb-1/hostmanager(.*)\n            pathType: Prefix\n            backend:\n              service:\n                name: cluster-name-rank1-service\n                port:\n                  name: hostmanager\n
  1. where cluster-name is the name of the Kinetica Cluster
","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#example-reveal-ingress-cr","title":"Example Reveal Ingress CR","text":"Example Reveal Ingress CR

Example Reveal Ingress CR

    apiVersion: networking.k8s.io/v1\n    kind: Ingress\n    metadata:\n      name: cluster-name-reveal-ingress\n      namespace: gpudb\n    spec:\n      ingressClassName: nginx\n      tls:\n        - hosts:\n            - cluster-name.example.com\n          secretName: kinetica-tls\n      rules:\n        - host: cluster-name.example.com\n          http:\n            paths:\n              - path: /reveal\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /caravel\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /static\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /logout\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /resetmypassword\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /dashboardmodelview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /dashboardmodelviewasync\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /slicemodelview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /slicemodelviewasync\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /sliceaddview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /databaseview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /databaseasync\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /databasetablesasync\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n    
          - path: /tablemodelview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /tablemodelviewasync\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /csstemplatemodelview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /csstemplatemodelviewasync\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /users\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /roles\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /userstatschartview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /permissions\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /viewmenus\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /permissionviews\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /accessrequestsmodelview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /accessrequestsmodelviewasync\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /logmodelview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /logmodelviewasync\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /userinfoeditview\n                pathType: Prefix\n                backend:\n   
               service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /tablecolumninlineview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /sqlmetricinlineview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n
1. where cluster-name is the name of the Kinetica Cluster

","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#exposing-the-postgres-proxy-port","title":"Exposing the Postgres Proxy Port","text":"

In order to access Kinetica's Postgres functionality, some TCP (not HTTP) ports need to be opened externally.

For ingress-nginx a configuration file needs to be created to enable port 5432.

tcp-services.yaml

apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: tcp-services\n  namespace: kinetica-system # (1)!\ndata:\n  '5432': gpudb/kinetica-k8s-sample-rank0-service:5432 #(2)!\n  '9002': gpudb/kinetica-k8s-sample-rank0-service:9002 #(3)!\n
1. Change the namespace to the namespace your ingress-nginx controller is running in, e.g. ingress-nginx.
2. This exposes the Postgres proxy port on the default 5432 port. If you wish to change this to a non-standard port, it needs to be changed here and also in the Helm values.yaml to match.
3. This port is the Table Monitor port and should always be exposed alongside the Postgres proxy.
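The ConfigMap alone is not sufficient; the ingress-nginx controller must also be started with a flag that references it. A minimal sketch, assuming a standard ingress-nginx install in the kinetica-system namespace (the deployment name may differ in your environment): -

Wire the ConfigMap into ingress-nginx (sketch)
kubectl apply -f tcp-services.yaml\n\n# The controller must run with the following argument; add it to the\n# controller Deployment if it is missing:\n#   --tcp-services-configmap=kinetica-system/tcp-services\nkubectl -n kinetica-system get deployment ingress-nginx-controller \\\n  -o jsonpath='{.spec.template.spec.containers[0].args}'\n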

","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_urls/","title":"Ingress urls","text":""},{"location":"Advanced/ingress_urls/#gadmin-paths","title":"GAdmin Paths","text":"Path Service Port /gadmin cluster-name-gadmin-service gadmin (8080/TCP) /tableau cluster-name-gadmin-service gadmin (8080/TCP) /files cluster-name^-gadmin-service gadmin (8080/TCP)

where cluster-name is the name of the Kinetica Cluster i.e. what is in the .spec.gpudbCluster.clusterName in the KineticaCluster CR.
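If you are unsure of the value, it can be read straight from the deployed CR. A minimal sketch, assuming a single KineticaCluster in the gpudb namespace: -

Look up the cluster name
kubectl -n gpudb get kc -o jsonpath='{.items[0].spec.gpudbCluster.clusterName}'\n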

"},{"location":"Advanced/ingress_urls/#workbench-paths","title":"Workbench Paths","text":"Path Service Port / workbench-workbench-service workbench-port (8000/TCP)"},{"location":"Advanced/ingress_urls/#db-rank-0-paths","title":"DB rank-0 Paths","text":"Path Service Port /cluster-145025b8(/gpudb-0(/.*|$)) cluster-145025b8-rank0-service httpd (8082/TCP) /cluster-145025b8/gpudb-0/hostmanager(.*) cluster-145025b8-rank0-service hostmanager (9300/TCP)"},{"location":"Advanced/ingress_urls/#db-rank-n-paths","title":"DB rank-N Paths","text":"Path Service Port /cluster-145025b8(/gpudb-N(/.*|$)) cluster-145025b8-rank1-service httpd (8082/TCP) /cluster-145025b8/gpudb-N/hostmanager(.*) cluster-145025b8-rank1-service hostmanager (9300/TCP)"},{"location":"Advanced/ingress_urls/#reveal-paths","title":"Reveal Paths","text":"Path Service Port /reveal cluster-name-reveal-service reveal (8088/TCP) /caravel cluster-name-reveal-service reveal (8088/TCP) /static cluster-name-reveal-service reveal (8088/TCP) /logout cluster-name-reveal-service reveal (8088/TCP) /resetmypassword cluster-name-reveal-service reveal (8088/TCP) /dashboardmodelview cluster-name-reveal-service reveal (8088/TCP) /dashboardmodelviewasync cluster-name-reveal-service reveal (8088/TCP) /slicemodelview cluster-name-reveal-service reveal (8088/TCP) /slicemodelviewasync cluster-name-reveal-service reveal (8088/TCP) /sliceaddview cluster-name-reveal-service reveal (8088/TCP) /databaseview cluster-name-reveal-service reveal (8088/TCP) /databaseasync cluster-name-reveal-service reveal (8088/TCP) /databasetablesasync cluster-name-reveal-service reveal (8088/TCP) /tablemodelview cluster-name-reveal-service reveal (8088/TCP) /csstemplatemodelview cluster-name-reveal-service reveal (8088/TCP) /csstemplatemodelviewasync cluster-name-reveal-service reveal (8088/TCP) /users cluster-name-reveal-service reveal (8088/TCP) /roles cluster-name-reveal-service reveal (8088/TCP) /userstatschartview cluster-name-reveal-service reveal (8088/TCP) /permissions cluster-name-reveal-service reveal (8088/TCP) /viewmenus cluster-name-reveal-service reveal (8088/TCP) /permissionviews cluster-name-reveal-service reveal (8088/TCP) /accessrequestsmodelview cluster-name-reveal-service reveal (8088/TCP) /accessrequestsmodelviewasync cluster-name-reveal-service reveal (8088/TCP) /logmodelview cluster-name-reveal-service reveal (8088/TCP) /logmodelviewasync cluster-name-reveal-service reveal (8088/TCP) /userinfoeditview cluster-name-reveal-service reveal (8088/TCP) /tablecolumninlineview cluster-name-reveal-service reveal (8088/TCP) /sqlmetricinlineview cluster-name-reveal-service reveal (8088/TCP)"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/","title":"Kinetica images list for airgapped environments","text":"Kinetica Images for an Air-Gapped Environment

If you are installing Kinetica with Helm in an air-gapped environment you will either need a Registry Proxy to pass the requests through or to download the images and push them to your internal Registry.

For information on ways to transfer the files into an air-gapped environment, see here.

"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#required-container-images","title":"Required Container Images","text":""},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#dockerio-required-kinetica-images-for-all-installations","title":"docker.io (Required Kinetica Images for All Installations)","text":"
  • docker.io/kinetica/kinetica-k8s-operator:v7.2.0-3.rc-3
    • docker.io/kinetica/kinetica-k8s-cpu:v7.2.0-3.rc-3 or
    • docker.io/kinetica/kinetica-k8s-cpu-avx512:v7.2.0-3.rc-3 or
    • docker.io/kinetica/kinetica-k8s-gpu:v7.2.0-3.rc-3
  • docker.io/kinetica/workbench-operator:v7.2.0-3.rc-3
  • docker.io/kinetica/workbench:v7.2.0-3.rc-3
  • docker.io/kinetica/kinetica-k8s-monitor:v7.2.0-3.rc-3
  • docker.io/kinetica/busybox:v7.2.0-3.rc-3
  • docker.io/kinetica/fluent-bit:v7.2.0-3.rc-3
  • docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga
"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#nvcrio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"nvcr.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)","text":"
  • nvcr.io/nvidia/gpu-operator:v23.9.1
"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#registryk8sio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"registry.k8s.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)","text":"
  • registry.k8s.io/nfd/node-feature-discovery:v0.14.2
"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#dockerio-required-supporting-images","title":"docker.io (Required Supporting Images)","text":"
  • docker.io/bitnami/openldap:2.6.7
  • docker.io/alpine/openssl:latest (used by bitnami/openldap)
  • docker.io/otel/opentelemetry-collector-contrib:0.95.0
"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#quayio-required-supporting-images","title":"quay.io (Required Supporting Images)","text":"
  • quay.io/brancz/kube-rbac-proxy:v0.14.2
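One way to stage the required images for transfer is to pull and save them with docker on an internet-connected machine. A minimal sketch covering a subset of the images listed above (extend the list as needed and adjust tags to your target release): -

Pull & save images for transfer (sketch)
IMAGES=\"docker.io/kinetica/kinetica-k8s-operator:v7.2.0-3.rc-3\ndocker.io/kinetica/kinetica-k8s-cpu:v7.2.0-3.rc-3\ndocker.io/kinetica/workbench-operator:v7.2.0-3.rc-3\ndocker.io/kinetica/workbench:v7.2.0-3.rc-3\"\n\nfor img in ${IMAGES}; do\n  docker pull \"${img}\"\ndone\n\ndocker save -o kinetica-images.tar ${IMAGES}\n

The resulting kinetica-images.tar can then be transferred and loaded as described in the air-gapped guide.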
"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#optional-container-images","title":"Optional Container Images","text":"

These images are only required if certain features are enabled as part of the Helm installation: -

  • CertManager
  • ingress-nginx
"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#quayio-optional-supporting-images","title":"quay.io (Optional Supporting Images)","text":"
  • quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#registryk8sio-optional-supporting-images","title":"registry.k8s.io (Optional Supporting Images)","text":"
  • registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing Ingress nGinx via Kinetica Helm Chart)
  • registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c
"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#which-kinetica-core-image-do-i-use","title":"Which Kinetica Core Image do I use?","text":"Container Image Intel (AMD64) Intel (AMD64 AVX512) Amd (AMD64) Graviton (aarch64) Apple Silicon (aarch64) kinetica-k8s-cpu (1) kinetica-k8s-cpu-avx512 kinetica-k8s-gpu (2) (2) (2)
  1. It is preferable on an Intel CPU with AVX512 support to use the kinetica-k8s-cpu-avx512 container image
  2. With a supported nVidia GPU.
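To decide between kinetica-k8s-cpu and kinetica-k8s-cpu-avx512 you can check the CPU flags on the target node; avx512f is the AVX-512 foundation flag: -

Check for AVX512 support
grep -q avx512f /proc/cpuinfo && echo \"AVX512 supported\" || echo \"AVX512 not supported\"\n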
"},{"location":"Advanced/kinetica_mac_arm_k8s/","title":"Kinetica DB on Kubernetes","text":"

This walkthrough will show how to install Kinetica DB on a Mac running OS X. The Kubernetes cluster will be running on VMs with Ubuntu Linux 22.04 ARM64.

This solution is equivalent to a production bare metal installation and does not use Docker, Podman or QEMU but rather Apple native Virtualization.

The Kubernetes cluster will consist of one Master node k8smaster1 and two Worker nodes k8snode1 & k8snode2.

The virtualization platform is UTM.

Obtain a Kinetica License Key

A product license key will be required for install. Please contact Kinetica Support to request a trial key.

Download and install UTM.

","tags":["Advanced","Development"]},{"location":"Advanced/kinetica_mac_arm_k8s/#create-the-vms","title":"Create the VMs","text":"","tags":["Advanced","Development"]},{"location":"Advanced/kinetica_mac_arm_k8s/#k8smaster1","title":"k8smaster1","text":"

For this walkthrough the master node will be 4 vCPU, 8 GB RAM & 40-64 GB disk.

Start the creation of a new VM in UTM. Select Virtualize

Select Linux as the VM OS.

On the Linux page - Select Use Apple Virtualization and an Ubuntu 22.04 (Arm64) ISO.

As this is the master Kubernetes node (VM) it can be smaller than the nodes hosting the Kinetica DB itself.

Set the memory to 8 GB and the number of CPUs to 4.

Set the storage to between 40-64 GB.

This next step is optional; complete it only if you wish to set up a shared folder between your Mac host & the Linux VM.

The final step to create the VM is a summary. Please check the values shown and hit Save.

You should now see your new VM in the left hand pane of the UTM UI.

Go ahead and click the play button to start the VM.

Once the Ubuntu installer comes up follow the steps selecting whichever keyboard etc. you require.

The only changes you need to make are: -

  • Change the installation to Ubuntu Server (minimized)
  • Your server's name to k8smaster1
  • Enable OpenSSH server.

and complete the installation.

Reboot the VM and remove the ISO from the 'external' drive. Log in to the VM and get the VM's IP address with

Bash
ip a\n

Make a note of the IP for later use.

","tags":["Advanced","Development"]},{"location":"Advanced/kinetica_mac_arm_k8s/#k8snode1-k8snode2","title":"k8snode1 & k8snode2","text":"

Repeat the same process to provision one or two nodes depending on how much memory you have available on the Mac.

You need to change the RAM size to 16 GB. You can leave the vCPU count at 4. The disk size depends on how much data you want to ingest; it should, however, be at least 4x the RAM size.

Once installed, log in to the VM again and get the VM's IP address with

Bash
ip a\n

Note

Make a note of the IP(s) for later use.

Your VMs are complete

Continue installing your new VMs by following Bare Metal/VM Installation

","tags":["Advanced","Development"]},{"location":"Advanced/kube_vip_loadbalancer/","title":"Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations","text":"

For our example we are going to enable a Kubernetes based LoadBalancer to issue IP addresses to our Kubernetes Services of type LoadBalancer using kube-vip.

Ingress Service is pending

The ingress-nginx-controller is currently in the pending state as there is no CCM/LoadBalancer

","tags":["Advanced","Ingress","Installation"]},{"location":"Advanced/kube_vip_loadbalancer/#kube-vip","title":"kube-vip","text":"

We will install two components into our Kubernetes Cluster: -

  • kube-vip-cloud-controller
  • Kubernetes Load-Balancer Service
","tags":["Advanced","Ingress","Installation"]},{"location":"Advanced/kube_vip_loadbalancer/#kube-vip-cloud-controller","title":"kube-vip-cloud-controller","text":"

Quote

The kube-vip cloud provider can be used to populate an IP address for Services of type LoadBalancer similar to what public cloud providers allow through a Kubernetes CCM.

Install the kube-vip CCM

Install the kube-vip CCM
kubectl apply -f https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/main/manifest/kube-vip-cloud-controller.yaml\n

Now we need to set up the required RBAC permissions: -

Install the kube-vip RBAC

Install kube-vip RBAC
kubectl apply -f https://kube-vip.io/manifests/rbac.yaml\n

The following ConfigMap will configure the kube-vip-cloud-controller to obtain IP addresses from the host network's DHCP server, i.e. the DHCP on the physical network that the host machine or VM is connected to.

Install the kube-vip ConfigMap

Install the kube-vip ConfigMap
apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: kubevip\n  namespace: kube-system\ndata:\n  cidr-global: 0.0.0.0/32\n

It is possible to specify IP address ranges; see here.
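For example, a minimal sketch of a ConfigMap that gives the kube-vip-cloud-controller a fixed global address range instead of relying on DHCP (substitute addresses that are free on your own network): -

kube-vip ConfigMap with a static range (sketch)
apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: kubevip\n  namespace: kube-system\ndata:\n  range-global: 192.168.2.200-192.168.2.220\n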

","tags":["Advanced","Ingress","Installation"]},{"location":"Advanced/kube_vip_loadbalancer/#kubernetes-load-balancer-service","title":"Kubernetes Load-Balancer Service","text":"Obtain the Master Node IP address & Interface name Obtain the Master Node IP address & Interface name
ip a\n

In this example the IP address of the master node is 192.168.2.180 and the network interface is enp0s1.
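If you are unsure which interface carries the node's primary address, asking the kernel for the route to an external address shows both the device and the source IP: -

Find the default interface & IP
ip route get 1.1.1.1\n# e.g. 1.1.1.1 via 192.168.2.1 dev enp0s1 src 192.168.2.180\n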

We need to apply the kube-vip DaemonSet, but first we need to create the configuration.

Install the kube-vip Daemonset
apiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  labels:\n    app.kubernetes.io/name: kube-vip-ds\n    app.kubernetes.io/version: v0.7.2\n  name: kube-vip-ds\n  namespace: kube-system\nspec:\n  selector:\n    matchLabels:\n      app.kubernetes.io/name: kube-vip-ds\n  template:\n    metadata:\n      labels:\n        app.kubernetes.io/name: kube-vip-ds\n        app.kubernetes.io/version: v0.7.2\n    spec:\n      affinity:\n        nodeAffinity:\n          requiredDuringSchedulingIgnoredDuringExecution:\n            nodeSelectorTerms:\n            - matchExpressions:\n              - key: node-role.kubernetes.io/master\n                operator: Exists\n            - matchExpressions:\n              - key: node-role.kubernetes.io/control-plane\n                operator: Exists\n      containers:\n      - args:\n        - manager\n        env:\n        - name: vip_arp\n          value: \"true\"\n        - name: port\n          value: \"6443\"\n        - name: vip_interface\n          value: enp0s1\n        - name: vip_cidr\n          value: \"32\"\n        - name: dns_mode\n          value: first\n        - name: cp_enable\n          value: \"true\"\n        - name: cp_namespace\n          value: kube-system\n        - name: svc_enable\n          value: \"true\"\n        - name: svc_leasename\n          value: plndr-svcs-lock\n        - name: vip_leaderelection\n          value: \"true\"\n        - name: vip_leasename\n          value: plndr-cp-lock\n        - name: vip_leaseduration\n          value: \"5\"\n        - name: vip_renewdeadline\n          value: \"3\"\n        - name: vip_retryperiod\n          value: \"1\"\n        - name: address\n          value: 192.168.2.180\n        - name: prometheus_server\n          value: :2112\n        image: ghcr.io/kube-vip/kube-vip:v0.7.2\n        imagePullPolicy: Always\n        name: kube-vip\n        resources: {}\n        securityContext:\n          capabilities:\n            add:\n            - NET_ADMIN\n            - NET_RAW\n      hostNetwork: true\n      serviceAccountName: kube-vip\n      tolerations:\n      - effect: NoSchedule\n        operator: Exists\n      - effect: NoExecute\n        operator: Exists\n  updateStrategy: {}\n

Lines 5, 7, 12, 16, 38 and 62 need modifying to match your environment, most notably the vip_interface value (line 38) and the address value (line 62).

Install the kube-vip Daemonset

ARP or BGP

The DaemonSet above uses ARP to communicate with the network; it is also possible to use BGP. See here.

Example showing DHCP allocated external IP address to the Ingress Controller

Our ingress-nginx-controller has been allocated the IP Address 192.168.2.194.
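You can confirm the allocation by listing the Service; the EXTERNAL-IP column should no longer show pending (the service name assumes a standard ingress-nginx install): -

Check the LoadBalancer Service
kubectl -n kinetica-system get svc ingress-nginx-controller\n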

Ingress Access

The ingress-nginx-controller requires the host FQDN to be present in user requests in order to know how to route the requests to the correct Kubernetes Service. Using the IP address in the URL will cause an error as ingress cannot select the correct service.

List Ingress

If you did not set the FQDN of the Kinetica Cluster to a DNS resolvable hostname, add local.kinetica to your /etc/hosts file in order to be able to access the Kinetica URLs

Edit /etc/hosts
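Add the allocated address against the default FQDN; 192.168.2.194 below is the example address from this walkthrough, so substitute your own: -

192.168.2.194  local.kinetica\n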

Accessing the Workbench

You should be able to access the workbench at http://local.kinetica

","tags":["Advanced","Ingress","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/","title":"Bare Metal/VM Installation - kubeadm","text":"

This walkthrough will show how to install Kinetica DB. For this example the Kubernetes cluster will be running on 3 VMs with Ubuntu Linux 22.04 (ARM64).

This solution is equivalent to a production bare metal installation and does not use Docker, Podman or QEMU.

The Kubernetes cluster requires 3 VMs consisting of one Master node k8smaster1 and two Worker nodes k8snode1 & k8snode2.

Purple Example Boxes

The purple boxes in the instructions below can be expanded for a screen recording of the commands & their results.

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#kubernetes-node-installation","title":"Kubernetes Node Installation","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#setup-the-kubernetes-nodes","title":"Setup the Kubernetes Nodes","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#edit-etchosts","title":"Edit /etc/hosts","text":"

SSH into each of the nodes and run the following: -

Edit `/etc/hosts`
sudo vi /etc/hosts\n\nx.x.x.x k8smaster1\nx.x.x.x k8snode1\nx.x.x.x k8snode2\n

where x.x.x.x is the IP Address of the corresponding node.

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#disable-linux-swap","title":"Disable Linux Swap","text":"

Next we need to disable Swap on Linux: -

Disable Swap

Disable Swap
sudo swapoff -a\n\nsudo vi /etc/fstab\n

comment out the swap entry in /etc/fstab on each node.
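On a stock Ubuntu 22.04 install the commented-out entry typically looks like the following (the device name may differ on your system): -

/etc/fstab swap entry (commented out)
#/swap.img      none    swap    sw      0       0\n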

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#linux-system-configuration-changes","title":"Linux System Configuration Changes","text":"

We are using containerd as the container runtime, but in order to do so we need to make some system-level changes on Linux.

Linux System Configuration Changes

Linux System Configuration Changes
cat << EOF | sudo tee /etc/modules-load.d/containerd.conf\noverlay\nbr_netfilter\nEOF\n\nsudo modprobe overlay\n\nsudo modprobe br_netfilter\n\ncat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\nEOF\n\nsudo sysctl --system\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#container-runtime-installation","title":"Container Runtime Installation","text":"

Run on all nodes (VMs)

Run the following commands, until advised not to, on all of the VMs you created.

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-containerd","title":"Install containerd","text":"Install containerd Install `containerd`
sudo apt update\n\nsudo apt install -y containerd\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#create-a-default-containerd-config","title":"Create a Default containerd Config","text":"Create a Default containerd Config Create a Default `containerd` Config
sudo mkdir -p /etc/containerd\n\nsudo containerd config default | sudo tee /etc/containerd/config.toml\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#enable-system-cgroup","title":"Enable System CGroup","text":"

Change the SystemdCgroup value to true in the containerd configuration file and restart the service

Enable System CGroup

Enable System CGroup
sudo sed -i 's/SystemdCgroup \\= false/SystemdCgroup \\= true/g' /etc/containerd/config.toml\n\nsudo systemctl restart containerd\nsudo systemctl enable containerd\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-pre-requisiteutility-packages","title":"Install Pre-requisite/Utility packages","text":"Install Pre-requisite/Utility packages Install Pre-requisite/Utility packages
sudo apt update\n\nsudo apt install -y apt-transport-https ca-certificates curl gpg git\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#download-the-kubernetes-public-signing-key","title":"Download the Kubernetes public signing key","text":"Download the Kubernetes public signing key Download the Kubernetes public signing key
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#add-the-kubernetes-package-repository","title":"Add the Kubernetes Package Repository","text":"Add the Kubernetes Package Repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-the-kubernetes-installation-and-management-tools","title":"Install the Kubernetes Installation and Management Tools","text":"Install the Kubernetes Installation and Management Tools Install the Kubernetes Installation and Management Tools
sudo apt update\n\nsudo apt install -y kubeadm=1.29.0-1.1 kubelet=1.29.0-1.1 kubectl=1.29.0-1.1\n\nsudo apt-mark hold kubeadm kubelet kubectl\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#initialize-the-kubernetes-cluster","title":"Initialize the Kubernetes Cluster","text":"

Initialize the Kubernetes Cluster by using kubeadm on the k8smaster1 control plane node.

Note

You will need an IP Address range for the Kubernetes Pods. This range is provided to kubeadm as part of the initialization. For our cluster of three nodes, given the default number of pods supported by a node (110), we need a CIDR of at least 330 distinct IP Addresses. Therefore, for this example we will use a --pod-network-cidr of 10.1.1.0/22, which allows for 1022 usable IPs (2^10 - 2). The reason for this is that each node will be allocated a /24 out of the /22 total.

The apiserver-advertise-address should be the IP Address of the k8smaster1 VM.

Initialize the Kubernetes Cluster

Initialize the Kubernetes Cluster
sudo kubeadm init --pod-network-cidr 10.1.1.0/22 --apiserver-advertise-address 192.168.2.180 --kubernetes-version 1.29.2\n

You should now deploy a pod network to the cluster. Run kubectl apply -f [podnetwork].yaml with one of the options listed at: Cluster Administration Addons

Make a note of the portion of the shell output which gives the join command, which we will need to join our worker nodes to the master.

Copy the kubeadm join command

Then you can join any number of worker nodes by running the following on each as root:

Copy the `kubeadm join` command
kubeadm join 192.168.2.180:6443 --token wonuiv.v93rkizr6wvxwe6l \\\n--discovery-token-ca-cert-hash sha256:046ffa6303e6b281285a636e856b8e9e51d8c755248d9d013e15ae5c5f6bb127\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#setup-kubeconfig","title":"Setup kubeconfig","text":"

Before we add the worker nodes we can set up the kubeconfig so we will be able to use kubectl going forward.

Setup kubeconfig

Setup `kubeconfig`
mkdir -p $HOME/.kube\n\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#connect-list-the-kubernetes-cluster-nodes","title":"Connect & List the Kubernetes Cluster Nodes","text":"

We can now run kubectl to connect to the Kubernetes API Server to display the nodes in the newly created Kubernetes Cluster.

Connect & List the Kubernetes Cluster Nodes

Connect & List the Kubernetes Cluster Nodes
kubectl get nodes\n

STATUS = NotReady

From the kubectl output the status of the k8smaster1 node is showing as NotReady, as we have yet to install the Kubernetes network (CNI) into the cluster.

We will be installing cilium as that provider in a future step.

Warning

At this point we should complete the installations of the worker nodes to this same point before continuing.

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#join-the-worker-nodes-to-the-cluster","title":"Join the Worker Nodes to the Cluster","text":"

Once installed we run the join on the worker nodes. Note that the command which was output from the kubeadm init needs to be run with sudo.

Join the Worker Nodes to the Cluster
sudo kubeadm join 192.168.2.180:6443 --token wonuiv.v93rkizr6wvxwe6l \\\n    --discovery-token-ca-cert-hash sha256:046ffa6303e6b281285a636e856b8e9e51d8c755248d9d013e15ae5c5f6bb127\n
kubectl get nodes

Now we can again run

`kubectl get nodes`
kubectl get nodes\n

Now we can see all the nodes are present in the Kubernetes Cluster.

Run on Head Node only

From now on, the following commands need to be run on the Master Node only.

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-kubernetes-networking","title":"Install Kubernetes Networking","text":"

We now need to install a Kubernetes CNI (Container Network Interface) to enable the pod network.

We will use Cilium as the CNI for our cluster.

Installing the Cilium CLI
curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-arm64.tar.gz\nsudo tar xzvfC cilium-linux-arm64.tar.gz /usr/local/bin\nrm cilium-linux-arm64.tar.gz\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-cilium","title":"Install cilium","text":"

You can now install Cilium with the following command:

Install `cilium`
cilium install\ncilium status \n

If cilium status shows errors you may need to wait until the Cilium pods have started.

You can check progress with

Bash
kubectl get po -A\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#check-cilium-status","title":"Check cilium Status","text":"

Once the Cilium pods are running, we can check the status of Cilium again by using

Check cilium Status

Check `cilium` Status
cilium status \n

We can now recheck the Kubernetes Cluster Nodes

Bash
kubectl get nodes\n

and they should now have the status Ready.

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#kubernetes-node-preparation","title":"Kubernetes Node Preparation","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#label-kubernetes-nodes","title":"Label Kubernetes Nodes","text":"

Now we go ahead and label the nodes. Kinetica uses node labels in production clusters where there are separate 'node groups' configured so that the Kinetica Infrastructure pods are deployed on a smaller VM type and the DB itself is deployed on larger or GPU-enabled nodes.

If we were using a Cloud Provider Kubernetes these are synonymous with EKS Node Groups or AKS VMSS which would be created with the same two labels on two node groups.

Label Kubernetes Nodes
kubectl label node k8snode1 app.kinetica.com/pool=infra\nkubectl label node k8snode2 app.kinetica.com/pool=compute\n

Additionally, in our case, as we have created a new cluster, the 'role' of the worker nodes is not set, so we can also set that. In many cases the role is already set to worker, but here we have some latitude.

Bash
kubectl label node k8snode1 kubernetes.io/role=kinetica-infra\nkubectl label node k8snode2 kubernetes.io/role=kinetica-compute\n
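You can verify that both labels landed correctly by listing the nodes with the label values shown as columns: -

Verify node labels
kubectl get nodes -L app.kinetica.com/pool -L kubernetes.io/role\n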
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-storage-class","title":"Install Storage Class","text":"

Install a local path provisioner storage class. In this case we are using the Rancher Local Path provisioner

Install Storage Class

Install Storage Class
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.26/deploy/local-path-storage.yaml\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#set-default-storage-class","title":"Set Default Storage Class","text":"Set Default Storage Class Set Default Storage Class
kubectl patch storageclass local-path -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}'\n
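A quick check confirms the patch took effect; local-path should now be marked (default): -

Verify the default Storage Class
kubectl get storageclass\n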

Kubernetes Cluster Provision Complete

Your base Kubernetes Cluster is now complete and ready to have the Kinetica DB installed on it using the Helm Chart.

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-kinetica-for-kubernetes-using-helm","title":"Install Kinetica for Kubernetes using Helm","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#add-the-helm-repository","title":"Add the Helm Repository","text":"Add the Helm Repository Add the Helm Repository
helm repo add kinetica-operators https://kineticadb.github.io/charts\nhelm repo update\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#download-a-starter-helm-valuesyaml","title":"Download a Starter Helm values.yaml","text":"

Now we need to obtain a starter values.yaml file to pass to our Helm install. We can download one from the github.com/kineticadb/charts repo.

Download a Starter Helm values.yaml

Download a Starter Helm `values.yaml`
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k8s.yaml\n

Obtain a Kinetica License Key

A product license key will be required for install. Please contact Kinetica Support to request a trial key.

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#helm-install-kinetica","title":"Helm Install Kinetica","text":"#### Helm Install Kinetica Helm install kinetica-operators
helm -n kinetica-system upgrade -i \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"local-path\"\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#monitor-kinetica-startup","title":"Monitor Kinetica Startup","text":"

After a few moments, follow the progression of the main database pod startup with:

Monitor the Kinetica installation progress
kubectl -n gpudb get po gpudb-0 -w\n

Kinetica DB Provision Complete

Once you see gpudb-0 3/3 Running the database is up and running.

Software LoadBalancer

If you require a software-based LoadBalancer to allocate IP addresses to the Ingress Controller or exposed Kubernetes Services then see here.

This is usually apparent if your ingress or other Kubernetes Services with the type LoadBalancer are stuck in the Pending state.

","tags":["Advanced","Installation"]},{"location":"Advanced/minio_s3_dev_test/","title":"Using Minio for S3 Storage in Dev/Test","text":"

If you require a new Minio installation

Please follow the installation instructions found here to install the Minio Operator and to create your first tenant.

","tags":["Advanced","Development","Installation","Storage"]},{"location":"Advanced/minio_s3_dev_test/#create-minio-tenant","title":"Create Minio Tenant","text":"

In our example below we have created a tenant kinetica in the gpudb namespace using the Kinetica storage class kinetica-k8s-sample-storageclass.

or use the minio kubectl plugin

minio cli - create tenant
kubectl minio tenant create kinetica --capacity 32Gi --servers 1 --volumes 1 --namespace gpudb --storage-class kinetica-k8s-sample-storageclass --disable-tls\n

Console Port Forward

Forward the minio console for our newly created tenant

Bash
kubectl port-forward service/kinetica-console -n gpudb 9443:9443\n

In that tenant we create a bucket kinetica-cold-storage and in that bucket we create the path gpudb/cold-storage.
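If you prefer the command line to the console, the bucket can also be created with the MinIO client mc. A minimal sketch, assuming you port-forward the tenant's S3 service and substitute the access/secret keys generated for your tenant (the alias name kinetica is illustrative): -

Create the Cold Storage bucket with mc (sketch)
kubectl port-forward service/minio -n gpudb 9000:80 &\n\nmc alias set kinetica http://localhost:9000 ACCESS_KEY SECRET_KEY\nmc mb kinetica/kinetica-cold-storage\n

The gpudb/cold-storage path does not need pre-creating; S3 prefixes come into existence when the first object is written under them.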

Once you have a tenant up and running we can configure Kinetica for Kubernetes to use it as the DB Cold Storage tier.

Backup/Restore Storage

Minio can also be used as the S3 storage for Velero. This enables Backup/Restore functionality via the KineticaBackup & KineticaRestore CRs.

","tags":["Advanced","Development","Installation","Storage"]},{"location":"Advanced/minio_s3_dev_test/#configuring-kinetica-to-use-minio","title":"Configuring Kinetica to use Minio","text":"","tags":["Advanced","Development","Installation","Storage"]},{"location":"Advanced/minio_s3_dev_test/#cold-storage","title":"Cold Storage","text":"

In order to configure the Cold Storage Tier for the Database it is necessary to add a coldStorageTier to the KineticaCluster CR. As we are using S3 Buckets for storage, we then require a coldStorageS3 entry, which allows us to set the awsSecretAccessKey & awsAccessKeyId that were generated when the tenant was created in Minio.

If we look in the gpudb namespace we can see that Minio created a Kubernetes service called minio. As the tenant was created with --disable-tls, it serves plain HTTP on port 80.

In the coldStorageS3 we need to add an endpoint field which contains the minio service name and the namespace gpudb i.e. minio.gpudb.svc.cluster.local.

KineticaCluster coldStorageTier S3 Configuration
spec:\n  gpudbCluster:\n    config:\n      tieredStorage:\n        coldStorageTier:\n          coldStorageType: s3\n          coldStorageS3:\n            basePath: gpudb/cold-storage/\n            bucketName: kinetica-cold-storage\n            endpoint: minio.gpudb.svc.cluster.local:80\n            limit: \"32Gi\"\n            useHttps: false\n            useManagedCredentials: false\n            useVirtualAddressing: false\n            awsSecretAccessKey: 6rLaOOddP3KStwPDhf47XLHREPdBqdav\n            awsAccessKeyId: VvlP5rHbQqzcYPHG\n      tieredStrategy:\n        default: VRAM 1, RAM 5, PERSIST 5, COLD0 10\n
","tags":["Advanced","Development","Installation","Storage"]},{"location":"Advanced/nginx_ingress_config/","title":"nginx-ingress Ingress Configuration","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/nginx_ingress_config/#coming-soon","title":"Coming Soon","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Architecture/","title":"Architecture","text":"

Kinetica is a distributed, vectorized, memory-first, columnar database with tiered storage that is optimized for high speed and performance – particularly on streaming analytics and geospatial workloads.

Kinetica has been uniquely designed for fast and flexible analytics on large volumes of changing data with incredible performance.

  • Kinetica Database Architecture

    Explore the core database architecture. Core Database Architecture

  • Kinetica for Kubernetes Architecture

    Explore how Kinetica is deployed on Kubernetes. Kubernetes Architecture

","tags":["Architecture"]},{"location":"Architecture/db_architecture/","title":"Architecture","text":"

Kinetica is a distributed, vectorized, memory-first, columnar database with tiered storage that is optimized for high speed and performance – particularly on streaming analytics and geospatial workloads.

Kinetica has been uniquely designed for fast and flexible analytics on large volumes of changing data with incredible performance.

","tags":["Architecture"]},{"location":"Architecture/db_architecture/#database-architecture","title":"Database Architecture","text":"","tags":["Architecture"]},{"location":"Architecture/db_architecture/#scale-out-architecture","title":"Scale-out Architecture","text":"

Kinetica has a distributed architecture that has been designed for data processing at scale. A standard cluster consists of identical nodes run on commodity hardware. A single node is chosen to be the head aggregation node.

A cluster can be scaled up at any time to increase storage capacity and processing power, with near-linear scale processing improvements for most operations. Sharding of data can be done automatically, or specified and optimized by the user.

","tags":["Architecture"]},{"location":"Architecture/db_architecture/#distributed-ingest-query","title":"Distributed Ingest & Query","text":"

Kinetica uses a shared-nothing data distribution across worker nodes. The head node receives a query and breaks it down into small tasks that can be spread across worker nodes. To avoid bottlenecks at the head node, ingestion can also be organized in parallel by all the worker nodes. Kinetica is able to distribute data client-side before sending it to designated worker nodes. This streamlines communication and processing time.

For the client application, there is no need to be aware of how many nodes are in the cluster, where they are, or how the data is distributed across them!

","tags":["Architecture"]},{"location":"Architecture/db_architecture/#column-oriented","title":"Column Oriented","text":"

Columnar data structures lend themselves to low-latency reads of data. But from a user's perspective, Kinetica behaves very similarly to a standard relational database – with tables of rows and columns – and it can be queried with SQL or through APIs. Available column types include the standard base types (int, long, float, double, string, & bytes), as well as numerous sub-types supporting date/time, geospatial, and other data forms.

","tags":["Architecture"]},{"location":"Architecture/db_architecture/#vectorized-functions","title":"Vectorized Functions","text":"

Vectorization is Kinetica's secret sauce and the key feature that underpins its blazing fast performance.

Advanced vectorized kernels are optimized to use vectorized CPUs and GPUs for faster performance. The query engine automatically assigns tasks to the processor where they will be most performant. Aggregations, filters, window functions, joins and geospatial rendering are some of the capabilities that see performance improvements.

","tags":["Architecture"]},{"location":"Architecture/db_architecture/#memory-first-tiered-storage","title":"Memory-First, Tiered Storage","text":"

Tiered storage makes it possible to optimize where data lives for performance and cost. Recent data (such as all data where the timestamp is within the last 2 weeks) can be held in-memory, while older data can be moved to disk, or even to external storage services.

Kinetica operates on an entire data corpus by intelligently managing data across GPU memory, system memory, SIMD, disk / SSD, HDFS, and cloud storage like S3 for optimal performance.

Kinetica can also query and process data stored in data lakes, joining it with data managed by Kinetica in highly parallelized queries.

","tags":["Architecture"]},{"location":"Architecture/db_architecture/#performant-key-value-lookup","title":"Performant Key-Value Lookup","text":"

Kinetica is able to generate distributed key-value lookups, from columnar data, for high-performance and concurrency. Sharding logic is embedded directly within client APIs enabling linear scale-out as clients can lookup data directly from the node where the data lives.

","tags":["Architecture"]},{"location":"Architecture/kinetica_for_kubernetes_architecture/","title":"Kubernetes Architecture","text":"","tags":["Architecture"]},{"location":"Architecture/kinetica_for_kubernetes_architecture/#coming-soon","title":"Coming Soon","text":"","tags":["Architecture"]},{"location":"GettingStarted/","title":"Getting Started","text":"
  • Set up in 15 minutes (local install)

    Install the Kinetica DB locally on Kind or k3s with helm to get up and running in minutes (Dev/Test).

    Quickstart

  • Prepare to Install

    What you need to know & do before beginning an installation.

    Preparation and Prerequisites

  • Production Installation

    Install the Kinetica DB with helm to get up and running quickly (Production).

    Installation

  • Channel Your Inner Ninja

    Advanced Installation Topics which go beyond the basic installation.

    Advanced Topics

","tags":["Getting Started","Installation"]},{"location":"GettingStarted/aks/","title":"Azure AKS Specifics","text":"

This page covers any Microsoft Azure AKS cluster installation specifics.

","tags":["AKS","Getting Started"]},{"location":"GettingStarted/eks/","title":"Amazon EKS Specifics","text":"

This page covers any Amazon EKS kubernetes cluster installation specifics.

","tags":["EKS","Getting Started","Storage"]},{"location":"GettingStarted/eks/#ebs-csi-driver","title":"EBS CSI driver","text":"

Warning

Make sure you have enabled the ebs-csi driver in your EKS cluster. This is required for the default storage class to work.

Please refer to this AWS documentation for more information.

","tags":["EKS","Getting Started","Storage"]},{"location":"GettingStarted/helm_repo_add/","title":"Helm repo add","text":"Add Kinetica Operators Chart Repo
helm repo add kinetica-operators https://kineticadb.github.io/charts/latest\n
"},{"location":"GettingStarted/installation/","title":"Kinetica for Kubernetes Installation","text":"
  • CPU Only Installation

    Install the Kinetica DB to run on Intel, AMD or ARM CPUs with no GPU acceleration. CPU

  • CPU & GPU Installation

    Install the Kinetica DB to run on nodes with nVidia GPU acceleration. Optionally enable Kinetica On-Prem SQLAssistant (LLM). GPU

","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/","title":"Installation - CPU Only","text":"

For managed Kubernetes solutions (AKS, EKS, GKE) or on-prem (kubeadm) Kubernetes variants, follow this generic guide to install the Kinetica Operators, Database and Workbench.

Preparation & Prerequisites

Please make sure you have followed the Preparation & Prerequisites steps

","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/#install-the-helm-chart","title":"Install the helm chart","text":"

Run the following Helm install command after substituting values from Preparation & Prerequisites

Helm install kinetica-operators
helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n
","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/#check-installation-progress","title":"Check installation progress","text":"

After a few moments, follow the progression of the main database pod startup with:

Monitor the Kinetica installation progress
kubectl -n gpudb get po gpudb-0 -w\n

until it reaches \"gpudb-0 3/3 Running\", at which point the database should be ready and all other software installed in the cluster. You may have to run this command in a different terminal if the helm command from the previous step has not yet returned to the system prompt. Once running, you can quit this kubectl watch command using Ctrl+C.

error no pod named gpudb-0

If you receive an error message running kubectl -n gpudb get po gpudb-0 -w informing you that no pod named gpudb-0 exists, please check that the OpenLDAP pod is running by running

Check OpenLDAP status
kubectl -n gpudb get pods\nkubectl -n gpudb describe pod openldap-5f87f77c8b-trpmf\n

where the pod name openldap-5f87f77c8b-trpmf is that shown when running kubectl -n gpudb get pods

Validate whether the pod is waiting for its Persistent Volume Claim/Persistent Volume to be created and bound to the pod.
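A minimal sketch of those checks; a PVC stuck in Pending usually points at a missing or non-default StorageClass: -

Check PVC binding
kubectl -n gpudb get pvc\n\nkubectl -n gpudb get events --sort-by=.lastTimestamp | tail -n 20\n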

","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/#accessing-the-kinetica-installation","title":"Accessing the Kinetica installation","text":"","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/#target-platform-specifics","title":"Target Platform Specifics","text":"cloudlocal - devbare metal - prod

If you are installing into a managed Kubernetes environment and the NGINX ingress controller that is installed as part of this install creates a LoadBalancer service, you may need to associate the LoadBalancer with the domain you plan to use.

As of now, the kinetica-operator chart installs NGINX ingress controller. So after the installation is complete, you may need to edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name.

Option 1: Use the LoadBalancer domain Set your FQDN in Kinetica

kubectl get svc -n kinetica-system\n# look at the loadbalancer dns name, copy it\n\nkubectl -n gpudb edit $(kubectl -n gpudb get kc -o name)\n# replace local.kinetica with the loadbalancer dns name\nkubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)\n# replace local.kinetica with the loadbalancer dns name\n# save and exit\n# you should be able to access the workbench from the loadbalancer dns name\n

Option 2: Use your custom domain Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.

If you are installing on a local machine which does not have a domain name, you can add the following entry to your /etc/hosts file or equivalent:

Configure local access - /etc/hosts
127.0.0.1  local.kinetica\n

Note

The default chart configuration points to local.kinetica but this is configurable.

Installing on bare metal machines which do not have an external hardware loadbalancer requires an Ingress controller along with a software loadbalancer in order to be accessible.

Kinetica for Kubernetes has been tested with kube-vip

","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/","title":"Installation - CPU with GPU Acceleration","text":"

For managed Kubernetes solutions (AKS, EKS, GKE) or on-prem (kubeadm) Kubernetes variants, follow this generic guide to install the Kinetica Operators, Database and Workbench.

Preparation & Prerequisites

Please make sure you have followed the Preparation & Prerequisites steps

","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#install-via-the-kinetica-operators-helm-chart","title":"Install via the kinetica-operators Helm Chart","text":"","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#gpu-cluster-with-remote-sqlassistant","title":"GPU Cluster with Remote SQLAssistant","text":"

Run the following Helm install command after substituting values from Preparation & Prerequisites

Helm install kinetica-operators (No On-Prem SQLAssistant)
helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n
","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#gpu-cluster-with-on-prem-sqlassistant","title":"GPU Cluster with On-Prem SQLAssistant","text":"

or to enable SQLAssistant to be deployed and run 'On-Prem' i.e. in the same cluster

Helm install kinetica-operators (With On-Prem SQLAssistant)
helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\" \\\n--set db.gpudbCluster.config.ai.apiProvider=\"kineticallm\"\n

On-Prem Kinetica SQLAssistant - Nodes Groups, GPU Counts & VRAM Memory

To run the Kinetica SQLAssistant locally requires additional GPUs to be available in a separate Node Group labeled app.kinetica.com/pool=compute-llm. The On-Prem Kinetica LLM requires 40GB of GPU VRAM to run; the number of GPUs automatically allocated to the SQLAssistant pod will therefore ensure that 40GB of VRAM is available, e.g. 1x A100 GPU or 2x A10G GPUs.

Label Kubernetes Nodes for LLM
kubectl label node k8snode3 app.kinetica.com/pool=compute-llm\n
","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#check-installation-progress","title":"Check installation progress","text":"

After a few moments, follow the progression of the main database pod startup with:

Monitor the Kinetica installation progress
kubectl -n gpudb get po gpudb-0 -w\n

until it reaches \"gpudb-0 3/3 Running\", at which point the database should be ready and all other software installed in the cluster. You may have to run this command in a different terminal if the helm command from the previous step has not yet returned to the system prompt. Once running, you can quit this kubectl watch command using Ctrl+C.

error no pod named gpudb-0

If you receive an error message running kubectl -n gpudb get po gpudb-0 -w informing you that no pod named gpudb-0 exists, please check that the OpenLDAP pod is running by running

Check OpenLDAP status
kubectl -n gpudb get pods\nkubectl -n gpudb describe pod openldap-5f87f77c8b-trpmf\n

where the pod name openldap-5f87f77c8b-trpmf is that shown when running kubectl -n gpudb get pods

Validate whether the pod is waiting for its Persistent Volume Claim/Persistent Volume to be created and bound to the pod.

","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#accessing-the-kinetica-installation","title":"Accessing the Kinetica installation","text":"","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#target-platform-specifics","title":"Target Platform Specifics","text":"cloudlocal - devbare metal - prod

If you are installing into a managed Kubernetes environment and the NGINX ingress controller that is installed as part of this install creates a LoadBalancer service, you may need to associate the LoadBalancer with the domain you plan to use.

As of now, the kinetica-operator chart installs NGINX ingress controller. So after the installation is complete, you may need to edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name.

Option 1: Use the LoadBalancer domain Set your FQDN in Kinetica

kubectl get svc -n kinetica-system\n# look at the loadbalancer dns name, copy it\n\nkubectl -n gpudb edit $(kubectl -n gpudb get kc -o name)\n# replace local.kinetica with the loadbalancer dns name\nkubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)\n# replace local.kinetica with the loadbalancer dns name\n# save and exit\n# you should be able to access the workbench from the loadbalancer dns name\n

Option 2: Use your custom domain Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.

If you are installing on a local machine which does not have a domain name, you can add the following entry to your /etc/hosts file or equivalent:

Configure local access - /etc/hosts
127.0.0.1  local.kinetica\n

Note

The default chart configuration points to local.kinetica but this is configurable.

Installing on bare metal machines which do not have an external hardware loadbalancer requires an Ingress controller along with a software loadbalancer in order to be accessible.

Kinetica for Kubernetes has been tested with kube-vip

","tags":["Installation"]},{"location":"GettingStarted/local_kinetica_etc_hosts/","title":"Local kinetica etc hosts","text":"

FQDN or Local Access

By default we create an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace the FQDN in the values.yaml with the correct domain name, or set it by adding --set on the helm command line.

If you are on a local machine which does not have a domain name, add the following entry to your /etc/hosts file or equivalent.

Configure local access - /etc/hosts
127.0.0.1  local.kinetica\n
"},{"location":"GettingStarted/note_additional_gpu_sqlassistant/","title":"Note additional gpu sqlassistant","text":"

On-Prem Kinetica SQLAssistant - Nodes Groups, GPU Counts & VRAM Memory

Running the Kinetica SQLAssistant locally requires additional GPUs to be available in a separate Node Group labeled app.kinetica.com/pool=compute-llm. The On-Prem Kinetica LLM requires 40GB of GPU VRAM, so the number of GPUs automatically allocated to the SQLAssistant pod will be chosen to ensure that 40GB of VRAM is available, e.g. 1x A100 GPU or 2x A10G GPUs.

Label Kubernetes Nodes for LLM
kubectl label node k8snode3 app.kinetica.com/pool=compute-llm\n
"},{"location":"GettingStarted/preparation_and_prerequisites/","title":"Preparation & Prerequisites","text":"

Checks & steps to ensure a smooth installation.

Obtain a Kinetica License Key

A product license key will be required for install. Please contact Kinetica Support to request a trial key.

Failing to provide a license key at installation time will prevent the DB from starting.

","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#preparation-and-prerequisites","title":"Preparation and prerequisites","text":"

Free Resources

Your Kubernetes cluster version should be >= 1.22.x and have a minimum of 8 CPUs, 8GB RAM and SSD or SATA 7200RPM hard drive(s) with 4x memory capacity.

GPU Support

For GPU enabled clusters the cards below have been tested in large-scale production environments and provide the best performance for the database.

GPU            Driver
P4/P40/P100    525.X (or higher)
V100           525.X (or higher)
T4             525.X (or higher)
A10/A40/A100   525.X (or higher)","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#kubernetes-cluster-connectivity","title":"Kubernetes Cluster Connectivity","text":"

Installation requires Helm3, the Kubernetes CLI kubectl, and access to an on-prem or CSP managed Kubernetes cluster.

The context for the desired target cluster must be selected from your ~/.kube/config file and set via the KUBECONFIG environment variable or kubectl ctx (if installed). Check to see if you have the correct context with,

show the current kubernetes context
kubectl config current-context\n

and that you can access this cluster correctly with,

list kubernetes cluster nodes
kubectl get nodes\n
Get Nodes

If you do not see a list of nodes for your K8s cluster the helm installation will not work. Please check your Kubernetes installation or access credentials (kubeconfig).
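
If the wrong context is active you can list the available contexts and switch; the context name my-cluster below is illustrative:

Switch Kubernetes context
kubectl config get-contexts\nkubectl config use-context my-cluster\n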

Kinetica Images for an Air-Gapped Environment

If you are installing Kinetica with Helm in an air-gapped environment you will either need a Registry Proxy to pass the requests through or to download the images and push them to your internal Registry.

For information on ways to transfer the files into an air-gapped environment see here.

","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#required-container-images","title":"Required Container Images","text":"","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#dockerio-required-kinetica-images-for-all-installations","title":"docker.io (Required Kinetica Images for All Installations)","text":"
  • docker.io/kinetica/kinetica-k8s-operator:v7.2.0-3.rc-3
    • docker.io/kinetica/kinetica-k8s-cpu:v7.2.0-3.rc-3 or
    • docker.io/kinetica/kinetica-k8s-cpu-avx512:v7.2.0-3.rc-3 or
    • docker.io/kinetica/kinetica-k8s-gpu:v7.2.0-3.rc-3
  • docker.io/kinetica/workbench-operator:v7.2.0-3.rc-3
  • docker.io/kinetica/workbench:v7.2.0-3.rc-3
  • docker.io/kinetica/kinetica-k8s-monitor:v7.2.0-3.rc-3
  • docker.io/kinetica/busybox:v7.2.0-3.rc-3
  • docker.io/kinetica/fluent-bit:v7.2.0-3.rc-3
  • docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#nvcrio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"nvcr.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)","text":"
  • nvcr.io/nvidia/gpu-operator:v23.9.1
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#registryk8sio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"registry.k8s.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)","text":"
  • registry.k8s.io/nfd/node-feature-discovery:v0.14.2
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#dockerio-required-supporting-images","title":"docker.io (Required Supporting Images)","text":"
  • docker.io/bitnami/openldap:2.6.7
  • docker.io/alpine/openssl:latest (used by bitnami/openldap)
  • docker.io/otel/opentelemetry-collector-contrib:0.95.0
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#quayio-required-supporting-images","title":"quay.io (Required Supporting Images)","text":"
  • quay.io/brancz/kube-rbac-proxy:v0.14.2
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#optional-container-images","title":"Optional Container Images","text":"

These images are only required if certain features are enabled as part of the Helm installation: -

  • CertManager
  • ingress-nginx
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#quayio-optional-supporting-images","title":"quay.io (Optional Supporting Images)","text":"
  • quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#registryk8sio-optional-supporting-images","title":"registry.k8s.io (Optional Supporting Images)","text":"
  • registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing Ingress nGinx via Kinetica Helm Chart)
  • registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#which-kinetica-core-image-do-i-use","title":"Which Kinetica Core Image do I use?","text":"Container Image Intel (AMD64) Intel (AMD64 AVX512) Amd (AMD64) Graviton (aarch64) Apple Silicon (aarch64) kinetica-k8s-cpu (1) kinetica-k8s-cpu-avx512 kinetica-k8s-gpu (2) (2) (2)
  1. It is preferable on an Intel AVX512 enabled CPU to use the kinetica-k8s-cpu-avx512 container image
  2. With a supported nVidia GPU.
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#label-the-kubernetes-nodes","title":"Label the Kubernetes Nodes","text":"

Kinetica requires some of the Kubernetes Nodes to be labeled as it splits some of the components into different deployment 'pools'. This enables different physical node types to be present in the Kubernetes Cluster allowing us to target which Kinetica components go where.

e.g. for a GPU installation some nodes in the cluster will have GPUs and others are CPU only. We can put the DB on the GPU nodes and our infrastructure components on CPU only nodes.

CPU

The Kubernetes cluster nodes selected to host the Kinetica infrastructure pods i.e. non-DB Pods require the following label app.kinetica.com/pool=infra.

Label the Infrastructure Nodes
    kubectl label node k8snode1 app.kinetica.com/pool=infra\n

whilst the Kubernetes cluster nodes selected to host the Kinetica DB Pods require the following label app.kinetica.com/pool=compute.

Label the Database Nodes
    kubectl label node k8snode2 app.kinetica.com/pool=compute\n

GPU

The Kubernetes cluster nodes selected to host the Kinetica infrastructure pods i.e. non-DB Pods require the following label app.kinetica.com/pool=infra.

Label the Infrastructure Nodes
    kubectl label node k8snode1 app.kinetica.com/pool=infra\n

whilst the Kubernetes cluster nodes selected to host the Kinetica DB Pods require the following label app.kinetica.com/pool=compute-gpu.

Label the Database Nodes
    kubectl label node k8snode2 app.kinetica.com/pool=compute-gpu\n

On-Prem Kinetica SQLAssistant - Nodes Groups, GPU Counts & VRAM Memory

Running the Kinetica SQLAssistant locally requires additional GPUs to be available in a separate Node Group labeled app.kinetica.com/pool=compute-llm. The On-Prem Kinetica LLM requires 40GB of GPU VRAM, so the number of GPUs automatically allocated to the SQLAssistant pod will be chosen to ensure that 40GB of VRAM is available, e.g. 1x A100 GPU or 2x A10G GPUs.

Label Kubernetes Nodes for LLM
kubectl label node k8snode3 app.kinetica.com/pool=compute-llm\n

Pods Not Scheduling

If the Kubernetes nodes are not labeled you may have a situation where Kinetica pods do not schedule and sit in a 'Pending' state.
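
To confirm the labels are in place and to spot any unscheduled pods, the following checks may help:

Check node labels & Pending pods
kubectl get nodes -L app.kinetica.com/pool\nkubectl -n gpudb get pods --field-selector=status.phase=Pending\n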

","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#install-the-kinetica-operators-chart","title":"Install the kinetica-operators chart","text":"

This chart will install the Kinetica K8s operators together with a default configured database and workbench UI.

","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#add-the-kinetica-chart-repository","title":"Add the Kinetica chart repository","text":"

Add the repo locally as kinetica-operators:

Helm repo add
helm repo add kinetica-operators https://kineticadb.github.io/charts/latest\n
Helm Repo Add

","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#obtain-the-default-helm-values-file","title":"Obtain the default Helm values file","text":"

For the generic Kubernetes install use the following values file without modification. Advanced users with specific requirements may need to adjust parameters in this file.

Helm values.yaml download
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.2/kinetica-operators/values.onPrem.k8s.yaml\n
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#determine-the-following-prior-to-the-chart-install","title":"Determine the following prior to the chart install","text":"

Default Admin User

The default admin user in the Helm chart is kadmin but this is configurable. Non-ASCII characters and typographical symbols in the password must be escaped with a \"\\\". For example, --set dbAdminUser.password=\"MyPassword\\!\"

  1. Obtain a LICENSE-KEY as described in the introduction above.
  2. Choose a PASSWORD for the initial administrator user
  3. As the storage class name varies between K8s flavors and/or there can be multiple, this must be prescribed in the chart installation. Obtain the DEFAULT-STORAGE-CLASS name with the command:

Find the default storageclass
kubectl get sc -o name \n
List StorageClass

Use the name found after the /. For example, in storageclass.storage.k8s.io/local-path use \"local-path\" as the parameter.
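
As a convenience, assuming a POSIX shell with sed available, the name portion can be extracted in one step:

Extract the storage class name
kubectl get sc -o name | sed 's|storageclass.storage.k8s.io/||'\n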

Amazon EKS

If installing on Amazon EKS See here

","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#planning-access-to-your-kinetica-cluster","title":"Planning access to your Kinetica Cluster","text":"Existing Ingress Controller?

If you have an existing Ingress Controller in your Kubernetes cluster and do not want Kinetica to install an ingress-nginx to expose its endpoints then you can disable the ingress-nginx installation in the values.yaml by editing the file and setting install: true to install: false: -

YAML
nodeSelector: {}\ntolerations: []\naffinity: {}\n\ningressNginx:\n    install: false\n
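
Alternatively, the same setting should be controllable from the Helm command line, assuming the chart wires this value through in the usual way:

Disable ingress-nginx via --set
helm -n kinetica-system upgrade -i kinetica-operators kinetica-operators/kinetica-operators --values values.onPrem.k8s.yaml --set ingressNginx.install=false\n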
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/quickstart/","title":"Quickstart","text":"

For the quickstart we have examples for Kind or k3s.

  • Kind - is suitable for CPU only installations.
  • k3s - is suitable for CPU or GPU installations.

Kubernetes >= 1.25

The current version of the chart supports Kubernetes version 1.25 and above.

","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#please-select-your-target-kubernetes-variant","title":"Please select your target Kubernetes variant:","text":"kind k3s

Default User

The username as per the values file mentioned above is kadmin and the password is Kinetica1234!

","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#kind-kubernetes-in-docker-kindsigsk8sio","title":"Kind (kubernetes in docker kind.sigs.k8s.io)","text":"

This installation in a kind cluster is for trying out the operators and the database in a non-production environment.

CPU Only

This method currently only supports installing a CPU version of the database.

Please contact Kinetica Support to request a trial key.

","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#create-kind-cluster-129","title":"Create Kind Cluster 1.29","text":"Create a new Kind Cluster
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.2/kinetica-operators/kind.yaml\nkind create cluster --name kinetica --config kind.yaml\n
List Kind clusters
 kind get clusters\n

Set Kubernetes Context

Please set your Kubernetes Context to kind-kinetica before performing the following steps.
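
kind names its contexts kind-<cluster-name>, so for the cluster created above:

Set the kind context
kubectl config use-context kind-kinetica\n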

","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#kind-install-kinetica-operators-including-a-sample-db-to-try-out","title":"Kind - Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.kind.yaml. This installs the operators and a simple DB with a Workbench installation, for a non-production tryout.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.

","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#kind-install-the-kinetica-operators-chart","title":"Kind - Install the Kinetica-Operators Chart","text":"Add Kinetica Operators Chart Repo
helm repo add kinetica-operators https://kineticadb.github.io/charts/latest\n

FQDN or Local Access

By default we create an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace/set the FQDN in the values.yaml with the correct domain name or set it on the command line with --set.

If you are on a local machine which does not have a domain name, add the following entry to your /etc/hosts file or equivalent.

Configure local access - /etc/hosts
127.0.0.1  local.kinetica\n
Get & install the Kinetica-Operators Chart
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.2/kinetica-operators/values.onPrem.kind.yaml\n\nhelm -n kinetica-system upgrade -i kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n

or if you have been asked by the Kinetica Support team to try a development version

Using a development version
helm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system upgrade -i kinetica-operators kinetica-operators/kinetica-operators/ --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 72.2.2\n

Accessing the Workbench

You should be able to access the workbench at http://local.kinetica
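
A quick non-browser check that the ingress is answering (a 200 or a redirect response indicates the endpoint is up):

Check the Workbench endpoint
curl -I http://local.kinetica\n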

","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#k3s-k3sio","title":"k3s (k3s.io)","text":"","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#install-k3s-129","title":"Install k3s 1.29","text":"Install k3s
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable=traefik  --node-name kinetica-master --token 12345\" K3S_KUBECONFIG_OUTPUT=~/.kube/config_k3s K3S_KUBECONFIG_MODE=644 INSTALL_K3S_VERSION=v1.29.2+k3s1 sh -\n

Once installed we need to set the current Kubernetes context to point to the newly created k3s cluster.

Select if you want local or remote access to the Kubernetes Cluster: -

Local Access · Remote Access

For local-only access to the cluster we can simply set the KUBECONFIG environment variable:

Set kubectl context
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml\n

For remote access i.e. outside the host/VM k3s is installed on: -

Copy /etc/rancher/k3s/k3s.yaml to ~/.kube/config on your machine located outside the cluster. Then edit the file and replace the value of the server field with the IP or name of your K3s server.

Copy the kube config and set the context
sudo chmod 600 /etc/rancher/k3s/k3s.yaml\nmkdir -p ~/.kube\nsudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config\nsudo chown \"${USER:=$(/usr/bin/logname)}:$USER\" ~/.kube/config\n# Edit the ~/.kube/config server field with the IP or name of your K3s server here\nexport KUBECONFIG=~/.kube/config\n
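
As k3s writes the server address as https://127.0.0.1:6443 by default, the edit can be scripted; replace <k3s-server-ip> with the IP or name of your K3s server:

Rewrite the kubeconfig server field
sed -i 's|https://127.0.0.1:6443|https://<k3s-server-ip>:6443|' ~/.kube/config\n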
","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#k3s-install-kinetica-operators-including-a-sample-db-to-try-out","title":"K3s - Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.k3s.yaml. This installs the operators and a simple DB with a Workbench installation, for a non-production tryout.

FQDN or Local Access

By default we create an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace/set the FQDN in the values.yaml with the correct domain name or set it on the command line with --set.

If you are on a local machine which does not have a domain name, add the following entry to your /etc/hosts file or equivalent.

Configure local access - /etc/hosts
127.0.0.1  local.kinetica\n
","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#k3s-install-the-kinetica-operators-chart-cpu","title":"K3S - Install the Kinetica-Operators Chart (CPU)","text":"Add Kinetica Operators Chart Repo
helm repo add kinetica-operators https://kineticadb.github.io/charts/latest\n
Download Template values.yaml
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.2/kinetica-operators/values.onPrem.k3s.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n

or if you have been asked by the Kinetica Support team to try a development version

Using a development version
helm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n
","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#k3s-install-the-kinetica-operators-chart-gpu","title":"K3S - Install the Kinetica-Operators Chart (GPU)","text":"

If you wish to try out the GPU capabilities, you can use the following values file, provided you are on an NVIDIA GPU-capable machine.

k3s GPU Installation
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.2/kinetica-operators/values.onPrem.k3s.gpu.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.gpu.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n

Accessing the Workbench

You should be able to access the workbench at http://local.kinetica

","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#uninstall-k3s","title":"Uninstall k3s","text":"uninstall k3s
/usr/local/bin/k3s-uninstall.sh\n
","tags":["Development","Getting Started","Installation"]},{"location":"Help/changing_the_fqdn/","title":"How to change the Clusters FQDN","text":"","tags":["Configuration","Support"]},{"location":"Help/changing_the_fqdn/#coming-soon","title":"Coming Soon","text":"","tags":["Configuration","Support"]},{"location":"Help/faq/","title":"Frequently Asked Questions","text":"","tags":["Support"]},{"location":"Help/faq/#coming-soon","title":"Coming Soon","text":"","tags":["Support"]},{"location":"Help/help_and_tutorials/","title":"Help & Tutorials","text":"
  • Tutorials

    Tutorials

  • Help

    Help

","tags":["Support"]},{"location":"Help/help_and_tutorials/#coming-soon","title":"Coming Soon","text":"","tags":["Support"]},{"location":"Help/help_index/","title":"Creating Users, Roles, Schemas and other Kinetica DB Objects","text":"","tags":["Support"]},{"location":"Help/help_index/#coming-soon","title":"Coming Soon","text":"","tags":["Support"]},{"location":"Monitoring/logs/","title":"Log Collection & Display","text":"

It is possible to forward/serve the Kinetica on Kubernetes logs via an OpenTelemetry [OTEL] collector.

By default an OpenTelemetry Collector is deployed in the kinetica-system namespace as part of the Helm install of the kinetica-operators Helm chart along with a Kubernetes ConfigMap to configure this collector. The ConfigMap is in the kinetica-system namespace and is called otel-collector-conf.
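
The active collector configuration can be inspected directly:

View the OTEL collector ConfigMap
kubectl -n kinetica-system get configmap otel-collector-conf -o yaml\n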

Detailed otel-collector-conf setup

For more details on the Kinetica installed OTEL Collector please see here.

There are many supported mechanisms to expose the logs; here is one possibility: -

  • lokiexporter - Exports data via HTTP to Loki.

Tip

For a full list of supported OTEL exporters, including those for GrafanaCloud, AWS, Azure, Logz.io, Splunk and many databases please see here

","tags":["Operations","Monitoring"]},{"location":"Monitoring/logs/#lokiexporter-otel-collector-exporter","title":"lokiexporter OTEL Collector Exporter","text":"

Exports data via HTTP to Loki.

Example Configuration
exporters:\n  loki:\n    endpoint: https://loki.example.com:3100/loki/api/v1/push\n    default_labels_enabled:\n      exporter: false\n      job: true\n

For full details on configuring the OTEL collector exporter lokiexporter see here.
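
An exporter only takes effect once it is referenced from a pipeline; a minimal sketch, assuming the syslog receiver and batch processor described on this page are defined:

Example Pipeline Configuration
service:\n  pipelines:\n    logs:\n      receivers: [syslog]\n      processors: [batch]\n      exporters: [loki]\n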

","tags":["Operations","Monitoring"]},{"location":"Monitoring/metrics/","title":"Metrics Collection & Display","text":"

It is possible to forward/serve the Kinetica on Kubernetes metrics via an OpenTelemetry [OTEL] collector.

By default an OpenTelemetry Collector is deployed in the kinetica-system namespace as part of the Helm install of the kinetica-operators Helm chart along with a Kubernetes ConfigMap to configure this collector. The ConfigMap is in the kinetica-system namespace and is called otel-collector-conf.

Detailed otel-collector-conf setup

For more details on the Kinetica installed OTEL Collector please see here.

There are many supported mechanisms to expose the metrics; here are a few possibilities: -

  • prometheusremotewriteexporter - Prometheus Remote Write Exporter sends OpenTelemetry metrics to Prometheus remote write compatible backends.
  • prometheusexporter - allows the metrics to be scraped by a Prometheus server

Tip

For a full list of supported OTEL exporters, including those for Grafana Cloud, AWS, Azure and many databases please see here

","tags":["Operations","Monitoring"]},{"location":"Monitoring/metrics/#prometheusremotewriteexporter-prometheus-otel-remote-write-exporter","title":"prometheusremotewriteexporter Prometheus OTEL Remote Write Exporter","text":"

prometheusremotewriteexporter OTEL Exporter

Prometheus Remote Write Exporter sends OpenTelemetry metrics to Prometheus remote write compatible backends such as Cortex, Mimir, and Thanos. By default, this exporter requires TLS and offers queued retry capabilities.

Warning

Non-cumulative monotonic, histogram, and summary OTLP metrics are dropped by this exporter.

Example Configuration
exporters:\n  prometheusremotewrite:\n    endpoint: \"https://my-cortex:7900/api/v1/push\"\n    external_labels:\n      label_name1: label_value1\n      label_name2: label_value2\n

For full details on configuring the OTEL collector exporter prometheusremotewriteexporter see here.

","tags":["Operations","Monitoring"]},{"location":"Monitoring/metrics/#prometheusexporter-prometheus-otel-exporter","title":"prometheusexporter Prometheus OTEL Exporter","text":"

Exports data in the Prometheus format, which allows it to be scraped by a Prometheus server.

Example Configuration
exporters:\n  prometheus:\n    endpoint: \"1.2.3.4:1234\"\n    tls:\n      ca_file: \"/path/to/ca.pem\"\n      cert_file: \"/path/to/cert.pem\"\n      key_file: \"/path/to/key.pem\"\n    namespace: test-space\n    const_labels:\n      label1: value1\n      \"another label\": spaced value\n    send_timestamps: true\n    metric_expiration: 180m\n    enable_open_metrics: true\n    add_metric_suffixes: false\n    resource_to_telemetry_conversion:\n      enabled: true\n

For full details on configuring the OTEL collector exporter prometheusexporter see here.
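
On the Prometheus server side a matching scrape job then points at the exporter endpoint; a minimal sketch assuming the example endpoint above:

Example Prometheus scrape_config
scrape_configs:\n  - job_name: 'kinetica-otel'\n    static_configs:\n      - targets: ['1.2.3.4:1234']\n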

","tags":["Operations","Monitoring"]},{"location":"Operations/","title":"Operational Management","text":"
  • Metrics

    Collecting and storing metrics as time series data. Metrics

  • Logs

    Log aggregation. Logs

  • Metric & Log Distribution

    Metrics & Logs can be distributed to other systems using OpenTelemetry. OpenTelemetry

  • Backup & Restore

    Backup & Restore of the Kinetica DB. Backup & Restore

    Note

    This requires Velero to be installed on the Kubernetes Cluster.

  • Reduce Costs

    Suspend & Resume Kinetica for Kubernetes. Suspend & Resume

  • Database Rebalancing

    Kinetica for Kubernetes Data Sharding & Rebalancing. Rebalancing

","tags":["Operations"]},{"location":"Operations/backup_and_restore/","title":"Kinetica for Kubernetes Backup & Restore","text":"

Kinetica for Kubernetes supports Backup & Restore of the installed Kinetica DB by leveraging Velero, which is required to be installed into the same Kubernetes cluster that the kinetica-operators Helm chart is deployed in.

Velero

Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a cloud provider or on-premises.

For Velero installation please see here.

Velero Installation

The kinetica-operators Helm chart does not deploy Velero; it is a prerequisite that it be installed before Backup & Restore will work correctly.

There are two ways to initiate a Backup or Restore: -

  • Workbench Initiated
  • Kubernetes CR Initiated

Preferred Backup/Restore Mechanism

The preferred way to Backup or Restore the Kinetica for Kubernetes DB instance is via Workbench.

","tags":["Operations"]},{"location":"Operations/backup_and_restore/#workbench-initiated-backup-or-restore","title":"Workbench Initiated Backup or Restore","text":"","tags":["Operations"]},{"location":"Operations/backup_and_restore/#home","title":"> Home","text":"

From the Workbench Home page

we need to select the Manage option from the toolbar.

","tags":["Operations"]},{"location":"Operations/backup_and_restore/#manage-cluster-overview","title":"> Manage > Cluster > Overview","text":"

On the Cluster Overview page select the 'Snapshots' tab

","tags":["Operations"]},{"location":"Operations/backup_and_restore/#manage-cluster-snapshots","title":"> Manage > Cluster > Snapshots","text":"","tags":["Operations"]},{"location":"Operations/backup_and_restore/#backup","title":"Backup","text":"

Select the 'Backup Now' button

and the backup will start and you will be able to see the progress

","tags":["Operations"]},{"location":"Operations/backup_and_restore/#restore","title":"Restore","text":"","tags":["Operations"]},{"location":"Operations/backup_and_restore/#kubernetes-cr-initiated-backup-or-restore","title":"Kubernetes CR Initiated Backup or Restore","text":"

The Kinetica DB Operator supports two custom CRs

  • KineticaClusterBackup
  • KineticaClusterRestore

which can be used to perform a Backup of the database and a Restore of Kinetica namespaces.

","tags":["Operations"]},{"location":"Operations/backup_and_restore/#kineticaclusterbackup-cr","title":"KineticaClusterBackup CR","text":"

Submission of a KineticaClusterBackup CR will trigger the Kinetica DB Operator to perform a backup of a Kinetica DB instance.

Kinetica DB Offline

In order to perform a database backup the Kinetica DB needs to be suspended so that Velero has access to the necessary disks. The DB will be stopped & restarted automatically by the Kinetica DB Operator as part of the backup process.

Example KineticaClusterBackup CR yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaClusterBackup\nmetadata:\n  name: kineticaclusterbackup-sample\n  namespace: gpudb\nspec:\n  includedNamespaces:\n    - gpudb\n

The namespace of the backup CR should be different to that of the namespace the Kinetica DB is running in i.e. not gpudb. We recommend using the namespace Velero is deployed into.

Backup names are unique

The name of each KineticaClusterBackup CR must be unique; we therefore suggest including the date + time of the backup in the CR name to ensure uniqueness. Kubernetes CR names have a strict naming format so the specified name must conform to those patterns.

For a detailed description of the KineticaClusterBackup CRD see here
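
A minimal sketch of submitting the CR and watching its status, assuming the yaml above is saved as backup.yaml:

Submit the backup CR
kubectl apply -f backup.yaml\nkubectl get -f backup.yaml -w\n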

","tags":["Operations"]},{"location":"Operations/backup_and_restore/#kineticaclusterrestore-cr","title":"KineticaClusterRestore CR","text":"

The easiest way to perform a restore of Kinetica for Kubernetes is to simply delete the gpudb namespace from the Kubernetes cluster.

Delete the Kinetica DB
kubectl delete ns gpudb\n

Kinetica DB Offline

In order to perform a database restore the Kinetica DB needs to be suspended so that Velero has access to the necessary disks. The DB will be stopped & restarted automatically by the Kinetica DB Operator as part of the restore process.

Example KineticaClusterRestore CR yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaClusterRestore\nmetadata:\n  name: kineticaclusterrestore-sample\n  namespace: gpudb\nspec:\n  backupName: kineticaclusterbackup-sample\n

The namespace of the restore CR should be the same as the namespace the KineticaClusterBackup CR was placed in, i.e. not the namespace the Kinetica DB is running in.

Restore names are unique

The name of each KineticaClusterRestore CR must be unique; we therefore suggest including the date + time of the restore in the CR name to ensure uniqueness. Kubernetes CR names have a strict naming format so the specified name must conform to those patterns.

For a detailed description of the KineticaClusterRestore CRD see here
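
As with backups, the restore CR is submitted with kubectl; a minimal sketch assuming the yaml above is saved as restore.yaml:

Submit the restore CR
kubectl apply -f restore.yaml\nkubectl get -f restore.yaml -w\n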

","tags":["Operations"]},{"location":"Operations/otel/","title":"OTEL Integration for Metric & Log Distribution","text":"

Helm installed OTEL Collector

By default an OpenTelemetry Collector is deployed in the kinetica-system namespace as part of the Helm install of the kinetica-operators Helm chart along with a Kubernetes ConfigMap to configure this collector. The ConfigMap is in the kinetica-system namespace and is called otel-collector-conf.

The Kinetica DB Operators send information to an OpenTelemetry collector. There are two choices: -

  • install an OpenTelemetry collector with the Kinetica Operators Helm chart
  • use an existing provisioned OpenTelemetry collector within the Kubernetes Cluster
","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#install-an-opentelemetry-collector-with-the-kinetica-operators-helm-chart","title":"Install an OpenTelemetry collector with the Kinetica Operators Helm chart","text":"

To enable the Kinetica Operators Helm Chart to deploy an instance of the OpenTelemetry collector into the kinetica-system namespace you need to set the following configuration in the helm values: -

","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#todo-add-helm-config-example-here","title":"TODO add Helm Config Example Here","text":"

A ConfigMap containing the OTEL collector configuration will be generated so that the necessary receivers and processors sections are correctly setup for a Kinetica DB Cluster.

This configuration will: -

receivers:

  • configure a syslog receiver which will receive logs from the Kinetica DB pod.
  • configure a prometheus receiver/scraper which will collect metrics from the Kinetica DB.
  • configure an otlp receiver which will receive trace spans from the Kinetica Operators (Optional).
  • configure the hostmetrics collection of host load & memory usage (Optional).
  • configure the k8s_events collection of Kubernetes Events for the Kinetica namespaces (Optional).

processors:

  • configure attribute processing to set some useful values
  • configure resource processing to set some useful values
","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#syslog-configuration","title":"syslog Configuration","text":"

The OpenTelemetry syslogreceiver documentation can be found here.

","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-receivers-configuration","title":"OTEL Receivers Configuration","text":"YAML
receivers:  \n  syslog:  \n    tcp:  \n      listen_address: \"0.0.0.0:9601\"  \n    protocol: rfc5424  \n
","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-service-configuration","title":"OTEL Service Configuration","text":"

Tip

In order to batch pushes of log data upstream you can use the following processors section in the OTEL configuration.

YAML
processors:  \n  batch:  \n
YAML
service:   \n  pipelines:\n    logs:\n      receivers: [syslog]\n      processors: [resourcedetection, attributes, resource, batch]\n      exporters: ... # Requires configuring for your environment\n
","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otlp-configuration","title":"otlp Configuration","text":"

The default configuration opens both the OTEL gRPC & HTTP listeners.

","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-receivers-configuration_1","title":"OTEL Receivers Configuration","text":"YAML
receivers:\n  otlp:  \n    protocols:  \n      grpc:  \n        endpoint: \"0.0.0.0:4317\"  \n      http:  \n        endpoint: \"0.0.0.0:4318\" \n
","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-service-configuration_1","title":"OTEL Service Configuration","text":"

Tip

In order to batch pushes of trace data upstream you can use the following processors section in the OTEL configuration.

YAML
processors:  \n  batch:  \n
YAML
service:\n  pipelines:\n    traces:\n      receivers: [otlp]\n      processors: [batch]\n      exporters: ... # Requires configuring for your environment\n

exporters

The exporters will need to be manually configured for your specific environment e.g. forwarding logs/metrics to Grafana, Azure Monitor, AWS etc.

Otherwise the data will 'disappear into the ether' and not be relayed upstream.

","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#hostmetrics-configuration-optional","title":"hostmetrics Configuration (Optional)","text":"

The Host Metrics receiver generates metrics about the host system scraped from various sources. This is intended to be used when the collector is deployed as an agent.

The OpenTelemetry hostmetrics documentation can be found here.

","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-receivers-configuration_2","title":"OTEL Receivers Configuration","text":"

hostmetricsreceiver

The OTEL hostmetricsreceiver requires that the running OTEL collector is the 'contrib' version.

YAML
receivers:\n  hostmetrics:  \n    # https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/hostmetricsreceiver  \n    scrapers:  \n      load:  \n      memory: \n
Grafana

The attributes and resource processing enable finer-grained selection using Grafana queries.

","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#k8s_events-configuration-optional","title":"k8s_events Configuration (Optional)","text":"

The Kubernetes Events receiver collects events from the Kubernetes API server. It collects all the new or updated events that come in from the specified namespaces. Below we are collecting events from the two default Kinetica namespaces: -

YAML
receivers:\n  k8s_events:  \n    namespaces: [kinetica-system, gpudb]  \n

The OpenTelemetry k8seventsreceiver documentation can be found here.

","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#use-an-existing-provisioned-opentelemetry-collector","title":"Use an existing provisioned OpenTelemetry Collector","text":"","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#coming-soon","title":"Coming Soon","text":"","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/rebalance/","title":"Kinetica for Kubernetes Data Rebalancing","text":"","tags":["Operations"]},{"location":"Operations/rebalance/#coming-soon","title":"Coming Soon","text":"","tags":["Operations"]},{"location":"Operations/suspend_resume/","title":"Kinetica for Kubernetes Suspend & Resume","text":"

It is possible to suspend Kinetica for Kubernetes, which spins down the DB.

Infrastructure

For each deployment of Kinetica for Kubernetes there are two distinct types of pods: -

  • 'Compute' pods containing the Kinetica DB along with the Statics Pod
  • 'Infra' pods containing the supporting apps, e.g. Workbench, OpenLDAP etc, and the Kinetica Operators.

Whilst Kinetica for Kubernetes is in the Suspended state only the 'Compute' pods are scaled down. The 'Infra' pods remain running in order for Workbench to be able to log in, backup, restore and, in this case, Resume the suspended system.

There are three discrete ways to suspend and resume Kinetica for Kubernetes: -

  • Manually from Workbench
  • Auto-Suspend set in Workbench or from the Helm installation Chart.
  • Manually using a Kubernetes CR
","tags":["Operations"]},{"location":"Operations/suspend_resume/#suspend-manually-from-workbench","title":"Suspend - Manually from Workbench","text":"","tags":["Operations"]},{"location":"Operations/suspend_resume/#suspend-auto-suspend","title":"Suspend - Auto-Suspend","text":"","tags":["Operations"]},{"location":"Operations/suspend_resume/#suspend-manually-using-a-kubernetes-cr","title":"Suspend - Manually using a Kubernetes CR","text":"","tags":["Operations"]},{"location":"Operators/k3s/","title":"Overview","text":"

Kinetica Operators can be installed in any on-prem Kubernetes cluster. This document provides instructions to install the operators in k3s. If you are on another distribution, you should be able to change the values file to suit your environment.

You will need a license key for this to work. Please contact Kinetica Support.

"},{"location":"Operators/k3s/#kinetica-on-k3s-k3sio","title":"Kinetica on k3s (k3s.io)","text":"

The current version of the chart supports Kubernetes version 1.25 and above.

"},{"location":"Operators/k3s/#install-k3s-129","title":"Install k3s 1.29","text":"Bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable=traefik  --node-name kinetica-master --token 12345\" K3S_KUBECONFIG_OUTPUT=~/.kube/config_k3s K3S_KUBECONFIG_MODE=644 INSTALL_K3S_VERSION=v1.29.2+k3s1 sh -\n
"},{"location":"Operators/k3s/#k3s-install-kinetica-operators-including-a-sample-db-to-try-out","title":"K3s -Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.k3s.yaml. This installs the operators and a simple DB with a Workbench installation, for a non-production tryout.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.

If you are on a local machine which does not have a domain name, add the following entry to your /etc/hosts file or equivalent.

Text Only
127.0.0.1 local.kinetica\n
"},{"location":"Operators/k3s/#k3s-install-the-kinetica-operators-chart","title":"K3s - Install the kinetica-operators chart","text":"Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n
"},{"location":"Operators/k3s/#k3s-install-the-kinetica-operators-chart-gpu-capable-machine","title":"K3s - Install the kinetica-operators chart (GPU Capable Machine)","text":"

If you wish to try out the GPU capabilities, you can use the following values file, provided you are on an NVIDIA GPU-capable machine.

Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.gpu.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.gpu.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n

You should be able to access the workbench at http://local.kinetica

The username as per the values file mentioned above is kadmin and the password is Kinetica1234!

"},{"location":"Operators/k3s/#uninstall-k3s","title":"Uninstall k3s","text":"Bash
/usr/local/bin/k3s-uninstall.sh\n
"},{"location":"Operators/k8s/","title":"Overview","text":"

For managed Kubernetes solutions (AKS, EKS, GKE) or other on-prem K8s flavors, follow this generic guide to install the Kinetica Operators, Database and Workbench. A product license key will be required for install. Please contact Kinetica Support to request a trial key.

"},{"location":"Operators/k8s/#preparation-and-prerequisites","title":"Preparation and prerequisites","text":"

Installation requires Helm3 and access to an on-prem or CSP managed Kubernetes cluster. kubectl is optional but highly recommended. The context for the desired target cluster must be selected from your ~/.kube/config file or set via the KUBECONFIG environment variable. Check to see if you have the correct context with,

Bash
kubectl config current-context\n

and that you can access this cluster correctly with,

Bash
kubectl get nodes\n

If you do not see a list of nodes for your K8s cluster the helm installation will not work. Please check your Kubernetes installation or access credentials (kubeconfig).

"},{"location":"Operators/k8s/#install-the-kinetica-operators-chart","title":"Install the kinetica-operators chart","text":"

This chart will install the Kinetica K8s operators together with a default configured database and workbench UI.

If you are installing into a managed Kubernetes environment and the NGINX ingress controller that is installed as part of this install creates a LoadBalancer service, you may need to associate the LoadBalancer with the domain you plan to use.

Alternatively, if you are installing on a local machine which does not have a domain name, you can add the following entry to your /etc/hosts file or equivalent:

Bash
127.0.0.1  local.kinetica\n

Note that the default chart configuration points to local.kinetica but this is configurable.

"},{"location":"Operators/k8s/#1-add-the-kinetica-chart-repository","title":"1. Add the Kinetica chart repository","text":"

Add the repo locally as kinetica-operators:

Bash
helm repo add kinetica-operators https://kineticadb.github.io/charts\n
"},{"location":"Operators/k8s/#2-obtain-the-default-helm-values-file","title":"2. Obtain the default Helm values file","text":"

For the generic Kubernetes install use the following values file without modification. Advanced users with specific requirements may need to adjust parameters in this file.

Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k8s.yaml\n
"},{"location":"Operators/k8s/#3-determine-the-following-prior-to-the-chart-install","title":"3. Determine the following prior to the chart install","text":"

(a) Obtain a LICENSE-KEY as described in the introduction above. (b) Choose a PASSWORD for the initial administrator user (Note: the default in the chart for this user is kadmin but this is configurable). Non-ASCII characters and typographical symbols in the password must be escaped with a \"\\\". For example, --set dbAdminUser.password=\"MyPassword\\!\" (c) As the storage class name varies between K8s flavors and/or there can be multiple, this must be prescribed in the chart installation. Obtain the DEFAULT-STORAGE-CLASS name with the command:

Bash
kubectl get sc -o name \n

Use the name found after the /. For example, in \"storageclass.storage.k8s.io/TheName\" use \"TheName\" as the parameter.

"},{"location":"Operators/k8s/#4-install-the-helm-chart","title":"4. Install the helm chart","text":"

Run the following Helm install command after substituting values from section 3 above:

Bash
helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n
"},{"location":"Operators/k8s/#5-check-installation-progress","title":"5. Check installation progress","text":"

After a few moments, follow the progression of the main database pod startup with:

Bash
kubectl -n gpudb get po gpudb-0 -w\n

until it reaches \"gpudb-0 3/3 Running\" at which point the database should be ready and all other software installed in the cluster. You may have to run this command in a different terminal if the helm command from step 4 has not yet returned to the system prompt. Once running, you can quit this kubectl watch command using Ctrl+C.

"},{"location":"Operators/k8s/#6-accessing-the-kinetica-installation","title":"6. Accessing the Kinetica installation","text":""},{"location":"Operators/k8s/#optional-install-a-development-chart-version","title":"(Optional) Install a development chart version","text":"

Find all alternative chart versions with:

Bash
helm search repo kinetica-operators --devel --versions\n

Then append --devel --version [CHART-DEVEL-VERSION] to the end of the Helm install command in section 4 above.

"},{"location":"Operators/k8s/#k8s-flavour-specific-notes","title":"K8s Flavour specific notes","text":""},{"location":"Operators/k8s/#eks","title":"EKS","text":""},{"location":"Operators/k8s/#ebs-csi-driver","title":"EBS CSI driver","text":"

Make sure you have enabled the ebs-csi driver in your EKS cluster. This is required for the default storage class to work. Please refer to this AWS documentation for more information.

"},{"location":"Operators/k8s/#ingress","title":"Ingress","text":"

As of now, the kinetica-operators chart installs the NGINX ingress controller. So after the installation is complete, you may need to edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name.

"},{"location":"Operators/k8s/#option-1-use-the-loadbalancer-domain","title":"Option 1: Use the LoadBalancer domain","text":"Bash
kubectl get svc -n kinetica-system\n# look at the loadbalancer dns name, copy it\n\nkubectl -n gpudb edit $(kubectl -n gpudb get kc -o name)\n# replace local.kinetica with the loadbalancer dns name\nkubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)\n# replace local.kinetica with the loadbalancer dns name\n# save and exit\n# you should be able to access the workbench from the loadbalancer dns name\n
"},{"location":"Operators/k8s/#option-1-use-your-custom-domain","title":"Option 1: Use your custom domain","text":"

Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.

"},{"location":"Operators/kind/","title":"Overview","text":"

This installation in a kind cluster is for trying out the operators and the database in a non-production environment. This method currently only supports installing a CPU version of the database.

You will need a license key for this to work. Please contact Kinetica Support.

"},{"location":"Operators/kind/#kind-kubernetes-in-docker-kindsigsk8sio","title":"Kind (kubernetes in docker kind.sigs.k8s.io)","text":""},{"location":"Operators/kind/#create-kind-cluster-129","title":"Create Kind Cluster 1.29","text":"Bash
kind create cluster --config charts/kinetica-operators/kind.yaml\n
"},{"location":"Operators/kind/#kind-install-kinetica-operators-including-a-sample-db-to-try-out","title":"Kind - Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.kind.yaml. This installs the operators and a simple DB with a Workbench installation, for a non-production tryout.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.

"},{"location":"Operators/kind/#kind-install-the-kinetica-operators-chart","title":"Kind - Install the kinetica-operators chart","text":"Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.kind.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators/ --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n

You should be able to access the workbench at http://local.kinetica

The username as per the values file mentioned above is kadmin and the password is Kinetica1234!

"},{"location":"Operators/kinetica-operators/","title":"Kinetica DB Operator Helm Charts","text":"

To install all the required operators in a single command perform the following: -

Bash
helm install -n kinetica-system \\\nkinetica-operators kinetica-operators/kinetica-operators --create-namespace\n

This will install all the Kubernetes Operators required into the kinetica-system namespace and create the namespace if it is not currently present.

Note

Depending on what target platform you are installing to it may be necessary to supply an additional parameter pointing to a values file to successfully provision the DB.

Bash
helm install -n kinetica-system -f values.yaml --set provider=aks \\\nkinetica-operators kinetica-operators/kinetica-operators --create-namespace\n

The command above uses a custom values.yaml for helm and sets the install platform to Microsoft Azure AKS.

Currently supported providers are: -

  • aks - Microsoft Azure AKS
  • eks - Amazon AWS EKS
  • local - Generic 'On-Prem' Kubernetes Clusters e.g. one deployed using kubeadm

Example Helm values.yaml for different Cloud Providers/On-Prem installations: -

Azure AKS · Amazon EKS · On-Prem

values.yaml
namespace: kinetica-system\n\ndb:\n  serviceAccount: {}\n  image:\n    # Kinetica DB Operator installer image\n    repository: \"registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator\"\n    #  Kinetica DB Operator installer image tag\n    tag: \"\"\n\n  parameters:\n    # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to\n    kubeconfig: \"\"\n    # The storage class to use for PVCs\n    storageClass: \"managed-premium\"\n\n  storageClass:\n    persist:\n      # Workbench Operator Persistent Volume Storage Class\n      provisioner: \"disk.csi.azure.com\"\n    procs:\n      # Workbench Operator Procs Volume Storage Class\n      provisioner: \"disk.csi.azure.com\"\n    cache:\n      # Workbench Operator Cache Volume Storage Class\n      provisioner: \"disk.csi.azure.com\"\n

15 storageClass: \"managed-premium\" - sets the appropriate storageClass for Microsoft Azure AKS Persistent Volume (PV)

20 provisioner: \"disk.csi.azure.com\" - sets the appropriate disk provisioner for the DB (Persist) filesystem for Microsoft Azure

23 provisioner: \"disk.csi.azure.com\" - sets the appropriate disk provisioner for the DB Procs filesystem for Microsoft Azure

26 provisioner: \"disk.csi.azure.com\" - sets the appropriate disk provisioner for the DB Cache filesystem for Microsoft Azure

values.yaml
namespace: kinetica-system\n\ndb:\n  serviceAccount: {}\n  image:\n    # Kinetica DB Operator installer image\n    repository: \"registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator\"\n    #  Kinetica DB Operator installer image tag\n    tag: \"\"\n\n  parameters:\n    # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to\n    kubeconfig: \"\"\n    # The storage class to use for PVCs\n    storageClass: \"gp2\"\n\n  storageClass:\n    persist:\n      # Workbench Operator Persistent Volume Storage Class\n      provisioner: \"kubernetes.io/aws-ebs\"\n    procs:\n      # Workbench Operator Procs Volume Storage Class\n      provisioner: \"kubernetes.io/aws-ebs\"\n    cache:\n      # Workbench Operator Cache Volume Storage Class\n      provisioner: \"kubernetes.io/aws-ebs\"\n

15 storageClass: \"gp2\" - sets the appropriate storageClass for Amazon EKS Persistent Volume (PV)

20 provisioner: \"kubernetes.io/aws-ebs\" - sets the appropriate disk provisioner for the DB (Persist) filesystem for Amazon AWS

23 provisioner: \"kubernetes.io/aws-ebs\" - sets the appropriate disk provisioner for the DB Procs filesystem for Amazon AWS

26 provisioner: \"kubernetes.io/aws-ebs\" - sets the appropriate disk provisioner for the DB Cache filesystem for Amazon AWS

values.yaml
namespace: kinetica-system\n\ndb:\n  serviceAccount: {}\n  image:\n    # Kinetica DB Operator installer image\n    repository: \"registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator\"\n    #  Kinetica DB Operator installer image tag\n    tag: \"\"\n\n  parameters:\n    # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to\n    kubeconfig: \"\"\n    # the type of installation e.g. aks, eks, local\n    environment: \"local\"\n    # The storage class to use for PVCs\n    storageClass: \"standard\"\n\n  storageClass:\n    procs: {}\n    persist: {}\n    cache: {}\n

15 environment: \"local\" - tells the DB Operator to deploy the DB as a 'local' instance to the Kubernetes Cluster

17 storageClass: \"standard\" - sets the appropriate storageClass for the On-Prem Persistent Volume Provisioner

storageClass

The storageClass should be present in the target environment.

A list of available storageClass can be obtained using: -

Bash
kubectl get sc\n
"},{"location":"Operators/kinetica-operators/#components","title":"Components","text":"

The kinetica-db Helm Chart wraps the deployment of a number of sub-components: -

  • Porter Operator
  • Kinetica Database Operator
  • Kinetica Workbench Operator

Installation/Upgrading/Deletion of the Kinetica Operators is done via two CRs which leverage porter.sh as the orchestrator. The corresponding Porter Operator, DB Operator & Workbench Operator CRs are submitted by running the appropriate helm command (a sketch follows the list below), i.e.

  • install
  • upgrade
  • uninstall
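
A minimal sketch of the upgrade and uninstall forms (flags such as --values are environment-specific and omitted here):

Operator lifecycle via Helm
helm -n kinetica-system upgrade kinetica-operators kinetica-operators/kinetica-operators\nhelm -n kinetica-system uninstall kinetica-operators\n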
"},{"location":"Operators/kinetica-operators/#porter-operator","title":"Porter Operator","text":""},{"location":"Operators/kinetica-operators/#database-operator","title":"Database Operator","text":"

The Kinetica DB Operator installation CR for the porter.sh operator is: -

YAML
apiVersion: porter.sh/v1
kind: Installation
metadata:
  annotations:
    meta.helm.sh/release-name: kinetica-operators
    meta.helm.sh/release-namespace: kinetica-system
  labels:
    app.kubernetes.io/instance: kinetica-operators
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kinetica-operators
    app.kubernetes.io/version: 0.1.0
    helm.sh/chart: kinetica-operators-0.1.0
    installVersion: 0.38.10
  name: kinetica-operators-operator-install
  namespace: kinetica-system
spec:
  action: install
  agentConfig:
    volumeSize: '0'
  parameters:
    environment: local
    storageclass: managed-premium
  reference: docker.io/kinetica/kinetica-k8s-operator:v7.1.9-7.rc3
"},{"location":"Operators/kinetica-operators/#workbench-operator","title":"Workbench Operator","text":"

The Kinetica Workbench installation CR for the porter.sh operator is: -

YAML
apiVersion: porter.sh/v1
kind: Installation
metadata:
  annotations:
    meta.helm.sh/release-name: kinetica-operators
    meta.helm.sh/release-namespace: kinetica-system
  labels:
    app.kubernetes.io/instance: kinetica-operators
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kinetica-operators
    app.kubernetes.io/version: 0.1.0
    helm.sh/chart: kinetica-operators-0.1.0
    installVersion: 0.38.10
  name: kinetica-operators-wb-operator-install
  namespace: kinetica-system
spec:
  action: install
  agentConfig:
    volumeSize: '0'
  parameters:
    environment: local
  reference: docker.io/kinetica/workbench-operator:v7.1.9-7.rc3
"},{"location":"Operators/kinetica-operators/#overriding-images-tags","title":"Overriding Images Tags","text":"Bash
helm install -n kinetica-system kinetica-operators kinetica-operators/kinetica-operators \
--create-namespace \
--set provider=aks \
--set dbOperator.image.tag=v7.1.9-7.rc3 \
--set dbOperator.image.repository=docker.io/kinetica/kinetica-k8s-operator \
--set wbOperator.image.repository=docker.io/kinetica/workbench-operator \
--set wbOperator.image.tag=v7.1.9-7.rc3
"},{"location":"Reference/","title":"Reference Section","text":"
  • Kinetica Operators Helm

    Kinetica Operators Helm charts & values file reference data. Charts

  • Kinetica Core DB CRDs

    Kinetica DB Kubernetes CRD & ConfigMap reference data. Cluster CRDs

  • Kinetica Workbench CRDs

    Kinetica Workbench Kubernetes CRD & ConfigMap reference data. Workbench

","tags":["Reference"]},{"location":"Reference/database/","title":"Kinetica Database Configuration","text":"
  • kubectl (yaml)
","tags":["Reference"]},{"location":"Reference/database/#kineticacluster","title":"KineticaCluster","text":"

To deploy a new Database Instance into a Kubernetes cluster...

kubectl

Using kubectl, a CustomResource of type KineticaCluster is used to define a new Kinetica DB Cluster in a yaml file.

The basic Group, Version, Kind (GVK) required to instantiate a Kinetica DB Cluster is as follows: -

kineticacluster.yaml
apiVersion: app.kinetica.com/v1
kind: KineticaCluster
","tags":["Reference"]},{"location":"Reference/database/#metadata","title":"Metadata","text":"

To this we add a metadata: block containing the name of the DB CR, along with the namespace into which we are targeting the installation of the DB cluster.

kineticacluster.yaml
apiVersion: app.kinetica.com/v1
kind: KineticaCluster
metadata:
  name: my-kinetica-db-cr
  namespace: gpudb
spec:
","tags":["Reference"]},{"location":"Reference/database/#spec","title":"Spec","text":"

Under the spec: section of the KineticaCluster CR there are a number of sections supporting different aspects of the deployed DB cluster: -

  • gpudbCluster
  • autoSuspend
  • gadmin
","tags":["Reference"]},{"location":"Reference/database/#gpudbcluster","title":"gpudbCluster","text":"

Configuration items specific to the DB itself. A sketch of how to submit the completed CR follows the annotated fields below.

kineticacluster.yaml - gpudbCluster
apiVersion: app.kinetica.com/v1
kind: KineticaCluster
metadata:
  name: my-kinetica-db-cr
  namespace: gpudb
spec:
  gpudbCluster:
","tags":["Reference"]},{"location":"Reference/database/#gpudbcluster_1","title":"gpudbCluster","text":"cluster name & size
clusterName: kinetica-cluster
clusterSize:
  tshirtSize: M
  tshirtType: LargeCPU
fqdn: kinetica-cluster.saas.kinetica.com
haRingName: default
hasPools: false

1. clusterName - the user defined name of the Kinetica DB Cluster

2. clusterSize - block that defines the number of DB Ranks to run

3. tshirtSize - sets the cluster size to a defined size based upon the t-shirt size. Valid sizes are: -

  • XS - 1 DB Rank
  • S - 2 DB Ranks
  • M - 4 DB Ranks
  • L - 8 DB Ranks
  • XL - 16 DB Ranks
  • XXL - 32 DB Ranks
  • XXXL - 64 DB Ranks

4. tshirtType - block that defines the type of DB Ranks to run: -

  • SmallCPU -
  • LargeCPU -
  • SmallGPU -
  • LargeGPU -

5. fqdn - The fully qualified domain name for the DB cluster. Used in the Ingress records for any exposed services.

6. haRingName - Default: default

7. hasPools - Whether to enable separate node 'pools' for "infra" and "compute" pod scheduling. Optional. Default: false
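
Once complete, the CR is submitted like any other Kubernetes resource; the file name here is illustrative:

Bash
kubectl -n gpudb apply -f kineticacluster.yaml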

","tags":["Reference"]},{"location":"Reference/database/#autosuspend","title":"autoSuspend","text":"

The DB Cluster autoSuspend section allows the core DB Pods to be spun down when the DB is not in use, releasing the underlying Kubernetes nodes and reducing infrastructure costs.

kineticacluster.yaml - autoSuspend
apiVersion: app.kinetica.com/v1
kind: KineticaCluster
metadata:
  name: my-kinetica-db-cr
  namespace: gpudb
spec:
  autoSuspend:
    enabled: false
    inactivityDuration: 1h0m0s

7. autoSuspend - the start of the autoSuspend definition

8. enabled - when set to true, auto-suspend of the DB cluster is enabled; when set to false, no automatic suspending of the DB takes place. If omitted, it defaults to false

9. inactivityDuration - the duration of inactivity after which the DB will be suspended (see the sketch below)
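
A minimal sketch with auto-suspend switched on; the two-hour window is purely illustrative:

kineticacluster.yaml - autoSuspend enabled
apiVersion: app.kinetica.com/v1
kind: KineticaCluster
metadata:
  name: my-kinetica-db-cr
  namespace: gpudb
spec:
  autoSuspend:
    enabled: true
    inactivityDuration: 2h0m0s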

Horizontal Pod Autoscaler

In order for autoSuspend to work correctly the Kubernetes Horizontal Pod Autoscaler needs to be deployed to the cluster.
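
A quick way to check that HPA resources, and the metrics pipeline they depend on, are present in the cluster (both commands are standard kubectl; the namespace is an example):

Bash
# list any HorizontalPodAutoscaler resources across all namespaces
kubectl get hpa -A
# kubectl top fails if the metrics pipeline (e.g. metrics-server) is not deployed
kubectl top pods -n kinetica-system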

","tags":["Reference"]},{"location":"Reference/database/#gadmin","title":"gadmin","text":"

GAdmin is the Database Administration Console.

kineticacluster.yaml - gadmin
apiVersion: app.kinetica.com/v1
kind: KineticaCluster
metadata:
  name: my-kinetica-db-cr
  namespace: gpudb
spec:
  gadmin:
    containerPort:
      containerPort: 8080
      name: gadmin
      protocol: TCP
    isEnabled: true

7. gadmin - configuration block definition

8. containerPort - configuration block, i.e. where gadmin is exposed on the DB Pod

9. containerPort - the port number as an integer. Default: 8080

10. name - the name of the port being exposed. Default: gadmin

11. protocol - the network protocol used. Default: TCP

12. isEnabled - whether gadmin is exposed from the DB Pod. Default: true (a port-forward sketch for ad-hoc access follows below)
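
For ad-hoc access without an Ingress, GAdmin can be reached via a port-forward; the pod name is a placeholder for the head node pod of your cluster:

Bash
kubectl -n gpudb port-forward pod/<head-node-pod> 8080:8080
# GAdmin should then be reachable on http://localhost:8080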

","tags":["Reference"]},{"location":"Reference/database/#kineticauser","title":"KineticaUser","text":"","tags":["Reference"]},{"location":"Reference/database/#kineticagrant","title":"KineticaGrant","text":"","tags":["Reference"]},{"location":"Reference/database/#kineticaschema","title":"KineticaSchema","text":"","tags":["Reference"]},{"location":"Reference/database/#kineticaresourcegroup","title":"KineticaResourceGroup","text":"","tags":["Reference"]},{"location":"Reference/helm_kinetica_operators/","title":"Helm Chart Reference","text":"","tags":["Reference"]},{"location":"Reference/helm_kinetica_operators/#coming-soon","title":"Coming Soon","text":"","tags":["Reference"]},{"location":"Reference/kinetica_cluster_admins/","title":"Kinetica Cluster Admins Reference","text":"","tags":["Reference"]},{"location":"Reference/kinetica_cluster_admins/#full-kineticaclusteradmin-cr-structure","title":"Full KineticaClusterAdmin CR Structure","text":"kineticaclusteradmins.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterAdmin\nmetadata: {}\n# KineticaClusterAdminSpec defines the desired state of\n# KineticaClusterAdmin\nspec:\n  # ForceDBStatus - Force a Status of the DB.\n  forceDbStatus: string\n  # Name - The name of the cluster to target.\n  kineticaClusterName: string\n  # Offline - Pause/Resume of the DB.\n  offline:\n    # Set to true if desired state is offline. The supported values are:\n    # true false\n    offline: false\n    # Optional parameters. The default value is an empty map (\n    # {} ). Supported Parameters: flush_to_disk Flush to disk when\n    # going offline The supported values are: true false\n    options: {}\n  # Rebalance of the DB.\n  rebalance:\n    # Optional parameters. The default value is an empty map (\n    # {} ). Supported Parameters: rebalance_sharded_data        If true,\n    # sharded data will be rebalanced approximately equally across the\n    # cluster. Note that for clusters with large amounts of sharded\n    # data, this data transfer could be time-consuming and result in\n    # delayed query responses. The default value is true. The supported\n    # values are: true false rebalance_unsharded_data   If true,\n    # unsharded data (a.k.a. randomly-sharded) will be rebalanced\n    # approximately equally across the cluster. Note that for clusters\n    # with large amounts of unsharded data, this data transfer could be\n    # time-consuming and result in delayed query responses. The default\n    # value is true. The supported values are: true false\n    # table_includes                Comma-separated list of unsharded table names\n    # to rebalance. Not applicable to sharded tables because they are\n    # always rebalanced. Cannot be used simultaneously with\n    # table_excludes. This parameter is ignored if\n    # rebalance_unsharded_data is false.\n    # table_excludes                Comma-separated list of unsharded table names\n    # to not rebalance. Not applicable to sharded tables because they\n    # are always rebalanced. Cannot be used simultaneously with\n    # table_includes. This parameter is ignored if rebalance_\n    # unsharded_data is false. aggressiveness               Influences how much\n    # data is moved at a time during rebalance. A higher aggressiveness\n    # will complete the rebalance faster. A lower aggressiveness will\n    # take longer but allow for better interleaving between the\n    # rebalance and other queries. Valid values are constants from 1\n    # (lowest) to 10 (highest). The default value is '1'.\n    # compact_after_rebalance   Perform compaction of deleted records\n    # once the rebalance completes to reclaim memory and disk space.\n    # Default is true, unless repair_incorrectly_sharded_data is set to\n    # true. The default value is true. 
The supported values are: true\n    # false compact_only                If set to true, ignore rebalance options\n    # and attempt to perform compaction of deleted records to reclaim\n    # memory and disk space without rebalancing first. The default\n    # value is false. The supported values are: true false\n    # repair_incorrectly_sharded_data       Scans for any data sharded\n    # incorrectly and re-routes the data to the correct location. Only\n    # necessary if /admin/verifydb reports an error in sharding\n    # alignment. This can be done as part of a typical rebalance after\n    # expanding the cluster or in a standalone fashion when it is\n    # believed that data is sharded incorrectly somewhere in the\n    # cluster. Compaction will not be performed by default when this is\n    # enabled. If this option is set to true, the time necessary to\n    # rebalance and the memory used by the rebalance may increase. The\n    # default value is false. The supported values are: true false\n    options: {}\n  # RegenerateDBConfig - Force regenerate of DB ConfigMap. true -\n  # restarts DB Pods after config generation false - writes new\n  # configuration without restarting the DB Pods\n  regenerateDBConfig:\n    # Restart - Scales down the DB STS and back up once the DB\n    # Configuration has been regenerated.\n    restart: false\n# KineticaClusterAdminStatus defines the observed state of\n# KineticaClusterAdmin\nstatus:\n  # Phase - The current phase/state of the Admin request\n  phase: string\n  # Processed - Indicates if the admin request has already been\n  # processed. Avoids the request being rerun in the case the Operator\n  # gets restarted.\n  processed: false\n
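
As a minimal, hedged sketch (the CR name, namespace and cluster name are assumptions), a KineticaClusterAdmin CR that pauses a running cluster might look like: -

YAML
apiVersion: app.kinetica.com/v1
kind: KineticaClusterAdmin
metadata:
  # illustrative CR name/namespace - substitute your own
  name: pause-my-db
  namespace: gpudb
spec:
  # the KineticaCluster to target
  kineticaClusterName: my-kinetica-db-cr
  offline:
    # true pauses the DB; false resumes it
    offline: true
    # optionally flush to disk when going offline
    options:
      flush_to_disk: "true"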
","tags":["Reference"]},{"location":"Reference/kinetica_cluster_backups/","title":"Kinetica Cluster Backups Reference","text":"","tags":["Reference","Operations"]},{"location":"Reference/kinetica_cluster_backups/#full-kineticaclusterbackup-cr-structure","title":"Full KineticaClusterBackup CR Structure","text":"kineticaclusterbackups.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterBackup \nmetadata: {}\n# Fields specific to the linked backup engine\nprovider:\n  # Name of the backup/restore provider. FOR INTERNAL USE ONLY.\n  backupProvider: \"velero\"\n  # Name of the backup in the linked BackupProvider. FOR INTERNAL USE\n  # ONLY.\n  linkedItemName: \"\"\n# BackupSpec defines the specification for a Velero backup.\nspec:\n  # DefaultVolumesToRestic specifies whether restic should be used to\n  # take a backup of all pod volumes by default.\n  defaultVolumesToRestic: true\n  # ExcludedNamespaces contains a list of namespaces that are not\n  # included in the backup.\n  excludedNamespaces: [\"string\"]\n  # ExcludedResources is a slice of resource names that are not included\n  # in the backup.\n  excludedResources: [\"string\"]\n  # Hooks represent custom behaviors that should be executed at\n  # different phases of the backup.\n  hooks:\n    # Resources are hooks that should be executed when backing up\n    # individual instances of a resource.\n    resources:\n    - excludedNamespaces: [\"string\"]\n      # ExcludedResources specifies the resources to which this hook\n      # spec does not apply.\n      excludedResources: [\"string\"]\n      # IncludedNamespaces specifies the namespaces to which this hook\n      # spec applies. If empty, it applies to all namespaces.\n      includedNamespaces: [\"string\"]\n      # IncludedResources specifies the resources to which this hook\n      # spec applies. If empty, it applies to all resources.\n      includedResources: [\"string\"]\n      # LabelSelector, if specified, filters the resources to which this\n      # hook spec applies.\n      labelSelector:\n        # matchExpressions is a list of label selector requirements. The\n        # requirements are ANDed.\n        matchExpressions:\n        - key: string\n          # operator represents a key's relationship to a set of values.\n          # Valid operators are In, NotIn, Exists and DoesNotExist.\n          operator: string\n          # values is an array of string values. If the operator is In\n          # or NotIn, the values array must be non-empty. If the\n          # operator is Exists or DoesNotExist, the values array must\n          # be empty. This array is replaced during a strategic merge\n          # patch.\n          values: [\"string\"]\n        # matchLabels is a map of {key,value} pairs. A single\n        # {key,value} in the matchLabels map is equivalent to an\n        # element of matchExpressions, whose key field is \"key\", the\n        # operator is \"In\", and the values array contains only \"value\".\n        # The requirements are ANDed.\n        matchLabels: {}\n      # Name is the name of this hook.\n      name: string\n      # PostHooks is a list of BackupResourceHooks to execute after\n      # storing the item in the backup. 
These are executed after\n      # all \"additional items\" from item actions are processed.\n      post:\n      - exec:\n          # Command is the command and arguments to execute.\n          command: [\"string\"]\n          # Container is the container in the pod where the command\n          # should be executed. If not specified, the pod's first\n          # container is used.\n          container: string\n          # OnError specifies how Velero should behave if it encounters\n          # an error executing this hook.\n          onError: string\n          # Timeout defines the maximum amount of time Velero should\n          # wait for the hook to complete before considering the\n          # execution a failure.\n          timeout: string\n      # PreHooks is a list of BackupResourceHooks to execute prior to\n      # storing the item in the backup. These are executed before\n      # any \"additional items\" from item actions are processed.\n      pre:\n      - exec:\n          # Command is the command and arguments to execute.\n          command: [\"string\"]\n          # Container is the container in the pod where the command\n          # should be executed. If not specified, the pod's first\n          # container is used.\n          container: string\n          # OnError specifies how Velero should behave if it encounters\n          # an error executing this hook.\n          onError: string\n          # Timeout defines the maximum amount of time Velero should\n          # wait for the hook to complete before considering the\n          # execution a failure.\n          timeout: string\n  # IncludeClusterResources specifies whether cluster-scoped resources\n  # should be included for consideration in the backup.\n  includeClusterResources: true\n  # IncludedNamespaces is a slice of namespace names to include objects\n  # from. If empty, all namespaces are included.\n  includedNamespaces: [\"string\"]\n  # IncludedResources is a slice of resource names to include in the\n  # backup. If empty, all resources are included.\n  includedResources: [\"string\"]\n  # LabelSelector is a metav1.LabelSelector to filter with when adding\n  # individual objects to the backup. If empty or nil, all objects are\n  # included. Optional.\n  labelSelector:\n    # matchExpressions is a list of label selector requirements. The\n    # requirements are ANDed.\n    matchExpressions:\n    - key: string\n      # operator represents a key's relationship to a set of values.\n      # Valid operators are In, NotIn, Exists and DoesNotExist.\n      operator: string\n      # values is an array of string values. If the operator is In or\n      # NotIn, the values array must be non-empty. If the operator is\n      # Exists or DoesNotExist, the values array must be empty. This\n      # array is replaced during a strategic merge patch.\n      values: [\"string\"]\n    # matchLabels is a map of {key,value} pairs. A single {key,value} in\n    # the matchLabels map is equivalent to an element of\n    # matchExpressions, whose key field is \"key\", the operator is \"In\",\n    # and the values array contains only \"value\". The requirements are\n    # ANDed.\n    matchLabels: {} metadata: labels: {}\n  # OrderedResources specifies the backup order of resources of specific\n  # Kind. The map key is the Kind name and value is a list of resource\n  # names separated by commas. Each resource name has\n  # format \"namespace/resourcename\".  
For cluster resources, simply\n  # use \"resourcename\".\n  orderedResources: {}\n  # SnapshotVolumes specifies whether to take cloud snapshots of any\n  # PV's referenced in the set of objects included in the Backup.\n  snapshotVolumes: true\n  # StorageLocation is a string containing the name of a\n  # BackupStorageLocation where the backup should be stored.\n  storageLocation: string\n  # TTL is a time.Duration-parseable string describing how long the\n  # Backup should be retained for.\n  ttl: string\n  # VolumeSnapshotLocations is a list containing names of\n  # VolumeSnapshotLocations associated with this backup.\n  volumeSnapshotLocations: [\"string\"] status:\n  # ClusterSize the current number of ranks & type i.e. CPU or GPU of\n  # the cluster when the backup took place.\n  clusterSize:\n    # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a\n    # representation of the number of nodes in a simple to understand\n    # T-Short size scheme. This indicates the size of the cluster i.e.\n    # the number of nodes. It does not identify the size of the cloud\n    # provider nodes. For node size see ClusterTypeEnum. Supported\n    # Values are: - XS S M L XL XXL XXXL\n    tshirtSize: string\n    # ClusterTypeEnum - An Enum of the node types of a KineticaCluster\n    # e.g. CPU, GPU along with the Cloud Provider node size e.g. size\n    # of the VM.\n    tshirtType: string coldTierBackup: string\n  # CompletionTimestamp records the time a backup was completed.\n  # Completion time is recorded even on failed backups. Completion time\n  # is recorded before uploading the backup object. The server's time\n  # is used for CompletionTimestamps\n  completionTimestamp: string\n  # Errors is a count of all error messages that were generated during\n  # execution of the backup.  The actual errors are in the backup's log\n  # file in object storage.\n  errors: 1\n  # Expiration is when this Backup is eligible for garbage-collection.\n  expiration: string\n  # FormatVersion is the backup format version, including major, minor,\n  # and patch version.\n  formatVersion: string\n  # Phase is the current state of the Backup.\n  phase: string\n  # Progress contains information about the backup's execution progress.\n  # Note that this information is best-effort only -- if Velero fails\n  # to update it during a backup for any reason, it may be\n  # inaccurate/stale.\n  progress:\n    # ItemsBackedUp is the number of items that have actually been\n    # written to the backup tarball so far.\n    itemsBackedUp: 1\n    # TotalItems is the total number of items to be backed up. This\n    # number may change throughout the execution of the backup due to\n    # plugins that return additional related items to back up, the\n    # velero.io/exclude-from-backup label, and various other filters\n    # that happen as items are processed.\n    totalItems: 1\n  # StartTimestamp records the time a backup was started. Separate from\n  # CreationTimestamp, since that value changes on restores. The\n  # server's time is used for StartTimestamps\n  startTimestamp: string\n  # ValidationErrors is a slice of all validation errors\n  # (if applicable).\n  validationErrors: [\"string\"]\n  # Version is the backup format major version. 
Deprecated: Please see\n  # FormatVersion\n  version: 1\n  # VolumeSnapshotsAttempted is the total number of attempted volume\n  # snapshots for this backup.\n  volumeSnapshotsAttempted: 1\n  # VolumeSnapshotsCompleted is the total number of successfully\n  # completed volume snapshots for this backup.\n  volumeSnapshotsCompleted: 1\n  # Warnings is a count of all warning messages that were generated\n  # during execution of the backup. The actual warnings are in the\n  # backup's log file in object storage.\n  warnings: 1\n
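
A minimal, hedged sketch of a backup CR covering just the DB namespace (the names, namespace and retention period are assumptions):

YAML
apiVersion: app.kinetica.com/v1
kind: KineticaClusterBackup
metadata:
  name: my-db-backup
  namespace: gpudb
spec:
  # back up only the namespace containing the DB cluster
  includedNamespaces: ["gpudb"]
  # take cloud snapshots of any PVs referenced by the backup
  snapshotVolumes: true
  # retain the backup for 30 days
  ttl: 720h0m0s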
","tags":["Reference","Operations"]},{"location":"Reference/kinetica_cluster_grants/","title":"Kinetica Cluster Grants CRD Reference","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_grants/#full-kineticagrant-cr-structure","title":"Full KineticaGrant CR Structure","text":"kineticagrants.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaGrant \nmetadata: {}\n# KineticaGrantSpec defines the desired state of KineticaGrant\nspec:\n  # Grants system-level and/or table permissions to a user or role.\n  addGrantAllOnSchemaRequest:\n    # Name of the user or role that will be granted membership in input\n    # parameter role. Must be an existing user or role.\n    member: string\n    # Optional parameters. The default value is an empty map ( {} ).\n    options: {}\n    # SchemaName - name of the schema on which to perform the Grant All\n    schemaName: string\n  # Grants system-level and/or table permissions to a user or role.\n  addGrantPermissionRequest:\n    # Optional parameters. The default value is an empty map ( {} ).\n    options: {}\n    # Permission to grant to the user or role. Supported\n    # Values    Description system_admin    Full access to all data and\n    # system functions. system_user_admin   Access to administer users\n    # and roles that do not have system_admin permission.\n    # system_write  Read and write access to all tables.\n    # system_read   Read-only access to all tables.\n    systemPermission:\n      # UID of the user or role to which the permission will be granted.\n      # Must be an existing user or role.\n      name: string\n      # Optional parameters. The default value is an empty map (\n      # {} ). Supported Parameters: resource_group  Name of an existing\n      # resource group to associate with this role.\n      options: {}\n      # Permission to grant to the user or role. Supported\n      # Values  Description table_admin Full read/write and\n      # administrative access to the table. table_insert    Insert access\n      # to the table. table_update  Update access to the table.\n      # table_delete    Delete access to the table. table_read  Read access\n      # to the table.\n      permission: string\n    # Permission to grant to the user or role. Supported\n    # Values    Description<br/> system_admin   Full access to all data and\n    # system functions.<br/> system_user_admin  Access to administer\n    # users and roles that do not have system_admin permission.<br/>\n    # system_write  Read and write access to all tables.<br/>\n    # system_read   Read-only access to all tables.<br/>\n    tablePermissions:\n    - filter_expression: \"\"\n      # UID of the user or role to which the permission will be granted.\n      # Must be an existing user or role.\n      name: string\n      # Optional parameters. The default value is an empty map (\n      # {} ). Supported Parameters: resource_group  Name of an existing\n      # resource group to associate with this role.\n      options: {}\n      # Permission to grant to the user or role. Supported\n      # Values  Description table_admin Full read/write and\n      # administrative access to the table. table_insert    Insert access\n      # to the table. 
table_update  Update access to the table.\n      # table_delete    Delete access to the table. table_read  Read access\n      # to the table.\n      permission: string\n      # Name of the table for which the Permission is to be granted\n      table_name: string\n  # Grants membership in a role to a user or role.\n  addGrantRoleRequest:\n    # Name of the user or role that will be granted membership in input\n    # parameter role. Must be an existing user or role.\n    member: string\n    # Optional parameters. The default value is an empty map ( {} ).\n    options: {}\n    # Name of the role in which membership will be granted. Must be an\n    # existing role.\n    role: string\n  # Debug debug the call\n  debug: false\n  # RingName is the name of the kinetica ring that this user belongs\n  # to.\n  ringName: string\n# KineticaGrantStatus defines the observed state of KineticaGrant\nstatus:\n  # DBStringResponse - The GPUdb server embeds the endpoint response\n  # inside a standard response structure which contains status\n  # information and the actual response to the query.\n  db_response: data: string\n    # This embedded JSON represents the result of the endpoint\n    data_str: string\n    # API Call Specific\n    data_type: string\n    # Empty if success or an error message\n    message: string\n    # 'OK' or 'ERROR'\n    status: string \n    ldap_response: string\n
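
A minimal, hedged sketch granting an existing role to an existing user (the member, role and ring names are assumptions):

YAML
apiVersion: app.kinetica.com/v1
kind: KineticaGrant
metadata:
  name: grant-readers-to-bob
  namespace: gpudb
spec:
  ringName: default
  addGrantRoleRequest:
    # existing user or role that receives the membership
    member: bob
    # existing role in which membership is granted
    role: readers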
","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_reference/","title":"Core DB CRDs","text":"
  • DB Clusters

    Core Kinetica Database Cluster Management CRD & sample CR.

    KineticaCluster

  • DB Users

    Kinetica Database User Management CRD & sample CR.

    KineticaUser

  • DB Roles

    Kinetica Database Role Management CRD & sample CR.

    KineticaRole

  • DB Schemas

    Kinetica Database Schema Management CRD & sample CR.

    KineticaSchema

  • DB Grants

    Kinetica Database Grant Management CRD & sample CR.

    KineticaGrant

  • DB Resource Groups

    Kinetica Database Resource Group Management CRD & sample CR.

    KineticaResourceGroup

  • DB Administration

    Kinetica Database Administration CRD & sample CR.

    KineticaAdmin

  • DB Backups

    Kinetica Database Backup Management CRD & sample CR.

    Note

    This requires Velero to be installed on the Kubernetes Cluster.

    KineticaBackup

  • DB Restore

    Kinetica Database Restore CRD & sample CR.

    Note

    This requires Velero to be installed on the Kubernetes Cluster.

    KineticaRestore

","tags":["Reference","Installation","Operations"]},{"location":"Reference/kinetica_cluster_resource_groups/","title":"Kinetica Cluster Resource Groups CRD Reference","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_resource_groups/#full-kineticaresourcegroup-cr-structure","title":"Full KineticaResourceGroup CR Structure","text":"kineticaclusterresourcegroups.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterResourceGroup \nmetadata: {}\n# KineticaClusterResourceGroupSpec defines the desired state of\n# KineticaClusterResourceGroup\nspec: \n  db_create_resource_group_request:\n    # AdjoiningResourceGroup -\n    adjoining_resource_group: \"\"\n    # Name - name of the DB ResourceGroup\n    # https://docs.kinetica.com/7.1/azure/sql/resource_group/?search-highlight=resource+group#id-baea5b60-769c-5373-bff1-53f4f1ca5c21\n    name: string\n    # Options - DB Options used when creating the ResourceGroup\n    options: {}\n    # Ranking - Indicates the relative ranking among existing resource\n    # groups where this new resource group will be placed. When using\n    # before or after, specify which resource group this one will be\n    # inserted before or after in input parameter\n    # adjoining_resource_group. The supported values are: first last\n    # before after\n    ranking: \"\"\n  # RingName is the name of the kinetica ring that this user belongs\n  # to.\n  ringName: string\n# KineticaClusterResourceGroupStatus defines the observed state of\n# KineticaClusterResourceGroup\nstatus: \n  provisioned: string\n
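
A minimal, hedged sketch creating a resource group and placing it last in the existing ranking (the names are assumptions):

YAML
apiVersion: app.kinetica.com/v1
kind: KineticaClusterResourceGroup
metadata:
  name: analysts-rg
  namespace: gpudb
spec:
  ringName: default
  db_create_resource_group_request:
    # name of the DB resource group to create
    name: analysts
    # first | last | before | after
    ranking: last
    options: {}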
","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_restores/","title":"Kinetica Cluster Restores Reference","text":"","tags":["Reference","Operations"]},{"location":"Reference/kinetica_cluster_restores/#full-kineticaclusterrestore-cr-structure","title":"Full KineticaClusterRestore CR Structure","text":"kineticaclusterrestores.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterRestore \nmetadata: {}\n# RestoreSpec defines the specification for a Velero restore.\nspec:\n  # BackupName is the unique name of the Velero backup to restore from.\n  backupName: string\n  # ExcludedNamespaces contains a list of namespaces that are not\n  # included in the restore.\n  excludedNamespaces: [\"string\"]\n  # ExcludedResources is a slice of resource names that are not included\n  # in the restore.\n  excludedResources: [\"string\"]\n  # IncludeClusterResources specifies whether cluster-scoped resources\n  # should be included for consideration in the restore. If null,\n  # defaults to true.\n  includeClusterResources: true\n  # IncludedNamespaces is a slice of namespace names to include objects\n  # from. If empty, all namespaces are included.\n  includedNamespaces: [\"string\"]\n  # IncludedResources is a slice of resource names to include in the\n  # restore. If empty, all resources in the backup are included.\n  includedResources: [\"string\"]\n  # LabelSelector is a metav1.LabelSelector to filter with when\n  # restoring individual objects from the backup. If empty or nil, all\n  # objects are included. Optional.\n  labelSelector:\n    # matchExpressions is a list of label selector requirements. The\n    # requirements are ANDed.\n    matchExpressions:\n    - key: string\n      # operator represents a key's relationship to a set of values.\n      # Valid operators are In, NotIn, Exists and DoesNotExist.\n      operator: string\n      # values is an array of string values. If the operator is In or\n      # NotIn, the values array must be non-empty. If the operator is\n      # Exists or DoesNotExist, the values array must be empty. This\n      # array is replaced during a strategic merge patch.\n      values: [\"string\"]\n    # matchLabels is a map of {key,value} pairs. A single {key,value} in\n    # the matchLabels map is equivalent to an element of\n    # matchExpressions, whose key field is \"key\", the operator is \"In\",\n    # and the values array contains only \"value\". The requirements are\n    # ANDed.\n    matchLabels: {}\n  # NamespaceMapping is a map of source namespace names to target\n  # namespace names to restore into. Any source namespaces not included\n  # in the map will be restored into namespaces of the same name.\n  namespaceMapping: {}\n  # RestorePVs specifies whether to restore all included PVs from\n  # snapshot (via the cloudprovider).\n  restorePVs: true\n  # ScheduleName is the unique name of the Velero schedule to restore\n  # from. If specified, and BackupName is empty, Velero will restore\n  # from the most recent successful backup created from this schedule.\n  scheduleName: string status: coldTierRestore: \"\"\n  # CompletionTimestamp records the time the restore operation was\n  # completed. Completion time is recorded even on failed restore. 
The\n  # server's time is used for StartTimestamps\n  completionTimestamp: string\n  # Errors is a count of all error messages that were generated during\n  # execution of the restore. The actual errors are stored in object\n  # storage.\n  errors: 1\n  # FailureReason is an error that caused the entire restore to fail.\n  failureReason: string\n  # Phase is the current state of the Restore\n  phase: string\n  # Progress contains information about the restore's execution\n  # progress. Note that this information is best-effort only -- if\n  # Velero fails to update it during a restore for any reason, it may\n  # be inaccurate/stale.\n  progress:\n    # ItemsRestored is the number of items that have actually been\n    # restored so far\n    itemsRestored: 1\n    # TotalItems is the total number of items to be restored. This\n    # number may change throughout the execution of the restore due to\n    # plugins that return additional related items to restore\n    totalItems: 1\n  # StartTimestamp records the time the restore operation was started.\n  # The server's time is used for StartTimestamps\n  startTimestamp: string\n  # ValidationErrors is a slice of all validation errors(if applicable)\n  validationErrors: [\"string\"]\n  # Warnings is a count of all warning messages that were generated\n  # during execution of the restore. The actual warnings are stored in\n  # object storage.\n  warnings: 1\n
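
A minimal, hedged sketch restoring from a named Velero backup (the backup name and namespace are assumptions):

YAML
apiVersion: app.kinetica.com/v1
kind: KineticaClusterRestore
metadata:
  name: my-db-restore
  namespace: gpudb
spec:
  # unique name of the Velero backup to restore from
  backupName: my-db-backup
  # restore all included PVs from snapshot
  restorePVs: true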
","tags":["Reference","Operations"]},{"location":"Reference/kinetica_cluster_roles/","title":"Kinetica Cluster Roles CRD","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_roles/#full-kineticarole-cr-structure","title":"Full KineticaRole CR Structure","text":"kineticaroles.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaRole \nmetadata: {}\n# KineticaRoleSpec defines the desired state of KineticaRole\nspec:\n  # AlterRoleRequest Kinetica DB REST API Request Format Object.\n  alter_role:\n    # Action - Modification operation to be applied to the role.\n    action: string\n    # Role UID - Name of the role to be altered. Must be an existing\n    # role.\n    name: string\n    # Optional parameters. The default value is an empty map ( {} ).\n    options: {}\n    # Value - The value of the modification, depending on input\n    # parameter action.\n    value: string\n  # Debug debug the call\n  debug: false\n  # RingName is the name of the kinetica ring that this user belongs\n  # to.\n  ringName: string\n  # AddRoleRequest Kinetica DB REST API Request Format Object.\n  role:\n    # User UID\n    name: string\n    # Optional parameters. The default value is an empty map (\n    # {} ). Supported Parameters: resource_group    Name of an existing\n    # resource group to associate with this role.\n    options: {}\n    # ResourceGroupName of an existing resource group to associate with\n    # this role\n    resourceGroupName: \"\"\n# KineticaRoleStatus defines the observed state of KineticaRole\nstatus:\n  # DBStringResponse - The GPUdb server embeds the endpoint response\n  # inside a standard response structure which contains status\n  # information and the actual response to the query.\n  db_response: data: string\n    # This embedded JSON represents the result of the endpoint\n    data_str: string\n    # API Call Specific\n    data_type: string\n    # Empty if success or an error message\n    message: string\n    # 'OK' or 'ERROR'\n    status: string \n    ldap_response: string\n
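
A minimal, hedged sketch creating a role (the role and ring names are assumptions; resourceGroupName may be left empty if no resource group applies):

YAML
apiVersion: app.kinetica.com/v1
kind: KineticaRole
metadata:
  name: readers-role
  namespace: gpudb
spec:
  ringName: default
  role:
    # role UID to create
    name: readers
    # optional: an existing resource group to associate with this role
    resourceGroupName: ""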
","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_schemas/","title":"Kinetica Cluster Schemas CRD Reference","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_schemas/#full-kinetica-cluster-schemas-cr-structure","title":"Full Kinetica Cluster Schemas CR Structure","text":"kineticaclusterschemas.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterSchema \nmetadata: {}\n# KineticaClusterSchemaSpec defines the desired state of\n# KineticaClusterSchema\nspec: \n  db_create_schema_request:\n    # Name - the name of the resource group to create in the DB\n    name: string\n    # Optional parameters. The default value is an empty map (\n    # {} ). Supported Parameters: \"max_cpu_concurrency\", \"max_data\"\n    options: {}\n  # RingName is the name of the kinetica ring that this user belongs\n  # to.\n  ringName: string\n# KineticaClusterSchemaStatus defines the observed state of\n# KineticaClusterSchema\nstatus: \n  provisioned: string\n
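
A minimal, hedged sketch creating a schema (the schema and ring names are assumptions):

YAML
apiVersion: app.kinetica.com/v1
kind: KineticaClusterSchema
metadata:
  name: sales-schema
  namespace: gpudb
spec:
  ringName: default
  db_create_schema_request:
    # name of the schema to create in the DB
    name: sales
    options: {}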
","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_users/","title":"Kinetica Cluster Users CRD Reference","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_users/#full-kineticauser-cr-structure","title":"Full KineticaUser CR Structure","text":"kineticausers.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaUser\nmetadata: {}\n# KineticaUserSpec defines the desired state of KineticaUser\nspec:\n  # Action field contains UserActionEnum field indicating whether it is\n  # an Upsert or Change Password operation. For deletion delete the\n  # KineticaUser CR and a finalizer will remove the user from LDAP.\n  action: string\n  # ChangePassword specific fields\n  changePassword:\n    # PasswordSecret - Not the actual user password but the name of a\n    # Kubernetes Secret containing a Data element with a Password\n    # attribute. The secret is removed on user creation. Must be in the\n    # same namespace as the Kinetica Cluster. Must contain the\n    # following fields: - oldPassword newPassword\n    passwordSecret: string\n  # Debug debug the call\n  debug: false\n  # GroupID - Organisation or Team Id the user belongs to.\n  groupId: string\n  # Create the user in Reveal\n  reveal: true\n  # RingName is the name of the kinetica ring that this user belongs\n  # to.\n  ringName: string\n  # UID is the username (not UUID UID).\n  uid: string\n  # Upsert specific fields\n  upsert:\n    # CreateHomeDirectory - when true, a home directory in KiFS is\n    # created for this user The default value is true. The supported\n    # values are: true false\n    createHomeDirectory: true\n    # DB Memory user data size limit\n    dataLimit: \"10Gi\"\n    # DisplayName\n    displayName: string\n    # GivenName is Firstname also called Christian name. givenName in\n    # LDAP terms.\n    givenName: string\n    # KIFs user data size limit\n    kifsDataLimit: \"2Gi\"\n    # LastName refers to last name or surname. sn in LDAP terms.\n    lastName: string\n    # Options -\n    options: {}\n    # PasswordSecret - Not the actual user password but the name of a\n    # Kubernetes Secret containing a Data element with a Password\n    # attribute. The secret is removed on user creation. Must be in the\n    # same namespace as the Kinetica Cluster.\n    passwordSecret: string\n    # UPN or UserPrincipalName - e.g. guyt@cp.com  \n    # Looks like an email address.\n    userPrincipalName: string\n  # UUID is the user unique UUID from the Control Plane.\n  uuid: string\n# KineticaUserStatus defines the observed state of KineticaUser\nstatus:\n  # DBStringResponse - The GPUdb server embeds the endpoint response\n  # inside a standard response structure which contains status\n  # information and the actual response to the query.\n  db_response: data: string\n    # This embedded JSON represents the result of the endpoint\n    data_str: string\n    # API Call Specific\n    data_type: string\n    # Empty if success or an error message\n    message: string\n    # 'OK' or 'ERROR'\n    status: string \n    ldap_response: string \n    reveal_admin: string\n
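
A minimal, hedged sketch creating a user; all names are assumptions, and the exact action value accepted by the operator should be checked against the CRD (upsert is assumed here):

YAML
apiVersion: app.kinetica.com/v1
kind: KineticaUser
metadata:
  name: bob-user
  namespace: gpudb
spec:
  # assumed value - the CRD describes Upsert and Change Password actions
  action: upsert
  # the username
  uid: bob
  ringName: default
  upsert:
    givenName: Bob
    lastName: Smith
    displayName: Bob Smith
    # Kubernetes Secret (same namespace) holding the initial password
    passwordSecret: bob-password-secret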
","tags":["Reference","Administration"]},{"location":"Reference/kinetica_clusters/","title":"image/svg+xml crd Kinetica Clusters CRD Reference","text":"

This page covers the Kinetica Cluster Kubernetes CRD.

","tags":["Reference"]},{"location":"Reference/kinetica_clusters/#kubectl-cli-commands","title":"kubectl cli commands","text":"","tags":["Reference"]},{"location":"Reference/kinetica_clusters/#kubectl-n-_namespace_-get-kc","title":"kubectl -n _namespace_ get kc","text":"

Lists the KineticaClusters defined within the specified namespace to the console.

Bash
kubectl -n _namespace_ get kc
","tags":["Reference"]},{"location":"Reference/kinetica_clusters/#full-kineticacluster-cr-structure","title":"Full KineticaCluster CR Structure","text":"kineticaclusters.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaCluster \nmetadata: {}\n# KineticaClusterSpec defines the configuration for KineticaCluster DB\nspec:\n  # An optional duration after which the database is stopped and DB\n  # resources are freed\n  autoSuspend: \n    enabled: false\n    # InactivityDuration - the duration which the cluster should be idle\n    # before auto-pausing the DB Cluster.\n    inactivityDuration: \"1h\"\n  # The platform infrastructure provider e.g. azure, aws, gcp, on-prem\n  # etc.\n  awsConfig:\n    # ClusterName - AWS name of the EKS Cluster. NOTE: Marked as\n    # optional but is mandatory\n    clusterName: string\n    # MarketplaceAppConfig - Amazon AWS specific DB Cluster\n    # information.\n    marketplaceApp:\n      # KmsKeyId - Key for disk encryption. The full Amazon Resource\n      # Name of the key to use when encrypting the volume. If none is\n      # supplied but encrypted is true, a key is generated by AWS. See\n      # AWS docs for valid ARN value.\n      kmsKeyId: string\n      # ProductCode - used to uniquely identify a product in AWS\n      # Marketplace. The product code should be the same as the one\n      # used during the publishing of a new product.\n      productCode: \"1cmucncoyp9pi8xjdwqjimlf8\"\n      # PublicKeyVersion - Public Key Version provided by AWS\n      # Marketplace\n      publicKeyVersion: 1\n      # ParentResourceGroup - The resource group of the ManagedApp\n      # itself ParentResourceGroup     string\n      # `json:\"parentResourceGroup\"` ResourceId - Identifier of the\n      # resource against which usage is emitted Format is GUID\n      # (UUID)\n      # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json\n      # Optional only if that exactly of  ResourceId or ResourceUri is\n      # specified.\n      resourceId: string\n    # NodeGroups - List of NodeGroups for this cluster MUST contain at\n    # least one of the following keys: - \n    #   * none\n    #   * infra \n    #   * infra_public \n    #   * compute \n    #   * compute-gpu \n    #   * aaw_cpu \n    # NOTE: Marked as optional but is mandatory\n    nodeGroups: {}\n    # OTELTracing - OpenTelemetry Tracing Specifics\n    otelTracing:\n      # Endpoint - Set the OpenTelemetry reporting Endpoint\n      endpoint: \"\"\n      # Key - KineticaCluster specific Key required to send Telemetry\n      # information to the Cloud\n      key: string\n      # MaxBatchSize - Telemetry Reporting Interval to use in seconds.\n      maxBatchInterval: 10\n      # MaxBatchSize - Telemetry Maximum Batch Size to send.\n      maxBatchSize: 1024\n  # The platform infrastructure provider e.g. 
azure, aws, gcp, on-prem\n  # etc.\n  azureConfig:\n    # App Insights Specifics\n    appInsights:\n      # Endpoint - Override the default AppInsights reporting Endpoint\n      endpoint: \"\"\n      # Key - KineticaCluster specific Application Insights Key required\n      # to send Telemetry information to the Azure Portal\n      key: string\n      # MaxBatchSize - Telemetry Reporting Interval to use in seconds.\n      maxBatchInterval: 10\n      # MaxBatchSize - Telemetry Maximum Batch Size to send.\n      maxBatchSize: 1024\n    # AzureManagedAppConfig - Microsoft Azure specific DB Cluster\n    # information.\n    managedApp:\n      # DiskEncryptionSetID - By default, managed disks use\n      # platform-managed encryption keys. All managed disks, snapshots,\n      # images, and data written to existing managed disks are\n      # automatically encrypted-at-rest with platform-managed keys. You\n      # can choose to manage encryption at the level of each managed\n      # disk, with your own keys. When you specify a customer-managed\n      # key, that key is used to protect and control access to the key\n      # that encrypts your data. Customer-managed keys offer greater\n      # flexibility to manage access controls.\n      diskEncryptionSetId: string\n      # PlanId - The Azure Marketplace Plan/Offer identifier selected by\n      # the customer for this DB cluster e.g. BYOL, Pay-As-You-Go etc.\n      planId: string\n      # ParentResourceGroup - The resource group of the ManagedApp\n      # itself ParentResourceGroup     string\n      # `json:\"parentResourceGroup\"` ResourceId - Identifier of the\n      # resource against which usage is emitted Format is GUID\n      # (UUID)\n      # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json\n      # Optional only if that exactly of  ResourceId or ResourceUri is\n      # specified.\n      resourceId: string\n      # ResourceUri - Identifier of the managed app resource against\n      # which usage is emitted\n      # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json\n      # Optional only if that exactly of  ResourceId or ResourceUri is\n      # specified.\n      resourceUri: string\n  # Tells the operator we want to run in Debug mode.\n  debug: false\n  # Identifies the type of Kubernetes deployment.\n  deploymentType:\n    # CloudRegionEnum - The target Kubernetes type to deploy to.\n    # Supported Values are: - aws_useast_1 aws_useast_2 aws_uswest_1\n    # az_useast_1 az_uswest_1\n    region: string\n    # DeploymentTypeEnum - The type of the Deployment. Supported Values\n    # are: - Managed FreeSaaS DedicatedSaaS OnPrem\n    type: string\n  # The platform infrastructure provider e.g. azure, aws, gcp, on-prem\n  # etc.\n  devEditionConfig:\n    # Host IPv4 address. Used by KiND based Developer Edition where\n    # ingress paths set to *. Provides qualified, routable URLs to\n    # workbench.\n    hostIpAddress: \"\"\n  # The GAdmin Dashboard Configuration for the Kinetica Cluster.\n  gadmin:\n    # The port that GAdmin will be running on. It runs only on the head\n    # node pod in the cluster. Default: 8080\n    containerPort:\n      # Number of port to expose on the pod's IP address. 
This must be a\n      # valid port number, 0 < x < 65536.\n      containerPort: 1\n      # What host IP to bind the external port to.\n      hostIP: string\n      # Number of port to expose on the host. If specified, this must be\n      # a valid port number, 0 < x < 65536. If HostNetwork is\n      # specified, this must match ContainerPort. Most containers do\n      # not need this.\n      hostPort: 1\n      # If specified, this must be an IANA_SVC_NAME and unique within\n      # the pod. Each named port in a pod must have a unique name. Name\n      # for the port that can be referred to by services.\n      name: string\n      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n      # to \"TCP\".\n      protocol: \"TCP\"\n    # The Ingress Endpoint that GAdmin will be running on.\n    ingressPath:\n      # backend defines the referenced service endpoint to which the\n      # traffic will be forwarded to.\n      backend:\n        # resource is an ObjectRef to another Kubernetes resource in the\n        # namespace of the Ingress object. If resource is specified,\n        # serviceName and servicePort must not be specified.\n        resource:\n          # APIGroup is the group for the resource being referenced. If\n          # APIGroup is not specified, the specified Kind must be in\n          # the core API group. For any other third-party types,\n          # APIGroup is required.\n          apiGroup: string\n          # Kind is the type of resource being referenced\n          kind: KineticaCluster\n          # Name is the name of resource being referenced\n          name: string\n        # serviceName specifies the name of the referenced service.\n        serviceName: string\n        # servicePort Specifies the port of the referenced service.\n        servicePort: \n      # path is matched against the path of an incoming request.\n      # Currently it can contain characters disallowed from the\n      # conventional \"path\" part of a URL as defined by RFC 3986. Paths\n      # must begin with a '/' and must be present when using PathType\n      # with value \"Exact\" or \"Prefix\".\n      path: string\n      # pathType determines the interpretation of the path matching.\n      # PathType can be one of the following values: * Exact: Matches\n      # the URL path exactly. * Prefix: Matches based on a URL path\n      # prefix split by '/'. Matching is done on a path element by\n      # element basis. A path element refers is the list of labels in\n      # the path split by the '/' separator. A request is a match for\n      # path p if every p is an element-wise prefix of p of the request\n      # path. Note that if the last element of the path is a substring\n      # of the last element in request path, it is not a match\n      # (e.g. /foo/bar matches /foo/bar/baz, but does not\n      # match /foo/barbaz). * ImplementationSpecific: Interpretation of\n      # the Path matching is up to the IngressClass. Implementations\n      # can treat this as a separate PathType or treat it identically\n      # to Prefix or Exact path types. Implementations are required to\n      # support all path types. Defaults to ImplementationSpecific.\n      pathType: string\n    # Whether to enable the GAdmin Dashboard on the Cluster. 
\n  # Gaia - gaia.properties configuration\n  gaia:\n    admin:\n      # AdminLoginOnlyGpudbDown - When GPUdb is down, only allow admin\n      # user to login\n      admin_login_only_gpudb_down: true\n      # Username - We do check for admin username in various places\n      admin_username: \"admin\"\n      # LoginAnimationEnabled - Display any animation in login page\n      login_animation_enabled: true\n      # LoginBypassEnabled - Convenience settings for dev mode\n      login_bypass_enabled: false\n      # RequireStrongPassword - Convenience settings for dev mode\n      require_strong_password: true\n      # SSLTruststorePasswordScript - Script used to retrieve the SSL\n      # truststore password\n      ssl_truststore_password_script: string\n
    # DemoSchema - Schema-related configuration\n    demo_schema: \"demo\"\n    gpudb:\n      # DataFileStringNullValue - Table import/export null value string\n      data_file_string_null_value: \"\\\\N\"\n      gpudb_ext_url: \"http://127.0.0.1:8082/gpudb-0\"\n      # URL - Current instance of gpudb, when running in HA mode change\n      # this to load balancer endpoint\n      gpudb_url: \"http://127.0.0.1:9191\"\n      # LoggingLogFileName - Which file to use when displaying logging\n      # on Cluster page.\n      logging_log_file_name: \"gpudb.log\"\n      # SampleRepoURL - URL of the sample data repository\n      sample_repo_url: \"//s3.amazonaws.com/kinetica-ce-data\"\n
    hm:\n      gpudb_ext_hm_url: \"http://127.0.0.1:8082/gpudb-host-manager\"\n      gpudb_hm_url: \"http://127.0.0.1:9300\"\n    http:\n      # ClientTimeout - Number of seconds for proxy request timeout\n      http_client_timeout: 3600\n      # ClientTimeoutV2 - Force override of previous default with 0 as\n      # infinite timeout\n      http_client_timeout_v2: 0\n      # TomcatPathKey - Name of folder where Tomcat apps are installed\n      tomcat_path_key: \"tomcat\"\n      # WebappContext - Web App context\n      webapp_context: \"gadmin\"\n
    # GAdminIsRemote - True if the gadmin application is running on a\n    # remote machine (not on same node as gpudb). If running on a\n    # remote machine the manage options will be disabled.\n    is_remote: false
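\n    # The snippet below is an illustrative sketch, not part of the\n    # generated schema reference: it tightens the GAdmin login policy\n    # using the admin settings above.\n    #\n    #   gaia:\n    #     admin:\n    #       admin_login_only_gpudb_down: true\n    #       login_bypass_enabled: false\n    #       require_strong_password: true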
\n    # KAgentCLIPath - Schema-related configuration\n    kagent_cli_path: \"/opt/gpudb/kagent/bin/kagent\"\n    # KIO - KIO-related configuration\n    kio:\n      kio_log_file_path: \"/opt/gpudb/kitools/kio/logs/gadmin.log\"\n      kio_log_level: \"DEBUG\"\n      kio_log_size_limit: 10485760\n
    kisql:\n      # QueryResultsLimit - KiSQL limit on the number of results in each\n      # query\n      kisql_query_results_limit: 10000\n      # QueryTimezone - KiSQL TimeZoneId setting for queries\n      # (use \"system\" for local system time)\n      kisql_query_timezone: \"GMT\"\n
    license:\n      # Status - Stub for license manager\n      status: \"ok\"\n      # Type - Stub for license manager\n      type: \"unlimited\"\n
    # MaxConcurrentUserSessions - Session management configuration\n    max_concurrent_user_sessions: 0\n    # PublicSchema - Schema-related configuration\n    public_schema: \"ki_home\"\n    # RevealDBInfoFile - Path to file containing Reveal DB location\n    reveal_db_info_file: \"/opt/gpudb/connectors/reveal/var/REVEAL_DB_DIR\"\n    # RootSchema - Schema-related configuration\n    root_schema: \"root\"\n
    stats:\n      # GraphanaURL -\n      graphana_url: \"http://127.0.0.1:3000\"\n      # GraphiteURL\n      graphite_url: \"http://127.0.0.1:8181\"\n      # StatsGrafanaURL - Port used to host the Grafana user interface\n      # and embeddable metric dashboards in GAdmin. Note: If this value\n      # is defaulted then it will be replaced by the name of the Stats\n      # service if it is deployed & Grafana is enabled e.g.\n      # cluster-1234.gpudb.svc.cluster.local\n      stats_grafana_url: \"http://127.0.0.1:9091\"\n
  # https://github.com/kubernetes-sigs/controller-tools/issues/622 - if\n  # we want usePools to default to false, the defaults need to be set.\n  # GPUDBCluster is an instance of a Kinetica DB Cluster i.e. its\n  # StatefulSet, Service, Ingress, ConfigMap etc.\n  gpudbCluster:\n    # Affinity - a group of affinity scheduling rules.\n    affinity:\n      # Describes node affinity scheduling rules for the pod.\n      nodeAffinity:\n
        # The scheduler will prefer to schedule pods to nodes that\n        # satisfy the affinity expressions specified by this field, but\n        # it may choose a node that violates one or more of the\n        # expressions. The node that is most preferred is the one with\n        # the greatest sum of weights, i.e. for each node that meets\n        # all of the scheduling requirements (resource request,\n        # requiredDuringScheduling affinity expressions, etc.), compute\n        # a sum by iterating through the elements of this field and\n        # adding \"weight\" to the sum if the node matches the\n        # corresponding matchExpressions; the node(s) with the highest\n        # sum are the most preferred.\n        preferredDuringSchedulingIgnoredDuringExecution:\n        - preference:\n            # A list of node selector requirements by node's labels.\n            matchExpressions:\n            - key: string\n              # Represents a key's relationship to a set of values.\n              # Valid operators are In, NotIn, Exists, DoesNotExist,\n              # Gt, and Lt.\n              operator: string\n              # An array of string values. If the operator is In or\n              # NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. 
If the operator is Gt or Lt, the values\n              # array must have a single element, which will be\n              # interpreted as an integer. This array is replaced\n              # during a strategic merge patch.\n              values: [\"string\"]\n
            # A list of node selector requirements by node's fields.\n            matchFields:\n            - key: string\n              # Represents a key's relationship to a set of values.\n              # Valid operators are In, NotIn, Exists, DoesNotExist,\n              # Gt, and Lt.\n              operator: string\n              # An array of string values. If the operator is In or\n              # NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. If the operator is Gt or Lt, the values\n              # array must have a single element, which will be\n              # interpreted as an integer. This array is replaced\n              # during a strategic merge patch.\n              values: [\"string\"]\n
          # Weight associated with matching the corresponding\n          # nodeSelectorTerm, in the range 1-100.\n          weight: 1\n
        # If the affinity requirements specified by this field are not\n        # met at scheduling time, the pod will not be scheduled onto\n        # the node. If the affinity requirements specified by this\n        # field cease to be met at some point during pod execution\n        # (e.g. due to an update), the system may or may not try to\n        # eventually evict the pod from its node.\n        requiredDuringSchedulingIgnoredDuringExecution:\n          # Required. A list of node selector terms. The terms are\n          # ORed.\n          nodeSelectorTerms:\n          - matchExpressions:\n            - key: string\n              # Represents a key's relationship to a set of values.\n              # Valid operators are In, NotIn, Exists, DoesNotExist,\n              # Gt, and Lt.\n              operator: string\n              # An array of string values. If the operator is In or\n              # NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. If the operator is Gt or Lt, the values\n              # array must have a single element, which will be\n              # interpreted as an integer. This array is replaced\n              # during a strategic merge patch.\n              values: [\"string\"]\n
            # A list of node selector requirements by node's fields.\n            matchFields:\n            - key: string\n              # Represents a key's relationship to a set of values.\n              # Valid operators are In, NotIn, Exists, DoesNotExist,\n              # Gt, and Lt.\n              operator: string\n              # An array of string values. If the operator is In or\n              # NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. If the operator is Gt or Lt, the values\n              # array must have a single element, which will be\n              # interpreted as an integer. This array is replaced\n              # during a strategic merge patch.\n              values: [\"string\"]\n
      # Describes pod affinity scheduling rules (e.g. co-locate this pod\n      # in the same node, zone, etc. 
as some other pod(s)).\n      podAffinity:\n        # The scheduler will prefer to schedule pods to nodes that\n        # satisfy the affinity expressions specified by this field, but\n        # it may choose a node that violates one or more of the\n        # expressions. The node that is most preferred is the one with\n        # the greatest sum of weights, i.e. for each node that meets\n        # all of the scheduling requirements (resource request,\n        # requiredDuringScheduling affinity expressions, etc.), compute\n        # a sum by iterating through the elements of this field and\n        # adding \"weight\" to the sum if the node has pods which matches\n        # the corresponding podAffinityTerm; the node(s) with the\n        # highest sum are the most preferred.\n        preferredDuringSchedulingIgnoredDuringExecution:\n        - podAffinityTerm:\n            # A label query over a set of resources, in this case pods.\n            labelSelector:\n              # matchExpressions is a list of label selector\n              # requirements. The requirements are ANDed.\n              matchExpressions:\n              - key: string\n                # operator represents a key's relationship to a set of\n                # values. Valid operators are In, NotIn, Exists and\n                # DoesNotExist.\n                operator: string\n                # values is an array of string values. If the operator\n                # is In or NotIn, the values array must be non-empty.\n                # If the operator is Exists or DoesNotExist, the values\n                # array must be empty. This array is replaced during a\n                # strategic merge patch.\n                values: [\"string\"]\n              # matchLabels is a map of {key,value} pairs. A single\n              # {key,value} in the matchLabels map is equivalent to an\n              # element of matchExpressions, whose key field is \"key\",\n              # the operator is \"In\", and the values array contains\n              # only \"value\". The requirements are ANDed.\n              matchLabels: {}\n            # A label query over the set of namespaces that the term\n            # applies to. The term is applied to the union of the\n            # namespaces selected by this field and the ones listed in\n            # the namespaces field. null selector and null or empty\n            # namespaces list means \"this pod's namespace\". An empty\n            # selector ({}) matches all namespaces.\n            namespaceSelector:\n              # matchExpressions is a list of label selector\n              # requirements. The requirements are ANDed.\n              matchExpressions:\n              - key: string\n                # operator represents a key's relationship to a set of\n                # values. Valid operators are In, NotIn, Exists and\n                # DoesNotExist.\n                operator: string\n                # values is an array of string values. If the operator\n                # is In or NotIn, the values array must be non-empty.\n                # If the operator is Exists or DoesNotExist, the values\n                # array must be empty. This array is replaced during a\n                # strategic merge patch.\n                values: [\"string\"]\n              # matchLabels is a map of {key,value} pairs. 
A single\n              # {key,value} in the matchLabels map is equivalent to an\n              # element of matchExpressions, whose key field is \"key\",\n              # the operator is \"In\", and the values array contains\n              # only \"value\". The requirements are ANDed.\n              matchLabels: {}\n            # namespaces specifies a static list of namespace names that\n            # the term applies to. The term is applied to the union of\n            # the namespaces listed in this field and the ones selected\n            # by namespaceSelector. null or empty namespaces list and\n            # null namespaceSelector means \"this pod's namespace\".\n            namespaces: [\"string\"]\n            # This pod should be co-located (affinity) or not\n            # co-located (anti-affinity) with the pods matching the\n            # labelSelector in the specified namespaces, where\n            # co-located is defined as running on a node whose value of\n            # the label with key topologyKey matches that of any node\n            # on which any of the selected pods is running. Empty\n            # topologyKey is not allowed.\n            topologyKey: string\n          # weight associated with matching the corresponding\n          # podAffinityTerm, in the range 1-100.\n          weight: 1\n        # If the affinity requirements specified by this field are not\n        # met at scheduling time, the pod will not be scheduled onto\n        # the node. If the affinity requirements specified by this\n        # field cease to be met at some point during pod execution\n        # (e.g. due to a pod label update), the system may or may not\n        # try to eventually evict the pod from its node. When there are\n        # multiple elements, the lists of nodes corresponding to each\n        # podAffinityTerm are intersected, i.e. all terms must be\n        # satisfied.\n        requiredDuringSchedulingIgnoredDuringExecution:\n        - labelSelector:\n            # matchExpressions is a list of label selector requirements.\n            # The requirements are ANDed.\n            matchExpressions:\n            - key: string\n              # operator represents a key's relationship to a set of\n              # values. Valid operators are In, NotIn, Exists and\n              # DoesNotExist.\n              operator: string\n              # values is an array of string values. If the operator is\n              # In or NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. This array is replaced during a\n              # strategic merge patch.\n              values: [\"string\"]\n            # matchLabels is a map of {key,value} pairs. A single\n            # {key,value} in the matchLabels map is equivalent to an\n            # element of matchExpressions, whose key field is \"key\",\n            # the operator is \"In\", and the values array contains\n            # only \"value\". The requirements are ANDed.\n            matchLabels: {}\n          # A label query over the set of namespaces that the term\n          # applies to. The term is applied to the union of the\n          # namespaces selected by this field and the ones listed in\n          # the namespaces field. null selector and null or empty\n          # namespaces list means \"this pod's namespace\". 
An empty\n          # selector ({}) matches all namespaces.\n          namespaceSelector:\n            # matchExpressions is a list of label selector requirements.\n            # The requirements are ANDed.\n            matchExpressions:\n            - key: string\n              # operator represents a key's relationship to a set of\n              # values. Valid operators are In, NotIn, Exists and\n              # DoesNotExist.\n              operator: string\n              # values is an array of string values. If the operator is\n              # In or NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. This array is replaced during a\n              # strategic merge patch.\n              values: [\"string\"]\n            # matchLabels is a map of {key,value} pairs. A single\n            # {key,value} in the matchLabels map is equivalent to an\n            # element of matchExpressions, whose key field is \"key\",\n            # the operator is \"In\", and the values array contains\n            # only \"value\". The requirements are ANDed.\n            matchLabels: {}\n          # namespaces specifies a static list of namespace names that\n          # the term applies to. The term is applied to the union of\n          # the namespaces listed in this field and the ones selected\n          # by namespaceSelector. null or empty namespaces list and\n          # null namespaceSelector means \"this pod's namespace\".\n          namespaces: [\"string\"]\n          # This pod should be co-located (affinity) or not co-located\n          # (anti-affinity) with the pods matching the labelSelector in\n          # the specified namespaces, where co-located is defined as\n          # running on a node whose value of the label with key\n          # topologyKey matches that of any node on which any of the\n          # selected pods is running. Empty topologyKey is not\n          # allowed.\n          topologyKey: string\n      # Describes pod anti-affinity scheduling rules (e.g. avoid putting\n      # this pod in the same node, zone, etc. as some other pod(s)).\n      podAntiAffinity:\n        # The scheduler will prefer to schedule pods to nodes that\n        # satisfy the anti-affinity expressions specified by this\n        # field, but it may choose a node that violates one or more of\n        # the expressions. The node that is most preferred is the one\n        # with the greatest sum of weights, i.e. for each node that\n        # meets all of the scheduling requirements (resource request,\n        # requiredDuringScheduling anti-affinity expressions, etc.),\n        # compute a sum by iterating through the elements of this field\n        # and adding \"weight\" to the sum if the node has pods which\n        # matches the corresponding podAffinityTerm; the node(s) with\n        # the highest sum are the most preferred.\n        preferredDuringSchedulingIgnoredDuringExecution:\n        - podAffinityTerm:\n            # A label query over a set of resources, in this case pods.\n            labelSelector:\n              # matchExpressions is a list of label selector\n              # requirements. The requirements are ANDed.\n              matchExpressions:\n              - key: string\n                # operator represents a key's relationship to a set of\n                # values. 
Valid operators are In, NotIn, Exists and\n                # DoesNotExist.\n                operator: string\n                # values is an array of string values. If the operator\n                # is In or NotIn, the values array must be non-empty.\n                # If the operator is Exists or DoesNotExist, the values\n                # array must be empty. This array is replaced during a\n                # strategic merge patch.\n                values: [\"string\"]\n              # matchLabels is a map of {key,value} pairs. A single\n              # {key,value} in the matchLabels map is equivalent to an\n              # element of matchExpressions, whose key field is \"key\",\n              # the operator is \"In\", and the values array contains\n              # only \"value\". The requirements are ANDed.\n              matchLabels: {}\n            # A label query over the set of namespaces that the term\n            # applies to. The term is applied to the union of the\n            # namespaces selected by this field and the ones listed in\n            # the namespaces field. null selector and null or empty\n            # namespaces list means \"this pod's namespace\". An empty\n            # selector ({}) matches all namespaces.\n            namespaceSelector:\n              # matchExpressions is a list of label selector\n              # requirements. The requirements are ANDed.\n              matchExpressions:\n              - key: string\n                # operator represents a key's relationship to a set of\n                # values. Valid operators are In, NotIn, Exists and\n                # DoesNotExist.\n                operator: string\n                # values is an array of string values. If the operator\n                # is In or NotIn, the values array must be non-empty.\n                # If the operator is Exists or DoesNotExist, the values\n                # array must be empty. This array is replaced during a\n                # strategic merge patch.\n                values: [\"string\"]\n              # matchLabels is a map of {key,value} pairs. A single\n              # {key,value} in the matchLabels map is equivalent to an\n              # element of matchExpressions, whose key field is \"key\",\n              # the operator is \"In\", and the values array contains\n              # only \"value\". The requirements are ANDed.\n              matchLabels: {}\n            # namespaces specifies a static list of namespace names that\n            # the term applies to. The term is applied to the union of\n            # the namespaces listed in this field and the ones selected\n            # by namespaceSelector. null or empty namespaces list and\n            # null namespaceSelector means \"this pod's namespace\".\n            namespaces: [\"string\"]\n            # This pod should be co-located (affinity) or not\n            # co-located (anti-affinity) with the pods matching the\n            # labelSelector in the specified namespaces, where\n            # co-located is defined as running on a node whose value of\n            # the label with key topologyKey matches that of any node\n            # on which any of the selected pods is running. 
Empty\n            # topologyKey is not allowed.\n            topologyKey: string\n          # weight associated with matching the corresponding\n          # podAffinityTerm, in the range 1-100.\n          weight: 1\n        # If the anti-affinity requirements specified by this field are\n        # not met at scheduling time, the pod will not be scheduled\n        # onto the node. If the anti-affinity requirements specified by\n        # this field cease to be met at some point during pod\n        # execution (e.g. due to a pod label update), the system may or\n        # may not try to eventually evict the pod from its node. When\n        # there are multiple elements, the lists of nodes corresponding\n        # to each podAffinityTerm are intersected, i.e. all terms must\n        # be satisfied.\n        requiredDuringSchedulingIgnoredDuringExecution:\n        - labelSelector:\n            # matchExpressions is a list of label selector requirements.\n            # The requirements are ANDed.\n            matchExpressions:\n            - key: string\n              # operator represents a key's relationship to a set of\n              # values. Valid operators are In, NotIn, Exists and\n              # DoesNotExist.\n              operator: string\n              # values is an array of string values. If the operator is\n              # In or NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. This array is replaced during a\n              # strategic merge patch.\n              values: [\"string\"]\n            # matchLabels is a map of {key,value} pairs. A single\n            # {key,value} in the matchLabels map is equivalent to an\n            # element of matchExpressions, whose key field is \"key\",\n            # the operator is \"In\", and the values array contains\n            # only \"value\". The requirements are ANDed.\n            matchLabels: {}\n          # A label query over the set of namespaces that the term\n          # applies to. The term is applied to the union of the\n          # namespaces selected by this field and the ones listed in\n          # the namespaces field. null selector and null or empty\n          # namespaces list means \"this pod's namespace\". An empty\n          # selector ({}) matches all namespaces.\n          namespaceSelector:\n            # matchExpressions is a list of label selector requirements.\n            # The requirements are ANDed.\n            matchExpressions:\n            - key: string\n              # operator represents a key's relationship to a set of\n              # values. Valid operators are In, NotIn, Exists and\n              # DoesNotExist.\n              operator: string\n              # values is an array of string values. If the operator is\n              # In or NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. This array is replaced during a\n              # strategic merge patch.\n              values: [\"string\"]\n            # matchLabels is a map of {key,value} pairs. A single\n            # {key,value} in the matchLabels map is equivalent to an\n            # element of matchExpressions, whose key field is \"key\",\n            # the operator is \"In\", and the values array contains\n            # only \"value\". 
The requirements are ANDed.\n            matchLabels: {}\n
          # namespaces specifies a static list of namespace names that\n          # the term applies to. The term is applied to the union of\n          # the namespaces listed in this field and the ones selected\n          # by namespaceSelector. null or empty namespaces list and\n          # null namespaceSelector means \"this pod's namespace\".\n          namespaces: [\"string\"]\n
          # This pod should be co-located (affinity) or not co-located\n          # (anti-affinity) with the pods matching the labelSelector in\n          # the specified namespaces, where co-located is defined as\n          # running on a node whose value of the label with key\n          # topologyKey matches that of any node on which any of the\n          # selected pods is running. Empty topologyKey is not\n          # allowed.\n          topologyKey: string\n
    # Annotations - Annotations to be applied to the Statefulset\n    # DB pods.\n    annotations: {}\n    # The name of the cluster to form.\n    clusterName: string\n
    # ClusterSize - The T-Shirt size of the Kinetica DB Cluster.\n    clusterSize:\n      # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a\n      # representation of the number of nodes in a simple to understand\n      # T-Shirt size scheme. This indicates the size of the cluster\n      # i.e. the number of nodes. It does not identify the size of the\n      # cloud provider nodes. For node size see ClusterTypeEnum.\n      # Supported Values are: - XS S M L XL XXL XXXL\n      tshirtSize: string\n      # ClusterTypeEnum - An Enum of the node types of a KineticaCluster\n      # e.g. CPU, GPU along with the Cloud Provider node size e.g. size\n      # of the VM.\n      tshirtType: string\n
    # Config Kinetica DB Configuration Object\n    config:\n      ai:\n        apiKey: string\n        # Provider - AI API provider type. The default is \"sqlgpt\"\n        apiProvider: \"sqlgpt\"\n        apiUrl: string\n
      # AlertManagerConfig\n      alertManager:\n        # AlertManager IP address (run on head node) default port\n        # is \"2003\"\n        ipAddress: \"${gaia.host0.address}\"\n        port: 2003\n
      # AlertConfig\n      alerts:\n        alertDiskAbsolute: [integer]\n        # Trigger an alert if available disk space on any given node\n        # falls to or below a certain threshold, either absolute\n        # (number of bytes) or percentage of total disk space. For\n        # multiple thresholds, use a comma-delimited list of values.\n        alertDiskPercentage: [1,5,10,20]\n
        # Trigger generic error message alerts, in cases of various\n        # significant runtime errors.\n        alertErrorMessages: true\n
        # Executable to run when an alert condition occurs. This\n        # executable will only be run on **rank0** and does not need to\n        # be present on other nodes.\n        alertExe: \"\"\n
        # Trigger an alert whenever the status of a host or rank\n        # changes.\n        alertHostStatus: true\n
        # Optionally, filter host alerts for a comma-delimited list of\n        # statuses. If a filter is empty, every host status change will\n        # trigger an alert.\n        alertHostStatusFilter: \"fatal_init_error\"\n
        # The maximum number of triggered alerts guaranteed to be stored\n        # at any given time. When this number of alerts is exceeded,\n        # older alerts may be discarded to stay within the limit.\n        alertMaxStoredAlerts: 100\n        alertMemoryAbsolute: [integer]\n
        # Trigger an alert if available memory on any given node falls\n        # to or below a certain threshold, either absolute (number of\n        # bytes) or percentage of total memory. For multiple\n        # thresholds, use a comma-delimited list of values.\n        alertMemoryPercentage: [1,5,10,20]\n
        # Trigger an alert if a CUDA error occurs on a rank.\n        alertRankCudaError: true\n
        # Trigger alerts when the fallback allocator is employed; e.g.,\n        # host memory is allocated because GPU allocation fails. NOTE:\n        # To prevent a flooding of alerts, if a fallback allocator is\n        # triggered in bursts, not every use will generate an alert.\n        alertRankFallbackAllocator: true\n
        # Trigger an alert whenever the status of a rank changes.\n        alertRankStatus: true\n
        # Optionally, filter rank alerts for a comma-delimited list of\n        # statuses. If a filter is empty, every rank status change will\n        # trigger an alert.\n        alertRankStatusFilter:\n        [\"fatal_init_error\",\"not_responding\",\"terminated\"]\n
        # Enable the alerting system.\n        enableAlerts: true\n
        # Directory where the trace event and summary files are stored.\n        # Must be a fully qualified path with sufficient free space for\n        # required volume of data.\n        traceDirectory: \"/tmp\"\n        # The maximum number of trace events to be collected\n        traceEventBufferSize: 1000000
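\n        # The snippet below is an illustrative sketch, not part of the\n        # generated schema reference: it raises alerts when free disk\n        # or memory drops to 10% or 5% and runs a custom handler on\n        # rank0. The script path is a placeholder assumption.\n        #\n        #   alerts:\n        #     enableAlerts: true\n        #     alertDiskPercentage: [5,10]\n        #     alertMemoryPercentage: [5,10]\n        #     alertExe: \"/opt/gpudb/bin/alert-hook.sh\"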
\n      # Audit - This section controls the request auditor, which will\n      # audit all requests received by the server in full or in part\n      # based on the settings.\n      audit:\n
        # Controls whether the body of each request is audited (in JSON\n        # format). If 'enable_audit' is \"false\" this setting has no\n        # effect. NOTE: For requests that insert data records, this\n        # setting does not control the auditing of the records being\n        # inserted, only the rest of the request body; see 'audit_data'\n        # below to control this. audit_body = false\n        body: false\n
        # Controls whether records being inserted are audited (in JSON\n        # format) for requests that insert data records. If\n        # either 'enable_audit' or 'audit_body' is \"false\", this\n        # setting has no effect. NOTE: Enabling this setting during\n        # bulk ingestion of data will rapidly produce very large audit\n        # logs and may cause disk space exhaustion; use with caution.\n        # audit_data = false\n        data: false\n
        # Controls whether request auditing is enabled. If set\n        # to \"true\", the following information is audited for every\n        # request: Job ID, URI, User, and Client Address. The settings\n        # below control whether additional information about each\n        # request is also audited. If set to \"false\", all auditing is\n        # disabled. enable_audit = false\n        enable: false\n
        # Controls whether HTTP headers are audited for each request.\n        # If 'enable_audit' is \"false\" this setting has no effect.\n        # audit_headers = false\n        headers: true\n
        # Controls whether the above audit settings can be altered at\n        # runtime via the /alter/system/properties endpoint. In a\n        # secure environment where auditing is required at all times,\n        # this should be set to \"true\" to lock the settings to what is\n        # set in this file. lock_audit = false\n        lock: false\n
        # Controls whether response information is audited for each\n        # request. If 'enable_audit' is \"false\" this setting has no\n        # effect. audit_response = false\n        response: false
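\n      # The snippet below is an illustrative sketch, not part of the\n      # generated schema reference: it enables request auditing with\n      # bodies and headers, and locks the audit settings so they cannot\n      # be changed at runtime.\n      #\n      #   audit:\n      #     enable: true\n      #     body: true\n      #     headers: true\n      #     lock: true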
\n      # EventConfig\n      events:\n        # Run a statistics server to collect information about Kinetica\n        # and the machines it runs on.\n        internal: true\n
        # Statistics server IP address (run on head node) default port\n        # is \"2003\"\n        ipAddress: \"${gaia.host0.address}\"\n        port: 2003\n        # Statistics server namespace - should be a machine identifier\n        statsServerNamespace: \"gpudb\"\n
      # ExternalFilesConfig\n      externalFiles:\n        # Defines the directory from which external files can be loaded\n        directory: \"/opt/gpudb/persist\"\n
        # Parquet files compression type. egress_parquet_compression =\n        # snappy\n        egressParquetCompression: \"snappy\"\n
        # Max file size (in MB) to allow saving to a single file. May be\n        # overridden by target limitations. egress_single_file_max_size\n        # = 100\n        egressSingleFileMaxSize: \"100\"\n
        # Maximum number of simultaneous threads allocated to a given\n        # external file read request, on each rank. Note that thread\n        # allocation may also be limited by resource group limits, the\n        # subtask_concurrency_limit setting, or system load.\n        readerNumTasks: \"-1\"
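\n      # The snippet below is an illustrative sketch, not part of the\n      # generated schema reference: it loads external files from the\n      # persist volume and caps egress files at 500 MB. The size value\n      # is a placeholder assumption.\n      #\n      #   externalFiles:\n      #     directory: \"/opt/gpudb/persist\"\n      #     egressParquetCompression: \"snappy\"\n      #     egressSingleFileMaxSize: \"500\"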
\n      # GeneralConfig - the root of the gpudb.conf configuration in the\n      # CRD\n      general:\n
        # Timeout (in seconds) to wait for a rank to start during a\n        # cluster event (ex: failover) before the event is considered\n        # failed.\n        clusterEventTimeoutStartupRank: \"300\"\n
        # Enable (if \"true\") multiple kernels to run concurrently on the\n        # same GPU\n        concurrentKernelExecution: true\n
        # Time-to-live in minutes of non-protected tables before they\n        # are automatically deleted from the database.\n        defaultTTL: \"20\"\n        # Disallow the /clear/table request to clear all tables.\n        disableClearAll: true\n        # Enable overlapped-equi-join filters\n        enableOverlappedEquiJoin: true\n        # Enable predicate-equi-join filter plan type\n        enablePredicateEquiJoin: true\n
        # If \"true\" then all filter execution will be host-only\n        # (i.e. CPU). This can be useful for high-concurrency\n        # situations and when PCIe bandwidth is a limiting factor.\n        forceHostFilterExecution: false\n
        # Maximum number of kernels that can be running at the same time\n        # on a given GPU. Set to \"0\" for no limit. Only takes effect\n        # if 'concurrent_kernel_execution' is \"true\"\n        maxConcurrentKernels: \"0\"\n
        # Maximum number of records that data retrieval requests such\n        # as /get/records and /aggregate/groupby will return per\n        # request.\n        maxGetRecordsSize: 20000\n
        # Set an optional executable command that will be run once when\n        # Kinetica is ready for client requests. This can be used to\n        # perform any initialization logic that needs to be run before\n        # clients connect. It will be run as the \"gpudb\" user, so you\n        # must ensure that any required permissions are set on the file\n        # to allow it to be executed.  If the command cannot be\n        # executed or returns a non-zero error code, then Kinetica will\n        # be stopped.  Output from the startup script will be logged\n        # to \"/opt/gpudb/core/logs/gpudb-on-start.log\" (and its dated\n        # relatives).  The \"gpudb_env.sh\" script is run directly before\n        # the command, so the path will be set to include the supplied\n        # Python runtime. Example: on_startup_script\n        # = /home/gpudb/on-start.sh param1 param2 ...\n        onStartupScript: \"\"\n
        # Size in bytes of the pinned memory pool per-rank process to\n        # speed up copying data to the GPU.  Set to \"0\" to disable.\n        pinnedMemoryPoolSize: 2000000000\n
        # Tables and collections with these names will not be deleted\n        # (comma separated).\n        protectedSets: \"MASTER,_MASTER,_DATASOURCE\"\n        # Timeout (in minutes) for filter-type requests\n        requestTimeout: \"20\"\n
        # Timeout (in seconds) to wait for a rank to exit gracefully\n        # before it is force-killed. Machines with slow disk drives may\n        # require longer times and data may be lost if a drive is not\n        # responsive.\n        timeoutShutdownRank: \"300\"\n
        # Timeout (in seconds) to wait for each database subsystem to\n        # exit gracefully before it is force-killed.\n        timeoutShutdownSubsystem: \"20\"\n
        # Timeout (in seconds) to wait for each database subsystem to\n        # startup. Subsystems include the Query Planner, Graph,\n        # Stats, & HTTP servers, as well as external text-search\n        # ranks.\n        timeoutStartupSubsystem: \"60\"\n
      # GraphConfig\n      graph:\n        # Enable the graph server\n        enable: false\n
        # List of GPU devices to be used by the graph server. The server\n        # would ideally be run on a different node with dedicated\n        # GPU(s)\n        gpuList: \"\"\n
        # Specify where the graph server should be run, defaults to head\n        # node\n        ipAddress: \"${gaia.rank0_ip_address}\"\n
        # Maximum memory that can be used by the graph server, set\n        # to \"0\" to disable memory restriction\n        maxMemory: 0\n
        # Port used for responses from the graph server to the database\n        # server\n        pullPort: 8100\n        # Port used for requests from the database server to the graph\n        # server\n        pushPort: 8099\n        # Number of seconds the graph client will wait for a response\n        # from the graph server\n        timeout: 1200
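\n      # The snippet below is an illustrative sketch, not part of the\n      # generated schema reference: it enables the graph server on the\n      # head node with a memory cap. The 30 GB figure is a placeholder\n      # assumption.\n      #\n      #   graph:\n      #     enable: true\n      #     ipAddress: \"${gaia.rank0_ip_address}\"\n      #     maxMemory: 32212254720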
\n      # HardwareConfig\n      hardware:\n        # Rank0HardwareConfig\n        rank0:\n          # Specify the GPU to use for all calculations on the HTTP\n          # server node, **rank0**. NOTE: The **rank0** GPU may be\n          # shared with another rank.\n          gpu: 0\n
          # Set the head HTTP **rank0** numa node(s). If left empty,\n          # there will be no thread affinity or preferred memory node.\n          # The node list may be either a single node number or a\n          # range; e.g., \"1-5,7,10\". If there will be many simultaneous\n          # users, specify as many nodes as possible that won't overlap\n          # the **rank1** to **rankN** worker numa nodes that the GPUs\n          # are on. If there will be few simultaneous users and WMS\n          # speed is important, choose the numa node the 'rank0.gpu' is\n          # on.\n          numaNode:\n
        ranks:\n        - baseNumaNode: string\n
          # Set each worker rank's preferred data numa node for CPU\n          # affinity and memory allocation.\n          # The 'rank<#>.data_numa_node' is the node or nodes that data\n          # intensive threads will run in and should be set to the same\n          # numa node that the GPU specified by the\n          # corresponding 'rank<#>.taskcalc_gpu' is on for best\n          # performance. If the 'rank<#>.taskcalc_gpu' is specified\n          # the 'rank<#>.data_numa_node' will be automatically set to\n          # the node the GPU is attached to, otherwise there will be no\n          # CPU thread affinity or preferred node for memory allocation\n          # if not specified or left empty. The node list may be a\n          # single node number or a range; e.g., \"1-5,7,10\".\n          dataNumaNode: string\n
          # Set the GPU device for each worker rank to use. If no GPUs\n          # are specified, each rank will round-robin the available\n          # GPUs per host system. Add 'rank<#>.taskcalc_gpu' as needed\n          # for the worker ranks, where *#* ranges from \"1\" to the\n          # highest *rank #* among the 'rank<#>.host' parameters.\n          # Example setting the GPUs to use for ranks 1 and 2:\n          #   rank1.taskcalc_gpu = 0\n          #   rank2.taskcalc_gpu = 1\n          taskCalcGPU:\n
      kafka:\n        # Maximum number of records to be ingested in a single batch\n        # kafka.batch_size = 1000\n        batchSize: 1000\n
        # Maximum time (milliseconds) for each poll to get records from\n        # kafka kafka.poll_timeout = 0\n        pollTimeout: 1\n
        # Maximum wait time (seconds) to buffer records received from\n        # kafka before ingestion kafka.wait_time = 30\n        waitTime: 30\n
      # KifsConfig\n      kifs:\n        # KIFs user data size limit\n        dataLimit: \"4Gi\"\n        # sudo usermod -a -G gpudb_proc <user>\n        enable: false\n
        # Parent directory of the mount point for the KiFS file system.\n        # Must be a fully qualified path. The actual mount point will\n        # be a subdirectory *mount* below this directory. Note that\n        # this folder must have read, write and execute permissions for\n        # the \"gpudb\" user and the \"gpudb_proc\" group, and it cannot be\n        # a path on an NFS.\n        mountPoint: \"/gpudb/kifs\"\n        useManagedCredentials: true\n
      # MLConfig\n      ml:\n        # Enable the ML server.\n        enable: false
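\n      # The snippet below is an illustrative sketch, not part of the\n      # generated schema reference: it enables KiFS with a larger user\n      # data limit. The 8Gi figure is a placeholder assumption.\n      #\n      #   kifs:\n      #     enable: true\n      #     dataLimit: \"8Gi\"\n      #     mountPoint: \"/gpudb/kifs\"\n      #     useManagedCredentials: true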
\n      # NetworkConfig\n      network:\n        # HAAddress - An optional address to allow inter-cluster\n        # communication with HA when 'address' is not routable between\n        # clusters.\n        HAAddress: string\n
        # CompressNetworkData - Enables compression of inter-node\n        # network data transfers.\n        compressNetworkData: false\n
        # EnableHTTPDProxy - Start an HTTP server as a proxy to handle\n        # LDAP and/or Kerberos authentication. Each host will run an\n        # HTTP server and access to each rank is available through\n        # http://host:8082/gpudb-1, where port \"8082\" is defined\n        # by 'httpd_proxy_port'. NOTE: HTTP external endpoints are not\n        # affected by the 'use_https' parameter above. If you wish to\n        # enable HTTPS, you must edit\n        # the \"/opt/gpudb/httpd/conf/httpd.conf\" and setup HTTPS as per\n        # the Apache httpd documentation at\n        # https://httpd.apache.org/docs/2.2/\n        enableHTTPDProxy: true\n
        # EnableWorkerHTTPServers - Enable worker HTTP servers; each\n        # process runs its own server for multi-head ingest.\n        enableWorkerHTTPServers: true\n
        # GlobalManagerLocalPubPort - ?\n        globalManagerLocalPubPort: 5554\n        # GlobalManagerPortOne - Internal communication ports -  Host\n        # manager status notification channel\n        globalManagerPortOne: 5552\n        # GlobalManagerPubPort - Host manager synchronization message\n        # publishing channel port\n        globalManagerPubPort: 5553\n
        # HeadIPAddress - Head HTTP server IP address. Set to the\n        # publicly accessible IP address of the first\n        # process, **rank0**.\n        headIPAddress: \"172.20.0.10\"\n        # HeadPort - Head HTTP server port to use\n        # for 'head_ip_address'.\n        headPort: 9191\n        # HostManagerHTTPPort - HTTP port for web portal of the host\n        # manager\n        hostManagerHTTPPort: 9300\n
        # HTTPAllowOrigin - Value to return via\n        # Access-Control-Allow-Origin HTTP header (for Cross-Origin\n        # Resource Sharing). Set to empty to not return the header and\n        # disallow CORS.\n        httpAllowOrigin: \"*\"\n        # HTTPKeepAlive - Keep HTTP connections alive between requests\n        httpKeepAlive: false\n
        # HTTPDProxyPort - TCP port that the httpd auth proxy server\n        # will listen on if 'enable_httpd_proxy' is \"true\".\n        httpdProxyPort: 8082\n        # HTTPDProxyUseHTTPS - Set to \"true\" if the httpd auth proxy\n        # server is configured to use HTTPS.\n        httpdProxyUseHTTPS: false\n
        # HTTPSCertFile - File containing the SSL certificate, e.g.\n        # cert.pem. If required, a self-signed certificate (expires\n        # after 10 years) can be generated via the command: openssl\n        # req -newkey rsa:2048 -new -nodes -x509 \\ -days 3650 -keyout\n        # key.pem -out cert.pem\n        httpsCertFile: \"\"\n
        # HTTPSKeyFile - File containing the SSL private Key, e.g.\n        # key.pem. If required, a self-signed certificate (expires\n        # after 10 years) can be generated via the command: openssl\n        # req -newkey rsa:2048 -new -nodes -x509 \\ -days 3650 -keyout\n        # key.pem -out cert.pem\n        httpsKeyFile: \"\"\n
        # Rank0IPAddress - Internal use IP address of the head HTTP\n        # server, **rank0**. Set to either a second internal network\n        # accessible by all ranks or to '${gaia.head_ip_address}'.\n        rank0IPAddress: \"${gaia.rank0.host}\"\n
        ranks:\n        - communicatorPort:\n            # Number of port to expose on the pod's IP address. This\n            # must be a valid port number, 0 < x < 65536.\n            containerPort: 1\n            # What host IP to bind the external port to.\n            hostIP: string\n
            # Number of port to expose on the host. If specified, this\n            # must be a valid port number, 0 < x < 65536. If\n            # HostNetwork is specified, this must match ContainerPort.\n            # Most containers do not need this.\n            hostPort: 1\n
            # If specified, this must be an IANA_SVC_NAME and unique\n            # within the pod. 
Each named port in a pod must have a\n            # unique name. Name for the port that can be referred to by\n            # services.\n            name: string\n            # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n            # to \"TCP\".\n            protocol: \"TCP\"\n          # Specify the hosts to run each rank worker process in the\n          # cluster. For a single machine system, use \"127.0.0.1\", but\n          # if using two or more machines, a hostname or IP address\n          # must be specified for each rank that is accessible from the\n          # other ranks. See also 'head_ip_address'\n          # and 'rank0_ip_address'.\n          host: string\n          # Optionally, specify the worker HTTP server ports. The\n          # default is to use ('head_port' + *rank #*) for each worker\n          # process where rank number is from \"1\" to number of ranks\n          # in 'rank<#>.host' below.\n          httpServerPort:\n            # Number of port to expose on the pod's IP address. This\n            # must be a valid port number, 0 < x < 65536.\n            containerPort: 1\n            # What host IP to bind the external port to.\n            hostIP: string\n            # Number of port to expose on the host. If specified, this\n            # must be a valid port number, 0 < x < 65536. If\n            # HostNetwork is specified, this must match ContainerPort.\n            # Most containers do not need this.\n            hostPort: 1\n            # If specified, this must be an IANA_SVC_NAME and unique\n            # within the pod. Each named port in a pod must have a\n            # unique name. Name for the port that can be referred to by\n            # services.\n            name: string\n            # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n            # to \"TCP\".\n            protocol: \"TCP\"\n          # This is the Kubernetes pod IP Address of the current rank\n          # which we need to populate in the operator. NOTE: Internal\n          # Attribute\n          podIP: string\n          # Optionally, specify a public URL for each worker HTTP server\n          # that clients should use to connect for multi-head\n          # operations. NOTE: If specified for any ranks, a public URL\n          # must be specified for all ranks.\n          publicURL: \"https://:8082/gpudb-{{.Rank}}\"\n          # Define the rank number of this rank.\n          rank: 1\n        # SetMonitorPort - Set monitor ZMQ publisher server port (-1 to\n        # disable), uses the 'head_ip_address' interface.\n        setMonitorPort: 9002\n        # SetMonitorProxyPort - Set monitor ZMQ publisher internal proxy\n        # server port (\"-1\" to disable), uses the 'head_ip_address'\n        # interface. 
IMPORTANT: Disabling this port effectively\n        # prevents worker nodes from publishing set monitor\n        # notifications when multi-head ingest is enabled\n        # (see 'enable_worker_http_servers').\n        setMonitorProxyPort: 9003\n
        # SetMonitorQueueSize - Set monitor queue size\n        setMonitorQueueSize: 1000\n
        # TriggerPort - Trigger ZMQ publisher server port (\"-1\" to\n        # disable), uses the 'head_ip_address' interface.\n        triggerPort: -1\n
        # UseHTTPS - Set to \"true\" to use HTTPS; if \"true\"\n        # then 'https_key_file' and 'https_cert_file' must be provided\n        useHttps: false\n
      # PersistenceConfig\n      persistence:\n        # Removed in 7.2\n        IndexDBFlushImmediate: true\n        # DataLoadingSchema Startup data-loading scheme\n        buildMaterializedViewsOnStart: \"on_demand\"\n        # DataLoadingSchema Startup data-loading scheme\n        buildPKIndexOnStart: \"on_demand\"\n
        # Target maximum data size for any one column in a chunk\n        # (8 GB) (0 = disable). chunk_column_max_memory = 8192000000\n        chunkColumnMaxMemory: 8192000000\n
        # Target maximum total data size for all columns in a chunk\n        # (512 MB) (0 = disable).\n        chunkMaxMemory: 512000000\n
        # Number of records per chunk (\"0\" disables chunking)\n        chunkSize: 8000000\n
        # Determines whether to execute kernels on host (CPU) or device\n        # (GPU). Possible values are:\n        #  * \"default\" : engine decides\n        #  * \"host\"    : execute only host\n        #  * \"device\"  : execute only device\n        #  * *<rows>*  : execute on the host if chunked column contains\n        #    the given number of *rows* or fewer; otherwise, execute on\n        #    device.\n        executionMode: \"device\"\n
        # Removed in 7.2\n        fsyncIndexDBImmediate: true\n        # Removed in 7.2\n        fsyncInodesImmediate: true\n        # Removed in 7.2\n        fsyncMetadataImmediate: true\n        # Removed in 7.2\n        fsyncOnInterval: true\n
        # Maximum number of open files for IndexedDb object file store.\n        # Removed in 7.2\n        indexDBMaxOpenFiles: \n        # Table of contents size for IndexedDb object file store.\n        # Removed in 7.2\n        indexDBTOCSize: \n
        # Disable detection of sparse file support and use the full file\n        # length which may be an over-estimate of the actual usage in\n        # the persist tier. Removed in 7.2\n        indexDBTierByFileLength: false\n
        # Startup data-loading scheme:\n        #  * \"always\"    : load all the data into memory before\n        #    accepting requests\n        #  * \"lazy\"      : load the necessary data to start, but load\n        #    the remainder lazily\n        #  * \"on_demand\" : only load data as requests use it\n        loadVectorsOnStart: \"on_demand\"\n
        # Removed in 7.2\n        metadataFlushImmediate: true\n        # Specify a base directory to store persistence data files.\n        persistDirectory: \"/opt/gpudb/persist\"\n
        # Whether to use synchronous persistence file writing.\n        # If \"false\", files will be written asynchronously. Removed in\n        # 7.2\n        persistSync: true
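\n        # The snippet below is an illustrative sketch, not part of the\n        # generated schema reference: it loads vectors lazily and lets\n        # the engine pick the execution target per query. The values\n        # shown are assumptions, not recommended settings.\n        #\n        #   persistence:\n        #     loadVectorsOnStart: \"lazy\"\n        #     executionMode: \"default\"\n        #     chunkSize: 8000000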
\n        # Duration in seconds, for which persistence files will be\n        # force-synced if out of sync, once per minute. NOTE: Files are\n        # always opportunistically saved; this simply enforces a\n        # maximum time a file can be out of date. Set to a very high\n        # number to disable.\n        persistSyncTime: 5\n
        # The maximum number of bytes in the shadow aggregate cache\n        shadowAggSize: 100000000\n        # Whether to enable chunk caching\n        shadowCubeEnabled: true\n        # The maximum number of bytes in the shadow filter cache\n        shadowFilterSize: 100000000\n
        # Base directory to store hashed strings.\n        smsDirectory: \"${gaia.persist_directory}\"\n        # Maximum number of open files (per-TOM) for the SMS\n        # (string) store.\n        smsMaxOpenFiles: 128\n
        # Synchronous compression: compress vectors on set compression.\n        synchronousCompression: false\n
        # Directory for GPUdb to use to store temporary files. Must be a\n        # fully qualified path, have at least 100Mb of free space, and\n        # execute permission.\n        tempDirectory: \"${gaia.persist_directory}/tmp\"\n
        # Base directory to store the text search index.\n        textIndexDirectory: \"${gaia.persist_directory}\"\n
        # Enable checksum protection on the wal entries. New in 7.2\n        walChecksum: true\n        # Specifies how frequently wal entries are written with\n        # background sync. New in 7.2\n        walFlushFrequency: 60\n        # Maximum size of each wal segment file. New in 7.2\n        walMaxSegmentSize: 500000000\n
        # Approximate number of segment files to split the wal across. A\n        # minimum of two is required. The size of the wal is limited by\n        # segment_count * max_segment_size. (per rank and per tom) Set\n        # to 0 to remove a size limit on the wal itself, but still be\n        # bounded by rank tier limits. Set to -1 to have the database\n        # decide automatically per table. New in 7.2\n        walSegmentCount: \n
        # Sync mode to use when persisting wal entries to disk:\n        #  * \"none\"       : Disable the wal\n        #  * \"background\" : Wal entries are periodically written\n        #    instead of immediately after each operation\n        #  * \"flush\"      : Protects entries in the event of a database\n        #    crash\n        #  * \"fsync\"      : Protects entries in the event of an OS\n        #    crash\n        # New in 7.2\n        walSyncPolicy: \"flush\"\n
        # If true, any table that is found to be corrupt after replaying\n        # its wal at startup will automatically be truncated so that\n        # the table becomes operable. If false, the user will be\n        # responsible for resolving the issue via sql REPAIR TABLE or\n        # similar. New in 7.2\n        walTruncateCorruptTablesOnStart: true
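\n      # The snippet below is an illustrative sketch, not part of the\n      # generated schema reference: it trades some ingest throughput\n      # for durability by fsyncing wal entries and capping the wal size\n      # per rank and tom. The segment values are assumptions.\n      #\n      #   persistence:\n      #     walSyncPolicy: \"fsync\"\n      #     walSegmentCount: 4\n      #     walMaxSegmentSize: 250000000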
\n      # PostgresProxy\n      postgresProxy:\n        # Postgres Proxy Server - Start a Postgres (TCP) server as a\n        # proxy to handle postgres wire protocol messages.\n        enablePostgresProxy: false\n
        # Set idle connection timeout in seconds. (default: \"1200\")\n        idleConnectionTimeout: 1200\n        # Set max number of queued server connections. (default: \"1\")\n        maxQueuedConnections: 1\n        # Set max number of server threads to spawn. (default: \"64\")\n        maxThreads: 64\n        # Set min number of server threads to spawn. (default: \"2\")\n        minThreads: 2\n
        # TCP port that the postgres proxy server will listen on\n        # if 'enable_postgres_proxy' is \"true\".\n        port:\n          # Number of port to expose on the pod's IP address. This must\n          # be a valid port number, 0 < x < 65536.\n          containerPort: 1\n          # What host IP to bind the external port to.\n          hostIP: string\n
          # Number of port to expose on the host. If specified, this\n          # must be a valid port number, 0 < x < 65536. If HostNetwork\n          # is specified, this must match ContainerPort. Most\n          # containers do not need this.\n          hostPort: 1\n
          # If specified, this must be an IANA_SVC_NAME and unique\n          # within the pod. Each named port in a pod must have a unique\n          # name. Name for the port that can be referred to by\n          # services.\n          name: string\n          # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n          # to \"TCP\".\n          protocol: \"TCP\"\n
        # Set to \"true\" to use SSL; if \"true\" then 'ssl_key_file'\n        # and 'ssl_cert_file' must be provided\n        ssl: false\n
        # File containing the SSL certificate. If required, a self\n        # signed certificate (expires after 10 years) can be generated\n        # via the command: openssl req -newkey rsa:2048 -new -nodes\n        # -x509 \\ -days 3650 -keyout key.pem -out cert.pem\n        sslCertFile: \"\"\n
        # File containing the SSL private Key. If required, a self\n        # signed certificate (expires after 10 years) can be generated\n        # via the command: openssl req -newkey rsa:2048 -new -nodes\n        # -x509 \\ -days 3650 -keyout key.pem -out cert.pem\n        sslKeyFile: \"\"\n
      # ProcessesConfig\n      processes:\n        # Set the maximum number of threads per tom for table\n        # initialization on startup\n        initTablesNumThreadsPerTom: 8\n
        # Set the number of parallel calculation threads to use for data\n        # processing; use -1 to use the max number of threads\n        # (not recommended)\n        kernelOmpThreads: 3\n        # The maximum number of web server threads to spawn\n        maxHttpThreads: 512\n
        # Set the maximum number of threads (both workers and masters)\n        # to be passed to TBB on initialization.  Generally\n        # speaking, 'max_tbb_threads_per_rank' - \"1\" TBB workers will\n        # be created.  Use \"-1\" for no limit.\n        maxTbbThreadsPerRank: \"-1\"\n        # The minimum number of web server threads to spawn\n        minHttpThreads: 8\n
        # Set the number of parallel jobs to create for multi-child set\n        # calculations; use \"-1\" to use the max number of threads\n        # (not recommended)\n        smOmpThreads: 2\n
        # Maximum number of simultaneous threads allocated to a given\n        # request, on each rank. Note that thread allocation may also\n        # be limited by resource group limits and/or system load.\n        subtaskConcurrentyLimit: \"-1\"\n
        # Set the number of TaskCalculators per TOM, GPU data\n        # processors.\n        tcsPerTom: \"-1\"\n        # Set the number of TOMs (data container shards) per rank\n        tomsPerRank: 1\n        # Set the number of TaskProcessors per TOM, CPU data\n        # processors.\n        tpsPerTom: \"-1\"\n
      # ProcsConfig\n      procs:\n        # Directory where proc files are stored at runtime. Must be a\n        # fully qualified path with execute permission. If not\n        # specified, 'temp_directory' will be used.\n        directory:\n          # PersistentVolumeClaim is a user's request for and claim to a\n          # persistent volume\n          persistVolumeClaim:\n            # APIVersion defines the versioned schema of this\n            # representation of an object. Servers should convert\n            # recognized schemas to the latest internal value, and may\n            # reject unrecognized values. 
More info:\n            # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n            apiVersion: app.kinetica.com/v1\n            # Kind is a string value representing the REST resource this\n            # object represents. Servers may infer this from the\n            # endpoint the client submits requests to. Cannot be\n            # updated. In CamelCase. More info:\n            # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n            kind: KineticaCluster\n            # Standard object's metadata. More info:\n            # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n            metadata: {}\n            # spec defines the desired characteristics of a volume\n            # requested by a pod author. More info:\n            # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n            spec:\n              # accessModes contains the desired access modes the volume\n              # should have. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n              accessModes: [\"string\"]\n              # dataSource field can be used to specify either: * An\n              # existing VolumeSnapshot object\n              # (snapshot.storage.k8s.io/VolumeSnapshot) * An existing\n              # PVC (PersistentVolumeClaim) If the provisioner or an\n              # external controller can support the specified data\n              # source, it will create a new volume based on the\n              # contents of the specified data source. When the\n              # AnyVolumeDataSource feature gate is enabled, dataSource\n              # contents will be copied to dataSourceRef, and\n              # dataSourceRef contents will be copied to dataSource\n              # when dataSourceRef.namespace is not specified. If the\n              # namespace is specified, then dataSourceRef will not be\n              # copied to dataSource.\n              dataSource:\n                # APIGroup is the group for the resource being\n                # referenced. If APIGroup is not specified, the\n                # specified Kind must be in the core API group. For any\n                # other third-party types, APIGroup is required.\n                apiGroup: string\n                # Kind is the type of resource being referenced\n                kind: KineticaCluster\n                # Name is the name of resource being referenced\n                name: string\n              # dataSourceRef specifies the object from which to\n              # populate the volume with data, if a non-empty volume is\n              # desired. This may be any object from a non-empty API\n              # group (non core object) or a PersistentVolumeClaim\n              # object. When this field is specified, volume binding\n              # will only succeed if the type of the specified object\n              # matches some installed volume populator or dynamic\n              # provisioner. This field will replace the functionality\n              # of the dataSource field and as such if both fields are\n              # non-empty, they must have the same value. 
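              # --- Illustrative aside, not part of the generated reference:\n              # pre-populating this claim from a VolumeSnapshot via\n              # dataSource, as described above. The snapshot name is an\n              # assumption for this sketch.\n              #   dataSource:\n              #     apiGroup: snapshot.storage.k8s.io\n              #     kind: VolumeSnapshot\n              #     name: procs-backup-snap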
For backwards\n              # compatibility, when namespace isn't specified in\n              # dataSourceRef, both fields (dataSource and\n              # dataSourceRef) will be set to the same value\n              # automatically if one of them is empty and the other is\n              # non-empty. When namespace is specified in\n              # dataSourceRef, dataSource isn't set to the same value\n              # and must be empty. There are three important\n              # differences between dataSource and dataSourceRef: *\n              # While dataSource only allows two specific types of\n              # objects, dataSourceRef allows any non-core object, as\n              # well as PersistentVolumeClaim objects. * While\n              # dataSource ignores disallowed values (dropping them),\n              # dataSourceRef preserves all values, and generates an\n              # error if a disallowed value is specified. * While\n              # dataSource only allows local objects, dataSourceRef\n              # allows objects in any namespaces. (Beta) Using this\n              # field requires the AnyVolumeDataSource feature gate to\n              # be enabled. (Alpha) Using the namespace field of\n              # dataSourceRef requires the\n              # CrossNamespaceVolumeDataSource feature gate to be\n              # enabled.\n              dataSourceRef:\n                # APIGroup is the group for the resource being\n                # referenced. If APIGroup is not specified, the\n                # specified Kind must be in the core API group. For any\n                # other third-party types, APIGroup is required.\n                apiGroup: string\n                # Kind is the type of resource being referenced\n                kind: KineticaCluster\n                # Name is the name of resource being referenced\n                name: string\n                # Namespace is the namespace of resource being\n                # referenced Note that when a namespace is specified, a\n                # gateway.networking.k8s.io/ReferenceGrant object is\n                # required in the referent namespace to allow that\n                # namespace's owner to accept the reference. See the\n                # ReferenceGrant documentation for details.(Alpha) This\n                # field requires the CrossNamespaceVolumeDataSource\n                # feature gate to be enabled.\n                namespace: string\n              # resources represents the minimum resources the volume\n              # should have. If RecoverVolumeExpansionFailure feature\n              # is enabled users are allowed to specify resource\n              # requirements that are lower than previous value but\n              # must still be higher than capacity recorded in the\n              # status field of the claim. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n              resources:\n                # Claims lists the names of resources, defined in\n                # spec.resourceClaims, that are used by this container.\n                # This is an alpha field and requires enabling the\n                # DynamicResourceAllocation feature gate. This field is\n                # immutable. It can only be set for containers.\n                claims:\n                - name: string\n                # Limits describes the maximum amount of compute\n                # resources allowed. 
More info:\n                # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                limits: {}\n                # Requests describes the minimum amount of compute\n                # resources required. If Requests is omitted for a\n                # container, it defaults to Limits if that is\n                # explicitly specified, otherwise to an\n                # implementation-defined value. Requests cannot exceed\n                # Limits. More info:\n                # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                requests: {}\n              # selector is a label query over volumes to consider for\n              # binding.\n              selector:\n                # matchExpressions is a list of label selector\n                # requirements. The requirements are ANDed.\n                matchExpressions:\n                - key: string\n                  # operator represents a key's relationship to a set of\n                  # values. Valid operators are In, NotIn, Exists and\n                  # DoesNotExist.\n                  operator: string\n                  # values is an array of string values. If the operator\n                  # is In or NotIn, the values array must be non-empty.\n                  # If the operator is Exists or DoesNotExist, the\n                  # values array must be empty. This array is replaced\n                  # during a strategic merge patch.\n                  values: [\"string\"]\n                # matchLabels is a map of {key,value} pairs. A single\n                # {key,value} in the matchLabels map is equivalent to\n                # an element of matchExpressions, whose key field\n                # is \"key\", the operator is \"In\", and the values array\n                # contains only \"value\". The requirements are ANDed.\n                matchLabels: {}\n              # storageClassName is the name of the StorageClass\n              # required by the claim. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n              storageClassName: string\n              # volumeMode defines what type of volume is required by\n              # the claim. Value of Filesystem is implied when not\n              # included in claim spec.\n              volumeMode: string\n              # volumeName is the binding reference to the\n              # PersistentVolume backing this claim.\n              volumeName: string\n            # status represents the current information/status of a\n            # persistent volume claim. Read-only. More info:\n            # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n            status:\n              # accessModes contains the actual access modes the volume\n              # backing the PVC has. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n              accessModes: [\"string\"]\n              # allocatedResources is the storage resource within\n              # AllocatedResources tracks the capacity allocated to a\n              # PVC. It may be larger than the actual capacity when a\n              # volume expansion operation is requested. For storage\n              # quota, the larger value from allocatedResources and\n              # PVC.spec.resources is used. 
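              # --- Illustrative aside, not part of the generated reference: a\n              # spec fragment combining resources and a selector. The label\n              # and sizes are assumptions for this sketch; requests must not\n              # exceed limits.\n              #   resources:\n              #     requests:\n              #       storage: 50Gi\n              #     limits:\n              #       storage: 100Gi\n              #   selector:\n              #     matchLabels:\n              #       tier: fast-ssd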
If allocatedResources is\n              # not set, PVC.spec.resources alone is used for quota\n              # calculation. If a volume expansion capacity request is\n              # lowered, allocatedResources is only lowered if there\n              # are no expansion operations in progress and if the\n              # actual volume capacity is equal to or lower than the\n              # requested capacity. This is an alpha field and requires\n              # enabling RecoverVolumeExpansionFailure feature.\n              allocatedResources: {}\n              # capacity represents the actual resources of the\n              # underlying volume.\n              capacity: {}\n              # conditions is the current Condition of persistent volume\n              # claim. If underlying persistent volume is being resized\n              # then the Condition will be set to 'ResizeStarted'.\n              conditions:\n              - lastProbeTime: string\n                # lastTransitionTime is the time the condition\n                # transitioned from one status to another.\n                lastTransitionTime: string\n                # message is the human-readable message indicating\n                # details about last transition.\n                message: string\n                # reason is a unique, short, machine-understandable\n                # string that gives the reason for the condition's last\n                # transition. If it reports \"ResizeStarted\" that means\n                # the underlying persistent volume is being resized.\n                reason: string\n                status: string\n                # PersistentVolumeClaimConditionType is a valid value of\n                # PersistentVolumeClaimCondition.Type\n                type: string\n              # phase represents the current phase of\n              # PersistentVolumeClaim.\n              phase: string\n              # resizeStatus stores status of resize operation.\n              # ResizeStatus is not set by default but when expansion\n              # is complete resizeStatus is set to empty string by\n              # resize controller or kubelet. This is an alpha field\n              # and requires enabling RecoverVolumeExpansionFailure\n              # feature.\n              resizeStatus: string\n          # VolumeMount describes a mounting of a Volume within a\n          # container.\n          volumeMount:\n            # Path within the container at which the volume should be\n            # mounted.  Must not contain ':'.\n            mountPath: string\n            # mountPropagation determines how mounts are propagated from\n            # the host to container and the other way around. When not\n            # set, MountPropagationNone is used. This field is beta in\n            # 1.10.\n            mountPropagation: string\n            # This must match the Name of a Volume.\n            name: string\n            # Mounted read-only if true, read-write otherwise (false or\n            # unspecified). Defaults to false.\n            readOnly: true\n            # Path within the volume from which the container's volume\n            # should be mounted. Defaults to \"\" (volume's root).\n            subPath: string
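            # --- Illustrative aside, not part of the generated reference: a\n            # per-pod mount using subPathExpr, as described next. POD_NAME\n            # is assumed to be injected into the container's environment\n            # (e.g. via the downward API); subPath and subPathExpr are\n            # mutually exclusive, so only one is given.\n            #   volumeMount:\n            #     name: procs-volume\n            #     mountPath: /opt/gpudb/procs\n            #     subPathExpr: $(POD_NAME)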
            # Expanded path within the volume from which the container's\n            # volume should be mounted. Behaves similarly to SubPath\n            # but environment variable references $(VAR_NAME) are\n            # expanded using the container's environment. Defaults\n            # to \"\" (volume's root). SubPathExpr and SubPath are\n            # mutually exclusive.\n            subPathExpr: string\n        # Enable procs (UDFs)\n        enable: true\n      # SecurityConfig\n      security:\n        # Automatically create accounts for externally-authenticated\n        # users. If 'enable_external_authentication' is \"false\", this\n        # setting has no effect. Note that accounts are not\n        # automatically deleted if users are removed from the external\n        # authentication provider and will be orphaned.\n        autoCreateExternalUsers: false\n        # Automatically add roles passed in via the \"KINETICA_ROLES\"\n        # HTTP header to externally-authenticated users. Specified\n        # roles that do not exist are ignored.\n        # If 'enable_external_authentication' is \"false\", this setting\n        # has no effect. IMPORTANT: DO NOT ENABLE unless the\n        # authentication proxy is configured to block \"KINETICA_ROLES\"\n        # HTTP headers passed in from clients.\n        autoGrantExternalRoles: false\n        # Comma-separated list of roles to revoke from\n        # externally-authenticated users prior to granting roles passed\n        # in via the \"KINETICA_ROLES\" HTTP header, or \"*\" to revoke all\n        # roles. Preceding a role name with an \"!\" overrides the\n        # revocation (e.g. \"*,!foo\" revokes all roles except \"foo\").\n        # Leave blank to disable. If\n        # either 'enable_external_authentication'\n        # or 'auto_grant_external_roles' is \"false\", this setting has\n        # no effect.\n        autoRevokeExternalRoles: false\n        # Enable authorization checks.  When disabled, all requests will\n        # be treated as the administrative user.\n        enableAuthorization: true\n        # Enable external (LDAP, Kerberos, etc.) authentication. User\n        # IDs of externally-authenticated users must be passed in via\n        # the \"REMOTE_USER\" HTTP header from the authentication proxy.\n        # May be used in conjunction with the 'enable_httpd_proxy'\n        # setting above for an integrated external authentication\n        # solution. IMPORTANT: DO NOT ENABLE unless external access to\n        # GPUdb ports has been blocked via firewall AND the\n        # authentication proxy is configured to block \"REMOTE_USER\"\n        # HTTP headers passed in from clients.\n        enableExternalAuthentication: true\n        # ExternalSecurity\n        externalSecurity:\n          # Ranger\n          ranger:\n            # AuthorizerAddress - The network URI for the\n            # ranger_authorizer to start. The URI can be either TCP or\n            # IPC. TCP address is used to indicate the remote\n            # ranger_authorizer which may run at other hosts. The IPC\n            # address is for a local ranger_authorizer. Example\n            # addresses for remote or TCP servers: tcp://127.0.0.1:9293\n            # tcp://HOST_IP:9293 Example address for local IPC servers:\n            # ipc:///tmp/gpudb-ranger-0\n            # security.external.ranger_authorizer.address =\n            # ipc://${gaia.temp_directory}/gpudb-ranger-0\n            authorizerAddress: \"ipc://${gaia.temp_directory}/gpudb-ranger-0\"
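            # --- Illustrative aside, not part of the generated reference: a\n            # minimal Ranger hookup using the fields documented in this\n            # block. The URL is an assumption for this sketch.\n            #   ranger:\n            #     url: https://ranger.example.internal:6080/\n            #     name: kinetica\n            #     cacheMinutes: 60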
            # Remote debugger port used for the ranger_authorizer.\n            # Setting the port to \"0\" disables remote debugging. NOTE:\n            # Recommended port to use is \"5005\"\n            # security.external.ranger_authorizer.remote_debug_port =\n            # 0\n            authorizerRemoteDebugPort: 0\n            # AuthorizerTimeout - Ranger Authorizer timeout in seconds\n            # security.external.ranger_authorizer.timeout = 120\n            authorizerTimeout: 120\n            # CacheMinutes - Maximum minutes to hold on to data from\n            # Ranger. security.external.ranger.cache_minutes = 60\n            cacheMinutes: 60\n            # Name of the service created on the Ranger Server to manage\n            # this Kinetica instance\n            # security.external.ranger.service_name = kinetica\n            name: \"kinetica\"\n            # ExtURL - URL of the Ranger REST API.  E.g.,\n            # https://localhost:6080/. Leave blank for no Ranger Server.\n            # security.external.ranger.url =\n            url: string\n        # The minimum allowable password length.\n        minPasswordLength: 4\n        # Require all users to be authenticated.  Disable this to allow\n        # users to access the database as the 'unauthenticated' user.\n        # Useful for situations where the public needs to access the\n        # data.\n        requireAuthentication: true\n        # UnifiedSecurityNamespace - Use a single namespace for internal\n        # and external user IDs and role names. If false, external user\n        # IDs must be prefixed with \"@\" to differentiate them from\n        # internal user IDs and role names (except in the \"REMOTE_USER\"\n        # HTTP header, where the \"@\" is omitted).\n        # unified_security_namespace = true\n        unifiedSecurityNamespace: true\n      # SQLConfig\n      sql:\n        # SQLPlannerAddress is not included as it always uses the\n        # default\n        address: \"ipc://${gaia.temp_directory}/gpudb-query-engine-0\"\n        # Enable the cost-based optimizer\n        costBasedOptimization: false\n        # Enable distributed joins\n        distributedJoins: true\n        # Enable distributed operations\n        distributedOperations: true\n        # Enable Query Planner\n        enablePlanner: true\n        # Perform joins between only 2 tables at a time; default is all\n        # tables involved in the operation at once\n        forceBinaryJoins: false\n        # Perform unions/intersections/exceptions between only 2 tables\n        # at a time; default is all tables involved in the operation at\n        # once\n        forceBinarySetOps: false\n        # Max parallel steps\n        maxParallelSteps: 4\n        # Max allowed view nesting levels. Valid range (1-64)\n        maxViewNestingLevels: 16\n        # TTL of the paging results table\n        pagingTableTTL: 20\n        # Enable parallel query evaluation\n        parallelExecution: true\n        # The maximum number of entries in the SQL plan cache.  The\n        # default is \"4000\" entries, but the configurable range\n        # is \"1\" - \"1000000\".  Plan caching will be disabled if the\n        # value is set outside of that range.\n        planCacheSize: 4000\n        # The maximum memory for the query planner to use in Megabytes.\n        plannerMaxMemory: 4096\n        # The maximum stack size for the query planner threads to use in\n        # Megabytes.\n        plannerMaxStack: 6\n        # Query planner timeout in seconds\n        plannerTimeout: 120\n        # Max Query planner threads\n        plannerWorkers: 16
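        # --- Illustrative aside, not part of the generated reference: a\n        # fragment enabling the cost-based optimizer and loosening planner\n        # limits, using only the fields documented above. The values are\n        # assumptions for this sketch, not recommendations.\n        #   costBasedOptimization: true\n        #   maxParallelSteps: 8\n        #   plannerTimeout: 300\n        #   plannerMaxMemory: 8192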
        # Remote debugger port used for the query planner. Setting the\n        # port to \"0\" disables remote debugging. NOTE: Recommended\n        # port to use is \"5005\"\n        remoteDebugPort: 5005\n        # TTL of the query cache results table\n        resultsCacheTTL: 60\n        # Enable query results caching\n        resultsCaching: true\n        # Enable rule-based query rewrites\n        ruleBasedOptimization: true\n      # SQLEngineConfig\n      sqlEngine:\n        # Enable the cost-based optimizer\n        costBasedOptimization: false\n        # Name of default collection for user tables\n        defaultSchema: \"\"\n        # Enable distributed joins\n        distributedJoins: true\n        # Enable distributed operations\n        distributedOperations: true\n        # Perform joins between only 2 tables at a time; default is all\n        # tables involved in the operation at once\n        forceBinaryJoins: false\n        # Perform unions/intersections/exceptions between only 2 tables\n        # at a time; default is all tables involved in the operation at\n        # once\n        forceBinarySetOps: false\n        # Max parallel steps\n        maxParallelSteps: 4\n        # Max allowed view nesting levels. Valid range (1-64)\n        maxViewNestingLevels: 16\n        # TTL of the paging results table\n        pagingTableTTL: 20\n        # Enable parallel query evaluation\n        parallelExecution: true\n        # The maximum number of entries in the SQL plan cache.  The\n        # default is \"4000\" entries, but the configurable range\n        # is \"1\" - \"1000000\".  Plan caching will be disabled if the\n        # value is set outside of that range.\n        planCacheSize: 4000\n        # PlannerConfig\n        planner:\n          # Enable Query Planner\n          enablePlanner: true\n          # The maximum memory for the query planner to use in\n          # Megabytes.\n          maxMemory: 4096\n          # The maximum stack size for the query planner threads to use\n          # in Megabytes.\n          maxStack: 6\n          # The network URI for the query planner to start. The URI can\n          # be either TCP or IPC. TCP address is used to indicate the\n          # remote query planner which may run at other hosts. The IPC\n          # address is for a local query planner. Examples for remote\n          # or TCP servers:\n          #   sql.planner.address = tcp://127.0.0.1:9293\n          #   sql.planner.address = tcp://HOST_IP:9293\n          # Example for local IPC servers:\n          #   sql.planner.address = ipc:///tmp/gpudb-query-engine-0\n          plannerAddress: \"ipc:///tmp/gpudb-query-engine-0\"\n          # Remote debugger port used for the query planner. Setting the\n          # port to \"0\" disables remote debugging. 
NOTE: Recommended\n          # port to use is \"5005\"\n          remoteDebugPort: 0\n          # Query planner timeout in seconds\n          timeout: 120\n          # Max Query planner threads\n          workers: 16\n        results:\n          # TTL of the query cache results table\n          cacheTTL: 60\n          # Enable query results caching\n          caching: true\n        # Enable rule-based query rewrites\n        ruleBasedOptimization: true\n        # Name of collection that will be used to store result tables\n        # generated as part of query execution\n        tempCollection: \"__SQL_TEMP\"\n      # StatisticsConfig\n      statistics:\n        # system_metadata.stats_aggr_rowcount = 10000\n        aggrRowCount: 10000\n        # system_metadata.stats_aggr_time = 1\n        aggrTime: 1\n        # Run a statistics server to collect information about Kinetica\n        # and the machines it runs on.\n        enable: true\n        # Statistics server IP address (run on head node); default port\n        # is \"2003\"\n        ipAddress: \"${gaia.host0.address}\"\n        # Statistics server namespace - should be a machine identifier\n        namespace: \"gpudb\"\n        port: 2003\n        # System metadata catalog settings\n        # system_metadata.stats_retention_days = 21\n        retentionDays: 21\n      # TextSearchConfig\n      textSearch:\n        # Enable text search capability within the database.\n        enableTextSearch: false\n        # Number of text indices to start for each rank\n        textIndicesPerTom: 2\n        # Searcher refresh intervals - specifies the maximum delay\n        # (in seconds) between writing to the text search index and\n        # being able to search for the value just written.  A value\n        # of \"0\" ensures that writes to the index are immediately\n        # available to be searched.  A more nominal value of \"100\"\n        # should improve ingest speed at the cost of some delay in\n        # being able to text search newly added values.\n        textSearcherRefreshInterval: 20\n        # Use the production-capable external text server instead of a\n        # lightweight internal server which should only be used for\n        # light testing. Note: The internal text server is deprecated\n        # and may be removed in future versions.\n        useExternalTextServer: true\n      tieredStorage:
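        # --- Illustrative aside, not part of the generated reference: with\n        # limit \"10Ti\", highWatermark 90 and lowWatermark 80, eviction to\n        # cold storage starts once tier usage exceeds 9 TiB and continues\n        # until it falls below 8 TiB. The sizes are assumptions for this\n        # sketch.\n        #   coldStorageTier:\n        #     coldStorageDisk:\n        #       limit: \"10Ti\"\n        #       highWatermark: 90\n        #       lowWatermark: 80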
        # Cold Storage Tiers can be used to extend the storage capacity\n        # of the Persist Tier. Assign a tier strategy with cold storage\n        # to objects that will be infrequently accessed since they will\n        # be moved as needed from the Persist Tier. The Cold Storage\n        # Tier is typically a much larger capacity physical disk or a\n        # cloud-based storage system which may not be as performant as\n        # the Persist Tier storage. A default storage limit and\n        # eviction thresholds can be set across all ranks for a given\n        # Cold Storage Tier, while one or more ranks within a Cold\n        # Storage Tier may be configured to override those defaults.\n        # NOTE: If an object needs to be pulled out of cold storage\n        # during a query, it may need to use the local persist\n        # directory as a temporary swap space. This may trigger an\n        # eviction of other persisted items to cold storage due to a\n        # low disk space condition defined by the watermark settings\n        # for the Persist Tier.\n        coldStorageTier:\n          # ColdStorageAzure\n          coldStorageAzure:\n            # 'base_path'             : A base path based on the\n            #  provider type for this tier.\n            basePath: string\n            clientID: string\n            clientSecret: string\n            # 'connection_timeout'    : Timeout in seconds for\n            #  connecting to this storage provider.\n            connectionTimeout: \"30\"\n            # 'container_name'        : Name of the Azure storage\n            #  container to use for this tier.\n            containerName: \"/gpudb/cold_storage\"\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is \"90\", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            provisioner: \"docker.io/hostpath\"\n            sasToken: string\n            storageAccountKey: string\n            storageAccountName: string\n            tenantID: string\n            useManagedCredentials: false\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. 
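            # --- Illustrative aside, not part of the generated reference: a\n            # minimal Azure cold storage fragment using managed\n            # credentials. The account and container names are assumptions\n            # for this sketch.\n            #   coldStorageAzure:\n            #     storageAccountName: mystorageaccount\n            #     containerName: \"/gpudb/cold_storage\"\n            #     useManagedCredentials: true\n            #     limit: \"10Ti\"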
More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. 
(Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. 
Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. 
If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, short, machine-understandable\n                  # string that gives the reason for the condition's\n                  # last transition. If it reports \"ResizeStarted\" that\n                  # means the underlying persistent volume is being\n                  # resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n            # 'wait_timeout'          : Timeout in seconds for reading\n            #  from or writing to this storage provider.\n            waitTimeout: \"90\"\n          # ColdStorageDisk\n          coldStorageDisk:\n            # 'base_path'             : A base path based on the\n            #  provider type for this tier.\n            basePath: string\n            # 'connection_timeout'    : Timeout in seconds for\n            #  connecting to this storage provider.\n            connectionTimeout: \"30\"\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is \"90\", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            provisioner: \"docker.io/hostpath\"\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. 
Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. 
This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. (Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. 
This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. 
More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal to or lower than\n                # the requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, short, machine-understandable\n                  # string that gives the reason for the condition's\n                  # last transition. If it reports \"ResizeStarted\" that\n                  # means the underlying persistent volume is being\n                  # resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. 
This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n            # 'wait_timeout'          : Timeout in seconds for reading\n            #  from or writing to this storage provider.\n            waitTimeout: \"90\"\n          # ColdStorageGCS - Google Cloud Storage-specific parameter\n          # names:\n          #  * BucketName         - 'gcs_bucket_name'\n          #  * ProjectID          - 'gcs_project_id' (optional)\n          #  * AccountID          - 'gcs_service_account_id' (optional)\n          #  * AccountPrivateKey  - 'gcs_service_account_private_key'\n          #    (optional)\n          #  * AccountKeys        - 'gcs_service_account_keys'\n          #    (optional)\n          #  NOTE: If the 'gcs_service_account_id',\n          #  'gcs_service_account_private_key' and/or\n          #  'gcs_service_account_keys' values are not specified, the\n          #  Google Cloud Client Libraries will attempt to find and\n          #  use service account credentials from the\n          #  GOOGLE_APPLICATION_CREDENTIALS environment variable.\n          coldStorageGCS:\n            accountID: string\n            accountKeys: string\n            accountPrivateKey: string\n            # 'base_path'             : A base path based on the\n            #  provider type for this tier.\n            basePath: string\n            bucketName: string\n            # 'connection_timeout'    : Timeout in seconds for\n            #  connecting to this storage provider.\n            connectionTimeout: \"30\"\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is \"90\", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            projectID: string\n            provisioner: \"docker.io/hostpath\"\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. 
More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. 
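\n                #\n                # A hedged, illustrative sketch (not part of the\n                # schema): a volumeClaim spec that pre-populates the\n                # tier volume from an existing VolumeSnapshot via\n                # dataSource. The names \"tier-snap\" and \"fast-ssd\"\n                # are hypothetical placeholders.\n                #\n                #   spec:\n                #     accessModes: [\"ReadWriteOnce\"]\n                #     storageClassName: fast-ssd\n                #     dataSource:\n                #       apiGroup: snapshot.storage.k8s.io\n                #       kind: VolumeSnapshot\n                #       name: tier-snap\n                #\n                # 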
There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. (Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. 
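\n                  #\n                  # A hedged example with illustrative (not default)\n                  # values: request 100Gi of storage for this tier's\n                  # claim and cap it at 200Gi.\n                  #\n                  #   resources:\n                  #     requests:\n                  #       storage: 100Gi\n                  #     limits:\n                  #       storage: 200Gi\n                  #\n                  # 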
If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. 
If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is unique; it should be a short, machine-\n                  # understandable string that gives the reason for\n                  # condition's last transition. If it\n                  # reports \"ResizeStarted\" that means the underlying\n                  # persistent volume is being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n            # 'wait_timeout'          : Timeout in seconds for reading\n            #  from or writing to this storage provider.\n            waitTimeout: \"90\"\n          # ColdStorageHDFS\n          coldStorageHDFS:\n            # ColdStorageDisk\n            default:\n              # 'base_path'             : A base path based on the\n              #  provider type for this tier.\n              basePath: string\n              # 'connection_timeout'    : Timeout in seconds for\n              #  connecting to this storage provider.\n              connectionTimeout: \"30\"\n              # * 'high_watermark' : Percentage used eviction threshold.\n              #    Once usage exceeds this value, evictions from this\n              #    tier will be scheduled in the background and\n              #    continue until the 'low_watermark' percentage usage\n              #    is reached.  
Default is \"90\", signifying a 90%\n              #    memory usage threshold.\n              highWatermark: 90\n              # * 'limit'          : The maximum (bytes) per rank that\n              #    can be allocated across all resource groups.\n              limit: \"1Gi\"\n              # * 'low_watermark'  : Percentage used recovery threshold.\n              #    Once usage exceeds the 'high_watermark', evictions\n              #    will continue until usage falls below this recovery\n              #    threshold. Default is \"80\", signifying an 80% usage\n              #    threshold.\n              lowWatermark: 80 name: string\n              # A base directory to use as a space for this tier.\n              path: \"default\" provisioner: \"docker.io/hostpath\"\n              # Kubernetes Persistent Volume Claim for this disk tier.\n              volumeClaim:\n                # APIVersion defines the versioned schema of this\n                # representation of an object. Servers should convert\n                # recognized schemas to the latest internal value, and\n                # may reject unrecognized values. More info:\n                # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n                apiVersion: app.kinetica.com/v1\n                # Kind is a string value representing the REST resource\n                # this object represents. Servers may infer this from\n                # the endpoint the client submits requests to. Cannot\n                # be updated. In CamelCase. More info:\n                # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n                kind: KineticaCluster\n                # Standard object's metadata. More info:\n                # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n                metadata: {}\n                # spec defines the desired characteristics of a volume\n                # requested by a pod author. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n                spec:\n                  # accessModes contains the desired access modes the\n                  # volume should have. More info:\n                  # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                  accessModes: [\"string\"]\n                  # dataSource field can be used to specify either: * An\n                  # existing VolumeSnapshot object\n                  # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                  # existing PVC (PersistentVolumeClaim) If the\n                  # provisioner or an external controller can support\n                  # the specified data source, it will create a new\n                  # volume based on the contents of the specified data\n                  # source. When the AnyVolumeDataSource feature gate\n                  # is enabled, dataSource contents will be copied to\n                  # dataSourceRef, and dataSourceRef contents will be\n                  # copied to dataSource when dataSourceRef.namespace\n                  # is not specified. If the namespace is specified,\n                  # then dataSourceRef will not be copied to\n                  # dataSource.\n                  dataSource:\n                    # APIGroup is the group for the resource being\n                    # referenced. 
If APIGroup is not specified, the\n                    # specified Kind must be in the core API group. For\n                    # any other third-party types, APIGroup is\n                    # required.\n                    apiGroup: string\n                    # Kind is the type of resource being referenced\n                    kind: KineticaCluster\n                    # Name is the name of resource being referenced\n                    name: string\n                  # dataSourceRef specifies the object from which to\n                  # populate the volume with data, if a non-empty\n                  # volume is desired. This may be any object from a\n                  # non-empty API group (non core object) or a\n                  # PersistentVolumeClaim object. When this field is\n                  # specified, volume binding will only succeed if the\n                  # type of the specified object matches some installed\n                  # volume populator or dynamic provisioner. This field\n                  # will replace the functionality of the dataSource\n                  # field and as such if both fields are non-empty,\n                  # they must have the same value. For backwards\n                  # compatibility, when namespace isn't specified in\n                  # dataSourceRef, both fields (dataSource and\n                  # dataSourceRef) will be set to the same value\n                  # automatically if one of them is empty and the other\n                  # is non-empty. When namespace is specified in\n                  # dataSourceRef, dataSource isn't set to the same\n                  # value and must be empty. There are three important\n                  # differences between dataSource and dataSourceRef: *\n                  # While dataSource only allows two specific types of\n                  # objects, dataSourceRef allows any non-core object,\n                  # as well as PersistentVolumeClaim objects. * While\n                  # dataSource ignores disallowed values\n                  # (dropping them), dataSourceRef preserves all\n                  # values, and generates an error if a disallowed\n                  # value is specified. * While dataSource only allows\n                  # local objects, dataSourceRef allows objects in any\n                  # namespaces. (Beta) Using this field requires the\n                  # AnyVolumeDataSource feature gate to be enabled.\n                  # (Alpha) Using the namespace field of dataSourceRef\n                  # requires the CrossNamespaceVolumeDataSource feature\n                  # gate to be enabled.\n                  dataSourceRef:\n                    # APIGroup is the group for the resource being\n                    # referenced. If APIGroup is not specified, the\n                    # specified Kind must be in the core API group. 
For\n                    # any other third-party types, APIGroup is\n                    # required.\n                    apiGroup: string\n                    # Kind is the type of resource being referenced\n                    kind: KineticaCluster\n                    # Name is the name of resource being referenced\n                    name: string\n                    # Namespace is the namespace of resource being\n                    # referenced Note that when a namespace is\n                    # specified, a\n                    # gateway.networking.k8s.io/ReferenceGrant object\n                    # is required in the referent namespace to allow\n                    # that namespace's owner to accept the reference.\n                    # See the ReferenceGrant documentation for\n                    # details. (Alpha) This field requires the\n                    # CrossNamespaceVolumeDataSource feature gate to be\n                    # enabled.\n                    namespace: string\n                  # resources represents the minimum resources the\n                  # volume should have. If\n                  # RecoverVolumeExpansionFailure feature is enabled\n                  # users are allowed to specify resource requirements\n                  # that are lower than previous value but must still\n                  # be higher than capacity recorded in the status\n                  # field of the claim. More info:\n                  # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                  resources:\n                    # Claims lists the names of resources, defined in\n                    # spec.resourceClaims, that are used by this\n                    # container. This is an alpha field and requires\n                    # enabling the DynamicResourceAllocation feature\n                    # gate. This field is immutable. It can only be set\n                    # for containers.\n                    claims:\n                    - name: string\n                    # Limits describes the maximum amount of compute\n                    # resources allowed. More info:\n                    # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                    limits: {}\n                    # Requests describes the minimum amount of compute\n                    # resources required. If Requests is omitted for a\n                    # container, it defaults to Limits if that is\n                    # explicitly specified, otherwise to an\n                    # implementation-defined value. Requests cannot\n                    # exceed Limits. More info:\n                    # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                    requests: {}\n                  # selector is a label query over volumes to consider\n                  # for binding.\n                  selector:\n                    # matchExpressions is a list of label selector\n                    # requirements. The requirements are ANDed.\n                    matchExpressions:\n                    - key: string\n                      # operator represents a key's relationship to a\n                      # set of values. Valid operators are In, NotIn,\n                      # Exists and DoesNotExist.\n                      operator: string\n                      # values is an array of string values. 
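\n                      #\n                      # A hedged sketch: bind this claim only to\n                      # volumes labelled for the tier (the label key\n                      # and values are hypothetical).\n                      #\n                      #   selector:\n                      #     matchExpressions:\n                      #     - key: kinetica.com/tier\n                      #       operator: In\n                      #       values: [\"cold\", \"disk-cache\"]\n                      #\n                      # 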
If the\n                      # operator is In or NotIn, the values array must\n                      # be non-empty. If the operator is Exists or\n                      # DoesNotExist, the values array must be empty.\n                      # This array is replaced during a strategic merge\n                      # patch.\n                      values: [\"string\"]\n                    # matchLabels is a map of {key,value} pairs. A\n                    # single {key,value} in the matchLabels map is\n                    # equivalent to an element of matchExpressions,\n                    # whose key field is \"key\", the operator is \"In\",\n                    # and the values array contains only \"value\". The\n                    # requirements are ANDed.\n                    matchLabels: {}\n                  # storageClassName is the name of the StorageClass\n                  # required by the claim. More info:\n                  # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                  storageClassName: string\n                  # volumeMode defines what type of volume is required\n                  # by the claim. Value of Filesystem is implied when\n                  # not included in claim spec.\n                  volumeMode: string\n                  # volumeName is the binding reference to the\n                  # PersistentVolume backing this claim.\n                  volumeName: string\n                # status represents the current information/status of a\n                # persistent volume claim. Read-only. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n                status:\n                  # accessModes contains the actual access modes the\n                  # volume backing the PVC has. More info:\n                  # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                  accessModes: [\"string\"]\n                  # allocatedResources is the storage resource within\n                  # AllocatedResources tracks the capacity allocated to\n                  # a PVC. It may be larger than the actual capacity\n                  # when a volume expansion operation is requested. For\n                  # storage quota, the larger value from\n                  # allocatedResources and PVC.spec.resources is used.\n                  # If allocatedResources is not set,\n                  # PVC.spec.resources alone is used for quota\n                  # calculation. If a volume expansion capacity request\n                  # is lowered, allocatedResources is only lowered if\n                  # there are no expansion operations in progress and\n                  # if the actual volume capacity is equal or lower\n                  # than the requested capacity. This is an alpha field\n                  # and requires enabling RecoverVolumeExpansionFailure\n                  # feature.\n                  allocatedResources: {}\n                  # capacity represents the actual resources of the\n                  # underlying volume.\n                  capacity: {}\n                  # conditions is the current Condition of persistent\n                  # volume claim. 
If underlying persistent volume is\n                  # being resized then the Condition will be set\n                  # to 'ResizeStarted'.\n                  conditions:\n                  - lastProbeTime: string\n                    # lastTransitionTime is the time the condition\n                    # transitioned from one status to another.\n                    lastTransitionTime: string\n                    # message is the human-readable message indicating\n                    # details about last transition.\n                    message: string\n                    # reason is unique; it should be a short,\n                    # machine-understandable string that gives the\n                    # reason for condition's last transition. If it\n                    # reports \"ResizeStarted\" that means the underlying\n                    # persistent volume is being resized.\n                    reason: string\n                    status: string\n                    # PersistentVolumeClaimConditionType is a valid\n                    # value of PersistentVolumeClaimCondition.Type\n                    type: string\n                  # phase represents the current phase of\n                  # PersistentVolumeClaim.\n                  phase: string\n                  # resizeStatus stores status of resize operation.\n                  # ResizeStatus is not set by default but when\n                  # expansion is complete resizeStatus is set to empty\n                  # string by resize controller or kubelet. This is an\n                  # alpha field and requires enabling\n                  # RecoverVolumeExpansionFailure feature.\n                  resizeStatus: string\n              # 'wait_timeout'          : Timeout in seconds for reading\n              #  from or writing to this storage provider.\n              waitTimeout: \"90\"\n            # 'hdfs_kerberos_keytab'  : The Kerberos keytab file used to\n            #  authenticate the \"gpudb\" Kerberos\n            kerberosKeytab: string\n            # 'hdfs_principal'        : The effective principal name to\n            #  use when connecting to the hadoop cluster.\n            principal: string\n            # 'hdfs_uri'              : The host IP address & port for\n            #  the hadoop distributed file system. For example:\n            #  hdfs://localhost:8020\n            uri: string\n            # 'hdfs_use_kerberos'     : Set to \"true\" to enable Kerberos\n            #  authentication to an HDFS storage server. The\n            #  credentials of the principal are in the file specified\n            #  by the 'hdfs_kerberos_keytab' parameter. 
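\n            #\n            #  A hedged, minimal coldStorageHDFS sketch; the URI,\n            #  principal and keytab path are placeholders:\n            #\n            #    coldStorageHDFS:\n            #      uri: hdfs://namenode.example.com:8020\n            #      useKerberos: true\n            #      principal: gpudb@EXAMPLE.COM\n            #      kerberosKeytab: /etc/security/gpudb.keytab\n            #\n            #  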
Note that\n            #  Kerberos's *kinit* command will be run when the database\n            #  is started.\n            useKerberos: true\n          # ColdStorageS3\n          coldStorageS3:\n            awsAccessKeyId: string\n            awsRoleARN: string\n            awsSecretAccessKey: string\n            # 'base_path'             : A base path based on the\n            #  provider type for this tier.\n            basePath: string\n            bucketName: string\n            # 'connection_timeout'    : Timeout in seconds for\n            #  connecting to this storage provider.\n            connectionTimeout: \"30\"\n            encryptionCustomerAlgorithm: string\n            encryptionCustomerKey: string\n            # EncryptionType - This is optional and valid values are\n            # sse-s3 (Encryption key is managed by Amazon S3) and\n            # sse-kms (Encryption key is managed by AWS Key Management\n            # Service (kms)).\n            encryptionType: string\n            # Endpoint - s3_endpoint\n            endpoint: string\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is \"90\", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # KMSKeyID - This is optional and must be specified when\n            # encryption type is sse-kms.\n            kmsKeyID: string\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            provisioner: \"docker.io/hostpath\"\n            region: string\n            useManagedCredentials: true\n            # UseVirtualAddressing - 's3_use_virtual_addressing'  : If\n            # true (default), S3 endpoints will be constructed using\n            # the 'virtual' style which includes the bucket name as\n            # part of the hostname. Set to false to use the 'path'\n            # style which treats the bucket name as if it is a path in\n            # the URI.\n            useVirtualAddressing: true\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. 
More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. 
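\n                #\n                # A hedged sketch of dataSourceRef populating this\n                # claim from a PVC in another namespace (names are\n                # hypothetical; note the Alpha\n                # CrossNamespaceVolumeDataSource gate and a\n                # ReferenceGrant in \"other-ns\" are required):\n                #\n                #   dataSourceRef:\n                #     kind: PersistentVolumeClaim\n                #     name: seed-claim\n                #     namespace: other-ns\n                #\n                # 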
There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. (Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. 
If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. 
If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is unique; it should be a short, machine-\n                  # understandable string that gives the reason for\n                  # condition's last transition. If it\n                  # reports \"ResizeStarted\" that means the underlying\n                  # persistent volume is being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n            # 'wait_timeout'          : Timeout in seconds for reading\n            #  from or writing to this storage provider.\n            waitTimeout: \"90\"\n          # ColdStorageType The storage provider type. Currently,\n          # supports \"none\", \"disk\" (local/network storage), \"hdfs\"\n          # (Hadoop distributed filesystem), \"s3\" (Amazon S3\n          # bucket), \"azure_blob\" (Microsoft Azure Blob Storage)\n          # and \"gcs\" (Google GCS Bucket).\n          coldStorageType: \"none\"\n          name: string\n        # The DiskCacheTier is used as temporary swap space for data\n        # that doesn't fit in RAM or VRAM. 
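\n        #\n        # A hedged worked example of the watermark fields shared by\n        # the tiers above: with limit \"10Gi\", highWatermark 90 and\n        # lowWatermark 80, background eviction begins once tier usage\n        # exceeds 9Gi (90% of 10Gi) and continues until usage drops\n        # below 8Gi (80% of 10Gi).\n        #\n        # 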
The disk should be as fast\n        # or faster than the Persist Tier storage since this tier is\n        # used as an intermediary cache between the RAM and Persist\n        # Tiers.\n        diskCacheTier:\n          # DiskTierStorageLimit\n          default:\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is \"90\", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            provisioner: \"docker.io/hostpath\"\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. 
More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. 
(Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. 
Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. 
If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, this should be a short, machine\n                  # understandable string that gives the reason for\n                  # condition's last transition. If it\n                  # reports \"ResizeStarted\" that means the underlying\n                  # persistent volume is being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n          defaultStorePersistentObjects: true\n          ranks:\n          - highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            provisioner: \"docker.io/hostpath\"\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. 
More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. 
There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. (Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. 
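                  # --- Illustrative example (not part of the\n                  # generated schema above): a claim requesting 100Gi\n                  # of storage for a tier might set:\n                  #   resources:\n                  #     requests:\n                  #       storage: \"100Gi\"\n                  # ---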
If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. 
If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, this should be a short, machine\n                  # understandable string that gives the reason for\n                  # condition's last transition. If it\n                  # reports \"ResizeStarted\" that means the underlying\n                  # persistent volume is being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n        # GlobalTier Parameters\n        globalTier:\n          # Co-locates all disks to a single disk i.e. persist, cache,\n          # UDF will be on a single PVC.\n          colocateDisks: true\n          # Timeout in seconds for subsequent requests to wait on a\n          # locked resource\n          concurrentWaitTimeout: 120\n          # EncryptDataAtRest - Enable disk encryption of data at rest\n          encryptDataAtRest: true\n        # The PersistTier is used as temporary swap space for data that\n        # doesn't fit in RAM or VRAM. 
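        # --- Illustrative example (not part of the generated schema\n        # above): co-locating all disk tiers on a single encrypted PVC\n        # might look like:\n        #   globalTier:\n        #     colocateDisks: true\n        #     encryptDataAtRest: true\n        #     concurrentWaitTimeout: 120\n        # ---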
The disk should be as fast or\n        # faster than the Persist Tier storage since this tier is used\n        # as an intermediary cache between the RAM and Persist Tiers.\n        persistTier:\n          # DiskTierStorageLimit\n          default:\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is \"90\", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            provisioner: \"docker.io/hostpath\"\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. 
More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. 
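                # --- Illustrative example (not part of the generated\n                # schema above): populating from a snapshot in another\n                # namespace; this requires the\n                # CrossNamespaceVolumeDataSource feature gate and a\n                # ReferenceGrant in that namespace (names are\n                # hypothetical):\n                #   dataSourceRef:\n                #     apiGroup: snapshot.storage.k8s.io\n                #     kind: VolumeSnapshot\n                #     name: persist-tier-snap\n                #     namespace: backup-ns\n                # ---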
(Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. 
Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. 
If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, this should be a short, machine\n                  # understandable string that gives the reason for\n                  # condition's last transition. If it\n                  # reports \"ResizeStarted\" that means the underlying\n                  # persistent volume is being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n          defaultStorePersistentObjects: true\n          ranks:\n          - highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            provisioner: \"docker.io/hostpath\"\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. 
More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. 
There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. (Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. 
If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. 
If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, this should be a short, machine\n                  # understandable string that gives the reason for\n                  # condition's last transition. If it\n                  # reports \"ResizeStarted\" that means the underlying\n                  # persistent volume is being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n        # The RAMTier represents the RAM available for data storage per\n        # rank. The RAM Tier is NOT used for small, non-data objects or\n        # variables that are allocated and deallocated for program flow\n        # control or used to store metadata or other similar\n        # information; these continue to use either the stack or the\n        # regular runtime memory allocator. This tier should be sized\n        # on each machine such that there is sufficient RAM left over\n        # to handle this overhead, as well as the needs of other\n        # processes running on the same machine.\n        ramTier:\n          # The RAM Tier represents the RAM available for data storage\n          # per rank. The RAM Tier is NOT used for small, non-data\n          # objects or variables that are allocated and deallocated for\n          # program flow control or used to store metadata or other\n          # similar information; these continue to use either the stack\n          # or the regular runtime memory allocator. 
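          # --- Illustrative example (not part of the generated schema\n          # above): a RAM tier allowing 32Gi per rank, using the\n          # documented default watermarks:\n          #   ramTier:\n          #     default:\n          #       limit: \"32Gi\"\n          #       highWatermark: 90\n          #       lowWatermark: 50\n          # ---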
This tier should\n          # be sized on each machine such that there is sufficient RAM\n          # left over to handle this overhead, as well as the needs of\n          # other processes running on the same machine. A default\n          # memory limit and eviction thresholds can be set across all\n          # ranks, while one or more ranks may be configured to\n          # override those defaults. The general format for RAM\n          # settings:\n          #   tier.ram.[default|rank<#>].<parameter>\n          # Valid *parameter* names include:\n          #  * 'limit'          : The maximum RAM (bytes) per rank that\n          #     can be allocated across all resource groups.  Default\n          #     is -1, signifying no limit and ignore watermark\n          #     settings.\n          #  * 'high_watermark' : RAM percentage used eviction\n          #     threshold.  Once memory usage exceeds this value,\n          #     evictions from this tier will be scheduled in the\n          #     background and continue until the 'low_watermark'\n          #     percentage usage is reached.  Default is \"90\",\n          #     signifying a 90% memory usage threshold.\n          #  * 'low_watermark'  : RAM percentage used recovery\n          #     threshold.  Once memory usage exceeds\n          #     the 'high_watermark', evictions will continue until\n          #     memory usage falls below this recovery threshold.\n          #     Default is \"50\", signifying a 50% memory usage\n          #     threshold.\n          default:\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is \"90\", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n          # The maximum RAM (bytes) for processing data at rank 0.\n          # Overrides the overall default RAM tier\n          # limit. #tier.ram.rank0.limit = -1\n          ranks:\n          - highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n      tieredStrategy:\n        # Default strategy to apply to tables or columns when one was\n        # not provided during table creation. This strategy is also\n        # applied to a resource group that does not specify one at time\n        # of creation. 
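        # --- Illustrative example (not part of the generated schema\n        # above; the chain format is detailed below): keeping hot data\n        # in RAM longer and making persisted data unevictable:\n        #   tieredStrategy:\n        #     default: \"VRAM 1, RAM 7, PERSIST 10\"\n        #     predicateEvaluationInterval: 60\n        # ---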
The strategy is formed by chaining together the\n        # tier types and their respective eviction priorities. Any\n        # given tier may appear no more than once in the chain and the\n        # priority must be in range \"1\" - \"10\", where \"1\" is the lowest\n        # priority (first to be evicted) and \"9\" is the highest\n        # priority (last to be evicted).  A priority of \"10\" indicates\n        # that an object is unevictable. Each tier's priority is in\n        # relation to the priority of other objects in the same tier;\n        # e.g., \"RAM 9, DISK2 1\" indicates that an object will be the\n        # highest evictable priority among objects in the RAM Tier\n        # (last evicted), but that it will be the lowest priority among\n        # objects in the Disk Tier named 'disk2' (first evicted).  Note\n        # that since an object can only have one Disk Tier instance in\n        # its strategy, the corresponding priority will only apply in\n        # relation to other objects in Disk Tier instance 'disk2'. See\n        # the Tiered Storage section for more information about tier\n        # type names. Format: <tier1> <priority>, <tier2> <priority>,\n        # <tier3> <priority>, ...\n        # Examples using a Disk Tier named 'disk2' and a Cold Storage\n        # Tier 'cold0':\n        #   vram 3, ram 5, disk2 3, persist 10\n        #   vram 3, ram 5, disk2 3, persist 6, cold0 10\n        # tier_strategy.default = VRAM 1, RAM 5, PERSIST 5\n        default: \"VRAM 1, RAM 5, PERSIST 5\"\n        # Predicate evaluation interval (in minutes) - indicates the\n        # interval at which the tier strategy predicates are evaluated\n        predicateEvaluationInterval: 60\n      video:\n        # System default TTL for videos. Time-to-live (TTL) is the\n        # number of minutes before a video will expire and be removed,\n        # or -1 to disable. video_default_ttl = -1\n        defaultTTL: \"-1\"\n        # The maximum number of videos to allow on the system. Set to 0\n        # to disable video rendering.  Set to -1 to allow an unlimited\n        # number of videos. video_max_count = -1\n        maxCount: \"-1\"\n        # Directory where video files should be temporarily stored while\n        # rendering. Only accessed by rank 0. video_temp_directory =\n        # ${gaia.temp_directory}/gpudb-temp-videos\n        tmpDir: \"${gaia.temp_directory}/gpudb-temp-videos\"\n      # VisualizationConfig\n      visualization:\n        # Enable level-of-details rendering for fast interaction with\n        # large WKT polygon data.  Only available for the OpenGL\n        # renderer (when 'enable_opengl_renderer' is \"true\").\n        enableLODRendering: true\n        # If \"true\", enable hardware-accelerated OpenGL renderer;\n        # if \"false\", use the software-based Cairo renderer.\n        enableOpenGLRenderer: true\n        # If \"true\", enable Vector Tile Service (VTS) to support\n        # client-side visualization of geospatial data. Enabling this\n        # option increases memory usage on ingestion.\n        enableVectorTileService: false\n        # Longitude and latitude ranges of geospatial data for which\n        # level-of-details representations are being generated. The\n        # parameter order is: <min_longitude> <min_latitude>\n        # <max_longitude> <max_latitude> The default values span over\n        # the world, but the level-of-details rendering becomes more\n        # efficient when the precise extent of geospatial data is\n        # specified. 
Default: [-180, -90, 180, 90]\n        lodDataExtent: [integer]\n        # The extent to which shape data are pre-processed for\n        # level-of-details rendering during data insert/load or\n        # processed on-the-fly in rendering time. This is a trade-off\n        # between speed and memory. The higher the value, the faster\n        # level-of-details rendering is, but the more memory is used\n        # for storing processed shape data. The maximum level is \"10\"\n        # (most shape data are pre-processed) and the minimum level\n        # is \"0\".\n        lodPreProcessingLevel: 5\n        # The number of subregions in horizontal and vertical geospatial\n        # data extent. The default values of \"12 6\" divide the world\n        # into subregions of 30 degree (lon.) x 30 degree (lat.)\n        lodSubRegionNum: [12,6]\n        # A base image resolution (width and height in pixels) at which\n        # a subregion would be rendered in a global view spanning over\n        # the whole dataset. Based on this resolution level-of-details\n        # representations are generated for the polygons located in the\n        # subregion.\n        lodSubRegionResolution: [512,512]\n        # Maximum heatmap size (in pixels) that can be generated. This\n        # reserves 'max_heatmap_size' ^ 2 * 8 bytes of GPU memory\n        # at **rank0**\n        maxHeatmapSize: 3072\n        # The maximum number of levels in the level-of-details\n        # rendering. As the number increases, level-of-details\n        # rendering becomes effective at higher zoom levels, but it may\n        # increase memory usage for storing level-of-details\n        # representations.\n        maxLODLevel: 8\n        # Input geometries are pre-processed upon ingestion for faster\n        # vector tile generation. This parameter determines the\n        # zoomlevel at which the vector tile pre-processing stops. A\n        # vector tile request for a higher zoomlevel than this\n        # parameter takes additional time because the vector tile needs\n        # to be generated on the fly.\n        maxVectorTileZoomLevel: 8\n        # Input geometries are pre-processed upon ingestion for faster\n        # vector tile generation. This parameter determines the\n        # zoomlevel from which the vector tile pre-processing starts. A\n        # vector tile request for a lower zoomlevel than this parameter\n        # takes additional time because the vector tile needs to be\n        # generated on the fly.\n        minVectorTileZoomLevel: 1\n        # The number of samples to use for antialiasing. Higher numbers\n        # will improve image quality but require more GPU memory to\n        # store the samples on worker ranks.  This affects only the\n        # OpenGL renderer. Value may be \"0\", \"4\", \"8\" or \"16\". When \"0\"\n        # antialiasing is disabled. The default value is \"0\".\n        openGLAntialiasingLevel: 1\n        # Threshold number of points (per-TOM) at which point rendering\n        # switches to fast mode.\n        pointRenderThreshold: 100000\n        # Single-precision coordinates are used for usual rendering\n        # processes, but depending on the precision of geometry data\n        # and use case, double precision processing may be required at\n        # a high zoomlevel. 
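        # --- Illustrative example (not part of the generated schema\n        # above): an OpenGL rendering setup with 4x antialiasing and\n        # the Vector Tile Service enabled:\n        #   visualization:\n        #     enableOpenGLRenderer: true\n        #     openGLAntialiasingLevel: 4\n        #     enableVectorTileService: true\n        # ---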
        # The image width/height (in pixels) of svg symbols cached in
        # the OpenGL symbol cache.
        symbolResolution: 100
        # The width/height (in pixels) of an OpenGL texture which caches
        # symbol images for OpenGL rendering.
        symbolTextureSize: 4000
        # Threshold for the number of points (per-TOM) after which
        # symbology rendering falls back to regular rendering
        symbologyRenderThreshold: 10000
        # The name of the map tiler used for the Vector Tile Service.
        # "google" and "tms" map tilers are supported currently. This
        # parameter should match the map tiler of the clients' vector
        # tile renderer.
        vectorTileMapTiler: "google"
      workbench:
        # Start the Workbench app on the head host when host manager is
        # started. enable_workbench = false
        enable: false
        # HTTP server port for Workbench if enabled. workbench_port =
        # 8000
        port:
          # Number of port to expose on the pod's IP address. This must
          # be a valid port number, 0 < x < 65536.
          containerPort: 1
          # What host IP to bind the external port to.
          hostIP: string
          # Number of port to expose on the host. If specified, this
          # must be a valid port number, 0 < x < 65536. If HostNetwork
          # is specified, this must match ContainerPort. Most
          # containers do not need this.
          hostPort: 1
          # If specified, this must be an IANA_SVC_NAME and unique
          # within the pod. Each named port in a pod must have a unique
          # name. Name for the port that can be referred to by
          # services.
          name: string
          # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
          # to "TCP".
          protocol: "TCP"
    # The fully qualified URL used on the Ingress records for any
    # exposed services. Completed by the Operator. DO NOT POPULATE
    # MANUALLY.
    fqdn: ""
    # The name of the parent HA Ring this cluster belongs to.
    haRingName: "default"
    # Whether to enable the separate node 'pools' for "infra", "compute"
    # pod scheduling. Default: false
    hasPools: true
    # The port the HostManager will be running on in each pod in the
    # cluster. Default: 9300, TCP
    hostManagerPort:
      # Number of port to expose on the pod's IP address. This must be a
      # valid port number, 0 < x < 65536.
      containerPort: 1
      # What host IP to bind the external port to.
      hostIP: string
      # Number of port to expose on the host. If specified, this must be
      # a valid port number, 0 < x < 65536. If HostNetwork is
      # specified, this must match ContainerPort. Most containers do
      # not need this.
      hostPort: 1
      # If specified, this must be an IANA_SVC_NAME and unique within
      # the pod. Each named port in a pod must have a unique name. Name
      # for the port that can be referred to by services.
      name: string
      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
      # to "TCP".
      protocol: "TCP"
    # Set the name of the container image to use.
    image: "kinetica/kinetica-k8s-intel:v7.1.6.0"
    # Set the policy for pulling container images.
    imagePullPolicy: "IfNotPresent"
    # ImagePullSecrets is an optional list of references to secrets in
    # the same gpudb-namespace to use for pulling any of the images
    # used by this PodSpec. If specified, these secrets will be passed
    # to individual puller implementations for them to use. For
    # example, in the case of docker, only DockerConfig type secrets
    # are honored.
    imagePullSecrets:
    - name: string
    # Labels - Pod labels to be applied to the Statefulset DB pods.
    labels: {}
    # The Ingress Endpoint that GAdmin will be running on.
    letsEncrypt:
      # Enable LetsEncrypt for Certificate generation.
      enabled: false
      # LetsEncryptEnvironment
      environment: "staging"
    # Set the Kinetica DB License.
    license: string
    # Periodic probe of container liveness. Container will be restarted
    # if the probe fails. Cannot be updated. More info:
    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
    livenessProbe:
      # Minimum consecutive failures for the probe to be considered
      # failed after having succeeded. Defaults to 3. Minimum value is
      # 1.
      failureThreshold: 3
      # Number of seconds after the container has started before
      # liveness probes are initiated. More info:
      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      initialDelaySeconds: 10
      # How often (in seconds) to perform the probe. Default to 10
      # seconds. Minimum value is 1.
      periodSeconds: 10
    # LoggerConfig Kinetica DB Logger Configuration Object Configure the
    # LOG4CPLUS logger for the DB. Field takes a string containing the
    # full configuration. If not specified a template file is used
    # during DB configuration generation.
    loggerConfig:
      configString: string
    # Metrics - DB Metrics scrape & forward configuration for
    # `fluent-bit`.
    metricsRegistryRepositoryTag:
      # Set the policy for pulling container images.
      imagePullPolicy: "IfNotPresent"
      # ImagePullSecrets is an optional list of references to secrets in
      # the same gpudb-namespace to use for pulling any of the images
      # used by this PodSpec. If specified, these secrets will be
      # passed to individual puller implementations for them to use.
      # For example, in the case of docker, only DockerConfig type
      # secrets are honored.
      imagePullSecrets:
      - name: string
      # The image registry & optional port containing the repository.
      registry: "docker.io"
      # The image repository path.
      repository: "kineticadevcloud/"
      # SemVer = Semantic Version for the Tag SemVer semver.Version
      semVer: string
      # The image sha.
      sha: ""
      # The image tag.
      tag: "v7.1.5.2"
    # Metrics - `fluent-bit` container requests/limits.
    metricsResources:
      # Claims lists the names of resources, defined in
      # spec.resourceClaims, that are used by this container. This is
      # an alpha field and requires enabling the
      # DynamicResourceAllocation feature gate. This field is
      # immutable. It can only be set for containers.
      claims:
      - name: string
      # Limits describes the maximum amount of compute resources
      # allowed. More info:
      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
      limits: {}
      # Requests describes the minimum amount of compute resources
      # required. If Requests is omitted for a container, it defaults
      # to Limits if that is explicitly specified, otherwise to an
      # implementation-defined value. Requests cannot exceed Limits.
      # More info:
      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
      requests: {}
    # NodeSelector - NodeSelector to be applied to the DB Pods
    nodeSelector: {}
    # Do not use - internal Operator field only.
    originalReplicas: 1
    # podManagementPolicy controls how pods are created during initial
    # scale up, when replacing pods on nodes, or when scaling down. The
    # default policy is `OrderedReady`, where pods are created in
    # increasing order (pod-0, then pod-1, etc) and the controller will
    # wait until each pod is ready before continuing. When scaling
    # down, the pods are removed in the opposite order. The alternative
    # policy is `Parallel` which will create pods in parallel to match
    # the desired scale without waiting, and on scale down will delete
    # all pods at once.
    podManagementPolicy: "Parallel"
    # Number of ranks per node as a uint16 i.e. 1-65535 ranks per node.
    # Default: 1
    ranksPerNode: 1
    # Periodic probe of container service readiness. Container will be
    # removed from service endpoints if the probe fails. Cannot be
    # updated. More info:
    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
    readinessProbe:
      # Minimum consecutive failures for the probe to be considered
      # failed after having succeeded. Defaults to 3. Minimum value is
      # 1.
      failureThreshold: 3
      # Number of seconds after the container has started before
      # liveness probes are initiated. More info:
      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      initialDelaySeconds: 10
      # How often (in seconds) to perform the probe. Default to 10
      # seconds. Minimum value is 1.
      periodSeconds: 10
    # The number of DB ranks i.e. replicas that the cluster will spin
    # up. Default: 3
    replicas: 3
    # Limit the resources a DB Pod can consume.
    resources:
      # Claims lists the names of resources, defined in
      # spec.resourceClaims, that are used by this container. This is
      # an alpha field and requires enabling the
      # DynamicResourceAllocation feature gate. This field is
      # immutable. It can only be set for containers.
      claims:
      - name: string
      # Limits describes the maximum amount of compute resources
      # allowed. More info:
      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
      limits: {}
      # Requests describes the minimum amount of compute resources
      # required. If Requests is omitted for a container, it defaults
      # to Limits if that is explicitly specified, otherwise to an
      # implementation-defined value. Requests cannot exceed Limits.
      # More info:
      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
      requests: {}
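    # Illustrative sizing only (the values are assumptions, not
    # recommendations - size to your own workload):
    #   resources:
    #     requests: { cpu: "4", memory: "16Gi" }
    #     limits: { cpu: "8", memory: "32Gi" }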
    # SecurityContext holds security configuration that will be applied
    # to a container. Some fields are present in both SecurityContext
    # and PodSecurityContext. When both are set, the values in
    # SecurityContext take precedence.
    securityContext:
      # AllowPrivilegeEscalation controls whether a process can gain
      # more privileges than its parent process. This bool directly
      # controls if the no_new_privs flag will be set on the container
      # process. AllowPrivilegeEscalation is true always when the
      # container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note
      # that this field cannot be set when spec.os.name is windows.
      allowPrivilegeEscalation: true
      # The capabilities to add/drop when running containers. Defaults
      # to the default set of capabilities granted by the container
      # runtime. Note that this field cannot be set when spec.os.name
      # is windows.
      capabilities:
        # Added capabilities
        add: ["string"]
        # Removed capabilities
        drop: ["string"]
      # Run container in privileged mode. Processes in privileged
      # containers are essentially equivalent to root on the host.
      # Defaults to false. Note that this field cannot be set when
      # spec.os.name is windows.
      privileged: true
      # procMount denotes the type of proc mount to use for the
      # containers. The default is DefaultProcMount which uses the
      # container runtime defaults for readonly paths and masked paths.
      # This requires the ProcMountType feature flag to be enabled.
      # Note that this field cannot be set when spec.os.name is
      # windows.
      procMount: string
      # Whether this container has a read-only root filesystem. Default
      # is false. Note that this field cannot be set when spec.os.name
      # is windows.
      readOnlyRootFilesystem: true
      # The GID to run the entrypoint of the container process. Uses
      # runtime default if unset. May also be set in
      # PodSecurityContext. If set in both SecurityContext and
      # PodSecurityContext, the value specified in SecurityContext
      # takes precedence. Note that this field cannot be set when
      # spec.os.name is windows.
      runAsGroup: 1
      # Indicates that the container must run as a non-root user. If
      # true, the Kubelet will validate the image at runtime to ensure
      # that it does not run as UID 0 (root) and fail to start the
      # container if it does. If unset or false, no such validation
      # will be performed. May also be set in PodSecurityContext. If
      # set in both SecurityContext and PodSecurityContext, the value
      # specified in SecurityContext takes precedence.
      runAsNonRoot: true
      # The UID to run the entrypoint of the container process. Defaults
      # to user specified in image metadata if unspecified. May also be
      # set in PodSecurityContext. If set in both SecurityContext and
      # PodSecurityContext, the value specified in SecurityContext
      # takes precedence. Note that this field cannot be set when
      # spec.os.name is windows.
      runAsUser: 1
      # The SELinux context to be applied to the container. If
      # unspecified, the container runtime will allocate a random
      # SELinux context for each container. May also be set in
      # PodSecurityContext. If set in both SecurityContext and
      # PodSecurityContext, the value specified in SecurityContext
      # takes precedence. Note that this field cannot be set when
      # spec.os.name is windows.
      seLinuxOptions:
        # Level is SELinux level label that applies to the container.
        level: string
        # Role is a SELinux role label that applies to the container.
        role: string
        # Type is a SELinux type label that applies to the container.
        type: string
        # User is a SELinux user label that applies to the container.
        user: string
      # The seccomp options to use by this container. If seccomp options
      # are provided at both the pod & container level, the container
      # options override the pod options. Note that this field cannot
      # be set when spec.os.name is windows.
      seccompProfile:
        # localhostProfile indicates a profile defined in a file on the
        # node should be used. The profile must be preconfigured on the
        # node to work. Must be a descending path, relative to the
        # kubelet's configured seccomp profile location. Must only be
        # set if type is "Localhost".
        localhostProfile: string
        # type indicates which kind of seccomp profile will be applied.
        # Valid options are: Localhost - a profile defined in a file on
        # the node should be used. RuntimeDefault - the container
        # runtime default profile should be used. Unconfined - no
        # profile should be applied.
        type: string
      # The Windows specific settings applied to all containers. If
      # unspecified, the options from the PodSecurityContext will be
      # used. If set in both SecurityContext and PodSecurityContext,
      # the value specified in SecurityContext takes precedence. Note
      # that this field cannot be set when spec.os.name is linux.
      windowsOptions:
        # GMSACredentialSpec is where the GMSA admission webhook
        # (https://github.com/kubernetes-sigs/windows-gmsa) inlines the
        # contents of the GMSA credential spec named by the
        # GMSACredentialSpecName field.
        gmsaCredentialSpec: string
        # GMSACredentialSpecName is the name of the GMSA credential spec
        # to use.
        gmsaCredentialSpecName: string
        # HostProcess determines if a container should be run as a 'Host
        # Process' container. This field is alpha-level and will only
        # be honored by components that enable the
        # WindowsHostProcessContainers feature flag. Setting this field
        # without the feature flag will result in errors when
        # validating the Pod. All of a Pod's containers must have the
        # same effective HostProcess value (it is not allowed to have a
        # mix of HostProcess containers and non-HostProcess
        # containers). In addition, if HostProcess is true then
        # HostNetwork must also be set to true.
        hostProcess: true
        # The UserName in Windows to run the entrypoint of the container
        # process. Defaults to the user specified in image metadata if
        # unspecified. May also be set in PodSecurityContext. If set in
        # both SecurityContext and PodSecurityContext, the value
        # specified in SecurityContext takes precedence.
        runAsUserName: string
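    # A minimal hardened example (an illustrative assumption, not the
    # shipped default - verify against the UID your image requires):
    #   securityContext:
    #     runAsNonRoot: true
    #     allowPrivilegeEscalation: false
    #     seccompProfile:
    #       type: RuntimeDefault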
    # StartupProbe indicates that the Pod has successfully initialized.
    # If specified, no other probes are executed until this completes
    # successfully. If this probe fails, the Pod will be restarted,
    # just as if the livenessProbe failed. This can be used to provide
    # different probe parameters at the beginning of a Pod's lifecycle,
    # when it might take a long time to load data or warm a cache, than
    # during steady-state operation. This cannot be updated. This is an
    # alpha feature enabled by the StartupProbe feature flag. More
    # info:
    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
    startupProbe:
      # Minimum consecutive failures for the probe to be considered
      # failed after having succeeded. Defaults to 3. Minimum value is
      # 1.
      failureThreshold: 3
      # Number of seconds after the container has started before
      # liveness probes are initiated. More info:
      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      initialDelaySeconds: 10
      # How often (in seconds) to perform the probe. Default to 10
      # seconds. Minimum value is 1.
      periodSeconds: 10
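    # Illustrative slow-start tuning (assumed values): a database that
    # reloads large tables on boot can be given up to
    # failureThreshold x periodSeconds = 30 x 10s = 5 minutes to come
    # up before the kubelet restarts it:
    #   startupProbe:
    #     failureThreshold: 30
    #     periodSeconds: 10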
  # HostManagerMonitor is used to monitor the Kinetica DB Ranks. If a
  # rank is unavailable for the specified time (MaxRankFailureCount)
  # the cluster will be restarted.
  hostManagerMonitor:
    # The HostMonitor Port for the DB StartupProbe, ReadinessProbe and
    # Liveness probes. Default: 8888
    db_healthz_port:
      # Number of port to expose on the pod's IP address. This must be a
      # valid port number, 0 < x < 65536.
      containerPort: 1
      # What host IP to bind the external port to.
      hostIP: string
      # Number of port to expose on the host. If specified, this must be
      # a valid port number, 0 < x < 65536. If HostNetwork is
      # specified, this must match ContainerPort. Most containers do
      # not need this.
      hostPort: 1
      # If specified, this must be an IANA_SVC_NAME and unique within
      # the pod. Each named port in a pod must have a unique name. Name
      # for the port that can be referred to by services.
      name: string
      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
      # to "TCP".
      protocol: "TCP"
    # The HostMonitor Port for the DB StartupProbe, ReadinessProbe and
    # Liveness probes. Default: 8889
    hm_healthz_port:
      # Number of port to expose on the pod's IP address. This must be a
      # valid port number, 0 < x < 65536.
      containerPort: 1
      # What host IP to bind the external port to.
      hostIP: string
      # Number of port to expose on the host. If specified, this must be
      # a valid port number, 0 < x < 65536. If HostNetwork is
      # specified, this must match ContainerPort. Most containers do
      # not need this.
      hostPort: 1
      # If specified, this must be an IANA_SVC_NAME and unique within
      # the pod. Each named port in a pod must have a unique name. Name
      # for the port that can be referred to by services.
      name: string
      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
      # to "TCP".
      protocol: "TCP"
    # Periodic probe of container liveness. Container will be restarted
    # if the probe fails. Cannot be updated. More info:
    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
    livenessProbe:
      # Minimum consecutive failures for the probe to be considered
      # failed after having succeeded. Defaults to 3. Minimum value is
      # 1.
      failureThreshold: 3
      # Number of seconds after the container has started before
      # liveness probes are initiated. More info:
      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      initialDelaySeconds: 10
      # How often (in seconds) to perform the probe. Default to 10
      # seconds. Minimum value is 1.
      periodSeconds: 10
    # Set the name of the container image to use.
    monitorRegistryRepositoryTag:
      # Set the policy for pulling container images.
      imagePullPolicy: "IfNotPresent"
      # ImagePullSecrets is an optional list of references to secrets in
      # the same gpudb-namespace to use for pulling any of the images
      # used by this PodSpec. If specified, these secrets will be
      # passed to individual puller implementations for them to use.
      # For example, in the case of docker, only DockerConfig type
      # secrets are honored.
      imagePullSecrets:
      - name: string
      # The image registry & optional port containing the repository.
      registry: "docker.io"
      # The image repository path.
      repository: "kineticadevcloud/"
      # SemVer = Semantic Version for the Tag SemVer semver.Version
      semVer: string
      # The image sha.
      sha: ""
      # The image tag.
      tag: "v7.1.5.2"
    # Periodic probe of container service readiness. Container will be
    # removed from service endpoints if the probe fails. Cannot be
    # updated. More info:
    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
    readinessProbe:
      # Minimum consecutive failures for the probe to be considered
      # failed after having succeeded. Defaults to 3. Minimum value is
      # 1.
      failureThreshold: 3
      # Number of seconds after the container has started before
      # liveness probes are initiated. More info:
      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      initialDelaySeconds: 10
      # How often (in seconds) to perform the probe. Default to 10
      # seconds. Minimum value is 1.
      periodSeconds: 10
    # Allow for overriding resource requests/limits.
    resources:
      # Claims lists the names of resources, defined in
      # spec.resourceClaims, that are used by this container. This is
      # an alpha field and requires enabling the
      # DynamicResourceAllocation feature gate. This field is
      # immutable. It can only be set for containers.
      claims:
      - name: string
      # Limits describes the maximum amount of compute resources
      # allowed. More info:
      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
      limits: {}
      # Requests describes the minimum amount of compute resources
      # required. If Requests is omitted for a container, it defaults
      # to Limits if that is explicitly specified, otherwise to an
      # implementation-defined value. Requests cannot exceed Limits.
      # More info:
      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
      requests: {}
    # StartupProbe indicates that the Pod has successfully initialized.
    # If specified, no other probes are executed until this completes
    # successfully. If this probe fails, the Pod will be restarted,
    # just as if the livenessProbe failed. This can be used to provide
    # different probe parameters at the beginning of a Pod's lifecycle,
    # when it might take a long time to load data or warm a cache, than
    # during steady-state operation. This cannot be updated. This is an
    # alpha feature enabled by the StartupProbe feature flag. More
    # info:
    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
    startupProbe:
      # Minimum consecutive failures for the probe to be considered
      # failed after having succeeded. Defaults to 3. Minimum value is
      # 1.
      failureThreshold: 3
      # Number of seconds after the container has started before
      # liveness probes are initiated. More info:
      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      initialDelaySeconds: 10
      # How often (in seconds) to perform the probe. Default to 10
      # seconds. Minimum value is 1.
      periodSeconds: 10
  # The platform infrastructure provider e.g. azure, aws, gcp, on-prem
  # etc.
  infra: "on-prem"
  # The Kubernetes Ingress Controller the cluster will be running on
  # e.g. ingress-nginx, Traefik, Ambassador, Gloo, Kong etc.
  ingressController: "nginx"
  # The LDAP server to connect to.
  ldap:
    # BaseDN - The root base LDAP Distinguished Name to use as the base
    # for the LDAP usage
    baseDN: "dc=kinetica,dc=com"
    # BindDN - The LDAP Distinguished Name to use for the LDAP
    # connectivity/data connectivity/bind
    bindDN: "cn=admin,dc=kinetica,dc=com"
    # Host - The name of the host to connect to. If IsInLocalK8S=true
    # then supply only the name e.g. `openldap` Default: openldap
    host: "openldap"
    # IsInLocalK8S - Is the LDAP server co-located in the same K8s
    # cluster the operator is running in. Default: true
    isInLocalK8S: true
    # IsLDAPS - Use LDAPS instead of LDAP Default: false
    isLDAPS: false
    # Namespace - The namespace the LDAP server resides in. Default:
    # openldap
    namespace: "gpudb"
    # Port - Defaults to LDAP Port 389 Default: 389
    port: 389
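  # Illustrative LDAPS variant (636 is the standard LDAPS port; the
  # host name is an assumption for an external directory):
  #   ldap:
  #     host: "ldap.example.internal"
  #     isInLocalK8S: false
  #     isLDAPS: true
  #     port: 636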
  # Tells the operator to use Cloud Provider Pay As You Go
  # functionality.
  payAsYouGo: false
  # The Reveal Dashboard Configuration for the Kinetica Cluster.
  reveal:
    # The port that Reveal will be running on. It runs only on the head
    # node pod in the cluster. Default: 8080
    containerPort:
      # Number of port to expose on the pod's IP address. This must be a
      # valid port number, 0 < x < 65536.
      containerPort: 1
      # What host IP to bind the external port to.
      hostIP: string
      # Number of port to expose on the host. If specified, this must be
      # a valid port number, 0 < x < 65536. If HostNetwork is
      # specified, this must match ContainerPort. Most containers do
      # not need this.
      hostPort: 1
      # If specified, this must be an IANA_SVC_NAME and unique within
      # the pod. Each named port in a pod must have a unique name. Name
      # for the port that can be referred to by services.
      name: string
      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
      # to "TCP".
      protocol: "TCP"
    # The Ingress Endpoint that Reveal will be running on.
    ingressPath:
      # backend defines the referenced service endpoint to which the
      # traffic will be forwarded to.
      backend:
        # resource is an ObjectRef to another Kubernetes resource in the
        # namespace of the Ingress object. If resource is specified,
        # serviceName and servicePort must not be specified.
        resource:
          # APIGroup is the group for the resource being referenced. If
          # APIGroup is not specified, the specified Kind must be in
          # the core API group. For any other third-party types,
          # APIGroup is required.
          apiGroup: string
          # Kind is the type of resource being referenced
          kind: KineticaCluster
          # Name is the name of resource being referenced
          name: string
        # serviceName specifies the name of the referenced service.
        serviceName: string
        # servicePort Specifies the port of the referenced service.
        servicePort:
      # path is matched against the path of an incoming request.
      # Currently it can contain characters disallowed from the
      # conventional "path" part of a URL as defined by RFC 3986. Paths
      # must begin with a '/' and must be present when using PathType
      # with value "Exact" or "Prefix".
      path: string
      # pathType determines the interpretation of the path matching.
      # PathType can be one of the following values: * Exact: Matches
      # the URL path exactly. * Prefix: Matches based on a URL path
      # prefix split by '/'. Matching is done on a path element by
      # element basis. A path element refers to the list of labels in
      # the path split by the '/' separator. A request is a match for
      # path p if every p is an element-wise prefix of p of the request
      # path. Note that if the last element of the path is a substring
      # of the last element in request path, it is not a match
      # (e.g. /foo/bar matches /foo/bar/baz, but does not
      # match /foo/barbaz). * ImplementationSpecific: Interpretation of
      # the Path matching is up to the IngressClass. Implementations
      # can treat this as a separate PathType or treat it identically
      # to Prefix or Exact path types. Implementations are required to
      # support all path types. Defaults to ImplementationSpecific.
      pathType: string
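      # Illustrative Prefix matching (the path is an assumption): with
      # path: /reveal and pathType: Prefix, requests to /reveal and
      # /reveal/dashboards are routed to the backend, while
      # /revealmetrics is not, because matching is done per path
      # element rather than per character.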
    # Whether to enable the Reveal Dashboard on the Cluster. Default:
    # true
    isEnabled: true
  # The Stats server to deploy & connect to if required.
  stats:
    # AlertManager - AlertManager specific configuration.
    alertManager:
      # Set the arguments for the command within the container to run.
      args:
      ["-c","/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug
      --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090
      --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage
      --storage.tsdb.retention.time=7d  --web.enable-lifecycle"]
      # Set the command within the container to run.
      command: ["/bin/sh"]
      # ConfigFile - Set the location of the Loki configuration file.
      configFile: "/opt/gpudb/kagent/stats/loki/loki.yml"
      # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a
      # ConfigMap
      configFileAsConfigMap: true
      # The port that Stats will be running on. It runs only on the head
      # node pod in the cluster. Default: 9091
      containerPort:
        # Number of port to expose on the pod's IP address. This must be
        # a valid port number, 0 < x < 65536.
        containerPort: 1
        # What host IP to bind the external port to.
        hostIP: string
        # Number of port to expose on the host. If specified, this must
        # be a valid port number, 0 < x < 65536. If HostNetwork is
        # specified, this must match ContainerPort. Most containers do
        # not need this.
        hostPort: 1
        # If specified, this must be an IANA_SVC_NAME and unique within
        # the pod. Each named port in a pod must have a unique name.
        # Name for the port that can be referred to by services.
        name: string
        # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
        # to "TCP".
        protocol: "TCP"
      # List of environment variables to set in the container.
      env:
      - name: string
        # Variable references $(VAR_NAME) are expanded using the
        # previously defined environment variables in the container and
        # any service environment variables. If a variable cannot be
        # resolved, the reference in the input string will be
        # unchanged. Double $$ are reduced to a single $, which allows
        # for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will
        # produce the string literal "$(VAR_NAME)". Escaped references
        # will never be expanded, regardless of whether the variable
        # exists or not. Defaults to "".
        value: string
        # Source for the environment variable's value. Cannot be used if
        # value is not empty.
        valueFrom:
          # Selects a key of a ConfigMap.
          configMapKeyRef:
            # The key to select.
            key: string
            # Name of the referent. More info:
            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
            # TODO: Add other useful fields. apiVersion, kind, uid?
            name: string
            # Specify whether the ConfigMap or its key must be defined
            optional: true
          # Selects a field of the pod: supports metadata.name,
          # metadata.namespace, `metadata.labels
          # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,
          # spec.serviceAccountName, status.hostIP, status.podIP,
          # status.podIPs.
          fieldRef:
            # Version of the schema the FieldPath is written in terms
            # of, defaults to "v1".
            apiVersion: app.kinetica.com/v1
            # Path of the field to select in the specified API version.
            fieldPath: string
          # Selects a resource of the container: only resources limits
          # and requests (limits.cpu, limits.memory,
          # limits.ephemeral-storage, requests.cpu, requests.memory and
          # requests.ephemeral-storage) are currently supported.
          resourceFieldRef:
            # Container name: required for volumes, optional for env
            # vars
            containerName: string
            # Specifies the output format of the exposed resources,
            # defaults to "1"
            divisor:
            # Required: resource to select
            resource: string
          # Selects a key of a secret in the pod's namespace
          secretKeyRef:
            # The key of the secret to select from. Must be a valid
            # secret key.
            key: string
            # Name of the referent. More info:
            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
            # TODO: Add other useful fields. apiVersion, kind, uid?
            name: string
            # Specify whether the Secret or its key must be defined
            optional: true
      # Set the name of the container image to use.
      image:
        # Set the policy for pulling container images.
        imagePullPolicy: "IfNotPresent"
        # ImagePullSecrets is an optional list of references to secrets
        # in the same gpudb-namespace to use for pulling any of the
        # images used by this PodSpec. If specified, these secrets will
        # be passed to individual puller implementations for them to
        # use. For example, in the case of docker, only DockerConfig
        # type secrets are honored.
        imagePullSecrets:
        - name: string
        # The image registry & optional port containing the repository.
        registry: "docker.io"
        # The image repository path.
        repository: "kineticadevcloud/"
        # SemVer = Semantic Version for the Tag SemVer semver.Version
        semVer: string
        # The image sha.
        sha: ""
        # The image tag.
        tag: "v7.1.5.2"
      # Whether to enable the Stats Server on the Cluster. Default:
      # true
      isEnabled: true
      # Periodic probe of container liveness. Container will be
      # restarted if the probe fails. Cannot be updated. More info:
      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      livenessProbe:
        # Exec specifies the action to take.
        exec:
          # Command is the command line to execute inside the container,
          # the working directory for the command is root ('/') in the
          # container's filesystem. The command is simply exec'd, it is
          # not run inside a shell, so traditional shell instructions
          # ('|', etc) won't work. To use a shell, you need to
          # explicitly call out to that shell. Exit status of 0 is
          # treated as live/healthy and non-zero is unhealthy.
          command: ["string"]
        # Minimum consecutive failures for the probe to be considered
        # failed after having succeeded. Defaults to 3. Minimum value
        # is 1.
        failureThreshold: 1
        # GRPC specifies an action involving a GRPC port.
        grpc:
          # Port number of the gRPC service. Number must be in the range
          # 1 to 65535.
          port: 1
          # Service is the name of the service to place in the gRPC
          # HealthCheckRequest
          # (see
          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
          # If this is not specified, the default behavior is defined
          # by gRPC.
          service: string
        # HTTPGet specifies the http request to perform.
        httpGet:
          # Host name to connect to, defaults to the pod IP. You
          # probably want to set "Host" in httpHeaders instead.
          host: string
          # Custom headers to set in the request. HTTP allows repeated
          # headers.
          httpHeaders:
          - name: string
            # The header field value
            value: string
          # Path to access on the HTTP server.
          path: string
          # Name or number of the port to access on the container.
          # Number must be in the range 1 to 65535. Name must be an
          # IANA_SVC_NAME.
          port:
          # Scheme to use for connecting to the host. Defaults to HTTP.
          scheme: string
        # Number of seconds after the container has started before
        # liveness probes are initiated. More info:
        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
        initialDelaySeconds: 1
        # How often (in seconds) to perform the probe. Default to 10
        # seconds. Minimum value is 1.
        periodSeconds: 1
        # Minimum consecutive successes for the probe to be considered
        # successful after having failed. Defaults to 1. Must be 1 for
        # liveness and startup. Minimum value is 1.
        successThreshold: 1
        # TCPSocket specifies an action involving a TCP port.
        tcpSocket:
          # Optional: Host name to connect to, defaults to the pod IP.
          host: string
          # Number or name of the port to access on the container.
          # Number must be in the range 1 to 65535. Name must be an
          # IANA_SVC_NAME.
          port:
        # Optional duration in seconds the pod needs to terminate
        # gracefully upon probe failure. The grace period is the
        # duration in seconds after the processes running in the pod
        # are sent a termination signal and the time when the processes
        # are forcibly halted with a kill signal. Set this value longer
        # than the expected cleanup time for your process. If this
        # value is nil, the pod's terminationGracePeriodSeconds will be
        # used. Otherwise, this value overrides the value provided by
        # the pod spec. Value must be non-negative integer. The value
        # zero indicates stop immediately via the kill signal
        # (no opportunity to shut down). This is a beta field and
        # requires enabling ProbeTerminationGracePeriod feature gate.
        # Minimum value is 1. spec.terminationGracePeriodSeconds is
        # used if unset.
        terminationGracePeriodSeconds: 1
        # Number of seconds after which the probe times out. Defaults to
        # 1 second. Minimum value is 1. More info:
        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
        timeoutSeconds: 1
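      # Exactly one handler (exec, grpc, httpGet or tcpSocket) should
      # be set per probe. An illustrative HTTP liveness check (the path
      # and port are assumptions):
      #   livenessProbe:
      #     httpGet:
      #       path: /healthz
      #       port: 9091
      #     periodSeconds: 10
      #     failureThreshold: 3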
      # Logs - Set the location of the log files.
      logs: "/opt/gpudb/kagent/stats/logs"
      name: "stats"
      # Periodic probe of container service readiness. Container will be
      # removed from service endpoints if the probe fails. Cannot be
      # updated. More info:
      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      readinessProbe:
        # Exec specifies the action to take.
        exec:
          # Command is the command line to execute inside the container,
          # the working directory for the command is root ('/') in the
          # container's filesystem. The command is simply exec'd, it is
          # not run inside a shell, so traditional shell instructions
          # ('|', etc) won't work. To use a shell, you need to
          # explicitly call out to that shell. Exit status of 0 is
          # treated as live/healthy and non-zero is unhealthy.
          command: ["string"]
        # Minimum consecutive failures for the probe to be considered
        # failed after having succeeded. Defaults to 3. Minimum value
        # is 1.
        failureThreshold: 1
        # GRPC specifies an action involving a GRPC port.
        grpc:
          # Port number of the gRPC service. Number must be in the range
          # 1 to 65535.
          port: 1
          # Service is the name of the service to place in the gRPC
          # HealthCheckRequest
          # (see
          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
          # If this is not specified, the default behavior is defined
          # by gRPC.
          service: string
        # HTTPGet specifies the http request to perform.
        httpGet:
          # Host name to connect to, defaults to the pod IP. You
          # probably want to set "Host" in httpHeaders instead.
          host: string
          # Custom headers to set in the request. HTTP allows repeated
          # headers.
          httpHeaders:
          - name: string
            # The header field value
            value: string
          # Path to access on the HTTP server.
          path: string
          # Name or number of the port to access on the container.
          # Number must be in the range 1 to 65535. Name must be an
          # IANA_SVC_NAME.
          port:
          # Scheme to use for connecting to the host. Defaults to HTTP.
          scheme: string
        # Number of seconds after the container has started before
        # liveness probes are initiated. More info:
        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
        initialDelaySeconds: 1
        # How often (in seconds) to perform the probe. Default to 10
        # seconds. Minimum value is 1.
        periodSeconds: 1
        # Minimum consecutive successes for the probe to be considered
        # successful after having failed. Defaults to 1. Must be 1 for
        # liveness and startup. Minimum value is 1.
        successThreshold: 1
        # TCPSocket specifies an action involving a TCP port.
        tcpSocket:
          # Optional: Host name to connect to, defaults to the pod IP.
          host: string
          # Number or name of the port to access on the container.
          # Number must be in the range 1 to 65535. Name must be an
          # IANA_SVC_NAME.
          port:
        # Optional duration in seconds the pod needs to terminate
        # gracefully upon probe failure. The grace period is the
        # duration in seconds after the processes running in the pod
        # are sent a termination signal and the time when the processes
        # are forcibly halted with a kill signal. Set this value longer
        # than the expected cleanup time for your process. If this
        # value is nil, the pod's terminationGracePeriodSeconds will be
        # used. Otherwise, this value overrides the value provided by
        # the pod spec. Value must be non-negative integer. The value
        # zero indicates stop immediately via the kill signal
        # (no opportunity to shut down). This is a beta field and
        # requires enabling ProbeTerminationGracePeriod feature gate.
        # Minimum value is 1. spec.terminationGracePeriodSeconds is
        # used if unset.
        terminationGracePeriodSeconds: 1
        # Number of seconds after which the probe times out. Defaults to
        # 1 second. Minimum value is 1. More info:
        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
        timeoutSeconds: 1
      # Resource Requests & Limits for the Stats Pod.
      resources:
        # Claims lists the names of resources, defined in
        # spec.resourceClaims, that are used by this container. This is
        # an alpha field and requires enabling the
        # DynamicResourceAllocation feature gate. This field is
        # immutable. It can only be set for containers.
        claims:
        - name: string
        # Limits describes the maximum amount of compute resources
        # allowed. More info:
        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
        limits: {}
        # Requests describes the minimum amount of compute resources
        # required. If Requests is omitted for a container, it defaults
        # to Limits if that is explicitly specified, otherwise to an
        # implementation-defined value. Requests cannot exceed Limits.
        # More info:
        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
        requests: {}
      # StoragePath - Set the location of the AlertManager file
      # storage.
      storagePath: "/opt/gpudb/kagent/stats/storage/alertmanager/alertmanager"
      # WebConfigFile - Set the location of the AlertManager
      # alertmanager-web-config.yml.
      webConfigFile: "/opt/gpudb/kagent/stats/alertmanager/alertmanager-web-config.yml"
      # WebListenAddress - Set the address the AlertManager web server
      # listens on.
      webListenAddress: "0.0.0.0:9089"
    # Grafana - Grafana specific configuration.
    grafana:
      # Set the arguments for the command within the container to run.
      args:
      ["-c","/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug
      --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090
      --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage
      --storage.tsdb.retention.time=7d  --web.enable-lifecycle"]
      # Set the command within the container to run.
      command: ["/bin/sh"]
      # ConfigFile - Set the location of the Loki configuration file.
      configFile: "/opt/gpudb/kagent/stats/loki/loki.yml"
      # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a
      # ConfigMap
      configFileAsConfigMap: true
      # The port that Stats will be running on. It runs only on the head
      # node pod in the cluster. Default: 9091
      containerPort:
        # Number of port to expose on the pod's IP address. This must be
        # a valid port number, 0 < x < 65536.
        containerPort: 1
        # What host IP to bind the external port to.
        hostIP: string
        # Number of port to expose on the host. If specified, this must
        # be a valid port number, 0 < x < 65536. If HostNetwork is
        # specified, this must match ContainerPort. Most containers do
        # not need this.
        hostPort: 1
        # If specified, this must be an IANA_SVC_NAME and unique within
        # the pod. Each named port in a pod must have a unique name.
        # Name for the port that can be referred to by services.
        name: string
        # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
        # to "TCP".
        protocol: "TCP"
      # List of environment variables to set in the container.
      env:
      - name: string
        # Variable references $(VAR_NAME) are expanded using the
        # previously defined environment variables in the container and
        # any service environment variables. If a variable cannot be
        # resolved, the reference in the input string will be
        # unchanged. Double $$ are reduced to a single $, which allows
        # for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will
        # produce the string literal "$(VAR_NAME)". Escaped references
        # will never be expanded, regardless of whether the variable
        # exists or not. Defaults to "".
        value: string
        # Source for the environment variable's value. Cannot be used if
        # value is not empty.
        valueFrom:
          # Selects a key of a ConfigMap.
          configMapKeyRef:
            # The key to select.
            key: string
            # Name of the referent. More info:
            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
            # TODO: Add other useful fields. apiVersion, kind, uid?
            name: string
            # Specify whether the ConfigMap or its key must be defined
            optional: true
          # Selects a field of the pod: supports metadata.name,
          # metadata.namespace, `metadata.labels
          # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,
          # spec.serviceAccountName, status.hostIP, status.podIP,
          # status.podIPs.
          fieldRef:
            # Version of the schema the FieldPath is written in terms
            # of, defaults to "v1".
            apiVersion: app.kinetica.com/v1
            # Path of the field to select in the specified API version.
            fieldPath: string
          # Selects a resource of the container: only resources limits
          # and requests (limits.cpu, limits.memory,
          # limits.ephemeral-storage, requests.cpu, requests.memory and
          # requests.ephemeral-storage) are currently supported.
          resourceFieldRef:
            # Container name: required for volumes, optional for env
            # vars
            containerName: string
            # Specifies the output format of the exposed resources,
            # defaults to "1"
            divisor:
            # Required: resource to select
            resource: string
          # Selects a key of a secret in the pod's namespace
          secretKeyRef:
            # The key of the secret to select from. Must be a valid
            # secret key.
            key: string
            # Name of the referent. More info:
            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
            # TODO: Add other useful fields. apiVersion, kind, uid?
            name: string
            # Specify whether the Secret or its key must be defined
            optional: true
      # HomePath - Set the location of the Grafana home directory.
      homePath: "/opt/gpudb/kagent/stats/grafana"
      # GraphiteHost - Host Address
      host: "0.0.0.0"
      # Set the name of the container image to use.
      image:
        # Set the policy for pulling container images.
        imagePullPolicy: "IfNotPresent"
        # ImagePullSecrets is an optional list of references to secrets
        # in the same gpudb-namespace to use for pulling any of the
        # images used by this PodSpec. If specified, these secrets will
        # be passed to individual puller implementations for them to
        # use. For example, in the case of docker, only DockerConfig
        # type secrets are honored.
        imagePullSecrets:
        - name: string
        # The image registry & optional port containing the repository.
        registry: "docker.io"
        # The image repository path.
        repository: "kineticadevcloud/"
        # SemVer = Semantic Version for the Tag SemVer semver.Version
        semVer: string
        # The image sha.
        sha: ""
        # The image tag.
        tag: "v7.1.5.2"
      # Whether to enable the Stats Server on the Cluster. Default:
      # true
      isEnabled: true
      # Periodic probe of container liveness. Container will be
      # restarted if the probe fails. Cannot be updated. More info:
      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      livenessProbe:
        # Exec specifies the action to take.
        exec:
          # Command is the command line to execute inside the container,
          # the working directory for the command is root ('/') in the
          # container's filesystem. The command is simply exec'd, it is
          # not run inside a shell, so traditional shell instructions
          # ('|', etc) won't work. To use a shell, you need to
          # explicitly call out to that shell. Exit status of 0 is
          # treated as live/healthy and non-zero is unhealthy.
          command: ["string"]
        # Minimum consecutive failures for the probe to be considered
        # failed after having succeeded. Defaults to 3. Minimum value
        # is 1.
        failureThreshold: 1
        # GRPC specifies an action involving a GRPC port.
        grpc:
          # Port number of the gRPC service. Number must be in the range
          # 1 to 65535.
          port: 1
          # Service is the name of the service to place in the gRPC
          # HealthCheckRequest
          # (see
          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
          # If this is not specified, the default behavior is defined
          # by gRPC.
          service: string
        # HTTPGet specifies the http request to perform.
        httpGet:
          # Host name to connect to, defaults to the pod IP. You
          # probably want to set "Host" in httpHeaders instead.
          host: string
          # Custom headers to set in the request. HTTP allows repeated
          # headers.
          httpHeaders:
          - name: string
            # The header field value
            value: string
          # Path to access on the HTTP server.
          path: string
          # Name or number of the port to access on the container.
          # Number must be in the range 1 to 65535. Name must be an
          # IANA_SVC_NAME.
          port:
          # Scheme to use for connecting to the host. Defaults to HTTP.
          scheme: string
        # Number of seconds after the container has started before
        # liveness probes are initiated. More info:
        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
        initialDelaySeconds: 1
        # How often (in seconds) to perform the probe. Default to 10
        # seconds. Minimum value is 1.
        periodSeconds: 1
        # Minimum consecutive successes for the probe to be considered
        # successful after having failed. Defaults to 1. Must be 1 for
        # liveness and startup. Minimum value is 1.
        successThreshold: 1
        # TCPSocket specifies an action involving a TCP port.
        tcpSocket:
          # Optional: Host name to connect to, defaults to the pod IP.
          host: string
          # Number or name of the port to access on the container.
          # Number must be in the range 1 to 65535. Name must be an
          # IANA_SVC_NAME.
          port:
        # Optional duration in seconds the pod needs to terminate
        # gracefully upon probe failure. The grace period is the
        # duration in seconds after the processes running in the pod
        # are sent a termination signal and the time when the processes
        # are forcibly halted with a kill signal. Set this value longer
        # than the expected cleanup time for your process. If this
        # value is nil, the pod's terminationGracePeriodSeconds will be
        # used. Otherwise, this value overrides the value provided by
        # the pod spec. Value must be non-negative integer. The value
        # zero indicates stop immediately via the kill signal
        # (no opportunity to shut down). This is a beta field and
        # requires enabling ProbeTerminationGracePeriod feature gate.
        # Minimum value is 1. spec.terminationGracePeriodSeconds is
        # used if unset.
        terminationGracePeriodSeconds: 1
        # Number of seconds after which the probe times out. Defaults to
        # 1 second. Minimum value is 1. More info:
        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
        timeoutSeconds: 1
      # Logs - Set the location of the log files.
      logs: "/opt/gpudb/kagent/stats/logs"
      name: "stats"
      # Periodic probe of container service readiness. Container will be
      # removed from service endpoints if the probe fails. Cannot be
      # updated. More info:
      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
      readinessProbe:
        # Exec specifies the action to take.
        exec:
          # Command is the command line to execute inside the container,
          # the working directory for the command is root ('/') in the
          # container's filesystem. The command is simply exec'd, it is
          # not run inside a shell, so traditional shell instructions
          # ('|', etc) won't work. To use a shell, you need to
          # explicitly call out to that shell. Exit status of 0 is
          # treated as live/healthy and non-zero is unhealthy.
          command: ["string"]
        # Minimum consecutive failures for the probe to be considered
        # failed after having succeeded. Defaults to 3. Minimum value
        # is 1.
        failureThreshold: 1
        # GRPC specifies an action involving a GRPC port.
        grpc:
          # Port number of the gRPC service. Number must be in the range
          # 1 to 65535.
          port: 1
          # Service is the name of the service to place in the gRPC
          # HealthCheckRequest
          # (see
          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
          # If this is not specified, the default behavior is defined
          # by gRPC.
          service: string
        # HTTPGet specifies the http request to perform.
        httpGet:
          # Host name to connect to, defaults to the pod IP. You
          # probably want to set "Host" in httpHeaders instead.
          host: string
          # Custom headers to set in the request. HTTP allows repeated
          # headers.
          httpHeaders:
          - name: string
            # The header field value
            value: string
          # Path to access on the HTTP server.
          path: string
          # Name or number of the port to access on the container.
          # Number must be in the range 1 to 65535. Name must be an
          # IANA_SVC_NAME.
          port:
          # Scheme to use for connecting to the host. Defaults to HTTP.
          scheme: string
        # Number of seconds after the container has started before
        # liveness probes are initiated. More info:
More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Resource Requests & Limits for the Stats Pod.\n      resources:\n        # Claims lists the names of resources, defined in\n        # spec.resourceClaims, that are used by this container. This is\n        # an alpha field and requires enabling the\n        # DynamicResourceAllocation feature gate. This field is\n        # immutable. It can only be set for containers.\n        claims:\n        - name: string\n        # Limits describes the maximum amount of compute resources\n        # allowed. More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        limits: {}\n        # Requests describes the minimum amount of compute resources\n        # required. If Requests is omitted for a container, it defaults\n        # to Limits if that is explicitly specified, otherwise to an\n        # implementation-defined value. Requests cannot exceed Limits.\n        # More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        requests: {}\n    # Whether to enable the Stats Server on the Cluster. 
Default: true\n    isEnabled: true\n    # Loki - Loki specific configuration.\n    loki:\n      # Set the arguments for the command within the container to run.\n      args:\n      [\"-c\",\"/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug\n      --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090\n      --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage\n      --storage.tsdb.retention.time=7d  --web.enable-lifecycle\"]\n      # Set the command within the container to run.\n      command: [\"/bin/sh\"]\n      # ConfigFile - Set the location of the Loki configuration file.\n      configFile: \"/opt/gpudb/kagent/stats/loki/loki.yml\"\n      # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a\n      # ConfigMap\n      configFileAsConfigMap: true\n      # The port that Stats will be running on. It runs only on the head\n      # node pod in the cluster. Default: 9091\n      containerPort:\n        # Number of port to expose on the pod's IP address. This must be\n        # a valid port number, 0 < x < 65536.\n        containerPort: 1\n        # What host IP to bind the external port to.\n        hostIP: string\n        # Number of port to expose on the host. If specified, this must\n        # be a valid port number, 0 < x < 65536. If HostNetwork is\n        # specified, this must match ContainerPort. Most containers do\n        # not need this.\n        hostPort: 1\n        # If specified, this must be an IANA_SVC_NAME and unique within\n        # the pod. Each named port in a pod must have a unique name.\n        # Name for the port that can be referred to by services.\n        name: string\n        # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n        # to \"TCP\".\n        protocol: \"TCP\"\n      # List of environment variables to set in the container.\n      env:\n      - name: string\n        # Variable references $(VAR_NAME) are expanded using the\n        # previously defined environment variables in the container and\n        # any service environment variables. If a variable cannot be\n        # resolved, the reference in the input string will be\n        # unchanged. Double $$ are reduced to a single $, which allows\n        # for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will\n        # produce the string literal \"$(VAR_NAME)\". Escaped references\n        # will never be expanded, regardless of whether the variable\n        # exists or not. Defaults to \"\".\n        value: string\n        # Source for the environment variable's value. Cannot be used if\n        # value is not empty.\n        valueFrom:\n          # Selects a key of a ConfigMap.\n          configMapKeyRef:\n            # The key to select.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. 
apiVersion, kind, uid?\n            name: string\n            # Specify whether the ConfigMap or its key must be defined\n            optional: true\n          # Selects a field of the pod: supports metadata.name,\n          # metadata.namespace, `metadata.labels\n          # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,\n          # spec.serviceAccountName, status.hostIP, status.podIP,\n          # status.podIPs.\n          fieldRef:\n            # Version of the schema the FieldPath is written in terms\n            # of, defaults to \"v1\".\n            apiVersion: app.kinetica.com/v1\n            # Path of the field to select in the specified API version.\n            fieldPath: string\n          # Selects a resource of the container: only resources limits\n          # and requests (limits.cpu, limits.memory,\n          # limits.ephemeral-storage, requests.cpu, requests.memory and\n          # requests.ephemeral-storage) are currently supported.\n          resourceFieldRef:\n            # Container name: required for volumes, optional for env\n            # vars\n            containerName: string\n            # Specifies the output format of the exposed resources,\n            # defaults to \"1\"\n            divisor: \n            # Required: resource to select\n            resource: string\n          # Selects a key of a secret in the pod's namespace\n          secretKeyRef:\n            # The key of the secret to select from.  Must be a valid\n            # secret key.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. apiVersion, kind, uid?\n            name: string\n            # Specify whether the Secret or its key must be defined\n            optional: true\n      # ExpandEnv\n      expandEnv: true\n      # Set the name of the container image to use.\n      image:\n        # Set the policy for pulling container images.\n        imagePullPolicy: \"IfNotPresent\"\n        # ImagePullSecrets is an optional list of references to secrets\n        # in the same gpudb-namespace to use for pulling any of the\n        # images used by this PodSpec. If specified, these secrets will\n        # be passed to individual puller implementations for them to\n        # use. For example, in the case of docker, only DockerConfig\n        # type secrets are honored.\n        imagePullSecrets:\n        - name: string\n        # The image registry & optional port containing the repository.\n        registry: \"docker.io\"\n        # The image repository path.\n        repository: \"kineticadevcloud/\"\n        # SemVer = Semantic Version for the Tag SemVer semver.Version\n        semVer: string\n        # The image sha.\n        sha: \"\"\n        # The image tag.\n        tag: \"v7.1.5.2\"\n      # Whether to enable the Stats Server on the Cluster. Default:\n      # true\n      isEnabled: true\n      # Periodic probe of container liveness. Container will be\n      # restarted if the probe fails. Cannot be updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      livenessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command  is root ('/') in the\n          # container's filesystem. 
The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). 
This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Logs - Set the location of the Loki configuration file.\n      logs: \"/opt/gpudb/kagent/stats/logs\" name: \"stats\"\n      # Periodic probe of container service readiness. Container will be\n      # removed from service endpoints if the probe fails. Cannot be\n      # updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      readinessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command  is root ('/') in the\n          # container's filesystem. The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. 
Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Resource Requests & Limits for the Stats Pod.\n      resources:\n        # Claims lists the names of resources, defined in\n        # spec.resourceClaims, that are used by this container. This is\n        # an alpha field and requires enabling the\n        # DynamicResourceAllocation feature gate. This field is\n        # immutable. It can only be set for containers.\n        claims:\n        - name: string\n        # Limits describes the maximum amount of compute resources\n        # allowed. More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        limits: {}\n        # Requests describes the minimum amount of compute resources\n        # required. If Requests is omitted for a container, it defaults\n        # to Limits if that is explicitly specified, otherwise to an\n        # implementation-defined value. Requests cannot exceed Limits.\n        # More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        requests: {}\n      # Storage - Set the path of the Loki storage.\n      storage: \"/opt/gpudb/kagent/stats/storage/loki-storage\"\n    # Which vmss/node group etc. 
to use as the NodeSelector\n    pool: \"compute\"\n    # Prometheus - Prometheus specific configuration.\n    prometheus:\n      # Set the arguments for the command within the container to run.\n      args:\n      [\"-c\",\"/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug\n      --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090\n      --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage\n      --storage.tsdb.retention.time=7d  --web.enable-lifecycle\"]\n      # Set the command within the container to run.\n      command: [\"/bin/sh\"]\n      # ConfigFile - Set the location of the Loki configuration file.\n      configFile: \"/opt/gpudb/kagent/stats/loki/loki.yml\"\n      # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a\n      # ConfigMap\n      configFileAsConfigMap: true\n      # The port that Stats will be running on. It runs only on the head\n      # node pod in the cluster. Default: 9091\n      containerPort:\n        # Number of port to expose on the pod's IP address. This must be\n        # a valid port number, 0 < x < 65536.\n        containerPort: 1\n        # What host IP to bind the external port to.\n        hostIP: string\n        # Number of port to expose on the host. If specified, this must\n        # be a valid port number, 0 < x < 65536. If HostNetwork is\n        # specified, this must match ContainerPort. Most containers do\n        # not need this.\n        hostPort: 1\n        # If specified, this must be an IANA_SVC_NAME and unique within\n        # the pod. Each named port in a pod must have a unique name.\n        # Name for the port that can be referred to by services.\n        name: string\n        # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n        # to \"TCP\".\n        protocol: \"TCP\"\n      # List of environment variables to set in the container.\n      env:\n      - name: string\n        # Variable references $(VAR_NAME) are expanded using the\n        # previously defined environment variables in the container and\n        # any service environment variables. If a variable cannot be\n        # resolved, the reference in the input string will be\n        # unchanged. Double $$ are reduced to a single $, which allows\n        # for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will\n        # produce the string literal \"$(VAR_NAME)\". Escaped references\n        # will never be expanded, regardless of whether the variable\n        # exists or not. Defaults to \"\".\n        value: string\n        # Source for the environment variable's value. Cannot be used if\n        # value is not empty.\n        valueFrom:\n          # Selects a key of a ConfigMap.\n          configMapKeyRef:\n            # The key to select.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. 
apiVersion, kind, uid?\n            name: string\n            # Specify whether the ConfigMap or its key must be defined\n            optional: true\n          # Selects a field of the pod: supports metadata.name,\n          # metadata.namespace, `metadata.labels\n          # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,\n          # spec.serviceAccountName, status.hostIP, status.podIP,\n          # status.podIPs.\n          fieldRef:\n            # Version of the schema the FieldPath is written in terms\n            # of, defaults to \"v1\".\n            apiVersion: app.kinetica.com/v1\n            # Path of the field to select in the specified API version.\n            fieldPath: string\n          # Selects a resource of the container: only resources limits\n          # and requests (limits.cpu, limits.memory,\n          # limits.ephemeral-storage, requests.cpu, requests.memory and\n          # requests.ephemeral-storage) are currently supported.\n          resourceFieldRef:\n            # Container name: required for volumes, optional for env\n            # vars\n            containerName: string\n            # Specifies the output format of the exposed resources,\n            # defaults to \"1\"\n            divisor: \n            # Required: resource to select\n            resource: string\n          # Selects a key of a secret in the pod's namespace\n          secretKeyRef:\n            # The key of the secret to select from.  Must be a valid\n            # secret key.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. apiVersion, kind, uid?\n            name: string\n            # Specify whether the Secret or its key must be defined\n            optional: true\n      # Set the name of the container image to use.\n      image:\n        # Set the policy for pulling container images.\n        imagePullPolicy: \"IfNotPresent\"\n        # ImagePullSecrets is an optional list of references to secrets\n        # in the same gpudb-namespace to use for pulling any of the\n        # images used by this PodSpec. If specified, these secrets will\n        # be passed to individual puller implementations for them to\n        # use. For example, in the case of docker, only DockerConfig\n        # type secrets are honored.\n        imagePullSecrets:\n        - name: string\n        # The image registry & optional port containing the repository.\n        registry: \"docker.io\"\n        # The image repository path.\n        repository: \"kineticadevcloud/\"\n        # SemVer = Semantic Version for the Tag SemVer semver.Version\n        semVer: string\n        # The image sha.\n        sha: \"\"\n        # The image tag.\n        tag: \"v7.1.5.2\"\n      # Whether to enable the Stats Server on the Cluster. Default:\n      # true\n      isEnabled: true\n      # Periodic probe of container liveness. Container will be\n      # restarted if the probe fails. Cannot be updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      livenessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command  is root ('/') in the\n          # container's filesystem. 
The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). 
This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Set the Prometheus logging level.\n      logLevel: \"debug\"\n      # Logs - Set the location of the Loki configuration file.\n      logs: \"/opt/gpudb/kagent/stats/logs\" name: \"stats\"\n      # Periodic probe of container service readiness. Container will be\n      # removed from service endpoints if the probe fails. Cannot be\n      # updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      readinessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command  is root ('/') in the\n          # container's filesystem. The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. 
Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Resource Requests & Limits for the Stats Pod.\n      resources:\n        # Claims lists the names of resources, defined in\n        # spec.resourceClaims, that are used by this container. This is\n        # an alpha field and requires enabling the\n        # DynamicResourceAllocation feature gate. This field is\n        # immutable. It can only be set for containers.\n        claims:\n        - name: string\n        # Limits describes the maximum amount of compute resources\n        # allowed. More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        limits: {}\n        # Requests describes the minimum amount of compute resources\n        # required. If Requests is omitted for a container, it defaults\n        # to Limits if that is explicitly specified, otherwise to an\n        # implementation-defined value. Requests cannot exceed Limits.\n        # More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        requests: {}\n      # Set the location of the TSDB database.\n      storageTSDBPath: \"/opt/gpudb/kagent/stats/storage/prometheus-storage\"\n      # Set the time to hold data in the TSDB database.\n      storageTSDBRetentionTime: \"7d\"\n      # Timings - Prometheus Intervals & Timeouts\n      timings: evaluationInterval: \"30s\" scrapeInterval: \"30s\"\n      scrapeTimeout: \"10s\"\n    # Whether to share a single PV for Loki, Prometheus & Grafana or\n    # have a separate PV for each. Default: true\n    sharedPV: true\n    # Resource block specifically for use with SharedPV = true to set\n    # storage `requests` & `limits`\n    sharedPVResources:\n      # Claims lists the names of resources, defined in\n      # spec.resourceClaims, that are used by this container. 
This is\n      # an alpha field and requires enabling the\n      # DynamicResourceAllocation feature gate. This field is\n      # immutable. It can only be set for containers.\n      claims:\n      - name: string\n      # Limits describes the maximum amount of compute resources\n      # allowed. More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      limits: {}\n      # Requests describes the minimum amount of compute resources\n      # required. If Requests is omitted for a container, it defaults\n      # to Limits if that is explicitly specified, otherwise to an\n      # implementation-defined value. Requests cannot exceed Limits.\n      # More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      requests: {}\n  # Supporting images like socat,busybox etc.\n  supportingImages:\n    # Set the resource requests/limits for the BusyBox Pod(s).\n    busyBoxResources:\n      # Claims lists the names of resources, defined in\n      # spec.resourceClaims, that are used by this container. This is\n      # an alpha field and requires enabling the\n      # DynamicResourceAllocation feature gate. This field is\n      # immutable. It can only be set for containers.\n      claims:\n      - name: string\n      # Limits describes the maximum amount of compute resources\n      # allowed. More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      limits: {}\n      # Requests describes the minimum amount of compute resources\n      # required. If Requests is omitted for a container, it defaults\n      # to Limits if that is explicitly specified, otherwise to an\n      # implementation-defined value. Requests cannot exceed Limits.\n      # More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      requests: {}\n    # Set the name of the container image to use.\n    busybox:\n      # Set the policy for pulling container images.\n      imagePullPolicy: \"IfNotPresent\"\n      # ImagePullSecrets is an optional list of references to secrets in\n      # the same gpudb-namespace to use for pulling any of the images\n      # used by this PodSpec. If specified, these secrets will be\n      # passed to individual puller implementations for them to use.\n      # For example, in the case of docker, only DockerConfig type\n      # secrets are honored.\n      imagePullSecrets:\n      - name: string\n      # The image registry & optional port containing the repository.\n      registry: \"docker.io\"\n      # The image repository path.\n      repository: \"kineticadevcloud/\"\n      # SemVer = Semantic Version for the Tag SemVer semver.Version\n      semVer: string\n      # The image sha.\n      sha: \"\"\n      # The image tag.\n      tag: \"v7.1.5.2\"\n    # Set the name of the container image to use.\n    socat:\n      # Set the policy for pulling container images.\n      imagePullPolicy: \"IfNotPresent\"\n      # ImagePullSecrets is an optional list of references to secrets in\n      # the same gpudb-namespace to use for pulling any of the images\n      # used by this PodSpec. 
If specified, these secrets will be\n      # passed to individual puller implementations for them to use.\n      # For example, in the case of docker, only DockerConfig type\n      # secrets are honored.\n      imagePullSecrets:\n      - name: string\n      # The image registry & optional port containing the repository.\n      registry: \"docker.io\"\n      # The image repository path.\n      repository: \"kineticadevcloud/\"\n      # SemVer = Semantic Version for the Tag SemVer semver.Version\n      semVer: string\n      # The image sha.\n      sha: \"\"\n      # The image tag.\n      tag: \"v7.1.5.2\"\n    # Set the resource requests/limits for the Socat Pod.\n    socatResources:\n      # Claims lists the names of resources, defined in\n      # spec.resourceClaims, that are used by this container. This is\n      # an alpha field and requires enabling the\n      # DynamicResourceAllocation feature gate. This field is\n      # immutable. It can only be set for containers.\n      claims:\n      - name: string\n      # Limits describes the maximum amount of compute resources\n      # allowed. More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      limits: {}\n      # Requests describes the minimum amount of compute resources\n      # required. If Requests is omitted for a container, it defaults\n      # to Limits if that is explicitly specified, otherwise to an\n      # implementation-defined value. Requests cannot exceed Limits.\n      # More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      requests: {}\n# KineticaClusterStatus defines the observed state of KineticaCluster\nstatus:\n  # CloudProvider the DB is deployed on\n  cloudProvider: string\n  # CloudRegion the DB is deployed on\n  cloudRegion: string\n  # ClusterSize the current number of ranks & type i.e. CPU or GPU of\n  # the cluster\n  clusterSize:\n    # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a\n    # representation of the number of nodes in a simple to understand\n    # T-Short size scheme. This indicates the size of the cluster i.e.\n    # the number of nodes. It does not identify the size of the cloud\n    # provider nodes. For node size see ClusterTypeEnum. Supported\n    # Values are: - XS S M L XL XXL XXXL\n    tshirtSize: string\n    # ClusterTypeEnum - An Enum of the node types of a KineticaCluster\n    # e.g. CPU, GPU along with the Cloud Provider node size e.g. size\n    # of the VM.\n    tshirtType: string\n  # The number of ranks (replicas) that the cluster was last run with\n  currentReplicas: 0\n  # The first start of a new cluster has completed.\n  firstStartComplete: false\n  # HostManagerStatusResponse - The contents of polling the HostManager\n  # on port 9300n are added to the BR status field. 
This allows clients\n  # to get the Host/Rank/Graph/ML status information.\n  hmStatus:\n    cluster_leader: string\n    cluster_operation: string\n    graph:\n      status: string\n    graph_status: string\n    host_httpd_status: string\n    host_mode: string\n    host_num_gpus: string\n    host_pid: 1\n    host_stats_status: string\n    host_status: string\n    hostname: string\n    hosts:\n      graph_status: string\n      host_httpd_status: string\n      host_mode: string\n      host_pid: 1\n      host_stats_status: string\n      host_status: string\n      ml_status: string\n      query_planner_status: string\n      reveal_status: string\n    license_expiration: string\n    license_status: string\n    license_type: string\n    ml_status: string\n    query_planner_status: string\n    ranks:\n      mode: string\n      # Pid - The OS Process Id for the Rank.\n      pid: 1\n      status: string\n    reveal_status: string\n    system_idle_time: string\n    system_mode: string\n    system_rebalancing: 1\n    system_status: string\n    text:\n      status: string\n    version: string\n  # The fully qualified Ingress routes.\n  ingressUrls:\n    aaw: string\n    dbMonitor: string\n    files: string\n    gadmin: string\n    postgresProxy: string\n    ranks: {}\n    reveal: string\n  # The fully qualified in-cluster Ingress routes.\n  internalIngressUrls:\n    aaw: string\n    dbMonitor: string\n    files: string\n    gadmin: string\n    postgresProxy: string\n    ranks: {}\n    reveal: string\n  # Identify FreeSaaS Cluster\n  isFreeSaaS: false\n  # HostOptions used during DB Cluster Scaling Functions\n  options:\n    ram_limit: 1\n  # OutstandingBilling - A list of hours not yet billed for. Will only\n  # be present if the plan is Pay As You Go and the operator was unable\n  # to send the billing information due to an issue with the cloud\n  # providers billing APIs.\n  outstandingBillableHour:\n  - billable: true\n    billed: true\n    billedAt: string\n    duration: string\n    end: string\n    start: string\n  # The state or phase of the current DB installation\n  phase: string\n
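
Once the cluster is running, the populated status block can be read back from the command line. A minimal sketch; the cluster name kineticacluster-sample, the gpudb namespace and the kineticaclusters resource name are assumptions to be adjusted for your installation: -

Check the Cluster Phase
kubectl -n gpudb get kineticaclusters kineticacluster-sample -o jsonpath='{.status.phase}'\n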
","tags":["Reference"]},{"location":"Reference/kinetica_workbench/","title":"Workbench CRD Reference","text":"","tags":["Reference"]},{"location":"Reference/kinetica_workbench/#coming-soon","title":"Coming Soon","text":"","tags":["Reference"]},{"location":"Reference/workbench/","title":"Kinetica Workbench Configuration","text":"
  • kubectl (yaml)
  • Helm Chart
","tags":["Reference"]},{"location":"Reference/workbench/#workbench","title":"Workbench","text":"kubectl

Using kubectl, a CustomResource of type Workbench is used to define a new Kinetica Workbench in a yaml file.

The basic Group, Version, Kind or GVK to instantiate a Kinetica Workbench is as follows: -

Workbench GVK
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\n
","tags":["Reference"]},{"location":"Reference/workbench/#metadata","title":"Metadata","text":"

to which we add a metadata: block for the name of the Workbench CR along with the namespace into which we are targeting the installation of the Workbench.

Workbench metadata
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\nmetadata:\n  name: workbench-kinetica-cluster\n  namespace: gpudb\n

The simplest valid Workbench CR looks as follows: -

workbench.yaml
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\nmetadata:\n  name: workbench-kinetica-cluster\n  namespace: gpudb\nspec:\n  executeSqlLimit: 10000\n  fqdn: kinetica-cluster.saas.kinetica.com\n  image: kinetica/workbench:v7.1.9-8.rc1\n  letsEncrypt:\n    enabled: false\n  userIdleTimeout: 60\n  ingressController: nginx-ingress\n

1. fqdn - the fully qualified domain name the Workbench will be served on

2. image - the Kinetica Workbench container image to deploy
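
The Workbench CR can then be applied to the Kubernetes cluster with kubectl. A minimal sketch, assuming the CR above has been saved as workbench.yaml: -

Apply the Workbench CR
kubectl -n gpudb apply -f workbench.yaml\n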

helm","tags":["Reference"]},{"location":"Setup/","title":"Kinetica for Kubernetes Setup","text":"
  • Set up in 15 minutes

    Install the Kinetica DB locally on Kind or k3s with helm to get up and running in minutes. Quickstart

  • Prepare to Install

    What you need to know & do before beginning a production installation. Preparation and Prerequisites

  • Production DB Installation

    Install the Kinetica DB with helm to get up and running quickly. Installation

  • Channel Your Inner Ninja

    Advanced Installation Topics which go beyond the basic installation. Advanced Topics

","tags":["Getting Started","Installation"]},{"location":"Support/","title":"Support","text":"
  • Taking the next steps

    Further tutorials or help on configuring Kinetica in different environments. Help & Tutorials

  • Locating Issues

    In the unlikely event that you need to troubleshoot your installation, help can be found here. Troubleshooting

  • FAQ

    Frequently Asked Questions. FAQ

","tags":["Support"]},{"location":"Troubleshooting/troubleshooting/","title":"Troubleshooting","text":"","tags":["Support"]},{"location":"Troubleshooting/troubleshooting/#coming-soon","title":"Coming Soon","text":"","tags":["Support"]},{"location":"tags/","title":"Categories","text":"

Following is a list of relevant documentation categories:

"},{"location":"tags/#aks","title":"AKS","text":"
  • Azure AKS
"},{"location":"tags/#administration","title":"Administration","text":"
  • Administration
  • Grant management
  • Resource group management
  • Role Management
  • Schema management
  • User Management
  • Kinetica Cluster Grants Reference
  • Kinetica Cluster Resource Groups Reference
  • Kinetica Cluster Roles Reference
  • Kinetica Cluster Schemas Reference
  • Kinetica Cluster Users Reference
"},{"location":"tags/#advanced","title":"Advanced","text":"
  • Advanced
  • Advanced Topics
  • Air-Gapped Environments
  • Alternative Charts
  • Ingress Configuration
  • ingress-nginx Ingress Configuration
  • Kinetica DB on OS X (Arm64)
  • Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations
  • Bare Metal/VM Installation - kubeadm
  • S3 Storage for Dev/Test
  • nginx-ingress Ingress Configuration
"},{"location":"tags/#architecture","title":"Architecture","text":"
  • Architecture
  • Core Database Architecture
  • Kubernetes Architecture
"},{"location":"tags/#configuration","title":"Configuration","text":"
  • Ingress Configuration
  • ingress-nginx Ingress Configuration
  • nginx-ingress Ingress Configuration
  • How to change the Clusters FQDN
  • OpenTelemetry
"},{"location":"tags/#development","title":"Development","text":"
  • Kinetica DB on OS X (Arm64)
  • S3 Storage for Dev/Test
  • Quickstart
"},{"location":"tags/#eks","title":"EKS","text":"
  • Amazon EKS
"},{"location":"tags/#getting-started","title":"Getting Started","text":"
  • Getting Started
  • Azure AKS
  • Amazon EKS
  • Preparation & Prerequisites
  • Quickstart
  • Kinetica for Kubernetes Setup
"},{"location":"tags/#ingress","title":"Ingress","text":"
  • Ingress Configuration
  • ingress-nginx Ingress Configuration
  • Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations
  • nginx-ingress Ingress Configuration
"},{"location":"tags/#installation","title":"Installation","text":"
  • Air-Gapped Environments
  • Alternative Charts
  • Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations
  • Bare Metal/VM Installation - kubeadm
  • S3 Storage for Dev/Test
  • Getting Started
  • Kinetica for Kubernetes Installation
  • CPU
  • GPU
  • Preparation & Prerequisites
  • Quickstart
  • Core DB CRDs
  • Kinetica for Kubernetes Setup
"},{"location":"tags/#monitoring","title":"Monitoring","text":"
  • Logs
  • Metrics Collection & Display
  • OpenTelemetry
"},{"location":"tags/#operations","title":"Operations","text":"
  • Logs
  • Metrics Collection & Display
  • Operational Management
  • Kinetica for Kubernetes Backup & Restore
  • OpenTelemetry
  • Kinetica for Kubernetes Data Rebalancing
  • Kinetica for Kubernetes Suspend & Resume
  • Kinetica Cluster Backups Reference
  • Core DB CRDs
  • Kinetica Cluster Restores Reference
"},{"location":"tags/#reference","title":"Reference","text":"
  • Reference Section
  • Kinetica Database Configuration
  • Kinetica Operators
  • Kinetica Cluster Admins Reference
  • Kinetica Cluster Backups Reference
  • Kinetica Cluster Grants Reference
  • Core DB CRDs
  • Kinetica Cluster Resource Groups Reference
  • Kinetica Cluster Restores Reference
  • Kinetica Cluster Roles Reference
  • Kinetica Cluster Schemas Reference
  • Kinetica Cluster Users Reference
  • Kinetica Clusters Reference
  • Kinetica Workbench Reference
  • Kinetica Workbench Configuration
"},{"location":"tags/#storage","title":"Storage","text":"
  • S3 Storage for Dev/Test
  • Amazon EKS
"},{"location":"tags/#support","title":"Support","text":"
  • How to change the Clusters FQDN
  • FAQ
  • Help & Tutorials
  • Creating Users, Roles, Schemas and other Kinetica DB Objects
  • Support
  • Troubleshooting
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"kineticadb/charts","text":"

Accelerate your AI and analytics. Kinetica harnesses real-time data and the power of CPUs & GPUs for lightning-fast insights. It is uniquely designed for fast and flexible analytics on large volumes of changing data, delivering incredible performance.

Kinetica DB can be quickly installed into Kubernetes using Helm.
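
For example, the Kinetica charts can be registered with Helm before following the Quickstart or Installation guides below. A minimal sketch; the repository URL is an assumption based on this repository's GitHub Pages and should be verified against the Quickstart: -

Add the Kinetica Helm Repository
helm repo add kinetica-operators https://kineticadb.github.io/charts\nhelm repo update\n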

  • Set up in 15 minutes

    Install the Kinetica DB locally on Kind or k3s with helm to get up and running in minutes. Quickstart

  • Prepare to Install

    What you need to know & do before beginning a production installation. Preparation and Prerequisites

  • Production DB Installation

    Install the Kinetica DB with helm to get up and running quickly. Installation

  • Channel Your Inner Ninja

    Advanced Installation Topics which go beyond the basic installation. Advanced Topics

  • Running and Managing the Platform

    Metrics, Monitoring, Logs and Telemetry Distribution. Operations

  • Product Architecture

    The Modern Analytics Database Architected for Performance at Scale. Architecture

  • Support

    Additional Help, Tutorials and Troubleshooting resources. Support

  • Configuration in Detail

    Detailed reference material for the Helm Charts & Kinetica for Kubernetes CRDs. Reference Documentation

"},{"location":"tags/","title":"Categories","text":"

Following is a list of relevant documentation categories:

"},{"location":"tags/#aks","title":"AKS","text":"
  • Azure AKS
"},{"location":"tags/#administration","title":"Administration","text":"
  • Administration
  • Grant management
  • Resource group management
  • Role Management
  • Schema management
  • User Management
  • Kinetica Cluster Grants Reference
  • Kinetica Cluster Resource Groups Reference
  • Kinetica Cluster Roles Reference
  • Kinetica Cluster Schemas Reference
  • Kinetica Cluster Users Reference
"},{"location":"tags/#advanced","title":"Advanced","text":"
  • Advanced
  • Advanced Topics
  • Air-Gapped Environments
  • Alternative Charts
  • Ingress Configuration
  • ingress-nginx Ingress Configuration
  • Kinetica DB on OS X (Arm64)
  • Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations
  • Bare Metal/VM Installation - kubeadm
  • S3 Storage for Dev/Test
  • nginx-ingress Ingress Configuration
"},{"location":"tags/#architecture","title":"Architecture","text":"
  • Architecture
  • Core Database Architecture
  • Kubernetes Architecture
"},{"location":"tags/#configuration","title":"Configuration","text":"
  • Ingress Configuration
  • ingress-nginx Ingress Configuration
  • nginx-ingress Ingress Configuration
  • How to change the Clusters FQDN
  • OpenTelemetry
"},{"location":"tags/#development","title":"Development","text":"
  • Kinetica DB on OS X (Arm64)
  • S3 Storage for Dev/Test
  • Quickstart
"},{"location":"tags/#eks","title":"EKS","text":"
  • Amazon EKS
"},{"location":"tags/#getting-started","title":"Getting Started","text":"
  • Getting Started
  • Azure AKS
  • Amazon EKS
  • Preparation & Prerequisites
  • Quickstart
  • Kinetica for Kubernetes Setup
"},{"location":"tags/#ingress","title":"Ingress","text":"
  • Ingress Configuration
  • ingress-nginx Ingress Configuration
  • Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations
  • nginx-ingress Ingress Configuration
"},{"location":"tags/#installation","title":"Installation","text":"
  • Air-Gapped Environments
  • Alternative Charts
  • Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations
  • Bare Metal/VM Installation - kubeadm
  • S3 Storage for Dev/Test
  • Getting Started
  • Kinetica for Kubernetes Installation
  • CPU
  • GPU
  • Preparation & Prerequisites
  • Quickstart
  • Core DB CRDs
  • Kinetica for Kubernetes Setup
"},{"location":"tags/#monitoring","title":"Monitoring","text":"
  • Logs
  • Metrics Collection & Display
  • OpenTelemetry
"},{"location":"tags/#operations","title":"Operations","text":"
  • Logs
  • Metrics Collection & Display
  • Operational Management
  • Kinetica for Kubernetes Backup & Restore
  • OpenTelemetry
  • Kinetica for Kubernetes Data Rebalancing
  • Kinetica for Kubernetes Suspend & Resume
  • Kinetica Cluster Backups Reference
  • Core DB CRDs
  • Kinetica Cluster Restores Reference
"},{"location":"tags/#reference","title":"Reference","text":"
  • Reference Section
  • Kinetica Database Configuration
  • Kinetica Operators
  • Kinetica Cluster Admins Reference
  • Kinetica Cluster Backups Reference
  • Kinetica Cluster Grants Reference
  • Core DB CRDs
  • Kinetica Cluster Resource Groups Reference
  • Kinetica Cluster Restores Reference
  • Kinetica Cluster Roles Reference
  • Kinetica Cluster Schemas Reference
  • Kinetica Cluster Users Reference
  • Kinetica Clusters Reference
  • Kinetica Workbench Reference
  • Kinetica Workbench Configuration
"},{"location":"tags/#storage","title":"Storage","text":"
  • S3 Storage for Dev/Test
  • Amazon EKS
"},{"location":"tags/#support","title":"Support","text":"
  • How to change the Clusters FQDN
  • FAQ
  • Help & Tutorials
  • Creating Users, Roles, Schemas and other Kinetica DB Objects
  • Support
  • Troubleshooting
"},{"location":"Administration/","title":"Administration","text":"
  • DB Clusters

    Core Kinetica Database Cluster Management.

    KineticaCluster

  • DB Users

    Kinetica Database User Management.

    KineticaUser

  • DB Roles

    Kinetica Database Role Management.

    KineticaRole

  • DB Schemas

    Kinetica Database Schema Management.

    KineticaSchema

  • DB Grants

    Kinetica Database Grant Management.

    KineticaGrant

  • DB Resource Groups

    Kinetica Database Resource Group Management.

    KineticaResourceGroup

  • DB Administration

    Kinetica Database Administration.

    KineticaAdmin

  • DB Backups

    Kinetica Database Backup Management.

    Note

    This requires Velero to be installed on the Kubernetes Cluster.

    KineticaBackup

  • DB Restore

    Kinetica Database Restoration.

    Note

    This requires Velero to be installed on the Kubernetes Cluster.

    KineticaRestore

Home

","tags":["Administration"]},{"location":"Administration/role_management/","title":"Role Management","text":"

Management of roles is done with the KineticaRole CRD.

kubectl Usage

From the kubectl command line they are referenced by kineticaroles or the short form kr.

","tags":["Administration"]},{"location":"Administration/role_management/#list-roles","title":"List Roles","text":"

To list the roles deployed to a Kinetica DB installation we can use the following from the command-line: -

kubectl -n gpudb get kineticaroles or kubectl -n gpudb get kr

where the namespace -n gpudb matches the namespace of the Kinetica DB installation.

This outputs

| Name | Ring Name | Role | Resource Group Name | LDAP | DB |
|---|---|---|---|---|---|
| db-users | kinetica-k8s-sample | db_users | | OK | OK |
| global-admins | kinetica-k8s-sample | global_admins | | OK | OK |
","tags":["Administration"]},{"location":"Administration/role_management/#name","title":"Name","text":"

The name of the Kubernetes CR, i.e. the metadata.name; this is not necessarily the name of the role.

","tags":["Administration"]},{"location":"Administration/role_management/#ring-name","title":"Ring Name","text":"

The name of the KineticaCluster the role is created in.

","tags":["Administration"]},{"location":"Administration/role_management/#role-name","title":"Role Name","text":"

The name of the role as contained within LDAP & the DB.

","tags":["Administration"]},{"location":"Administration/role_management/#role-creation","title":"Role Creation","text":"test-role-2.yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaRole\nmetadata:\n    name: test-role-2\n    namespace: gpudb\nspec:\n    ringName: kineticacluster-sample\n    role:\n        name: \"test_role2\"\n
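The role CR can then be applied with kubectl: -
Create the Role
kubectl apply -f test-role-2.yaml\n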
","tags":["Administration"]},{"location":"Administration/role_management/#role-deletion","title":"Role Deletion","text":"

To delete a role from the Kinetica Cluster simply delete the KineticaRole CR from Kubernetes: -

Delete Role
kubectl -n gpudb delete kr test-role-2\n
","tags":["Administration"]},{"location":"Administration/user_management/","title":"User Management","text":"

Management of users is done with the KineticaUser CRD.

kubectl Usage

From the kubectl command line they are referenced by kineticausers or the short form ku.

","tags":["Administration"]},{"location":"Administration/user_management/#list-users","title":"List Users","text":"

To list the users deployed to a Kinetica DB installation we can use the following from the command-line: -

kubectl -n gpudb get kineticausers or kubectl -n gpudb get ku

where the namespace -n gpudb matches the namespace of the Kinetica DB installation.

This outputs

| Name | Action | Ring Name | UID | Last Name | Given Name | Display Name | LDAP | DB | Reveal |
|---|---|---|---|---|---|---|---|---|---|
| kadmin | upsert | kinetica-k8s-sample | kadmin | Account | Admin | Admin Account | OK | OK | OK |
","tags":["Administration"]},{"location":"Administration/user_management/#name","title":"Name","text":"

The name of the Kubernetes CR, i.e. the metadata.name; this is not necessarily the name of the user.

","tags":["Administration"]},{"location":"Administration/user_management/#action","title":"Action","text":"

There are two actions possible on a KineticaUser. The first is upsert, which is for user creation or modification. The second is change-password, which shows when a user password reset has been performed.

","tags":["Administration"]},{"location":"Administration/user_management/#ring-name","title":"Ring Name","text":"

The name of the KineticaCluster the user is created in.

","tags":["Administration"]},{"location":"Administration/user_management/#uid","title":"UID","text":"

The unique, user id to use in LDAP & the DB to reference this user.

","tags":["Administration"]},{"location":"Administration/user_management/#last-name","title":"Last Name","text":"

Last Name refers to the user's last name or surname.

sn in LDAP terms.

","tags":["Administration"]},{"location":"Administration/user_management/#given-name","title":"Given Name","text":"

Given Name is the first name, also called the Christian name.

givenName in LDAP terms.

","tags":["Administration"]},{"location":"Administration/user_management/#display-name","title":"Display Name","text":"

The name shown on any UI representation.

","tags":["Administration"]},{"location":"Administration/user_management/#ldap","title":"LDAP","text":"

Identifies if the user has been successfully created within LDAP.

  • '' - if empty the user has not yet been created in LDAP
  • 'OK' - shows the user has been successfully created within LDAP
  • 'Failed' - shows there was a failure adding the user to LDAP
","tags":["Administration"]},{"location":"Administration/user_management/#db","title":"DB","text":"

Identifies if the user has been successfully created within the DB.

  • '' - if empty the user has not yet been created in the DB
  • 'OK' - shows the user has been successfully created within the DB
  • 'Failed' - shows there was a failure adding the user to the DB
","tags":["Administration"]},{"location":"Administration/user_management/#reveal","title":"Reveal","text":"

Identifies if the user has been successfully created within Reveal.

  • '' - if empty the user has not yet been created in Reveal
  • 'OK' - shows the user has been successfully created within Reveal
  • 'Failed' - shows there was a failure adding the user to Reveal
","tags":["Administration"]},{"location":"Administration/user_management/#user-creation","title":"User Creation","text":"

User creation requires two Kubernetes CRs to be submitted to Kubernetes and processed by the Kinetica DB Operator.

  • User Secret (Password)
  • Kinetica User

Creation Sequence

It is preferable to create the User Secret prior to creating the KineticaUser.

Secret Deletion

The User Secret will be deleted once the KineticaUser is created by the operator. The user's password will be stored in LDAP and will not be present in Kubernetes.

","tags":["Administration"]},{"location":"Administration/user_management/#user-secret","title":"User Secret","text":"

In this example a user Fred Smith will be created.

fred-smith-secret.yaml
apiVersion: v1\nkind: Secret\nmetadata:\n  name: fred-smith-secret\n  namespace: gpudb\nstringData:\n  password: testpassword\n
Create the User Password Secret
kubectl apply -f fred-smith-secret.yaml\n
","tags":["Administration"]},{"location":"Administration/user_management/#kineticauser","title":"KineticaUser","text":"user-fred-smith.yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaUser\nmetadata:\n  name: user-fred-smith\n  namespace: gpudb\nspec:\n  ringName: kineticacluster-sample\n  uid: fred\n  action: upsert\n  reveal: true\n  upsert:\n    userPrincipalName: fred.smith@example.com\n    givenName: Fred\n    displayName: FredSmith\n    lastName: Smith\n    passwordSecret: fred-smith-secret\n
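The KineticaUser CR can then be applied with kubectl: -
Create the KineticaUser
kubectl apply -f user-fred-smith.yaml\n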
","tags":["Administration"]},{"location":"Administration/user_management/#user-deletion","title":"User Deletion","text":"

To delete a user from the Kinetica Cluster simply delete the User CR from Kubernetes: -

Delete User
kubectl -n gpudb delete ku user-fred-smith \n
","tags":["Administration"]},{"location":"Administration/user_management/#change-password","title":"Change Password","text":"

To change a user's password we use the change-password action rather than the upsert action we used previously.

Creation Sequence

It is preferable to create the User Secret prior to creating the KineticaUser.

Secret Deletion

The User Secret will be deleted once the KineticaUser is created by the operator. The user's password will be stored in LDAP and will not be present in Kubernetes.

fred-smith-change-pwd-secret.yaml
apiVersion: v1\nkind: Secret\nmetadata:\n  name: fred-smith-change-pwd-secret\n  namespace: gpudb\nstringData:\n  password: testpassword\n
Create the User Password Secret
kubectl apply -f fred-smith-change-pwd-secret.yaml\n
user-fred-smith-change-password.yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaUser\nmetadata:\n  name: user-fred-smith-change-password\n  namespace: gpudb\nspec:\n  ringName: kineticacluster-sample\n  uid: fred\n  action: change-password\n  changePassword:\n    passwordSecret: fred-smith-change-pwd-secret\n
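Apply the change-password CR to trigger the reset: -
Apply the Password Change
kubectl apply -f user-fred-smith-change-password.yaml\n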
","tags":["Administration"]},{"location":"Administration/user_management/#advanced-topics","title":"Advanced Topics","text":"","tags":["Administration"]},{"location":"Administration/user_management/#limit-user-resources","title":"Limit User Resources","text":"","tags":["Administration"]},{"location":"Administration/user_management/#data-limit","title":"Data Limit","text":"

KIFs user data size limit.

dataLimit
spec:\n  upsert:\n    dataLimit: 10Gi\n
","tags":["Administration"]},{"location":"Administration/user_management/#user-kifs-usage","title":"User Kifs Usage","text":"

Kifs Enablement

In order to use the Kifs user features below there is a requirement that Kifs is enabled on the Kinetica DB.

","tags":["Administration"]},{"location":"Administration/user_management/#home-directory","title":"Home Directory","text":"

When creating a new user it is possible to create a 'home' directory for that user within the Kifs filesystem by using the createHomeDirectory option.

createHomeDirectory
spec:\n  upsert:\n    createHomeDirectory: true\n
","tags":["Administration"]},{"location":"Administration/user_management/#limit-directory-storage","title":"Limit Directory Storage","text":"

It is possible to limit the amount of Kifs file storage the user has by adding kifsDataLimit to the user creation yaml and setting the value to a Kubernetes Quantity e.g. 2Gi

kifsDataLimit
spec:\n  upsert:\n    kifsDataLimit: 2Gi\n
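These options can be combined in a single upsert block; a minimal sketch merging the fragments shown above: -
Combined upsert options (sketch)
spec:\n  upsert:\n    createHomeDirectory: true\n    dataLimit: 10Gi\n    kifsDataLimit: 2Gi\n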
","tags":["Administration"]},{"location":"Advanced/","title":"Advanced Topics","text":"
  • Find alternative chart versions

    How to use pre-release or development Chart version if requested to by Kinetica Support. Alternative Charts

  • Configuring Ingress Records

    How to expose Kinetica via Kubernetes Ingress. Ingress Configuration

  • Air-Gapped Environments

    Specifics for installing Kinetica for Kubernetes in an Air-Gapped Environment Airgapped

  • Using your own OpenTelemetry Collector

    How to configure Kinetica for Kubernetes to use your own OpenTelemetry collector.

    External OTEL

  • Minio for Dev/Test S3 Storage

    Install Minio in order to enable S3 storage for Development.

    min.io

  • Creating Resources with Kubernetes APIs

    Create Users, Roles, DB Schemas etc. using Kubernetes CRs. Resources

  • Kinetica on OS X (Apple Silicon)

    Install the Kinetica DB on a new Kubernetes 'production-like' cluster on Apple OS X (Apple Silicon) using UTM. Apple ARM64

  • Bare Metal/VM Installation from Scratch

    Install the Kinetica DB on a new Kubernetes 'production-like' bare metal (or VMs) cluster via kubeadm using cilium Networking, kube-vip LoadBalancer. Bare Metal/VM Installation

  • Software LoadBalancer

    Install a software Kubernetes CCM/LoadBalancer for bare metal or VM based Kubernetes Clusters. kube-vip LoadBalancer.

    Software LoadBalancer

","tags":["Advanced"]},{"location":"Advanced/advanced_topics/","title":"Advanced Topics","text":"","tags":["Advanced"]},{"location":"Advanced/advanced_topics/#install-from-a-developmentpre-release-chart-version","title":"Install from a development/pre-release chart version","text":"

Find all alternative chart versions with:

Find alternative chart versions
helm search repo kinetica-operators --devel --versions\n

Then append --devel --version [CHART-DEVEL-VERSION] to the end of the Helm install command. See here.

","tags":["Advanced"]},{"location":"Advanced/airgapped/","title":"Air-Gapped Environments","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#obtaining-the-kinetica-images","title":"Obtaining the Kinetica Images","text":"Kinetica Images for an Air-Gapped Environment

If you are installing Kinetica with Helm in an air-gapped environment you will either need a Registry Proxy to pass the requests through or to download the images and push them to your internal Registry.

For information on ways to transfer the files into an air-gapped environment See here.

Please select the method to transfer the images: -

mindthegap containerd docker

It is possible to use mesosphere/mindthegap

mindthegap

mindthegap provides utilities to manage air-gapped image bundles, both creating image bundles and seeding images from a bundle into an existing OCI registry or directly loading them to containerd.

This makes it possible with mindthegap to

  • create a single archive bundle of all the required images outside the air-gapped environment
  • run mindthegap using the archive bundle on the Kubernetes Nodes to bulk load the images into containerd in a single command.

Kinetica provides two mindthegap yaml files which list all the necessary images for Kinetica for Kubernetes.

  • CPU only
  • CPU & nVidia CUDA GPU
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#required-container-images","title":"Required Container Images","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#dockerio-required-kinetica-images-for-all-installations","title":"docker.io (Required Kinetica Images for All Installations)","text":"
  • docker.io/kinetica/kinetica-k8s-operator:v7.2.0-3.rc-3
    • docker.io/kinetica/kinetica-k8s-cpu:v7.2.0-3.rc-3 or
    • docker.io/kinetica/kinetica-k8s-cpu-avx512:v7.2.0-3.rc-3 or
    • docker.io/kinetica/kinetica-k8s-gpu:v7.2.0-3.rc-3
  • docker.io/kinetica/workbench-operator:v7.2.0-3.rc-3
  • docker.io/kinetica/workbench:v7.2.0-3.rc-3
  • docker.io/kinetica/kinetica-k8s-monitor:v7.2.0-3.rc-3
  • docker.io/kinetica/busybox:v7.2.0-3.rc-3
  • docker.io/kinetica/fluent-bit:v7.2.0-3.rc-3
  • docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#nvcrio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"nvcr.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)","text":"
  • nvcr.io/nvidia/gpu-operator:v23.9.1
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#registryk8sio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"registry.k8s.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)","text":"
  • registry.k8s.io/nfd/node-feature-discovery:v0.14.2
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#dockerio-required-supporting-images","title":"docker.io (Required Supporting Images)","text":"
  • docker.io/bitnami/openldap:2.6.7
  • docker.io/alpine/openssl:latest (used by bitnami/openldap)
  • docker.io/otel/opentelemetry-collector-contrib:0.95.0
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#quayio-required-supporting-images","title":"quay.io (Required Supporting Images)","text":"
  • quay.io/brancz/kube-rbac-proxy:v0.14.2
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#optional-container-images","title":"Optional Container Images","text":"

These images are only required if certain features are enabled as part of the Helm installation: -

  • CertManager
  • ingress-nginx
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#quayio-optional-supporting-images","title":"quay.io (Optional Supporting Images)","text":"
  • quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#registryk8sio-optional-supporting-images","title":"registry.k8s.io (Optional Supporting Images)","text":"
  • registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing Ingress nGinx via Kinetica Helm Chart)
  • registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c

It is possible with containerd to pull images, save them and load them either into a Container Registry in the air gapped environment or directly into another containerd instance.

If the target containerd is on a node running a Kubernetes Cluster then these images will be sourced by Kubernetes from the loaded images, via CRI, with no requirement to pull them from an external source e.g. a Registry or Mirror.

sudo required

Depending on how containerd has been installed and configured many of the example calls below may require running with sudo

It is possible with docker to pull images, save them and load them into an OCI Container Registry in the air gapped environment.

Pull a remote image (docker)
docker pull --platform linux/amd64 docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2\n
Export a local image (docker)
docker save -o kinetica-k8s-cpu-v7.2.2-3.ga-2.tar \\\ndocker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2\n

We can now transfer this archive (kinetica-k8s-cpu-v7.2.2-3.ga-2.tar) to the Kubernetes Node inside the air-gapped environment.

","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#which-kinetica-core-image-do-i-use","title":"Which Kinetica Core Image do I use?","text":"Container Image Intel (AMD64) Intel (AMD64 AVX512) Amd (AMD64) Graviton (aarch64) Apple Silicon (aarch64) kinetica-k8s-cpu (1) kinetica-k8s-cpu-avx512 kinetica-k8s-gpu (2) (2) (2)
  1. It is preferable on an Intel AVX512 enabled CPU to use the kinetica-k8s-cpu-avx512 container image
  2. With a supported nVidia GPU.
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#install-mindthegap","title":"Install mindthegap","text":"Install mindthegap
wget https://github.com/mesosphere/mindthegap/releases/download/v1.13.1/mindthegap_v1.13.1_linux_amd64.tar.gz\ntar zxvf mindthegap_v1.13.1_linux_amd64.tar.gz\n
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#mindthegap-create-the-bundle","title":"mindthegap - Create the Bundle","text":"mindthegap create image-bundle
mindthegap create image-bundle --images-file mtg.yaml --platform linux/amd64\n

where --images-file is either the CPU or GPU Kinetica mindthegap yaml file.
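
For reference, mindthegap images-files are YAML documents keyed by registry host. A minimal sketch, assuming the upstream mindthegap file format and a subset of the docker.io images listed above: -
mtg.yaml (sketch)
docker.io:\n  - kinetica/kinetica-k8s-operator:v7.2.0-3.rc-3\n  - kinetica/workbench:v7.2.0-3.rc-3\n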

","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#mindthegap-import-the-bundle","title":"mindthegap - Import the Bundle","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#mindthegap-import-to-containerd","title":"mindthegap - Import to containerd","text":"mindthegap import image-bundle
mindthegap import image-bundle --image-bundle images.tar [--containerd-namespace k8s.io]\n

If --containerd-namespace is not specified, images will be imported into k8s.io namespace.

sudo required

Depending on how containerd has been installed and configured it may require running the above command with sudo

","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#mindthegap-import-to-an-internal-oci-registry","title":"mindthegap - Import to an internal OCI Registry","text":"mindthegap import image-bundle
mindthegap push bundle --bundle <path/to/bundle.tar> \\\n--to-registry <registry.address> \\\n[--to-registry-insecure-skip-tls-verify]\n
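For example, to push the bundle to a hypothetical internal registry at registry.internal:5000 (substitute your own registry address): -
Push the bundle (example)
mindthegap push bundle --bundle images.tar \\\n--to-registry registry.internal:5000 \\\n--to-registry-insecure-skip-tls-verify\n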
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#containerd-using-containerd-to-pull-and-export-an-image","title":"containerd - Using containerd to pull and export an image","text":"

Similar to docker pull, we can use ctr image pull to pull the core Kinetica DB CPU-based image

Pull a remote image (containerd)
ctr image pull docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2\n

We now need to export the pulled image as an archive to the local filesystem.

Export a local image (containerd)
ctr image export kinetica-k8s-cpu-v7.2.2-3.ga-2.tar \\\ndocker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2\n

We can now transfer this archive (kinetica-k8s-cpu-v7.2.2-3.ga-2.tar) to the Kubernetes Node inside the air-gapped environment.

","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#containerd-using-containerd-to-import-an-image","title":"containerd - Using containerd to import an image","text":"

Using containerd to import an image on to a Kubernetes Node on which a Kinetica Cluster is running.

Import the Images
ctr -n=k8s.io images import kinetica-k8s-cpu-v7.2.2-3.ga-2.tar\n

-n=k8s.io

It is possible to use ctr images import kinetica-k8s-cpu-v7.2.2-3.ga-2.tar to import the image to containerd.

However, in order for the image to be visible to the Kubernetes Cluster running on containerd it is necessary to add the parameter -n=k8s.io.

","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#containerd-verifying-the-image-is-available","title":"containerd - Verifying the image is available","text":"

To verify the image is loaded into containerd on the node run the following on the node: -

Verify containerd Images
ctr image ls\n

To verify the image is visible to Kubernetes on the node run the following: -

Verify CRI Images
crictl images\n
","tags":["Advanced","Installation"]},{"location":"Advanced/airgapped/#docker-using-docker-to-import-an-image","title":"docker - Using docker to import an image","text":"

Using docker to import an image on to a Kubernetes Node on which a Kinetica Cluster is running.

Import the Images
docker load -i kinetica-k8s-cpu-v7.2.2-3.ga-2.tar\n
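If the image is destined for an internal OCI Registry rather than a single node, it can then be re-tagged and pushed; a sketch, where <registry>/<repository> is a placeholder for your internal registry path: -
Tag and push the image (docker)
docker tag docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2 <registry>/<repository>/kinetica-k8s-cpu:v7.2.2-3.ga-2\ndocker push <registry>/<repository>/kinetica-k8s-cpu:v7.2.2-3.ga-2\n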
","tags":["Advanced","Installation"]},{"location":"Advanced/alternative_charts/","title":"Using Alternative Helm Charts","text":"

If requested by Kinetica Support you can search and use pre-release versions of the Kinetica Helm Charts.

","tags":["Advanced","Installation"]},{"location":"Advanced/alternative_charts/#install-from-a-developmentpre-release-chart-version","title":"Install from a development/pre-release chart version","text":"

Find all alternative chart versions with:

Find alternative chart versions
helm search repo kinetica-operators --devel --versions\n

Then append --devel --version [CHART-DEVEL-VERSION] to the end of the Helm install command.

Helm install kinetica-operators
helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--devel \\\n--version 72.0 \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n
","tags":["Advanced","Installation"]},{"location":"Advanced/ingress_configuration/","title":"Ingress Configuration","text":"
  • ingress-nginx Configuration

    How to enable Ingress with ingress-nginx for Kinetica DB.

    ingress-nginx

  • nginx-ingress Configuration

    How to enable Ingress with nginx-ingress for Kinetica DB.

    nginx-ingress

","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/","title":"ingress-nginx Ingress Configuration","text":"

To use an 'external' ingress-nginx controller, i.e. not the one optionally installed by the Kinetica Operators Helm chart, it is necessary to disable ingress in the KineticaCluster CR.

The field spec.ingressController: nginx should be set to spec.ingressController: none.

It is then necessary to create the required Ingress CRs by hand. Below is a list of the Ingress paths that need to be exposed along with sample ingress-nginx CRs.

","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#required-ingress-routes","title":"Required Ingress Routes","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#ingress-routes","title":"Ingress Routes","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#gadmin-paths","title":"GAdmin Paths","text":"Path Service Port /gadmin cluster-name-gadmin-service gadmin (8080/TCP) /tableau cluster-name-gadmin-service gadmin (8080/TCP) /files cluster-name^-gadmin-service gadmin (8080/TCP)

where cluster-name is the name of the Kinetica Cluster, i.e. what is in the .spec.gpudbCluster.clusterName in the KineticaCluster CR.

","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#workbench-paths","title":"Workbench Paths","text":"Path Service Port / workbench-workbench-service workbench-port (8000/TCP)","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#db-rank-0-paths","title":"DB rank-0 Paths","text":"Path Service Port /cluster-145025b8(/gpudb-0(/.*|$)) cluster-145025b8-rank0-service httpd (8082/TCP) /cluster-145025b8/gpudb-0/hostmanager(.*) cluster-145025b8-rank0-service hostmanager (9300/TCP)","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#db-rank-n-paths","title":"DB rank-N Paths","text":"Path Service Port /cluster-145025b8(/gpudb-N(/.*|$)) cluster-145025b8-rank1-service httpd (8082/TCP) /cluster-145025b8/gpudb-N/hostmanager(.*) cluster-145025b8-rank1-service hostmanager (9300/TCP)","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#reveal-paths","title":"Reveal Paths","text":"Path Service Port /reveal cluster-name-reveal-service reveal (8088/TCP) /caravel cluster-name-reveal-service reveal (8088/TCP) /static cluster-name-reveal-service reveal (8088/TCP) /logout cluster-name-reveal-service reveal (8088/TCP) /resetmypassword cluster-name-reveal-service reveal (8088/TCP) /dashboardmodelview cluster-name-reveal-service reveal (8088/TCP) /dashboardmodelviewasync cluster-name-reveal-service reveal (8088/TCP) /slicemodelview cluster-name-reveal-service reveal (8088/TCP) /slicemodelviewasync cluster-name-reveal-service reveal (8088/TCP) /sliceaddview cluster-name-reveal-service reveal (8088/TCP) /databaseview cluster-name-reveal-service reveal (8088/TCP) /databaseasync cluster-name-reveal-service reveal (8088/TCP) /databasetablesasync cluster-name-reveal-service reveal (8088/TCP) /tablemodelview cluster-name-reveal-service reveal (8088/TCP) /csstemplatemodelview cluster-name-reveal-service reveal (8088/TCP) /csstemplatemodelviewasync cluster-name-reveal-service reveal (8088/TCP) /users cluster-name-reveal-service reveal (8088/TCP) /roles cluster-name-reveal-service reveal (8088/TCP) /userstatschartview cluster-name-reveal-service reveal (8088/TCP) /permissions cluster-name-reveal-service reveal (8088/TCP) /viewmenus cluster-name-reveal-service reveal (8088/TCP) /permissionviews cluster-name-reveal-service reveal (8088/TCP) /accessrequestsmodelview cluster-name-reveal-service reveal (8088/TCP) /accessrequestsmodelviewasync cluster-name-reveal-service reveal (8088/TCP) /logmodelview cluster-name-reveal-service reveal (8088/TCP) /logmodelviewasync cluster-name-reveal-service reveal (8088/TCP) /userinfoeditview cluster-name-reveal-service reveal (8088/TCP) /tablecolumninlineview cluster-name-reveal-service reveal (8088/TCP) /sqlmetricinlineview cluster-name-reveal-service reveal (8088/TCP)","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#example-ingress-crs","title":"Example Ingress CRs","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#example-gadmin-ingress-cr","title":"Example GAdmin Ingress CR","text":"Example GAdmin Ingress CR

Example GAdmin Ingress CR

apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name:  cluster-name-gadmin-ingress #(1)!\n  namespace: gpudb\nspec:\n  ingressClassName: nginx\n  tls:\n    - hosts:\n        - cluster-name.example.com #(1)!\n      secretName: kinetica-tls\n  rules:\n    - host: cluster-name.example.com #(1)!\n      http:\n        paths:\n          - path: /gadmin\n            pathType: Prefix\n            backend:\n              service:\n                name: cluster-name-gadmin-service #(1)!\n                port:\n                  name: gadmin\n          - path: /tableau\n            pathType: Prefix\n            backend:\n              service:\n                name: cluster-name-gadmin-service #(1)!\n                port:\n                  name: gadmin\n          - path: /files\n            pathType: Prefix\n            backend:\n              service:\n                name: cluster-name-gadmin-service #(1)!\n                port:\n                  name: gadmin\n
1. where cluster-name is the name of the Kinetica Cluster

","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#example-rank-ingress-cr","title":"Example Rank Ingress CR","text":"Example Rank Ingress CR Example Rank Ingress CR
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: cluster-name-rank1-ingress\n  namespace: gpudb\nspec:\n  ingressClassName: nginx\n  tls:\n    - hosts:\n        - cluster-name.example.com\n      secretName: kinetica-tls\n  rules:\n    - host: cluster-name.example.com\n      http:\n        paths:\n          - path: /cluster-name(/gpudb-1(/.*|$))\n            pathType: Prefix\n            backend:\n              service:\n                name: cluster-name-rank1-service\n                port:\n                  name: httpd\n          - path: /cluster-name/gpudb-1/hostmanager(.*)\n            pathType: Prefix\n            backend:\n              service:\n                name: cluster-name-rank1-service\n                port:\n                  name: hostmanager\n
  1. where cluster-name is the name of the Kinetica Cluster
","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#example-reveal-ingress-cr","title":"Example Reveal Ingress CR","text":"Example Reveal Ingress CR

Example Reveal Ingress CR

    apiVersion: networking.k8s.io/v1\n    kind: Ingress\n    metadata:\n      name: cluster-name-reveal-ingress\n      namespace: gpudb\n    spec:\n      ingressClassName: nginx\n      tls:\n        - hosts:\n            - cluster-name.example.com\n          secretName: kinetica-tls\n      rules:\n        - host: cluster-name.example.com\n          http:\n            paths:\n              - path: /reveal\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /caravel\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /static\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /logout\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /resetmypassword\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /dashboardmodelview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /dashboardmodelviewasync\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /slicemodelview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /slicemodelviewasync\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /sliceaddview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /databaseview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /databaseasync\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /databasetablesasync\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n    
          - path: /tablemodelview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /tablemodelviewasync\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /csstemplatemodelview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /csstemplatemodelviewasync\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /users\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /roles\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /userstatschartview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /permissions\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /viewmenus\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /permissionviews\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /accessrequestsmodelview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /accessrequestsmodelviewasync\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /logmodelview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /logmodelviewasync\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /userinfoeditview\n                pathType: Prefix\n                backend:\n   
               service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /tablecolumninlineview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n              - path: /sqlmetricinlineview\n                pathType: Prefix\n                backend:\n                  service:\n                    name: cluster-name-reveal-service\n                    port:\n                      name: reveal\n
1. where cluster-name is the name of the Kinetica Cluster

","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_nginx_config/#exposing-the-postgres-proxy-port","title":"Exposing the Postgres Proxy Port","text":"

In order to access Kinetica's Postgres functionality some TCP (not HTTP) ports need to be open externally.

For ingress-nginx a configuration file needs to be created to enable port 5432.

tcp-services.yaml

apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: tcp-services\n  namespace: kinetica-system # (1)!\ndata:\n  '5432': gpudb/kinetica-k8s-sample-rank0-service:5432 #(2)!\n  '9002': gpudb/kinetica-k8s-sample-rank0-service:9002 #(3)!\n
1. Change the namespace to the namespace your ingress-nginx controller is running in, e.g. ingress-nginx.
2. This exposes the postgres proxy port on the default 5432 port. If you wish to change this to a non-standard port then it needs to be changed here but also in the Helm values.yaml to match.
3. This port is the Table Monitor port and should always be exposed alongside the Postgres Proxy.
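
Depending on how ingress-nginx was installed, the controller also needs to be told to watch this ConfigMap; a sketch of the relevant controller container argument, assuming the upstream ingress-nginx --tcp-services-configmap flag: -
Controller argument (sketch)
args:\n  - /nginx-ingress-controller\n  - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services\n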

","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/ingress_urls/","title":"Ingress urls","text":""},{"location":"Advanced/ingress_urls/#gadmin-paths","title":"GAdmin Paths","text":"Path Service Port /gadmin cluster-name-gadmin-service gadmin (8080/TCP) /tableau cluster-name-gadmin-service gadmin (8080/TCP) /files cluster-name^-gadmin-service gadmin (8080/TCP)

where cluster-name is the name of the Kinetica Cluster, i.e. what is in the .spec.gpudbCluster.clusterName in the KineticaCluster CR.

"},{"location":"Advanced/ingress_urls/#workbench-paths","title":"Workbench Paths","text":"Path Service Port / workbench-workbench-service workbench-port (8000/TCP)"},{"location":"Advanced/ingress_urls/#db-rank-0-paths","title":"DB rank-0 Paths","text":"Path Service Port /cluster-145025b8(/gpudb-0(/.*|$)) cluster-145025b8-rank0-service httpd (8082/TCP) /cluster-145025b8/gpudb-0/hostmanager(.*) cluster-145025b8-rank0-service hostmanager (9300/TCP)"},{"location":"Advanced/ingress_urls/#db-rank-n-paths","title":"DB rank-N Paths","text":"Path Service Port /cluster-145025b8(/gpudb-N(/.*|$)) cluster-145025b8-rank1-service httpd (8082/TCP) /cluster-145025b8/gpudb-N/hostmanager(.*) cluster-145025b8-rank1-service hostmanager (9300/TCP)"},{"location":"Advanced/ingress_urls/#reveal-paths","title":"Reveal Paths","text":"Path Service Port /reveal cluster-name-reveal-service reveal (8088/TCP) /caravel cluster-name-reveal-service reveal (8088/TCP) /static cluster-name-reveal-service reveal (8088/TCP) /logout cluster-name-reveal-service reveal (8088/TCP) /resetmypassword cluster-name-reveal-service reveal (8088/TCP) /dashboardmodelview cluster-name-reveal-service reveal (8088/TCP) /dashboardmodelviewasync cluster-name-reveal-service reveal (8088/TCP) /slicemodelview cluster-name-reveal-service reveal (8088/TCP) /slicemodelviewasync cluster-name-reveal-service reveal (8088/TCP) /sliceaddview cluster-name-reveal-service reveal (8088/TCP) /databaseview cluster-name-reveal-service reveal (8088/TCP) /databaseasync cluster-name-reveal-service reveal (8088/TCP) /databasetablesasync cluster-name-reveal-service reveal (8088/TCP) /tablemodelview cluster-name-reveal-service reveal (8088/TCP) /csstemplatemodelview cluster-name-reveal-service reveal (8088/TCP) /csstemplatemodelviewasync cluster-name-reveal-service reveal (8088/TCP) /users cluster-name-reveal-service reveal (8088/TCP) /roles cluster-name-reveal-service reveal (8088/TCP) /userstatschartview cluster-name-reveal-service reveal (8088/TCP) /permissions cluster-name-reveal-service reveal (8088/TCP) /viewmenus cluster-name-reveal-service reveal (8088/TCP) /permissionviews cluster-name-reveal-service reveal (8088/TCP) /accessrequestsmodelview cluster-name-reveal-service reveal (8088/TCP) /accessrequestsmodelviewasync cluster-name-reveal-service reveal (8088/TCP) /logmodelview cluster-name-reveal-service reveal (8088/TCP) /logmodelviewasync cluster-name-reveal-service reveal (8088/TCP) /userinfoeditview cluster-name-reveal-service reveal (8088/TCP) /tablecolumninlineview cluster-name-reveal-service reveal (8088/TCP) /sqlmetricinlineview cluster-name-reveal-service reveal (8088/TCP)"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/","title":"Kinetica images list for airgapped environments","text":"Kinetica Images for an Air-Gapped Environment

If you are installing Kinetica with Helm in an air-gapped environment you will either need a Registry Proxy to pass the requests through or to download the images and push them to your internal Registry.

For information on ways to transfer the files into an air-gapped environment See here.

"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#required-container-images","title":"Required Container Images","text":""},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#dockerio-required-kinetica-images-for-all-installations","title":"docker.io (Required Kinetica Images for All Installations)","text":"
  • docker.io/kinetica/kinetica-k8s-operator:v7.2.0-3.rc-3
    • docker.io/kinetica/kinetica-k8s-cpu:v7.2.0-3.rc-3 or
    • docker.io/kinetica/kinetica-k8s-cpu-avx512:v7.2.0-3.rc-3 or
    • docker.io/kinetica/kinetica-k8s-gpu:v7.2.0-3.rc-3
  • docker.io/kinetica/workbench-operator:v7.2.0-3.rc-3
  • docker.io/kinetica/workbench:v7.2.0-3.rc-3
  • docker.io/kinetica/kinetica-k8s-monitor:v7.2.0-3.rc-3
  • docker.io/kinetica/busybox:v7.2.0-3.rc-3
  • docker.io/kinetica/fluent-bit:v7.2.0-3.rc-3
  • docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga
"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#nvcrio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"nvcr.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)","text":"
  • nvcr.io/nvidia/gpu-operator:v23.9.1
"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#registryk8sio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"registry.k8s.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)","text":"
  • registry.k8s.io/nfd/node-feature-discovery:v0.14.2
"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#dockerio-required-supporting-images","title":"docker.io (Required Supporting Images)","text":"
  • docker.io/bitnami/openldap:2.6.7
  • docker.io/alpine/openssl:latest (used by bitnami/openldap)
  • docker.io/otel/opentelemetry-collector-contrib:0.95.0
"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#quayio-required-supporting-images","title":"quay.io (Required Supporting Images)","text":"
  • quay.io/brancz/kube-rbac-proxy:v0.14.2
"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#optional-container-images","title":"Optional Container Images","text":"

These images are only required if certain features are enabled as part of the Helm installation: -

  • CertManager
  • ingress-nginx
"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#quayio-optional-supporting-images","title":"quay.io (Optional Supporting Images)","text":"
  • quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#registryk8sio-optional-supporting-images","title":"registry.k8s.io (Optional Supporting Images)","text":"
  • registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing Ingress nGinx via Kinetica Helm Chart)
  • registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c
"},{"location":"Advanced/kinetica_images_list_for_airgapped_environments/#which-kinetica-core-image-do-i-use","title":"Which Kinetica Core Image do I use?","text":"Container Image Intel (AMD64) Intel (AMD64 AVX512) Amd (AMD64) Graviton (aarch64) Apple Silicon (aarch64) kinetica-k8s-cpu (1) kinetica-k8s-cpu-avx512 kinetica-k8s-gpu (2) (2) (2)
  1. It is preferable on an Intel AVX512 enabled CPU to use the kinetica-k8s-cpu-avx512 container image
  2. With a supported nVidia GPU.
"},{"location":"Advanced/kinetica_mac_arm_k8s/","title":"Kinetica DB on Kubernetes","text":"

This walkthrough will show how to install Kinetica DB on a Mac running OS X. The Kubernetes cluster will be running on VMs with Ubuntu Linux 22.04 ARM64.

This solution is equivalent to a production bare metal installation and does not use Docker, Podman or QEMU but rather Apple native Virtualization.

The Kubernetes cluster will consist of one Master node k8smaster1 and two Worker nodes k8snode1 & k8snode2.

The virtualization platform is UTM.

Obtain a Kinetica License Key

A product license key will be required for install. Please contact Kinetica Support to request a trial key.

Download and install UTM.

","tags":["Advanced","Development"]},{"location":"Advanced/kinetica_mac_arm_k8s/#create-the-vms","title":"Create the VMs","text":"","tags":["Advanced","Development"]},{"location":"Advanced/kinetica_mac_arm_k8s/#k8smaster1","title":"k8smaster1","text":"

For this walkthrough the master node will be 4 vCPU, 8 GB RAM & 40-64 GB disk.

Start the creation of a new VM in UTM. Select Virtualize

Select Linux as the VM OS.

On the Linux page - Select Use Apple Virtualization and an Ubuntu 22.04 (Arm64) ISO.

As this is the master Kubernetes node (VM) it can be smaller than the nodes hosting the Kinetica DB itself.

Set the memory to 8 GB and the number of CPUs to 4.

Set the storage to between 40-64 GB.

This next step is optional if you wish to setup a shared folder between your Mac host & the Linux VM.

The final step to create the VM is a summary. Please check the values shown and hit Save

You should now see your new VM in the left hand pane of the UTM UI.

Go ahead and click the button.

Once the Ubuntu installer comes up follow the steps selecting whichever keyboard etc. you require.

The only changes you need to make are: -

  • Change the installation to Ubuntu Server (minimized)
  • Your server's name to k8smaster1
  • Enable OpenSSH server.

and complete the installation.

Reboot the VM and remove the ISO from the 'external' drive. Log in to the VM and get the VM's IP address with

Bash
ip a\n

Make a note of the IP for later use.

","tags":["Advanced","Development"]},{"location":"Advanced/kinetica_mac_arm_k8s/#k8snode1-k8snode2","title":"k8snode1 & k8snode2","text":"

Repeat the same process to provision one or two nodes depending on how much memory you have available on the Mac.

You need to change the RAM size to 16 GB. You can leave the vCPU count at 4. The disk size depends on how much data you want to ingest; it should, however, be at least 4x the RAM size.

Once installed, again log in to the VM and get the VM's IP address with

Bash
ip a\n

Note

Make a note of the IP(s) for later use.

Your VMs are complete

Continue installing your new VMs by following Bare Metal/VM Installation

","tags":["Advanced","Development"]},{"location":"Advanced/kube_vip_loadbalancer/","title":"Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations","text":"

For our example we are going to enable a Kubernetes based LoadBalancer to issue IP addresses to our Kubernetes Services of type LoadBalancer using kube-vip.

Ingress Service is pending

The ingress-nginx-controller is currently in the pending state as there is no CCM/LoadBalancer

","tags":["Advanced","Ingress","Installation"]},{"location":"Advanced/kube_vip_loadbalancer/#kube-vip","title":"kube-vip","text":"

We will install two components into our Kubernetes Cluster

  • kube-vip-cloud-controller
  • Kubernetes Load-Balancer Service
","tags":["Advanced","Ingress","Installation"]},{"location":"Advanced/kube_vip_loadbalancer/#kube-vip-cloud-controller","title":"kube-vip-cloud-controller","text":"

Quote

The kube-vip cloud provider can be used to populate an IP address for Services of type LoadBalancer similar to what public cloud providers allow through a Kubernetes CCM.

Install the kube-vip CCM

Install the kube-vip CCM
kubectl apply -f https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/main/manifest/kube-vip-cloud-controller.yaml\n

Now we need to setup the required RBAC permissions: -

Install the kube-vip RBAC

Install kube-vip RBAC
kubectl apply -f https://kube-vip.io/manifests/rbac.yaml\n

The following ConfigMap will configure the kube-vip-cloud-controller to obtain IP addresses from the host networks DHCP server. i.e. the DHCP on the physical network that the host machine or VM is connected to.

Install the kube-vip ConfigMap

Install the kube-vip ConfigMap
apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: kubevip\n  namespace: kube-system\ndata:\n  cidr-global: 0.0.0.0/32\n

It is possible to specify IP address ranges; see here.
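
For example, to hand out addresses from a fixed pool instead of DHCP; a sketch assuming the range-global key from the kube-vip cloud provider documentation and the 192.168.2.x network used below: -
kube-vip ConfigMap with an explicit range (sketch)
apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: kubevip\n  namespace: kube-system\ndata:\n  range-global: 192.168.2.200-192.168.2.210\n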

","tags":["Advanced","Ingress","Installation"]},{"location":"Advanced/kube_vip_loadbalancer/#kubernetes-load-balancer-service","title":"Kubernetes Load-Balancer Service","text":"Obtain the Master Node IP address & Interface name Obtain the Master Node IP address & Interface name
ip a\n

In this example the IP address of the master node is 192.168.2.180 and the network interface is enp0s1.

We need to apply the kube-vip DaemonSet, but first we need to create the configuration.

Install the kube-vip Daemonset
apiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  labels:\n    app.kubernetes.io/name: kube-vip-ds\n    app.kubernetes.io/version: v0.7.2\n  name: kube-vip-ds\n  namespace: kube-system\nspec:\n  selector:\n    matchLabels:\n      app.kubernetes.io/name: kube-vip-ds\n  template:\n    metadata:\n      labels:\n        app.kubernetes.io/name: kube-vip-ds\n        app.kubernetes.io/version: v0.7.2\n    spec:\n      affinity:\n        nodeAffinity:\n          requiredDuringSchedulingIgnoredDuringExecution:\n            nodeSelectorTerms:\n            - matchExpressions:\n              - key: node-role.kubernetes.io/master\n                operator: Exists\n            - matchExpressions:\n              - key: node-role.kubernetes.io/control-plane\n                operator: Exists\n      containers:\n      - args:\n        - manager\n        env:\n        - name: vip_arp\n          value: \"true\"\n        - name: port\n          value: \"6443\"\n        - name: vip_interface\n          value: enp0s1\n        - name: vip_cidr\n          value: \"32\"\n        - name: dns_mode\n          value: first\n        - name: cp_enable\n          value: \"true\"\n        - name: cp_namespace\n          value: kube-system\n        - name: svc_enable\n          value: \"true\"\n        - name: svc_leasename\n          value: plndr-svcs-lock\n        - name: vip_leaderelection\n          value: \"true\"\n        - name: vip_leasename\n          value: plndr-cp-lock\n        - name: vip_leaseduration\n          value: \"5\"\n        - name: vip_renewdeadline\n          value: \"3\"\n        - name: vip_retryperiod\n          value: \"1\"\n        - name: address\n          value: 192.168.2.180\n        - name: prometheus_server\n          value: :2112\n        image: ghcr.io/kube-vip/kube-vip:v0.7.2\n        imagePullPolicy: Always\n        name: kube-vip\n        resources: {}\n        securityContext:\n          capabilities:\n            add:\n            - NET_ADMIN\n            - NET_RAW\n      hostNetwork: true\n      serviceAccountName: kube-vip\n      tolerations:\n      - effect: NoSchedule\n        operator: Exists\n      - effect: NoExecute\n        operator: Exists\n  updateStrategy: {}\n

Lines 5, 7, 12, 16, 38 and 62 need modifying to your environment.

Install the kube-vip Daemonset

ARP or BGP

The Daemonset above uses ARP to communicate with the network it is also possible to use BGP. See Here

Example showing DHCP allocated external IP address to the Ingress Controller

Our ingress-nginx-controller has been allocated the IP Address 192.168.2.194.

Ingress Access

The ingress-nginx-controller requires the host FQDN to be on the user requests in order to know how to route the requests to the correct Kubernetes Service. Using the IP address in the URL will cause an error as ingress cannot select the correct service.

List Ingress

If you did not set the FQDN of the Kinetica Cluster to a DNS resolvable hostname, add local.kinetica to your /etc/hosts file in order to be able to access the Kinetica URLs

Edit /etc/hosts
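A sketch of the required entry, assuming the ingress controller was allocated 192.168.2.194 as in the example above: -
/etc/hosts
192.168.2.194 local.kinetica\n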

Accessing the Workbench

You should be able to access the workbench at http://local.kinetica

","tags":["Advanced","Ingress","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/","title":"Bare Metal/VM Installation - kubeadm","text":"

This walkthrough will show how to install Kinetica DB. For this example the Kubernetes cluster will be running on 3 VMs with Ubuntu Linux 22.04 (ARM64).

This solution is equivalent to a production bare metal installation and does not use Docker, Podman or QEMU.

The Kubernetes cluster requires 3 VMs consisting of one Master node k8smaster1 and two Worker nodes k8snode1 & k8snode2.

Purple Example Boxes

The purple boxes in the instructions below can be expanded for a screen recording of the commands & their results.

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#kubernetes-node-installation","title":"Kubernetes Node Installation","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#setup-the-kubernetes-nodes","title":"Setup the Kubernetes Nodes","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#edit-etchosts","title":"Edit /etc/hosts","text":"

SSH into each of the nodes and run the following: -

Edit `/etc/hosts
sudo vi /etc/hosts\n\nx.x.x.x k8smaster1\nx.x.x.x k8snode1\nx.x.x.x k8snode2\n

where x.x.x.x is the IP Address of the corresponding node.

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#disable-linux-swap","title":"Disable Linux Swap","text":"

Next we need to disable Swap on Linux: -

Disable Swap

Disable Swap
sudo swapoff -a\n\nsudo vi /etc/fstab\n

comment out the swap entry in /etc/fstab on each node.
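
A sketch of the resulting commented-out line, assuming the default Ubuntu swap file: -
/etc/fstab
#/swap.img      none    swap    sw      0       0\n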

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#linux-system-configuration-changes","title":"Linux System Configuration Changes","text":"

We are using containerd as the container runtime, but in order to do so we need to make some system-level changes on Linux.

Linux System Configuration Changes

Linux System Configuration Changes
cat << EOF | sudo tee /etc/modules-load.d/containerd.conf\noverlay\nbr_netfilter\nEOF\n\nsudo modprobe overlay\n\nsudo modprobe br_netfilter\n\ncat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\nEOF\n\nsudo sysctl --system\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#container-runtime-installation","title":"Container Runtime Installation","text":"

Run on all nodes (VMs)

Run the following commands, until advised not to, on all of the VMs you created.

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-containerd","title":"Install containerd","text":"Install containerd Install `containerd`
sudo apt update\n\nsudo apt install -y containerd\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#create-a-default-containerd-config","title":"Create a Default containerd Config","text":"Create a Default containerd Config Create a Default `containerd` Config
sudo mkdir -p /etc/containerd\n\nsudo containerd config default | sudo tee /etc/containerd/config.toml\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#enable-system-cgroup","title":"Enable System CGroup","text":"

Change the SystemdCgroup value to true in the containerd configuration file and restart the service

Enable System CGroup

Enable System CGroup
sudo sed -i 's/SystemdCgroup \\= false/SystemdCgroup \\= true/g' /etc/containerd/config.toml\n\nsudo systemctl restart containerd\nsudo systemctl enable containerd\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-pre-requisiteutility-packages","title":"Install Pre-requisite/Utility packages","text":"Install Pre-requisite/Utility packages Install Pre-requisite/Utility packages
sudo apt update\n\nsudo apt install -y apt-transport-https ca-certificates curl gpg git\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#download-the-kubernetes-public-signing-key","title":"Download the Kubernetes public signing key","text":"Download the Kubernetes public signing key Download the Kubernetes public signing key
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#add-the-kubernetes-package-repository","title":"Add the Kubernetes Package Repository","text":"Add the Kubernetes Package Repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-the-kubernetes-installation-and-management-tools","title":"Install the Kubernetes Installation and Management Tools","text":"Install the Kubernetes Installation and Management Tools Install the Kubernetes Installation and Management Tools
sudo apt update\n\nsudo apt install -y kubeadm=1.29.0-1.1  kubelet=1.29.0-1.1  kubectl=1.29.0-1.1 \n\nsudo apt-mark hold kubeadm kubelet kubectl\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#initialize-the-kubernetes-cluster","title":"Initialize the Kubernetes Cluster","text":"

Initialize the Kubernetes Cluster by using kubeadm on the k8smaster1 control plane node.

Note

You will need an IP Address range for the Kubernetes Pods. This range is provided to kubeadm as part of the initialization. For our cluster of three nodes, given the default number of pods supported by a node (110), we need a CIDR with at least 330 distinct IP Addresses. Therefore, for this example we will use a --pod-network-cidr of 10.1.1.0/22, which provides 1024 addresses (1022 usable). Each node is then allocated a /24 out of the /22 total.
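
A quick sanity check of the sizing arithmetic using shell arithmetic:

Pod CIDR sizing
echo $(( 2 ** (32 - 22) ))  # 1024 addresses in a /22\necho $(( 3 * 110 ))         # 330 pod IPs needed for 3 nodes at 110 pods each\n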

The apiserver-advertise-address should be the IP Address of the k8smaster1 VM.

Initialize the Kubernetes Cluster

Initialize the Kubernetes Cluster
sudo kubeadm init --pod-network-cidr 10.1.1.0/22 --apiserver-advertise-address 192.168.2.180 --kubernetes-version 1.29.2\n

You should now deploy a pod network to the cluster. Run kubectl apply -f [podnetwork].yaml with one of the options listed at: Cluster Administration Addons

Make a note of the portion of the shell output which gives the join command; we will need it to add our worker nodes to the cluster.

Copy the kubeadm join command

Then you can join any number of worker nodes by running the following on each as root:

Copy the `kubeadm join` command
kubeadm join 192.168.2.180:6443 --token wonuiv.v93rkizr6wvxwe6l \\\n--discovery-token-ca-cert-hash sha256:046ffa6303e6b281285a636e856b8e9e51d8c755248d9d013e15ae5c5f6bb127\n
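
If the join command was not recorded, a new token together with the full join command can be regenerated at any time on the control plane node:

Regenerate the `kubeadm join` command
sudo kubeadm token create --print-join-command\n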
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#setup-kubeconfig","title":"Setup kubeconfig","text":"

Before we add the worker nodes we can set up the kubeconfig so that we can use kubectl going forward.

Setup kubeconfig

Setup `kubeconfig`
mkdir -p $HOME/.kube\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#connect-list-the-kubernetes-cluster-nodes","title":"Connect & List the Kubernetes Cluster Nodes","text":"

We can now run kubectl to connect to the Kubernetes API Server to display the nodes in the newly created Kubernetes Cluster.

Connect & List the Kubernetes Cluster Nodes

Connect & List the Kubernetes Cluster Nodes
kubectl get nodes\n

STATUS = NotReady

From the kubectl output, the status of the k8smaster1 node shows as NotReady, as we have yet to install the Kubernetes network (CNI) in the cluster.

We will be installing cilium as that provider in a future step.

Warning

At this point we should complete the installation of the worker nodes to this same stage before continuing.

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#join-the-worker-nodes-to-the-cluster","title":"Join the Worker Nodes to the Cluster","text":"

Once installed, we run the join on the worker nodes. Note that the command which was output from kubeadm init needs to be run with sudo.

Join the Worker Nodes to the Cluster
sudo kubeadm join 192.168.2.180:6443 --token wonuiv.v93rkizr6wvxwe6l \\\n    --discovery-token-ca-cert-hash sha256:046ffa6303e6b281285a636e856b8e9e51d8c755248d9d013e15ae5c5f6bb127\n

Now we can again run

`kubectl get nodes`
kubectl get nodes\n

Now we can see all the nodes are present in the Kubernetes Cluster.

Run on Head Node only

From now on, the following commands need to be run on the Master Node only.

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-kubernetes-networking","title":"Install Kubernetes Networking","text":"

We now need to install a Kubernetes CNI (Container Network Interface) to enable the pod network.

We will use Cilium as the CNI for our cluster.

Installing the Cilium CLI
curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-arm64.tar.gz\nsudo tar xzvfC cilium-linux-arm64.tar.gz /usr/local/bin\nrm cilium-linux-arm64.tar.gz\n
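
The download above targets ARM64 to match the VMs used in this walkthrough; on x86_64 nodes the amd64 build of the CLI would be used instead:

Installing the Cilium CLI (amd64)
curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz\nsudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin\nrm cilium-linux-amd64.tar.gz\n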
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-cilium","title":"Install cilium","text":"

You can now install Cilium with the following command:

Install `cilium`
cilium install\ncilium status \n

If cilium status shows errors you may need to wait until the Cilium pods have started.

You can check progress with

Bash
kubectl get po -A\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#check-cilium-status","title":"Check cilium Status","text":"

Once the Cilium pods are running we can check the status of Cilium again by using

Check cilium Status

Check `cilium` Status
cilium status \n
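
cilium status also accepts a --wait flag, which blocks until Cilium reports ready rather than requiring a manual recheck:

Wait for Cilium to become ready
cilium status --wait\n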

We can now recheck the Kubernetes Cluster Nodes

Bash
kubectl get nodes\n

and they should now show a Status of Ready.

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#kubernetes-node-preparation","title":"Kubernetes Node Preparation","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#label-kubernetes-nodes","title":"Label Kubernetes Nodes","text":"

Now we go ahead and label the nodes. Kinetica uses node labels in production clusters where separate 'node groups' are configured, so that the Kinetica infrastructure pods are deployed on a smaller VM type and the DB itself is deployed on larger or GPU-enabled nodes.

If we were using a Cloud Provider Kubernetes, these are synonymous with EKS Node Groups or AKS VMSS, which would be created with the same two labels on two node groups.

Label Kubernetes Nodes
kubectl label node k8snode1 app.kinetica.com/pool=infra\nkubectl label node k8snode2 app.kinetica.com/pool=compute\n

Additionally, in our case, as we have created a new cluster, the 'role' of the worker nodes is not set, so we can also set that. In many cases the role is already set to worker, but here we have some latitude.

Bash
kubectl label node k8snode1 kubernetes.io/role=kinetica-infra\nkubectl label node k8snode2 kubernetes.io/role=kinetica-compute\n
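
To confirm the labels have been applied, kubectl can display them as columns:

Verify the node labels
kubectl get nodes -L app.kinetica.com/pool -L kubernetes.io/role\n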
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-storage-class","title":"Install Storage Class","text":"

Install a local path provisioner storage class. In this case we are using the Rancher Local Path Provisioner.

Install Storage Class

Install Storage Class
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.26/deploy/local-path-storage.yaml\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#set-default-storage-class","title":"Set Default Storage Class","text":"Set Default Storage Class Set Default Storage Class
kubectl patch storageclass local-path -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}'\n
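
To confirm that local-path is now the default StorageClass (it should be marked (default) in the output):

Verify the default StorageClass
kubectl get sc\n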

Kubernetes Cluster Provision Complete

Your base Kubernetes Cluster is now complete and ready to have the Kinetica DB installed on it using the Helm Chart.

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#install-kinetica-for-kubernetes-using-helm","title":"Install Kinetica for Kubernetes using Helm","text":"","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#add-the-helm-repository","title":"Add the Helm Repository","text":"Add the Helm Repository Add the Helm Repository
helm repo add kinetica-operators https://kineticadb.github.io/charts\nhelm repo update\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#download-a-starter-helm-valuesyaml","title":"Download a Starter Helm values.yaml","text":"

Now we need to obtain a starter values.yaml file to pass to our Helm install. We can download one from the github.com/kineticadb/charts repo.

Download a Starter Helm values.yaml

Download a Starter Helm `values.yaml`
    wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k8s.yaml\n

Obtain a Kinetica License Key

A product license key will be required for install. Please contact Kinetica Support to request a trial key.

","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#helm-install-kinetica","title":"Helm Install Kinetica","text":"#### Helm Install Kinetica Helm install kinetica-operators
helm -n kinetica-system upgrade -i \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"local-path\"\n
","tags":["Advanced","Installation"]},{"location":"Advanced/kubernetes_bare_metal_vm_install/#monitor-kinetica-startup","title":"Monitor Kinetica Startup","text":"

After a few moments, follow the progression of the main database pod startup with:

Monitor the Kinetica installation progress
kubectl -n gpudb get po gpudb-0 -w\n

Kinetica DB Provision Complete

Once you see gpudb-0 3/3 Running the database is up and running.

Software LoadBalancer

If you require a software-based LoadBalancer to allocate IP addresses to the Ingress Controller or exposed Kubernetes Services then see here

This is usually apparent if your ingress or other Kubernetes Services with the type LoadBalancer are stuck in the Pending state.
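
One way to spot this is to list the Services and check the EXTERNAL-IP column for <pending>:

List LoadBalancer Services
kubectl get svc -A | grep LoadBalancer\n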

","tags":["Advanced","Installation"]},{"location":"Advanced/minio_s3_dev_test/","title":"Using Minio for S3 Storage in Dev/Test","text":"

If you require a new Minio installation

Please follow the installation instructions found here to install the Minio Operator and to create your first tenant.

","tags":["Advanced","Development","Installation","Storage"]},{"location":"Advanced/minio_s3_dev_test/#create-minio-tenant","title":"Create Minio Tenant","text":"

In our example below we have created a tenant kinetica in the gpudb namespace using the Kinetica storage class kinetica-k8s-sample-storageclass.

or use the minio kubectl plugin

minio cli - create tenant
kubectl minio tenant create kinetica --capacity 32Gi --servers 1 --volumes 1 --namespace gpudb --storage-class kinetica-k8s-sample-storageclass --disable-tls\n

Console Port Forward

Forward the minio console for our newly created tenant

Bash
kubectl port-forward service/kinetica-console -n gpudb 9443:9443\n

In that tenant we create a bucket kinetica-cold-storage and in that bucket we create the path gpudb/cold-storage.
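
For example, the bucket can be created with the MinIO client mc; a sketch, where the alias name myminio is illustrative and assumes mc has already been configured to point at the tenant:

Create the bucket with `mc`
mc mb myminio/kinetica-cold-storage\n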

Once you have a tenant up and running we can configure Kinetica for Kubernetes to use it as the DB Cold Storage tier.

Backup/Restore Storage

Minio can also be used as the S3 storage for Velero. This enables Backup/Restore functionality via the KineticaBackup & KineticaRestore CRs.

","tags":["Advanced","Development","Installation","Storage"]},{"location":"Advanced/minio_s3_dev_test/#configuring-kinetica-to-use-minio","title":"Configuring Kinetica to use Minio","text":"","tags":["Advanced","Development","Installation","Storage"]},{"location":"Advanced/minio_s3_dev_test/#cold-storage","title":"Cold Storage","text":"

In order to configure the Cold Storage Tier for the Database it is necessary to add a coldStorageTier to the KineticaCluster CR. As we are using S3 Buckets for storage, we then require a coldStorageS3 entry, which allows us to set the awsSecretAccessKey & awsAccessKeyId which were generated when the tenant was created in Minio.

If we look in the gpudb namespace we can see that Minio created a Kubernetes service called minio exposed on port 443.

In the coldStorageS3 we need to add an endpoint field which contains the minio service name and the namespace gpudb i.e. minio.gpudb.svc.cluster.local.

KineticaCluster coldStorageTier S3 Configuration
spec:\n  gpudbCluster:\n    config:\n      tieredStorage:\n        coldStorageTier:\n          coldStorageType: s3\n          coldStorageS3:\n            basePath: gpudb/cold-storage/\n            bucketName: kinetica-cold-storage\n            endpoint: minio.gpudb.svc.cluster.local:80\n            limit: \"32Gi\"\n            useHttps: false\n            useManagedCredentials: false\n            useVirtualAddressing: false\n            awsSecretAccessKey: 6rLaOOddP3KStwPDhf47XLHREPdBqdav\n            awsAccessKeyId: VvlP5rHbQqzcYPHG\n      tieredStrategy:\n        default: VRAM 1, RAM 5, PERSIST 5, COLD0 10\n
","tags":["Advanced","Development","Installation","Storage"]},{"location":"Advanced/nginx_ingress_config/","title":"nginx-ingress Ingress Configuration","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Advanced/nginx_ingress_config/#coming-soon","title":"Coming Soon","text":"","tags":["Advanced","Configuration","Ingress"]},{"location":"Architecture/","title":"Architecture","text":"

Kinetica is a distributed, vectorized, memory-first, columnar database with tiered storage that is optimized for high speed and performance \u2013 particularly on streaming analytics and geospatial workloads.

Kinetica has been uniquely designed for fast and flexible analytics on large volumes of changing data with incredible performance.

  • Kinetica Database Architecture

    Install the Kinetica DB with helm and get up and running in minutes Core Database Architecture

  • Kinetica for Kubernetes Architecture

    Install the Kinetica DB with helm and get up and running in minutes Kubernetes Architecture

","tags":["Architecture"]},{"location":"Architecture/db_architecture/","title":"Architecture","text":"

Kinetica is a distributed, vectorized, memory-first, columnar database with tiered storage that is optimized for high speed and performance \u2013 particularly on streaming analytics and geospatial workloads.

Kinetica has been uniquely designed for fast and flexible analytics on large volumes of changing data with incredible performance.

","tags":["Architecture"]},{"location":"Architecture/db_architecture/#database-architecture","title":"Database Architecture","text":"","tags":["Architecture"]},{"location":"Architecture/db_architecture/#scale-out-architecture","title":"Scale-out Architecture","text":"

Kinetica has a distributed architecture that has been designed for data processing at scale. A standard cluster consists of identical nodes run on commodity hardware. A single node is chosen to be the head aggregation node.

A cluster can be scaled up at any time to increase storage capacity and processing power, with near-linear scale processing improvements for most operations. Sharding of data can be done automatically, or specified and optimized by the user.

","tags":["Architecture"]},{"location":"Architecture/db_architecture/#distributed-ingest-query","title":"Distributed Ingest & Query","text":"

Kinetica uses a shared-nothing data distribution across worker nodes. The head node receives a query and breaks it down into small tasks that can be spread across worker nodes. To avoid bottlenecks at the head node, ingestion can also be organized in parallel by all the worker nodes. Kinetica is able to distribute data client-side before sending it to designated worker nodes. This streamlines communication and processing time.

For the client application, there is no need to be aware of how many nodes are in the cluster, where they are, or how the data is distributed across them!

","tags":["Architecture"]},{"location":"Architecture/db_architecture/#column-oriented","title":"Column Oriented","text":"

Columnar data structures lend themselves to low-latency reads of data. But from a user's perspective, Kinetica behaves very similarly to a standard relational database \u2013 with tables of rows and columns and it can be queried with SQL or through APIs. Available column types include the standard base types (int, long, float, double, string, & bytes), as well as numerous sub-types supporting date/time, geospatial, and other data forms.

","tags":["Architecture"]},{"location":"Architecture/db_architecture/#vectorized-functions","title":"Vectorized Functions","text":"

Vectorization is Kinetica\u2019s secret sauce and the key feature that underpins its blazing fast performance.

Advanced vectorized kernels are optimized to use vectorized CPUs and GPUs for faster performance. The query engine automatically assigns tasks to the processor where they will be most performant. Aggregations, filters, window functions, joins and geospatial rendering are some of the capabilities that see performance improvements.

","tags":["Architecture"]},{"location":"Architecture/db_architecture/#memory-first-tiered-storage","title":"Memory-First, Tiered Storage","text":"

Tiered storage makes it possible to optimize where data lives for performance and cost. Recent data (such as all data where the timestamp is within the last 2 weeks) can be held in-memory, while older data can be moved to disk, or even to external storage services.

Kinetica operates on an entire data corpus by intelligently managing data across GPU memory, system memory, SIMD, disk / SSD, HDFS, and cloud storage like S3 for optimal performance.

Kinetica can also query and process data stored in data lakes, joining it with data managed by Kinetica in highly parallelized queries.

","tags":["Architecture"]},{"location":"Architecture/db_architecture/#performant-key-value-lookup","title":"Performant Key-Value Lookup","text":"

Kinetica is able to generate distributed key-value lookups, from columnar data, for high-performance and concurrency. Sharding logic is embedded directly within client APIs enabling linear scale-out as clients can lookup data directly from the node where the data lives.

","tags":["Architecture"]},{"location":"Architecture/kinetica_for_kubernetes_architecture/","title":"Kubernetes Architecture","text":"","tags":["Architecture"]},{"location":"Architecture/kinetica_for_kubernetes_architecture/#coming-soon","title":"Coming Soon","text":"","tags":["Architecture"]},{"location":"GettingStarted/","title":"Getting Started","text":"
  • Set up in 15 minutes (local install)

    Install the Kinetica DB locally on Kind or k3s with helm to get up and running in minutes (Dev/Test).

    Quickstart

  • Prepare to Install

    What you need to know & do before beginning an installation.

    Preparation and Prerequisites

  • Production Installation

    Install the Kinetica DB with helm to get up and running quickly (Production).

    Installation

  • Channel Your Inner Ninja

    Advanced Installation Topics which go beyond the basic installation.

    Advanced Topics

","tags":["Getting Started","Installation"]},{"location":"GettingStarted/aks/","title":"Azure AKS Specifics","text":"

This page covers any Microsoft Azure AKS cluster installation specifics.

","tags":["AKS","Getting Started"]},{"location":"GettingStarted/eks/","title":"Amazon EKS Specifics","text":"

This page covers any Amazon EKS kubernetes cluster installation specifics.

","tags":["EKS","Getting Started","Storage"]},{"location":"GettingStarted/eks/#ebs-csi-driver","title":"EBS CSI driver","text":"

Warning

Make sure you have enabled the ebs-csi driver in your EKS cluster. This is required for the default storage class to work.

Please refer to this AWS documentation for more information.

","tags":["EKS","Getting Started","Storage"]},{"location":"GettingStarted/helm_repo_add/","title":"Helm repo add","text":"Add Kinetica Operators Chart Repo
helm repo add kinetica-operators https://kineticadb.github.io/charts/latest\n
"},{"location":"GettingStarted/installation/","title":"Kinetica for Kubernetes Installation","text":"
  • CPU Only Installation

    Install the Kinetica DB to run on Intel, AMD or ARM CPUs with no GPU acceleration. CPU

  • CPU & GPU Installation

    Install the Kinetica DB to run on nodes with nVidia GPU acceleration. Optionally enable Kinetica On-Prem SQLAssistant (LLM). GPU

","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/","title":"Installation - CPU Only","text":"

For managed Kubernetes solutions (AKS, EKS, GKE) or on-prem (kubeadm) Kubernetes variants, follow this generic guide to install the Kinetica Operators, Database and Workbench.

Preparation & Prequisites

Please make sure you have followed the Preparation & Prequisites steps

","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/#install-the-helm-chart","title":"Install the helm chart","text":"

Run the following Helm install command after substituting values from Preparation & Prerequisites

Helm install kinetica-operators
helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n
","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/#check-installation-progress","title":"Check installation progress","text":"

After a few moments, follow the progression of the main database pod startup with:

Monitor the Kinetica installation progress
kubectl -n gpudb get po gpudb-0 -w\n

until it reaches \"gpudb-0 3/3 Running\" at which point the database should be ready and all other software installed in the cluster. You may have to run this command in a different terminal if the helm command from step 4 has not yet returned to the system prompt. Once running, you can quit this kubectl watch command using Ctrl+C.

error no pod named gpudb-0

If you receive an error message running kubectl -n gpudb get po gpudb-0 -w informing you that no pod named gpudb-0 exists, please check that the OpenLDAP pod is running by running

Check OpenLDAP status
kubectl -n gpudb get pods\nkubectl -n gpudb describe pod openldap-5f87f77c8b-trpmf\n

where the pod name openldap-5f87f77c8b-trpmf is that shown when running kubectl -n gpudb get pods

Validate whether the pod is waiting for its Persistent Volume Claim/Persistent Volume to be created and bound to the pod.
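
To check the claim and volume status in the gpudb namespace:

Check PVC/PV status
kubectl -n gpudb get pvc\nkubectl -n gpudb get pv\n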

","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/#accessing-the-kinetica-installation","title":"Accessing the Kinetica installation","text":"","tags":["Installation"]},{"location":"GettingStarted/installation_cpu/#target-platform-specifics","title":"Target Platform Specifics","text":"cloudlocal - devbare metal - prod

If you are installing into a managed Kubernetes environment and the NGINX ingress controller that is installed as part of this install creates a LoadBalancer service, you may need to associate the LoadBalancer with the domain you plan to use.

As of now, the kinetica-operator chart installs the NGINX ingress controller, so after the installation is complete you may need to edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name.

Option 1: Use the LoadBalancer domain Set your FQDN in Kinetica

kubectl get svc -n kinetica-system\n# look at the loadbalancer dns name, copy it\n\nkubectl -n gpudb edit $(kubectl -n gpudb get kc -o name)\n# replace local.kinetica with the loadbalancer dns name\nkubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)\n# replace local.kinetica with the loadbalancer dns name\n# save and exit\n# you should be able to access the workbench from the loadbalancer dns name\n

Option 2: Use your custom domain Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.

If you are installing on a local machine which does not have a domain name, you can add the following entry to your /etc/hosts file or equivalent:

Configure local access - /etc/hosts
127.0.0.1  local.kinetica\n

Note

The default chart configuration points to local.kinetica but this is configurable.

Installing on bare metal machines which do not have an external hardware loadbalancer requires an Ingress controller along with a software loadbalancer in order to be accessible.

Kinetica for Kubernetes has been tested with kube-vip

","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/","title":"Installation - CPU with GPU Acceleration","text":"

For managed Kubernetes solutions (AKS, EKS, GKE) or on-prem (kubeadm) Kubernetes variants, follow this generic guide to install the Kinetica Operators, Database and Workbench.

Preparation & Prequisites

Please make sure you have followed the Preparation & Prequisites steps

","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#install-via-the-kinetica-operators-helm-chart","title":"Install via the kinetica-operators Helm Chart","text":"","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#gpu-cluster-with-remote-sqlassistant","title":"GPU Cluster with Remote SQLAssistant","text":"

Run the following Helm install command after substituting values from section 3

Helm install kinetica-operators (No On-Prem SQLAssistant)
helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n
","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#gpu-cluster-with-on-prem-sqlassistant","title":"GPU Cluster with On-Prem SQLAssistant","text":"

or to enable SQLAssistant to be deployed and run 'On-Prem' i.e. in the same cluster

Helm install kinetica-operators (With On-Prem SQLAssistant)
helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\" \\\n--set db.gpudbCluster.config.ai.apiProvider=\"kineticallm\"\n

On-Prem Kinetica SQLAssistant - Nodes Groups, GPU Counts & VRAM Memory

To run the Kinetica SQLAssistant locally requires additional GPUs to be available in a separate Node Group labeled app.kinetica.com/pool=compute-llm. The On-Prem Kinetica LLM requires 40GB of GPU VRAM to run; therefore the number of GPUs automatically allocated to the SQLAssistant pod will ensure that the 40GB VRAM is available, e.g. 1x A100 GPU or 2x A10G GPUs.

Label Kubernetes Nodes for LLM
kubectl label node k8snode3 app.kinetica.com/pool=compute-llm\n
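
To confirm how many GPUs a labeled node actually advertises to Kubernetes (assuming the NVIDIA device plugin is installed), one option is:

Check allocatable GPUs on a node
kubectl describe node k8snode3 | grep -i 'nvidia.com/gpu'\n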
","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#check-installation-progress","title":"Check installation progress","text":"

After a few moments, follow the progression of the main database pod startup with:

Monitor the Kinetica installation progress
kubectl -n gpudb get po gpudb-0 -w\n

until it reaches \"gpudb-0 3/3 Running\" at which point the database should be ready and all other software installed in the cluster. You may have to run this command in a different terminal if the helm command from step 4 has not yet returned to the system prompt. Once running, you can quit this kubectl watch command using Ctrl+C.

error no pod named gpudb-0

If you receive an error message running kubectl -n gpudb get po gpudb-0 -w informing you that no pod named gpudb-0 exists, please check that the OpenLDAP pod is running by running

Check OpenLDAP status
kubectl -n gpudb get pods\nkubectl -n gpudb describe pod openldap-5f87f77c8b-trpmf\n

where the pod name openldap-5f87f77c8b-trpmf is that shown when running kubectl -n gpudb get pods

Validate whether the pod is waiting for its Persistent Volume Claim/Persistent Volume to be created and bound to the pod.

","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#accessing-the-kinetica-installation","title":"Accessing the Kinetica installation","text":"","tags":["Installation"]},{"location":"GettingStarted/installation_gpu/#target-platform-specifics","title":"Target Platform Specifics","text":"cloudlocal - devbare metal - prod

If you are installing into a managed Kubernetes environment and the NGINX ingress controller that is installed as part of this install creates a LoadBalancer service, you may need to associate the LoadBalancer with the domain you plan to use.

As of now, the kinetica-operator chart installs the NGINX ingress controller, so after the installation is complete you may need to edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name.

Option 1: Use the LoadBalancer domain Set your FQDN in Kinetica

kubectl get svc -n kinetica-system\n# look at the loadbalancer dns name, copy it\n\nkubectl -n gpudb edit $(kubectl -n gpudb get kc -o name)\n# replace local.kinetica with the loadbalancer dns name\nkubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)\n# replace local.kinetica with the loadbalancer dns name\n# save and exit\n# you should be able to access the workbench from the loadbalancer dns name\n

Option 2: Use your custom domain Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.

If you are installing on a local machine which does not have a domain name, you can add the following entry to your /etc/hosts file or equivalent:

Configure local access - /etc/hosts
127.0.0.1  local.kinetica\n

Note

The default chart configuration points to local.kinetica but this is configurable.

Installing on bare metal machines which do not have an external hardware loadbalancer requires an Ingress controller along with a software loadbalancer in order to be accessible.

Kinetica for Kubernetes has been tested with kube-vip

","tags":["Installation"]},{"location":"GettingStarted/local_kinetica_etc_hosts/","title":"Local kinetica etc hosts","text":"

FQDN or Local Access

By default we create an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace/set the FQDN in the values.yaml with the correct domain name, or set it by adding --set.

If you are on a local machine which does not have a domain name, add the following entry to your /etc/hosts file or equivalent.

Configure local access - /etc/hosts
127.0.0.1  local.kinetica\n
"},{"location":"GettingStarted/note_additional_gpu_sqlassistant/","title":"Note additional gpu sqlassistant","text":"

On-Prem Kinetica SQLAssistant - Nodes Groups, GPU Counts & VRAM Memory

To run the Kinetica SQLAssistant locally requires additional GPUs to be available in a separate Node Group labeled app.kinetica.com/pool=compute-llm. The On-Prem Kinetica LLM requires 40GB of GPU VRAM to run; therefore the number of GPUs automatically allocated to the SQLAssistant pod will ensure that the 40GB VRAM is available, e.g. 1x A100 GPU or 2x A10G GPUs.

Label Kubernetes Nodes for LLM
kubectl label node k8snode3 app.kinetica.com/pool=compute-llm\n
"},{"location":"GettingStarted/preparation_and_prerequisites/","title":"Preparation & Prerequisites","text":"

Checks & steps to ensure a smooth installation.

Obtain a Kinetica License Key

A product license key will be required for install. Please contact Kinetica Support to request a trial key.

Failing to provide a license key at installation time will prevent the DB from starting.

","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#preparation-and-prerequisites","title":"Preparation and prerequisites","text":"

Free Resources

Your Kubernetes cluster version should be >= 1.22.x and have a minimum of 8 CPUs, 8GB RAM and SSD or SATA 7200RPM hard drive(s) with 4X memory capacity.

GPU Support

For GPU enabled clusters the cards below have been tested in large-scale production environments and provide the best performance for the database.

GPU | Driver
P4/P40/P100 | 525.X (or higher)
V100 | 525.X (or higher)
T4 | 525.X (or higher)
A10/A40/A100 | 525.X (or higher)
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#kubernetes-cluster-connectivity","title":"Kubernetes Cluster Connectivity","text":"

Installation requires Helm3, access to an on-prem or CSP-managed Kubernetes cluster, and the Kubernetes CLI kubectl.

The context for the desired target cluster must be selected from your ~/.kube/config file and set via the KUBECONFIG environment variable or kubectl ctx (if installed). Check to see if you have the correct context with,

show the current kubernetes context
kubectl config current-context\n

and that you can access this cluster correctly with,

list kubernetes cluster nodes
kubectl get nodes\n
Get Nodes

If you do not see a list of nodes for your K8s cluster the helm installation will not work. Please check your Kubernetes installation or access credentials (kubeconfig).

Kinetica Images for an Air-Gapped Environment

If you are installing Kinetica with Helm in an air-gapped environment you will either need a Registry Proxy to pass the requests through or to download the images and push them to your internal Registry.

For information on ways to transfer the files into an air-gapped environment See here.

","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#required-container-images","title":"Required Container Images","text":"","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#dockerio-required-kinetica-images-for-all-installations","title":"docker.io (Required Kinetica Images for All Installations)","text":"
  • docker.io/kinetica/kinetica-k8s-operator:v7.2.0-3.rc-3
    • docker.io/kinetica/kinetica-k8s-cpu:v7.2.0-3.rc-3 or
    • docker.io/kinetica/kinetica-k8s-cpu-avx512:v7.2.0-3.rc-3 or
    • docker.io/kinetica/kinetica-k8s-gpu:v7.2.0-3.rc-3
  • docker.io/kinetica/workbench-operator:v7.2.0-3.rc-3
  • docker.io/kinetica/workbench:v7.2.0-3.rc-3
  • docker.io/kinetica/kinetica-k8s-monitor:v7.2.0-3.rc-3
  • docker.io/kinetica/busybox:v7.2.0-3.rc-3
  • docker.io/kinetica/fluent-bit:v7.2.0-3.rc-3
  • docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#nvcrio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"nvcr.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)","text":"
  • nvcr.io/nvidia/gpu-operator:v23.9.1
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#registryk8sio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"registry.k8s.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)","text":"
  • registry.k8s.io/nfd/node-feature-discovery:v0.14.2
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#dockerio-required-supporting-images","title":"docker.io (Required Supporting Images)","text":"
  • docker.io/bitnami/openldap:2.6.7
  • docker.io/alpine/openssl:latest (used by bitnami/openldap)
  • docker.io/otel/opentelemetry-collector-contrib:0.95.0
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#quayio-required-supporting-images","title":"quay.io (Required Supporting Images)","text":"
  • quay.io/brancz/kube-rbac-proxy:v0.14.2
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#optional-container-images","title":"Optional Container Images","text":"

These images are only required if certain features are enabled as part of the Helm installation: -

  • CertManager
  • ingress-ninx
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#quayio-optional-supporting-images","title":"quay.io (Optional Supporting Images)","text":"
  • quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#registryk8sio-optional-supporting-images","title":"registry.k8s.io (Optional Supporting Images)","text":"
  • registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing Ingress nGinx via Kinetica Helm Chart)
  • registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#which-kinetica-core-image-do-i-use","title":"Which Kinetica Core Image do I use?","text":"Container Image Intel (AMD64) Intel (AMD64 AVX512) Amd (AMD64) Graviton (aarch64) Apple Silicon (aarch64) kinetica-k8s-cpu (1) kinetica-k8s-cpu-avx512 kinetica-k8s-gpu (2) (2) (2)
  1. It is preferable on an Intel AVX512 enabled CPU to use the kinetica-k8s-cpu-avx512 container image
  2. With a supported nVidia GPU.
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#label-the-kubernetes-nodes","title":"Label the Kubernetes Nodes","text":"

Kinetica requires some of the Kubernetes Nodes to be labeled as it splits some of the components into different deployment 'pools'. This enables different physical node types to be present in the Kubernetes Cluster allowing us to target which Kinetica components go where.

e.g. for a GPU installation some nodes in the cluster will have GPUs and others are CPU only. We can put the DB on the GPU nodes and our infrastructure components on CPU only nodes.

cpu gpu

The Kubernetes cluster nodes selected to host the Kinetica infrastructure pods i.e. non-DB Pods require the following label app.kinetica.com/pool=infra.

Label the Infrastructure Nodes
    kubectl label node k8snode1 app.kinetica.com/pool=infra\n

whilst the Kubernetes cluster nodes selected to host the Kinetica DB Pods require the following label app.kinetica.com/pool=compute.

Label the Database Nodes
    kubectl label node k8snode2 app.kinetica.com/pool=compute\n

The Kubernetes cluster nodes selected to host the Kinetica infrastructure pods i.e. non-DB Pods require the following label app.kinetica.com/pool=infra.

Label the Infrastructure Nodes
    kubectl label node k8snode1 app.kinetica.com/pool=infra\n

whilst the Kubernetes cluster nodes selected to host the Kinetica DB Pods require the following label app.kinetica.com/pool=compute-gpu.

Label the Database Nodes
    kubectl label node k8snode2 app.kinetica.com/pool=compute-gpu\n

On-Prem Kinetica SQLAssistant - Nodes Groups, GPU Counts & VRAM Memory

To run the Kinetica SQLAssistant locally requires additional GPUs to be available in a separate Node Group labeled app.kinetica.com/pool=compute-llm. The On-Prem Kinetica LLM requires 40GB of GPU VRAM to run; therefore the number of GPUs automatically allocated to the SQLAssistant pod will ensure that the 40GB VRAM is available, e.g. 1x A100 GPU or 2x A10G GPUs.

Label Kubernetes Nodes for LLM
kubectl label node k8snode3 app.kinetica.com/pool=compute-llm\n

Pods Not Scheduling

If the Kubernetes nodes are not labeled you may have a situation where Kinetica pods do not schedule and sit in a 'Pending' state.

","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#install-the-kinetica-operators-chart","title":"Install the kinetica-operators chart","text":"

This chart will install the Kinetica K8s operators together with a default configured database and workbench UI.

","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#add-the-kinetica-chart-repository","title":"Add the Kinetica chart repository","text":"

Add the repo locally as kinetica-operators:

Helm repo add
helm repo add kinetica-operators https://kineticadb.github.io/charts/latest\n
Helm Repo Add

","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#obtain-the-default-helm-values-file","title":"Obtain the default Helm values file","text":"

For the generic Kubernetes install use the following values file without modification. Advanced users with specific requirements may need to adjust parameters in this file.

Helm values.yaml download
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/values.onPrem.k8s.yaml\n
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#determine-the-following-prior-to-the-chart-install","title":"Determine the following prior to the chart install","text":"

Default Admin User

the default admin user in the Helm chart is kadmin but this is configurable. Non-ASCII characters and typographical symbols in the password must be escaped with a \"\\\". For example, --set dbAdminUser.password=\"MyPassword\\!\"

  1. Obtain a LICENSE-KEY as described in the introduction above.
  2. Choose a PASSWORD for the initial administrator user
  3. As the storage class name varies between K8s flavor and/or there can be multiple, this must be prescribed in the chart installation. Obtain the DEFAULT-STORAGE-CLASS name with the command:

Find the default storageclass
kubectl get sc -o name \n
List StorageClass

use the name found after the /. For example, in storageclass.storage.k8s.io/local-path, use "local-path" as the parameter.
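
A one-liner that prints just the name portion after the /, as a sketch:

Extract the StorageClass name
kubectl get sc -o name | awk -F/ '{print $2}'\n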

Amazon EKS

If installing on Amazon EKS See here

","tags":["Getting Started","Installation"]},{"location":"GettingStarted/preparation_and_prerequisites/#planning-access-to-your-kinetica-cluster","title":"Planning access to your Kinetica Cluster","text":"Existing Ingress Controller?

If you have an existing Ingress Controller in your Kubernetes cluster and do not want Kinetica to install an ingress-nginx to expose its endpoints, then you can disable the ingress-nginx installation in the values.yaml by editing the file and setting install: true to install: false: -

YAML
nodeSelector: {}\ntolerations: []\naffinity: {}\n\ningressNginx:\n    install: false\n
","tags":["Getting Started","Installation"]},{"location":"GettingStarted/quickstart/","title":"Quickstart","text":"

For the quickstart we have examples for Kind or k3s.

  • Kind - is suitable for CPU only installations.
  • k3s - is suitable for CPU or GPU installations.

Kubernetes >= 1.25

The current version of the chart supports kubernetes version 1.25 and above.

","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#please-select-your-target-kubernetes-variant","title":"Please select your target Kubernetes variant:","text":"kind k3s

Default User

Username as per the values file mentioned above is kadmin and password is Kinetica1234!

","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#kind-kubernetes-in-docker-kindsigsk8sio","title":"Kind (kubernetes in docker kind.sigs.k8s.io)","text":"

This installation in a kind cluster is for trying out the operators and the database in a non-production environment.

CPU Only

This method currently only supports installing a CPU version of the database.

Please contact Kinetica Support to request a trial key.

","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#create-kind-cluster-129","title":"Create Kind Cluster 1.29","text":"Create a new Kind Cluster
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/kind.yaml\nkind create cluster --name kinetica --config kind.yaml\n
List Kind clusters
 kind get clusters\n

Set Kubernetes Context

Please set your Kubernetes Context to kind-kinetica before performing the following steps.

","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#kind-install-kinetica-operators-including-a-sample-db-to-try-out","title":"Kind - Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.kind.yaml. This installs the operators and a simple DB with Workbench for a non-production try-out.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace this with the correct domain name.

","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#kind-install-the-kinetica-operators-chart","title":"Kind - Install the Kinetica-Operators Chart","text":"Add Kinetica Operators Chart Repo
helm repo add kinetica-operators https://kineticadb.github.io/charts/latest\n

FQDN or Local Access

By default we create an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace/set the FQDN in the values.yaml with the correct domain name, or set it by adding --set.

If you are on a local machine which does not have a domain name, add the following entry to your /etc/hosts file or equivalent.

Configure local access - /etc/hosts
127.0.0.1  local.kinetica\n
Get & install the Kinetica-Operators Chart
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/values.onPrem.kind.yaml\n\nhelm -n kinetica-system upgrade -i kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n

or if you have been asked by the Kinetica Support team to try a development version

Using a development version
helm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system upgrade -i kinetica-operators kinetica-operators/kinetica-operators/ --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 72.2.3\n

Accessing the Workbench

You should be able to access the workbench at http://local.kinetica

","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#k3s-k3sio","title":"k3s (k3s.io)","text":"","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#install-k3s-129","title":"Install k3s 1.29","text":"Install k3s
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable=traefik  --node-name kinetica-master --token 12345\" K3S_KUBECONFIG_OUTPUT=~/.kube/config_k3s K3S_KUBECONFIG_MODE=644 INSTALL_K3S_VERSION=v1.29.2+k3s1 sh -\n

Once installed we need to set the current Kubernetes context to point to the newly created k3s cluster.

Select if you want local or remote access to the Kubernetes Cluster: -

Local AccessRemote Access

For only local access to the cluster we can simply set the KUBECONFIG environment variable

Set kubectl context
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml\n

For remote access i.e. outside the host/VM k3s is installed on: -

Copy /etc/rancher/k3s/k3s.yaml from the cluster host to ~/.kube/config on your machine located outside the cluster. Then edit the file and replace the value of the server field with the IP or name of your K3s server.

Copy the kube config and set the context
sudo chmod 600 /etc/rancher/k3s/k3s.yaml\nmkdir -p ~/.kube\nsudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config\nsudo chown \"${USER:=$(/usr/bin/logname)}:$USER\" ~/.kube/config\n# Edit the ~/.kube/config server field with the IP or name of your K3s server here\nexport KUBECONFIG=~/.kube/config\n
","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#k3s-install-kinetica-operators-including-a-sample-db-to-try-out","title":"K3s - Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.k3s.yaml. This installs the operators and a simple DB with Workbench for a non-production try-out.

FQDN or Local Access

By default we create an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace/set the FQDN in the values.yaml with the correct domain name, or set it by adding --set.

If you are on a local machine which does not have a domain name, add the following entry to your /etc/hosts file or equivalent.

Configure local access - /etc/hosts
127.0.0.1  local.kinetica\n
","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#k3s-install-the-kinetica-operators-chart-cpu","title":"K3S - Install the Kinetica-Operators Chart (CPU)","text":"Add Kinetica Operators Chart Repo
helm repo add kinetica-operators https://kineticadb.github.io/charts/latest\n
Download Template values.yaml
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/values.onPrem.k3s.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n

or if you have been asked by the Kinetica Support team to try a development version

Using a development version
helm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n
","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#k3s-install-the-kinetica-operators-chart-gpu","title":"K3S - Install the Kinetica-Operators Chart (GPU)","text":"

If you wish to try out the GPU capabilities, you can use the following values file, provided you are on an NVIDIA GPU-capable machine.

k3s GPU Installation
wget https://raw.githubusercontent.com/kineticadb/charts/72.2.3/kinetica-operators/values.onPrem.k3s.gpu.yaml\n\nhelm -n kinetica-system install kinetica-operators charts/kinetica-operators/ --create-namespace --values values.onPrem.k3s.gpu.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n

Accessing the Workbench

You should be able to access the workbench at http://local.kinetica

","tags":["Development","Getting Started","Installation"]},{"location":"GettingStarted/quickstart/#uninstall-k3s","title":"Uninstall k3s","text":"uninstall k3s
/usr/local/bin/k3s-uninstall.sh\n
","tags":["Development","Getting Started","Installation"]},{"location":"Help/changing_the_fqdn/","title":"How to change the Clusters FQDN","text":"","tags":["Configuration","Support"]},{"location":"Help/changing_the_fqdn/#coming-soon","title":"Coming Soon","text":"","tags":["Configuration","Support"]},{"location":"Help/faq/","title":"Frequently Asked Questions","text":"","tags":["Support"]},{"location":"Help/faq/#coming-soon","title":"Coming Soon","text":"","tags":["Support"]},{"location":"Help/help_and_tutorials/","title":"Help & Tutorials","text":"
  • Tutorials

    Tutorials

  • Help

    Help

","tags":["Support"]},{"location":"Help/help_and_tutorials/#coming-soon","title":"Coming Soon","text":"","tags":["Support"]},{"location":"Help/help_index/","title":"Creating Users, Roles, Schemas and other Kinetica DB Objects","text":"","tags":["Support"]},{"location":"Help/help_index/#coming-soon","title":"Coming Soon","text":"","tags":["Support"]},{"location":"Monitoring/logs/","title":"Log Collection & Display","text":"

It is possible to forward/serve the Kinetica on Kubernetes logs via an OpenTelemetry [OTEL] collector.

By default an OpenTelemetry Collector is deployed in the kinetica-system namespace as part of the Helm install of the kinetica-operators Helm chart, along with a Kubernetes ConfigMap to configure this collector. The ConfigMap is in the kinetica-system namespace and is called otel-collector-conf.
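
To inspect (or edit) the deployed collector configuration:

View the OTEL collector ConfigMap
kubectl -n kinetica-system get configmap otel-collector-conf -o yaml\n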

Detailed otel-collector-conf setup

For more details on the Kinetica installed OTEL Collector please see here.

There are many supported mechanisms to expose the logs; here is one possibility: -

  • lokiexporter - Exports data via HTTP to Loki.

Tip

For a full list of supported OTEL exporters, including those for GrafanaCloud, AWS, Azure, Logz.io, Splunk and many databases please see here

","tags":["Operations","Monitoring"]},{"location":"Monitoring/logs/#lokiexporter-otel-collector-exporter","title":"lokiexporter OTEL Collector Exporter","text":"

Exports data via HTTP to Loki.

Example Configuration
exporters:\n  loki:\n    endpoint: https://loki.example.com:3100/loki/api/v1/push\n    default_labels_enabled:\n      exporter: false\n      job: true\n

For full details on configuring the OTEL collector exporter lokiexporter see here.

","tags":["Operations","Monitoring"]},{"location":"Monitoring/metrics/","title":"Metrics Collection & Display","text":"

It is possible to forward/serve the Kinetica on Kubernetes metrics via an OpenTelemetry [OTEL] collector.

By default an OpenTelemetry Collector is deployed in the kinetica-system namespace as part of the Helm install of the kinetica-operators Helm chart, along with a Kubernetes ConfigMap to configure this collector. The ConfigMap is in the kinetica-system namespace and is called otel-collector-conf.

Detailed otel-collector-conf setup

For more details on the Kinetica installed OTEL Collector please see here.

There are many supported mechanisms to expose the metrics; here are a few possibilities: -

  • prometheusremotewriteexporter - Prometheus Remote Write Exporter sends OpenTelemetry metrics to Prometheus remote write compatible backends.
  • prometheusexporter - allows the metrics to be scraped by a Prometheus server

Tip

For a full list of supported OTEL exporters, including those for Grafana Cloud, AWS, Azure and many databases please see here

","tags":["Operations","Monitoring"]},{"location":"Monitoring/metrics/#prometheusremotewriteexporter-prometheus-otel-remote-write-exporter","title":"prometheusremotewriteexporter Prometheus OTEL Remote Write Exporter","text":"

prometheusremotewriteexporter OTEL Exporter

Prometheus Remote Write Exporter sends OpenTelemetry metrics to Prometheus remote write compatible backends such as Cortex, Mimir, and Thanos. By default, this exporter requires TLS and offers queued retry capabilities.

Warning

Non-cumulative monotonic, histogram, and summary OTLP metrics are dropped by this exporter.

Example Configuration
exporters:\n  prometheusremotewrite:\n    endpoint: \"https://my-cortex:7900/api/v1/push\"\n    external_labels:\n      label_name1: label_value1\n      label_name2: label_value2\n

For full details on configuring the OTEL collector exporter prometheusremotewriteexporter see here.
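As with any OTEL exporter, prometheusremotewrite must also be referenced in a metrics pipeline before any data flows. A minimal sketch, assuming the prometheus receiver configured by the kinetica-operators chart and the batch processor: -

YAML
service:\n  pipelines:\n    metrics:\n      receivers: [prometheus]\n      processors: [batch]\n      exporters: [prometheusremotewrite]\n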

","tags":["Operations","Monitoring"]},{"location":"Monitoring/metrics/#prometheusexporter-prometheus-otel-exporter","title":"prometheusexporter Prometheus OTEL Exporter","text":"

Exports data in the Prometheus format, which allows it to be scraped by a Prometheus server.

Example Configuration
exporters:\n  prometheus:\n    endpoint: \"1.2.3.4:1234\"\n    tls:\n      ca_file: \"/path/to/ca.pem\"\n      cert_file: \"/path/to/cert.pem\"\n      key_file: \"/path/to/key.pem\"\n    namespace: test-space\n    const_labels:\n      label1: value1\n      \"another label\": spaced value\n    send_timestamps: true\n    metric_expiration: 180m\n    enable_open_metrics: true\n    add_metric_suffixes: false\n    resource_to_telemetry_conversion:\n      enabled: true\n

For full details on configuring the OTEL collector exporter prometheusexporter see here.
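On the Prometheus server side a scrape job pointing at the exporter endpoint is then needed. A minimal prometheus.yml sketch, reusing the 1.2.3.4:1234 endpoint from the example above: -

YAML
scrape_configs:\n  - job_name: 'kinetica-otel-collector'\n    static_configs:\n      - targets: ['1.2.3.4:1234']\n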

","tags":["Operations","Monitoring"]},{"location":"Operations/","title":"Operational Management","text":"
  • Metrics

    Collecting and storing metrics as time series data. Metrics

  • Logs

    Log aggregation. Logs

  • Metric & Log Distribution

    Metrics & Logs can be distributed to other systems using OpenTelemetry. OpenTelemetry

  • Backup & Restore

    Backup & Restore of the Kinetica DB. Backup & Restore

    Note

    This requires Velero to be installed on the Kubernetes Cluster.

  • Reduce Costs

    Suspend & Resume Kinetica for Kubernetes. Suspend & Resume

  • Database Rebalancing

    Kinetica for Kubernetes Data Sharding & Rebalancing. Rebalancing

","tags":["Operations"]},{"location":"Operations/backup_and_restore/","title":"Kinetica for Kubernetes Backup & Restore","text":"

Kinetica for Kubernetes supports the Backup & Restoring of the installed Kinetica DB by leveraging Velero, which must be installed into the same Kubernetes cluster in which the kinetica-operators Helm chart is deployed.

Velero

Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a cloud provider or on-premises.

For Velero installation please see here.

Velero Installation

The kinetica-operators Helm chart does not deploy Velero; it is a prerequisite that it be installed before Backup & Restore will work correctly.

There are two ways to initiate a Backup or Restore: -

  • Workbench Initiated
  • Kubernetes CR Initiated

Preferred Backup/Restore Mechanism

The preferred way to Backup or Restore the Kinetica for Kubernetes DB instance is via Workbench.

","tags":["Operations"]},{"location":"Operations/backup_and_restore/#workbench-initiated-backup-or-restore","title":"Workbench Initiated Backup or Restore","text":"","tags":["Operations"]},{"location":"Operations/backup_and_restore/#home","title":"> Home","text":"

From the Workbench Home page

we need to select the Manage option from the toolbar.

","tags":["Operations"]},{"location":"Operations/backup_and_restore/#manage-cluster-overview","title":"> Manage > Cluster > Overview","text":"

On the Cluster Overview page select the 'Snapshots' tab

","tags":["Operations"]},{"location":"Operations/backup_and_restore/#manage-cluster-snapshots","title":"> Manage > Cluster > Snapshots","text":"","tags":["Operations"]},{"location":"Operations/backup_and_restore/#backup","title":"Backup","text":"

Select the 'Backup Now' button

and the backup will start and you will be able to see the progress

","tags":["Operations"]},{"location":"Operations/backup_and_restore/#restore","title":"Restore","text":"","tags":["Operations"]},{"location":"Operations/backup_and_restore/#kubernetes-cr-initiated-backup-or-restore","title":"Kubernetes CR Initiated Backup or Restore","text":"

The Kinetica DB Operator supports two custom CRs: -

  • KineticaClusterBackup
  • KineticaClusterRestore

which can be used to perform a Backup of the database and a Restore of Kinetica namespaces.

","tags":["Operations"]},{"location":"Operations/backup_and_restore/#kineticaclusterbackup-cr","title":"KineticaClusterBackup CR","text":"

Submission of a KineticaClusterBackup CR will trigger the Kinetica DB Operator to perform a backup of a Kinetica DB instance.

Kinetica DB Offline

In order to perform a database backup, the Kinetica DB needs to be suspended so that Velero has access to the necessary disks. The DB will be stopped & restarted automatically by the Kinetica DB Operator as part of the backup process.

Example KineticaClusterBackup CR yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaClusterBackup\nmetadata:\n  name: kineticaclusterbackup-sample\n  namespace: gpudb\nspec:\n  includedNamespaces:\n    - gpudb\n

The namespace of the backup CR should be different from the namespace the Kinetica DB is running in, i.e. not gpudb. We recommend using the namespace Velero is deployed into.

Backup names are unique

The name of each KineticaClusterBackup CR must be unique; we therefore suggest including the date + time of the backup in the CR name to ensure uniqueness. Kubernetes CR names have a strict naming format, so the specified name must conform to those patterns.

For a detailed description of the KineticaClusterBackup CRD see here
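For example, a date-stamped backup CR can be generated and submitted directly from the shell; a minimal sketch, assuming Velero is deployed in the velero namespace and the DB runs in gpudb: -

Bash
cat <<EOF | kubectl apply -f -\napiVersion: app.kinetica.com/v1\nkind: KineticaClusterBackup\nmetadata:\n  name: kcb-$(date +%Y%m%d-%H%M%S)\n  namespace: velero\nspec:\n  includedNamespaces:\n    - gpudb\nEOF\n\n# watch the backup progress\nkubectl -n velero get kineticaclusterbackup -w\n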

","tags":["Operations"]},{"location":"Operations/backup_and_restore/#kineticaclusterrestore-cr","title":"KineticaClusterRestore CR","text":"

The easiest way to perform a restore of Kinetica for Kubernetes is to simply delete the gpudb namespace from the Kubernetes cluster.

Delete the Kinetica DB
kubectl delete ns gpudb\n

Kinetica DB Offline

In order to perform a database restore, the Kinetica DB needs to be suspended so that Velero has access to the necessary disks. The DB will be stopped & restarted automatically by the Kinetica DB Operator as part of the restore process.

Example KineticaClusterRestore CR yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaClusterRestore\nmetadata:\n  name: kineticaclusterrestore-sample\n  namespace: gpudb\nspec:\n  backupName: kineticaclusterbackup-sample\n

The namespace of the restore CR should be the same as the namespace the KineticaClusterBackup CR was placed in, i.e. not the namespace the Kinetica DB is running in.

Restore names are unique

The name of each KineticaClusterRestore CR must be unique; we therefore suggest including the date + time of the restore in the CR name to ensure uniqueness. Kubernetes CR names have a strict naming format, so the specified name must conform to those patterns.

For a detailed description of the KineticaClusterRestore CRD see here
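As with backups, a restore can be submitted and monitored from the shell once the gpudb namespace has been deleted as described above; a minimal sketch, assuming the KineticaClusterBackup CR named kineticaclusterbackup-sample exists in the velero namespace: -

Bash
cat <<EOF | kubectl apply -f -\napiVersion: app.kinetica.com/v1\nkind: KineticaClusterRestore\nmetadata:\n  name: kcr-$(date +%Y%m%d-%H%M%S)\n  namespace: velero\nspec:\n  backupName: kineticaclusterbackup-sample\nEOF\n\n# watch the restore progress\nkubectl -n velero get kineticaclusterrestore -w\n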

","tags":["Operations"]},{"location":"Operations/otel/","title":"OTEL Integration for Metric & Log Distribution","text":"

Helm installed OTEL Collector

By default an OpenTelemetry Collector is deployed in the kinetica-system namespace as part of the Helm install of the kinetica-operators Helm chart, along with a Kubernetes ConfigMap to configure this collector. The ConfigMap is in the kinetica-system namespace and is called otel-collector-conf.

The Kinetica DB Operators send information to an OpenTelemetry collector. There are two choices: -

  • install an OpenTelemetry collector with the Kinetica Operators Helm chart
  • use an existing provisioned OpenTelemetry collector within the Kubernetes Cluster
","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#install-an-opentelemetry-collector-with-the-kinetica-operators-helm-chart","title":"Install an OpenTelemetry collector with the Kinetica Operators Helm chart","text":"

To enable the Kinetica Operators Helm Chart to deploy an instance of the OpenTelemetry collector into the kinetica-system namespace, you need to set the following configuration in the helm values: -

","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#todo-add-helm-config-example-here","title":"TODO add Helm Config Example Here","text":"

A ConfigMap containing the OTEL collector configuration will be generated so that the necessary receivers and processors sections are correctly setup for a Kinetica DB Cluster.

This configuration will: -

receivers:

  • configure a syslog receiver which will receive logs from the Kinetica DB pod.
  • configure a prometheus receiver/scraper which will collect metrics from the Kinetica DB.
  • configure an otlp receiver which will receive trace spans from the Kinetica Operators (Optional).
  • configure the hostmetrics collection of host load & memory usage (Optional).
  • configure the k8s_events collection of Kubernetes Events for the Kinetica namespaces (Optional).

processors:

  • configure attribute processing to set some useful values
  • configure resource processing to set some useful values
","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#syslog-configuration","title":"syslog Configuration","text":"

The OpenTelemetry syslogreceiver documentation can be found here.

","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-receivers-configuration","title":"OTEL Receivers Configuration","text":"YAML
receivers:  \n  syslog:  \n    tcp:  \n      listen_address: \"0.0.0.0:9601\"  \n    protocol: rfc5424  \n
","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-service-configuration","title":"OTEL Service Configuration","text":"

Tip

In order to batch pushes of log data upstream you can use the following processors section in the OTEL configuration.

YAML
processors:  \n  batch:  \n
YAML
service:   \n  pipelines:\n    logs:\n      receivers: [syslog]\n      processors: [resourcedetection, attributes, resource, batch]\n      exporters: ... # Requires configuring for your environment\n
","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otlp-configuration","title":"otlp Configuration","text":"

The default configuration opens both the OTEL gRPC & HTTP listeners.

","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-receivers-configuration_1","title":"OTEL Receivers Configuration","text":"YAML
receivers:\n  otlp:  \n    protocols:  \n      grpc:  \n        endpoint: \"0.0.0.0:4317\"  \n      http:  \n        endpoint: \"0.0.0.0:4318\" \n
","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-service-configuration_1","title":"OTEL Service Configuration","text":"

Tip

In order to batch pushes of trace data upstream you can use the following processors section in the OTEL configuration.

YAML
processors:  \n  batch:  \n
YAML
service:\n  traces:\n    receivers: [otlp]\n    processors: [batch]\n    exporters:  ... # Requires configuring for your environment\n

exporters

The exporters will need to be manually configured to your specific environment e.g. forwarding logs/metrics to Grafana, Azure Monitor, AWS etc.

Otherwise the data will 'disappear into the ether' and not be relayed upstream.

","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#hostmetrics-configuration-optional","title":"hostmetrics Configuration (Optional)","text":"

The Host Metrics receiver generates metrics about the host system scraped from various sources. This is intended to be used when the collector is deployed as an agent.

The OpenTelemetry hostmetrics documentation can be found here.

","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#otel-receivers-configuration_2","title":"OTEL Receivers Configuration","text":"

hostmetricsreceiver

The OTEL hostmetricsreceiver requires that the running OTEL collector is the 'contrib' version.

YAML
receivers:\n  hostmetrics:  \n    # https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/hostmetricsreceiver  \n    scrapers:  \n      load:  \n      memory: \n
Grafana

The attributes and resource processing enables finer-grained selection using Grafana queries.
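The hostmetrics receiver then needs wiring into a metrics pipeline alongside the other receivers; a minimal sketch: -

YAML
service:\n  pipelines:\n    metrics:\n      receivers: [prometheus, hostmetrics]\n      processors: [batch]\n      exporters: ... # Requires configuring for your environment\n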

","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#k8s_events-configuration-optional","title":"k8s_events Configuration (Optional)","text":"

The Kubernetes Events receiver collects events from the Kubernetes API server. It collects all the new or updated events that come in from the specified namespaces. Below we are collecting events from the two default Kinetica namespaces: -

YAML
receivers:\n  k8s_events:  \n    namespaces: [kinetica-system, gpudb]  \n

The OpenTelemetry k8seventsreceiver documentation can be found here.

","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#use-an-existing-provisioned-opentelemetry-collector","title":"Use an existing provisioned OpenTelemetry Collector","text":"","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/otel/#coming-soon","title":"Coming Soon","text":"","tags":["Configuration","Operations","Monitoring"]},{"location":"Operations/rebalance/","title":"Kinetica for Kubernetes Data Rebalancing","text":"","tags":["Operations"]},{"location":"Operations/rebalance/#coming-soon","title":"Coming Soon","text":"","tags":["Operations"]},{"location":"Operations/suspend_resume/","title":"Kinetica for Kubernetes Suspend & Resume","text":"

It is possible to suspend Kinetica for Kubernetes, which spins down the DB.

Infrastructure

For each deployment of Kinetica for Kubernetes there are two distinct types of pods: -

  • 'Compute' pods containing the Kinetica DB along with the Stats Pod
  • 'Infra' pods containing the supporting apps, e.g. Workbench, OpenLDAP etc, and the Kinetica Operators.

Whilst Kinetica for Kubernetes is in the Suspended state, only the 'Compute' pods are scaled down. The 'Infra' pods remain running in order for Workbench to be able to log in, backup, restore and, in this case, resume the suspended system.

There are three discrete ways to suspend and resume Kinetica for Kubernetes: -

  • Manually from Workbench
  • Auto-Suspend set in Workbench or from the Helm installation Chart.
  • Manually using a Kubernetes CR (see the sketch below)
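For the CR-based route, the KineticaClusterAdmin CR (see the Kinetica Cluster Admins reference) exposes an offline flag that can be used to suspend the DB; a minimal sketch, assuming a cluster named kinetica-cluster in the gpudb namespace - submit a new CR with offline: false to resume: -

YAML
apiVersion: app.kinetica.com/v1\nkind: KineticaClusterAdmin\nmetadata:\n  name: suspend-db\n  namespace: gpudb\nspec:\n  kineticaClusterName: kinetica-cluster\n  offline:\n    offline: true\n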
","tags":["Operations"]},{"location":"Operations/suspend_resume/#suspend-manually-from-workbench","title":"Suspend - Manually from Workbench","text":"","tags":["Operations"]},{"location":"Operations/suspend_resume/#suspend-auto-suspend","title":"Suspend - Auto-Suspend","text":"","tags":["Operations"]},{"location":"Operations/suspend_resume/#suspend-manually-using-a-kubernetes-cr","title":"Suspend - Manually using a Kubernetes CR","text":"","tags":["Operations"]},{"location":"Operators/k3s/","title":"Overview","text":"

Kinetica Operators can be installed in any on-prem Kubernetes cluster. This document provides instructions to install the operators in k3s. If you are on another distribution, you should be able to change the values file to suit your environment.

You will need a license key for this to work. Please contact Kinetica Support.

"},{"location":"Operators/k3s/#kinetica-on-k3s-k3sio","title":"Kinetica on k3s (k3s.io)","text":"

The current version of the chart supports Kubernetes version 1.25 and above.

"},{"location":"Operators/k3s/#install-k3s-129","title":"Install k3s 1.29","text":"Bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable=traefik  --node-name kinetica-master --token 12345\" K3S_KUBECONFIG_OUTPUT=~/.kube/config_k3s K3S_KUBECONFIG_MODE=644 INSTALL_K3S_VERSION=v1.29.2+k3s1 sh -\n
"},{"location":"Operators/k3s/#k3s-install-kinetica-operators-including-a-sample-db-to-try-out","title":"K3s -Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.k3s.yaml. This installs the operators along with a simple DB and Workbench, for a non-production try-out.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.

If you are on a local machine that does not have a domain name, add the following entry to your /etc/hosts file or equivalent.

Text Only
127.0.0.1 local.kinetica\n
"},{"location":"Operators/k3s/#k3s-install-the-kinetica-operators-chart","title":"K3s - Install the kinetica-operators chart","text":"Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n
"},{"location":"Operators/k3s/#k3s-install-the-kinetica-operators-chart-gpu-capable-machine","title":"K3s - Install the kinetica-operators chart (GPU Capable Machine)","text":"

If you wish to try out the GPU capabilities, you can use the following values file, provided you are on an NVIDIA GPU-capable machine.

Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.gpu.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.gpu.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n

You should be able to access the workbench at http://local.kinetica

The username, as per the values file mentioned above, is kadmin and the password is Kinetica1234!

"},{"location":"Operators/k3s/#uninstall-k3s","title":"Uninstall k3s","text":"Bash
/usr/local/bin/k3s-uninstall.sh\n
"},{"location":"Operators/k8s/","title":"Overview","text":"

For managed Kubernetes solutions (AKS, EKS, GKE) or other on-prem K8s flavors, follow this generic guide to install the Kinetica Operators, Database and Workbench. A product license key will be required for install. Please contact Kinetica Support to request a trial key.

"},{"location":"Operators/k8s/#preparation-and-prerequisites","title":"Preparation and prerequisites","text":"

Installation requires Helm3 and access to an on-prem or CSP managed Kubernetes cluster. kubectl is optional but highly recommended. The context for the desired target cluster must be selected from your ~/.kube/config file or set via the KUBECONFIG environment variable. Check to see if you have the correct context with,

Bash
kubectl config current-context\n

and that you can access this cluster correctly with,

Bash
kubectl get nodes\n

If you do not see a list of nodes for your K8s cluster the helm installation will not work. Please check your Kubernetes installation or access credentials (kubeconfig).

"},{"location":"Operators/k8s/#install-the-kinetica-operators-chart","title":"Install the kinetica-operators chart","text":"

This chart will install the Kinetica K8s operators together with a default configured database and workbench UI.

If you are installing into a managed Kubernetes environment and the NGINX ingress controller that is installed as part of this install creates a LoadBalancer service, you may need to associate the LoadBalancer with the domain you plan to use.

Alternatively, if you are installing on a local machine which does not have a domain name, you can add the following entry to your /etc/hosts file or equivalent:

Bash
127.0.0.1  local.kinetica\n

Note that the default chart configuration points to local.kinetica but this is configurable.

"},{"location":"Operators/k8s/#1-add-the-kinetica-chart-repository","title":"1. Add the Kinetica chart repository","text":"

Add the repo locally as kinetica-operators:

Bash
helm repo add kinetica-operators https://kineticadb.github.io/charts\n
"},{"location":"Operators/k8s/#2-obtain-the-default-helm-values-file","title":"2. Obtain the default Helm values file","text":"

For the generic Kubernetes install use the following values file without modification. Advanced users with specific requirements may need to adjust parameters in this file.

Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k8s.yaml\n
"},{"location":"Operators/k8s/#3-determine-the-following-prior-to-the-chart-install","title":"3. Determine the following prior to the chart install","text":"

(a) Obtain a LICENSE-KEY as described in the introduction above. (b) Choose a PASSWORD for the initial administrator user (Note: the default in the chart for this user is kadmin but this is configurable). Non-ASCII characters and typographical symbols in the password must be escaped with a \"\\\". For example, --set dbAdminUser.password=\"MyPassword\\!\" (c) As the storage class name varies between K8s flavors, and there can be multiple, this must be prescribed in the chart installation. Obtain the DEFAULT-STORAGE-CLASS name with the command:

Bash
kubectl get sc -o name \n

Use the name found after the /. For example, in \"storageclass.storage.k8s.io/TheName\" use \"TheName\" as the parameter.
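For example, the bare class name can be extracted in one step; a convenience sketch: -

Bash
kubectl get sc -o name | sed 's|^storageclass.storage.k8s.io/||'\n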

"},{"location":"Operators/k8s/#4-install-the-helm-chart","title":"4. Install the helm chart","text":"

Run the following Helm install command after substituting values from section 3 above:

Bash
helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n
"},{"location":"Operators/k8s/#5-check-installation-progress","title":"5. Check installation progress","text":"

After a few moments, follow the progression of the main database pod startup with:

Bash
kubectl -n gpudb get po gpudb-0 -w\n

until it reaches \"gpudb-0 3/3 Running\" at which point the database should be ready and all other software installed in the cluster. You may have to run this command in a different terminal if the helm command from step 4 has not yet returned to the system prompt. Once running, you can quit this kubectl watch command using ctrl-c.

"},{"location":"Operators/k8s/#6-accessing-the-kinetica-installation","title":"6. Accessing the Kinetica installation","text":""},{"location":"Operators/k8s/#optional-install-a-development-chart-version","title":"(Optional) Install a development chart version","text":"

Find all alternative chart versions with:

Bash
helm search repo kinetica-operators --devel --versions\n

Then append --devel --version [CHART-DEVEL-VERSION] to the end of the Helm install command in section 4 above.

"},{"location":"Operators/k8s/#k8s-flavour-specific-notes","title":"K8s Flavour specific notes","text":""},{"location":"Operators/k8s/#eks","title":"EKS","text":""},{"location":"Operators/k8s/#ebs-csi-driver","title":"EBS CSI driver","text":"

Make sure you have enabled the ebs-csi driver in your EKS cluster. This is required for the default storage class to work. Please refer to this AWS documentation for more information.
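If the driver is not yet enabled, it can for example be added as an EKS managed add-on; a sketch using eksctl, assuming your cluster is named my-cluster - note that an IAM role with the EBS CSI policy may also be required, per the AWS documentation linked above: -

Bash
eksctl create addon --name aws-ebs-csi-driver --cluster my-cluster\n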

"},{"location":"Operators/k8s/#ingress","title":"Ingress","text":"

As of now, the kinetica-operator chart installs the NGINX ingress controller. So after the installation is complete, you may need to edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name.

"},{"location":"Operators/k8s/#option-1-use-the-loadbalancer-domain","title":"Option 1: Use the LoadBalancer domain","text":"Bash
kubectl get svc -n kinetica-system\n# look at the loadbalancer dns name, copy it\n\nkubectl -n gpudb edit $(kubectl -n gpudb get kc -o name)\n# replace local.kinetica with the loadbalancer dns name\nkubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)\n# replace local.kinetica with the loadbalancer dns name\n# save and exit\n# you should be able to access the workbench from the loadbalancer dns name\n
"},{"location":"Operators/k8s/#option-1-use-your-custom-domain","title":"Option 1: Use your custom domain","text":"

Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.

"},{"location":"Operators/kind/","title":"Overview","text":"

This installation in a kind cluster is for trying out the operators and the database in a non production environment. This method currently only supports installing a CPU version of the database.

You will need a license key for this to work. Please contact Kinetica Support.

"},{"location":"Operators/kind/#kind-kubernetes-in-docker-kindsigsk8sio","title":"Kind (kubernetes in docker kind.sigs.k8s.io)","text":""},{"location":"Operators/kind/#create-kind-cluster-129","title":"Create Kind Cluster 1.29","text":"Bash
kind create cluster --config charts/kinetica-operators/kind.yaml\n
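The referenced kind.yaml lives in the charts repository checkout; if you do not have one, a minimal equivalent config is sketched below (an illustrative assumption, mapping host ports 80/443 so the ingress is reachable): -

YAML
kind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n  - role: control-plane\n    extraPortMappings:\n      - containerPort: 80\n        hostPort: 80\n      - containerPort: 443\n        hostPort: 443\n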
"},{"location":"Operators/kind/#kind-install-kinetica-operators-including-a-sample-db-to-try-out","title":"Kind - Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.kind.yaml. This installs the operators along with a simple DB and Workbench, for a non-production try-out.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.

"},{"location":"Operators/kind/#kind-install-the-kinetica-operators-chart","title":"Kind - Install the kinetica-operators chart","text":"Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.kind.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators/ --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n

You should be able to access the workbench at http://local.kinetica

The username, as per the values file mentioned above, is kadmin and the password is Kinetica1234!

"},{"location":"Operators/kinetica-operators/","title":"Kinetica DB Operator Helm Charts","text":"

To install all the required operators in a single command, perform the following: -

Bash
helm install -n kinetica-system \\\nkinetica-operators kinetica-operators/kinetica-operators --create-namespace\n

This will install all the Kubernetes Operators required into the kinetica-system namespace and create the namespace if it is not currently present.

Note

Depending on which target platform you are installing to, it may be necessary to supply an additional parameter pointing to a values file to successfully provision the DB.

Bash
helm install -n kinetica-system -f values.yaml --set provider=aks \\\nkinetica-operators kinetica-operators/kinetica-operators --create-namespace\n

The command above uses a custom values.yaml for helm and sets the install platform to Microsoft Azure AKS.

Currently supported providers are: -

  • aks - Microsoft Azure AKS
  • eks - Amazon AWS EKS
  • local - Generic 'On-Prem' Kubernetes Clusters e.g. one deployed using kubeadm

Example Helm values.yaml for different Cloud Providers/On-Prem installations: -

Azure AKS values.yaml
namespace: kinetica-system\n\ndb:\n  serviceAccount: {}\n  image:\n    # Kinetica DB Operator installer image\n    repository: \"registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator\"\n    #  Kinetica DB Operator installer image tag\n    tag: \"\"\n\n  parameters:\n    # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to\n    kubeconfig: \"\"\n    # The storage class to use for PVCs\n    storageClass: \"managed-premium\"\n\n  storageClass:\n    persist:\n      # Workbench Operator Persistent Volume Storage Class\n      provisioner: \"disk.csi.azure.com\"\n    procs:\n      # Workbench Operator Procs Volume Storage Class\n      provisioner: \"disk.csi.azure.com\"\n    cache:\n      # Workbench Operator Cache Volume Storage Class\n      provisioner: \"disk.csi.azure.com\"\n

15 storageClass: \"managed-premium\" - sets the appropriate storageClass for Microsoft Azure AKS Persistent Volume (PV)

20 provisioner: \"disk.csi.azure.com\" - sets the appropriate disk provisioner for the DB (Persist) filesystem for Microsoft Azure

23 provisioner: \"disk.csi.azure.com\" - sets the appropriate disk provisioner for the DB Procs filesystem for Microsoft Azure

26 provisioner: \"disk.csi.azure.com\" - sets the appropriate disk provisioner for the DB Cache filesystem for Microsoft Azure

Amazon EKS values.yaml
namespace: kinetica-system\n\ndb:\n  serviceAccount: {}\n  image:\n    # Kinetica DB Operator installer image\n    repository: \"registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator\"\n    #  Kinetica DB Operator installer image tag\n    tag: \"\"\n\n  parameters:\n    # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to\n    kubeconfig: \"\"\n    # The storage class to use for PVCs\n    storageClass: \"gp2\"\n\n  storageClass:\n    persist:\n      # Workbench Operator Persistent Volume Storage Class\n      provisioner: \"kubernetes.io/aws-ebs\"\n    procs:\n      # Workbench Operator Procs Volume Storage Class\n      provisioner: \"kubernetes.io/aws-ebs\"\n    cache:\n      # Workbench Operator Cache Volume Storage Class\n      provisioner: \"kubernetes.io/aws-ebs\"\n

15 storageClass: \"gp2\" - sets the appropriate storageClass for Amazon EKS Persistent Volume (PV)

20 provisioner: \"kubernetes.io/aws-ebs\" - sets the appropriate disk provisioner for the DB (Persist) filesystem for Amazon EKS

23 provisioner: \"kubernetes.io/aws-ebs\" - sets the appropriate disk provisioner for the DB Procs filesystem for Amazon EKS

26 provisioner: \"kubernetes.io/aws-ebs\" - sets the appropriate disk provisioner for the DB Cache filesystem for Amazon EKS

On-Prem values.yaml
namespace: kinetica-system\n\ndb:\n  serviceAccount: {}\n  image:\n    # Kinetica DB Operator installer image\n    repository: \"registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator\"\n    #  Kinetica DB Operator installer image tag\n    tag: \"\"\n\n  parameters:\n    # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to\n    kubeconfig: \"\"\n    # the type of installation e.g. aks, eks, local\n    environment: \"local\"\n    # The storage class to use for PVCs\n    storageClass: \"standard\"\n\n  storageClass:\n    procs: {}\n    persist: {}\n    cache: {}\n

15 environment: \"local\" - tells the DB Operator to deploy the DB as a 'local' instance to the Kubernetes Cluster

17 storageClass: \"standard\" - sets the appropriate storageClass for the On-Prem Persistent Volume Provisioner

storageClass

The storageClass should be present in the target environment.

A list of available storageClass can be obtained using: -

Bash
kubectl get sc\n
"},{"location":"Operators/kinetica-operators/#components","title":"Components","text":"

The kinetica-db Helm Chart wraps the deployment of a number of sub-components: -

  • Porter Operator
  • Kinetica Database Operator
  • Kinetica Workbench Operator

Installation/Upgrading/Deletion of the Kinetica Operators is done via two CRs, which leverage porter.sh as the orchestrator. The corresponding Porter Operator, DB Operator & Workbench Operator CRs are submitted by running the appropriate helm command, i.e.

  • install
  • upgrade
  • uninstall
"},{"location":"Operators/kinetica-operators/#porter-operator","title":"Porter Operator","text":""},{"location":"Operators/kinetica-operators/#database-operator","title":"Database Operator","text":"

The Kinetica DB Operator installation CR for the porter.sh operator is: -

YAML
apiVersion: porter.sh/v1\nkind: Installation\nmetadata:\n  annotations:\n    meta.helm.sh/release-name: kinetica-operators\n    meta.helm.sh/release-namespace: kinetica-system\n  labels:\n    app.kubernetes.io/instance: kinetica-operators\n    app.kubernetes.io/managed-by: Helm\n    app.kubernetes.io/name: kinetica-operators\n    app.kubernetes.io/version: 0.1.0\n    helm.sh/chart: kinetica-operators-0.1.0\n    installVersion: 0.38.10\n  name: kinetica-operators-operator-install\n  namespace: kinetica-system\nspec:\n  action: install\n  agentConfig:\n    volumeSize: '0'\n  parameters:\n    environment: local\n    storageclass: managed-premium\n  reference: docker.io/kinetica/kinetica-k8s-operator:v7.1.9-7.rc3\n
"},{"location":"Operators/kinetica-operators/#workbench-operator","title":"Workbench Operator","text":"

The Kinetica Workbench installation CR for the porter.sh operator is: -

YAML
apiVersion: porter.sh/v1\nkind: Installation\nmetadata:\n  annotations:\n    meta.helm.sh/release-name: kinetica-operators\n    meta.helm.sh/release-namespace: kinetica-system\n  labels:\n    app.kubernetes.io/instance: kinetica-operators\n    app.kubernetes.io/managed-by: Helm\n    app.kubernetes.io/name: kinetica-operators\n    app.kubernetes.io/version: 0.1.0\n    helm.sh/chart: kinetica-operators-0.1.0\n    installVersion: 0.38.10\n  name: kinetica-operators-wb-operator-install\n  namespace: kinetica-system\nspec:\n  action: install\n  agentConfig:\n    volumeSize: '0'\n  parameters:\n    environment: local\n  reference: docker.io/kinetica/workbench-operator:v7.1.9-7.rc3\n
"},{"location":"Operators/kinetica-operators/#overriding-images-tags","title":"Overriding Images Tags","text":"Bash
helm install -n kinetica-system kinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--set provider=aks \\\n--set dbOperator.image.tag=v7.1.9-7.rc3 \\\n--set dbOperator.image.repository=docker.io/kinetica/kinetica-k8s-operator \\\n--set wbOperator.image.repository=docker.io/kinetica/workbench-operator \\\n--set wbOperator.image.tag=v7.1.9-7.rc3\n
"},{"location":"Reference/","title":"Reference Section","text":"
  • Kinetica Operators Helm

    Kinetica Operators Helm charts & values file reference data. Charts

  • Kinetica Core DB CRDs

    Kinetica DB Kubernetes CRD & ConfigMap reference data. Cluster CRDs

  • Kinetica Workbench CRDs

    Kinetica Workbench Kubernetes CRD & ConfigMap reference data. Workbench

","tags":["Reference"]},{"location":"Reference/database/","title":"Kinetica Database Configuration","text":"
  • kubectl (yaml)
","tags":["Reference"]},{"location":"Reference/database/#kineticacluster","title":"KineticaCluster","text":"

To deploy a new Database Instance into a Kubernetes cluster...

kubectl

Using kubectl, a CustomResource of type KineticaCluster is used to define a new Kinetica DB Cluster in a yaml file.

The basic Group, Version, Kind or GVK to instantiate a Kinetica DB Cluster is as follows: -

kineticacluster.yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\n
","tags":["Reference"]},{"location":"Reference/database/#metadata","title":"Metadata","text":"

to which we add a metadata: block for the name of the DB CR along with the namespace into which we are targeting the installation of the DB cluster.

kineticacluster.yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n  name: my-kinetica-db-cr\n  namespace: gpudb\nspec:\n
","tags":["Reference"]},{"location":"Reference/database/#spec","title":"Spec","text":"

Under the spec: section of the KineticaCluster CR we have a number of sections supporting different aspects of the deployed DB cluster: -

  • gpudbCluster
  • autoSuspend
  • gadmin
","tags":["Reference"]},{"location":"Reference/database/#gpudbcluster","title":"gpudbCluster","text":"

Configuration items specific to the DB itself.

kineticacluster.yaml - gpudbCluster
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n  name: my-kinetica-db-cr\n  namespace: gpudb\nspec:\n  gpudbCluster:\n
","tags":["Reference"]},{"location":"Reference/database/#gpudbcluster_1","title":"gpudbCluster","text":"cluster name & size
clusterName: kinetica-cluster \nclusterSize: \n  tshirtSize: M \n  tshirtType: LargeCPU \nfqdn: kinetica-cluster.saas.kinetica.com\nhaRingName: default\nhasPools: false    \n

1. clusterName - the user defined name of the Kinetica DB Cluster

2. clusterSize - block that defines the number of DB Ranks to run

3. tshirtSize - sets the cluster size to a defined size based upon the t-shirt size. Valid sizes are: -

  • XS - 1 DB Rank
  • S - 2 DB Ranks
  • M - 4 DB Ranks
  • L - 8 DB Ranks
  • XL - 16 DB Ranks
  • XXL - 32 DB Ranks
  • XXXL - 64 DB Ranks

4. tshirtType - block that defines the type of DB Ranks to run: -

  • SmallCPU -
  • LargeCPU -
  • SmallGPU -
  • LargeGPU -

5. fqdn - The fully qualified URL for the DB cluster. Used on the Ingress records for any exposed services.

6. haRingName - Default: default

7. hasPools - Whether to enable the separate node 'pools' for \"infra\", \"compute\" pod scheduling. Default: false +optional

","tags":["Reference"]},{"location":"Reference/database/#autosuspend","title":"autoSuspend","text":"

The DB Cluster autosuspend section allows for the spinning down of the core DB Pods to release the underlying Kubernetes nodes to reduce infrastructure costs when the DB is not in use.

kineticacluster.yaml - autoSuspend
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n  name: my-kinetica-db-cr\n  namespace: gpudb\nspec:\n  autoSuspend:\n    enabled: false\n    inactivityDuration: 1h0m0s\n

7. the start of the autoSuspend definition

8. enabled when set to true auto suspend of the DB cluster is enabled otherwise set to false and no automatic suspending of the DB takes place. If omitted it defaults to false

9. inactivityDuration the duration after which, if no DB activity has taken place, the DB will be suspended

Horizontal Pod Autoscaler

In order for autoSuspend to work correctly the Kubernetes Horizontal Pod Autoscaler needs to be deployed to the cluster.

","tags":["Reference"]},{"location":"Reference/database/#gadmin","title":"gadmin","text":"

GAdmin - the Database Administration Console

kineticacluster.yaml - gadmin
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n  name: my-kinetica-db-cr\n  namespace: gpudb\nspec:\n  gadmin:\n    containerPort:\n      containerPort: 8080\n      name: gadmin\n      protocol: TCP\n    isEnabled: true\n

7. gadmin configuration block definition

8. containerPort configuration block i.e. where gadmin is exposed on the DB Pod

9. containerPort the port number as an integer. Default: 8080

10. name the name of the port being exposed. Default: gadmin

11. protocol the network protocol used. Default: TCP

12. isEnabled whether gadmin is exposed from the DB pod. Default: true
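With isEnabled: true, GAdmin can for example be reached locally via a port-forward; a convenience sketch - gpudb-0 is the main DB pod shown earlier in this document: -

Bash
kubectl -n gpudb port-forward gpudb-0 8080:8080\n# then browse to http://localhost:8080\n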

","tags":["Reference"]},{"location":"Reference/database/#kineticauser","title":"KineticaUser","text":"","tags":["Reference"]},{"location":"Reference/database/#kineticagrant","title":"KineticaGrant","text":"","tags":["Reference"]},{"location":"Reference/database/#kineticaschema","title":"KineticaSchema","text":"","tags":["Reference"]},{"location":"Reference/database/#kineticaresourcegroup","title":"KineticaResourceGroup","text":"","tags":["Reference"]},{"location":"Reference/helm_kinetica_operators/","title":"Helm Chart Reference","text":"","tags":["Reference"]},{"location":"Reference/helm_kinetica_operators/#coming-soon","title":"Coming Soon","text":"","tags":["Reference"]},{"location":"Reference/kinetica_cluster_admins/","title":"Kinetica Cluster Admins Reference","text":"","tags":["Reference"]},{"location":"Reference/kinetica_cluster_admins/#full-kineticaclusteradmin-cr-structure","title":"Full KineticaClusterAdmin CR Structure","text":"kineticaclusteradmins.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterAdmin\nmetadata: {}\n# KineticaClusterAdminSpec defines the desired state of\n# KineticaClusterAdmin\nspec:\n  # ForceDBStatus - Force a Status of the DB.\n  forceDbStatus: string\n  # Name - The name of the cluster to target.\n  kineticaClusterName: string\n  # Offline - Pause/Resume of the DB.\n  offline:\n    # Set to true if desired state is offline. The supported values are:\n    # true false\n    offline: false\n    # Optional parameters. The default value is an empty map (\n    # {} ). Supported Parameters: flush_to_disk Flush to disk when\n    # going offline The supported values are: true false\n    options: {}\n  # Rebalance of the DB.\n  rebalance:\n    # Optional parameters. The default value is an empty map (\n    # {} ). Supported Parameters: rebalance_sharded_data        If true,\n    # sharded data will be rebalanced approximately equally across the\n    # cluster. Note that for clusters with large amounts of sharded\n    # data, this data transfer could be time-consuming and result in\n    # delayed query responses. The default value is true. The supported\n    # values are: true false rebalance_unsharded_data   If true,\n    # unsharded data (a.k.a. randomly-sharded) will be rebalanced\n    # approximately equally across the cluster. Note that for clusters\n    # with large amounts of unsharded data, this data transfer could be\n    # time-consuming and result in delayed query responses. The default\n    # value is true. The supported values are: true false\n    # table_includes                Comma-separated list of unsharded table names\n    # to rebalance. Not applicable to sharded tables because they are\n    # always rebalanced. Cannot be used simultaneously with\n    # table_excludes. This parameter is ignored if\n    # rebalance_unsharded_data is false.\n    # table_excludes                Comma-separated list of unsharded table names\n    # to not rebalance. Not applicable to sharded tables because they\n    # are always rebalanced. Cannot be used simultaneously with\n    # table_includes. This parameter is ignored if rebalance_\n    # unsharded_data is false. aggressiveness               Influences how much\n    # data is moved at a time during rebalance. A higher aggressiveness\n    # will complete the rebalance faster. A lower aggressiveness will\n    # take longer but allow for better interleaving between the\n    # rebalance and other queries. Valid values are constants from 1\n    # (lowest) to 10 (highest). The default value is '1'.\n    # compact_after_rebalance   Perform compaction of deleted records\n    # once the rebalance completes to reclaim memory and disk space.\n    # Default is true, unless repair_incorrectly_sharded_data is set to\n    # true. The default value is true. 
The supported values are: true\n    # false compact_only                If set to true, ignore rebalance options\n    # and attempt to perform compaction of deleted records to reclaim\n    # memory and disk space without rebalancing first. The default\n    # value is false. The supported values are: true false\n    # repair_incorrectly_sharded_data       Scans for any data sharded\n    # incorrectly and re-routes the data to the correct location. Only\n    # necessary if /admin/verifydb reports an error in sharding\n    # alignment. This can be done as part of a typical rebalance after\n    # expanding the cluster or in a standalone fashion when it is\n    # believed that data is sharded incorrectly somewhere in the\n    # cluster. Compaction will not be performed by default when this is\n    # enabled. If this option is set to true, the time necessary to\n    # rebalance and the memory used by the rebalance may increase. The\n    # default value is false. The supported values are: true false\n    options: {}\n  # RegenerateDBConfig - Force regenerate of DB ConfigMap. true -\n  # restarts DB Pods after config generation false - writes new\n  # configuration without restarting the DB Pods\n  regenerateDBConfig:\n    # Restart - Scales down the DB STS and back up once the DB\n    # Configuration has been regenerated.\n    restart: false\n# KineticaClusterAdminStatus defines the observed state of\n# KineticaClusterAdmin\nstatus:\n  # Phase - The current phase/state of the Admin request\n  phase: string\n  # Processed - Indicates if the admin request has already been\n  # processed. Avoids the request being rerun in the case the Operator\n  # gets restarted.\n  processed: false\n
","tags":["Reference"]},{"location":"Reference/kinetica_cluster_backups/","title":"Kinetica Cluster Backups Reference","text":"","tags":["Reference","Operations"]},{"location":"Reference/kinetica_cluster_backups/#full-kineticaclusterbackup-cr-structure","title":"Full KineticaClusterBackup CR Structure","text":"kineticaclusterbackups.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterBackup \nmetadata: {}\n# Fields specific to the linked backup engine\nprovider:\n  # Name of the backup/restore provider. FOR INTERNAL USE ONLY.\n  backupProvider: \"velero\"\n  # Name of the backup in the linked BackupProvider. FOR INTERNAL USE\n  # ONLY.\n  linkedItemName: \"\"\n# BackupSpec defines the specification for a Velero backup.\nspec:\n  # DefaultVolumesToRestic specifies whether restic should be used to\n  # take a backup of all pod volumes by default.\n  defaultVolumesToRestic: true\n  # ExcludedNamespaces contains a list of namespaces that are not\n  # included in the backup.\n  excludedNamespaces: [\"string\"]\n  # ExcludedResources is a slice of resource names that are not included\n  # in the backup.\n  excludedResources: [\"string\"]\n  # Hooks represent custom behaviors that should be executed at\n  # different phases of the backup.\n  hooks:\n    # Resources are hooks that should be executed when backing up\n    # individual instances of a resource.\n    resources:\n    - excludedNamespaces: [\"string\"]\n      # ExcludedResources specifies the resources to which this hook\n      # spec does not apply.\n      excludedResources: [\"string\"]\n      # IncludedNamespaces specifies the namespaces to which this hook\n      # spec applies. If empty, it applies to all namespaces.\n      includedNamespaces: [\"string\"]\n      # IncludedResources specifies the resources to which this hook\n      # spec applies. If empty, it applies to all resources.\n      includedResources: [\"string\"]\n      # LabelSelector, if specified, filters the resources to which this\n      # hook spec applies.\n      labelSelector:\n        # matchExpressions is a list of label selector requirements. The\n        # requirements are ANDed.\n        matchExpressions:\n        - key: string\n          # operator represents a key's relationship to a set of values.\n          # Valid operators are In, NotIn, Exists and DoesNotExist.\n          operator: string\n          # values is an array of string values. If the operator is In\n          # or NotIn, the values array must be non-empty. If the\n          # operator is Exists or DoesNotExist, the values array must\n          # be empty. This array is replaced during a strategic merge\n          # patch.\n          values: [\"string\"]\n        # matchLabels is a map of {key,value} pairs. A single\n        # {key,value} in the matchLabels map is equivalent to an\n        # element of matchExpressions, whose key field is \"key\", the\n        # operator is \"In\", and the values array contains only \"value\".\n        # The requirements are ANDed.\n        matchLabels: {}\n      # Name is the name of this hook.\n      name: string\n      # PostHooks is a list of BackupResourceHooks to execute after\n      # storing the item in the backup. 
These are executed after\n      # all \"additional items\" from item actions are processed.\n      post:\n      - exec:\n          # Command is the command and arguments to execute.\n          command: [\"string\"]\n          # Container is the container in the pod where the command\n          # should be executed. If not specified, the pod's first\n          # container is used.\n          container: string\n          # OnError specifies how Velero should behave if it encounters\n          # an error executing this hook.\n          onError: string\n          # Timeout defines the maximum amount of time Velero should\n          # wait for the hook to complete before considering the\n          # execution a failure.\n          timeout: string\n      # PreHooks is a list of BackupResourceHooks to execute prior to\n      # storing the item in the backup. These are executed before\n      # any \"additional items\" from item actions are processed.\n      pre:\n      - exec:\n          # Command is the command and arguments to execute.\n          command: [\"string\"]\n          # Container is the container in the pod where the command\n          # should be executed. If not specified, the pod's first\n          # container is used.\n          container: string\n          # OnError specifies how Velero should behave if it encounters\n          # an error executing this hook.\n          onError: string\n          # Timeout defines the maximum amount of time Velero should\n          # wait for the hook to complete before considering the\n          # execution a failure.\n          timeout: string\n  # IncludeClusterResources specifies whether cluster-scoped resources\n  # should be included for consideration in the backup.\n  includeClusterResources: true\n  # IncludedNamespaces is a slice of namespace names to include objects\n  # from. If empty, all namespaces are included.\n  includedNamespaces: [\"string\"]\n  # IncludedResources is a slice of resource names to include in the\n  # backup. If empty, all resources are included.\n  includedResources: [\"string\"]\n  # LabelSelector is a metav1.LabelSelector to filter with when adding\n  # individual objects to the backup. If empty or nil, all objects are\n  # included. Optional.\n  labelSelector:\n    # matchExpressions is a list of label selector requirements. The\n    # requirements are ANDed.\n    matchExpressions:\n    - key: string\n      # operator represents a key's relationship to a set of values.\n      # Valid operators are In, NotIn, Exists and DoesNotExist.\n      operator: string\n      # values is an array of string values. If the operator is In or\n      # NotIn, the values array must be non-empty. If the operator is\n      # Exists or DoesNotExist, the values array must be empty. This\n      # array is replaced during a strategic merge patch.\n      values: [\"string\"]\n    # matchLabels is a map of {key,value} pairs. A single {key,value} in\n    # the matchLabels map is equivalent to an element of\n    # matchExpressions, whose key field is \"key\", the operator is \"In\",\n    # and the values array contains only \"value\". The requirements are\n    # ANDed.\n    matchLabels: {} metadata: labels: {}\n  # OrderedResources specifies the backup order of resources of specific\n  # Kind. The map key is the Kind name and value is a list of resource\n  # names separated by commas. Each resource name has\n  # format \"namespace/resourcename\".  
For cluster resources, simply\n  # use \"resourcename\".\n  orderedResources: {}\n  # SnapshotVolumes specifies whether to take cloud snapshots of any\n  # PV's referenced in the set of objects included in the Backup.\n  snapshotVolumes: true\n  # StorageLocation is a string containing the name of a\n  # BackupStorageLocation where the backup should be stored.\n  storageLocation: string\n  # TTL is a time.Duration-parseable string describing how long the\n  # Backup should be retained for.\n  ttl: string\n  # VolumeSnapshotLocations is a list containing names of\n  # VolumeSnapshotLocations associated with this backup.\n  volumeSnapshotLocations: [\"string\"] status:\n  # ClusterSize the current number of ranks & type i.e. CPU or GPU of\n  # the cluster when the backup took place.\n  clusterSize:\n    # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a\n    # representation of the number of nodes in a simple to understand\n    # T-Short size scheme. This indicates the size of the cluster i.e.\n    # the number of nodes. It does not identify the size of the cloud\n    # provider nodes. For node size see ClusterTypeEnum. Supported\n    # Values are: - XS S M L XL XXL XXXL\n    tshirtSize: string\n    # ClusterTypeEnum - An Enum of the node types of a KineticaCluster\n    # e.g. CPU, GPU along with the Cloud Provider node size e.g. size\n    # of the VM.\n    tshirtType: string coldTierBackup: string\n  # CompletionTimestamp records the time a backup was completed.\n  # Completion time is recorded even on failed backups. Completion time\n  # is recorded before uploading the backup object. The server's time\n  # is used for CompletionTimestamps\n  completionTimestamp: string\n  # Errors is a count of all error messages that were generated during\n  # execution of the backup.  The actual errors are in the backup's log\n  # file in object storage.\n  errors: 1\n  # Expiration is when this Backup is eligible for garbage-collection.\n  expiration: string\n  # FormatVersion is the backup format version, including major, minor,\n  # and patch version.\n  formatVersion: string\n  # Phase is the current state of the Backup.\n  phase: string\n  # Progress contains information about the backup's execution progress.\n  # Note that this information is best-effort only -- if Velero fails\n  # to update it during a backup for any reason, it may be\n  # inaccurate/stale.\n  progress:\n    # ItemsBackedUp is the number of items that have actually been\n    # written to the backup tarball so far.\n    itemsBackedUp: 1\n    # TotalItems is the total number of items to be backed up. This\n    # number may change throughout the execution of the backup due to\n    # plugins that return additional related items to back up, the\n    # velero.io/exclude-from-backup label, and various other filters\n    # that happen as items are processed.\n    totalItems: 1\n  # StartTimestamp records the time a backup was started. Separate from\n  # CreationTimestamp, since that value changes on restores. The\n  # server's time is used for StartTimestamps\n  startTimestamp: string\n  # ValidationErrors is a slice of all validation errors\n  # (if applicable).\n  validationErrors: [\"string\"]\n  # Version is the backup format major version. 
Deprecated: Please see\n  # FormatVersion\n  version: 1\n  # VolumeSnapshotsAttempted is the total number of attempted volume\n  # snapshots for this backup.\n  volumeSnapshotsAttempted: 1\n  # VolumeSnapshotsCompleted is the total number of successfully\n  # completed volume snapshots for this backup.\n  volumeSnapshotsCompleted: 1\n  # Warnings is a count of all warning messages that were generated\n  # during execution of the backup. The actual warnings are in the\n  # backup's log file in object storage.\n  warnings: 1\n
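
A minimal working CR needs only a few of the fields above. The sketch below assumes the KineticaBackup kind listed under DB Backups, a cluster namespace of gpudb, and illustrative name/TTL values; adjust all of them to your environment.

YAML
apiVersion: app.kinetica.com/v1
kind: KineticaBackup
metadata:
  # Hypothetical names - substitute your own.
  name: nightly-backup
  namespace: gpudb
spec:
  # Back up only the objects in the DB cluster's namespace.
  includedNamespaces: ["gpudb"]
  # Snapshot any PVs referenced by the included objects.
  snapshotVolumes: true
  # Retain the backup for 30 days (time.Duration format).
  ttl: "720h"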
","tags":["Reference","Operations"]},{"location":"Reference/kinetica_cluster_grants/","title":"Kinetica Cluster Grants CRD Reference","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_grants/#full-kineticagrant-cr-structure","title":"Full KineticaGrant CR Structure","text":"kineticagrants.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaGrant \nmetadata: {}\n# KineticaGrantSpec defines the desired state of KineticaGrant\nspec:\n  # Grants system-level and/or table permissions to a user or role.\n  addGrantAllOnSchemaRequest:\n    # Name of the user or role that will be granted membership in input\n    # parameter role. Must be an existing user or role.\n    member: string\n    # Optional parameters. The default value is an empty map ( {} ).\n    options: {}\n    # SchemaName - name of the schema on which to perform the Grant All\n    schemaName: string\n  # Grants system-level and/or table permissions to a user or role.\n  addGrantPermissionRequest:\n    # Optional parameters. The default value is an empty map ( {} ).\n    options: {}\n    # Permission to grant to the user or role. Supported\n    # Values    Description system_admin    Full access to all data and\n    # system functions. system_user_admin   Access to administer users\n    # and roles that do not have system_admin permission.\n    # system_write  Read and write access to all tables.\n    # system_read   Read-only access to all tables.\n    systemPermission:\n      # UID of the user or role to which the permission will be granted.\n      # Must be an existing user or role.\n      name: string\n      # Optional parameters. The default value is an empty map (\n      # {} ). Supported Parameters: resource_group  Name of an existing\n      # resource group to associate with this role.\n      options: {}\n      # Permission to grant to the user or role. Supported\n      # Values  Description table_admin Full read/write and\n      # administrative access to the table. table_insert    Insert access\n      # to the table. table_update  Update access to the table.\n      # table_delete    Delete access to the table. table_read  Read access\n      # to the table.\n      permission: string\n    # Permission to grant to the user or role. Supported\n    # Values    Description<br/> system_admin   Full access to all data and\n    # system functions.<br/> system_user_admin  Access to administer\n    # users and roles that do not have system_admin permission.<br/>\n    # system_write  Read and write access to all tables.<br/>\n    # system_read   Read-only access to all tables.<br/>\n    tablePermissions:\n    - filter_expression: \"\"\n      # UID of the user or role to which the permission will be granted.\n      # Must be an existing user or role.\n      name: string\n      # Optional parameters. The default value is an empty map (\n      # {} ). Supported Parameters: resource_group  Name of an existing\n      # resource group to associate with this role.\n      options: {}\n      # Permission to grant to the user or role. Supported\n      # Values  Description table_admin Full read/write and\n      # administrative access to the table. table_insert    Insert access\n      # to the table. 
table_update  Update access to the table.\n      # table_delete    Delete access to the table. table_read  Read access\n      # to the table.\n      permission: string\n      # Name of the table for which the Permission is to be granted\n      table_name: string\n  # Grants membership in a role to a user or role.\n  addGrantRoleRequest:\n    # Name of the user or role that will be granted membership in input\n    # parameter role. Must be an existing user or role.\n    member: string\n    # Optional parameters. The default value is an empty map ( {} ).\n    options: {}\n    # Name of the role in which membership will be granted. Must be an\n    # existing role.\n    role: string\n  # Debug - debug the call\n  debug: false\n  # RingName is the name of the kinetica ring that this user belongs\n  # to.\n  ringName: string\n# KineticaGrantStatus defines the observed state of KineticaGrant\nstatus:\n  # DBStringResponse - The GPUdb server embeds the endpoint response\n  # inside a standard response structure which contains status\n  # information and the actual response to the query.\n  db_response:\n    data: string\n    # This embedded JSON represents the result of the endpoint\n    data_str: string\n    # API Call Specific\n    data_type: string\n    # Empty if success or an error message\n    message: string\n    # 'OK' or 'ERROR'\n    status: string\n  ldap_response: string\n
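
In practice a single KineticaGrant CR usually exercises just one of the request blocks above. A minimal sketch granting an existing role to an existing user via addGrantRoleRequest (all names here are hypothetical):

YAML
apiVersion: app.kinetica.com/v1
kind: KineticaGrant
metadata:
  name: grant-analyst-to-jdoe
  namespace: gpudb
spec:
  # Assumed ring name - use the ring your cluster was created with.
  ringName: default-ring
  addGrantRoleRequest:
    # Existing user receiving the membership.
    member: jdoe
    # Existing role being granted.
    role: analyst
    options: {}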
","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_reference/","title":"Core DB CRDs","text":"
  • DB Clusters

    Core Kinetica Database Cluster Management CRD & sample CR.

    KineticaCluster

  • DB Users

    Kinetica Database User Management CRD & sample CR.

    KineticaUser

  • DB Roles

    Kinetica Database Role Management CRD & sample CR.

    KineticaRole

  • DB Schemas

    Kinetica Database Schema Management CRD & sample CR.

    KineticaSchema

  • DB Grants

    Kinetica Database Grant Management CRD & sample CR.

    KineticaGrant

  • DB Resource Groups

    Kinetica Database Resource Group Management CRD & sample CR.

    KineticaResourceGroup

  • DB Administration

    Kinetica Database Administration CRD & sample CR.

    KineticaAdmin

  • DB Backups

    Kinetica Database Backup Management CRD & sample CR.

    Note

    This requires Velero to be installed on the Kubernetes Cluster.

    KineticaBackup

  • DB Restore

    Kinetica Database Restore CRD & sample CR.

    Note

    This requires Velero to be installed on the Kubernetes Cluster.

    KineticaRestore

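To confirm which of these CRDs are installed on a given cluster, filter the registered CRDs by the Kinetica API group used throughout the samples on these pages (app.kinetica.com):

Bash
kubectl get crds | grep app.kinetica.com
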
","tags":["Reference","Installation","Operations"]},{"location":"Reference/kinetica_cluster_resource_groups/","title":"Kinetica Cluster Resource Groups CRD Reference","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_resource_groups/#full-kineticaresourcegroup-cr-structure","title":"Full KineticaResourceGroup CR Structure","text":"kineticaclusterresourcegroups.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterResourceGroup \nmetadata: {}\n# KineticaClusterResourceGroupSpec defines the desired state of\n# KineticaClusterResourceGroup\nspec: \n  db_create_resource_group_request:\n    # AdjoiningResourceGroup -\n    adjoining_resource_group: \"\"\n    # Name - name of the DB ResourceGroup\n    # https://docs.kinetica.com/7.1/azure/sql/resource_group/?search-highlight=resource+group#id-baea5b60-769c-5373-bff1-53f4f1ca5c21\n    name: string\n    # Options - DB Options used when creating the ResourceGroup\n    options: {}\n    # Ranking - Indicates the relative ranking among existing resource\n    # groups where this new resource group will be placed. When using\n    # before or after, specify which resource group this one will be\n    # inserted before or after in input parameter\n    # adjoining_resource_group. The supported values are: first last\n    # before after\n    ranking: \"\"\n  # RingName is the name of the kinetica ring that this user belongs\n  # to.\n  ringName: string\n# KineticaClusterResourceGroupStatus defines the observed state of\n# KineticaClusterResourceGroup\nstatus: \n  provisioned: string\n
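
A minimal sketch creating a resource group; the group and ring names are hypothetical, and ranking: last avoids having to name an adjoining resource group:

YAML
apiVersion: app.kinetica.com/v1
kind: KineticaClusterResourceGroup
metadata:
  name: analytics-rg
  namespace: gpudb
spec:
  ringName: default-ring
  db_create_resource_group_request:
    name: analytics_rg
    # first/last need no adjoining group; before/after require
    # adjoining_resource_group to be set.
    ranking: last
    options: {}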
","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_restores/","title":"Kinetica Cluster Restores Reference","text":"","tags":["Reference","Operations"]},{"location":"Reference/kinetica_cluster_restores/#full-kineticaclusterrestore-cr-structure","title":"Full KineticaClusterRestore CR Structure","text":"kineticaclusterrestores.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterRestore\nmetadata: {}\n# RestoreSpec defines the specification for a Velero restore.\nspec:\n  # BackupName is the unique name of the Velero backup to restore from.\n  backupName: string\n  # ExcludedNamespaces contains a list of namespaces that are not\n  # included in the restore.\n  excludedNamespaces: [\"string\"]\n  # ExcludedResources is a slice of resource names that are not included\n  # in the restore.\n  excludedResources: [\"string\"]\n  # IncludeClusterResources specifies whether cluster-scoped resources\n  # should be included for consideration in the restore. If null,\n  # defaults to true.\n  includeClusterResources: true\n  # IncludedNamespaces is a slice of namespace names to include objects\n  # from. If empty, all namespaces are included.\n  includedNamespaces: [\"string\"]\n  # IncludedResources is a slice of resource names to include in the\n  # restore. If empty, all resources in the backup are included.\n  includedResources: [\"string\"]\n  # LabelSelector is a metav1.LabelSelector to filter with when\n  # restoring individual objects from the backup. If empty or nil, all\n  # objects are included. Optional.\n  labelSelector:\n    # matchExpressions is a list of label selector requirements. The\n    # requirements are ANDed.\n    matchExpressions:\n    - key: string\n      # operator represents a key's relationship to a set of values.\n      # Valid operators are In, NotIn, Exists and DoesNotExist.\n      operator: string\n      # values is an array of string values. If the operator is In or\n      # NotIn, the values array must be non-empty. If the operator is\n      # Exists or DoesNotExist, the values array must be empty. This\n      # array is replaced during a strategic merge patch.\n      values: [\"string\"]\n    # matchLabels is a map of {key,value} pairs. A single {key,value} in\n    # the matchLabels map is equivalent to an element of\n    # matchExpressions, whose key field is \"key\", the operator is \"In\",\n    # and the values array contains only \"value\". The requirements are\n    # ANDed.\n    matchLabels: {}\n  # NamespaceMapping is a map of source namespace names to target\n  # namespace names to restore into. Any source namespaces not included\n  # in the map will be restored into namespaces of the same name.\n  namespaceMapping: {}\n  # RestorePVs specifies whether to restore all included PVs from\n  # snapshot (via the cloudprovider).\n  restorePVs: true\n  # ScheduleName is the unique name of the Velero schedule to restore\n  # from. If specified, and BackupName is empty, Velero will restore\n  # from the most recent successful backup created from this schedule.\n  scheduleName: string\nstatus:\n  coldTierRestore: \"\"\n  # CompletionTimestamp records the time the restore operation was\n  # completed. Completion time is recorded even on failed restore. 
The\n  # server's time is used for StartTimestamps\n  completionTimestamp: string\n  # Errors is a count of all error messages that were generated during\n  # execution of the restore. The actual errors are stored in object\n  # storage.\n  errors: 1\n  # FailureReason is an error that caused the entire restore to fail.\n  failureReason: string\n  # Phase is the current state of the Restore\n  phase: string\n  # Progress contains information about the restore's execution\n  # progress. Note that this information is best-effort only -- if\n  # Velero fails to update it during a restore for any reason, it may\n  # be inaccurate/stale.\n  progress:\n    # ItemsRestored is the number of items that have actually been\n    # restored so far\n    itemsRestored: 1\n    # TotalItems is the total number of items to be restored. This\n    # number may change throughout the execution of the restore due to\n    # plugins that return additional related items to restore\n    totalItems: 1\n  # StartTimestamp records the time the restore operation was started.\n  # The server's time is used for StartTimestamps\n  startTimestamp: string\n  # ValidationErrors is a slice of all validation errors(if applicable)\n  validationErrors: [\"string\"]\n  # Warnings is a count of all warning messages that were generated\n  # during execution of the restore. The actual warnings are stored in\n  # object storage.\n  warnings: 1\n
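
A minimal sketch of a restore, pointing at an existing Velero backup by name (the names are hypothetical; omit scheduleName when backupName is set):

YAML
apiVersion: app.kinetica.com/v1
kind: KineticaClusterRestore
metadata:
  name: restore-nightly
  namespace: gpudb
spec:
  # Velero backup to restore from.
  backupName: nightly-backup
  # Recreate PVs from the snapshots captured in the backup.
  restorePVs: true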
","tags":["Reference","Operations"]},{"location":"Reference/kinetica_cluster_roles/","title":"Kinetica Cluster Roles CRD","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_roles/#full-kineticarole-cr-structure","title":"Full KineticaRole CR Structure","text":"kineticaroles.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaRole\nmetadata: {}\n# KineticaRoleSpec defines the desired state of KineticaRole\nspec:\n  # AlterRoleRequest - Kinetica DB REST API Request Format Object.\n  alter_role:\n    # Action - Modification operation to be applied to the role.\n    action: string\n    # Role UID - Name of the role to be altered. Must be an existing\n    # role.\n    name: string\n    # Optional parameters. The default value is an empty map ( {} ).\n    options: {}\n    # Value - The value of the modification, depending on input\n    # parameter action.\n    value: string\n  # Debug - debug the call\n  debug: false\n  # RingName is the name of the kinetica ring that this user belongs\n  # to.\n  ringName: string\n  # AddRoleRequest - Kinetica DB REST API Request Format Object.\n  role:\n    # User UID\n    name: string\n    # Optional parameters. The default value is an empty map (\n    # {} ). Supported Parameters: resource_group    Name of an existing\n    # resource group to associate with this role.\n    options: {}\n    # ResourceGroupName - Name of an existing resource group to\n    # associate with this role\n    resourceGroupName: \"\"\n# KineticaRoleStatus defines the observed state of KineticaRole\nstatus:\n  # DBStringResponse - The GPUdb server embeds the endpoint response\n  # inside a standard response structure which contains status\n  # information and the actual response to the query.\n  db_response:\n    data: string\n    # This embedded JSON represents the result of the endpoint\n    data_str: string\n    # API Call Specific\n    data_type: string\n    # Empty if success or an error message\n    message: string\n    # 'OK' or 'ERROR'\n    status: string\n  ldap_response: string\n
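
A minimal sketch creating a role and associating it with an existing resource group (names are hypothetical):

YAML
apiVersion: app.kinetica.com/v1
kind: KineticaRole
metadata:
  name: analyst-role
  namespace: gpudb
spec:
  ringName: default-ring
  role:
    # Role UID to create.
    name: analyst
    # Existing resource group to associate with the role.
    resourceGroupName: analytics_rg
    options: {}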
","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_schemas/","title":"Kinetica Cluster Schemas CRD Reference","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_schemas/#full-kinetica-cluster-schemas-cr-structure","title":"Full Kinetica Cluster Schemas CR Structure","text":"kineticaclusterschemas.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterSchema\nmetadata: {}\n# KineticaClusterSchemaSpec defines the desired state of\n# KineticaClusterSchema\nspec:\n  db_create_schema_request:\n    # Name - the name of the schema to create in the DB\n    name: string\n    # Optional parameters. The default value is an empty map (\n    # {} ). Supported Parameters: \"max_cpu_concurrency\", \"max_data\"\n    options: {}\n  # RingName is the name of the kinetica ring that this user belongs\n  # to.\n  ringName: string\n# KineticaClusterSchemaStatus defines the observed state of\n# KineticaClusterSchema\nstatus:\n  provisioned: string\n
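
A minimal sketch creating a schema (schema and ring names are hypothetical):

YAML
apiVersion: app.kinetica.com/v1
kind: KineticaClusterSchema
metadata:
  name: sales-schema
  namespace: gpudb
spec:
  ringName: default-ring
  db_create_schema_request:
    name: sales
    options: {}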
","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_users/","title":"Kinetica Cluster Users CRD Reference","text":"","tags":["Reference","Administration"]},{"location":"Reference/kinetica_cluster_users/#full-kineticauser-cr-structure","title":"Full KineticaUser CR Structure","text":"kineticausers.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaUser\nmetadata: {}\n# KineticaUserSpec defines the desired state of KineticaUser\nspec:\n  # Action field contains UserActionEnum field indicating whether it is\n  # an Upsert or Change Password operation. For deletion delete the\n  # KineticaUser CR and a finalizer will remove the user from LDAP.\n  action: string\n  # ChangePassword specific fields\n  changePassword:\n    # PasswordSecret - Not the actual user password but the name of a\n    # Kubernetes Secret containing a Data element with a Password\n    # attribute. The secret is removed on user creation. Must be in the\n    # same namespace as the Kinetica Cluster. Must contain the\n    # following fields: - oldPassword newPassword\n    passwordSecret: string\n  # Debug - debug the call\n  debug: false\n  # GroupID - Organisation or Team Id the user belongs to.\n  groupId: string\n  # Create the user in Reveal\n  reveal: true\n  # RingName is the name of the kinetica ring that this user belongs\n  # to.\n  ringName: string\n  # UID is the username (not the UUID).\n  uid: string\n  # Upsert specific fields\n  upsert:\n    # CreateHomeDirectory - when true, a home directory in KiFS is\n    # created for this user. The default value is true. The supported\n    # values are: true false\n    createHomeDirectory: true\n    # DB Memory user data size limit\n    dataLimit: \"10Gi\"\n    # DisplayName\n    displayName: string\n    # GivenName is Firstname also called Christian name. givenName in\n    # LDAP terms.\n    givenName: string\n    # KIFs user data size limit\n    kifsDataLimit: \"2Gi\"\n    # LastName refers to last name or surname. sn in LDAP terms.\n    lastName: string\n    # Options -\n    options: {}\n    # PasswordSecret - Not the actual user password but the name of a\n    # Kubernetes Secret containing a Data element with a Password\n    # attribute. The secret is removed on user creation. Must be in the\n    # same namespace as the Kinetica Cluster.\n    passwordSecret: string\n    # UPN or UserPrincipalName - e.g. guyt@cp.com  \n    # Looks like an email address.\n    userPrincipalName: string\n  # UUID is the user unique UUID from the Control Plane.\n  uuid: string\n# KineticaUserStatus defines the observed state of KineticaUser\nstatus:\n  # DBStringResponse - The GPUdb server embeds the endpoint response\n  # inside a standard response structure which contains status\n  # information and the actual response to the query.\n  db_response:\n    data: string\n    # This embedded JSON represents the result of the endpoint\n    data_str: string\n    # API Call Specific\n    data_type: string\n    # Empty if success or an error message\n    message: string\n    # 'OK' or 'ERROR'\n    status: string\n  ldap_response: string\n  reveal_admin: string\n
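
Because passwordSecret names a Kubernetes Secret rather than carrying the password itself, creating a user is a two-object operation. A minimal sketch follows; the user details, the Secret name, the exact data key, and the action enum spelling are illustrative assumptions:

YAML
# Secret holding the initial password; per the spec above it is
# removed by the operator once the user has been created.
apiVersion: v1
kind: Secret
metadata:
  name: jdoe-password
  namespace: gpudb
stringData:
  # Assumed key name for the "Password attribute" described above.
  password: "ChangeMe-123!"
---
apiVersion: app.kinetica.com/v1
kind: KineticaUser
metadata:
  name: jdoe
  namespace: gpudb
spec:
  # Assumed spelling of the Upsert action enum value.
  action: upsert
  ringName: default-ring
  uid: jdoe
  reveal: true
  upsert:
    displayName: "Jane Doe"
    givenName: Jane
    lastName: Doe
    passwordSecret: jdoe-password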
","tags":["Reference","Administration"]},{"location":"Reference/kinetica_clusters/","title":"image/svg+xml crd Kinetica Clusters CRD Reference","text":"

This page covers the Kinetica Cluster Kubernetes CRD.

","tags":["Reference"]},{"location":"Reference/kinetica_clusters/#kubectl-cli-commands","title":"kubectl cli commands","text":"","tags":["Reference"]},{"location":"Reference/kinetica_clusters/#kubectl-n-_namespace_-get-kc","title":"kubectl -n _namespace_ get kc","text":"

Lists the KineticaClusters defined within the specified namespace to the console.

Bash
kubectl -n _namespace_ get kc\n
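To inspect the full CR for a single cluster rather than the summary list, the usual kubectl output flags apply; _cluster-name_ is a placeholder for the KineticaCluster resource name.

Bash
kubectl -n _namespace_ get kc _cluster-name_ -o yaml\n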
","tags":["Reference"]},{"location":"Reference/kinetica_clusters/#full-kineticacluster-cr-structure","title":"Full KineticaCluster CR Structure","text":"kineticaclusters.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaCluster \nmetadata: {}\n# KineticaClusterSpec defines the configuration for KineticaCluster DB\nspec:\n  # An optional duration after which the database is stopped and DB\n  # resources are freed\n  autoSuspend: \n    enabled: false\n    # InactivityDuration - the duration which the cluster should be idle\n    # before auto-pausing the DB Cluster.\n    inactivityDuration: \"1h\"\n  # The platform infrastructure provider e.g. azure, aws, gcp, on-prem\n  # etc.\n  awsConfig:\n    # ClusterName - AWS name of the EKS Cluster. NOTE: Marked as\n    # optional but is mandatory\n    clusterName: string\n    # MarketplaceAppConfig - Amazon AWS specific DB Cluster\n    # information.\n    marketplaceApp:\n      # KmsKeyId - Key for disk encryption. The full Amazon Resource\n      # Name of the key to use when encrypting the volume. If none is\n      # supplied but encrypted is true, a key is generated by AWS. See\n      # AWS docs for valid ARN value.\n      kmsKeyId: string\n      # ProductCode - used to uniquely identify a product in AWS\n      # Marketplace. The product code should be the same as the one\n      # used during the publishing of a new product.\n      productCode: \"1cmucncoyp9pi8xjdwqjimlf8\"\n      # PublicKeyVersion - Public Key Version provided by AWS\n      # Marketplace\n      publicKeyVersion: 1\n      # ParentResourceGroup - The resource group of the ManagedApp\n      # itself ParentResourceGroup     string\n      # `json:\"parentResourceGroup\"` ResourceId - Identifier of the\n      # resource against which usage is emitted Format is GUID\n      # (UUID)\n      # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json\n      # Optional only if that exactly of  ResourceId or ResourceUri is\n      # specified.\n      resourceId: string\n    # NodeGroups - List of NodeGroups for this cluster MUST contain at\n    # least one of the following keys: - \n    #   * none\n    #   * infra \n    #   * infra_public \n    #   * compute \n    #   * compute-gpu \n    #   * aaw_cpu \n    # NOTE: Marked as optional but is mandatory\n    nodeGroups: {}\n    # OTELTracing - OpenTelemetry Tracing Specifics\n    otelTracing:\n      # Endpoint - Set the OpenTelemetry reporting Endpoint\n      endpoint: \"\"\n      # Key - KineticaCluster specific Key required to send Telemetry\n      # information to the Cloud\n      key: string\n      # MaxBatchSize - Telemetry Reporting Interval to use in seconds.\n      maxBatchInterval: 10\n      # MaxBatchSize - Telemetry Maximum Batch Size to send.\n      maxBatchSize: 1024\n  # The platform infrastructure provider e.g. 
azure, aws, gcp, on-prem\n  # etc.\n  azureConfig:\n    # App Insights Specifics\n    appInsights:\n      # Endpoint - Override the default AppInsights reporting Endpoint\n      endpoint: \"\"\n      # Key - KineticaCluster specific Application Insights Key required\n      # to send Telemetry information to the Azure Portal\n      key: string\n      # MaxBatchSize - Telemetry Reporting Interval to use in seconds.\n      maxBatchInterval: 10\n      # MaxBatchSize - Telemetry Maximum Batch Size to send.\n      maxBatchSize: 1024\n    # AzureManagedAppConfig - Microsoft Azure specific DB Cluster\n    # information.\n    managedApp:\n      # DiskEncryptionSetID - By default, managed disks use\n      # platform-managed encryption keys. All managed disks, snapshots,\n      # images, and data written to existing managed disks are\n      # automatically encrypted-at-rest with platform-managed keys. You\n      # can choose to manage encryption at the level of each managed\n      # disk, with your own keys. When you specify a customer-managed\n      # key, that key is used to protect and control access to the key\n      # that encrypts your data. Customer-managed keys offer greater\n      # flexibility to manage access controls.\n      diskEncryptionSetId: string\n      # PlanId - The Azure Marketplace Plan/Offer identifier selected by\n      # the customer for this DB cluster e.g. BYOL, Pay-As-You-Go etc.\n      planId: string\n      # ParentResourceGroup - The resource group of the ManagedApp\n      # itself ParentResourceGroup     string\n      # `json:\"parentResourceGroup\"` ResourceId - Identifier of the\n      # resource against which usage is emitted Format is GUID\n      # (UUID)\n      # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json\n      # Optional only if that exactly of  ResourceId or ResourceUri is\n      # specified.\n      resourceId: string\n      # ResourceUri - Identifier of the managed app resource against\n      # which usage is emitted\n      # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json\n      # Optional only if that exactly of  ResourceId or ResourceUri is\n      # specified.\n      resourceUri: string\n  # Tells the operator we want to run in Debug mode.\n  debug: false\n  # Identifies the type of Kubernetes deployment.\n  deploymentType:\n    # CloudRegionEnum - The target Kubernetes type to deploy to.\n    # Supported Values are: - aws_useast_1 aws_useast_2 aws_uswest_1\n    # az_useast_1 az_uswest_1\n    region: string\n    # DeploymentTypeEnum - The type of the Deployment. Supported Values\n    # are: - Managed FreeSaaS DedicatedSaaS OnPrem\n    type: string\n  # The platform infrastructure provider e.g. azure, aws, gcp, on-prem\n  # etc.\n  devEditionConfig:\n    # Host IPv4 address. Used by KiND based Developer Edition where\n    # ingress paths set to *. Provides qualified, routable URLs to\n    # workbench.\n    hostIpAddress: \"\"\n  # The GAdmin Dashboard Configuration for the Kinetica Cluster.\n  gadmin:\n    # The port that GAdmin will be running on. It runs only on the head\n    # node pod in the cluster. Default: 8080\n    containerPort:\n      # Number of port to expose on the pod's IP address. 
This must be a\n      # valid port number, 0 < x < 65536.\n      containerPort: 1\n      # What host IP to bind the external port to.\n      hostIP: string\n      # Number of port to expose on the host. If specified, this must be\n      # a valid port number, 0 < x < 65536. If HostNetwork is\n      # specified, this must match ContainerPort. Most containers do\n      # not need this.\n      hostPort: 1\n      # If specified, this must be an IANA_SVC_NAME and unique within\n      # the pod. Each named port in a pod must have a unique name. Name\n      # for the port that can be referred to by services.\n      name: string\n      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n      # to \"TCP\".\n      protocol: \"TCP\"\n    # The Ingress Endpoint that GAdmin will be running on.\n    ingressPath:\n      # backend defines the referenced service endpoint to which the\n      # traffic will be forwarded to.\n      backend:\n        # resource is an ObjectRef to another Kubernetes resource in the\n        # namespace of the Ingress object. If resource is specified,\n        # serviceName and servicePort must not be specified.\n        resource:\n          # APIGroup is the group for the resource being referenced. If\n          # APIGroup is not specified, the specified Kind must be in\n          # the core API group. For any other third-party types,\n          # APIGroup is required.\n          apiGroup: string\n          # Kind is the type of resource being referenced\n          kind: KineticaCluster\n          # Name is the name of resource being referenced\n          name: string\n        # serviceName specifies the name of the referenced service.\n        serviceName: string\n        # servicePort Specifies the port of the referenced service.\n        servicePort: \n      # path is matched against the path of an incoming request.\n      # Currently it can contain characters disallowed from the\n      # conventional \"path\" part of a URL as defined by RFC 3986. Paths\n      # must begin with a '/' and must be present when using PathType\n      # with value \"Exact\" or \"Prefix\".\n      path: string\n      # pathType determines the interpretation of the path matching.\n      # PathType can be one of the following values: * Exact: Matches\n      # the URL path exactly. * Prefix: Matches based on a URL path\n      # prefix split by '/'. Matching is done on a path element by\n      # element basis. A path element refers is the list of labels in\n      # the path split by the '/' separator. A request is a match for\n      # path p if every p is an element-wise prefix of p of the request\n      # path. Note that if the last element of the path is a substring\n      # of the last element in request path, it is not a match\n      # (e.g. /foo/bar matches /foo/bar/baz, but does not\n      # match /foo/barbaz). * ImplementationSpecific: Interpretation of\n      # the Path matching is up to the IngressClass. Implementations\n      # can treat this as a separate PathType or treat it identically\n      # to Prefix or Exact path types. Implementations are required to\n      # support all path types. Defaults to ImplementationSpecific.\n      pathType: string\n    # Whether to enable the GAdmin Dashboard on the Cluster. 
Default:\n    # true\n    isEnabled: true\n  # Gaia - gaia.properties configuration\n  gaia:\n    admin:\n      # AdminLoginOnlyGpudbDown - When GPUdb is down, only allow admin\n      # user to login\n      admin_login_only_gpudb_down: true\n      # Username - We do check for admin username in various places\n      admin_username: \"admin\"\n      # LoginAnimationEnabled - Display any animation in login page\n      login_animation_enabled: true\n      # LoginBypassEnabled - Convenience settings for dev mode\n      login_bypass_enabled: false\n      # RequireStrongPassword - Convenience settings for dev mode\n      require_strong_password: true\n      # SSLTruststorePasswordScript - Script that provides the SSL\n      # truststore password\n      ssl_truststore_password_script: string\n    # DemoSchema - Schema-related configuration\n    demo_schema: \"demo\"\n    gpudb:\n      # DataFileStringNullValue - Table import/export null value string\n      data_file_string_null_value: \"\\\\N\"\n      gpudb_ext_url: \"http://127.0.0.1:8082/gpudb-0\"\n      # URL - Current instance of gpudb, when running in HA mode change\n      # this to load balancer endpoint\n      gpudb_url: \"http://127.0.0.1:9191\"\n      # LoggingLogFileName - Which file to use when displaying logging\n      # on Cluster page.\n      logging_log_file_name: \"gpudb.log\"\n      # SampleRepoURL - Table import/export null value string\n      sample_repo_url: \"//s3.amazonaws.com/kinetica-ce-data\"\n    hm:\n      gpudb_ext_hm_url: \"http://127.0.0.1:8082/gpudb-host-manager\"\n      gpudb_hm_url: \"http://127.0.0.1:9300\"\n    http:\n      # ClientTimeout - Number of seconds for proxy request timeout\n      http_client_timeout: 3600\n      # ClientTimeoutV2 - Force override of previous default with 0 as\n      # infinite timeout\n      http_client_timeout_v2: 0\n      # TomcatPathKey - Name of folder where Tomcat apps are installed\n      tomcat_path_key: \"tomcat\"\n      # WebappContext - Web App context\n      webapp_context: \"gadmin\"\n    # GAdminIsRemote - True if the gadmin application is running on a\n    # remote machine (not on same node as gpudb). 
If running on a\n    # remote machine the manage options will be disabled.\n    is_remote: false\n    # KAgentCLIPath - Schema-related configuration\n    kagent_cli_path: \"/opt/gpudb/kagent/bin/kagent\"\n    # KIO - KIO-related configuration\n    kio:\n      kio_log_file_path: \"/opt/gpudb/kitools/kio/logs/gadmin.log\"\n      kio_log_level: \"DEBUG\"\n      kio_log_size_limit: 10485760\n    kisql:\n      # QueryResultsLimit - KiSQL limit on the number of results in each\n      # query\n      kisql_query_results_limit: 10000\n      # QueryTimezone - KiSQL TimeZoneId setting for queries\n      # (use \"system\" for local system time)\n      kisql_query_timezone: \"GMT\"\n    license:\n      # Status - Stub for license manager\n      status: \"ok\"\n      # Type - Stub for license manager\n      type: \"unlimited\"\n    # MaxConcurrentUserSessions - Session management configuration\n    max_concurrent_user_sessions: 0\n    # PublicSchema - Schema-related configuration\n    public_schema: \"ki_home\"\n    # RevealDBInfoFile - Path to file containing Reveal DB location\n    reveal_db_info_file: \"/opt/gpudb/connectors/reveal/var/REVEAL_DB_DIR\"\n    # RootSchema - Schema-related configuration\n    root_schema: \"root\"\n    stats:\n      # GraphanaURL -\n      graphana_url: \"http://127.0.0.1:3000\"\n      # GraphiteURL\n      graphite_url: \"http://127.0.0.1:8181\"\n      # StatsGrafanaURL - Port used to host the Grafana user interface\n      # and embeddable metric dashboards in GAdmin. Note: If this value\n      # is defaulted then it will be replaced by the name of the Stats\n      # service if it is deployed & Grafana is enabled e.g.\n      # cluster-1234.gpudb.svc.cluster.local\n      stats_grafana_url: \"http://127.0.0.1:9091\"\n  # https://github.com/kubernetes-sigs/controller-tools/issues/622 if we\n  # want to set usePools as false, need to set defaults. GPUDBCluster is\n  # an instance of a Kinetica DB Cluster i.e. its StatefulSet,\n  # Service, Ingress, ConfigMap etc.\n  gpudbCluster:\n    # Affinity - is a group of affinity scheduling rules.\n    affinity:\n      # Describes node affinity scheduling rules for the pod.\n      nodeAffinity:\n        # The scheduler will prefer to schedule pods to nodes that\n        # satisfy the affinity expressions specified by this field, but\n        # it may choose a node that violates one or more of the\n        # expressions. The node that is most preferred is the one with\n        # the greatest sum of weights, i.e. for each node that meets\n        # all of the scheduling requirements (resource request,\n        # requiredDuringScheduling affinity expressions, etc.), compute\n        # a sum by iterating through the elements of this field and\n        # adding \"weight\" to the sum if the node matches the\n        # corresponding matchExpressions; the node(s) with the highest\n        # sum are the most preferred.\n        preferredDuringSchedulingIgnoredDuringExecution:\n        - preference:\n            # A list of node selector requirements by node's labels.\n            matchExpressions:\n            - key: string\n              # Represents a key's relationship to a set of values.\n              # Valid operators are In, NotIn, Exists, DoesNotExist.\n              # Gt, and Lt.\n              operator: string\n              # An array of string values. If the operator is In or\n              # NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. 
If the operator is Gt or Lt, the values\n              # array must have a single element, which will be\n              # interpreted as an integer. This array is replaced\n              # during a strategic merge patch.\n              values: [\"string\"]\n            # A list of node selector requirements by node's fields.\n            matchFields:\n            - key: string\n              # Represents a key's relationship to a set of values.\n              # Valid operators are In, NotIn, Exists, DoesNotExist.\n              # Gt, and Lt.\n              operator: string\n              # An array of string values. If the operator is In or\n              # NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. If the operator is Gt or Lt, the values\n              # array must have a single element, which will be\n              # interpreted as an integer. This array is replaced\n              # during a strategic merge patch.\n              values: [\"string\"]\n          # Weight associated with matching the corresponding\n          # nodeSelectorTerm, in the range 1-100.\n          weight: 1\n        # If the affinity requirements specified by this field are not\n        # met at scheduling time, the pod will not be scheduled onto\n        # the node. If the affinity requirements specified by this\n        # field cease to be met at some point during pod execution\n        # (e.g. due to an update), the system may or may not try to\n        # eventually evict the pod from its node.\n        requiredDuringSchedulingIgnoredDuringExecution:\n          # Required. A list of node selector terms. The terms are\n          # ORed.\n          nodeSelectorTerms:\n          - matchExpressions:\n            - key: string\n              # Represents a key's relationship to a set of values.\n              # Valid operators are In, NotIn, Exists, DoesNotExist.\n              # Gt, and Lt.\n              operator: string\n              # An array of string values. If the operator is In or\n              # NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. If the operator is Gt or Lt, the values\n              # array must have a single element, which will be\n              # interpreted as an integer. This array is replaced\n              # during a strategic merge patch.\n              values: [\"string\"]\n            # A list of node selector requirements by node's fields.\n            matchFields:\n            - key: string\n              # Represents a key's relationship to a set of values.\n              # Valid operators are In, NotIn, Exists, DoesNotExist.\n              # Gt, and Lt.\n              operator: string\n              # An array of string values. If the operator is In or\n              # NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. If the operator is Gt or Lt, the values\n              # array must have a single element, which will be\n              # interpreted as an integer. This array is replaced\n              # during a strategic merge patch.\n              values: [\"string\"]\n      # Describes pod affinity scheduling rules (e.g. co-locate this pod\n      # in the same node, zone, etc. 
as some other pod(s)).\n      podAffinity:\n        # The scheduler will prefer to schedule pods to nodes that\n        # satisfy the affinity expressions specified by this field, but\n        # it may choose a node that violates one or more of the\n        # expressions. The node that is most preferred is the one with\n        # the greatest sum of weights, i.e. for each node that meets\n        # all of the scheduling requirements (resource request,\n        # requiredDuringScheduling affinity expressions, etc.), compute\n        # a sum by iterating through the elements of this field and\n        # adding \"weight\" to the sum if the node has pods which matches\n        # the corresponding podAffinityTerm; the node(s) with the\n        # highest sum are the most preferred.\n        preferredDuringSchedulingIgnoredDuringExecution:\n        - podAffinityTerm:\n            # A label query over a set of resources, in this case pods.\n            labelSelector:\n              # matchExpressions is a list of label selector\n              # requirements. The requirements are ANDed.\n              matchExpressions:\n              - key: string\n                # operator represents a key's relationship to a set of\n                # values. Valid operators are In, NotIn, Exists and\n                # DoesNotExist.\n                operator: string\n                # values is an array of string values. If the operator\n                # is In or NotIn, the values array must be non-empty.\n                # If the operator is Exists or DoesNotExist, the values\n                # array must be empty. This array is replaced during a\n                # strategic merge patch.\n                values: [\"string\"]\n              # matchLabels is a map of {key,value} pairs. A single\n              # {key,value} in the matchLabels map is equivalent to an\n              # element of matchExpressions, whose key field is \"key\",\n              # the operator is \"In\", and the values array contains\n              # only \"value\". The requirements are ANDed.\n              matchLabels: {}\n            # A label query over the set of namespaces that the term\n            # applies to. The term is applied to the union of the\n            # namespaces selected by this field and the ones listed in\n            # the namespaces field. null selector and null or empty\n            # namespaces list means \"this pod's namespace\". An empty\n            # selector ({}) matches all namespaces.\n            namespaceSelector:\n              # matchExpressions is a list of label selector\n              # requirements. The requirements are ANDed.\n              matchExpressions:\n              - key: string\n                # operator represents a key's relationship to a set of\n                # values. Valid operators are In, NotIn, Exists and\n                # DoesNotExist.\n                operator: string\n                # values is an array of string values. If the operator\n                # is In or NotIn, the values array must be non-empty.\n                # If the operator is Exists or DoesNotExist, the values\n                # array must be empty. This array is replaced during a\n                # strategic merge patch.\n                values: [\"string\"]\n              # matchLabels is a map of {key,value} pairs. 
A single\n              # {key,value} in the matchLabels map is equivalent to an\n              # element of matchExpressions, whose key field is \"key\",\n              # the operator is \"In\", and the values array contains\n              # only \"value\". The requirements are ANDed.\n              matchLabels: {}\n            # namespaces specifies a static list of namespace names that\n            # the term applies to. The term is applied to the union of\n            # the namespaces listed in this field and the ones selected\n            # by namespaceSelector. null or empty namespaces list and\n            # null namespaceSelector means \"this pod's namespace\".\n            namespaces: [\"string\"]\n            # This pod should be co-located (affinity) or not\n            # co-located (anti-affinity) with the pods matching the\n            # labelSelector in the specified namespaces, where\n            # co-located is defined as running on a node whose value of\n            # the label with key topologyKey matches that of any node\n            # on which any of the selected pods is running. Empty\n            # topologyKey is not allowed.\n            topologyKey: string\n          # weight associated with matching the corresponding\n          # podAffinityTerm, in the range 1-100.\n          weight: 1\n        # If the affinity requirements specified by this field are not\n        # met at scheduling time, the pod will not be scheduled onto\n        # the node. If the affinity requirements specified by this\n        # field cease to be met at some point during pod execution\n        # (e.g. due to a pod label update), the system may or may not\n        # try to eventually evict the pod from its node. When there are\n        # multiple elements, the lists of nodes corresponding to each\n        # podAffinityTerm are intersected, i.e. all terms must be\n        # satisfied.\n        requiredDuringSchedulingIgnoredDuringExecution:\n        - labelSelector:\n            # matchExpressions is a list of label selector requirements.\n            # The requirements are ANDed.\n            matchExpressions:\n            - key: string\n              # operator represents a key's relationship to a set of\n              # values. Valid operators are In, NotIn, Exists and\n              # DoesNotExist.\n              operator: string\n              # values is an array of string values. If the operator is\n              # In or NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. This array is replaced during a\n              # strategic merge patch.\n              values: [\"string\"]\n            # matchLabels is a map of {key,value} pairs. A single\n            # {key,value} in the matchLabels map is equivalent to an\n            # element of matchExpressions, whose key field is \"key\",\n            # the operator is \"In\", and the values array contains\n            # only \"value\". The requirements are ANDed.\n            matchLabels: {}\n          # A label query over the set of namespaces that the term\n          # applies to. The term is applied to the union of the\n          # namespaces selected by this field and the ones listed in\n          # the namespaces field. null selector and null or empty\n          # namespaces list means \"this pod's namespace\". 
An empty\n          # selector ({}) matches all namespaces.\n          namespaceSelector:\n            # matchExpressions is a list of label selector requirements.\n            # The requirements are ANDed.\n            matchExpressions:\n            - key: string\n              # operator represents a key's relationship to a set of\n              # values. Valid operators are In, NotIn, Exists and\n              # DoesNotExist.\n              operator: string\n              # values is an array of string values. If the operator is\n              # In or NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. This array is replaced during a\n              # strategic merge patch.\n              values: [\"string\"]\n            # matchLabels is a map of {key,value} pairs. A single\n            # {key,value} in the matchLabels map is equivalent to an\n            # element of matchExpressions, whose key field is \"key\",\n            # the operator is \"In\", and the values array contains\n            # only \"value\". The requirements are ANDed.\n            matchLabels: {}\n          # namespaces specifies a static list of namespace names that\n          # the term applies to. The term is applied to the union of\n          # the namespaces listed in this field and the ones selected\n          # by namespaceSelector. null or empty namespaces list and\n          # null namespaceSelector means \"this pod's namespace\".\n          namespaces: [\"string\"]\n          # This pod should be co-located (affinity) or not co-located\n          # (anti-affinity) with the pods matching the labelSelector in\n          # the specified namespaces, where co-located is defined as\n          # running on a node whose value of the label with key\n          # topologyKey matches that of any node on which any of the\n          # selected pods is running. Empty topologyKey is not\n          # allowed.\n          topologyKey: string\n      # Describes pod anti-affinity scheduling rules (e.g. avoid putting\n      # this pod in the same node, zone, etc. as some other pod(s)).\n      podAntiAffinity:\n        # The scheduler will prefer to schedule pods to nodes that\n        # satisfy the anti-affinity expressions specified by this\n        # field, but it may choose a node that violates one or more of\n        # the expressions. The node that is most preferred is the one\n        # with the greatest sum of weights, i.e. for each node that\n        # meets all of the scheduling requirements (resource request,\n        # requiredDuringScheduling anti-affinity expressions, etc.),\n        # compute a sum by iterating through the elements of this field\n        # and adding \"weight\" to the sum if the node has pods which\n        # matches the corresponding podAffinityTerm; the node(s) with\n        # the highest sum are the most preferred.\n        preferredDuringSchedulingIgnoredDuringExecution:\n        - podAffinityTerm:\n            # A label query over a set of resources, in this case pods.\n            labelSelector:\n              # matchExpressions is a list of label selector\n              # requirements. The requirements are ANDed.\n              matchExpressions:\n              - key: string\n                # operator represents a key's relationship to a set of\n                # values. 
Valid operators are In, NotIn, Exists and\n                # DoesNotExist.\n                operator: string\n                # values is an array of string values. If the operator\n                # is In or NotIn, the values array must be non-empty.\n                # If the operator is Exists or DoesNotExist, the values\n                # array must be empty. This array is replaced during a\n                # strategic merge patch.\n                values: [\"string\"]\n              # matchLabels is a map of {key,value} pairs. A single\n              # {key,value} in the matchLabels map is equivalent to an\n              # element of matchExpressions, whose key field is \"key\",\n              # the operator is \"In\", and the values array contains\n              # only \"value\". The requirements are ANDed.\n              matchLabels: {}\n            # A label query over the set of namespaces that the term\n            # applies to. The term is applied to the union of the\n            # namespaces selected by this field and the ones listed in\n            # the namespaces field. null selector and null or empty\n            # namespaces list means \"this pod's namespace\". An empty\n            # selector ({}) matches all namespaces.\n            namespaceSelector:\n              # matchExpressions is a list of label selector\n              # requirements. The requirements are ANDed.\n              matchExpressions:\n              - key: string\n                # operator represents a key's relationship to a set of\n                # values. Valid operators are In, NotIn, Exists and\n                # DoesNotExist.\n                operator: string\n                # values is an array of string values. If the operator\n                # is In or NotIn, the values array must be non-empty.\n                # If the operator is Exists or DoesNotExist, the values\n                # array must be empty. This array is replaced during a\n                # strategic merge patch.\n                values: [\"string\"]\n              # matchLabels is a map of {key,value} pairs. A single\n              # {key,value} in the matchLabels map is equivalent to an\n              # element of matchExpressions, whose key field is \"key\",\n              # the operator is \"In\", and the values array contains\n              # only \"value\". The requirements are ANDed.\n              matchLabels: {}\n            # namespaces specifies a static list of namespace names that\n            # the term applies to. The term is applied to the union of\n            # the namespaces listed in this field and the ones selected\n            # by namespaceSelector. null or empty namespaces list and\n            # null namespaceSelector means \"this pod's namespace\".\n            namespaces: [\"string\"]\n            # This pod should be co-located (affinity) or not\n            # co-located (anti-affinity) with the pods matching the\n            # labelSelector in the specified namespaces, where\n            # co-located is defined as running on a node whose value of\n            # the label with key topologyKey matches that of any node\n            # on which any of the selected pods is running. 
Empty\n            # topologyKey is not allowed.\n            topologyKey: string\n          # weight associated with matching the corresponding\n          # podAffinityTerm, in the range 1-100.\n          weight: 1\n        # If the anti-affinity requirements specified by this field are\n        # not met at scheduling time, the pod will not be scheduled\n        # onto the node. If the anti-affinity requirements specified by\n        # this field cease to be met at some point during pod\n        # execution (e.g. due to a pod label update), the system may or\n        # may not try to eventually evict the pod from its node. When\n        # there are multiple elements, the lists of nodes corresponding\n        # to each podAffinityTerm are intersected, i.e. all terms must\n        # be satisfied.\n        requiredDuringSchedulingIgnoredDuringExecution:\n        - labelSelector:\n            # matchExpressions is a list of label selector requirements.\n            # The requirements are ANDed.\n            matchExpressions:\n            - key: string\n              # operator represents a key's relationship to a set of\n              # values. Valid operators are In, NotIn, Exists and\n              # DoesNotExist.\n              operator: string\n              # values is an array of string values. If the operator is\n              # In or NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. This array is replaced during a\n              # strategic merge patch.\n              values: [\"string\"]\n            # matchLabels is a map of {key,value} pairs. A single\n            # {key,value} in the matchLabels map is equivalent to an\n            # element of matchExpressions, whose key field is \"key\",\n            # the operator is \"In\", and the values array contains\n            # only \"value\". The requirements are ANDed.\n            matchLabels: {}\n          # A label query over the set of namespaces that the term\n          # applies to. The term is applied to the union of the\n          # namespaces selected by this field and the ones listed in\n          # the namespaces field. null selector and null or empty\n          # namespaces list means \"this pod's namespace\". An empty\n          # selector ({}) matches all namespaces.\n          namespaceSelector:\n            # matchExpressions is a list of label selector requirements.\n            # The requirements are ANDed.\n            matchExpressions:\n            - key: string\n              # operator represents a key's relationship to a set of\n              # values. Valid operators are In, NotIn, Exists and\n              # DoesNotExist.\n              operator: string\n              # values is an array of string values. If the operator is\n              # In or NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. This array is replaced during a\n              # strategic merge patch.\n              values: [\"string\"]\n            # matchLabels is a map of {key,value} pairs. A single\n            # {key,value} in the matchLabels map is equivalent to an\n            # element of matchExpressions, whose key field is \"key\",\n            # the operator is \"In\", and the values array contains\n            # only \"value\". 
The requirements are ANDed.
            matchLabels: {}
          # namespaces specifies a static list of namespace names that
          # the term applies to. The term is applied to the union of
          # the namespaces listed in this field and the ones selected
          # by namespaceSelector. null or empty namespaces list and
          # null namespaceSelector means "this pod's namespace".
          namespaces: ["string"]
          # This pod should be co-located (affinity) or not co-located
          # (anti-affinity) with the pods matching the labelSelector in
          # the specified namespaces, where co-located is defined as
          # running on a node whose value of the label with key
          # topologyKey matches that of any node on which any of the
          # selected pods is running. Empty topologyKey is not
          # allowed.
          topologyKey: string
    # Annotations - Annotations to be applied to the StatefulSet DB
    # pods.
    annotations: {}
    # The name of the cluster to form.
    clusterName: string
    # ClusterSize - the T-Shirt size of the Kinetica DB Cluster.
    clusterSize:
      # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a
      # representation of the number of nodes in a simple to understand
      # T-Shirt size scheme. This indicates the size of the cluster
      # i.e. the number of nodes. It does not identify the size of the
      # cloud provider nodes. For node size see ClusterTypeEnum.
      # Supported Values are: - XS S M L XL XXL XXXL
      tshirtSize: string
      # ClusterTypeEnum - An Enum of the node types of a KineticaCluster
      # e.g. CPU, GPU along with the Cloud Provider node size e.g. size
      # of the VM.
      tshirtType: string
    # Config Kinetica DB Configuration Object
    config:
      ai:
        apiKey: string
        # Provider - AI API provider type. The default is "sqlgpt"
        apiProvider: "sqlgpt"
        apiUrl: string
      # AlertManagerConfig
      alertManager:
        # AlertManager IP address (run on head node); default port
        # is "2003"
        ipAddress: "${gaia.host0.address}"
        port: 2003
      # AlertConfig
      alerts:
        alertDiskAbsolute: [integer]
        # Trigger an alert if available disk space on any given node
        # falls to or below a certain threshold, either absolute
        # (number of bytes) or percentage of total disk space. For
        # multiple thresholds, use a comma-delimited list of values.
        alertDiskPercentage: [1,5,10,20]
        # Trigger generic error message alerts, in cases of various
        # significant runtime errors.
        alertErrorMessages: true
        # Executable to run when an alert condition occurs. This
        # executable will only be run on **rank0** and does not need to
        # be present on other nodes.
        alertExe: ""
        # Trigger an alert whenever the status of a host or rank
        # changes.
        alertHostStatus: true
        # Optionally, filter host alerts for a comma-delimited list of
        # statuses. If a filter is empty, every host status change will
        # trigger an alert.
        alertHostStatusFilter: "fatal_init_error"
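        # For illustration only - a minimal sketch of how the alert
        # settings in this section might be combined; the values below
        # are example assumptions, not recommendations:
        #
        #   alerts:
        #     enableAlerts: true
        #     alertDiskPercentage: [10,20]
        #     alertMemoryPercentage: [10,20]
        #     alertHostStatus: true
        #     alertExe: ""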
        # The maximum number of triggered alerts guaranteed to be stored
        # at any given time. When this number of alerts is exceeded,
        # older alerts may be discarded to stay within the limit.
        alertMaxStoredAlerts: 100
        alertMemoryAbsolute: [integer]
        # Trigger an alert if available memory on any given node falls
        # to or below a certain threshold, either absolute (number of
        # bytes) or percentage of total memory. For multiple
        # thresholds, use a comma-delimited list of values.
        alertMemoryPercentage: [1,5,10,20]
        # Trigger an alert if a CUDA error occurs on a rank.
        alertRankCudaError: true
        # Trigger alerts when the fallback allocator is employed; e.g.,
        # host memory is allocated because GPU allocation fails. NOTE:
        # To prevent a flooding of alerts, if a fallback allocator is
        # triggered in bursts, not every use will generate an alert.
        alertRankFallbackAllocator: true
        # Trigger an alert whenever the status of a rank changes.
        alertRankStatus: true
        # Optionally, filter rank alerts for a comma-delimited list of
        # statuses. If a filter is empty, every rank status change will
        # trigger an alert.
        alertRankStatusFilter:
        ["fatal_init_error","not_responding","terminated"]
        # Enable the alerting system.
        enableAlerts: true
        # Directory where the trace event and summary files are stored.
        # Must be a fully qualified path with sufficient free space for
        # required volume of data.
        traceDirectory: "/tmp"
        # The maximum number of trace events to be collected
        traceEventBufferSize: 1000000
      # Audit - This section controls the request auditor, which will
      # audit all requests received by the server in full or in part
      # based on the settings.
      audit:
        # Controls whether the body of each request is audited (in JSON
        # format). If 'enable_audit' is "false" this setting has no
        # effect. NOTE: For requests that insert data records, this
        # setting does not control the auditing of the records being
        # inserted, only the rest of the request body; see 'audit_data'
        # below to control this. audit_body = false
        body: false
        # Controls whether records being inserted are audited (in JSON
        # format) for requests that insert data records. If
        # either 'enable_audit' or 'audit_body' is "false", this
        # setting has no effect. NOTE: Enabling this setting during
        # bulk ingestion of data will rapidly produce very large audit
        # logs and may cause disk space exhaustion; use with caution.
        # audit_data = false
        data: false
        # Controls whether request auditing is enabled. If set
        # to "true", the following information is audited for every
        # request: Job ID, URI, User, and Client Address. The settings
        # below control whether additional information about each
        # request is also audited. If set to "false", all auditing is
        # disabled. enable_audit = false
        enable: false
        # Controls whether HTTP headers are audited for each request.
        # If 'enable_audit' is "false" this setting has no effect.
        # audit_headers = false
        headers: true
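        # For illustration only - a sketch of a full-audit
        # configuration using the fields above (enable everything,
        # then lock the settings at runtime); values are example
        # assumptions:
        #
        #   audit:
        #     enable: true
        #     body: true
        #     headers: true
        #     data: false   # leave off during bulk ingest
        #     lock: true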
        # Controls whether the above audit settings can be altered at
        # runtime via the /alter/system/properties endpoint. In a
        # secure environment where auditing is required at all times,
        # this should be set to "true" to lock the settings to what is
        # set in this file. lock_audit = false
        lock: false
        # Controls whether response information is audited for each
        # request. If 'enable_audit' is "false" this setting has no
        # effect. audit_response = false
        response: false
      # EventConfig
      events:
        # Run a statistics server to collect information about Kinetica
        # and the machines it runs on.
        internal: true
        # Statistics server IP address (run on head node); default port
        # is "2003"
        ipAddress: "${gaia.host0.address}"
        port: 2003
        # Statistics server namespace - should be a machine identifier
        statsServerNamespace: "gpudb"
      # ExternalFilesConfig
      externalFiles:
        # Defines the directory from which external files can be loaded
        directory: "/opt/gpudb/persist"
        # Parquet files compression type. egress_parquet_compression =
        # snappy
        egressParquetCompression: "snappy"
        # Max file size (in MB) to allow saving to a single file. May be
        # overridden by target limitations. egress_single_file_max_size
        # = 100
        egressSingleFileMaxSize: "100"
        # Maximum number of simultaneous threads allocated to a given
        # external file read request, on each rank. Note that thread
        # allocation may also be limited by resource group limits, the
        # subtask_concurrency_limit setting, or system load.
        readerNumTasks: "-1"
      # GeneralConfig - the root of the gpudb.conf configuration in the
      # CRD
      general:
        # Timeout (in seconds) to wait for a rank to start during a
        # cluster event (ex: failover) before the event is considered
        # failed.
        clusterEventTimeoutStartupRank: "300"
        # Enable (if "true") multiple kernels to run concurrently on the
        # same GPU
        concurrentKernelExecution: true
        # Time-to-live in minutes of non-protected tables before they
        # are automatically deleted from the database.
        defaultTTL: "20"
        # Disallow the /clear/table request to clear all tables.
        disableClearAll: true
        # Enable overlapped-equi-join filters
        enableOverlappedEquiJoin: true
        # Enable predicate-equi-join filter plan type
        enablePredicateEquiJoin: true
        # If "true" then all filter execution will be host-only
        # (i.e. CPU). This can be useful for high-concurrency
        # situations and when PCIe bandwidth is a limiting factor.
        forceHostFilterExecution: false
        # Maximum number of kernels that can be running at the same time
        # on a given GPU. Set to "0" for no limit. Only takes effect
        # if 'concurrent_kernel_execution' is "true"
        maxConcurrentKernels: "0"
        # Maximum number of records that data retrieval requests such
        # as /get/records and /aggregate/groupby will return per
        # request.
        maxGetRecordsSize: 20000
        # Set an optional executable command that will be run once when
        # Kinetica is ready for client requests. This can be used to
        # perform any initialization logic that needs to be run before
        # clients connect. 
It will be run as the \"gpudb\" user, so you\n        # must ensure that any required permissions are set on the file\n        # to allow it to be executed.  If the command cannot be\n        # executed or returns a non-zero error code, then Kinetica will\n        # be stopped.  Output from the startup script will be logged\n        # to \"/opt/gpudb/core/logs/gpudb-on-start.log\" (and its dated\n        # relatives).  The \"gpudb_env.sh\" script is run directly before\n        # the command, so the path will be set to include the supplied\n        # Python runtime. Example: on_startup_script\n        # = /home/gpudb/on-start.sh param1 param2 ...\n        onStartupScript: \"\"\n        # Size in bytes of the pinned memory pool per-rank process to\n        # speed up copying data to the GPU.  Set to \"0\" to disable.\n        pinnedMemoryPoolSize: 2000000000\n        # Tables and collections with these names will not be deleted\n        # (comma separated).\n        protectedSets: \"MASTER,_MASTER,_DATASOURCE\"\n        # Timeout (in minutes) for filter-type requests\n        requestTimeout: \"20\"\n        # Timeout (in seconds) to wait for a rank to exit gracefully\n        # before it is force-killed. Machines with slow disk drives may\n        # require longer times and data may be lost if a drive is not\n        # responsive.\n        timeoutShutdownRank: \"300\"\n        # Timeout (in seconds) to wait for each database subsystem to\n        # exit gracefully before it is force-killed.\n        timeoutShutdownSubsystem: \"20\"\n        # Timeout (in seconds) to wait for each database subsystem to\n        # startup. Subsystems include the Query Planner, Graph,\n        # Stats, & HTTP servers, as well as external text-search\n        # ranks.\n        timeoutStartupSubsystem: \"60\"\n      # GraphConfig\n      graph:\n        # Enable the graph server\n        enable: false\n        # List of GPU devices to be used by graph server The server\n        # would ideally be run on a different node with dedicated GPU\n        # (s)\n        gpuList: \"\"\n        # Specify where the graph server should be run, defaults to head\n        # node\n        ipAddress: \"${gaia.rank0_ip_address}\"\n        # Maximum memory that can be used by the graph server, set\n        # to \"0\" to disable memory restriction\n        maxMemory: 0\n        # Port used for responses from the graph server to the database\n        # server\n        pullPort: 8100\n        # Port used for requests from the database server to the graph\n        # server\n        pushPort: 8099\n        # Number of seconds the graph client will wait for a response\n        # from the graph server\n        timeout: 1200\n      # HardwareConfig\n      hardware:\n        # Rank0HardwareConfig\n        rank0:\n          # Specify the GPU to use for all calculations on the HTTP\n          # server node, **rank0**. NOTE: The **rank0** GPU may be\n          # shared with another rank.\n          gpu: 0\n          # Set the head HTTP **rank0** numa node(s). If left empty,\n          # there will be no thread affinity or preferred memory node.\n          # The node list may be either a single node number or a\n          # range; e.g., \"1-5,7,10\". If there will be many simultaneous\n          # users, specify as many nodes as possible that won't overlap\n          # the **rank1** to **rankN** worker numa nodes that the GPUs\n          # are on. 
If there will be few simultaneous users and WMS
          # speed is important, choose the numa node the 'rank0.gpu' is
          # on.
          numaNode:
        ranks:
        - baseNumaNode: string
          # Set each worker rank's preferred data numa node for CPU
          # affinity and memory allocation.
          # The 'rank<#>.data_numa_node' is the node or nodes that data
          # intensive threads will run in and should be set to the same
          # numa node that the GPU specified by the
          # corresponding 'rank<#>.taskcalc_gpu' is on for best
          # performance. If the 'rank<#>.taskcalc_gpu' is specified
          # the 'rank<#>.data_numa_node' will be automatically set to
          # the node the GPU is attached to, otherwise there will be no
          # CPU thread affinity or preferred node for memory allocation
          # if not specified or left empty. The node list may be a
          # single node number or a range; e.g., "1-5,7,10".
          dataNumaNode: string
          # Set the GPU device for each worker rank to use. If no GPUs
          # are specified, each rank will round-robin the available
          # GPUs per host system. Add 'rank<#>.taskcalc_gpu' as needed
          # for the worker ranks, where *#* ranges from "1" to the
          # highest *rank #* among the 'rank<#>.host' parameters.
          # Example setting the GPUs to use for ranks 1 and 2:
          #   rank1.taskcalc_gpu = 0
          #   rank2.taskcalc_gpu = 1
          taskCalcGPU:
      kafka:
        # Maximum number of records to be ingested in a single batch
        # kafka.batch_size = 1000
        batchSize: 1000
        # Maximum time (milliseconds) for each poll to get records from
        # kafka kafka.poll_timeout = 0
        pollTimeout: 1
        # Maximum wait time (seconds) to buffer records received from
        # kafka before ingestion kafka.wait_time = 30
        waitTime: 30
      # KifsConfig
      kifs:
        # KIFs user data size limit
        dataLimit: "4Gi"
        # sudo usermod -a -G gpudb_proc <user>
        enable: false
        # Parent directory of the mount point for the KiFS file system.
        # Must be a fully qualified path. The actual mount point will
        # be a subdirectory *mount* below this directory. Note that
        # this folder must have read, write and execute permissions for
        # the "gpudb" user and the "gpudb_proc" group, and it cannot be
        # a path on an NFS.
        mountPoint: "/gpudb/kifs"
        useManagedCredentials: true
      # Etcd - ETCDConfig (optional). HA - HAConfig (optional).
      ml:
        # Enable the ML server.
        enable: false
      # NetworkConfig
      network:
        # HAAddress - An optional address to allow inter-cluster
        # communication with HA when 'address' is not routable between
        # clusters.
        HAAddress: string
        # CompressNetworkData - Enables compression of inter-node
        # network data transfers.
        compressNetworkData: false
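        # For illustration only - a sketch of how the Kafka ingest
        # settings above might be tuned for larger, less frequent
        # batches; the values are example assumptions, not
        # recommendations:
        #
        #   kafka:
        #     batchSize: 10000
        #     pollTimeout: 1000
        #     waitTime: 60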
        # EnableHTTPDProxy - Start an HTTP server as a proxy to handle
        # LDAP and/or Kerberos authentication. Each host will run an
        # HTTP server and access to each rank is available through
        # http://host:8082/gpudb-1, where port "8082" is defined
        # by 'httpd_proxy_port'. NOTE: HTTP external endpoints are not
        # affected by the 'use_https' parameter above. If you wish to
        # enable HTTPS, you must edit
        # the "/opt/gpudb/httpd/conf/httpd.conf" and setup HTTPS as per
        # the Apache httpd documentation at
        # https://httpd.apache.org/docs/2.2/
        enableHTTPDProxy: true
        # EnableWorkerHTTPServers - Enable worker HTTP servers; each
        # process runs its own server for multi-head ingest.
        enableWorkerHTTPServers: true
        # GlobalManagerLocalPubPort - ?
        globalManagerLocalPubPort: 5554
        # GlobalManagerPortOne - Internal communication ports - Host
        # manager status notification channel
        globalManagerPortOne: 5552
        # GlobalManagerPubPort - Host manager synchronization message
        # publishing channel port
        globalManagerPubPort: 5553
        # HeadIPAddress - Head HTTP server IP address. Set to the
        # publicly accessible IP address of the first
        # process, **rank0**.
        headIPAddress: "172.20.0.10"
        # HeadPort - Head HTTP server port to use
        # for 'head_ip_address'.
        headPort: 9191
        # HostManagerHTTPPort - HTTP port for web portal of the host
        # manager
        hostManagerHTTPPort: 9300
        # HTTPAllowOrigin - Value to return via
        # Access-Control-Allow-Origin HTTP header (for Cross-Origin
        # Resource Sharing). Set to empty to not return the header and
        # disallow CORS.
        httpAllowOrigin: "*"
        # HTTPKeepAlive - Keep HTTP connections alive between requests
        httpKeepAlive: false
        # HTTPDProxyPort - TCP port that the httpd auth proxy server
        # will listen on if 'enable_httpd_proxy' is "true".
        httpdProxyPort: 8082
        # HTTPDProxyUseHTTPS - Set to "true" if the httpd auth proxy
        # server is configured to use HTTPS.
        httpdProxyUseHTTPS: false
        # HTTPSCertFile - File containing the SSL certificate, e.g.
        # cert.pem. If required, a self-signed certificate (expires
        # after 10 years) can be generated via the command:
        # openssl req -newkey rsa:2048 -new -nodes -x509 \
        #   -days 3650 -keyout key.pem -out cert.pem
        httpsCertFile: ""
        # HTTPSKeyFile - File containing the SSL private key, e.g.
        # key.pem. If required, a self-signed certificate (expires
        # after 10 years) can be generated via the command:
        # openssl req -newkey rsa:2048 -new -nodes -x509 \
        #   -days 3650 -keyout key.pem -out cert.pem
        httpsKeyFile: ""
        # Rank0IPAddress - Internal use IP address of the head HTTP
        # server, **rank0**. Set to either a second internal network
        # accessible by all ranks or to '${gaia.head_ip_address}'.
        rank0IPAddress: "${gaia.rank0.host}"
        ranks:
        - communicatorPort:
            # Number of port to expose on the pod's IP address. This
            # must be a valid port number, 0 < x < 65536.
            containerPort: 1
            # What host IP to bind the external port to.
            hostIP: string
            # Number of port to expose on the host. If specified, this
            # must be a valid port number, 0 < x < 65536. If
            # HostNetwork is specified, this must match ContainerPort.
            # Most containers do not need this.
            hostPort: 1
            # If specified, this must be an IANA_SVC_NAME and unique
            # within the pod. 
Each named port in a pod must have a\n            # unique name. Name for the port that can be referred to by\n            # services.\n            name: string\n            # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n            # to \"TCP\".\n            protocol: \"TCP\"\n          # Specify the hosts to run each rank worker process in the\n          # cluster. For a single machine system, use \"127.0.0.1\", but\n          # if using two or more machines, a hostname or IP address\n          # must be specified for each rank that is accessible from the\n          # other ranks. See also 'head_ip_address'\n          # and 'rank0_ip_address'.\n          host: string\n          # Optionally, specify the worker HTTP server ports. The\n          # default is to use ('head_port' + *rank #*) for each worker\n          # process where rank number is from \"1\" to number of ranks\n          # in 'rank<#>.host' below.\n          httpServerPort:\n            # Number of port to expose on the pod's IP address. This\n            # must be a valid port number, 0 < x < 65536.\n            containerPort: 1\n            # What host IP to bind the external port to.\n            hostIP: string\n            # Number of port to expose on the host. If specified, this\n            # must be a valid port number, 0 < x < 65536. If\n            # HostNetwork is specified, this must match ContainerPort.\n            # Most containers do not need this.\n            hostPort: 1\n            # If specified, this must be an IANA_SVC_NAME and unique\n            # within the pod. Each named port in a pod must have a\n            # unique name. Name for the port that can be referred to by\n            # services.\n            name: string\n            # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n            # to \"TCP\".\n            protocol: \"TCP\"\n          # This is the Kubernetes pod IP Address of the current rank\n          # which we need to populate in the operator. NOTE: Internal\n          # Attribute\n          podIP: string\n          # Optionally, specify a public URL for each worker HTTP server\n          # that clients should use to connect for multi-head\n          # operations. NOTE: If specified for any ranks, a public URL\n          # must be specified for all ranks.\n          publicURL: \"https://:8082/gpudb-{{.Rank}}\"\n          # Define the rank number of this rank.\n          rank: 1\n        # SetMonitorPort - Set monitor ZMQ publisher server port (-1 to\n        # disable), uses the 'head_ip_address' interface.\n        setMonitorPort: 9002\n        # SetMonitorProxyPort - Set monitor ZMQ publisher internal proxy\n        # server port (\"-1\" to disable), uses the 'head_ip_address'\n        # interface. 
IMPORTANT:  Disabling this port effectively
        # prevents worker nodes from publishing set monitor
        # notifications when multi-head ingest is enabled
        # (see 'enable_worker_http_servers').
        setMonitorProxyPort: 9003
        # SetMonitorQueueSize - Set monitor queue size
        setMonitorQueueSize: 1000
        # TriggerPort - Trigger ZMQ publisher server port ("-1" to
        # disable), uses the 'head_ip_address' interface.
        triggerPort: -1
        # UseHTTPS - Set to "true" to use HTTPS; if "true"
        # then 'https_key_file' and 'https_cert_file' must be provided
        useHttps: false
      # PersistenceConfig
      persistence:
        # Removed in 7.2
        IndexDBFlushImmediate: true
        # DataLoadingSchema Startup data-loading scheme
        buildMaterializedViewsOnStart: "on_demand"
        # DataLoadingSchema Startup data-loading scheme
        buildPKIndexOnStart: "on_demand"
        # Target maximum data size for any one column in a chunk
        # (512 MB) (0 = disable). chunk_column_max_memory = 512000000
        chunkColumnMaxMemory: 512000000
        # Target maximum total data size for all columns in a chunk
        # (8 GB) (0 = disable). chunk_max_memory = 8192000000
        chunkMaxMemory: 8192000000
        # Number of records per chunk ("0" disables chunking)
        chunkSize: 8000000
        # Determines whether to execute kernels on host (CPU) or device
        # (GPU). Possible values are:
        #  * "default" : engine decides
        #  * "host"    : execute only on the host
        #  * "device"  : execute only on the device
        #  * <rows>    : execute on the host if a chunked column
        #    contains the given number of *rows* or fewer; otherwise,
        #    execute on the device.
        executionMode: "device"
        # Removed in 7.2
        fsyncIndexDBImmediate: true
        # Removed in 7.2
        fsyncInodesImmediate: true
        # Removed in 7.2
        fsyncMetadataImmediate: true
        # Removed in 7.2
        fsyncOnInterval: true
        # Maximum number of open files for IndexedDb object file store.
        # Removed in 7.2
        indexDBMaxOpenFiles:
        # Table of contents size for IndexedDb object file store.
        # Removed in 7.2
        indexDBTOCSize:
        # Disable detection of sparse file support and use the full file
        # length which may be an over-estimate of the actual usage in
        # the persist tier. Removed in 7.2
        indexDBTierByFileLength: false
        # Startup data-loading scheme:
        #  * "always"    : load all the data into memory before
        #    accepting requests
        #  * "lazy"      : load the necessary data to start, but load
        #    the remainder lazily
        #  * "on_demand" : only load data as requests use it
        loadVectorsOnStart: "on_demand"
        # Removed in 7.2
        metadataFlushImmediate: true
        # Specify a base directory to store persistence data files.
        persistDirectory: "/opt/gpudb/persist"
        # Whether to use synchronous persistence file writing.
        # If "false", files will be written asynchronously. Removed in
        # 7.2
        persistSync: true
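        # For illustration only - a sketch of a persistence
        # configuration that trades startup time for lower memory use,
        # composed from the fields documented above; values are example
        # assumptions:
        #
        #   persistence:
        #     loadVectorsOnStart: "lazy"
        #     buildPKIndexOnStart: "lazy"
        #     executionMode: "default"
        #     persistDirectory: "/opt/gpudb/persist"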
        # Duration in seconds, for which persistence files will be
        # force-synced if out of sync, once per minute. NOTE: Files are
        # always opportunistically saved; this simply enforces a
        # maximum time a file can be out of date. Set to a very high
        # number to disable.
        persistSyncTime: 5
        # The maximum number of bytes in the shadow aggregate cache
        shadowAggSize: 100000000
        # Whether to enable chunk caching
        shadowCubeEnabled: true
        # The maximum number of bytes in the shadow filter cache
        shadowFilterSize: 100000000
        # Base directory to store hashed strings.
        smsDirectory: "${gaia.persist_directory}"
        # Maximum number of open files (per-TOM) for the SMS
        # (string) store.
        smsMaxOpenFiles: 128
        # Synchronous compression: compress vectors on set compression.
        synchronousCompression: false
        # Directory for GPUdb to use to store temporary files. Must be a
        # fully qualified path, have at least 100Mb of free space, and
        # execute permission.
        tempDirectory: "${gaia.persist_directory}/tmp"
        # Base directory to store the text search index.
        textIndexDirectory: "${gaia.persist_directory}"
        # Enable checksum protection on the wal entries. New in 7.2
        walChecksum: true
        # Specifies how frequently wal entries are written with
        # background sync. New in 7.2
        walFlushFrequency: 60
        # Maximum size of each wal segment file. New in 7.2
        walMaxSegmentSize: 500000000
        # Approximate number of segment files to split the wal across. A
        # minimum of two is required. The size of the wal is limited by
        # segment_count * max_segment_size. (per rank and per tom) Set
        # to 0 to remove a size limit on the wal itself, but still be
        # bounded by rank tier limits. Set to -1 to have the database
        # decide automatically per table. New in 7.2
        walSegmentCount:
        # Sync mode to use when persisting wal entries to disk:
        #  * "none"       : Disable the wal
        #  * "background" : Wal entries are periodically written
        #    instead of immediately after each operation
        #  * "flush"      : Protects entries in the event of a database
        #    crash
        #  * "fsync"      : Protects entries in the event of an OS
        #    crash
        # New in 7.2
        walSyncPolicy: "flush"
        # If true, any table that is found to be corrupt after replaying
        # its wal at startup will automatically be truncated so that
        # the table becomes operable. If false, the user will be
        # responsible for resolving the issue via sql REPAIR TABLE or
        # similar. New in 7.2
        walTruncateCorruptTablesOnStart: true
      # PostgresProxy
      postgresProxy:
        # Postgres Proxy Server - start a Postgres (TCP) server as a
        # proxy to handle postgres wire protocol messages.
        enablePostgresProxy: false
        # Set idle connection timeout in seconds. (default: "1200")
        idleConnectionTimeout: 1200
        # Set max number of queued server connections. (default: "1")
        maxQueuedConnections: 1
        # Set max number of server threads to spawn. (default: "64")
        maxThreads: 64
        # Set min number of server threads to spawn. (default: "2")
        minThreads: 2
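        # For illustration only - a sketch enabling the Postgres wire
        # protocol proxy with the fields in this section; the port
        # value is an example assumption:
        #
        #   postgresProxy:
        #     enablePostgresProxy: true
        #     port:
        #       containerPort: 5432
        #       protocol: "TCP"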
        # TCP port that the postgres proxy server will listen on
        # if 'enable_postgres_proxy' is "true".
        port:
          # Number of port to expose on the pod's IP address. This must
          # be a valid port number, 0 < x < 65536.
          containerPort: 1
          # What host IP to bind the external port to.
          hostIP: string
          # Number of port to expose on the host. If specified, this
          # must be a valid port number, 0 < x < 65536. If HostNetwork
          # is specified, this must match ContainerPort. Most
          # containers do not need this.
          hostPort: 1
          # If specified, this must be an IANA_SVC_NAME and unique
          # within the pod. Each named port in a pod must have a unique
          # name. Name for the port that can be referred to by
          # services.
          name: string
          # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
          # to "TCP".
          protocol: "TCP"
        # Set to "true" to use SSL; if "true" then 'ssl_key_file'
        # and 'ssl_cert_file' must be provided
        ssl: false
        # Files containing the SSL certificate and the SSL private key.
        # If required, a self signed certificate (expires after 10
        # years) can be generated via the command: openssl req -newkey
        # rsa:2048 -new -nodes -x509 \ -days 3650 -keyout key.pem -out
        # cert.pem
        sslCertFile: ""
        sslKeyFile: ""
      # ProcessesConfig
      processes:
        # Set the maximum number of threads per tom for table
        # initialization on startup
        initTablesNumThreadsPerTom: 8
        # Set the number of parallel calculation threads to use for data
        # processing; use -1 to use the max number of threads
        # (not recommended)
        kernelOmpThreads: 3
        # The maximum number of web server threads to spawn
        maxHttpThreads: 512
        # Set the maximum number of threads (both workers and masters)
        # to be passed to TBB on initialization.  Generally
        # speaking, 'max_tbb_threads_per_rank' - "1" TBB workers will
        # be created.  Use "-1" for no limit.
        maxTbbThreadsPerRank: "-1"
        # The minimum number of web server threads to spawn
        minHttpThreads: 8
        # Set the number of parallel jobs to create for multi-child set
        # calculations; use "-1" to use the max number of threads
        # (not recommended)
        smOmpThreads: 2
        # Maximum number of simultaneous threads allocated to a given
        # request, on each rank. Note that thread allocation may also
        # be limited by resource group limits and/or system load.
        subtaskConcurrentyLimit: "-1"
        # Set the number of TaskCalculators per TOM, GPU data
        # processors.
        tcsPerTom: "-1"
        # Set the number of TOMs (data container shards) per rank
        tomsPerRank: 1
        # Set the number of TaskProcessors per TOM, CPU data
        # processors.
        tpsPerTom: "-1"
      # ProcsConfig
      procs:
        # Directory where proc files are stored at runtime. Must be a
        # fully qualified path with execute permission. If not
        # specified, 'temp_directory' will be used.
        directory:
          # PersistentVolumeClaim is a user's request for and claim to a
          # persistent volume
          persistVolumeClaim:
            # APIVersion defines the versioned schema of this
            # representation of an object. Servers should convert
            # recognized schemas to the latest internal value, and may
            # reject unrecognized values. 
More info:\n            # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n            apiVersion: app.kinetica.com/v1\n            # Kind is a string value representing the REST resource this\n            # object represents. Servers may infer this from the\n            # endpoint the client submits requests to. Cannot be\n            # updated. In CamelCase. More info:\n            # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n            kind: KineticaCluster\n            # Standard object's metadata. More info:\n            # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n            metadata: {}\n            # spec defines the desired characteristics of a volume\n            # requested by a pod author. More info:\n            # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n            spec:\n              # accessModes contains the desired access modes the volume\n              # should have. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n              accessModes: [\"string\"]\n              # dataSource field can be used to specify either: * An\n              # existing VolumeSnapshot object\n              # (snapshot.storage.k8s.io/VolumeSnapshot) * An existing\n              # PVC (PersistentVolumeClaim) If the provisioner or an\n              # external controller can support the specified data\n              # source, it will create a new volume based on the\n              # contents of the specified data source. When the\n              # AnyVolumeDataSource feature gate is enabled, dataSource\n              # contents will be copied to dataSourceRef, and\n              # dataSourceRef contents will be copied to dataSource\n              # when dataSourceRef.namespace is not specified. If the\n              # namespace is specified, then dataSourceRef will not be\n              # copied to dataSource.\n              dataSource:\n                # APIGroup is the group for the resource being\n                # referenced. If APIGroup is not specified, the\n                # specified Kind must be in the core API group. For any\n                # other third-party types, APIGroup is required.\n                apiGroup: string\n                # Kind is the type of resource being referenced\n                kind: KineticaCluster\n                # Name is the name of resource being referenced\n                name: string\n              # dataSourceRef specifies the object from which to\n              # populate the volume with data, if a non-empty volume is\n              # desired. This may be any object from a non-empty API\n              # group (non core object) or a PersistentVolumeClaim\n              # object. When this field is specified, volume binding\n              # will only succeed if the type of the specified object\n              # matches some installed volume populator or dynamic\n              # provisioner. This field will replace the functionality\n              # of the dataSource field and as such if both fields are\n              # non-empty, they must have the same value. 
For backwards\n              # compatibility, when namespace isn't specified in\n              # dataSourceRef, both fields (dataSource and\n              # dataSourceRef) will be set to the same value\n              # automatically if one of them is empty and the other is\n              # non-empty. When namespace is specified in\n              # dataSourceRef, dataSource isn't set to the same value\n              # and must be empty. There are three important\n              # differences between dataSource and dataSourceRef: *\n              # While dataSource only allows two specific types of\n              # objects, dataSourceRef allows any non-core object, as\n              # well as PersistentVolumeClaim objects. * While\n              # dataSource ignores disallowed values (dropping them),\n              # dataSourceRef preserves all values, and generates an\n              # error if a disallowed value is specified. * While\n              # dataSource only allows local objects, dataSourceRef\n              # allows objects in any namespaces. (Beta) Using this\n              # field requires the AnyVolumeDataSource feature gate to\n              # be enabled. (Alpha) Using the namespace field of\n              # dataSourceRef requires the\n              # CrossNamespaceVolumeDataSource feature gate to be\n              # enabled.\n              dataSourceRef:\n                # APIGroup is the group for the resource being\n                # referenced. If APIGroup is not specified, the\n                # specified Kind must be in the core API group. For any\n                # other third-party types, APIGroup is required.\n                apiGroup: string\n                # Kind is the type of resource being referenced\n                kind: KineticaCluster\n                # Name is the name of resource being referenced\n                name: string\n                # Namespace is the namespace of resource being\n                # referenced Note that when a namespace is specified, a\n                # gateway.networking.k8s.io/ReferenceGrant object is\n                # required in the referent namespace to allow that\n                # namespace's owner to accept the reference. See the\n                # ReferenceGrant documentation for details.(Alpha) This\n                # field requires the CrossNamespaceVolumeDataSource\n                # feature gate to be enabled.\n                namespace: string\n              # resources represents the minimum resources the volume\n              # should have. If RecoverVolumeExpansionFailure feature\n              # is enabled users are allowed to specify resource\n              # requirements that are lower than previous value but\n              # must still be higher than capacity recorded in the\n              # status field of the claim. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n              resources:\n                # Claims lists the names of resources, defined in\n                # spec.resourceClaims, that are used by this container.\n                # This is an alpha field and requires enabling the\n                # DynamicResourceAllocation feature gate. This field is\n                # immutable. It can only be set for containers.\n                claims:\n                - name: string\n                # Limits describes the maximum amount of compute\n                # resources allowed. 
More info:\n                # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                limits: {}\n                # Requests describes the minimum amount of compute\n                # resources required. If Requests is omitted for a\n                # container, it defaults to Limits if that is\n                # explicitly specified, otherwise to an\n                # implementation-defined value. Requests cannot exceed\n                # Limits. More info:\n                # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                requests: {}\n              # selector is a label query over volumes to consider for\n              # binding.\n              selector:\n                # matchExpressions is a list of label selector\n                # requirements. The requirements are ANDed.\n                matchExpressions:\n                - key: string\n                  # operator represents a key's relationship to a set of\n                  # values. Valid operators are In, NotIn, Exists and\n                  # DoesNotExist.\n                  operator: string\n                  # values is an array of string values. If the operator\n                  # is In or NotIn, the values array must be non-empty.\n                  # If the operator is Exists or DoesNotExist, the\n                  # values array must be empty. This array is replaced\n                  # during a strategic merge patch.\n                  values: [\"string\"]\n                # matchLabels is a map of {key,value} pairs. A single\n                # {key,value} in the matchLabels map is equivalent to\n                # an element of matchExpressions, whose key field\n                # is \"key\", the operator is \"In\", and the values array\n                # contains only \"value\". The requirements are ANDed.\n                matchLabels: {}\n              # storageClassName is the name of the StorageClass\n              # required by the claim. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n              storageClassName: string\n              # volumeMode defines what type of volume is required by\n              # the claim. Value of Filesystem is implied when not\n              # included in claim spec.\n              volumeMode: string\n              # volumeName is the binding reference to the\n              # PersistentVolume backing this claim.\n              volumeName: string\n            # status represents the current information/status of a\n            # persistent volume claim. Read-only. More info:\n            # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n            status:\n              # accessModes contains the actual access modes the volume\n              # backing the PVC has. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n              accessModes: [\"string\"]\n              # allocatedResources is the storage resource within\n              # AllocatedResources tracks the capacity allocated to a\n              # PVC. It may be larger than the actual capacity when a\n              # volume expansion operation is requested. For storage\n              # quota, the larger value from allocatedResources and\n              # PVC.spec.resources is used. 
If allocatedResources is
              # not set, PVC.spec.resources alone is used for quota
              # calculation. If a volume expansion capacity request is
              # lowered, allocatedResources is only lowered if there
              # are no expansion operations in progress and if the
              # actual volume capacity is equal or lower than the
              # requested capacity. This is an alpha field and requires
              # enabling RecoverVolumeExpansionFailure feature.
              allocatedResources: {}
              # capacity represents the actual resources of the
              # underlying volume.
              capacity: {}
              # conditions is the current Condition of persistent volume
              # claim. If underlying persistent volume is being resized
              # then the Condition will be set to 'ResizeStarted'.
              conditions:
              - lastProbeTime: string
                # lastTransitionTime is the time the condition
                # transitioned from one status to another.
                lastTransitionTime: string
                # message is the human-readable message indicating
                # details about last transition.
                message: string
                # reason is a unique, this should be a short, machine
                # understandable string that gives the reason for
                # condition's last transition. If it
                # reports "ResizeStarted" that means the underlying
                # persistent volume is being resized.
                reason: string
                status: string
                # PersistentVolumeClaimConditionType is a valid value of
                # PersistentVolumeClaimCondition.Type
                type: string
              # phase represents the current phase of
              # PersistentVolumeClaim.
              phase: string
              # resizeStatus stores status of resize operation.
              # ResizeStatus is not set by default but when expansion
              # is complete resizeStatus is set to empty string by
              # resize controller or kubelet. This is an alpha field
              # and requires enabling RecoverVolumeExpansionFailure
              # feature.
              resizeStatus: string
          # VolumeMount describes a mounting of a Volume within a
          # container.
          volumeMount:
            # Path within the container at which the volume should be
            # mounted.  Must not contain ':'.
            mountPath: string
            # mountPropagation determines how mounts are propagated from
            # the host to container and the other way around. When not
            # set, MountPropagationNone is used. This field is beta in
            # 1.10.
            mountPropagation: string
            # This must match the Name of a Volume.
            name: string
            # Mounted read-only if true, read-write otherwise (false or
            # unspecified). Defaults to false.
            readOnly: true
            # Path within the volume from which the container's volume
            # should be mounted. Defaults to "" (volume's root).
            subPath: string
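            # For illustration only - a sketch of how the procs
            # directory might be backed by a PVC and mounted, using the
            # fields documented above; the volume name, size and mount
            # path are example assumptions:
            #
            #   procs:
            #     directory:
            #       persistVolumeClaim:
            #         spec:
            #           accessModes: ["ReadWriteOnce"]
            #           resources:
            #             requests:
            #               storage: "10Gi"
            #       volumeMount:
            #         name: "procs-volume"
            #         mountPath: "/opt/gpudb/procs"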
            # Expanded path within the volume from which the container's
            # volume should be mounted. Behaves similarly to SubPath
            # but environment variable references $(VAR_NAME) are
            # expanded using the container's environment. Defaults
            # to "" (volume's root). SubPathExpr and SubPath are
            # mutually exclusive.
            subPathExpr: string
        # Enable procs (UDFs)
        enable: true
      # SecurityConfig
      security:
        # Automatically create accounts for externally-authenticated
        # users. If 'enable_external_authentication' is "false", this
        # setting has no effect. Note that accounts are not
        # automatically deleted if users are removed from the external
        # authentication provider and will be orphaned.
        autoCreateExternalUsers: false
        # Automatically add roles passed in via the "KINETICA_ROLES"
        # HTTP header to externally-authenticated users. Specified
        # roles that do not exist are ignored.
        # If 'enable_external_authentication' is "false", this setting
        # has no effect. IMPORTANT: DO NOT ENABLE unless the
        # authentication proxy is configured to block "KINETICA_ROLES"
        # HTTP headers passed in from clients.
        autoGrantExternalRoles: false
        # Comma-separated list of roles to revoke from
        # externally-authenticated users prior to granting roles passed
        # in via the "KINETICA_ROLES" HTTP header, or "*" to revoke all
        # roles. Preceding a role name with an "!" overrides the
        # revocation (e.g. "*,!foo" revokes all roles except "foo").
        # Leave blank to disable. If
        # either 'enable_external_authentication'
        # or 'auto_grant_external_roles' is "false", this setting has
        # no effect.
        autoRevokeExternalRoles: false
        # Enable authorization checks.  When disabled, all requests will
        # be treated as the administrative user.
        enableAuthorization: true
        # Enable external (LDAP, Kerberos, etc.) authentication. User
        # IDs of externally-authenticated users must be passed in via
        # the "REMOTE_USER" HTTP header from the authentication proxy.
        # May be used in conjunction with the 'enable_httpd_proxy'
        # setting above for an integrated external authentication
        # solution. IMPORTANT: DO NOT ENABLE unless external access to
        # GPUdb ports has been blocked via firewall AND the
        # authentication proxy is configured to block "REMOTE_USER"
        # HTTP headers passed in from clients.
        enableExternalAuthentication: true
        # ExternalSecurity
        externalSecurity:
          # Ranger
          ranger:
            # AuthorizerAddress - The network URI for the
            # ranger_authorizer to start. The URI can be either TCP or
            # IPC. TCP address is used to indicate the remote
            # ranger_authorizer which may run at other hosts. The IPC
            # address is for a local ranger_authorizer. Example
            # addresses for remote or TCP servers: tcp://127.0.0.1:9293
            # tcp://HOST_IP:9293 Example address for local IPC servers:
            # ipc:///tmp/gpudb-ranger-0
            # security.external.ranger_authorizer.address =
            # ipc://${gaia.temp_directory}/gpudb-ranger-0
            authorizerAddress: "ipc://${gaia.temp_directory}/gpudb-ranger-0"
            # Remote debugger port used for the ranger_authorizer.
            # Setting the port to "0" disables remote debugging. 
NOTE:\n            # Recommended port to use is \"5005\"\n            # security.external.ranger_authorizer.remote_debug_port =\n            # 0\n            authorizerRemoteDebugPort: 0\n            # AuthorizerTimeout - Ranger Authorizer timeout in seconds\n            # security.external.ranger_authorizer.timeout = 120\n            authorizerTimeout: 120\n            # CacheMinutes- Maximum minutes to hold on to data from\n            # Ranger security.external.ranger.cache_minutes = 60\n            cacheMinutes: 60\n            # Name of the service created on the Ranger Server to manage\n            # this Kinetica instance\n            # security.external.ranger.service_name = kinetica\n            name: \"kinetica\"\n            # ExtURL - URL of Ranger REST API.  E.g.,\n            # https://localhost:6080/ Leave blank for no Ranger Server\n            # security.external.ranger.url =\n            url: string\n        # The minimum allowable password length.\n        minPasswordLength: 4\n        # Require all users to be authenticated.  Disable this to allow\n        # users to access the database as the 'unauthenticated' user.\n        # Useful for situations where the public needs to access the\n        # data.\n        requireAuthentication: true\n        # UnifiedSecurityNamespace - Use a single namespace for internal\n        # and external user IDs and role names. If false, external user\n        # IDs must be prefixed with \"@\" to differentiate them from\n        # internal user IDs and role names (except in the \"REMOTE_USER\"\n        # HTTP header, where the \"@\" is omitted).\n        # unified_security_namespace = true\n        unifiedSecurityNamespace: true\n      # SQLConfig\n      sql:\n        # SQLPlannerAddress is not included as it is just default\n        # always\n        address: \"ipc://${gaia.temp_directory}/gpudb-query-engine-0\"\n        # Enable the cost-based optimizer\n        costBasedOptimization: false\n        # Enable distributed joins\n        distributedJoins: true\n        # Enable distributed operations\n        distributedOperations: true\n        # Enable Query Planner\n        enablePlanner: true\n        # Perform joins between only 2 tables at a time; default is all\n        # tables involved in the operation at once\n        forceBinaryJoins: false\n        # Perform unions/intersections/exceptions between only 2 tables\n        # at a time; default is all tables involved in the operation at\n        # once\n        forceBinarySetOps: false\n        # Max parallel steps\n        maxParallelSteps: 4\n        # Max allowed view nesting levels. Valid range(1-64)\n        maxViewNestingLevels: 16\n        # TTL of the paging results table\n        pagingTableTTL: 20\n        # Enable parallel query evaluation\n        parallelExecution: true\n        # The maximum number of entries in the SQL plan cache.  The\n        # default is \"4000\" entries, but the configurable range\n        # is \"1\" - \"1000000\".  Plan caching will be disabled if the\n        # value is set outside of that range.\n        planCacheSize: 4000\n        # The maximum memory for the query planner to use in Megabytes.\n        plannerMaxMemory: 4096\n        # The maximum stack size for the query planner threads to use in\n        # Megabytes.\n        plannerMaxStack: 6\n        # Query planner timeout in seconds\n        plannerTimeout: 120\n        # Max Query planner threads\n        plannerWorkers: 16\n        # Remote debugger port used for the query planner. 
Setting the
        # port to "0" disables remote debugging. NOTE:  Recommended
        # port to use is "5005"
        remoteDebugPort: 5005
        # TTL of the query cache results table
        resultsCacheTTL: 60
        # Enable query results caching
        resultsCaching: true
        # Enable rule-based query rewrites
        ruleBasedOptimization: true
      # SQLEngineConfig
      sqlEngine:
        # Enable the cost-based optimizer
        costBasedOptimization: false
        # Name of default collection for user tables
        defaultSchema: ""
        # Enable distributed joins
        distributedJoins: true
        # Enable distributed operations
        distributedOperations: true
        # Perform joins between only 2 tables at a time; default is all
        # tables involved in the operation at once
        forceBinaryJoins: false
        # Perform unions/intersections/exceptions between only 2 tables
        # at a time; default is all tables involved in the operation at
        # once
        forceBinarySetOps: false
        # Max parallel steps
        maxParallelSteps: 4
        # Max allowed view nesting levels. Valid range(1-64)
        maxViewNestingLevels: 16
        # TTL of the paging results table
        pagingTableTTL: 20
        # Enable parallel query evaluation
        parallelExecution: true
        # The maximum number of entries in the SQL plan cache.  The
        # default is "4000" entries, but the configurable range
        # is "1" - "1000000".  Plan caching will be disabled if the
        # value is set outside of that range.
        planCacheSize: 4000
        # PlannerConfig
        planner:
          # Enable Query Planner
          enablePlanner: true
          # The maximum memory for the query planner to use in
          # Megabytes.
          maxMemory: 4096
          # The maximum stack size for the query planner threads to use
          # in Megabytes.
          maxStack: 6
          # The network URI for the query planner to start. The URI can
          # be either TCP or IPC. TCP address is used to indicate the
          # remote query planner which may run at other hosts. The IPC
          # address is for a local query planner.
          # Example for remote or TCP servers:
          #   sql.planner.address = tcp://127.0.0.1:9293
          #   sql.planner.address = tcp://HOST_IP:9293
          # Example for local IPC servers:
          #   sql.planner.address = ipc:///tmp/gpudb-query-engine-0
          plannerAddress: "ipc:///tmp/gpudb-query-engine-0"
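          # For illustration only - a sketch of planner settings sized
          # for heavier concurrent SQL workloads, using the fields in
          # this section; the values are example assumptions, not
          # recommendations:
          #
          #   planner:
          #     enablePlanner: true
          #     maxMemory: 8192
          #     workers: 32
          #     timeout: 300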
NOTE:  Recommended\n          # port to use is \"5005\"\n          remoteDebugPort: 0\n          # Query planner timeout in seconds\n          timeout: 120\n          # Max Query planner threads\n          workers: 16\n        results:\n          # TTL of the query cache results table\n          cacheTTL: 60\n          # Enable query results caching\n          caching: true\n        # Enable rule-based query rewrites\n        ruleBasedOptimization: true\n        # Name of collection that will be used to store result tables\n        # generated as part of query execution\n        tempCollection: \"__SQL_TEMP\"\n      # StatisticsConfig\n      statistics:\n        # system_metadata.stats_aggr_rowcount = 10000\n        aggrRowCount: 10000\n        # system_metadata.stats_aggr_time = 1\n        aggrTime: 1\n        # Run a statistics server to collect information about Kinetica\n        # and the machines it runs on.\n        enable: true\n        # Statistics server IP address (run on head node) default port\n        # is \"2003\"\n        ipAddress: \"${gaia.host0.address}\"\n        # Statistics server namespace - should be a machine identifier\n        namespace: \"gpudb\"\n        port: 2003\n        # System metadata catalog settings\n        # system_metadata.stats_retention_days = 21\n        retentionDays: 21\n      # TextSearchConfig\n      textSearch:\n        # Enable text search capability within the database.\n        enableTextSearch: false\n        # Number of text indices to start for each rank\n        textIndicesPerTom: 2\n        # Searcher refresh intervals - specifies the maximum delay\n        # (in seconds) between writing to the text search index and\n        # being able to search for the value just written.  A value\n        # of \"0\" ensures that writes to the index are immediately\n        # available to be searched.  A more nominal value of \"100\"\n        # should improve ingest speed at the cost of some delay in\n        # being able to text search newly added values.\n        textSearcherRefreshInterval: 20\n        # Use the production-capable external text server instead of a\n        # lightweight internal server which should only be used for\n        # light testing. Note: The internal text server is deprecated\n        # and may be removed in future versions.\n        useExternalTextServer: true\n      tieredStorage:\n        # Cold Storage Tiers can be used to extend the storage capacity\n        # of the Persist Tier. Assign a tier strategy with cold storage\n        # to objects that will be infrequently accessed since they will\n        # be moved as needed from the Persist Tier. The Cold Storage\n        # Tier is typically a much larger capacity physical disk or a\n        # cloud-based storage system which may not be as performant as\n        # the Persist Tier storage. A default storage limit and\n        # eviction thresholds can be set across all ranks for a given\n        # Cold Storage Tier, while one or more ranks within a Cold\n        # Storage Tier may be configured to override those defaults.\n        # NOTE: If an object needs to be pulled out of cold storage\n        # during a query, it may need to use the local persist\n        # directory as a temporary swap space. 
This may trigger an\n        # eviction of other persisted items to cold storage due to low\n        # disk space condition defined by the watermark settings for\n        # the Persist Tier.\n        coldStorageTier:\n          # ColdStorageAzure\n          coldStorageAzure:\n            # 'base_path'             : A base path based on the\n            #  provider type for this tier.\n            basePath: string\n            clientID: string\n            clientSecret: string\n            # 'connection_timeout'    : Timeout in seconds for\n            #  connecting to this storage provider.\n            connectionTimeout: \"30\"\n            # 'container_name'        : The Azure storage container\n            #  name for this tier.\n            containerName: \"/gpudb/cold_storage\"\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is \"90\", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            provisioner: \"docker.io/hostpath\"\n            sasToken: string\n            storageAccountKey: string\n            storageAccountName: string\n            tenantID: string\n            useManagedCredentials: false\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. 
More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. 
(Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. 
Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. 
If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, short, machine-understandable\n                  # string that gives the reason for the condition's\n                  # last transition. If it reports \"ResizeStarted\",\n                  # that means the underlying persistent volume is\n                  # being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n            # 'wait_timeout'          : Timeout in seconds for reading\n            #  from or writing to this storage provider.\n            waitTimeout: \"90\"\n          # ColdStorageDisk\n          coldStorageDisk:\n            # 'base_path'             : A base path based on the\n            #  provider type for this tier.\n            basePath: string\n            # 'connection_timeout'    : Timeout in seconds for\n            #  connecting to this storage provider.\n            connectionTimeout: \"30\"\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is \"90\", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            provisioner: \"docker.io/hostpath\"\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. 
Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. 
This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. (Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. 
This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. 
More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, short, machine-understandable\n                  # string that gives the reason for the condition's\n                  # last transition. If it reports \"ResizeStarted\",\n                  # that means the underlying persistent volume is\n                  # being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. 
This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n            # 'wait_timeout'          : Timeout in seconds for reading\n            #  from or writing to this storage provider.\n            waitTimeout: \"90\"\n          # ColdStorageGCS - Google Cloud Storage-specific parameter\n          # names:\n          #  * BucketName        - 'gcs_bucket_name'\n          #  * ProjectID         - 'gcs_project_id' (optional)\n          #  * AccountID         - 'gcs_service_account_id' (optional)\n          #  * AccountPrivateKey - 'gcs_service_account_private_key'\n          #    (optional)\n          #  * AccountKeys       - 'gcs_service_account_keys'\n          #    (optional)\n          # NOTE: If the 'gcs_service_account_id',\n          # 'gcs_service_account_private_key' and/or\n          # 'gcs_service_account_keys' values are not specified, the\n          # Google Cloud Client Libraries will attempt to find and use\n          # service account credentials from the\n          # GOOGLE_APPLICATION_CREDENTIALS environment variable.\n          coldStorageGCS:\n            accountID: string\n            accountKeys: string\n            accountPrivateKey: string\n            # 'base_path'             : A base path based on the\n            #  provider type for this tier.\n            basePath: string\n            bucketName: string\n            # 'connection_timeout'    : Timeout in seconds for\n            #  connecting to this storage provider.\n            connectionTimeout: \"30\"\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is \"90\", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            projectID: string\n            provisioner: \"docker.io/hostpath\"\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. 
More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. 
There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. (Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. 
If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. 
If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, short, machine-understandable\n                  # string that gives the reason for the condition's\n                  # last transition. If it reports \"ResizeStarted\",\n                  # that means the underlying persistent volume is\n                  # being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n            # 'wait_timeout'          : Timeout in seconds for reading\n            #  from or writing to this storage provider.\n            waitTimeout: \"90\"\n          # ColdStorageHDFS\n          coldStorageHDFS:\n            # ColdStorageDisk\n            default:\n              # 'base_path'             : A base path based on the\n              #  provider type for this tier.\n              basePath: string\n              # 'connection_timeout'    : Timeout in seconds for\n              #  connecting to this storage provider.\n              connectionTimeout: \"30\"\n              # * 'high_watermark' : Percentage used eviction threshold.\n              #    Once usage exceeds this value, evictions from this\n              #    tier will be scheduled in the background and\n              #    continue until the 'low_watermark' percentage usage\n              #    is reached.  
Default is \"90\", signifying a 90%\n              #    memory usage threshold.\n              highWatermark: 90\n              # * 'limit'          : The maximum (bytes) per rank that\n              #    can be allocated across all resource groups.\n              limit: \"1Gi\"\n              # * 'low_watermark'  : Percentage used recovery threshold.\n              #    Once usage exceeds the 'high_watermark', evictions\n              #    will continue until usage falls below this recovery\n              #    threshold. Default is \"80\", signifying an 80% usage\n              #    threshold.\n              lowWatermark: 80 name: string\n              # A base directory to use as a space for this tier.\n              path: \"default\" provisioner: \"docker.io/hostpath\"\n              # Kubernetes Persistent Volume Claim for this disk tier.\n              volumeClaim:\n                # APIVersion defines the versioned schema of this\n                # representation of an object. Servers should convert\n                # recognized schemas to the latest internal value, and\n                # may reject unrecognized values. More info:\n                # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n                apiVersion: app.kinetica.com/v1\n                # Kind is a string value representing the REST resource\n                # this object represents. Servers may infer this from\n                # the endpoint the client submits requests to. Cannot\n                # be updated. In CamelCase. More info:\n                # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n                kind: KineticaCluster\n                # Standard object's metadata. More info:\n                # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n                metadata: {}\n                # spec defines the desired characteristics of a volume\n                # requested by a pod author. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n                spec:\n                  # accessModes contains the desired access modes the\n                  # volume should have. More info:\n                  # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                  accessModes: [\"string\"]\n                  # dataSource field can be used to specify either: * An\n                  # existing VolumeSnapshot object\n                  # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                  # existing PVC (PersistentVolumeClaim) If the\n                  # provisioner or an external controller can support\n                  # the specified data source, it will create a new\n                  # volume based on the contents of the specified data\n                  # source. When the AnyVolumeDataSource feature gate\n                  # is enabled, dataSource contents will be copied to\n                  # dataSourceRef, and dataSourceRef contents will be\n                  # copied to dataSource when dataSourceRef.namespace\n                  # is not specified. If the namespace is specified,\n                  # then dataSourceRef will not be copied to\n                  # dataSource.\n                  dataSource:\n                    # APIGroup is the group for the resource being\n                    # referenced. 
If APIGroup is not specified, the\n                    # specified Kind must be in the core API group. For\n                    # any other third-party types, APIGroup is\n                    # required.\n                    apiGroup: string\n                    # Kind is the type of resource being referenced\n                    kind: KineticaCluster\n                    # Name is the name of resource being referenced\n                    name: string\n                  # dataSourceRef specifies the object from which to\n                  # populate the volume with data, if a non-empty\n                  # volume is desired. This may be any object from a\n                  # non-empty API group (non core object) or a\n                  # PersistentVolumeClaim object. When this field is\n                  # specified, volume binding will only succeed if the\n                  # type of the specified object matches some installed\n                  # volume populator or dynamic provisioner. This field\n                  # will replace the functionality of the dataSource\n                  # field and as such if both fields are non-empty,\n                  # they must have the same value. For backwards\n                  # compatibility, when namespace isn't specified in\n                  # dataSourceRef, both fields (dataSource and\n                  # dataSourceRef) will be set to the same value\n                  # automatically if one of them is empty and the other\n                  # is non-empty. When namespace is specified in\n                  # dataSourceRef, dataSource isn't set to the same\n                  # value and must be empty. There are three important\n                  # differences between dataSource and dataSourceRef: *\n                  # While dataSource only allows two specific types of\n                  # objects, dataSourceRef allows any non-core object,\n                  # as well as PersistentVolumeClaim objects. * While\n                  # dataSource ignores disallowed values\n                  # (dropping them), dataSourceRef preserves all\n                  # values, and generates an error if a disallowed\n                  # value is specified. * While dataSource only allows\n                  # local objects, dataSourceRef allows objects in any\n                  # namespaces. (Beta) Using this field requires the\n                  # AnyVolumeDataSource feature gate to be enabled.\n                  # (Alpha) Using the namespace field of dataSourceRef\n                  # requires the CrossNamespaceVolumeDataSource feature\n                  # gate to be enabled.\n                  dataSourceRef:\n                    # APIGroup is the group for the resource being\n                    # referenced. If APIGroup is not specified, the\n                    # specified Kind must be in the core API group. 
For\n                    # any other third-party types, APIGroup is\n                    # required.\n                    apiGroup: string\n                    # Kind is the type of resource being referenced\n                    kind: KineticaCluster\n                    # Name is the name of resource being referenced\n                    name: string\n                    # Namespace is the namespace of resource being\n                    # referenced Note that when a namespace is\n                    # specified, a\n                    # gateway.networking.k8s.io/ReferenceGrant object\n                    # is required in the referent namespace to allow\n                    # that namespace's owner to accept the reference.\n                    # See the ReferenceGrant documentation for\n                    # details. (Alpha) This field requires the\n                    # CrossNamespaceVolumeDataSource feature gate to be\n                    # enabled.\n                    namespace: string\n                  # resources represents the minimum resources the\n                  # volume should have. If\n                  # RecoverVolumeExpansionFailure feature is enabled\n                  # users are allowed to specify resource requirements\n                  # that are lower than previous value but must still\n                  # be higher than capacity recorded in the status\n                  # field of the claim. More info:\n                  # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                  resources:\n                    # Claims lists the names of resources, defined in\n                    # spec.resourceClaims, that are used by this\n                    # container. This is an alpha field and requires\n                    # enabling the DynamicResourceAllocation feature\n                    # gate. This field is immutable. It can only be set\n                    # for containers.\n                    claims:\n                    - name: string\n                    # Limits describes the maximum amount of compute\n                    # resources allowed. More info:\n                    # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                    limits: {}\n                    # Requests describes the minimum amount of compute\n                    # resources required. If Requests is omitted for a\n                    # container, it defaults to Limits if that is\n                    # explicitly specified, otherwise to an\n                    # implementation-defined value. Requests cannot\n                    # exceed Limits. More info:\n                    # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                    requests: {}\n                  # selector is a label query over volumes to consider\n                  # for binding.\n                  selector:\n                    # matchExpressions is a list of label selector\n                    # requirements. The requirements are ANDed.\n                    matchExpressions:\n                    - key: string\n                      # operator represents a key's relationship to a\n                      # set of values. Valid operators are In, NotIn,\n                      # Exists and DoesNotExist.\n                      operator: string\n                      # values is an array of string values. 
If the\n                      # operator is In or NotIn, the values array must\n                      # be non-empty. If the operator is Exists or\n                      # DoesNotExist, the values array must be empty.\n                      # This array is replaced during a strategic merge\n                      # patch.\n                      values: [\"string\"]\n                    # matchLabels is a map of {key,value} pairs. A\n                    # single {key,value} in the matchLabels map is\n                    # equivalent to an element of matchExpressions,\n                    # whose key field is \"key\", the operator is \"In\",\n                    # and the values array contains only \"value\". The\n                    # requirements are ANDed.\n                    matchLabels: {}\n                  # storageClassName is the name of the StorageClass\n                  # required by the claim. More info:\n                  # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                  storageClassName: string\n                  # volumeMode defines what type of volume is required\n                  # by the claim. Value of Filesystem is implied when\n                  # not included in claim spec.\n                  volumeMode: string\n                  # volumeName is the binding reference to the\n                  # PersistentVolume backing this claim.\n                  volumeName: string\n                # status represents the current information/status of a\n                # persistent volume claim. Read-only. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n                status:\n                  # accessModes contains the actual access modes the\n                  # volume backing the PVC has. More info:\n                  # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                  accessModes: [\"string\"]\n                  # allocatedResources is the storage resource within\n                  # AllocatedResources tracks the capacity allocated to\n                  # a PVC. It may be larger than the actual capacity\n                  # when a volume expansion operation is requested. For\n                  # storage quota, the larger value from\n                  # allocatedResources and PVC.spec.resources is used.\n                  # If allocatedResources is not set,\n                  # PVC.spec.resources alone is used for quota\n                  # calculation. If a volume expansion capacity request\n                  # is lowered, allocatedResources is only lowered if\n                  # there are no expansion operations in progress and\n                  # if the actual volume capacity is equal or lower\n                  # than the requested capacity. This is an alpha field\n                  # and requires enabling RecoverVolumeExpansionFailure\n                  # feature.\n                  allocatedResources: {}\n                  # capacity represents the actual resources of the\n                  # underlying volume.\n                  capacity: {}\n                  # conditions is the current Condition of persistent\n                  # volume claim. 
If underlying persistent volume is\n                  # being resized then the Condition will be set\n                  # to 'ResizeStarted'.\n                  conditions:\n                  - lastProbeTime: string\n                    # lastTransitionTime is the time the condition\n                    # transitioned from one status to another.\n                    lastTransitionTime: string\n                    # message is the human-readable message indicating\n                    # details about last transition.\n                    message: string\n                    # reason is a unique, this should be a short,\n                    # machine understandable string that gives the\n                    # reason for condition's last transition. If it\n                    # reports \"ResizeStarted\" that means the underlying\n                    # persistent volume is being resized.\n                    reason: string status: string\n                    # PersistentVolumeClaimConditionType is a valid\n                    # value of PersistentVolumeClaimCondition.Type\n                    type: string\n                  # phase represents the current phase of\n                  # PersistentVolumeClaim.\n                  phase: string\n                  # resizeStatus stores status of resize operation.\n                  # ResizeStatus is not set by default but when\n                  # expansion is complete resizeStatus is set to empty\n                  # string by resize controller or kubelet. This is an\n                  # alpha field and requires enabling\n                  # RecoverVolumeExpansionFailure feature.\n                  resizeStatus: string\n              # 'wait_timeout'          : Timeout in seconds for reading\n              #  from or writing to this storage provider.\n              waitTimeout: \"90\"\n            # 'hdfs_kerberos_keytab'  : The Kerberos keytab file used to\n            #  authenticate the \"gpudb\" Kerberos\n            kerberosKeytab: string\n            # 'hdfs_principal'        : The effective principal name to\n            #  use when connecting to the hadoop cluster.\n            principal: string\n            # 'hdfs_uri'              : The host IP address & port for\n            #  the hadoop distributed file system. For example:\n            #  hdfs://localhost:8020\n            uri: string\n            # 'hdfs_use_kerberos'     : Set to \"true\" to enable Kerberos\n            #  authentication to an HDFS storage server. The\n            #  credentials of the principal are in the file specified\n            #  by the 'hdfs_kerberos_keytab' parameter. 
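          # --- Illustrative sketch, not part of the generated schema
          # reference above: a minimal kerberized HDFS cold-storage
          # configuration. The enclosing 'coldStorageHDFS' key name and
          # all values below are assumed placeholders; only the field
          # names shown in the reference are taken from the schema.
          #
          #   coldStorageType: "hdfs"
          #   coldStorageHDFS:
          #     uri: "hdfs://namenode.example.com:8020"
          #     useKerberos: true
          #     kerberosKeytab: "/etc/security/keytabs/gpudb.service.keytab"
          #     principal: "gpudb/_HOST@EXAMPLE.COM"
          #     waitTimeout: "90"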
          # ColdStorageS3
          coldStorageS3:
            awsAccessKeyId: string
            awsRoleARN: string
            awsSecretAccessKey: string
            # 'base_path'             : A base path based on the
            #  provider type for this tier.
            basePath: string
            bucketName: string
            # 'connection_timeout'    : Timeout in seconds for
            #  connecting to this storage provider.
            connectionTimeout: "30"
            encryptionCustomerAlgorithm: string
            encryptionCustomerKey: string
            # EncryptionType - This is optional and valid values are
            # sse-s3 (Encryption key is managed by Amazon S3) and
            # sse-kms (Encryption key is managed by AWS Key Management
            # Service (kms)).
            encryptionType: string
            # Endpoint - s3_endpoint
            endpoint: string
            # * 'high_watermark' : Percentage used eviction threshold.
            #    Once usage exceeds this value, evictions from this
            #    tier will be scheduled in the background and continue
            #    until the 'low_watermark' percentage usage is reached.
            #    Default is "90", signifying a 90% memory usage
            #    threshold.
            highWatermark: 90
            # KMSKeyID - This is optional and must be specified when
            # encryption type is sse-kms.
            kmsKeyID: string
            # * 'limit'          : The maximum (bytes) per rank that can
            #    be allocated across all resource groups.
            limit: "1Gi"
            # * 'low_watermark'  : Percentage used recovery threshold.
            #    Once usage exceeds the 'high_watermark', evictions
            #    will continue until usage falls below this recovery
            #    threshold. Default is "80", signifying an 80% usage
            #    threshold.
            lowWatermark: 80
            name: string
            # A base directory to use as a space for this tier.
            path: "default"
            provisioner: "docker.io/hostpath"
            region: string
            useManagedCredentials: true
            # UseVirtualAddressing - 's3_use_virtual_addressing'  : If
            # true (default), S3 endpoints will be constructed using
            # the 'virtual' style which includes the bucket name as
            # part of the hostname. Set to false to use the 'path'
            # style which treats the bucket name as if it is a path in
            # the URI.
            useVirtualAddressing: true
            # Kubernetes Persistent Volume Claim for this disk tier.
            volumeClaim:
              # APIVersion defines the versioned schema of this
              # representation of an object. Servers should convert
              # recognized schemas to the latest internal value, and
              # may reject unrecognized values. More info:
              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
              apiVersion: app.kinetica.com/v1
              # Kind is a string value representing the REST resource
              # this object represents. Servers may infer this from the
              # endpoint the client submits requests to. Cannot be
              # updated. In CamelCase. More info:
              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
              kind: KineticaCluster
              # Standard object's metadata. More info:
              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
              metadata: {}
              # spec defines the desired characteristics of a volume
              # requested by a pod author. More info:
              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
              spec:
                # accessModes contains the desired access modes the
                # volume should have. More info:
                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
                accessModes: ["string"]
                # dataSource field can be used to specify either: * An
                # existing VolumeSnapshot object
                # (snapshot.storage.k8s.io/VolumeSnapshot) * An
                # existing PVC (PersistentVolumeClaim) If the
                # provisioner or an external controller can support the
                # specified data source, it will create a new volume
                # based on the contents of the specified data source.
                # When the AnyVolumeDataSource feature gate is enabled,
                # dataSource contents will be copied to dataSourceRef,
                # and dataSourceRef contents will be copied to
                # dataSource when dataSourceRef.namespace is not
                # specified. If the namespace is specified, then
                # dataSourceRef will not be copied to dataSource.
                dataSource:
                  # APIGroup is the group for the resource being
                  # referenced. If APIGroup is not specified, the
                  # specified Kind must be in the core API group. For
                  # any other third-party types, APIGroup is required.
                  apiGroup: string
                  # Kind is the type of resource being referenced
                  kind: KineticaCluster
                  # Name is the name of resource being referenced
                  name: string
                # dataSourceRef specifies the object from which to
                # populate the volume with data, if a non-empty volume
                # is desired. This may be any object from a non-empty
                # API group (non core object) or a
                # PersistentVolumeClaim object. When this field is
                # specified, volume binding will only succeed if the
                # type of the specified object matches some installed
                # volume populator or dynamic provisioner. This field
                # will replace the functionality of the dataSource
                # field and as such if both fields are non-empty, they
                # must have the same value. For backwards
                # compatibility, when namespace isn't specified in
                # dataSourceRef, both fields (dataSource and
                # dataSourceRef) will be set to the same value
                # automatically if one of them is empty and the other
                # is non-empty. When namespace is specified in
                # dataSourceRef, dataSource isn't set to the same value
                # and must be empty. There are three important
                # differences between dataSource and dataSourceRef: *
                # While dataSource only allows two specific types of
                # objects, dataSourceRef allows any non-core object, as
                # well as PersistentVolumeClaim objects. * While
                # dataSource ignores disallowed values (dropping them),
                # dataSourceRef preserves all values, and generates an
                # error if a disallowed value is specified. * While
                # dataSource only allows local objects, dataSourceRef
                # allows objects in any namespaces. (Beta) Using this
                # field requires the AnyVolumeDataSource feature gate
                # to be enabled. (Alpha) Using the namespace field of
                # dataSourceRef requires the
                # CrossNamespaceVolumeDataSource feature gate to be
                # enabled.
                dataSourceRef:
                  # APIGroup is the group for the resource being
                  # referenced. If APIGroup is not specified, the
                  # specified Kind must be in the core API group. For
                  # any other third-party types, APIGroup is required.
                  apiGroup: string
                  # Kind is the type of resource being referenced
                  kind: KineticaCluster
                  # Name is the name of resource being referenced
                  name: string
                  # Namespace is the namespace of resource being
                  # referenced. Note that when a namespace is specified,
                  # a gateway.networking.k8s.io/ReferenceGrant object
                  # is required in the referent namespace to allow that
                  # namespace's owner to accept the reference. See the
                  # ReferenceGrant documentation for details.
                  # (Alpha) This field requires the
                  # CrossNamespaceVolumeDataSource feature gate to be
                  # enabled.
                  namespace: string
                # resources represents the minimum resources the volume
                # should have. If RecoverVolumeExpansionFailure feature
                # is enabled users are allowed to specify resource
                # requirements that are lower than previous value but
                # must still be higher than capacity recorded in the
                # status field of the claim. More info:
                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
                resources:
                  # Claims lists the names of resources, defined in
                  # spec.resourceClaims, that are used by this
                  # container. This is an alpha field and requires
                  # enabling the DynamicResourceAllocation feature
                  # gate. This field is immutable. It can only be set
                  # for containers.
                  claims:
                  - name: string
                  # Limits describes the maximum amount of compute
                  # resources allowed. More info:
                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
                  limits: {}
                  # Requests describes the minimum amount of compute
                  # resources required. If Requests is omitted for a
                  # container, it defaults to Limits if that is
                  # explicitly specified, otherwise to an
                  # implementation-defined value. Requests cannot
                  # exceed Limits. More info:
                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
                  requests: {}
                # selector is a label query over volumes to consider for
                # binding.
                selector:
                  # matchExpressions is a list of label selector
                  # requirements. The requirements are ANDed.
                  matchExpressions:
                  - key: string
                    # operator represents a key's relationship to a set
                    # of values. Valid operators are In, NotIn, Exists
                    # and DoesNotExist.
                    operator: string
                    # values is an array of string values. If the
                    # operator is In or NotIn, the values array must be
                    # non-empty. If the operator is Exists or
                    # DoesNotExist, the values array must be empty.
                    # This array is replaced during a strategic merge
                    # patch.
                    values: ["string"]
                  # matchLabels is a map of {key,value} pairs. A single
                  # {key,value} in the matchLabels map is equivalent to
                  # an element of matchExpressions, whose key field
                  # is "key", the operator is "In", and the values
                  # array contains only "value". The requirements are
                  # ANDed.
                  matchLabels: {}
                # storageClassName is the name of the StorageClass
                # required by the claim. More info:
                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
                storageClassName: string
                # volumeMode defines what type of volume is required by
                # the claim. Value of Filesystem is implied when not
                # included in claim spec.
                volumeMode: string
                # volumeName is the binding reference to the
                # PersistentVolume backing this claim.
                volumeName: string
              # status represents the current information/status of a
              # persistent volume claim. Read-only. More info:
              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
              status:
                # accessModes contains the actual access modes the
                # volume backing the PVC has. More info:
                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
                accessModes: ["string"]
                # allocatedResources is the storage resource within
                # AllocatedResources tracks the capacity allocated to a
                # PVC. It may be larger than the actual capacity when a
                # volume expansion operation is requested. For storage
                # quota, the larger value from allocatedResources and
                # PVC.spec.resources is used. If allocatedResources is
                # not set, PVC.spec.resources alone is used for quota
                # calculation. If a volume expansion capacity request
                # is lowered, allocatedResources is only lowered if
                # there are no expansion operations in progress and if
                # the actual volume capacity is equal or lower than the
                # requested capacity. This is an alpha field and
                # requires enabling RecoverVolumeExpansionFailure
                # feature.
                allocatedResources: {}
                # capacity represents the actual resources of the
                # underlying volume.
                capacity: {}
                # conditions is the current Condition of persistent
                # volume claim. If underlying persistent volume is
                # being resized then the Condition will be set
                # to 'ResizeStarted'.
                conditions:
                - lastProbeTime: string
                  # lastTransitionTime is the time the condition
                  # transitioned from one status to another.
                  lastTransitionTime: string
                  # message is the human-readable message indicating
                  # details about last transition.
                  message: string
                  # reason is a unique, short, machine-understandable
                  # string that gives the reason for the condition's
                  # last transition. If it reports "ResizeStarted" that
                  # means the underlying persistent volume is being
                  # resized.
                  reason: string
                  status: string
                  # PersistentVolumeClaimConditionType is a valid value
                  # of PersistentVolumeClaimCondition.Type
                  type: string
                # phase represents the current phase of
                # PersistentVolumeClaim.
                phase: string
                # resizeStatus stores status of resize operation.
                # ResizeStatus is not set by default but when expansion
                # is complete resizeStatus is set to empty string by
                # resize controller or kubelet. This is an alpha field
                # and requires enabling RecoverVolumeExpansionFailure
                # feature.
                resizeStatus: string
            # 'wait_timeout'          : Timeout in seconds for reading
            #  from or writing to this storage provider.
            waitTimeout: "90"
          # ColdStorageType The storage provider type. Currently,
          # supports "none", "disk"(local/network storage), "hdfs"
          # (Hadoop distributed filesystem), "s3" (Amazon S3
          # bucket), "azure_blob" (Microsoft Azure Blob Storage)
          # and "gcs" (Google GCS Bucket).
          coldStorageType: "none"
          name: string
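          # --- Illustrative sketch, not part of the generated schema
          # reference above: a minimal S3 cold-storage configuration
          # using managed credentials (e.g. an attached IAM role).
          # Bucket, region and size are assumed placeholder values:
          #
          #   coldStorageType: "s3"
          #   coldStorageS3:
          #     bucketName: "kinetica-cold-storage"
          #     region: "us-east-1"
          #     useManagedCredentials: true
          #     useVirtualAddressing: true
          #     limit: "10Gi"
          #     waitTimeout: "90"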
        # The DiskCacheTier is used as temporary swap space for data
        # that doesn't fit in RAM or VRAM. The disk should be as fast
        # or faster than the Persist Tier storage since this tier is
        # used as an intermediary cache between the RAM and Persist
        # Tiers.
        diskCacheTier:
          # DiskTierStorageLimit
          default:
            # * 'high_watermark' : Percentage used eviction threshold.
            #    Once usage exceeds this value, evictions from this
            #    tier will be scheduled in the background and continue
            #    until the 'low_watermark' percentage usage is reached.
            #    Default is "90", signifying a 90% memory usage
            #    threshold.
            highWatermark: 90
            # * 'limit'          : The maximum (bytes) per rank that can
            #    be allocated across all resource groups.
            limit: "1Gi"
            # * 'low_watermark'  : Percentage used recovery threshold.
            #    Once usage exceeds the 'high_watermark', evictions
            #    will continue until usage falls below this recovery
            #    threshold. Default is "80", signifying an 80% usage
            #    threshold.
            lowWatermark: 80
            name: string
            # A base directory to use as a space for this tier.
            path: "default"
            provisioner: "docker.io/hostpath"
            # Kubernetes Persistent Volume Claim for this disk tier.
            volumeClaim:
              # APIVersion defines the versioned schema of this
              # representation of an object. Servers should convert
              # recognized schemas to the latest internal value, and
              # may reject unrecognized values. More info:
              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
              apiVersion: app.kinetica.com/v1
              # Kind is a string value representing the REST resource
              # this object represents. Servers may infer this from the
              # endpoint the client submits requests to. Cannot be
              # updated. In CamelCase. More info:
              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
              kind: KineticaCluster
              # Standard object's metadata. More info:
              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
              metadata: {}
              # spec defines the desired characteristics of a volume
              # requested by a pod author. More info:
              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
              spec:
                # accessModes contains the desired access modes the
                # volume should have. More info:
                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
                accessModes: ["string"]
                # dataSource field can be used to specify either: * An
                # existing VolumeSnapshot object
                # (snapshot.storage.k8s.io/VolumeSnapshot) * An
                # existing PVC (PersistentVolumeClaim) If the
                # provisioner or an external controller can support the
                # specified data source, it will create a new volume
                # based on the contents of the specified data source.
                # When the AnyVolumeDataSource feature gate is enabled,
                # dataSource contents will be copied to dataSourceRef,
                # and dataSourceRef contents will be copied to
                # dataSource when dataSourceRef.namespace is not
                # specified. If the namespace is specified, then
                # dataSourceRef will not be copied to dataSource.
                dataSource:
                  # APIGroup is the group for the resource being
                  # referenced. If APIGroup is not specified, the
                  # specified Kind must be in the core API group. For
                  # any other third-party types, APIGroup is required.
                  apiGroup: string
                  # Kind is the type of resource being referenced
                  kind: KineticaCluster
                  # Name is the name of resource being referenced
                  name: string
                # dataSourceRef specifies the object from which to
                # populate the volume with data, if a non-empty volume
                # is desired. This may be any object from a non-empty
                # API group (non core object) or a
                # PersistentVolumeClaim object. When this field is
                # specified, volume binding will only succeed if the
                # type of the specified object matches some installed
                # volume populator or dynamic provisioner. This field
                # will replace the functionality of the dataSource
                # field and as such if both fields are non-empty, they
                # must have the same value. For backwards
                # compatibility, when namespace isn't specified in
                # dataSourceRef, both fields (dataSource and
                # dataSourceRef) will be set to the same value
                # automatically if one of them is empty and the other
                # is non-empty. When namespace is specified in
                # dataSourceRef, dataSource isn't set to the same value
                # and must be empty. There are three important
                # differences between dataSource and dataSourceRef: *
                # While dataSource only allows two specific types of
                # objects, dataSourceRef allows any non-core object, as
                # well as PersistentVolumeClaim objects. * While
                # dataSource ignores disallowed values (dropping them),
                # dataSourceRef preserves all values, and generates an
                # error if a disallowed value is specified. * While
                # dataSource only allows local objects, dataSourceRef
                # allows objects in any namespaces. (Beta) Using this
                # field requires the AnyVolumeDataSource feature gate
                # to be enabled. (Alpha) Using the namespace field of
                # dataSourceRef requires the
                # CrossNamespaceVolumeDataSource feature gate to be
                # enabled.
                dataSourceRef:
                  # APIGroup is the group for the resource being
                  # referenced. If APIGroup is not specified, the
                  # specified Kind must be in the core API group. For
                  # any other third-party types, APIGroup is required.
                  apiGroup: string
                  # Kind is the type of resource being referenced
                  kind: KineticaCluster
                  # Name is the name of resource being referenced
                  name: string
                  # Namespace is the namespace of resource being
                  # referenced. Note that when a namespace is specified,
                  # a gateway.networking.k8s.io/ReferenceGrant object
                  # is required in the referent namespace to allow that
                  # namespace's owner to accept the reference. See the
                  # ReferenceGrant documentation for details.
                  # (Alpha) This field requires the
                  # CrossNamespaceVolumeDataSource feature gate to be
                  # enabled.
                  namespace: string
                # resources represents the minimum resources the volume
                # should have. If RecoverVolumeExpansionFailure feature
                # is enabled users are allowed to specify resource
                # requirements that are lower than previous value but
                # must still be higher than capacity recorded in the
                # status field of the claim. More info:
                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
                resources:
                  # Claims lists the names of resources, defined in
                  # spec.resourceClaims, that are used by this
                  # container. This is an alpha field and requires
                  # enabling the DynamicResourceAllocation feature
                  # gate. This field is immutable. It can only be set
                  # for containers.
                  claims:
                  - name: string
                  # Limits describes the maximum amount of compute
                  # resources allowed. More info:
                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
                  limits: {}
                  # Requests describes the minimum amount of compute
                  # resources required. If Requests is omitted for a
                  # container, it defaults to Limits if that is
                  # explicitly specified, otherwise to an
                  # implementation-defined value. Requests cannot
                  # exceed Limits. More info:
                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
                  requests: {}
                # selector is a label query over volumes to consider for
                # binding.
                selector:
                  # matchExpressions is a list of label selector
                  # requirements. The requirements are ANDed.
                  matchExpressions:
                  - key: string
                    # operator represents a key's relationship to a set
                    # of values. Valid operators are In, NotIn, Exists
                    # and DoesNotExist.
                    operator: string
                    # values is an array of string values. If the
                    # operator is In or NotIn, the values array must be
                    # non-empty. If the operator is Exists or
                    # DoesNotExist, the values array must be empty.
                    # This array is replaced during a strategic merge
                    # patch.
                    values: ["string"]
                  # matchLabels is a map of {key,value} pairs. A single
                  # {key,value} in the matchLabels map is equivalent to
                  # an element of matchExpressions, whose key field
                  # is "key", the operator is "In", and the values
                  # array contains only "value". The requirements are
                  # ANDed.
                  matchLabels: {}
                # storageClassName is the name of the StorageClass
                # required by the claim. More info:
                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
                storageClassName: string
                # volumeMode defines what type of volume is required by
                # the claim. Value of Filesystem is implied when not
                # included in claim spec.
                volumeMode: string
                # volumeName is the binding reference to the
                # PersistentVolume backing this claim.
                volumeName: string
              # status represents the current information/status of a
              # persistent volume claim. Read-only. More info:
              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
              status:
                # accessModes contains the actual access modes the
                # volume backing the PVC has. More info:
                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
                accessModes: ["string"]
                # allocatedResources is the storage resource within
                # AllocatedResources tracks the capacity allocated to a
                # PVC. It may be larger than the actual capacity when a
                # volume expansion operation is requested. For storage
                # quota, the larger value from allocatedResources and
                # PVC.spec.resources is used. If allocatedResources is
                # not set, PVC.spec.resources alone is used for quota
                # calculation. If a volume expansion capacity request
                # is lowered, allocatedResources is only lowered if
                # there are no expansion operations in progress and if
                # the actual volume capacity is equal or lower than the
                # requested capacity. This is an alpha field and
                # requires enabling RecoverVolumeExpansionFailure
                # feature.
                allocatedResources: {}
                # capacity represents the actual resources of the
                # underlying volume.
                capacity: {}
                # conditions is the current Condition of persistent
                # volume claim. If underlying persistent volume is
                # being resized then the Condition will be set
                # to 'ResizeStarted'.
                conditions:
                - lastProbeTime: string
                  # lastTransitionTime is the time the condition
                  # transitioned from one status to another.
                  lastTransitionTime: string
                  # message is the human-readable message indicating
                  # details about last transition.
                  message: string
                  # reason is a unique, short, machine-understandable
                  # string that gives the reason for the condition's
                  # last transition. If it reports "ResizeStarted" that
                  # means the underlying persistent volume is being
                  # resized.
                  reason: string
                  status: string
                  # PersistentVolumeClaimConditionType is a valid value
                  # of PersistentVolumeClaimCondition.Type
                  type: string
                # phase represents the current phase of
                # PersistentVolumeClaim.
                phase: string
                # resizeStatus stores status of resize operation.
                # ResizeStatus is not set by default but when expansion
                # is complete resizeStatus is set to empty string by
                # resize controller or kubelet. This is an alpha field
                # and requires enabling RecoverVolumeExpansionFailure
                # feature.
                resizeStatus: string
          defaultStorePersistentObjects: true
          # Per-rank storage limit overrides (same fields as 'default').
          ranks:
          - highWatermark: 90
            # * 'limit'          : The maximum (bytes) per rank that can
            #    be allocated across all resource groups.
            limit: "1Gi"
            # * 'low_watermark'  : Percentage used recovery threshold.
            #    Once usage exceeds the 'high_watermark', evictions
            #    will continue until usage falls below this recovery
            #    threshold. Default is "80", signifying an 80% usage
            #    threshold.
            lowWatermark: 80
            name: string
            # A base directory to use as a space for this tier.
            path: "default"
            provisioner: "docker.io/hostpath"
            # Kubernetes Persistent Volume Claim for this disk tier.
            volumeClaim:
              # APIVersion defines the versioned schema of this
              # representation of an object. Servers should convert
              # recognized schemas to the latest internal value, and
              # may reject unrecognized values. More info:
              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
              apiVersion: app.kinetica.com/v1
              # Kind is a string value representing the REST resource
              # this object represents. Servers may infer this from the
              # endpoint the client submits requests to. Cannot be
              # updated. In CamelCase. More info:
              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
              kind: KineticaCluster
              # Standard object's metadata. More info:
              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
              metadata: {}
              # spec defines the desired characteristics of a volume
              # requested by a pod author. More info:
              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
              spec:
                # accessModes contains the desired access modes the
                # volume should have. More info:
                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
                accessModes: ["string"]
                # dataSource field can be used to specify either: * An
                # existing VolumeSnapshot object
                # (snapshot.storage.k8s.io/VolumeSnapshot) * An
                # existing PVC (PersistentVolumeClaim) If the
                # provisioner or an external controller can support the
                # specified data source, it will create a new volume
                # based on the contents of the specified data source.
                # When the AnyVolumeDataSource feature gate is enabled,
                # dataSource contents will be copied to dataSourceRef,
                # and dataSourceRef contents will be copied to
                # dataSource when dataSourceRef.namespace is not
                # specified. If the namespace is specified, then
                # dataSourceRef will not be copied to dataSource.
                dataSource:
                  # APIGroup is the group for the resource being
                  # referenced. If APIGroup is not specified, the
                  # specified Kind must be in the core API group. For
                  # any other third-party types, APIGroup is required.
                  apiGroup: string
                  # Kind is the type of resource being referenced
                  kind: KineticaCluster
                  # Name is the name of resource being referenced
                  name: string
                # dataSourceRef specifies the object from which to
                # populate the volume with data, if a non-empty volume
                # is desired. This may be any object from a non-empty
                # API group (non core object) or a
                # PersistentVolumeClaim object. When this field is
                # specified, volume binding will only succeed if the
                # type of the specified object matches some installed
                # volume populator or dynamic provisioner. This field
                # will replace the functionality of the dataSource
                # field and as such if both fields are non-empty, they
                # must have the same value. For backwards
                # compatibility, when namespace isn't specified in
                # dataSourceRef, both fields (dataSource and
                # dataSourceRef) will be set to the same value
                # automatically if one of them is empty and the other
                # is non-empty. When namespace is specified in
                # dataSourceRef, dataSource isn't set to the same value
                # and must be empty. There are three important
                # differences between dataSource and dataSourceRef: *
                # While dataSource only allows two specific types of
                # objects, dataSourceRef allows any non-core object, as
                # well as PersistentVolumeClaim objects. * While
                # dataSource ignores disallowed values (dropping them),
                # dataSourceRef preserves all values, and generates an
                # error if a disallowed value is specified. * While
                # dataSource only allows local objects, dataSourceRef
                # allows objects in any namespaces. (Beta) Using this
                # field requires the AnyVolumeDataSource feature gate
                # to be enabled. (Alpha) Using the namespace field of
                # dataSourceRef requires the
                # CrossNamespaceVolumeDataSource feature gate to be
                # enabled.
                dataSourceRef:
                  # APIGroup is the group for the resource being
                  # referenced. If APIGroup is not specified, the
                  # specified Kind must be in the core API group. For
                  # any other third-party types, APIGroup is required.
                  apiGroup: string
                  # Kind is the type of resource being referenced
                  kind: KineticaCluster
                  # Name is the name of resource being referenced
                  name: string
                  # Namespace is the namespace of resource being
                  # referenced. Note that when a namespace is specified,
                  # a gateway.networking.k8s.io/ReferenceGrant object
                  # is required in the referent namespace to allow that
                  # namespace's owner to accept the reference. See the
                  # ReferenceGrant documentation for details.
                  # (Alpha) This field requires the
                  # CrossNamespaceVolumeDataSource feature gate to be
                  # enabled.
                  namespace: string
                # resources represents the minimum resources the volume
                # should have. If RecoverVolumeExpansionFailure feature
                # is enabled users are allowed to specify resource
                # requirements that are lower than previous value but
                # must still be higher than capacity recorded in the
                # status field of the claim. More info:
                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
                resources:
                  # Claims lists the names of resources, defined in
                  # spec.resourceClaims, that are used by this
                  # container. This is an alpha field and requires
                  # enabling the DynamicResourceAllocation feature
                  # gate. This field is immutable. It can only be set
                  # for containers.
                  claims:
                  - name: string
                  # Limits describes the maximum amount of compute
                  # resources allowed. More info:
                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
                  limits: {}
                  # Requests describes the minimum amount of compute
                  # resources required. If Requests is omitted for a
                  # container, it defaults to Limits if that is
                  # explicitly specified, otherwise to an
                  # implementation-defined value. Requests cannot
                  # exceed Limits. More info:
                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
                  requests: {}
                # selector is a label query over volumes to consider for
                # binding.
                selector:
                  # matchExpressions is a list of label selector
                  # requirements. The requirements are ANDed.
                  matchExpressions:
                  - key: string
                    # operator represents a key's relationship to a set
                    # of values. Valid operators are In, NotIn, Exists
                    # and DoesNotExist.
                    operator: string
                    # values is an array of string values. If the
                    # operator is In or NotIn, the values array must be
                    # non-empty. If the operator is Exists or
                    # DoesNotExist, the values array must be empty.
                    # This array is replaced during a strategic merge
                    # patch.
                    values: ["string"]
                  # matchLabels is a map of {key,value} pairs. A single
                  # {key,value} in the matchLabels map is equivalent to
                  # an element of matchExpressions, whose key field
                  # is "key", the operator is "In", and the values
                  # array contains only "value". The requirements are
                  # ANDed.
                  matchLabels: {}
                # storageClassName is the name of the StorageClass
                # required by the claim. More info:
                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
                storageClassName: string
                # volumeMode defines what type of volume is required by
                # the claim. Value of Filesystem is implied when not
                # included in claim spec.
                volumeMode: string
                # volumeName is the binding reference to the
                # PersistentVolume backing this claim.
                volumeName: string
              # status represents the current information/status of a
              # persistent volume claim. Read-only. More info:
              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
              status:
                # accessModes contains the actual access modes the
                # volume backing the PVC has. More info:
                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
                accessModes: ["string"]
                # allocatedResources is the storage resource within
                # AllocatedResources tracks the capacity allocated to a
                # PVC. It may be larger than the actual capacity when a
                # volume expansion operation is requested. For storage
                # quota, the larger value from allocatedResources and
                # PVC.spec.resources is used. If allocatedResources is
                # not set, PVC.spec.resources alone is used for quota
                # calculation. If a volume expansion capacity request
                # is lowered, allocatedResources is only lowered if
                # there are no expansion operations in progress and if
                # the actual volume capacity is equal or lower than the
                # requested capacity. This is an alpha field and
                # requires enabling RecoverVolumeExpansionFailure
                # feature.
                allocatedResources: {}
                # capacity represents the actual resources of the
                # underlying volume.
                capacity: {}
                # conditions is the current Condition of persistent
                # volume claim. If underlying persistent volume is
                # being resized then the Condition will be set
                # to 'ResizeStarted'.
                conditions:
                - lastProbeTime: string
                  # lastTransitionTime is the time the condition
                  # transitioned from one status to another.
                  lastTransitionTime: string
                  # message is the human-readable message indicating
                  # details about last transition.
                  message: string
                  # reason is a unique, short, machine-understandable
                  # string that gives the reason for the condition's
                  # last transition. If it reports "ResizeStarted" that
                  # means the underlying persistent volume is being
                  # resized.
                  reason: string
                  status: string
                  # PersistentVolumeClaimConditionType is a valid value
                  # of PersistentVolumeClaimCondition.Type
                  type: string
                # phase represents the current phase of
                # PersistentVolumeClaim.
                phase: string
                # resizeStatus stores status of resize operation.
                # ResizeStatus is not set by default but when expansion
                # is complete resizeStatus is set to empty string by
                # resize controller or kubelet. This is an alpha field
                # and requires enabling RecoverVolumeExpansionFailure
                # feature.
                resizeStatus: string
        # GlobalTier Parameters
        globalTier:
          # Co-locates all disks to a single disk i.e. persist, cache,
          # UDF will be on a single PVC.
          colocateDisks: true
          # Timeout in seconds for subsequent requests to wait on a
          # locked resource
          concurrentWaitTimeout: 120
          # EncryptDataAtRest - Enable disk encryption of data at rest
          encryptDataAtRest: true
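        # --- Illustrative sketch, not part of the generated schema
        # reference above: a disk cache tier with a cluster-wide
        # default and one per-rank override, alongside typical
        # GlobalTier settings. All sizes and thresholds are assumed
        # placeholder values:
        #
        #   diskCacheTier:
        #     default:
        #       limit: "10Gi"
        #       highWatermark: 90
        #       lowWatermark: 80
        #     ranks:
        #     - limit: "20Gi"
        #   globalTier:
        #     colocateDisks: true
        #     concurrentWaitTimeout: 120
        #     encryptDataAtRest: true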
        # The PersistTier is the durable, disk-based tier that holds
        # the permanent copy of the data; the volatile tiers (RAM,
        # VRAM, disk cache) are populated from it.
        persistTier:
          # DiskTierStorageLimit
          default:
            # * 'high_watermark' : Percentage used eviction threshold.
            #    Once usage exceeds this value, evictions from this
            #    tier will be scheduled in the background and continue
            #    until the 'low_watermark' percentage usage is reached.
            #    Default is "90", signifying a 90% memory usage
            #    threshold.
            highWatermark: 90
            # * 'limit'          : The maximum (bytes) per rank that can
            #    be allocated across all resource groups.
            limit: "1Gi"
            # * 'low_watermark'  : Percentage used recovery threshold.
            #    Once usage exceeds the 'high_watermark', evictions
            #    will continue until usage falls below this recovery
            #    threshold. Default is "80", signifying an 80% usage
            #    threshold.
            lowWatermark: 80
            name: string
            # A base directory to use as a space for this tier.
            path: "default"
            provisioner: "docker.io/hostpath"
            # Kubernetes Persistent Volume Claim for this disk tier.
            volumeClaim:
              # APIVersion defines the versioned schema of this
              # representation of an object. Servers should convert
              # recognized schemas to the latest internal value, and
              # may reject unrecognized values. More info:
              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
              apiVersion: app.kinetica.com/v1
              # Kind is a string value representing the REST resource
              # this object represents. Servers may infer this from the
              # endpoint the client submits requests to. Cannot be
              # updated. In CamelCase. More info:
              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
              kind: KineticaCluster
              # Standard object's metadata. More info:
              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
              metadata: {}
              # spec defines the desired characteristics of a volume
              # requested by a pod author. More info:
              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
              spec:
                # accessModes contains the desired access modes the
                # volume should have. More info:
                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
                accessModes: ["string"]
                # dataSource field can be used to specify either: * An
                # existing VolumeSnapshot object
                # (snapshot.storage.k8s.io/VolumeSnapshot) * An
                # existing PVC (PersistentVolumeClaim) If the
                # provisioner or an external controller can support the
                # specified data source, it will create a new volume
                # based on the contents of the specified data source.
                # When the AnyVolumeDataSource feature gate is enabled,
                # dataSource contents will be copied to dataSourceRef,
                # and dataSourceRef contents will be copied to
                # dataSource when dataSourceRef.namespace is not
                # specified. If the namespace is specified, then
                # dataSourceRef will not be copied to dataSource.
                dataSource:
                  # APIGroup is the group for the resource being
                  # referenced. If APIGroup is not specified, the
                  # specified Kind must be in the core API group. For
                  # any other third-party types, APIGroup is required.
                  apiGroup: string
                  # Kind is the type of resource being referenced
                  kind: KineticaCluster
                  # Name is the name of resource being referenced
                  name: string
                # dataSourceRef specifies the object from which to
                # populate the volume with data, if a non-empty volume
                # is desired. This may be any object from a non-empty
                # API group (non core object) or a
                # PersistentVolumeClaim object. When this field is
                # specified, volume binding will only succeed if the
                # type of the specified object matches some installed
                # volume populator or dynamic provisioner. This field
                # will replace the functionality of the dataSource
                # field and as such if both fields are non-empty, they
                # must have the same value. For backwards
                # compatibility, when namespace isn't specified in
                # dataSourceRef, both fields (dataSource and
                # dataSourceRef) will be set to the same value
                # automatically if one of them is empty and the other
                # is non-empty. When namespace is specified in
                # dataSourceRef, dataSource isn't set to the same value
                # and must be empty. There are three important
                # differences between dataSource and dataSourceRef: *
                # While dataSource only allows two specific types of
                # objects, dataSourceRef allows any non-core object, as
                # well as PersistentVolumeClaim objects. * While
                # dataSource ignores disallowed values (dropping them),
                # dataSourceRef preserves all values, and generates an
                # error if a disallowed value is specified. * While
                # dataSource only allows local objects, dataSourceRef
                # allows objects in any namespaces. (Beta) Using this
                # field requires the AnyVolumeDataSource feature gate
                # to be enabled. (Alpha) Using the namespace field of
                # dataSourceRef requires the
                # CrossNamespaceVolumeDataSource feature gate to be
                # enabled.
                dataSourceRef:
                  # APIGroup is the group for the resource being
                  # referenced. If APIGroup is not specified, the
                  # specified Kind must be in the core API group. For
                  # any other third-party types, APIGroup is required.
                  apiGroup: string
                  # Kind is the type of resource being referenced
                  kind: KineticaCluster
                  # Name is the name of resource being referenced
                  name: string
                  # Namespace is the namespace of resource being
                  # referenced. Note that when a namespace is specified,
                  # a gateway.networking.k8s.io/ReferenceGrant object
                  # is required in the referent namespace to allow that
                  # namespace's owner to accept the reference. See the
                  # ReferenceGrant documentation for details.
                  # (Alpha) This field requires the
                  # CrossNamespaceVolumeDataSource feature gate to be
                  # enabled.
                  namespace: string
                # resources represents the minimum resources the volume
                # should have. If RecoverVolumeExpansionFailure feature
                # is enabled users are allowed to specify resource
                # requirements that are lower than previous value but
                # must still be higher than capacity recorded in the
                # status field of the claim. More info:
                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
                resources:
                  # Claims lists the names of resources, defined in
                  # spec.resourceClaims, that are used by this
                  # container. This is an alpha field and requires
                  # enabling the DynamicResourceAllocation feature
                  # gate. This field is immutable. It can only be set
                  # for containers.
                  claims:
                  - name: string
                  # Limits describes the maximum amount of compute
                  # resources allowed. More info:
                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
                  limits: {}
                  # Requests describes the minimum amount of compute
                  # resources required. If Requests is omitted for a
                  # container, it defaults to Limits if that is
                  # explicitly specified, otherwise to an
                  # implementation-defined value. Requests cannot
                  # exceed Limits. More info:
                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
                  requests: {}
                # selector is a label query over volumes to consider for
                # binding.
                selector:
                  # matchExpressions is a list of label selector
                  # requirements. The requirements are ANDed.
                  matchExpressions:
                  - key: string
                    # operator represents a key's relationship to a set
                    # of values.
Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. 
If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, this should be a short, machine\n                  # understandable string that gives the reason for\n                  # condition's last transition. If it\n                  # reports "ResizeStarted" that means the underlying\n                  # persistent volume is being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n          defaultStorePersistentObjects: true\n          ranks:\n          - highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: "1Gi"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is "80", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: "default"\n            provisioner: "docker.io/hostpath"\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. 
More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. 
There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. (Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. 
If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. 
If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, this should be a short, machine\n                  # understandable string that gives the reason for\n                  # condition's last transition. If it\n                  # reports "ResizeStarted" that means the underlying\n                  # persistent volume is being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string
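\n        # --- Illustrative sketch (values and the StorageClass name\n        # "fast-ssd" are assumptions, not defaults): a PersistTier\n        # capped at 10Gi per rank with the stock eviction watermarks\n        # and an explicit volume claim:\n        #\n        #   persistTier:\n        #     default:\n        #       limit: "10Gi"\n        #       highWatermark: 90\n        #       lowWatermark: 80\n        #       volumeClaim:\n        #         spec:\n        #           accessModes: ["ReadWriteOnce"]\n        #           storageClassName: "fast-ssd"\n        #           resources:\n        #             requests:\n        #               storage: "10Gi"\n        # ---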
\n        # The RAMTier represents the RAM available for data storage per\n        # rank. The RAM Tier is NOT used for small, non-data objects or\n        # variables that are allocated and deallocated for program flow\n        # control or used to store metadata or other similar\n        # information; these continue to use either the stack or the\n        # regular runtime memory allocator. This tier should be sized\n        # on each machine such that there is sufficient RAM left over\n        # to handle this overhead, as well as the needs of other\n        # processes running on the same machine.\n        ramTier:\n          # The RAM Tier represents the RAM available for data storage\n          # per rank. The RAM Tier is NOT used for small, non-data\n          # objects or variables that are allocated and deallocated for\n          # program flow control or used to store metadata or other\n          # similar information; these continue to use either the stack\n          # or the regular runtime memory allocator. This tier should\n          # be sized on each machine such that there is sufficient RAM\n          # left over to handle this overhead, as well as the needs of\n          # other processes running on the same machine. A default\n          # memory limit and eviction thresholds can be set across all\n          # ranks, while one or more ranks may be configured to\n          # override those defaults. The general format for RAM\n          # settings: \n          #  # tier.ram.[default|rank<#>].<parameter> Valid *parameter*\n          #    names include: \n          #  * 'limit'          : The maximum RAM (bytes) per rank that\n          #     can be allocated across all resource groups.  Default\n          #     is -1, signifying no limit and ignore watermark\n          #     settings. * 'high_watermark' : RAM percentage used\n          #     eviction threshold.  Once memory usage exceeds this\n          #     value, evictions from this tier will be scheduled in\n          #     the background and continue until the 'low_watermark'\n          #     percentage usage is reached.  Default is "90",\n          #     signifying a 90% memory usage\n          #     threshold. * 'low_watermark'  : RAM percentage used\n          #     recovery threshold.  Once memory usage exceeds\n          #     the 'high_watermark', evictions will continue until\n          #     memory usage falls below this recovery threshold.\n          #     Default is "50", signifying a 50% memory usage\n          #     threshold.\n          default:\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is "90", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: "1Gi"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is "80", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n          # The maximum RAM (bytes) for processing data at rank 0.\n          # Overrides the overall default RAM tier\n          # limit. #tier.ram.rank0.limit = -1\n          ranks:\n          - highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: "1Gi"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is "80", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string
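\n        # --- Illustrative sketch (the rank name and sizes are\n        # assumptions): a 4Gi default RAM limit per rank with a larger\n        # override for rank0, using the documented default watermarks:\n        #\n        #   ramTier:\n        #     default:\n        #       limit: "4Gi"\n        #       highWatermark: 90\n        #       lowWatermark: 50\n        #     ranks:\n        #     - name: "rank0"\n        #       limit: "8Gi"\n        # ---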
\n      tieredStrategy:\n        # Default strategy to apply to tables or columns when one was\n        # not provided during table creation. This strategy is also\n        # applied to a resource group that does not specify one at time\n        # of creation. The strategy is formed by chaining together the\n        # tier types and their respective eviction priorities. Any\n        # given tier may appear no more than once in the chain and the\n        # priority must be in range "1" - "10", where "1" is the lowest\n        # priority (first to be evicted) and "9" is the highest\n        # priority (last to be evicted).  A priority of "10" indicates\n        # that an object is unevictable. Each tier's priority is in\n        # relation to the priority of other objects in the same tier;\n        # e.g., "RAM 9, DISK2 1" indicates that an object will be the\n        # highest evictable priority among objects in the RAM Tier\n        # (last evicted), but that it will be the lowest priority among\n        # objects in the Disk Tier named 'disk2' (first evicted).  Note\n        # that since an object can only have one Disk Tier instance in\n        # its strategy, the corresponding priority will only apply in\n        # relation to other objects in Disk Tier instance 'disk2'. See\n        # the Tiered Storage section for more information about tier\n        # type names. Format: <tier1> <priority>, <tier2> <priority>,\n        # <tier3> <priority>, ... Examples using a Disk Tier\n        # named 'disk2' and a Cold Storage Tier 'cold0': vram 3, ram 5,\n        # disk2 3, persist 10 vram 3, ram 5, disk2 3, persist 6, cold0\n        # 10 tier_strategy.default = VRAM 1, RAM 5, PERSIST 5\n        default: "VRAM 1, RAM 5, PERSIST 5"\n        # Predicate evaluation interval (in minutes) - indicates the\n        # interval at which the tier strategy predicates are evaluated\n        predicateEvaluationInterval: 60
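\n      # --- Illustrative sketch (an assumption, built from the format\n      # described above): keep persisted objects unevictable\n      # (priority "10") while RAM stays mid-priority, re-evaluating\n      # the strategy predicates hourly:\n      #\n      #   tieredStrategy:\n      #     default: "VRAM 1, RAM 5, PERSIST 10"\n      #     predicateEvaluationInterval: 60\n      # ---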
\n      video:\n        # System default TTL for videos. Time-to-live (TTL) is the\n        # number of minutes before a video will expire and be removed,\n        # or -1 to disable. video_default_ttl = -1\n        defaultTTL: "-1"\n        # The maximum number of videos to allow on the system. Set to 0\n        # to disable video rendering.  Set to -1 to allow an unlimited\n        # number of videos. video_max_count = -1\n        maxCount: "-1"\n        # Directory where video files should be temporarily stored while\n        # rendering. Only accessed by rank 0. video_temp_directory = $\n        # {gaia.temp_directory}/gpudb-temp-videos\n        tmpDir: "${gaia.temp_directory}/gpudb-temp-videos"\n      # VisualizationConfig\n      visualization:\n        # Enable level-of-details rendering for fast interaction with\n        # large WKT polygon data.  Only available for the OpenGL\n        # renderer (when 'enable_opengl_renderer' is "true").\n        enableLODRendering: true\n        # If "true", enable hardware-accelerated OpenGL renderer;\n        # if "false", use the software-based Cairo renderer.\n        enableOpenGLRenderer: true\n        # If "true", enable Vector Tile Service (VTS) to support\n        # client-side visualization of geospatial data. Enabling this\n        # option increases memory usage on ingestion.\n        enableVectorTileService: false\n        # Longitude and latitude ranges of geospatial data for which\n        # level-of-details representations are being generated. The\n        # parameter order is: <min_longitude> <min_latitude>\n        # <max_longitude> <max_latitude> The default values span over\n        # the world, but the level-of-details rendering becomes more\n        # efficient when the precise extent of geospatial data is\n        # specified. Default: [-180, -90, 180, 90]\n        lodDataExtent: [integer]\n        # The extent to which shape data are pre-processed for\n        # level-of-details rendering during data insert/load or\n        # processed on-the-fly in rendering time. This is a trade-off\n        # between speed and memory. The higher the value, the faster\n        # level-of-details rendering is, but the more memory is used\n        # for storing processed shape data. The maximum level is "10"\n        # (most shape data are pre-processed) and the minimum level\n        # is "0".\n        lodPreProcessingLevel: 5\n        # The number of subregions in horizontal and vertical geospatial\n        # data extent. The default values of "12 6" divide the world\n        # into subregions of 30 degree (lon.) x 30 degree (lat.)\n        lodSubRegionNum: [12,6]\n        # A base image resolution (width and height in pixels) at which\n        # a subregion would be rendered in a global view spanning over\n        # the whole dataset. Based on this resolution level-of-details\n        # representations are generated for the polygons located in the\n        # subregion.\n        lodSubRegionResolution: [512,512]\n        # Maximum heatmap size (in pixels) that can be generated. This\n        # reserves 'max_heatmap_size' ^ 2 * 8 bytes of GPU memory\n        # at **rank0**\n        maxHeatmapSize: 3072\n        # The maximum number of levels in the level-of-details\n        # rendering. As the number increases, level-of-details\n        # rendering becomes effective at higher zoom levels, but it may\n        # increase memory usage for storing level-of-details\n        # representations.\n        maxLODLevel: 8\n        # Input geometries are pre-processed upon ingestion for faster\n        # vector tile generation. This parameter determines the\n        # zoomlevel at which the vector tile pre-processing stops. A\n        # vector tile request for a higher zoomlevel than this\n        # parameter takes additional time because the vector tile needs\n        # to be generated on the fly.\n        maxVectorTileZoomLevel: 8\n        # Input geometries are pre-processed upon ingestion for faster\n        # vector tile generation. This parameter determines the\n        # zoomlevel from which the vector tile pre-processing starts. A\n        # vector tile request for a lower zoomlevel than this parameter\n        # takes additional time because the vector tile needs to be\n        # generated on the fly.\n        minVectorTileZoomLevel: 1\n        # The number of samples to use for antialiasing. Higher numbers\n        # will improve image quality but require more GPU memory to\n        # store the samples on worker ranks.  This affects only the\n        # OpenGL renderer. Value may be "0", "4", "8" or "16". When "0"\n        # antialiasing is disabled. The default value is "0".\n        openGLAntialiasingLevel: 1\n        # Threshold number of points (per-TOM) at which point rendering\n        # switches to fast mode.\n        pointRenderThreshold: 100000\n        # Single-precision coordinates are used for usual rendering\n        # processes, but depending on the precision of geometry data\n        # and use case, double precision processing may be required at\n        # a high zoomlevel. Double precision rendering processes are\n        # used from the zoomlevel specified by this parameter, which\n        # corresponds to a zoomlevel of TMS or Google map service.\n        renderingPrecisionThreshold: 30\n        # The image width/height (in pixels) of svg symbols cached in\n        # the OpenGL symbol cache.\n        symbolResolution: 100\n        # The width/height (in pixels) of an OpenGL texture which caches\n        # symbol images for OpenGL rendering.\n        symbolTextureSize: 4000\n        # Threshold for the number of points (per-TOM) after which\n        # symbology rendering falls back to regular rendering\n        symbologyRenderThreshold: 10000\n        # The name of map tiler used for Vector Tile Service. "google"\n        # and "tms" map tilers are supported currently. This parameter\n        # should be matched with the map tiler of clients' vector tile\n        # renderer.\n        vectorTileMapTiler: "google"
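\n      # --- Illustrative sketch (an assumption): software (Cairo)\n      # rendering with the Vector Tile Service enabled for client-side\n      # geospatial visualization:\n      #\n      #   visualization:\n      #     enableOpenGLRenderer: false\n      #     enableVectorTileService: true\n      #     vectorTileMapTiler: "google"\n      # ---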
\n      workbench:\n        # Start the Workbench app on the head host when host manager is\n        # started. enable_workbench = false\n        enable: false\n        # HTTP server port for Workbench if enabled. workbench_port =\n        # 8000\n        port:\n          # Number of port to expose on the pod's IP address. This must\n          # be a valid port number, 0 < x < 65536.\n          containerPort: 1\n          # What host IP to bind the external port to.\n          hostIP: string\n          # Number of port to expose on the host. If specified, this\n          # must be a valid port number, 0 < x < 65536. If HostNetwork\n          # is specified, this must match ContainerPort. Most\n          # containers do not need this.\n          hostPort: 1\n          # If specified, this must be an IANA_SVC_NAME and unique\n          # within the pod. Each named port in a pod must have a unique\n          # name. Name for the port that can be referred to by\n          # services.\n          name: string\n          # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n          # to "TCP".\n          protocol: "TCP"\n    # The fully qualified URL used on the Ingress records for any\n    # exposed services. Completed by the Operator. DO NOT POPULATE\n    # MANUALLY.\n    fqdn: ""\n    # The name of the parent HA Ring this cluster belongs to.\n    haRingName: "default"\n    # Whether to enable the separate node 'pools' for "infra", "compute"\n    # pod scheduling. Default: false\n    hasPools: true\n    # The port the HostManager will be running in each pod in the\n    # cluster. Default: 9300, TCP\n    hostManagerPort:\n      # Number of port to expose on the pod's IP address. This must be a\n      # valid port number, 0 < x < 65536.\n      containerPort: 1\n      # What host IP to bind the external port to.\n      hostIP: string\n      # Number of port to expose on the host. If specified, this must be\n      # a valid port number, 0 < x < 65536. If HostNetwork is\n      # specified, this must match ContainerPort. Most containers do\n      # not need this.\n      hostPort: 1\n      # If specified, this must be an IANA_SVC_NAME and unique within\n      # the pod. Each named port in a pod must have a unique name. Name\n      # for the port that can be referred to by services.\n      name: string\n      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n      # to "TCP".\n      protocol: "TCP"
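\n    # --- Illustrative sketch (port names are assumptions): exposing\n    # Workbench on its stock port 8000 and keeping the default\n    # HostManager port 9300:\n    #\n    #   workbench:\n    #     enable: true\n    #     port:\n    #       containerPort: 8000\n    #       name: "workbench"\n    #       protocol: "TCP"\n    #   hostManagerPort:\n    #     containerPort: 9300\n    #     name: "hostmanager"\n    #     protocol: "TCP"\n    # ---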
\n    # Set the name of the container image to use.\n    image: "kinetica/kinetica-k8s-intel:v7.1.6.0"\n    # Set the policy for pulling container images.\n    imagePullPolicy: "IfNotPresent"\n    # ImagePullSecrets is an optional list of references to secrets in\n    # the same gpudb-namespace to use for pulling any of the images\n    # used by this PodSpec. If specified, these secrets will be passed\n    # to individual puller implementations for them to use. For\n    # example, in the case of docker, only DockerConfig type secrets\n    # are honored.\n    imagePullSecrets:\n    - name: string\n    # Labels - Pod labels to be applied to the Statefulset DB pods.\n    labels: {}\n    # The Ingress Endpoint that GAdmin will be running on.\n    letsEncrypt:\n      # Enable LetsEncrypt for Certificate generation.\n      enabled: false\n      # LetsEncryptEnvironment\n      environment: "staging"\n    # Set the Kinetica DB License.\n    license: string\n    # Periodic probe of container liveness. Container will be restarted\n    # if the probe fails. Cannot be updated. More info:\n    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n    livenessProbe:\n      # Minimum consecutive failures for the probe to be considered\n      # failed after having succeeded. Defaults to 3. Minimum value is\n      # 1.\n      failureThreshold: 3\n      # Number of seconds after the container has started before\n      # liveness probes are initiated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      initialDelaySeconds: 10\n      # How often (in seconds) to perform the probe. Default to 10\n      # seconds. Minimum value is 1.\n      periodSeconds: 10\n    # LoggerConfig Kinetica DB Logger Configuration Object Configure the\n    # LOG4CPLUS logger for the DB. Field takes a string containing the\n    # full configuration. If not specified a template file is used\n    # during DB configuration generation.\n    loggerConfig:\n      configString: string\n    # Metrics - DB Metrics scrape & forward configuration for\n    # `fluent-bit`.\n    metricsRegistryRepositoryTag:\n      # Set the policy for pulling container images.\n      imagePullPolicy: "IfNotPresent"\n      # ImagePullSecrets is an optional list of references to secrets in\n      # the same gpudb-namespace to use for pulling any of the images\n      # used by this PodSpec. If specified, these secrets will be\n      # passed to individual puller implementations for them to use.\n      # For example, in the case of docker, only DockerConfig type\n      # secrets are honored.\n      imagePullSecrets:\n      - name: string\n      # The image registry & optional port containing the repository.\n      registry: "docker.io"\n      # The image repository path.\n      repository: "kineticadevcloud/"\n      # SemVer = Semantic Version for the Tag SemVer semver.Version\n      semVer: string\n      # The image sha.\n      sha: ""\n      # The image tag.\n      tag: "v7.1.5.2"\n    # Metrics - `fluent-bit` container requests/limits.\n    metricsResources:\n      # Claims lists the names of resources, defined in\n      # spec.resourceClaims, that are used by this container. This is\n      # an alpha field and requires enabling the\n      # DynamicResourceAllocation feature gate. This field is\n      # immutable. 
It can only be set for containers.\n      claims:\n      - name: string\n      # Limits describes the maximum amount of compute resources\n      # allowed. More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      limits: {}\n      # Requests describes the minimum amount of compute resources\n      # required. If Requests is omitted for a container, it defaults\n      # to Limits if that is explicitly specified, otherwise to an\n      # implementation-defined value. Requests cannot exceed Limits.\n      # More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      requests: {}\n    # NodeSelector - NodeSelector to be applied to the DB Pods\n    nodeSelector: {}\n    # Do not use; internal Operator field only.\n    originalReplicas: 1\n    # podManagementPolicy controls how pods are created during initial\n    # scale up, when replacing pods on nodes, or when scaling down. The\n    # default policy is `OrderedReady`, where pods are created in\n    # increasing order (pod-0, then pod-1, etc) and the controller will\n    # wait until each pod is ready before continuing. When scaling\n    # down, the pods are removed in the opposite order. The alternative\n    # policy is `Parallel` which will create pods in parallel to match\n    # the desired scale without waiting, and on scale down will delete\n    # all pods at once.\n    podManagementPolicy: "Parallel"\n    # Number of ranks per node as a uint16 i.e. 1-65535 ranks per node.\n    # Default: 1\n    ranksPerNode: 1\n    # Periodic probe of container service readiness. Container will be\n    # removed from service endpoints if the probe fails. Cannot be\n    # updated. More info:\n    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n    readinessProbe:\n      # Minimum consecutive failures for the probe to be considered\n      # failed after having succeeded. Defaults to 3. Minimum value is\n      # 1.\n      failureThreshold: 3\n      # Number of seconds after the container has started before\n      # liveness probes are initiated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      initialDelaySeconds: 10\n      # How often (in seconds) to perform the probe. Default to 10\n      # seconds. Minimum value is 1.\n      periodSeconds: 10\n    # The number of DB ranks i.e. replicas that the cluster will spin\n    # up. Default: 3\n    replicas: 3\n    # Limit the resources a DB Pod can consume.\n    resources:\n      # Claims lists the names of resources, defined in\n      # spec.resourceClaims, that are used by this container. This is\n      # an alpha field and requires enabling the\n      # DynamicResourceAllocation feature gate. This field is\n      # immutable. It can only be set for containers.\n      claims:\n      - name: string\n      # Limits describes the maximum amount of compute resources\n      # allowed. More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      limits: {}\n      # Requests describes the minimum amount of compute resources\n      # required. If Requests is omitted for a container, it defaults\n      # to Limits if that is explicitly specified, otherwise to an\n      # implementation-defined value. 
Requests cannot exceed Limits.\n      # More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      requests: {}\n    # SecurityContext holds security configuration that will be applied\n    # to a container. Some fields are present in both SecurityContext\n    # and PodSecurityContext.  When both are set, the values in\n    # SecurityContext take precedence.\n    securityContext:\n      # AllowPrivilegeEscalation controls whether a process can gain\n      # more privileges than its parent process. This bool directly\n      # controls if the no_new_privs flag will be set on the container\n      # process. AllowPrivilegeEscalation is true always when the\n      # container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note\n      # that this field cannot be set when spec.os.name is windows.\n      allowPrivilegeEscalation: true\n      # The capabilities to add/drop when running containers. Defaults\n      # to the default set of capabilities granted by the container\n      # runtime. Note that this field cannot be set when spec.os.name\n      # is windows.\n      capabilities:\n        # Added capabilities\n        add: [\"string\"]\n        # Removed capabilities\n        drop: [\"string\"]\n      # Run container in privileged mode. Processes in privileged\n      # containers are essentially equivalent to root on the host.\n      # Defaults to false. Note that this field cannot be set when\n      # spec.os.name is windows.\n      privileged: true\n      # procMount denotes the type of proc mount to use for the\n      # containers. The default is DefaultProcMount which uses the\n      # container runtime defaults for readonly paths and masked paths.\n      # This requires the ProcMountType feature flag to be enabled.\n      # Note that this field cannot be set when spec.os.name is\n      # windows.\n      procMount: string\n      # Whether this container has a read-only root filesystem. Default\n      # is false. Note that this field cannot be set when spec.os.name\n      # is windows.\n      readOnlyRootFilesystem: true\n      # The GID to run the entrypoint of the container process. Uses\n      # runtime default if unset. May also be set in\n      # PodSecurityContext.  If set in both SecurityContext and\n      # PodSecurityContext, the value specified in SecurityContext\n      # takes precedence. Note that this field cannot be set when\n      # spec.os.name is windows.\n      runAsGroup: 1\n      # Indicates that the container must run as a non-root user. If\n      # true, the Kubelet will validate the image at runtime to ensure\n      # that it does not run as UID 0 (root) and fail to start the\n      # container if it does. If unset or false, no such validation\n      # will be performed. May also be set in PodSecurityContext.  If\n      # set in both SecurityContext and PodSecurityContext, the value\n      # specified in SecurityContext takes precedence.\n      runAsNonRoot: true\n      # The UID to run the entrypoint of the container process. Defaults\n      # to user specified in image metadata if unspecified. May also be\n      # set in PodSecurityContext.  If set in both SecurityContext and\n      # PodSecurityContext, the value specified in SecurityContext\n      # takes precedence. Note that this field cannot be set when\n      # spec.os.name is windows.\n      runAsUser: 1\n      # The SELinux context to be applied to the container. 
If\n      # unspecified, the container runtime will allocate a random\n      # SELinux context for each container.  May also be set in\n      # PodSecurityContext.  If set in both SecurityContext and\n      # PodSecurityContext, the value specified in SecurityContext\n      # takes precedence. Note that this field cannot be set when\n      # spec.os.name is windows.\n      seLinuxOptions:\n        # Level is SELinux level label that applies to the container.\n        level: string\n        # Role is a SELinux role label that applies to the container.\n        role: string\n        # Type is a SELinux type label that applies to the container.\n        type: string\n        # User is a SELinux user label that applies to the container.\n        user: string\n      # The seccomp options to use by this container. If seccomp options\n      # are provided at both the pod & container level, the container\n      # options override the pod options. Note that this field cannot\n      # be set when spec.os.name is windows.\n      seccompProfile:\n        # localhostProfile indicates a profile defined in a file on the\n        # node should be used. The profile must be preconfigured on the\n        # node to work. Must be a descending path, relative to the\n        # kubelet's configured seccomp profile location. Must only be\n        # set if type is \"Localhost\".\n        localhostProfile: string\n        # type indicates which kind of seccomp profile will be applied.\n        # Valid options are: Localhost - a profile defined in a file on\n        # the node should be used. RuntimeDefault - the container\n        # runtime default profile should be used. Unconfined - no\n        # profile should be applied.\n        type: string\n      # The Windows specific settings applied to all containers. If\n      # unspecified, the options from the PodSecurityContext will be\n      # used. If set in both SecurityContext and PodSecurityContext,\n      # the value specified in SecurityContext takes precedence. Note\n      # that this field cannot be set when spec.os.name is linux.\n      windowsOptions:\n        # GMSACredentialSpec is where the GMSA admission webhook\n        # (https://github.com/kubernetes-sigs/windows-gmsa) inlines the\n        # contents of the GMSA credential spec named by the\n        # GMSACredentialSpecName field.\n        gmsaCredentialSpec: string\n        # GMSACredentialSpecName is the name of the GMSA credential spec\n        # to use.\n        gmsaCredentialSpecName: string\n        # HostProcess determines if a container should be run as a 'Host\n        # Process' container. This field is alpha-level and will only\n        # be honored by components that enable the\n        # WindowsHostProcessContainers feature flag. Setting this field\n        # without the feature flag will result in errors when\n        # validating the Pod. All of a Pod's containers must have the\n        # same effective HostProcess value (it is not allowed to have a\n        # mix of HostProcess containers and non-HostProcess\n        # containers).  In addition, if HostProcess is true then\n        # HostNetwork must also be set to true.\n        hostProcess: true\n        # The UserName in Windows to run the entrypoint of the container\n        # process. Defaults to the user specified in image metadata if\n        # unspecified. May also be set in PodSecurityContext. 
If set in\n        # both SecurityContext and PodSecurityContext, the value\n        # specified in SecurityContext takes precedence.\n        runAsUserName: string\n    # StartupProbe indicates that the Pod has successfully initialized.\n    # If specified, no other probes are executed until this completes\n    # successfully. If this probe fails, the Pod will be restarted,\n    # just as if the livenessProbe failed. This can be used to provide\n    # different probe parameters at the beginning of a Pod's lifecycle,\n    # when it might take a long time to load data or warm a cache, than\n    # during steady-state operation. This cannot be updated. This is an\n    # alpha feature enabled by the StartupProbe feature flag. More\n    # info:\n    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n    startupProbe:\n      # Minimum consecutive failures for the probe to be considered\n      # failed after having succeeded. Defaults to 3. Minimum value is\n      # 1.\n      failureThreshold: 3\n      # Number of seconds after the container has started before\n      # liveness probes are initiated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      initialDelaySeconds: 10\n      # How often (in seconds) to perform the probe. Default to 10\n      # seconds. Minimum value is 1.\n      periodSeconds: 10\n  # HostManagerMonitor is used to monitor the Kinetica DB Ranks. If a\n  # rank is unavailable for the specified time(MaxRankFailureCount) the\n  # cluster will be restarted.\n  hostManagerMonitor:\n    # The HostMonitor Port for the DB StartupProbe, ReadinessProbe and\n    # Liveness probes. Default: 8888\n    db_healthz_port:\n      # Number of port to expose on the pod's IP address. This must be a\n      # valid port number, 0 < x < 65536.\n      containerPort: 1\n      # What host IP to bind the external port to.\n      hostIP: string\n      # Number of port to expose on the host. If specified, this must be\n      # a valid port number, 0 < x < 65536. If HostNetwork is\n      # specified, this must match ContainerPort. Most containers do\n      # not need this.\n      hostPort: 1\n      # If specified, this must be an IANA_SVC_NAME and unique within\n      # the pod. Each named port in a pod must have a unique name. Name\n      # for the port that can be referred to by services.\n      name: string\n      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n      # to \"TCP\".\n      protocol: \"TCP\"\n    # The HostMonitor Port for the DB StartupProbe, ReadinessProbe and\n    # Liveness probes. Default: 8889\n    hm_healthz_port:\n      # Number of port to expose on the pod's IP address. This must be a\n      # valid port number, 0 < x < 65536.\n      containerPort: 1\n      # What host IP to bind the external port to.\n      hostIP: string\n      # Number of port to expose on the host. If specified, this must be\n      # a valid port number, 0 < x < 65536. If HostNetwork is\n      # specified, this must match ContainerPort. Most containers do\n      # not need this.\n      hostPort: 1\n      # If specified, this must be an IANA_SVC_NAME and unique within\n      # the pod. Each named port in a pod must have a unique name. Name\n      # for the port that can be referred to by services.\n      name: string\n      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n      # to \"TCP\".\n      protocol: \"TCP\"\n    # Periodic probe of container liveness. 
Container will be restarted\n    # if the probe fails. Cannot be updated. More info:\n    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n    livenessProbe:\n      # Minimum consecutive failures for the probe to be considered\n      # failed after having succeeded. Defaults to 3. Minimum value is\n      # 1.\n      failureThreshold: 3\n      # Number of seconds after the container has started before\n      # liveness probes are initiated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      initialDelaySeconds: 10\n      # How often (in seconds) to perform the probe. Defaults to 10\n      # seconds. Minimum value is 1.\n      periodSeconds: 10\n    # Set the name of the container image to use.\n    monitorRegistryRepositoryTag:\n      # Set the policy for pulling container images.\n      imagePullPolicy: \"IfNotPresent\"\n      # ImagePullSecrets is an optional list of references to secrets in\n      # the same gpudb-namespace to use for pulling any of the images\n      # used by this PodSpec. If specified, these secrets will be\n      # passed to individual puller implementations for them to use.\n      # For example, in the case of docker, only DockerConfig type\n      # secrets are honored.\n      imagePullSecrets:\n      - name: string\n      # The image registry & optional port containing the repository.\n      registry: \"docker.io\"\n      # The image repository path.\n      repository: \"kineticadevcloud/\"\n      # SemVer = Semantic Version for the Tag SemVer semver.Version\n      semVer: string\n      # The image sha.\n      sha: \"\"\n      # The image tag.\n      tag: \"v7.1.5.2\"\n
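    # --- Example (illustrative, not part of the schema): for an\n    # air-gapped cluster the HostManagerMonitor image can be pointed at\n    # the internal registry the images were mirrored to, instead of\n    # docker.io. The registry host and secret name below are\n    # placeholders - substitute your own registry, the repository path\n    # you mirrored to, and the tag you actually imported:\n    #\n    # monitorRegistryRepositoryTag:\n    #   registry: \"registry.internal.example.com:5000\"\n    #   repository: \"kineticadevcloud/\"\n    #   tag: \"v7.1.5.2\"\n    #   imagePullPolicy: \"IfNotPresent\"\n    #   imagePullSecrets:\n    #   - name: \"internal-registry-credentials\"\n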
    # Periodic probe of container service readiness. Container will be\n    # removed from service endpoints if the probe fails. Cannot be\n    # updated. More info:\n    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n    readinessProbe:\n      # Minimum consecutive failures for the probe to be considered\n      # failed after having succeeded. Defaults to 3. Minimum value is\n      # 1.\n      failureThreshold: 3\n      # Number of seconds after the container has started before\n      # liveness probes are initiated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      initialDelaySeconds: 10\n      # How often (in seconds) to perform the probe. Defaults to 10\n      # seconds. Minimum value is 1.\n      periodSeconds: 10\n    # Allow for overriding resource requests/limits.\n    resources:\n      # Claims lists the names of resources, defined in\n      # spec.resourceClaims, that are used by this container. This is\n      # an alpha field and requires enabling the\n      # DynamicResourceAllocation feature gate. This field is\n      # immutable. It can only be set for containers.\n      claims:\n      - name: string\n      # Limits describes the maximum amount of compute resources\n      # allowed. More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      limits: {}\n      # Requests describes the minimum amount of compute resources\n      # required. If Requests is omitted for a container, it defaults\n      # to Limits if that is explicitly specified, otherwise to an\n      # implementation-defined value. Requests cannot exceed Limits.\n      # More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      requests: {}\n    # StartupProbe indicates that the Pod has successfully initialized.\n    # If specified, no other probes are executed until this completes\n    # successfully. If this probe fails, the Pod will be restarted,\n    # just as if the livenessProbe failed. This can be used to provide\n    # different probe parameters at the beginning of a Pod's lifecycle,\n    # when it might take a long time to load data or warm a cache, than\n    # during steady-state operation. This cannot be updated. This is an\n    # alpha feature enabled by the StartupProbe feature flag. More\n    # info:\n    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n    startupProbe:\n      # Minimum consecutive failures for the probe to be considered\n      # failed after having succeeded. Defaults to 3. Minimum value is\n      # 1.\n      failureThreshold: 3\n      # Number of seconds after the container has started before\n      # liveness probes are initiated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      initialDelaySeconds: 10\n      # How often (in seconds) to perform the probe. Defaults to 10\n      # seconds. Minimum value is 1.\n      periodSeconds: 10\n  # The platform infrastructure provider e.g. azure, aws, gcp, on-prem\n  # etc.\n  infra: \"on-prem\"\n  # The Kubernetes Ingress Controller the cluster will be running on\n  # e.g. ingress-nginx, Traefik, Ambassador, Gloo, Kong etc.\n  ingressController: \"nginx\"\n  # The LDAP server to connect to.\n  ldap:\n    # BaseDN - The root base LDAP Distinguished Name to use as the base\n    # for the LDAP usage.\n    baseDN: \"dc=kinetica,dc=com\"\n    # BindDN - The LDAP Distinguished Name to use for the LDAP\n    # connectivity/data connectivity/bind\n    bindDN: \"cn=admin,dc=kinetica,dc=com\"\n    # Host - The name of the host to connect to. If IsInLocalK8S=true\n    # then supply only the name e.g. `openldap`. Default: openldap\n    host: \"openldap\"\n    # IsInLocalK8S - Is the LDAP server co-located in the same K8s\n    # cluster the operator is running in. Default: true\n    isInLocalK8S: true\n    # IsLDAPS - Use LDAPS instead of LDAP. Default: false\n    isLDAPS: false\n    # Namespace - The namespace the LDAP server is deployed in.\n    # Default: openldap\n    namespace: \"gpudb\"\n    # Port - The LDAP port to connect to. Default: 389\n    port: 389\n
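  # --- Example (illustrative, not part of the schema): a minimal\n  # `ldap` block for an OpenLDAP instance running in the same\n  # Kubernetes cluster. The values below simply restate the documented\n  # defaults - substitute the DNs, host and namespace for your own\n  # directory:\n  #\n  # ldap:\n  #   baseDN: \"dc=kinetica,dc=com\"\n  #   bindDN: \"cn=admin,dc=kinetica,dc=com\"\n  #   host: \"openldap\"\n  #   isInLocalK8S: true\n  #   isLDAPS: false\n  #   namespace: \"gpudb\"\n  #   port: 389\n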
  # Tells the operator to use Cloud Provider Pay As You Go\n  # functionality.\n  payAsYouGo: false\n  # The Reveal Dashboard Configuration for the Kinetica Cluster.\n  reveal:\n    # The port that Reveal will be running on. It runs only on the head\n    # node pod in the cluster. Default: 8080\n    containerPort:\n      # Number of port to expose on the pod's IP address. This must be a\n      # valid port number, 0 < x < 65536.\n      containerPort: 1\n      # What host IP to bind the external port to.\n      hostIP: string\n      # Number of port to expose on the host. If specified, this must be\n      # a valid port number, 0 < x < 65536. If HostNetwork is\n      # specified, this must match ContainerPort. Most containers do\n      # not need this.\n      hostPort: 1\n      # If specified, this must be an IANA_SVC_NAME and unique within\n      # the pod. Each named port in a pod must have a unique name. Name\n      # for the port that can be referred to by services.\n      name: string\n      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n      # to \"TCP\".\n      protocol: \"TCP\"\n    # The Ingress Endpoint that Reveal will be running on.\n    ingressPath:\n      # backend defines the referenced service endpoint to which the\n      # traffic will be forwarded.\n      backend:\n        # resource is an ObjectRef to another Kubernetes resource in the\n        # namespace of the Ingress object. If resource is specified,\n        # serviceName and servicePort must not be specified.\n        resource:\n          # APIGroup is the group for the resource being referenced. If\n          # APIGroup is not specified, the specified Kind must be in\n          # the core API group. For any other third-party types,\n          # APIGroup is required.\n          apiGroup: string\n          # Kind is the type of resource being referenced\n          kind: KineticaCluster\n          # Name is the name of the resource being referenced\n          name: string\n        # serviceName specifies the name of the referenced service.\n        serviceName: string\n        # servicePort specifies the port of the referenced service.\n        servicePort: \n      # path is matched against the path of an incoming request.\n      # Currently it can contain characters disallowed from the\n      # conventional \"path\" part of a URL as defined by RFC 3986. Paths\n      # must begin with a '/' and must be present when using PathType\n      # with value \"Exact\" or \"Prefix\".\n      path: string\n      # pathType determines the interpretation of the path matching.\n      # PathType can be one of the following values: * Exact: Matches\n      # the URL path exactly. * Prefix: Matches based on a URL path\n      # prefix split by '/'. Matching is done on a path element by\n      # element basis. A path element refers to the list of labels in\n      # the path split by the '/' separator. A request is a match for\n      # path p if p is an element-wise prefix of the request path. Note\n      # that if the last element of the path is a substring of the last\n      # element in request path, it is not a match\n      # (e.g. /foo/bar matches /foo/bar/baz, but does not\n      # match /foo/barbaz). * ImplementationSpecific: Interpretation of\n      # the Path matching is up to the IngressClass. Implementations\n      # can treat this as a separate PathType or treat it identically\n      # to Prefix or Exact path types. Implementations are required to\n      # support all path types. Defaults to ImplementationSpecific.\n      pathType: string\n    # Whether to enable the Reveal Dashboard on the Cluster. 
Default:\n    # true\n    isEnabled: true\n  # The Stats server to deploy & connect to if required.\n  stats:\n    # AlertManager - AlertManager specific configuration.\n    alertManager:\n      # Set the arguments for the command within the container to run.\n      args:\n      [\"-c\",\"/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug\n      --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090\n      --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage\n      --storage.tsdb.retention.time=7d  --web.enable-lifecycle\"]\n      # Set the command within the container to run.\n      command: [\"/bin/sh\"]\n      # ConfigFile - Set the location of the Loki configuration file.\n      configFile: \"/opt/gpudb/kagent/stats/loki/loki.yml\"\n      # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a\n      # ConfigMap\n      configFileAsConfigMap: true\n      # The port that Stats will be running on. It runs only on the head\n      # node pod in the cluster. Default: 9091\n      containerPort:\n        # Number of port to expose on the pod's IP address. This must be\n        # a valid port number, 0 < x < 65536.\n        containerPort: 1\n        # What host IP to bind the external port to.\n        hostIP: string\n        # Number of port to expose on the host. If specified, this must\n        # be a valid port number, 0 < x < 65536. If HostNetwork is\n        # specified, this must match ContainerPort. Most containers do\n        # not need this.\n        hostPort: 1\n        # If specified, this must be an IANA_SVC_NAME and unique within\n        # the pod. Each named port in a pod must have a unique name.\n        # Name for the port that can be referred to by services.\n        name: string\n        # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n        # to \"TCP\".\n        protocol: \"TCP\"\n      # List of environment variables to set in the container.\n      env:\n      - name: string\n        # Variable references $(VAR_NAME) are expanded using the\n        # previously defined environment variables in the container and\n        # any service environment variables. If a variable cannot be\n        # resolved, the reference in the input string will be\n        # unchanged. Double $$ are reduced to a single $, which allows\n        # for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will\n        # produce the string literal \"$(VAR_NAME)\". Escaped references\n        # will never be expanded, regardless of whether the variable\n        # exists or not. Defaults to \"\".\n        value: string\n        # Source for the environment variable's value. Cannot be used if\n        # value is not empty.\n        valueFrom:\n          # Selects a key of a ConfigMap.\n          configMapKeyRef:\n            # The key to select.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. 
apiVersion, kind, uid?\n            name: string\n            # Specify whether the ConfigMap or its key must be defined\n            optional: true\n          # Selects a field of the pod: supports metadata.name,\n          # metadata.namespace, `metadata.labels\n          # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,\n          # spec.serviceAccountName, status.hostIP, status.podIP,\n          # status.podIPs.\n          fieldRef:\n            # Version of the schema the FieldPath is written in terms\n            # of, defaults to \"v1\".\n            apiVersion: app.kinetica.com/v1\n            # Path of the field to select in the specified API version.\n            fieldPath: string\n          # Selects a resource of the container: only resources limits\n          # and requests (limits.cpu, limits.memory,\n          # limits.ephemeral-storage, requests.cpu, requests.memory and\n          # requests.ephemeral-storage) are currently supported.\n          resourceFieldRef:\n            # Container name: required for volumes, optional for env\n            # vars\n            containerName: string\n            # Specifies the output format of the exposed resources,\n            # defaults to \"1\"\n            divisor: \n            # Required: resource to select\n            resource: string\n          # Selects a key of a secret in the pod's namespace\n          secretKeyRef:\n            # The key of the secret to select from.  Must be a valid\n            # secret key.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. apiVersion, kind, uid?\n            name: string\n            # Specify whether the Secret or its key must be defined\n            optional: true\n      # Set the name of the container image to use.\n      image:\n        # Set the policy for pulling container images.\n        imagePullPolicy: \"IfNotPresent\"\n        # ImagePullSecrets is an optional list of references to secrets\n        # in the same gpudb-namespace to use for pulling any of the\n        # images used by this PodSpec. If specified, these secrets will\n        # be passed to individual puller implementations for them to\n        # use. For example, in the case of docker, only DockerConfig\n        # type secrets are honored.\n        imagePullSecrets:\n        - name: string\n        # The image registry & optional port containing the repository.\n        registry: \"docker.io\"\n        # The image repository path.\n        repository: \"kineticadevcloud/\"\n        # SemVer = Semantic Version for the Tag SemVer semver.Version\n        semVer: string\n        # The image sha.\n        sha: \"\"\n        # The image tag.\n        tag: \"v7.1.5.2\"\n      # Whether to enable the Stats Server on the Cluster. Default:\n      # true\n      isEnabled: true\n      # Periodic probe of container liveness. Container will be\n      # restarted if the probe fails. Cannot be updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      livenessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command  is root ('/') in the\n          # container's filesystem. 
The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). 
This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Logs - Set the location of the stats logs.\n      logs: \"/opt/gpudb/kagent/stats/logs\"\n      name: \"stats\"\n      # Periodic probe of container service readiness. Container will be\n      # removed from service endpoints if the probe fails. Cannot be\n      # updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      readinessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command is root ('/') in the\n          # container's filesystem. The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Defaults to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n
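      # --- Example (illustrative, not part of the schema): a probe\n      # should set only one handler (exec, grpc, httpGet or tcpSocket).\n      # A plain HTTP readiness check against the AlertManager web port\n      # (9089, matching the webListenAddress default below) might look\n      # like:\n      #\n      # readinessProbe:\n      #   httpGet:\n      #     path: \"/-/ready\"\n      #     port: 9089\n      #     scheme: \"HTTP\"\n      #   initialDelaySeconds: 10\n      #   periodSeconds: 10\n      #   failureThreshold: 3\n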
      # Resource Requests & Limits for the Stats Pod.\n      resources:\n        # Claims lists the names of resources, defined in\n        # spec.resourceClaims, that are used by this container. This is\n        # an alpha field and requires enabling the\n        # DynamicResourceAllocation feature gate. This field is\n        # immutable. It can only be set for containers.\n        claims:\n        - name: string\n        # Limits describes the maximum amount of compute resources\n        # allowed. More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        limits: {}\n        # Requests describes the minimum amount of compute resources\n        # required. If Requests is omitted for a container, it defaults\n        # to Limits if that is explicitly specified, otherwise to an\n        # implementation-defined value. 
Requests cannot exceed Limits.\n        # More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        requests: {}\n      # StoragePath - Set the location of the AlertManager file\n      # storage.\n      storagePath: \"/opt/gpudb/kagent/stats/storage/alertmanager/alertmanager\"\n      # WebConfigFile - Set the location of the AlertManager\n      # alertmanager-web-config.yml.\n      webConfigFile: \"/opt/gpudb/kagent/stats/alertmanager/alertmanager-web-config.yml\"\n      # WebListenAddress - Set the address the AlertManager web server\n      # listens on.\n      webListenAddress: \"0.0.0.0:9089\"\n    # Grafana - Grafana specific configuration.\n    grafana:\n      # Set the arguments for the command within the container to run.\n      args:\n      [\"-c\",\"/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug\n      --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090\n      --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage\n      --storage.tsdb.retention.time=7d  --web.enable-lifecycle\"]\n      # Set the command within the container to run.\n      command: [\"/bin/sh\"]\n      # ConfigFile - Set the location of the Loki configuration file.\n      configFile: \"/opt/gpudb/kagent/stats/loki/loki.yml\"\n      # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a\n      # ConfigMap.\n      configFileAsConfigMap: true\n      # The port that Stats will be running on. It runs only on the head\n      # node pod in the cluster. Default: 9091\n      containerPort:\n        # Number of port to expose on the pod's IP address. This must be\n        # a valid port number, 0 < x < 65536.\n        containerPort: 1\n        # What host IP to bind the external port to.\n        hostIP: string\n        # Number of port to expose on the host. If specified, this must\n        # be a valid port number, 0 < x < 65536. If HostNetwork is\n        # specified, this must match ContainerPort. Most containers do\n        # not need this.\n        hostPort: 1\n        # If specified, this must be an IANA_SVC_NAME and unique within\n        # the pod. Each named port in a pod must have a unique name.\n        # Name for the port that can be referred to by services.\n        name: string\n        # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n        # to \"TCP\".\n        protocol: \"TCP\"\n      # List of environment variables to set in the container.\n      env:\n      - name: string\n        # Variable references $(VAR_NAME) are expanded using the\n        # previously defined environment variables in the container and\n        # any service environment variables. If a variable cannot be\n        # resolved, the reference in the input string will be\n        # unchanged. Double $$ are reduced to a single $, which allows\n        # for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will\n        # produce the string literal \"$(VAR_NAME)\". Escaped references\n        # will never be expanded, regardless of whether the variable\n        # exists or not. Defaults to \"\".\n        value: string\n        # Source for the environment variable's value. Cannot be used if\n        # value is not empty.\n        valueFrom:\n          # Selects a key of a ConfigMap.\n          configMapKeyRef:\n            # The key to select.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. 
More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. apiVersion, kind, uid?\n            name: string\n            # Specify whether the ConfigMap or its key must be defined\n            optional: true\n          # Selects a field of the pod: supports metadata.name,\n          # metadata.namespace, `metadata.labels\n          # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,\n          # spec.serviceAccountName, status.hostIP, status.podIP,\n          # status.podIPs.\n          fieldRef:\n            # Version of the schema the FieldPath is written in terms\n            # of, defaults to \"v1\".\n            apiVersion: app.kinetica.com/v1\n            # Path of the field to select in the specified API version.\n            fieldPath: string\n          # Selects a resource of the container: only resources limits\n          # and requests (limits.cpu, limits.memory,\n          # limits.ephemeral-storage, requests.cpu, requests.memory and\n          # requests.ephemeral-storage) are currently supported.\n          resourceFieldRef:\n            # Container name: required for volumes, optional for env\n            # vars\n            containerName: string\n            # Specifies the output format of the exposed resources,\n            # defaults to \"1\"\n            divisor: \n            # Required: resource to select\n            resource: string\n          # Selects a key of a secret in the pod's namespace\n          secretKeyRef:\n            # The key of the secret to select from.  Must be a valid\n            # secret key.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. apiVersion, kind, uid?\n            name: string\n            # Specify whether the Secret or its key must be defined\n            optional: true\n      # HomePath - Set the location of the Grafana home directory.\n      homePath: \"/opt/gpudb/kagent/stats/grafana\"\n      # GraphiteHost - Host Address\n      host: \"0.0.0.0\"\n      # Set the name of the container image to use.\n      image:\n        # Set the policy for pulling container images.\n        imagePullPolicy: \"IfNotPresent\"\n        # ImagePullSecrets is an optional list of references to secrets\n        # in the same gpudb-namespace to use for pulling any of the\n        # images used by this PodSpec. If specified, these secrets will\n        # be passed to individual puller implementations for them to\n        # use. For example, in the case of docker, only DockerConfig\n        # type secrets are honored.\n        imagePullSecrets:\n        - name: string\n        # The image registry & optional port containing the repository.\n        registry: \"docker.io\"\n        # The image repository path.\n        repository: \"kineticadevcloud/\"\n        # SemVer = Semantic Version for the Tag SemVer semver.Version\n        semVer: string\n        # The image sha.\n        sha: \"\"\n        # The image tag.\n        tag: \"v7.1.5.2\"\n      # Whether to enable the Stats Server on the Cluster. Default:\n      # true\n      isEnabled: true\n      # Periodic probe of container liveness. Container will be\n      # restarted if the probe fails. Cannot be updated. 
More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      livenessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command  is root ('/') in the\n          # container's filesystem. The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. 
Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Logs - Set the location of the stats logs.\n      logs: \"/opt/gpudb/kagent/stats/logs\"\n      name: \"stats\"\n      # Periodic probe of container service readiness. Container will be\n      # removed from service endpoints if the probe fails. Cannot be\n      # updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      readinessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command is root ('/') in the\n          # container's filesystem. The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Defaults to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n
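      # --- Example (illustrative, not part of the schema): overriding\n      # the Grafana container's resource requests and limits. The\n      # quantities below are placeholders, not recommended sizings:\n      #\n      # resources:\n      #   requests:\n      #     cpu: \"250m\"\n      #     memory: \"512Mi\"\n      #   limits:\n      #     cpu: \"1\"\n      #     memory: \"1Gi\"\n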
      # Resource Requests & Limits for the Stats Pod.\n      resources:\n        # Claims lists the names of resources, defined in\n        # spec.resourceClaims, that are used by this container. This is\n        # an alpha field and requires enabling the\n        # DynamicResourceAllocation feature gate. This field is\n        # immutable. It can only be set for containers.\n        claims:\n        - name: string\n        # Limits describes the maximum amount of compute resources\n        # allowed. More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        limits: {}\n        # Requests describes the minimum amount of compute resources\n        # required. If Requests is omitted for a container, it defaults\n        # to Limits if that is explicitly specified, otherwise to an\n        # implementation-defined value. Requests cannot exceed Limits.\n        # More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        requests: {}\n    # Whether to enable the Stats Server on the Cluster. 
Default: true\n    isEnabled: true\n    # Loki - Loki specific configuration.\n    loki:\n      # Set the arguments for the command within the container to run.\n      args:\n      [\"-c\",\"/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug\n      --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090\n      --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage\n      --storage.tsdb.retention.time=7d  --web.enable-lifecycle\"]\n      # Set the command within the container to run.\n      command: [\"/bin/sh\"]\n      # ConfigFile - Set the location of the Loki configuration file.\n      configFile: \"/opt/gpudb/kagent/stats/loki/loki.yml\"\n      # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a\n      # ConfigMap\n      configFileAsConfigMap: true\n      # The port that Stats will be running on. It runs only on the head\n      # node pod in the cluster. Default: 9091\n      containerPort:\n        # Number of port to expose on the pod's IP address. This must be\n        # a valid port number, 0 < x < 65536.\n        containerPort: 1\n        # What host IP to bind the external port to.\n        hostIP: string\n        # Number of port to expose on the host. If specified, this must\n        # be a valid port number, 0 < x < 65536. If HostNetwork is\n        # specified, this must match ContainerPort. Most containers do\n        # not need this.\n        hostPort: 1\n        # If specified, this must be an IANA_SVC_NAME and unique within\n        # the pod. Each named port in a pod must have a unique name.\n        # Name for the port that can be referred to by services.\n        name: string\n        # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n        # to \"TCP\".\n        protocol: \"TCP\"\n      # List of environment variables to set in the container.\n      env:\n      - name: string\n        # Variable references $(VAR_NAME) are expanded using the\n        # previously defined environment variables in the container and\n        # any service environment variables. If a variable cannot be\n        # resolved, the reference in the input string will be\n        # unchanged. Double $$ are reduced to a single $, which allows\n        # for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will\n        # produce the string literal \"$(VAR_NAME)\". Escaped references\n        # will never be expanded, regardless of whether the variable\n        # exists or not. Defaults to \"\".\n        value: string\n        # Source for the environment variable's value. Cannot be used if\n        # value is not empty.\n        valueFrom:\n          # Selects a key of a ConfigMap.\n          configMapKeyRef:\n            # The key to select.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. 
apiVersion, kind, uid?\n            name: string\n            # Specify whether the ConfigMap or its key must be defined\n            optional: true\n          # Selects a field of the pod: supports metadata.name,\n          # metadata.namespace, `metadata.labels\n          # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,\n          # spec.serviceAccountName, status.hostIP, status.podIP,\n          # status.podIPs.\n          fieldRef:\n            # Version of the schema the FieldPath is written in terms\n            # of, defaults to \"v1\".\n            apiVersion: app.kinetica.com/v1\n            # Path of the field to select in the specified API version.\n            fieldPath: string\n          # Selects a resource of the container: only resources limits\n          # and requests (limits.cpu, limits.memory,\n          # limits.ephemeral-storage, requests.cpu, requests.memory and\n          # requests.ephemeral-storage) are currently supported.\n          resourceFieldRef:\n            # Container name: required for volumes, optional for env\n            # vars\n            containerName: string\n            # Specifies the output format of the exposed resources,\n            # defaults to \"1\"\n            divisor: \n            # Required: resource to select\n            resource: string\n          # Selects a key of a secret in the pod's namespace\n          secretKeyRef:\n            # The key of the secret to select from.  Must be a valid\n            # secret key.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. apiVersion, kind, uid?\n            name: string\n            # Specify whether the Secret or its key must be defined\n            optional: true\n      # ExpandEnv\n      expandEnv: true\n      # Set the name of the container image to use.\n      image:\n        # Set the policy for pulling container images.\n        imagePullPolicy: \"IfNotPresent\"\n        # ImagePullSecrets is an optional list of references to secrets\n        # in the same gpudb-namespace to use for pulling any of the\n        # images used by this PodSpec. If specified, these secrets will\n        # be passed to individual puller implementations for them to\n        # use. For example, in the case of docker, only DockerConfig\n        # type secrets are honored.\n        imagePullSecrets:\n        - name: string\n        # The image registry & optional port containing the repository.\n        registry: \"docker.io\"\n        # The image repository path.\n        repository: \"kineticadevcloud/\"\n        # SemVer = Semantic Version for the Tag SemVer semver.Version\n        semVer: string\n        # The image sha.\n        sha: \"\"\n        # The image tag.\n        tag: \"v7.1.5.2\"\n      # Whether to enable the Stats Server on the Cluster. Default:\n      # true\n      isEnabled: true\n      # Periodic probe of container liveness. Container will be\n      # restarted if the probe fails. Cannot be updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      livenessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command  is root ('/') in the\n          # container's filesystem. 
The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). 
This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Logs - Set the location of the stats logs.\n      logs: \"/opt/gpudb/kagent/stats/logs\"\n      name: \"stats\"\n      # Periodic probe of container service readiness. Container will be\n      # removed from service endpoints if the probe fails. Cannot be\n      # updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      readinessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command is root ('/') in the\n          # container's filesystem. The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Defaults to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Resource Requests & Limits for the Stats Pod.\n      resources:\n        # Claims lists the names of resources, defined in\n        # spec.resourceClaims, that are used by this container. This is\n        # an alpha field and requires enabling the\n        # DynamicResourceAllocation feature gate. This field is\n        # immutable. It can only be set for containers.\n        claims:\n        - name: string\n        # Limits describes the maximum amount of compute resources\n        # allowed. More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        limits: {}\n        # Requests describes the minimum amount of compute resources\n        # required. If Requests is omitted for a container, it defaults\n        # to Limits if that is explicitly specified, otherwise to an\n        # implementation-defined value. Requests cannot exceed Limits.\n        # More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        requests: {}\n      # Storage - Set the path of the Loki storage.\n      storage: \"/opt/gpudb/kagent/stats/storage/loki-storage\"\n
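      # --- Example (illustrative, not part of the schema): a minimal\n      # `stats.loki` override that keeps Loki enabled, mounts its\n      # configuration from a ConfigMap and relocates its storage. The\n      # storage path below is a placeholder:\n      #\n      # loki:\n      #   isEnabled: true\n      #   configFileAsConfigMap: true\n      #   storage: \"/data/loki-storage\"\n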
    # Which vmss/node group etc. 
to use as the NodeSelector\n    pool: \"compute\"\n    # Prometheus - Prometheus specific configuration.\n    prometheus:\n      # Set the arguments for the command within the container to run.\n      args:\n      [\"-c\",\"/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug\n      --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090\n      --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage\n      --storage.tsdb.retention.time=7d  --web.enable-lifecycle\"]\n      # Set the command within the container to run.\n      command: [\"/bin/sh\"]\n      # ConfigFile - Set the location of the Loki configuration file.\n      configFile: \"/opt/gpudb/kagent/stats/loki/loki.yml\"\n      # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a\n      # ConfigMap\n      configFileAsConfigMap: true\n      # The port that Stats will be running on. It runs only on the head\n      # node pod in the cluster. Default: 9091\n      containerPort:\n        # Number of port to expose on the pod's IP address. This must be\n        # a valid port number, 0 < x < 65536.\n        containerPort: 1\n        # What host IP to bind the external port to.\n        hostIP: string\n        # Number of port to expose on the host. If specified, this must\n        # be a valid port number, 0 < x < 65536. If HostNetwork is\n        # specified, this must match ContainerPort. Most containers do\n        # not need this.\n        hostPort: 1\n        # If specified, this must be an IANA_SVC_NAME and unique within\n        # the pod. Each named port in a pod must have a unique name.\n        # Name for the port that can be referred to by services.\n        name: string\n        # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n        # to \"TCP\".\n        protocol: \"TCP\"\n      # List of environment variables to set in the container.\n      env:\n      - name: string\n        # Variable references $(VAR_NAME) are expanded using the\n        # previously defined environment variables in the container and\n        # any service environment variables. If a variable cannot be\n        # resolved, the reference in the input string will be\n        # unchanged. Double $$ are reduced to a single $, which allows\n        # for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will\n        # produce the string literal \"$(VAR_NAME)\". Escaped references\n        # will never be expanded, regardless of whether the variable\n        # exists or not. Defaults to \"\".\n        value: string\n        # Source for the environment variable's value. Cannot be used if\n        # value is not empty.\n        valueFrom:\n          # Selects a key of a ConfigMap.\n          configMapKeyRef:\n            # The key to select.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. 
apiVersion, kind, uid?\n            name: string\n            # Specify whether the ConfigMap or its key must be defined\n            optional: true\n          # Selects a field of the pod: supports metadata.name,\n          # metadata.namespace, `metadata.labels\n          # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,\n          # spec.serviceAccountName, status.hostIP, status.podIP,\n          # status.podIPs.\n          fieldRef:\n            # Version of the schema the FieldPath is written in terms\n            # of, defaults to \"v1\".\n            apiVersion: app.kinetica.com/v1\n            # Path of the field to select in the specified API version.\n            fieldPath: string\n          # Selects a resource of the container: only resources limits\n          # and requests (limits.cpu, limits.memory,\n          # limits.ephemeral-storage, requests.cpu, requests.memory and\n          # requests.ephemeral-storage) are currently supported.\n          resourceFieldRef:\n            # Container name: required for volumes, optional for env\n            # vars\n            containerName: string\n            # Specifies the output format of the exposed resources,\n            # defaults to \"1\"\n            divisor: \n            # Required: resource to select\n            resource: string\n          # Selects a key of a secret in the pod's namespace\n          secretKeyRef:\n            # The key of the secret to select from.  Must be a valid\n            # secret key.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. apiVersion, kind, uid?\n            name: string\n            # Specify whether the Secret or its key must be defined\n            optional: true\n      # Set the name of the container image to use.\n      image:\n        # Set the policy for pulling container images.\n        imagePullPolicy: \"IfNotPresent\"\n        # ImagePullSecrets is an optional list of references to secrets\n        # in the same gpudb-namespace to use for pulling any of the\n        # images used by this PodSpec. If specified, these secrets will\n        # be passed to individual puller implementations for them to\n        # use. For example, in the case of docker, only DockerConfig\n        # type secrets are honored.\n        imagePullSecrets:\n        - name: string\n        # The image registry & optional port containing the repository.\n        registry: \"docker.io\"\n        # The image repository path.\n        repository: \"kineticadevcloud/\"\n        # SemVer = Semantic Version for the Tag SemVer semver.Version\n        semVer: string\n        # The image sha.\n        sha: \"\"\n        # The image tag.\n        tag: \"v7.1.5.2\"\n      # Whether to enable the Stats Server on the Cluster. Default:\n      # true\n      isEnabled: true\n      # Periodic probe of container liveness. Container will be\n      # restarted if the probe fails. Cannot be updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      livenessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command  is root ('/') in the\n          # container's filesystem. 
The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). 
This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Set the Prometheus logging level.\n      logLevel: \"debug\"\n      # Logs - Set the location of the Loki configuration file.\n      logs: \"/opt/gpudb/kagent/stats/logs\"\n      name: \"stats\"\n      # Periodic probe of container service readiness. Container will be\n      # removed from service endpoints if the probe fails. Cannot be\n      # updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      readinessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command  is root ('/') in the\n          # container's filesystem. The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. 
Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Resource Requests & Limits for the Stats Pod.\n      resources:\n        # Claims lists the names of resources, defined in\n        # spec.resourceClaims, that are used by this container. This is\n        # an alpha field and requires enabling the\n        # DynamicResourceAllocation feature gate. This field is\n        # immutable. It can only be set for containers.\n        claims:\n        - name: string\n        # Limits describes the maximum amount of compute resources\n        # allowed. More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        limits: {}\n        # Requests describes the minimum amount of compute resources\n        # required. If Requests is omitted for a container, it defaults\n        # to Limits if that is explicitly specified, otherwise to an\n        # implementation-defined value. Requests cannot exceed Limits.\n        # More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        requests: {}\n      # Set the location of the TSDB database.\n      storageTSDBPath: \"/opt/gpudb/kagent/stats/storage/prometheus-storage\"\n      # Set the time to hold data in the TSDB database.\n      storageTSDBRetentionTime: \"7d\"\n      # Timings - Prometheus Intervals & Timeouts\n      timings:\n        evaluationInterval: \"30s\"\n        scrapeInterval: \"30s\"\n        scrapeTimeout: \"10s\"\n    # Whether to share a single PV for Loki, Prometheus & Grafana or\n    # have a separate PV for each. Default: true\n    sharedPV: true\n    # Resource block specifically for use with SharedPV = true to set\n    # storage `requests` & `limits`\n    sharedPVResources:\n      # Claims lists the names of resources, defined in\n      # spec.resourceClaims, that are used by this container. 
This is\n      # an alpha field and requires enabling the\n      # DynamicResourceAllocation feature gate. This field is\n      # immutable. It can only be set for containers.\n      claims:\n      - name: string\n      # Limits describes the maximum amount of compute resources\n      # allowed. More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      limits: {}\n      # Requests describes the minimum amount of compute resources\n      # required. If Requests is omitted for a container, it defaults\n      # to Limits if that is explicitly specified, otherwise to an\n      # implementation-defined value. Requests cannot exceed Limits.\n      # More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      requests: {}\n  # Supporting images like socat,busybox etc.\n  supportingImages:\n    # Set the resource requests/limits for the BusyBox Pod(s).\n    busyBoxResources:\n      # Claims lists the names of resources, defined in\n      # spec.resourceClaims, that are used by this container. This is\n      # an alpha field and requires enabling the\n      # DynamicResourceAllocation feature gate. This field is\n      # immutable. It can only be set for containers.\n      claims:\n      - name: string\n      # Limits describes the maximum amount of compute resources\n      # allowed. More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      limits: {}\n      # Requests describes the minimum amount of compute resources\n      # required. If Requests is omitted for a container, it defaults\n      # to Limits if that is explicitly specified, otherwise to an\n      # implementation-defined value. Requests cannot exceed Limits.\n      # More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      requests: {}\n    # Set the name of the container image to use.\n    busybox:\n      # Set the policy for pulling container images.\n      imagePullPolicy: \"IfNotPresent\"\n      # ImagePullSecrets is an optional list of references to secrets in\n      # the same gpudb-namespace to use for pulling any of the images\n      # used by this PodSpec. If specified, these secrets will be\n      # passed to individual puller implementations for them to use.\n      # For example, in the case of docker, only DockerConfig type\n      # secrets are honored.\n      imagePullSecrets:\n      - name: string\n      # The image registry & optional port containing the repository.\n      registry: \"docker.io\"\n      # The image repository path.\n      repository: \"kineticadevcloud/\"\n      # SemVer = Semantic Version for the Tag SemVer semver.Version\n      semVer: string\n      # The image sha.\n      sha: \"\"\n      # The image tag.\n      tag: \"v7.1.5.2\"\n    # Set the name of the container image to use.\n    socat:\n      # Set the policy for pulling container images.\n      imagePullPolicy: \"IfNotPresent\"\n      # ImagePullSecrets is an optional list of references to secrets in\n      # the same gpudb-namespace to use for pulling any of the images\n      # used by this PodSpec. 
If specified, these secrets will be\n      # passed to individual puller implementations for them to use.\n      # For example, in the case of docker, only DockerConfig type\n      # secrets are honored.\n      imagePullSecrets:\n      - name: string\n      # The image registry & optional port containing the repository.\n      registry: \"docker.io\"\n      # The image repository path.\n      repository: \"kineticadevcloud/\"\n      # SemVer = Semantic Version for the Tag SemVer semver.Version\n      semVer: string\n      # The image sha.\n      sha: \"\"\n      # The image tag.\n      tag: \"v7.1.5.2\"\n    # Set the resource requests/limits for the Socat Pod.\n    socatResources:\n      # Claims lists the names of resources, defined in\n      # spec.resourceClaims, that are used by this container. This is\n      # an alpha field and requires enabling the\n      # DynamicResourceAllocation feature gate. This field is\n      # immutable. It can only be set for containers.\n      claims:\n      - name: string\n      # Limits describes the maximum amount of compute resources\n      # allowed. More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      limits: {}\n      # Requests describes the minimum amount of compute resources\n      # required. If Requests is omitted for a container, it defaults\n      # to Limits if that is explicitly specified, otherwise to an\n      # implementation-defined value. Requests cannot exceed Limits.\n      # More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      requests: {}\n# KineticaClusterStatus defines the observed state of KineticaCluster\nstatus:\n  # CloudProvider the DB is deployed on\n  cloudProvider: string\n  # CloudRegion the DB is deployed on\n  cloudRegion: string\n  # ClusterSize the current number of ranks & type i.e. CPU or GPU of\n  # the cluster\n  clusterSize:\n    # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a\n    # representation of the number of nodes in a simple to understand\n    # T-Shirt size scheme. This indicates the size of the cluster i.e.\n    # the number of nodes. It does not identify the size of the cloud\n    # provider nodes. For node size see ClusterTypeEnum. Supported\n    # Values are: - XS S M L XL XXL XXXL\n    tshirtSize: string\n    # ClusterTypeEnum - An Enum of the node types of a KineticaCluster\n    # e.g. CPU, GPU along with the Cloud Provider node size e.g. size\n    # of the VM.\n    tshirtType: string\n  # The number of ranks (replicas) that the cluster was last run with\n  currentReplicas: 0\n  # The first start of a new cluster has completed.\n  firstStartComplete: false\n  # HostManagerStatusResponse - The contents of polling the HostManager\n  # on port 9300 are added to the CR status field. 
This allows clients\n  # to get the Host/Rank/Graph/ML status information.\n  hmStatus:\n    cluster_leader: string\n    cluster_operation: string\n    graph:\n      status: string\n    graph_status: string\n    host_httpd_status: string\n    host_mode: string\n    host_num_gpus: string\n    host_pid: 1\n    host_stats_status: string\n    host_status: string\n    hostname: string\n    hosts:\n      graph_status: string\n      host_httpd_status: string\n      host_mode: string\n      host_pid: 1\n      host_stats_status: string\n      host_status: string\n      ml_status: string\n      query_planner_status: string\n      reveal_status: string\n    license_expiration: string\n    license_status: string\n    license_type: string\n    ml_status: string\n    query_planner_status: string\n    ranks:\n      mode: string\n      # Pid - The OS Process Id for the Rank.\n      pid: 1\n      status: string\n    reveal_status: string\n    system_idle_time: string\n    system_mode: string\n    system_rebalancing: 1\n    system_status: string\n    text:\n      status: string\n    version: string\n  # The fully qualified Ingress routes.\n  ingressUrls:\n    aaw: string\n    dbMonitor: string\n    files: string\n    gadmin: string\n    postgresProxy: string\n    ranks: {}\n    reveal: string\n  # The fully qualified in-cluster Ingress routes.\n  internalIngressUrls:\n    aaw: string\n    dbMonitor: string\n    files: string\n    gadmin: string\n    postgresProxy: string\n    ranks: {}\n    reveal: string\n  # Identify FreeSaaS Cluster\n  isFreeSaaS: false\n  # HostOptions used during DB Cluster Scaling Functions\n  options:\n    ram_limit: 1\n  # OutstandingBilling - A list of hours not yet billed for. Will only\n  # be present if the plan is Pay As You Go and the operator was unable\n  # to send the billing information due to an issue with the cloud\n  # provider's billing APIs.\n  outstandingBillableHour:\n  - billable: true\n    billed: true\n    billedAt: string\n    duration: string\n    end: string\n    start: string\n  # The state or phase of the current DB installation\n  phase: string\n
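
Once a cluster is running, the status block above can be read back from the live CR. The following is a minimal sketch: the gpudb namespace and the cluster name my-cluster are assumptions, and kineticaclusters is the plural resource name implied by the KineticaCluster kind: -

Read the KineticaCluster status
kubectl -n gpudb get kineticaclusters my-cluster -o jsonpath='{.status.phase}'
kubectl -n gpudb get kineticaclusters my-cluster -o yaml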
","tags":["Reference"]},{"location":"Reference/kinetica_workbench/","title":"Workbench CRD Reference","text":"","tags":["Reference"]},{"location":"Reference/kinetica_workbench/#coming-soon","title":"Coming Soon","text":"","tags":["Reference"]},{"location":"Reference/workbench/","title":"Kinetica Workbench Configuration","text":"
  • kubectl (yaml)
  • Helm Chart
","tags":["Reference"]},{"location":"Reference/workbench/#workbench","title":"Workbench","text":"kubectl

Using kubectl, a CustomResource of type Workbench is used to define a new Kinetica Workbench in a yaml file.

The basic Group, Version, Kind or GVK to instantiate a Kinetica Workbench is as follows: -

Workbench GVK
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\n
","tags":["Reference"]},{"location":"Reference/workbench/#metadata","title":"Metadata","text":"

to which we add a metadata: block for the name of the Workbench CR along with the namespace into which we are targeting the installation of the Workbench.

Workbench metadata
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\nmetadata:\n  name: workbench-kinetica-cluster\n  namespace: gpudb\n

The simplest valid Workbench CR looks as follows: -

workbench.yaml
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\nmetadata:\n  name: workbench-kinetica-cluster\n  namespace: gpudb\nspec:\n  executeSqlLimit: 10000\n  fqdn: kinetica-cluster.saas.kinetica.com\n  image: kinetica/workbench:v7.1.9-8.rc1\n  letsEncrypt:\n    enabled: false\n  userIdleTimeout: 60\n  ingressController: nginx-ingress\n
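
The CR can then be applied like any other Kubernetes manifest. A minimal sketch follows; kubectl access to the target cluster is assumed, and workbenches is the plural resource name implied by the Workbench kind: -

Apply the Workbench CR
kubectl apply -f workbench.yaml
kubectl -n gpudb get workbenches workbench-kinetica-cluster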

helm","tags":["Reference"]},{"location":"Setup/","title":"Kinetica for Kubernetes Setup","text":"
  • Set up in 15 minutes

    Install the Kinetica DB locally on Kind or k3s with helm to get up and running in minutes; a sketch of the helm flow appears after this list. Quickstart

  • Prepare to Install

    What you need to know & do before beginning a production installation. Preparation and Prerequisites

  • Production DB Installation

    Install the Kinetica DB with helm to get up and running quickly. Installation

  • Channel Your Inner Ninja

    Advanced Installation Topics which go beyond the basic installation. Advanced Topics
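
A minimal sketch of the helm flow referenced above (the chart repository URL, chart name, release name, and namespace here are assumptions; follow the Quickstart page for the authoritative commands): -

Quickstart helm sketch (assumed repository)
helm repo add kinetica-operators https://kineticadb.github.io/charts
helm repo update
helm install kinetica-operators kinetica-operators/kinetica-operators \
  --namespace kinetica-system --create-namespace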

","tags":["Getting Started","Installation"]},{"location":"Support/","title":"Support","text":"
  • Taking the next steps

    Further tutorials or help on configuring Kinetica in different environments. Help & Tutorials

  • Locating Issues

    In the unlikely event you require information on how to troubleshoot your installation, help can be found here. Troubleshooting

  • FAQ

    Frequently Asked Questions. FAQ

","tags":["Support"]},{"location":"Troubleshooting/troubleshooting/","title":"Troubleshooting","text":"","tags":["Support"]},{"location":"Troubleshooting/troubleshooting/#coming-soon","title":"Coming Soon","text":"","tags":["Support"]},{"location":"tags/","title":"Categories","text":"

Following is a list of relevant documentation categories:

"},{"location":"tags/#aks","title":"AKS","text":"
  • Azure AKS
"},{"location":"tags/#administration","title":"Administration","text":"
  • Administration
  • Grant management
  • Resource group management
  • Role Management
  • Schema management
  • User Management
  • Kinetica Cluster Grants Reference
  • Kinetica Cluster Resource Groups Reference
  • Kinetica Cluster Roles Reference
  • Kinetica Cluster Schemas Reference
  • Kinetica Cluster Users Reference
"},{"location":"tags/#advanced","title":"Advanced","text":"
  • Advanced
  • Advanced Topics
  • Air-Gapped Environments
  • Alternative Charts
  • Ingress Configuration
  • ingress-nginx Ingress Configuration
  • Kinetica DB on OS X (Arm64)
  • Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations
  • Bare Metal/VM Installation - kubeadm
  • S3 Storage for Dev/Test
  • nginx-ingress Ingress Configuration
"},{"location":"tags/#architecture","title":"Architecture","text":"
  • Architecture
  • Core Database Architecture
  • Kubernetes Architecture
"},{"location":"tags/#configuration","title":"Configuration","text":"
  • Ingress Configuration
  • ingress-nginx Ingress Configuration
  • nginx-ingress Ingress Configuration
  • How to change the Clusters FQDN
  • OpenTelemetry
"},{"location":"tags/#development","title":"Development","text":"
  • Kinetica DB on OS X (Arm64)
  • S3 Storage for Dev/Test
  • Quickstart
"},{"location":"tags/#eks","title":"EKS","text":"
  • Amazon EKS
"},{"location":"tags/#getting-started","title":"Getting Started","text":"
  • Getting Started
  • Azure AKS
  • Amazon EKS
  • Preparation & Prerequisites
  • Quickstart
  • Kinetica for Kubernetes Setup
"},{"location":"tags/#ingress","title":"Ingress","text":"
  • Ingress Configuration
  • ingress-nginx Ingress Configuration
  • Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations
  • nginx-ingress Ingress Configuration
"},{"location":"tags/#installation","title":"Installation","text":"
  • Air-Gapped Environments
  • Alternative Charts
  • Kubernetes Cluster LoadBalancer for Bare Metal/VM Installations
  • Bare Metal/VM Installation - kubeadm
  • S3 Storage for Dev/Test
  • Getting Started
  • Kinetica for Kubernetes Installation
  • CPU
  • GPU
  • Preparation & Prerequisites
  • Quickstart
  • Core DB CRDs
  • Kinetica for Kubernetes Setup
"},{"location":"tags/#monitoring","title":"Monitoring","text":"
  • Logs
  • Metrics Collection & Display
  • OpenTelemetry
"},{"location":"tags/#operations","title":"Operations","text":"
  • Logs
  • Metrics Collection & Display
  • Operational Management
  • Kinetica for Kubernetes Backup & Restore
  • OpenTelemetry
  • Kinetica for Kubernetes Data Rebalancing
  • Kinetica for Kubernetes Suspend & Resume
  • Kinetica Cluster Backups Reference
  • Core DB CRDs
  • Kinetica Cluster Restores Reference
"},{"location":"tags/#reference","title":"Reference","text":"
  • Reference Section
  • Kinetica Database Configuration
  • Kinetica Operators
  • Kinetica Cluster Admins Reference
  • Kinetica Cluster Backups Reference
  • Kinetica Cluster Grants Reference
  • Core DB CRDs
  • Kinetica Cluster Resource Groups Reference
  • Kinetica Cluster Restores Reference
  • Kinetica Cluster Roles Reference
  • Kinetica Cluster Schemas Reference
  • Kinetica Cluster Users Reference
  • Kinetica Clusters Reference
  • Kinetica Workbench Reference
  • Kinetica Workbench Configuration
"},{"location":"tags/#storage","title":"Storage","text":"
  • S3 Storage for Dev/Test
  • Amazon EKS
"},{"location":"tags/#support","title":"Support","text":"
  • How to change the Clusters FQDN
  • FAQ
  • Help & Tutorials
  • Creating Users, Roles, Schemas and other Kinetica DB Objects
  • Support
  • Troubleshooting
"}]} \ No newline at end of file diff --git a/7.2/sitemap.xml b/7.2/sitemap.xml index 5db5765..73e99ef 100644 --- a/7.2/sitemap.xml +++ b/7.2/sitemap.xml @@ -2,347 +2,347 @@ https://www.kinetica.com/7.2/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/tags/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Administration/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Administration/grant_management/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Administration/resource_group_management/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Administration/role_management/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Administration/schema_management/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Administration/user_management/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Advanced/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Advanced/advanced_topics/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Advanced/airgapped/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Advanced/alternative_charts/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Advanced/ingress_configuration/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Advanced/ingress_nginx_config/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Advanced/ingress_urls/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Advanced/kinetica_images_list_for_airgapped_environments/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Advanced/kinetica_mac_arm_k8s/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Advanced/kube_vip_loadbalancer/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Advanced/kubernetes_bare_metal_vm_install/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Advanced/minio_s3_dev_test/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Advanced/nginx_ingress_config/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Architecture/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Architecture/db_architecture/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Architecture/kinetica_for_kubernetes_architecture/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/GettingStarted/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/GettingStarted/aks/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/GettingStarted/eks/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/GettingStarted/helm_repo_add/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/GettingStarted/installation/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/GettingStarted/installation_cpu/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/GettingStarted/installation_gpu/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/GettingStarted/local_kinetica_etc_hosts/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/GettingStarted/note_additional_gpu_sqlassistant/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/GettingStarted/preparation_and_prerequisites/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/GettingStarted/quickstart/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Help/changing_the_fqdn/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Help/faq/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Help/help_and_tutorials/ - 
2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Help/help_index/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Monitoring/logs/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Monitoring/metrics/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Operations/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Operations/backup_and_restore/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Operations/otel/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Operations/rebalance/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Operations/suspend_resume/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Operators/k3s/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Operators/k8s/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Operators/kind/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Operators/kinetica-operators/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Reference/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Reference/database/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Reference/helm_kinetica_operators/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Reference/kinetica_cluster_admins/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Reference/kinetica_cluster_backups/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Reference/kinetica_cluster_grants/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Reference/kinetica_cluster_reference/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Reference/kinetica_cluster_resource_groups/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Reference/kinetica_cluster_restores/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Reference/kinetica_cluster_roles/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Reference/kinetica_cluster_schemas/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Reference/kinetica_cluster_users/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Reference/kinetica_clusters/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Reference/kinetica_workbench/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Reference/workbench/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Setup/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Support/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/Troubleshooting/troubleshooting/ - 2024-11-07 + 2024-11-12 daily https://www.kinetica.com/7.2/tags/ - 2024-11-07 + 2024-11-12 daily \ No newline at end of file diff --git a/7.2/sitemap.xml.gz b/7.2/sitemap.xml.gz index 438ea81..9178fea 100644 Binary files a/7.2/sitemap.xml.gz and b/7.2/sitemap.xml.gz differ