diff --git a/404.html b/404.html index a3e3196..0d45ec4 100644 --- a/404.html +++ b/404.html @@ -1 +1 @@ - Kinetica DB Operator Helm Charts

404 - Not found

\ No newline at end of file + Kinetica for Kubernetes
\ No newline at end of file diff --git a/Advanced/advanced_topics/index.html b/Advanced/advanced_topics/index.html new file mode 100644 index 0000000..01427d1 --- /dev/null +++ b/Advanced/advanced_topics/index.html @@ -0,0 +1,11 @@ + Advanced Topics - Kinetica for Kubernetes
Skip to content

Advanced Topics

Install from a development/pre-release chart version

Find all alternative chart versions with:

Find alternative chart versions
helm search repo kinetica-operators --devel --versions
+

helm_alternative_versions

Then append --devel --version [CHART-DEVEL-VERSION] to the end of the Helm install command. See here.
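For example, a minimal sketch of the full install command using a development chart version (the version shown is illustrative; substitute one returned by the search above):

Install a development chart version
helm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators \
--create-namespace \
--values values.onPrem.k8s.yaml \
--set db.gpudbCluster.license="LICENSE-KEY" \
--set dbAdminUser.password="PASSWORD" \
--devel --version 7.2.0-2.rc-2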



Using your own OpenTelemetry Collector

Coming Soon


Configuring Ingress Records

ingress-nginx

Coming Soon

nginx-ingress

Coming Soon


Bare Metal LoadBalancer

kube-vip

kube-vip provides Kubernetes clusters with a virtual IP and load balancer for both the control plane (for building a highly-available cluster) and Kubernetes Services of type LoadBalancer without relying on any external hardware or software.

Coming Soon

\ No newline at end of file diff --git a/Architecture/db_architecture/index.html b/Architecture/db_architecture/index.html new file mode 100644 index 0000000..80ece72 --- /dev/null +++ b/Architecture/db_architecture/index.html @@ -0,0 +1,10 @@ + Architecture - Kinetica for Kubernetes
Skip to content

Architecture

Kinetica is a distributed, vectorized, memory-first, columnar database with tiered storage that is optimized for high speed and performance – particularly on streaming analytics and geospatial workloads.

Kinetica has been uniquely designed for fast and flexible analytics on large volumes of changing data with incredible performance.

Database Architecture

Scale-out Architecture

Kinetica has a distributed architecture that has been designed for data processing at scale. A standard cluster consists of identical nodes running on commodity hardware. A single node is chosen to be the head aggregation node.

massivley_parallel

A cluster can be scaled up at any time to increase storage capacity and processing power, with near-linear scale processing improvements for most operations. Sharding of data can be done automatically, or specified and optimized by the user.

Distributed Ingest & Query

Kinetica uses a shared-nothing data distribution across worker nodes. The head node receives a query and breaks it down into small tasks that can be spread across worker nodes. To avoid bottlenecks at the head node, ingestion can also be organized in parallel by all the worker nodes. Kinetica is able to distribute data client-side before sending it to designated worker nodes. This streamlines communication and processing time.

For the client application, there is no need to be aware of how many nodes are in the cluster, where they are, or how the data is distributed across them!

architecture

Column Oriented

Columnar data structures lend themselves to low-latency reads of data. But from a user's perspective, Kinetica behaves very similarly to a standard relational database – with tables of rows and columns and it can be queried with SQL or through APIs. Available column types include the standard base types (int, long, float, double, string, & bytes), as well as numerous sub-types supporting date/time, geospatial, and other data forms.

columnar

Vectorized Functions

Vectorization is Kinetica’s secret sauce and the key feature that underpins its blazing fast performance.

Advanced vectorized kernels are optimized to use vectorized CPUs and GPUs for faster performance. The query engine automatically assigns tasks to the processor where they will be most performant. Aggregations, filters, window functions, joins and geospatial rendering are some of the capabilities that see performance improvements.

vectorized

Memory-First, Tiered Storage

Tiered storage makes it possible to optimize where data lives for performance and cost. Recent data (such as all data where the timestamp is within the last 2 weeks) can be held in-memory, while older data can be moved to disk, or even to external storage services.

Kinetica operates on an entire data corpus by intelligently managing data across GPU memory, system memory, SIMD, disk / SSD, HDFS, and cloud storage like S3 for optimal performance.

Kinetica can also query and process data stored in data lakes, joining it with data managed by Kinetica in highly parallelized queries.

Performant Key-Value Lookup

Kinetica is able to generate distributed key-value lookups, from columnar data, for high-performance and concurrency. Sharding logic is embedded directly within client APIs enabling linear scale-out as clients can lookup data directly from the node where the data lives.


\ No newline at end of file diff --git a/Architecture/img.png b/Architecture/img.png new file mode 100644 index 0000000..6cc5230 Binary files /dev/null and b/Architecture/img.png differ diff --git a/Architecture/index.html b/Architecture/index.html new file mode 100644 index 0000000..86552d8 --- /dev/null +++ b/Architecture/index.html @@ -0,0 +1,10 @@ + Architecture - Kinetica for Kubernetes
Skip to content

Architecture

Kinetica is a distributed, vectorized, memory-first, columnar database with tiered storage that is optimized for high speed and performance – particularly on streaming analytics and geospatial workloads.

Kinetica has been uniquely designed for fast and flexible analytics on large volumes of changing data with incredible performance.

  • Kinetica Database Architecture


    Learn about the core Kinetica database architecture.

    Core Database Architecture

  • Kinetica for Kubernetes Architecture


    Learn how Kinetica is deployed and runs on Kubernetes.

    Kubernetes Architecture

\ No newline at end of file diff --git a/Architecture/kinetica_for_kubernetes_architecture/index.html b/Architecture/kinetica_for_kubernetes_architecture/index.html new file mode 100644 index 0000000..060da29 --- /dev/null +++ b/Architecture/kinetica_for_kubernetes_architecture/index.html @@ -0,0 +1,10 @@ + Kubernetes Architecture - Kinetica for Kubernetes
Skip to content
\ No newline at end of file diff --git a/Database/database/index.html b/Database/database/index.html deleted file mode 100644 index ab96d4b..0000000 --- a/Database/database/index.html +++ /dev/null @@ -1,196 +0,0 @@ - Database CR - Kinetica DB Operator Helm Charts
Skip to content

Kinetica Database Configuration

  • kubectl (yaml)

KineticaCluster

To deploy a new Database Instance into a Kubernetes cluster...

Using kubetctl a CustomResource of type KineticaCluster is used to define a new Kinetica DB Cluster in a yaml file.

The basic Group, Version, Kind or GVK to instantiate a Kinetica DB Cluster is as follows: -

kineticacluster.yaml
1
-2
apiVersion: app.kinetica.com/v1
-kind: KineticaCluster
-

Metadata

to which we add a metadata: block for the name of the DB CR along with the namespace into which we are targetting the installation of the DB cluster.

kineticacluster.yaml
1
-2
-3
-4
-5
-6
apiVersion: app.kinetica.com/v1
-kind: KineticaCluster
-metadata:
-  name: my-kinetica-db-cr
-  namespace: gpudb
-spec:
-

Spec

Under the spec: section of the KineticaCLuster CR we have a number of sections supporting different aspects of the deployed DB cluster:-

gpudbCluster

Configuartion items specific to the DB itself.

kineticacluster.yaml - gpudbCluster
1
-2
-3
-4
-5
-6
-7
apiVersion: app.kinetica.com/v1
-kind: KineticaCluster
-metadata:
-  name: my-kinetica-db-cr
-  namespace: gpudb
-spec:
-  gpudbCluster:
-
gpudbCluster
cluster name & size
1
-2
-3
-4
-5
-6
-7
clusterName: kinetica-cluster 
-clusterSize: 
-  tshirtSize: M 
-  tshirtType: LargeCPU 
-fqdn: kinetica-cluster.saas.kinetica.com
-haRingName: default
-hasPools: false    
-

1. clusterName - the user defined name of the Kinetica DB Cluster

2. clusterSize - block that defines the number of DB Ranks to run

3. tshirtSize - sets the cluster size to a defined size based upon the t-shirt size. Valid sizes are: -

  • XS - 1 DB Rank
  • S - 2 DB Ranks
  • M - 4 DB Ranks
  • L - 8 DB Ranks
  • XL - 16 DB Ranks
  • XXL - 32 DB Ranks
  • XXXL - 64 DB Ranks

4. tshirtType - block that defines the tyoe DB Ranks to run: -

  • SmallCPU -
  • LargeCPU -
  • SmallGPU -
  • LargeGPU -

5. fqdn - The fully qualified URL for the DB cluster. Used on the Ingress records for any exposed services.

6. haRingName - Default: default

7. hasPools - Whether to enable the separate node 'pools' for "infra", "compute" pod scheduling. Default: false +optional

autoSuspend

The DB Cluster autosuspend section allows for the spinning down of the core DB Pods to release the underlying Kubernetes nodes to reduce infrastructure costs when the DB is not in use.

kineticacluster.yaml - autoSuspend
1
-2
-3
-4
-5
-6
-7
-8
-9
apiVersion: app.kinetica.com/v1
-kind: KineticaCluster
-metadata:
-  name: my-kinetica-db-cr
-  namespace: gpudb
-spec:
-  autoSuspend:
-    enabled: false
-    inactivityDuration: 1h0m0s
-

7. the start of the autoSuspend definition

8. enabled when set to true auto suspend of the DB cluster is enabled otherwise set to false and no automatic suspending of the DB takes place. If omitted it defaults to false

9. inactivityDuration the duration after which if no DB activity has taken place the DB will be suspended

Horizontal Pod Autoscaler

In order for autoSuspend to work correctly the Kubernetes Horizontal Pod Autoscaler needs to be deployed to the cluster.

gadmin

GAdmin the Database Administration Console

GAdmin

kineticacluster.yaml - gadmin
 1
- 2
- 3
- 4
- 5
- 6
- 7
- 8
- 9
-10
-11
-12
apiVersion: app.kinetica.com/v1
-kind: KineticaCluster
-metadata:
-  name: my-kinetica-db-cr
-  namespace: gpudb
-spec:
-  gadmin:
-    containerPort:
-      containerPort: 8080
-      name: gadmin
-      protocol: TCP
-    isEnabled: true
-

7. gadmin configuration block definition

8. containerPort configuration block i.e. where gadmin is exposed on the DB Pod

9. containerPort the port number as an integer. Default: 8080

10. name the name of the port being exposed. Default: gadmin

11. protocol network protocal used. Default: TCP

12. isEnabled whether gadmin is exposed from the DB pod. Default: true

Example DB CR

YAML
apiVersion: app.kinetica.com/v1
-kind: KineticaCluster
-metadata:
-  name: kinetica-cluster
-  namespace: gpudb
-spec:
-  autoSuspend:
-    enabled: false
-    inactivityDuration: 1h0m0s
-  debug: false
-  gadmin:
-    isEnabled: true
-  gpudbCluster:
-    clusterName: kinetica-cluster
-    clusterSize:
-      tshirtSize: M
-      tshirtType: LargeCPU
-    config:
-      graph:
-        enable: true
-      postgresProxy:
-        enablePostgresProxy: true
-      textSearch:
-        enableTextSearch: true
-      kifs:
-        enable: false
-      ml:
-        enable: false
-      tieredStorage:
-        globalTier:
-          colocateDisks: true
-          concurrentWaitTimeout: 120
-          encryptDataAtRest: true
-        persistTier:
-          default:
-            highWatermark: 90
-            limit: 4Ti
-            lowWatermark: 50
-            name: ''
-            path: default
-            provisioner: docker.io/hostpath
-            volumeClaim:
-              metadata: {}
-              spec:
-                resources: {}
-                storageClassName: kinetica-db-persist
-              status: {}
-      tieredStrategy:
-        default: VRAM 1, RAM 5, PERSIST 5
-        predicateEvaluationInterval: 60
-    fqdn: kinetica-cluster.saas.kinetica.com
-    haRingName: default
-    hasPools: false
-    hostManagerPort:
-      containerPort: 9300
-      name: hostmanager
-      protocol: TCP
-    image: docker.io/kineticastagingcloud/kinetica-k8s-db:v7.1.9-8.rc1
-    imagePullPolicy: IfNotPresent
-    letsEncrypt:
-      enabled: false
-    license: >-
-
-    metricsRegistryRepositoryTag:
-      imagePullPolicy: IfNotPresent
-      registry: docker.io
-      repository: kineticastagingcloud/fluent-bit
-      sha: ''
-      tag: v7.1.9-8.rc1
-    podManagementPolicy: Parallel
-    ranksPerNode: 1
-    replicas: 4
-  hostManagerMonitor:
-    monitorRegistryRepositoryTag:
-      imagePullPolicy: IfNotPresent
-      registry: docker.io
-      repository: kineticastagingcloud/kinetica-k8s-monitor
-      sha: ''
-      tag: v7.1.9-8.rc1
-    readinessProbe:
-      failureThreshold: 20
-      initialDelaySeconds: 5
-      periodSeconds: 10
-    startupProbe:
-      failureThreshold: 20
-      initialDelaySeconds: 5
-      periodSeconds: 10
-  infra: on-prem
-  ingressController: nginx-ingress
-  ldap:
-    host: openldap
-    isInLocalK8S: true
-    isLDAPS: false
-    namespace: gpudb
-    port: 389
-  payAsYouGo: false
-  reveal:
-    containerPort:
-      containerPort: 8088
-      name: reveal
-      protocol: TCP
-    isEnabled: true
-  supportingImages:
-    busybox:
-      imagePullPolicy: IfNotPresent
-      registry: docker.io
-      repository: kineticastagingcloud/busybox
-      sha: ''
-      tag: v7.1.9-8.rc1
-    socat:
-      imagePullPolicy: IfNotPresent
-      registry: docker.io
-      repository: kineticastagingcloud/socat
-      sha: ''
-      tag: v7.1.9-8.rc1
-

KineticaUser

KineticaGrant

KineticaSchema

KineticaResourceGroup

\ No newline at end of file diff --git a/Database/database_reference/index.html b/Database/database_reference/index.html deleted file mode 100644 index 6bb68c3..0000000 --- a/Database/database_reference/index.html +++ /dev/null @@ -1 +0,0 @@ - Database CRD/CR Reference - Kinetica DB Operator Helm Charts
Skip to content

Database CRD/CR Reference

\ No newline at end of file diff --git a/Database/grant_reference/index.html b/Database/grant_reference/index.html deleted file mode 100644 index cab6ad4..0000000 --- a/Database/grant_reference/index.html +++ /dev/null @@ -1 +0,0 @@ - Kinetica DB User Permission Grant CRD/CR Reference - Kinetica DB Operator Helm Charts
Skip to content

Kinetica DB User Permission Grant CRD/CR Reference

\ No newline at end of file diff --git a/Database/resource_group_reference/index.html b/Database/resource_group_reference/index.html deleted file mode 100644 index 0d148a0..0000000 --- a/Database/resource_group_reference/index.html +++ /dev/null @@ -1 +0,0 @@ - Kinetica DB User Resource Group CRD/CR Reference - Kinetica DB Operator Helm Charts
Skip to content

Kinetica DB User Resource Group CRD/CR Reference

\ No newline at end of file diff --git a/Database/schema_reference/index.html b/Database/schema_reference/index.html deleted file mode 100644 index 6fbcd88..0000000 --- a/Database/schema_reference/index.html +++ /dev/null @@ -1 +0,0 @@ - Kinetica DB User Schema CRD/CR Reference - Kinetica DB Operator Helm Charts
Skip to content

Kinetica DB User Schema CRD/CR Reference

\ No newline at end of file diff --git a/Database/user_reference/index.html b/Database/user_reference/index.html deleted file mode 100644 index 1e0211e..0000000 --- a/Database/user_reference/index.html +++ /dev/null @@ -1 +0,0 @@ - Kinetica DB User CRD/CR Reference - Kinetica DB Operator Helm Charts
Skip to content

Kinetica DB User CRD/CR Reference

\ No newline at end of file diff --git a/GettingStarted/aks/index.html b/GettingStarted/aks/index.html new file mode 100644 index 0000000..d7cf133 --- /dev/null +++ b/GettingStarted/aks/index.html @@ -0,0 +1,10 @@ + Azure AKS - Kinetica for Kubernetes
Skip to content

Azure AKS Specifics

This page covers any Microsoft Azure AKS cluster installation specifics.

\ No newline at end of file diff --git a/GettingStarted/eks/index.html b/GettingStarted/eks/index.html new file mode 100644 index 0000000..75baa81 --- /dev/null +++ b/GettingStarted/eks/index.html @@ -0,0 +1,10 @@ + Amazon EKS - Kinetica for Kubernetes
Skip to content

Amazon EKS Specifics

This page covers any Amazon EKS kubernetes cluster installation specifics.

EBS CSI driver

Warning

Make sure you have enabled the ebs-csi driver in your EKS cluster. This is required for the default storage class to work.

Please refer to this AWS documentation for more information.
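As a hedged example (the cluster name and IAM role ARN below are placeholders; treat the AWS documentation above as authoritative), the add-on can be enabled on an existing cluster with the AWS CLI:

Enable the EBS CSI add-on (example)
aws eks create-addon \
--cluster-name my-eks-cluster \
--addon-name aws-ebs-csi-driver \
--service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKS_EBS_CSI_DriverRole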

\ No newline at end of file diff --git a/GettingStarted/index.html b/GettingStarted/index.html new file mode 100644 index 0000000..81332d7 --- /dev/null +++ b/GettingStarted/index.html @@ -0,0 +1,10 @@ + Getting Started - Kinetica for Kubernetes
Skip to content

Getting Started

  • Prepare to Install


    What you need to know & do before beginning an installation.

    Preparation and Prerequisites

  • Set up in 15 minutes (local install)


    Install the Kinetica DB with helm and get up and running in minutes.

    Quickstart

  • Set up in 15 minutes


    Install the Kinetica DB with helm and get up and running in minutes.

    Installation

  • Amazon Cloud


    Amazon Cloud EKS specific installation information.

    EKS

  • kind


    kind Kubernetes specific installation information.

    kind

  • k3s


    k3s Kubernetes specific installation information.

    k3s

\ No newline at end of file diff --git a/GettingStarted/installation/index.html b/GettingStarted/installation/index.html new file mode 100644 index 0000000..2db62a9 --- /dev/null +++ b/GettingStarted/installation/index.html @@ -0,0 +1,28 @@ + Installation - Kinetica for Kubernetes
Skip to content

Installation

For managed Kubernetes solutions (AKS, EKS, GKE) or on-prem (kubeadm) Kubernetes variants, follow this generic guide to install the Kinetica Operators, Database and Workbench.

Preparation & Prerequisites

Please make sure you have followed the Preparation & Prerequisites steps.

4. Install the helm chart

Run the following Helm install command after substituting values from section 3

Helm install kinetica-operators
helm -n kinetica-system install \
+kinetica-operators kinetica-operators/kinetica-operators \
+--create-namespace \
+--values values.onPrem.k8s.yaml \
+--set db.gpudbCluster.license="LICENSE-KEY" \
+--set dbAdminUser.password="PASSWORD" \
+--set global.defaultStorageClass="DEFAULT-STORAGE-CLASS"
+

5. Check installation progress

After a few moments, follow the progression of the main database pod startup with:

Monitor the Kinetica installation progress
kubectl -n gpudb get po gpudb-0 -w
+

until it reaches "gpudb-0 3/3 Running" at which point the database should be ready and all other software installed in the cluster. You may have to run this command in a different terminal if the helm command from step 4 has not yet returned to the system prompt. Once running, you can quit this kubectl watch command using ctrl-c.
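Alternatively, as a sketch that is not part of the chart documentation, you can block until the pod reports Ready rather than watching it:

Wait for the database pod to become Ready
kubectl -n gpudb wait --for=condition=Ready pod/gpudb-0 --timeout=30m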

6. Accessing the Kinetica installation

Target Platform Specifics

If you are installing into a managed Kubernetes environment and the NGINX ingress controller that is installed as part of this install creates a LoadBalancer service, you may need to associate the LoadBalancer with the domain you plan to use.

As of now, the kinetica-operators chart installs the NGINX ingress controller, so after the installation is complete you may need to edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name.

Option 1: Use the LoadBalancer domain

Set your FQDN in Kinetica
kubectl get svc -n kinetica-system
+# look at the loadbalancer dns name, copy it
+
+kubectl -n gpudb edit $(kubectl -n gpudb get kc -o name)
+# replace local.kinetica with the loadbalancer dns name
+kubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)
+# replace local.kinetica with the loadbalancer dns name
+# save and exit
+# you should be able to access the workbench from the loadbalancer dns name
+

Option 2: Use your custom domain

Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.
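If you prefer a non-interactive approach, the cluster FQDN can also be set with kubectl patch; this is a sketch only, assuming the field path spec.gpudbCluster.fqdn shown in the Database CR reference, with db.example.com as a placeholder for your domain:

Patch the cluster FQDN (sketch)
kubectl -n gpudb patch $(kubectl -n gpudb get kc -o name) --type merge \
-p '{"spec":{"gpudbCluster":{"fqdn":"db.example.com"}}}'
# edit the Workbench CR the same way as in Option 1
kubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)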

If you are installing on a local machine which does not have a domain name, you can add the following entry to your /etc/hosts file or equivalent:

Configure local access - /etc/hosts
127.0.0.1  local.kinetica
+

Note

The default chart configuration points to local.kinetica but this is configurable.

Installing on bare metal machines which do not have an external hardware loadbalancer requires an Ingress controller along with a software loadbalancer in order for the installation to be accessible.

Kinetica for Kubernetes has been tested with kube-vip.


\ No newline at end of file diff --git a/GettingStarted/k3s/index.html b/GettingStarted/k3s/index.html new file mode 100644 index 0000000..d083c0a --- /dev/null +++ b/GettingStarted/k3s/index.html @@ -0,0 +1,10 @@ + k3s - Kinetica for Kubernetes
Skip to content

k3s Installation Specifics

This page covers any k3s kubernetes cluster installation specifics.

\ No newline at end of file diff --git a/GettingStarted/kind/index.html b/GettingStarted/kind/index.html new file mode 100644 index 0000000..a6bc4a7 --- /dev/null +++ b/GettingStarted/kind/index.html @@ -0,0 +1,10 @@ + KinD - Kinetica for Kubernetes
Skip to content

KinD Installation Specifics

This page covers any kind kubernetes cluster installation specifics.

\ No newline at end of file diff --git a/GettingStarted/preparation_and_prerequisites/index.html b/GettingStarted/preparation_and_prerequisites/index.html new file mode 100644 index 0000000..878c7e2 --- /dev/null +++ b/GettingStarted/preparation_and_prerequisites/index.html @@ -0,0 +1,15 @@ + Preparation & Prerequisites - Kinetica for Kubernetes
Skip to content

Preparation & Prerequisites

Checks & steps to ensure a smooth installation.

Obtain a Kinetica License Key

A product license key will be required for install. Please contact Kinetica Support to request a trial key.

Failing to provide a license key at installation time will prevent the DB from starting.

Preparation and prerequisites

Free Resources

Your Kubernetes cluster version should be >= 1.22.x and have a minimum of 8 CPUs, 8GB RAM, and SSD or SATA 7200RPM hard drive(s) with 4x the memory capacity.

GPU Support

For GPU enabled clusters the cards below have been tested in large-scale production environments and provide the best performance for the database.

GPU Driver
P4/P40/P100 525.X (or higher)
V100 525.X (or higher)
T4 525.X (or higher)
A10/A40/A100 525.X (or higher)

Kubernetes Cluster Connectivity

Installation requires Helm3, the Kubernetes CLI kubectl, and access to an on-prem or CSP-managed Kubernetes cluster.

The context for the desired target cluster must be selected from your ~/.kube/config file and set via the KUBECONFIG environment variable or kubectl ctx (if installed). Check to see if you have the correct context with,

show the current kubernetes context
kubectl config current-context
+

and that you can access this cluster correctly with,

list kubernetes cluster nodes
kubectl get nodes
+

Find get_nodes

If you do not see a list of nodes for your K8s cluster, the helm installation will not work. Please check your Kubernetes installation or access credentials (kubeconfig).
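If the wrong context is active, a quick sketch for pointing kubectl at the intended cluster (the context name below is a placeholder):

Select the target cluster context
export KUBECONFIG=~/.kube/config
kubectl config get-contexts
kubectl config use-context my-target-cluster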

Air-Gapped Environments

If you are installing Kinetica with Helm in an air-gapped environment you will either need a Registry Proxy to pass the requests through or to download the images and push them to your internal Registry.
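For the second approach, a minimal sketch of mirroring one of the images listed below into an internal registry (registry.internal.example.com is a placeholder):

Mirror an image into an internal registry (example)
docker pull docker.io/kineticastagingcloud/kinetica-k8s-operator:v7.2.0-3.rc-3
docker tag docker.io/kineticastagingcloud/kinetica-k8s-operator:v7.2.0-3.rc-3 registry.internal.example.com/kineticastagingcloud/kinetica-k8s-operator:v7.2.0-3.rc-3
docker push registry.internal.example.com/kineticastagingcloud/kinetica-k8s-operator:v7.2.0-3.rc-3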

Required Container Images

docker.io (Required Kinetica Images for All Installations)

  • docker.io/kineticastagingcloud/kinetica-k8s-operator:v7.2.0-3.rc-3
    • docker.io/kineticastagingcloud/kinetica-k8s-cpu:v7.2.0-3.rc-3 or
    • docker.io/kineticastagingcloud/kinetica-k8s-cpu-avx512:v7.2.0-3.rc-3 or
    • docker.io/kineticastagingcloud/kinetica-k8s-gpu:v7.2.0-3.rc-3
  • docker.io/kineticastagingcloud/workbench-operator:v7.2.0-3.rc-3
  • docker.io/kineticastagingcloud/workbench:v7.2.0-3.rc-3
  • docker.io/kineticastagingcloud/kinetica-k8s-monitor:v7.2.0-3.rc-3
  • docker.io/kineticastagingcloud/busybox:v7.2.0-3.rc-3
  • docker.io/kineticastagingcloud/fluent-bit:v7.2.0-3.rc-3
  • docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga

nvcr.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)

  • nvcr.io/nvidia/gpu-operator:v23.9.1

registry.k8s.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)

  • registry.k8s.io/nfd/node-feature-discovery:v0.14.2

docker.io (Required Supporting Images)

  • docker.io/bitnami/openldap:2.6.7
  • docker.io/alpine/openssl:latest (used by bitnami/openldap)
  • docker.io/otel/opentelemetry-collector-contrib:0.95.0

quay.io (Required Supporting Images)

  • quay.io/brancz/kube-rbac-proxy:v0.14.2

Optional Container Images

These images are only required if certain features are enabled as part of the Helm installation: -

  • CertManager
  • ingress-nginx

quay.io (Optional Supporting Images)

  • quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)

registry.k8s.io (Optional Supporting Images)

  • registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing Ingress nGinx via Kinetica Helm Chart)
  • registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c

Which Kinetica Core Image do I use?

Container Image Intel (AMD64) Intel (AMD64 AVX512) Amd (AMD64) Graviton (aarch64) Apple Silicon (aarch64)
kinetica-k8s-cpu (1)
kinetica-k8s-cpu-avx512
kinetica-k8s-gpu (2) (2) (2)
  1. It is preferable on an Intel AVX512 enabled CPU to use the kinetica-k8s-cpu-avx512 container image
  2. With a supported nVidia GPU.

Install the kinetica-operators chart

This chart will install the Kinetica K8s operators together with a default configured database and workbench UI.

1. Add the Kinetica chart repository

Add the repo locally as kinetica-operators:

Helm repo add
helm repo add kinetica-operators https://kineticadb.github.io/charts
+

Helm Repo Add

2. Obtain the default Helm values file

For the generic Kubernetes install use the following values file without modification. Advanced users with specific requirements may need to adjust parameters in this file.

Helm values.yaml download
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k8s.yaml
+

3. Determine the following prior to the chart install

Default Admin User

The default admin user in the Helm chart is kadmin, but this is configurable. Non-ASCII characters and typographical symbols in the password must be escaped with a "\". For example, --set dbAdminUser.password="MyPassword\!"

  1. Obtain a LICENSE-KEY as described in the introduction above.
  2. Choose a PASSWORD for the initial administrator user
  3. As the storage class name varies between K8s flavors and there can be more than one, the storage class must be specified at chart installation. Obtain the DEFAULT-STORAGE-CLASS name with the command:


Find the default storageclass
kubectl get sc -o name 
+

Find Storage Class

Use the name found after the /. For example, for storageclass.storage.k8s.io/local-path, use "local-path" as the parameter.
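The value is then passed to the chart install; for example, assuming the local-path class from the example above:

Set the default storage class (example)
helm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators \
--create-namespace \
--values values.onPrem.k8s.yaml \
--set db.gpudbCluster.license="LICENSE-KEY" \
--set dbAdminUser.password="PASSWORD" \
--set global.defaultStorageClass="local-path"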

Amazon EKS

If installing on Amazon EKS, see here.

\ No newline at end of file diff --git a/GettingStarted/quickstart/index.html b/GettingStarted/quickstart/index.html new file mode 100644 index 0000000..34a0bb0 --- /dev/null +++ b/GettingStarted/quickstart/index.html @@ -0,0 +1,29 @@ + Quickstart - Kinetica for Kubernetes
Skip to content

Quickstart

For the quickstart we have examples for Kind or k3s.

  • Kind - is suitable for CPU only installations.
  • k3s - is suitable for CPU or GPU installations.

Kubernetes >= 1.25

The current version of the chart supports kubernetes version 1.25 and above.
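To confirm the version of the target cluster before installing (a simple check, not specific to Kinetica):

Check the Kubernetes server version
kubectl version
kubectl get nodes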

Please select your target installation type:

Kind (kubernetes in docker kind.sigs.k8s.io)

This installation in a kind cluster is for trying out the operators and the database in a non-production environment.

CPU Only

This method currently only supports installing a CPU version of the database.

Please contact Kinetica Support to request a trial key.

Create Kind Cluster 1.29

Create a new Kind Cluster
kind create cluster --config charts/kinetica-operators/kind.yaml
+

Kind - Install kinetica-operators including a sample db to try out

Review the values file charts/kinetica-operators/values.onPrem.kind.yaml. This installs the operators and a simple db with workbench installation for a non-production trial.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace local.kinetica with the correct domain name.

Kind - Install the Kinetica-Operators Chart

Kind - Install the Kinetica-Operators Chart
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.kind.yaml
+
+helm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license="your_license_key" --set dbAdminUser.password="your_password"
+

or if you have been asked by the Kinetica Support team to try a development version

Using a development version
helm search repo kinetica-operators --devel --versions
+
+helm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators/ --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license="your_license_key" --set dbAdminUser.password="your_password" --devel --version 7.2.0-2.rc-2
+

Accessing the Workbench

You should be able to access the workbench at http://local.kinetica

k3s (k3s.io)

Install k3s 1.29

Install k3s
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik  --node-name kinetica-master --token 12345" K3S_KUBECONFIG_OUTPUT=~/.kube/config_k3s K3S_KUBECONFIG_MODE=644 INSTALL_K3S_VERSION=v1.29.2+k3s1 sh -
+

K3s - Install kinetica-operators including a sample db to try out

Review the values file charts/kinetica-operators/values.onPrem.k3s.yaml. This installs the operators and a simple db with workbench installation for a non-production trial.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace local.kinetica with the correct domain name.

If you are on a local machine which does not have a domain name, add the following entry to your /etc/hosts file or equivalent.

Configure local access - /etc/hosts
127.0.0.1  local.kinetica
+

K3S - Install the Kinetica-Operators Chart (CPU)

Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.yaml
+
+helm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license="your_license_key" --set dbAdminUser.password="your_password"
+

or if you have been asked by the Kinetica Support team to try a development version

Using a development version
helm search repo kinetica-operators --devel --versions
+
+helm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license="your_license_key" --set dbAdminUser.password="your_password" --devel --version 7.2.0-2.rc-2
+

K3S - Install the Kinetica-Operators Chart (GPU)

If you wish to try out the GPU capabilities, you can use the following values file, provided you are on an NVIDIA GPU-capable machine.

k3s GPU Installation
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.gpu.yaml
+
+helm -n kinetica-system install kinetica-operators charts/kinetica-operators/ --create-namespace --values values.onPrem.k3s.gpu.yaml --set db.gpudbCluster.license="your_license_key" --set dbAdminUser.password="your_password"
+

Accessing the Workbench

You should be able to access the workbench at http://local.kinetica

Uninstall k3s

uninstall k3s
/usr/local/bin/k3s-uninstall.sh
+

Default User

Username as per the values file mentioned above is kadmin and password is Kinetica1234!

\ No newline at end of file diff --git a/Help/help_and_tutorials/index.html b/Help/help_and_tutorials/index.html new file mode 100644 index 0000000..013c6c5 --- /dev/null +++ b/Help/help_and_tutorials/index.html @@ -0,0 +1,10 @@ + Help & Tutorials - Kinetica for Kubernetes
Skip to content

Help & Tutorials

  • Tutorials


    Tutorials

  • Help


    Help

Coming Soon

\ No newline at end of file diff --git a/Monitoring/logs/index.html b/Monitoring/logs/index.html new file mode 100644 index 0000000..3bf6f01 --- /dev/null +++ b/Monitoring/logs/index.html @@ -0,0 +1,10 @@ + Logs - Kinetica for Kubernetes
Skip to content

Log Collection & Display

Coming Soon

\ No newline at end of file diff --git a/Monitoring/metrics_and_monitoring/index.html b/Monitoring/metrics_and_monitoring/index.html new file mode 100644 index 0000000..396428a --- /dev/null +++ b/Monitoring/metrics_and_monitoring/index.html @@ -0,0 +1,10 @@ + Metrics & Monitoring - Kinetica for Kubernetes
Skip to content

Metrics Collection & Display

Coming Soon

\ No newline at end of file diff --git a/Operations/backup_and_restore/index.html b/Operations/backup_and_restore/index.html new file mode 100644 index 0000000..c8d98ee --- /dev/null +++ b/Operations/backup_and_restore/index.html @@ -0,0 +1,10 @@ + Kinetica Backup & Restore - Kinetica for Kubernetes
Skip to content

Kinetica Backup & Restore

Velero Installation

Backup & Restore requires that Velero is installed into the Kubernetes cluster.

Coming Soon
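Until this section is published, a heavily hedged sketch of installing Velero using the upstream vmware-tanzu Helm chart; the chart also needs provider-specific values (object-store plugins, bucket, credentials) that depend on your environment and are not shown here:

Install Velero with Helm (sketch)
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update
helm -n velero install velero vmware-tanzu/velero --create-namespace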

\ No newline at end of file diff --git a/Operations/index.html b/Operations/index.html new file mode 100644 index 0000000..9d09101 --- /dev/null +++ b/Operations/index.html @@ -0,0 +1,10 @@ + Operations - Kinetica for Kubernetes
Skip to content

Operations

  • Metrics


    Collecting and storing metrics as time series data.

    Metrics

  • Logs


    Log aggregation.

    Logs

  • Distribution


    Metrics & Logs can be distributed to other systems using OpenTelemetry.

    OpenTelemetry

  • Backup & Restore


    Backup & Restore of the Kinetica DB.

    Backup & Restore

\ No newline at end of file diff --git a/Operations/otel/index.html b/Operations/otel/index.html new file mode 100644 index 0000000..496d482 --- /dev/null +++ b/Operations/otel/index.html @@ -0,0 +1,10 @@ + OpenTelemetry - Kinetica for Kubernetes
Skip to content

OpenTelemetry Integration for Metric & Log Distribution

Coming Soon

\ No newline at end of file diff --git a/Operators/k3s/index.html b/Operators/k3s/index.html index afdb50b..d308b20 100644 --- a/Operators/k3s/index.html +++ b/Operators/k3s/index.html @@ -1,4 +1,13 @@ - On-Prem K3s - Kinetica DB Operator Helm Charts
Skip to content

Overview

Kinetica Operators can be installed in any on-prem kubernetes cluster. This document provides instructions to install the operators in k3s. If you are on another distribution, you should be able to change the values file to suit your environment.

You will need a license key for this to work. Please contact Kinetica Support.

Kinetica on k3s (k3s.io)

Current version of the chart supports kubernetes version 1.25 and above.

Install k3s 1.29

Bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik  --node-name kinetica-master --token 12345" K3S_KUBECONFIG_OUTPUT=~/.kube/config_k3s K3S_KUBECONFIG_MODE=644 INSTALL_K3S_VERSION=v1.29.2+k3s1 sh -
+ Overview - Kinetica for Kubernetes      

Overview

Kinetica Operators can be installed in any on-prem kubernetes cluster. This document provides instructions to install the operators in k3s. If you are on another distribution, you should be able to change the values file to suit your environment.

You will need a license key for this to work. Please contact Kinetica Support.

Kinetica on k3s (k3s.io)

The current version of the chart supports kubernetes version 1.25 and above.

Install k3s 1.29

Bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik  --node-name kinetica-master --token 12345" K3S_KUBECONFIG_OUTPUT=~/.kube/config_k3s K3S_KUBECONFIG_MODE=644 INSTALL_K3S_VERSION=v1.29.2+k3s1 sh -
 

K3s - Install kinetica-operators including a sample db to try out

Review the values file charts/kinetica-operators/values.onPrem.k3s.yaml. This installs the operators and a simple db with workbench installation for a non-production trial.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace local.kinetica with the correct domain name.

If you are on a local machine which does not have a domain name, add the following entry to your /etc/hosts file or equivalent.

Text Only
127.0.0.1 local.kinetica
 
K3s - Install the kinetica-operators chart
Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.yaml
 
@@ -12,4 +21,4 @@
 
 helm -n kinetica-system install kinetica-operators charts/kinetica-operators/ --create-namespace --values values.onPrem.k3s.gpu.yaml --set db.gpudbCluster.license="your_license_key" --set dbAdminUser.password="your_password"
 

You should be able to access the workbench at http://local.kinetica

Username as per the values file mentioned above is kadmin and password is Kinetica1234!

Uninstall k3s

Bash
/usr/local/bin/k3s-uninstall.sh
-
\ No newline at end of file +
\ No newline at end of file diff --git a/Operators/k8s/index.html b/Operators/k8s/index.html index fc0beae..a72031f 100644 --- a/Operators/k8s/index.html +++ b/Operators/k8s/index.html @@ -1,4 +1,13 @@ - Other K8s - Kinetica DB Operator Helm Charts
Skip to content

Overview

For managed Kubernetes solutions (AKS, EKS, GKE) or other on-prem K8s flavors, follow this generic guide to install the Kinetica Operators, Database and Workbench. A product license key will be required for install. Please contact Kinetica Support to request a trial key.

Preparation and prerequisites

Installation requires Helm3 and access to an on-prem or CSP managed Kubernetes cluster. kubectl is optional but highly recommended. The context for the desired target cluster must be selected from your ~/.kube/config file or set via the KUBECONFIG environment variable. Check to see if you have the correct context with,

Bash
kubectl config current-context
+ Overview - Kinetica for Kubernetes      

Overview

For managed Kubernetes solutions (AKS, EKS, GKE) or other on-prem K8s flavors, follow this generic guide to install the Kinetica Operators, Database and Workbench. A product license key will be required for install. Please contact Kinetica Support to request a trial key.

Preparation and prerequisites

Installation requires Helm3 and access to an on-prem or CSP managed Kubernetes cluster. kubectl is optional but highly recommended. The context for the desired target cluster must be selected from your ~/.kube/config file or set via the KUBECONFIG environment variable. Check to see if you have the correct context with,

Bash
kubectl config current-context
 

and that you can access this cluster correctly with,

Bash
kubectl get nodes
 

If you do not see a list of nodes for your K8s cluster the helm installation will not work. Please check your Kubernetes installation or access credentials (kubeconfig).

Install the kinetica-operators chart

This chart will install the Kinetica K8s operators together with a default configured database and workbench UI.

If you are installing into a managed Kubernetes environment and the NGINX ingress controller that is installed as part of this install creates a LoadBalancer service, you may need to associate the LoadBalancer with the domain you plan to use.

Alternatively, if you are installing on a local machine which does not have a domain name, you can add the following entry to your /etc/hosts file or equivalent:

Bash
127.0.0.1  local.kinetica
 

Note that the default chart configuration points to local.kinetica but this is configurable.

1. Add the Kinetica chart repository

Add the repo locally as kinetica-operators:

Bash
helm repo add kinetica-operators https://kineticadb.github.io/charts
@@ -22,4 +31,4 @@
 # replace local.kinetica with the loadbalancer dns name
 # save and exit
 # you should be able to access the workbench from the loadbalancer dns name
-
Option 1: Use your custom domain

Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.

\ No newline at end of file +
Option 2: Use your custom domain

Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.

\ No newline at end of file diff --git a/Operators/kind/index.html b/Operators/kind/index.html index 70c5061..09fc1cc 100644 --- a/Operators/kind/index.html +++ b/Operators/kind/index.html @@ -1,4 +1,13 @@ - On-Prem KiND - Kinetica DB Operator Helm Charts
Skip to content

Overview

This installation in a kind cluster is for trying out the operators and the database in a non production environment. This method currently only supports installing a CPU version of the database.

You will need a license key for this to work. Please contact Kinetica Support.

Kind (kubernetes in docker kind.sigs.k8s.io)

Create Kind Cluster 1.29

Bash
kind create cluster --config charts/kinetica-operators/kind.yaml
+ Overview - Kinetica for Kubernetes      

Overview

This installation in a kind cluster is for trying out the operators and the database in a non-production environment. This method currently only supports installing a CPU version of the database.

You will need a license key for this to work. Please contact Kinetica Support.

Kind (kubernetes in docker kind.sigs.k8s.io)

Create Kind Cluster 1.29

Bash
kind create cluster --config charts/kinetica-operators/kind.yaml
 

Kind - Install kinetica-operators including a sample db to try out

Review the values file charts/kinetica-operators/values.onPrem.kind.yaml. This installs the operators and a simple db with workbench installation for a non-production trial.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace local.kinetica with the correct domain name.

Kind - Install the kinetica-operators chart
Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.kind.yaml
 
 helm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license="your_license_key" --set dbAdminUser.password="your_password"
@@ -6,4 +15,4 @@
 # if you want to try out a development version,
 helm search repo kinetica-operators --devel --versions
 helm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators/ --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license="your_license_key" --set dbAdminUser.password="your_password" --devel --version 7.2.0-2.rc-2
-

You should be able to access the workbench at http://local.kinetica

Username as per the values file mentioned above is kadmin and password is Kinetica1234!

\ No newline at end of file +

You should be able to access the workbench at http://local.kinetica

Username as per the values file mentioned above is kadmin and password is Kinetica1234!

\ No newline at end of file diff --git a/Operators/kinetica-operators/index.html b/Operators/kinetica-operators/index.html new file mode 100644 index 0000000..f4f0ba7 --- /dev/null +++ b/Operators/kinetica-operators/index.html @@ -0,0 +1,218 @@ + Kinetica DB Operator Helm Charts - Kinetica for Kubernetes
Skip to content

Kinetica DB Operator Helm Charts

To install all the required operators in a single command, perform the following: -

Bash
helm install -n kinetica-system \
+kinetica-operators kinetica-operators/kinetica-operators --create-namespace
+

This will install all the Kubernetes Operators required into the kinetica-system namespace and create the namespace if it is not currently present.
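To verify the release and the operator pods after the install completes (a simple check, not part of the chart output):

Verify the operator installation
helm -n kinetica-system list
kubectl -n kinetica-system get pods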

Note

Depending on the target platform you are installing to, it may be necessary to supply an additional parameter pointing to a values file to successfully provision the DB.

Bash
helm install -n kinetica-system -f values.yaml --set provider=aks \
+kinetica-operators kinetica-operators/kinetica-operators --create-namespace
+

The command above uses a custom values.yaml for helm and sets the install platform to Microsoft Azure AKS.

Currently supported providers are: -

  • aks - Microsoft Azure AKS
  • eks - Amazon AWS EKS
  • local - Generic 'On-Prem' Kubernetes Clusters e.g. one deployed using kubeadm

Example Helm values.yaml for different Cloud Providers/On-Prem installations: -

values.yaml
 1
+ 2
+ 3
+ 4
+ 5
+ 6
+ 7
+ 8
+ 9
+10
+11
+12
+13
+14
+15
+16
+17
+18
+19
+20
+21
+22
+23
+24
+25
+26
namespace: kinetica-system
+
+db:
+  serviceAccount: {}
+  image:
+    # Kinetica DB Operator installer image
+    repository: "registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator"
+    #  Kinetica DB Operator installer image tag
+    tag: ""
+
+  parameters:
+    # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to
+    kubeconfig: ""
+    # The storage class to use for PVCs
+    storageClass: "managed-premium"
+
+  storageClass:
+    persist:
+      # Workbench Operator Persistent Volume Storage Class
+      provisioner: "disk.csi.azure.com"
+    procs:
+      # Workbench Operator Procs Volume Storage Class
+      provisioner: "disk.csi.azure.com"
+    cache:
+      # Workbench Operator Cache Volume Storage Class
+      provisioner: "disk.csi.azure.com"
+

15 storageClass: "managed-premium" - sets the appropriate storageClass for Microsoft Azure AKS Persistent Volume (PV)

20 provisioner: "disk.csi.azure.com" - sets the appropriate disk provisioner for the DB (Persist) filesystem for Microsoft Azure

23 provisioner: "disk.csi.azure.com" - sets the appropriate disk provisioner for the DB Procs filesystem for Microsoft Azure

26 provisioner: "disk.csi.azure.com" - sets the appropriate disk provisioner for the DB Cache filesystem for Microsoft Azure

values.yaml
 1
+ 2
+ 3
+ 4
+ 5
+ 6
+ 7
+ 8
+ 9
+10
+11
+12
+13
+14
+15
+16
+17
+18
+19
+20
+21
+22
+23
+24
+25
+26
namespace: kinetica-system
+
+db:
+  serviceAccount: {}
+  image:
+    # Kinetica DB Operator installer image
+    repository: "registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator"
+    #  Kinetica DB Operator installer image tag
+    tag: ""
+
+  parameters:
+    # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to
+    kubeconfig: ""
+    # The storage class to use for PVCs
+    storageClass: "gp2"
+
+  storageClass:
+    persist:
+      # Workbench Operator Persistent Volume Storage Class
+      provisioner: "kubernetes.io/aws-ebs"
+    procs:
+      # Workbench Operator Procs Volume Storage Class
+      provisioner: "kubernetes.io/aws-ebs"
+    cache:
+      # Workbench Operator Cache Volume Storage Class
+      provisioner: "kubernetes.io/aws-ebs"
+

15 storageClass: "gp2" - sets the appropriate storageClass for Amazon EKS Persistent Volume (PV)

20 provisioner: "kubernetes.io/aws-ebs" - sets the appropriate disk provisioner for the DB (Persist) filesystem for Amazon EKS

23 provisioner: "kubernetes.io/aws-ebs" - sets the appropriate disk provisioner for the DB Procs filesystem for Amazon EKS

26 provisioner: "kubernetes.io/aws-ebs" - sets the appropriate disk provisioner for the DB Cache filesystem for Amazon EKS

values.yaml
 1
+ 2
+ 3
+ 4
+ 5
+ 6
+ 7
+ 8
+ 9
+10
+11
+12
+13
+14
+15
+16
+17
+18
+19
+20
+21
+22
namespace: kinetica-system
+
+db:
+  serviceAccount: {}
+  image:
+    # Kinetica DB Operator installer image
+    repository: "registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator"
+    #  Kinetica DB Operator installer image tag
+    tag: ""
+
+  parameters:
+    # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to
+    kubeconfig: ""
+    # the type of installation e.g. aks, eks, local
+    environment: "local"
+    # The storage class to use for PVCs
+    storageClass: "standard"
+
+  storageClass:
+    procs: {}
+    persist: {}
+    cache: {}
+

15 environment: "local" - tells the DB Operator to deploy the DB as a 'local' instance to the Kubernetes Cluster

17 storageClass: "standard" - sets the appropriate storageClass for the On-Prem Persistent Volume Provisioner

storageClass

The storageClass should be present in the target environment.

A list of the available storageClass resources can be obtained using: -

Bash
kubectl get sc
+

Components

The kinetica-db Helm Chart wraps the deployment of a number of sub-components: -

Installation/Upgrading/Deletion of the Kinetica Operators is done via two CRs which leverage porter.sh as the orchestrator. The corresponding Porter Operator, DB Operator & Workbench Operator CRs are submitted by running the appropriate helm command i.e.

  • install
  • upgrade
  • uninstall
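For reference, the corresponding Helm lifecycle commands look like the following (the release and chart names match the repo added earlier in this documentation):

Helm lifecycle commands
helm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace
helm -n kinetica-system upgrade kinetica-operators kinetica-operators/kinetica-operators
helm -n kinetica-system uninstall kinetica-operators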

Porter Operator

Database Operator

The Kinetica DB Operator installation CR for the porter.sh operator is: -

YAML
apiVersion: porter.sh/v1
+kind: Installation
+metadata:
+  annotations:
+    meta.helm.sh/release-name: kinetica-operators
+    meta.helm.sh/release-namespace: kinetica-system
+  labels:
+    app.kubernetes.io/instance: kinetica-operators
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/name: kinetica-operators
+    app.kubernetes.io/version: 0.1.0
+    helm.sh/chart: kinetica-operators-0.1.0
+    installVersion: 0.38.10
+  name: kinetica-operators-operator-install
+  namespace: kinetica-system
+spec:
+  action: install
+  agentConfig:
+    volumeSize: '0'
+  parameters:
+    environment: local
+    storageclass: managed-premium
+  reference: docker.io/kinetica/kinetica-k8s-operator:v7.1.9-7.rc3
+

Workbench Operator

The Kinetica Workbench installation CR for the porter.sh operator is: -

YAML
apiVersion: porter.sh/v1
+kind: Installation
+metadata:
+  annotations:
+    meta.helm.sh/release-name: kinetica-operators
+    meta.helm.sh/release-namespace: kinetica-system
+  labels:
+    app.kubernetes.io/instance: kinetica-operators
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/name: kinetica-operators
+    app.kubernetes.io/version: 0.1.0
+    helm.sh/chart: kinetica-operators-0.1.0
+    installVersion: 0.38.10
+  name: kinetica-operators-wb-operator-install
+  namespace: kinetica-system
+spec:
+  action: install
+  agentConfig:
+    volumeSize: '0'
+  parameters:
+    environment: local
+  reference: docker.io/kinetica/workbench-operator:v7.1.9-7.rc3
+

Overriding Image Tags

Bash
1
+2
+3
+4
+5
+6
+7
helm install -n kinetica-system kinetica-operators kinetica-operators/kinetica-operators \
+--create-namespace \
+--set provider=aks \
+--set dbOperator.image.tag=v7.1.9-7.rc3 \
+--set dbOperator.image.repository=docker.io/kinetica/kinetica-k8s-operator \
+--set wbOperator.image.repository=docker.io/kinetica/workbench-operator \
+--set wbOperator.image.tag=v7.1.9-7.rc3
+
\ No newline at end of file diff --git a/Reference/database/index.html b/Reference/database/index.html new file mode 100644 index 0000000..f644fca --- /dev/null +++ b/Reference/database/index.html @@ -0,0 +1,90 @@ + Kinetica Database Configuration - Kinetica for Kubernetes
Skip to content

Kinetica Database Configuration

  • kubectl (yaml)

KineticaCluster

To deploy a new Database Instance into a Kubernetes cluster...

Using kubectl, a CustomResource of type KineticaCluster is used to define a new Kinetica DB Cluster in a yaml file.

The basic Group, Version, Kind or GVK to instantiate a Kinetica DB Cluster is as follows: -

kineticacluster.yaml
1
+2
apiVersion: app.kinetica.com/v1
+kind: KineticaCluster
+

Metadata

to which we add a metadata: block for the name of the DB CR along with the namespace into which we are targeting the installation of the DB cluster.

kineticacluster.yaml
1
+2
+3
+4
+5
+6
apiVersion: app.kinetica.com/v1
+kind: KineticaCluster
+metadata:
+  name: my-kinetica-db-cr
+  namespace: gpudb
+spec:
+

Spec

Under the spec: section of the KineticaCluster CR we have a number of sections supporting different aspects of the deployed DB cluster: -

gpudbCluster

Configuration items specific to the DB itself.

kineticacluster.yaml - gpudbCluster
1
+2
+3
+4
+5
+6
+7
apiVersion: app.kinetica.com/v1
+kind: KineticaCluster
+metadata:
+  name: my-kinetica-db-cr
+  namespace: gpudb
+spec:
+  gpudbCluster:
+
gpudbCluster
cluster name & size
1
+2
+3
+4
+5
+6
+7
clusterName: kinetica-cluster 
+clusterSize: 
+  tshirtSize: M 
+  tshirtType: LargeCPU 
+fqdn: kinetica-cluster.saas.kinetica.com
+haRingName: default
+hasPools: false    
+

1. clusterName - the user defined name of the Kinetica DB Cluster

2. clusterSize - block that defines the number of DB Ranks to run

3. tshirtSize - sets the cluster size to a defined size based upon the t-shirt size. Valid sizes are: -

  • XS - 1 DB Rank
  • S - 2 DB Ranks
  • M - 4 DB Ranks
  • L - 8 DB Ranks
  • XL - 16 DB Ranks
  • XXL - 32 DB Ranks
  • XXXL - 64 DB Ranks

4. tshirtType - block that defines the type of DB Ranks to run: -

  • SmallCPU -
  • LargeCPU -
  • SmallGPU -
  • LargeGPU -

5. fqdn - The fully qualified domain name for the DB cluster. Used on the Ingress records for any exposed services.

6. haRingName - Default: default

7. hasPools - Whether to enable the separate node 'pools' for "infra", "compute" pod scheduling. Default: false +optional

autoSuspend

The DB Cluster autosuspend section allows for the spinning down of the core DB Pods to release the underlying Kubernetes nodes to reduce infrastructure costs when the DB is not in use.

kineticacluster.yaml - autoSuspend
1
+2
+3
+4
+5
+6
+7
+8
+9
apiVersion: app.kinetica.com/v1
+kind: KineticaCluster
+metadata:
+  name: my-kinetica-db-cr
+  namespace: gpudb
+spec:
+  autoSuspend:
+    enabled: false
+    inactivityDuration: 1h0m0s
+

7. The start of the autoSuspend definition.

8. enabled - when set to true, auto suspend of the DB cluster is enabled; when set to false, no automatic suspending of the DB takes place. If omitted, it defaults to false.

9. inactivityDuration - the duration of DB inactivity after which the DB will be suspended.

Horizontal Pod Autoscaler

In order for autoSuspend to work correctly the Kubernetes Horizontal Pod Autoscaler needs to be deployed to the cluster.
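As a quick sanity check (a sketch; metrics-server is the usual resource-metrics provider and its deployment name may differ in your cluster), confirm that the autoscaling API and resource metrics are available:

Check HPA prerequisites
kubectl api-resources | grep horizontalpodautoscalers
kubectl -n kube-system get deployment metrics-server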

gadmin

GAdmin the Database Administration Console

GAdmin

kineticacluster.yaml - gadmin
 1
+ 2
+ 3
+ 4
+ 5
+ 6
+ 7
+ 8
+ 9
+10
+11
+12
apiVersion: app.kinetica.com/v1
+kind: KineticaCluster
+metadata:
+  name: my-kinetica-db-cr
+  namespace: gpudb
+spec:
+  gadmin:
+    containerPort:
+      containerPort: 8080
+      name: gadmin
+      protocol: TCP
+    isEnabled: true
+

7. gadmin configuration block definition

8. containerPort configuration block i.e. where gadmin is exposed on the DB Pod

9. containerPort the port number as an integer. Default: 8080

10. name the name of the port being exposed. Default: gadmin

11. protocol the network protocol used. Default: TCP

12. isEnabled whether gadmin is exposed from the DB pod. Default: true

KineticaUser

KineticaGrant

KineticaSchema

KineticaResourceGroup

\ No newline at end of file diff --git a/Reference/helm_kinetica_operators/index.html b/Reference/helm_kinetica_operators/index.html new file mode 100644 index 0000000..ad3c355 --- /dev/null +++ b/Reference/helm_kinetica_operators/index.html @@ -0,0 +1,10 @@ + Kinetica Operators - Kinetica for Kubernetes
Skip to content

Kinetica Operators Helm Chart Reference

Coming Soon

\ No newline at end of file diff --git a/Reference/index.html b/Reference/index.html new file mode 100644 index 0000000..9f671b3 --- /dev/null +++ b/Reference/index.html @@ -0,0 +1,10 @@ + Reference Section - Kinetica for Kubernetes
Skip to content

Reference Section

  • Kinetica Operators Helm


    Kinetica Operators Helm charts & values file reference data.

    Charts

  • Kinetica Core DB CRDs


    Kinetica DB Kubernetes CRD & ConfigMap reference data.

    Cluster CRDs

  • Kinetica Workbench CRDs


    Kinetica Workbench Kubernetes CRD & ConfigMap reference data.

    Workbench

\ No newline at end of file diff --git a/Reference/kinetica_cluster_admins/index.html b/Reference/kinetica_cluster_admins/index.html new file mode 100644 index 0000000..e6eb1b0 --- /dev/null +++ b/Reference/kinetica_cluster_admins/index.html @@ -0,0 +1,101 @@ + Kinetica Cluster Admins Reference - Kinetica for Kubernetes
Skip to content

Kinetica Cluster Admins Reference

Full KineticaClusterAdmin CR Structure

kineticaclusteradmins.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an
+# object. Servers should convert recognized schemas to the latest
+# internal value, and may reject unrecognized values. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+apiVersion: app.kinetica.com/v1
+# Kind is a string value representing the REST resource this object
+# represents. Servers may infer this from the endpoint the client
+# submits requests to. Cannot be updated. In CamelCase. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+kind: KineticaClusterAdmin
+metadata: {}
+# KineticaClusterAdminSpec defines the desired state of
+# KineticaClusterAdmin
+spec:
+  # ForceDBStatus - Force a Status of the DB.
+  forceDbStatus: string
+  # Name - The name of the cluster to target.
+  kineticaClusterName: string
+  # Offline - Pause/Resume of the DB.
+  offline:
+    # Set to true if desired state is offline. The supported values are:
+    # true false
+    offline: false
+    # Optional parameters. The default value is an empty map (
+    # {} ). Supported Parameters: flush_to_disk Flush to disk when
+    # going offline The supported values are: true false
+    options: {}
+  # Rebalance of the DB.
+  rebalance:
+    # Optional parameters. The default value is an empty map (
+    # {} ). Supported Parameters: rebalance_sharded_data        If true,
+    # sharded data will be rebalanced approximately equally across the
+    # cluster. Note that for clusters with large amounts of sharded
+    # data, this data transfer could be time-consuming and result in
+    # delayed query responses. The default value is true. The supported
+    # values are: true false rebalance_unsharded_data   If true,
+    # unsharded data (a.k.a. randomly-sharded) will be rebalanced
+    # approximately equally across the cluster. Note that for clusters
+    # with large amounts of unsharded data, this data transfer could be
+    # time-consuming and result in delayed query responses. The default
+    # value is true. The supported values are: true false
+    # table_includes                Comma-separated list of unsharded table names
+    # to rebalance. Not applicable to sharded tables because they are
+    # always rebalanced. Cannot be used simultaneously with
+    # table_excludes. This parameter is ignored if
+    # rebalance_unsharded_data is false.
+    # table_excludes                Comma-separated list of unsharded table names
+    # to not rebalance. Not applicable to sharded tables because they
+    # are always rebalanced. Cannot be used simultaneously with
+    # table_includes. This parameter is ignored if rebalance_
+    # unsharded_data is false. aggressiveness               Influences how much
+    # data is moved at a time during rebalance. A higher aggressiveness
+    # will complete the rebalance faster. A lower aggressiveness will
+    # take longer but allow for better interleaving between the
+    # rebalance and other queries. Valid values are constants from 1
+    # (lowest) to 10 (highest). The default value is '1'.
+    # compact_after_rebalance   Perform compaction of deleted records
+    # once the rebalance completes to reclaim memory and disk space.
+    # Default is true, unless repair_incorrectly_sharded_data is set to
+    # true. The default value is true. The supported values are: true
+    # false compact_only                If set to true, ignore rebalance options
+    # and attempt to perform compaction of deleted records to reclaim
+    # memory and disk space without rebalancing first. The default
+    # value is false. The supported values are: true false
+    # repair_incorrectly_sharded_data       Scans for any data sharded
+    # incorrectly and re-routes the data to the correct location. Only
+    # necessary if /admin/verifydb reports an error in sharding
+    # alignment. This can be done as part of a typical rebalance after
+    # expanding the cluster or in a standalone fashion when it is
+    # believed that data is sharded incorrectly somewhere in the
+    # cluster. Compaction will not be performed by default when this is
+    # enabled. If this option is set to true, the time necessary to
+    # rebalance and the memory used by the rebalance may increase. The
+    # default value is false. The supported values are: true false
+    options: {}
+  # RegenerateDBConfig - Force regenerate of DB ConfigMap. true -
+  # restarts DB Pods after config generation false - writes new
+  # configuration without restarting the DB Pods
+  regenerateDBConfig:
+    # Restart - Scales down the DB STS and back up once the DB
+    # Configuration has been regenerated.
+    restart: false
+# KineticaClusterAdminStatus defines the observed state of
+# KineticaClusterAdmin
+status:
+  # Phase - The current phase/state of the Admin request
+  phase: string
+  # Processed - Indicates if the admin request has already been
+  # processed. Avoids the request being rerun in the case the Operator
+  # gets restarted.
+  processed: false
+
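As a complement to the full structure above, the following is a minimal, illustrative KineticaClusterAdmin CR that takes a cluster offline and flushes it to disk. The CR name, namespace, and target cluster name are placeholders; only the fields documented above are used.

kineticaclusteradmin_offline_sample.yaml (illustrative sketch)
apiVersion: app.kinetica.com/v1
kind: KineticaClusterAdmin
metadata:
  name: take-db-offline            # hypothetical CR name
  namespace: gpudb                 # assumed to match the DB namespace
spec:
  kineticaClusterName: my-kinetica-db-cr   # placeholder target cluster
  offline:
    offline: true
    options:
      flush_to_disk: "true"        # supported option per the comments above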

\ No newline at end of file diff --git a/Reference/kinetica_cluster_backups/index.html b/Reference/kinetica_cluster_backups/index.html new file mode 100644 index 0000000..ae29ae7 --- /dev/null +++ b/Reference/kinetica_cluster_backups/index.html @@ -0,0 +1,228 @@ + Kinetica Cluster Backups Reference - Kinetica for Kubernetes
Skip to content

Kinetica Cluster Backups Reference

Full KineticaClusterBackup CR Structure

kineticaclusterbackups.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an
+# object. Servers should convert recognized schemas to the latest
+# internal value, and may reject unrecognized values. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+apiVersion: app.kinetica.com/v1
+# Kind is a string value representing the REST resource this object
+# represents. Servers may infer this from the endpoint the client
+# submits requests to. Cannot be updated. In CamelCase. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+kind: KineticaClusterBackup 
+metadata: {}
+# Fields specific to the linked backup engine
+provider:
+  # Name of the backup/restore provider. FOR INTERNAL USE ONLY.
+  backupProvider: "velero"
+  # Name of the backup in the linked BackupProvider. FOR INTERNAL USE
+  # ONLY.
+  linkedItemName: ""
+# BackupSpec defines the specification for a Velero backup.
+spec:
+  # DefaultVolumesToRestic specifies whether restic should be used to
+  # take a backup of all pod volumes by default.
+  defaultVolumesToRestic: true
+  # ExcludedNamespaces contains a list of namespaces that are not
+  # included in the backup.
+  excludedNamespaces: ["string"]
+  # ExcludedResources is a slice of resource names that are not included
+  # in the backup.
+  excludedResources: ["string"]
+  # Hooks represent custom behaviors that should be executed at
+  # different phases of the backup.
+  hooks:
+    # Resources are hooks that should be executed when backing up
+    # individual instances of a resource.
+    resources:
+    - excludedNamespaces: ["string"]
+      # ExcludedResources specifies the resources to which this hook
+      # spec does not apply.
+      excludedResources: ["string"]
+      # IncludedNamespaces specifies the namespaces to which this hook
+      # spec applies. If empty, it applies to all namespaces.
+      includedNamespaces: ["string"]
+      # IncludedResources specifies the resources to which this hook
+      # spec applies. If empty, it applies to all resources.
+      includedResources: ["string"]
+      # LabelSelector, if specified, filters the resources to which this
+      # hook spec applies.
+      labelSelector:
+        # matchExpressions is a list of label selector requirements. The
+        # requirements are ANDed.
+        matchExpressions:
+        - key: string
+          # operator represents a key's relationship to a set of values.
+          # Valid operators are In, NotIn, Exists and DoesNotExist.
+          operator: string
+          # values is an array of string values. If the operator is In
+          # or NotIn, the values array must be non-empty. If the
+          # operator is Exists or DoesNotExist, the values array must
+          # be empty. This array is replaced during a strategic merge
+          # patch.
+          values: ["string"]
+        # matchLabels is a map of {key,value} pairs. A single
+        # {key,value} in the matchLabels map is equivalent to an
+        # element of matchExpressions, whose key field is "key", the
+        # operator is "In", and the values array contains only "value".
+        # The requirements are ANDed.
+        matchLabels: {}
+      # Name is the name of this hook.
+      name: string
+      # PostHooks is a list of BackupResourceHooks to execute after
+      # storing the item in the backup. These are executed after
+      # all "additional items" from item actions are processed.
+      post:
+      - exec:
+          # Command is the command and arguments to execute.
+          command: ["string"]
+          # Container is the container in the pod where the command
+          # should be executed. If not specified, the pod's first
+          # container is used.
+          container: string
+          # OnError specifies how Velero should behave if it encounters
+          # an error executing this hook.
+          onError: string
+          # Timeout defines the maximum amount of time Velero should
+          # wait for the hook to complete before considering the
+          # execution a failure.
+          timeout: string
+      # PreHooks is a list of BackupResourceHooks to execute prior to
+      # storing the item in the backup. These are executed before
+      # any "additional items" from item actions are processed.
+      pre:
+      - exec:
+          # Command is the command and arguments to execute.
+          command: ["string"]
+          # Container is the container in the pod where the command
+          # should be executed. If not specified, the pod's first
+          # container is used.
+          container: string
+          # OnError specifies how Velero should behave if it encounters
+          # an error executing this hook.
+          onError: string
+          # Timeout defines the maximum amount of time Velero should
+          # wait for the hook to complete before considering the
+          # execution a failure.
+          timeout: string
+  # IncludeClusterResources specifies whether cluster-scoped resources
+  # should be included for consideration in the backup.
+  includeClusterResources: true
+  # IncludedNamespaces is a slice of namespace names to include objects
+  # from. If empty, all namespaces are included.
+  includedNamespaces: ["string"]
+  # IncludedResources is a slice of resource names to include in the
+  # backup. If empty, all resources are included.
+  includedResources: ["string"]
+  # LabelSelector is a metav1.LabelSelector to filter with when adding
+  # individual objects to the backup. If empty or nil, all objects are
+  # included. Optional.
+  labelSelector:
+    # matchExpressions is a list of label selector requirements. The
+    # requirements are ANDed.
+    matchExpressions:
+    - key: string
+      # operator represents a key's relationship to a set of values.
+      # Valid operators are In, NotIn, Exists and DoesNotExist.
+      operator: string
+      # values is an array of string values. If the operator is In or
+      # NotIn, the values array must be non-empty. If the operator is
+      # Exists or DoesNotExist, the values array must be empty. This
+      # array is replaced during a strategic merge patch.
+      values: ["string"]
+    # matchLabels is a map of {key,value} pairs. A single {key,value} in
+    # the matchLabels map is equivalent to an element of
+    # matchExpressions, whose key field is "key", the operator is "In",
+    # and the values array contains only "value". The requirements are
+    # ANDed.
+    matchLabels: {}
+  metadata:
+    labels: {}
+  # OrderedResources specifies the backup order of resources of specific
+  # Kind. The map key is the Kind name and value is a list of resource
+  # names separated by commas. Each resource name has
+  # format "namespace/resourcename".  For cluster resources, simply
+  # use "resourcename".
+  orderedResources: {}
+  # SnapshotVolumes specifies whether to take cloud snapshots of any
+  # PV's referenced in the set of objects included in the Backup.
+  snapshotVolumes: true
+  # StorageLocation is a string containing the name of a
+  # BackupStorageLocation where the backup should be stored.
+  storageLocation: string
+  # TTL is a time.Duration-parseable string describing how long the
+  # Backup should be retained for.
+  ttl: string
+  # VolumeSnapshotLocations is a list containing names of
+  # VolumeSnapshotLocations associated with this backup.
+  volumeSnapshotLocations: ["string"]
+status:
+  # ClusterSize the current number of ranks & type i.e. CPU or GPU of
+  # the cluster when the backup took place.
+  clusterSize:
+    # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a
+    # representation of the number of nodes in a simple to understand
+    # T-Shirt size scheme. This indicates the size of the cluster i.e.
+    # the number of nodes. It does not identify the size of the cloud
+    # provider nodes. For node size see ClusterTypeEnum. Supported
+    # Values are: - XS S M L XL XXL XXXL
+    tshirtSize: string
+    # ClusterTypeEnum - An Enum of the node types of a KineticaCluster
+    # e.g. CPU, GPU along with the Cloud Provider node size e.g. size
+    # of the VM.
+    tshirtType: string
+  coldTierBackup: string
+  # CompletionTimestamp records the time a backup was completed.
+  # Completion time is recorded even on failed backups. Completion time
+  # is recorded before uploading the backup object. The server's time
+  # is used for CompletionTimestamps
+  completionTimestamp: string
+  # Errors is a count of all error messages that were generated during
+  # execution of the backup.  The actual errors are in the backup's log
+  # file in object storage.
+  errors: 1
+  # Expiration is when this Backup is eligible for garbage-collection.
+  expiration: string
+  # FormatVersion is the backup format version, including major, minor,
+  # and patch version.
+  formatVersion: string
+  # Phase is the current state of the Backup.
+  phase: string
+  # Progress contains information about the backup's execution progress.
+  # Note that this information is best-effort only -- if Velero fails
+  # to update it during a backup for any reason, it may be
+  # inaccurate/stale.
+  progress:
+    # ItemsBackedUp is the number of items that have actually been
+    # written to the backup tarball so far.
+    itemsBackedUp: 1
+    # TotalItems is the total number of items to be backed up. This
+    # number may change throughout the execution of the backup due to
+    # plugins that return additional related items to back up, the
+    # velero.io/exclude-from-backup label, and various other filters
+    # that happen as items are processed.
+    totalItems: 1
+  # StartTimestamp records the time a backup was started. Separate from
+  # CreationTimestamp, since that value changes on restores. The
+  # server's time is used for StartTimestamps
+  startTimestamp: string
+  # ValidationErrors is a slice of all validation errors
+  # (if applicable).
+  validationErrors: ["string"]
+  # Version is the backup format major version. Deprecated: Please see
+  # FormatVersion
+  version: 1
+  # VolumeSnapshotsAttempted is the total number of attempted volume
+  # snapshots for this backup.
+  volumeSnapshotsAttempted: 1
+  # VolumeSnapshotsCompleted is the total number of successfully
+  # completed volume snapshots for this backup.
+  volumeSnapshotsCompleted: 1
+  # Warnings is a count of all warning messages that were generated
+  # during execution of the backup. The actual warnings are in the
+  # backup's log file in object storage.
+  warnings: 1
+
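For orientation, here is a minimal, illustrative KineticaClusterBackup CR using only a few of the spec fields documented above. The CR name, namespace, and storage location are placeholders; Velero must already be installed and configured on the cluster.

kineticaclusterbackup_sample.yaml (illustrative sketch)
apiVersion: app.kinetica.com/v1
kind: KineticaClusterBackup
metadata:
  name: nightly-backup             # hypothetical CR name
  namespace: gpudb
spec:
  includedNamespaces: ["gpudb"]    # back up the DB namespace only
  snapshotVolumes: true
  storageLocation: default         # placeholder BackupStorageLocation name
  ttl: 720h0m0s                    # retain the backup for 30 days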

\ No newline at end of file diff --git a/Reference/kinetica_cluster_grants/index.html b/Reference/kinetica_cluster_grants/index.html new file mode 100644 index 0000000..81e9ef9 --- /dev/null +++ b/Reference/kinetica_cluster_grants/index.html @@ -0,0 +1,111 @@ + Kinetica Cluster Grants Reference - Kinetica for Kubernetes
Skip to content

Kinetica Cluster Grants CRD Reference

Full KineticaGrant CR Structure

kineticagrants.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an
+# object. Servers should convert recognized schemas to the latest
+# internal value, and may reject unrecognized values. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+apiVersion: app.kinetica.com/v1
+# Kind is a string value representing the REST resource this object
+# represents. Servers may infer this from the endpoint the client
+# submits requests to. Cannot be updated. In CamelCase. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+kind: KineticaGrant 
+metadata: {}
+# KineticaGrantSpec defines the desired state of KineticaGrant
+spec:
+  # Grants system-level and/or table permissions to a user or role.
+  addGrantAllOnSchemaRequest:
+    # Name of the user or role that will be granted membership in input
+    # parameter role. Must be an existing user or role.
+    member: string
+    # Optional parameters. The default value is an empty map ( {} ).
+    options: {}
+    # SchemaName - name of the schema on which to perform the Grant All
+    schemaName: string
+  # Grants system-level and/or table permissions to a user or role.
+  addGrantPermissionRequest:
+    # Optional parameters. The default value is an empty map ( {} ).
+    options: {}
+    # Permission to grant to the user or role. Supported
+    # Values    Description system_admin    Full access to all data and
+    # system functions. system_user_admin   Access to administer users
+    # and roles that do not have system_admin permission.
+    # system_write  Read and write access to all tables.
+    # system_read   Read-only access to all tables.
+    systemPermission:
+      # UID of the user or role to which the permission will be granted.
+      # Must be an existing user or role.
+      name: string
+      # Optional parameters. The default value is an empty map (
+      # {} ). Supported Parameters: resource_group  Name of an existing
+      # resource group to associate with this role.
+      options: {}
+      # Permission to grant to the user or role. Supported
+      # Values  Description table_admin Full read/write and
+      # administrative access to the table. table_insert    Insert access
+      # to the table. table_update  Update access to the table.
+      # table_delete    Delete access to the table. table_read  Read access
+      # to the table.
+      permission: string
+    # Permission to grant to the user or role. Supported
+    # Values    Description system_admin   Full access to all data and
+    # system functions. system_user_admin  Access to administer
+    # users and roles that do not have system_admin permission.
+    # system_write  Read and write access to all tables.
+    # system_read   Read-only access to all tables.
+    tablePermissions:
+    - filter_expression: ""
+      # UID of the user or role to which the permission will be granted.
+      # Must be an existing user or role.
+      name: string
+      # Optional parameters. The default value is an empty map (
+      # {} ). Supported Parameters: resource_group  Name of an existing
+      # resource group to associate with this role.
+      options: {}
+      # Permission to grant to the user or role. Supported
+      # Values  Description table_admin Full read/write and
+      # administrative access to the table. table_insert    Insert access
+      # to the table. table_update  Update access to the table.
+      # table_delete    Delete access to the table. table_read  Read access
+      # to the table.
+      permission: string
+      # Name of the table for which the Permission is to be granted
+      table_name: string
+  # Grants membership in a role to a user or role.
+  addGrantRoleRequest:
+    # Name of the user or role that will be granted membership in input
+    # parameter role. Must be an existing user or role.
+    member: string
+    # Optional parameters. The default value is an empty map ( {} ).
+    options: {}
+    # Name of the role in which membership will be granted. Must be an
+    # existing role.
+    role: string
+  # Debug debug the call
+  debug: false
+  # RingName is the name of the kinetica ring that this user belongs
+  # to.
+  ringName: string
+# KineticaGrantStatus defines the observed state of KineticaGrant
+status:
+  # DBStringResponse - The GPUdb server embeds the endpoint response
+  # inside a standard response structure which contains status
+  # information and the actual response to the query.
+  db_response:
+    data: string
+    # This embedded JSON represents the result of the endpoint
+    data_str: string
+    # API Call Specific
+    data_type: string
+    # Empty if success or an error message
+    message: string
+    # 'OK' or 'ERROR'
+    status: string
+  ldap_response: string
+
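To illustrate the structure above, the following minimal KineticaGrant CR grants an existing role to an existing user via addGrantRoleRequest. The member, role, and ring names are placeholders.

kineticagrant_sample.yaml (illustrative sketch)
apiVersion: app.kinetica.com/v1
kind: KineticaGrant
metadata:
  name: grant-analysts-role        # hypothetical CR name
  namespace: gpudb
spec:
  ringName: default                # placeholder ring name
  addGrantRoleRequest:
    member: jdoe                   # existing user or role (placeholder)
    role: analysts                 # existing role (placeholder)
    options: {}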

\ No newline at end of file diff --git a/Reference/kinetica_cluster_reference/index.html b/Reference/kinetica_cluster_reference/index.html new file mode 100644 index 0000000..e9ad721 --- /dev/null +++ b/Reference/kinetica_cluster_reference/index.html @@ -0,0 +1,10 @@ + Kinetica Core DB CRDs - Kinetica for Kubernetes
Skip to content

Kinetica Core DB CRDs

  • DB Clusters


    Core Kinetica Database Cluster Management CRD & sample CR.

    KineticaCluster

  • DB Users


    Kinetica Database User Management CRD & sample CR.

    KineticaUser

  • DB Roles


    Kinetica Database Role Management CRD & sample CR.

    KineticaRole

  • DB Schemas


    Kinetica Database Schema Management CRD & sample CR.

    KineticaSchema

  • DB Grants


    Kinetica Database Grant Management CRD & sample CR.

    KineticaGrant

  • DB Resource Groups


    Kinetica Database Resource Group Management CRD & sample CR.

    KineticaResourceGroup

  • DB Administration


    Kinetica Database Administration CRD & sample CR.

    KineticaAdmin

  • DB Backups


    Kinetica Database Backup Management CRD & sample CR.

    Note

    This requires Velero to be installed on the Kubernetes Cluster.

    KineticaBackup

  • DB Restore


    Kinetica Database Restore CRD & sample CR.

    Note

    This requires Velero to be installed on the Kubernetes Cluster.

    KineticaRestore

\ No newline at end of file diff --git a/Reference/kinetica_cluster_resource_groups/index.html b/Reference/kinetica_cluster_resource_groups/index.html new file mode 100644 index 0000000..c2895e7 --- /dev/null +++ b/Reference/kinetica_cluster_resource_groups/index.html @@ -0,0 +1,46 @@ + Kinetica Cluster Resource Groups Reference - Kinetica for Kubernetes
Skip to content

Kinetica Cluster Resource Groups CRD Reference

Full KineticaResourceGroup CR Structure

kineticaclusterresourcegroups.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an
+# object. Servers should convert recognized schemas to the latest
+# internal value, and may reject unrecognized values. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+apiVersion: app.kinetica.com/v1
+# Kind is a string value representing the REST resource this object
+# represents. Servers may infer this from the endpoint the client
+# submits requests to. Cannot be updated. In CamelCase. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+kind: KineticaClusterResourceGroup 
+metadata: {}
+# KineticaClusterResourceGroupSpec defines the desired state of
+# KineticaClusterResourceGroup
+spec: 
+  db_create_resource_group_request:
+    # AdjoiningResourceGroup -
+    adjoining_resource_group: ""
+    # Name - name of the DB ResourceGroup
+    # https://docs.kinetica.com/7.1/azure/sql/resource_group/?search-highlight=resource+group#id-baea5b60-769c-5373-bff1-53f4f1ca5c21
+    name: string
+    # Options - DB Options used when creating the ResourceGroup
+    options: {}
+    # Ranking - Indicates the relative ranking among existing resource
+    # groups where this new resource group will be placed. When using
+    # before or after, specify which resource group this one will be
+    # inserted before or after in input parameter
+    # adjoining_resource_group. The supported values are: first last
+    # before after
+    ranking: ""
+  # RingName is the name of the kinetica ring that this user belongs
+  # to.
+  ringName: string
+# KineticaClusterResourceGroupStatus defines the observed state of
+# KineticaClusterResourceGroup
+status: 
+  provisioned: string
+
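As a concrete starting point, this minimal, illustrative KineticaClusterResourceGroup CR creates a resource group ranked first. The names are placeholders and the options map is left empty; see the Kinetica resource group documentation linked above for the available options.

kineticaclusterresourcegroup_sample.yaml (illustrative sketch)
apiVersion: app.kinetica.com/v1
kind: KineticaClusterResourceGroup
metadata:
  name: analytics-resource-group   # hypothetical CR name
  namespace: gpudb
spec:
  ringName: default                # placeholder ring name
  db_create_resource_group_request:
    name: analytics_rg             # DB resource group name (placeholder)
    ranking: first
    adjoining_resource_group: ""
    options: {}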

\ No newline at end of file diff --git a/Reference/kinetica_cluster_restores/index.html b/Reference/kinetica_cluster_restores/index.html new file mode 100644 index 0000000..075d474 --- /dev/null +++ b/Reference/kinetica_cluster_restores/index.html @@ -0,0 +1,107 @@ + Kinetica Cluster Restores Reference - Kinetica for Kubernetes
Skip to content

Kinetica Cluster Restores Reference

Full KineticaClusterRestore CR Structure

kineticaclusterrestores.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an
+# object. Servers should convert recognized schemas to the latest
+# internal value, and may reject unrecognized values. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+apiVersion: app.kinetica.com/v1
+# Kind is a string value representing the REST resource this object
+# represents. Servers may infer this from the endpoint the client
+# submits requests to. Cannot be updated. In CamelCase. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+kind: KineticaClusterRestore 
+metadata: {}
+# RestoreSpec defines the specification for a Velero restore.
+spec:
+  # BackupName is the unique name of the Velero backup to restore from.
+  backupName: string
+  # ExcludedNamespaces contains a list of namespaces that are not
+  # included in the restore.
+  excludedNamespaces: ["string"]
+  # ExcludedResources is a slice of resource names that are not included
+  # in the restore.
+  excludedResources: ["string"]
+  # IncludeClusterResources specifies whether cluster-scoped resources
+  # should be included for consideration in the restore. If null,
+  # defaults to true.
+  includeClusterResources: true
+  # IncludedNamespaces is a slice of namespace names to include objects
+  # from. If empty, all namespaces are included.
+  includedNamespaces: ["string"]
+  # IncludedResources is a slice of resource names to include in the
+  # restore. If empty, all resources in the backup are included.
+  includedResources: ["string"]
+  # LabelSelector is a metav1.LabelSelector to filter with when
+  # restoring individual objects from the backup. If empty or nil, all
+  # objects are included. Optional.
+  labelSelector:
+    # matchExpressions is a list of label selector requirements. The
+    # requirements are ANDed.
+    matchExpressions:
+    - key: string
+      # operator represents a key's relationship to a set of values.
+      # Valid operators are In, NotIn, Exists and DoesNotExist.
+      operator: string
+      # values is an array of string values. If the operator is In or
+      # NotIn, the values array must be non-empty. If the operator is
+      # Exists or DoesNotExist, the values array must be empty. This
+      # array is replaced during a strategic merge patch.
+      values: ["string"]
+    # matchLabels is a map of {key,value} pairs. A single {key,value} in
+    # the matchLabels map is equivalent to an element of
+    # matchExpressions, whose key field is "key", the operator is "In",
+    # and the values array contains only "value". The requirements are
+    # ANDed.
+    matchLabels: {}
+  # NamespaceMapping is a map of source namespace names to target
+  # namespace names to restore into. Any source namespaces not included
+  # in the map will be restored into namespaces of the same name.
+  namespaceMapping: {}
+  # RestorePVs specifies whether to restore all included PVs from
+  # snapshot (via the cloudprovider).
+  restorePVs: true
+  # ScheduleName is the unique name of the Velero schedule to restore
+  # from. If specified, and BackupName is empty, Velero will restore
+  # from the most recent successful backup created from this schedule.
+  scheduleName: string
+status:
+  coldTierRestore: ""
+  # CompletionTimestamp records the time the restore operation was
+  # completed. Completion time is recorded even on failed restore. The
+  # server's time is used for StartTimestamps
+  completionTimestamp: string
+  # Errors is a count of all error messages that were generated during
+  # execution of the restore. The actual errors are stored in object
+  # storage.
+  errors: 1
+  # FailureReason is an error that caused the entire restore to fail.
+  failureReason: string
+  # Phase is the current state of the Restore
+  phase: string
+  # Progress contains information about the restore's execution
+  # progress. Note that this information is best-effort only -- if
+  # Velero fails to update it during a restore for any reason, it may
+  # be inaccurate/stale.
+  progress:
+    # ItemsRestored is the number of items that have actually been
+    # restored so far
+    itemsRestored: 1
+    # TotalItems is the total number of items to be restored. This
+    # number may change throughout the execution of the restore due to
+    # plugins that return additional related items to restore
+    totalItems: 1
+  # StartTimestamp records the time the restore operation was started.
+  # The server's time is used for StartTimestamps
+  startTimestamp: string
+  # ValidationErrors is a slice of all validation errors(if applicable)
+  validationErrors: ["string"]
+  # Warnings is a count of all warning messages that were generated
+  # during execution of the restore. The actual warnings are stored in
+  # object storage.
+  warnings: 1
+
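For reference, the following minimal, illustrative KineticaClusterRestore CR restores from a named Velero backup into the DB namespace. The backup name and namespace are placeholders.

kineticaclusterrestore_sample.yaml (illustrative sketch)
apiVersion: app.kinetica.com/v1
kind: KineticaClusterRestore
metadata:
  name: restore-nightly-backup     # hypothetical CR name
  namespace: gpudb
spec:
  backupName: nightly-backup       # name of an existing Velero backup (placeholder)
  includedNamespaces: ["gpudb"]
  restorePVs: true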

\ No newline at end of file diff --git a/Reference/kinetica_cluster_roles/index.html b/Reference/kinetica_cluster_roles/index.html new file mode 100644 index 0000000..5abd371 --- /dev/null +++ b/Reference/kinetica_cluster_roles/index.html @@ -0,0 +1,66 @@ + Kinetica Cluster Roles Reference - Kinetica for Kubernetes
Skip to content

Kinetica Cluster Roles CRD

Full KineticaRole CR Structure

kineticaroles.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an
+# object. Servers should convert recognized schemas to the latest
+# internal value, and may reject unrecognized values. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+apiVersion: app.kinetica.com/v1
+# Kind is a string value representing the REST resource this object
+# represents. Servers may infer this from the endpoint the client
+# submits requests to. Cannot be updated. In CamelCase. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+kind: KineticaRole 
+metadata: {}
+# KineticaRoleSpec defines the desired state of KineticaRole
+spec:
+  # AlterRoleRequest Kinetica DB REST API Request Format Object.
+  alter_role:
+    # Action - Modification operation to be applied to the role.
+    action: string
+    # Role UID - Name of the role to be altered. Must be an existing
+    # role.
+    name: string
+    # Optional parameters. The default value is an empty map ( {} ).
+    options: {}
+    # Value - The value of the modification, depending on input
+    # parameter action.
+    value: string
+  # Debug debug the call
+  debug: false
+  # RingName is the name of the kinetica ring that this user belongs
+  # to.
+  ringName: string
+  # AddRoleRequest Kinetica DB REST API Request Format Object.
+  role:
+    # User UID
+    name: string
+    # Optional parameters. The default value is an empty map (
+    # {} ). Supported Parameters: resource_group    Name of an existing
+    # resource group to associate with this role.
+    options: {}
+    # ResourceGroupName of an existing resource group to associate with
+    # this role
+    resourceGroupName: ""
+# KineticaRoleStatus defines the observed state of KineticaRole
+status:
+  # DBStringResponse - The GPUdb server embeds the endpoint response
+  # inside a standard response structure which contains status
+  # information and the actual response to the query.
+  db_response:
+    data: string
+    # This embedded JSON represents the result of the endpoint
+    data_str: string
+    # API Call Specific
+    data_type: string
+    # Empty if success or an error message
+    message: string
+    # 'OK' or 'ERROR'
+    status: string
+  ldap_response: string
+
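To show the shape of a typical request, here is a minimal, illustrative KineticaRole CR that creates a role using the role block above. The role and ring names are placeholders.

kineticarole_sample.yaml (illustrative sketch)
apiVersion: app.kinetica.com/v1
kind: KineticaRole
metadata:
  name: create-analysts-role       # hypothetical CR name
  namespace: gpudb
spec:
  ringName: default                # placeholder ring name
  role:
    name: analysts                 # role UID to create (placeholder)
    resourceGroupName: ""
    options: {}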

\ No newline at end of file diff --git a/Reference/kinetica_cluster_schemas/index.html b/Reference/kinetica_cluster_schemas/index.html new file mode 100644 index 0000000..c819679 --- /dev/null +++ b/Reference/kinetica_cluster_schemas/index.html @@ -0,0 +1,37 @@ + Kinetica Cluster Schemas Reference - Kinetica for Kubernetes
Skip to content

Kinetica Cluster Schemas CRD Reference

Full Kinetica Cluster Schemas CR Structure

kineticaclusterschemas.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an
+# object. Servers should convert recognized schemas to the latest
+# internal value, and may reject unrecognized values. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+apiVersion: app.kinetica.com/v1
+# Kind is a string value representing the REST resource this object
+# represents. Servers may infer this from the endpoint the client
+# submits requests to. Cannot be updated. In CamelCase. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+kind: KineticaClusterSchema 
+metadata: {}
+# KineticaClusterSchemaSpec defines the desired state of
+# KineticaClusterSchema
+spec: 
+  db_create_schema_request:
+    # Name - the name of the resource group to create in the DB
+    name: string
+    # Optional parameters. The default value is an empty map (
+    # {} ). Supported Parameters: "max_cpu_concurrency", "max_data"
+    options: {}
+  # RingName is the name of the kinetica ring that this user belongs
+  # to.
+  ringName: string
+# KineticaClusterSchemaStatus defines the observed state of
+# KineticaClusterSchema
+status: 
+  provisioned: string
+
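As an illustration of the fields above, this minimal KineticaClusterSchema CR creates a schema in the DB. The schema and ring names are placeholders.

kineticaclusterschema_sample.yaml (illustrative sketch)
apiVersion: app.kinetica.com/v1
kind: KineticaClusterSchema
metadata:
  name: create-sales-schema        # hypothetical CR name
  namespace: gpudb
spec:
  ringName: default                # placeholder ring name
  db_create_schema_request:
    name: sales                    # schema name to create in the DB (placeholder)
    options: {}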

\ No newline at end of file diff --git a/Reference/kinetica_cluster_users/index.html b/Reference/kinetica_cluster_users/index.html new file mode 100644 index 0000000..3258184 --- /dev/null +++ b/Reference/kinetica_cluster_users/index.html @@ -0,0 +1,91 @@ + Kinetica Cluster Users Reference - Kinetica for Kubernetes
Skip to content

Kinetica Cluster Users CRD Reference

Full KineticaUser CR Structure

kineticausers.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an
+# object. Servers should convert recognized schemas to the latest
+# internal value, and may reject unrecognized values. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+apiVersion: app.kinetica.com/v1
+# Kind is a string value representing the REST resource this object
+# represents. Servers may infer this from the endpoint the client
+# submits requests to. Cannot be updated. In CamelCase. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+kind: KineticaUser
+metadata: {}
+# KineticaUserSpec defines the desired state of KineticaUser
+spec:
+  # Action field contains UserActionEnum field indicating whether it is
+  # an Upsert or Change Password operation. For deletion delete the
+  # KineticaUser CR and a finalizer will remove the user from LDAP.
+  action: string
+  # ChangePassword specific fields
+  changePassword:
+    # PasswordSecret - Not the actual user password but the name of a
+    # Kubernetes Secret containing a Data element with a Password
+    # attribute. The secret is removed on user creation. Must be in the
+    # same namespace as the Kinetica Cluster. Must contain the
+    # following fields: - oldPassword newPassword
+    passwordSecret: string
+  # Debug debug the call
+  debug: false
+  # GroupID - Organisation or Team Id the user belongs to.
+  groupId: string
+  # Create the user in Reveal
+  reveal: true
+  # RingName is the name of the kinetica ring that this user belongs
+  # to.
+  ringName: string
+  # UID is the username (not UUID UID).
+  uid: string
+  # Upsert specific fields
+  upsert:
+    # CreateHomeDirectory - when true, a home directory in KiFS is
+    # created for this user The default value is true. The supported
+    # values are: true false
+    createHomeDirectory: true
+    # DB Memory user data size limit
+    dataLimit: "10Gi"
+    # DisplayName
+    displayName: string
+    # GivenName is Firstname also called Christian name. givenName in
+    # LDAP terms.
+    givenName: string
+    # KIFs user data size limit
+    kifsDataLimit: "2Gi"
+    # LastName refers to last name or surname. sn in LDAP terms.
+    lastName: string
+    # Options -
+    options: {}
+    # PasswordSecret - Not the actual user password but the name of a
+    # Kubernetes Secret containing a Data element with a Password
+    # attribute. The secret is removed on user creation. Must be in the
+    # same namespace as the Kinetica Cluster.
+    passwordSecret: string
+    # UPN or UserPrincipalName - e.g. guyt@cp.com  
+    # Looks like an email address.
+    userPrincipalName: string
+  # UUID is the user unique UUID from the Control Plane.
+  uuid: string
+# KineticaUserStatus defines the observed state of KineticaUser
+status:
+  # DBStringResponse - The GPUdb server embeds the endpoint response
+  # inside a standard response structure which contains status
+  # information and the actual response to the query.
+  db_response:
+    data: string
+    # This embedded JSON represents the result of the endpoint
+    data_str: string
+    # API Call Specific
+    data_type: string
+    # Empty if success or an error message
+    message: string
+    # 'OK' or 'ERROR'
+    status: string
+  ldap_response: string
+  reveal_admin: string
+
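Here is a minimal, illustrative KineticaUser CR performing an upsert. All names, the password Secret, and the action value are placeholders; in particular, the exact UserActionEnum string accepted by the operator should be verified against the CRD, and the referenced Secret must already exist in the same namespace.

kineticauser_sample.yaml (illustrative sketch)
apiVersion: app.kinetica.com/v1
kind: KineticaUser
metadata:
  name: jdoe-user                  # hypothetical CR name
  namespace: gpudb
spec:
  action: upsert                   # assumed UserActionEnum value for create/update
  uid: jdoe                        # username (placeholder)
  ringName: default                # placeholder ring name
  reveal: true
  upsert:
    givenName: Jane
    lastName: Doe
    displayName: Jane Doe
    userPrincipalName: jdoe@example.com
    createHomeDirectory: true
    dataLimit: "10Gi"
    kifsDataLimit: "2Gi"
    passwordSecret: jdoe-password  # placeholder Secret containing a Password attribute
    options: {}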

\ No newline at end of file diff --git a/Reference/kinetica_clusters/index.html b/Reference/kinetica_clusters/index.html new file mode 100644 index 0000000..7693cec --- /dev/null +++ b/Reference/kinetica_clusters/index.html @@ -0,0 +1,6383 @@ + Kinetica Clusters Reference - Kinetica for Kubernetes
Skip to content

Kinetica Clusters CRD Reference

This page covers the Kinetica Cluster Kubernetes CRD.

Full KineticaCluster CR Structure

kineticaclusters.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an
+# object. Servers should convert recognized schemas to the latest
+# internal value, and may reject unrecognized values. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+apiVersion: app.kinetica.com/v1
+# Kind is a string value representing the REST resource this object
+# represents. Servers may infer this from the endpoint the client
+# submits requests to. Cannot be updated. In CamelCase. More info:
+# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+kind: KineticaCluster 
+metadata: {}
+# KineticaClusterSpec defines the configuration for KineticaCluster DB
+spec:
+  # An optional duration after which the database is stopped and DB
+  # resources are freed
+  autoSuspend: 
+    enabled: false
+    # InactivityDuration - the duration which the cluster should be idle
+    # before auto-pausing the DB Cluster.
+    inactivityDuration: "1h"
+  # The platform infrastructure provider e.g. azure, aws, gcp, on-prem
+  # etc.
+  awsConfig:
+    # ClusterName - AWS name of the EKS Cluster. NOTE: Marked as
+    # optional but is mandatory
+    clusterName: string
+    # MarketplaceAppConfig - Amazon AWS specific DB Cluster
+    # information.
+    marketplaceApp:
+      # KmsKeyId - Key for disk encryption. The full Amazon Resource
+      # Name of the key to use when encrypting the volume. If none is
+      # supplied but encrypted is true, a key is generated by AWS. See
+      # AWS docs for valid ARN value.
+      kmsKeyId: string
+      # ProductCode - used to uniquely identify a product in AWS
+      # Marketplace. The product code should be the same as the one
+      # used during the publishing of a new product.
+      productCode: "1cmucncoyp9pi8xjdwqjimlf8"
+      # PublicKeyVersion - Public Key Version provided by AWS
+      # Marketplace
+      publicKeyVersion: 1
+      # ParentResourceGroup - The resource group of the ManagedApp
+      # itself.
+      # ResourceId - Identifier of the resource against which usage is
+      # emitted. Format is GUID (UUID).
+      # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json
+      # Optional only if exactly one of ResourceId or ResourceUri is
+      # specified.
+      resourceId: string
+    # NodeGroups - List of NodeGroups for this cluster MUST contain at
+    # least one of the following keys: - 
+    #   * none
+    #   * infra 
+    #   * infra_public 
+    #   * compute 
+    #   * compute-gpu 
+    #   * aaw_cpu 
+    # NOTE: Marked as optional but is mandatory
+    nodeGroups: {}
+    # OTELTracing - OpenTelemetry Tracing Specifics
+    otelTracing:
+      # Endpoint - Set the OpenTelemetry reporting Endpoint
+      endpoint: ""
+      # Key - KineticaCluster specific Key required to send Telemetry
+      # information to the Cloud
+      key: string
+      # MaxBatchSize - Telemetry Reporting Interval to use in seconds.
+      maxBatchInterval: 10
+      # MaxBatchSize - Telemetry Maximum Batch Size to send.
+      maxBatchSize: 1024
+  # The platform infrastructure provider e.g. azure, aws, gcp, on-prem
+  # etc.
+  azureConfig:
+    # App Insights Specifics
+    appInsights:
+      # Endpoint - Override the default AppInsights reporting Endpoint
+      endpoint: ""
+      # Key - KineticaCluster specific Application Insights Key required
+      # to send Telemetry information to the Azure Portal
+      key: string
+      # MaxBatchSize - Telemetry Reporting Interval to use in seconds.
+      maxBatchInterval: 10
+      # MaxBatchSize - Telemetry Maximum Batch Size to send.
+      maxBatchSize: 1024
+    # AzureManagedAppConfig - Microsoft Azure specific DB Cluster
+    # information.
+    managedApp:
+      # DiskEncryptionSetID - By default, managed disks use
+      # platform-managed encryption keys. All managed disks, snapshots,
+      # images, and data written to existing managed disks are
+      # automatically encrypted-at-rest with platform-managed keys. You
+      # can choose to manage encryption at the level of each managed
+      # disk, with your own keys. When you specify a customer-managed
+      # key, that key is used to protect and control access to the key
+      # that encrypts your data. Customer-managed keys offer greater
+      # flexibility to manage access controls.
+      diskEncryptionSetId: string
+      # PlanId - The Azure Marketplace Plan/Offer identifier selected by
+      # the customer for this DB cluster e.g. BYOL, Pay-As-You-Go etc.
+      planId: string
+      # ParentResourceGroup - The resource group of the ManagedApp
+      # itself.
+      # ResourceId - Identifier of the resource against which usage is
+      # emitted. Format is GUID (UUID).
+      # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json
+      # Optional only if exactly one of ResourceId or ResourceUri is
+      # specified.
+      resourceId: string
+      # ResourceUri - Identifier of the managed app resource against
+      # which usage is emitted
+      # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json
+      # Optional only if exactly one of ResourceId or ResourceUri is
+      # specified.
+      resourceUri: string
+  # Tells the operator we want to run in Debug mode.
+  debug: false
+  # Identifies the type of Kubernetes deployment.
+  deploymentType:
+    # CloudRegionEnum - The target Kubernetes type to deploy to.
+    # Supported Values are: - aws_useast_1 aws_useast_2 aws_uswest_1
+    # az_useast_1 az_uswest_1
+    region: string
+    # DeploymentTypeEnum - The type of the Deployment. Supported Values
+    # are: - Managed FreeSaaS DedicatedSaaS OnPrem
+    type: string
+  # The platform infrastructure provider e.g. azure, aws, gcp, on-prem
+  # etc.
+  devEditionConfig:
+    # Host IPv4 address. Used by KiND based Developer Edition where
+    # ingress paths set to *. Provides qualified, routable URLs to
+    # workbench.
+    hostIpAddress: ""
+  # The GAdmin Dashboard Configuration for the Kinetica Cluster.
+  gadmin:
+    # The port that GAdmin will be running on. It runs only on the head
+    # node pod in the cluster. Default: 8080
+    containerPort:
+      # Number of port to expose on the pod's IP address. This must be a
+      # valid port number, 0 < x < 65536.
+      containerPort: 1
+      # What host IP to bind the external port to.
+      hostIP: string
+      # Number of port to expose on the host. If specified, this must be
+      # a valid port number, 0 < x < 65536. If HostNetwork is
+      # specified, this must match ContainerPort. Most containers do
+      # not need this.
+      hostPort: 1
+      # If specified, this must be an IANA_SVC_NAME and unique within
+      # the pod. Each named port in a pod must have a unique name. Name
+      # for the port that can be referred to by services.
+      name: string
+      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
+      # to "TCP".
+      protocol: "TCP"
+    # The Ingress Endpoint that GAdmin will be running on.
+    ingressPath:
+      # backend defines the referenced service endpoint to which the
+      # traffic will be forwarded to.
+      backend:
+        # resource is an ObjectRef to another Kubernetes resource in the
+        # namespace of the Ingress object. If resource is specified,
+        # serviceName and servicePort must not be specified.
+        resource:
+          # APIGroup is the group for the resource being referenced. If
+          # APIGroup is not specified, the specified Kind must be in
+          # the core API group. For any other third-party types,
+          # APIGroup is required.
+          apiGroup: string
+          # Kind is the type of resource being referenced
+          kind: KineticaCluster
+          # Name is the name of resource being referenced
+          name: string
+        # serviceName specifies the name of the referenced service.
+        serviceName: string
+        # servicePort Specifies the port of the referenced service.
+        servicePort: 
+      # path is matched against the path of an incoming request.
+      # Currently it can contain characters disallowed from the
+      # conventional "path" part of a URL as defined by RFC 3986. Paths
+      # must begin with a '/' and must be present when using PathType
+      # with value "Exact" or "Prefix".
+      path: string
+      # pathType determines the interpretation of the path matching.
+      # PathType can be one of the following values: * Exact: Matches
+      # the URL path exactly. * Prefix: Matches based on a URL path
+      # prefix split by '/'. Matching is done on a path element by
+      # element basis. A path element refers is the list of labels in
+      # the path split by the '/' separator. A request is a match for
+      # path p if every p is an element-wise prefix of p of the request
+      # path. Note that if the last element of the path is a substring
+      # of the last element in request path, it is not a match
+      # (e.g. /foo/bar matches /foo/bar/baz, but does not
+      # match /foo/barbaz). * ImplementationSpecific: Interpretation of
+      # the Path matching is up to the IngressClass. Implementations
+      # can treat this as a separate PathType or treat it identically
+      # to Prefix or Exact path types. Implementations are required to
+      # support all path types. Defaults to ImplementationSpecific.
+      pathType: string
+    # Whether to enable the GAdmin Dashboard on the Cluster. Default:
+    # true
+    isEnabled: true
+  # Gaia - gaia.properties configuration
+  gaia:
+    admin:
+      # AdminLoginOnlyGpudbDown - When GPUdb is down, only allow admin
+      # user to login
+      admin_login_only_gpudb_down: true
+      # Username - We do check for admin username in various places
+      admin_username: "admin"
+      # LoginAnimationEnabled - Display any animation in login page
+      login_animation_enabled: true
+      # AdminLoginOnlyGpudbDown - Convenience settings for dev mode
+      login_bypass_enabled: false
+      # RequireStrongPassword - Convenience settings for dev mode
+      require_strong_password: true
+      # SSLTruststorePasswordScript - Display any animation in login
+      # page
+      ssl_truststore_password_script: string
+    # DemoSchema - Schema-related configuration
+    demo_schema: "demo"
+    gpudb:
+      # DataFileStringNullValue - Table import/export null value string
+      data_file_string_null_value: "\\N"
+      gpudb_ext_url: "http://127.0.0.1:8082/gpudb-0"
+      # URL - Current instance of gpudb, when running in HA mode change
+      # this to load balancer endpoint
+      gpudb_url: "http://127.0.0.1:9191"
+      # LoggingLogFileName - Which file to use when displaying logging
+      # on Cluster page.
+      logging_log_file_name: "gpudb.log"
+      # SampleRepoURL - Table import/export null value string
+      sample_repo_url: "//s3.amazonaws.com/kinetica-ce-data"
+    hm:
+      gpudb_ext_hm_url: "http://127.0.0.1:8082/gpudb-host-manager"
+      gpudb_hm_url: "http://127.0.0.1:9300"
+    http:
+      # ClientTimeout - Number of seconds for proxy request timeout
+      http_client_timeout: 3600
+      # ClientTimeoutV2 - Force override of previous default with 0 as
+      # infinite timeout
+      http_client_timeout_v2: 0
+      # TomcatPathKey - Name of folder where Tomcat apps are installed
+      tomcat_path_key: "tomcat"
+      # WebappContext - Web App context
+      webapp_context: "gadmin"
+    # GAdminIsRemote - True if the gadmin application is running on a
+    # remote machine (not on same node as gpudb). If running on a
+    # remote machine the manage options will be disabled.
+    is_remote: false
+    # KAgentCLIPath - Schema-related configuration
+    kagent_cli_path: "/opt/gpudb/kagent/bin/kagent"
+    # KIO - KIO-related configuration
+    kio:
+      kio_log_file_path: "/opt/gpudb/kitools/kio/logs/gadmin.log"
+      kio_log_level: "DEBUG"
+      kio_log_size_limit: 10485760
+    kisql:
+      # QueryResultsLimit - KiSQL limit on the number of results in each
+      # query
+      kisql_query_results_limit: 10000
+      # QueryTimezone - KiSQL TimeZoneId setting for queries
+      # (use "system" for local system time)
+      kisql_query_timezone: "GMT"
+    license:
+      # Status - Stub for license manager
+      status: "ok"
+      # Type - Stub for license manager
+      type: "unlimited"
+    # MaxConcurrentUserSessions - Session management configuration
+    max_concurrent_user_sessions: 0
+    # PublicSchema - Schema-related configuration
+    public_schema: "ki_home"
+    # RevealDBInfoFile - Path to file containing Reveal DB location
+    reveal_db_info_file: "/opt/gpudb/connectors/reveal/var/REVEAL_DB_DIR"
+    # RootSchema - Schema-related configuration
+    root_schema: "root"
+    stats:
+      # GraphanaURL -
+      graphana_url: "http://127.0.0.1:3000"
+      # GraphiteURL
+      graphite_url: "http://127.0.0.1:8181"
+      # StatsGrafanaURL - Port used to host the Grafana user interface
+      # and embeddable metric dashboards in GAdmin. Note: If this value
+      # is defaulted then it will be replaced by the name of the Stats
+      # service if it is deployed & Grafana is enabled e.g.
+      # cluster-1234.gpudb.svc.cluster.local
+      stats_grafana_url: "http://127.0.0.1:9091"
+  # https://github.com/kubernetes-sigs/controller-tools/issues/622 if we
+  # want to set usePools as false, need to set defaults GPUDBCluster is
+  # an instance of a Kinetica DB Cluster i.e. it's StatefulSet,
+  # Service, Ingress, ConfigMap etc.
+  gpudbCluster:
+    # Affinity - is a group of affinity scheduling rules.
+    affinity:
+      # Describes node affinity scheduling rules for the pod.
+      nodeAffinity:
+        # The scheduler will prefer to schedule pods to nodes that
+        # satisfy the affinity expressions specified by this field, but
+        # it may choose a node that violates one or more of the
+        # expressions. The node that is most preferred is the one with
+        # the greatest sum of weights, i.e. for each node that meets
+        # all of the scheduling requirements (resource request,
+        # requiredDuringScheduling affinity expressions, etc.), compute
+        # a sum by iterating through the elements of this field and
+        # adding "weight" to the sum if the node matches the
+        # corresponding matchExpressions; the node(s) with the highest
+        # sum are the most preferred.
+        preferredDuringSchedulingIgnoredDuringExecution:
+        - preference:
+            # A list of node selector requirements by node's labels.
+            matchExpressions:
+            - key: string
+              # Represents a key's relationship to a set of values.
+              # Valid operators are In, NotIn, Exists, DoesNotExist.
+              # Gt, and Lt.
+              operator: string
+              # An array of string values. If the operator is In or
+              # NotIn, the values array must be non-empty. If the
+              # operator is Exists or DoesNotExist, the values array
+              # must be empty. If the operator is Gt or Lt, the values
+              # array must have a single element, which will be
+              # interpreted as an integer. This array is replaced
+              # during a strategic merge patch.
+              values: ["string"]
+            # A list of node selector requirements by node's fields.
+            matchFields:
+            - key: string
+              # Represents a key's relationship to a set of values.
+              # Valid operators are In, NotIn, Exists, DoesNotExist.
+              # Gt, and Lt.
+              operator: string
+              # An array of string values. If the operator is In or
+              # NotIn, the values array must be non-empty. If the
+              # operator is Exists or DoesNotExist, the values array
+              # must be empty. If the operator is Gt or Lt, the values
+              # array must have a single element, which will be
+              # interpreted as an integer. This array is replaced
+              # during a strategic merge patch.
+              values: ["string"]
+          # Weight associated with matching the corresponding
+          # nodeSelectorTerm, in the range 1-100.
+          weight: 1
+        # If the affinity requirements specified by this field are not
+        # met at scheduling time, the pod will not be scheduled onto
+        # the node. If the affinity requirements specified by this
+        # field cease to be met at some point during pod execution
+        # (e.g. due to an update), the system may or may not try to
+        # eventually evict the pod from its node.
+        requiredDuringSchedulingIgnoredDuringExecution:
+          # Required. A list of node selector terms. The terms are
+          # ORed.
+          nodeSelectorTerms:
+          - matchExpressions:
+            - key: string
+              # Represents a key's relationship to a set of values.
+              # Valid operators are In, NotIn, Exists, DoesNotExist,
+              # Gt, and Lt.
+              operator: string
+              # An array of string values. If the operator is In or
+              # NotIn, the values array must be non-empty. If the
+              # operator is Exists or DoesNotExist, the values array
+              # must be empty. If the operator is Gt or Lt, the values
+              # array must have a single element, which will be
+              # interpreted as an integer. This array is replaced
+              # during a strategic merge patch.
+              values: ["string"]
+            # A list of node selector requirements by node's fields.
+            matchFields:
+            - key: string
+              # Represents a key's relationship to a set of values.
+              # Valid operators are In, NotIn, Exists, DoesNotExist,
+              # Gt, and Lt.
+              operator: string
+              # An array of string values. If the operator is In or
+              # NotIn, the values array must be non-empty. If the
+              # operator is Exists or DoesNotExist, the values array
+              # must be empty. If the operator is Gt or Lt, the values
+              # array must have a single element, which will be
+              # interpreted as an integer. This array is replaced
+              # during a strategic merge patch.
+              values: ["string"]
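+      # A minimal illustrative sketch (the label key shown is an
+      # assumption, not something this chart defines): prefer nodes
+      # that carry a "kinetica.com/node-pool=gpu" label.
+      #
+      #   nodeAffinity:
+      #     preferredDuringSchedulingIgnoredDuringExecution:
+      #     - weight: 50
+      #       preference:
+      #         matchExpressions:
+      #         - key: kinetica.com/node-pool
+      #           operator: In
+      #           values: ["gpu"]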
+      # Describes pod affinity scheduling rules (e.g. co-locate this pod
+      # in the same node, zone, etc. as some other pod(s)).
+      podAffinity:
+        # The scheduler will prefer to schedule pods to nodes that
+        # satisfy the affinity expressions specified by this field, but
+        # it may choose a node that violates one or more of the
+        # expressions. The node that is most preferred is the one with
+        # the greatest sum of weights, i.e. for each node that meets
+        # all of the scheduling requirements (resource request,
+        # requiredDuringScheduling affinity expressions, etc.), compute
+        # a sum by iterating through the elements of this field and
+        # adding "weight" to the sum if the node has pods which matches
+        # the corresponding podAffinityTerm; the node(s) with the
+        # highest sum are the most preferred.
+        preferredDuringSchedulingIgnoredDuringExecution:
+        - podAffinityTerm:
+            # A label query over a set of resources, in this case pods.
+            labelSelector:
+              # matchExpressions is a list of label selector
+              # requirements. The requirements are ANDed.
+              matchExpressions:
+              - key: string
+                # operator represents a key's relationship to a set of
+                # values. Valid operators are In, NotIn, Exists and
+                # DoesNotExist.
+                operator: string
+                # values is an array of string values. If the operator
+                # is In or NotIn, the values array must be non-empty.
+                # If the operator is Exists or DoesNotExist, the values
+                # array must be empty. This array is replaced during a
+                # strategic merge patch.
+                values: ["string"]
+              # matchLabels is a map of {key,value} pairs. A single
+              # {key,value} in the matchLabels map is equivalent to an
+              # element of matchExpressions, whose key field is "key",
+              # the operator is "In", and the values array contains
+              # only "value". The requirements are ANDed.
+              matchLabels: {}
+            # A label query over the set of namespaces that the term
+            # applies to. The term is applied to the union of the
+            # namespaces selected by this field and the ones listed in
+            # the namespaces field. null selector and null or empty
+            # namespaces list means "this pod's namespace". An empty
+            # selector ({}) matches all namespaces.
+            namespaceSelector:
+              # matchExpressions is a list of label selector
+              # requirements. The requirements are ANDed.
+              matchExpressions:
+              - key: string
+                # operator represents a key's relationship to a set of
+                # values. Valid operators are In, NotIn, Exists and
+                # DoesNotExist.
+                operator: string
+                # values is an array of string values. If the operator
+                # is In or NotIn, the values array must be non-empty.
+                # If the operator is Exists or DoesNotExist, the values
+                # array must be empty. This array is replaced during a
+                # strategic merge patch.
+                values: ["string"]
+              # matchLabels is a map of {key,value} pairs. A single
+              # {key,value} in the matchLabels map is equivalent to an
+              # element of matchExpressions, whose key field is "key",
+              # the operator is "In", and the values array contains
+              # only "value". The requirements are ANDed.
+              matchLabels: {}
+            # namespaces specifies a static list of namespace names that
+            # the term applies to. The term is applied to the union of
+            # the namespaces listed in this field and the ones selected
+            # by namespaceSelector. null or empty namespaces list and
+            # null namespaceSelector means "this pod's namespace".
+            namespaces: ["string"]
+            # This pod should be co-located (affinity) or not
+            # co-located (anti-affinity) with the pods matching the
+            # labelSelector in the specified namespaces, where
+            # co-located is defined as running on a node whose value of
+            # the label with key topologyKey matches that of any node
+            # on which any of the selected pods is running. Empty
+            # topologyKey is not allowed.
+            topologyKey: string
+          # weight associated with matching the corresponding
+          # podAffinityTerm, in the range 1-100.
+          weight: 1
+        # If the affinity requirements specified by this field are not
+        # met at scheduling time, the pod will not be scheduled onto
+        # the node. If the affinity requirements specified by this
+        # field cease to be met at some point during pod execution
+        # (e.g. due to a pod label update), the system may or may not
+        # try to eventually evict the pod from its node. When there are
+        # multiple elements, the lists of nodes corresponding to each
+        # podAffinityTerm are intersected, i.e. all terms must be
+        # satisfied.
+        requiredDuringSchedulingIgnoredDuringExecution:
+        - labelSelector:
+            # matchExpressions is a list of label selector requirements.
+            # The requirements are ANDed.
+            matchExpressions:
+            - key: string
+              # operator represents a key's relationship to a set of
+              # values. Valid operators are In, NotIn, Exists and
+              # DoesNotExist.
+              operator: string
+              # values is an array of string values. If the operator is
+              # In or NotIn, the values array must be non-empty. If the
+              # operator is Exists or DoesNotExist, the values array
+              # must be empty. This array is replaced during a
+              # strategic merge patch.
+              values: ["string"]
+            # matchLabels is a map of {key,value} pairs. A single
+            # {key,value} in the matchLabels map is equivalent to an
+            # element of matchExpressions, whose key field is "key",
+            # the operator is "In", and the values array contains
+            # only "value". The requirements are ANDed.
+            matchLabels: {}
+          # A label query over the set of namespaces that the term
+          # applies to. The term is applied to the union of the
+          # namespaces selected by this field and the ones listed in
+          # the namespaces field. null selector and null or empty
+          # namespaces list means "this pod's namespace". An empty
+          # selector ({}) matches all namespaces.
+          namespaceSelector:
+            # matchExpressions is a list of label selector requirements.
+            # The requirements are ANDed.
+            matchExpressions:
+            - key: string
+              # operator represents a key's relationship to a set of
+              # values. Valid operators are In, NotIn, Exists and
+              # DoesNotExist.
+              operator: string
+              # values is an array of string values. If the operator is
+              # In or NotIn, the values array must be non-empty. If the
+              # operator is Exists or DoesNotExist, the values array
+              # must be empty. This array is replaced during a
+              # strategic merge patch.
+              values: ["string"]
+            # matchLabels is a map of {key,value} pairs. A single
+            # {key,value} in the matchLabels map is equivalent to an
+            # element of matchExpressions, whose key field is "key",
+            # the operator is "In", and the values array contains
+            # only "value". The requirements are ANDed.
+            matchLabels: {}
+          # namespaces specifies a static list of namespace names that
+          # the term applies to. The term is applied to the union of
+          # the namespaces listed in this field and the ones selected
+          # by namespaceSelector. null or empty namespaces list and
+          # null namespaceSelector means "this pod's namespace".
+          namespaces: ["string"]
+          # This pod should be co-located (affinity) or not co-located
+          # (anti-affinity) with the pods matching the labelSelector in
+          # the specified namespaces, where co-located is defined as
+          # running on a node whose value of the label with key
+          # topologyKey matches that of any node on which any of the
+          # selected pods is running. Empty topologyKey is not
+          # allowed.
+          topologyKey: string
+      # Describes pod anti-affinity scheduling rules (e.g. avoid putting
+      # this pod in the same node, zone, etc. as some other pod(s)).
+      podAntiAffinity:
+        # The scheduler will prefer to schedule pods to nodes that
+        # satisfy the anti-affinity expressions specified by this
+        # field, but it may choose a node that violates one or more of
+        # the expressions. The node that is most preferred is the one
+        # with the greatest sum of weights, i.e. for each node that
+        # meets all of the scheduling requirements (resource request,
+        # requiredDuringScheduling anti-affinity expressions, etc.),
+        # compute a sum by iterating through the elements of this field
+        # and adding "weight" to the sum if the node has pods which
+        # matches the corresponding podAffinityTerm; the node(s) with
+        # the highest sum are the most preferred.
+        preferredDuringSchedulingIgnoredDuringExecution:
+        - podAffinityTerm:
+            # A label query over a set of resources, in this case pods.
+            labelSelector:
+              # matchExpressions is a list of label selector
+              # requirements. The requirements are ANDed.
+              matchExpressions:
+              - key: string
+                # operator represents a key's relationship to a set of
+                # values. Valid operators are In, NotIn, Exists and
+                # DoesNotExist.
+                operator: string
+                # values is an array of string values. If the operator
+                # is In or NotIn, the values array must be non-empty.
+                # If the operator is Exists or DoesNotExist, the values
+                # array must be empty. This array is replaced during a
+                # strategic merge patch.
+                values: ["string"]
+              # matchLabels is a map of {key,value} pairs. A single
+              # {key,value} in the matchLabels map is equivalent to an
+              # element of matchExpressions, whose key field is "key",
+              # the operator is "In", and the values array contains
+              # only "value". The requirements are ANDed.
+              matchLabels: {}
+            # A label query over the set of namespaces that the term
+            # applies to. The term is applied to the union of the
+            # namespaces selected by this field and the ones listed in
+            # the namespaces field. null selector and null or empty
+            # namespaces list means "this pod's namespace". An empty
+            # selector ({}) matches all namespaces.
+            namespaceSelector:
+              # matchExpressions is a list of label selector
+              # requirements. The requirements are ANDed.
+              matchExpressions:
+              - key: string
+                # operator represents a key's relationship to a set of
+                # values. Valid operators are In, NotIn, Exists and
+                # DoesNotExist.
+                operator: string
+                # values is an array of string values. If the operator
+                # is In or NotIn, the values array must be non-empty.
+                # If the operator is Exists or DoesNotExist, the values
+                # array must be empty. This array is replaced during a
+                # strategic merge patch.
+                values: ["string"]
+              # matchLabels is a map of {key,value} pairs. A single
+              # {key,value} in the matchLabels map is equivalent to an
+              # element of matchExpressions, whose key field is "key",
+              # the operator is "In", and the values array contains
+              # only "value". The requirements are ANDed.
+              matchLabels: {}
+            # namespaces specifies a static list of namespace names that
+            # the term applies to. The term is applied to the union of
+            # the namespaces listed in this field and the ones selected
+            # by namespaceSelector. null or empty namespaces list and
+            # null namespaceSelector means "this pod's namespace".
+            namespaces: ["string"]
+            # This pod should be co-located (affinity) or not
+            # co-located (anti-affinity) with the pods matching the
+            # labelSelector in the specified namespaces, where
+            # co-located is defined as running on a node whose value of
+            # the label with key topologyKey matches that of any node
+            # on which any of the selected pods is running. Empty
+            # topologyKey is not allowed.
+            topologyKey: string
+          # weight associated with matching the corresponding
+          # podAffinityTerm, in the range 1-100.
+          weight: 1
+        # If the anti-affinity requirements specified by this field are
+        # not met at scheduling time, the pod will not be scheduled
+        # onto the node. If the anti-affinity requirements specified by
+        # this field cease to be met at some point during pod
+        # execution (e.g. due to a pod label update), the system may or
+        # may not try to eventually evict the pod from its node. When
+        # there are multiple elements, the lists of nodes corresponding
+        # to each podAffinityTerm are intersected, i.e. all terms must
+        # be satisfied.
+        requiredDuringSchedulingIgnoredDuringExecution:
+        - labelSelector:
+            # matchExpressions is a list of label selector requirements.
+            # The requirements are ANDed.
+            matchExpressions:
+            - key: string
+              # operator represents a key's relationship to a set of
+              # values. Valid operators are In, NotIn, Exists and
+              # DoesNotExist.
+              operator: string
+              # values is an array of string values. If the operator is
+              # In or NotIn, the values array must be non-empty. If the
+              # operator is Exists or DoesNotExist, the values array
+              # must be empty. This array is replaced during a
+              # strategic merge patch.
+              values: ["string"]
+            # matchLabels is a map of {key,value} pairs. A single
+            # {key,value} in the matchLabels map is equivalent to an
+            # element of matchExpressions, whose key field is "key",
+            # the operator is "In", and the values array contains
+            # only "value". The requirements are ANDed.
+            matchLabels: {}
+          # A label query over the set of namespaces that the term
+          # applies to. The term is applied to the union of the
+          # namespaces selected by this field and the ones listed in
+          # the namespaces field. null selector and null or empty
+          # namespaces list means "this pod's namespace". An empty
+          # selector ({}) matches all namespaces.
+          namespaceSelector:
+            # matchExpressions is a list of label selector requirements.
+            # The requirements are ANDed.
+            matchExpressions:
+            - key: string
+              # operator represents a key's relationship to a set of
+              # values. Valid operators are In, NotIn, Exists and
+              # DoesNotExist.
+              operator: string
+              # values is an array of string values. If the operator is
+              # In or NotIn, the values array must be non-empty. If the
+              # operator is Exists or DoesNotExist, the values array
+              # must be empty. This array is replaced during a
+              # strategic merge patch.
+              values: ["string"]
+            # matchLabels is a map of {key,value} pairs. A single
+            # {key,value} in the matchLabels map is equivalent to an
+            # element of matchExpressions, whose key field is "key",
+            # the operator is "In", and the values array contains
+            # only "value". The requirements are ANDed.
+            matchLabels: {}
+          # namespaces specifies a static list of namespace names that
+          # the term applies to. The term is applied to the union of
+          # the namespaces listed in this field and the ones selected
+          # by namespaceSelector. null or empty namespaces list and
+          # null namespaceSelector means "this pod's namespace".
+          namespaces: ["string"]
+          # This pod should be co-located (affinity) or not co-located
+          # (anti-affinity) with the pods matching the labelSelector in
+          # the specified namespaces, where co-located is defined as
+          # running on a node whose value of the label with key
+          # topologyKey matches that of any node on which any of the
+          # selected pods is running. Empty topologyKey is not
+          # allowed.
+          topologyKey: string
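+    # An illustrative sketch (the "app: kinetica-db" label is assumed,
+    # not taken from this chart): spread DB pods across nodes by
+    # requiring that pods carrying the same label never share a
+    # hostname.
+    #
+    #   podAntiAffinity:
+    #     requiredDuringSchedulingIgnoredDuringExecution:
+    #     - labelSelector:
+    #         matchLabels:
+    #           app: kinetica-db
+    #       topologyKey: kubernetes.io/hostname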
+    # Annotations - Annotations labels to be applied to the Statefulset
+    # DB pods.
+    annotations: {}
+    # The name of the cluster to form.
+    clusterName: string
+    # ClusterSize - the T-Shirt size of the Kinetica DB Cluster.
+    clusterSize:
+      # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster, i.e. a
+      # simple-to-understand T-Shirt size scheme representing the number
+      # of nodes in the cluster. It does not identify the size of the
+      # cloud provider nodes. For node size see ClusterTypeEnum.
+      # Supported values are: XS, S, M, L, XL, XXL, XXXL
+      tshirtSize: string
+      # ClusterTypeEnum - An Enum of the node types of a KineticaCluster
+      # e.g. CPU, GPU along with the Cloud Provider node size e.g. size
+      # of the VM.
+      tshirtType: string
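+    # For example (illustrative only; "M" follows the size scheme
+    # above, while the tshirtType value is an assumption to be checked
+    # against your release's documentation):
+    #
+    #   clusterSize:
+    #     tshirtSize: "M"
+    #     tshirtType: "GPU"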
+    # Config Kinetica DB Configuration Object
+    config:
+      ai:
+        apiKey: string
+        # Provider - AI API provider type. The default is "sqlgpt"
+        apiProvider: "sqlgpt"
+        apiUrl: string
+      # AlertManagerConfig
+      alertManager:
+        # AlertManager IP address (run on head node) default port
+        # is "2003"
+        ipAddress: "${gaia.host0.address}" port: 2003
+      # AlertConfig
+      alerts:
+        alertDiskAbsolute: [integer]
+        # Trigger an alert if available disk space on any given node
+        # falls to or below a certain threshold, either absolute
+        # (number of bytes) or percentage of total disk space. For
+        # multiple thresholds, use a comma-delimited list of values.
+        alertDiskPercentage: [1,5,10,20]
+        # Trigger generic error message alerts, in cases of various
+        # significant runtime errors.
+        alertErrorMessages: true
+        # Executable to run when an alert condition occurs. This
+        # executable will only be run on **rank0** and does not need to
+        # be present on other nodes.
+        alertExe: ""
+        # Trigger an alert whenever the status of a host or rank
+        # changes.
+        alertHostStatus: true
+        # Optionally, filter host alerts for a comma-delimited list of
+        # statuses. If a filter is empty, every host status change will
+        # trigger an alert.
+        alertHostStatusFilter: "fatal_init_error"
+        # The maximum number of triggered alerts guaranteed to be stored
+        # at any given time. When this number of alerts is exceeded,
+        # older alerts may be discarded to stay within the limit.
+        alertMaxStoredAlerts: 100
+        alertMemoryAbsolute: [integer]
+        # Trigger an alert if available memory on any given node falls
+        # to or below a certain threshold, either absolute (number of
+        # bytes) or percentage of total memory. For multiple
+        # thresholds, use a comma-delimited list of values.
+        alertMemoryPercentage: [1,5,10,20]
+        # Trigger an alert if a CUDA error occurs on a rank.
+        alertRankCudaError: true
+        # Trigger alerts when the fallback allocator is employed; e.g.,
+        # host memory is allocated because GPU allocation fails. NOTE:
+        # To prevent a flooding of alerts, if a fallback allocator is
+        # triggered in bursts, not every use will generate an alert.
+        alertRankFallbackAllocator: true
+        # Trigger an alert whenever the status of a rank changes.
+        alertRankStatus: true
+        # Optionally, filter rank alerts for a comma-delimited list of
+        # statuses. If a filter is empty, every rank status change will
+        # trigger an alert.
+        alertRankStatusFilter:
+          ["fatal_init_error","not_responding","terminated"]
+        # Enable the alerting system.
+        enableAlerts: true
+        # Directory where the trace event and summary files are stored.
+        # Must be a fully qualified path with sufficient free space for
+        # required volume of data.
+        traceDirectory: "/tmp"
+        # The maximum number of trace events to be collected
+        traceEventBufferSize: 1000000
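+        # A minimal illustrative sketch using the fields above
+        # (threshold values are examples, not recommendations):
+        #
+        #   alerts:
+        #     enableAlerts: true
+        #     alertDiskPercentage: [5,10]
+        #     alertMemoryPercentage: [5,10]
+        #     alertRankStatus: true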
+      # Audit - This section controls the request auditor, which will
+      # audit all requests received by the server in full or in part
+      # based on the settings.
+      audit:
+        # Controls whether the body of each request is audited (in JSON
+        # format). If 'enable_audit' is "false" this setting has no
+        # effect. NOTE: For requests that insert data records, this
+        # setting does not control the auditing of the records being
+        # inserted, only the rest of the request body; see 'audit_data'
+        # below to control this. audit_body = false
+        body: false
+        # Controls whether records being inserted are audited (in JSON
+        # format) for requests that insert data records. If
+        # either 'enable_audit' or 'audit_body' is "false", this
+        # setting has no effect. NOTE: Enabling this setting during
+        # bulk ingestion of data will rapidly produce very large audit
+        # logs and may cause disk space exhaustion; use with caution.
+        # audit_data = false
+        data: false
+        # Controls whether request auditing is enabled. If set
+        # to "true", the following information is audited for every
+        # request: Job ID, URI, User, and Client Address. The settings
+        # below control whether additional information about each
+        # request is also audited. If set to "false", all auditing is
+        # disabled. enable_audit = false
+        enable: false
+        # Controls whether HTTP headers are audited for each request.
+        # If 'enable_audit' is "false" this setting has no effect.
+        # audit_headers = false
+        headers: true
+        # Controls whether the above audit settings can be altered at
+        # runtime via the /alter/system/properties endpoint. In a
+        # secure environment where auditing is required at all times,
+        # this should be set to "true" to lock the settings to what is
+        # set in this file. lock_audit = false
+        lock: false
+        # Controls whether response information is audited for each
+        # request. If 'enable_audit' is "false" this setting has no
+        # effect. audit_response = false
+        response: false
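+        # An illustrative sketch: audit requests and their HTTP headers
+        # but not request bodies or inserted records, leaving the
+        # settings changeable at runtime (example values only):
+        #
+        #   audit:
+        #     enable: true
+        #     headers: true
+        #     body: false
+        #     data: false
+        #     lock: false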
+      # EventConfig
+      events:
+        # Run a statistics server to collect information about Kinetica
+        # and the machines it runs on.
+        internal: true
+        # Statistics server IP address (run on head node) default port
+        # is "2003"
+        ipAddress: "${gaia.host0.address}" port: 2003
+        # Statistics server namespace - should be a machine identifier
+        statsServerNamespace: "gpudb"
+      # ExternalFilesConfig
+      externalFiles:
+        # Defines the directory from which external files can be loaded
+        directory: "/opt/gpudb/persist"
+        # Parquet files compression type. egress_parquet_compression =
+        # snappy
+        egressParquetCompression: "snappy"
+        # Max file size (in MB) to allow saving to a single file. May be
+        # overridden by target limitations. egress_single_file_max_size
+        # = 100
+        egressSingleFileMaxSize: "100"
+        # Maximum number of simultaneous threads allocated to a given
+        # external file read request, on each rank. Note that thread
+        # allocation may also be limited by resource group limits, the
+        # subtask_concurrency_limit setting, or system load.
+        readerNumTasks: "-1"
+      # GeneralConfig - the root of the gpudb.conf configuration in the
+      # CRD
+      general:
+        # Timeout (in seconds) to wait for a rank to start during a
+        # cluster event (e.g. failover) before the event is considered
+        # failed.
+        clusterEventTimeoutStartupRank: "300"
+        # Enable (if "true") multiple kernels to run concurrently on the
+        # same GPU
+        concurrentKernelExecution: true
+        # Time-to-live in minutes of non-protected tables before they
+        # are automatically deleted from the database.
+        defaultTTL: "20"
+        # Disallow the /clear/table request to clear all tables.
+        disableClearAll: true
+        # Enable overlapped-equi-join filters
+        enableOverlappedEquiJoin: true
+        # Enable predicate-equi-join filter plan type
+        enablePredicateEquiJoin: true
+        # If "true" then all filter execution will be host-only
+        # (i.e. CPU). This can be useful for high-concurrency
+        # situations and when PCIe bandwidth is a limiting factor.
+        forceHostFilterExecution: false
+        # Maximum number of kernels that can be running at the same time
+        # on a given GPU. Set to "0" for no limit. Only takes effect
+        # if 'concurrent_kernel_execution' is "true"
+        maxConcurrentKernels: "0"
+        # Maximum number of records that data retrieval requests such
+        # as /get/records and /aggregate/groupby will return per
+        # request.
+        maxGetRecordsSize: 20000
+        # Set an optional executable command that will be run once when
+        # Kinetica is ready for client requests. This can be used to
+        # perform any initialization logic that needs to be run before
+        # clients connect. It will be run as the "gpudb" user, so you
+        # must ensure that any required permissions are set on the file
+        # to allow it to be executed.  If the command cannot be
+        # executed or returns a non-zero error code, then Kinetica will
+        # be stopped.  Output from the startup script will be logged
+        # to "/opt/gpudb/core/logs/gpudb-on-start.log" (and its dated
+        # relatives).  The "gpudb_env.sh" script is run directly before
+        # the command, so the path will be set to include the supplied
+        # Python runtime. Example: on_startup_script
+        # = /home/gpudb/on-start.sh param1 param2 ...
+        onStartupScript: ""
+        # Size in bytes of the pinned memory pool per-rank process to
+        # speed up copying data to the GPU.  Set to "0" to disable.
+        pinnedMemoryPoolSize: 2000000000
+        # Tables and collections with these names will not be deleted
+        # (comma separated).
+        protectedSets: "MASTER,_MASTER,_DATASOURCE"
+        # Timeout (in minutes) for filter-type requests
+        requestTimeout: "20"
+        # Timeout (in seconds) to wait for a rank to exit gracefully
+        # before it is force-killed. Machines with slow disk drives may
+        # require longer times and data may be lost if a drive is not
+        # responsive.
+        timeoutShutdownRank: "300"
+        # Timeout (in seconds) to wait for each database subsystem to
+        # exit gracefully before it is force-killed.
+        timeoutShutdownSubsystem: "20"
+        # Timeout (in seconds) to wait for each database subsystem to
+        # startup. Subsystems include the Query Planner, Graph,
+        # Stats, & HTTP servers, as well as external text-search
+        # ranks.
+        timeoutStartupSubsystem: "60"
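+        # An illustrative sketch of a few commonly tuned fields above
+        # (values are examples, not recommendations):
+        #
+        #   general:
+        #     defaultTTL: "60"
+        #     requestTimeout: "30"
+        #     concurrentKernelExecution: true
+        #     protectedSets: "MASTER,_MASTER,_DATASOURCE"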
+      # GraphConfig
+      graph:
+        # Enable the graph server
+        enable: false
+        # List of GPU devices to be used by graph server The server
+        # would ideally be run on a different node with dedicated GPU
+        # (s)
+        gpuList: ""
+        # Specify where the graph server should be run, defaults to head
+        # node
+        ipAddress: "${gaia.rank0_ip_address}"
+        # Maximum memory that can be used by the graph server, set
+        # to "0" to disable memory restriction
+        maxMemory: 0
+        # Port used for responses from the graph server to the database
+        # server
+        pullPort: 8100
+        # Port used for requests from the database server to the graph
+        # server
+        pushPort: 8099
+        # Number of seconds the graph client will wait for a response
+        # from the graph server
+        timeout: 1200
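+        # An illustrative sketch: run the graph server on the head node
+        # with no dedicated GPUs and the default timeout (example
+        # values only):
+        #
+        #   graph:
+        #     enable: true
+        #     ipAddress: "${gaia.rank0_ip_address}"
+        #     gpuList: ""
+        #     timeout: 1200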
+      # HardwareConfig
+      hardware:
+        # Rank0HardwareConfig
+        rank0:
+          # Specify the GPU to use for all calculations on the HTTP
+          # server node, **rank0**. NOTE: The **rank0** GPU may be
+          # shared with another rank.
+          gpu: 0
+          # Set the head HTTP **rank0** numa node(s). If left empty,
+          # there will be no thread affinity or preferred memory node.
+          # The node list may be either a single node number or a
+          # range; e.g., "1-5,7,10". If there will be many simultaneous
+          # users, specify as many nodes as possible that won't overlap
+          # the **rank1** to **rankN** worker numa nodes that the GPUs
+          # are on. If there will be few simultaneous users and WMS
+          # speed is important, choose the numa node the 'rank0.gpu' is
+          # on.
+          numaNode:
+        ranks:
+        - baseNumaNode: string
+          # Set each worker rank's preferred data numa node for CPU
+          # affinity and memory allocation.
+          # The 'rank<#>.data_numa_node' is the node or nodes that data
+          # intensive threads will run in and should be set to the same
+          # numa node that the GPU specified by the
+          # corresponding 'rank<#>.taskcalc_gpu' is on for best
+          # performance. If the 'rank<#>.taskcalc_gpu' is specified
+          # the 'rank<#>.data_numa_node' will be automatically set to
+          # the node the GPU is attached to, otherwise there will be no
+          # CPU thread affinity or preferred node for memory allocation
+          # if not specified or left empty. The node list may be a
+          # single node number or a range; e.g., "1-5,7,10".
+          dataNumaNode: string
+          # Set the GPU device for each worker rank to use. If no GPUs
+          # are specified, each rank will round-robin the available
+          # GPUs per host system. Add 'rank<#>.taskcalc_gpu' as needed
+          # for the worker ranks, where *#* ranges from "1" to the
+          # highest *rank #* among the 'rank<#>.host' parameters
+          # Example setting the GPUs to use for ranks 1 and 2:
+          #   rank1.taskcalc_gpu = 0
+          #   rank2.taskcalc_gpu = 1
+          taskCalcGPU:
+      kafka:
+        # Maximum number of records to be ingested in a single batch
+        # kafka.batch_size = 1000
+        batchSize: 1000
+        # Maximum time (milliseconds) for each poll to get records from
+        # kafka kafka.poll_timeout = 0
+        pollTimeout: 1
+        # Maximum wait time (seconds) to buffer records received from
+        # kafka before ingestion kafka.wait_time = 30
+        waitTime: 30
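+        # An illustrative sketch: larger Kafka ingest batches with a
+        # shorter buffering window (example values only):
+        #
+        #   kafka:
+        #     batchSize: 10000
+        #     pollTimeout: 1
+        #     waitTime: 10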
+      # KifsConfig
+      kifs:
+        # KIFs user data size limit
+        dataLimit: "4Gi"
+        # sudo usermod -a -G gpudb_proc <user>
+        enable: false
+        # Parent directory of the mount point for the KiFS file system.
+        # Must be a fully qualified path. The actual mount point will
+        # be a subdirectory *mount* below this directory. Note that
+        # this folder must have read, write and execute permissions for
+        # the "gpudb" user and the "gpudb_proc" group, and it cannot be
+        # a path on an NFS.
+        mountPoint: "/gpudb/kifs" useManagedCredentials: true
+      ml:
+        # Enable the ML server.
+        enable: false
+      # NetworkConfig
+      network:
+        # HAAddress - An optional address to allow inter-cluster
+        # communication with HA when 'address' is not routable between
+        # clusters.
+        HAAddress: string
+        # CompressNetworkData - Enables compression of inter-node
+        # network data transfers.
+        compressNetworkData: false
+        # EnableHTTPDProxy - Start an HTTP server as a proxy to handle
+        # LDAP and/or Kerberos authentication. Each host will run an
+        # HTTP server and access to each rank is available through
+        # http://host:8082/gpudb-1, where port "8082" is defined
+        # by 'httpd_proxy_port'. NOTE: HTTP external endpoints are not
+        # affected by the 'use_https' parameter above. If you wish to
+        # enable HTTPS, you must edit
+        # the "/opt/gpudb/httpd/conf/httpd.conf" and setup HTTPS as per
+        # the Apache httpd documentation at
+        # https://httpd.apache.org/docs/2.2/
+        enableHTTPDProxy: true
+        # EnableWorkerHTTPServers - Enable worker HTTP servers; each
+        # process runs its own server for multi-head ingest.
+        enableWorkerHTTPServers: true
+        # GlobalManagerLocalPubPort - ?
+        globalManagerLocalPubPort: 5554
+        # GlobalManagerPortOne - Internal communication ports -  Host
+        # manager status notification channel
+        globalManagerPortOne: 5552
+        # GlobalManagerPubPort - Host manager synchronization message
+        # publishing channel port
+        globalManagerPubPort: 5553
+        # HeadIPAddress - Head HTTP server IP address. Set to the
+        # publicly accessible IP address of the first
+        # process, **rank0**.
+        headIPAddress: "172.20.0.10"
+        # HeadPort - Head HTTP server port to use
+        # for 'head_ip_address'.
+        headPort: 9191
+        # HostManagerHTTPPort - HTTP port for web portal of the host
+        # manager
+        hostManagerHTTPPort: 9300
+        # HTTPAllowOrigin - Value to return via
+        # Access-Control-Allow-Origin HTTP header (for Cross-Origin
+        # Resource Sharing). Set to empty to not return the header and
+        # disallow CORS.
+        httpAllowOrigin: "*"
+        # HTTPKeepAlive - Keep HTTP connections alive between requests
+        httpKeepAlive: false
+        # HTTPDProxyPort - TCP port that the httpd auth proxy server
+        # will listen on if 'enable_httpd_proxy' is "true".
+        httpdProxyPort: 8082
+        # HTTPDProxyUseHTTPS - Set to "true" if the httpd auth proxy
+        # server is configured to use HTTPS.
+        httpdProxyUseHTTPS: false
+        # HTTPSCertFile - File containing the SSL certificate, e.g.
+        # cert.pem. If required, a self-signed certificate (expires
+        # after 10 years) can be generated via the command:
+        # openssl req -newkey rsa:2048 -new -nodes -x509 \ -days
+        # 3650 -keyout key.pem -out cert.pem
+        httpsCertFile: ""
+        # HTTPSKeyFile - File containing the SSL private key, e.g.
+        # key.pem. If required, a self-signed certificate (expires after
+        # 10 years) can be generated via the command: openssl
+        # req -newkey rsa:2048 -new -nodes -x509 \ -days 3650 -keyout
+        # key.pem -out cert.pem
+        httpsKeyFile: ""
+        # Rank0IPAddress - Internal use IP address of the head HTTP
+        # server, **rank0**. Set to either a second internal network
+        # accessible by all ranks or to '${gaia.head_ip_address}'.
+        rank0IPAddress: "${gaia.rank0.host}"
+        ranks:
+        - communicatorPort:
+            # Number of port to expose on the pod's IP address. This
+            # must be a valid port number, 0 < x < 65536.
+            containerPort: 1
+            # What host IP to bind the external port to.
+            hostIP: string
+            # Number of port to expose on the host. If specified, this
+            # must be a valid port number, 0 < x < 65536. If
+            # HostNetwork is specified, this must match ContainerPort.
+            # Most containers do not need this.
+            hostPort: 1
+            # If specified, this must be an IANA_SVC_NAME and unique
+            # within the pod. Each named port in a pod must have a
+            # unique name. Name for the port that can be referred to by
+            # services.
+            name: string
+            # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
+            # to "TCP".
+            protocol: "TCP"
+          # Specify the hosts to run each rank worker process in the
+          # cluster. For a single machine system, use "127.0.0.1", but
+          # if using two or more machines, a hostname or IP address
+          # must be specified for each rank that is accessible from the
+          # other ranks. See also 'head_ip_address'
+          # and 'rank0_ip_address'.
+          host: string
+          # Optionally, specify the worker HTTP server ports. The
+          # default is to use ('head_port' + *rank #*) for each worker
+          # process where rank number is from "1" to number of ranks
+          # in 'rank<#>.host' below.
+          httpServerPort:
+            # Number of port to expose on the pod's IP address. This
+            # must be a valid port number, 0 < x < 65536.
+            containerPort: 1
+            # What host IP to bind the external port to.
+            hostIP: string
+            # Number of port to expose on the host. If specified, this
+            # must be a valid port number, 0 < x < 65536. If
+            # HostNetwork is specified, this must match ContainerPort.
+            # Most containers do not need this.
+            hostPort: 1
+            # If specified, this must be an IANA_SVC_NAME and unique
+            # within the pod. Each named port in a pod must have a
+            # unique name. Name for the port that can be referred to by
+            # services.
+            name: string
+            # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
+            # to "TCP".
+            protocol: "TCP"
+          # This is the Kubernetes pod IP Address of the current rank
+          # which we need to populate in the operator. NOTE: Internal
+          # Attribute
+          podIP: string
+          # Optionally, specify a public URL for each worker HTTP server
+          # that clients should use to connect for multi-head
+          # operations. NOTE: If specified for any ranks, a public URL
+          # must be specified for all ranks.
+          publicURL: "https://:8082/gpudb-{{.Rank}}"
+          # Define the rank number of this rank.
+          rank: 1
+        # SetMonitorPort - Set monitor ZMQ publisher server port (-1 to
+        # disable), uses the 'head_ip_address' interface.
+        setMonitorPort: 9002
+        # SetMonitorProxyPort - Set monitor ZMQ publisher internal proxy
+        # server port ("-1" to disable), uses the 'head_ip_address'
+        # interface. IMPORTANT:  Disabling this port effectively
+        # prevents worker nodes from publishing set monitor
+        # notifications when multi-head ingest is enabled
+        # (see 'enable_worker_http_servers').
+        setMonitorProxyPort: 9003
+        # SetMonitorQueueSize - Set monitor queue size
+        setMonitorQueueSize: 1000
+        # TriggerPort - Trigger ZMQ publisher server port ("-1" to
+        # disable), uses the 'head_ip_address' interface.
+        triggerPort: -1
+        # UseHTTPS - Set to "true" to use HTTPS; if "true"
+        # then 'https_key_file' and 'https_cert_file' must be provided
+        useHttps: false
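+        # An illustrative sketch (certificate paths are assumptions):
+        # serve the head HTTP endpoint over HTTPS using the fields
+        # above.
+        #
+        #   network:
+        #     useHttps: true
+        #     httpsCertFile: "/opt/gpudb/certs/cert.pem"
+        #     httpsKeyFile: "/opt/gpudb/certs/key.pem"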
+      # PersistenceConfig
+      persistence:
+        # Removed in 7.2
+        IndexDBFlushImmediate: true
+        # DataLoadingSchema Startup data-loading scheme
+        buildMaterializedViewsOnStart: "on_demand"
+        # DataLoadingSchema Startup data-loading scheme
+        buildPKIndexOnStart: "on_demand"
+        # Target maximum data size for any one column in a chunk
+        # (8 GB) (0 = disable). chunk_column_max_memory = 8192000000
+        chunkColumnMaxMemory: 8192000000
+        # Target maximum total data size for all columns in a chunk
+        # (512 MB) (0 = disable). chunk_max_memory = 512000000
+        chunkMaxMemory: 512000000
+        # Number of records per chunk ("0" disables chunking)
+        chunkSize: 8000000
+        # Determines whether to execute kernels on host (CPU) or device
+        # (GPU). Possible values are:
+        #  * "default" : engine decides
+        #  * "host"    : execute only on the host
+        #  * "device"  : execute only on the device
+        #  * <rows>    : execute on the host if the chunked column
+        #                contains the given number of rows or fewer;
+        #                otherwise, execute on the device.
+        executionMode: "device"
+        # Removed in 7.2
+        fsyncIndexDBImmediate: true
+        # Removed in 7.2
+        fsyncInodesImmediate: true
+        # Removed in 7.2
+        fsyncMetadataImmediate: true
+        # Removed in 7.2
+        fsyncOnInterval: true
+        # Maximum number of open files for IndexedDb object file store.
+        # Removed in 7.2
+        indexDBMaxOpenFiles: 
+        # Table of contents size for IndexedDb object file store.
+        # Removed in 7.2
+        indexDBTOCSize: 
+        # Disable detection of sparse file support and use the full file
+        # length which may be an over-estimate of the actual usage in
+        # the persist tier. Removed in 7.2
+        indexDBTierByFileLength: false
+        # Startup data-loading scheme:
+        #  * "always"    : load all the data into memory before
+        #                  accepting requests
+        #  * "lazy"      : load the necessary data to start, but load
+        #                  the remainder lazily
+        #  * "on_demand" : only load data as requests use it
+        loadVectorsOnStart: "on_demand"
+        # Removed in 7.2
+        metadataFlushImmediate: true
+        # Specify a base directory to store persistence data files.
+        persistDirectory: "/opt/gpudb/persist"
+        # Whether to use synchronous persistence file writing.
+        # If "false", files will be written asynchronously. Removed in
+        # 7.2
+        persistSync: true
+        # Duration in seconds, for which persistence files will be
+        # force-synced if out of sync, once per minute. NOTE: Files are
+        # always opportunistically saved; this simply enforces a
+        # maximum time a file can be out of date. Set to a very high
+        # number to disable.
+        persistSyncTime: 5
+        # The maximum number of bytes in the shadow aggregate cache
+        shadowAggSize: 100000000
+        # Whether to enable chunk caching
+        shadowCubeEnabled: true
+        # The maximum number of bytes in the shadow filter cache
+        shadowFilterSize: 100000000
+        # Base directory to store hashed strings.
+        smsDirectory: "${gaia.persist_directory}"
+        # Maximum number of open files (per-TOM) for the SMS
+        # (string) store.
+        smsMaxOpenFiles: 128
+        # Synchronous compression: compress vectors on set compression.
+        synchronousCompression: false
+        # Directory for GPUdb to use to store temporary files. Must be a
+        # fully qualified path, have at least 100Mb of free space, and
+        # execute permission.
+        tempDirectory: "${gaia.persist_directory}/tmp"
+        # Base directory to store the text search index.
+        textIndexDirectory: "${gaia.persist_directory}"
+        # Enable checksum protection on the wal entries. New in 7.2
+        walChecksum: true
+        # Specifies how frequently wal entries are written with
+        # background sync. New in 7.2
+        walFlushFrequency: 60
+        # Maximum size of each wal segment file New in 7.2
+        walMaxSegmentSize: 500000000
+        # Approximate number of segment files to split the wal across. A
+        # minimum of two is required. The size of the wal is limited by
+        # segment_count * max_segment_size. (per rank and per tom) Set
+        # to 0 to remove a size limit on the wal itself, but still be
+        # bounded by rank tier limits. Set to -1 to have the database
+        # decide automatically per table. New in 7.2
+        walSegmentCount: 
+        # Sync mode to use when persisting wal entries to disk:
+        #  * "none"       : Disable the wal
+        #  * "background" : Wal entries are periodically written
+        #                   instead of immediately after each operation
+        #  * "flush"      : Protects entries in the event of a database
+        #                   crash
+        #  * "fsync"      : Protects entries in the event of an OS
+        #                   crash
+        # New in 7.2
+        walSyncPolicy: "flush"
+        # If true, any table that is found to be corrupt after replaying
+        # its wal at startup will automatically be truncated so that
+        # the table becomes operable. If false, the user will be
+        # responsible for resolving the issue via sql REPAIR TABLE or
+        # similar. New in 7.2
+        walTruncateCorruptTablesOnStart: true
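+        # An illustrative sketch of the 7.2 wal settings above (example
+        # values only):
+        #
+        #   persistence:
+        #     walSyncPolicy: "flush"
+        #     walFlushFrequency: 60
+        #     walChecksum: true
+        #     walTruncateCorruptTablesOnStart: false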
+      # PostgresProxy
+      postgresProxy:
+        # Postgres Proxy Server - start a Postgres (TCP) server as a
+        # proxy to handle Postgres wire-protocol messages.
+        enablePostgresProxy: false
+        # Set idle connection timeout in seconds. (default: "1200")
+        idleConnectionTimeout: 1200
+        # Set max number of queued server connections. (default: "1")
+        maxQueuedConnections: 1
+        # Set max number of server threads to spawn. (default: "64")
+        maxThreads: 64
+        # Set min number of server threads to spawn. (default: "2")
+        minThreads: 2
+        # TCP port that the Postgres proxy server will listen on
+        # if 'enable_postgres_proxy' is "true".
+        port:
+          # Number of port to expose on the pod's IP address. This must
+          # be a valid port number, 0 < x < 65536.
+          containerPort: 1
+          # What host IP to bind the external port to.
+          hostIP: string
+          # Number of port to expose on the host. If specified, this
+          # must be a valid port number, 0 < x < 65536. If HostNetwork
+          # is specified, this must match ContainerPort. Most
+          # containers do not need this.
+          hostPort: 1
+          # If specified, this must be an IANA_SVC_NAME and unique
+          # within the pod. Each named port in a pod must have a unique
+          # name. Name for the port that can be referred to by
+          # services.
+          name: string
+          # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
+          # to "TCP".
+          protocol: "TCP"
+        # Set to "true" to use SSL; if "true" then 'ssl_key_file'
+        # and 'ssl_cert_file' must be provided
+        ssl: false
+        sslCertFile: ""
+        # Files containing the SSL private key and the SSL certificate.
+        # If required, a self-signed certificate (expires after 10
+        # years) can be generated via the command: openssl req -newkey
+        # rsa:2048 -new -nodes -x509 \ -days 3650 -keyout key.pem -out
+        # cert.pem
+        sslKeyFile: ""
+      # ProcessesConfig
+      processes:
+        # Set the maximum number of threads per tom for table
+        # initialization on startup
+        initTablesNumThreadsPerTom: 8
+        # Set the number of parallel calculation threads to use for data
+        # processing; use -1 to use the max number of threads
+        # (not recommended)
+        kernelOmpThreads: 3
+        # The maximum number of web server threads to spawn
+        maxHttpThreads: 512
+        # Set the maximum number of threads (both workers and masters)
+        # to be passed to TBB on initialization.  Generally
+        # speaking, 'max_tbb_threads_per_rank' - "1" TBB workers will
+        # be created.  Use "-1" for no limit.
+        maxTbbThreadsPerRank: "-1"
+        # The minimum number of web server threads to spawn
+        minHttpThreads: 8
+        # Set the number of parallel jobs to create for multi-child set
+        # calculations; use "-1" to use the max number of threads
+        # (not recommended)
+        smOmpThreads: 2
+        # Maximum number of simultaneous threads allocated to a given
+        # request, on each rank. Note that thread allocation may also
+        # be limited by resource group limits and/or system load.
+        subtaskConcurrentyLimit: "-1"
+        # Set the number of TaskCalculators per TOM, GPU data
+        # processors.
+        tcsPerTom: "-1"
+        # Set the number of TOMs (data container shards) per rank
+        tomsPerRank: 1
+        # Set the number of TaskProcessors per TOM, CPU data
+        # processors.
+        tpsPerTom: "-1"
+      # ProcsConfig
+      procs:
+        # Directory where proc files are stored at runtime. Must be a
+        # fully qualified path with execute permission. If not
+        # specified, 'temp_directory' will be used.
+        directory:
+          # PersistentVolumeClaim is a user's request for and claim to a
+          # persistent volume
+          persistVolumeClaim:
+            # APIVersion defines the versioned schema of this
+            # representation of an object. Servers should convert
+            # recognized schemas to the latest internal value, and may
+            # reject unrecognized values. More info:
+            # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+            apiVersion: app.kinetica.com/v1
+            # Kind is a string value representing the REST resource this
+            # object represents. Servers may infer this from the
+            # endpoint the client submits requests to. Cannot be
+            # updated. In CamelCase. More info:
+            # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+            kind: KineticaCluster
+            # Standard object's metadata. More info:
+            # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
+            metadata: {}
+            # spec defines the desired characteristics of a volume
+            # requested by a pod author. More info:
+            # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+            spec:
+              # accessModes contains the desired access modes the volume
+              # should have. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+              accessModes: ["string"]
+              # dataSource field can be used to specify either: * An
+              # existing VolumeSnapshot object
+              # (snapshot.storage.k8s.io/VolumeSnapshot) * An existing
+              # PVC (PersistentVolumeClaim) If the provisioner or an
+              # external controller can support the specified data
+              # source, it will create a new volume based on the
+              # contents of the specified data source. When the
+              # AnyVolumeDataSource feature gate is enabled, dataSource
+              # contents will be copied to dataSourceRef, and
+              # dataSourceRef contents will be copied to dataSource
+              # when dataSourceRef.namespace is not specified. If the
+              # namespace is specified, then dataSourceRef will not be
+              # copied to dataSource.
+              dataSource:
+                # APIGroup is the group for the resource being
+                # referenced. If APIGroup is not specified, the
+                # specified Kind must be in the core API group. For any
+                # other third-party types, APIGroup is required.
+                apiGroup: string
+                # Kind is the type of resource being referenced
+                kind: KineticaCluster
+                # Name is the name of resource being referenced
+                name: string
+              # dataSourceRef specifies the object from which to
+              # populate the volume with data, if a non-empty volume is
+              # desired. This may be any object from a non-empty API
+              # group (non core object) or a PersistentVolumeClaim
+              # object. When this field is specified, volume binding
+              # will only succeed if the type of the specified object
+              # matches some installed volume populator or dynamic
+              # provisioner. This field will replace the functionality
+              # of the dataSource field and as such if both fields are
+              # non-empty, they must have the same value. For backwards
+              # compatibility, when namespace isn't specified in
+              # dataSourceRef, both fields (dataSource and
+              # dataSourceRef) will be set to the same value
+              # automatically if one of them is empty and the other is
+              # non-empty. When namespace is specified in
+              # dataSourceRef, dataSource isn't set to the same value
+              # and must be empty. There are three important
+              # differences between dataSource and dataSourceRef: *
+              # While dataSource only allows two specific types of
+              # objects, dataSourceRef allows any non-core object, as
+              # well as PersistentVolumeClaim objects. * While
+              # dataSource ignores disallowed values (dropping them),
+              # dataSourceRef preserves all values, and generates an
+              # error if a disallowed value is specified. * While
+              # dataSource only allows local objects, dataSourceRef
+              # allows objects in any namespaces. (Beta) Using this
+              # field requires the AnyVolumeDataSource feature gate to
+              # be enabled. (Alpha) Using the namespace field of
+              # dataSourceRef requires the
+              # CrossNamespaceVolumeDataSource feature gate to be
+              # enabled.
+              dataSourceRef:
+                # APIGroup is the group for the resource being
+                # referenced. If APIGroup is not specified, the
+                # specified Kind must be in the core API group. For any
+                # other third-party types, APIGroup is required.
+                apiGroup: string
+                # Kind is the type of resource being referenced
+                kind: KineticaCluster
+                # Name is the name of resource being referenced
+                name: string
+                # Namespace is the namespace of resource being
+                # referenced Note that when a namespace is specified, a
+                # gateway.networking.k8s.io/ReferenceGrant object is
+                # required in the referent namespace to allow that
+                # namespace's owner to accept the reference. See the
+                # ReferenceGrant documentation for details.(Alpha) This
+                # field requires the CrossNamespaceVolumeDataSource
+                # feature gate to be enabled.
+                namespace: string
+              # resources represents the minimum resources the volume
+              # should have. If RecoverVolumeExpansionFailure feature
+              # is enabled users are allowed to specify resource
+              # requirements that are lower than previous value but
+              # must still be higher than capacity recorded in the
+              # status field of the claim. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
+              resources:
+                # Claims lists the names of resources, defined in
+                # spec.resourceClaims, that are used by this container.
+                # This is an alpha field and requires enabling the
+                # DynamicResourceAllocation feature gate. This field is
+                # immutable. It can only be set for containers.
+                claims:
+                - name: string
+                # Limits describes the maximum amount of compute
+                # resources allowed. More info:
+                # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                limits: {}
+                # Requests describes the minimum amount of compute
+                # resources required. If Requests is omitted for a
+                # container, it defaults to Limits if that is
+                # explicitly specified, otherwise to an
+                # implementation-defined value. Requests cannot exceed
+                # Limits. More info:
+                # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                requests: {}
+              # selector is a label query over volumes to consider for
+              # binding.
+              selector:
+                # matchExpressions is a list of label selector
+                # requirements. The requirements are ANDed.
+                matchExpressions:
+                - key: string
+                  # operator represents a key's relationship to a set of
+                  # values. Valid operators are In, NotIn, Exists and
+                  # DoesNotExist.
+                  operator: string
+                  # values is an array of string values. If the operator
+                  # is In or NotIn, the values array must be non-empty.
+                  # If the operator is Exists or DoesNotExist, the
+                  # values array must be empty. This array is replaced
+                  # during a strategic merge patch.
+                  values: ["string"]
+                # matchLabels is a map of {key,value} pairs. A single
+                # {key,value} in the matchLabels map is equivalent to
+                # an element of matchExpressions, whose key field
+                # is "key", the operator is "In", and the values array
+                # contains only "value". The requirements are ANDed.
+                matchLabels: {}
+              # storageClassName is the name of the StorageClass
+              # required by the claim. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
+              storageClassName: string
+              # volumeMode defines what type of volume is required by
+              # the claim. Value of Filesystem is implied when not
+              # included in claim spec.
+              volumeMode: string
+              # volumeName is the binding reference to the
+              # PersistentVolume backing this claim.
+              volumeName: string
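+            # NOTE: The spec fields above mirror a standard Kubernetes
+            # PersistentVolumeClaim. As an illustrative, commented-out
+            # sketch (the storage class and size are assumptions, not
+            # defaults), a minimal claim usually only needs:
+            # spec:
+            #   accessModes: ["ReadWriteOnce"]
+            #   storageClassName: "standard"
+            #   resources:
+            #     requests:
+            #       storage: "10Gi"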
+            # status represents the current information/status of a
+            # persistent volume claim. Read-only. More info:
+            # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+            status:
+              # accessModes contains the actual access modes the volume
+              # backing the PVC has. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+              accessModes: ["string"]
+              # allocatedResources is the storage resource within
+              # AllocatedResources tracks the capacity allocated to a
+              # PVC. It may be larger than the actual capacity when a
+              # volume expansion operation is requested. For storage
+              # quota, the larger value from allocatedResources and
+              # PVC.spec.resources is used. If allocatedResources is
+              # not set, PVC.spec.resources alone is used for quota
+              # calculation. If a volume expansion capacity request is
+              # lowered, allocatedResources is only lowered if there
+              # are no expansion operations in progress and if the
+              # actual volume capacity is equal or lower than the
+              # requested capacity. This is an alpha field and requires
+              # enabling RecoverVolumeExpansionFailure feature.
+              allocatedResources: {}
+              # capacity represents the actual resources of the
+              # underlying volume.
+              capacity: {}
+              # conditions is the current Condition of persistent volume
+              # claim. If underlying persistent volume is being resized
+              # then the Condition will be set to 'ResizeStarted'.
+              conditions:
+              - lastProbeTime: string
+                # lastTransitionTime is the time the condition
+                # transitioned from one status to another.
+                lastTransitionTime: string
+                # message is the human-readable message indicating
+                # details about last transition.
+                message: string
+              # reason is a unique, short, machine-understandable
+              # string that gives the reason for the condition's last
+              # transition. If it reports "ResizeStarted" that means
+              # the underlying persistent volume is being resized.
+                reason: string
+                status: string
+                # PersistentVolumeClaimConditionType is a valid value of
+                # PersistentVolumeClaimCondition.Type
+                type: string
+              # phase represents the current phase of
+              # PersistentVolumeClaim.
+              phase: string
+              # resizeStatus stores status of resize operation.
+              # ResizeStatus is not set by default but when expansion
+              # is complete resizeStatus is set to empty string by
+              # resize controller or kubelet. This is an alpha field
+              # and requires enabling RecoverVolumeExpansionFailure
+              # feature.
+              resizeStatus: string
+          # VolumeMount describes a mounting of a Volume within a
+          # container.
+          volumeMount:
+            # Path within the container at which the volume should be
+            # mounted.  Must not contain ':'.
+            mountPath: string
+            # mountPropagation determines how mounts are propagated from
+            # the host to container and the other way around. When not
+            # set, MountPropagationNone is used. This field is beta in
+            # 1.10.
+            mountPropagation: string
+            # This must match the Name of a Volume.
+            name: string
+            # Mounted read-only if true, read-write otherwise (false or
+            # unspecified). Defaults to false.
+            readOnly: true
+            # Path within the volume from which the container's volume
+            # should be mounted. Defaults to "" (volume's root).
+            subPath: string
+            # Expanded path within the volume from which the container's
+            # volume should be mounted. Behaves similarly to SubPath
+            # but environment variable references $(VAR_NAME) are
+            # expanded using the container's environment. Defaults
+            # to "" (volume's root). SubPathExpr and SubPath are
+            # mutually exclusive.
+            subPathExpr: string
+        # Enable procs (UDFs)
+        enable: true
+      # SecurityConfig
+      security:
+        # Automatically create accounts for externally-authenticated
+        # users. If 'enable_external_authentication' is "false", this
+        # setting has no effect. Note that accounts are not
+        # automatically deleted if users are removed from the external
+        # authentication provider and will be orphaned.
+        autoCreateExternalUsers: false
+        # Automatically add roles passed in via the "KINETICA_ROLES"
+        # HTTP header to externally-authenticated users. Specified
+        # roles that do not exist are ignored.
+        # If 'enable_external_authentication' is "false", this setting
+        # has no effect. IMPORTANT: DO NOT ENABLE unless the
+        # authentication proxy is configured to block "KINETICA_ROLES"
+        # HTTP headers passed in from clients.
+        autoGrantExternalRoles: false
+        # Comma-separated list of roles to revoke from
+        # externally-authenticated users prior to granting roles passed
+        # in via the "KINETICA_ROLES" HTTP header, or "*" to revoke all
+        # roles. Preceding a role name with an "!" overrides the
+        # revocation (e.g. "*,!foo" revokes all roles except "foo").
+        # Leave blank to disable. If
+        # either 'enable_external_authentication'
+        # or 'auto_grant_external_roles' is "false", this setting has
+        # no effect.
+        autoRevokeExternalRoles: false
+        # Enable authorization checks.  When disabled, all requests will
+        # be treated as the administrative user.
+        enableAuthorization: true
+        # Enable external (LDAP, Kerberos, etc.) authentication. User
+        # IDs of externally-authenticated users must be passed in via
+        # the "REMOTE_USER" HTTP header from the authentication proxy.
+        # May be used in conjunction with the 'enable_httpd_proxy'
+        # setting above for an integrated external authentication
+        # solution. IMPORTANT: DO NOT ENABLE unless external access to
+        # GPUdb ports has been blocked via firewall AND the
+        # authentication proxy is configured to block "REMOTE_USER"
+        # HTTP headers passed in from clients.
+        enableExternalAuthentication: true
+        # ExternalSecurity
+        externalSecurity:
+          # Ranger
+          ranger:
+            # AuthorizerAddress - The network URI for the
+            # ranger_authorizer to start. The URI can be either TCP or
+            # IPC. TCP address is used to indicate the remote
+            # ranger_authorizer which may run at other hosts. The IPC
+            # address is for a local ranger_authorizer. Example
+            # addresses for remote or TCP servers: tcp://127.0.0.1:9293
+            # tcp://HOST_IP:9293 Example address for local IPC servers:
+            # ipc:///tmp/gpudb-ranger-0
+            # security.external.ranger_authorizer.address =
+            #   ipc://${gaia.temp_directory}/gpudb-ranger-0
+            authorizerAddress: "ipc://${gaia.temp_directory}/gpudb-ranger-0"
+            # Remote debugger port used for the ranger_authorizer.
+            # Setting the port to "0" disables remote debugging. NOTE:
+            # Recommended port to use is "5005"
+            # security.external.ranger_authorizer.remote_debug_port =
+            # 0
+            authorizerRemoteDebugPort: 0
+            # AuthorizerTimeout - Ranger Authorizer timeout in seconds
+            # security.external.ranger_authorizer.timeout = 120
+            authorizerTimeout: 120
+            # CacheMinutes- Maximum minutes to hold on to data from
+            # Ranger security.external.ranger.cache_minutes = 60
+            cacheMinutes: 60
+            # Name of the service created on the Ranger Server to manage
+            # this Kinetica instance
+            # security.external.ranger.service_name = kinetica
+            name: "kinetica"
+            # ExtURL - URL of Ranger REST API.  E.g.,
+            # https://localhost:6080/ Leave blank for no Ranger Server
+            # security.external.ranger.url =
+            url: string
+        # The minimum allowable password length.
+        minPasswordLength: 4
+        # Require all users to be authenticated.  Disable this to allow
+        # users to access the database as the 'unauthenticated' user.
+        # Useful for situations where the public needs to access the
+        # data.
+        requireAuthentication: true
+        # UnifiedSecurityNamespace - Use a single namespace for internal
+        # and external user IDs and role names. If false, external user
+        # IDs must be prefixed with "@" to differentiate them from
+        # internal user IDs and role names (except in the "REMOTE_USER"
+        # HTTP header, where the "@" is omitted).
+        # unified_security_namespace = true
+        unifiedSecurityNamespace: true
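+      # NOTE: As an illustrative, commented-out sketch (not a default
+      # configuration), a minimal security block enabling external
+      # authentication with auto-created users might look like the
+      # following; all values are assumptions to adapt per environment.
+      # security:
+      #   requireAuthentication: true
+      #   enableAuthorization: true
+      #   enableExternalAuthentication: true
+      #   autoCreateExternalUsers: true
+      #   autoGrantExternalRoles: false
+      #   minPasswordLength: 8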
+      # SQLConfig
+      sql:
+        # SQLPlannerAddress is not included as it always uses the
+        # default value.
+        address: "ipc://${gaia.temp_directory}/gpudb-query-engine-0"
+        # Enable the cost-based optimizer
+        costBasedOptimization: false
+        # Enable distributed joins
+        distributedJoins: true
+        # Enable distributed operations
+        distributedOperations: true
+        # Enable Query Planner
+        enablePlanner: true
+        # Perform joins between only 2 tables at a time; default is all
+        # tables involved in the operation at once
+        forceBinaryJoins: false
+        # Perform unions/intersections/exceptions between only 2 tables
+        # at a time; default is all tables involved in the operation at
+        # once
+        forceBinarySetOps: false
+        # Max parallel steps
+        maxParallelSteps: 4
+        # Max allowed view nesting levels. Valid range(1-64)
+        maxViewNestingLevels: 16
+        # TTL of the paging results table
+        pagingTableTTL: 20
+        # Enable parallel query evaluation
+        parallelExecution: true
+        # The maximum number of entries in the SQL plan cache.  The
+        # default is "4000" entries, but the configurable range
+        # is "1" - "1000000".  Plan caching will be disabled if the
+        # value is set outside of that range.
+        planCacheSize: 4000
+        # The maximum memory for the query planner to use in Megabytes.
+        plannerMaxMemory: 4096
+        # The maximum stack size for the query planner threads to use in
+        # Megabytes.
+        plannerMaxStack: 6
+        # Query planner timeout in seconds
+        plannerTimeout: 120
+        # Max Query planner threads
+        plannerWorkers: 16
+        # Remote debugger port used for the query planner. Setting the
+        # port to "0" disables remote debugging. NOTE:  Recommended
+        # port to use is "5005"
+        remoteDebugPort: 5005
+        # TTL of the query cache results table
+        resultsCacheTTL: 60
+        # Enable query results caching
+        resultsCaching: true
+        # Enable rule-based query rewrites
+        ruleBasedOptimization: true
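+      # NOTE: Illustrative, commented-out sketch of a SQL planner
+      # override; the field names follow the reference above, while the
+      # values are example assumptions only.
+      # sql:
+      #   enablePlanner: true
+      #   plannerMaxMemory: 8192
+      #   plannerTimeout: 300
+      #   parallelExecution: true
+      #   resultsCaching: true
+      #   resultsCacheTTL: 60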
+      # SQLEngineConfig
+      sqlEngine:
+        # Enable the cost-based optimizer
+        costBasedOptimization: false
+        # Name of default collection for user tables
+        defaultSchema: ""
+        # Enable distributed joins
+        distributedJoins: true
+        # Enable distributed operations
+        distributedOperations: true
+        # Perform joins between only 2 tables at a time; default is all
+        # tables involved in the operation at once
+        forceBinaryJoins: false
+        # Perform unions/intersections/exceptions between only 2 tables
+        # at a time; default is all tables involved in the operation at
+        # once
+        forceBinarySetOps: false
+        # Max parallel steps
+        maxParallelSteps: 4
+        # Max allowed view nesting levels. Valid range(1-64)
+        maxViewNestingLevels: 16
+        # TTL of the paging results table
+        pagingTableTTL: 20
+        # Enable parallel query evaluation
+        parallelExecution: true
+        # The maximum number of entries in the SQL plan cache.  The
+        # default is "4000" entries, but the configurable range
+        # is "1" - "1000000".  Plan caching will be disabled if the
+        # value is set outside of that range.
+        planCacheSize: 4000
+        # PlannerConfig
+        planner:
+          # Enable Query Planner
+          enablePlanner: true
+          # The maximum memory for the query planner to use in
+          # Megabytes.
+          maxMemory: 4096
+          # The maximum stack size for the query planner threads to use
+          # in Megabytes.
+          maxStack: 6
+          # The network URI for the query planner to start. The URI can
+          # be either TCP or IPC. TCP address is used to indicate the
+          # remote query planner which may run at other hosts. The IPC
+          # address is for a local query planner. Example for remote or
+          # TCP servers:
+          #   sql.planner.address = tcp://127.0.0.1:9293
+          #   sql.planner.address = tcp://HOST_IP:9293
+          # Example for local IPC servers:
+          #   sql.planner.address = ipc:///tmp/gpudb-query-engine-0
+          plannerAddress: "ipc:///tmp/gpudb-query-engine-0"
+          # Remote debugger port used for the query planner. Setting the
+          # port to "0" disables remote debugging. NOTE:  Recommended
+          # port to use is "5005"
+          remoteDebugPort: 0
+          # Query planner timeout in seconds
+          timeout: 120
+          # Max Query planner threads
+          workers: 16
+        results:
+          # TTL of the query cache results table
+          cacheTTL: 60
+          # Enable query results caching
+          caching: true
+        # Enable rule-based query rewrites
+        ruleBasedOptimization: true
+        # Name of collection that will be used to store result tables
+        # generated as part of query execution
+        tempCollection: "__SQL_TEMP"
+      # StatisticsConfig
+      statistics:
+        # system_metadata.stats_aggr_rowcount = 10000
+        aggrRowCount: 10000
+        # system_metadata.stats_aggr_time = 1
+        aggrTime: 1
+        # Run a statistics server to collect information about Kinetica
+        # and the machines it runs on.
+        enable: true
+        # Statistics server IP address (run on head node) default port
+        # is "2003"
+        ipAddress: "${gaia.host0.address}"
+        # Statistics server namespace - should be a machine identifier
+        namespace: "gpudb"
+        port: 2003
+        # System metadata catalog settings
+        # system_metadata.stats_retention_days = 21
+        retentionDays: 21
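+      # NOTE: Illustrative, commented-out sketch of a statistics block;
+      # the values mirror the documented defaults above and are shown
+      # only as an example.
+      # statistics:
+      #   enable: true
+      #   ipAddress: "${gaia.host0.address}"
+      #   port: 2003
+      #   retentionDays: 21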
+      # TextSearchConfig
+      textSearch:
+        # Enable text search capability within the database.
+        enableTextSearch: false
+        # Number of text indices to start for each rank
+        textIndicesPerTom: 2
+        # Searcher refresh intervals - specifies the maximum delay
+        # (in seconds) between writing to the text search index and
+        # being able to search for the value just written.  A value
+        # of "0" ensures that writes to the index are immediately
+        # available to be searched.  A more nominal value of "100"
+        # should improve ingest speed at the cost of some delay in
+        # being able to text search newly added values.
+        textSearcherRefreshInterval: 20
+        # Use the production capable external text server instead of a
+        # lightweight internal server which should only be used for
+        # light testing. Note: The internal text server is deprecated
+        # and may be removed in future versions.
+        useExternalTextServer: true
+      tieredStorage:
+        # Cold Storage Tiers can be used to extend the storage capacity
+        # of the Persist Tier. Assign a tier strategy with cold storage
+        # to objects that will be infrequently accessed since they will
+        # be moved as needed from the Persist Tier. The Cold Storage
+        # Tier is typically a much larger capacity physical disk or a
+        # cloud-based storage system which may not be as performant as
+        # the Persist Tier storage. A default storage limit and
+        # eviction thresholds can be set across all ranks for a given
+        # Cold Storage Tier, while one or more ranks within a Cold
+        # Storage Tier may be configured to override those defaults.
+        # NOTE: If an object needs to be pulled out of cold storage
+        # during a query, it may need to use the local persist
+        # directory as a temporary swap space. This may trigger an
+        # eviction of other persisted items to cold storage due to low
+        # disk space condition defined by the watermark settings for
+        # the Persist Tier.
+        coldStorageTier:
+          # ColdStorageAzure
+          coldStorageAzure:
+            # 'base_path'             : A base path based on the
+            #  provider type for this tier.
+            basePath: string
+            clientID: string
+            clientSecret: string
+            # 'connection_timeout'    : Timeout in seconds for
+            #  connecting to this storage provider.
+            connectionTimeout: "30"
+            # Name of the storage container to use for this cold
+            # storage tier.
+            containerName: "/gpudb/cold_storage"
+            # * 'high_watermark' : Percentage used eviction threshold.
+            #    Once usage exceeds this value, evictions from this
+            #    tier will be scheduled in the background and continue
+            #    until the 'low_watermark' percentage usage is reached.
+            #    Default is "90", signifying a 90% memory usage
+            #    threshold.
+            highWatermark: 90
+            # * 'limit'          : The maximum (bytes) per rank that can
+            #    be allocated across all resource groups.
+            limit: "1Gi"
+            # * 'low_watermark'  : Percentage used recovery threshold.
+            #    Once usage exceeds the 'high_watermark', evictions
+            #    will continue until usage falls below this recovery
+            #    threshold. Default is "80", signifying an 80% usage
+            #    threshold.
+            lowWatermark: 80
+            name: string
+            # A base directory to use as a space for this tier.
+            path: "default"
+            provisioner: "docker.io/hostpath"
+            sasToken: string
+            storageAccountKey: string
+            storageAccountName: string
+            tenantID: string
+            useManagedCredentials: false
+            # Kubernetes Persistent Volume Claim for this disk tier.
+            volumeClaim:
+              # APIVersion defines the versioned schema of this
+              # representation of an object. Servers should convert
+              # recognized schemas to the latest internal value, and
+              # may reject unrecognized values. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+              apiVersion: app.kinetica.com/v1
+              # Kind is a string value representing the REST resource
+              # this object represents. Servers may infer this from the
+              # endpoint the client submits requests to. Cannot be
+              # updated. In CamelCase. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+              kind: KineticaCluster
+              # Standard object's metadata. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
+              metadata: {}
+              # spec defines the desired characteristics of a volume
+              # requested by a pod author. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+              spec:
+                # accessModes contains the desired access modes the
+                # volume should have. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                accessModes: ["string"]
+                # dataSource field can be used to specify either: * An
+                # existing VolumeSnapshot object
+                # (snapshot.storage.k8s.io/VolumeSnapshot) * An
+                # existing PVC (PersistentVolumeClaim) If the
+                # provisioner or an external controller can support the
+                # specified data source, it will create a new volume
+                # based on the contents of the specified data source.
+                # When the AnyVolumeDataSource feature gate is enabled,
+                # dataSource contents will be copied to dataSourceRef,
+                # and dataSourceRef contents will be copied to
+                # dataSource when dataSourceRef.namespace is not
+                # specified. If the namespace is specified, then
+                # dataSourceRef will not be copied to dataSource.
+                dataSource:
+                  # APIGroup is the group for the resource being
+                  # referenced. If APIGroup is not specified, the
+                  # specified Kind must be in the core API group. For
+                  # any other third-party types, APIGroup is required.
+                  apiGroup: string
+                  # Kind is the type of resource being referenced
+                  kind: KineticaCluster
+                  # Name is the name of resource being referenced
+                  name: string
+                # dataSourceRef specifies the object from which to
+                # populate the volume with data, if a non-empty volume
+                # is desired. This may be any object from a non-empty
+                # API group (non core object) or a
+                # PersistentVolumeClaim object. When this field is
+                # specified, volume binding will only succeed if the
+                # type of the specified object matches some installed
+                # volume populator or dynamic provisioner. This field
+                # will replace the functionality of the dataSource
+                # field and as such if both fields are non-empty, they
+                # must have the same value. For backwards
+                # compatibility, when namespace isn't specified in
+                # dataSourceRef, both fields (dataSource and
+                # dataSourceRef) will be set to the same value
+                # automatically if one of them is empty and the other
+                # is non-empty. When namespace is specified in
+                # dataSourceRef, dataSource isn't set to the same value
+                # and must be empty. There are three important
+                # differences between dataSource and dataSourceRef: *
+                # While dataSource only allows two specific types of
+                # objects, dataSourceRef allows any non-core object, as
+                # well as PersistentVolumeClaim objects. * While
+                # dataSource ignores disallowed values (dropping them),
+                # dataSourceRef preserves all values, and generates an
+                # error if a disallowed value is specified. * While
+                # dataSource only allows local objects, dataSourceRef
+                # allows objects in any namespaces. (Beta) Using this
+                # field requires the AnyVolumeDataSource feature gate
+                # to be enabled. (Alpha) Using the namespace field of
+                # dataSourceRef requires the
+                # CrossNamespaceVolumeDataSource feature gate to be
+                # enabled.
+                dataSourceRef:
+                  # APIGroup is the group for the resource being
+                  # referenced. If APIGroup is not specified, the
+                  # specified Kind must be in the core API group. For
+                  # any other third-party types, APIGroup is required.
+                  apiGroup: string
+                  # Kind is the type of resource being referenced
+                  kind: KineticaCluster
+                  # Name is the name of resource being referenced
+                  name: string
+                  # Namespace is the namespace of resource being
+                  # referenced Note that when a namespace is specified,
+                  # a gateway.networking.k8s.io/ReferenceGrant object
+                  # is required in the referent namespace to allow that
+                  # namespace's owner to accept the reference. See the
+                  # ReferenceGrant documentation for details.
+                  # (Alpha) This field requires the
+                  # CrossNamespaceVolumeDataSource feature gate to be
+                  # enabled.
+                  namespace: string
+                # resources represents the minimum resources the volume
+                # should have. If RecoverVolumeExpansionFailure feature
+                # is enabled users are allowed to specify resource
+                # requirements that are lower than previous value but
+                # must still be higher than capacity recorded in the
+                # status field of the claim. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
+                resources:
+                  # Claims lists the names of resources, defined in
+                  # spec.resourceClaims, that are used by this
+                  # container. This is an alpha field and requires
+                  # enabling the DynamicResourceAllocation feature
+                  # gate. This field is immutable. It can only be set
+                  # for containers.
+                  claims:
+                  - name: string
+                  # Limits describes the maximum amount of compute
+                  # resources allowed. More info:
+                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                  limits: {}
+                  # Requests describes the minimum amount of compute
+                  # resources required. If Requests is omitted for a
+                  # container, it defaults to Limits if that is
+                  # explicitly specified, otherwise to an
+                  # implementation-defined value. Requests cannot
+                  # exceed Limits. More info:
+                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                  requests: {}
+                # selector is a label query over volumes to consider for
+                # binding.
+                selector:
+                  # matchExpressions is a list of label selector
+                  # requirements. The requirements are ANDed.
+                  matchExpressions:
+                  - key: string
+                    # operator represents a key's relationship to a set
+                    # of values. Valid operators are In, NotIn, Exists
+                    # and DoesNotExist.
+                    operator: string
+                    # values is an array of string values. If the
+                    # operator is In or NotIn, the values array must be
+                    # non-empty. If the operator is Exists or
+                    # DoesNotExist, the values array must be empty.
+                    # This array is replaced during a strategic merge
+                    # patch.
+                    values: ["string"]
+                  # matchLabels is a map of {key,value} pairs. A single
+                  # {key,value} in the matchLabels map is equivalent to
+                  # an element of matchExpressions, whose key field
+                  # is "key", the operator is "In", and the values
+                  # array contains only "value". The requirements are
+                  # ANDed.
+                  matchLabels: {}
+                # storageClassName is the name of the StorageClass
+                # required by the claim. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
+                storageClassName: string
+                # volumeMode defines what type of volume is required by
+                # the claim. Value of Filesystem is implied when not
+                # included in claim spec.
+                volumeMode: string
+                # volumeName is the binding reference to the
+                # PersistentVolume backing this claim.
+                volumeName: string
+              # status represents the current information/status of a
+              # persistent volume claim. Read-only. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+              status:
+                # accessModes contains the actual access modes the
+                # volume backing the PVC has. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                accessModes: ["string"]
+                # allocatedResources is the storage resource within
+                # AllocatedResources tracks the capacity allocated to a
+                # PVC. It may be larger than the actual capacity when a
+                # volume expansion operation is requested. For storage
+                # quota, the larger value from allocatedResources and
+                # PVC.spec.resources is used. If allocatedResources is
+                # not set, PVC.spec.resources alone is used for quota
+                # calculation. If a volume expansion capacity request
+                # is lowered, allocatedResources is only lowered if
+                # there are no expansion operations in progress and if
+                # the actual volume capacity is equal or lower than the
+                # requested capacity. This is an alpha field and
+                # requires enabling RecoverVolumeExpansionFailure
+                # feature.
+                allocatedResources: {}
+                # capacity represents the actual resources of the
+                # underlying volume.
+                capacity: {}
+                # conditions is the current Condition of persistent
+                # volume claim. If underlying persistent volume is
+                # being resized then the Condition will be set
+                # to 'ResizeStarted'.
+                conditions:
+                - lastProbeTime: string
+                  # lastTransitionTime is the time the condition
+                  # transitioned from one status to another.
+                  lastTransitionTime: string
+                  # message is the human-readable message indicating
+                  # details about last transition.
+                  message: string
+                  # reason is a unique, short, machine-understandable
+                  # string that gives the reason for the condition's
+                  # last transition. If it reports "ResizeStarted" that
+                  # means the underlying persistent volume is being
+                  # resized.
+                  reason: string
+                  status: string
+                  # PersistentVolumeClaimConditionType is a valid value
+                  # of PersistentVolumeClaimCondition.Type
+                  type: string
+                # phase represents the current phase of
+                # PersistentVolumeClaim.
+                phase: string
+                # resizeStatus stores status of resize operation.
+                # ResizeStatus is not set by default but when expansion
+                # is complete resizeStatus is set to empty string by
+                # resize controller or kubelet. This is an alpha field
+                # and requires enabling RecoverVolumeExpansionFailure
+                # feature.
+                resizeStatus: string
+            # 'wait_timeout'          : Timeout in seconds for reading
+            #  from or writing to this storage provider.
+            waitTimeout: "90"
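+          # NOTE: Illustrative, commented-out sketch of an Azure cold
+          # storage tier; the container, account and limit values are
+          # placeholder assumptions, not defaults.
+          # coldStorageAzure:
+          #   containerName: "kinetica-cold-storage"
+          #   storageAccountName: "examplestorageaccount"
+          #   useManagedCredentials: true
+          #   basePath: "gpudb/cold_storage"
+          #   limit: "10Ti"
+          #   highWatermark: 90
+          #   lowWatermark: 80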
+          # ColdStorageDisk
+          coldStorageDisk:
+            # 'base_path'             : A base path based on the
+            #  provider type for this tier.
+            basePath: string
+            # 'connection_timeout'    : Timeout in seconds for
+            #  connecting to this storage provider.
+            connectionTimeout: "30"
+            # * 'high_watermark' : Percentage used eviction threshold.
+            #    Once usage exceeds this value, evictions from this
+            #    tier will be scheduled in the background and continue
+            #    until the 'low_watermark' percentage usage is reached.
+            #    Default is "90", signifying a 90% memory usage
+            #    threshold.
+            highWatermark: 90
+            # * 'limit'          : The maximum (bytes) per rank that can
+            #    be allocated across all resource groups.
+            limit: "1Gi"
+            # * 'low_watermark'  : Percentage used recovery threshold.
+            #    Once usage exceeds the 'high_watermark', evictions
+            #    will continue until usage falls below this recovery
+            #    threshold. Default is "80", signifying an 80% usage
+            #    threshold.
+            lowWatermark: 80
+            name: string
+            # A base directory to use as a space for this tier.
+            path: "default"
+            provisioner: "docker.io/hostpath"
+            # Kubernetes Persistent Volume Claim for this disk tier.
+            volumeClaim:
+              # APIVersion defines the versioned schema of this
+              # representation of an object. Servers should convert
+              # recognized schemas to the latest internal value, and
+              # may reject unrecognized values. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+              apiVersion: app.kinetica.com/v1
+              # Kind is a string value representing the REST resource
+              # this object represents. Servers may infer this from the
+              # endpoint the client submits requests to. Cannot be
+              # updated. In CamelCase. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+              kind: KineticaCluster
+              # Standard object's metadata. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
+              metadata: {}
+              # spec defines the desired characteristics of a volume
+              # requested by a pod author. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+              spec:
+                # accessModes contains the desired access modes the
+                # volume should have. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                accessModes: ["string"]
+                # dataSource field can be used to specify either: * An
+                # existing VolumeSnapshot object
+                # (snapshot.storage.k8s.io/VolumeSnapshot) * An
+                # existing PVC (PersistentVolumeClaim) If the
+                # provisioner or an external controller can support the
+                # specified data source, it will create a new volume
+                # based on the contents of the specified data source.
+                # When the AnyVolumeDataSource feature gate is enabled,
+                # dataSource contents will be copied to dataSourceRef,
+                # and dataSourceRef contents will be copied to
+                # dataSource when dataSourceRef.namespace is not
+                # specified. If the namespace is specified, then
+                # dataSourceRef will not be copied to dataSource.
+                dataSource:
+                  # APIGroup is the group for the resource being
+                  # referenced. If APIGroup is not specified, the
+                  # specified Kind must be in the core API group. For
+                  # any other third-party types, APIGroup is required.
+                  apiGroup: string
+                  # Kind is the type of resource being referenced
+                  kind: KineticaCluster
+                  # Name is the name of resource being referenced
+                  name: string
+                # dataSourceRef specifies the object from which to
+                # populate the volume with data, if a non-empty volume
+                # is desired. This may be any object from a non-empty
+                # API group (non core object) or a
+                # PersistentVolumeClaim object. When this field is
+                # specified, volume binding will only succeed if the
+                # type of the specified object matches some installed
+                # volume populator or dynamic provisioner. This field
+                # will replace the functionality of the dataSource
+                # field and as such if both fields are non-empty, they
+                # must have the same value. For backwards
+                # compatibility, when namespace isn't specified in
+                # dataSourceRef, both fields (dataSource and
+                # dataSourceRef) will be set to the same value
+                # automatically if one of them is empty and the other
+                # is non-empty. When namespace is specified in
+                # dataSourceRef, dataSource isn't set to the same value
+                # and must be empty. There are three important
+                # differences between dataSource and dataSourceRef: *
+                # While dataSource only allows two specific types of
+                # objects, dataSourceRef allows any non-core object, as
+                # well as PersistentVolumeClaim objects. * While
+                # dataSource ignores disallowed values (dropping them),
+                # dataSourceRef preserves all values, and generates an
+                # error if a disallowed value is specified. * While
+                # dataSource only allows local objects, dataSourceRef
+                # allows objects in any namespaces. (Beta) Using this
+                # field requires the AnyVolumeDataSource feature gate
+                # to be enabled. (Alpha) Using the namespace field of
+                # dataSourceRef requires the
+                # CrossNamespaceVolumeDataSource feature gate to be
+                # enabled.
+                dataSourceRef:
+                  # APIGroup is the group for the resource being
+                  # referenced. If APIGroup is not specified, the
+                  # specified Kind must be in the core API group. For
+                  # any other third-party types, APIGroup is required.
+                  apiGroup: string
+                  # Kind is the type of resource being referenced
+                  kind: KineticaCluster
+                  # Name is the name of resource being referenced
+                  name: string
+                  # Namespace is the namespace of resource being
+                  # referenced Note that when a namespace is specified,
+                  # a gateway.networking.k8s.io/ReferenceGrant object
+                  # is required in the referent namespace to allow that
+                  # namespace's owner to accept the reference. See the
+                  # ReferenceGrant documentation for details.
+                  # (Alpha) This field requires the
+                  # CrossNamespaceVolumeDataSource feature gate to be
+                  # enabled.
+                  namespace: string
+                # resources represents the minimum resources the volume
+                # should have. If RecoverVolumeExpansionFailure feature
+                # is enabled users are allowed to specify resource
+                # requirements that are lower than previous value but
+                # must still be higher than capacity recorded in the
+                # status field of the claim. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
+                resources:
+                  # Claims lists the names of resources, defined in
+                  # spec.resourceClaims, that are used by this
+                  # container. This is an alpha field and requires
+                  # enabling the DynamicResourceAllocation feature
+                  # gate. This field is immutable. It can only be set
+                  # for containers.
+                  claims:
+                  - name: string
+                  # Limits describes the maximum amount of compute
+                  # resources allowed. More info:
+                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                  limits: {}
+                  # Requests describes the minimum amount of compute
+                  # resources required. If Requests is omitted for a
+                  # container, it defaults to Limits if that is
+                  # explicitly specified, otherwise to an
+                  # implementation-defined value. Requests cannot
+                  # exceed Limits. More info:
+                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                  requests: {}
+                # selector is a label query over volumes to consider for
+                # binding.
+                selector:
+                  # matchExpressions is a list of label selector
+                  # requirements. The requirements are ANDed.
+                  matchExpressions:
+                  - key: string
+                    # operator represents a key's relationship to a set
+                    # of values. Valid operators are In, NotIn, Exists
+                    # and DoesNotExist.
+                    operator: string
+                    # values is an array of string values. If the
+                    # operator is In or NotIn, the values array must be
+                    # non-empty. If the operator is Exists or
+                    # DoesNotExist, the values array must be empty.
+                    # This array is replaced during a strategic merge
+                    # patch.
+                    values: ["string"]
+                  # matchLabels is a map of {key,value} pairs. A single
+                  # {key,value} in the matchLabels map is equivalent to
+                  # an element of matchExpressions, whose key field
+                  # is "key", the operator is "In", and the values
+                  # array contains only "value". The requirements are
+                  # ANDed.
+                  matchLabels: {}
+                # storageClassName is the name of the StorageClass
+                # required by the claim. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
+                storageClassName: string
+                # volumeMode defines what type of volume is required by
+                # the claim. Value of Filesystem is implied when not
+                # included in claim spec.
+                volumeMode: string
+                # volumeName is the binding reference to the
+                # PersistentVolume backing this claim.
+                volumeName: string
+              # status represents the current information/status of a
+              # persistent volume claim. Read-only. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+              status:
+                # accessModes contains the actual access modes the
+                # volume backing the PVC has. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                accessModes: ["string"]
+                # allocatedResources is the storage resource within
+                # AllocatedResources tracks the capacity allocated to a
+                # PVC. It may be larger than the actual capacity when a
+                # volume expansion operation is requested. For storage
+                # quota, the larger value from allocatedResources and
+                # PVC.spec.resources is used. If allocatedResources is
+                # not set, PVC.spec.resources alone is used for quota
+                # calculation. If a volume expansion capacity request
+                # is lowered, allocatedResources is only lowered if
+                # there are no expansion operations in progress and if
+                # the actual volume capacity is equal or lower than the
+                # requested capacity. This is an alpha field and
+                # requires enabling RecoverVolumeExpansionFailure
+                # feature.
+                allocatedResources: {}
+                # capacity represents the actual resources of the
+                # underlying volume.
+                capacity: {}
+                # conditions is the current Condition of persistent
+                # volume claim. If underlying persistent volume is
+                # being resized then the Condition will be set
+                # to 'ResizeStarted'.
+                conditions:
+                - lastProbeTime: string
+                  # lastTransitionTime is the time the condition
+                  # transitioned from one status to another.
+                  lastTransitionTime: string
+                  # message is the human-readable message indicating
+                  # details about last transition.
+                  message: string
+                  # reason is a unique, short, machine-understandable
+                  # string that gives the reason for the condition's
+                  # last transition. If it reports "ResizeStarted" that
+                  # means the underlying persistent volume is being
+                  # resized.
+                  reason: string
+                  status: string
+                  # PersistentVolumeClaimConditionType is a valid value
+                  # of PersistentVolumeClaimCondition.Type
+                  type: string
+                # phase represents the current phase of
+                # PersistentVolumeClaim.
+                phase: string
+                # resizeStatus stores status of resize operation.
+                # ResizeStatus is not set by default but when expansion
+                # is complete resizeStatus is set to empty string by
+                # resize controller or kubelet. This is an alpha field
+                # and requires enabling RecoverVolumeExpansionFailure
+                # feature.
+                resizeStatus: string
+            # 'wait_timeout'          : Timeout in seconds for reading
+            #  from or writing to this storage provider.
+            waitTimeout: "90"
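+          # NOTE: Illustrative, commented-out sketch of a disk-backed
+          # cold storage tier bound to a PVC; the storage class and
+          # sizes are assumptions.
+          # coldStorageDisk:
+          #   basePath: "/opt/gpudb/cold_storage"
+          #   limit: "10Ti"
+          #   highWatermark: 90
+          #   lowWatermark: 80
+          #   volumeClaim:
+          #     spec:
+          #       accessModes: ["ReadWriteOnce"]
+          #       storageClassName: "standard"
+          #       resources:
+          #         requests:
+          #           storage: "10Ti"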
+          # ColdStorageGCS - Google Cloud Storage-specific parameter
+          # names:
+          #  * BucketName        - 'gcs_bucket_name'
+          #  * ProjectID         - 'gcs_project_id' (optional)
+          #  * AccountID         - 'gcs_service_account_id' (optional)
+          #  * AccountPrivateKey - 'gcs_service_account_private_key'
+          #    (optional)
+          #  * AccountKeys       - 'gcs_service_account_keys' (optional)
+          # NOTE: If the 'gcs_service_account_id',
+          # 'gcs_service_account_private_key' and/or
+          # 'gcs_service_account_keys' values are not specified, the
+          # Google Cloud Client Libraries will attempt to find and use
+          # service account credentials from the
+          # GOOGLE_APPLICATION_CREDENTIALS environment variable.
+          coldStorageGCS:
+            accountID: string
+            accountKeys: string
+            accountPrivateKey: string
+            # 'base_path'             : A base path based on the
+            #  provider type for this tier.
+            basePath: string
+            bucketName: string
+            # 'connection_timeout'    : Timeout in seconds for
+            #  connecting to this storage provider.
+            connectionTimeout: "30"
+            # * 'high_watermark' : Percentage used eviction threshold.
+            #    Once usage exceeds this value, evictions from this
+            #    tier will be scheduled in the background and continue
+            #    until the 'low_watermark' percentage usage is reached.
+            #    Default is "90", signifying a 90% memory usage
+            #    threshold.
+            highWatermark: 90
+            # * 'limit'          : The maximum (bytes) per rank that can
+            #    be allocated across all resource groups.
+            limit: "1Gi"
+            # * 'low_watermark'  : Percentage used recovery threshold.
+            #    Once usage exceeds the 'high_watermark', evictions
+            #    will continue until usage falls below this recovery
+            #    threshold. Default is "80", signifying an 80% usage
+            #    threshold.
+            lowWatermark: 80
+            name: string
+            # A base directory to use as a space for this tier.
+            path: "default"
+            projectID: string
+            provisioner: "docker.io/hostpath"
+            # Kubernetes Persistent Volume Claim for this disk tier.
+            volumeClaim:
+              # APIVersion defines the versioned schema of this
+              # representation of an object. Servers should convert
+              # recognized schemas to the latest internal value, and
+              # may reject unrecognized values. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+              apiVersion: app.kinetica.com/v1
+              # Kind is a string value representing the REST resource
+              # this object represents. Servers may infer this from the
+              # endpoint the client submits requests to. Cannot be
+              # updated. In CamelCase. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+              kind: KineticaCluster
+              # Standard object's metadata. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
+              metadata: {}
+              # spec defines the desired characteristics of a volume
+              # requested by a pod author. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+              spec:
+                # accessModes contains the desired access modes the
+                # volume should have. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                accessModes: ["string"]
+                # dataSource field can be used to specify either: * An
+                # existing VolumeSnapshot object
+                # (snapshot.storage.k8s.io/VolumeSnapshot) * An
+                # existing PVC (PersistentVolumeClaim) If the
+                # provisioner or an external controller can support the
+                # specified data source, it will create a new volume
+                # based on the contents of the specified data source.
+                # When the AnyVolumeDataSource feature gate is enabled,
+                # dataSource contents will be copied to dataSourceRef,
+                # and dataSourceRef contents will be copied to
+                # dataSource when dataSourceRef.namespace is not
+                # specified. If the namespace is specified, then
+                # dataSourceRef will not be copied to dataSource.
+                dataSource:
+                  # APIGroup is the group for the resource being
+                  # referenced. If APIGroup is not specified, the
+                  # specified Kind must be in the core API group. For
+                  # any other third-party types, APIGroup is required.
+                  apiGroup: string
+                  # Kind is the type of resource being referenced
+                  kind: KineticaCluster
+                  # Name is the name of resource being referenced
+                  name: string
+                # dataSourceRef specifies the object from which to
+                # populate the volume with data, if a non-empty volume
+                # is desired. This may be any object from a non-empty
+                # API group (non core object) or a
+                # PersistentVolumeClaim object. When this field is
+                # specified, volume binding will only succeed if the
+                # type of the specified object matches some installed
+                # volume populator or dynamic provisioner. This field
+                # will replace the functionality of the dataSource
+                # field and as such if both fields are non-empty, they
+                # must have the same value. For backwards
+                # compatibility, when namespace isn't specified in
+                # dataSourceRef, both fields (dataSource and
+                # dataSourceRef) will be set to the same value
+                # automatically if one of them is empty and the other
+                # is non-empty. When namespace is specified in
+                # dataSourceRef, dataSource isn't set to the same value
+                # and must be empty. There are three important
+                # differences between dataSource and dataSourceRef: *
+                # While dataSource only allows two specific types of
+                # objects, dataSourceRef allows any non-core object, as
+                # well as PersistentVolumeClaim objects. * While
+                # dataSource ignores disallowed values (dropping them),
+                # dataSourceRef preserves all values, and generates an
+                # error if a disallowed value is specified. * While
+                # dataSource only allows local objects, dataSourceRef
+                # allows objects in any namespaces. (Beta) Using this
+                # field requires the AnyVolumeDataSource feature gate
+                # to be enabled. (Alpha) Using the namespace field of
+                # dataSourceRef requires the
+                # CrossNamespaceVolumeDataSource feature gate to be
+                # enabled.
+                dataSourceRef:
+                  # APIGroup is the group for the resource being
+                  # referenced. If APIGroup is not specified, the
+                  # specified Kind must be in the core API group. For
+                  # any other third-party types, APIGroup is required.
+                  apiGroup: string
+                  # Kind is the type of resource being referenced
+                  kind: KineticaCluster
+                  # Name is the name of resource being referenced
+                  name: string
+                  # Namespace is the namespace of resource being
+                  # referenced Note that when a namespace is specified,
+                  # a gateway.networking.k8s.io/ReferenceGrant object
+                  # is required in the referent namespace to allow that
+                  # namespace's owner to accept the reference. See the
+                  # ReferenceGrant documentation for details.
+                  # (Alpha) This field requires the
+                  # CrossNamespaceVolumeDataSource feature gate to be
+                  # enabled.
+                  namespace: string
+                # resources represents the minimum resources the volume
+                # should have. If RecoverVolumeExpansionFailure feature
+                # is enabled users are allowed to specify resource
+                # requirements that are lower than previous value but
+                # must still be higher than capacity recorded in the
+                # status field of the claim. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
+                resources:
+                  # Claims lists the names of resources, defined in
+                  # spec.resourceClaims, that are used by this
+                  # container. This is an alpha field and requires
+                  # enabling the DynamicResourceAllocation feature
+                  # gate. This field is immutable. It can only be set
+                  # for containers.
+                  claims:
+                  - name: string
+                  # Limits describes the maximum amount of compute
+                  # resources allowed. More info:
+                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                  limits: {}
+                  # Requests describes the minimum amount of compute
+                  # resources required. If Requests is omitted for a
+                  # container, it defaults to Limits if that is
+                  # explicitly specified, otherwise to an
+                  # implementation-defined value. Requests cannot
+                  # exceed Limits. More info:
+                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                  requests: {}
+                # selector is a label query over volumes to consider for
+                # binding.
+                selector:
+                  # matchExpressions is a list of label selector
+                  # requirements. The requirements are ANDed.
+                  matchExpressions:
+                  - key: string
+                    # operator represents a key's relationship to a set
+                    # of values. Valid operators are In, NotIn, Exists
+                    # and DoesNotExist.
+                    operator: string
+                    # values is an array of string values. If the
+                    # operator is In or NotIn, the values array must be
+                    # non-empty. If the operator is Exists or
+                    # DoesNotExist, the values array must be empty.
+                    # This array is replaced during a strategic merge
+                    # patch.
+                    values: ["string"]
+                  # matchLabels is a map of {key,value} pairs. A single
+                  # {key,value} in the matchLabels map is equivalent to
+                  # an element of matchExpressions, whose key field
+                  # is "key", the operator is "In", and the values
+                  # array contains only "value". The requirements are
+                  # ANDed.
+                  matchLabels: {}
+                # storageClassName is the name of the StorageClass
+                # required by the claim. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
+                storageClassName: string
+                # volumeMode defines what type of volume is required by
+                # the claim. Value of Filesystem is implied when not
+                # included in claim spec.
+                volumeMode: string
+                # volumeName is the binding reference to the
+                # PersistentVolume backing this claim.
+                volumeName: string
+              # status represents the current information/status of a
+              # persistent volume claim. Read-only. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+              status:
+                # accessModes contains the actual access modes the
+                # volume backing the PVC has. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                accessModes: ["string"]
+                # allocatedResources is the storage resource within
+                # AllocatedResources tracks the capacity allocated to a
+                # PVC. It may be larger than the actual capacity when a
+                # volume expansion operation is requested. For storage
+                # quota, the larger value from allocatedResources and
+                # PVC.spec.resources is used. If allocatedResources is
+                # not set, PVC.spec.resources alone is used for quota
+                # calculation. If a volume expansion capacity request
+                # is lowered, allocatedResources is only lowered if
+                # there are no expansion operations in progress and if
+                # the actual volume capacity is equal or lower than the
+                # requested capacity. This is an alpha field and
+                # requires enabling RecoverVolumeExpansionFailure
+                # feature.
+                allocatedResources: {}
+                # capacity represents the actual resources of the
+                # underlying volume.
+                capacity: {}
+                # conditions is the current Condition of persistent
+                # volume claim. If underlying persistent volume is
+                # being resized then the Condition will be set
+                # to 'ResizeStarted'.
+                conditions:
+                - lastProbeTime: string
+                  # lastTransitionTime is the time the condition
+                  # transitioned from one status to another.
+                  lastTransitionTime: string
+                  # message is the human-readable message indicating
+                  # details about last transition.
+                  message: string
+                  # reason is a unique, short, machine-understandable
+                  # string that gives the reason for the condition's
+                  # last transition. If it reports "ResizeStarted",
+                  # the underlying persistent volume is being resized.
+                  reason: string
+                  status: string
+                  # PersistentVolumeClaimConditionType is a valid value
+                  # of PersistentVolumeClaimCondition.Type
+                  type: string
+                # phase represents the current phase of
+                # PersistentVolumeClaim.
+                phase: string
+                # resizeStatus stores status of resize operation.
+                # ResizeStatus is not set by default but when expansion
+                # is complete resizeStatus is set to empty string by
+                # resize controller or kubelet. This is an alpha field
+                # and requires enabling RecoverVolumeExpansionFailure
+                # feature.
+                resizeStatus: string
+            # 'wait_timeout'          : Timeout in seconds for reading
+            #  from or writing to this storage provider.
+            waitTimeout: "90"
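+          # Illustrative sketch (not part of the schema above): a
+          # minimal GCS cold storage tier, assuming credentials are
+          # picked up from GOOGLE_APPLICATION_CREDENTIALS; bucket,
+          # project and sizes are placeholder values only.
+          #
+          # coldStorageGCS:
+          #   bucketName: "my-kinetica-cold"
+          #   basePath: "gpudb/cold"
+          #   projectID: "my-gcp-project"
+          #   limit: "10Ti"
+          #   highWatermark: 90
+          #   lowWatermark: 80
+          #   waitTimeout: "90"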
+          # ColdStorageHDFS
+          coldStorageHDFS:
+            # ColdStorageDisk
+            default:
+              # 'base_path'             : A base path based on the
+              #  provider type for this tier.
+              basePath: string
+              # 'connection_timeout'    : Timeout in seconds for
+              #  connecting to this storage provider.
+              connectionTimeout: "30"
+              # * 'high_watermark' : Percentage used eviction threshold.
+              #    Once usage exceeds this value, evictions from this
+              #    tier will be scheduled in the background and
+              #    continue until the 'low_watermark' percentage usage
+              #    is reached.  Default is "90", signifying a 90%
+              #    memory usage threshold.
+              highWatermark: 90
+              # * 'limit'          : The maximum (bytes) per rank that
+              #    can be allocated across all resource groups.
+              limit: "1Gi"
+              # * 'low_watermark'  : Percentage used recovery threshold.
+              #    Once usage exceeds the 'high_watermark', evictions
+              #    will continue until usage falls below this recovery
+              #    threshold. Default is "80", signifying an 80% usage
+              #    threshold.
+              lowWatermark: 80
+              name: string
+              # A base directory to use as a space for this tier.
+              path: "default"
+              provisioner: "docker.io/hostpath"
+              # Kubernetes Persistent Volume Claim for this disk tier.
+              volumeClaim:
+                # APIVersion defines the versioned schema of this
+                # representation of an object. Servers should convert
+                # recognized schemas to the latest internal value, and
+                # may reject unrecognized values. More info:
+                # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+                apiVersion: app.kinetica.com/v1
+                # Kind is a string value representing the REST resource
+                # this object represents. Servers may infer this from
+                # the endpoint the client submits requests to. Cannot
+                # be updated. In CamelCase. More info:
+                # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+                kind: KineticaCluster
+                # Standard object's metadata. More info:
+                # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
+                metadata: {}
+                # spec defines the desired characteristics of a volume
+                # requested by a pod author. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+                spec:
+                  # accessModes contains the desired access modes the
+                  # volume should have. More info:
+                  # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                  accessModes: ["string"]
+                  # dataSource field can be used to specify either: * An
+                  # existing VolumeSnapshot object
+                  # (snapshot.storage.k8s.io/VolumeSnapshot) * An
+                  # existing PVC (PersistentVolumeClaim) If the
+                  # provisioner or an external controller can support
+                  # the specified data source, it will create a new
+                  # volume based on the contents of the specified data
+                  # source. When the AnyVolumeDataSource feature gate
+                  # is enabled, dataSource contents will be copied to
+                  # dataSourceRef, and dataSourceRef contents will be
+                  # copied to dataSource when dataSourceRef.namespace
+                  # is not specified. If the namespace is specified,
+                  # then dataSourceRef will not be copied to
+                  # dataSource.
+                  dataSource:
+                    # APIGroup is the group for the resource being
+                    # referenced. If APIGroup is not specified, the
+                    # specified Kind must be in the core API group. For
+                    # any other third-party types, APIGroup is
+                    # required.
+                    apiGroup: string
+                    # Kind is the type of resource being referenced
+                    kind: KineticaCluster
+                    # Name is the name of resource being referenced
+                    name: string
+                  # dataSourceRef specifies the object from which to
+                  # populate the volume with data, if a non-empty
+                  # volume is desired. This may be any object from a
+                  # non-empty API group (non core object) or a
+                  # PersistentVolumeClaim object. When this field is
+                  # specified, volume binding will only succeed if the
+                  # type of the specified object matches some installed
+                  # volume populator or dynamic provisioner. This field
+                  # will replace the functionality of the dataSource
+                  # field and as such if both fields are non-empty,
+                  # they must have the same value. For backwards
+                  # compatibility, when namespace isn't specified in
+                  # dataSourceRef, both fields (dataSource and
+                  # dataSourceRef) will be set to the same value
+                  # automatically if one of them is empty and the other
+                  # is non-empty. When namespace is specified in
+                  # dataSourceRef, dataSource isn't set to the same
+                  # value and must be empty. There are three important
+                  # differences between dataSource and dataSourceRef: *
+                  # While dataSource only allows two specific types of
+                  # objects, dataSourceRef allows any non-core object,
+                  # as well as PersistentVolumeClaim objects. * While
+                  # dataSource ignores disallowed values
+                  # (dropping them), dataSourceRef preserves all
+                  # values, and generates an error if a disallowed
+                  # value is specified. * While dataSource only allows
+                  # local objects, dataSourceRef allows objects in any
+                  # namespaces. (Beta) Using this field requires the
+                  # AnyVolumeDataSource feature gate to be enabled.
+                  # (Alpha) Using the namespace field of dataSourceRef
+                  # requires the CrossNamespaceVolumeDataSource feature
+                  # gate to be enabled.
+                  dataSourceRef:
+                    # APIGroup is the group for the resource being
+                    # referenced. If APIGroup is not specified, the
+                    # specified Kind must be in the core API group. For
+                    # any other third-party types, APIGroup is
+                    # required.
+                    apiGroup: string
+                    # Kind is the type of resource being referenced
+                    kind: KineticaCluster
+                    # Name is the name of resource being referenced
+                    name: string
+                    # Namespace is the namespace of resource being
+                    # referenced Note that when a namespace is
+                    # specified, a
+                    # gateway.networking.k8s.io/ReferenceGrant object
+                    # is required in the referent namespace to allow
+                    # that namespace's owner to accept the reference.
+                    # See the ReferenceGrant documentation for
+                    # details. (Alpha) This field requires the
+                    # CrossNamespaceVolumeDataSource feature gate to be
+                    # enabled.
+                    namespace: string
+                  # resources represents the minimum resources the
+                  # volume should have. If
+                  # RecoverVolumeExpansionFailure feature is enabled
+                  # users are allowed to specify resource requirements
+                  # that are lower than previous value but must still
+                  # be higher than capacity recorded in the status
+                  # field of the claim. More info:
+                  # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
+                  resources:
+                    # Claims lists the names of resources, defined in
+                    # spec.resourceClaims, that are used by this
+                    # container. This is an alpha field and requires
+                    # enabling the DynamicResourceAllocation feature
+                    # gate. This field is immutable. It can only be set
+                    # for containers.
+                    claims:
+                    - name: string
+                    # Limits describes the maximum amount of compute
+                    # resources allowed. More info:
+                    # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                    limits: {}
+                    # Requests describes the minimum amount of compute
+                    # resources required. If Requests is omitted for a
+                    # container, it defaults to Limits if that is
+                    # explicitly specified, otherwise to an
+                    # implementation-defined value. Requests cannot
+                    # exceed Limits. More info:
+                    # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                    requests: {}
+                  # selector is a label query over volumes to consider
+                  # for binding.
+                  selector:
+                    # matchExpressions is a list of label selector
+                    # requirements. The requirements are ANDed.
+                    matchExpressions:
+                    - key: string
+                      # operator represents a key's relationship to a
+                      # set of values. Valid operators are In, NotIn,
+                      # Exists and DoesNotExist.
+                      operator: string
+                      # values is an array of string values. If the
+                      # operator is In or NotIn, the values array must
+                      # be non-empty. If the operator is Exists or
+                      # DoesNotExist, the values array must be empty.
+                      # This array is replaced during a strategic merge
+                      # patch.
+                      values: ["string"]
+                    # matchLabels is a map of {key,value} pairs. A
+                    # single {key,value} in the matchLabels map is
+                    # equivalent to an element of matchExpressions,
+                    # whose key field is "key", the operator is "In",
+                    # and the values array contains only "value". The
+                    # requirements are ANDed.
+                    matchLabels: {}
+                  # storageClassName is the name of the StorageClass
+                  # required by the claim. More info:
+                  # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
+                  storageClassName: string
+                  # volumeMode defines what type of volume is required
+                  # by the claim. Value of Filesystem is implied when
+                  # not included in claim spec.
+                  volumeMode: string
+                  # volumeName is the binding reference to the
+                  # PersistentVolume backing this claim.
+                  volumeName: string
+                # status represents the current information/status of a
+                # persistent volume claim. Read-only. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+                status:
+                  # accessModes contains the actual access modes the
+                  # volume backing the PVC has. More info:
+                  # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                  accessModes: ["string"]
+                  # allocatedResources is the storage resource within
+                  # AllocatedResources tracks the capacity allocated to
+                  # a PVC. It may be larger than the actual capacity
+                  # when a volume expansion operation is requested. For
+                  # storage quota, the larger value from
+                  # allocatedResources and PVC.spec.resources is used.
+                  # If allocatedResources is not set,
+                  # PVC.spec.resources alone is used for quota
+                  # calculation. If a volume expansion capacity request
+                  # is lowered, allocatedResources is only lowered if
+                  # there are no expansion operations in progress and
+                  # if the actual volume capacity is equal or lower
+                  # than the requested capacity. This is an alpha field
+                  # and requires enabling RecoverVolumeExpansionFailure
+                  # feature.
+                  allocatedResources: {}
+                  # capacity represents the actual resources of the
+                  # underlying volume.
+                  capacity: {}
+                  # conditions is the current Condition of persistent
+                  # volume claim. If underlying persistent volume is
+                  # being resized then the Condition will be set
+                  # to 'ResizeStarted'.
+                  conditions:
+                  - lastProbeTime: string
+                    # lastTransitionTime is the time the condition
+                    # transitioned from one status to another.
+                    lastTransitionTime: string
+                    # message is the human-readable message indicating
+                    # details about last transition.
+                    message: string
+                    # reason is a unique, short, machine-understandable
+                    # string that gives the reason for the condition's
+                    # last transition. If it reports "ResizeStarted",
+                    # the underlying persistent volume is being
+                    # resized.
+                    reason: string
+                    status: string
+                    # PersistentVolumeClaimConditionType is a valid
+                    # value of PersistentVolumeClaimCondition.Type
+                    type: string
+                  # phase represents the current phase of
+                  # PersistentVolumeClaim.
+                  phase: string
+                  # resizeStatus stores status of resize operation.
+                  # ResizeStatus is not set by default but when
+                  # expansion is complete resizeStatus is set to empty
+                  # string by resize controller or kubelet. This is an
+                  # alpha field and requires enabling
+                  # RecoverVolumeExpansionFailure feature.
+                  resizeStatus: string
+              # 'wait_timeout'          : Timeout in seconds for reading
+              #  from or writing to this storage provider.
+              waitTimeout: "90"
+            # 'hdfs_kerberos_keytab'  : The Kerberos keytab file used to
+            #  authenticate the "gpudb" Kerberos principal.
+            kerberosKeytab: string
+            # 'hdfs_principal'        : The effective principal name to
+            #  use when connecting to the hadoop cluster.
+            principal: string
+            # 'hdfs_uri'              : The host IP address & port for
+            #  the hadoop distributed file system. For example:
+            #  hdfs://localhost:8020
+            uri: string
+            # 'hdfs_use_kerberos'     : Set to "true" to enable Kerberos
+            #  authentication to an HDFS storage server. The
+            #  credentials of the principal are in the file specified
+            #  by the 'hdfs_kerberos_keytab' parameter. Note that
+            #  Kerberos's *kinit* command will be run when the database
+            #  is started.
+            useKerberos: true
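+          # Illustrative sketch (not part of the schema above): an
+          # HDFS cold storage tier secured with Kerberos; the URI,
+          # principal and keytab path are placeholder values only.
+          #
+          # coldStorageHDFS:
+          #   default:
+          #     basePath: "/kinetica/cold"
+          #     limit: "10Ti"
+          #   uri: "hdfs://namenode.example.com:8020"
+          #   useKerberos: true
+          #   principal: "gpudb@EXAMPLE.COM"
+          #   kerberosKeytab: "/opt/gpudb/kerberos/gpudb.keytab"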
+          # ColdStorageS3
+          coldStorageS3:
+            awsAccessKeyId: string
+            awsRoleARN: string
+            awsSecretAccessKey: string
+            # 'base_path'             : A base path based on the
+            #  provider type for this tier.
+            basePath: string
+            bucketName: string
+            # 'connection_timeout'    : Timeout in seconds for
+            #  connecting to this storage provider.
+            connectionTimeout: "30"
+            encryptionCustomerAlgorithm: string
+            encryptionCustomerKey: string
+            # EncryptionType - This is optional and valid values are
+            # sse-s3 (Encryption key is managed by Amazon S3) and
+            # sse-kms (Encryption key is managed by AWS Key Management
+            # Service (kms)).
+            encryptionType: string
+            # Endpoint - s3_endpoint
+            endpoint: string
+            # * 'high_watermark' : Percentage used eviction threshold.
+            #    Once usage exceeds this value, evictions from this
+            #    tier will be scheduled in the background and continue
+            #    until the 'low_watermark' percentage usage is reached.
+            #    Default is "90", signifying a 90% memory usage
+            #    threshold.
+            highWatermark: 90
+            # KMSKeyID - This is optional and must be specified when
+            # encryption type is sse-kms.
+            kmsKeyID: string
+            # * 'limit'          : The maximum (bytes) per rank that can
+            #    be allocated across all resource groups.
+            limit: "1Gi"
+            # * 'low_watermark'  : Percentage used recovery threshold.
+            #    Once usage exceeds the 'high_watermark', evictions
+            #    will continue until usage falls below this recovery
+            #    threshold. Default is "80", signifying an 80% usage
+            #    threshold.
+            lowWatermark: 80
+            name: string
+            # A base directory to use as a space for this tier.
+            path: "default"
+            provisioner: "docker.io/hostpath"
+            region: string
+            useManagedCredentials: true
+            # UseVirtualAddressing - 's3_use_virtual_addressing'  : If
+            # true (default), S3 endpoints will be constructed using
+            # the 'virtual' style which includes the bucket name as
+            # part of the hostname. Set to false to use the 'path'
+            # style which treats the bucket name as if it is a path in
+            # the URI.
+            useVirtualAddressing: true
+            # Kubernetes Persistent Volume Claim for this disk tier.
+            volumeClaim:
+              # APIVersion defines the versioned schema of this
+              # representation of an object. Servers should convert
+              # recognized schemas to the latest internal value, and
+              # may reject unrecognized values. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+              apiVersion: app.kinetica.com/v1
+              # Kind is a string value representing the REST resource
+              # this object represents. Servers may infer this from the
+              # endpoint the client submits requests to. Cannot be
+              # updated. In CamelCase. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+              kind: KineticaCluster
+              # Standard object's metadata. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
+              metadata: {}
+              # spec defines the desired characteristics of a volume
+              # requested by a pod author. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+              spec:
+                # accessModes contains the desired access modes the
+                # volume should have. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                accessModes: ["string"]
+                # dataSource field can be used to specify either: * An
+                # existing VolumeSnapshot object
+                # (snapshot.storage.k8s.io/VolumeSnapshot) * An
+                # existing PVC (PersistentVolumeClaim) If the
+                # provisioner or an external controller can support the
+                # specified data source, it will create a new volume
+                # based on the contents of the specified data source.
+                # When the AnyVolumeDataSource feature gate is enabled,
+                # dataSource contents will be copied to dataSourceRef,
+                # and dataSourceRef contents will be copied to
+                # dataSource when dataSourceRef.namespace is not
+                # specified. If the namespace is specified, then
+                # dataSourceRef will not be copied to dataSource.
+                dataSource:
+                  # APIGroup is the group for the resource being
+                  # referenced. If APIGroup is not specified, the
+                  # specified Kind must be in the core API group. For
+                  # any other third-party types, APIGroup is required.
+                  apiGroup: string
+                  # Kind is the type of resource being referenced
+                  kind: KineticaCluster
+                  # Name is the name of resource being referenced
+                  name: string
+                # dataSourceRef specifies the object from which to
+                # populate the volume with data, if a non-empty volume
+                # is desired. This may be any object from a non-empty
+                # API group (non core object) or a
+                # PersistentVolumeClaim object. When this field is
+                # specified, volume binding will only succeed if the
+                # type of the specified object matches some installed
+                # volume populator or dynamic provisioner. This field
+                # will replace the functionality of the dataSource
+                # field and as such if both fields are non-empty, they
+                # must have the same value. For backwards
+                # compatibility, when namespace isn't specified in
+                # dataSourceRef, both fields (dataSource and
+                # dataSourceRef) will be set to the same value
+                # automatically if one of them is empty and the other
+                # is non-empty. When namespace is specified in
+                # dataSourceRef, dataSource isn't set to the same value
+                # and must be empty. There are three important
+                # differences between dataSource and dataSourceRef: *
+                # While dataSource only allows two specific types of
+                # objects, dataSourceRef allows any non-core object, as
+                # well as PersistentVolumeClaim objects. * While
+                # dataSource ignores disallowed values (dropping them),
+                # dataSourceRef preserves all values, and generates an
+                # error if a disallowed value is specified. * While
+                # dataSource only allows local objects, dataSourceRef
+                # allows objects in any namespaces. (Beta) Using this
+                # field requires the AnyVolumeDataSource feature gate
+                # to be enabled. (Alpha) Using the namespace field of
+                # dataSourceRef requires the
+                # CrossNamespaceVolumeDataSource feature gate to be
+                # enabled.
+                dataSourceRef:
+                  # APIGroup is the group for the resource being
+                  # referenced. If APIGroup is not specified, the
+                  # specified Kind must be in the core API group. For
+                  # any other third-party types, APIGroup is required.
+                  apiGroup: string
+                  # Kind is the type of resource being referenced
+                  kind: KineticaCluster
+                  # Name is the name of resource being referenced
+                  name: string
+                  # Namespace is the namespace of resource being
+                  # referenced Note that when a namespace is specified,
+                  # a gateway.networking.k8s.io/ReferenceGrant object
+                  # is required in the referent namespace to allow that
+                  # namespace's owner to accept the reference. See the
+                  # ReferenceGrant documentation for details.
+                  # (Alpha) This field requires the
+                  # CrossNamespaceVolumeDataSource feature gate to be
+                  # enabled.
+                  namespace: string
+                # resources represents the minimum resources the volume
+                # should have. If RecoverVolumeExpansionFailure feature
+                # is enabled users are allowed to specify resource
+                # requirements that are lower than previous value but
+                # must still be higher than capacity recorded in the
+                # status field of the claim. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
+                resources:
+                  # Claims lists the names of resources, defined in
+                  # spec.resourceClaims, that are used by this
+                  # container. This is an alpha field and requires
+                  # enabling the DynamicResourceAllocation feature
+                  # gate. This field is immutable. It can only be set
+                  # for containers.
+                  claims:
+                  - name: string
+                  # Limits describes the maximum amount of compute
+                  # resources allowed. More info:
+                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                  limits: {}
+                  # Requests describes the minimum amount of compute
+                  # resources required. If Requests is omitted for a
+                  # container, it defaults to Limits if that is
+                  # explicitly specified, otherwise to an
+                  # implementation-defined value. Requests cannot
+                  # exceed Limits. More info:
+                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                  requests: {}
+                # selector is a label query over volumes to consider for
+                # binding.
+                selector:
+                  # matchExpressions is a list of label selector
+                  # requirements. The requirements are ANDed.
+                  matchExpressions:
+                  - key: string
+                    # operator represents a key's relationship to a set
+                    # of values. Valid operators are In, NotIn, Exists
+                    # and DoesNotExist.
+                    operator: string
+                    # values is an array of string values. If the
+                    # operator is In or NotIn, the values array must be
+                    # non-empty. If the operator is Exists or
+                    # DoesNotExist, the values array must be empty.
+                    # This array is replaced during a strategic merge
+                    # patch.
+                    values: ["string"]
+                  # matchLabels is a map of {key,value} pairs. A single
+                  # {key,value} in the matchLabels map is equivalent to
+                  # an element of matchExpressions, whose key field
+                  # is "key", the operator is "In", and the values
+                  # array contains only "value". The requirements are
+                  # ANDed.
+                  matchLabels: {}
+                # storageClassName is the name of the StorageClass
+                # required by the claim. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
+                storageClassName: string
+                # volumeMode defines what type of volume is required by
+                # the claim. Value of Filesystem is implied when not
+                # included in claim spec.
+                volumeMode: string
+                # volumeName is the binding reference to the
+                # PersistentVolume backing this claim.
+                volumeName: string
+              # status represents the current information/status of a
+              # persistent volume claim. Read-only. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+              status:
+                # accessModes contains the actual access modes the
+                # volume backing the PVC has. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                accessModes: ["string"]
+                # allocatedResources is the storage resource within
+                # AllocatedResources tracks the capacity allocated to a
+                # PVC. It may be larger than the actual capacity when a
+                # volume expansion operation is requested. For storage
+                # quota, the larger value from allocatedResources and
+                # PVC.spec.resources is used. If allocatedResources is
+                # not set, PVC.spec.resources alone is used for quota
+                # calculation. If a volume expansion capacity request
+                # is lowered, allocatedResources is only lowered if
+                # there are no expansion operations in progress and if
+                # the actual volume capacity is equal or lower than the
+                # requested capacity. This is an alpha field and
+                # requires enabling RecoverVolumeExpansionFailure
+                # feature.
+                allocatedResources: {}
+                # capacity represents the actual resources of the
+                # underlying volume.
+                capacity: {}
+                # conditions is the current Condition of persistent
+                # volume claim. If underlying persistent volume is
+                # being resized then the Condition will be set
+                # to 'ResizeStarted'.
+                conditions:
+                - lastProbeTime: string
+                  # lastTransitionTime is the time the condition
+                  # transitioned from one status to another.
+                  lastTransitionTime: string
+                  # message is the human-readable message indicating
+                  # details about last transition.
+                  message: string
+                  # reason is a unique, short, machine-understandable
+                  # string that gives the reason for the condition's
+                  # last transition. If it reports "ResizeStarted",
+                  # the underlying persistent volume is being resized.
+                  reason: string
+                  status: string
+                  # PersistentVolumeClaimConditionType is a valid value
+                  # of PersistentVolumeClaimCondition.Type
+                  type: string
+                # phase represents the current phase of
+                # PersistentVolumeClaim.
+                phase: string
+                # resizeStatus stores status of resize operation.
+                # ResizeStatus is not set by default but when expansion
+                # is complete resizeStatus is set to empty string by
+                # resize controller or kubelet. This is an alpha field
+                # and requires enabling RecoverVolumeExpansionFailure
+                # feature.
+                resizeStatus: string
+            # 'wait_timeout'          : Timeout in seconds for reading
+            #  from or writing to this storage provider.
+            waitTimeout: "90"
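+          # Illustrative sketch (not part of the schema above): an S3
+          # cold storage tier using IAM-managed credentials and
+          # SSE-KMS encryption; bucket, region and KMS key ARN are
+          # placeholder values only.
+          #
+          # coldStorageS3:
+          #   bucketName: "my-kinetica-cold"
+          #   basePath: "gpudb/cold"
+          #   region: "us-east-1"
+          #   useManagedCredentials: true
+          #   useVirtualAddressing: true
+          #   encryptionType: "sse-kms"
+          #   kmsKeyID: "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"
+          #   limit: "10Ti"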
+          # ColdStorageType The storage provider type. Currently
+          # supports "none", "disk" (local/network storage), "hdfs"
+          # (Hadoop distributed filesystem), "s3" (Amazon S3
+          # bucket), "azure_blob" (Microsoft Azure Blob Storage)
+          # and "gcs" (Google GCS Bucket).
+          coldStorageType: "none"
+          name: string
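+          # Illustrative note (assumption): cold storage is not used
+          # while coldStorageType remains "none"; to enable a provider,
+          # set it to the matching value and populate the corresponding
+          # block above, e.g.
+          #
+          # coldStorageType: "s3"
+          # name: "cold-storage-s3"   # placeholder tier name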
+        # The DiskCacheTier are used as temporary swap space for data
+        # that doesn't fit in RAM or VRAM. The disk should be as fast
+        # or faster than the Persist Tier storage since this tier is
+        # used as an intermediary cache between the RAM and Persist
+        # Tiers.
+        diskCacheTier:
+          # DiskTierStorageLimit
+          default:
+            # * 'high_watermark' : Percentage used eviction threshold.
+            #    Once usage exceeds this value, evictions from this
+            #    tier will be scheduled in the background and continue
+            #    until the 'low_watermark' percentage usage is reached.
+            #    Default is "90", signifying a 90% memory usage
+            #    threshold.
+            highWatermark: 90
+            # * 'limit'          : The maximum (bytes) per rank that can
+            #    be allocated across all resource groups.
+            limit: "1Gi"
+            # * 'low_watermark'  : Percentage used recovery threshold.
+            #    Once usage exceeds the 'high_watermark', evictions
+            #    will continue until usage falls below this recovery
+            #    threshold. Default is "80", signifying an 80% usage
+            #    threshold.
+            lowWatermark: 80
+            name: string
+            # A base directory to use as a space for this tier.
+            path: "default"
+            provisioner: "docker.io/hostpath"
+            # Kubernetes Persistent Volume Claim for this disk tier.
+            volumeClaim:
+              # APIVersion defines the versioned schema of this
+              # representation of an object. Servers should convert
+              # recognized schemas to the latest internal value, and
+              # may reject unrecognized values. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+              apiVersion: app.kinetica.com/v1
+              # Kind is a string value representing the REST resource
+              # this object represents. Servers may infer this from the
+              # endpoint the client submits requests to. Cannot be
+              # updated. In CamelCase. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+              kind: KineticaCluster
+              # Standard object's metadata. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
+              metadata: {}
+              # spec defines the desired characteristics of a volume
+              # requested by a pod author. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+              spec:
+                # accessModes contains the desired access modes the
+                # volume should have. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                accessModes: ["string"]
+                # dataSource field can be used to specify either: * An
+                # existing VolumeSnapshot object
+                # (snapshot.storage.k8s.io/VolumeSnapshot) * An
+                # existing PVC (PersistentVolumeClaim) If the
+                # provisioner or an external controller can support the
+                # specified data source, it will create a new volume
+                # based on the contents of the specified data source.
+                # When the AnyVolumeDataSource feature gate is enabled,
+                # dataSource contents will be copied to dataSourceRef,
+                # and dataSourceRef contents will be copied to
+                # dataSource when dataSourceRef.namespace is not
+                # specified. If the namespace is specified, then
+                # dataSourceRef will not be copied to dataSource.
+                dataSource:
+                  # APIGroup is the group for the resource being
+                  # referenced. If APIGroup is not specified, the
+                  # specified Kind must be in the core API group. For
+                  # any other third-party types, APIGroup is required.
+                  apiGroup: string
+                  # Kind is the type of resource being referenced
+                  kind: KineticaCluster
+                  # Name is the name of resource being referenced
+                  name: string
+                # dataSourceRef specifies the object from which to
+                # populate the volume with data, if a non-empty volume
+                # is desired. This may be any object from a non-empty
+                # API group (non core object) or a
+                # PersistentVolumeClaim object. When this field is
+                # specified, volume binding will only succeed if the
+                # type of the specified object matches some installed
+                # volume populator or dynamic provisioner. This field
+                # will replace the functionality of the dataSource
+                # field and as such if both fields are non-empty, they
+                # must have the same value. For backwards
+                # compatibility, when namespace isn't specified in
+                # dataSourceRef, both fields (dataSource and
+                # dataSourceRef) will be set to the same value
+                # automatically if one of them is empty and the other
+                # is non-empty. When namespace is specified in
+                # dataSourceRef, dataSource isn't set to the same value
+                # and must be empty. There are three important
+                # differences between dataSource and dataSourceRef: *
+                # While dataSource only allows two specific types of
+                # objects, dataSourceRef allows any non-core object, as
+                # well as PersistentVolumeClaim objects. * While
+                # dataSource ignores disallowed values (dropping them),
+                # dataSourceRef preserves all values, and generates an
+                # error if a disallowed value is specified. * While
+                # dataSource only allows local objects, dataSourceRef
+                # allows objects in any namespaces. (Beta) Using this
+                # field requires the AnyVolumeDataSource feature gate
+                # to be enabled. (Alpha) Using the namespace field of
+                # dataSourceRef requires the
+                # CrossNamespaceVolumeDataSource feature gate to be
+                # enabled.
+                dataSourceRef:
+                  # APIGroup is the group for the resource being
+                  # referenced. If APIGroup is not specified, the
+                  # specified Kind must be in the core API group. For
+                  # any other third-party types, APIGroup is required.
+                  apiGroup: string
+                  # Kind is the type of resource being referenced
+                  kind: KineticaCluster
+                  # Name is the name of resource being referenced
+                  name: string
+                  # Namespace is the namespace of resource being
+                  # referenced Note that when a namespace is specified,
+                  # a gateway.networking.k8s.io/ReferenceGrant object
+                  # is required in the referent namespace to allow that
+                  # namespace's owner to accept the reference. See the
+                  # ReferenceGrant documentation for details.
+                  # (Alpha) This field requires the
+                  # CrossNamespaceVolumeDataSource feature gate to be
+                  # enabled.
+                  namespace: string
+                # resources represents the minimum resources the volume
+                # should have. If RecoverVolumeExpansionFailure feature
+                # is enabled users are allowed to specify resource
+                # requirements that are lower than previous value but
+                # must still be higher than capacity recorded in the
+                # status field of the claim. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
+                resources:
+                  # Claims lists the names of resources, defined in
+                  # spec.resourceClaims, that are used by this
+                  # container. This is an alpha field and requires
+                  # enabling the DynamicResourceAllocation feature
+                  # gate. This field is immutable. It can only be set
+                  # for containers.
+                  claims:
+                  - name: string
+                  # Limits describes the maximum amount of compute
+                  # resources allowed. More info:
+                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                  limits: {}
+                  # Requests describes the minimum amount of compute
+                  # resources required. If Requests is omitted for a
+                  # container, it defaults to Limits if that is
+                  # explicitly specified, otherwise to an
+                  # implementation-defined value. Requests cannot
+                  # exceed Limits. More info:
+                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                  requests: {}
+                # selector is a label query over volumes to consider for
+                # binding.
+                selector:
+                  # matchExpressions is a list of label selector
+                  # requirements. The requirements are ANDed.
+                  matchExpressions:
+                  - key: string
+                    # operator represents a key's relationship to a set
+                    # of values. Valid operators are In, NotIn, Exists
+                    # and DoesNotExist.
+                    operator: string
+                    # values is an array of string values. If the
+                    # operator is In or NotIn, the values array must be
+                    # non-empty. If the operator is Exists or
+                    # DoesNotExist, the values array must be empty.
+                    # This array is replaced during a strategic merge
+                    # patch.
+                    values: ["string"]
+                  # matchLabels is a map of {key,value} pairs. A single
+                  # {key,value} in the matchLabels map is equivalent to
+                  # an element of matchExpressions, whose key field
+                  # is "key", the operator is "In", and the values
+                  # array contains only "value". The requirements are
+                  # ANDed.
+                  matchLabels: {}
+                # storageClassName is the name of the StorageClass
+                # required by the claim. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
+                storageClassName: string
+                # volumeMode defines what type of volume is required by
+                # the claim. Value of Filesystem is implied when not
+                # included in claim spec.
+                volumeMode: string
+                # volumeName is the binding reference to the
+                # PersistentVolume backing this claim.
+                volumeName: string
+              # status represents the current information/status of a
+              # persistent volume claim. Read-only. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+              status:
+                # accessModes contains the actual access modes the
+                # volume backing the PVC has. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                accessModes: ["string"]
+                # allocatedResources tracks the capacity allocated to a
+                # PVC. It may be larger than the actual capacity when a
+                # volume expansion operation is requested. For storage
+                # quota, the larger value from allocatedResources and
+                # PVC.spec.resources is used. If allocatedResources is
+                # not set, PVC.spec.resources alone is used for quota
+                # calculation. If a volume expansion capacity request
+                # is lowered, allocatedResources is only lowered if
+                # there are no expansion operations in progress and if
+                # the actual volume capacity is equal or lower than the
+                # requested capacity. This is an alpha field and
+                # requires enabling RecoverVolumeExpansionFailure
+                # feature.
+                allocatedResources: {}
+                # capacity represents the actual resources of the
+                # underlying volume.
+                capacity: {}
+                # conditions is the current Condition of persistent
+                # volume claim. If underlying persistent volume is
+                # being resized then the Condition will be set
+                # to 'ResizeStarted'.
+                conditions:
+                - lastProbeTime: string
+                  # lastTransitionTime is the time the condition
+                  # transitioned from one status to another.
+                  lastTransitionTime: string
+                  # message is the human-readable message indicating
+                  # details about last transition.
+                  message: string
+                  # reason is unique; it should be a short,
+                  # machine-understandable string that gives the reason
+                  # for the condition's last transition. If it
+                  # reports "ResizeStarted", the underlying persistent
+                  # volume is being resized.
+                  reason: string
+                  status: string
+                  # PersistentVolumeClaimConditionType is a valid value
+                  # of PersistentVolumeClaimCondition.Type
+                  type: string
+                # phase represents the current phase of
+                # PersistentVolumeClaim.
+                phase: string
+                # resizeStatus stores status of resize operation.
+                # ResizeStatus is not set by default but when expansion
+                # is complete resizeStatus is set to empty string by
+                # resize controller or kubelet. This is an alpha field
+                # and requires enabling RecoverVolumeExpansionFailure
+                # feature.
+                resizeStatus: string
+          defaultStorePersistentObjects: true
+          ranks:
+          - highWatermark: 90
+            # * 'limit'          : The maximum (bytes) per rank that can
+            #    be allocated across all resource groups.
+            limit: "1Gi"
+            # * 'low_watermark'  : Percentage used recovery threshold.
+            #    Once usage exceeds the 'high_watermark', evictions
+            #    will continue until usage falls below this recovery
+            #    threshold. Default is "80", signifying an 80% usage
+            #    threshold.
+            lowWatermark: 80
+            name: string
+            # A base directory to use as a space for this tier.
+            path: "default"
+            provisioner: "docker.io/hostpath"
+            # Kubernetes Persistent Volume Claim for this disk tier.
+            volumeClaim:
+              # APIVersion defines the versioned schema of this
+              # representation of an object. Servers should convert
+              # recognized schemas to the latest internal value, and
+              # may reject unrecognized values. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+              apiVersion: app.kinetica.com/v1
+              # Kind is a string value representing the REST resource
+              # this object represents. Servers may infer this from the
+              # endpoint the client submits requests to. Cannot be
+              # updated. In CamelCase. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+              kind: KineticaCluster
+              # Standard object's metadata. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
+              metadata: {}
+              # spec defines the desired characteristics of a volume
+              # requested by a pod author. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+              spec:
+                # accessModes contains the desired access modes the
+                # volume should have. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                accessModes: ["string"]
+                # dataSource field can be used to specify either: * An
+                # existing VolumeSnapshot object
+                # (snapshot.storage.k8s.io/VolumeSnapshot) * An
+                # existing PVC (PersistentVolumeClaim) If the
+                # provisioner or an external controller can support the
+                # specified data source, it will create a new volume
+                # based on the contents of the specified data source.
+                # When the AnyVolumeDataSource feature gate is enabled,
+                # dataSource contents will be copied to dataSourceRef,
+                # and dataSourceRef contents will be copied to
+                # dataSource when dataSourceRef.namespace is not
+                # specified. If the namespace is specified, then
+                # dataSourceRef will not be copied to dataSource.
+                dataSource:
+                  # APIGroup is the group for the resource being
+                  # referenced. If APIGroup is not specified, the
+                  # specified Kind must be in the core API group. For
+                  # any other third-party types, APIGroup is required.
+                  apiGroup: string
+                  # Kind is the type of resource being referenced
+                  kind: KineticaCluster
+                  # Name is the name of resource being referenced
+                  name: string
+                # dataSourceRef specifies the object from which to
+                # populate the volume with data, if a non-empty volume
+                # is desired. This may be any object from a non-empty
+                # API group (non core object) or a
+                # PersistentVolumeClaim object. When this field is
+                # specified, volume binding will only succeed if the
+                # type of the specified object matches some installed
+                # volume populator or dynamic provisioner. This field
+                # will replace the functionality of the dataSource
+                # field and as such if both fields are non-empty, they
+                # must have the same value. For backwards
+                # compatibility, when namespace isn't specified in
+                # dataSourceRef, both fields (dataSource and
+                # dataSourceRef) will be set to the same value
+                # automatically if one of them is empty and the other
+                # is non-empty. When namespace is specified in
+                # dataSourceRef, dataSource isn't set to the same value
+                # and must be empty. There are three important
+                # differences between dataSource and dataSourceRef: *
+                # While dataSource only allows two specific types of
+                # objects, dataSourceRef allows any non-core object, as
+                # well as PersistentVolumeClaim objects. * While
+                # dataSource ignores disallowed values (dropping them),
+                # dataSourceRef preserves all values, and generates an
+                # error if a disallowed value is specified. * While
+                # dataSource only allows local objects, dataSourceRef
+                # allows objects in any namespaces. (Beta) Using this
+                # field requires the AnyVolumeDataSource feature gate
+                # to be enabled. (Alpha) Using the namespace field of
+                # dataSourceRef requires the
+                # CrossNamespaceVolumeDataSource feature gate to be
+                # enabled.
+                dataSourceRef:
+                  # APIGroup is the group for the resource being
+                  # referenced. If APIGroup is not specified, the
+                  # specified Kind must be in the core API group. For
+                  # any other third-party types, APIGroup is required.
+                  apiGroup: string
+                  # Kind is the type of resource being referenced
+                  kind: KineticaCluster
+                  # Name is the name of resource being referenced
+                  name: string
+                  # Namespace is the namespace of resource being
+                  # referenced Note that when a namespace is specified,
+                  # a gateway.networking.k8s.io/ReferenceGrant object
+                  # is required in the referent namespace to allow that
+                  # namespace's owner to accept the reference. See the
+                  # ReferenceGrant documentation for details.
+                  # (Alpha) This field requires the
+                  # CrossNamespaceVolumeDataSource feature gate to be
+                  # enabled.
+                  namespace: string
+                # resources represents the minimum resources the volume
+                # should have. If RecoverVolumeExpansionFailure feature
+                # is enabled users are allowed to specify resource
+                # requirements that are lower than previous value but
+                # must still be higher than capacity recorded in the
+                # status field of the claim. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
+                resources:
+                  # Claims lists the names of resources, defined in
+                  # spec.resourceClaims, that are used by this
+                  # container. This is an alpha field and requires
+                  # enabling the DynamicResourceAllocation feature
+                  # gate. This field is immutable. It can only be set
+                  # for containers.
+                  claims:
+                  - name: string
+                  # Limits describes the maximum amount of compute
+                  # resources allowed. More info:
+                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                  limits: {}
+                  # Requests describes the minimum amount of compute
+                  # resources required. If Requests is omitted for a
+                  # container, it defaults to Limits if that is
+                  # explicitly specified, otherwise to an
+                  # implementation-defined value. Requests cannot
+                  # exceed Limits. More info:
+                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                  requests: {}
+                # selector is a label query over volumes to consider for
+                # binding.
+                selector:
+                  # matchExpressions is a list of label selector
+                  # requirements. The requirements are ANDed.
+                  matchExpressions:
+                  - key: string
+                    # operator represents a key's relationship to a set
+                    # of values. Valid operators are In, NotIn, Exists
+                    # and DoesNotExist.
+                    operator: string
+                    # values is an array of string values. If the
+                    # operator is In or NotIn, the values array must be
+                    # non-empty. If the operator is Exists or
+                    # DoesNotExist, the values array must be empty.
+                    # This array is replaced during a strategic merge
+                    # patch.
+                    values: ["string"]
+                  # matchLabels is a map of {key,value} pairs. A single
+                  # {key,value} in the matchLabels map is equivalent to
+                  # an element of matchExpressions, whose key field
+                  # is "key", the operator is "In", and the values
+                  # array contains only "value". The requirements are
+                  # ANDed.
+                  matchLabels: {}
+                # storageClassName is the name of the StorageClass
+                # required by the claim. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
+                storageClassName: string
+                # volumeMode defines what type of volume is required by
+                # the claim. Value of Filesystem is implied when not
+                # included in claim spec.
+                volumeMode: string
+                # volumeName is the binding reference to the
+                # PersistentVolume backing this claim.
+                volumeName: string
+              # status represents the current information/status of a
+              # persistent volume claim. Read-only. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+              status:
+                # accessModes contains the actual access modes the
+                # volume backing the PVC has. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                accessModes: ["string"]
+                # allocatedResources tracks the capacity allocated to a
+                # PVC. It may be larger than the actual capacity when a
+                # volume expansion operation is requested. For storage
+                # quota, the larger value from allocatedResources and
+                # PVC.spec.resources is used. If allocatedResources is
+                # not set, PVC.spec.resources alone is used for quota
+                # calculation. If a volume expansion capacity request
+                # is lowered, allocatedResources is only lowered if
+                # there are no expansion operations in progress and if
+                # the actual volume capacity is equal or lower than the
+                # requested capacity. This is an alpha field and
+                # requires enabling RecoverVolumeExpansionFailure
+                # feature.
+                allocatedResources: {}
+                # capacity represents the actual resources of the
+                # underlying volume.
+                capacity: {}
+                # conditions is the current Condition of persistent
+                # volume claim. If underlying persistent volume is
+                # being resized then the Condition will be set
+                # to 'ResizeStarted'.
+                conditions:
+                - lastProbeTime: string
+                  # lastTransitionTime is the time the condition
+                  # transitioned from one status to another.
+                  lastTransitionTime: string
+                  # message is the human-readable message indicating
+                  # details about last transition.
+                  message: string
+                  # reason is unique; it should be a short,
+                  # machine-understandable string that gives the reason
+                  # for the condition's last transition. If it
+                  # reports "ResizeStarted", the underlying persistent
+                  # volume is being resized.
+                  reason: string
+                  status: string
+                  # PersistentVolumeClaimConditionType is a valid value
+                  # of PersistentVolumeClaimCondition.Type
+                  type: string
+                # phase represents the current phase of
+                # PersistentVolumeClaim.
+                phase: string
+                # resizeStatus stores status of resize operation.
+                # ResizeStatus is not set by default but when expansion
+                # is complete resizeStatus is set to empty string by
+                # resize controller or kubelet. This is an alpha field
+                # and requires enabling RecoverVolumeExpansionFailure
+                # feature.
+                resizeStatus: string
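+          # Illustrative sketch (not part of the schema above; the
+          # storage class name and size are assumed values, not chart
+          # defaults): a minimal volumeClaim for a disk tier might look
+          # like:
+          #
+          #   volumeClaim:
+          #     spec:
+          #       accessModes: ["ReadWriteOnce"]
+          #       storageClassName: "standard"
+          #       resources:
+          #         requests:
+          #           storage: 10Gi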
+        # GlobalTier Parameters
+        globalTier:
+          # Co-locates all disks to a single disk i.e. persist, cache,
+          # UDF will be on a single PVC.
+          colocateDisks: true
+          # Timeout in seconds for subsequent requests to wait on a
+          # locked resource
+          concurrentWaitTimeout: 120
+          # EncryptDataAtRest - Enable disk encryption of data at rest
+          encryptDataAtRest: true
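+        # Worked example (assumed numbers, for illustration only): for
+        # a tier with limit: "10Gi", highWatermark: 90 and
+        # lowWatermark: 80, background evictions start once usage
+        # exceeds 9Gi (90% of 10Gi) and continue until usage falls
+        # below 8Gi (80% of 10Gi).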
+        # The PersistTier is used as temporary swap space for data that
+        # doesn't fit in RAM or VRAM. The disk should be as fast or
+        # faster than the Persist Tier storage since this tier is used
+        # as an intermediary cache between the RAM and Persist Tiers.
+        persistTier:
+          # DiskTierStorageLimit
+          default:
+            # * 'high_watermark' : Percentage used eviction threshold.
+            #    Once usage exceeds this value, evictions from this
+            #    tier will be scheduled in the background and continue
+            #    until the 'low_watermark' percentage usage is reached.
+            #    Default is "90", signifying a 90% memory usage
+            #    threshold.
+            highWatermark: 90
+            # * 'limit'          : The maximum (bytes) per rank that can
+            #    be allocated across all resource groups.
+            limit: "1Gi"
+            # * 'low_watermark'  : Percentage used recovery threshold.
+            #    Once usage exceeds the 'high_watermark', evictions
+            #    will continue until usage falls below this recovery
+            #    threshold. Default is "80", signifying an 80% usage
+            #    threshold.
+            lowWatermark: 80
+            name: string
+            # A base directory to use as a space for this tier.
+            path: "default"
+            provisioner: "docker.io/hostpath"
+            # Kubernetes Persistent Volume Claim for this disk tier.
+            volumeClaim:
+              # APIVersion defines the versioned schema of this
+              # representation of an object. Servers should convert
+              # recognized schemas to the latest internal value, and
+              # may reject unrecognized values. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+              apiVersion: app.kinetica.com/v1
+              # Kind is a string value representing the REST resource
+              # this object represents. Servers may infer this from the
+              # endpoint the client submits requests to. Cannot be
+              # updated. In CamelCase. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+              kind: KineticaCluster
+              # Standard object's metadata. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
+              metadata: {}
+              # spec defines the desired characteristics of a volume
+              # requested by a pod author. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+              spec:
+                # accessModes contains the desired access modes the
+                # volume should have. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                accessModes: ["string"]
+                # dataSource field can be used to specify either: * An
+                # existing VolumeSnapshot object
+                # (snapshot.storage.k8s.io/VolumeSnapshot) * An
+                # existing PVC (PersistentVolumeClaim) If the
+                # provisioner or an external controller can support the
+                # specified data source, it will create a new volume
+                # based on the contents of the specified data source.
+                # When the AnyVolumeDataSource feature gate is enabled,
+                # dataSource contents will be copied to dataSourceRef,
+                # and dataSourceRef contents will be copied to
+                # dataSource when dataSourceRef.namespace is not
+                # specified. If the namespace is specified, then
+                # dataSourceRef will not be copied to dataSource.
+                dataSource:
+                  # APIGroup is the group for the resource being
+                  # referenced. If APIGroup is not specified, the
+                  # specified Kind must be in the core API group. For
+                  # any other third-party types, APIGroup is required.
+                  apiGroup: string
+                  # Kind is the type of resource being referenced
+                  kind: KineticaCluster
+                  # Name is the name of resource being referenced
+                  name: string
+                # dataSourceRef specifies the object from which to
+                # populate the volume with data, if a non-empty volume
+                # is desired. This may be any object from a non-empty
+                # API group (non core object) or a
+                # PersistentVolumeClaim object. When this field is
+                # specified, volume binding will only succeed if the
+                # type of the specified object matches some installed
+                # volume populator or dynamic provisioner. This field
+                # will replace the functionality of the dataSource
+                # field and as such if both fields are non-empty, they
+                # must have the same value. For backwards
+                # compatibility, when namespace isn't specified in
+                # dataSourceRef, both fields (dataSource and
+                # dataSourceRef) will be set to the same value
+                # automatically if one of them is empty and the other
+                # is non-empty. When namespace is specified in
+                # dataSourceRef, dataSource isn't set to the same value
+                # and must be empty. There are three important
+                # differences between dataSource and dataSourceRef: *
+                # While dataSource only allows two specific types of
+                # objects, dataSourceRef allows any non-core object, as
+                # well as PersistentVolumeClaim objects. * While
+                # dataSource ignores disallowed values (dropping them),
+                # dataSourceRef preserves all values, and generates an
+                # error if a disallowed value is specified. * While
+                # dataSource only allows local objects, dataSourceRef
+                # allows objects in any namespaces. (Beta) Using this
+                # field requires the AnyVolumeDataSource feature gate
+                # to be enabled. (Alpha) Using the namespace field of
+                # dataSourceRef requires the
+                # CrossNamespaceVolumeDataSource feature gate to be
+                # enabled.
+                dataSourceRef:
+                  # APIGroup is the group for the resource being
+                  # referenced. If APIGroup is not specified, the
+                  # specified Kind must be in the core API group. For
+                  # any other third-party types, APIGroup is required.
+                  apiGroup: string
+                  # Kind is the type of resource being referenced
+                  kind: KineticaCluster
+                  # Name is the name of resource being referenced
+                  name: string
+                  # Namespace is the namespace of resource being
+                  # referenced Note that when a namespace is specified,
+                  # a gateway.networking.k8s.io/ReferenceGrant object
+                  # is required in the referent namespace to allow that
+                  # namespace's owner to accept the reference. See the
+                  # ReferenceGrant documentation for details.
+                  # (Alpha) This field requires the
+                  # CrossNamespaceVolumeDataSource feature gate to be
+                  # enabled.
+                  namespace: string
+                # resources represents the minimum resources the volume
+                # should have. If RecoverVolumeExpansionFailure feature
+                # is enabled users are allowed to specify resource
+                # requirements that are lower than previous value but
+                # must still be higher than capacity recorded in the
+                # status field of the claim. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
+                resources:
+                  # Claims lists the names of resources, defined in
+                  # spec.resourceClaims, that are used by this
+                  # container. This is an alpha field and requires
+                  # enabling the DynamicResourceAllocation feature
+                  # gate. This field is immutable. It can only be set
+                  # for containers.
+                  claims:
+                  - name: string
+                  # Limits describes the maximum amount of compute
+                  # resources allowed. More info:
+                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                  limits: {}
+                  # Requests describes the minimum amount of compute
+                  # resources required. If Requests is omitted for a
+                  # container, it defaults to Limits if that is
+                  # explicitly specified, otherwise to an
+                  # implementation-defined value. Requests cannot
+                  # exceed Limits. More info:
+                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                  requests: {}
+                # selector is a label query over volumes to consider for
+                # binding.
+                selector:
+                  # matchExpressions is a list of label selector
+                  # requirements. The requirements are ANDed.
+                  matchExpressions:
+                  - key: string
+                    # operator represents a key's relationship to a set
+                    # of values. Valid operators are In, NotIn, Exists
+                    # and DoesNotExist.
+                    operator: string
+                    # values is an array of string values. If the
+                    # operator is In or NotIn, the values array must be
+                    # non-empty. If the operator is Exists or
+                    # DoesNotExist, the values array must be empty.
+                    # This array is replaced during a strategic merge
+                    # patch.
+                    values: ["string"]
+                  # matchLabels is a map of {key,value} pairs. A single
+                  # {key,value} in the matchLabels map is equivalent to
+                  # an element of matchExpressions, whose key field
+                  # is "key", the operator is "In", and the values
+                  # array contains only "value". The requirements are
+                  # ANDed.
+                  matchLabels: {}
+                # storageClassName is the name of the StorageClass
+                # required by the claim. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
+                storageClassName: string
+                # volumeMode defines what type of volume is required by
+                # the claim. Value of Filesystem is implied when not
+                # included in claim spec.
+                volumeMode: string
+                # volumeName is the binding reference to the
+                # PersistentVolume backing this claim.
+                volumeName: string
+              # status represents the current information/status of a
+              # persistent volume claim. Read-only. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+              status:
+                # accessModes contains the actual access modes the
+                # volume backing the PVC has. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                accessModes: ["string"]
+                # allocatedResources tracks the capacity allocated to a
+                # PVC. It may be larger than the actual capacity when a
+                # volume expansion operation is requested. For storage
+                # quota, the larger value from allocatedResources and
+                # PVC.spec.resources is used. If allocatedResources is
+                # not set, PVC.spec.resources alone is used for quota
+                # calculation. If a volume expansion capacity request
+                # is lowered, allocatedResources is only lowered if
+                # there are no expansion operations in progress and if
+                # the actual volume capacity is equal or lower than the
+                # requested capacity. This is an alpha field and
+                # requires enabling RecoverVolumeExpansionFailure
+                # feature.
+                allocatedResources: {}
+                # capacity represents the actual resources of the
+                # underlying volume.
+                capacity: {}
+                # conditions is the current Condition of persistent
+                # volume claim. If underlying persistent volume is
+                # being resized then the Condition will be set
+                # to 'ResizeStarted'.
+                conditions:
+                - lastProbeTime: string
+                  # lastTransitionTime is the time the condition
+                  # transitioned from one status to another.
+                  lastTransitionTime: string
+                  # message is the human-readable message indicating
+                  # details about last transition.
+                  message: string
+                  # reason is unique; it should be a short,
+                  # machine-understandable string that gives the reason
+                  # for the condition's last transition. If it
+                  # reports "ResizeStarted", the underlying persistent
+                  # volume is being resized.
+                  reason: string
+                  status: string
+                  # PersistentVolumeClaimConditionType is a valid value
+                  # of PersistentVolumeClaimCondition.Type
+                  type: string
+                # phase represents the current phase of
+                # PersistentVolumeClaim.
+                phase: string
+                # resizeStatus stores status of resize operation.
+                # ResizeStatus is not set by default but when expansion
+                # is complete resizeStatus is set to empty string by
+                # resize controller or kubelet. This is an alpha field
+                # and requires enabling RecoverVolumeExpansionFailure
+                # feature.
+                resizeStatus: string
+          defaultStorePersistentObjects: true
+          ranks:
+          - highWatermark: 90
+            # * 'limit'          : The maximum (bytes) per rank that can
+            #    be allocated across all resource groups.
+            limit: "1Gi"
+            # * 'low_watermark'  : Percentage used recovery threshold.
+            #    Once usage exceeds the 'high_watermark', evictions
+            #    will continue until usage falls below this recovery
+            #    threshold. Default is "80", signifying an 80% usage
+            #    threshold.
+            lowWatermark: 80
+            name: string
+            # A base directory to use as a space for this tier.
+            path: "default"
+            provisioner: "docker.io/hostpath"
+            # Kubernetes Persistent Volume Claim for this disk tier.
+            volumeClaim:
+              # APIVersion defines the versioned schema of this
+              # representation of an object. Servers should convert
+              # recognized schemas to the latest internal value, and
+              # may reject unrecognized values. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+              apiVersion: app.kinetica.com/v1
+              # Kind is a string value representing the REST resource
+              # this object represents. Servers may infer this from the
+              # endpoint the client submits requests to. Cannot be
+              # updated. In CamelCase. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+              kind: KineticaCluster
+              # Standard object's metadata. More info:
+              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
+              metadata: {}
+              # spec defines the desired characteristics of a volume
+              # requested by a pod author. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+              spec:
+                # accessModes contains the desired access modes the
+                # volume should have. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                accessModes: ["string"]
+                # dataSource field can be used to specify either: * An
+                # existing VolumeSnapshot object
+                # (snapshot.storage.k8s.io/VolumeSnapshot) * An
+                # existing PVC (PersistentVolumeClaim) If the
+                # provisioner or an external controller can support the
+                # specified data source, it will create a new volume
+                # based on the contents of the specified data source.
+                # When the AnyVolumeDataSource feature gate is enabled,
+                # dataSource contents will be copied to dataSourceRef,
+                # and dataSourceRef contents will be copied to
+                # dataSource when dataSourceRef.namespace is not
+                # specified. If the namespace is specified, then
+                # dataSourceRef will not be copied to dataSource.
+                dataSource:
+                  # APIGroup is the group for the resource being
+                  # referenced. If APIGroup is not specified, the
+                  # specified Kind must be in the core API group. For
+                  # any other third-party types, APIGroup is required.
+                  apiGroup: string
+                  # Kind is the type of resource being referenced
+                  kind: KineticaCluster
+                  # Name is the name of resource being referenced
+                  name: string
+                # dataSourceRef specifies the object from which to
+                # populate the volume with data, if a non-empty volume
+                # is desired. This may be any object from a non-empty
+                # API group (non core object) or a
+                # PersistentVolumeClaim object. When this field is
+                # specified, volume binding will only succeed if the
+                # type of the specified object matches some installed
+                # volume populator or dynamic provisioner. This field
+                # will replace the functionality of the dataSource
+                # field and as such if both fields are non-empty, they
+                # must have the same value. For backwards
+                # compatibility, when namespace isn't specified in
+                # dataSourceRef, both fields (dataSource and
+                # dataSourceRef) will be set to the same value
+                # automatically if one of them is empty and the other
+                # is non-empty. When namespace is specified in
+                # dataSourceRef, dataSource isn't set to the same value
+                # and must be empty. There are three important
+                # differences between dataSource and dataSourceRef: *
+                # While dataSource only allows two specific types of
+                # objects, dataSourceRef allows any non-core object, as
+                # well as PersistentVolumeClaim objects. * While
+                # dataSource ignores disallowed values (dropping them),
+                # dataSourceRef preserves all values, and generates an
+                # error if a disallowed value is specified. * While
+                # dataSource only allows local objects, dataSourceRef
+                # allows objects in any namespaces. (Beta) Using this
+                # field requires the AnyVolumeDataSource feature gate
+                # to be enabled. (Alpha) Using the namespace field of
+                # dataSourceRef requires the
+                # CrossNamespaceVolumeDataSource feature gate to be
+                # enabled.
+                dataSourceRef:
+                  # APIGroup is the group for the resource being
+                  # referenced. If APIGroup is not specified, the
+                  # specified Kind must be in the core API group. For
+                  # any other third-party types, APIGroup is required.
+                  apiGroup: string
+                  # Kind is the type of resource being referenced
+                  kind: KineticaCluster
+                  # Name is the name of resource being referenced
+                  name: string
+                  # Namespace is the namespace of resource being
+                  # referenced Note that when a namespace is specified,
+                  # a gateway.networking.k8s.io/ReferenceGrant object
+                  # is required in the referent namespace to allow that
+                  # namespace's owner to accept the reference. See the
+                  # ReferenceGrant documentation for details.
+                  # (Alpha) This field requires the
+                  # CrossNamespaceVolumeDataSource feature gate to be
+                  # enabled.
+                  namespace: string
+                # resources represents the minimum resources the volume
+                # should have. If RecoverVolumeExpansionFailure feature
+                # is enabled users are allowed to specify resource
+                # requirements that are lower than previous value but
+                # must still be higher than capacity recorded in the
+                # status field of the claim. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
+                resources:
+                  # Claims lists the names of resources, defined in
+                  # spec.resourceClaims, that are used by this
+                  # container. This is an alpha field and requires
+                  # enabling the DynamicResourceAllocation feature
+                  # gate. This field is immutable. It can only be set
+                  # for containers.
+                  claims:
+                  - name: string
+                  # Limits describes the maximum amount of compute
+                  # resources allowed. More info:
+                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                  limits: {}
+                  # Requests describes the minimum amount of compute
+                  # resources required. If Requests is omitted for a
+                  # container, it defaults to Limits if that is
+                  # explicitly specified, otherwise to an
+                  # implementation-defined value. Requests cannot
+                  # exceed Limits. More info:
+                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+                  requests: {}
+                # selector is a label query over volumes to consider for
+                # binding.
+                selector:
+                  # matchExpressions is a list of label selector
+                  # requirements. The requirements are ANDed.
+                  matchExpressions:
+                  - key: string
+                    # operator represents a key's relationship to a set
+                    # of values. Valid operators are In, NotIn, Exists
+                    # and DoesNotExist.
+                    operator: string
+                    # values is an array of string values. If the
+                    # operator is In or NotIn, the values array must be
+                    # non-empty. If the operator is Exists or
+                    # DoesNotExist, the values array must be empty.
+                    # This array is replaced during a strategic merge
+                    # patch.
+                    values: ["string"]
+                  # matchLabels is a map of {key,value} pairs. A single
+                  # {key,value} in the matchLabels map is equivalent to
+                  # an element of matchExpressions, whose key field
+                  # is "key", the operator is "In", and the values
+                  # array contains only "value". The requirements are
+                  # ANDed.
+                  matchLabels: {}
+                # storageClassName is the name of the StorageClass
+                # required by the claim. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
+                storageClassName: string
+                # volumeMode defines what type of volume is required by
+                # the claim. Value of Filesystem is implied when not
+                # included in claim spec.
+                volumeMode: string
+                # volumeName is the binding reference to the
+                # PersistentVolume backing this claim.
+                volumeName: string
+              # status represents the current information/status of a
+              # persistent volume claim. Read-only. More info:
+              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
+              status:
+                # accessModes contains the actual access modes the
+                # volume backing the PVC has. More info:
+                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
+                accessModes: ["string"]
+                # allocatedResources tracks the capacity allocated to a
+                # PVC. It may be larger than the actual capacity when a
+                # volume expansion operation is requested. For storage
+                # quota, the larger value from allocatedResources and
+                # PVC.spec.resources is used. If allocatedResources is
+                # not set, PVC.spec.resources alone is used for quota
+                # calculation. If a volume expansion capacity request
+                # is lowered, allocatedResources is only lowered if
+                # there are no expansion operations in progress and if
+                # the actual volume capacity is equal or lower than the
+                # requested capacity. This is an alpha field and
+                # requires enabling RecoverVolumeExpansionFailure
+                # feature.
+                allocatedResources: {}
+                # capacity represents the actual resources of the
+                # underlying volume.
+                capacity: {}
+                # conditions is the current Condition of persistent
+                # volume claim. If underlying persistent volume is
+                # being resized then the Condition will be set
+                # to 'ResizeStarted'.
+                conditions:
+                - lastProbeTime: string
+                  # lastTransitionTime is the time the condition
+                  # transitioned from one status to another.
+                  lastTransitionTime: string
+                  # message is the human-readable message indicating
+                  # details about last transition.
+                  message: string
+                  # reason is unique; it should be a short,
+                  # machine-understandable string that gives the reason
+                  # for the condition's last transition. If it
+                  # reports "ResizeStarted", the underlying persistent
+                  # volume is being resized.
+                  reason: string
+                  status: string
+                  # PersistentVolumeClaimConditionType is a valid value
+                  # of PersistentVolumeClaimCondition.Type
+                  type: string
+                # phase represents the current phase of
+                # PersistentVolumeClaim.
+                phase: string
+                # resizeStatus stores status of resize operation.
+                # ResizeStatus is not set by default but when expansion
+                # is complete resizeStatus is set to empty string by
+                # resize controller or kubelet. This is an alpha field
+                # and requires enabling RecoverVolumeExpansionFailure
+                # feature.
+                resizeStatus: string
+        # The RAMTier represents the RAM available for data storage per
+        # rank. The RAM Tier is NOT used for small, non-data objects or
+        # variables that are allocated and deallocated for program flow
+        # control or used to store metadata or other similar
+        # information; these continue to use either the stack or the
+        # regular runtime memory allocator. This tier should be sized
+        # on each machine such that there is sufficient RAM left over
+        # to handle this overhead, as well as the needs of other
+        # processes running on the same machine.
+        ramTier:
+          # The RAM Tier represents the RAM available for data storage
+          # per rank. The RAM Tier is NOT used for small, non-data
+          # objects or variables that are allocated and deallocated for
+          # program flow control or used to store metadata or other
+          # similar information; these continue to use either the stack
+          # or the regular runtime memory allocator. This tier should
+          # be sized on each machine such that there is sufficient RAM
+          # left over to handle this overhead, as well as the needs of
+          # other processes running on the same machine. A default
+          # memory limit and eviction thresholds can be set across all
+          # ranks, while one or more ranks may be configured to
+          # override those defaults. The general format for RAM
+          # settings: tier.ram.[default|rank<#>].<parameter>
+          # Valid *parameter* names include:
+          #  * 'limit'          : The maximum RAM (bytes) per rank that
+          #     can be allocated across all resource groups.  Default
+          #     is -1, signifying no limit and ignore watermark
+          #     settings. * 'high_watermark' : RAM percentage used
+          #     eviction threshold.  Once memory usage exceeds this
+          #     value, evictions from this tier will be scheduled in
+          #     the background and continue until the 'low_watermark'
+          #     percentage usage is reached.  Default is "90",
+          #     signifying a 90% memory usage
+          #     threshold. * 'low_watermark'  : RAM percentage used
+          #     recovery threshold.  Once memory usage exceeds
+          #     the 'high_watermark', evictions will continue until
+          #     memory usage falls below this recovery threshold.
+          #     Default is "50", signifying a 50% memory usage
+          #     threshold.
+          default:
+            # * 'high_watermark' : Percentage used eviction threshold.
+            #    Once usage exceeds this value, evictions from this
+            #    tier will be scheduled in the background and continue
+            #    until the 'low_watermark' percentage usage is reached.
+            #    Default is "90", signifying a 90% memory usage
+            #    threshold.
+            highWatermark: 90
+            # * 'limit'          : The maximum (bytes) per rank that can
+            #    be allocated across all resource groups.
+            limit: "1Gi"
+            # * 'low_watermark'  : Percentage used recovery threshold.
+            #    Once usage exceeds the 'high_watermark', evictions
+            #    will continue until usage falls below this recovery
+            #    threshold. Default is "80", signifying an 80% usage
+            #    threshold.
+            lowWatermark: 80
+            name: string
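+          # Illustrative sketch (limits and the rank name are assumed
+          # values, not defaults): give every rank a 4Gi RAM tier and
+          # override the limit for rank 0 only, as described in the
+          # note below:
+          #
+          #   ramTier:
+          #     default:
+          #       limit: "4Gi"
+          #       highWatermark: 90
+          #       lowWatermark: 80
+          #     ranks:
+          #     - name: rank0
+          #       limit: "8Gi"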
+          # The maximum RAM (bytes) for processing data at rank 0.
+          # Overrides the overall default RAM tier
+          # limit. #tier.ram.rank0.limit = -1
+          ranks:
+          - highWatermark: 90
+            # * 'limit'          : The maximum (bytes) per rank that can
+            #    be allocated across all resource groups.
+            limit: "1Gi"
+            # * 'low_watermark'  : Percentage used recovery threshold.
+            #    Once usage exceeds the 'high_watermark', evictions
+            #    will continue until usage falls below this recovery
+            #    threshold. Default is "80", signifying an 80% usage
+            #    threshold.
+            lowWatermark: 80
+            name: string
+      tieredStrategy:
+        # Default strategy to apply to tables or columns when one was
+        # not provided during table creation. This strategy is also
+        # applied to a resource group that does not specify one at time
+        # of creation. The strategy is formed by chaining together the
+        # tier types and their respective eviction priorities. Any
+        # given tier may appear no more than once in the chain and the
+        # priority must be in range "1" - "10", where "1" is the lowest
+        # priority (first to be evicted) and "9" is the highest
+        # priority (last to be evicted).  A priority of "10" indicates
+        # that an object is unevictable. Each tier's priority is in
+        # relation to the priority of other objects in the same tier;
+        # e.g., "RAM 9, DISK2 1" indicates that an object will be the
+        # highest evictable priority among objects in the RAM Tier
+        # (last evicted), but that it will be the lowest priority among
+        # objects in the Disk Tier named 'disk2' (first evicted).  Note
+        # that since an object can only have one Disk Tier instance in
+        # its strategy, the corresponding priority will only apply in
+        # relation to other objects in Disk Tier instance 'disk2'. See
+        # the Tiered Storage section for more information about tier
+        # type names. Format: <tier1> <priority>, <tier2> <priority>,
+        # <tier3> <priority>, ... Examples using a Disk Tier
+        # named 'disk2' and a Cold Storage Tier 'cold0': vram 3, ram 5,
+        # disk2 3, persist 10 vram 3, ram 5, disk2 3, persist 6, cold0
+        # 10 tier_strategy.default = VRAM 1, RAM 5, PERSIST 5
+        default: "VRAM 1, RAM 5, PERSIST 5"
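+        # Example (taken from the strategies described above; the Disk
+        # Tier name "disk2" is only valid if such a tier is defined):
+        #
+        #   tieredStrategy:
+        #     default: "VRAM 3, RAM 5, DISK2 3, PERSIST 10"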
+        # Predicate evaluation interval (in minutes) -  indicates the
+        # interval at which the tier strategy predicates are evaluated
+        predicateEvaluationInterval: 60
+      video:
+        # System default TTL for videos. Time-to-live (TTL) is the
+        # number of minutes before a video will expire and be removed,
+        # or -1 to disable. video_default_ttl = -1
+        defaultTTL: "-1"
+        # The maximum number of videos to allow on the system. Set to 0
+        # to disable video rendering.  Set to -1 to allow an unlimited
+        # number of videos. video_max_count = -1
+        maxCount: "-1"
+        # Directory where video files should be temporarily stored while
+        # rendering. Only accessed by rank 0. video_temp_directory = $
+        # {gaia.temp_directory}/gpudb-temp-videos
+        tmpDir: "${gaia.temp_directory}/gpudb-temp-videos"
+      # VisualizationConfig
+      visualization:
+        # Enable level-of-details rendering for fast interaction with
+        # large WKT polygon data.  Only available for the OpenGL
+        # renderer (when 'enable_opengl_renderer' is "true").
+        enableLODRendering: true
+        # If "true", enable hardware-accelerated OpenGL renderer;
+        # if "false", use the software-based Cairo renderer.
+        enableOpenGLRenderer: true
+        # If "true", enable Vector Tile Service (VTS) to support
+        # client-side visualization of geospatial data. Enabling this
+        # option increases memory usage on ingestion.
+        enableVectorTileService: false
+        # Longitude and latitude ranges of geospatial data for which
+        # level-of-details representations are being generated. The
+        # parameter order is: <min_longitude> <min_latitude>
+        # <max_longitude> <max_latitude> The default values span over
+        # the world, but the level-of-details rendering becomes more
+        # efficient when the precise extent of geospatial data is
+        # specified. kubebuilder:default:={ -180, -90, 180, 90 }
+        lodDataExtent: [integer]
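+        # Illustrative sketch (hypothetical extent): if the data is
+        # known to lie roughly within the continental United States,
+        # narrowing the extent from the whole-world default makes
+        # level-of-details rendering more efficient:
+        #
+        #   lodDataExtent: [-125, 24, -66, 50]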
+        # The extent to which shape data are pre-processed for
+        # level-of-details rendering during data insert/load or
+        # processed on-the-fly in rendering time. This is a trade-off
+        # between speed and memory. The higher the value, the faster
+        # level-of-details rendering is, but the more memory is used
+        # for storing processed shape data. The maximum level is "10"
+        # (most shape data are pre-processed) and the minimum level
+        # is "0".
+        lodPreProcessingLevel: 5
+        # The number of subregions in horizontal and vertical geospatial
+        # data extent. The default values of "12 6" divide the world
+        # into subregions of 30 degree (lon.) x 30 degree (lat.)
+        lodSubRegionNum: [12,6]
+        # A base image resolution (width and height in pixels) at which
+        # a subregion would be rendered in a global view spanning over
+        # the whole dataset. Based on this resolution level-of-details
+        # representations are generated for the polygons located in the
+        # subregion.
+        lodSubRegionResolution: [512,512]
+        # Maximum heatmap size (in pixels) that can be generated. This
+        # reserves 'max_heatmap_size' ^ 2 * 8 bytes of GPU memory
+        # at **rank0**
+        maxHeatmapSize: 3072
+        # The maximum number of levels in the level-of-details
+        # rendering. As the number increases, level-of-details
+        # rendering becomes effective at higher zoom levels, but it may
+        # increase memory usage for storing level-of-details
+        # representations.
+        maxLODLevel: 8
+        # Input geometries are pre-processed upon ingestion for faster
+        # vector tile generation. This parameter determines the
+        # zoomlevel at which the vector tile pre-processing stops. A
+        # vector tile request for a higher zoomlevel than this
+        # parameter takes additional time because the vector tile needs
+        # to be generated on the fly.
+        maxVectorTileZoomLevel: 8
+        # Input geometries are pre-processed upon ingestion for faster
+        # vector tile generation. This parameter determines the
+        # zoomlevel from which the vector tile pre-processing starts. A
+        # vector tile request for a lower zoomlevel than this parameter
+        # takes additional time because the vector tile needs to be
+        # generated on the fly.
+        minVectorTileZoomLevel: 1
+        # The number of samples to use for antialiasing. Higher numbers
+        # will improve image quality but require more GPU memory to
+        # store the samples on worker ranks.  This affects only the
+        # OpenGL renderer. Value may be "0", "4", "8" or "16". When "0"
+        # antialiasing is disabled. The default value is "0".
+        openGLAntialiasingLevel: 1
+        # Threshold number of points (per-TOM) at which point rendering
+        # switches to fast mode.
+        pointRenderThreshold: 100000
+        # Single-precision coordinates are used for usual rendering
+        # processes, but depending on the precision of geometry data
+        # and use case, double precision processing may be required at
+        # a high zoomlevel. Double precision rendering processes are
+        # used from the zoomlevel specified by this parameter, which
+        # corresponds to a zoomlevel of the TMS or Google map service.
+        renderingPrecisionThreshold: 30
+        # The image width/height (in pixels) of svg symbols cached in
+        # the OpenGL symbol cache.
+        symbolResolution: 100
+        # The width/height (in pixels) of an OpenGL texture which caches
+        # symbol images for OpenGL rendering.
+        symbolTextureSize: 4000
+        # Threshold for the number of points (per-TOM) after which
+        # symbology rendering falls back to regular rendering
+        symbologyRenderThreshold: 10000
+        # The name of map tiler used for Vector Tile Service. "google"
+        # and "tms" map tilers are supported currently. This parameter
+        # should be matched with the map tiler of clients' vector tile
+        # renderer.
+        vectorTileMapTiler: "google"
+      workbench:
+        # Start the Workbench app on the head host when host manager is
+        # started. enable_workbench = false
+        enable: false
+        # HTTP server port for Workbench if enabled. workbench_port = 8000
+        port:
+          # Number of port to expose on the pod's IP address. This must
+          # be a valid port number, 0 < x < 65536.
+          containerPort: 1
+          # What host IP to bind the external port to.
+          hostIP: string
+          # Number of port to expose on the host. If specified, this
+          # must be a valid port number, 0 < x < 65536. If HostNetwork
+          # is specified, this must match ContainerPort. Most
+          # containers do not need this.
+          hostPort: 1
+          # If specified, this must be an IANA_SVC_NAME and unique
+          # within the pod. Each named port in a pod must have a unique
+          # name. Name for the port that can be referred to by
+          # services.
+          name: string
+          # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
+          # to "TCP".
+          protocol: "TCP"
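+      # Illustrative example (assumption, not a generated default):
+      # enabling Workbench on the HTTP port mentioned above; the port
+      # name "workbench" is only an assumed label.
+      #   workbench:
+      #     enable: true
+      #     port:
+      #       containerPort: 8000
+      #       name: "workbench"
+      #       protocol: "TCP"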
+    # The fully qualified URL used on the Ingress records for any
+    # exposed services. Completed by the Operator. DO NOT POPULATE
+    # MANUALLY.
+    fqdn: ""
+    # The name of the parent HA Ring this cluster belongs to.
+    haRingName: "default"
+    # Whether to enable separate node 'pools' for "infra" and "compute"
+    # pod scheduling. Default: false
+    hasPools: true
+    # The port the HostManager will be running in each pod in the
+    # cluster. Default: 9300, TCP
+    hostManagerPort:
+      # Number of port to expose on the pod's IP address. This must be a
+      # valid port number, 0 < x < 65536.
+      containerPort: 1
+      # What host IP to bind the external port to.
+      hostIP: string
+      # Number of port to expose on the host. If specified, this must be
+      # a valid port number, 0 < x < 65536. If HostNetwork is
+      # specified, this must match ContainerPort. Most containers do
+      # not need this.
+      hostPort: 1
+      # If specified, this must be an IANA_SVC_NAME and unique within
+      # the pod. Each named port in a pod must have a unique name. Name
+      # for the port that can be referred to by services.
+      name: string
+      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
+      # to "TCP".
+      protocol: "TCP"
+    # Set the name of the container image to use.
+    image: "kinetica/kinetica-k8s-intel:v7.1.6.0"
+    # Set the policy for pulling container images.
+    imagePullPolicy: "IfNotPresent"
+    # ImagePullSecrets is an optional list of references to secrets in
+    # the same gpudb-namespace to use for pulling any of the images
+    # used by this PodSpec. If specified, these secrets will be passed
+    # to individual puller implementations for them to use. For
+    # example, in the case of docker, only DockerConfig type secrets
+    # are honored.
+    imagePullSecrets:
+    - name: string
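+    # Illustrative example (assumption): pulling the DB container image
+    # from a private registry using a pre-created pull secret; the
+    # secret name "regcred" is a placeholder.
+    #   image: "kinetica/kinetica-k8s-intel:v7.1.6.0"
+    #   imagePullPolicy: "IfNotPresent"
+    #   imagePullSecrets:
+    #   - name: regcred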
+    # Labels - Pod labels to be applied to the Statefulset DB pods.
+    labels: {}
+    # LetsEncrypt - Configuration for LetsEncrypt certificate generation.
+    letsEncrypt:
+      # Enable LetsEncrypt for Certificate generation.
+      enabled: false
+      # LetsEncryptEnvironment
+      environment: "staging"
+    # Set the Kinetica DB License.
+    license: string
+    # Periodic probe of container liveness. Container will be restarted
+    # if the probe fails. Cannot be updated. More info:
+    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+    livenessProbe:
+      # Minimum consecutive failures for the probe to be considered
+      # failed after having succeeded. Defaults to 3. Minimum value is
+      # 1.
+      failureThreshold: 3
+      # Number of seconds after the container has started before
+      # liveness probes are initiated. More info:
+      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+      initialDelaySeconds: 10
+      # How often (in seconds) to perform the probe. Default to 10
+      # seconds. Minimum value is 1.
+      periodSeconds: 10
+    # LoggerConfig - Kinetica DB Logger Configuration Object. Configures
+    # the LOG4CPLUS logger for the DB. The field takes a string
+    # containing the full configuration. If not specified, a template
+    # file is used during DB configuration generation.
+    loggerConfig:
+      configString: string
+    # Metrics - DB Metrics scrape & forward configuration for
+    # `fluent-bit`.
+    metricsRegistryRepositoryTag:
+      # Set the policy for pulling container images.
+      imagePullPolicy: "IfNotPresent"
+      # ImagePullSecrets is an optional list of references to secrets in
+      # the same gpudb-namespace to use for pulling any of the images
+      # used by this PodSpec. If specified, these secrets will be
+      # passed to individual puller implementations for them to use.
+      # For example, in the case of docker, only DockerConfig type
+      # secrets are honored.
+      imagePullSecrets:
+      - name: string
+      # The image registry & optional port containing the repository.
+      registry: "docker.io"
+      # The image repository path.
+      repository: "kineticadevcloud/"
+      # SemVer = Semantic Version for the Tag SemVer semver.Version
+      semVer: string
+      # The image sha.
+      sha: ""
+      # The image tag.
+      tag: "v7.1.5.2"
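+    # Note (assumption): the registry, repository and tag above combine
+    # into an image reference of roughly the form
+    #   docker.io/kineticadevcloud/<image>:v7.1.5.2
+    # with `sha`, when set, presumably selecting the image by digest
+    # rather than by tag.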
+    # Metrics - `fluent-bit` container requests/limits.
+    metricsResources:
+      # Claims lists the names of resources, defined in
+      # spec.resourceClaims, that are used by this container. This is
+      # an alpha field and requires enabling the
+      # DynamicResourceAllocation feature gate. This field is
+      # immutable. It can only be set for containers.
+      claims:
+      - name: string
+      # Limits describes the maximum amount of compute resources
+      # allowed. More info:
+      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+      limits: {}
+      # Requests describes the minimum amount of compute resources
+      # required. If Requests is omitted for a container, it defaults
+      # to Limits if that is explicitly specified, otherwise to an
+      # implementation-defined value. Requests cannot exceed Limits.
+      # More info:
+      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+      requests: {}
+    # NodeSelector - NodeSelector to be applied to the DB Pods
+    nodeSelector: {}
+    # Internal Operator field only. Do not use.
+    originalReplicas: 1
+    # podManagementPolicy controls how pods are created during initial
+    # scale up, when replacing pods on nodes, or when scaling down. The
+    # default policy is `OrderedReady`, where pods are created in
+    # increasing order (pod-0, then pod-1, etc) and the controller will
+    # wait until each pod is ready before continuing. When scaling
+    # down, the pods are removed in the opposite order. The alternative
+    # policy is `Parallel` which will create pods in parallel to match
+    # the desired scale without waiting, and on scale down will delete
+    # all pods at once.
+    podManagementPolicy: "Parallel"
+    # Number of ranks per node as a uint16 i.e. 1-65535 ranks per node.
+    # Default: 1
+    ranksPerNode: 1
+    # Periodic probe of container service readiness. Container will be
+    # removed from service endpoints if the probe fails. Cannot be
+    # updated. More info:
+    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+    readinessProbe:
+      # Minimum consecutive failures for the probe to be considered
+      # failed after having succeeded. Defaults to 3. Minimum value is
+      # 1.
+      failureThreshold: 3
+      # Number of seconds after the container has started before
+      # liveness probes are initiated. More info:
+      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+      initialDelaySeconds: 10
+      # How often (in seconds) to perform the probe. Default to 10
+      # seconds. Minimum value is 1.
+      periodSeconds: 10
+    # The number of DB ranks i.e. replicas that the cluster will spin
+    # up. Default: 3
+    replicas: 3
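+    # Illustrative sizing (assumption): ranksPerNode: 1 with replicas: 3
+    # yields a three-pod DB StatefulSet, i.e. one rank per pod; actual
+    # sizing depends on the workload.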
+    # Limit the resources a DB Pod can consume.
+    resources:
+      # Claims lists the names of resources, defined in
+      # spec.resourceClaims, that are used by this container. This is
+      # an alpha field and requires enabling the
+      # DynamicResourceAllocation feature gate. This field is
+      # immutable. It can only be set for containers.
+      claims:
+      - name: string
+      # Limits describes the maximum amount of compute resources
+      # allowed. More info:
+      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+      limits: {}
+      # Requests describes the minimum amount of compute resources
+      # required. If Requests is omitted for a container, it defaults
+      # to Limits if that is explicitly specified, otherwise to an
+      # implementation-defined value. Requests cannot exceed Limits.
+      # More info:
+      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+      requests: {}
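+    # Illustrative example (assumption): concrete requests/limits for a
+    # DB pod; the figures below are placeholders, not recommendations.
+    #   resources:
+    #     requests:
+    #       cpu: "4"
+    #       memory: 32Gi
+    #     limits:
+    #       cpu: "8"
+    #       memory: 64Gi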
+    # SecurityContext holds security configuration that will be applied
+    # to a container. Some fields are present in both SecurityContext
+    # and PodSecurityContext.  When both are set, the values in
+    # SecurityContext take precedence.
+    securityContext:
+      # AllowPrivilegeEscalation controls whether a process can gain
+      # more privileges than its parent process. This bool directly
+      # controls if the no_new_privs flag will be set on the container
+      # process. AllowPrivilegeEscalation is true always when the
+      # container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note
+      # that this field cannot be set when spec.os.name is windows.
+      allowPrivilegeEscalation: true
+      # The capabilities to add/drop when running containers. Defaults
+      # to the default set of capabilities granted by the container
+      # runtime. Note that this field cannot be set when spec.os.name
+      # is windows.
+      capabilities:
+        # Added capabilities
+        add: ["string"]
+        # Removed capabilities
+        drop: ["string"]
+      # Run container in privileged mode. Processes in privileged
+      # containers are essentially equivalent to root on the host.
+      # Defaults to false. Note that this field cannot be set when
+      # spec.os.name is windows.
+      privileged: true
+      # procMount denotes the type of proc mount to use for the
+      # containers. The default is DefaultProcMount which uses the
+      # container runtime defaults for readonly paths and masked paths.
+      # This requires the ProcMountType feature flag to be enabled.
+      # Note that this field cannot be set when spec.os.name is
+      # windows.
+      procMount: string
+      # Whether this container has a read-only root filesystem. Default
+      # is false. Note that this field cannot be set when spec.os.name
+      # is windows.
+      readOnlyRootFilesystem: true
+      # The GID to run the entrypoint of the container process. Uses
+      # runtime default if unset. May also be set in
+      # PodSecurityContext.  If set in both SecurityContext and
+      # PodSecurityContext, the value specified in SecurityContext
+      # takes precedence. Note that this field cannot be set when
+      # spec.os.name is windows.
+      runAsGroup: 1
+      # Indicates that the container must run as a non-root user. If
+      # true, the Kubelet will validate the image at runtime to ensure
+      # that it does not run as UID 0 (root) and fail to start the
+      # container if it does. If unset or false, no such validation
+      # will be performed. May also be set in PodSecurityContext.  If
+      # set in both SecurityContext and PodSecurityContext, the value
+      # specified in SecurityContext takes precedence.
+      runAsNonRoot: true
+      # The UID to run the entrypoint of the container process. Defaults
+      # to user specified in image metadata if unspecified. May also be
+      # set in PodSecurityContext.  If set in both SecurityContext and
+      # PodSecurityContext, the value specified in SecurityContext
+      # takes precedence. Note that this field cannot be set when
+      # spec.os.name is windows.
+      runAsUser: 1
+      # The SELinux context to be applied to the container. If
+      # unspecified, the container runtime will allocate a random
+      # SELinux context for each container.  May also be set in
+      # PodSecurityContext.  If set in both SecurityContext and
+      # PodSecurityContext, the value specified in SecurityContext
+      # takes precedence. Note that this field cannot be set when
+      # spec.os.name is windows.
+      seLinuxOptions:
+        # Level is SELinux level label that applies to the container.
+        level: string
+        # Role is a SELinux role label that applies to the container.
+        role: string
+        # Type is a SELinux type label that applies to the container.
+        type: string
+        # User is a SELinux user label that applies to the container.
+        user: string
+      # The seccomp options to use by this container. If seccomp options
+      # are provided at both the pod & container level, the container
+      # options override the pod options. Note that this field cannot
+      # be set when spec.os.name is windows.
+      seccompProfile:
+        # localhostProfile indicates a profile defined in a file on the
+        # node should be used. The profile must be preconfigured on the
+        # node to work. Must be a descending path, relative to the
+        # kubelet's configured seccomp profile location. Must only be
+        # set if type is "Localhost".
+        localhostProfile: string
+        # type indicates which kind of seccomp profile will be applied.
+        # Valid options are: Localhost - a profile defined in a file on
+        # the node should be used. RuntimeDefault - the container
+        # runtime default profile should be used. Unconfined - no
+        # profile should be applied.
+        type: string
+      # The Windows specific settings applied to all containers. If
+      # unspecified, the options from the PodSecurityContext will be
+      # used. If set in both SecurityContext and PodSecurityContext,
+      # the value specified in SecurityContext takes precedence. Note
+      # that this field cannot be set when spec.os.name is linux.
+      windowsOptions:
+        # GMSACredentialSpec is where the GMSA admission webhook
+        # (https://github.com/kubernetes-sigs/windows-gmsa) inlines the
+        # contents of the GMSA credential spec named by the
+        # GMSACredentialSpecName field.
+        gmsaCredentialSpec: string
+        # GMSACredentialSpecName is the name of the GMSA credential spec
+        # to use.
+        gmsaCredentialSpecName: string
+        # HostProcess determines if a container should be run as a 'Host
+        # Process' container. This field is alpha-level and will only
+        # be honored by components that enable the
+        # WindowsHostProcessContainers feature flag. Setting this field
+        # without the feature flag will result in errors when
+        # validating the Pod. All of a Pod's containers must have the
+        # same effective HostProcess value (it is not allowed to have a
+        # mix of HostProcess containers and non-HostProcess
+        # containers).  In addition, if HostProcess is true then
+        # HostNetwork must also be set to true.
+        hostProcess: true
+        # The UserName in Windows to run the entrypoint of the container
+        # process. Defaults to the user specified in image metadata if
+        # unspecified. May also be set in PodSecurityContext. If set in
+        # both SecurityContext and PodSecurityContext, the value
+        # specified in SecurityContext takes precedence.
+        runAsUserName: string
+    # StartupProbe indicates that the Pod has successfully initialized.
+    # If specified, no other probes are executed until this completes
+    # successfully. If this probe fails, the Pod will be restarted,
+    # just as if the livenessProbe failed. This can be used to provide
+    # different probe parameters at the beginning of a Pod's lifecycle,
+    # when it might take a long time to load data or warm a cache, than
+    # during steady-state operation. This cannot be updated. This is an
+    # alpha feature enabled by the StartupProbe feature flag. More
+    # info:
+    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+    startupProbe:
+      # Minimum consecutive failures for the probe to be considered
+      # failed after having succeeded. Defaults to 3. Minimum value is
+      # 1.
+      failureThreshold: 3
+      # Number of seconds after the container has started before
+      # liveness probes are initiated. More info:
+      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+      initialDelaySeconds: 10
+      # How often (in seconds) to perform the probe. Default to 10
+      # seconds. Minimum value is 1.
+      periodSeconds: 10
+  # HostManagerMonitor is used to monitor the Kinetica DB Ranks. If a
+  # rank is unavailable for the specified time (MaxRankFailureCount),
+  # the cluster will be restarted.
+  hostManagerMonitor:
+    # The HostMonitor Port for the DB StartupProbe, ReadinessProbe and
+    # Liveness probes. Default: 8888
+    db_healthz_port:
+      # Number of port to expose on the pod's IP address. This must be a
+      # valid port number, 0 < x < 65536.
+      containerPort: 1
+      # What host IP to bind the external port to.
+      hostIP: string
+      # Number of port to expose on the host. If specified, this must be
+      # a valid port number, 0 < x < 65536. If HostNetwork is
+      # specified, this must match ContainerPort. Most containers do
+      # not need this.
+      hostPort: 1
+      # If specified, this must be an IANA_SVC_NAME and unique within
+      # the pod. Each named port in a pod must have a unique name. Name
+      # for the port that can be referred to by services.
+      name: string
+      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
+      # to "TCP".
+      protocol: "TCP"
+    # The HostMonitor Port for the DB StartupProbe, ReadinessProbe and
+    # Liveness probes. Default: 8889
+    hm_healthz_port:
+      # Number of port to expose on the pod's IP address. This must be a
+      # valid port number, 0 < x < 65536.
+      containerPort: 1
+      # What host IP to bind the external port to.
+      hostIP: string
+      # Number of port to expose on the host. If specified, this must be
+      # a valid port number, 0 < x < 65536. If HostNetwork is
+      # specified, this must match ContainerPort. Most containers do
+      # not need this.
+      hostPort: 1
+      # If specified, this must be an IANA_SVC_NAME and unique within
+      # the pod. Each named port in a pod must have a unique name. Name
+      # for the port that can be referred to by services.
+      name: string
+      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
+      # to "TCP".
+      protocol: "TCP"
+    # Periodic probe of container liveness. Container will be restarted
+    # if the probe fails. Cannot be updated. More info:
+    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+    livenessProbe:
+      # Minimum consecutive failures for the probe to be considered
+      # failed after having succeeded. Defaults to 3. Minimum value is
+      # 1.
+      failureThreshold: 3
+      # Number of seconds after the container has started before
+      # liveness probes are initiated. More info:
+      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+      initialDelaySeconds: 10
+      # How often (in seconds) to perform the probe. Default to 10
+      # seconds. Minimum value is 1.
+      periodSeconds: 10
+    # Set the name of the container image to use.
+    monitorRegistryRepositoryTag:
+      # Set the policy for pulling container images.
+      imagePullPolicy: "IfNotPresent"
+      # ImagePullSecrets is an optional list of references to secrets in
+      # the same gpudb-namespace to use for pulling any of the images
+      # used by this PodSpec. If specified, these secrets will be
+      # passed to individual puller implementations for them to use.
+      # For example, in the case of docker, only DockerConfig type
+      # secrets are honored.
+      imagePullSecrets:
+      - name: string
+      # The image registry & optional port containing the repository.
+      registry: "docker.io"
+      # The image repository path.
+      repository: "kineticadevcloud/"
+      # SemVer = Semantic Version for the Tag SemVer semver.Version
+      semVer: string
+      # The image sha.
+      sha: ""
+      # The image tag.
+      tag: "v7.1.5.2"
+    # Periodic probe of container service readiness. Container will be
+    # removed from service endpoints if the probe fails. Cannot be
+    # updated. More info:
+    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+    readinessProbe:
+      # Minimum consecutive failures for the probe to be considered
+      # failed after having succeeded. Defaults to 3. Minimum value is
+      # 1.
+      failureThreshold: 3
+      # Number of seconds after the container has started before
+      # liveness probes are initiated. More info:
+      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+      initialDelaySeconds: 10
+      # How often (in seconds) to perform the probe. Default to 10
+      # seconds. Minimum value is 1.
+      periodSeconds: 10
+    # Allow for overriding resource requests/limits.
+    resources:
+      # Claims lists the names of resources, defined in
+      # spec.resourceClaims, that are used by this container. This is
+      # an alpha field and requires enabling the
+      # DynamicResourceAllocation feature gate. This field is
+      # immutable. It can only be set for containers.
+      claims:
+      - name: string
+      # Limits describes the maximum amount of compute resources
+      # allowed. More info:
+      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+      limits: {}
+      # Requests describes the minimum amount of compute resources
+      # required. If Requests is omitted for a container, it defaults
+      # to Limits if that is explicitly specified, otherwise to an
+      # implementation-defined value. Requests cannot exceed Limits.
+      # More info:
+      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+      requests: {}
+    # StartupProbe indicates that the Pod has successfully initialized.
+    # If specified, no other probes are executed until this completes
+    # successfully. If this probe fails, the Pod will be restarted,
+    # just as if the livenessProbe failed. This can be used to provide
+    # different probe parameters at the beginning of a Pod's lifecycle,
+    # when it might take a long time to load data or warm a cache, than
+    # during steady-state operation. This cannot be updated. This is an
+    # alpha feature enabled by the StartupProbe feature flag. More
+    # info:
+    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+    startupProbe:
+      # Minimum consecutive failures for the probe to be considered
+      # failed after having succeeded. Defaults to 3. Minimum value is
+      # 1.
+      failureThreshold: 3
+      # Number of seconds after the container has started before
+      # liveness probes are initiated. More info:
+      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+      initialDelaySeconds: 10
+      # How often (in seconds) to perform the probe. Default to 10
+      # seconds. Minimum value is 1.
+      periodSeconds: 10
+  # The platform infrastructure provider e.g. azure, aws, gcp, on-prem
+  # etc.
+  infra: "on-prem"
+  # The Kubernetes Ingress Controller will be running on e.g.
+  # ingress-nginx, Traefik, Ambassador, Gloo, Kong etc.
+  ingressController: "nginx"
+  # The LDAP server to connect to.
+  ldap:
+    # BaseDN - The root LDAP Distinguished Name to use as the base for
+    # LDAP queries.
+    baseDN: "dc=kinetica,dc=com"
+    # BindDN - The LDAP Distinguished Name to use when binding to the
+    # LDAP server.
+    bindDN: "cn=admin,dc=kinetica,dc=com"
+    # Host - The name of the host to connect to. If IsInLocalK8S=true
+    # then supply only the name, e.g. `openldap`. Default: openldap
+    host: "openldap"
+    # IsInLocalK8S - Is the LDAP server co-located in the same K8s
+    # cluster the operator is running in. Default: true
+    isInLocalK8S: true
+    # IsLDAPS - Use LDAPS instead of LDAP. Default: false
+    isLDAPS: false
+    # Namespace - The namespace the LDAP server is deployed in. Default: openldap
+    namespace: "gpudb"
+    # Port - Defaults to LDAP Port 389 Default: 389
+    port: 389
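+  # Illustrative example (assumption): pointing the cluster at an
+  # external LDAPS server rather than the in-cluster OpenLDAP default;
+  # the host and DNs below are placeholders.
+  #   ldap:
+  #     baseDN: "dc=example,dc=com"
+  #     bindDN: "cn=admin,dc=example,dc=com"
+  #     host: "ldap.example.com"
+  #     isInLocalK8S: false
+  #     isLDAPS: true
+  #     port: 636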
+  # Tells the operator to use Cloud Provider Pay As You Go
+  # functionality.
+  payAsYouGo: false
+  # The Reveal Dashboard Configuration for the Kinetica Cluster.
+  reveal:
+    # The port that Reveal will be running on. It runs only on the head
+    # node pod in the cluster. Default: 8080
+    containerPort:
+      # Number of port to expose on the pod's IP address. This must be a
+      # valid port number, 0 < x < 65536.
+      containerPort: 1
+      # What host IP to bind the external port to.
+      hostIP: string
+      # Number of port to expose on the host. If specified, this must be
+      # a valid port number, 0 < x < 65536. If HostNetwork is
+      # specified, this must match ContainerPort. Most containers do
+      # not need this.
+      hostPort: 1
+      # If specified, this must be an IANA_SVC_NAME and unique within
+      # the pod. Each named port in a pod must have a unique name. Name
+      # for the port that can be referred to by services.
+      name: string
+      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
+      # to "TCP".
+      protocol: "TCP"
+    # The Ingress Endpoint that Reveal will be running on.
+    ingressPath:
+      # backend defines the referenced service endpoint to which the
+      # traffic will be forwarded to.
+      backend:
+        # resource is an ObjectRef to another Kubernetes resource in the
+        # namespace of the Ingress object. If resource is specified,
+        # serviceName and servicePort must not be specified.
+        resource:
+          # APIGroup is the group for the resource being referenced. If
+          # APIGroup is not specified, the specified Kind must be in
+          # the core API group. For any other third-party types,
+          # APIGroup is required.
+          apiGroup: string
+          # Kind is the type of resource being referenced
+          kind: KineticaCluster
+          # Name is the name of resource being referenced
+          name: string
+        # serviceName specifies the name of the referenced service.
+        serviceName: string
+        # servicePort Specifies the port of the referenced service.
+        servicePort: 
+      # path is matched against the path of an incoming request.
+      # Currently it can contain characters disallowed from the
+      # conventional "path" part of a URL as defined by RFC 3986. Paths
+      # must begin with a '/' and must be present when using PathType
+      # with value "Exact" or "Prefix".
+      path: string
+      # pathType determines the interpretation of the path matching.
+      # PathType can be one of the following values: * Exact: Matches
+      # the URL path exactly. * Prefix: Matches based on a URL path
+      # prefix split by '/'. Matching is done on a path element by
+      # element basis. A path element refers to the list of labels in
+      # the path split by the '/' separator. A request is a match for
+      # path p if every p is an element-wise prefix of p of the request
+      # path. Note that if the last element of the path is a substring
+      # of the last element in request path, it is not a match
+      # (e.g. /foo/bar matches /foo/bar/baz, but does not
+      # match /foo/barbaz). * ImplementationSpecific: Interpretation of
+      # the Path matching is up to the IngressClass. Implementations
+      # can treat this as a separate PathType or treat it identically
+      # to Prefix or Exact path types. Implementations are required to
+      # support all path types. Defaults to ImplementationSpecific.
+      pathType: string
+    # Whether to enable the Reveal Dashboard on the Cluster. Default:
+    # true
+    isEnabled: true
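+  # Illustrative example (assumption): keeping Reveal enabled on its
+  # documented default port 8080; the port name "reveal" is only an
+  # assumed label.
+  #   reveal:
+  #     isEnabled: true
+  #     containerPort:
+  #       containerPort: 8080
+  #       name: "reveal"
+  #       protocol: "TCP"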
+  # The Stats server to deploy & connect to if required.
+  stats:
+    # AlertManager - AlertManager specific configuration.
+    alertManager:
+      # Set the arguments for the command within the container to run.
+      args:
+      ["-c","/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug
+      --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090
+      --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage
+      --storage.tsdb.retention.time=7d  --web.enable-lifecycle"]
+      # Set the command within the container to run.
+      command: ["/bin/sh"]
+      # ConfigFile - Set the location of the Loki configuration file.
+      configFile: "/opt/gpudb/kagent/stats/loki/loki.yml"
+      # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a
+      # ConfigMap
+      configFileAsConfigMap: true
+      # The port that Stats will be running on. It runs only on the head
+      # node pod in the cluster. Default: 9091
+      containerPort:
+        # Number of port to expose on the pod's IP address. This must be
+        # a valid port number, 0 < x < 65536.
+        containerPort: 1
+        # What host IP to bind the external port to.
+        hostIP: string
+        # Number of port to expose on the host. If specified, this must
+        # be a valid port number, 0 < x < 65536. If HostNetwork is
+        # specified, this must match ContainerPort. Most containers do
+        # not need this.
+        hostPort: 1
+        # If specified, this must be an IANA_SVC_NAME and unique within
+        # the pod. Each named port in a pod must have a unique name.
+        # Name for the port that can be referred to by services.
+        name: string
+        # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
+        # to "TCP".
+        protocol: "TCP"
+      # List of environment variables to set in the container.
+      env:
+      - name: string
+        # Variable references $(VAR_NAME) are expanded using the
+        # previously defined environment variables in the container and
+        # any service environment variables. If a variable cannot be
+        # resolved, the reference in the input string will be
+        # unchanged. Double $$ are reduced to a single $, which allows
+        # for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will
+        # produce the string literal "$(VAR_NAME)". Escaped references
+        # will never be expanded, regardless of whether the variable
+        # exists or not. Defaults to "".
+        value: string
+        # Source for the environment variable's value. Cannot be used if
+        # value is not empty.
+        valueFrom:
+          # Selects a key of a ConfigMap.
+          configMapKeyRef:
+            # The key to select.
+            key: string
+            # Name of the referent. More info:
+            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+            # TODO: Add other useful fields. apiVersion, kind, uid?
+            name: string
+            # Specify whether the ConfigMap or its key must be defined
+            optional: true
+          # Selects a field of the pod: supports metadata.name,
+          # metadata.namespace, `metadata.labels
+          # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,
+          # spec.serviceAccountName, status.hostIP, status.podIP,
+          # status.podIPs.
+          fieldRef:
+            # Version of the schema the FieldPath is written in terms
+            # of, defaults to "v1".
+            apiVersion: app.kinetica.com/v1
+            # Path of the field to select in the specified API version.
+            fieldPath: string
+          # Selects a resource of the container: only resources limits
+          # and requests (limits.cpu, limits.memory,
+          # limits.ephemeral-storage, requests.cpu, requests.memory and
+          # requests.ephemeral-storage) are currently supported.
+          resourceFieldRef:
+            # Container name: required for volumes, optional for env
+            # vars
+            containerName: string
+            # Specifies the output format of the exposed resources,
+            # defaults to "1"
+            divisor: 
+            # Required: resource to select
+            resource: string
+          # Selects a key of a secret in the pod's namespace
+          secretKeyRef:
+            # The key of the secret to select from.  Must be a valid
+            # secret key.
+            key: string
+            # Name of the referent. More info:
+            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+            # TODO: Add other useful fields. apiVersion, kind, uid?
+            name: string
+            # Specify whether the Secret or its key must be defined
+            optional: true
+      # Set the name of the container image to use.
+      image:
+        # Set the policy for pulling container images.
+        imagePullPolicy: "IfNotPresent"
+        # ImagePullSecrets is an optional list of references to secrets
+        # in the same gpudb-namespace to use for pulling any of the
+        # images used by this PodSpec. If specified, these secrets will
+        # be passed to individual puller implementations for them to
+        # use. For example, in the case of docker, only DockerConfig
+        # type secrets are honored.
+        imagePullSecrets:
+        - name: string
+        # The image registry & optional port containing the repository.
+        registry: "docker.io"
+        # The image repository path.
+        repository: "kineticadevcloud/"
+        # SemVer = Semantic Version for the Tag SemVer semver.Version
+        semVer: string
+        # The image sha.
+        sha: ""
+        # The image tag.
+        tag: "v7.1.5.2"
+      # Whether to enable the Stats Server on the Cluster. Default:
+      # true
+      isEnabled: true
+      # Periodic probe of container liveness. Container will be
+      # restarted if the probe fails. Cannot be updated. More info:
+      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+      livenessProbe:
+        # Exec specifies the action to take.
+        exec:
+          # Command is the command line to execute inside the container,
+          # the working directory for the command  is root ('/') in the
+          # container's filesystem. The command is simply exec'd, it is
+          # not run inside a shell, so traditional shell instructions
+          # ('|', etc) won't work. To use a shell, you need to
+          # explicitly call out to that shell. Exit status of 0 is
+          # treated as live/healthy and non-zero is unhealthy.
+          command: ["string"]
+        # Minimum consecutive failures for the probe to be considered
+        # failed after having succeeded. Defaults to 3. Minimum value
+        # is 1.
+        failureThreshold: 1
+        # GRPC specifies an action involving a GRPC port.
+        grpc:
+          # Port number of the gRPC service. Number must be in the range
+          # 1 to 65535.
+          port: 1
+          # Service is the name of the service to place in the gRPC
+          # HealthCheckRequest
+          # (see
+          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
+          # If this is not specified, the default behavior is defined
+          # by gRPC.
+          service: string
+        # HTTPGet specifies the http request to perform.
+        httpGet:
+          # Host name to connect to, defaults to the pod IP. You
+          # probably want to set "Host" in httpHeaders instead.
+          host: string
+          # Custom headers to set in the request. HTTP allows repeated
+          # headers.
+          httpHeaders:
+          - name: string
+            # The header field value
+            value: string
+          # Path to access on the HTTP server.
+          path: string
+          # Name or number of the port to access on the container.
+          # Number must be in the range 1 to 65535. Name must be an
+          # IANA_SVC_NAME.
+          port: 
+          # Scheme to use for connecting to the host. Defaults to HTTP.
+          scheme: string
+        # Number of seconds after the container has started before
+        # liveness probes are initiated. More info:
+        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+        initialDelaySeconds: 1
+        # How often (in seconds) to perform the probe. Default to 10
+        # seconds. Minimum value is 1.
+        periodSeconds: 1
+        # Minimum consecutive successes for the probe to be considered
+        # successful after having failed. Defaults to 1. Must be 1 for
+        # liveness and startup. Minimum value is 1.
+        successThreshold: 1
+        # TCPSocket specifies an action involving a TCP port.
+        tcpSocket:
+          # Optional: Host name to connect to, defaults to the pod IP.
+          host: string
+          # Number or name of the port to access on the container.
+          # Number must be in the range 1 to 65535. Name must be an
+          # IANA_SVC_NAME.
+          port: 
+        # Optional duration in seconds the pod needs to terminate
+        # gracefully upon probe failure. The grace period is the
+        # duration in seconds after the processes running in the pod
+        # are sent a termination signal and the time when the processes
+        # are forcibly halted with a kill signal. Set this value longer
+        # than the expected cleanup time for your process. If this
+        # value is nil, the pod's terminationGracePeriodSeconds will be
+        # used. Otherwise, this value overrides the value provided by
+        # the pod spec. Value must be non-negative integer. The value
+        # zero indicates stop immediately via the kill signal
+        # (no opportunity to shut down). This is a beta field and
+        # requires enabling ProbeTerminationGracePeriod feature gate.
+        # Minimum value is 1. spec.terminationGracePeriodSeconds is
+        # used if unset.
+        terminationGracePeriodSeconds: 1
+        # Number of seconds after which the probe times out. Defaults to
+        # 1 second. Minimum value is 1. More info:
+        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+        timeoutSeconds: 1
+      # Logs - Set the location of the stats logs.
+      logs: "/opt/gpudb/kagent/stats/logs"
+      name: "stats"
+      # Periodic probe of container service readiness. Container will be
+      # removed from service endpoints if the probe fails. Cannot be
+      # updated. More info:
+      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+      readinessProbe:
+        # Exec specifies the action to take.
+        exec:
+          # Command is the command line to execute inside the container,
+          # the working directory for the command  is root ('/') in the
+          # container's filesystem. The command is simply exec'd, it is
+          # not run inside a shell, so traditional shell instructions
+          # ('|', etc) won't work. To use a shell, you need to
+          # explicitly call out to that shell. Exit status of 0 is
+          # treated as live/healthy and non-zero is unhealthy.
+          command: ["string"]
+        # Minimum consecutive failures for the probe to be considered
+        # failed after having succeeded. Defaults to 3. Minimum value
+        # is 1.
+        failureThreshold: 1
+        # GRPC specifies an action involving a GRPC port.
+        grpc:
+          # Port number of the gRPC service. Number must be in the range
+          # 1 to 65535.
+          port: 1
+          # Service is the name of the service to place in the gRPC
+          # HealthCheckRequest
+          # (see
+          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
+          # If this is not specified, the default behavior is defined
+          # by gRPC.
+          service: string
+        # HTTPGet specifies the http request to perform.
+        httpGet:
+          # Host name to connect to, defaults to the pod IP. You
+          # probably want to set "Host" in httpHeaders instead.
+          host: string
+          # Custom headers to set in the request. HTTP allows repeated
+          # headers.
+          httpHeaders:
+          - name: string
+            # The header field value
+            value: string
+          # Path to access on the HTTP server.
+          path: string
+          # Name or number of the port to access on the container.
+          # Number must be in the range 1 to 65535. Name must be an
+          # IANA_SVC_NAME.
+          port: 
+          # Scheme to use for connecting to the host. Defaults to HTTP.
+          scheme: string
+        # Number of seconds after the container has started before
+        # liveness probes are initiated. More info:
+        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+        initialDelaySeconds: 1
+        # How often (in seconds) to perform the probe. Default to 10
+        # seconds. Minimum value is 1.
+        periodSeconds: 1
+        # Minimum consecutive successes for the probe to be considered
+        # successful after having failed. Defaults to 1. Must be 1 for
+        # liveness and startup. Minimum value is 1.
+        successThreshold: 1
+        # TCPSocket specifies an action involving a TCP port.
+        tcpSocket:
+          # Optional: Host name to connect to, defaults to the pod IP.
+          host: string
+          # Number or name of the port to access on the container.
+          # Number must be in the range 1 to 65535. Name must be an
+          # IANA_SVC_NAME.
+          port: 
+        # Optional duration in seconds the pod needs to terminate
+        # gracefully upon probe failure. The grace period is the
+        # duration in seconds after the processes running in the pod
+        # are sent a termination signal and the time when the processes
+        # are forcibly halted with a kill signal. Set this value longer
+        # than the expected cleanup time for your process. If this
+        # value is nil, the pod's terminationGracePeriodSeconds will be
+        # used. Otherwise, this value overrides the value provided by
+        # the pod spec. Value must be non-negative integer. The value
+        # zero indicates stop immediately via the kill signal
+        # (no opportunity to shut down). This is a beta field and
+        # requires enabling ProbeTerminationGracePeriod feature gate.
+        # Minimum value is 1. spec.terminationGracePeriodSeconds is
+        # used if unset.
+        terminationGracePeriodSeconds: 1
+        # Number of seconds after which the probe times out. Defaults to
+        # 1 second. Minimum value is 1. More info:
+        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+        timeoutSeconds: 1
+      # Resource Requests & Limits for the Stats Pod.
+      resources:
+        # Claims lists the names of resources, defined in
+        # spec.resourceClaims, that are used by this container. This is
+        # an alpha field and requires enabling the
+        # DynamicResourceAllocation feature gate. This field is
+        # immutable. It can only be set for containers.
+        claims:
+        - name: string
+        # Limits describes the maximum amount of compute resources
+        # allowed. More info:
+        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+        limits: {}
+        # Requests describes the minimum amount of compute resources
+        # required. If Requests is omitted for a container, it defaults
+        # to Limits if that is explicitly specified, otherwise to an
+        # implementation-defined value. Requests cannot exceed Limits.
+        # More info:
+        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+        requests: {}
+      # StoragePath - Set the location of the AlertManager file
+      # storage.
+      storagePath: "/opt/gpudb/kagent/stats/storage/alertmanager/alertmanager"
+      # WebConfigFile - Set the location of the AlertManager
+      # alertmanager-web-config.yml.
+      webConfigFile: "/opt/gpudb/kagent/stats/alertmanager/alertmanager-web-config.yml"
+      # WebListenAddress - Set the location of the AlertManager
+      # alertmanager-web-config.yml.
+      webListenAddress: "0.0.0.0:9089"
+    # Grafana - Grafana specific configuration.
+    grafana:
+      # Set the arguments for the command within the container to run.
+      args:
+      ["-c","/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug
+      --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090
+      --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage
+      --storage.tsdb.retention.time=7d  --web.enable-lifecycle"]
+      # Set the command within the container to run.
+      command: ["/bin/sh"]
+      # ConfigFile - Set the location of the Loki configuration file.
+      configFile: "/opt/gpudb/kagent/stats/loki/loki.yml"
+      # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a
+      # ConfigMap
+      configFileAsConfigMap: true
+      # The port that Stats will be running on. It runs only on the head
+      # node pod in the cluster. Default: 9091
+      containerPort:
+        # Number of port to expose on the pod's IP address. This must be
+        # a valid port number, 0 < x < 65536.
+        containerPort: 1
+        # What host IP to bind the external port to.
+        hostIP: string
+        # Number of port to expose on the host. If specified, this must
+        # be a valid port number, 0 < x < 65536. If HostNetwork is
+        # specified, this must match ContainerPort. Most containers do
+        # not need this.
+        hostPort: 1
+        # If specified, this must be an IANA_SVC_NAME and unique within
+        # the pod. Each named port in a pod must have a unique name.
+        # Name for the port that can be referred to by services.
+        name: string
+        # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
+        # to "TCP".
+        protocol: "TCP"
+      # List of environment variables to set in the container.
+      env:
+      - name: string
+        # Variable references $(VAR_NAME) are expanded using the
+        # previously defined environment variables in the container and
+        # any service environment variables. If a variable cannot be
+        # resolved, the reference in the input string will be
+        # unchanged. Double $$ are reduced to a single $, which allows
+        # for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will
+        # produce the string literal "$(VAR_NAME)". Escaped references
+        # will never be expanded, regardless of whether the variable
+        # exists or not. Defaults to "".
+        value: string
+        # Source for the environment variable's value. Cannot be used if
+        # value is not empty.
+        valueFrom:
+          # Selects a key of a ConfigMap.
+          configMapKeyRef:
+            # The key to select.
+            key: string
+            # Name of the referent. More info:
+            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+            # TODO: Add other useful fields. apiVersion, kind, uid?
+            name: string
+            # Specify whether the ConfigMap or its key must be defined
+            optional: true
+          # Selects a field of the pod: supports metadata.name,
+          # metadata.namespace, `metadata.labels
+          # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,
+          # spec.serviceAccountName, status.hostIP, status.podIP,
+          # status.podIPs.
+          fieldRef:
+            # Version of the schema the FieldPath is written in terms
+            # of, defaults to "v1".
+            apiVersion: app.kinetica.com/v1
+            # Path of the field to select in the specified API version.
+            fieldPath: string
+          # Selects a resource of the container: only resources limits
+          # and requests (limits.cpu, limits.memory,
+          # limits.ephemeral-storage, requests.cpu, requests.memory and
+          # requests.ephemeral-storage) are currently supported.
+          resourceFieldRef:
+            # Container name: required for volumes, optional for env
+            # vars
+            containerName: string
+            # Specifies the output format of the exposed resources,
+            # defaults to "1"
+            divisor: 
+            # Required: resource to select
+            resource: string
+          # Selects a key of a secret in the pod's namespace
+          secretKeyRef:
+            # The key of the secret to select from.  Must be a valid
+            # secret key.
+            key: string
+            # Name of the referent. More info:
+            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+            # TODO: Add other useful fields. apiVersion, kind, uid?
+            name: string
+            # Specify whether the Secret or its key must be defined
+            optional: true
+      # HomePath - Set the location of the Grafana home directory.
+      homePath: "/opt/gpudb/kagent/stats/grafana"
+      # GraphiteHost - Host Address
+      host: "0.0.0.0"
+      # Set the name of the container image to use.
+      image:
+        # Set the policy for pulling container images.
+        imagePullPolicy: "IfNotPresent"
+        # ImagePullSecrets is an optional list of references to secrets
+        # in the same gpudb-namespace to use for pulling any of the
+        # images used by this PodSpec. If specified, these secrets will
+        # be passed to individual puller implementations for them to
+        # use. For example, in the case of docker, only DockerConfig
+        # type secrets are honored.
+        imagePullSecrets:
+        - name: string
+        # The image registry & optional port containing the repository.
+        registry: "docker.io"
+        # The image repository path.
+        repository: "kineticadevcloud/"
+        # SemVer = Semantic Version for the Tag SemVer semver.Version
+        semVer: string
+        # The image sha.
+        sha: ""
+        # The image tag.
+        tag: "v7.1.5.2"
+      # Whether to enable the Stats Server on the Cluster. Default:
+      # true
+      isEnabled: true
+      # Periodic probe of container liveness. Container will be
+      # restarted if the probe fails. Cannot be updated. More info:
+      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+      livenessProbe:
+        # Exec specifies the action to take.
+        exec:
+          # Command is the command line to execute inside the container,
+          # the working directory for the command  is root ('/') in the
+          # container's filesystem. The command is simply exec'd, it is
+          # not run inside a shell, so traditional shell instructions
+          # ('|', etc) won't work. To use a shell, you need to
+          # explicitly call out to that shell. Exit status of 0 is
+          # treated as live/healthy and non-zero is unhealthy.
+          command: ["string"]
+        # Minimum consecutive failures for the probe to be considered
+        # failed after having succeeded. Defaults to 3. Minimum value
+        # is 1.
+        failureThreshold: 1
+        # GRPC specifies an action involving a GRPC port.
+        grpc:
+          # Port number of the gRPC service. Number must be in the range
+          # 1 to 65535.
+          port: 1
+          # Service is the name of the service to place in the gRPC
+          # HealthCheckRequest
+          # (see
+          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
+          # If this is not specified, the default behavior is defined
+          # by gRPC.
+          service: string
+        # HTTPGet specifies the http request to perform.
+        httpGet:
+          # Host name to connect to, defaults to the pod IP. You
+          # probably want to set "Host" in httpHeaders instead.
+          host: string
+          # Custom headers to set in the request. HTTP allows repeated
+          # headers.
+          httpHeaders:
+          - name: string
+            # The header field value
+            value: string
+          # Path to access on the HTTP server.
+          path: string
+          # Name or number of the port to access on the container.
+          # Number must be in the range 1 to 65535. Name must be an
+          # IANA_SVC_NAME.
+          port: 
+          # Scheme to use for connecting to the host. Defaults to HTTP.
+          scheme: string
+        # Number of seconds after the container has started before
+        # liveness probes are initiated. More info:
+        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+        initialDelaySeconds: 1
+        # How often (in seconds) to perform the probe. Default to 10
+        # seconds. Minimum value is 1.
+        periodSeconds: 1
+        # Minimum consecutive successes for the probe to be considered
+        # successful after having failed. Defaults to 1. Must be 1 for
+        # liveness and startup. Minimum value is 1.
+        successThreshold: 1
+        # TCPSocket specifies an action involving a TCP port.
+        tcpSocket:
+          # Optional: Host name to connect to, defaults to the pod IP.
+          host: string
+          # Number or name of the port to access on the container.
+          # Number must be in the range 1 to 65535. Name must be an
+          # IANA_SVC_NAME.
+          port: 
+        # Optional duration in seconds the pod needs to terminate
+        # gracefully upon probe failure. The grace period is the
+        # duration in seconds after the processes running in the pod
+        # are sent a termination signal and the time when the processes
+        # are forcibly halted with a kill signal. Set this value longer
+        # than the expected cleanup time for your process. If this
+        # value is nil, the pod's terminationGracePeriodSeconds will be
+        # used. Otherwise, this value overrides the value provided by
+        # the pod spec. Value must be non-negative integer. The value
+        # zero indicates stop immediately via the kill signal
+        # (no opportunity to shut down). This is a beta field and
+        # requires enabling ProbeTerminationGracePeriod feature gate.
+        # Minimum value is 1. spec.terminationGracePeriodSeconds is
+        # used if unset.
+        terminationGracePeriodSeconds: 1
+        # Number of seconds after which the probe times out. Defaults to
+        # 1 second. Minimum value is 1. More info:
+        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+        timeoutSeconds: 1
+      # Logs - Set the location of the logs directory.
+      logs: "/opt/gpudb/kagent/stats/logs"
+      name: "stats"
+      # Periodic probe of container service readiness. Container will be
+      # removed from service endpoints if the probe fails. Cannot be
+      # updated. More info:
+      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+      readinessProbe:
+        # Exec specifies the action to take.
+        exec:
+          # Command is the command line to execute inside the container,
+          # the working directory for the command  is root ('/') in the
+          # container's filesystem. The command is simply exec'd, it is
+          # not run inside a shell, so traditional shell instructions
+          # ('|', etc) won't work. To use a shell, you need to
+          # explicitly call out to that shell. Exit status of 0 is
+          # treated as live/healthy and non-zero is unhealthy.
+          command: ["string"]
+        # Minimum consecutive failures for the probe to be considered
+        # failed after having succeeded. Defaults to 3. Minimum value
+        # is 1.
+        failureThreshold: 1
+        # GRPC specifies an action involving a GRPC port.
+        grpc:
+          # Port number of the gRPC service. Number must be in the range
+          # 1 to 65535.
+          port: 1
+          # Service is the name of the service to place in the gRPC
+          # HealthCheckRequest
+          # (see
+          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
+          # If this is not specified, the default behavior is defined
+          # by gRPC.
+          service: string
+        # HTTPGet specifies the http request to perform.
+        httpGet:
+          # Host name to connect to, defaults to the pod IP. You
+          # probably want to set "Host" in httpHeaders instead.
+          host: string
+          # Custom headers to set in the request. HTTP allows repeated
+          # headers.
+          httpHeaders:
+          - name: string
+            # The header field value
+            value: string
+          # Path to access on the HTTP server.
+          path: string
+          # Name or number of the port to access on the container.
+          # Number must be in the range 1 to 65535. Name must be an
+          # IANA_SVC_NAME.
+          port: 
+          # Scheme to use for connecting to the host. Defaults to HTTP.
+          scheme: string
+        # Number of seconds after the container has started before
+        # liveness probes are initiated. More info:
+        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+        initialDelaySeconds: 1
+        # How often (in seconds) to perform the probe. Default to 10
+        # seconds. Minimum value is 1.
+        periodSeconds: 1
+        # Minimum consecutive successes for the probe to be considered
+        # successful after having failed. Defaults to 1. Must be 1 for
+        # liveness and startup. Minimum value is 1.
+        successThreshold: 1
+        # TCPSocket specifies an action involving a TCP port.
+        tcpSocket:
+          # Optional: Host name to connect to, defaults to the pod IP.
+          host: string
+          # Number or name of the port to access on the container.
+          # Number must be in the range 1 to 65535. Name must be an
+          # IANA_SVC_NAME.
+          port: 
+        # Optional duration in seconds the pod needs to terminate
+        # gracefully upon probe failure. The grace period is the
+        # duration in seconds after the processes running in the pod
+        # are sent a termination signal and the time when the processes
+        # are forcibly halted with a kill signal. Set this value longer
+        # than the expected cleanup time for your process. If this
+        # value is nil, the pod's terminationGracePeriodSeconds will be
+        # used. Otherwise, this value overrides the value provided by
+        # the pod spec. Value must be non-negative integer. The value
+        # zero indicates stop immediately via the kill signal
+        # (no opportunity to shut down). This is a beta field and
+        # requires enabling ProbeTerminationGracePeriod feature gate.
+        # Minimum value is 1. spec.terminationGracePeriodSeconds is
+        # used if unset.
+        terminationGracePeriodSeconds: 1
+        # Number of seconds after which the probe times out. Defaults to
+        # 1 second. Minimum value is 1. More info:
+        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+        timeoutSeconds: 1
+      # Resource Requests & Limits for the Stats Pod.
+      resources:
+        # Claims lists the names of resources, defined in
+        # spec.resourceClaims, that are used by this container. This is
+        # an alpha field and requires enabling the
+        # DynamicResourceAllocation feature gate. This field is
+        # immutable. It can only be set for containers.
+        claims:
+        - name: string
+        # Limits describes the maximum amount of compute resources
+        # allowed. More info:
+        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+        limits: {}
+        # Requests describes the minimum amount of compute resources
+        # required. If Requests is omitted for a container, it defaults
+        # to Limits if that is explicitly specified, otherwise to an
+        # implementation-defined value. Requests cannot exceed Limits.
+        # More info:
+        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+        requests: {}
+    # Whether to enable the Stats Server on the Cluster. Default: true
+    isEnabled: true
+    # Loki - Loki specific configuration.
+    loki:
+      # Set the arguments for the command within the container to run.
+      args:
+      ["-c","/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug
+      --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090
+      --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage
+      --storage.tsdb.retention.time=7d  --web.enable-lifecycle"]
+      # Set the command within the container to run.
+      command: ["/bin/sh"]
+      # ConfigFile - Set the location of the Loki configuration file.
+      configFile: "/opt/gpudb/kagent/stats/loki/loki.yml"
+      # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a
+      # ConfigMap
+      configFileAsConfigMap: true
+      # The port that Stats will be running on. It runs only on the head
+      # node pod in the cluster. Default: 9091
+      containerPort:
+        # Number of port to expose on the pod's IP address. This must be
+        # a valid port number, 0 < x < 65536.
+        containerPort: 1
+        # What host IP to bind the external port to.
+        hostIP: string
+        # Number of port to expose on the host. If specified, this must
+        # be a valid port number, 0 < x < 65536. If HostNetwork is
+        # specified, this must match ContainerPort. Most containers do
+        # not need this.
+        hostPort: 1
+        # If specified, this must be an IANA_SVC_NAME and unique within
+        # the pod. Each named port in a pod must have a unique name.
+        # Name for the port that can be referred to by services.
+        name: string
+        # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
+        # to "TCP".
+        protocol: "TCP"
+      # List of environment variables to set in the container.
+      env:
+      - name: string
+        # Variable references $(VAR_NAME) are expanded using the
+        # previously defined environment variables in the container and
+        # any service environment variables. If a variable cannot be
+        # resolved, the reference in the input string will be
+        # unchanged. Double $$ are reduced to a single $, which allows
+        # for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will
+        # produce the string literal "$(VAR_NAME)". Escaped references
+        # will never be expanded, regardless of whether the variable
+        # exists or not. Defaults to "".
+        value: string
+        # Source for the environment variable's value. Cannot be used if
+        # value is not empty.
+        valueFrom:
+          # Selects a key of a ConfigMap.
+          configMapKeyRef:
+            # The key to select.
+            key: string
+            # Name of the referent. More info:
+            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+            # TODO: Add other useful fields. apiVersion, kind, uid?
+            name: string
+            # Specify whether the ConfigMap or its key must be defined
+            optional: true
+          # Selects a field of the pod: supports metadata.name,
+          # metadata.namespace, `metadata.labels
+          # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,
+          # spec.serviceAccountName, status.hostIP, status.podIP,
+          # status.podIPs.
+          fieldRef:
+            # Version of the schema the FieldPath is written in terms
+            # of, defaults to "v1".
+            apiVersion: app.kinetica.com/v1
+            # Path of the field to select in the specified API version.
+            fieldPath: string
+          # Selects a resource of the container: only resources limits
+          # and requests (limits.cpu, limits.memory,
+          # limits.ephemeral-storage, requests.cpu, requests.memory and
+          # requests.ephemeral-storage) are currently supported.
+          resourceFieldRef:
+            # Container name: required for volumes, optional for env
+            # vars
+            containerName: string
+            # Specifies the output format of the exposed resources,
+            # defaults to "1"
+            divisor: 
+            # Required: resource to select
+            resource: string
+          # Selects a key of a secret in the pod's namespace
+          secretKeyRef:
+            # The key of the secret to select from.  Must be a valid
+            # secret key.
+            key: string
+            # Name of the referent. More info:
+            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+            # TODO: Add other useful fields. apiVersion, kind, uid?
+            name: string
+            # Specify whether the Secret or its key must be defined
+            optional: true
+      # ExpandEnv
+      expandEnv: true
+      # Set the name of the container image to use.
+      image:
+        # Set the policy for pulling container images.
+        imagePullPolicy: "IfNotPresent"
+        # ImagePullSecrets is an optional list of references to secrets
+        # in the same gpudb-namespace to use for pulling any of the
+        # images used by this PodSpec. If specified, these secrets will
+        # be passed to individual puller implementations for them to
+        # use. For example, in the case of docker, only DockerConfig
+        # type secrets are honored.
+        imagePullSecrets:
+        - name: string
+        # The image registry & optional port containing the repository.
+        registry: "docker.io"
+        # The image repository path.
+        repository: "kineticadevcloud/"
+        # SemVer = Semantic Version for the Tag SemVer semver.Version
+        semVer: string
+        # The image sha.
+        sha: ""
+        # The image tag.
+        tag: "v7.1.5.2"
+      # Whether to enable the Stats Server on the Cluster. Default:
+      # true
+      isEnabled: true
+      # Periodic probe of container liveness. Container will be
+      # restarted if the probe fails. Cannot be updated. More info:
+      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+      livenessProbe:
+        # Exec specifies the action to take.
+        exec:
+          # Command is the command line to execute inside the container,
+          # the working directory for the command  is root ('/') in the
+          # container's filesystem. The command is simply exec'd, it is
+          # not run inside a shell, so traditional shell instructions
+          # ('|', etc) won't work. To use a shell, you need to
+          # explicitly call out to that shell. Exit status of 0 is
+          # treated as live/healthy and non-zero is unhealthy.
+          command: ["string"]
+        # Minimum consecutive failures for the probe to be considered
+        # failed after having succeeded. Defaults to 3. Minimum value
+        # is 1.
+        failureThreshold: 1
+        # GRPC specifies an action involving a GRPC port.
+        grpc:
+          # Port number of the gRPC service. Number must be in the range
+          # 1 to 65535.
+          port: 1
+          # Service is the name of the service to place in the gRPC
+          # HealthCheckRequest
+          # (see
+          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
+          # If this is not specified, the default behavior is defined
+          # by gRPC.
+          service: string
+        # HTTPGet specifies the http request to perform.
+        httpGet:
+          # Host name to connect to, defaults to the pod IP. You
+          # probably want to set "Host" in httpHeaders instead.
+          host: string
+          # Custom headers to set in the request. HTTP allows repeated
+          # headers.
+          httpHeaders:
+          - name: string
+            # The header field value
+            value: string
+          # Path to access on the HTTP server.
+          path: string
+          # Name or number of the port to access on the container.
+          # Number must be in the range 1 to 65535. Name must be an
+          # IANA_SVC_NAME.
+          port: 
+          # Scheme to use for connecting to the host. Defaults to HTTP.
+          scheme: string
+        # Number of seconds after the container has started before
+        # liveness probes are initiated. More info:
+        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+        initialDelaySeconds: 1
+        # How often (in seconds) to perform the probe. Default to 10
+        # seconds. Minimum value is 1.
+        periodSeconds: 1
+        # Minimum consecutive successes for the probe to be considered
+        # successful after having failed. Defaults to 1. Must be 1 for
+        # liveness and startup. Minimum value is 1.
+        successThreshold: 1
+        # TCPSocket specifies an action involving a TCP port.
+        tcpSocket:
+          # Optional: Host name to connect to, defaults to the pod IP.
+          host: string
+          # Number or name of the port to access on the container.
+          # Number must be in the range 1 to 65535. Name must be an
+          # IANA_SVC_NAME.
+          port: 
+        # Optional duration in seconds the pod needs to terminate
+        # gracefully upon probe failure. The grace period is the
+        # duration in seconds after the processes running in the pod
+        # are sent a termination signal and the time when the processes
+        # are forcibly halted with a kill signal. Set this value longer
+        # than the expected cleanup time for your process. If this
+        # value is nil, the pod's terminationGracePeriodSeconds will be
+        # used. Otherwise, this value overrides the value provided by
+        # the pod spec. Value must be non-negative integer. The value
+        # zero indicates stop immediately via the kill signal
+        # (no opportunity to shut down). This is a beta field and
+        # requires enabling ProbeTerminationGracePeriod feature gate.
+        # Minimum value is 1. spec.terminationGracePeriodSeconds is
+        # used if unset.
+        terminationGracePeriodSeconds: 1
+        # Number of seconds after which the probe times out. Defaults to
+        # 1 second. Minimum value is 1. More info:
+        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+        timeoutSeconds: 1
+      # Logs - Set the location of the logs directory.
+      logs: "/opt/gpudb/kagent/stats/logs"
+      name: "stats"
+      # Periodic probe of container service readiness. Container will be
+      # removed from service endpoints if the probe fails. Cannot be
+      # updated. More info:
+      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+      readinessProbe:
+        # Exec specifies the action to take.
+        exec:
+          # Command is the command line to execute inside the container,
+          # the working directory for the command  is root ('/') in the
+          # container's filesystem. The command is simply exec'd, it is
+          # not run inside a shell, so traditional shell instructions
+          # ('|', etc) won't work. To use a shell, you need to
+          # explicitly call out to that shell. Exit status of 0 is
+          # treated as live/healthy and non-zero is unhealthy.
+          command: ["string"]
+        # Minimum consecutive failures for the probe to be considered
+        # failed after having succeeded. Defaults to 3. Minimum value
+        # is 1.
+        failureThreshold: 1
+        # GRPC specifies an action involving a GRPC port.
+        grpc:
+          # Port number of the gRPC service. Number must be in the range
+          # 1 to 65535.
+          port: 1
+          # Service is the name of the service to place in the gRPC
+          # HealthCheckRequest
+          # (see
+          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
+          # If this is not specified, the default behavior is defined
+          # by gRPC.
+          service: string
+        # HTTPGet specifies the http request to perform.
+        httpGet:
+          # Host name to connect to, defaults to the pod IP. You
+          # probably want to set "Host" in httpHeaders instead.
+          host: string
+          # Custom headers to set in the request. HTTP allows repeated
+          # headers.
+          httpHeaders:
+          - name: string
+            # The header field value
+            value: string
+          # Path to access on the HTTP server.
+          path: string
+          # Name or number of the port to access on the container.
+          # Number must be in the range 1 to 65535. Name must be an
+          # IANA_SVC_NAME.
+          port: 
+          # Scheme to use for connecting to the host. Defaults to HTTP.
+          scheme: string
+        # Number of seconds after the container has started before
+        # liveness probes are initiated. More info:
+        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+        initialDelaySeconds: 1
+        # How often (in seconds) to perform the probe. Default to 10
+        # seconds. Minimum value is 1.
+        periodSeconds: 1
+        # Minimum consecutive successes for the probe to be considered
+        # successful after having failed. Defaults to 1. Must be 1 for
+        # liveness and startup. Minimum value is 1.
+        successThreshold: 1
+        # TCPSocket specifies an action involving a TCP port.
+        tcpSocket:
+          # Optional: Host name to connect to, defaults to the pod IP.
+          host: string
+          # Number or name of the port to access on the container.
+          # Number must be in the range 1 to 65535. Name must be an
+          # IANA_SVC_NAME.
+          port: 
+        # Optional duration in seconds the pod needs to terminate
+        # gracefully upon probe failure. The grace period is the
+        # duration in seconds after the processes running in the pod
+        # are sent a termination signal and the time when the processes
+        # are forcibly halted with a kill signal. Set this value longer
+        # than the expected cleanup time for your process. If this
+        # value is nil, the pod's terminationGracePeriodSeconds will be
+        # used. Otherwise, this value overrides the value provided by
+        # the pod spec. Value must be non-negative integer. The value
+        # zero indicates stop immediately via the kill signal
+        # (no opportunity to shut down). This is a beta field and
+        # requires enabling ProbeTerminationGracePeriod feature gate.
+        # Minimum value is 1. spec.terminationGracePeriodSeconds is
+        # used if unset.
+        terminationGracePeriodSeconds: 1
+        # Number of seconds after which the probe times out. Defaults to
+        # 1 second. Minimum value is 1. More info:
+        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+        timeoutSeconds: 1
+      # Resource Requests & Limits for the Stats Pod.
+      resources:
+        # Claims lists the names of resources, defined in
+        # spec.resourceClaims, that are used by this container. This is
+        # an alpha field and requires enabling the
+        # DynamicResourceAllocation feature gate. This field is
+        # immutable. It can only be set for containers.
+        claims:
+        - name: string
+        # Limits describes the maximum amount of compute resources
+        # allowed. More info:
+        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+        limits: {}
+        # Requests describes the minimum amount of compute resources
+        # required. If Requests is omitted for a container, it defaults
+        # to Limits if that is explicitly specified, otherwise to an
+        # implementation-defined value. Requests cannot exceed Limits.
+        # More info:
+        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+        requests: {}
+      # Storage - Set the path of the Loki storage.
+      storage: "/opt/gpudb/kagent/stats/storage/loki-storage"
+    # Which vmss/node group etc. to use as the NodeSelector
+    pool: "compute"
+    # Prometheus - Prometheus specific configuration.
+    prometheus:
+      # Set the arguments for the command within the container to run.
+      args:
+      ["-c","/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug
+      --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090
+      --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage
+      --storage.tsdb.retention.time=7d  --web.enable-lifecycle"]
+      # Set the command within the container to run.
+      command: ["/bin/sh"]
+      # ConfigFile - Set the location of the Loki configuration file.
+      configFile: "/opt/gpudb/kagent/stats/loki/loki.yml"
+      # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a
+      # ConfigMap
+      configFileAsConfigMap: true
+      # The port that Stats will be running on. It runs only on the head
+      # node pod in the cluster. Default: 9091
+      containerPort:
+        # Number of port to expose on the pod's IP address. This must be
+        # a valid port number, 0 < x < 65536.
+        containerPort: 1
+        # What host IP to bind the external port to.
+        hostIP: string
+        # Number of port to expose on the host. If specified, this must
+        # be a valid port number, 0 < x < 65536. If HostNetwork is
+        # specified, this must match ContainerPort. Most containers do
+        # not need this.
+        hostPort: 1
+        # If specified, this must be an IANA_SVC_NAME and unique within
+        # the pod. Each named port in a pod must have a unique name.
+        # Name for the port that can be referred to by services.
+        name: string
+        # Protocol for port. Must be UDP, TCP, or SCTP. Defaults
+        # to "TCP".
+        protocol: "TCP"
+      # List of environment variables to set in the container.
+      env:
+      - name: string
+        # Variable references $(VAR_NAME) are expanded using the
+        # previously defined environment variables in the container and
+        # any service environment variables. If a variable cannot be
+        # resolved, the reference in the input string will be
+        # unchanged. Double $$ are reduced to a single $, which allows
+        # for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will
+        # produce the string literal "$(VAR_NAME)". Escaped references
+        # will never be expanded, regardless of whether the variable
+        # exists or not. Defaults to "".
+        value: string
+        # Source for the environment variable's value. Cannot be used if
+        # value is not empty.
+        valueFrom:
+          # Selects a key of a ConfigMap.
+          configMapKeyRef:
+            # The key to select.
+            key: string
+            # Name of the referent. More info:
+            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+            # TODO: Add other useful fields. apiVersion, kind, uid?
+            name: string
+            # Specify whether the ConfigMap or its key must be defined
+            optional: true
+          # Selects a field of the pod: supports metadata.name,
+          # metadata.namespace, `metadata.labels
+          # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,
+          # spec.serviceAccountName, status.hostIP, status.podIP,
+          # status.podIPs.
+          fieldRef:
+            # Version of the schema the FieldPath is written in terms
+            # of, defaults to "v1".
+            apiVersion: app.kinetica.com/v1
+            # Path of the field to select in the specified API version.
+            fieldPath: string
+          # Selects a resource of the container: only resources limits
+          # and requests (limits.cpu, limits.memory,
+          # limits.ephemeral-storage, requests.cpu, requests.memory and
+          # requests.ephemeral-storage) are currently supported.
+          resourceFieldRef:
+            # Container name: required for volumes, optional for env
+            # vars
+            containerName: string
+            # Specifies the output format of the exposed resources,
+            # defaults to "1"
+            divisor: 
+            # Required: resource to select
+            resource: string
+          # Selects a key of a secret in the pod's namespace
+          secretKeyRef:
+            # The key of the secret to select from.  Must be a valid
+            # secret key.
+            key: string
+            # Name of the referent. More info:
+            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+            # TODO: Add other useful fields. apiVersion, kind, uid?
+            name: string
+            # Specify whether the Secret or its key must be defined
+            optional: true
+      # Set the name of the container image to use.
+      image:
+        # Set the policy for pulling container images.
+        imagePullPolicy: "IfNotPresent"
+        # ImagePullSecrets is an optional list of references to secrets
+        # in the same gpudb-namespace to use for pulling any of the
+        # images used by this PodSpec. If specified, these secrets will
+        # be passed to individual puller implementations for them to
+        # use. For example, in the case of docker, only DockerConfig
+        # type secrets are honored.
+        imagePullSecrets:
+        - name: string
+        # The image registry & optional port containing the repository.
+        registry: "docker.io"
+        # The image repository path.
+        repository: "kineticadevcloud/"
+        # SemVer = Semantic Version for the Tag SemVer semver.Version
+        semVer: string
+        # The image sha.
+        sha: ""
+        # The image tag.
+        tag: "v7.1.5.2"
+      # Whether to enable the Stats Server on the Cluster. Default:
+      # true
+      isEnabled: true
+      # Periodic probe of container liveness. Container will be
+      # restarted if the probe fails. Cannot be updated. More info:
+      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+      livenessProbe:
+        # Exec specifies the action to take.
+        exec:
+          # Command is the command line to execute inside the container,
+          # the working directory for the command  is root ('/') in the
+          # container's filesystem. The command is simply exec'd, it is
+          # not run inside a shell, so traditional shell instructions
+          # ('|', etc) won't work. To use a shell, you need to
+          # explicitly call out to that shell. Exit status of 0 is
+          # treated as live/healthy and non-zero is unhealthy.
+          command: ["string"]
+        # Minimum consecutive failures for the probe to be considered
+        # failed after having succeeded. Defaults to 3. Minimum value
+        # is 1.
+        failureThreshold: 1
+        # GRPC specifies an action involving a GRPC port.
+        grpc:
+          # Port number of the gRPC service. Number must be in the range
+          # 1 to 65535.
+          port: 1
+          # Service is the name of the service to place in the gRPC
+          # HealthCheckRequest
+          # (see
+          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
+          # If this is not specified, the default behavior is defined
+          # by gRPC.
+          service: string
+        # HTTPGet specifies the http request to perform.
+        httpGet:
+          # Host name to connect to, defaults to the pod IP. You
+          # probably want to set "Host" in httpHeaders instead.
+          host: string
+          # Custom headers to set in the request. HTTP allows repeated
+          # headers.
+          httpHeaders:
+          - name: string
+            # The header field value
+            value: string
+          # Path to access on the HTTP server.
+          path: string
+          # Name or number of the port to access on the container.
+          # Number must be in the range 1 to 65535. Name must be an
+          # IANA_SVC_NAME.
+          port: 
+          # Scheme to use for connecting to the host. Defaults to HTTP.
+          scheme: string
+        # Number of seconds after the container has started before
+        # liveness probes are initiated. More info:
+        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+        initialDelaySeconds: 1
+        # How often (in seconds) to perform the probe. Default to 10
+        # seconds. Minimum value is 1.
+        periodSeconds: 1
+        # Minimum consecutive successes for the probe to be considered
+        # successful after having failed. Defaults to 1. Must be 1 for
+        # liveness and startup. Minimum value is 1.
+        successThreshold: 1
+        # TCPSocket specifies an action involving a TCP port.
+        tcpSocket:
+          # Optional: Host name to connect to, defaults to the pod IP.
+          host: string
+          # Number or name of the port to access on the container.
+          # Number must be in the range 1 to 65535. Name must be an
+          # IANA_SVC_NAME.
+          port: 
+        # Optional duration in seconds the pod needs to terminate
+        # gracefully upon probe failure. The grace period is the
+        # duration in seconds after the processes running in the pod
+        # are sent a termination signal and the time when the processes
+        # are forcibly halted with a kill signal. Set this value longer
+        # than the expected cleanup time for your process. If this
+        # value is nil, the pod's terminationGracePeriodSeconds will be
+        # used. Otherwise, this value overrides the value provided by
+        # the pod spec. Value must be non-negative integer. The value
+        # zero indicates stop immediately via the kill signal
+        # (no opportunity to shut down). This is a beta field and
+        # requires enabling ProbeTerminationGracePeriod feature gate.
+        # Minimum value is 1. spec.terminationGracePeriodSeconds is
+        # used if unset.
+        terminationGracePeriodSeconds: 1
+        # Number of seconds after which the probe times out. Defaults to
+        # 1 second. Minimum value is 1. More info:
+        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+        timeoutSeconds: 1
+      # Set the Prometheus logging level.
+      logLevel: "debug"
+      # Logs - Set the location of the logs directory.
+      logs: "/opt/gpudb/kagent/stats/logs"
+      name: "stats"
+      # Periodic probe of container service readiness. Container will be
+      # removed from service endpoints if the probe fails. Cannot be
+      # updated. More info:
+      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+      readinessProbe:
+        # Exec specifies the action to take.
+        exec:
+          # Command is the command line to execute inside the container,
+          # the working directory for the command  is root ('/') in the
+          # container's filesystem. The command is simply exec'd, it is
+          # not run inside a shell, so traditional shell instructions
+          # ('|', etc) won't work. To use a shell, you need to
+          # explicitly call out to that shell. Exit status of 0 is
+          # treated as live/healthy and non-zero is unhealthy.
+          command: ["string"]
+        # Minimum consecutive failures for the probe to be considered
+        # failed after having succeeded. Defaults to 3. Minimum value
+        # is 1.
+        failureThreshold: 1
+        # GRPC specifies an action involving a GRPC port.
+        grpc:
+          # Port number of the gRPC service. Number must be in the range
+          # 1 to 65535.
+          port: 1
+          # Service is the name of the service to place in the gRPC
+          # HealthCheckRequest
+          # (see
+          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
+          # If this is not specified, the default behavior is defined
+          # by gRPC.
+          service: string
+        # HTTPGet specifies the http request to perform.
+        httpGet:
+          # Host name to connect to, defaults to the pod IP. You
+          # probably want to set "Host" in httpHeaders instead.
+          host: string
+          # Custom headers to set in the request. HTTP allows repeated
+          # headers.
+          httpHeaders:
+          - name: string
+            # The header field value
+            value: string
+          # Path to access on the HTTP server.
+          path: string
+          # Name or number of the port to access on the container.
+          # Number must be in the range 1 to 65535. Name must be an
+          # IANA_SVC_NAME.
+          port: 
+          # Scheme to use for connecting to the host. Defaults to HTTP.
+          scheme: string
+        # Number of seconds after the container has started before
+        # liveness probes are initiated. More info:
+        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+        initialDelaySeconds: 1
+        # How often (in seconds) to perform the probe. Default to 10
+        # seconds. Minimum value is 1.
+        periodSeconds: 1
+        # Minimum consecutive successes for the probe to be considered
+        # successful after having failed. Defaults to 1. Must be 1 for
+        # liveness and startup. Minimum value is 1.
+        successThreshold: 1
+        # TCPSocket specifies an action involving a TCP port.
+        tcpSocket:
+          # Optional: Host name to connect to, defaults to the pod IP.
+          host: string
+          # Number or name of the port to access on the container.
+          # Number must be in the range 1 to 65535. Name must be an
+          # IANA_SVC_NAME.
+          port: 
+        # Optional duration in seconds the pod needs to terminate
+        # gracefully upon probe failure. The grace period is the
+        # duration in seconds after the processes running in the pod
+        # are sent a termination signal and the time when the processes
+        # are forcibly halted with a kill signal. Set this value longer
+        # than the expected cleanup time for your process. If this
+        # value is nil, the pod's terminationGracePeriodSeconds will be
+        # used. Otherwise, this value overrides the value provided by
+        # the pod spec. Value must be non-negative integer. The value
+        # zero indicates stop immediately via the kill signal
+        # (no opportunity to shut down). This is a beta field and
+        # requires enabling ProbeTerminationGracePeriod feature gate.
+        # Minimum value is 1. spec.terminationGracePeriodSeconds is
+        # used if unset.
+        terminationGracePeriodSeconds: 1
+        # Number of seconds after which the probe times out. Defaults to
+        # 1 second. Minimum value is 1. More info:
+        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+        timeoutSeconds: 1
+      # Resource Requests & Limits for the Stats Pod.
+      resources:
+        # Claims lists the names of resources, defined in
+        # spec.resourceClaims, that are used by this container. This is
+        # an alpha field and requires enabling the
+        # DynamicResourceAllocation feature gate. This field is
+        # immutable. It can only be set for containers.
+        claims:
+        - name: string
+        # Limits describes the maximum amount of compute resources
+        # allowed. More info:
+        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+        limits: {}
+        # Requests describes the minimum amount of compute resources
+        # required. If Requests is omitted for a container, it defaults
+        # to Limits if that is explicitly specified, otherwise to an
+        # implementation-defined value. Requests cannot exceed Limits.
+        # More info:
+        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+        requests: {}
+      # Set the location of the TSDB database.
+      storageTSDBPath: "/opt/gpudb/kagent/stats/storage/prometheus-storage"
+      # Set the time to hold data in the TSDB database.
+      storageTSDBRetentionTime: "7d"
+      # Timings - Prometheus Intervals & Timeouts
+      timings:
+        evaluationInterval: "30s"
+        scrapeInterval: "30s"
+        scrapeTimeout: "10s"
+    # Whether to share a single PV for Loki, Prometheus & Grafana or
+    # have a separate PV for each. Default: true
+    sharedPV: true
+    # Resource block specifically for use with SharedPV = true to set
+    # storage `requests` & `limits`
+    sharedPVResources:
+      # Claims lists the names of resources, defined in
+      # spec.resourceClaims, that are used by this container. This is
+      # an alpha field and requires enabling the
+      # DynamicResourceAllocation feature gate. This field is
+      # immutable. It can only be set for containers.
+      claims:
+      - name: string
+      # Limits describes the maximum amount of compute resources
+      # allowed. More info:
+      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+      limits: {}
+      # Requests describes the minimum amount of compute resources
+      # required. If Requests is omitted for a container, it defaults
+      # to Limits if that is explicitly specified, otherwise to an
+      # implementation-defined value. Requests cannot exceed Limits.
+      # More info:
+      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+      requests: {}
+  # Supporting images like socat,busybox etc.
+  supportingImages:
+    # Set the resource requests/limits for the BusyBox Pod(s).
+    busyBoxResources:
+      # Claims lists the names of resources, defined in
+      # spec.resourceClaims, that are used by this container. This is
+      # an alpha field and requires enabling the
+      # DynamicResourceAllocation feature gate. This field is
+      # immutable. It can only be set for containers.
+      claims:
+      - name: string
+      # Limits describes the maximum amount of compute resources
+      # allowed. More info:
+      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+      limits: {}
+      # Requests describes the minimum amount of compute resources
+      # required. If Requests is omitted for a container, it defaults
+      # to Limits if that is explicitly specified, otherwise to an
+      # implementation-defined value. Requests cannot exceed Limits.
+      # More info:
+      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+      requests: {}
+    # Set the name of the container image to use.
+    busybox:
+      # Set the policy for pulling container images.
+      imagePullPolicy: "IfNotPresent"
+      # ImagePullSecrets is an optional list of references to secrets in
+      # the same gpudb-namespace to use for pulling any of the images
+      # used by this PodSpec. If specified, these secrets will be
+      # passed to individual puller implementations for them to use.
+      # For example, in the case of docker, only DockerConfig type
+      # secrets are honored.
+      imagePullSecrets:
+      - name: string
+      # The image registry & optional port containing the repository.
+      registry: "docker.io"
+      # The image repository path.
+      repository: "kineticadevcloud/"
+      # SemVer = Semantic Version for the Tag SemVer semver.Version
+      semVer: string
+      # The image sha.
+      sha: ""
+      # The image tag.
+      tag: "v7.1.5.2"
+    # Set the name of the container image to use.
+    socat:
+      # Set the policy for pulling container images.
+      imagePullPolicy: "IfNotPresent"
+      # ImagePullSecrets is an optional list of references to secrets in
+      # the same gpudb-namespace to use for pulling any of the images
+      # used by this PodSpec. If specified, these secrets will be
+      # passed to individual puller implementations for them to use.
+      # For example, in the case of docker, only DockerConfig type
+      # secrets are honored.
+      imagePullSecrets:
+      - name: string
+      # The image registry & optional port containing the repository.
+      registry: "docker.io"
+      # The image repository path.
+      repository: "kineticadevcloud/"
+      # SemVer = Semantic Version for the Tag SemVer semver.Version
+      semVer: string
+      # The image sha.
+      sha: ""
+      # The image tag.
+      tag: "v7.1.5.2"
+    # Set the resource requests/limits for the Socat Pod.
+    socatResources:
+      # Claims lists the names of resources, defined in
+      # spec.resourceClaims, that are used by this container. This is
+      # an alpha field and requires enabling the
+      # DynamicResourceAllocation feature gate. This field is
+      # immutable. It can only be set for containers.
+      claims:
+      - name: string
+      # Limits describes the maximum amount of compute resources
+      # allowed. More info:
+      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+      limits: {}
+      # Requests describes the minimum amount of compute resources
+      # required. If Requests is omitted for a container, it defaults
+      # to Limits if that is explicitly specified, otherwise to an
+      # implementation-defined value. Requests cannot exceed Limits.
+      # More info:
+      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+      requests: {}
+# KineticaClusterStatus defines the observed state of KineticaCluster
+status:
+  # CloudProvider the DB is deployed on
+  cloudProvider: string
+  # CloudRegion the DB is deployed on
+  cloudRegion: string
+  # ClusterSize the current number of ranks & type i.e. CPU or GPU of
+  # the cluster
+  clusterSize:
+    # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a
+    # representation of the number of nodes in a simple to understand
+    # T-Short size scheme. This indicates the size of the cluster i.e.
+    # the number of nodes. It does not identify the size of the cloud
+    # provider nodes. For node size see ClusterTypeEnum. Supported
+    # Values are: - XS S M L XL XXL XXXL
+    tshirtSize: string
+    # ClusterTypeEnum - An Enum of the node types of a KineticaCluster
+    # e.g. CPU, GPU along with the Cloud Provider node size e.g. size
+    # of the VM.
+    tshirtType: string
+  # The number of ranks (replicas) that the cluster was last run with
+  currentReplicas: 0
+  # The first start of a new cluster has completed.
+  firstStartComplete: false
+  # HostManagerStatusResponse - The contents of polling the HostManager
+  # on port 9300 are added to the CR status field. This allows clients
+  # to get the Host/Rank/Graph/ML status information.
+  hmStatus:
+    cluster_leader: string
+    cluster_operation: string
+    graph:
+      status: string
+    graph_status: string
+    host_httpd_status: string
+    host_mode: string
+    host_num_gpus: string
+    host_pid: 1
+    host_stats_status: string
+    host_status: string
+    hostname: string
+    hosts:
+      graph_status: string
+      host_httpd_status: string
+      host_mode: string
+      host_pid: 1
+      host_stats_status: string
+      host_status: string
+      ml_status: string
+      query_planner_status: string
+      reveal_status: string
+    license_expiration: string
+    license_status: string
+    license_type: string
+    ml_status: string
+    query_planner_status: string
+    ranks:
+      mode: string
+      # Pid - The OS Process Id for the Rank.
+      pid: 1
+      status: string
+    reveal_status: string
+    system_idle_time: string
+    system_mode: string
+    system_rebalancing: 1
+    system_status: string
+    text:
+      status: string
+    version: string
+  # The fully qualified Ingress routes.
+  ingressUrls:
+    aaw: string
+    dbMonitor: string
+    files: string
+    gadmin: string
+    postgresProxy: string
+    ranks: {}
+    reveal: string
+  # The fully qualified in-cluster Ingress routes.
+  internalIngressUrls:
+    aaw: string
+    dbMonitor: string
+    files: string
+    gadmin: string
+    postgresProxy: string
+    ranks: {}
+    reveal: string
+  # Identify FreeSaaS Cluster
+  isFreeSaaS: false
+  # HostOptions used during DB Cluster Scaling Functions
+  options:
+    ram_limit: 1
+  # OutstandingBilling - A list of hours not yet billed for. Will only
+  # be present if the plan is Pay As You Go and the operator was unable
+  # to send the billing information due to an issue with the cloud
+  # providers billing APIs.
+  outstandingBillableHour:
+  - billable: true
+    billed: true
+    billedAt: string
+    duration: string
+    end: string
+    start: string
+  # The state or phase of the current DB installation
+  phase: string
+
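The reference above lists every available field, but in practice only a handful are usually overridden. The following is a minimal, hypothetical excerpt of a KineticaCluster CR showing how the monitoring-related settings documented above could be tuned. It assumes the monitoring block is named stats and sits directly under spec, and that every omitted field keeps the default shown above; the cluster name, registry host and resource quantities are illustrative only and should be adapted to your environment.

kineticacluster-stats-excerpt.yaml
apiVersion: app.kinetica.com/v1
kind: KineticaCluster
metadata:
  name: my-kinetica-cluster        # hypothetical cluster name
  namespace: gpudb
spec:
  stats:
    # Keep the Stats stack (Grafana, Loki & Prometheus) enabled.
    isEnabled: true
    # Share a single PV between Loki, Prometheus & Grafana.
    sharedPV: true
    prometheus:
      # Shorten the TSDB retention window from the 7d default.
      storageTSDBRetentionTime: "3d"
      # Standard Kubernetes ResourceRequirements maps.
      resources:
        requests:
          cpu: "500m"              # illustrative values
          memory: "1Gi"
        limits:
          memory: "2Gi"
  supportingImages:
    busybox:
      # Pull the busybox helper image from a private registry
      # instead of the docker.io default.
      registry: "registry.example.com"

Because limits and requests are plain Kubernetes ResourceRequirements maps, the usual cpu and memory quantity syntax applies.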

\ No newline at end of file diff --git a/Reference/kinetica_workbench/index.html b/Reference/kinetica_workbench/index.html new file mode 100644 index 0000000..a1805d9 --- /dev/null +++ b/Reference/kinetica_workbench/index.html @@ -0,0 +1,10 @@ + Kinetica Workbench Reference - Kinetica for Kubernetes
Skip to content

Kinetica Workbench CRD Reference

Coming Soon

\ No newline at end of file diff --git a/Reference/workbench/index.html b/Reference/workbench/index.html new file mode 100644 index 0000000..e10cd34 --- /dev/null +++ b/Reference/workbench/index.html @@ -0,0 +1,47 @@ + Kinetica Workbench Configuration - Kinetica for Kubernetes
Skip to content

Kinetica Workbench Configuration

  • kubectl (yaml)
  • Helm Chart

Workbench

Using kubectl, a CustomResource of type Workbench is used to define a new Kinetica Workbench deployment in a YAML file.

The basic Group, Version, Kind or GVK to instantiate a Kinetica Workbench is as follows: -

Workbench GVK
apiVersion: workbench.com.kinetica/v1
+kind: Workbench
+

Metadata

to which we add a metadata: block for the name of the Workbench CR along with the namespace into which we are targeting the installation of the Workbench.

Workbench metadata
apiVersion: workbench.com.kinetica/v1
+kind: Workbench
+metadata:
+  name: workbench-kinetica-cluster
+  namespace: gpudb
+

The simplest valid Workbench CR looks as follows: -

workbench.yaml
apiVersion: workbench.com.kinetica/v1
+kind: Workbench
+metadata:
+  name: workbench-kinetica-cluster
+  namespace: gpudb
+spec:
+  executeSqlLimit: 10000
+  fqdn: kinetica-cluster.saas.kinetica.com
+  image: kineticastagingcloud/workbench:v7.1.9-8.rc1
+  letsEncrypt:
+    enabled: false
+  userIdleTimeout: 60
+  ingressController: nginx-ingress
+

1. metadata.name - the user-defined name of the Workbench CR

2. metadata.namespace - the namespace into which the Workbench is installed
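The same CR can be extended using only fields already shown above. As a hedged sketch, enabling Let's Encrypt certificates simply flips the letsEncrypt.enabled flag; this assumes the fqdn is publicly resolvable so that certificates can actually be issued, and any issuer-specific settings your environment may require are not shown here.

workbench-letsencrypt.yaml
apiVersion: workbench.com.kinetica/v1
kind: Workbench
metadata:
  name: workbench-kinetica-cluster
  namespace: gpudb
spec:
  executeSqlLimit: 10000
  # The FQDN should resolve publicly for certificate issuance.
  fqdn: kinetica-cluster.saas.kinetica.com
  image: kineticastagingcloud/workbench:v7.1.9-8.rc1
  letsEncrypt:
    # Enable automatic TLS certificates via Let's Encrypt.
    enabled: true
  userIdleTimeout: 60
  ingressController: nginx-ingress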

\ No newline at end of file diff --git a/Setup/index.html b/Setup/index.html new file mode 100644 index 0000000..6bdf5e8 --- /dev/null +++ b/Setup/index.html @@ -0,0 +1,10 @@ + Kinetica for Kubernetes Setup - Kinetica for Kubernetes
Skip to content

Kinetica for Kubernetes Setup

  • Prepare to Install


    What you need to know & do before beginning an installation.

    Preparation and Prerequisites

  • Set up in 15 minutes


    Install the Kinetica DB with helm and get up and running in minutes.

    Installation

  • Beyond a Simple Installation


    Using the Helm Charts and Kinetica CRDs, it is possible to customize your installation in a number of ways.

    Advanced Topics

\ No newline at end of file diff --git a/Support/index.html b/Support/index.html new file mode 100644 index 0000000..bf1348e --- /dev/null +++ b/Support/index.html @@ -0,0 +1,10 @@ + Support - Kinetica for Kubernetes
Skip to content

Support

  • Taking the next steps


    Further tutorials or help on configuring Kinetica in different environments.

    Help & Tutorials

  • Locating Issues


    In the unlikely event that you need to troubleshoot your installation, help can be found here.

    Troubleshooting

\ No newline at end of file diff --git a/Troubleshooting/troubleshooting/index.html b/Troubleshooting/troubleshooting/index.html new file mode 100644 index 0000000..4c7c4b3 --- /dev/null +++ b/Troubleshooting/troubleshooting/index.html @@ -0,0 +1,10 @@ + Troubleshooting - Kinetica for Kubernetes
Skip to content

Troubleshooting

Coming Soon

\ No newline at end of file diff --git a/Workbench/workbench/index.html b/Workbench/workbench/index.html deleted file mode 100644 index 532fdc2..0000000 --- a/Workbench/workbench/index.html +++ /dev/null @@ -1,38 +0,0 @@ - Workbench CR - Kinetica DB Operator Helm Charts
Skip to content

Kinetica Workbench Configuration

  • kubectl (yaml)
  • Helm Chart

Workbench

Using kubetctl a CustomResource of type KineticaCluster is used to define a new Kinetica DB Cluster in a yaml file.

The basic Group, Version, Kind or GVK to instantiate a Kinetica Workbench is as follows: -

Workbench GVK
1
-2
apiVersion: workbench.com.kinetica/v1
-kind: Workbench
-

Metadata

to which we add a metadata: block for the name of the DB CR along with the namespace into which we are targetting the installation of the DB cluster.

Workbench metadata
1
-2
-3
-4
-5
apiVersion: workbench.com.kinetica/v1
-kind: Workbench
-metadata:
-  name: workbench-kinetica-cluster
-  namespace: gpudb
-

An Example Workbench CR

The simplest valid Workbench CR looks as follows: -

workbench.yaml
 1
- 2
- 3
- 4
- 5
- 6
- 7
- 8
- 9
-10
-11
-12
-13
apiVersion: workbench.com.kinetica/v1
-kind: Workbench
-metadata:
-  name: workbench-kinetica-cluster
-  namespace: gpudb
-spec:
-  executeSqlLimit: 10000
-  fqdn: kinetica-cluster.saas.kinetica.com
-  image: kineticastagingcloud/workbench:v7.1.9-8.rc1
-  letsEncrypt:
-    enabled: false
-  userIdleTimeout: 60
-  ingressController: nginx-ingress
-
\ No newline at end of file diff --git a/Workbench/workbench_reference/index.html b/Workbench/workbench_reference/index.html deleted file mode 100644 index 5f2063c..0000000 --- a/Workbench/workbench_reference/index.html +++ /dev/null @@ -1 +0,0 @@ - Workbench CRD/CR Reference - Kinetica DB Operator Helm Charts
Skip to content

Workbench CRD/CR Reference

\ No newline at end of file diff --git a/assets/distributed.png b/assets/distributed.png new file mode 100644 index 0000000..87c4de4 Binary files /dev/null and b/assets/distributed.png differ diff --git a/assets/favicon.ico b/assets/favicon.ico new file mode 100644 index 0000000..9aa52c8 Binary files /dev/null and b/assets/favicon.ico differ diff --git a/assets/javascripts/glightbox.min.js b/assets/javascripts/glightbox.min.js new file mode 100644 index 0000000..614fb18 --- /dev/null +++ b/assets/javascripts/glightbox.min.js @@ -0,0 +1 @@
Math.abs(e-t)>=Math.abs(i-n)?e-t>0?"Left":"Right":i-n>0?"Up":"Down"}},{key:"on",value:function(e,t){this[e]&&this[e].add(t)}},{key:"off",value:function(e,t){this[e]&&this[e].del(t)}},{key:"destroy",value:function(){return this.singleTapTimeout&&clearTimeout(this.singleTapTimeout),this.tapTimeout&&clearTimeout(this.tapTimeout),this.longTapTimeout&&clearTimeout(this.longTapTimeout),this.swipeTimeout&&clearTimeout(this.swipeTimeout),this.element.removeEventListener("touchstart",this.start),this.element.removeEventListener("touchmove",this.move),this.element.removeEventListener("touchend",this.end),this.element.removeEventListener("touchcancel",this.cancel),this.rotate.del(),this.touchStart.del(),this.multipointStart.del(),this.multipointEnd.del(),this.pinch.del(),this.swipe.del(),this.tap.del(),this.doubleTap.del(),this.longTap.del(),this.singleTap.del(),this.pressMove.del(),this.twoFingerPressMove.del(),this.touchMove.del(),this.touchEnd.del(),this.touchCancel.del(),this.preV=this.pinchStartLen=this.zoom=this.isDoubleTap=this.delta=this.last=this.now=this.tapTimeout=this.singleTapTimeout=this.longTapTimeout=this.swipeTimeout=this.x1=this.x2=this.y1=this.y2=this.preTapPosition=this.rotate=this.touchStart=this.multipointStart=this.multipointEnd=this.pinch=this.swipe=this.tap=this.doubleTap=this.longTap=this.singleTap=this.pressMove=this.touchMove=this.touchEnd=this.touchCancel=this.twoFingerPressMove=null,window.removeEventListener("scroll",this._cancelAllHandler),null}}]),e}();function W(e){var t=function(){var e,t=document.createElement("fakeelement"),i={transition:"transitionend",OTransition:"oTransitionEnd",MozTransition:"transitionend",WebkitTransition:"webkitTransitionEnd"};for(e in i)if(void 0!==t.style[e])return i[e]}(),i=window.innerWidth||document.documentElement.clientWidth||document.body.clientWidth,n=c(e,"gslide-media")?e:e.querySelector(".gslide-media"),s=u(n,".ginner-container"),l=e.querySelector(".gslide-description");i>769&&(n=s),h(n,"greset"),v(n,"translate3d(0, 0, 0)"),a(t,{onElement:n,once:!0,withCallback:function(e,t){d(n,"greset")}}),n.style.opacity="",l&&(l.style.opacity="")}function B(e){if(e.events.hasOwnProperty("touch"))return!1;var t,i,n,s=y(),l=s.width,o=s.height,r=!1,a=null,g=null,f=null,p=!1,m=1,x=1,b=!1,S=!1,w=null,T=null,C=null,k=null,E=0,A=0,L=!1,I=!1,O={},P={},M=0,z=0,X=document.getElementById("glightbox-slider"),Y=document.querySelector(".goverlay"),q=new _(X,{touchStart:function(t){if(r=!0,(c(t.targetTouches[0].target,"ginner-container")||u(t.targetTouches[0].target,".gslide-desc")||"a"==t.targetTouches[0].target.nodeName.toLowerCase())&&(r=!1),u(t.targetTouches[0].target,".gslide-inline")&&!c(t.targetTouches[0].target.parentNode,"gslide-inline")&&(r=!1),r){if(P=t.targetTouches[0],O.pageX=t.targetTouches[0].pageX,O.pageY=t.targetTouches[0].pageY,M=t.targetTouches[0].clientX,z=t.targetTouches[0].clientY,a=e.activeSlide,g=a.querySelector(".gslide-media"),n=a.querySelector(".gslide-inline"),f=null,c(g,"gslide-image")&&(f=g.querySelector("img")),(window.innerWidth||document.documentElement.clientWidth||document.body.clientWidth)>769&&(g=a.querySelector(".ginner-container")),d(Y,"greset"),t.pageX>20&&t.pageXo){var a=O.pageX-P.pageX;if(Math.abs(a)<=13)return!1}p=!0;var 
h,d=s.targetTouches[0].clientX,c=s.targetTouches[0].clientY,u=M-d,m=z-c;if(Math.abs(u)>Math.abs(m)?(L=!1,I=!0):(I=!1,L=!0),t=P.pageX-O.pageX,E=100*t/l,i=P.pageY-O.pageY,A=100*i/o,L&&f&&(h=1-Math.abs(i)/o,Y.style.opacity=h,e.settings.touchFollowAxis&&(E=0)),I&&(h=1-Math.abs(t)/l,g.style.opacity=h,e.settings.touchFollowAxis&&(A=0)),!f)return v(g,"translate3d(".concat(E,"%, 0, 0)"));v(g,"translate3d(".concat(E,"%, ").concat(A,"%, 0)"))}},touchEnd:function(){if(r){if(p=!1,S||b)return C=w,void(k=T);var t=Math.abs(parseInt(A)),i=Math.abs(parseInt(E));if(!(t>29&&f))return t<29&&i<25?(h(Y,"greset"),Y.style.opacity=1,W(g)):void 0;e.close()}},multipointEnd:function(){setTimeout((function(){b=!1}),50)},multipointStart:function(){b=!0,m=x||1},pinch:function(e){if(!f||p)return!1;b=!0,f.scaleX=f.scaleY=m*e.zoom;var t=m*e.zoom;if(S=!0,t<=1)return S=!1,t=1,k=null,C=null,w=null,T=null,void f.setAttribute("style","");t>4.5&&(t=4.5),f.style.transform="scale3d(".concat(t,", ").concat(t,", 1)"),x=t},pressMove:function(e){if(S&&!b){var t=P.pageX-O.pageX,i=P.pageY-O.pageY;C&&(t+=C),k&&(i+=k),w=t,T=i;var n="translate3d(".concat(t,"px, ").concat(i,"px, 0)");x&&(n+=" scale3d(".concat(x,", ").concat(x,", 1)")),v(f,n)}},swipe:function(t){if(!S)if(b)b=!1;else{if("Left"==t.direction){if(e.index==e.elements.length-1)return W(g);e.nextSlide()}if("Right"==t.direction){if(0==e.index)return W(g);e.prevSlide()}}}});e.events.touch=q}var H=function(){function e(i,n){var s=this,l=arguments.length>2&&void 0!==arguments[2]?arguments[2]:null;if(t(this,e),this.img=i,this.slide=n,this.onclose=l,this.img.setZoomEvents)return!1;this.active=!1,this.zoomedIn=!1,this.dragging=!1,this.currentX=null,this.currentY=null,this.initialX=null,this.initialY=null,this.xOffset=0,this.yOffset=0,this.img.addEventListener("mousedown",(function(e){return s.dragStart(e)}),!1),this.img.addEventListener("mouseup",(function(e){return s.dragEnd(e)}),!1),this.img.addEventListener("mousemove",(function(e){return s.drag(e)}),!1),this.img.addEventListener("click",(function(e){return s.slide.classList.contains("dragging-nav")?(s.zoomOut(),!1):s.zoomedIn?void(s.zoomedIn&&!s.dragging&&s.zoomOut()):s.zoomIn()}),!1),this.img.setZoomEvents=!0}return n(e,[{key:"zoomIn",value:function(){var e=this.widowWidth();if(!(this.zoomedIn||e<=768)){var t=this.img;if(t.setAttribute("data-style",t.getAttribute("style")),t.style.maxWidth=t.naturalWidth+"px",t.style.maxHeight=t.naturalHeight+"px",t.naturalWidth>e){var i=e/2-t.naturalWidth/2;this.setTranslate(this.img.parentNode,i,0)}this.slide.classList.add("zoomed"),this.zoomedIn=!0}}},{key:"zoomOut",value:function(){this.img.parentNode.setAttribute("style",""),this.img.setAttribute("style",this.img.getAttribute("data-style")),this.slide.classList.remove("zoomed"),this.zoomedIn=!1,this.currentX=null,this.currentY=null,this.initialX=null,this.initialY=null,this.xOffset=0,this.yOffset=0,this.onclose&&"function"==typeof this.onclose&&this.onclose()}},{key:"dragStart",value:function(e){e.preventDefault(),this.zoomedIn?("touchstart"===e.type?(this.initialX=e.touches[0].clientX-this.xOffset,this.initialY=e.touches[0].clientY-this.yOffset):(this.initialX=e.clientX-this.xOffset,this.initialY=e.clientY-this.yOffset),e.target===this.img&&(this.active=!0,this.img.classList.add("dragging"))):this.active=!1}},{key:"dragEnd",value:function(e){var 
t=this;e.preventDefault(),this.initialX=this.currentX,this.initialY=this.currentY,this.active=!1,setTimeout((function(){t.dragging=!1,t.img.isDragging=!1,t.img.classList.remove("dragging")}),100)}},{key:"drag",value:function(e){this.active&&(e.preventDefault(),"touchmove"===e.type?(this.currentX=e.touches[0].clientX-this.initialX,this.currentY=e.touches[0].clientY-this.initialY):(this.currentX=e.clientX-this.initialX,this.currentY=e.clientY-this.initialY),this.xOffset=this.currentX,this.yOffset=this.currentY,this.img.isDragging=!0,this.dragging=!0,this.setTranslate(this.img,this.currentX,this.currentY))}},{key:"onMove",value:function(e){if(this.zoomedIn){var t=e.clientX-this.img.naturalWidth/2,i=e.clientY-this.img.naturalHeight/2;this.setTranslate(this.img,t,i)}}},{key:"setTranslate",value:function(e,t,i){e.style.transform="translate3d("+t+"px, "+i+"px, 0)"}},{key:"widowWidth",value:function(){return window.innerWidth||document.documentElement.clientWidth||document.body.clientWidth}}]),e}(),V=function(){function e(){var i=this,n=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{};t(this,e);var s=n.dragEl,l=n.toleranceX,o=void 0===l?40:l,r=n.toleranceY,a=void 0===r?65:r,h=n.slide,d=void 0===h?null:h,c=n.instance,u=void 0===c?null:c;this.el=s,this.active=!1,this.dragging=!1,this.currentX=null,this.currentY=null,this.initialX=null,this.initialY=null,this.xOffset=0,this.yOffset=0,this.direction=null,this.lastDirection=null,this.toleranceX=o,this.toleranceY=a,this.toleranceReached=!1,this.dragContainer=this.el,this.slide=d,this.instance=u,this.el.addEventListener("mousedown",(function(e){return i.dragStart(e)}),!1),this.el.addEventListener("mouseup",(function(e){return i.dragEnd(e)}),!1),this.el.addEventListener("mousemove",(function(e){return i.drag(e)}),!1)}return n(e,[{key:"dragStart",value:function(e){if(this.slide.classList.contains("zoomed"))this.active=!1;else{"touchstart"===e.type?(this.initialX=e.touches[0].clientX-this.xOffset,this.initialY=e.touches[0].clientY-this.yOffset):(this.initialX=e.clientX-this.xOffset,this.initialY=e.clientY-this.yOffset);var t=e.target.nodeName.toLowerCase();e.target.classList.contains("nodrag")||u(e.target,".nodrag")||-1!==["input","select","textarea","button","a"].indexOf(t)?this.active=!1:(e.preventDefault(),(e.target===this.el||"img"!==t&&u(e.target,".gslide-inline"))&&(this.active=!0,this.el.classList.add("dragging"),this.dragContainer=u(e.target,".ginner-container")))}}},{key:"dragEnd",value:function(e){var 
t=this;e&&e.preventDefault(),this.initialX=0,this.initialY=0,this.currentX=null,this.currentY=null,this.initialX=null,this.initialY=null,this.xOffset=0,this.yOffset=0,this.active=!1,this.doSlideChange&&(this.instance.preventOutsideClick=!0,"right"==this.doSlideChange&&this.instance.prevSlide(),"left"==this.doSlideChange&&this.instance.nextSlide()),this.doSlideClose&&this.instance.close(),this.toleranceReached||this.setTranslate(this.dragContainer,0,0,!0),setTimeout((function(){t.instance.preventOutsideClick=!1,t.toleranceReached=!1,t.lastDirection=null,t.dragging=!1,t.el.isDragging=!1,t.el.classList.remove("dragging"),t.slide.classList.remove("dragging-nav"),t.dragContainer.style.transform="",t.dragContainer.style.transition=""}),100)}},{key:"drag",value:function(e){if(this.active){e.preventDefault(),this.slide.classList.add("dragging-nav"),"touchmove"===e.type?(this.currentX=e.touches[0].clientX-this.initialX,this.currentY=e.touches[0].clientY-this.initialY):(this.currentX=e.clientX-this.initialX,this.currentY=e.clientY-this.initialY),this.xOffset=this.currentX,this.yOffset=this.currentY,this.el.isDragging=!0,this.dragging=!0,this.doSlideChange=!1,this.doSlideClose=!1;var t=Math.abs(this.currentX),i=Math.abs(this.currentY);if(t>0&&t>=Math.abs(this.currentY)&&(!this.lastDirection||"x"==this.lastDirection)){this.yOffset=0,this.lastDirection="x",this.setTranslate(this.dragContainer,this.currentX,0);var n=this.shouldChange();if(!this.instance.settings.dragAutoSnap&&n&&(this.doSlideChange=n),this.instance.settings.dragAutoSnap&&n)return this.instance.preventOutsideClick=!0,this.toleranceReached=!0,this.active=!1,this.instance.preventOutsideClick=!0,this.dragEnd(null),"right"==n&&this.instance.prevSlide(),void("left"==n&&this.instance.nextSlide())}if(this.toleranceY>0&&i>0&&i>=t&&(!this.lastDirection||"y"==this.lastDirection)){this.xOffset=0,this.lastDirection="y",this.setTranslate(this.dragContainer,0,this.currentY);var s=this.shouldClose();return!this.instance.settings.dragAutoSnap&&s&&(this.doSlideClose=!0),void(this.instance.settings.dragAutoSnap&&s&&this.instance.close())}}}},{key:"shouldChange",value:function(){var e=!1;if(Math.abs(this.currentX)>=this.toleranceX){var t=this.currentX>0?"right":"left";("left"==t&&this.slide!==this.slide.parentNode.lastChild||"right"==t&&this.slide!==this.slide.parentNode.firstChild)&&(e=t)}return e}},{key:"shouldClose",value:function(){var e=!1;return Math.abs(this.currentY)>=this.toleranceY&&(e=!0),e}},{key:"setTranslate",value:function(e,t,i){var n=arguments.length>3&&void 0!==arguments[3]&&arguments[3];e.style.transition=n?"all .2s ease":"",e.style.transform="translate3d(".concat(t,"px, ").concat(i,"px, 0)")}}]),e}();function j(e,t,i,n){var s=e.querySelector(".gslide-media"),l=new Image,o="gSlideTitle_"+i,r="gSlideDesc_"+i;l.addEventListener("load",(function(){T(n)&&n()}),!1),l.src=t.href,""!=t.sizes&&""!=t.srcset&&(l.sizes=t.sizes,l.srcset=t.srcset),l.alt="",I(t.alt)||""===t.alt||(l.alt=t.alt),""!==t.title&&l.setAttribute("aria-labelledby",o),""!==t.description&&l.setAttribute("aria-describedby",r),t.hasOwnProperty("_hasCustomWidth")&&t._hasCustomWidth&&(l.style.width=t.width),t.hasOwnProperty("_hasCustomHeight")&&t._hasCustomHeight&&(l.style.height=t.height),s.insertBefore(l,s.firstChild)}function F(e,t,i,n){var s=this,l=e.querySelector(".ginner-container"),o="gvideo"+i,r=e.querySelector(".gslide-media"),a=this.getAllPlayers();h(l,"gvideo-container"),r.insertBefore(m('
'),r.firstChild);var d=e.querySelector(".gvideo-wrapper");S(this.settings.plyr.css,"Plyr");var c=t.href,u=location.protocol.replace(":",""),g="",v="",f=!1;"file"==u&&(u="http"),r.style.maxWidth=t.width,S(this.settings.plyr.js,"Plyr",(function(){if(c.match(/vimeo\.com\/([0-9]*)/)){var l=/vimeo.*\/(\d+)/i.exec(c);g="vimeo",v=l[1]}if(c.match(/(youtube\.com|youtube-nocookie\.com)\/watch\?v=([a-zA-Z0-9\-_]+)/)||c.match(/youtu\.be\/([a-zA-Z0-9\-_]+)/)||c.match(/(youtube\.com|youtube-nocookie\.com)\/embed\/([a-zA-Z0-9\-_]+)/)){var r=function(e){var t="";t=void 0!==(e=e.replace(/(>|<)/gi,"").split(/(vi\/|v=|\/v\/|youtu\.be\/|\/embed\/)/))[2]?(t=e[2].split(/[^0-9a-z_\-]/i))[0]:e;return t}(c);g="youtube",v=r}if(null!==c.match(/\.(mp4|ogg|webm|mov)$/)){g="local";var u='")}var w=f||m('
'));h(d,"".concat(g,"-video gvideo")),d.appendChild(w),d.setAttribute("data-id",o),d.setAttribute("data-index",i);var C=O(s.settings.plyr,"config")?s.settings.plyr.config:{},k=new Plyr("#"+o,C);k.on("ready",(function(e){var t=e.detail.plyr;a[o]=t,T(n)&&n()})),b((function(){return e.querySelector("iframe")&&"true"==e.querySelector("iframe").dataset.ready}),(function(){s.resize(e)})),k.on("enterfullscreen",R),k.on("exitfullscreen",R)}))}function R(e){var t=u(e.target,".gslide-media");"enterfullscreen"==e.type&&h(t,"fullscreen"),"exitfullscreen"==e.type&&d(t,"fullscreen")}function G(e,t,i,n){var s,l=this,o=e.querySelector(".gslide-media"),r=!(!O(t,"href")||!t.href)&&t.href.split("#").pop().trim(),d=!(!O(t,"content")||!t.content)&&t.content;if(d&&(C(d)&&(s=m('
'.concat(d,"
"))),k(d))){"none"==d.style.display&&(d.style.display="block");var c=document.createElement("div");c.className="ginlined-content",c.appendChild(d),s=c}if(r){var u=document.getElementById(r);if(!u)return!1;var g=u.cloneNode(!0);g.style.height=t.height,g.style.maxWidth=t.width,h(g,"ginlined-content"),s=g}if(!s)return console.error("Unable to append inline slide content",t),!1;o.style.height=t.height,o.style.width=t.width,o.appendChild(s),this.events["inlineclose"+r]=a("click",{onElement:o.querySelectorAll(".gtrigger-close"),withCallback:function(e){e.preventDefault(),l.close()}}),T(n)&&n()}function Z(e,t,i,n){var s=e.querySelector(".gslide-media"),l=function(e){var t=e.url,i=e.allow,n=e.callback,s=e.appendTo,l=document.createElement("iframe");return l.className="vimeo-video gvideo",l.src=t,l.style.width="100%",l.style.height="100%",i&&l.setAttribute("allow",i),l.onload=function(){h(l,"node-ready"),T(n)&&n()},s&&s.appendChild(l),l}({url:t.href,callback:n});s.parentNode.style.maxWidth=t.width,s.parentNode.style.height=t.height,s.appendChild(l)}var $=function(){function e(){var i=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{};t(this,e),this.defaults={href:"",sizes:"",srcset:"",title:"",type:"",description:"",alt:"",descPosition:"bottom",effect:"",width:"",height:"",content:!1,zoomable:!0,draggable:!0},L(i)&&(this.defaults=l(this.defaults,i))}return n(e,[{key:"sourceType",value:function(e){var t=e;if(null!==(e=e.toLowerCase()).match(/\.(jpeg|jpg|jpe|gif|png|apn|webp|avif|svg)/))return"image";if(e.match(/(youtube\.com|youtube-nocookie\.com)\/watch\?v=([a-zA-Z0-9\-_]+)/)||e.match(/youtu\.be\/([a-zA-Z0-9\-_]+)/)||e.match(/(youtube\.com|youtube-nocookie\.com)\/embed\/([a-zA-Z0-9\-_]+)/))return"video";if(e.match(/vimeo\.com\/([0-9]*)/))return"video";if(null!==e.match(/\.(mp4|ogg|webm|mov)/))return"video";if(null!==e.match(/\.(mp3|wav|wma|aac|ogg)/))return"audio";if(e.indexOf("#")>-1&&""!==t.split("#").pop().trim())return"inline";return e.indexOf("goajax=true")>-1?"ajax":"external"}},{key:"parseConfig",value:function(e,t){var i=this,n=l({descPosition:t.descPosition},this.defaults);if(L(e)&&!k(e)){O(e,"type")||(O(e,"content")&&e.content?e.type="inline":O(e,"href")&&(e.type=this.sourceType(e.href)));var s=l(n,e);return this.setSize(s,t),s}var r="",a=e.getAttribute("data-glightbox"),h=e.nodeName.toLowerCase();if("a"===h&&(r=e.href),"img"===h&&(r=e.src,n.alt=e.alt),n.href=r,o(n,(function(s,l){O(t,l)&&"width"!==l&&(n[l]=t[l]);var o=e.dataset[l];I(o)||(n[l]=i.sanitizeValue(o))})),n.content&&(n.type="inline"),!n.type&&r&&(n.type=this.sourceType(r)),I(a)){if(!n.title&&"a"==h){var d=e.title;I(d)||""===d||(n.title=d)}if(!n.title&&"img"==h){var c=e.alt;I(c)||""===c||(n.title=c)}}else{var u=[];o(n,(function(e,t){u.push(";\\s?"+t)})),u=u.join("\\s?:|"),""!==a.trim()&&o(n,(function(e,t){var s=a,l=new RegExp("s?"+t+"s?:s?(.*?)("+u+"s?:|$)"),o=s.match(l);if(o&&o.length&&o[1]){var r=o[1].trim().replace(/;\s*$/,"");n[t]=i.sanitizeValue(r)}}))}if(n.description&&"."===n.description.substring(0,1)){var g;try{g=document.querySelector(n.description).innerHTML}catch(e){if(!(e instanceof DOMException))throw e}g&&(n.description=g)}if(!n.description){var v=e.querySelector(".glightbox-desc");v&&(n.description=v.innerHTML)}return this.setSize(n,t,e),this.slideConfig=n,n}},{key:"setSize",value:function(e,t){var i=arguments.length>2&&void 0!==arguments[2]?arguments[2]:null,n="video"==e.type?this.checkSize(t.videosWidth):this.checkSize(t.width),s=this.checkSize(t.height);return 
e.width=O(e,"width")&&""!==e.width?this.checkSize(e.width):n,e.height=O(e,"height")&&""!==e.height?this.checkSize(e.height):s,i&&"image"==e.type&&(e._hasCustomWidth=!!i.dataset.width,e._hasCustomHeight=!!i.dataset.height),e}},{key:"checkSize",value:function(e){return M(e)?"".concat(e,"px"):e}},{key:"sanitizeValue",value:function(e){return"true"!==e&&"false"!==e?e:"true"===e}}]),e}(),U=function(){function e(i,n,s){t(this,e),this.element=i,this.instance=n,this.index=s}return n(e,[{key:"setContent",value:function(){var e=this,t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:null,i=arguments.length>1&&void 0!==arguments[1]&&arguments[1];if(c(t,"loaded"))return!1;var n=this.instance.settings,s=this.slideConfig,l=w();T(n.beforeSlideLoad)&&n.beforeSlideLoad({index:this.index,slide:t,player:!1});var o=s.type,r=s.descPosition,a=t.querySelector(".gslide-media"),d=t.querySelector(".gslide-title"),u=t.querySelector(".gslide-desc"),g=t.querySelector(".gdesc-inner"),v=i,f="gSlideTitle_"+this.index,p="gSlideDesc_"+this.index;if(T(n.afterSlideLoad)&&(v=function(){T(i)&&i(),n.afterSlideLoad({index:e.index,slide:t,player:e.instance.getSlidePlayerInstance(e.index)})}),""==s.title&&""==s.description?g&&g.parentNode.parentNode.removeChild(g.parentNode):(d&&""!==s.title?(d.id=f,d.innerHTML=s.title):d.parentNode.removeChild(d),u&&""!==s.description?(u.id=p,l&&n.moreLength>0?(s.smallDescription=this.slideShortDesc(s.description,n.moreLength,n.moreText),u.innerHTML=s.smallDescription,this.descriptionEvents(u,s)):u.innerHTML=s.description):u.parentNode.removeChild(u),h(a.parentNode,"desc-".concat(r)),h(g.parentNode,"description-".concat(r))),h(a,"gslide-".concat(o)),h(t,"loaded"),"video"!==o){if("external"!==o)return"inline"===o?(G.apply(this.instance,[t,s,this.index,v]),void(s.draggable&&new V({dragEl:t.querySelector(".gslide-inline"),toleranceX:n.dragToleranceX,toleranceY:n.dragToleranceY,slide:t,instance:this.instance}))):void("image"!==o?T(v)&&v():j(t,s,this.index,(function(){var i=t.querySelector("img");s.draggable&&new V({dragEl:i,toleranceX:n.dragToleranceX,toleranceY:n.dragToleranceY,slide:t,instance:e.instance}),s.zoomable&&i.naturalWidth>i.offsetWidth&&(h(i,"zoomable"),new H(i,t,(function(){e.instance.resize()}))),T(v)&&v()})));Z.apply(this,[t,s,this.index,v])}else F.apply(this.instance,[t,s,this.index,v])}},{key:"slideShortDesc",value:function(e){var t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:50,i=arguments.length>2&&void 0!==arguments[2]&&arguments[2],n=document.createElement("div");n.innerHTML=e;var s=n.innerText,l=i;if((e=s.trim()).length<=t)return e;var o=e.substr(0,t-1);return l?(n=null,o+'... 
'+i+""):o}},{key:"descriptionEvents",value:function(e,t){var i=this,n=e.querySelector(".desc-more");if(!n)return!1;a("click",{onElement:n,withCallback:function(e,n){e.preventDefault();var s=document.body,l=u(n,".gslide-desc");if(!l)return!1;l.innerHTML=t.description,h(s,"gdesc-open");var o=a("click",{onElement:[s,u(l,".gslide-description")],withCallback:function(e,n){"a"!==e.target.nodeName.toLowerCase()&&(d(s,"gdesc-open"),h(s,"gdesc-closed"),l.innerHTML=t.smallDescription,i.descriptionEvents(l,t),setTimeout((function(){d(s,"gdesc-closed")}),400),o.destroy())}})}})}},{key:"create",value:function(){return m(this.instance.settings.slideHTML)}},{key:"getConfig",value:function(){k(this.element)||this.element.hasOwnProperty("draggable")||(this.element.draggable=this.instance.settings.draggable);var e=new $(this.instance.settings.slideExtraAttributes);return this.slideConfig=e.parseConfig(this.element,this.instance.settings),this.slideConfig}}]),e}(),J=w(),K=null!==w()||void 0!==document.createTouch||"ontouchstart"in window||"onmsgesturechange"in window||navigator.msMaxTouchPoints,Q=document.getElementsByTagName("html")[0],ee={selector:".glightbox",elements:null,skin:"clean",theme:"clean",closeButton:!0,startAt:null,autoplayVideos:!0,autofocusVideos:!0,descPosition:"bottom",width:"900px",height:"506px",videosWidth:"960px",beforeSlideChange:null,afterSlideChange:null,beforeSlideLoad:null,afterSlideLoad:null,slideInserted:null,slideRemoved:null,slideExtraAttributes:null,onOpen:null,onClose:null,loop:!1,zoomable:!0,draggable:!0,dragAutoSnap:!1,dragToleranceX:40,dragToleranceY:65,preload:!0,oneSlidePerOpen:!1,touchNavigation:!0,touchFollowAxis:!0,keyboardNavigation:!0,closeOnOutsideClick:!0,plugins:!1,plyr:{css:"https://cdn.plyr.io/3.6.8/plyr.css",js:"https://cdn.plyr.io/3.6.8/plyr.js",config:{ratio:"16:9",fullscreen:{enabled:!0,iosNative:!0},youtube:{noCookie:!0,rel:0,showinfo:0,iv_load_policy:3},vimeo:{byline:!1,portrait:!1,title:!1,transparent:!1}}},openEffect:"zoom",closeEffect:"zoom",slideEffect:"slide",moreText:"See more",moreLength:60,cssEfects:{fade:{in:"fadeIn",out:"fadeOut"},zoom:{in:"zoomIn",out:"zoomOut"},slide:{in:"slideInRight",out:"slideOutLeft"},slideBack:{in:"slideInLeft",out:"slideOutRight"},none:{in:"none",out:"none"}},svg:{close:'',next:' ',prev:''},slideHTML:'
\n
\n
\n
\n
\n
\n
\n

\n
\n
\n
\n
\n
\n
',lightboxHTML:''},te=function(){function e(){var i=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{};t(this,e),this.customOptions=i,this.settings=l(ee,i),this.effectsClasses=this.getAnimationClasses(),this.videoPlayers={},this.apiEvents=[],this.fullElementsList=!1}return n(e,[{key:"init",value:function(){var e=this,t=this.getSelector();t&&(this.baseEvents=a("click",{onElement:t,withCallback:function(t,i){t.preventDefault(),e.open(i)}})),this.elements=this.getElements()}},{key:"open",value:function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:null,t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:null;if(0==this.elements.length)return!1;this.activeSlide=null,this.prevActiveSlideIndex=null,this.prevActiveSlide=null;var i=M(t)?t:this.settings.startAt;if(k(e)){var n=e.getAttribute("data-gallery");n&&(this.fullElementsList=this.elements,this.elements=this.getGalleryElements(this.elements,n)),I(i)&&(i=this.getElementIndex(e))<0&&(i=0)}M(i)||(i=0),this.build(),g(this.overlay,"none"==this.settings.openEffect?"none":this.settings.cssEfects.fade.in);var s=document.body,l=window.innerWidth-document.documentElement.clientWidth;if(l>0){var o=document.createElement("style");o.type="text/css",o.className="gcss-styles",o.innerText=".gscrollbar-fixer {margin-right: ".concat(l,"px}"),document.head.appendChild(o),h(s,"gscrollbar-fixer")}h(s,"glightbox-open"),h(Q,"glightbox-open"),J&&(h(document.body,"glightbox-mobile"),this.settings.slideEffect="slide"),this.showSlide(i,!0),1==this.elements.length?(h(this.prevButton,"glightbox-button-hidden"),h(this.nextButton,"glightbox-button-hidden")):(d(this.prevButton,"glightbox-button-hidden"),d(this.nextButton,"glightbox-button-hidden")),this.lightboxOpen=!0,this.trigger("open"),T(this.settings.onOpen)&&this.settings.onOpen(),K&&this.settings.touchNavigation&&B(this),this.settings.keyboardNavigation&&X(this)}},{key:"openAt",value:function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:0;this.open(null,e)}},{key:"showSlide",value:function(){var e=this,t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:0,i=arguments.length>1&&void 0!==arguments[1]&&arguments[1];f(this.loader),this.index=parseInt(t);var n=this.slidesContainer.querySelector(".current");n&&d(n,"current"),this.slideAnimateOut();var s=this.slidesContainer.querySelectorAll(".gslide")[t];if(c(s,"loaded"))this.slideAnimateIn(s,i),p(this.loader);else{f(this.loader);var l=this.elements[t],o={index:this.index,slide:s,slideNode:s,slideConfig:l.slideConfig,slideIndex:this.index,trigger:l.node,player:null};this.trigger("slide_before_load",o),l.instance.setContent(s,(function(){p(e.loader),e.resize(),e.slideAnimateIn(s,i),e.trigger("slide_after_load",o)}))}this.slideDescription=s.querySelector(".gslide-description"),this.slideDescriptionContained=this.slideDescription&&c(this.slideDescription.parentNode,"gslide-media"),this.settings.preload&&(this.preloadSlide(t+1),this.preloadSlide(t-1)),this.updateNavigationClasses(),this.activeSlide=s}},{key:"preloadSlide",value:function(e){var t=this;if(e<0||e>this.elements.length-1)return!1;if(I(this.elements[e]))return!1;var i=this.slidesContainer.querySelectorAll(".gslide")[e];if(c(i,"loaded"))return!1;var 
n=this.elements[e],s=n.type,l={index:e,slide:i,slideNode:i,slideConfig:n.slideConfig,slideIndex:e,trigger:n.node,player:null};this.trigger("slide_before_load",l),"video"==s||"external"==s?setTimeout((function(){n.instance.setContent(i,(function(){t.trigger("slide_after_load",l)}))}),200):n.instance.setContent(i,(function(){t.trigger("slide_after_load",l)}))}},{key:"prevSlide",value:function(){this.goToSlide(this.index-1)}},{key:"nextSlide",value:function(){this.goToSlide(this.index+1)}},{key:"goToSlide",value:function(){var e=arguments.length>0&&void 0!==arguments[0]&&arguments[0];if(this.prevActiveSlide=this.activeSlide,this.prevActiveSlideIndex=this.index,!this.loop()&&(e<0||e>this.elements.length-1))return!1;e<0?e=this.elements.length-1:e>=this.elements.length&&(e=0),this.showSlide(e)}},{key:"insertSlide",value:function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:-1;t<0&&(t=this.elements.length);var i=new U(e,this,t),n=i.getConfig(),s=l({},n),o=i.create(),r=this.elements.length-1;s.index=t,s.node=!1,s.instance=i,s.slideConfig=n,this.elements.splice(t,0,s);var a=null,h=null;if(this.slidesContainer){if(t>r)this.slidesContainer.appendChild(o);else{var d=this.slidesContainer.querySelectorAll(".gslide")[t];this.slidesContainer.insertBefore(o,d)}(this.settings.preload&&0==this.index&&0==t||this.index-1==t||this.index+1==t)&&this.preloadSlide(t),0==this.index&&0==t&&(this.index=1),this.updateNavigationClasses(),a=this.slidesContainer.querySelectorAll(".gslide")[t],h=this.getSlidePlayerInstance(t),s.slideNode=a}this.trigger("slide_inserted",{index:t,slide:a,slideNode:a,slideConfig:n,slideIndex:t,trigger:null,player:h}),T(this.settings.slideInserted)&&this.settings.slideInserted({index:t,slide:a,player:h})}},{key:"removeSlide",value:function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:-1;if(e<0||e>this.elements.length-1)return!1;var t=this.slidesContainer&&this.slidesContainer.querySelectorAll(".gslide")[e];t&&(this.getActiveSlideIndex()==e&&(e==this.elements.length-1?this.prevSlide():this.nextSlide()),t.parentNode.removeChild(t)),this.elements.splice(e,1),this.trigger("slide_removed",e),T(this.settings.slideRemoved)&&this.settings.slideRemoved(e)}},{key:"slideAnimateIn",value:function(e,t){var i=this,n=e.querySelector(".gslide-media"),s=e.querySelector(".gslide-description"),l={index:this.prevActiveSlideIndex,slide:this.prevActiveSlide,slideNode:this.prevActiveSlide,slideIndex:this.prevActiveSlide,slideConfig:I(this.prevActiveSlideIndex)?null:this.elements[this.prevActiveSlideIndex].slideConfig,trigger:I(this.prevActiveSlideIndex)?null:this.elements[this.prevActiveSlideIndex].node,player:this.getSlidePlayerInstance(this.prevActiveSlideIndex)},o={index:this.index,slide:this.activeSlide,slideNode:this.activeSlide,slideConfig:this.elements[this.index].slideConfig,slideIndex:this.index,trigger:this.elements[this.index].node,player:this.getSlidePlayerInstance(this.index)};if(n.offsetWidth>0&&s&&(p(s),s.style.display=""),d(e,this.effectsClasses),t)g(e,this.settings.cssEfects[this.settings.openEffect].in,(function(){i.settings.autoplayVideos&&i.slidePlayerPlay(e),i.trigger("slide_changed",{prev:l,current:o}),T(i.settings.afterSlideChange)&&i.settings.afterSlideChange.apply(i,[l,o])}));else{var 
r=this.settings.slideEffect,a="none"!==r?this.settings.cssEfects[r].in:r;this.prevActiveSlideIndex>this.index&&"slide"==this.settings.slideEffect&&(a=this.settings.cssEfects.slideBack.in),g(e,a,(function(){i.settings.autoplayVideos&&i.slidePlayerPlay(e),i.trigger("slide_changed",{prev:l,current:o}),T(i.settings.afterSlideChange)&&i.settings.afterSlideChange.apply(i,[l,o])}))}setTimeout((function(){i.resize(e)}),100),h(e,"current")}},{key:"slideAnimateOut",value:function(){if(!this.prevActiveSlide)return!1;var e=this.prevActiveSlide;d(e,this.effectsClasses),h(e,"prev");var t=this.settings.slideEffect,i="none"!==t?this.settings.cssEfects[t].out:t;this.slidePlayerPause(e),this.trigger("slide_before_change",{prev:{index:this.prevActiveSlideIndex,slide:this.prevActiveSlide,slideNode:this.prevActiveSlide,slideIndex:this.prevActiveSlideIndex,slideConfig:I(this.prevActiveSlideIndex)?null:this.elements[this.prevActiveSlideIndex].slideConfig,trigger:I(this.prevActiveSlideIndex)?null:this.elements[this.prevActiveSlideIndex].node,player:this.getSlidePlayerInstance(this.prevActiveSlideIndex)},current:{index:this.index,slide:this.activeSlide,slideNode:this.activeSlide,slideIndex:this.index,slideConfig:this.elements[this.index].slideConfig,trigger:this.elements[this.index].node,player:this.getSlidePlayerInstance(this.index)}}),T(this.settings.beforeSlideChange)&&this.settings.beforeSlideChange.apply(this,[{index:this.prevActiveSlideIndex,slide:this.prevActiveSlide,player:this.getSlidePlayerInstance(this.prevActiveSlideIndex)},{index:this.index,slide:this.activeSlide,player:this.getSlidePlayerInstance(this.index)}]),this.prevActiveSlideIndex>this.index&&"slide"==this.settings.slideEffect&&(i=this.settings.cssEfects.slideBack.out),g(e,i,(function(){var t=e.querySelector(".ginner-container"),i=e.querySelector(".gslide-media"),n=e.querySelector(".gslide-description");t.style.transform="",i.style.transform="",d(i,"greset"),i.style.opacity="",n&&(n.style.opacity=""),d(e,"prev")}))}},{key:"getAllPlayers",value:function(){return this.videoPlayers}},{key:"getSlidePlayerInstance",value:function(e){var t="gvideo"+e,i=this.getAllPlayers();return!(!O(i,t)||!i[t])&&i[t]}},{key:"stopSlideVideo",value:function(e){if(k(e)){var t=e.querySelector(".gvideo-wrapper");t&&(e=t.getAttribute("data-index"))}console.log("stopSlideVideo is deprecated, use slidePlayerPause");var i=this.getSlidePlayerInstance(e);i&&i.playing&&i.pause()}},{key:"slidePlayerPause",value:function(e){if(k(e)){var t=e.querySelector(".gvideo-wrapper");t&&(e=t.getAttribute("data-index"))}var i=this.getSlidePlayerInstance(e);i&&i.playing&&i.pause()}},{key:"playSlideVideo",value:function(e){if(k(e)){var t=e.querySelector(".gvideo-wrapper");t&&(e=t.getAttribute("data-index"))}console.log("playSlideVideo is deprecated, use slidePlayerPlay");var i=this.getSlidePlayerInstance(e);i&&!i.playing&&i.play()}},{key:"slidePlayerPlay",value:function(e){if(k(e)){var t=e.querySelector(".gvideo-wrapper");t&&(e=t.getAttribute("data-index"))}var i=this.getSlidePlayerInstance(e);i&&!i.playing&&(i.play(),this.settings.autofocusVideos&&i.elements.container.focus())}},{key:"setElements",value:function(e){var t=this;this.settings.elements=!1;var i=[];e&&e.length&&o(e,(function(e,n){var s=new U(e,t,n),o=s.getConfig(),r=l({},o);r.slideConfig=o,r.instance=s,r.index=n,i.push(r)})),this.elements=i,this.lightboxOpen&&(this.slidesContainer.innerHTML="",this.elements.length&&(o(this.elements,(function(){var 
e=m(t.settings.slideHTML);t.slidesContainer.appendChild(e)})),this.showSlide(0,!0)))}},{key:"getElementIndex",value:function(e){var t=!1;return o(this.elements,(function(i,n){if(O(i,"node")&&i.node==e)return t=n,!0})),t}},{key:"getElements",value:function(){var e=this,t=[];this.elements=this.elements?this.elements:[],!I(this.settings.elements)&&E(this.settings.elements)&&this.settings.elements.length&&o(this.settings.elements,(function(i,n){var s=new U(i,e,n),o=s.getConfig(),r=l({},o);r.node=!1,r.index=n,r.instance=s,r.slideConfig=o,t.push(r)}));var i=!1;return this.getSelector()&&(i=document.querySelectorAll(this.getSelector())),i?(o(i,(function(i,n){var s=new U(i,e,n),o=s.getConfig(),r=l({},o);r.node=i,r.index=n,r.instance=s,r.slideConfig=o,r.gallery=i.getAttribute("data-gallery"),t.push(r)})),t):t}},{key:"getGalleryElements",value:function(e,t){return e.filter((function(e){return e.gallery==t}))}},{key:"getSelector",value:function(){return!this.settings.elements&&(this.settings.selector&&"data-"==this.settings.selector.substring(0,5)?"*[".concat(this.settings.selector,"]"):this.settings.selector)}},{key:"getActiveSlide",value:function(){return this.slidesContainer.querySelectorAll(".gslide")[this.index]}},{key:"getActiveSlideIndex",value:function(){return this.index}},{key:"getAnimationClasses",value:function(){var e=[];for(var t in this.settings.cssEfects)if(this.settings.cssEfects.hasOwnProperty(t)){var i=this.settings.cssEfects[t];e.push("g".concat(i.in)),e.push("g".concat(i.out))}return e.join(" ")}},{key:"build",value:function(){var e=this;if(this.built)return!1;var t=document.body.childNodes,i=[];o(t,(function(e){e.parentNode==document.body&&"#"!==e.nodeName.charAt(0)&&e.hasAttribute&&!e.hasAttribute("aria-hidden")&&(i.push(e),e.setAttribute("aria-hidden","true"))}));var n=O(this.settings.svg,"next")?this.settings.svg.next:"",s=O(this.settings.svg,"prev")?this.settings.svg.prev:"",l=O(this.settings.svg,"close")?this.settings.svg.close:"",r=this.settings.lightboxHTML;r=m(r=(r=(r=r.replace(/{nextSVG}/g,n)).replace(/{prevSVG}/g,s)).replace(/{closeSVG}/g,l)),document.body.appendChild(r);var d=document.getElementById("glightbox-body");this.modal=d;var 
g=d.querySelector(".gclose");this.prevButton=d.querySelector(".gprev"),this.nextButton=d.querySelector(".gnext"),this.overlay=d.querySelector(".goverlay"),this.loader=d.querySelector(".gloader"),this.slidesContainer=document.getElementById("glightbox-slider"),this.bodyHiddenChildElms=i,this.events={},h(this.modal,"glightbox-"+this.settings.skin),this.settings.closeButton&&g&&(this.events.close=a("click",{onElement:g,withCallback:function(t,i){t.preventDefault(),e.close()}})),g&&!this.settings.closeButton&&g.parentNode.removeChild(g),this.nextButton&&(this.events.next=a("click",{onElement:this.nextButton,withCallback:function(t,i){t.preventDefault(),e.nextSlide()}})),this.prevButton&&(this.events.prev=a("click",{onElement:this.prevButton,withCallback:function(t,i){t.preventDefault(),e.prevSlide()}})),this.settings.closeOnOutsideClick&&(this.events.outClose=a("click",{onElement:d,withCallback:function(t,i){e.preventOutsideClick||c(document.body,"glightbox-mobile")||u(t.target,".ginner-container")||u(t.target,".gbtn")||c(t.target,"gnext")||c(t.target,"gprev")||e.close()}})),o(this.elements,(function(t,i){e.slidesContainer.appendChild(t.instance.create()),t.slideNode=e.slidesContainer.querySelectorAll(".gslide")[i]})),K&&h(document.body,"glightbox-touch"),this.events.resize=a("resize",{onElement:window,withCallback:function(){e.resize()}}),this.built=!0}},{key:"resize",value:function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:null;if((e=e||this.activeSlide)&&!c(e,"zoomed")){var t=y(),i=e.querySelector(".gvideo-wrapper"),n=e.querySelector(".gslide-image"),s=this.slideDescription,l=t.width,o=t.height;if(l<=768?h(document.body,"glightbox-mobile"):d(document.body,"glightbox-mobile"),i||n){var r=!1;if(s&&(c(s,"description-bottom")||c(s,"description-top"))&&!c(s,"gabsolute")&&(r=!0),n)if(l<=768)n.querySelector("img");else if(r){var a=s.offsetHeight,u=n.querySelector("img");u.setAttribute("style","max-height: calc(100vh - ".concat(a,"px)")),s.setAttribute("style","max-width: ".concat(u.offsetWidth,"px;"))}if(i){var g=O(this.settings.plyr.config,"ratio")?this.settings.plyr.config.ratio:"";if(!g){var v=i.clientWidth,f=i.clientHeight,p=v/f;g="".concat(v/p,":").concat(f/p)}var m=g.split(":"),x=this.settings.videosWidth,b=this.settings.videosWidth,S=(b=M(x)||-1!==x.indexOf("px")?parseInt(x):-1!==x.indexOf("vw")?l*parseInt(x)/100:-1!==x.indexOf("vh")?o*parseInt(x)/100:-1!==x.indexOf("%")?l*parseInt(x)/100:parseInt(i.clientWidth))/(parseInt(m[0])/parseInt(m[1]));if(S=Math.floor(S),r&&(o-=s.offsetHeight),b>l||S>o||ob){var w=i.offsetWidth,T=i.offsetHeight,C=o/T,k={width:w*C,height:T*C};i.parentNode.setAttribute("style","max-width: ".concat(k.width,"px")),r&&s.setAttribute("style","max-width: ".concat(k.width,"px;"))}else i.parentNode.style.maxWidth="".concat(x),r&&s.setAttribute("style","max-width: ".concat(x,";"))}}}}},{key:"reload",value:function(){this.init()}},{key:"updateNavigationClasses",value:function(){var e=this.loop();d(this.nextButton,"disabled"),d(this.prevButton,"disabled"),0==this.index&&this.elements.length-1==0?(h(this.prevButton,"disabled"),h(this.nextButton,"disabled")):0!==this.index||e?this.index!==this.elements.length-1||e||h(this.nextButton,"disabled"):h(this.prevButton,"disabled")}},{key:"loop",value:function(){var e=O(this.settings,"loopAtEnd")?this.settings.loopAtEnd:null;return e=O(this.settings,"loop")?this.settings.loop:e,e}},{key:"close",value:function(){var e=this;if(!this.lightboxOpen){if(this.events){for(var t in 
this.events)this.events.hasOwnProperty(t)&&this.events[t].destroy();this.events=null}return!1}if(this.closing)return!1;this.closing=!0,this.slidePlayerPause(this.activeSlide),this.fullElementsList&&(this.elements=this.fullElementsList),this.bodyHiddenChildElms.length&&o(this.bodyHiddenChildElms,(function(e){e.removeAttribute("aria-hidden")})),h(this.modal,"glightbox-closing"),g(this.overlay,"none"==this.settings.openEffect?"none":this.settings.cssEfects.fade.out),g(this.activeSlide,this.settings.cssEfects[this.settings.closeEffect].out,(function(){if(e.activeSlide=null,e.prevActiveSlideIndex=null,e.prevActiveSlide=null,e.built=!1,e.events){for(var t in e.events)e.events.hasOwnProperty(t)&&e.events[t].destroy();e.events=null}var i=document.body;d(Q,"glightbox-open"),d(i,"glightbox-open touching gdesc-open glightbox-touch glightbox-mobile gscrollbar-fixer"),e.modal.parentNode.removeChild(e.modal),e.trigger("close"),T(e.settings.onClose)&&e.settings.onClose();var n=document.querySelector(".gcss-styles");n&&n.parentNode.removeChild(n),e.lightboxOpen=!1,e.closing=null}))}},{key:"destroy",value:function(){this.close(),this.clearAllEvents(),this.baseEvents&&this.baseEvents.destroy()}},{key:"on",value:function(e,t){var i=arguments.length>2&&void 0!==arguments[2]&&arguments[2];if(!e||!T(t))throw new TypeError("Event name and callback must be defined");this.apiEvents.push({evt:e,once:i,callback:t})}},{key:"once",value:function(e,t){this.on(e,t,!0)}},{key:"trigger",value:function(e){var t=this,i=arguments.length>1&&void 0!==arguments[1]?arguments[1]:null,n=[];o(this.apiEvents,(function(t,s){var l=t.evt,o=t.once,r=t.callback;l==e&&(r(i),o&&n.push(s))})),n.length&&o(n,(function(e){return t.apiEvents.splice(e,1)}))}},{key:"clearAllEvents",value:function(){this.apiEvents.splice(0,this.apiEvents.length)}},{key:"version",value:function(){return"3.1.1"}}]),e}();return function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},t=new te(e);return t.init(),t}})); diff --git a/assets/kinetica_aws.png b/assets/kinetica_aws.png new file mode 100644 index 0000000..24960af Binary files /dev/null and b/assets/kinetica_aws.png differ diff --git a/assets/kinetica_logo.png b/assets/kinetica_logo.png new file mode 100644 index 0000000..c7c706a Binary files /dev/null and b/assets/kinetica_logo.png differ diff --git a/assets/stylesheets/glightbox.min.css b/assets/stylesheets/glightbox.min.css new file mode 100644 index 0000000..3c9ff87 --- /dev/null +++ b/assets/stylesheets/glightbox.min.css @@ -0,0 +1 @@ +.glightbox-container{width:100%;height:100%;position:fixed;top:0;left:0;z-index:999999!important;overflow:hidden;-ms-touch-action:none;touch-action:none;-webkit-text-size-adjust:100%;-moz-text-size-adjust:100%;-ms-text-size-adjust:100%;text-size-adjust:100%;-webkit-backface-visibility:hidden;backface-visibility:hidden;outline:0}.glightbox-container.inactive{display:none}.glightbox-container .gcontainer{position:relative;width:100%;height:100%;z-index:9999;overflow:hidden}.glightbox-container .gslider{-webkit-transition:-webkit-transform .4s ease;transition:-webkit-transform .4s ease;transition:transform .4s ease;transition:transform .4s ease,-webkit-transform .4s 
ease;height:100%;left:0;top:0;width:100%;position:relative;overflow:hidden;display:-webkit-box!important;display:-ms-flexbox!important;display:flex!important;-webkit-box-pack:center;-ms-flex-pack:center;justify-content:center;-webkit-box-align:center;-ms-flex-align:center;align-items:center;-webkit-transform:translate3d(0,0,0);transform:translate3d(0,0,0)}.glightbox-container .gslide{width:100%;position:absolute;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;display:-webkit-box;display:-ms-flexbox;display:flex;-webkit-box-align:center;-ms-flex-align:center;align-items:center;-webkit-box-pack:center;-ms-flex-pack:center;justify-content:center;opacity:0}.glightbox-container .gslide.current{opacity:1;z-index:99999;position:relative}.glightbox-container .gslide.prev{opacity:1;z-index:9999}.glightbox-container .gslide-inner-content{width:100%}.glightbox-container .ginner-container{position:relative;width:100%;display:-webkit-box;display:-ms-flexbox;display:flex;-webkit-box-pack:center;-ms-flex-pack:center;justify-content:center;-webkit-box-orient:vertical;-webkit-box-direction:normal;-ms-flex-direction:column;flex-direction:column;max-width:100%;margin:auto;height:100vh}.glightbox-container .ginner-container.gvideo-container{width:100%}.glightbox-container .ginner-container.desc-bottom,.glightbox-container .ginner-container.desc-top{-webkit-box-orient:vertical;-webkit-box-direction:normal;-ms-flex-direction:column;flex-direction:column}.glightbox-container .ginner-container.desc-left,.glightbox-container .ginner-container.desc-right{max-width:100%!important}.gslide iframe,.gslide video{outline:0!important;border:none;min-height:165px;-webkit-overflow-scrolling:touch;-ms-touch-action:auto;touch-action:auto}.gslide:not(.current){pointer-events:none}.gslide-image{-webkit-box-align:center;-ms-flex-align:center;align-items:center}.gslide-image img{max-height:100vh;display:block;padding:0;float:none;outline:0;border:none;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;max-width:100vw;width:auto;height:auto;-o-object-fit:cover;object-fit:cover;-ms-touch-action:none;touch-action:none;margin:auto;min-width:200px}.desc-bottom .gslide-image img,.desc-top .gslide-image img{width:auto}.desc-left .gslide-image img,.desc-right .gslide-image img{width:auto;max-width:100%}.gslide-image img.zoomable{position:relative}.gslide-image img.dragging{cursor:-webkit-grabbing!important;cursor:grabbing!important;-webkit-transition:none;transition:none}.gslide-video{position:relative;max-width:100vh;width:100%!important}.gslide-video .plyr__poster-enabled.plyr--loading .plyr__poster{display:none}.gslide-video .gvideo-wrapper{width:100%;margin:auto}.gslide-video::before{content:'';position:absolute;width:100%;height:100%;background:rgba(255,0,0,.34);display:none}.gslide-video.playing::before{display:none}.gslide-video.fullscreen{max-width:100%!important;min-width:100%;height:75vh}.gslide-video.fullscreen video{max-width:100%!important;width:100%!important}.gslide-inline{background:#fff;text-align:left;max-height:calc(100vh - 40px);overflow:auto;max-width:100%;margin:auto}.gslide-inline .ginlined-content{padding:20px;width:100%}.gslide-inline 
.dragging{cursor:-webkit-grabbing!important;cursor:grabbing!important;-webkit-transition:none;transition:none}.ginlined-content{overflow:auto;display:block!important;opacity:1}.gslide-external{display:-webkit-box;display:-ms-flexbox;display:flex;width:100%;min-width:100%;background:#fff;padding:0;overflow:auto;max-height:75vh;height:100%}.gslide-media{display:-webkit-box;display:-ms-flexbox;display:flex;width:auto}.zoomed .gslide-media{-webkit-box-shadow:none!important;box-shadow:none!important}.desc-bottom .gslide-media,.desc-top .gslide-media{margin:0 auto;-webkit-box-orient:vertical;-webkit-box-direction:normal;-ms-flex-direction:column;flex-direction:column}.gslide-description{position:relative;-webkit-box-flex:1;-ms-flex:1 0 100%;flex:1 0 100%}.gslide-description.description-left,.gslide-description.description-right{max-width:100%}.gslide-description.description-bottom,.gslide-description.description-top{margin:0 auto;width:100%}.gslide-description p{margin-bottom:12px}.gslide-description p:last-child{margin-bottom:0}.zoomed .gslide-description{display:none}.glightbox-button-hidden{display:none}.glightbox-mobile .glightbox-container .gslide-description{height:auto!important;width:100%;position:absolute;bottom:0;padding:19px 11px;max-width:100vw!important;-webkit-box-ordinal-group:3!important;-ms-flex-order:2!important;order:2!important;max-height:78vh;overflow:auto!important;background:-webkit-gradient(linear,left top,left bottom,from(rgba(0,0,0,0)),to(rgba(0,0,0,.75)));background:linear-gradient(to bottom,rgba(0,0,0,0) 0,rgba(0,0,0,.75) 100%);-webkit-transition:opacity .3s linear;transition:opacity .3s linear;padding-bottom:50px}.glightbox-mobile .glightbox-container .gslide-title{color:#fff;font-size:1em}.glightbox-mobile .glightbox-container .gslide-desc{color:#a1a1a1}.glightbox-mobile .glightbox-container .gslide-desc a{color:#fff;font-weight:700}.glightbox-mobile .glightbox-container .gslide-desc *{color:inherit}.glightbox-mobile .glightbox-container .gslide-desc .desc-more{color:#fff;opacity:.4}.gdesc-open .gslide-media{-webkit-transition:opacity .5s ease;transition:opacity .5s ease;opacity:.4}.gdesc-open .gdesc-inner{padding-bottom:30px}.gdesc-closed .gslide-media{-webkit-transition:opacity .5s ease;transition:opacity .5s ease;opacity:1}.greset{-webkit-transition:all .3s ease;transition:all .3s ease}.gabsolute{position:absolute}.grelative{position:relative}.glightbox-desc{display:none!important}.glightbox-open{overflow:hidden}.gloader{height:25px;width:25px;-webkit-animation:lightboxLoader .8s infinite linear;animation:lightboxLoader .8s infinite linear;border:2px solid #fff;border-right-color:transparent;border-radius:50%;position:absolute;display:block;z-index:9999;left:0;right:0;margin:0 auto;top:47%}.goverlay{width:100%;height:calc(100vh + 1px);position:fixed;top:-1px;left:0;background:#000;will-change:opacity}.glightbox-mobile .goverlay{background:#000}.gclose,.gnext,.gprev{z-index:99999;cursor:pointer;width:26px;height:44px;border:none;display:-webkit-box;display:-ms-flexbox;display:flex;-webkit-box-pack:center;-ms-flex-pack:center;justify-content:center;-webkit-box-align:center;-ms-flex-align:center;align-items:center;-webkit-box-orient:vertical;-webkit-box-direction:normal;-ms-flex-direction:column;flex-direction:column}.gclose svg,.gnext svg,.gprev svg{display:block;width:25px;height:auto;margin:0;padding:0}.gclose.disabled,.gnext.disabled,.gprev.disabled{opacity:.1}.gclose .garrow,.gnext .garrow,.gprev .garrow{stroke:#fff}.gbtn.focused{outline:2px solid 
#0f3d81}iframe.wait-autoplay{opacity:0}.glightbox-closing .gclose,.glightbox-closing .gnext,.glightbox-closing .gprev{opacity:0!important}.glightbox-clean .gslide-description{background:#fff}.glightbox-clean .gdesc-inner{padding:22px 20px}.glightbox-clean .gslide-title{font-size:1em;font-weight:400;font-family:arial;color:#000;margin-bottom:19px;line-height:1.4em}.glightbox-clean .gslide-desc{font-size:.86em;margin-bottom:0;font-family:arial;line-height:1.4em}.glightbox-clean .gslide-video{background:#000}.glightbox-clean .gclose,.glightbox-clean .gnext,.glightbox-clean .gprev{background-color:rgba(0,0,0,.75);border-radius:4px}.glightbox-clean .gclose path,.glightbox-clean .gnext path,.glightbox-clean .gprev path{fill:#fff}.glightbox-clean .gprev{position:absolute;top:-100%;left:30px;width:40px;height:50px}.glightbox-clean .gnext{position:absolute;top:-100%;right:30px;width:40px;height:50px}.glightbox-clean .gclose{width:35px;height:35px;top:15px;right:10px;position:absolute}.glightbox-clean .gclose svg{width:18px;height:auto}.glightbox-clean .gclose:hover{opacity:1}.gfadeIn{-webkit-animation:gfadeIn .5s ease;animation:gfadeIn .5s ease}.gfadeOut{-webkit-animation:gfadeOut .5s ease;animation:gfadeOut .5s ease}.gslideOutLeft{-webkit-animation:gslideOutLeft .3s ease;animation:gslideOutLeft .3s ease}.gslideInLeft{-webkit-animation:gslideInLeft .3s ease;animation:gslideInLeft .3s ease}.gslideOutRight{-webkit-animation:gslideOutRight .3s ease;animation:gslideOutRight .3s ease}.gslideInRight{-webkit-animation:gslideInRight .3s ease;animation:gslideInRight .3s ease}.gzoomIn{-webkit-animation:gzoomIn .5s ease;animation:gzoomIn .5s ease}.gzoomOut{-webkit-animation:gzoomOut .5s ease;animation:gzoomOut .5s ease}@-webkit-keyframes lightboxLoader{0%{-webkit-transform:rotate(0);transform:rotate(0)}100%{-webkit-transform:rotate(360deg);transform:rotate(360deg)}}@keyframes lightboxLoader{0%{-webkit-transform:rotate(0);transform:rotate(0)}100%{-webkit-transform:rotate(360deg);transform:rotate(360deg)}}@-webkit-keyframes gfadeIn{from{opacity:0}to{opacity:1}}@keyframes gfadeIn{from{opacity:0}to{opacity:1}}@-webkit-keyframes gfadeOut{from{opacity:1}to{opacity:0}}@keyframes gfadeOut{from{opacity:1}to{opacity:0}}@-webkit-keyframes gslideInLeft{from{opacity:0;-webkit-transform:translate3d(-60%,0,0);transform:translate3d(-60%,0,0)}to{visibility:visible;-webkit-transform:translate3d(0,0,0);transform:translate3d(0,0,0);opacity:1}}@keyframes gslideInLeft{from{opacity:0;-webkit-transform:translate3d(-60%,0,0);transform:translate3d(-60%,0,0)}to{visibility:visible;-webkit-transform:translate3d(0,0,0);transform:translate3d(0,0,0);opacity:1}}@-webkit-keyframes gslideOutLeft{from{opacity:1;visibility:visible;-webkit-transform:translate3d(0,0,0);transform:translate3d(0,0,0)}to{-webkit-transform:translate3d(-60%,0,0);transform:translate3d(-60%,0,0);opacity:0;visibility:hidden}}@keyframes gslideOutLeft{from{opacity:1;visibility:visible;-webkit-transform:translate3d(0,0,0);transform:translate3d(0,0,0)}to{-webkit-transform:translate3d(-60%,0,0);transform:translate3d(-60%,0,0);opacity:0;visibility:hidden}}@-webkit-keyframes gslideInRight{from{opacity:0;visibility:visible;-webkit-transform:translate3d(60%,0,0);transform:translate3d(60%,0,0)}to{-webkit-transform:translate3d(0,0,0);transform:translate3d(0,0,0);opacity:1}}@keyframes 
gslideInRight{from{opacity:0;visibility:visible;-webkit-transform:translate3d(60%,0,0);transform:translate3d(60%,0,0)}to{-webkit-transform:translate3d(0,0,0);transform:translate3d(0,0,0);opacity:1}}@-webkit-keyframes gslideOutRight{from{opacity:1;visibility:visible;-webkit-transform:translate3d(0,0,0);transform:translate3d(0,0,0)}to{-webkit-transform:translate3d(60%,0,0);transform:translate3d(60%,0,0);opacity:0}}@keyframes gslideOutRight{from{opacity:1;visibility:visible;-webkit-transform:translate3d(0,0,0);transform:translate3d(0,0,0)}to{-webkit-transform:translate3d(60%,0,0);transform:translate3d(60%,0,0);opacity:0}}@-webkit-keyframes gzoomIn{from{opacity:0;-webkit-transform:scale3d(.3,.3,.3);transform:scale3d(.3,.3,.3)}to{opacity:1}}@keyframes gzoomIn{from{opacity:0;-webkit-transform:scale3d(.3,.3,.3);transform:scale3d(.3,.3,.3)}to{opacity:1}}@-webkit-keyframes gzoomOut{from{opacity:1}50%{opacity:0;-webkit-transform:scale3d(.3,.3,.3);transform:scale3d(.3,.3,.3)}to{opacity:0}}@keyframes gzoomOut{from{opacity:1}50%{opacity:0;-webkit-transform:scale3d(.3,.3,.3);transform:scale3d(.3,.3,.3)}to{opacity:0}}@media (min-width:769px){.glightbox-container .ginner-container{width:auto;height:auto;-webkit-box-orient:horizontal;-webkit-box-direction:normal;-ms-flex-direction:row;flex-direction:row}.glightbox-container .ginner-container.desc-top .gslide-description{-webkit-box-ordinal-group:1;-ms-flex-order:0;order:0}.glightbox-container .ginner-container.desc-top .gslide-image,.glightbox-container .ginner-container.desc-top .gslide-image img{-webkit-box-ordinal-group:2;-ms-flex-order:1;order:1}.glightbox-container .ginner-container.desc-left .gslide-description{-webkit-box-ordinal-group:1;-ms-flex-order:0;order:0}.glightbox-container .ginner-container.desc-left .gslide-image{-webkit-box-ordinal-group:2;-ms-flex-order:1;order:1}.gslide-image img{max-height:97vh;max-width:100%}.gslide-image img.zoomable{cursor:-webkit-zoom-in;cursor:zoom-in}.zoomed .gslide-image img.zoomable{cursor:-webkit-grab;cursor:grab}.gslide-inline{max-height:95vh}.gslide-external{max-height:100vh}.gslide-description.description-left,.gslide-description.description-right{max-width:275px}.glightbox-open{height:auto}.goverlay{background:rgba(0,0,0,.92)}.glightbox-clean .gslide-media{-webkit-box-shadow:1px 2px 9px 0 rgba(0,0,0,.65);box-shadow:1px 2px 9px 0 rgba(0,0,0,.65)}.glightbox-clean .description-left .gdesc-inner,.glightbox-clean .description-right .gdesc-inner{position:absolute;height:100%;overflow-y:auto}.glightbox-clean .gclose,.glightbox-clean .gnext,.glightbox-clean .gprev{background-color:rgba(0,0,0,.32)}.glightbox-clean .gclose:hover,.glightbox-clean .gnext:hover,.glightbox-clean .gprev:hover{background-color:rgba(0,0,0,.7)}.glightbox-clean .gprev{top:45%}.glightbox-clean .gnext{top:45%}}@media (min-width:992px){.glightbox-clean .gclose{opacity:.7;right:20px}}@media screen and (max-height:420px){.goverlay{background:#000}} \ No newline at end of file diff --git a/assets/three_bars_background_lit.png b/assets/three_bars_background_lit.png new file mode 100644 index 0000000..52d43b4 Binary files /dev/null and b/assets/three_bars_background_lit.png differ diff --git a/blog/index.html b/blog/index.html new file mode 100644 index 0000000..2345d44 --- /dev/null +++ b/blog/index.html @@ -0,0 +1,10 @@ + Blog - Kinetica for Kubernetes
Skip to content
\ No newline at end of file diff --git a/images/columnar-1024x373.png b/images/columnar-1024x373.png new file mode 100644 index 0000000..55eee6b Binary files /dev/null and b/images/columnar-1024x373.png differ diff --git a/images/find_storage_class.gif b/images/find_storage_class.gif new file mode 100644 index 0000000..382d0ea Binary files /dev/null and b/images/find_storage_class.gif differ diff --git a/images/get_nodes.gif b/images/get_nodes.gif new file mode 100644 index 0000000..b761f86 Binary files /dev/null and b/images/get_nodes.gif differ diff --git a/images/helm_alternative_versions.gif b/images/helm_alternative_versions.gif new file mode 100644 index 0000000..c26e264 Binary files /dev/null and b/images/helm_alternative_versions.gif differ diff --git a/images/helm_install.gif b/images/helm_install.gif new file mode 100644 index 0000000..ac31d71 Binary files /dev/null and b/images/helm_install.gif differ diff --git a/images/helm_repo_add.gif b/images/helm_repo_add.gif new file mode 100644 index 0000000..2d81763 Binary files /dev/null and b/images/helm_repo_add.gif differ diff --git a/images/massivley_parallel.png b/images/massivley_parallel.png new file mode 100644 index 0000000..baecccb Binary files /dev/null and b/images/massivley_parallel.png differ diff --git a/images/vectorized.png b/images/vectorized.png new file mode 100644 index 0000000..94eb380 Binary files /dev/null and b/images/vectorized.png differ diff --git a/index.html b/index.html index d127afa..d1a5abd 100644 --- a/index.html +++ b/index.html @@ -1,8 +1,10 @@ - Overview - Kinetica DB Operator Helm Charts
Skip to content

Overview

This repository provides the helm chart for deploying the Kubernetes Operators for Database and Workbench. Once the operators are installed, you should be able to use the Kinetica Database and Workbench CRDs to deploy and manage Kinetica clusters and workbenches.

Add Kinetica Helm Repository

To add the Kinetica Helm repository to Helm 3:

Bash
helm repo add kinetica-operators https://kineticadb.github.io/charts
-helm repo update
-
-# if you get a 404 error on the .tgz file, you may do the following
-# you could get this if you had previously added the repo and the chart has been updated in development
-helm repo remove kinetica-operators
-helm repo add kinetica-operators https://kineticadb.github.io/charts
-

Installation values will depend on the target kubernetes platform you are using.

This chart provides out-of-the-box support for trying out Kinetica in K3s and Kind clusters. For k3s, you should be able to use either the GPU or the CPU version of the database. On these platforms, a non-production configuration of the Kinetica Database and Workbench is also deployed for you to get started. However, you should be able to change the k3s values file to deploy on other platforms as well. For fine-grained configuration of the Database or the Workbench, refer to the Database and Workbench documentation.

We use the same chart for our SaaS and AWS Marketplace offerings. If you want to try Kinetica via SaaS or the AWS Marketplace, follow those links; you do not need to use this chart directly.

The current version of the chart supports Kubernetes version 1.25 and above.

k3s (k3s.io)

Refer to Kinetica on K3s

Kind (kubernetes in docker kind.sigs.k8s.io)

Refer to Kinetica on Kind

K8s - Any flavour (kubernetes.io)

Refer to Kinetica on K8s

\ No newline at end of file + Kinetica for Kubernetes
Skip to content

kineticadb/charts

Accelerate your AI and analytics. Kinetica harnesses real-time data and the power of CPUs & GPUs for lightning-fast insights. It is uniquely designed for fast and flexible analytics on large volumes of changing data with incredible performance.

Kinetica DB can be quickly installed into Kubernetes using Helm.

  • Set up in 15 minutes


    Install the Kinetica DB locally on Kind or k3s with helm to get up and running in minutes (Dev/Test).

    Quickstart

  • Prepare to Install


    What you need to know & do before beginning a production installation.

    Preparation and Prerequisites

  • Production DB Installation


    Install the Kinetica DB with helm to get up and running quickly (Production).

    Installation

  • Running and Managing the Platform


    Metrics, Monitoring, Logs and Telemetry Distribution.

    Operations

  • Product Architecture


    The Modern Analytics Database Architected for Performance at Scale.

    Architecture

  • Support


    Additional Help, Tutorials and Troubleshooting resources.

    Support

  • Configuration in Detail


    Detailed reference material for the Helm Charts & Kinetica for Kubernetes CRDs.

    Reference Documentation

\ No newline at end of file diff --git a/search/search_index.json b/search/search_index.json index 244825b..57e0eee 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Overview","text":"

This repository provides the helm chart for deploying the Kubernetes Operators for Database and Workbench. Once the operators are installed, you should be able to use the Kinetica Database and Workbench CRDs to deploy and manage Kinetica clusters and workbenches.

"},{"location":"#add-kinetica-helm-repository","title":"Add Kinetica Helm Repository","text":"

To add the Kinetica Helm repository to Helm 3:

Bash
helm repo add kinetica-operators https://kineticadb.github.io/charts\nhelm repo update\n\n# if you get a 404 error on the .tgz file, you may do the following\n# you could get this if you had previously added the repo and the chart has been updated in development\nhelm repo remove kinetica-operators\nhelm repo add kinetica-operators https://kineticadb.github.io/charts\n

Installation values will depend on the target kubernetes platform you are using.

This chart provides out-of-the-box support for trying out Kinetica in K3s and Kind clusters. For k3s, you should be able to use either the GPU or the CPU version of the database. On these platforms, a non-production configuration of the Kinetica Database and Workbench is also deployed for you to get started. However, you should be able to change the k3s values file to deploy on other platforms as well. For fine-grained configuration of the Database or the Workbench, refer to the Database and Workbench documentation.

We use the same chart for our SaaS and AWS Marketplace offerings. If you want to try Kinetica via SaaS or the AWS Marketplace, follow those links; you do not need to use this chart directly.

The current version of the chart supports Kubernetes version 1.25 and above.

"},{"location":"#k3s-k3sio","title":"k3s (k3s.io)","text":"

Refer to Kinetica on K3s

"},{"location":"#kind-kubernetes-in-docker-kindsigsk8sio","title":"Kind (kubernetes in docker kind.sigs.k8s.io)","text":"

Refer to Kinetica on Kind

"},{"location":"#k8s-any-flavour-kubernetesio","title":"K8s - Any flavour (kubernetes.io)","text":"

Refer to Kinetica on K8s

"},{"location":"Database/database/","title":"Kinetica Database Configuration","text":"
  • kubectl (yaml)
"},{"location":"Database/database/#kineticacluster","title":"KineticaCluster","text":"

To deploy a new Database Instance into a Kubernetes cluster...

kubectl

Using kubectl a CustomResource of type KineticaCluster is used to define a new Kinetica DB Cluster in a YAML file.

The basic Group, Version, Kind or GVK to instantiate a Kinetica DB Cluster is as follows: -

kineticacluster.yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\n
"},{"location":"Database/database/#metadata","title":"Metadata","text":"

to which we add a metadata: block for the name of the DB CR along with the namespace into which we are targeting the installation of the DB cluster.

kineticacluster.yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n  name: my-kinetica-db-cr\n  namespace: gpudb\nspec:\n
"},{"location":"Database/database/#spec","title":"Spec","text":"

Under the spec: section of the KineticaCluster CR we have a number of sections supporting different aspects of the deployed DB cluster: -

  • gpudbCluster
  • autoSuspend
  • gadmin
"},{"location":"Database/database/#gpudbcluster","title":"gpudbCluster","text":"

Configuration items specific to the DB itself.

kineticacluster.yaml - gpudbCluster
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n  name: my-kinetica-db-cr\n  namespace: gpudb\nspec:\n  gpudbCluster:\n
"},{"location":"Database/database/#gpudbcluster_1","title":"gpudbCluster","text":"cluster name & size
clusterName: kinetica-cluster \nclusterSize: \n  tshirtSize: M \n  tshirtType: LargeCPU \nfqdn: kinetica-cluster.saas.kinetica.com\nhaRingName: default\nhasPools: false    \n

1. clusterName - the user defined name of the Kinetica DB Cluster

2. clusterSize - block that defines the number of DB Ranks to run

3. tshirtSize - sets the cluster size to a defined size based upon the t-shirt size. Valid sizes are: -

  • XS - 1 DB Rank
  • S - 2 DB Ranks
  • M - 4 DB Ranks
  • L - 8 DB Ranks
  • XL - 16 DB Ranks
  • XXL - 32 DB Ranks
  • XXXL - 64 DB Ranks

4. tshirtType - block that defines the type of DB Ranks to run: -

  • SmallCPU -
  • LargeCPU -
  • SmallGPU -
  • LargeGPU -

5. fqdn - The fully qualified domain name (FQDN) for the DB cluster. Used on the Ingress records for any exposed services.

6. haRingName - Default: default

7. hasPools - Whether to enable the separate node 'pools' for \"infra\", \"compute\" pod scheduling. Default: false +optional

"},{"location":"Database/database/#autosuspend","title":"autoSuspend","text":"

The DB Cluster autoSuspend section allows the core DB Pods to be spun down, releasing the underlying Kubernetes nodes and reducing infrastructure costs when the DB is not in use.

kineticacluster.yaml - autoSuspend
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n  name: my-kinetica-db-cr\n  namespace: gpudb\nspec:\n  autoSuspend:\n    enabled: false\n    inactivityDuration: 1h0m0s\n

7. the start of the autoSuspend definition

8. enabled when set to true, auto suspend of the DB cluster is enabled; when set to false, no automatic suspending of the DB takes place. If omitted it defaults to false

9. inactivityDuration the duration of DB inactivity after which the DB will be suspended

Horizontal Pod Autoscaler

In order for autoSuspend to work correctly the Kubernetes Horizontal Pod Autoscaler needs to be deployed to the cluster.
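
As a quick sanity check (a minimal sketch; the gpudb namespace is the chart default and may differ in your installation), you can confirm that the resource metrics API backing the Horizontal Pod Autoscaler is registered and list any HPAs in the DB namespace:

Bash
# check that the resource metrics API (metrics-server or equivalent) is registered
kubectl get apiservices v1beta1.metrics.k8s.io
# list any HorizontalPodAutoscalers in the DB namespace
kubectl -n gpudb get hpa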

"},{"location":"Database/database/#gadmin","title":"gadmin","text":"

GAdmin the Database Administration Console

kineticacluster.yaml - gadmin
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n  name: my-kinetica-db-cr\n  namespace: gpudb\nspec:\n  gadmin:\n    containerPort:\n      containerPort: 8080\n      name: gadmin\n      protocol: TCP\n    isEnabled: true\n

7. gadmin configuration block definition

8. containerPort configuration block i.e. where gadmin is exposed on the DB Pod

9. containerPort the port number as an integer. Default: 8080

10. name the name of the port being exposed. Default: gadmin

11. protocol the network protocol used. Default: TCP

12. isEnabled whether gadmin is exposed from the DB pod. Default: true

"},{"location":"Database/database/#example-db-cr","title":"Example DB CR","text":"YAML
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n  name: kinetica-cluster\n  namespace: gpudb\nspec:\n  autoSuspend:\n    enabled: false\n    inactivityDuration: 1h0m0s\n  debug: false\n  gadmin:\n    isEnabled: true\n  gpudbCluster:\n    clusterName: kinetica-cluster\n    clusterSize:\n      tshirtSize: M\n      tshirtType: LargeCPU\n    config:\n      graph:\n        enable: true\n      postgresProxy:\n        enablePostgresProxy: true\n      textSearch:\n        enableTextSearch: true\n      kifs:\n        enable: false\n      ml:\n        enable: false\n      tieredStorage:\n        globalTier:\n          colocateDisks: true\n          concurrentWaitTimeout: 120\n          encryptDataAtRest: true\n        persistTier:\n          default:\n            highWatermark: 90\n            limit: 4Ti\n            lowWatermark: 50\n            name: ''\n            path: default\n            provisioner: docker.io/hostpath\n            volumeClaim:\n              metadata: {}\n              spec:\n                resources: {}\n                storageClassName: kinetica-db-persist\n              status: {}\n      tieredStrategy:\n        default: VRAM 1, RAM 5, PERSIST 5\n        predicateEvaluationInterval: 60\n    fqdn: kinetica-cluster.saas.kinetica.com\n    haRingName: default\n    hasPools: false\n    hostManagerPort:\n      containerPort: 9300\n      name: hostmanager\n      protocol: TCP\n    image: docker.io/kineticastagingcloud/kinetica-k8s-db:v7.1.9-8.rc1\n    imagePullPolicy: IfNotPresent\n    letsEncrypt:\n      enabled: false\n    license: >-\n\n    metricsRegistryRepositoryTag:\n      imagePullPolicy: IfNotPresent\n      registry: docker.io\n      repository: kineticastagingcloud/fluent-bit\n      sha: ''\n      tag: v7.1.9-8.rc1\n    podManagementPolicy: Parallel\n    ranksPerNode: 1\n    replicas: 4\n  hostManagerMonitor:\n    monitorRegistryRepositoryTag:\n      imagePullPolicy: IfNotPresent\n      registry: docker.io\n      repository: kineticastagingcloud/kinetica-k8s-monitor\n      sha: ''\n      tag: v7.1.9-8.rc1\n    readinessProbe:\n      failureThreshold: 20\n      initialDelaySeconds: 5\n      periodSeconds: 10\n    startupProbe:\n      failureThreshold: 20\n      initialDelaySeconds: 5\n      periodSeconds: 10\n  infra: on-prem\n  ingressController: nginx-ingress\n  ldap:\n    host: openldap\n    isInLocalK8S: true\n    isLDAPS: false\n    namespace: gpudb\n    port: 389\n  payAsYouGo: false\n  reveal:\n    containerPort:\n      containerPort: 8088\n      name: reveal\n      protocol: TCP\n    isEnabled: true\n  supportingImages:\n    busybox:\n      imagePullPolicy: IfNotPresent\n      registry: docker.io\n      repository: kineticastagingcloud/busybox\n      sha: ''\n      tag: v7.1.9-8.rc1\n    socat:\n      imagePullPolicy: IfNotPresent\n      registry: docker.io\n      repository: kineticastagingcloud/socat\n      sha: ''\n      tag: v7.1.9-8.rc1\n
"},{"location":"Database/database/#kineticauser","title":"KineticaUser","text":""},{"location":"Database/database/#kineticagrant","title":"KineticaGrant","text":""},{"location":"Database/database/#kineticaschema","title":"KineticaSchema","text":""},{"location":"Database/database/#kineticaresourcegroup","title":"KineticaResourceGroup","text":""},{"location":"Database/database_reference/","title":"Database CRD/CR Reference","text":""},{"location":"Database/grant_reference/","title":"Kinetica DB User Permission Grant CRD/CR Reference","text":""},{"location":"Database/resource_group_reference/","title":"Kinetica DB User Resource Group CRD/CR Reference","text":""},{"location":"Database/schema_reference/","title":"Kinetica DB User Schema CRD/CR Reference","text":""},{"location":"Database/user_reference/","title":"Kinetica DB User CRD/CR Reference","text":""},{"location":"Operators/k3s/","title":"Overview","text":"

Kinetica Operators can be installed in any on-prem kubernetes cluster. This document provides instructions to install the operators in k3s. If you are on another distribution, you should be able to change the values file to suit your environment.

You will need a license key for this to work. Please contact Kinetica Support.

"},{"location":"Operators/k3s/#kinetica-on-k3s-k3sio","title":"Kinetica on k3s (k3s.io)","text":"

The current version of the chart supports Kubernetes version 1.25 and above.

"},{"location":"Operators/k3s/#install-k3s-129","title":"Install k3s 1.29","text":"Bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable=traefik  --node-name kinetica-master --token 12345\" K3S_KUBECONFIG_OUTPUT=~/.kube/config_k3s K3S_KUBECONFIG_MODE=644 INSTALL_K3S_VERSION=v1.29.2+k3s1 sh -\n
"},{"location":"Operators/k3s/#k3s-install-kinetica-operators-including-a-sample-db-to-try-out","title":"K3s -Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.k3s.yaml. It installs the operators and a simple DB, together with Workbench, for a non-production trial.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.

If you are on a local machine that does not have a domain name, add the following entry to your /etc/hosts file or equivalent.

Text Only
127.0.0.1 local.kinetica\n
"},{"location":"Operators/k3s/#k3s-install-the-kinetica-operators-chart","title":"K3s - Install the kinetica-operators chart","text":"Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n
"},{"location":"Operators/k3s/#k3s-install-the-kinetica-operators-chart-gpu-capable-machine","title":"K3s - Install the kinetica-operators chart (GPU Capable Machine)","text":"

If you wish to try out the GPU capabilities, you can use the following values file, provided you are on an NVIDIA GPU-capable machine.

Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.gpu.yaml\n\nhelm -n kinetica-system install kinetica-operators charts/kinetica-operators/ --create-namespace --values values.onPrem.k3s.gpu.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n

You should be able to access the workbench at http://local.kinetica

Username as per the values file mentioned above is kadmin and password is Kinetica1234!

"},{"location":"Operators/k3s/#uninstall-k3s","title":"Uninstall k3s","text":"Bash
/usr/local/bin/k3s-uninstall.sh\n
"},{"location":"Operators/k8s/","title":"Overview","text":"

For managed Kubernetes solutions (AKS, EKS, GKE) or other on-prem K8s flavors, follow this generic guide to install the Kinetica Operators, Database and Workbench. A product license key will be required for install. Please contact Kinetica Support to request a trial key.

"},{"location":"Operators/k8s/#preparation-and-prerequisites","title":"Preparation and prerequisites","text":"

Installation requires Helm3 and access to an on-prem or CSP managed Kubernetes cluster. kubectl is optional but highly recommended. The context for the desired target cluster must be selected from your ~/.kube/config file or set via the KUBECONFIG environment variable. Check to see if you have the correct context with,

Bash
kubectl config current-context\n

and that you can access this cluster correctly with,

Bash
kubectl get nodes\n

If you do not see a list of nodes for your K8s cluster the helm installation will not work. Please check your Kubernetes installation or access credentials (kubeconfig).

"},{"location":"Operators/k8s/#install-the-kinetica-operators-chart","title":"Install the kinetica-operators chart","text":"

This chart will install the Kinetica K8s operators together with a default configured database and workbench UI.

If you are installing into a managed Kubernetes environment and the NGINX ingress controller that is installed as part of this install creates a LoadBalancer service, you may need to associate the LoadBalancer with the domain you plan to use.

Alternatively, if you are installing on a local machine which does not have a domain name, you can add the following entry to your /etc/hosts file or equivalent:

Bash
127.0.0.1  local.kinetica\n

Note that the default chart configuration points to local.kinetica but this is configurable.

"},{"location":"Operators/k8s/#1-add-the-kinetica-chart-repository","title":"1. Add the Kinetica chart repository","text":"

Add the repo locally as kinetica-operators:

Bash
helm repo add kinetica-operators https://kineticadb.github.io/charts\n
"},{"location":"Operators/k8s/#2-obtain-the-default-helm-values-file","title":"2. Obtain the default Helm values file","text":"

For the generic Kubernetes install use the following values file without modification. Advanced users with specific requirements may need to adjust parameters in this file.

Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k8s.yaml\n
"},{"location":"Operators/k8s/#3-determine-the-following-prior-to-the-chart-install","title":"3. Determine the following prior to the chart install","text":"

(a) Obtain a LICENSE-KEY as described in the introduction above. (b) Choose a PASSWORD for the initial administrator user (Note: the default in the chart for this user is kadmin but this is configurable). Non-ASCII characters and typographical symbols in the password must be escaped with a \"\\\". For example, --set dbAdminUser.password=\"MyPassword\\!\" (c) As the storage class name varies between K8s flavors and/or there can be multiple, this must be specified in the chart installation. Obtain the DEFAULT-STORAGE-CLASS name with the command:

Bash
kubectl get sc -o name \n

use the name found after the /. For example, in \"storageclass.storage.k8s.io/TheName\", use \"TheName\" as the parameter.

"},{"location":"Operators/k8s/#4-install-the-helm-chart","title":"4. Install the helm chart","text":"

Run the following Helm install command after substituting values from section 3 above:

Bash
helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n
"},{"location":"Operators/k8s/#5-check-installation-progress","title":"5. Check installation progress","text":"

After a few moments, follow the progression of the main database pod startup with:

Bash
kubectl -n gpudb get po gpudb-0 -w\n

until it reaches \"gpudb-0 3/3 Running\" at which point the database should be ready and all other software installed in the cluster. You may have to run this command in a different terminal if the helm command from step 4 has not yet returned to the system prompt. Once running, you can quit this kubectl watch command using ctrl-c.

"},{"location":"Operators/k8s/#6-accessing-the-kinetica-installation","title":"6. Accessing the Kinetica installation","text":""},{"location":"Operators/k8s/#optional-install-a-development-chart-version","title":"(Optional) Install a development chart version","text":"

Find all alternative chart versions with:

Bash
helm search repo kinetica-operators --devel --versions\n

Then append --devel --version [CHART-DEVEL-VERSION] to the end of the Helm install command in section 4 above.

"},{"location":"Operators/k8s/#k8s-flavour-specific-notes","title":"K8s Flavour specific notes","text":""},{"location":"Operators/k8s/#eks","title":"EKS","text":""},{"location":"Operators/k8s/#ebs-csi-driver","title":"EBS CSI driver","text":"

Make sure you have enabled the ebs-csi driver in your EKS cluster. This is required for the default storage class to work. Please refer to this AWS documentation for more information.

"},{"location":"Operators/k8s/#ingress","title":"Ingress","text":"

As of now, the kinetica-operator chart installs NGINX ingress controller. So after the installation is complete, you may need to edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name.

"},{"location":"Operators/k8s/#option-1-use-the-loadbalancer-domain","title":"Option 1: Use the LoadBalancer domain","text":"Bash
kubectl get svc -n kinetica-system\n# look at the loadbalancer dns name, copy it\n\nkubectl -n gpudb edit $(kubectl -n gpudb get kc -o name)\n# replace local.kinetica with the loadbalancer dns name\nkubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)\n# replace local.kinetica with the loadbalancer dns name\n# save and exit\n# you should be able to access the workbench from the loadbalancer dns name\n
"},{"location":"Operators/k8s/#option-1-use-your-custom-domain","title":"Option 1: Use your custom domain","text":"

Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.

"},{"location":"Operators/kind/","title":"Overview","text":"

This installation in a kind cluster is for trying out the operators and the database in a non-production environment. This method currently only supports installing the CPU version of the database.

You will need a license key for this to work. Please contact Kinetica Support.

"},{"location":"Operators/kind/#kind-kubernetes-in-docker-kindsigsk8sio","title":"Kind (kubernetes in docker kind.sigs.k8s.io)","text":""},{"location":"Operators/kind/#create-kind-cluster-129","title":"Create Kind Cluster 1.29","text":"Bash
kind create cluster --config charts/kinetica-operators/kind.yaml\n
"},{"location":"Operators/kind/#kind-install-kinetica-operators-including-a-sample-db-to-try-out","title":"Kind - Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.kind.yaml. It installs the operators and a simple DB, together with Workbench, for a non-production trial.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.

"},{"location":"Operators/kind/#kind-install-the-kinetica-operators-chart","title":"Kind - Install the kinetica-operators chart","text":"Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.kind.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators/ --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n

You should be able to access the workbench at http://local.kinetica

Username as per the values file mentioned above is kadmin and password is Kinetica1234!

"},{"location":"Workbench/workbench/","title":"Kinetica Workbench Configuration","text":"
  • kubectl (yaml)
  • Helm Chart
"},{"location":"Workbench/workbench/#workbench","title":"Workbench","text":"kubectl

Using kubectl a CustomResource of type Workbench is used to define a new Kinetica Workbench in a YAML file.

The basic Group, Version, Kind or GVK to instantiate a Kinetica Workbench is as follows: -

Workbench GVK
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\n
"},{"location":"Workbench/workbench/#metadata","title":"Metadata","text":"

to which we add a metadata: block for the name of the Workbench CR along with the namespace into which we are targeting the installation.

Workbench metadata
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\nmetadata:\n  name: workbench-kinetica-cluster\n  namespace: gpudb\n
"},{"location":"Workbench/workbench/#an-example-workbench-cr","title":"An Example Workbench CR","text":"

The simplest valid Workbench CR looks as follows: -

workbench.yaml
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\nmetadata:\n  name: workbench-kinetica-cluster\n  namespace: gpudb\nspec:\n  executeSqlLimit: 10000\n  fqdn: kinetica-cluster.saas.kinetica.com\n  image: kineticastagingcloud/workbench:v7.1.9-8.rc1\n  letsEncrypt:\n    enabled: false\n  userIdleTimeout: 60\n  ingressController: nginx-ingress\n
"},{"location":"Workbench/workbench_reference/","title":"Workbench CRD/CR Reference","text":""}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"kineticadb/charts","text":"

Accelerate your AI and analytics. Kinetica harnesses real-time data and the power of CPUs & GPUs for lightning-fast insights. It is uniquely designed for fast and flexible analytics on large volumes of changing data with incredible performance.

Kinetica DB can be quickly installed into Kubernetes using Helm.

  • Set up in 15 minutes

    Install the Kinetica DB locally on Kind or k3s with helm to get up and running in minutes (Dev/Test).

    Quickstart

  • Prepare to Install

    What you need to know & do before beginning a production installation.

    Preparation and Prerequisites

  • Production DB Installation

    Install the Kinetica DB with helm to get up and running quickly (Production).

    Installation

  • Running and Managing the Platform

    Metrics, Monitoring, Logs and Telemetry Distribution.

    Operations

  • Product Architecture

    The Modern Analytics Database Architected for Performance at Scale.

    Architecture

  • Support

    Additional Help, Tutorials and Troubleshooting resources.

    Support

  • Configuration in Detail

    Detailed reference material for the Helm Charts & Kinetica for Kubernetes CRDs.

    Reference Documentation

"},{"location":"Advanced/advanced_topics/","title":"Advanced Topics","text":""},{"location":"Advanced/advanced_topics/#install-from-a-developmentpre-release-chart-version","title":"Install from a development/pre-release chart version","text":"

Find all alternative chart versions with:

Find alternative chart versions
helm search repo kinetica-operators --devel --versions\n

Then append --devel --version [CHART-DEVEL-VERSION] to the end of the Helm install command. See here.

"},{"location":"Advanced/advanced_topics/#using-your-own-opentelemetry-collector","title":"Using your own OpenTelemetry Collector","text":""},{"location":"Advanced/advanced_topics/#coming-soon","title":"Coming Soon","text":""},{"location":"Advanced/advanced_topics/#configuring-ingress-records","title":"Configuring Ingress Records","text":""},{"location":"Advanced/advanced_topics/#ingress-nginx","title":"ingress-nginx","text":""},{"location":"Advanced/advanced_topics/#coming-soon_1","title":"Coming Soon","text":""},{"location":"Advanced/advanced_topics/#nginx-ingress","title":"nginx-ingress","text":""},{"location":"Advanced/advanced_topics/#coming-soon_2","title":"Coming Soon","text":""},{"location":"Advanced/advanced_topics/#bare-metal-loadbalancer","title":"Bare Metal LoadBalancer","text":""},{"location":"Advanced/advanced_topics/#kube-vip","title":"kube-vip","text":"

kube-vip provides Kubernetes clusters with a virtual IP and load balancer for both the control plane (for building a highly-available cluster) and Kubernetes Services of type LoadBalancer without relying on any external hardware or software.

"},{"location":"Advanced/advanced_topics/#coming-soon_3","title":"Coming Soon","text":""},{"location":"Architecture/","title":"Architecture","text":"

Kinetica is a distributed, vectorized, memory-first, columnar database with tiered storage that is optimized for high speed and performance \u2013 particularly on streaming analytics and geospatial workloads.

Kinetica has been uniquely designed for fast and flexible analytics on large volumes of changing data with incredible performance.

  • Kinetica Database Architecture

    An overview of the core Kinetica database architecture.

    Core Database Architecture

  • Kinetica for Kubernetes Architecture

    How Kinetica is deployed and managed on Kubernetes.

    Kubernetes Architecture

"},{"location":"Architecture/db_architecture/","title":"Architecture","text":"

Kinetica is a distributed, vectorized, memory-first, columnar database with tiered storage that is optimized for high speed and performance \u2013 particularly on streaming analytics and geospatial workloads.

Kinetica has been uniquely designed for fast and flexible analytics on large volumes of changing data with incredible performance.

"},{"location":"Architecture/db_architecture/#database-architecture","title":"Database Architecture","text":""},{"location":"Architecture/db_architecture/#scale-out-architecture","title":"Scale-out Architecture","text":"

Kinetica has a distributed architecture that has been designed for data processing at scale. A standard cluster consists of identical nodes run on commodity hardware. A single node is chosen to be the head aggregation node.

A cluster can be scaled up at any time to increase storage capacity and processing power, with near-linear scale processing improvements for most operations. Sharding of data can be done automatically, or specified and optimized by the user.

"},{"location":"Architecture/db_architecture/#distributed-ingest-query","title":"Distributed Ingest & Query","text":"

Kinetica uses a shared-nothing data distribution across worker nodes. The head node receives a query and breaks it down into small tasks that can be spread across worker nodes. To avoid bottlenecks at the head node, ingestion can also be organized in parallel by all the worker nodes. Kinetica is able to distribute data client-side before sending it to designated worker nodes. This streamlines communication and processing time.

For the client application, there is no need to be aware of how many nodes are in the cluster, where they are, or how the data is distributed across them!

"},{"location":"Architecture/db_architecture/#column-oriented","title":"Column Oriented","text":"

Columnar data structures lend themselves to low-latency reads of data. But from a user's perspective, Kinetica behaves very similarly to a standard relational database \u2013 with tables of rows and columns and it can be queried with SQL or through APIs. Available column types include the standard base types (int, long, float, double, string, & bytes), as well as numerous sub-types supporting date/time, geospatial, and other data forms.

"},{"location":"Architecture/db_architecture/#vectorized-functions","title":"Vectorized Functions","text":"

Vectorization is Kinetica\u2019s secret sauce and the key feature that underpins its blazing fast performance.

Advanced vectorized kernels are optimized to use vectorized CPUs and GPUs for faster performance. The query engine automatically assigns tasks to the processor where they will be most performant. Aggregations, filters, window functions, joins and geospatial rendering are some of the capabilities that see performance improvements.

"},{"location":"Architecture/db_architecture/#memory-first-tiered-storage","title":"Memory-First, Tiered Storage","text":"

Tiered storage makes it possible to optimize where data lives for performance and cost. Recent data (such as all data where the timestamp is within the last 2 weeks) can be held in-memory, while older data can be moved to disk, or even to external storage services.

Kinetica operates on an entire data corpus by intelligently managing data across GPU memory, system memory, SIMD, disk / SSD, HDFS, and cloud storage like S3 for optimal performance.

Kinetica can also query and process data stored in data lakes, joining it with data managed by Kinetica in highly parallelized queries.

"},{"location":"Architecture/db_architecture/#performant-key-value-lookup","title":"Performant Key-Value Lookup","text":"

Kinetica is able to generate distributed key-value lookups, from columnar data, for high-performance and concurrency. Sharding logic is embedded directly within client APIs enabling linear scale-out as clients can lookup data directly from the node where the data lives.

"},{"location":"Architecture/kinetica_for_kubernetes_architecture/","title":"Kubernetes Architecture","text":""},{"location":"Architecture/kinetica_for_kubernetes_architecture/#coming-soon","title":"Coming Soon","text":""},{"location":"GettingStarted/","title":"Getting Started","text":"
  • Prepare to Install

    What you need to know & do before beginning an installation.

    Preparation and Prerequisites

  • Set up in 15 minutes (local install)

    Install the Kinetica DB with helm and get up and running in minutes.

    Quickstart

  • Set up in 15 minutes

    Install the Kinetica DB with helm and get up and running in minutes.

    Installation

  • Amazon Cloud

    Amazon Cloud EKS specific installation information.

    EKS

  • kind

    kind Kubernetes specific installation information.

    kind

  • k3s

    k3s Kubernetes specific installation information.

    k3s

"},{"location":"GettingStarted/aks/","title":"Azure AKS Specifics","text":"

This page covers any Microsoft Azure AKS cluster installation specifics.

"},{"location":"GettingStarted/eks/","title":"Amazon EKS Specifics","text":"

This page covers any Amazon EKS kubernetes cluster installation specifics.

"},{"location":"GettingStarted/eks/#ebs-csi-driver","title":"EBS CSI driver","text":"

Warning

Make sure you have enabled the ebs-csi driver in your EKS cluster. This is required for the default storage class to work.

Please refer to this AWS documentation for more information.
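
As an illustrative sketch (the cluster name my-eks-cluster is hypothetical), you can confirm the driver is present either from inside the cluster or via the AWS CLI:

Check for the EBS CSI driver
# the driver registers itself as a CSIDriver object
kubectl get csidriver ebs.csi.aws.com
# or inspect the managed EKS add-on
aws eks describe-addon --cluster-name my-eks-cluster --addon-name aws-ebs-csi-driver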

"},{"location":"GettingStarted/installation/","title":"Installation","text":"

For managed Kubernetes solutions (AKS, EKS, GKE) or on-prem (kubeadm) Kubernetes variants, follow this generic guide to install the Kinetica Operators, Database and Workbench.

Preparation & Prerequisites

Please make sure you have followed the Preparation & Prerequisites steps

"},{"location":"GettingStarted/installation/#4-install-the-helm-chart","title":"4. Install the helm chart","text":"

Run the following Helm install command after substituting values from section 3 above:

Helm install kinetica-operators
helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n
"},{"location":"GettingStarted/installation/#5-check-installation-progress","title":"5. Check installation progress","text":"

After a few moments, follow the progression of the main database pod startup with:

Monitor the Kinetica installation progress
kubectl -n gpudb get po gpudb-0 -w\n

until it reaches \"gpudb-0 3/3 Running\" at which point the database should be ready and all other software installed in the cluster. You may have to run this command in a different terminal if the helm command from step 4 has not yet returned to the system prompt. Once running, you can quit this kubectl watch command using ctrl-c.
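
If you prefer a command that blocks until the pod is ready rather than watching, a minimal alternative (assuming the default gpudb namespace and pod name) is:

Wait for the database pod to become Ready
# block for up to 30 minutes while the main database pod starts
kubectl -n gpudb wait --for=condition=Ready pod/gpudb-0 --timeout=30m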

"},{"location":"GettingStarted/installation/#6-accessing-the-kinetica-installation","title":"6. Accessing the Kinetica installation","text":""},{"location":"GettingStarted/installation/#target-platform-specifics","title":"Target Platform Specifics","text":"cloudlocal - devbare metal - prod

If you are installing into a managed Kubernetes environment and the NGINX ingress controller that is installed as part of this install creates a LoadBalancer service, you may need to associate the LoadBalancer with the domain you plan to use.

As of now, the kinetica-operator chart installs NGINX ingress controller. So after the installation is complete, you may need to edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name.

Option 1: Use the LoadBalancer domain Set your FQDN in Kinetica

kubectl get svc -n kinetica-system\n# look at the loadbalancer dns name, copy it\n\nkubectl -n gpudb edit $(kubectl -n gpudb get kc -o name)\n# replace local.kinetica with the loadbalancer dns name\nkubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)\n# replace local.kinetica with the loadbalancer dns name\n# save and exit\n# you should be able to access the workbench from the loadbalancer dns name\n

Option 2: Use your custom domain Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.
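
As an illustrative sketch only (db.example.com is a hypothetical domain; the field paths follow the example CRs shown in these docs), the FQDN can also be set non-interactively with kubectl patch instead of kubectl edit:

Patch the FQDN on the KineticaCluster and Workbench CRs
# point the KineticaCluster at your custom domain
kubectl -n gpudb patch $(kubectl -n gpudb get kc -o name) --type merge -p '{"spec":{"gpudbCluster":{"fqdn":"db.example.com"}}}'
# point the Workbench at the same domain
kubectl -n gpudb patch $(kubectl -n gpudb get wb -o name) --type merge -p '{"spec":{"fqdn":"db.example.com"}}'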

If you are installing on a local machine which does not have a domain name, you can add the following entry to your /etc/hosts file or equivalent:

Configure local access - /etc/hosts
127.0.0.1  local.kinetica\n

Note

The default chart configuration points to local.kinetica but this is configurable.

Installing on bare metal machines which do not have an external hardware load balancer requires an Ingress controller along with a software load balancer in order for Kinetica to be accessible.

Kinetica for Kubernetes has been tested with kube-vip

"},{"location":"GettingStarted/k3s/","title":"k3s Installation Specifics","text":"

This page covers any k3s kubernetes cluster installation specifics.

"},{"location":"GettingStarted/kind/","title":"KinD Installation Specifics","text":"

This page covers any kind kubernetes cluster installation specifics.

"},{"location":"GettingStarted/preparation_and_prerequisites/","title":"Preparation & Prerequisites","text":"

Checks & steps to ensure a smooth installation.

Obtain a Kinetica License Key

A product license key will be required for install. Please contact Kinetica Support to request a trial key.

Failing to provide a license key at installation time will prevent the DB from starting.

"},{"location":"GettingStarted/preparation_and_prerequisites/#preparation-and-prerequisites","title":"Preparation and prerequisites","text":"

Free Resources

Your Kubernetes cluster version should be >= 1.22.x and have a minimum of 8 CPUs, 8GB RAM, and SSD or SATA 7200RPM hard drive(s) with 4X memory capacity.
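
A quick way to sanity-check the CPU and memory capacity reported by your nodes (a minimal sketch using standard kubectl output):

Check node capacity
# show per-node CPU and memory capacity
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory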

GPU Support

For GPU enabled clusters the cards below have been tested in large-scale production environments and provide the best performance for the database.

Tested GPUs and driver versions:

  • P4/P40/P100 - 525.X (or higher)
  • V100 - 525.X (or higher)
  • T4 - 525.X (or higher)
  • A10/A40/A100 - 525.X (or higher)
"},{"location":"GettingStarted/preparation_and_prerequisites/#kubernetes-cluster-connectivity","title":"Kubernetes Cluster Connectivity","text":"

Installation requires Helm3, access to an on-prem or CSP managed Kubernetes cluster, and the Kubernetes CLI kubectl.

The context for the desired target cluster must be selected from your ~/.kube/config file and set via the KUBECONFIG environment variable or kubectl ctx (if installed). Check to see if you have the correct context with,

show the current kubernetes context
kubectl config current-context\n

and that you can access this cluster correctly with,

list kubernetes cluster nodes
kubectl get nodes\n

If you do not see a list of nodes for your K8s cluster, the helm installation will not work. Please check your Kubernetes installation or access credentials (kubeconfig).
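
If the wrong context is selected, you can list the available contexts and switch to the desired one (the context name my-cluster is hypothetical):

Switch kubernetes context
# list all contexts known to kubectl
kubectl config get-contexts
# switch to the desired context
kubectl config use-context my-cluster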

Air-Gapped Environments

If you are installing Kinetica with Helm in an air-gapped environment, you will either need a Registry Proxy to pass the requests through or to download the images and push them to your internal Registry.
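
A minimal sketch of mirroring one of the required images into an internal registry (registry.example.internal is a hypothetical hostname; repeat for each image listed in the sections below):

Mirror an image to an internal registry
# pull the image from the public registry
docker pull docker.io/kineticastagingcloud/kinetica-k8s-operator:v7.2.0-3.rc-3
# re-tag it for the internal registry
docker tag docker.io/kineticastagingcloud/kinetica-k8s-operator:v7.2.0-3.rc-3 registry.example.internal/kineticastagingcloud/kinetica-k8s-operator:v7.2.0-3.rc-3
# push it to the internal registry
docker push registry.example.internal/kineticastagingcloud/kinetica-k8s-operator:v7.2.0-3.rc-3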

"},{"location":"GettingStarted/preparation_and_prerequisites/#required-container-images","title":"Required Container Images","text":""},{"location":"GettingStarted/preparation_and_prerequisites/#dockerio-required-kinetica-images-for-all-installations","title":"docker.io (Required Kinetica Images for All Installations)","text":"
  • docker.io/kineticastagingcloud/kinetica-k8s-operator:v7.2.0-3.rc-3
    • docker.io/kineticastagingcloud/kinetica-k8s-cpu:v7.2.0-3.rc-3 or
    • docker.io/kineticastagingcloud/kinetica-k8s-cpu-avx512:v7.2.0-3.rc-3 or
    • docker.io/kineticastagingcloud/kinetica-k8s-gpu:v7.2.0-3.rc-3
  • docker.io/kineticastagingcloud/workbench-operator:v7.2.0-3.rc-3
  • docker.io/kineticastagingcloud/workbench:v7.2.0-3.rc-3
  • docker.io/kineticastagingcloud/kinetica-k8s-monitor:v7.2.0-3.rc-3
  • docker.io/kineticastagingcloud/busybox:v7.2.0-3.rc-3
  • docker.io/kineticastagingcloud/fluent-bit:v7.2.0-3.rc-3
  • docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga
"},{"location":"GettingStarted/preparation_and_prerequisites/#nvcrio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"nvcr.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)","text":"
  • nvcr.io/nvidia/gpu-operator:v23.9.1
"},{"location":"GettingStarted/preparation_and_prerequisites/#registryk8sio-required-kinetica-images-for-gpu-installations-using-kinetica-k8s-gpu","title":"registry.k8s.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)","text":"
  • registry.k8s.io/nfd/node-feature-discovery:v0.14.2
"},{"location":"GettingStarted/preparation_and_prerequisites/#dockerio-required-supporting-images","title":"docker.io (Required Supporting Images)","text":"
  • docker.io/bitnami/openldap:2.6.7
  • docker.io/alpine/openssl:latest (used by bitnami/openldap)
  • docker.io/otel/opentelemetry-collector-contrib:0.95.0
"},{"location":"GettingStarted/preparation_and_prerequisites/#quayio-required-supporting-images","title":"quay.io (Required Supporting Images)","text":"
  • quay.io/brancz/kube-rbac-proxy:v0.14.2
"},{"location":"GettingStarted/preparation_and_prerequisites/#optional-container-images","title":"Optional Container Images","text":"

These images are only required if certain features are enabled as part of the Helm installation: -

  • CertManager
  • ingress-nginx
"},{"location":"GettingStarted/preparation_and_prerequisites/#quayio-optional-supporting-images","title":"quay.io (Optional Supporting Images)","text":"
  • quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
"},{"location":"GettingStarted/preparation_and_prerequisites/#registryk8sio-optional-supporting-images","title":"registry.k8s.io (Optional Supporting Images)","text":"
  • registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing Ingress nGinx via Kinetica Helm Chart)
  • registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c
"},{"location":"GettingStarted/preparation_and_prerequisites/#which-kinetica-core-image-do-i-use","title":"Which Kinetica Core Image do I use?","text":"Container Image Intel (AMD64) Intel (AMD64 AVX512) Amd (AMD64) Graviton (aarch64) Apple Silicon (aarch64) kinetica-k8s-cpu (1) kinetica-k8s-cpu-avx512 kinetica-k8s-gpu (2) (2) (2)
  1. It is preferable on an Intel AVX512 enabled CPU to use the kinetica-k8s-cpu-avx512 container image
  2. With a supported NVIDIA GPU.
"},{"location":"GettingStarted/preparation_and_prerequisites/#install-the-kinetica-operators-chart","title":"Install the kinetica-operators chart","text":"

This chart will install the Kinetica K8s operators together with a default configured database and workbench UI.

"},{"location":"GettingStarted/preparation_and_prerequisites/#1-add-the-kinetica-chart-repository","title":"1. Add the Kinetica chart repository","text":"

Add the repo locally as kinetica-operators:

Helm repo add
helm repo add kinetica-operators https://kineticadb.github.io/charts\n
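
If the repository has been added previously, refresh the local chart index so the latest chart versions are visible:

Helm repo update
helm repo update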

"},{"location":"GettingStarted/preparation_and_prerequisites/#2-obtain-the-default-helm-values-file","title":"2. Obtain the default Helm values file","text":"

For the generic Kubernetes install use the following values file without modification. Advanced users with specific requirements may need to adjust parameters in this file.

Helm values.yaml download
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k8s.yaml\n
"},{"location":"GettingStarted/preparation_and_prerequisites/#3-determine-the-following-prior-to-the-chart-install","title":"3. Determine the following prior to the chart install","text":"

Default Admin User

The default admin user in the Helm chart is kadmin, but this is configurable. Non-ASCII characters and typographical symbols in the password must be escaped with a \"\\\". For example, --set dbAdminUser.password=\"MyPassword\\!\"

  1. Obtain a LICENSE-KEY as described in the introduction above.
  2. Choose a PASSWORD for the initial administrator user
  3. As the storage class name varies between K8s flavors, and there can be more than one, it must be specified at chart installation. Obtain the DEFAULT-STORAGE-CLASS name with the command:

Find the default storageclass
kubectl get sc -o name \n

use the name found after the /. For example, in storageclass.storage.k8s.io/local-path, use \"local-path\" as the parameter.
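
To see which class (if any) is marked as the cluster default, plain kubectl get sc flags it with "(default)" next to its name:

Show the default storageclass marker
# the default class is shown with "(default)" after its name
kubectl get sc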

Amazon EKS

If installing on Amazon EKS See here

"},{"location":"GettingStarted/quickstart/","title":"Quickstart","text":"

For the quickstart we have examples for Kind or k3s.

  • Kind - is suitable for CPU only installations.
  • k3s - is suitable for CPU or GPU installations.

Kubernetes >= 1.25

The current version of the chart supports kubernetes version 1.25 and above.

Please select your target installation type:

kindk3s

Default User

Username as per the values file mentioned above is kadmin and password is Kinetica1234!

"},{"location":"GettingStarted/quickstart/#kind-kubernetes-in-docker-kindsigsk8sio","title":"Kind (kubernetes in docker kind.sigs.k8s.io)","text":"

This installation in a kind cluster is for trying out the operators and the database in a non-production environment.

CPU Only

This method currently only supports installing a CPU version of the database.

Please contact Kinetica Support to request a trial key.

"},{"location":"GettingStarted/quickstart/#create-kind-cluster-129","title":"Create Kind Cluster 1.29","text":"Create a new Kind Cluster
kind create cluster --config charts/kinetica-operators/kind.yaml\n
"},{"location":"GettingStarted/quickstart/#kind-install-kinetica-operators-including-a-sample-db-to-try-out","title":"Kind - Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.kind.yaml. It installs the operators and a simple DB, together with Workbench, for a non-production trial.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.

"},{"location":"GettingStarted/quickstart/#kind-install-the-kinetica-operators-chart","title":"Kind - Install the Kinetica-Operators Chart","text":"Kind - Install the Kinetca-Operators Chart
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.kind.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n

or if you have been asked by the Kinetica Support team to try a development version

Using a development version
helm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators/ --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n

Accessing the Workbench

You should be able to access the workbench at http://local.kinetica

"},{"location":"GettingStarted/quickstart/#k3s-k3sio","title":"k3s (k3s.io)","text":""},{"location":"GettingStarted/quickstart/#install-k3s-129","title":"Install k3s 1.29","text":"Install k3s
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable=traefik  --node-name kinetica-master --token 12345\" K3S_KUBECONFIG_OUTPUT=~/.kube/config_k3s K3S_KUBECONFIG_MODE=644 INSTALL_K3S_VERSION=v1.29.2+k3s1 sh -\n
"},{"location":"GettingStarted/quickstart/#k3s-install-kinetica-operators-including-a-sample-db-to-try-out","title":"K3s - Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.k3s.yaml. It installs the operators and a simple DB, together with Workbench, for a non-production trial.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.

If you are on a local machine that does not have a domain name, add the following entry to your /etc/hosts file or equivalent.

Configure local access - /etc/hosts
127.0.0.1  local.kinetica\n
"},{"location":"GettingStarted/quickstart/#k3s-install-the-kinetica-operators-chart-cpu","title":"K3S - Install the Kinetica-Operators Chart (CPU)","text":"Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n

or if you have been asked by the Kinetica Support team to try a development version

Using a development version
helm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n
"},{"location":"GettingStarted/quickstart/#k3s-install-the-kinetica-operators-chart-gpu","title":"K3S - Install the Kinetica-Operators Chart (GPU)","text":"

If you wish to try out the GPU capabilities, you can use the following values file, provided you are on an NVIDIA GPU-capable machine.

k3s GPU Installation
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.gpu.yaml\n\nhelm -n kinetica-system install kinetica-operators charts/kinetica-operators/ --create-namespace --values values.onPrem.k3s.gpu.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n
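
If GPU pods fail to schedule, a quick sanity check is to confirm that the nodes actually advertise NVIDIA GPUs to Kubernetes; this assumes the NVIDIA device plugin (or GPU operator) is already running on the node.

Check for allocatable NVIDIA GPUs
# nodes with a working device plugin report an nvidia.com/gpu resource\nkubectl describe nodes | grep -i nvidia.com/gpu\n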

Accessing the Workbench

You should be able to access the workbench at http://local.kinetica
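
If the browser cannot reach it, a quick check from the same machine (assuming the /etc/hosts entry or DNS record described above is in place) is:

Check the Workbench ingress
# expect an HTTP response (e.g. 200 or a redirect) from the ingress\ncurl -I http://local.kinetica\n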

"},{"location":"GettingStarted/quickstart/#uninstall-k3s","title":"Uninstall k3s","text":"uninstall k3s
/usr/local/bin/k3s-uninstall.sh\n
"},{"location":"Help/help_and_tutorials/","title":"Help & Tutorials","text":"
  • Tutorials

    Tutorials

  • Help

    Help

"},{"location":"Help/help_and_tutorials/#coming-soon","title":"Coming Soon","text":""},{"location":"Monitoring/logs/","title":"Log Collection & Display","text":""},{"location":"Monitoring/logs/#coming-soon","title":"Coming Soon","text":""},{"location":"Monitoring/metrics_and_monitoring/","title":"MetricsCollection & Display","text":""},{"location":"Monitoring/metrics_and_monitoring/#coming-soon","title":"Coming Soon","text":""},{"location":"Operations/","title":"Operations","text":"
  • Metrics

    Collecting and storing metrics as time series data.

    Metrics

  • Logs

    Log aggregation.

    Logs

  • Distribution

    Metrics & Logs can be distributed to other systems using OpenTelemetry.

    OpenTelemetry

  • Backup & Restore

    Backup & Restore of the Kinetica DB.

    Backup & Restore

"},{"location":"Operations/backup_and_restore/","title":"Kinetica Backup & Restore","text":"

Velero Installation

Backup & Restore requires that v is installed into the Kubernetes Cluster.

"},{"location":"Operations/backup_and_restore/#coming-soon","title":"Coming Soon","text":""},{"location":"Operations/otel/","title":"OpenTelemetry Integration for Metric & Log Distribution","text":""},{"location":"Operations/otel/#coming-soon","title":"Coming Soon","text":""},{"location":"Operators/k3s/","title":"Overview","text":"

Kinetica Operators can be installed in any on-prem kubernetes cluster. This document provides instructions to install the operators in k3s. If you are on another distribution, you should be able to change the values file to suit your environment.

You will need a license key for this to work. Please contact Kinetica Support.

"},{"location":"Operators/k3s/#kinetica-on-k3s-k3sio","title":"Kinetica on k3s (k3s.io)","text":"

The current version of the chart supports Kubernetes version 1.25 and above.

"},{"location":"Operators/k3s/#install-k3s-129","title":"Install k3s 1.29","text":"Bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable=traefik  --node-name kinetica-master --token 12345\" K3S_KUBECONFIG_OUTPUT=~/.kube/config_k3s K3S_KUBECONFIG_MODE=644 INSTALL_K3S_VERSION=v1.29.2+k3s1 sh -\n
"},{"location":"Operators/k3s/#k3s-install-kinetica-operators-including-a-sample-db-to-try-out","title":"K3s -Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.k3s.yaml. It installs the operators along with a simple database and Workbench for a non-production trial.

It also creates an ingress pointing to local.kinetica. If you have a domain pointing to your machine, replace local.kinetica with the correct domain name.

If you are on a local machine that does not have a domain name, add the following entry to your /etc/hosts file or equivalent.

Text Only
127.0.0.1 local.kinetica\n
"},{"location":"Operators/k3s/#k3s-install-the-kinetica-operators-chart","title":"K3s - Install the kinetica-operators chart","text":"Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n
"},{"location":"Operators/k3s/#k3s-install-the-kinetica-operators-chart-gpu-capable-machine","title":"K3s - Install the kinetica-operators chart (GPU Capable Machine)","text":"

If you wish to try out the GPU capabilities, you can use the following values file, provided you are on an NVIDIA GPU-capable machine.

Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.gpu.yaml\n\nhelm -n kinetica-system install kinetica-operators charts/kinetica-operators/ --create-namespace --values values.onPrem.k3s.gpu.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n

You should be able to access the workbench at http://local.kinetica

The username, as per the values file mentioned above, is kadmin and the password is Kinetica1234!

"},{"location":"Operators/k3s/#uninstall-k3s","title":"Uninstall k3s","text":"Bash
/usr/local/bin/k3s-uninstall.sh\n
"},{"location":"Operators/k8s/","title":"Overview","text":"

For managed Kubernetes solutions (AKS, EKS, GKE) or other K8s flavors, including on-prem, follow this generic guide to install the Kinetica Operators, Database and Workbench. A product license key will be required for install. Please contact Kinetica Support to request a trial key.

"},{"location":"Operators/k8s/#preparation-and-prerequisites","title":"Preparation and prerequisites","text":"

Installation requires Helm 3 and access to an on-prem or CSP-managed Kubernetes cluster. kubectl is optional but highly recommended. The context for the desired target cluster must be selected from your ~/.kube/config file or set via the KUBECONFIG environment variable. Check that you have the correct context with,

Bash
kubectl config current-context\n

and that you can access this cluster correctly with,

Bash
kubectl get nodes\n

If you do not see a list of nodes for your K8s cluster, the Helm installation will not work. Please check your Kubernetes installation or access credentials (kubeconfig).

"},{"location":"Operators/k8s/#install-the-kinetica-operators-chart","title":"Install the kinetica-operators chart","text":"

This chart will install the Kinetica K8s operators together with a default-configured database and Workbench UI.

If you are installing into a managed Kubernetes environment, the NGINX ingress controller deployed as part of this chart creates a LoadBalancer service, and you may need to associate that LoadBalancer with the domain you plan to use.
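
To find the address to associate, look up the external IP or hostname of the ingress controller's LoadBalancer service; the namespace below assumes the chart's default kinetica-system install.

Find the ingress LoadBalancer address
# note the EXTERNAL-IP / hostname of the LoadBalancer service\nkubectl get svc -n kinetica-system\n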

Alternatively, if you are installing on a local machine which does not have a domain name, you can add the following entry to your /etc/hosts file or equivalent:

Bash
127.0.0.1  local.kinetica\n

Note that the default chart configuration points to local.kinetica but this is configurable.

"},{"location":"Operators/k8s/#1-add-the-kinetica-chart-repository","title":"1. Add the Kinetica chart repository","text":"

Add the repo locally as kinetica-operators:

Bash
helm repo add kinetica-operators https://kineticadb.github.io/charts\n
"},{"location":"Operators/k8s/#2-obtain-the-default-helm-values-file","title":"2. Obtain the default Helm values file","text":"

For the generic Kubernetes install use the following values file without modification. Advanced users with specific requirements may need to adjust parameters in this file.

Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k8s.yaml\n
"},{"location":"Operators/k8s/#3-determine-the-following-prior-to-the-chart-install","title":"3. Determine the following prior to the chart install","text":"

(a) Obtain a LICENSE-KEY as described in the introduction above.

(b) Choose a PASSWORD for the initial administrator user (Note: the default user in the chart is kadmin but this is configurable). Non-ASCII characters and typographical symbols in the password must be escaped with a \"\\\". For example, --set dbAdminUser.password=\"MyPassword\\!\"

(c) As the storage class name varies between K8s flavors and there can be more than one, it must be specified explicitly at chart install time. Obtain the DEFAULT-STORAGE-CLASS name with the command:

Bash
kubectl get sc -o name \n

Use the name found after the \"/\". For example, in \"storageclass.storage.k8s.io/TheName\", use \"TheName\" as the parameter.
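
If your cluster marks one class as the default, it is flagged in the standard kubectl output, which gives a quick way to spot the name to use:

Find the default storage class
# the default class (if any) is shown with \"(default)\" next to its name\nkubectl get sc | grep '(default)'\n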

"},{"location":"Operators/k8s/#4-install-the-helm-chart","title":"4. Install the helm chart","text":"

Run the following Helm install command after substituting values from section 3 above:

Bash
helm -n kinetica-system install \\\nkinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--values values.onPrem.k8s.yaml \\\n--set db.gpudbCluster.license=\"LICENSE-KEY\" \\\n--set dbAdminUser.password=\"PASSWORD\" \\\n--set global.defaultStorageClass=\"DEFAULT-STORAGE-CLASS\"\n
"},{"location":"Operators/k8s/#5-check-installation-progress","title":"5. Check installation progress","text":"

After a few moments, follow the progression of the main database pod startup with:

Bash
kubectl -n gpudb get po gpudb-0 -w\n

until it reaches \"gpudb-0 3/3 Running\", at which point the database should be ready and all other software installed in the cluster. You may have to run this command in a different terminal if the helm command from step 4 has not yet returned to the system prompt. Once running, you can quit this kubectl watch command with ctrl-c.
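
If you prefer to block in a script rather than watch interactively, kubectl wait can be used for the same purpose; the timeout below is an arbitrary example.

Wait for the database pod
# returns once gpudb-0 reports Ready (or fails after the timeout)\nkubectl -n gpudb wait --for=condition=Ready pod/gpudb-0 --timeout=1800s\n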

"},{"location":"Operators/k8s/#6-accessing-the-kinetica-installation","title":"6. Accessing the Kinetica installation","text":""},{"location":"Operators/k8s/#optional-install-a-development-chart-version","title":"(Optional) Install a development chart version","text":"

Find all alternative chart versions with:

Bash
helm search repo kinetica-operators --devel --versions\n

Then append --devel --version [CHART-DEVEL-VERSION] to the end of the Helm install command in section 4 above.

"},{"location":"Operators/k8s/#k8s-flavour-specific-notes","title":"K8s Flavour specific notes","text":""},{"location":"Operators/k8s/#eks","title":"EKS","text":""},{"location":"Operators/k8s/#ebs-csi-driver","title":"EBS CSI driver","text":"

Make sure you have enabled the ebs-csi driver in your EKS cluster. This is required for the default storage class to work. Please refer to this AWS documentation for more information.
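
One way to verify the driver is active before installing (whether it was added as an EKS add-on or self-managed) is to look for its pods and CSIDriver registration:

Check the EBS CSI driver
# driver pods typically run in kube-system\nkubectl get pods -n kube-system | grep ebs-csi\n\n# the CSIDriver object registered by the driver\nkubectl get csidriver ebs.csi.aws.com\n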

"},{"location":"Operators/k8s/#ingress","title":"Ingress","text":"

Currently, the kinetica-operators chart installs the NGINX ingress controller. After the installation is complete, you may need to edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name.

"},{"location":"Operators/k8s/#option-1-use-the-loadbalancer-domain","title":"Option 1: Use the LoadBalancer domain","text":"Bash
kubectl get svc -n kinetica-system\n# look at the loadbalancer dns name, copy it\n\nkubectl -n gpudb edit $(kubectl -n gpudb get kc -o name)\n# replace local.kinetica with the loadbalancer dns name\nkubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)\n# replace local.kinetica with the loadbalancer dns name\n# save and exit\n# you should be able to access the workbench from the loadbalancer dns name\n
"},{"location":"Operators/k8s/#option-1-use-your-custom-domain","title":"Option 1: Use your custom domain","text":"

Create a record in your DNS server pointing to the LoadBalancer DNS. Then edit the KineticaCluster Custom Resource and Workbench Custom Resource with the correct domain name, as mentioned above.
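
As a non-interactive sketch of the same change, the KineticaCluster fqdn field (under spec.gpudbCluster, per the reference section) can be patched directly; db.example.com is a hypothetical domain, and the Workbench CR can still be edited as in Option 1.

Patch the KineticaCluster fqdn (sketch)
# hypothetical domain used for illustration\nkubectl -n gpudb patch $(kubectl -n gpudb get kc -o name) --type merge -p '{\"spec\":{\"gpudbCluster\":{\"fqdn\":\"db.example.com\"}}}'\n\n# then update the Workbench CR as in Option 1\nkubectl -n gpudb edit $(kubectl -n gpudb get wb -o name)\n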

"},{"location":"Operators/kind/","title":"Overview","text":"

This installation in a kind cluster is for trying out the operators and the database in a non-production environment. This method currently only supports installing a CPU version of the database.

You will need a license key for this to work. Please contact Kinetica Support.

"},{"location":"Operators/kind/#kind-kubernetes-in-docker-kindsigsk8sio","title":"Kind (kubernetes in docker kind.sigs.k8s.io)","text":""},{"location":"Operators/kind/#create-kind-cluster-129","title":"Create Kind Cluster 1.29","text":"Bash
kind create cluster --config charts/kinetica-operators/kind.yaml\n
"},{"location":"Operators/kind/#kind-install-kinetica-operators-including-a-sample-db-to-try-out","title":"Kind - Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.kind.yaml. It installs the operators along with a simple database and Workbench for a non-production trial.

It also creates an ingress pointing to local.kinetica. If you have a domain pointing to your machine, replace local.kinetica with the correct domain name.

"},{"location":"Operators/kind/#kind-install-the-kinetica-operators-chart","title":"Kind - Install the kinetica-operators chart","text":"Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.kind.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators/ --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n

You should be able to access the workbench at http://local.kinetica

The username, as per the values file mentioned above, is kadmin and the password is Kinetica1234!

"},{"location":"Operators/kinetica-operators/","title":"Kinetica DB Operator Helm Charts","text":"

To install all the required operators in a single command, perform the following: -

Bash
helm install -n kinetica-system \\\nkinetica-operators kinetica-operators/kinetica-operators --create-namespace\n

This will install all the Kubernetes Operators required into the kinetica-system namespace and create the namespace if it is not currently present.

Note

Depending on what target platform you are installing to it may be necessary to supply an additional parameter pointing to a values file to successfully provision the DB.

Bash
helm install -n kinetica-system -f values.yaml --set provider=aks \\\nkinetica-operators kinetica-operators/kinetica-operators --create-namespace\n

The command above uses a custom values.yaml for helm and sets the install platform to Microsoft Azure AKS.

Currently supported providers are: -

  • aks - Microsoft Azure AKS
  • eks - Amazon AWS EKS
  • local - Generic 'On-Prem' Kubernetes Clusters e.g. one deployed using kubeadm

Example Helm values.yaml for different Cloud Providers/On-Prem installations: -

Azure AKS values.yaml
namespace: kinetica-system\n\ndb:\n  serviceAccount: {}\n  image:\n    # Kinetica DB Operator installer image\n    repository: \"registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator\"\n    #  Kinetica DB Operator installer image tag\n    tag: \"\"\n\n  parameters:\n    # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to\n    kubeconfig: \"\"\n    # The storage class to use for PVCs\n    storageClass: \"managed-premium\"\n\n  storageClass:\n    persist:\n      # Workbench Operator Persistent Volume Storage Class\n      provisioner: \"disk.csi.azure.com\"\n    procs:\n      # Workbench Operator Procs Volume Storage Class\n      provisioner: \"disk.csi.azure.com\"\n    cache:\n      # Workbench Operator Cache Volume Storage Class\n      provisioner: \"disk.csi.azure.com\"\n

15 storageClass: \"managed-premium\" - sets the appropriate storageClass for Microsoft Azure AKS Persistent Volume (PV)

20 provisioner: \"disk.csi.azure.com\" - sets the appropriate disk provisioner for the DB (Persist) filesystem for Microsoft Azure

23 provisioner: \"disk.csi.azure.com\" - sets the appropriate disk provisioner for the DB Procs filesystem for Microsoft Azure

26 provisioner: \"disk.csi.azure.com\" - sets the appropriate disk provisioner for the DB Cache filesystem for Microsoft Azure

Amazon EKS values.yaml
namespace: kinetica-system\n\ndb:\n  serviceAccount: {}\n  image:\n    # Kinetica DB Operator installer image\n    repository: \"registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator\"\n    #  Kinetica DB Operator installer image tag\n    tag: \"\"\n\n  parameters:\n    # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to\n    kubeconfig: \"\"\n    # The storage class to use for PVCs\n    storageClass: \"gp2\"\n\n  storageClass:\n    persist:\n      # Workbench Operator Persistent Volume Storage Class\n      provisioner: \"kubernetes.io/aws-ebs\"\n    procs:\n      # Workbench Operator Procs Volume Storage Class\n      provisioner: \"kubernetes.io/aws-ebs\"\n    cache:\n      # Workbench Operator Cache Volume Storage Class\n      provisioner: \"kubernetes.io/aws-ebs\"\n

15 storageClass: \"gp2\" - sets the appropriate storageClass for Amazon EKS Persistent Volume (PV)

20 provisioner: \"kubernetes.io/aws-ebs\" - sets the appropriate disk provisioner for the DB (Persist) filesystem for Amazon EKS

23 provisioner: \"kubernetes.io/aws-ebs\" - sets the appropriate disk provisioner for the DB Procs filesystem for Amazon EKS

26 provisioner: \"kubernetes.io/aws-ebs\" - sets the appropriate disk provisioner for the DB Cache filesystem for Amazon EKS

On-Prem values.yaml
namespace: kinetica-system\n\ndb:\n  serviceAccount: {}\n  image:\n    # Kinetica DB Operator installer image\n    repository: \"registry.harbor.kinetica.com/kinetica/kinetica-k8s-operator\"\n    #  Kinetica DB Operator installer image tag\n    tag: \"\"\n\n  parameters:\n    # <base64 encode of kubeconfig> of the Kubernetes Cluster to deploy to\n    kubeconfig: \"\"\n    # the type of installation e.g. aks, eks, local\n    environment: \"local\"\n    # The storage class to use for PVCs\n    storageClass: \"standard\"\n\n  storageClass:\n    procs: {}\n    persist: {}\n    cache: {}\n

15 environment: \"local\" - tells the DB Operator to deploy the DB as a 'local' instance to the Kubernetes Cluster

17 storageClass: \"standard\" - sets the appropriate storageClass for the On-Prem Persistent Volume Provisioner

storageClass

The storageClass should be present in the target environment.

A list of available storageClass can be obtained using: -

Bash
kubectl get sc\n
"},{"location":"Operators/kinetica-operators/#components","title":"Components","text":"

The kinetica-operators Helm Chart wraps the deployment of a number of sub-components: -

  • Porter Operator
  • Kinetica Database Operator
  • Kinetica Workbench Operator

Installation, upgrading and deletion of the Kinetica Operators are done via two CRs which leverage porter.sh as the orchestrator. The corresponding Porter Operator, DB Operator & Workbench Operator CRs are submitted by running the appropriate Helm command (see the example commands after this list), i.e.

  • install
  • upgrade
  • uninstall
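
For example, upgrading to a newer chart version or removing the operators entirely re-submits or removes those CRs via the standard Helm verbs:

Upgrade / uninstall the operators
# upgrade to the latest released chart version\nhelm -n kinetica-system upgrade kinetica-operators kinetica-operators/kinetica-operators\n\n# remove the operators\nhelm -n kinetica-system uninstall kinetica-operators\n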
"},{"location":"Operators/kinetica-operators/#porter-operator","title":"Porter Operator","text":""},{"location":"Operators/kinetica-operators/#database-operator","title":"Database Operator","text":"

The Kinetica DB Operator installation CR for the porter.sh operator is: -

YAML
apiVersion: porter.sh/v1\nkind: Installation\nmetadata:\n  annotations:\n    meta.helm.sh/release-name: kinetica-operators\n    meta.helm.sh/release-namespace: kinetica-system\n  labels:\n    app.kubernetes.io/instance: kinetica-operators\n    app.kubernetes.io/managed-by: Helm\n    app.kubernetes.io/name: kinetica-operators\n    app.kubernetes.io/version: 0.1.0\n    helm.sh/chart: kinetica-operators-0.1.0\n    installVersion: 0.38.10\n  name: kinetica-operators-operator-install\n  namespace: kinetica-system\nspec:\n  action: install\n  agentConfig:\n    volumeSize: '0'\n  parameters:\n    environment: local\n    storageclass: managed-premium\n  reference: docker.io/kinetica/kinetica-k8s-operator:v7.1.9-7.rc3\n
"},{"location":"Operators/kinetica-operators/#workbench-operator","title":"Workbench Operator","text":"

The Kinetica Workbench installation CR for the porter.sh operator is: -

YAML
apiVersion: porter.sh/v1\nkind: Installation\nmetadata:\n  annotations:\n    meta.helm.sh/release-name: kinetica-operators\n    meta.helm.sh/release-namespace: kinetica-system\n  labels:\n    app.kubernetes.io/instance: kinetica-operators\n    app.kubernetes.io/managed-by: Helm\n    app.kubernetes.io/name: kinetica-operators\n    app.kubernetes.io/version: 0.1.0\n    helm.sh/chart: kinetica-operators-0.1.0\n    installVersion: 0.38.10\n  name: kinetica-operators-wb-operator-install\n  namespace: kinetica-system\nspec:\n  action: install\n  agentConfig:\n    volumeSize: '0'\n  parameters:\n    environment: local\n  reference: docker.io/kinetica/workbench-operator:v7.1.9-7.rc3\n
"},{"location":"Operators/kinetica-operators/#overriding-images-tags","title":"Overriding Images Tags","text":"Bash
helm install -n kinetica-system kinetica-operators kinetica-operators/kinetica-operators \\\n--create-namespace \\\n--set provider=aks  \n--set dbOperator.image.tag=v7.1.9-7.rc3 \\\n--set dbOperator.image.repository=docker.io/kinetica/kinetica-k8s-operator \\\n--set wbOperator.image.repository=docker.io/kinetica/workbench-operator \\\n--set wbOperator.image.tag=v7.1.9-7.rc3\n
"},{"location":"Reference/","title":"Reference Section","text":"
  • Kinetica Operators Helm

    Kinetica Operators Helm charts & values file reference data.

    Charts

  • Kinetica Core DB CRDs

    Kinetica DB Kubernetes CRD & ConfigMap reference data.

    Cluster CRDs

  • Kinetica Workbench CRDs

    Kinetica Workbench Kubernetes CRD & ConfigMap reference data.

"},{"location":"Reference/#workbench","title":"Workbench","text":""},{"location":"Reference/database/","title":"Kinetica Database Configuration","text":"
  • kubectl (yaml)
"},{"location":"Reference/database/#kineticacluster","title":"KineticaCluster","text":"

To deploy a new Database Instance into a Kubernetes cluster...

kubectl

Using kubectl, a CustomResource of type KineticaCluster is used to define a new Kinetica DB Cluster in a YAML file.

The basic Group, Version, Kind or GVK to instantiate a Kinetica DB Cluster is as follows: -

kineticacluster.yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\n
"},{"location":"Reference/database/#metadata","title":"Metadata","text":"

To this we add a metadata: block for the name of the DB CR, along with the namespace into which we are targeting the installation of the DB cluster.

kineticacluster.yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n  name: my-kinetica-db-cr\n  namespace: gpudb\nspec:\n
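
Once the remaining spec: fields (described below) are filled in, the CR is submitted like any other Kubernetes resource; a minimal sketch, assuming the file is saved as kineticacluster.yaml:

Apply the KineticaCluster CR
kubectl -n gpudb apply -f kineticacluster.yaml\n\n# confirm the cluster CR is present\nkubectl -n gpudb get kc\n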
"},{"location":"Reference/database/#spec","title":"Spec","text":"

Under the spec: section of the KineticaCluster CR we have a number of sections supporting different aspects of the deployed DB cluster: -

  • gpudbCluster
  • autoSuspend
  • gadmin
"},{"location":"Reference/database/#gpudbcluster","title":"gpudbCluster","text":"

Configuration items specific to the DB itself.

kineticacluster.yaml - gpudbCluster
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n  name: my-kinetica-db-cr\n  namespace: gpudb\nspec:\n  gpudbCluster:\n
"},{"location":"Reference/database/#gpudbcluster_1","title":"gpudbCluster","text":"cluster name & size
clusterName: kinetica-cluster \nclusterSize: \n  tshirtSize: M \n  tshirtType: LargeCPU \nfqdn: kinetica-cluster.saas.kinetica.com\nhaRingName: default\nhasPools: false    \n

1. clusterName - the user defined name of the Kinetica DB Cluster

2. clusterSize - block that defines the number of DB Ranks to run

3. tshirtSize - sets the cluster size to a defined size based upon the t-shirt size. Valid sizes are: -

  • XS - 1 DB Rank
  • S - 2 DB Ranks
  • M - 4 DB Ranks
  • L - 8 DB Ranks
  • XL - 16 DB Ranks
  • XXL - 32 DB Ranks
  • XXXL - 64 DB Ranks

4. tshirtType - block that defines the type of DB Ranks to run: -

  • SmallCPU -
  • LargeCPU -
  • SmallGPU -
  • LargeGPU -

5. fqdn - The fully qualified URL for the DB cluster. Used on the Ingress records for any exposed services.

6. haRingName - Default: default

7. hasPools - Whether to enable the separate node 'pools' for \"infra\", \"compute\" pod scheduling. Default: false +optional

"},{"location":"Reference/database/#autosuspend","title":"autoSuspend","text":"

The DB Cluster autoSuspend section allows the core DB Pods to be spun down to release the underlying Kubernetes nodes, reducing infrastructure costs when the DB is not in use.

kineticacluster.yaml - autoSuspend
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n  name: my-kinetica-db-cr\n  namespace: gpudb\nspec:\n  autoSuspend:\n    enabled: false\n    inactivityDuration: 1h0m0s\n

7. the start of the autoSuspend definition

8. enabled when set to true, auto-suspend of the DB cluster is enabled; when set to false, no automatic suspending of the DB takes place. If omitted, it defaults to false.

9. inactivityDuration the duration of DB inactivity after which the DB will be suspended.

Horizontal Pod Autoscaler

In order for autoSuspend to work correctly the Kubernetes Horizontal Pod Autoscaler needs to be deployed to the cluster.
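
A quick, hedged check that the metrics pipeline the HPA relies on is available (assuming the standard metrics-server backing the metrics.k8s.io API):

Check the metrics API
# the metrics APIService should report Available=True\nkubectl get apiservices v1beta1.metrics.k8s.io\n\n# and resource metrics should be returned\nkubectl top pods -n gpudb\n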

"},{"location":"Reference/database/#gadmin","title":"gadmin","text":"

GAdmin - the Database Administration Console.

kineticacluster.yaml - gadmin
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\n  name: my-kinetica-db-cr\n  namespace: gpudb\nspec:\n  gadmin:\n    containerPort:\n      containerPort: 8080\n      name: gadmin\n      protocol: TCP\n    isEnabled: true\n

7. gadmin configuration block definition

8. containerPort configuration block i.e. where gadmin is exposed on the DB Pod

9. containerPort the port number as an integer. Default: 8080

10. name the name of the port being exposed. Default: gadmin

11. protocol the network protocol used. Default: TCP

12. isEnabled whether gadmin is exposed from the DB pod. Default: true
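
Even without an ingress, the port above can be reached for a quick check by port-forwarding to the first DB pod (pod name as used earlier in this guide, assuming the default port):

Port-forward to GAdmin
# GAdmin is then available on http://localhost:8080\nkubectl -n gpudb port-forward pod/gpudb-0 8080:8080\n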

"},{"location":"Reference/database/#kineticauser","title":"KineticaUser","text":""},{"location":"Reference/database/#kineticagrant","title":"KineticaGrant","text":""},{"location":"Reference/database/#kineticaschema","title":"KineticaSchema","text":""},{"location":"Reference/database/#kineticaresourcegroup","title":"KineticaResourceGroup","text":""},{"location":"Reference/helm_kinetica_operators/","title":"Kinetica Operators Helm Chart Reference","text":""},{"location":"Reference/helm_kinetica_operators/#coming-soon","title":"Coming Soon","text":""},{"location":"Reference/kinetica_cluster_admins/","title":"Kinetica Cluster Admins Reference","text":""},{"location":"Reference/kinetica_cluster_admins/#full-kineticaclusteradmin-cr-structure","title":"Full KineticaClusterAdmin CR Structure","text":"kineticaclusteradmins.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterAdmin\nmetadata: {}\n# KineticaClusterAdminSpec defines the desired state of\n# KineticaClusterAdmin\nspec:\n  # ForceDBStatus - Force a Status of the DB.\n  forceDbStatus: string\n  # Name - The name of the cluster to target.\n  kineticaClusterName: string\n  # Offline - Pause/Resume of the DB.\n  offline:\n    # Set to true if desired state is offline. The supported values are:\n    # true false\n    offline: false\n    # Optional parameters. The default value is an empty map (\n    # {} ). Supported Parameters: flush_to_disk Flush to disk when\n    # going offline The supported values are: true false\n    options: {}\n  # Rebalance of the DB.\n  rebalance:\n    # Optional parameters. The default value is an empty map (\n    # {} ). Supported Parameters: rebalance_sharded_data        If true,\n    # sharded data will be rebalanced approximately equally across the\n    # cluster. Note that for clusters with large amounts of sharded\n    # data, this data transfer could be time-consuming and result in\n    # delayed query responses. The default value is true. The supported\n    # values are: true false rebalance_unsharded_data   If true,\n    # unsharded data (a.k.a. randomly-sharded) will be rebalanced\n    # approximately equally across the cluster. Note that for clusters\n    # with large amounts of unsharded data, this data transfer could be\n    # time-consuming and result in delayed query responses. The default\n    # value is true. The supported values are: true false\n    # table_includes                Comma-separated list of unsharded table names\n    # to rebalance. Not applicable to sharded tables because they are\n    # always rebalanced. Cannot be used simultaneously with\n    # table_excludes. This parameter is ignored if\n    # rebalance_unsharded_data is false.\n    # table_excludes                Comma-separated list of unsharded table names\n    # to not rebalance. Not applicable to sharded tables because they\n    # are always rebalanced. Cannot be used simultaneously with\n    # table_includes. This parameter is ignored if rebalance_\n    # unsharded_data is false. aggressiveness               Influences how much\n    # data is moved at a time during rebalance. A higher aggressiveness\n    # will complete the rebalance faster. A lower aggressiveness will\n    # take longer but allow for better interleaving between the\n    # rebalance and other queries. Valid values are constants from 1\n    # (lowest) to 10 (highest). The default value is '1'.\n    # compact_after_rebalance   Perform compaction of deleted records\n    # once the rebalance completes to reclaim memory and disk space.\n    # Default is true, unless repair_incorrectly_sharded_data is set to\n    # true. The default value is true. 
The supported values are: true\n    # false compact_only                If set to true, ignore rebalance options\n    # and attempt to perform compaction of deleted records to reclaim\n    # memory and disk space without rebalancing first. The default\n    # value is false. The supported values are: true false\n    # repair_incorrectly_sharded_data       Scans for any data sharded\n    # incorrectly and re-routes the data to the correct location. Only\n    # necessary if /admin/verifydb reports an error in sharding\n    # alignment. This can be done as part of a typical rebalance after\n    # expanding the cluster or in a standalone fashion when it is\n    # believed that data is sharded incorrectly somewhere in the\n    # cluster. Compaction will not be performed by default when this is\n    # enabled. If this option is set to true, the time necessary to\n    # rebalance and the memory used by the rebalance may increase. The\n    # default value is false. The supported values are: true false\n    options: {}\n  # RegenerateDBConfig - Force regenerate of DB ConfigMap. true -\n  # restarts DB Pods after config generation false - writes new\n  # configuration without restarting the DB Pods\n  regenerateDBConfig:\n    # Restart - Scales down the DB STS and back up once the DB\n    # Configuration has been regenerated.\n    restart: false\n# KineticaClusterAdminStatus defines the observed state of\n# KineticaClusterAdmin\nstatus:\n  # Phase - The current phase/state of the Admin request\n  phase: string\n  # Processed - Indicates if the admin request has already been\n  # processed. Avoids the request being rerun in the case the Operator\n  # gets restarted.\n  processed: false\n
"},{"location":"Reference/kinetica_cluster_backups/","title":"Kinetica Cluster Backups Reference","text":""},{"location":"Reference/kinetica_cluster_backups/#full-kineticaclusterbackup-cr-structure","title":"Full KineticaClusterBackup CR Structure","text":"kineticaclusterbackups.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterBackup \nmetadata: {}\n# Fields specific to the linked backup engine\nprovider:\n  # Name of the backup/restore provider. FOR INTERNAL USE ONLY.\n  backupProvider: \"velero\"\n  # Name of the backup in the linked BackupProvider. FOR INTERNAL USE\n  # ONLY.\n  linkedItemName: \"\"\n# BackupSpec defines the specification for a Velero backup.\nspec:\n  # DefaultVolumesToRestic specifies whether restic should be used to\n  # take a backup of all pod volumes by default.\n  defaultVolumesToRestic: true\n  # ExcludedNamespaces contains a list of namespaces that are not\n  # included in the backup.\n  excludedNamespaces: [\"string\"]\n  # ExcludedResources is a slice of resource names that are not included\n  # in the backup.\n  excludedResources: [\"string\"]\n  # Hooks represent custom behaviors that should be executed at\n  # different phases of the backup.\n  hooks:\n    # Resources are hooks that should be executed when backing up\n    # individual instances of a resource.\n    resources:\n    - excludedNamespaces: [\"string\"]\n      # ExcludedResources specifies the resources to which this hook\n      # spec does not apply.\n      excludedResources: [\"string\"]\n      # IncludedNamespaces specifies the namespaces to which this hook\n      # spec applies. If empty, it applies to all namespaces.\n      includedNamespaces: [\"string\"]\n      # IncludedResources specifies the resources to which this hook\n      # spec applies. If empty, it applies to all resources.\n      includedResources: [\"string\"]\n      # LabelSelector, if specified, filters the resources to which this\n      # hook spec applies.\n      labelSelector:\n        # matchExpressions is a list of label selector requirements. The\n        # requirements are ANDed.\n        matchExpressions:\n        - key: string\n          # operator represents a key's relationship to a set of values.\n          # Valid operators are In, NotIn, Exists and DoesNotExist.\n          operator: string\n          # values is an array of string values. If the operator is In\n          # or NotIn, the values array must be non-empty. If the\n          # operator is Exists or DoesNotExist, the values array must\n          # be empty. This array is replaced during a strategic merge\n          # patch.\n          values: [\"string\"]\n        # matchLabels is a map of {key,value} pairs. A single\n        # {key,value} in the matchLabels map is equivalent to an\n        # element of matchExpressions, whose key field is \"key\", the\n        # operator is \"In\", and the values array contains only \"value\".\n        # The requirements are ANDed.\n        matchLabels: {}\n      # Name is the name of this hook.\n      name: string\n      # PostHooks is a list of BackupResourceHooks to execute after\n      # storing the item in the backup. 
These are executed after\n      # all \"additional items\" from item actions are processed.\n      post:\n      - exec:\n          # Command is the command and arguments to execute.\n          command: [\"string\"]\n          # Container is the container in the pod where the command\n          # should be executed. If not specified, the pod's first\n          # container is used.\n          container: string\n          # OnError specifies how Velero should behave if it encounters\n          # an error executing this hook.\n          onError: string\n          # Timeout defines the maximum amount of time Velero should\n          # wait for the hook to complete before considering the\n          # execution a failure.\n          timeout: string\n      # PreHooks is a list of BackupResourceHooks to execute prior to\n      # storing the item in the backup. These are executed before\n      # any \"additional items\" from item actions are processed.\n      pre:\n      - exec:\n          # Command is the command and arguments to execute.\n          command: [\"string\"]\n          # Container is the container in the pod where the command\n          # should be executed. If not specified, the pod's first\n          # container is used.\n          container: string\n          # OnError specifies how Velero should behave if it encounters\n          # an error executing this hook.\n          onError: string\n          # Timeout defines the maximum amount of time Velero should\n          # wait for the hook to complete before considering the\n          # execution a failure.\n          timeout: string\n  # IncludeClusterResources specifies whether cluster-scoped resources\n  # should be included for consideration in the backup.\n  includeClusterResources: true\n  # IncludedNamespaces is a slice of namespace names to include objects\n  # from. If empty, all namespaces are included.\n  includedNamespaces: [\"string\"]\n  # IncludedResources is a slice of resource names to include in the\n  # backup. If empty, all resources are included.\n  includedResources: [\"string\"]\n  # LabelSelector is a metav1.LabelSelector to filter with when adding\n  # individual objects to the backup. If empty or nil, all objects are\n  # included. Optional.\n  labelSelector:\n    # matchExpressions is a list of label selector requirements. The\n    # requirements are ANDed.\n    matchExpressions:\n    - key: string\n      # operator represents a key's relationship to a set of values.\n      # Valid operators are In, NotIn, Exists and DoesNotExist.\n      operator: string\n      # values is an array of string values. If the operator is In or\n      # NotIn, the values array must be non-empty. If the operator is\n      # Exists or DoesNotExist, the values array must be empty. This\n      # array is replaced during a strategic merge patch.\n      values: [\"string\"]\n    # matchLabels is a map of {key,value} pairs. A single {key,value} in\n    # the matchLabels map is equivalent to an element of\n    # matchExpressions, whose key field is \"key\", the operator is \"In\",\n    # and the values array contains only \"value\". The requirements are\n    # ANDed.\n    matchLabels: {} metadata: labels: {}\n  # OrderedResources specifies the backup order of resources of specific\n  # Kind. The map key is the Kind name and value is a list of resource\n  # names separated by commas. Each resource name has\n  # format \"namespace/resourcename\".  
For cluster resources, simply\n  # use \"resourcename\".\n  orderedResources: {}\n  # SnapshotVolumes specifies whether to take cloud snapshots of any\n  # PV's referenced in the set of objects included in the Backup.\n  snapshotVolumes: true\n  # StorageLocation is a string containing the name of a\n  # BackupStorageLocation where the backup should be stored.\n  storageLocation: string\n  # TTL is a time.Duration-parseable string describing how long the\n  # Backup should be retained for.\n  ttl: string\n  # VolumeSnapshotLocations is a list containing names of\n  # VolumeSnapshotLocations associated with this backup.\n  volumeSnapshotLocations: [\"string\"] status:\n  # ClusterSize the current number of ranks & type i.e. CPU or GPU of\n  # the cluster when the backup took place.\n  clusterSize:\n    # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a\n    # representation of the number of nodes in a simple to understand\n    # T-Short size scheme. This indicates the size of the cluster i.e.\n    # the number of nodes. It does not identify the size of the cloud\n    # provider nodes. For node size see ClusterTypeEnum. Supported\n    # Values are: - XS S M L XL XXL XXXL\n    tshirtSize: string\n    # ClusterTypeEnum - An Enum of the node types of a KineticaCluster\n    # e.g. CPU, GPU along with the Cloud Provider node size e.g. size\n    # of the VM.\n    tshirtType: string coldTierBackup: string\n  # CompletionTimestamp records the time a backup was completed.\n  # Completion time is recorded even on failed backups. Completion time\n  # is recorded before uploading the backup object. The server's time\n  # is used for CompletionTimestamps\n  completionTimestamp: string\n  # Errors is a count of all error messages that were generated during\n  # execution of the backup.  The actual errors are in the backup's log\n  # file in object storage.\n  errors: 1\n  # Expiration is when this Backup is eligible for garbage-collection.\n  expiration: string\n  # FormatVersion is the backup format version, including major, minor,\n  # and patch version.\n  formatVersion: string\n  # Phase is the current state of the Backup.\n  phase: string\n  # Progress contains information about the backup's execution progress.\n  # Note that this information is best-effort only -- if Velero fails\n  # to update it during a backup for any reason, it may be\n  # inaccurate/stale.\n  progress:\n    # ItemsBackedUp is the number of items that have actually been\n    # written to the backup tarball so far.\n    itemsBackedUp: 1\n    # TotalItems is the total number of items to be backed up. This\n    # number may change throughout the execution of the backup due to\n    # plugins that return additional related items to back up, the\n    # velero.io/exclude-from-backup label, and various other filters\n    # that happen as items are processed.\n    totalItems: 1\n  # StartTimestamp records the time a backup was started. Separate from\n  # CreationTimestamp, since that value changes on restores. The\n  # server's time is used for StartTimestamps\n  startTimestamp: string\n  # ValidationErrors is a slice of all validation errors\n  # (if applicable).\n  validationErrors: [\"string\"]\n  # Version is the backup format major version. 
Deprecated: Please see\n  # FormatVersion\n  version: 1\n  # VolumeSnapshotsAttempted is the total number of attempted volume\n  # snapshots for this backup.\n  volumeSnapshotsAttempted: 1\n  # VolumeSnapshotsCompleted is the total number of successfully\n  # completed volume snapshots for this backup.\n  volumeSnapshotsCompleted: 1\n  # Warnings is a count of all warning messages that were generated\n  # during execution of the backup. The actual warnings are in the\n  # backup's log file in object storage.\n  warnings: 1\n
"},{"location":"Reference/kinetica_cluster_grants/","title":"Kinetica Cluster Grants CRD Reference","text":""},{"location":"Reference/kinetica_cluster_grants/#full-kineticagrant-cr-structure","title":"Full KineticaGrant CR Structure","text":"kineticagrants.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaGrant \nmetadata: {}\n# KineticaGrantSpec defines the desired state of KineticaGrant\nspec:\n  # Grants system-level and/or table permissions to a user or role.\n  addGrantAllOnSchemaRequest:\n    # Name of the user or role that will be granted membership in input\n    # parameter role. Must be an existing user or role.\n    member: string\n    # Optional parameters. The default value is an empty map ( {} ).\n    options: {}\n    # SchemaName - name of the schema on which to perform the Grant All\n    schemaName: string\n  # Grants system-level and/or table permissions to a user or role.\n  addGrantPermissionRequest:\n    # Optional parameters. The default value is an empty map ( {} ).\n    options: {}\n    # Permission to grant to the user or role. Supported\n    # Values    Description system_admin    Full access to all data and\n    # system functions. system_user_admin   Access to administer users\n    # and roles that do not have system_admin permission.\n    # system_write  Read and write access to all tables.\n    # system_read   Read-only access to all tables.\n    systemPermission:\n      # UID of the user or role to which the permission will be granted.\n      # Must be an existing user or role.\n      name: string\n      # Optional parameters. The default value is an empty map (\n      # {} ). Supported Parameters: resource_group  Name of an existing\n      # resource group to associate with this role.\n      options: {}\n      # Permission to grant to the user or role. Supported\n      # Values  Description table_admin Full read/write and\n      # administrative access to the table. table_insert    Insert access\n      # to the table. table_update  Update access to the table.\n      # table_delete    Delete access to the table. table_read  Read access\n      # to the table.\n      permission: string\n    # Permission to grant to the user or role. Supported\n    # Values    Description<br/> system_admin   Full access to all data and\n    # system functions.<br/> system_user_admin  Access to administer\n    # users and roles that do not have system_admin permission.<br/>\n    # system_write  Read and write access to all tables.<br/>\n    # system_read   Read-only access to all tables.<br/>\n    tablePermissions:\n    - filter_expression: \"\"\n      # UID of the user or role to which the permission will be granted.\n      # Must be an existing user or role.\n      name: string\n      # Optional parameters. The default value is an empty map (\n      # {} ). Supported Parameters: resource_group  Name of an existing\n      # resource group to associate with this role.\n      options: {}\n      # Permission to grant to the user or role. Supported\n      # Values  Description table_admin Full read/write and\n      # administrative access to the table. table_insert    Insert access\n      # to the table. 
table_update  Update access to the table.\n      # table_delete    Delete access to the table. table_read  Read access\n      # to the table.\n      permission: string\n      # Name of the table for which the Permission is to be granted\n      table_name: string\n  # Grants membership in a role to a user or role.\n  addGrantRoleRequest:\n    # Name of the user or role that will be granted membership in input\n    # parameter role. Must be an existing user or role.\n    member: string\n    # Optional parameters. The default value is an empty map ( {} ).\n    options: {}\n    # Name of the role in which membership will be granted. Must be an\n    # existing role.\n    role: string\n  # Debug debug the call\n  debug: false\n  # RingName is the name of the kinetica ring that this user belongs\n  # to.\n  ringName: string\n# KineticaGrantStatus defines the observed state of KineticaGrant\nstatus:\n  # DBStringResponse - The GPUdb server embeds the endpoint response\n  # inside a standard response structure which contains status\n  # information and the actual response to the query.\n  db_response: data: string\n    # This embedded JSON represents the result of the endpoint\n    data_str: string\n    # API Call Specific\n    data_type: string\n    # Empty if success or an error message\n    message: string\n    # 'OK' or 'ERROR'\n    status: string \n    ldap_response: string\n
"},{"location":"Reference/kinetica_cluster_reference/","title":"Kinetica Core DB CRDs","text":"
  • DB Clusters

    Core Kinetica Database Cluster Management CRD & sample CR.

    KineticaCluster

  • DB Users

    Kinetica Database User Management CRD & sample CR.

    KineticaUser

  • DB Roles

    Kinetica Database Role Management CRD & sample CR.

    KineticaRole

  • DB Schemas

    Kinetica Database Schema Management CRD & sample CR.

    KineticaSchema

  • DB Grants

    Kinetica Database Grant Management CRD & sample CR.

    KineticaGrant

  • DB Resource Groups

    Kinetica Database Resource Group Management CRD & sample CR.

    KineticaResourceGroup

  • DB Administration

    Kinetica Database Administration CRD & sample CR.

    KineticaAdmin

  • DB Backups

    Kinetica Database Backup Management CRD & sample CR.

    Note

    This requires Velero to be installed on the Kubernetes Cluster.

    KineticaBackup

  • DB Restore

    Kinetica Database Restore CRD & sample CR.

    Note

    This requires Velero to be installed on the Kubernetes Cluster.

    KineticaRestore

"},{"location":"Reference/kinetica_cluster_resource_groups/","title":"Kinetica Cluster Resource Groups CRD Reference","text":""},{"location":"Reference/kinetica_cluster_resource_groups/#full-kineticaresourcegroup-cr-structure","title":"Full KineticaResourceGroup CR Structure","text":"kineticaclusterresourcegroups.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterResourceGroup \nmetadata: {}\n# KineticaClusterResourceGroupSpec defines the desired state of\n# KineticaClusterResourceGroup\nspec: \n  db_create_resource_group_request:\n    # AdjoiningResourceGroup -\n    adjoining_resource_group: \"\"\n    # Name - name of the DB ResourceGroup\n    # https://docs.kinetica.com/7.1/azure/sql/resource_group/?search-highlight=resource+group#id-baea5b60-769c-5373-bff1-53f4f1ca5c21\n    name: string\n    # Options - DB Options used when creating the ResourceGroup\n    options: {}\n    # Ranking - Indicates the relative ranking among existing resource\n    # groups where this new resource group will be placed. When using\n    # before or after, specify which resource group this one will be\n    # inserted before or after in input parameter\n    # adjoining_resource_group. The supported values are: first last\n    # before after\n    ranking: \"\"\n  # RingName is the name of the kinetica ring that this user belongs\n  # to.\n  ringName: string\n# KineticaClusterResourceGroupStatus defines the observed state of\n# KineticaClusterResourceGroup\nstatus: \n  provisioned: string\n
"},{"location":"Reference/kinetica_cluster_restores/","title":"Kinetica Cluster Restores Reference","text":""},{"location":"Reference/kinetica_cluster_restores/#full-kineticaclusterrestore-cr-structure","title":"Full KineticaClusterRestore CR Structure","text":"kineticaclusterrestores.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterRestore \nmetadata: {}\n# RestoreSpec defines the specification for a Velero restore.\nspec:\n  # BackupName is the unique name of the Velero backup to restore from.\n  backupName: string\n  # ExcludedNamespaces contains a list of namespaces that are not\n  # included in the restore.\n  excludedNamespaces: [\"string\"]\n  # ExcludedResources is a slice of resource names that are not included\n  # in the restore.\n  excludedResources: [\"string\"]\n  # IncludeClusterResources specifies whether cluster-scoped resources\n  # should be included for consideration in the restore. If null,\n  # defaults to true.\n  includeClusterResources: true\n  # IncludedNamespaces is a slice of namespace names to include objects\n  # from. If empty, all namespaces are included.\n  includedNamespaces: [\"string\"]\n  # IncludedResources is a slice of resource names to include in the\n  # restore. If empty, all resources in the backup are included.\n  includedResources: [\"string\"]\n  # LabelSelector is a metav1.LabelSelector to filter with when\n  # restoring individual objects from the backup. If empty or nil, all\n  # objects are included. Optional.\n  labelSelector:\n    # matchExpressions is a list of label selector requirements. The\n    # requirements are ANDed.\n    matchExpressions:\n    - key: string\n      # operator represents a key's relationship to a set of values.\n      # Valid operators are In, NotIn, Exists and DoesNotExist.\n      operator: string\n      # values is an array of string values. If the operator is In or\n      # NotIn, the values array must be non-empty. If the operator is\n      # Exists or DoesNotExist, the values array must be empty. This\n      # array is replaced during a strategic merge patch.\n      values: [\"string\"]\n    # matchLabels is a map of {key,value} pairs. A single {key,value} in\n    # the matchLabels map is equivalent to an element of\n    # matchExpressions, whose key field is \"key\", the operator is \"In\",\n    # and the values array contains only \"value\". The requirements are\n    # ANDed.\n    matchLabels: {}\n  # NamespaceMapping is a map of source namespace names to target\n  # namespace names to restore into. Any source namespaces not included\n  # in the map will be restored into namespaces of the same name.\n  namespaceMapping: {}\n  # RestorePVs specifies whether to restore all included PVs from\n  # snapshot (via the cloudprovider).\n  restorePVs: true\n  # ScheduleName is the unique name of the Velero schedule to restore\n  # from. If specified, and BackupName is empty, Velero will restore\n  # from the most recent successful backup created from this schedule.\n  scheduleName: string status: coldTierRestore: \"\"\n  # CompletionTimestamp records the time the restore operation was\n  # completed. Completion time is recorded even on failed restore. 
The\n  # server's time is used for StartTimestamps\n  completionTimestamp: string\n  # Errors is a count of all error messages that were generated during\n  # execution of the restore. The actual errors are stored in object\n  # storage.\n  errors: 1\n  # FailureReason is an error that caused the entire restore to fail.\n  failureReason: string\n  # Phase is the current state of the Restore\n  phase: string\n  # Progress contains information about the restore's execution\n  # progress. Note that this information is best-effort only -- if\n  # Velero fails to update it during a restore for any reason, it may\n  # be inaccurate/stale.\n  progress:\n    # ItemsRestored is the number of items that have actually been\n    # restored so far\n    itemsRestored: 1\n    # TotalItems is the total number of items to be restored. This\n    # number may change throughout the execution of the restore due to\n    # plugins that return additional related items to restore\n    totalItems: 1\n  # StartTimestamp records the time the restore operation was started.\n  # The server's time is used for StartTimestamps\n  startTimestamp: string\n  # ValidationErrors is a slice of all validation errors(if applicable)\n  validationErrors: [\"string\"]\n  # Warnings is a count of all warning messages that were generated\n  # during execution of the restore. The actual warnings are stored in\n  # object storage.\n  warnings: 1\n
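To complement the full structure above, here is a minimal, illustrative KineticaClusterRestore sketch. The metadata values and the backup name are assumptions for the example only and must be replaced with a namespace and an existing Velero backup from your own environment; the manifest can then be applied with kubectl apply -f restore.yaml.
apiVersion: app.kinetica.com/v1
kind: KineticaClusterRestore
metadata:
  name: restore-from-nightly   # illustrative name
  namespace: gpudb             # illustrative namespace
spec:
  backupName: nightly-backup   # name of an existing Velero backup (assumed)
  includeClusterResources: true
  restorePVs: true             # restore included PVs from snapshot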
"},{"location":"Reference/kinetica_cluster_roles/","title":"Kinetica Cluster Roles CRD","text":""},{"location":"Reference/kinetica_cluster_roles/#full-kineticarole-cr-structure","title":"Full KineticaRole CR Structure","text":"kineticaroles.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaRole \nmetadata: {}\n# KineticaRoleSpec defines the desired state of KineticaRole\nspec:\n  # AlterRoleRequest Kinetica DB REST API Request Format Object.\n  alter_role:\n    # Action - Modification operation to be applied to the role.\n    action: string\n    # Role UID - Name of the role to be altered. Must be an existing\n    # role.\n    name: string\n    # Optional parameters. The default value is an empty map ( {} ).\n    options: {}\n    # Value - The value of the modification, depending on input\n    # parameter action.\n    value: string\n  # Debug - debug the call\n  debug: false\n  # RingName is the name of the kinetica ring that this role belongs\n  # to.\n  ringName: string\n  # AddRoleRequest Kinetica DB REST API Request Format Object.\n  role:\n    # Name - Name of the role to be created.\n    name: string\n    # Optional parameters. The default value is an empty map ( {} ).\n    # Supported Parameters: resource_group - Name of an existing\n    # resource group to associate with this role.\n    options: {}\n    # ResourceGroupName of an existing resource group to associate with\n    # this role\n    resourceGroupName: \"\"\n# KineticaRoleStatus defines the observed state of KineticaRole\nstatus:\n  # DBStringResponse - The GPUdb server embeds the endpoint response\n  # inside a standard response structure which contains status\n  # information and the actual response to the query.\n  db_response:\n    data: string\n    # This embedded JSON represents the result of the endpoint\n    data_str: string\n    # API Call Specific\n    data_type: string\n    # Empty if success or an error message\n    message: string\n    # 'OK' or 'ERROR'\n    status: string \n    ldap_response: string\n
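A minimal, illustrative KineticaRole sketch follows; the role, ring, and namespace names are placeholders rather than values taken from this reference.
apiVersion: app.kinetica.com/v1
kind: KineticaRole
metadata:
  name: analysts-role          # illustrative
  namespace: gpudb             # illustrative
spec:
  ringName: kinetica-ring      # illustrative ring name
  role:
    name: analysts             # name of the role to create
    resourceGroupName: ""      # optionally, an existing resource group to associate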
"},{"location":"Reference/kinetica_cluster_schemas/","title":"Kinetica Cluster Schemas CRD Reference","text":""},{"location":"Reference/kinetica_cluster_schemas/#full-kinetica-cluster-schemas-cr-structure","title":"Full Kinetica Cluster Schemas CR Structure","text":"kineticaclusterschemas.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaClusterSchema \nmetadata: {}\n# KineticaClusterSchemaSpec defines the desired state of\n# KineticaClusterSchema\nspec: \n  db_create_schema_request:\n    # Name - the name of the schema to create in the DB\n    name: string\n    # Optional parameters. The default value is an empty map (\n    # {} ). Supported Parameters: \"max_cpu_concurrency\", \"max_data\"\n    options: {}\n  # RingName is the name of the kinetica ring that this schema belongs\n  # to.\n  ringName: string\n# KineticaClusterSchemaStatus defines the observed state of\n# KineticaClusterSchema\nstatus: \n  provisioned: string\n
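A minimal, illustrative KineticaClusterSchema sketch; all names are placeholders.
apiVersion: app.kinetica.com/v1
kind: KineticaClusterSchema
metadata:
  name: sales-schema           # illustrative
  namespace: gpudb             # illustrative
spec:
  ringName: kinetica-ring      # illustrative ring name
  db_create_schema_request:
    name: sales                # schema to create in the DB
    options: {}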
"},{"location":"Reference/kinetica_cluster_users/","title":"Kinetica Cluster Users CRD Reference","text":""},{"location":"Reference/kinetica_cluster_users/#full-kineticauser-cr-structure","title":"Full KineticaUser CR Structure","text":"kineticausers.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaUser\nmetadata: {}\n# KineticaUserSpec defines the desired state of KineticaUser\nspec:\n  # Action field contains UserActionEnum field indicating whether it is\n  # an Upsert or Change Password operation. For deletion, delete the\n  # KineticaUser CR and a finalizer will remove the user from LDAP.\n  action: string\n  # ChangePassword specific fields\n  changePassword:\n    # PasswordSecret - Not the actual user password but the name of a\n    # Kubernetes Secret containing a Data element with a Password\n    # attribute. The secret is removed on user creation. Must be in the\n    # same namespace as the Kinetica Cluster. Must contain the\n    # following fields: oldPassword, newPassword\n    passwordSecret: string\n  # Debug - debug the call\n  debug: false\n  # GroupID - Organisation or Team Id the user belongs to.\n  groupId: string\n  # Create the user in Reveal\n  reveal: true\n  # RingName is the name of the kinetica ring that this user belongs\n  # to.\n  ringName: string\n  # UID is the username (not the UUID).\n  uid: string\n  # Upsert specific fields\n  upsert:\n    # CreateHomeDirectory - when true, a home directory in KiFS is\n    # created for this user. The default value is true. The supported\n    # values are: true, false\n    createHomeDirectory: true\n    # DB Memory user data size limit\n    dataLimit: \"10Gi\"\n    # DisplayName\n    displayName: string\n    # GivenName is the first name (also called given or Christian\n    # name); givenName in LDAP terms.\n    givenName: string\n    # KiFS user data size limit\n    kifsDataLimit: \"2Gi\"\n    # LastName refers to the last name or surname; sn in LDAP terms.\n    lastName: string\n    # Options -\n    options: {}\n    # PasswordSecret - Not the actual user password but the name of a\n    # Kubernetes Secret containing a Data element with a Password\n    # attribute. The secret is removed on user creation. Must be in the\n    # same namespace as the Kinetica Cluster.\n    passwordSecret: string\n    # UPN or UserPrincipalName - e.g. guyt@cp.com  \n    # Looks like an email address.\n    userPrincipalName: string\n  # UUID is the user's unique UUID from the Control Plane.\n  uuid: string\n# KineticaUserStatus defines the observed state of KineticaUser\nstatus:\n  # DBStringResponse - The GPUdb server embeds the endpoint response\n  # inside a standard response structure which contains status\n  # information and the actual response to the query.\n  db_response:\n    data: string\n    # This embedded JSON represents the result of the endpoint\n    data_str: string\n    # API Call Specific\n    data_type: string\n    # Empty if success or an error message\n    message: string\n    # 'OK' or 'ERROR'\n    status: string \n    ldap_response: string \n    reveal_admin: string\n
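A minimal, illustrative KineticaUser upsert sketch. The action value, user details, and Secret name are assumptions for the example; the referenced Secret must already exist in the cluster's namespace and carry the password attribute described above.
apiVersion: app.kinetica.com/v1
kind: KineticaUser
metadata:
  name: jdoe                        # illustrative
  namespace: gpudb                  # illustrative
spec:
  action: upsert                    # assumed UserActionEnum spelling for create/update
  ringName: kinetica-ring           # illustrative ring name
  uid: jdoe                         # the username
  reveal: true                      # also create the user in Reveal
  upsert:
    displayName: Jane Doe
    givenName: Jane
    lastName: Doe
    userPrincipalName: jdoe@example.com
    passwordSecret: jdoe-password   # existing Kubernetes Secret holding the password
    createHomeDirectory: true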
"},{"location":"Reference/kinetica_clusters/","title":"Kinetica Clusters CRD Reference","text":"

This page covers the Kinetica Cluster Kubernetes CRD.
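For orientation, a deliberately minimal KineticaCluster sketch is shown here before the exhaustive field reference; every value is an illustrative placeholder (cluster name, namespace, T-shirt size), and real deployments typically set many more of the fields documented below.
apiVersion: app.kinetica.com/v1
kind: KineticaCluster
metadata:
  name: my-kinetica           # illustrative
  namespace: gpudb            # illustrative
spec:
  autoSuspend:
    enabled: false            # do not auto-pause the DB cluster when idle
  gpudbCluster:
    clusterName: my-kinetica  # illustrative cluster name
    clusterSize:
      tshirtSize: XS          # one of XS S M L XL XXL XXXL
      # tshirtType: <ClusterTypeEnum> - node type/size; see the reference below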

"},{"location":"Reference/kinetica_clusters/#full-kineticacluster-cr-structure","title":"Full KineticaCluster CR Structure","text":"kineticaclusters.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an\n# object. Servers should convert recognized schemas to the latest\n# internal value, and may reject unrecognized values. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\napiVersion: app.kinetica.com/v1\n# Kind is a string value representing the REST resource this object\n# represents. Servers may infer this from the endpoint the client\n# submits requests to. Cannot be updated. In CamelCase. More info:\n# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\nkind: KineticaCluster \nmetadata: {}\n# KineticaClusterSpec defines the configuration for KineticaCluster DB\nspec:\n  # An optional duration after which the database is stopped and DB\n  # resources are freed\n  autoSuspend: \n    enabled: false\n    # InactivityDuration - the duration which the cluster should be idle\n    # before auto-pausing the DB Cluster.\n    inactivityDuration: \"1h\"\n  # The platform infrastructure provider e.g. azure, aws, gcp, on-prem\n  # etc.\n  awsConfig:\n    # ClusterName - AWS name of the EKS Cluster. NOTE: Marked as\n    # optional but is mandatory\n    clusterName: string\n    # MarketplaceAppConfig - Amazon AWS specific DB Cluster\n    # information.\n    marketplaceApp:\n      # KmsKeyId - Key for disk encryption. The full Amazon Resource\n      # Name of the key to use when encrypting the volume. If none is\n      # supplied but encrypted is true, a key is generated by AWS. See\n      # AWS docs for valid ARN value.\n      kmsKeyId: string\n      # ProductCode - used to uniquely identify a product in AWS\n      # Marketplace. The product code should be the same as the one\n      # used during the publishing of a new product.\n      productCode: \"1cmucncoyp9pi8xjdwqjimlf8\"\n      # PublicKeyVersion - Public Key Version provided by AWS\n      # Marketplace\n      publicKeyVersion: 1\n      # ParentResourceGroup - The resource group of the ManagedApp\n      # itself ParentResourceGroup     string\n      # `json:\"parentResourceGroup\"` ResourceId - Identifier of the\n      # resource against which usage is emitted Format is GUID\n      # (UUID)\n      # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json\n      # Optional only if that exactly of  ResourceId or ResourceUri is\n      # specified.\n      resourceId: string\n    # NodeGroups - List of NodeGroups for this cluster MUST contain at\n    # least one of the following keys: - \n    #   * none\n    #   * infra \n    #   * infra_public \n    #   * compute \n    #   * compute-gpu \n    #   * aaw_cpu \n    # NOTE: Marked as optional but is mandatory\n    nodeGroups: {}\n    # OTELTracing - OpenTelemetry Tracing Specifics\n    otelTracing:\n      # Endpoint - Set the OpenTelemetry reporting Endpoint\n      endpoint: \"\"\n      # Key - KineticaCluster specific Key required to send Telemetry\n      # information to the Cloud\n      key: string\n      # MaxBatchSize - Telemetry Reporting Interval to use in seconds.\n      maxBatchInterval: 10\n      # MaxBatchSize - Telemetry Maximum Batch Size to send.\n      maxBatchSize: 1024\n  # The platform infrastructure provider e.g. 
azure, aws, gcp, on-prem\n  # etc.\n  azureConfig:\n    # App Insights Specifics\n    appInsights:\n      # Endpoint - Override the default AppInsights reporting Endpoint\n      endpoint: \"\"\n      # Key - KineticaCluster specific Application Insights Key required\n      # to send Telemetry information to the Azure Portal\n      key: string\n      # MaxBatchSize - Telemetry Reporting Interval to use in seconds.\n      maxBatchInterval: 10\n      # MaxBatchSize - Telemetry Maximum Batch Size to send.\n      maxBatchSize: 1024\n    # AzureManagedAppConfig - Microsoft Azure specific DB Cluster\n    # information.\n    managedApp:\n      # DiskEncryptionSetID - By default, managed disks use\n      # platform-managed encryption keys. All managed disks, snapshots,\n      # images, and data written to existing managed disks are\n      # automatically encrypted-at-rest with platform-managed keys. You\n      # can choose to manage encryption at the level of each managed\n      # disk, with your own keys. When you specify a customer-managed\n      # key, that key is used to protect and control access to the key\n      # that encrypts your data. Customer-managed keys offer greater\n      # flexibility to manage access controls.\n      diskEncryptionSetId: string\n      # PlanId - The Azure Marketplace Plan/Offer identifier selected by\n      # the customer for this DB cluster e.g. BYOL, Pay-As-You-Go etc.\n      planId: string\n      # ParentResourceGroup - The resource group of the ManagedApp\n      # itself ParentResourceGroup     string\n      # `json:\"parentResourceGroup\"` ResourceId - Identifier of the\n      # resource against which usage is emitted Format is GUID\n      # (UUID)\n      # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json\n      # Optional only if that exactly of  ResourceId or ResourceUri is\n      # specified.\n      resourceId: string\n      # ResourceUri - Identifier of the managed app resource against\n      # which usage is emitted\n      # https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json\n      # Optional only if that exactly of  ResourceId or ResourceUri is\n      # specified.\n      resourceUri: string\n  # Tells the operator we want to run in Debug mode.\n  debug: false\n  # Identifies the type of Kubernetes deployment.\n  deploymentType:\n    # CloudRegionEnum - The target Kubernetes type to deploy to.\n    # Supported Values are: - aws_useast_1 aws_useast_2 aws_uswest_1\n    # az_useast_1 az_uswest_1\n    region: string\n    # DeploymentTypeEnum - The type of the Deployment. Supported Values\n    # are: - Managed FreeSaaS DedicatedSaaS OnPrem\n    type: string\n  # The platform infrastructure provider e.g. azure, aws, gcp, on-prem\n  # etc.\n  devEditionConfig:\n    # Host IPv4 address. Used by KiND based Developer Edition where\n    # ingress paths set to *. Provides qualified, routable URLs to\n    # workbench.\n    hostIpAddress: \"\"\n  # The GAdmin Dashboard Configuration for the Kinetica Cluster.\n  gadmin:\n    # The port that GAdmin will be running on. It runs only on the head\n    # node pod in the cluster. Default: 8080\n    containerPort:\n      # Number of port to expose on the pod's IP address. 
This must be a\n      # valid port number, 0 < x < 65536.\n      containerPort: 1\n      # What host IP to bind the external port to.\n      hostIP: string\n      # Number of port to expose on the host. If specified, this must be\n      # a valid port number, 0 < x < 65536. If HostNetwork is\n      # specified, this must match ContainerPort. Most containers do\n      # not need this.\n      hostPort: 1\n      # If specified, this must be an IANA_SVC_NAME and unique within\n      # the pod. Each named port in a pod must have a unique name. Name\n      # for the port that can be referred to by services.\n      name: string\n      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n      # to \"TCP\".\n      protocol: \"TCP\"\n    # The Ingress Endpoint that GAdmin will be running on.\n    ingressPath:\n      # backend defines the referenced service endpoint to which the\n      # traffic will be forwarded to.\n      backend:\n        # resource is an ObjectRef to another Kubernetes resource in the\n        # namespace of the Ingress object. If resource is specified,\n        # serviceName and servicePort must not be specified.\n        resource:\n          # APIGroup is the group for the resource being referenced. If\n          # APIGroup is not specified, the specified Kind must be in\n          # the core API group. For any other third-party types,\n          # APIGroup is required.\n          apiGroup: string\n          # Kind is the type of resource being referenced\n          kind: KineticaCluster\n          # Name is the name of resource being referenced\n          name: string\n        # serviceName specifies the name of the referenced service.\n        serviceName: string\n        # servicePort Specifies the port of the referenced service.\n        servicePort: \n      # path is matched against the path of an incoming request.\n      # Currently it can contain characters disallowed from the\n      # conventional \"path\" part of a URL as defined by RFC 3986. Paths\n      # must begin with a '/' and must be present when using PathType\n      # with value \"Exact\" or \"Prefix\".\n      path: string\n      # pathType determines the interpretation of the path matching.\n      # PathType can be one of the following values: * Exact: Matches\n      # the URL path exactly. * Prefix: Matches based on a URL path\n      # prefix split by '/'. Matching is done on a path element by\n      # element basis. A path element refers is the list of labels in\n      # the path split by the '/' separator. A request is a match for\n      # path p if every p is an element-wise prefix of p of the request\n      # path. Note that if the last element of the path is a substring\n      # of the last element in request path, it is not a match\n      # (e.g. /foo/bar matches /foo/bar/baz, but does not\n      # match /foo/barbaz). * ImplementationSpecific: Interpretation of\n      # the Path matching is up to the IngressClass. Implementations\n      # can treat this as a separate PathType or treat it identically\n      # to Prefix or Exact path types. Implementations are required to\n      # support all path types. Defaults to ImplementationSpecific.\n      pathType: string\n    # Whether to enable the GAdmin Dashboard on the Cluster. 
Default:\n    # true\n    isEnabled: true\n  # Gaia - gaia.properties configuration\n  gaia: admin:\n      # AdminLoginOnlyGpudbDown - When GPUdb is down, only allow admin\n      # user to login\n      admin_login_only_gpudb_down: true\n      # Username - We do check for admin username in various places\n      admin_username: \"admin\"\n      # LoginAnimationEnabled - Display any animation in login page\n      login_animation_enabled: true\n      # AdminLoginOnlyGpudbDown - Convenience settings for dev mode\n      login_bypass_enabled: false\n      # RequireStrongPassword - Convenience settings for dev mode\n      require_strong_password: true\n      # SSLTruststorePasswordScript - Display any animation in login\n      # page\n      ssl_truststore_password_script: string\n    # DemoSchema - Schema-related configuration\n    demo_schema: \"demo\" gpudb:\n      # DataFileStringNullValue - Table import/export null value string\n      data_file_string_null_value: \"\\\\N\"\n      gpudb_ext_url: \"http://127.0.0.1:8082/gpudb-0\"\n      # URL - Current instance of gpudb, when running in HA mode change\n      # this to load balancer endpoint\n      gpudb_url: \"http://127.0.0.1:9191\"\n      # LoggingLogFileName - Which file to use when displaying logging\n      # on Cluster page.\n      logging_log_file_name: \"gpudb.log\"\n      # SampleRepoURL - Table import/export null value string\n      sample_repo_url: \"//s3.amazonaws.com/kinetica-ce-data\" hm:\n      gpudb_ext_hm_url: \"http://127.0.0.1:8082/gpudb-host-manager\"\n      gpudb_hm_url: \"http://127.0.0.1:9300\" http:\n      # ClientTimeout - Number of seconds for proxy request timeout\n      http_client_timeout: 3600\n      # ClientTimeoutV2 - Force override of previous default with 0 as\n      # infinite timeout\n      http_client_timeout_v2: 0\n      # TomcatPathKey - Name of folder where Tomcat apps are installed\n      tomcat_path_key: \"tomcat\"\n      # WebappContext - Web App context\n      webapp_context: \"gadmin\"\n    # GAdminIsRemote - True if the gadmin application is running on a\n    # remote machine (not on same node as gpudb). 
If running on a\n    # remote machine the manage options will be disabled.\n    is_remote: false\n    # KAgentCLIPath - Schema-related configuration\n    kagent_cli_path: \"/opt/gpudb/kagent/bin/kagent\"\n    # KIO - KIO-related configuration\n    kio: kio_log_file_path: \"/opt/gpudb/kitools/kio/logs/gadmin.log\"\n    kio_log_level: \"DEBUG\" kio_log_size_limit: 10485760 kisql:\n      # QueryResultsLimit - KiSQL limit on the number of results in each\n      # query\n      kisql_query_results_limit: 10000\n      # QueryTimezone - KiSQL TimeZoneId setting for queries\n      # (use \"system\" for local system time)\n      kisql_query_timezone: \"GMT\" license:\n      # Status - Stub for license manager\n      status: \"ok\"\n      # Type - Stub for license manager\n      type: \"unlimited\"\n    # MaxConcurrentUserSessions - Session management configuration\n    max_concurrent_user_sessions: 0\n    # PublicSchema - Schema-related configuration\n    public_schema: \"ki_home\"\n    # RevealDBInfoFile - Path to file containing Reveal DB location\n    reveal_db_info_file: \"/opt/gpudb/connectors/reveal/var/REVEAL_DB_DIR\"\n    # RootSchema - Schema-related configuration\n    root_schema: \"root\" stats:\n      # GraphanaURL -\n      graphana_url: \"http://127.0.0.1:3000\"\n      # GraphiteURL\n      graphite_url: \"http://127.0.0.1:8181\"\n      # StatsGrafanaURL - Port used to host the Grafana user interface\n      # and embeddable metric dashboards in GAdmin. Note: If this value\n      # is defaulted then it will be replaced by the name of the Stats\n      # service if it is deployed & Grafana is enabled e.g.\n      # cluster-1234.gpudb.svc.cluster.local\n      stats_grafana_url: \"http://127.0.0.1:9091\"\n  # https://github.com/kubernetes-sigs/controller-tools/issues/622 if we\n  # want to set usePools as false, need to set defaults GPUDBCluster is\n  # an instance of a Kinetica DB Cluster i.e. it's StatefulSet,\n  # Service, Ingress, ConfigMap etc.\n  gpudbCluster:\n    # Affinity - is a group of affinity scheduling rules.\n    affinity:\n      # Describes node affinity scheduling rules for the pod.\n      nodeAffinity:\n        # The scheduler will prefer to schedule pods to nodes that\n        # satisfy the affinity expressions specified by this field, but\n        # it may choose a node that violates one or more of the\n        # expressions. The node that is most preferred is the one with\n        # the greatest sum of weights, i.e. for each node that meets\n        # all of the scheduling requirements (resource request,\n        # requiredDuringScheduling affinity expressions, etc.), compute\n        # a sum by iterating through the elements of this field and\n        # adding \"weight\" to the sum if the node matches the\n        # corresponding matchExpressions; the node(s) with the highest\n        # sum are the most preferred.\n        preferredDuringSchedulingIgnoredDuringExecution:\n        - preference:\n            # A list of node selector requirements by node's labels.\n            matchExpressions:\n            - key: string\n              # Represents a key's relationship to a set of values.\n              # Valid operators are In, NotIn, Exists, DoesNotExist.\n              # Gt, and Lt.\n              operator: string\n              # An array of string values. If the operator is In or\n              # NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. 
If the operator is Gt or Lt, the values\n              # array must have a single element, which will be\n              # interpreted as an integer. This array is replaced\n              # during a strategic merge patch.\n              values: [\"string\"]\n            # A list of node selector requirements by node's fields.\n            matchFields:\n            - key: string\n              # Represents a key's relationship to a set of values.\n              # Valid operators are In, NotIn, Exists, DoesNotExist.\n              # Gt, and Lt.\n              operator: string\n              # An array of string values. If the operator is In or\n              # NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. If the operator is Gt or Lt, the values\n              # array must have a single element, which will be\n              # interpreted as an integer. This array is replaced\n              # during a strategic merge patch.\n              values: [\"string\"]\n          # Weight associated with matching the corresponding\n          # nodeSelectorTerm, in the range 1-100.\n          weight: 1\n        # If the affinity requirements specified by this field are not\n        # met at scheduling time, the pod will not be scheduled onto\n        # the node. If the affinity requirements specified by this\n        # field cease to be met at some point during pod execution\n        # (e.g. due to an update), the system may or may not try to\n        # eventually evict the pod from its node.\n        requiredDuringSchedulingIgnoredDuringExecution:\n          # Required. A list of node selector terms. The terms are\n          # ORed.\n          nodeSelectorTerms:\n          - matchExpressions:\n            - key: string\n              # Represents a key's relationship to a set of values.\n              # Valid operators are In, NotIn, Exists, DoesNotExist.\n              # Gt, and Lt.\n              operator: string\n              # An array of string values. If the operator is In or\n              # NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. If the operator is Gt or Lt, the values\n              # array must have a single element, which will be\n              # interpreted as an integer. This array is replaced\n              # during a strategic merge patch.\n              values: [\"string\"]\n            # A list of node selector requirements by node's fields.\n            matchFields:\n            - key: string\n              # Represents a key's relationship to a set of values.\n              # Valid operators are In, NotIn, Exists, DoesNotExist.\n              # Gt, and Lt.\n              operator: string\n              # An array of string values. If the operator is In or\n              # NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. If the operator is Gt or Lt, the values\n              # array must have a single element, which will be\n              # interpreted as an integer. This array is replaced\n              # during a strategic merge patch.\n              values: [\"string\"]\n      # Describes pod affinity scheduling rules (e.g. co-locate this pod\n      # in the same node, zone, etc. 
as some other pod(s)).\n      podAffinity:\n        # The scheduler will prefer to schedule pods to nodes that\n        # satisfy the affinity expressions specified by this field, but\n        # it may choose a node that violates one or more of the\n        # expressions. The node that is most preferred is the one with\n        # the greatest sum of weights, i.e. for each node that meets\n        # all of the scheduling requirements (resource request,\n        # requiredDuringScheduling affinity expressions, etc.), compute\n        # a sum by iterating through the elements of this field and\n        # adding \"weight\" to the sum if the node has pods which matches\n        # the corresponding podAffinityTerm; the node(s) with the\n        # highest sum are the most preferred.\n        preferredDuringSchedulingIgnoredDuringExecution:\n        - podAffinityTerm:\n            # A label query over a set of resources, in this case pods.\n            labelSelector:\n              # matchExpressions is a list of label selector\n              # requirements. The requirements are ANDed.\n              matchExpressions:\n              - key: string\n                # operator represents a key's relationship to a set of\n                # values. Valid operators are In, NotIn, Exists and\n                # DoesNotExist.\n                operator: string\n                # values is an array of string values. If the operator\n                # is In or NotIn, the values array must be non-empty.\n                # If the operator is Exists or DoesNotExist, the values\n                # array must be empty. This array is replaced during a\n                # strategic merge patch.\n                values: [\"string\"]\n              # matchLabels is a map of {key,value} pairs. A single\n              # {key,value} in the matchLabels map is equivalent to an\n              # element of matchExpressions, whose key field is \"key\",\n              # the operator is \"In\", and the values array contains\n              # only \"value\". The requirements are ANDed.\n              matchLabels: {}\n            # A label query over the set of namespaces that the term\n            # applies to. The term is applied to the union of the\n            # namespaces selected by this field and the ones listed in\n            # the namespaces field. null selector and null or empty\n            # namespaces list means \"this pod's namespace\". An empty\n            # selector ({}) matches all namespaces.\n            namespaceSelector:\n              # matchExpressions is a list of label selector\n              # requirements. The requirements are ANDed.\n              matchExpressions:\n              - key: string\n                # operator represents a key's relationship to a set of\n                # values. Valid operators are In, NotIn, Exists and\n                # DoesNotExist.\n                operator: string\n                # values is an array of string values. If the operator\n                # is In or NotIn, the values array must be non-empty.\n                # If the operator is Exists or DoesNotExist, the values\n                # array must be empty. This array is replaced during a\n                # strategic merge patch.\n                values: [\"string\"]\n              # matchLabels is a map of {key,value} pairs. 
A single\n              # {key,value} in the matchLabels map is equivalent to an\n              # element of matchExpressions, whose key field is \"key\",\n              # the operator is \"In\", and the values array contains\n              # only \"value\". The requirements are ANDed.\n              matchLabels: {}\n            # namespaces specifies a static list of namespace names that\n            # the term applies to. The term is applied to the union of\n            # the namespaces listed in this field and the ones selected\n            # by namespaceSelector. null or empty namespaces list and\n            # null namespaceSelector means \"this pod's namespace\".\n            namespaces: [\"string\"]\n            # This pod should be co-located (affinity) or not\n            # co-located (anti-affinity) with the pods matching the\n            # labelSelector in the specified namespaces, where\n            # co-located is defined as running on a node whose value of\n            # the label with key topologyKey matches that of any node\n            # on which any of the selected pods is running. Empty\n            # topologyKey is not allowed.\n            topologyKey: string\n          # weight associated with matching the corresponding\n          # podAffinityTerm, in the range 1-100.\n          weight: 1\n        # If the affinity requirements specified by this field are not\n        # met at scheduling time, the pod will not be scheduled onto\n        # the node. If the affinity requirements specified by this\n        # field cease to be met at some point during pod execution\n        # (e.g. due to a pod label update), the system may or may not\n        # try to eventually evict the pod from its node. When there are\n        # multiple elements, the lists of nodes corresponding to each\n        # podAffinityTerm are intersected, i.e. all terms must be\n        # satisfied.\n        requiredDuringSchedulingIgnoredDuringExecution:\n        - labelSelector:\n            # matchExpressions is a list of label selector requirements.\n            # The requirements are ANDed.\n            matchExpressions:\n            - key: string\n              # operator represents a key's relationship to a set of\n              # values. Valid operators are In, NotIn, Exists and\n              # DoesNotExist.\n              operator: string\n              # values is an array of string values. If the operator is\n              # In or NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. This array is replaced during a\n              # strategic merge patch.\n              values: [\"string\"]\n            # matchLabels is a map of {key,value} pairs. A single\n            # {key,value} in the matchLabels map is equivalent to an\n            # element of matchExpressions, whose key field is \"key\",\n            # the operator is \"In\", and the values array contains\n            # only \"value\". The requirements are ANDed.\n            matchLabels: {}\n          # A label query over the set of namespaces that the term\n          # applies to. The term is applied to the union of the\n          # namespaces selected by this field and the ones listed in\n          # the namespaces field. null selector and null or empty\n          # namespaces list means \"this pod's namespace\". 
An empty\n          # selector ({}) matches all namespaces.\n          namespaceSelector:\n            # matchExpressions is a list of label selector requirements.\n            # The requirements are ANDed.\n            matchExpressions:\n            - key: string\n              # operator represents a key's relationship to a set of\n              # values. Valid operators are In, NotIn, Exists and\n              # DoesNotExist.\n              operator: string\n              # values is an array of string values. If the operator is\n              # In or NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. This array is replaced during a\n              # strategic merge patch.\n              values: [\"string\"]\n            # matchLabels is a map of {key,value} pairs. A single\n            # {key,value} in the matchLabels map is equivalent to an\n            # element of matchExpressions, whose key field is \"key\",\n            # the operator is \"In\", and the values array contains\n            # only \"value\". The requirements are ANDed.\n            matchLabels: {}\n          # namespaces specifies a static list of namespace names that\n          # the term applies to. The term is applied to the union of\n          # the namespaces listed in this field and the ones selected\n          # by namespaceSelector. null or empty namespaces list and\n          # null namespaceSelector means \"this pod's namespace\".\n          namespaces: [\"string\"]\n          # This pod should be co-located (affinity) or not co-located\n          # (anti-affinity) with the pods matching the labelSelector in\n          # the specified namespaces, where co-located is defined as\n          # running on a node whose value of the label with key\n          # topologyKey matches that of any node on which any of the\n          # selected pods is running. Empty topologyKey is not\n          # allowed.\n          topologyKey: string\n      # Describes pod anti-affinity scheduling rules (e.g. avoid putting\n      # this pod in the same node, zone, etc. as some other pod(s)).\n      podAntiAffinity:\n        # The scheduler will prefer to schedule pods to nodes that\n        # satisfy the anti-affinity expressions specified by this\n        # field, but it may choose a node that violates one or more of\n        # the expressions. The node that is most preferred is the one\n        # with the greatest sum of weights, i.e. for each node that\n        # meets all of the scheduling requirements (resource request,\n        # requiredDuringScheduling anti-affinity expressions, etc.),\n        # compute a sum by iterating through the elements of this field\n        # and adding \"weight\" to the sum if the node has pods which\n        # matches the corresponding podAffinityTerm; the node(s) with\n        # the highest sum are the most preferred.\n        preferredDuringSchedulingIgnoredDuringExecution:\n        - podAffinityTerm:\n            # A label query over a set of resources, in this case pods.\n            labelSelector:\n              # matchExpressions is a list of label selector\n              # requirements. The requirements are ANDed.\n              matchExpressions:\n              - key: string\n                # operator represents a key's relationship to a set of\n                # values. 
Valid operators are In, NotIn, Exists and\n                # DoesNotExist.\n                operator: string\n                # values is an array of string values. If the operator\n                # is In or NotIn, the values array must be non-empty.\n                # If the operator is Exists or DoesNotExist, the values\n                # array must be empty. This array is replaced during a\n                # strategic merge patch.\n                values: [\"string\"]\n              # matchLabels is a map of {key,value} pairs. A single\n              # {key,value} in the matchLabels map is equivalent to an\n              # element of matchExpressions, whose key field is \"key\",\n              # the operator is \"In\", and the values array contains\n              # only \"value\". The requirements are ANDed.\n              matchLabels: {}\n            # A label query over the set of namespaces that the term\n            # applies to. The term is applied to the union of the\n            # namespaces selected by this field and the ones listed in\n            # the namespaces field. null selector and null or empty\n            # namespaces list means \"this pod's namespace\". An empty\n            # selector ({}) matches all namespaces.\n            namespaceSelector:\n              # matchExpressions is a list of label selector\n              # requirements. The requirements are ANDed.\n              matchExpressions:\n              - key: string\n                # operator represents a key's relationship to a set of\n                # values. Valid operators are In, NotIn, Exists and\n                # DoesNotExist.\n                operator: string\n                # values is an array of string values. If the operator\n                # is In or NotIn, the values array must be non-empty.\n                # If the operator is Exists or DoesNotExist, the values\n                # array must be empty. This array is replaced during a\n                # strategic merge patch.\n                values: [\"string\"]\n              # matchLabels is a map of {key,value} pairs. A single\n              # {key,value} in the matchLabels map is equivalent to an\n              # element of matchExpressions, whose key field is \"key\",\n              # the operator is \"In\", and the values array contains\n              # only \"value\". The requirements are ANDed.\n              matchLabels: {}\n            # namespaces specifies a static list of namespace names that\n            # the term applies to. The term is applied to the union of\n            # the namespaces listed in this field and the ones selected\n            # by namespaceSelector. null or empty namespaces list and\n            # null namespaceSelector means \"this pod's namespace\".\n            namespaces: [\"string\"]\n            # This pod should be co-located (affinity) or not\n            # co-located (anti-affinity) with the pods matching the\n            # labelSelector in the specified namespaces, where\n            # co-located is defined as running on a node whose value of\n            # the label with key topologyKey matches that of any node\n            # on which any of the selected pods is running. 
Empty\n            # topologyKey is not allowed.\n            topologyKey: string\n          # weight associated with matching the corresponding\n          # podAffinityTerm, in the range 1-100.\n          weight: 1\n        # If the anti-affinity requirements specified by this field are\n        # not met at scheduling time, the pod will not be scheduled\n        # onto the node. If the anti-affinity requirements specified by\n        # this field cease to be met at some point during pod\n        # execution (e.g. due to a pod label update), the system may or\n        # may not try to eventually evict the pod from its node. When\n        # there are multiple elements, the lists of nodes corresponding\n        # to each podAffinityTerm are intersected, i.e. all terms must\n        # be satisfied.\n        requiredDuringSchedulingIgnoredDuringExecution:\n        - labelSelector:\n            # matchExpressions is a list of label selector requirements.\n            # The requirements are ANDed.\n            matchExpressions:\n            - key: string\n              # operator represents a key's relationship to a set of\n              # values. Valid operators are In, NotIn, Exists and\n              # DoesNotExist.\n              operator: string\n              # values is an array of string values. If the operator is\n              # In or NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. This array is replaced during a\n              # strategic merge patch.\n              values: [\"string\"]\n            # matchLabels is a map of {key,value} pairs. A single\n            # {key,value} in the matchLabels map is equivalent to an\n            # element of matchExpressions, whose key field is \"key\",\n            # the operator is \"In\", and the values array contains\n            # only \"value\". The requirements are ANDed.\n            matchLabels: {}\n          # A label query over the set of namespaces that the term\n          # applies to. The term is applied to the union of the\n          # namespaces selected by this field and the ones listed in\n          # the namespaces field. null selector and null or empty\n          # namespaces list means \"this pod's namespace\". An empty\n          # selector ({}) matches all namespaces.\n          namespaceSelector:\n            # matchExpressions is a list of label selector requirements.\n            # The requirements are ANDed.\n            matchExpressions:\n            - key: string\n              # operator represents a key's relationship to a set of\n              # values. Valid operators are In, NotIn, Exists and\n              # DoesNotExist.\n              operator: string\n              # values is an array of string values. If the operator is\n              # In or NotIn, the values array must be non-empty. If the\n              # operator is Exists or DoesNotExist, the values array\n              # must be empty. This array is replaced during a\n              # strategic merge patch.\n              values: [\"string\"]\n            # matchLabels is a map of {key,value} pairs. A single\n            # {key,value} in the matchLabels map is equivalent to an\n            # element of matchExpressions, whose key field is \"key\",\n            # the operator is \"In\", and the values array contains\n            # only \"value\". 
The requirements are ANDed.\n            matchLabels: {}\n          # namespaces specifies a static list of namespace names that\n          # the term applies to. The term is applied to the union of\n          # the namespaces listed in this field and the ones selected\n          # by namespaceSelector. null or empty namespaces list and\n          # null namespaceSelector means \"this pod's namespace\".\n          namespaces: [\"string\"]\n          # This pod should be co-located (affinity) or not co-located\n          # (anti-affinity) with the pods matching the labelSelector in\n          # the specified namespaces, where co-located is defined as\n          # running on a node whose value of the label with key\n          # topologyKey matches that of any node on which any of the\n          # selected pods is running. Empty topologyKey is not\n          # allowed.\n          topologyKey: string\n    # Annotations - Annotations labels to be applied to the Statefulset\n    # DB pods.\n    annotations: {}\n    # The name of the cluster to form.\n    clusterName: string\n    # The Ingress Endpoint that GAdmin will be running on.\n    clusterSize:\n      # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a\n      # representation of the number of nodes in a simple to understand\n      # T-Short size scheme. This indicates the size of the cluster\n      # i.e. the number of nodes. It does not identify the size of the\n      # cloud provider nodes. For node size see ClusterTypeEnum.\n      # Supported Values are: - XS S M L XL XXL XXXL\n      tshirtSize: string\n      # ClusterTypeEnum - An Enum of the node types of a KineticaCluster\n      # e.g. CPU, GPU along with the Cloud Provider node size e.g. size\n      # of the VM.\n      tshirtType: string\n    # Config Kinetica DB Configuration Object\n    config: ai: apiKey: string\n        # Provider - AI API provider type. The default is \"sqlgpt\"\n        apiProvider: \"sqlgpt\" apiUrl: string\n      # AlertManagerConfig\n      alertManager:\n        # AlertManager IP address (run on head node) default port\n        # is \"2003\"\n        ipAddress: \"${gaia.host0.address}\" port: 2003\n      # AlertConfig\n      alerts: alertDiskAbsolute: [integer]\n        # Trigger an alert if available disk space on any given node\n        # falls to or below a certain threshold, either absolute\n        # (number of bytes) or percentage of total disk space. For\n        # multiple thresholds, use a comma-delimited list of values.\n        alertDiskPercentage: [1,5,10,20]\n        # Trigger generic error message alerts, in cases of various\n        # significant runtime errors.\n        alertErrorMessages: true\n        # Executable to run when an alert condition occurs. This\n        # executable will only be run on **rank0** and does not need to\n        # be present on other nodes.\n        alertExe: \"\"\n        # Trigger an alert whenever the status of a host or rank\n        # changes.\n        alertHostStatus: true\n        # Optionally, filter host alerts for a comma-delimited list of\n        # statuses. If a filter is empty, every host status change will\n        # trigger an alert.\n        alertHostStatusFilter: \"fatal_init_error\"\n        # The maximum number of triggered alerts guaranteed to be stored\n        # at any given time. 
When this number of alerts is exceeded,\n        # older alerts may be discarded to stay within the limit.\n        alertMaxStoredAlerts: 100 alertMemoryAbsolute: [integer]\n        # Trigger an alert if available memory on any given node falls\n        # to or below a certain threshold, either absolute (number of\n        # bytes) or percentage of total memory. For multiple\n        # thresholds, use a comma-delimited list of values.\n        alertMemoryPercentage: [1,5,10,20]\n        # Trigger an alert if a CUDA error occurs on a rank.\n        alertRankCudaError: true\n        # Trigger alerts when the fallback allocator is employed; e.g.,\n        # host memory is allocated because GPU allocation fails. NOTE:\n        # To prevent a flooding of alerts, if a fallback allocator is\n        # triggered in bursts, not every use will generate an alert.\n        alertRankFallbackAllocator: true\n        # Trigger an alert whenever the status of a rank changes.\n        alertRankStatus: true\n        # Optionally, filter rank alerts for a comma-delimited list of\n        # statuses. If a filter is empty, every rank status change will\n        # trigger an alert.\n        alertRankStatusFilter:\n        [\"fatal_init_error\",\"not_responding\",\"terminated\"]\n        # Enable the alerting system.\n        enableAlerts: true\n        # Directory where the trace event and summary files are stored.\n        # Must be a fully qualified path with sufficient free space for\n        # required volume of data.\n        traceDirectory: \"/tmp\"\n        # The maximum number of trace events to be collected\n        traceEventBufferSize: 1000000\n      # Audit - This section controls the request auditor, which will\n      # audit all requests received by the server in full or in part\n      # based on the settings.\n      audit:\n        # Controls whether the body of each request is audited (in JSON\n        # format). If 'enable_audit' is \"false\" this setting has no\n        # effect. NOTE: For requests that insert data records, this\n        # setting does not control the auditing of the records being\n        # inserted, only the rest of the request body; see 'audit_data'\n        # below to control this. audit_body = false\n        body: false\n        # Controls whether records being inserted are audited (in JSON\n        # format) for requests that insert data records. If\n        # either 'enable_audit' or 'audit_body' is \"false\", this\n        # setting has no effect. NOTE: Enabling this setting during\n        # bulk ingestion of data will rapidly produce very large audit\n        # logs and may cause disk space exhaustion; use with caution.\n        # audit_data = false\n        data: false\n        # Controls whether request auditing is enabled. If set\n        # to \"true\", the following information is audited for every\n        # request: Job ID, URI, User, and Client Address. The settings\n        # below control whether additional information about each\n        # request is also audited. If set to \"false\", all auditing is\n        # disabled. enable_audit = false\n        enable: false\n        # Controls whether HTTP headers are audited for each request.\n        # If 'enable_audit' is \"false\" this setting has no effect.\n        # audit_headers = false\n        headers: true\n        # Controls whether the above audit settings can be altered at\n        # runtime via the /alter/system/properties endpoint. 
In a\n        # secure environment where auditing is required at all times,\n        # this should be set to \"true\" to lock the settings to what is\n        # set in this file. lock_audit = false\n        lock: false\n        # Controls whether response information is audited for each\n        # request. If 'enable_audit' is \"false\" this setting has no\n        # effect. audit_response = false\n        response: false\n      # EventConfig\n      events:\n        # Run a statistics server to collect information about Kinetica\n        # and the machines it runs on.\n        internal: true\n        # Statistics server IP address (run on head node) default port\n        # is \"2003\"\n        ipAddress: \"${gaia.host0.address}\" port: 2003\n        # Statistics server namespace - should be a machine identifier\n        statsServerNamespace: \"gpudb\"\n      # ExternalFilesConfig\n      externalFiles:\n        # Defines the directory from which external files can be loaded\n        directory: \"/opt/gpudb/persist\"\n        # # Parquet files compression type egress_parquet_compression =\n        #   snappy\n        egressParquetCompression: \"snappy\"\n        # Max file size (in MB) to allow saving to a single file. May be\n        # overridden by target limitations. egress_single_file_max_size\n        # = 100\n        egressSingleFileMaxSize: \"100\"\n        # Maximum number of simultaneous threads allocated to a given\n        # external file read request, on each rank. Note that thread\n        # allocation may also be limited by resource group limits, the\n        # subtask_concurrency_limit setting, or system load.\n        readerNumTasks: \"-1\"\n      # GeneralConfig - the root of the gpudb.conf configuration in the\n      # CRD\n      general:\n        # Timeout (in seconds) to wait for a rank to start during a\n        # cluster event (ex: failover) event is considered failed.\n        clusterEventTimeoutStartupRank: \"300\"\n        # Enable (if \"true\") multiple kernels to run concurrently on the\n        # same GPU\n        concurrentKernelExecution: true\n        # Time-to-live in minutes of non-protected tables before they\n        # are automatically deleted from the database.\n        defaultTTL: \"20\"\n        # Disallow the /clear/table request to clear all tables.\n        disableClearAll: true\n        # Enable overlapped-equi-join filters\n        enableOverlappedEquiJoin: true\n        # Enable predicate-equi-join filter plan type\n        enablePredicateEquiJoin: true\n        # If \"true\" then all filter execution will be host-only\n        # (i.e. CPU). This can be useful for high-concurrency\n        # situations and when PCIe bandwidth is a limiting factor.\n        forceHostFilterExecution: false\n        # Maximum number of kernels that can be running at the same time\n        # on a given GPU. Set to \"0\" for no limit. Only takes effect\n        # if 'concurrent_kernel_execution' is \"true\"\n        maxConcurrentKernels: \"0\"\n        # Maximum number of records that data retrieval requests such\n        # as /get/records and /aggregate/groupby will return per\n        # request.\n        maxGetRecordsSize: 20000\n        # Set an optional executable command that will be run once when\n        # Kinetica is ready for client requests. This can be used to\n        # perform any initialization logic that needs to be run before\n        # clients connect. 
It will be run as the \"gpudb\" user, so you\n        # must ensure that any required permissions are set on the file\n        # to allow it to be executed.  If the command cannot be\n        # executed or returns a non-zero error code, then Kinetica will\n        # be stopped.  Output from the startup script will be logged\n        # to \"/opt/gpudb/core/logs/gpudb-on-start.log\" (and its dated\n        # relatives).  The \"gpudb_env.sh\" script is run directly before\n        # the command, so the path will be set to include the supplied\n        # Python runtime. Example: on_startup_script\n        # = /home/gpudb/on-start.sh param1 param2 ...\n        onStartupScript: \"\"\n        # Size in bytes of the pinned memory pool per-rank process to\n        # speed up copying data to the GPU.  Set to \"0\" to disable.\n        pinnedMemoryPoolSize: 2000000000\n        # Tables and collections with these names will not be deleted\n        # (comma separated).\n        protectedSets: \"MASTER,_MASTER,_DATASOURCE\"\n        # Timeout (in minutes) for filter-type requests\n        requestTimeout: \"20\"\n        # Timeout (in seconds) to wait for a rank to exit gracefully\n        # before it is force-killed. Machines with slow disk drives may\n        # require longer times and data may be lost if a drive is not\n        # responsive.\n        timeoutShutdownRank: \"300\"\n        # Timeout (in seconds) to wait for each database subsystem to\n        # exit gracefully before it is force-killed.\n        timeoutShutdownSubsystem: \"20\"\n        # Timeout (in seconds) to wait for each database subsystem to\n        # startup. Subsystems include the Query Planner, Graph,\n        # Stats, & HTTP servers, as well as external text-search\n        # ranks.\n        timeoutStartupSubsystem: \"60\"\n      # GraphConfig\n      graph:\n        # Enable the graph server\n        enable: false\n        # List of GPU devices to be used by graph server The server\n        # would ideally be run on a different node with dedicated GPU\n        # (s)\n        gpuList: \"\"\n        # Specify where the graph server should be run, defaults to head\n        # node\n        ipAddress: \"${gaia.rank0_ip_address}\"\n        # Maximum memory that can be used by the graph server, set\n        # to \"0\" to disable memory restriction\n        maxMemory: 0\n        # Port used for responses from the graph server to the database\n        # server\n        pullPort: 8100\n        # Port used for requests from the database server to the graph\n        # server\n        pushPort: 8099\n        # Number of seconds the graph client will wait for a response\n        # from the graph server\n        timeout: 1200\n      # HardwareConfig\n      hardware:\n        # Rank0HardwareConfig\n        rank0:\n          # Specify the GPU to use for all calculations on the HTTP\n          # server node, **rank0**. NOTE: The **rank0** GPU may be\n          # shared with another rank.\n          gpu: 0\n          # Set the head HTTP **rank0** numa node(s). If left empty,\n          # there will be no thread affinity or preferred memory node.\n          # The node list may be either a single node number or a\n          # range; e.g., \"1-5,7,10\". If there will be many simultaneous\n          # users, specify as many nodes as possible that won't overlap\n          # the **rank1** to **rankN** worker numa nodes that the GPUs\n          # are on. 
If there will be few simultaneous users and WMS\n          # speed is important, choose the numa node the 'rank0.gpu' is\n          # on.\n          numaNode: ranks:\n        - baseNumaNode: string\n          # Set each worker rank's preferred data numa node for CPU\n          # affinity and memory allocation.\n          # The 'rank<#>.data_numa_node' is the node or nodes that data\n          # intensive threads will run in and should be set to the same\n          # numa node that the GPU specified by the\n          # corresponding 'rank<#>.taskcalc_gpu' is on for best\n          # performance. If the 'rank<#>.taskcalc_gpu' is specified\n          # the 'rank<#>.data_numa_node' will be automatically set to\n          # the node the GPU is attached to, otherwise there will be no\n          # CPU thread affinity or preferred node for memory allocation\n          # if not specified or left empty. The node list may be a\n          # single node number or a range; e.g., \"1-5,7,10\".\n          dataNumaNode: string\n          # Set the GPU device for each worker rank to use. If no GPUs\n          # are specified, each rank will round-robin the available\n          # GPUs per host system. Add 'rank<#>.taskcalc_gpu' as needed\n          # for the worker ranks, where *#* ranges from \"1\" to the\n          # highest *rank #* among the 'rank<#>.host' parameters\n          # Example setting the GPUs to use for ranks 1 and 2: \n          #  #   rank1.taskcalc_gpu = 0 #   rank2.taskcalc_gpu = 1\n          taskCalcGPU: kafka:\n        # Maximum number of records to be ingested in a single batch\n        # kafka.batch_size = 1000\n        batchSize: 1000\n        # Maximum time (milliseconds) for each poll to get records from\n        # kafka kafka.poll_timeout = 0\n        pollTimeout: 1\n        # Maximum wait time (seconds) to buffer records received from\n        # kafka before ingestion kafka.wait_time = 30\n        waitTime: 30\n      # KifsConfig\n      kifs:\n        # KIFs user data size limit\n        dataLimit: \"4Gi\"\n        # sudo usermod -a -G gpudb_proc <user>\n        enable: false\n        # Parent directory of the mount point for the KiFS file system.\n        # Must be a fully qualified path. The actual mount point will\n        # be a subdirectory *mount* below this directory. Note that\n        # this folder must have read, write and execute permissions for\n        # the \"gpudb\" user and the \"gpudb_proc\" group, and it cannot be\n        # a path on an NFS.\n        mountPoint: \"/gpudb/kifs\" useManagedCredentials: true\n      # Etcd *ETCDConfig `json:\"etcd,omitempty\"` HA        HAConfig\n      # `json:\"ha,omitempty\"`\n      ml:\n        # Enable the ML server.\n        enable: false\n      # NetworkConfig\n      network:\n        # HAAddress - An optional address to allow inter-cluster\n        # communication with HA when 'address' is not routable between\n        # clusters.\n        HAAddress: string\n        # CompressNetworkData - Enables compression of inter-node\n        # network data transfers.\n        compressNetworkData: false\n        # EnableHTTPDProxy - Start an HTTP server as a proxy to handle\n        # LDAP and/or Kerberos authentication. Each host will run an\n        # HTTP server and access to each rank is available through\n        # http://host:8082/gpudb-1, where port \"8082\" is defined\n        # by 'httpd_proxy_port'. NOTE: HTTP external endpoints are not\n        # affected by the 'use_https' parameter above. 
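# --- Illustrative fragment (editor's sketch) ---
# Tuning Kafka ingestion and enabling KiFS with the keys documented above;
# values are examples and the enclosing nesting is elided.
kafka:
  batchSize: 1000      # max records ingested in a single batch
  pollTimeout: 1       # max milliseconds per poll
  waitTime: 30         # max seconds to buffer records before ingestion
kifs:
  enable: true
  dataLimit: "4Gi"
  mountPoint: "/gpudb/kifs"
  useManagedCredentials: true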
If you wish to\n        # enable HTTPS, you must edit\n        # the \"/opt/gpudb/httpd/conf/httpd.conf\" and setup HTTPS as per\n        # the Apache httpd documentation at\n        # https://httpd.apache.org/docs/2.2/\n        enableHTTPDProxy: true\n        # EnableWorkerHTTPServers - Enable worker HTTP servers; each\n        # process runs its own server for multi-head ingest.\n        enableWorkerHTTPServers: true\n        # GlobalManagerLocalPubPort - ?\n        globalManagerLocalPubPort: 5554\n        # GlobalManagerPortOne - Internal communication ports -  Host\n        # manager status notification channel\n        globalManagerPortOne: 5552\n        # GlobalManagerPubPort - Host manager synchronization message\n        # publishing channel port\n        globalManagerPubPort: 5553\n        # HeadIPAddress - Head HTTP server IP address. Set to the\n        # publicly accessible IP address of the first\n        # process, **rank0**.\n        headIPAddress: \"172.20.0.10\"\n        # HeadPort - Head HTTP server port to use\n        # for 'head_ip_address'.\n        headPort: 9191\n        # HostManagerHTTPPort - HTTP port for web portal of the host\n        # manager\n        hostManagerHTTPPort: 9300\n        # HTTPAllowOrigin - Value to return via\n        # Access-Control-Allow-Origin HTTP header (for Cross-Origin\n        # Resource Sharing). Set to empty to not return the header and\n        # disallow CORS.\n        httpAllowOrigin: \"*\"\n        # HTTPKeepAlive - Keep HTTP connections alive between requests\n        httpKeepAlive: false\n        # HTTPDProxyPort - TCP port that the httpd auth proxy server\n        # will listen on if 'enable_httpd_proxy' is \"true\".\n        httpdProxyPort: 8082\n        # HTTPDProxyUseHTTPS - Set to \"true\" if the httpd auth proxy\n        # server is configured to use HTTPS.\n        httpdProxyUseHTTPS: false\n        # HTTPSCertFile - File containing the SSL certificate  e.g.\n        # cert.pem If required, a self-signed certificate(expires after\n        # 10 years) can be generated via the command: e.g. cert.pem\n        # openssl req -newkey rsa:2048 -new -nodes -x509 \\ -days\n        # 3650 -keyout key.pem -out cert.pem\n        httpsCertFile: \"\"\n        # HTTPSKeyFile - File containing the SSL private Key e.g.\n        # key.pem If required, a self-signed certificate (expires after\n        # 10 years) can be generated via the command: openssl\n        # req -newkey rsa:2048 -new -nodes -x509 \\ -days 3650 -keyout\n        # key.pem -out cert.pem\n        httpsKeyFile: \"\"\n        # Rank0IPAddress - Internal use IP address of the head HTTP\n        # server, **rank0**. Set to either a second internal network\n        # accessible by all ranks or to '${gaia.head_ip_address}'.\n        rank0IPAddress: \"${gaia.rank0.host}\" ranks:\n        - communicatorPort:\n            # Number of port to expose on the pod's IP address. This\n            # must be a valid port number, 0 < x < 65536.\n            containerPort: 1\n            # What host IP to bind the external port to.\n            hostIP: string\n            # Number of port to expose on the host. If specified, this\n            # must be a valid port number, 0 < x < 65536. If\n            # HostNetwork is specified, this must match ContainerPort.\n            # Most containers do not need this.\n            hostPort: 1\n            # If specified, this must be an IANA_SVC_NAME and unique\n            # within the pod. 
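# --- Illustrative fragment (editor's sketch) ---
# Example head-node network settings using the keys above; the head IP address
# and certificate paths are placeholders, and the enclosing nesting is elided.
network:
  enableHTTPDProxy: true
  httpdProxyPort: 8082
  headIPAddress: "172.20.0.10"   # publicly accessible IP of rank0
  headPort: 9191
  hostManagerHTTPPort: 9300
  httpAllowOrigin: "*"           # set to "" to omit the header and disallow CORS
  httpsCertFile: ""              # e.g. cert.pem when HTTPS is enabled
  httpsKeyFile: ""               # e.g. key.pem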
Each named port in a pod must have a\n            # unique name. Name for the port that can be referred to by\n            # services.\n            name: string\n            # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n            # to \"TCP\".\n            protocol: \"TCP\"\n          # Specify the hosts to run each rank worker process in the\n          # cluster. For a single machine system, use \"127.0.0.1\", but\n          # if using two or more machines, a hostname or IP address\n          # must be specified for each rank that is accessible from the\n          # other ranks. See also 'head_ip_address'\n          # and 'rank0_ip_address'.\n          host: string\n          # Optionally, specify the worker HTTP server ports. The\n          # default is to use ('head_port' + *rank #*) for each worker\n          # process where rank number is from \"1\" to number of ranks\n          # in 'rank<#>.host' below.\n          httpServerPort:\n            # Number of port to expose on the pod's IP address. This\n            # must be a valid port number, 0 < x < 65536.\n            containerPort: 1\n            # What host IP to bind the external port to.\n            hostIP: string\n            # Number of port to expose on the host. If specified, this\n            # must be a valid port number, 0 < x < 65536. If\n            # HostNetwork is specified, this must match ContainerPort.\n            # Most containers do not need this.\n            hostPort: 1\n            # If specified, this must be an IANA_SVC_NAME and unique\n            # within the pod. Each named port in a pod must have a\n            # unique name. Name for the port that can be referred to by\n            # services.\n            name: string\n            # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n            # to \"TCP\".\n            protocol: \"TCP\"\n          # This is the Kubernetes pod IP Address of the current rank\n          # which we need to populate in the operator. NOTE: Internal\n          # Attribute\n          podIP: string\n          # Optionally, specify a public URL for each worker HTTP server\n          # that clients should use to connect for multi-head\n          # operations. NOTE: If specified for any ranks, a public URL\n          # must be specified for all ranks.\n          publicURL: \"https://:8082/gpudb-{{.Rank}}\"\n          # Define the rank number of this rank.\n          rank: 1\n        # SetMonitorPort - Set monitor ZMQ publisher server port (-1 to\n        # disable), uses the 'head_ip_address' interface.\n        setMonitorPort: 9002\n        # SetMonitorProxyPort - Set monitor ZMQ publisher internal proxy\n        # server port (\"-1\" to disable), uses the 'head_ip_address'\n        # interface. 
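# --- Illustrative fragment (editor's sketch) ---
# A single worker-rank entry using the per-rank keys above; the hostname and
# public URL are placeholders. Note that if a publicURL is given for any rank
# it must be given for all ranks.
ranks:
  - rank: 1
    host: "kinetica-worker-1"      # hostname/IP reachable from the other ranks
    httpServerPort:
      containerPort: 9192          # defaults to head_port + rank number
      protocol: "TCP"
    publicURL: "https://worker-1.example.com:8082/gpudb-1"   # for multi-head clients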
IMPORTANT:  Disabling this port effectively\n        # prevents worker nodes from publishing set monitor\n        # notifications when multi-head ingest is enabled\n        # (see 'enable_worker_http_servers').\n        setMonitorProxyPort: 9003\n        # SetMonitorQueueSize - Set monitor queue size\n        setMonitorQueueSize: 1000\n        # TriggerPort - Trigger ZMQ publisher server port (\"-1\" to\n        # disable), uses the 'head_ip_address' interface.\n        triggerPort: -1\n        # UseHTTPS - Set to \"true\" to use HTTPS; if \"true\"\n        # then 'https_key_file' and 'https_cert_file' must be provided\n        useHttps: false\n      # PersistenceConfig\n      persistence:\n        # Removed in 7.2\n        IndexDBFlushImmediate: true\n        # DataLoadingSchema Startup data-loading scheme\n        buildMaterializedViewsOnStart: \"on_demand\"\n        # DataLoadingSchema Startup data-loading scheme\n        buildPKIndexOnStart: \"on_demand\"\n        # Target maximum data size for any one column in a chunk\n        # (512 MB) (0 = disable). chunk_max_memory = 8192000000\n        chunkColumnMaxMemory: 8192000000\n        # Target maximum total data size for all columns in a chunk\n        # (8 GB) (0 = disable).\n        chunkMaxMemory: 512000000\n        # Number of records per chunk (\"0\" disables chunking)\n        chunkSize: 8000000\n        # Determines whether to execute kernels on host (CPU) or device\n        # (GPU). Possible values are: \n        #  * \"default\"   : engine decides * \"host\"      : execute only\n        #     host * \"device\"    : execute only device * *<rows>*    :\n        #     execute on the host if chunked column contains the given\n        #     number of *rows* or fewer; otherwise, execute on device.\n        executionMode: \"device\"\n        # Removed in 7.2\n        fsyncIndexDBImmediate: true\n        # Removed in 7.2\n        fsyncInodesImmediate: true\n        # Removed in 7.2\n        fsyncMetadataImmediate: true\n        # Removed in 7.2\n        fsyncOnInterval: true\n        # Maximum number of open files for IndexedDb object file store.\n        # Removed in 7.2\n        indexDBMaxOpenFiles: \n        # Table of contents size for IndexedDb object file store.\n        # Removed in 7.2\n        indexDBTOCSize: \n        # Disable detection of sparse file support and use the full file\n        # length which may be an over-estimate of the actual usage in\n        # the persist tier. Removed in 7.2\n        indexDBTierByFileLength: false\n        # Startup data-loading scheme: \n        #  * \"always\"    : load all the data into memory before\n        #     accepting requests * \"lazy\"      : load the necessary\n        #     data to start, but load the remainder\n        #     lazily * \"on_demand\" : only load data as requests use it\n        loadVectorsOnStart: \"on_demand\"\n        # Removed in 7.2\n        metadataFlushImmediate: true\n        # Specify a base directory to store persistence data files.\n        persistDirectory: \"/opt/gpudb/persist\"\n        # Whether to use synchronous persistence file writing.\n        # If \"false\", files will be written asynchronously. Removed in\n        # 7.2\n        persistSync: true\n        # Duration in seconds, for which persistence files will be\n        # force-synced if out of sync, once per minute. NOTE: Files are\n        # always opportunistically saved; this simply enforces a\n        # maximum time a file can be out of date. 
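# --- Illustrative fragment (editor's sketch) ---
# Example persistence layout and chunking settings using the keys above; the
# values mirror the documented defaults and the enclosing nesting is elided.
persistence:
  persistDirectory: "/opt/gpudb/persist"
  chunkSize: 8000000              # records per chunk; "0" disables chunking
  chunkMaxMemory: 512000000       # bytes
  chunkColumnMaxMemory: 8192000000   # bytes
  executionMode: "device"         # "default" | "host" | "device" | <row threshold>
  loadVectorsOnStart: "on_demand" # "always" | "lazy" | "on_demand"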
Set to a very high\n        # number to disable.\n        persistSyncTime: 5\n        # The maximum number of bytes in the shadow aggregate cache\n        shadowAggSize: 100000000\n        # Whether to enable chunk caching\n        shadowCubeEnabled: true\n        # The maximum number of bytes in the shadow filter cache\n        shadowFilterSize: 100000000\n        # Base directory to store hashed strings.\n        smsDirectory: \"${gaia.persist_directory}\"\n        # Maximum number of open files (per-TOM) for the SMS\n        # (string) store.\n        smsMaxOpenFiles: 128\n        # Synchronous compression: compress vectors on set compression.\n        synchronousCompression: false\n        # Directory for GPUdb to use to store temporary files. Must be a\n        # fully qualified path, have at least 100Mb of free space, and\n        # execute permission.\n        tempDirectory: \"${gaia.persist_directory}/tmp\"\n        # Base directory to store the text search index.\n        textIndexDirectory: \"${gaia.persist_directory}\"\n        # Enable checksum protection on the wal entries. New in 7.2\n        walChecksum: true\n        # Specifies how frequently wal entries are written with\n        # background sync. New in 7.2\n        walFlushFrequency: 60\n        # Maximum size of each wal segment file New in 7.2\n        walMaxSegmentSize: 500000000\n        # Approximate number of segment files to split the wal across. A\n        # minimum of two is required. The size of the wal is limited by\n        # segment_count * max_segment_size. (per rank and per tom) Set\n        # to 0 to remove a size limit on the wal itself, but still be\n        # bounded by rank tier limits. Set to -1 to have the database\n        # decide automatically per table. New in 7.2\n        walSegmentCount: \n        # Sync mode to use when persisting wal entries to disk: \n        #  \"none\"       : Disable the wal \"background\" : Wal entries are\n        #   periodically written instead of immediately after each\n        #   operation \"flush\"      : Protects entries in the event of a\n        #   database crash \"fsync\"      : Protects entries in the event\n        #   of an OS crash New in 7.2\n        walSyncPolicy: \"flush\"\n        # If true, any table that is found to be corrupt after replaying\n        # its wal at startup will automatically be truncated so that\n        # the table becomes operable. If false, the user will be\n        # responsible for resolving the issue via sql REPAIR TABLE or\n        # similar. New in 7.2\n        walTruncateCorruptTablesOnStart: true\n      # PostgresProxy\n      postgresProxy:\n        # Postgres Proxy Server Start an Postgres(TCP) server as a proxy\n        # to handle postgres wire protocol messages.\n        enablePostgresProxy: false\n        # Set idle connection  timeout in seconds. (default: \"1200\")\n        idleConnectionTimeout: 1200\n        # Set max number of queued server connections. (default: \"1\")\n        maxQueuedConnections: 1\n        # Set max number of server threads to spawn. (default: \"64\")\n        maxThreads: 64\n        # Set min number of server threads to spawn. (default: \"2\")\n        minThreads: 2\n        # TCP port that the postgres  proxy server will listen on\n        # if 'enable_postgres_proxy' is \"true\".\n        port:\n          # Number of port to expose on the pod's IP address. 
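# --- Illustrative fragment (editor's sketch) ---
# Write-ahead-log settings (new in 7.2) using the keys above; values are
# examples, with walSegmentCount set to -1 so the database decides per table.
persistence:
  walChecksum: true
  walFlushFrequency: 60           # seconds between background syncs
  walMaxSegmentSize: 500000000    # bytes per segment file
  walSegmentCount: -1             # -1 = decide automatically per table (minimum of two otherwise)
  walSyncPolicy: "flush"          # "none" | "background" | "flush" | "fsync"
  walTruncateCorruptTablesOnStart: true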
This must\n          # be a valid port number, 0 < x < 65536.\n          containerPort: 1\n          # What host IP to bind the external port to.\n          hostIP: string\n          # Number of port to expose on the host. If specified, this\n          # must be a valid port number, 0 < x < 65536. If HostNetwork\n          # is specified, this must match ContainerPort. Most\n          # containers do not need this.\n          hostPort: 1\n          # If specified, this must be an IANA_SVC_NAME and unique\n          # within the pod. Each named port in a pod must have a unique\n          # name. Name for the port that can be referred to by\n          # services.\n          name: string\n          # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n          # to \"TCP\".\n          protocol: \"TCP\"\n        # Set to \"true\" to use SSL; if \"true\" then 'ssl_key_file'\n        # and 'ssl_cert_file' must be provided\n        ssl: false sslCertFile: \"\"\n        # Files containing the SSL private Key and the SSL certificate\n        # for. If required, a self signed certificate (expires after 10\n        # years) can be generated via the command: openssl req -newkey\n        # rsa:2048 -new -nodes -x509 \\ -days 3650 -keyout key.pem -out\n        # cert.pem\n        sslKeyFile: \"\"\n      # ProcessesConfig\n      processes:\n        # Set the maximum number of threads per tom for table\n        # initialization on startup\n        initTablesNumThreadsPerTom: 8\n        # Set the number of parallel calculation threads to use for data\n        # processing use -1 to use the max number of threads\n        # (not recommended)\n        kernelOmpThreads: 3\n        # The maximum number of web server threads to spawn\n        maxHttpThreads: 512\n        # Set the maximum number of threads (both workers and masters)\n        # to be passed to TBB on initialization.  Generally\n        # speaking, 'max_tbb_threads_per_rank' - \"1\" TBB workers will\n        # be created.  Use \"-1\" for no limit.\n        maxTbbThreadsPerRank: \"-1\"\n        # The minimum number of web server threads to spawn\n        minHttpThreads: 8\n        # Set the number of parallel jobs to create for multi-child set\n        # calulations use \"-1\" to use the max number of threads\n        # (not recommended)\n        smOmpThreads: 2\n        # Maximum number of simultaneous threads allocated to a given\n        # request, on each rank. Note that thread allocation may also\n        # be limted by resource group limits and/or system load.\n        subtaskConcurrentyLimit: \"-1\"\n        # Set the number of TaskCalculators per TOM, GPU data\n        # processors.\n        tcsPerTom: \"-1\"\n        # Set the number of TOMs (data container shards) per rank\n        tomsPerRank: 1\n        # Set the number of TaskProcessors per TOM, CPU data\n        # processors.\n        tpsPerTom: \"-1\"\n      # ProcsConfig\n      procs:\n        # Directory where proc files are stored at runtime. Must be a\n        # fully qualified path with execute permission. If not\n        # specified, 'temp_directory' will be used.\n        directory:\n          # PersistentVolumeClaim is a user's request for and claim to a\n          # persistent volume\n          persistVolumeClaim:\n            # APIVersion defines the versioned schema of this\n            # representation of an object. Servers should convert\n            # recognized schemas to the latest internal value, and may\n            # reject unrecognized values. 
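# --- Illustrative fragment (editor's sketch) ---
# Enabling the Postgres wire-protocol proxy and sizing the server thread pools
# with the keys above; the container port is a placeholder (the standard
# PostgreSQL port) and the other values mirror the documented defaults.
postgresProxy:
  enablePostgresProxy: true
  port:
    containerPort: 5432           # placeholder port for the proxy listener
    protocol: "TCP"
  idleConnectionTimeout: 1200
  minThreads: 2
  maxThreads: 64
processes:
  minHttpThreads: 8
  maxHttpThreads: 512
  tomsPerRank: 1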
More info:\n            # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n            apiVersion: app.kinetica.com/v1\n            # Kind is a string value representing the REST resource this\n            # object represents. Servers may infer this from the\n            # endpoint the client submits requests to. Cannot be\n            # updated. In CamelCase. More info:\n            # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n            kind: KineticaCluster\n            # Standard object's metadata. More info:\n            # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n            metadata: {}\n            # spec defines the desired characteristics of a volume\n            # requested by a pod author. More info:\n            # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n            spec:\n              # accessModes contains the desired access modes the volume\n              # should have. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n              accessModes: [\"string\"]\n              # dataSource field can be used to specify either: * An\n              # existing VolumeSnapshot object\n              # (snapshot.storage.k8s.io/VolumeSnapshot) * An existing\n              # PVC (PersistentVolumeClaim) If the provisioner or an\n              # external controller can support the specified data\n              # source, it will create a new volume based on the\n              # contents of the specified data source. When the\n              # AnyVolumeDataSource feature gate is enabled, dataSource\n              # contents will be copied to dataSourceRef, and\n              # dataSourceRef contents will be copied to dataSource\n              # when dataSourceRef.namespace is not specified. If the\n              # namespace is specified, then dataSourceRef will not be\n              # copied to dataSource.\n              dataSource:\n                # APIGroup is the group for the resource being\n                # referenced. If APIGroup is not specified, the\n                # specified Kind must be in the core API group. For any\n                # other third-party types, APIGroup is required.\n                apiGroup: string\n                # Kind is the type of resource being referenced\n                kind: KineticaCluster\n                # Name is the name of resource being referenced\n                name: string\n              # dataSourceRef specifies the object from which to\n              # populate the volume with data, if a non-empty volume is\n              # desired. This may be any object from a non-empty API\n              # group (non core object) or a PersistentVolumeClaim\n              # object. When this field is specified, volume binding\n              # will only succeed if the type of the specified object\n              # matches some installed volume populator or dynamic\n              # provisioner. This field will replace the functionality\n              # of the dataSource field and as such if both fields are\n              # non-empty, they must have the same value. 
For backwards\n              # compatibility, when namespace isn't specified in\n              # dataSourceRef, both fields (dataSource and\n              # dataSourceRef) will be set to the same value\n              # automatically if one of them is empty and the other is\n              # non-empty. When namespace is specified in\n              # dataSourceRef, dataSource isn't set to the same value\n              # and must be empty. There are three important\n              # differences between dataSource and dataSourceRef: *\n              # While dataSource only allows two specific types of\n              # objects, dataSourceRef allows any non-core object, as\n              # well as PersistentVolumeClaim objects. * While\n              # dataSource ignores disallowed values (dropping them),\n              # dataSourceRef preserves all values, and generates an\n              # error if a disallowed value is specified. * While\n              # dataSource only allows local objects, dataSourceRef\n              # allows objects in any namespaces. (Beta) Using this\n              # field requires the AnyVolumeDataSource feature gate to\n              # be enabled. (Alpha) Using the namespace field of\n              # dataSourceRef requires the\n              # CrossNamespaceVolumeDataSource feature gate to be\n              # enabled.\n              dataSourceRef:\n                # APIGroup is the group for the resource being\n                # referenced. If APIGroup is not specified, the\n                # specified Kind must be in the core API group. For any\n                # other third-party types, APIGroup is required.\n                apiGroup: string\n                # Kind is the type of resource being referenced\n                kind: KineticaCluster\n                # Name is the name of resource being referenced\n                name: string\n                # Namespace is the namespace of resource being\n                # referenced Note that when a namespace is specified, a\n                # gateway.networking.k8s.io/ReferenceGrant object is\n                # required in the referent namespace to allow that\n                # namespace's owner to accept the reference. See the\n                # ReferenceGrant documentation for details.(Alpha) This\n                # field requires the CrossNamespaceVolumeDataSource\n                # feature gate to be enabled.\n                namespace: string\n              # resources represents the minimum resources the volume\n              # should have. If RecoverVolumeExpansionFailure feature\n              # is enabled users are allowed to specify resource\n              # requirements that are lower than previous value but\n              # must still be higher than capacity recorded in the\n              # status field of the claim. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n              resources:\n                # Claims lists the names of resources, defined in\n                # spec.resourceClaims, that are used by this container.\n                # This is an alpha field and requires enabling the\n                # DynamicResourceAllocation feature gate. This field is\n                # immutable. It can only be set for containers.\n                claims:\n                - name: string\n                # Limits describes the maximum amount of compute\n                # resources allowed. 
More info:\n                # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                limits: {}\n                # Requests describes the minimum amount of compute\n                # resources required. If Requests is omitted for a\n                # container, it defaults to Limits if that is\n                # explicitly specified, otherwise to an\n                # implementation-defined value. Requests cannot exceed\n                # Limits. More info:\n                # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                requests: {}\n              # selector is a label query over volumes to consider for\n              # binding.\n              selector:\n                # matchExpressions is a list of label selector\n                # requirements. The requirements are ANDed.\n                matchExpressions:\n                - key: string\n                  # operator represents a key's relationship to a set of\n                  # values. Valid operators are In, NotIn, Exists and\n                  # DoesNotExist.\n                  operator: string\n                  # values is an array of string values. If the operator\n                  # is In or NotIn, the values array must be non-empty.\n                  # If the operator is Exists or DoesNotExist, the\n                  # values array must be empty. This array is replaced\n                  # during a strategic merge patch.\n                  values: [\"string\"]\n                # matchLabels is a map of {key,value} pairs. A single\n                # {key,value} in the matchLabels map is equivalent to\n                # an element of matchExpressions, whose key field\n                # is \"key\", the operator is \"In\", and the values array\n                # contains only \"value\". The requirements are ANDed.\n                matchLabels: {}\n              # storageClassName is the name of the StorageClass\n              # required by the claim. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n              storageClassName: string\n              # volumeMode defines what type of volume is required by\n              # the claim. Value of Filesystem is implied when not\n              # included in claim spec.\n              volumeMode: string\n              # volumeName is the binding reference to the\n              # PersistentVolume backing this claim.\n              volumeName: string\n            # status represents the current information/status of a\n            # persistent volume claim. Read-only. More info:\n            # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n            status:\n              # accessModes contains the actual access modes the volume\n              # backing the PVC has. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n              accessModes: [\"string\"]\n              # allocatedResources is the storage resource within\n              # AllocatedResources tracks the capacity allocated to a\n              # PVC. It may be larger than the actual capacity when a\n              # volume expansion operation is requested. For storage\n              # quota, the larger value from allocatedResources and\n              # PVC.spec.resources is used. 
If allocatedResources is\n              # not set, PVC.spec.resources alone is used for quota\n              # calculation. If a volume expansion capacity request is\n              # lowered, allocatedResources is only lowered if there\n              # are no expansion operations in progress and if the\n              # actual volume capacity is equal or lower than the\n              # requested capacity. This is an alpha field and requires\n              # enabling RecoverVolumeExpansionFailure feature.\n              allocatedResources: {}\n              # capacity represents the actual resources of the\n              # underlying volume.\n              capacity: {}\n              # conditions is the current Condition of persistent volume\n              # claim. If underlying persistent volume is being resized\n              # then the Condition will be set to 'ResizeStarted'.\n              conditions:\n              - lastProbeTime: string\n                # lastTransitionTime is the time the condition\n                # transitioned from one status to another.\n                lastTransitionTime: string\n                # message is the human-readable message indicating\n                # details about last transition.\n                message: string\n                # reason is a unique, this should be a short, machine\n                # understandable string that gives the reason for\n                # condition's last transition. If it\n                # reports \"ResizeStarted\" that means the underlying\n                # persistent volume is being resized.\n                reason: string status: string\n                # PersistentVolumeClaimConditionType is a valid value of\n                # PersistentVolumeClaimCondition.Type\n                type: string\n              # phase represents the current phase of\n              # PersistentVolumeClaim.\n              phase: string\n              # resizeStatus stores status of resize operation.\n              # ResizeStatus is not set by default but when expansion\n              # is complete resizeStatus is set to empty string by\n              # resize controller or kubelet. This is an alpha field\n              # and requires enabling RecoverVolumeExpansionFailure\n              # feature.\n              resizeStatus: string\n          # VolumeMount describes a mounting of a Volume within a\n          # container.\n          volumeMount:\n            # Path within the container at which the volume should be\n            # mounted.  Must not contain ':'.\n            mountPath: string\n            # mountPropagation determines how mounts are propagated from\n            # the host to container and the other way around. When not\n            # set, MountPropagationNone is used. This field is beta in\n            # 1.10.\n            mountPropagation: string\n            # This must match the Name of a Volume.\n            name: string\n            # Mounted read-only if true, read-write otherwise (false or\n            # unspecified). Defaults to false.\n            readOnly: true\n            # Path within the volume from which the container's volume\n            # should be mounted. Defaults to \"\" (volume's root).\n            subPath: string\n            # Expanded path within the volume from which the container's\n            # volume should be mounted. Behaves similarly to SubPath\n            # but environment variable references $(VAR_NAME) are\n            # expanded using the container's environment. 
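# --- Illustrative fragment (editor's sketch) ---
# A UDF (procs) directory backed by its own PersistentVolumeClaim and mounted
# into the rank containers, using the fields documented above. The volume
# name, storage class, size and mount path are placeholders.
procs:
  enable: true
  directory:
    persistVolumeClaim:
      metadata: {}
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "standard"    # placeholder storage class
        resources:
          requests:
            storage: "10Gi"             # placeholder size
    volumeMount:
      name: "procs-volume"              # must match the Name of a Volume
      mountPath: "/opt/gpudb/procs"     # placeholder; must have execute permission
      readOnly: false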
Defaults\n            # to \"\" (volume's root). SubPathExpr and SubPath are\n            # mutually exclusive.\n            subPathExpr: string\n        # Enable procs (UDFs)\n        enable: true\n      # SecurityConfig\n      security:\n        # Automatically create accounts for externally-authenticated\n        # users. If 'enable_external_authentication' is \"false\", this\n        # setting has no effect. Note that accounts are not\n        # automatically deleted if users are removed from the external\n        # authentication provider and will be orphaned.\n        autoCreateExternalUsers: false\n        # Automatically add roles passed in via the \"KINETICA_ROLES\"\n        # HTTP header to externally-authenticated users. Specified\n        # roles that do not exist are ignored.\n        # If 'enable_external_authentication' is \"false\", this setting\n        # has no effect. IMPORTANT: DO NOT ENABLE unless the\n        # authentication proxy is configured to block \"KINETICA_ROLES\"\n        # HTTP headers passed in from clients.\n        autoGrantExternalRoles: false\n        # Comma-separated list of roles to revoke from\n        # externally-authenticated users prior to granting roles passed\n        # in via the \"KINETICA_ROLES\" HTTP header, or \"*\" to revoke all\n        # roles. Preceding a role name with an \"!\" overrides the\n        # revocation (e.g. \"*,!foo\" revokes all roles except \"foo\").\n        # Leave blank to disable. If\n        # either 'enable_external_authentication'\n        # or 'auto_grant_external_roles' is \"false\", this setting has\n        # no effect.\n        autoRevokeExternalRoles: false\n        # Enable authorization checks.  When disabled, all requests will\n        # be treated as the administrative user.\n        enableAuthorization: true\n        # Enable external (LDAP, Kerberos, etc.) authentication. User\n        # IDs of externally-authenticated users must be passed in via\n        # the \"REMOTE_USER\" HTTP header from the authentication proxy.\n        # May be used in conjuntion with the 'enable_httpd_proxy'\n        # setting above for an integrated external authentication\n        # solution. IMPORTANT: DO NOT ENABLE unless external access to\n        # GPUdb ports has been blocked via firewall AND the\n        # authentication proxy is configured to block \"REMOTE_USER\"\n        # HTTP headers passed in from clients. server.\n        enableExternalAuthentication: true\n        # ExternalSecurity\n        externalSecurity:\n          # Ranger\n          ranger:\n            # AuthorizerAddress - The network URI for the\n            # ranger_authorizer to start. The URI can be either TCP or\n            # IPC. TCP address is used to indicate the remote\n            # ranger_authorizer which may run at other hosts. The IPC\n            # address is for a local ranger_authorizer. Example\n            # addresses for remote or TCP servers: tcp://127.0.0.1:9293\n            # tcp://HOST_IP:9293 Example address for local IPC servers:\n            # ipc:///tmp/gpudb-ranger-0\n            # security.external.ranger_authorizer.address = ipc://$\n            # {gaia.temp_directory}/gpudb-ranger-0\n            authorizerAddress: \"ipc://$\n            {gaia.temp_directory}/gpudb-ranger-0\"\n            # Remote debugger port used for the ranger_authorizer.\n            # Setting the port to \"0\" disables remote debugging. 
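# --- Illustrative fragment (editor's sketch) ---
# A security block for an externally-authenticated deployment using the keys
# above. Only enable auto-granting of external roles when the authentication
# proxy blocks client-supplied "KINETICA_ROLES" headers.
security:
  requireAuthentication: true
  enableAuthorization: true
  enableExternalAuthentication: true
  autoCreateExternalUsers: true
  autoGrantExternalRoles: false
  minPasswordLength: 4
  unifiedSecurityNamespace: true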
NOTE:\n            # Recommended port to use is \"5005\"\n            # security.external.ranger_authorizer.remote_debug_port =\n            # 0\n            authorizerRemoteDebugPort: 0\n            # AuthorizerTimeout - Ranger Authorizer timeout in seconds\n            # security.external.ranger_authorizer.timeout = 120\n            authorizerTimeout: 120\n            # CacheMinutes- Maximum minutes to hold on to data from\n            # Ranger security.external.ranger.cache_minutes = 60\n            cacheMinutes: 60\n            # Name of the service created on the Ranger Server to manage\n            # this Kinetica instance\n            # security.external.ranger.service_name = kinetica\n            name: \"kinetica\"\n            # ExtURL - URL of Ranger REST API.  E.g.,\n            # https://localhost:6080/ Leave blank for no Ranger Server\n            # security.external.ranger.url =\n            url: string\n        # The minimum allowable password length.\n        minPasswordLength: 4\n        # Require all users to be authenticated.  Disable this to allow\n        # users to access the database as the 'unauthenticated' user.\n        # Useful for situations where the public needs to access the\n        # data.\n        requireAuthentication: true\n        # UnifiedSecurityNamespace - Use a single namespace for internal\n        # and external user IDs and role names. If false, external user\n        # IDs must be prefixed with \"@\" to differentiate them from\n        # internal user IDs and role names (except in the \"REMOTE_USER\"\n        # HTTP header, where the \"@\" is omitted).\n        # unified_security_namespace = true\n        unifiedSecurityNamespace: true\n      # SQLConfig\n      sql:\n        # SQLPlannerAddress is not included as it is just default\n        # always\n        address: \"ipc://${gaia.temp_directory}/gpudb-query-engine-0\"\n        # Enable the cost-based optimizer\n        costBasedOptimization: false\n        # Enable distributed joins\n        distributedJoins: true\n        # Enable distributed operations\n        distributedOperations: true\n        # Enable Query Planner\n        enablePlanner: true\n        # Perform joins between only 2 tables at a time; default is all\n        # tables involved in the operation at once\n        forceBinaryJoins: false\n        # Perform unions/intersections/exceptions between only 2 tables\n        # at a time; default is all tables involved in the operation at\n        # once\n        forceBinarySetOps: false\n        # Max parallel steps\n        maxParallelSteps: 4\n        # Max allowed view nesting levels. Valid range(1-64)\n        maxViewNestingLevels: 16\n        # TTL of the paging results table\n        pagingTableTTL: 20\n        # Enable parallel query evaluation\n        parallelExecution: true\n        # The maximum number of entries in the SQL plan cache.  The\n        # default is \"4000\" entries, but the configurable range\n        # is \"1\" - \"1000000\".  Plan caching will be disabled if the\n        # value is set outside of that range.\n        planCacheSize: 4000\n        # The maximum memory for the query planner to use in Megabytes.\n        plannerMaxMemory: 4096\n        # The maximum stack size for the query planner threads to use in\n        # Megabytes.\n        plannerMaxStack: 6\n        # Query planner timeout in seconds\n        plannerTimeout: 120\n        # Max Query planner threads\n        plannerWorkers: 16\n        # Remote debugger port used for the query planner. 
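# --- Illustrative fragment (editor's sketch) ---
# Pointing external authorization at an Apache Ranger server with the keys
# above; the URL is a placeholder for your Ranger REST endpoint and the other
# values mirror the documented defaults.
security:
  externalSecurity:
    ranger:
      name: "kinetica"                          # Ranger service name for this instance
      url: "https://ranger.example.com:6080/"   # leave blank for no Ranger server
      authorizerTimeout: 120
      cacheMinutes: 60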
Setting the\n        # port to \"0\" disables remote debugging. NOTE:  Recommended\n        # port to use is \"5005\"\n        remoteDebugPort: 5005\n        # TTL of the query cache results table\n        resultsCacheTTL: 60\n        # Enable query results caching\n        resultsCaching: true\n        # Enable rule-based query rewrites\n        ruleBasedOptimization: true\n      # SQLEngineConfig\n      sqlEngine:\n        # Enable the cost-based optimizer\n        costBasedOptimization: false\n        # Name of default collection for user tables\n        defaultSchema: \"\"\n        # Enable distributed joins\n        distributedJoins: true\n        # Enable distributed operations\n        distributedOperations: true\n        # Perform joins between only 2 tables at a time; default is all\n        # tables involved in the operation at once\n        forceBinaryJoins: false\n        # Perform unions/intersections/exceptions between only 2 tables\n        # at a time; default is all tables involved in the operation at\n        # once\n        forceBinarySetOps: false\n        # Max parallel steps\n        maxParallelSteps: 4\n        # Max allowed view nesting levels. Valid range(1-64)\n        maxViewNestingLevels: 16\n        # TTL of the paging results table\n        pagingTableTTL: 20\n        # Enable parallel query evaluation\n        parallelExecution: true\n        # The maximum number of entries in the SQL plan cache.  The\n        # default is \"4000\" entries, but the configurable range\n        # is \"1\" - \"1000000\".  Plan caching will be disabled if the\n        # value is set outside of that range.\n        planCacheSize: 4000\n        # PlannerConfig\n        planner:\n          # Enable Query Planner\n          enablePlanner: true\n          # The maximum memory for the query planner to use in\n          # Megabytes.\n          maxMemory: 4096\n          # The maximum stack size for the query planner threads to use\n          # in Megabytes.\n          maxStack: 6\n          # The network URI for the query planner to start. The URI can\n          # be either TCP or IPC. TCP address is used to indicate the\n          # remote query planner which may run at other hosts. The IPC\n          # address is for a local query planner. Example for remote or\n          # TCP servers: \n          #  #  sql.planner.address  = tcp://127.0.0.1:9293 #\n          #     sql.planner.address  = tcp://HOST_IP:9293 Example for\n          #     local IPC servers: \n          #  #  sql.planner.address  = ipc:///tmp/gpudb-query-engine-0\n          plannerAddress: \"ipc:///tmp/gpudb-query-engine-0\"\n          # Remote debugger port used for the query planner. Setting the\n          # port to \"0\" disables remote debugging. 
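# --- Illustrative fragment (editor's sketch) ---
# SQL planner and caching settings using the keys above; values mirror the
# documented defaults and the enclosing nesting is elided.
sql:
  enablePlanner: true
  costBasedOptimization: false
  ruleBasedOptimization: true
  parallelExecution: true
  maxParallelSteps: 4
  planCacheSize: 4000              # valid range 1 - 1000000; outside disables caching
  plannerTimeout: 120
  plannerWorkers: 16
  resultsCaching: true
  resultsCacheTTL: 60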
NOTE:  Recommended\n          # port to use is \"5005\"\n          remoteDebugPort: 0\n          # Query planner timeout in seconds\n          timeout: 120\n          # Max Query planner threads\n          workers: 16 results:\n          # TTL of the query cache results table\n          cacheTTL: 60\n          # Enable query results caching\n          caching: true\n        # Enable rule-based query rewrites\n        ruleBasedOptimization: true\n        # Name of collection that will be used to store result tables\n        # generated as part of query execution\n        tempCollection: \"__SQL_TEMP\"\n      # StatisticsConfig\n      statistics:\n        # system_metadata.stats_aggr_rowcount = 10000\n        aggrRowCount: 10000\n        # system_metadata.stats_aggr_time = 1\n        aggrTime: 1\n        # Run a statistics server to collect information about Kinetica\n        # and the machines it runs on.\n        enable: true\n        # Statistics server IP address (run on head node) default port\n        # is \"2003\"\n        ipAddress: \"${gaia.host0.address}\"\n        # Statistics server namespace - should be a machine identifier\n        namespace: \"gpudb\" port: 2003\n        # System metadata catalog settings\n        # system_metadata.stats_retention_days = 21\n        retentionDays: 21\n      # TextSearchConfig\n      textSearch:\n        # Enable text search capability within the database.\n        enableTextSearch: false\n        # Number of text indices to start for each rank\n        textIndicesPerTom: 2\n        # Searcher refresh intervals - specifies the maximum delay\n        # (in seconds) between writing to the text search index and\n        # being able to search for the value just written.  A value\n        # of \"0\" insures that writes to the index are immediately\n        # available to be searched.  A more nominal value of \"100\"\n        # should improve ingest speed at the cost of some delay in\n        # being able to text search newly added values.\n        textSearcherRefreshInterval: 20\n        # Use the production capable external text server instead of a\n        # lightweight internal server which should only be used for\n        # light testing. Note: The internal text server is deprecated\n        # and may be removed in future versions.\n        useExternalTextServer: true tieredStorage:\n        # Cold Storage Tiers can be used to extend the storage capacity\n        # of the Persist Tier. Assign a tier strategy with cold storage\n        # to objects that will be infrequently accessed since they will\n        # be moved as needed from the Persist Tier. The Cold Storage\n        # Tier is typically a much larger capacity physical disk or a\n        # cloud-based storage system which may not be as performant as\n        # the Persist Tier storage. A default storage limit and\n        # eviction thresholds can be set across all ranks for a given\n        # Cold Storage Tier, while one or more ranks within a Cold\n        # Storage Tier may be configured to override those defaults.\n        # NOTE: If an object needs to be pulled out of cold storage\n        # during a query, it may need to use the local persist\n        # directory as a temporary swap space. 
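# --- Illustrative fragment (editor's sketch) ---
# Enabling the internal statistics server and text search with the keys above;
# values mirror the documented defaults.
statistics:
  enable: true
  ipAddress: "${gaia.host0.address}"
  port: 2003
  namespace: "gpudb"
  retentionDays: 21
textSearch:
  enableTextSearch: true
  textIndicesPerTom: 2
  textSearcherRefreshInterval: 20
  useExternalTextServer: true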
This may trigger an\n        # eviction of other persisted items to cold storage due to low\n        # disk space condition defined by the watermark settings for\n        # the Persist Tier.\n        coldStorageTier:\n          # ColdStorageAzure\n          coldStorageAzure:\n            # 'base_path'             : A base path based on the\n            #  provider type for this tier.\n            basePath: string clientID: string clientSecret: string\n            # 'connection_timeout'    : Timeout in seconds for\n            #  connecting to this storage provider.\n            connectionTimeout: \"30\"\n            # 'base_path'             : A base path based on the\n            #  provider type for this tier. BasePath string\n            #  `json:\"basePath,omitempty\"`\n            containerName: \"/gpudb/cold_storage\"\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is \"90\", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80 name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\" provisioner: \"docker.io/hostpath\" sasToken:\n            string storageAccountKey: string storageAccountName: string\n            tenantID: string useManagedCredentials: false\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. 
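# --- Illustrative fragment (editor's sketch) ---
# An Azure cold-storage tier using the keys above; the container name, limit
# and watermarks are examples, and credentials would normally come from
# managed identity rather than literal account keys.
tieredStorage:
  coldStorageTier:
    coldStorageAzure:
      containerName: "/gpudb/cold_storage"
      useManagedCredentials: true
      limit: "10Ti"            # example per-rank capacity across all resource groups
      highWatermark: 90        # % used before background eviction starts
      lowWatermark: 80         # % used at which eviction stops
      connectionTimeout: "30"
      waitTimeout: "90"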
More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. 
(Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. 
Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. 
If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, this should be a short, machine\n                  # understandable string that gives the reason for\n                  # condition's last transition. If it\n                  # reports \"ResizeStarted\" that means the underlying\n                  # persistent volume is being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n            # 'wait_timeout'          : Timeout in seconds for reading\n            #  from or writing to this storage provider.\n            waitTimeout: \"90\"\n          # ColdStorageDisk\n          coldStorageDisk:\n            # 'base_path'             : A base path based on the\n            #  provider type for this tier.\n            basePath: string\n            # 'connection_timeout'    : Timeout in seconds for\n            #  connecting to this storage provider.\n            connectionTimeout: \"30\"\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is \"90\", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            provisioner: \"docker.io/hostpath\"\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. 
Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. 
This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. (Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. 
This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. 
More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, this should be a short, machine\n                  # understandable string that gives the reason for\n                  # condition's last transition. If it\n                  # reports \"ResizeStarted\" that means the underlying\n                  # persistent volume is being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. 
This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n            # 'wait_timeout'          : Timeout in seconds for reading\n            #  from or writing to this storage provider.\n            waitTimeout: \"90\"\n          # ColdStorageGCS - Google Cloud Storage-specific *parameter*\n          # names:\n          #  * BucketName        - 'gcs_bucket_name'\n          #  * ProjectID         - 'gcs_project_id' (optional)\n          #  * AccountID         - 'gcs_service_account_id' (optional)\n          #  * AccountPrivateKey - 'gcs_service_account_private_key'\n          #    (optional)\n          #  * AccountKeys       - 'gcs_service_account_keys'\n          #    (optional)\n          #  NOTE: If the 'gcs_service_account_id',\n          #  'gcs_service_account_private_key' and/or\n          #  'gcs_service_account_keys' values are not specified, the\n          #  Google Cloud Client Libraries will attempt to find and use\n          #  service account credentials from the\n          #  GOOGLE_APPLICATION_CREDENTIALS environment variable.\n          coldStorageGCS:\n            accountID: string\n            accountKeys: string\n            accountPrivateKey: string\n            # 'base_path'             : A base path based on the\n            #  provider type for this tier.\n            basePath: string\n            bucketName: string\n            # 'connection_timeout'    : Timeout in seconds for\n            #  connecting to this storage provider.\n            connectionTimeout: \"30\"\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is \"90\", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            projectID: string\n            provisioner: \"docker.io/hostpath\"\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. 
More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. 
There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. (Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. 
If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. 
If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, this should be a short, machine\n                  # understandable string that gives the reason for\n                  # condition's last transition. If it\n                  # reports \"ResizeStarted\" that means the underlying\n                  # persistent volume is being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n            # 'wait_timeout'          : Timeout in seconds for reading\n            #  from or writing to this storage provider.\n            waitTimeout: \"90\"\n          # ColdStorageHDFS\n          coldStorageHDFS:\n            # ColdStorageDisk\n            default:\n              # 'base_path'             : A base path based on the\n              #  provider type for this tier.\n              basePath: string\n              # 'connection_timeout'    : Timeout in seconds for\n              #  connecting to this storage provider.\n              connectionTimeout: \"30\"\n              # * 'high_watermark' : Percentage used eviction threshold.\n              #    Once usage exceeds this value, evictions from this\n              #    tier will be scheduled in the background and\n              #    continue until the 'low_watermark' percentage usage\n              #    is reached.  
Default is \"90\", signifying a 90%\n              #    memory usage threshold.\n              highWatermark: 90\n              # * 'limit'          : The maximum (bytes) per rank that\n              #    can be allocated across all resource groups.\n              limit: \"1Gi\"\n              # * 'low_watermark'  : Percentage used recovery threshold.\n              #    Once usage exceeds the 'high_watermark', evictions\n              #    will continue until usage falls below this recovery\n              #    threshold. Default is \"80\", signifying an 80% usage\n              #    threshold.\n              lowWatermark: 80 name: string\n              # A base directory to use as a space for this tier.\n              path: \"default\" provisioner: \"docker.io/hostpath\"\n              # Kubernetes Persistent Volume Claim for this disk tier.\n              volumeClaim:\n                # APIVersion defines the versioned schema of this\n                # representation of an object. Servers should convert\n                # recognized schemas to the latest internal value, and\n                # may reject unrecognized values. More info:\n                # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n                apiVersion: app.kinetica.com/v1\n                # Kind is a string value representing the REST resource\n                # this object represents. Servers may infer this from\n                # the endpoint the client submits requests to. Cannot\n                # be updated. In CamelCase. More info:\n                # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n                kind: KineticaCluster\n                # Standard object's metadata. More info:\n                # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n                metadata: {}\n                # spec defines the desired characteristics of a volume\n                # requested by a pod author. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n                spec:\n                  # accessModes contains the desired access modes the\n                  # volume should have. More info:\n                  # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                  accessModes: [\"string\"]\n                  # dataSource field can be used to specify either: * An\n                  # existing VolumeSnapshot object\n                  # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                  # existing PVC (PersistentVolumeClaim) If the\n                  # provisioner or an external controller can support\n                  # the specified data source, it will create a new\n                  # volume based on the contents of the specified data\n                  # source. When the AnyVolumeDataSource feature gate\n                  # is enabled, dataSource contents will be copied to\n                  # dataSourceRef, and dataSourceRef contents will be\n                  # copied to dataSource when dataSourceRef.namespace\n                  # is not specified. If the namespace is specified,\n                  # then dataSourceRef will not be copied to\n                  # dataSource.\n                  dataSource:\n                    # APIGroup is the group for the resource being\n                    # referenced. 
If APIGroup is not specified, the\n                    # specified Kind must be in the core API group. For\n                    # any other third-party types, APIGroup is\n                    # required.\n                    apiGroup: string\n                    # Kind is the type of resource being referenced\n                    kind: KineticaCluster\n                    # Name is the name of resource being referenced\n                    name: string\n                  # dataSourceRef specifies the object from which to\n                  # populate the volume with data, if a non-empty\n                  # volume is desired. This may be any object from a\n                  # non-empty API group (non core object) or a\n                  # PersistentVolumeClaim object. When this field is\n                  # specified, volume binding will only succeed if the\n                  # type of the specified object matches some installed\n                  # volume populator or dynamic provisioner. This field\n                  # will replace the functionality of the dataSource\n                  # field and as such if both fields are non-empty,\n                  # they must have the same value. For backwards\n                  # compatibility, when namespace isn't specified in\n                  # dataSourceRef, both fields (dataSource and\n                  # dataSourceRef) will be set to the same value\n                  # automatically if one of them is empty and the other\n                  # is non-empty. When namespace is specified in\n                  # dataSourceRef, dataSource isn't set to the same\n                  # value and must be empty. There are three important\n                  # differences between dataSource and dataSourceRef: *\n                  # While dataSource only allows two specific types of\n                  # objects, dataSourceRef allows any non-core object,\n                  # as well as PersistentVolumeClaim objects. * While\n                  # dataSource ignores disallowed values\n                  # (dropping them), dataSourceRef preserves all\n                  # values, and generates an error if a disallowed\n                  # value is specified. * While dataSource only allows\n                  # local objects, dataSourceRef allows objects in any\n                  # namespaces. (Beta) Using this field requires the\n                  # AnyVolumeDataSource feature gate to be enabled.\n                  # (Alpha) Using the namespace field of dataSourceRef\n                  # requires the CrossNamespaceVolumeDataSource feature\n                  # gate to be enabled.\n                  dataSourceRef:\n                    # APIGroup is the group for the resource being\n                    # referenced. If APIGroup is not specified, the\n                    # specified Kind must be in the core API group. 
For\n                    # any other third-party types, APIGroup is\n                    # required.\n                    apiGroup: string\n                    # Kind is the type of resource being referenced\n                    kind: KineticaCluster\n                    # Name is the name of resource being referenced\n                    name: string\n                    # Namespace is the namespace of resource being\n                    # referenced Note that when a namespace is\n                    # specified, a\n                    # gateway.networking.k8s.io/ReferenceGrant object\n                    # is required in the referent namespace to allow\n                    # that namespace's owner to accept the reference.\n                    # See the ReferenceGrant documentation for\n                    # details. (Alpha) This field requires the\n                    # CrossNamespaceVolumeDataSource feature gate to be\n                    # enabled.\n                    namespace: string\n                  # resources represents the minimum resources the\n                  # volume should have. If\n                  # RecoverVolumeExpansionFailure feature is enabled\n                  # users are allowed to specify resource requirements\n                  # that are lower than previous value but must still\n                  # be higher than capacity recorded in the status\n                  # field of the claim. More info:\n                  # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                  resources:\n                    # Claims lists the names of resources, defined in\n                    # spec.resourceClaims, that are used by this\n                    # container. This is an alpha field and requires\n                    # enabling the DynamicResourceAllocation feature\n                    # gate. This field is immutable. It can only be set\n                    # for containers.\n                    claims:\n                    - name: string\n                    # Limits describes the maximum amount of compute\n                    # resources allowed. More info:\n                    # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                    limits: {}\n                    # Requests describes the minimum amount of compute\n                    # resources required. If Requests is omitted for a\n                    # container, it defaults to Limits if that is\n                    # explicitly specified, otherwise to an\n                    # implementation-defined value. Requests cannot\n                    # exceed Limits. More info:\n                    # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                    requests: {}\n                  # selector is a label query over volumes to consider\n                  # for binding.\n                  selector:\n                    # matchExpressions is a list of label selector\n                    # requirements. The requirements are ANDed.\n                    matchExpressions:\n                    - key: string\n                      # operator represents a key's relationship to a\n                      # set of values. Valid operators are In, NotIn,\n                      # Exists and DoesNotExist.\n                      operator: string\n                      # values is an array of string values. 
If the\n                      # operator is In or NotIn, the values array must\n                      # be non-empty. If the operator is Exists or\n                      # DoesNotExist, the values array must be empty.\n                      # This array is replaced during a strategic merge\n                      # patch.\n                      values: [\"string\"]\n                    # matchLabels is a map of {key,value} pairs. A\n                    # single {key,value} in the matchLabels map is\n                    # equivalent to an element of matchExpressions,\n                    # whose key field is \"key\", the operator is \"In\",\n                    # and the values array contains only \"value\". The\n                    # requirements are ANDed.\n                    matchLabels: {}\n                  # storageClassName is the name of the StorageClass\n                  # required by the claim. More info:\n                  # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                  storageClassName: string\n                  # volumeMode defines what type of volume is required\n                  # by the claim. Value of Filesystem is implied when\n                  # not included in claim spec.\n                  volumeMode: string\n                  # volumeName is the binding reference to the\n                  # PersistentVolume backing this claim.\n                  volumeName: string\n                # status represents the current information/status of a\n                # persistent volume claim. Read-only. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n                status:\n                  # accessModes contains the actual access modes the\n                  # volume backing the PVC has. More info:\n                  # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                  accessModes: [\"string\"]\n                  # allocatedResources is the storage resource within\n                  # AllocatedResources tracks the capacity allocated to\n                  # a PVC. It may be larger than the actual capacity\n                  # when a volume expansion operation is requested. For\n                  # storage quota, the larger value from\n                  # allocatedResources and PVC.spec.resources is used.\n                  # If allocatedResources is not set,\n                  # PVC.spec.resources alone is used for quota\n                  # calculation. If a volume expansion capacity request\n                  # is lowered, allocatedResources is only lowered if\n                  # there are no expansion operations in progress and\n                  # if the actual volume capacity is equal or lower\n                  # than the requested capacity. This is an alpha field\n                  # and requires enabling RecoverVolumeExpansionFailure\n                  # feature.\n                  allocatedResources: {}\n                  # capacity represents the actual resources of the\n                  # underlying volume.\n                  capacity: {}\n                  # conditions is the current Condition of persistent\n                  # volume claim. 
If underlying persistent volume is\n                  # being resized then the Condition will be set\n                  # to 'ResizeStarted'.\n                  conditions:\n                  - lastProbeTime: string\n                    # lastTransitionTime is the time the condition\n                    # transitioned from one status to another.\n                    lastTransitionTime: string\n                    # message is the human-readable message indicating\n                    # details about last transition.\n                    message: string\n                    # reason is a unique, this should be a short,\n                    # machine understandable string that gives the\n                    # reason for condition's last transition. If it\n                    # reports \"ResizeStarted\" that means the underlying\n                    # persistent volume is being resized.\n                    reason: string\n                    status: string\n                    # PersistentVolumeClaimConditionType is a valid\n                    # value of PersistentVolumeClaimCondition.Type\n                    type: string\n                  # phase represents the current phase of\n                  # PersistentVolumeClaim.\n                  phase: string\n                  # resizeStatus stores status of resize operation.\n                  # ResizeStatus is not set by default but when\n                  # expansion is complete resizeStatus is set to empty\n                  # string by resize controller or kubelet. This is an\n                  # alpha field and requires enabling\n                  # RecoverVolumeExpansionFailure feature.\n                  resizeStatus: string\n              # 'wait_timeout'          : Timeout in seconds for reading\n              #  from or writing to this storage provider.\n              waitTimeout: \"90\"\n            # 'hdfs_kerberos_keytab'  : The Kerberos keytab file used to\n            #  authenticate the \"gpudb\" Kerberos principal.\n            kerberosKeytab: string\n            # 'hdfs_principal'        : The effective principal name to\n            #  use when connecting to the Hadoop cluster.\n            principal: string\n            # 'hdfs_uri'              : The host IP address & port for\n            #  the Hadoop distributed file system. For example:\n            #  hdfs://localhost:8020\n            uri: string\n            # 'hdfs_use_kerberos'     : Set to \"true\" to enable Kerberos\n            #  authentication to an HDFS storage server. The\n            #  credentials of the principal are in the file specified\n            #  by the 'hdfs_kerberos_keytab' parameter. 
Note that\n            #  Kerberos's *kinit* command will be run when the database\n            #  is started.\n            useKerberos: true\n          # ColdStorageS3\n          coldStorageS3:\n            awsAccessKeyId: string\n            awsRoleARN: string\n            awsSecretAccessKey: string\n            # 'base_path'             : A base path based on the\n            #  provider type for this tier.\n            basePath: string\n            bucketName: string\n            # 'connection_timeout'    : Timeout in seconds for\n            #  connecting to this storage provider.\n            connectionTimeout: \"30\"\n            encryptionCustomerAlgorithm: string\n            encryptionCustomerKey: string\n            # EncryptionType - This is optional and valid values are\n            # sse-s3 (Encryption key is managed by Amazon S3) and\n            # sse-kms (Encryption key is managed by AWS Key Management\n            # Service (kms)).\n            encryptionType: string\n            # Endpoint - s3_endpoint\n            endpoint: string\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is \"90\", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # KMSKeyID - This is optional and must be specified when\n            # encryption type is sse-kms.\n            kmsKeyID: string\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            provisioner: \"docker.io/hostpath\"\n            region: string\n            useManagedCredentials: true\n            # UseVirtualAddressing - 's3_use_virtual_addressing'  : If\n            # true (default), S3 endpoints will be constructed using\n            # the 'virtual' style which includes the bucket name as\n            # part of the hostname. Set to false to use the 'path'\n            # style which treats the bucket name as if it is a path in\n            # the URI.\n            useVirtualAddressing: true\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. 
More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. 
There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. (Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. 
If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. 
If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, this should be a short, machine\n                  # understandable string that gives the reason for\n                  # condition's last transition. If it\n                  # reports \"ResizeStarted\" that means the underlying\n                  # persistent volume is being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n            # 'wait_timeout'          : Timeout in seconds for reading\n            #  from or writing to this storage provider.\n            waitTimeout: \"90\"\n          # ColdStorageType - The storage provider type. Currently,\n          # supports \"none\", \"disk\" (local/network storage), \"hdfs\"\n          # (Hadoop distributed filesystem), \"s3\" (Amazon S3\n          # bucket), \"azure_blob\" (Microsoft Azure Blob Storage)\n          # and \"gcs\" (Google GCS Bucket).\n          coldStorageType: \"none\"\n          name: string\n        # The DiskCacheTier is used as temporary swap space for data\n        # that doesn't fit in RAM or VRAM. 
The disk should be as fast\n        # or faster than the Persist Tier storage since this tier is\n        # used as an intermediary cache between the RAM and Persist\n        # Tiers.\n        diskCacheTier:\n          # DiskTierStorageLimit\n          default:\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is \"90\", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            provisioner: \"docker.io/hostpath\"\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. 
More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. 
(Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. 
Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. 
If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, this should be a short, machine\n                  # understandable string that gives the reason for\n                  # condition's last transition. If it\n                  # reports \"ResizeStarted\" that means the underlying\n                  # persistent volume is being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n          defaultStorePersistentObjects: true\n          ranks:\n          - highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            provisioner: \"docker.io/hostpath\"\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. 
More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. 
There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. (Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. 
If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. 
If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, this should be a short, machine\n                  # understandable string that gives the reason for\n                  # condition's last transition. If it\n                  # reports \"ResizeStarted\" that means the underlying\n                  # persistent volume is being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n        # GlobalTier Parameters\n        globalTier:\n          # Co-locates all disks to a single disk i.e. persist, cache,\n          # UDF will be on a single PVC.\n          colocateDisks: true\n          # Timeout in seconds for subsequent requests to wait on a\n          # locked resource\n          concurrentWaitTimeout: 120\n          # EncryptDataAtRest - Enable disk encryption of data at rest\n          encryptDataAtRest: true\n        # The PersistTier provides the permanent, disk-based storage\n        # for the database; data in this tier is retained across\n        # database restarts.
\n        persistTier:\n          # DiskTierStorageLimit\n          default:\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is \"90\", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            provisioner: \"docker.io/hostpath\"\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. 
More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. 
(Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. 
Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. 
If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, this should be a short, machine\n                  # understandable string that gives the reason for\n                  # condition's last transition. If it\n                  # reports \"ResizeStarted\" that means the underlying\n                  # persistent volume is being resized.\n                  reason: string\n                  status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n          defaultStorePersistentObjects: true\n          ranks:\n          - highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n            # A base directory to use as a space for this tier.\n            path: \"default\"\n            provisioner: \"docker.io/hostpath\"\n            # Kubernetes Persistent Volume Claim for this disk tier.\n            volumeClaim:\n              # APIVersion defines the versioned schema of this\n              # representation of an object. Servers should convert\n              # recognized schemas to the latest internal value, and\n              # may reject unrecognized values. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n              apiVersion: app.kinetica.com/v1\n              # Kind is a string value representing the REST resource\n              # this object represents. Servers may infer this from the\n              # endpoint the client submits requests to. Cannot be\n              # updated. In CamelCase. More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n              kind: KineticaCluster\n              # Standard object's metadata. 
More info:\n              # https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n              metadata: {}\n              # spec defines the desired characteristics of a volume\n              # requested by a pod author. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              spec:\n                # accessModes contains the desired access modes the\n                # volume should have. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # dataSource field can be used to specify either: * An\n                # existing VolumeSnapshot object\n                # (snapshot.storage.k8s.io/VolumeSnapshot) * An\n                # existing PVC (PersistentVolumeClaim) If the\n                # provisioner or an external controller can support the\n                # specified data source, it will create a new volume\n                # based on the contents of the specified data source.\n                # When the AnyVolumeDataSource feature gate is enabled,\n                # dataSource contents will be copied to dataSourceRef,\n                # and dataSourceRef contents will be copied to\n                # dataSource when dataSourceRef.namespace is not\n                # specified. If the namespace is specified, then\n                # dataSourceRef will not be copied to dataSource.\n                dataSource:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                # dataSourceRef specifies the object from which to\n                # populate the volume with data, if a non-empty volume\n                # is desired. This may be any object from a non-empty\n                # API group (non core object) or a\n                # PersistentVolumeClaim object. When this field is\n                # specified, volume binding will only succeed if the\n                # type of the specified object matches some installed\n                # volume populator or dynamic provisioner. This field\n                # will replace the functionality of the dataSource\n                # field and as such if both fields are non-empty, they\n                # must have the same value. For backwards\n                # compatibility, when namespace isn't specified in\n                # dataSourceRef, both fields (dataSource and\n                # dataSourceRef) will be set to the same value\n                # automatically if one of them is empty and the other\n                # is non-empty. When namespace is specified in\n                # dataSourceRef, dataSource isn't set to the same value\n                # and must be empty. 
There are three important\n                # differences between dataSource and dataSourceRef: *\n                # While dataSource only allows two specific types of\n                # objects, dataSourceRef allows any non-core object, as\n                # well as PersistentVolumeClaim objects. * While\n                # dataSource ignores disallowed values (dropping them),\n                # dataSourceRef preserves all values, and generates an\n                # error if a disallowed value is specified. * While\n                # dataSource only allows local objects, dataSourceRef\n                # allows objects in any namespaces. (Beta) Using this\n                # field requires the AnyVolumeDataSource feature gate\n                # to be enabled. (Alpha) Using the namespace field of\n                # dataSourceRef requires the\n                # CrossNamespaceVolumeDataSource feature gate to be\n                # enabled.\n                dataSourceRef:\n                  # APIGroup is the group for the resource being\n                  # referenced. If APIGroup is not specified, the\n                  # specified Kind must be in the core API group. For\n                  # any other third-party types, APIGroup is required.\n                  apiGroup: string\n                  # Kind is the type of resource being referenced\n                  kind: KineticaCluster\n                  # Name is the name of resource being referenced\n                  name: string\n                  # Namespace is the namespace of resource being\n                  # referenced Note that when a namespace is specified,\n                  # a gateway.networking.k8s.io/ReferenceGrant object\n                  # is required in the referent namespace to allow that\n                  # namespace's owner to accept the reference. See the\n                  # ReferenceGrant documentation for details.\n                  # (Alpha) This field requires the\n                  # CrossNamespaceVolumeDataSource feature gate to be\n                  # enabled.\n                  namespace: string\n                # resources represents the minimum resources the volume\n                # should have. If RecoverVolumeExpansionFailure feature\n                # is enabled users are allowed to specify resource\n                # requirements that are lower than previous value but\n                # must still be higher than capacity recorded in the\n                # status field of the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\n                resources:\n                  # Claims lists the names of resources, defined in\n                  # spec.resourceClaims, that are used by this\n                  # container. This is an alpha field and requires\n                  # enabling the DynamicResourceAllocation feature\n                  # gate. This field is immutable. It can only be set\n                  # for containers.\n                  claims:\n                  - name: string\n                  # Limits describes the maximum amount of compute\n                  # resources allowed. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  limits: {}\n                  # Requests describes the minimum amount of compute\n                  # resources required. 
If Requests is omitted for a\n                  # container, it defaults to Limits if that is\n                  # explicitly specified, otherwise to an\n                  # implementation-defined value. Requests cannot\n                  # exceed Limits. More info:\n                  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n                  requests: {}\n                # selector is a label query over volumes to consider for\n                # binding.\n                selector:\n                  # matchExpressions is a list of label selector\n                  # requirements. The requirements are ANDed.\n                  matchExpressions:\n                  - key: string\n                    # operator represents a key's relationship to a set\n                    # of values. Valid operators are In, NotIn, Exists\n                    # and DoesNotExist.\n                    operator: string\n                    # values is an array of string values. If the\n                    # operator is In or NotIn, the values array must be\n                    # non-empty. If the operator is Exists or\n                    # DoesNotExist, the values array must be empty.\n                    # This array is replaced during a strategic merge\n                    # patch.\n                    values: [\"string\"]\n                  # matchLabels is a map of {key,value} pairs. A single\n                  # {key,value} in the matchLabels map is equivalent to\n                  # an element of matchExpressions, whose key field\n                  # is \"key\", the operator is \"In\", and the values\n                  # array contains only \"value\". The requirements are\n                  # ANDed.\n                  matchLabels: {}\n                # storageClassName is the name of the StorageClass\n                # required by the claim. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\n                storageClassName: string\n                # volumeMode defines what type of volume is required by\n                # the claim. Value of Filesystem is implied when not\n                # included in claim spec.\n                volumeMode: string\n                # volumeName is the binding reference to the\n                # PersistentVolume backing this claim.\n                volumeName: string\n              # status represents the current information/status of a\n              # persistent volume claim. Read-only. More info:\n              # https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\n              status:\n                # accessModes contains the actual access modes the\n                # volume backing the PVC has. More info:\n                # https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\n                accessModes: [\"string\"]\n                # allocatedResources is the storage resource within\n                # AllocatedResources tracks the capacity allocated to a\n                # PVC. It may be larger than the actual capacity when a\n                # volume expansion operation is requested. For storage\n                # quota, the larger value from allocatedResources and\n                # PVC.spec.resources is used. If allocatedResources is\n                # not set, PVC.spec.resources alone is used for quota\n                # calculation. 
If a volume expansion capacity request\n                # is lowered, allocatedResources is only lowered if\n                # there are no expansion operations in progress and if\n                # the actual volume capacity is equal or lower than the\n                # requested capacity. This is an alpha field and\n                # requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                allocatedResources: {}\n                # capacity represents the actual resources of the\n                # underlying volume.\n                capacity: {}\n                # conditions is the current Condition of persistent\n                # volume claim. If underlying persistent volume is\n                # being resized then the Condition will be set\n                # to 'ResizeStarted'.\n                conditions:\n                - lastProbeTime: string\n                  # lastTransitionTime is the time the condition\n                  # transitioned from one status to another.\n                  lastTransitionTime: string\n                  # message is the human-readable message indicating\n                  # details about last transition.\n                  message: string\n                  # reason is a unique, this should be a short, machine\n                  # understandable string that gives the reason for\n                  # condition's last transition. If it\n                  # reports \"ResizeStarted\" that means the underlying\n                  # persistent volume is being resized.\n                  reason: string status: string\n                  # PersistentVolumeClaimConditionType is a valid value\n                  # of PersistentVolumeClaimCondition.Type\n                  type: string\n                # phase represents the current phase of\n                # PersistentVolumeClaim.\n                phase: string\n                # resizeStatus stores status of resize operation.\n                # ResizeStatus is not set by default but when expansion\n                # is complete resizeStatus is set to empty string by\n                # resize controller or kubelet. This is an alpha field\n                # and requires enabling RecoverVolumeExpansionFailure\n                # feature.\n                resizeStatus: string\n        # The RAMTier represents the RAM available for data storage per\n        # rank. The RAM Tier is NOT used for small, non-data objects or\n        # variables that are allocated and deallocated for program flow\n        # control or used to store metadata or other similar\n        # information; these continue to use either the stack or the\n        # regular runtime memory allocator. This tier should be sized\n        # on each machine such that there is sufficient RAM left over\n        # to handle this overhead, as well as the needs of other\n        # processes running on the same machine.\n        ramTier:\n          # The RAM Tier represents the RAM available for data storage\n          # per rank. The RAM Tier is NOT used for small, non-data\n          # objects or variables that are allocated and deallocated for\n          # program flow control or used to store metadata or other\n          # similar information; these continue to use either the stack\n          # or the regular runtime memory allocator. 
This tier should\n          # be sized on each machine such that there is sufficient RAM\n          # left over to handle this overhead, as well as the needs of\n          # other processes running on the same machine. A default\n          # memory limit and eviction thresholds can be set across all\n          # ranks, while one or more ranks may be configured to\n          # override those defaults. The general format for RAM\n          # settings: \n          #  # tier.ram.[default|rank<#>].<parameter> Valid *parameter*\n          #    names include: \n          #  * 'limit'          : The maximum RAM (bytes) per rank that\n          #     can be allocated across all resource groups.  Default\n          #     is -1, signifying no limit and ignore watermark\n          #     settings. * 'high_watermark' : RAM percentage used\n          #     eviction threshold.  Once memory usage exceeds this\n          #     value, evictions from this tier will be scheduled in\n          #     the background and continue until the 'low_watermark'\n          #     percentage usage is reached.  Default is \"90\",\n          #     signifying a 90% memory usage\n          #     threshold. * 'low_watermark'  : RAM percentage used\n          #     recovery threshold.  Once memory usage exceeds\n          #     the 'high_watermark', evictions will continue until\n          #     memory usage falls below this recovery threshold.\n          #     Default is \"50\", signifying a 50% memory usage\n          #     threshold.\n          default:\n            # * 'high_watermark' : Percentage used eviction threshold.\n            #    Once usage exceeds this value, evictions from this\n            #    tier will be scheduled in the background and continue\n            #    until the 'low_watermark' percentage usage is reached.\n            #    Default is \"90\", signifying a 90% memory usage\n            #    threshold.\n            highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n          # The maximum RAM (bytes) for processing data at rank 0.\n          # Overrides the overall default RAM tier\n          # limit. #tier.ram.rank0.limit = -1\n          ranks:\n          - highWatermark: 90\n            # * 'limit'          : The maximum (bytes) per rank that can\n            #    be allocated across all resource groups.\n            limit: \"1Gi\"\n            # * 'low_watermark'  : Percentage used recovery threshold.\n            #    Once usage exceeds the 'high_watermark', evictions\n            #    will continue until usage falls below this recovery\n            #    threshold. Default is \"80\", signifying an 80% usage\n            #    threshold.\n            lowWatermark: 80\n            name: string\n      tieredStrategy:\n        # Default strategy to apply to tables or columns when one was\n        # not provided during table creation. This strategy is also\n        # applied to a resource group that does not specify one at time\n        # of creation. 
The strategy is formed by chaining together the\n        # tier types and their respective eviction priorities. Any\n        # given tier may appear no more than once in the chain and the\n        # priority must be in range \"1\" - \"10\", where \"1\" is the lowest\n        # priority (first to be evicted) and \"9\" is the highest\n        # priority (last to be evicted).  A priority of \"10\" indicates\n        # that an object is unevictable. Each tier's priority is in\n        # relation to the priority of other objects in the same tier;\n        # e.g., \"RAM 9, DISK2 1\" indicates that an object will be the\n        # highest evictable priority among objects in the RAM Tier\n        # (last evicted), but that it will be the lowest priority among\n        # objects in the Disk Tier named 'disk2' (first evicted).  Note\n        # that since an object can only have one Disk Tier instance in\n        # its strategy, the corresponding priority will only apply in\n        # relation to other objects in Disk Tier instance 'disk2'. See\n        # the Tiered Storage section for more information about tier\n        # type names. Format: <tier1> <priority>, <tier2> <priority>,\n        # <tier3> <priority>, ... Examples using a Disk Tier\n        # named 'disk2' and a Cold Storage Tier 'cold0': vram 3, ram 5,\n        # disk2 3, persist 10 vram 3, ram 5, disk2 3, persist 6, cold0\n        # 10 tier_strategy.default = VRAM 1, RAM 5, PERSIST 5\n        default: \"VRAM 1, RAM 5, PERSIST 5\"\n        # Predicate evaluation interval (in minutes) -  indicates the\n        # interval at which the tier strategy predicates are evaluated\n        predicateEvaluationInterval: 60\n      video:\n        # System default TTL for videos. Time-to-live (TTL) is the\n        # number of minutes before a video will expire and be removed,\n        # or -1 to disable. video_default_ttl = -1\n        defaultTTL: \"-1\"\n        # The maximum number of videos to allow on the system. Set to 0\n        # to disable video rendering.  Set to -1 to allow an unlimited\n        # number of videos. video_max_count = -1\n        maxCount: \"-1\"\n        # Directory where video files should be temporarily stored while\n        # rendering. Only accessed by rank 0. video_temp_directory = $\n        # {gaia.temp_directory}/gpudb-temp-videos\n        tmpDir: \"${gaia.temp_directory}/gpudb-temp-videos\"\n      # VisualizationConfig\n      visualization:\n        # Enable level-of-details rendering for fast interaction with\n        # large WKT polygon data.  Only available for the OpenGL\n        # renderer (when 'enable_opengl_renderer' is \"true\").\n        enableLODRendering: true\n        # If \"true\", enable hardware-accelerated OpenGL renderer;\n        # if \"false\", use the software-based Cairo renderer.\n        enableOpenGLRenderer: true\n        # If \"true\", enable Vector Tile Service (VTS) to support\n        # client-side visualization of geospatial data. Enabling this\n        # option increases memory usage on ingestion.\n        enableVectorTileService: false\n        # Longitude and latitude ranges of geospatial data for which\n        # level-of-details representations are being generated. The\n        # parameter order is: <min_longitude> <min_latitude>\n        # <max_longitude> <max_latitude> The default values span over\n        # the world, but the level-of-details rendering becomes more\n        # efficient when the precise extent of geospatial data is\n        # specified. 
kubebuilder:default:={ -180, -90, 180, 90 }\n        lodDataExtent: [integer]\n        # The extent to which shape data are pre-processed for\n        # level-of-details rendering during data insert/load or\n        # processed on-the-fly in rendering time. This is a trade-off\n        # between speed and memory. The higher the value, the faster\n        # level-of-details rendering is, but the more memory is used\n        # for storing processed shape data. The maximum level is \"10\"\n        # (most shape data are pre-processed) and the minimum level\n        # is \"0\".\n        lodPreProcessingLevel: 5\n        # The number of subregions in horizontal and vertical geospatial\n        # data extent. The default values of \"12 6\" divide the world\n        # into subregions of 30 degree (lon.) x 30 degree (lat.)\n        lodSubRegionNum: [12,6]\n        # A base image resolution (width and height in pixels) at which\n        # a subregion would be rendered in a global view spanning over\n        # the whole dataset. Based on this resolution level-of-details\n        # representations are generated for the polygons located in the\n        # subregion.\n        lodSubRegionResolution: [512,512]\n        # Maximum heatmap size (in pixels) that can be generated. This\n        # reserves 'max_heatmap_size' ^ 2 * 8 bytes of GPU memory\n        # at **rank0**\n        maxHeatmapSize: 3072\n        # The maximum number of levels in the level-of-details\n        # rendering. As the number increases, level-of-details\n        # rendering becomes effective at higher zoom levels, but it may\n        # increase memory usage for storing level-of-details\n        # representations.\n        maxLODLevel: 8\n        # Input geometries are pre-processed upon ingestion for faster\n        # vector tile generation. This parameter determines the\n        # zoomlevel at which the vector tile pre-processing stops. A\n        # vector tile request for a higher zoomlevel than this\n        # parameter takes additional time because the vector tile needs\n        # to be generated on the fly.\n        maxVectorTileZoomLevel: 8\n        # Input geometries are pre-processed upon ingestion for faster\n        # vector tile generation. This parameter determines the\n        # zoomlevel from which the vector tile pre-processing starts. A\n        # vector tile request for a lower zoomlevel than this parameter\n        # takes additional time because the vector tile needs to be\n        # generated on the fly.\n        minVectorTileZoomLevel: 1\n        # The number of samples to use for antialiasing. Higher numbers\n        # will improve image quality but require more GPU memory to\n        # store the samples on worker ranks.  This affects only the\n        # OpenGL renderer. Value may be \"0\", \"4\", \"8\" or \"16\". When \"0\"\n        # antialiasing is disabled. The default value is \"0\".\n        openGLAntialiasingLevel: 1\n        # Threshold number of points (per-TOM) at which point rendering\n        # switches to fast mode.\n        pointRenderThreshold: 100000\n        # Single-precision coordinates are used for usual rendering\n        # processes, but depending on the precision of geometry data\n        # and use case, double precision processing may be required at\n        # a high zoomlevel. 
Double precision rendering processes are\n        # used from the zoomlevel specified by this parameter, which\n        # corresponds to a zoomlevel of TMS or Google map service.\n        renderingPrecisionThreshold: 30\n        # The image width/height (in pixels) of svg symbols cached in\n        # the OpenGL symbol cache.\n        symbolResolution: 100\n        # The width/height (in pixels) of an OpenGL texture which caches\n        # symbol images for OpenGL rendering.\n        symbolTextureSize: 4000\n        # Threshold for the number of points (per-TOM) after which\n        # symbology rendering falls back to regular rendering\n        symbologyRenderThreshold: 10000\n        # The name of map tiler used for Vector Tile Service. \"google\"\n        # and \"tms\" map tilers are supported currently. This parameter\n        # should be matched with the map tiler of clients' vector tile\n        # renderer.\n        vectorTileMapTiler: \"google\"\n      workbench:\n        # Start the Workbench app on the head host when host manager is\n        # started. enable_workbench = false\n        enable: false\n        # # HTTP server port for Workbench if enabled. workbench_port =\n        #   8000\n        port:\n          # Number of port to expose on the pod's IP address. This must\n          # be a valid port number, 0 < x < 65536.\n          containerPort: 1\n          # What host IP to bind the external port to.\n          hostIP: string\n          # Number of port to expose on the host. If specified, this\n          # must be a valid port number, 0 < x < 65536. If HostNetwork\n          # is specified, this must match ContainerPort. Most\n          # containers do not need this.\n          hostPort: 1\n          # If specified, this must be an IANA_SVC_NAME and unique\n          # within the pod. Each named port in a pod must have a unique\n          # name. Name for the port that can be referred to by\n          # services.\n          name: string\n          # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n          # to \"TCP\".\n          protocol: \"TCP\"\n    # The fully qualified URL used on the Ingress records for any\n    # exposed services. Completed by the Operator. DO NOT POPULATE\n    # MANUALLY.\n    fqdn: \"\"\n    # The name of the parent HA Ring this cluster belongs to.\n    haRingName: \"default\"\n    # Whether to enable the separate node 'pools' for \"infra\", \"compute\"\n    # pod scheduling. Default: false\n    hasPools: true\n    # The port the HostManager will be running in each pod in the\n    # cluster. Default: 9300, TCP\n    hostManagerPort:\n      # Number of port to expose on the pod's IP address. This must be a\n      # valid port number, 0 < x < 65536.\n      containerPort: 1\n      # What host IP to bind the external port to.\n      hostIP: string\n      # Number of port to expose on the host. If specified, this must be\n      # a valid port number, 0 < x < 65536. If HostNetwork is\n      # specified, this must match ContainerPort. Most containers do\n      # not need this.\n      hostPort: 1\n      # If specified, this must be an IANA_SVC_NAME and unique within\n      # the pod. Each named port in a pod must have a unique name. Name\n      # for the port that can be referred to by services.\n      name: string\n      # Protocol for port. Must be UDP, TCP, or SCTP. 
Defaults\n      # to \"TCP\".\n      protocol: \"TCP\"\n    # Set the name of the container image to use.\n    image: \"kinetica/kinetica-k8s-intel:v7.1.6.0\"\n    # Set the policy for pulling container images.\n    imagePullPolicy: \"IfNotPresent\"\n    # ImagePullSecrets is an optional list of references to secrets in\n    # the same gpudb-namespace to use for pulling any of the images\n    # used by this PodSpec. If specified, these secrets will be passed\n    # to individual puller implementations for them to use. For\n    # example, in the case of docker, only DockerConfig type secrets\n    # are honored.\n    imagePullSecrets:\n    - name: string\n    # Labels - Pod labels to be applied to the Statefulset DB pods.\n    labels: {}\n    # LetsEncrypt - Configuration for LetsEncrypt certificate\n    # generation.\n    letsEncrypt:\n      # Enable LetsEncrypt for Certificate generation.\n      enabled: false\n      # LetsEncryptEnvironment\n      environment: \"staging\"\n    # Set the Kinetica DB License.\n    license: string\n    # Periodic probe of container liveness. Container will be restarted\n    # if the probe fails. Cannot be updated. More info:\n    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n    livenessProbe:\n      # Minimum consecutive failures for the probe to be considered\n      # failed after having succeeded. Defaults to 3. Minimum value is\n      # 1.\n      failureThreshold: 3\n      # Number of seconds after the container has started before\n      # liveness probes are initiated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      initialDelaySeconds: 10\n      # How often (in seconds) to perform the probe. Default to 10\n      # seconds. Minimum value is 1.\n      periodSeconds: 10\n    # LoggerConfig - Kinetica DB Logger Configuration Object. Configures\n    # the LOG4CPLUS logger for the DB. Field takes a string containing\n    # the full configuration. If not specified a template file is used\n    # during DB configuration generation.\n    loggerConfig:\n      configString: string\n    # Metrics - DB Metrics scrape & forward configuration for\n    # `fluent-bit`.\n    metricsRegistryRepositoryTag:\n      # Set the policy for pulling container images.\n      imagePullPolicy: \"IfNotPresent\"\n      # ImagePullSecrets is an optional list of references to secrets in\n      # the same gpudb-namespace to use for pulling any of the images\n      # used by this PodSpec. If specified, these secrets will be\n      # passed to individual puller implementations for them to use.\n      # For example, in the case of docker, only DockerConfig type\n      # secrets are honored.\n      imagePullSecrets:\n      - name: string\n      # The image registry & optional port containing the repository.\n      registry: \"docker.io\"\n      # The image repository path.\n      repository: \"kineticadevcloud/\"\n      # SemVer = Semantic Version for the Tag SemVer semver.Version\n      semVer: string\n      # The image sha.\n      sha: \"\"\n      # The image tag.\n      tag: \"v7.1.5.2\"\n    # Metrics - `fluent-bit` container requests/limits.\n    metricsResources:\n      # Claims lists the names of resources, defined in\n      # spec.resourceClaims, that are used by this container. This is\n      # an alpha field and requires enabling the\n      # DynamicResourceAllocation feature gate. This field is\n      # immutable. 
It can only be set for containers.\n      claims:\n      - name: string\n      # Limits describes the maximum amount of compute resources\n      # allowed. More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      limits: {}\n      # Requests describes the minimum amount of compute resources\n      # required. If Requests is omitted for a container, it defaults\n      # to Limits if that is explicitly specified, otherwise to an\n      # implementation-defined value. Requests cannot exceed Limits.\n      # More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      requests: {}\n    # NodeSelector - NodeSelector to be applied to the DB Pods\n    nodeSelector: {}\n    # Do not use internal Operator field only.\n    originalReplicas: 1\n    # podManagementPolicy controls how pods are created during initial\n    # scale up, when replacing pods on nodes, or when scaling down. The\n    # default policy is `OrderedReady`, where pods are created in\n    # increasing order (pod-0, then pod-1, etc) and the controller will\n    # wait until each pod is ready before continuing. When scaling\n    # down, the pods are removed in the opposite order. The alternative\n    # policy is `Parallel` which will create pods in parallel to match\n    # the desired scale without waiting, and on scale down will delete\n    # all pods at once.\n    podManagementPolicy: \"Parallel\"\n    # Number of ranks per node as a uint16 i.e. 1-65535 ranks per node.\n    # Default: 1\n    ranksPerNode: 1\n    # Periodic probe of container service readiness. Container will be\n    # removed from service endpoints if the probe fails. Cannot be\n    # updated. More info:\n    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n    readinessProbe:\n      # Minimum consecutive failures for the probe to be considered\n      # failed after having succeeded. Defaults to 3. Minimum value is\n      # 1.\n      failureThreshold: 3\n      # Number of seconds after the container has started before\n      # liveness probes are initiated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      initialDelaySeconds: 10\n      # How often (in seconds) to perform the probe. Default to 10\n      # seconds. Minimum value is 1.\n      periodSeconds: 10\n    # The number of DB ranks i.e. replicas that the cluster will spin\n    # up. Default: 3\n    replicas: 3\n    # Limit the resources a DB Pod can consume.\n    resources:\n      # Claims lists the names of resources, defined in\n      # spec.resourceClaims, that are used by this container. This is\n      # an alpha field and requires enabling the\n      # DynamicResourceAllocation feature gate. This field is\n      # immutable. It can only be set for containers.\n      claims:\n      - name: string\n      # Limits describes the maximum amount of compute resources\n      # allowed. More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      limits: {}\n      # Requests describes the minimum amount of compute resources\n      # required. If Requests is omitted for a container, it defaults\n      # to Limits if that is explicitly specified, otherwise to an\n      # implementation-defined value. 
Requests cannot exceed Limits.\n      # More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      requests: {}\n    # SecurityContext holds security configuration that will be applied\n    # to a container. Some fields are present in both SecurityContext\n    # and PodSecurityContext.  When both are set, the values in\n    # SecurityContext take precedence.\n    securityContext:\n      # AllowPrivilegeEscalation controls whether a process can gain\n      # more privileges than its parent process. This bool directly\n      # controls if the no_new_privs flag will be set on the container\n      # process. AllowPrivilegeEscalation is true always when the\n      # container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note\n      # that this field cannot be set when spec.os.name is windows.\n      allowPrivilegeEscalation: true\n      # The capabilities to add/drop when running containers. Defaults\n      # to the default set of capabilities granted by the container\n      # runtime. Note that this field cannot be set when spec.os.name\n      # is windows.\n      capabilities:\n        # Added capabilities\n        add: [\"string\"]\n        # Removed capabilities\n        drop: [\"string\"]\n      # Run container in privileged mode. Processes in privileged\n      # containers are essentially equivalent to root on the host.\n      # Defaults to false. Note that this field cannot be set when\n      # spec.os.name is windows.\n      privileged: true\n      # procMount denotes the type of proc mount to use for the\n      # containers. The default is DefaultProcMount which uses the\n      # container runtime defaults for readonly paths and masked paths.\n      # This requires the ProcMountType feature flag to be enabled.\n      # Note that this field cannot be set when spec.os.name is\n      # windows.\n      procMount: string\n      # Whether this container has a read-only root filesystem. Default\n      # is false. Note that this field cannot be set when spec.os.name\n      # is windows.\n      readOnlyRootFilesystem: true\n      # The GID to run the entrypoint of the container process. Uses\n      # runtime default if unset. May also be set in\n      # PodSecurityContext.  If set in both SecurityContext and\n      # PodSecurityContext, the value specified in SecurityContext\n      # takes precedence. Note that this field cannot be set when\n      # spec.os.name is windows.\n      runAsGroup: 1\n      # Indicates that the container must run as a non-root user. If\n      # true, the Kubelet will validate the image at runtime to ensure\n      # that it does not run as UID 0 (root) and fail to start the\n      # container if it does. If unset or false, no such validation\n      # will be performed. May also be set in PodSecurityContext.  If\n      # set in both SecurityContext and PodSecurityContext, the value\n      # specified in SecurityContext takes precedence.\n      runAsNonRoot: true\n      # The UID to run the entrypoint of the container process. Defaults\n      # to user specified in image metadata if unspecified. May also be\n      # set in PodSecurityContext.  If set in both SecurityContext and\n      # PodSecurityContext, the value specified in SecurityContext\n      # takes precedence. Note that this field cannot be set when\n      # spec.os.name is windows.\n      runAsUser: 1\n      # The SELinux context to be applied to the container. 
If\n      # unspecified, the container runtime will allocate a random\n      # SELinux context for each container.  May also be set in\n      # PodSecurityContext.  If set in both SecurityContext and\n      # PodSecurityContext, the value specified in SecurityContext\n      # takes precedence. Note that this field cannot be set when\n      # spec.os.name is windows.\n      seLinuxOptions:\n        # Level is SELinux level label that applies to the container.\n        level: string\n        # Role is a SELinux role label that applies to the container.\n        role: string\n        # Type is a SELinux type label that applies to the container.\n        type: string\n        # User is a SELinux user label that applies to the container.\n        user: string\n      # The seccomp options to use by this container. If seccomp options\n      # are provided at both the pod & container level, the container\n      # options override the pod options. Note that this field cannot\n      # be set when spec.os.name is windows.\n      seccompProfile:\n        # localhostProfile indicates a profile defined in a file on the\n        # node should be used. The profile must be preconfigured on the\n        # node to work. Must be a descending path, relative to the\n        # kubelet's configured seccomp profile location. Must only be\n        # set if type is \"Localhost\".\n        localhostProfile: string\n        # type indicates which kind of seccomp profile will be applied.\n        # Valid options are: Localhost - a profile defined in a file on\n        # the node should be used. RuntimeDefault - the container\n        # runtime default profile should be used. Unconfined - no\n        # profile should be applied.\n        type: string\n      # The Windows specific settings applied to all containers. If\n      # unspecified, the options from the PodSecurityContext will be\n      # used. If set in both SecurityContext and PodSecurityContext,\n      # the value specified in SecurityContext takes precedence. Note\n      # that this field cannot be set when spec.os.name is linux.\n      windowsOptions:\n        # GMSACredentialSpec is where the GMSA admission webhook\n        # (https://github.com/kubernetes-sigs/windows-gmsa) inlines the\n        # contents of the GMSA credential spec named by the\n        # GMSACredentialSpecName field.\n        gmsaCredentialSpec: string\n        # GMSACredentialSpecName is the name of the GMSA credential spec\n        # to use.\n        gmsaCredentialSpecName: string\n        # HostProcess determines if a container should be run as a 'Host\n        # Process' container. This field is alpha-level and will only\n        # be honored by components that enable the\n        # WindowsHostProcessContainers feature flag. Setting this field\n        # without the feature flag will result in errors when\n        # validating the Pod. All of a Pod's containers must have the\n        # same effective HostProcess value (it is not allowed to have a\n        # mix of HostProcess containers and non-HostProcess\n        # containers).  In addition, if HostProcess is true then\n        # HostNetwork must also be set to true.\n        hostProcess: true\n        # The UserName in Windows to run the entrypoint of the container\n        # process. Defaults to the user specified in image metadata if\n        # unspecified. May also be set in PodSecurityContext. 
If set in\n        # both SecurityContext and PodSecurityContext, the value\n        # specified in SecurityContext takes precedence.\n        runAsUserName: string\n    # StartupProbe indicates that the Pod has successfully initialized.\n    # If specified, no other probes are executed until this completes\n    # successfully. If this probe fails, the Pod will be restarted,\n    # just as if the livenessProbe failed. This can be used to provide\n    # different probe parameters at the beginning of a Pod's lifecycle,\n    # when it might take a long time to load data or warm a cache, than\n    # during steady-state operation. This cannot be updated. This is an\n    # alpha feature enabled by the StartupProbe feature flag. More\n    # info:\n    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n    startupProbe:\n      # Minimum consecutive failures for the probe to be considered\n      # failed after having succeeded. Defaults to 3. Minimum value is\n      # 1.\n      failureThreshold: 3\n      # Number of seconds after the container has started before\n      # liveness probes are initiated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      initialDelaySeconds: 10\n      # How often (in seconds) to perform the probe. Default to 10\n      # seconds. Minimum value is 1.\n      periodSeconds: 10\n  # HostManagerMonitor is used to monitor the Kinetica DB Ranks. If a\n  # rank is unavailable for the specified time(MaxRankFailureCount) the\n  # cluster will be restarted.\n  hostManagerMonitor:\n    # The HostMonitor Port for the DB StartupProbe, ReadinessProbe and\n    # Liveness probes. Default: 8888\n    db_healthz_port:\n      # Number of port to expose on the pod's IP address. This must be a\n      # valid port number, 0 < x < 65536.\n      containerPort: 1\n      # What host IP to bind the external port to.\n      hostIP: string\n      # Number of port to expose on the host. If specified, this must be\n      # a valid port number, 0 < x < 65536. If HostNetwork is\n      # specified, this must match ContainerPort. Most containers do\n      # not need this.\n      hostPort: 1\n      # If specified, this must be an IANA_SVC_NAME and unique within\n      # the pod. Each named port in a pod must have a unique name. Name\n      # for the port that can be referred to by services.\n      name: string\n      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n      # to \"TCP\".\n      protocol: \"TCP\"\n    # The HostMonitor Port for the DB StartupProbe, ReadinessProbe and\n    # Liveness probes. Default: 8889\n    hm_healthz_port:\n      # Number of port to expose on the pod's IP address. This must be a\n      # valid port number, 0 < x < 65536.\n      containerPort: 1\n      # What host IP to bind the external port to.\n      hostIP: string\n      # Number of port to expose on the host. If specified, this must be\n      # a valid port number, 0 < x < 65536. If HostNetwork is\n      # specified, this must match ContainerPort. Most containers do\n      # not need this.\n      hostPort: 1\n      # If specified, this must be an IANA_SVC_NAME and unique within\n      # the pod. Each named port in a pod must have a unique name. Name\n      # for the port that can be referred to by services.\n      name: string\n      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n      # to \"TCP\".\n      protocol: \"TCP\"\n    # Periodic probe of container liveness. 
Container will be restarted\n    # if the probe fails. Cannot be updated. More info:\n    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n    livenessProbe:\n      # Minimum consecutive failures for the probe to be considered\n      # failed after having succeeded. Defaults to 3. Minimum value is\n      # 1.\n      failureThreshold: 3\n      # Number of seconds after the container has started before\n      # liveness probes are initiated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      initialDelaySeconds: 10\n      # How often (in seconds) to perform the probe. Default to 10\n      # seconds. Minimum value is 1.\n      periodSeconds: 10\n    # Set the name of the container image to use.\n    monitorRegistryRepositoryTag:\n      # Set the policy for pulling container images.\n      imagePullPolicy: \"IfNotPresent\"\n      # ImagePullSecrets is an optional list of references to secrets in\n      # the same gpudb-namespace to use for pulling any of the images\n      # used by this PodSpec. If specified, these secrets will be\n      # passed to individual puller implementations for them to use.\n      # For example, in the case of docker, only DockerConfig type\n      # secrets are honored.\n      imagePullSecrets:\n      - name: string\n      # The image registry & optional port containing the repository.\n      registry: \"docker.io\"\n      # The image repository path.\n      repository: \"kineticadevcloud/\"\n      # SemVer = Semantic Version for the Tag SemVer semver.Version\n      semVer: string\n      # The image sha.\n      sha: \"\"\n      # The image tag.\n      tag: \"v7.1.5.2\"\n    # Periodic probe of container service readiness. Container will be\n    # removed from service endpoints if the probe fails. Cannot be\n    # updated. More info:\n    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n    readinessProbe:\n      # Minimum consecutive failures for the probe to be considered\n      # failed after having succeeded. Defaults to 3. Minimum value is\n      # 1.\n      failureThreshold: 3\n      # Number of seconds after the container has started before\n      # liveness probes are initiated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      initialDelaySeconds: 10\n      # How often (in seconds) to perform the probe. Default to 10\n      # seconds. Minimum value is 1.\n      periodSeconds: 10\n    # Allow for overriding resource requests/limits.\n    resources:\n      # Claims lists the names of resources, defined in\n      # spec.resourceClaims, that are used by this container. This is\n      # an alpha field and requires enabling the\n      # DynamicResourceAllocation feature gate. This field is\n      # immutable. It can only be set for containers.\n      claims:\n      - name: string\n      # Limits describes the maximum amount of compute resources\n      # allowed. More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      limits: {}\n      # Requests describes the minimum amount of compute resources\n      # required. If Requests is omitted for a container, it defaults\n      # to Limits if that is explicitly specified, otherwise to an\n      # implementation-defined value. 
Requests cannot exceed Limits.\n      # More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      requests: {}\n    # StartupProbe indicates that the Pod has successfully initialized.\n    # If specified, no other probes are executed until this completes\n    # successfully. If this probe fails, the Pod will be restarted,\n    # just as if the livenessProbe failed. This can be used to provide\n    # different probe parameters at the beginning of a Pod's lifecycle,\n    # when it might take a long time to load data or warm a cache, than\n    # during steady-state operation. This cannot be updated. This is an\n    # alpha feature enabled by the StartupProbe feature flag. More\n    # info:\n    # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n    startupProbe:\n      # Minimum consecutive failures for the probe to be considered\n      # failed after having succeeded. Defaults to 3. Minimum value is\n      # 1.\n      failureThreshold: 3\n      # Number of seconds after the container has started before\n      # liveness probes are initiated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      initialDelaySeconds: 10\n      # How often (in seconds) to perform the probe. Default to 10\n      # seconds. Minimum value is 1.\n      periodSeconds: 10\n  # The platform infrastructure provider e.g. azure, aws, gcp, on-prem\n  # etc.\n  infra: \"on-prem\"\n  # The Kubernetes Ingress Controller will be running on e.g.\n  # ingress-nginx, Traefik, Ambassador, Gloo, Kong etc.\n  ingressController: \"nginx\"\n  # The LDAP server to connect to.\n  ldap:\n    # BaseDN - The root base LDAP Distinguished Name to use as the base\n    # for the LDAP usage\n    baseDN: \"dc=kinetica,dc=com\"\n    # BindDN - The LDAP Distinguished Name to use for the LDAP\n    # connectivity/data connectivity/bind\n    bindDN: \"cn=admin,dc=kinetica,dc=com\"\n    # Host - The name of the host to connect to. If IsInLocalK8S=true\n    # then supply only the name e.g. `openldap` Default: openldap\n    host: \"openldap\"\n    # IsInLocalK8S - Is the LDAP server co-located in the same K8s\n    # cluster the operator is running in. Default: true\n    isInLocalK8S: true\n    # IsLDAPS - IUse LDAPS instead of LDAP Default: false\n    isLDAPS: false\n    # Namespace - The namespace the Default: openldap\n    namespace: \"gpudb\"\n    # Port - Defaults to LDAP Port 389 Default: 389\n    port: 389\n  # Tells the operator to use Cloud Provider Pay As You Go\n  # functionality.\n  payAsYouGo: false\n  # The Reveal Dashboard Configuration for the Kinetica Cluster.\n  reveal:\n    # The port that Reveal will be running on. It runs only on the head\n    # node pod in the cluster. Default: 8080\n    containerPort:\n      # Number of port to expose on the pod's IP address. This must be a\n      # valid port number, 0 < x < 65536.\n      containerPort: 1\n      # What host IP to bind the external port to.\n      hostIP: string\n      # Number of port to expose on the host. If specified, this must be\n      # a valid port number, 0 < x < 65536. If HostNetwork is\n      # specified, this must match ContainerPort. Most containers do\n      # not need this.\n      hostPort: 1\n      # If specified, this must be an IANA_SVC_NAME and unique within\n      # the pod. Each named port in a pod must have a unique name. 
Name\n      # for the port that can be referred to by services.\n      name: string\n      # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n      # to \"TCP\".\n      protocol: \"TCP\"\n    # The Ingress Endpoint that Reveal will be running on.\n    ingressPath:\n      # backend defines the referenced service endpoint to which the\n      # traffic will be forwarded to.\n      backend:\n        # resource is an ObjectRef to another Kubernetes resource in the\n        # namespace of the Ingress object. If resource is specified,\n        # serviceName and servicePort must not be specified.\n        resource:\n          # APIGroup is the group for the resource being referenced. If\n          # APIGroup is not specified, the specified Kind must be in\n          # the core API group. For any other third-party types,\n          # APIGroup is required.\n          apiGroup: string\n          # Kind is the type of resource being referenced\n          kind: KineticaCluster\n          # Name is the name of resource being referenced\n          name: string\n        # serviceName specifies the name of the referenced service.\n        serviceName: string\n        # servicePort Specifies the port of the referenced service.\n        servicePort: \n      # path is matched against the path of an incoming request.\n      # Currently it can contain characters disallowed from the\n      # conventional \"path\" part of a URL as defined by RFC 3986. Paths\n      # must begin with a '/' and must be present when using PathType\n      # with value \"Exact\" or \"Prefix\".\n      path: string\n      # pathType determines the interpretation of the path matching.\n      # PathType can be one of the following values: * Exact: Matches\n      # the URL path exactly. * Prefix: Matches based on a URL path\n      # prefix split by '/'. Matching is done on a path element by\n      # element basis. A path element refers is the list of labels in\n      # the path split by the '/' separator. A request is a match for\n      # path p if every p is an element-wise prefix of p of the request\n      # path. Note that if the last element of the path is a substring\n      # of the last element in request path, it is not a match\n      # (e.g. /foo/bar matches /foo/bar/baz, but does not\n      # match /foo/barbaz). * ImplementationSpecific: Interpretation of\n      # the Path matching is up to the IngressClass. Implementations\n      # can treat this as a separate PathType or treat it identically\n      # to Prefix or Exact path types. Implementations are required to\n      # support all path types. Defaults to ImplementationSpecific.\n      pathType: string\n    # Whether to enable the Reveal Dashboard on the Cluster. 
Default:\n    # true\n    isEnabled: true\n  # The Stats server to deploy & connect to if required.\n  stats:\n    # AlertManager - AlertManager specific configuration.\n    alertManager:\n      # Set the arguments for the command within the container to run.\n      args:\n      [\"-c\",\"/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug\n      --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090\n      --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage\n      --storage.tsdb.retention.time=7d  --web.enable-lifecycle\"]\n      # Set the command within the container to run.\n      command: [\"/bin/sh\"]\n      # ConfigFile - Set the location of the Loki configuration file.\n      configFile: \"/opt/gpudb/kagent/stats/loki/loki.yml\"\n      # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a\n      # ConfigMap\n      configFileAsConfigMap: true\n      # The port that Stats will be running on. It runs only on the head\n      # node pod in the cluster. Default: 9091\n      containerPort:\n        # Number of port to expose on the pod's IP address. This must be\n        # a valid port number, 0 < x < 65536.\n        containerPort: 1\n        # What host IP to bind the external port to.\n        hostIP: string\n        # Number of port to expose on the host. If specified, this must\n        # be a valid port number, 0 < x < 65536. If HostNetwork is\n        # specified, this must match ContainerPort. Most containers do\n        # not need this.\n        hostPort: 1\n        # If specified, this must be an IANA_SVC_NAME and unique within\n        # the pod. Each named port in a pod must have a unique name.\n        # Name for the port that can be referred to by services.\n        name: string\n        # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n        # to \"TCP\".\n        protocol: \"TCP\"\n      # List of environment variables to set in the container.\n      env:\n      - name: string\n        # Variable references $(VAR_NAME) are expanded using the\n        # previously defined environment variables in the container and\n        # any service environment variables. If a variable cannot be\n        # resolved, the reference in the input string will be\n        # unchanged. Double $$ are reduced to a single $, which allows\n        # for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will\n        # produce the string literal \"$(VAR_NAME)\". Escaped references\n        # will never be expanded, regardless of whether the variable\n        # exists or not. Defaults to \"\".\n        value: string\n        # Source for the environment variable's value. Cannot be used if\n        # value is not empty.\n        valueFrom:\n          # Selects a key of a ConfigMap.\n          configMapKeyRef:\n            # The key to select.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. 
apiVersion, kind, uid?\n            name: string\n            # Specify whether the ConfigMap or its key must be defined\n            optional: true\n          # Selects a field of the pod: supports metadata.name,\n          # metadata.namespace, `metadata.labels\n          # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,\n          # spec.serviceAccountName, status.hostIP, status.podIP,\n          # status.podIPs.\n          fieldRef:\n            # Version of the schema the FieldPath is written in terms\n            # of, defaults to \"v1\".\n            apiVersion: app.kinetica.com/v1\n            # Path of the field to select in the specified API version.\n            fieldPath: string\n          # Selects a resource of the container: only resources limits\n          # and requests (limits.cpu, limits.memory,\n          # limits.ephemeral-storage, requests.cpu, requests.memory and\n          # requests.ephemeral-storage) are currently supported.\n          resourceFieldRef:\n            # Container name: required for volumes, optional for env\n            # vars\n            containerName: string\n            # Specifies the output format of the exposed resources,\n            # defaults to \"1\"\n            divisor: \n            # Required: resource to select\n            resource: string\n          # Selects a key of a secret in the pod's namespace\n          secretKeyRef:\n            # The key of the secret to select from.  Must be a valid\n            # secret key.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. apiVersion, kind, uid?\n            name: string\n            # Specify whether the Secret or its key must be defined\n            optional: true\n      # Set the name of the container image to use.\n      image:\n        # Set the policy for pulling container images.\n        imagePullPolicy: \"IfNotPresent\"\n        # ImagePullSecrets is an optional list of references to secrets\n        # in the same gpudb-namespace to use for pulling any of the\n        # images used by this PodSpec. If specified, these secrets will\n        # be passed to individual puller implementations for them to\n        # use. For example, in the case of docker, only DockerConfig\n        # type secrets are honored.\n        imagePullSecrets:\n        - name: string\n        # The image registry & optional port containing the repository.\n        registry: \"docker.io\"\n        # The image repository path.\n        repository: \"kineticadevcloud/\"\n        # SemVer = Semantic Version for the Tag SemVer semver.Version\n        semVer: string\n        # The image sha.\n        sha: \"\"\n        # The image tag.\n        tag: \"v7.1.5.2\"\n      # Whether to enable the Stats Server on the Cluster. Default:\n      # true\n      isEnabled: true\n      # Periodic probe of container liveness. Container will be\n      # restarted if the probe fails. Cannot be updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      livenessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command  is root ('/') in the\n          # container's filesystem. 
The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). 
This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Logs - Set the location of the logs directory.\n      logs: \"/opt/gpudb/kagent/stats/logs\"\n      name: \"stats\"\n      # Periodic probe of container service readiness. Container will be\n      # removed from service endpoints if the probe fails. Cannot be\n      # updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      readinessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command  is root ('/') in the\n          # container's filesystem. The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. 
Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Resource Requests & Limits for the Stats Pod.\n      resources:\n        # Claims lists the names of resources, defined in\n        # spec.resourceClaims, that are used by this container. This is\n        # an alpha field and requires enabling the\n        # DynamicResourceAllocation feature gate. This field is\n        # immutable. It can only be set for containers.\n        claims:\n        - name: string\n        # Limits describes the maximum amount of compute resources\n        # allowed. More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        limits: {}\n        # Requests describes the minimum amount of compute resources\n        # required. If Requests is omitted for a container, it defaults\n        # to Limits if that is explicitly specified, otherwise to an\n        # implementation-defined value. 
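\n        #\n        # Illustrative sketch only (the cpu/memory values below are\n        # assumptions, not documented defaults): the resources block\n        # described above could be used inside the `stats.alertManager`\n        # section to pin the AlertManager container to modest resources:\n        #\n        #   resources:\n        #     requests:\n        #       cpu: \"100m\"\n        #       memory: \"256Mi\"\n        #     limits:\n        #       cpu: \"500m\"\n        #       memory: \"512Mi\"\n        #\n        # 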
Requests cannot exceed Limits.\n        # More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        requests: {}\n      # StoragePath - Set the location of the AlertManager file\n      # storage.\n      storagePath: \"/opt/gpudb/kagent/stats/storage/alertmanager/alertmanager\"\n      # WebConfigFile - Set the location of the AlertManager\n      # alertmanager-web-config.yml.\n      webConfigFile: \"/opt/gpudb/kagent/stats/alertmanager/alertmanager-web-config.yml\"\n      # WebListenAddress - Set the location of the AlertManager\n      # alertmanager-web-config.yml.\n      webListenAddress: \"0.0.0.0:9089\"\n    # Grafana - Grafana specific configuration.\n    grafana:\n      # Set the arguments for the command within the container to run.\n      args:\n      [\"-c\",\"/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug\n      --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090\n      --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage\n      --storage.tsdb.retention.time=7d  --web.enable-lifecycle\"]\n      # Set the command within the container to run.\n      command: [\"/bin/sh\"]\n      # ConfigFile - Set the location of the Loki configuration file.\n      configFile: \"/opt/gpudb/kagent/stats/loki/loki.yml\"\n      # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a\n      # ConfigMap\n      configFileAsConfigMap: true\n      # The port that Stats will be running on. It runs only on the head\n      # node pod in the cluster. Default: 9091\n      containerPort:\n        # Number of port to expose on the pod's IP address. This must be\n        # a valid port number, 0 < x < 65536.\n        containerPort: 1\n        # What host IP to bind the external port to.\n        hostIP: string\n        # Number of port to expose on the host. If specified, this must\n        # be a valid port number, 0 < x < 65536. If HostNetwork is\n        # specified, this must match ContainerPort. Most containers do\n        # not need this.\n        hostPort: 1\n        # If specified, this must be an IANA_SVC_NAME and unique within\n        # the pod. Each named port in a pod must have a unique name.\n        # Name for the port that can be referred to by services.\n        name: string\n        # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n        # to \"TCP\".\n        protocol: \"TCP\"\n      # List of environment variables to set in the container.\n      env:\n      - name: string\n        # Variable references $(VAR_NAME) are expanded using the\n        # previously defined environment variables in the container and\n        # any service environment variables. If a variable cannot be\n        # resolved, the reference in the input string will be\n        # unchanged. Double $$ are reduced to a single $, which allows\n        # for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will\n        # produce the string literal \"$(VAR_NAME)\". Escaped references\n        # will never be expanded, regardless of whether the variable\n        # exists or not. Defaults to \"\".\n        value: string\n        # Source for the environment variable's value. Cannot be used if\n        # value is not empty.\n        valueFrom:\n          # Selects a key of a ConfigMap.\n          configMapKeyRef:\n            # The key to select.\n            key: string\n            # Name of the referent. 
More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. apiVersion, kind, uid?\n            name: string\n            # Specify whether the ConfigMap or its key must be defined\n            optional: true\n          # Selects a field of the pod: supports metadata.name,\n          # metadata.namespace, `metadata.labels\n          # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,\n          # spec.serviceAccountName, status.hostIP, status.podIP,\n          # status.podIPs.\n          fieldRef:\n            # Version of the schema the FieldPath is written in terms\n            # of, defaults to \"v1\".\n            apiVersion: app.kinetica.com/v1\n            # Path of the field to select in the specified API version.\n            fieldPath: string\n          # Selects a resource of the container: only resources limits\n          # and requests (limits.cpu, limits.memory,\n          # limits.ephemeral-storage, requests.cpu, requests.memory and\n          # requests.ephemeral-storage) are currently supported.\n          resourceFieldRef:\n            # Container name: required for volumes, optional for env\n            # vars\n            containerName: string\n            # Specifies the output format of the exposed resources,\n            # defaults to \"1\"\n            divisor: \n            # Required: resource to select\n            resource: string\n          # Selects a key of a secret in the pod's namespace\n          secretKeyRef:\n            # The key of the secret to select from.  Must be a valid\n            # secret key.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. apiVersion, kind, uid?\n            name: string\n            # Specify whether the Secret or its key must be defined\n            optional: true\n      # HomePath - Set the location of the Grafana home directory.\n      homePath: \"/opt/gpudb/kagent/stats/grafana\"\n      # GraphiteHost - Host Address\n      host: \"0.0.0.0\"\n      # Set the name of the container image to use.\n      image:\n        # Set the policy for pulling container images.\n        imagePullPolicy: \"IfNotPresent\"\n        # ImagePullSecrets is an optional list of references to secrets\n        # in the same gpudb-namespace to use for pulling any of the\n        # images used by this PodSpec. If specified, these secrets will\n        # be passed to individual puller implementations for them to\n        # use. For example, in the case of docker, only DockerConfig\n        # type secrets are honored.\n        imagePullSecrets:\n        - name: string\n        # The image registry & optional port containing the repository.\n        registry: \"docker.io\"\n        # The image repository path.\n        repository: \"kineticadevcloud/\"\n        # SemVer = Semantic Version for the Tag SemVer semver.Version\n        semVer: string\n        # The image sha.\n        sha: \"\"\n        # The image tag.\n        tag: \"v7.1.5.2\"\n      # Whether to enable the Stats Server on the Cluster. Default:\n      # true\n      isEnabled: true\n      # Periodic probe of container liveness. Container will be\n      # restarted if the probe fails. Cannot be updated. 
More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      livenessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command  is root ('/') in the\n          # container's filesystem. The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. 
Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Logs - Set the location of the logs directory.\n      logs: \"/opt/gpudb/kagent/stats/logs\"\n      name: \"stats\"\n      # Periodic probe of container service readiness. Container will be\n      # removed from service endpoints if the probe fails. Cannot be\n      # updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      readinessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command  is root ('/') in the\n          # container's filesystem. The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. 
More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Resource Requests & Limits for the Stats Pod.\n      resources:\n        # Claims lists the names of resources, defined in\n        # spec.resourceClaims, that are used by this container. This is\n        # an alpha field and requires enabling the\n        # DynamicResourceAllocation feature gate. This field is\n        # immutable. It can only be set for containers.\n        claims:\n        - name: string\n        # Limits describes the maximum amount of compute resources\n        # allowed. More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        limits: {}\n        # Requests describes the minimum amount of compute resources\n        # required. If Requests is omitted for a container, it defaults\n        # to Limits if that is explicitly specified, otherwise to an\n        # implementation-defined value. Requests cannot exceed Limits.\n        # More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        requests: {}\n    # Whether to enable the Stats Server on the Cluster. 
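\n    #\n    # Illustrative sketch only (the flag values below are assumptions,\n    # not documented defaults): the per-component `isEnabled` flags\n    # documented above can be combined to switch individual stats\n    # components on or off, e.g.\n    #\n    #   stats:\n    #     isEnabled: true\n    #     grafana:\n    #       isEnabled: false\n    #     alertManager:\n    #       isEnabled: false\n    #\n    # 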
Default: true\n    isEnabled: true\n    # Loki - Loki specific configuration.\n    loki:\n      # Set the arguments for the command within the container to run.\n      args:\n      [\"-c\",\"/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug\n      --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090\n      --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage\n      --storage.tsdb.retention.time=7d  --web.enable-lifecycle\"]\n      # Set the command within the container to run.\n      command: [\"/bin/sh\"]\n      # ConfigFile - Set the location of the Loki configuration file.\n      configFile: \"/opt/gpudb/kagent/stats/loki/loki.yml\"\n      # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a\n      # ConfigMap\n      configFileAsConfigMap: true\n      # The port that Stats will be running on. It runs only on the head\n      # node pod in the cluster. Default: 9091\n      containerPort:\n        # Number of port to expose on the pod's IP address. This must be\n        # a valid port number, 0 < x < 65536.\n        containerPort: 1\n        # What host IP to bind the external port to.\n        hostIP: string\n        # Number of port to expose on the host. If specified, this must\n        # be a valid port number, 0 < x < 65536. If HostNetwork is\n        # specified, this must match ContainerPort. Most containers do\n        # not need this.\n        hostPort: 1\n        # If specified, this must be an IANA_SVC_NAME and unique within\n        # the pod. Each named port in a pod must have a unique name.\n        # Name for the port that can be referred to by services.\n        name: string\n        # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n        # to \"TCP\".\n        protocol: \"TCP\"\n      # List of environment variables to set in the container.\n      env:\n      - name: string\n        # Variable references $(VAR_NAME) are expanded using the\n        # previously defined environment variables in the container and\n        # any service environment variables. If a variable cannot be\n        # resolved, the reference in the input string will be\n        # unchanged. Double $$ are reduced to a single $, which allows\n        # for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will\n        # produce the string literal \"$(VAR_NAME)\". Escaped references\n        # will never be expanded, regardless of whether the variable\n        # exists or not. Defaults to \"\".\n        value: string\n        # Source for the environment variable's value. Cannot be used if\n        # value is not empty.\n        valueFrom:\n          # Selects a key of a ConfigMap.\n          configMapKeyRef:\n            # The key to select.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. 
apiVersion, kind, uid?\n            name: string\n            # Specify whether the ConfigMap or its key must be defined\n            optional: true\n          # Selects a field of the pod: supports metadata.name,\n          # metadata.namespace, `metadata.labels\n          # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,\n          # spec.serviceAccountName, status.hostIP, status.podIP,\n          # status.podIPs.\n          fieldRef:\n            # Version of the schema the FieldPath is written in terms\n            # of, defaults to \"v1\".\n            apiVersion: app.kinetica.com/v1\n            # Path of the field to select in the specified API version.\n            fieldPath: string\n          # Selects a resource of the container: only resources limits\n          # and requests (limits.cpu, limits.memory,\n          # limits.ephemeral-storage, requests.cpu, requests.memory and\n          # requests.ephemeral-storage) are currently supported.\n          resourceFieldRef:\n            # Container name: required for volumes, optional for env\n            # vars\n            containerName: string\n            # Specifies the output format of the exposed resources,\n            # defaults to \"1\"\n            divisor: \n            # Required: resource to select\n            resource: string\n          # Selects a key of a secret in the pod's namespace\n          secretKeyRef:\n            # The key of the secret to select from.  Must be a valid\n            # secret key.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. apiVersion, kind, uid?\n            name: string\n            # Specify whether the Secret or its key must be defined\n            optional: true\n      # ExpandEnv\n      expandEnv: true\n      # Set the name of the container image to use.\n      image:\n        # Set the policy for pulling container images.\n        imagePullPolicy: \"IfNotPresent\"\n        # ImagePullSecrets is an optional list of references to secrets\n        # in the same gpudb-namespace to use for pulling any of the\n        # images used by this PodSpec. If specified, these secrets will\n        # be passed to individual puller implementations for them to\n        # use. For example, in the case of docker, only DockerConfig\n        # type secrets are honored.\n        imagePullSecrets:\n        - name: string\n        # The image registry & optional port containing the repository.\n        registry: \"docker.io\"\n        # The image repository path.\n        repository: \"kineticadevcloud/\"\n        # SemVer = Semantic Version for the Tag SemVer semver.Version\n        semVer: string\n        # The image sha.\n        sha: \"\"\n        # The image tag.\n        tag: \"v7.1.5.2\"\n      # Whether to enable the Stats Server on the Cluster. Default:\n      # true\n      isEnabled: true\n      # Periodic probe of container liveness. Container will be\n      # restarted if the probe fails. Cannot be updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      livenessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command  is root ('/') in the\n          # container's filesystem. 
The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). 
This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Logs - Set the location of the logs directory.\n      logs: \"/opt/gpudb/kagent/stats/logs\"\n      name: \"stats\"\n      # Periodic probe of container service readiness. Container will be\n      # removed from service endpoints if the probe fails. Cannot be\n      # updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      readinessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command  is root ('/') in the\n          # container's filesystem. The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. 
Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Resource Requests & Limits for the Stats Pod.\n      resources:\n        # Claims lists the names of resources, defined in\n        # spec.resourceClaims, that are used by this container. This is\n        # an alpha field and requires enabling the\n        # DynamicResourceAllocation feature gate. This field is\n        # immutable. It can only be set for containers.\n        claims:\n        - name: string\n        # Limits describes the maximum amount of compute resources\n        # allowed. More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        limits: {}\n        # Requests describes the minimum amount of compute resources\n        # required. If Requests is omitted for a container, it defaults\n        # to Limits if that is explicitly specified, otherwise to an\n        # implementation-defined value. Requests cannot exceed Limits.\n        # More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        requests: {}\n      # Storage - Set the path of the Loki storage.\n      storage: \"/opt/gpudb/kagent/stats/storage/loki-storage\"\n    # Which vmss/node group etc. 
to use as the NodeSelector\n    pool: \"compute\"\n    # Prometheus - Prometheus specific configuration.\n    prometheus:\n      # Set the arguments for the command within the container to run.\n      args:\n      [\"-c\",\"/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug\n      --config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090\n      --storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage\n      --storage.tsdb.retention.time=7d  --web.enable-lifecycle\"]\n      # Set the command within the container to run.\n      command: [\"/bin/sh\"]\n      # ConfigFile - Set the location of the Loki configuration file.\n      configFile: \"/opt/gpudb/kagent/stats/loki/loki.yml\"\n      # ConfigFileAsConfigMap - If true the ConfigFile is mounted from a\n      # ConfigMap\n      configFileAsConfigMap: true\n      # The port that Stats will be running on. It runs only on the head\n      # node pod in the cluster. Default: 9091\n      containerPort:\n        # Number of port to expose on the pod's IP address. This must be\n        # a valid port number, 0 < x < 65536.\n        containerPort: 1\n        # What host IP to bind the external port to.\n        hostIP: string\n        # Number of port to expose on the host. If specified, this must\n        # be a valid port number, 0 < x < 65536. If HostNetwork is\n        # specified, this must match ContainerPort. Most containers do\n        # not need this.\n        hostPort: 1\n        # If specified, this must be an IANA_SVC_NAME and unique within\n        # the pod. Each named port in a pod must have a unique name.\n        # Name for the port that can be referred to by services.\n        name: string\n        # Protocol for port. Must be UDP, TCP, or SCTP. Defaults\n        # to \"TCP\".\n        protocol: \"TCP\"\n      # List of environment variables to set in the container.\n      env:\n      - name: string\n        # Variable references $(VAR_NAME) are expanded using the\n        # previously defined environment variables in the container and\n        # any service environment variables. If a variable cannot be\n        # resolved, the reference in the input string will be\n        # unchanged. Double $$ are reduced to a single $, which allows\n        # for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will\n        # produce the string literal \"$(VAR_NAME)\". Escaped references\n        # will never be expanded, regardless of whether the variable\n        # exists or not. Defaults to \"\".\n        value: string\n        # Source for the environment variable's value. Cannot be used if\n        # value is not empty.\n        valueFrom:\n          # Selects a key of a ConfigMap.\n          configMapKeyRef:\n            # The key to select.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. 
apiVersion, kind, uid?\n            name: string\n            # Specify whether the ConfigMap or its key must be defined\n            optional: true\n          # Selects a field of the pod: supports metadata.name,\n          # metadata.namespace, `metadata.labels\n          # ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,\n          # spec.serviceAccountName, status.hostIP, status.podIP,\n          # status.podIPs.\n          fieldRef:\n            # Version of the schema the FieldPath is written in terms\n            # of, defaults to \"v1\".\n            apiVersion: app.kinetica.com/v1\n            # Path of the field to select in the specified API version.\n            fieldPath: string\n          # Selects a resource of the container: only resources limits\n          # and requests (limits.cpu, limits.memory,\n          # limits.ephemeral-storage, requests.cpu, requests.memory and\n          # requests.ephemeral-storage) are currently supported.\n          resourceFieldRef:\n            # Container name: required for volumes, optional for env\n            # vars\n            containerName: string\n            # Specifies the output format of the exposed resources,\n            # defaults to \"1\"\n            divisor: \n            # Required: resource to select\n            resource: string\n          # Selects a key of a secret in the pod's namespace\n          secretKeyRef:\n            # The key of the secret to select from.  Must be a valid\n            # secret key.\n            key: string\n            # Name of the referent. More info:\n            # https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\n            # TODO: Add other useful fields. apiVersion, kind, uid?\n            name: string\n            # Specify whether the Secret or its key must be defined\n            optional: true\n      # Set the name of the container image to use.\n      image:\n        # Set the policy for pulling container images.\n        imagePullPolicy: \"IfNotPresent\"\n        # ImagePullSecrets is an optional list of references to secrets\n        # in the same gpudb-namespace to use for pulling any of the\n        # images used by this PodSpec. If specified, these secrets will\n        # be passed to individual puller implementations for them to\n        # use. For example, in the case of docker, only DockerConfig\n        # type secrets are honored.\n        imagePullSecrets:\n        - name: string\n        # The image registry & optional port containing the repository.\n        registry: \"docker.io\"\n        # The image repository path.\n        repository: \"kineticadevcloud/\"\n        # SemVer = Semantic Version for the Tag SemVer semver.Version\n        semVer: string\n        # The image sha.\n        sha: \"\"\n        # The image tag.\n        tag: \"v7.1.5.2\"\n      # Whether to enable the Stats Server on the Cluster. Default:\n      # true\n      isEnabled: true\n      # Periodic probe of container liveness. Container will be\n      # restarted if the probe fails. Cannot be updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      livenessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command  is root ('/') in the\n          # container's filesystem. 
The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). 
This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Set the Prometheus logging level.\n      logLevel: \"debug\"\n      # Logs - Set the location of the logs directory.\n      logs: \"/opt/gpudb/kagent/stats/logs\"\n      name: \"stats\"\n      # Periodic probe of container service readiness. Container will be\n      # removed from service endpoints if the probe fails. Cannot be\n      # updated. More info:\n      # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n      readinessProbe:\n        # Exec specifies the action to take.\n        exec:\n          # Command is the command line to execute inside the container,\n          # the working directory for the command  is root ('/') in the\n          # container's filesystem. The command is simply exec'd, it is\n          # not run inside a shell, so traditional shell instructions\n          # ('|', etc) won't work. To use a shell, you need to\n          # explicitly call out to that shell. Exit status of 0 is\n          # treated as live/healthy and non-zero is unhealthy.\n          command: [\"string\"]\n        # Minimum consecutive failures for the probe to be considered\n        # failed after having succeeded. Defaults to 3. Minimum value\n        # is 1.\n        failureThreshold: 1\n        # GRPC specifies an action involving a GRPC port.\n        grpc:\n          # Port number of the gRPC service. Number must be in the range\n          # 1 to 65535.\n          port: 1\n          # Service is the name of the service to place in the gRPC\n          # HealthCheckRequest\n          # (see\n          # https://github.com/grpc/grpc/blob/master/doc/health-checking.md).\n          # If this is not specified, the default behavior is defined\n          # by gRPC.\n          service: string\n        # HTTPGet specifies the http request to perform.\n        httpGet:\n          # Host name to connect to, defaults to the pod IP. You\n          # probably want to set \"Host\" in httpHeaders instead.\n          host: string\n          # Custom headers to set in the request. HTTP allows repeated\n          # headers.\n          httpHeaders:\n          - name: string\n            # The header field value\n            value: string\n          # Path to access on the HTTP server.\n          path: string\n          # Name or number of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n          # Scheme to use for connecting to the host. Defaults to HTTP.\n          scheme: string\n        # Number of seconds after the container has started before\n        # liveness probes are initiated. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        initialDelaySeconds: 1\n        # How often (in seconds) to perform the probe. Default to 10\n        # seconds. Minimum value is 1.\n        periodSeconds: 1\n        # Minimum consecutive successes for the probe to be considered\n        # successful after having failed. Defaults to 1. 
Must be 1 for\n        # liveness and startup. Minimum value is 1.\n        successThreshold: 1\n        # TCPSocket specifies an action involving a TCP port.\n        tcpSocket:\n          # Optional: Host name to connect to, defaults to the pod IP.\n          host: string\n          # Number or name of the port to access on the container.\n          # Number must be in the range 1 to 65535. Name must be an\n          # IANA_SVC_NAME.\n          port: \n        # Optional duration in seconds the pod needs to terminate\n        # gracefully upon probe failure. The grace period is the\n        # duration in seconds after the processes running in the pod\n        # are sent a termination signal and the time when the processes\n        # are forcibly halted with a kill signal. Set this value longer\n        # than the expected cleanup time for your process. If this\n        # value is nil, the pod's terminationGracePeriodSeconds will be\n        # used. Otherwise, this value overrides the value provided by\n        # the pod spec. Value must be non-negative integer. The value\n        # zero indicates stop immediately via the kill signal\n        # (no opportunity to shut down). This is a beta field and\n        # requires enabling ProbeTerminationGracePeriod feature gate.\n        # Minimum value is 1. spec.terminationGracePeriodSeconds is\n        # used if unset.\n        terminationGracePeriodSeconds: 1\n        # Number of seconds after which the probe times out. Defaults to\n        # 1 second. Minimum value is 1. More info:\n        # https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes\n        timeoutSeconds: 1\n      # Resource Requests & Limits for the Stats Pod.\n      resources:\n        # Claims lists the names of resources, defined in\n        # spec.resourceClaims, that are used by this container. This is\n        # an alpha field and requires enabling the\n        # DynamicResourceAllocation feature gate. This field is\n        # immutable. It can only be set for containers.\n        claims:\n        - name: string\n        # Limits describes the maximum amount of compute resources\n        # allowed. More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        limits: {}\n        # Requests describes the minimum amount of compute resources\n        # required. If Requests is omitted for a container, it defaults\n        # to Limits if that is explicitly specified, otherwise to an\n        # implementation-defined value. Requests cannot exceed Limits.\n        # More info:\n        # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n        requests: {}\n      # Set the location of the TSDB database.\n      storageTSDBPath: \"/opt/gpudb/kagent/stats/storage/prometheus-storage\"\n      # Set the time to hold data in the TSDB database.\n      storageTSDBRetentionTime: \"7d\"\n      # Timings - Prometheus Intervals & Timeouts\n      timings:\n        evaluationInterval: \"30s\"\n        scrapeInterval: \"30s\"\n        scrapeTimeout: \"10s\"\n    # Whether to share a single PV for Loki, Prometheus & Grafana or\n    # have a separate PV for each. Default: true\n    sharedPV: true\n    # Resource block specifically for use with SharedPV = true to set\n    # storage `requests` & `limits`\n    sharedPVResources:\n      # Claims lists the names of resources, defined in\n      # spec.resourceClaims, that are used by this container. 
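\n      #\n      # Illustrative sketch only (the retention and interval values\n      # below are assumptions, not documented defaults): the Prometheus\n      # retention and scrape timings documented above could be tuned\n      # like this inside the `stats.prometheus` block:\n      #\n      #   prometheus:\n      #     storageTSDBRetentionTime: \"14d\"\n      #     timings:\n      #       evaluationInterval: \"30s\"\n      #       scrapeInterval: \"60s\"\n      #       scrapeTimeout: \"10s\"\n      #\n      # 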
This is\n      # an alpha field and requires enabling the\n      # DynamicResourceAllocation feature gate. This field is\n      # immutable. It can only be set for containers.\n      claims:\n      - name: string\n      # Limits describes the maximum amount of compute resources\n      # allowed. More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      limits: {}\n      # Requests describes the minimum amount of compute resources\n      # required. If Requests is omitted for a container, it defaults\n      # to Limits if that is explicitly specified, otherwise to an\n      # implementation-defined value. Requests cannot exceed Limits.\n      # More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      requests: {}\n  # Supporting images like socat,busybox etc.\n  supportingImages:\n    # Set the resource requests/limits for the BusyBox Pod(s).\n    busyBoxResources:\n      # Claims lists the names of resources, defined in\n      # spec.resourceClaims, that are used by this container. This is\n      # an alpha field and requires enabling the\n      # DynamicResourceAllocation feature gate. This field is\n      # immutable. It can only be set for containers.\n      claims:\n      - name: string\n      # Limits describes the maximum amount of compute resources\n      # allowed. More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      limits: {}\n      # Requests describes the minimum amount of compute resources\n      # required. If Requests is omitted for a container, it defaults\n      # to Limits if that is explicitly specified, otherwise to an\n      # implementation-defined value. Requests cannot exceed Limits.\n      # More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      requests: {}\n    # Set the name of the container image to use.\n    busybox:\n      # Set the policy for pulling container images.\n      imagePullPolicy: \"IfNotPresent\"\n      # ImagePullSecrets is an optional list of references to secrets in\n      # the same gpudb-namespace to use for pulling any of the images\n      # used by this PodSpec. If specified, these secrets will be\n      # passed to individual puller implementations for them to use.\n      # For example, in the case of docker, only DockerConfig type\n      # secrets are honored.\n      imagePullSecrets:\n      - name: string\n      # The image registry & optional port containing the repository.\n      registry: \"docker.io\"\n      # The image repository path.\n      repository: \"kineticadevcloud/\"\n      # SemVer = Semantic Version for the Tag SemVer semver.Version\n      semVer: string\n      # The image sha.\n      sha: \"\"\n      # The image tag.\n      tag: \"v7.1.5.2\"\n    # Set the name of the container image to use.\n    socat:\n      # Set the policy for pulling container images.\n      imagePullPolicy: \"IfNotPresent\"\n      # ImagePullSecrets is an optional list of references to secrets in\n      # the same gpudb-namespace to use for pulling any of the images\n      # used by this PodSpec. 
If specified, these secrets will be\n      # passed to individual puller implementations for them to use.\n      # For example, in the case of docker, only DockerConfig type\n      # secrets are honored.\n      imagePullSecrets:\n      - name: string\n      # The image registry & optional port containing the repository.\n      registry: \"docker.io\"\n      # The image repository path.\n      repository: \"kineticadevcloud/\"\n      # SemVer = Semantic Version for the Tag SemVer semver.Version\n      semVer: string\n      # The image sha.\n      sha: \"\"\n      # The image tag.\n      tag: \"v7.1.5.2\"\n    # Set the resource requests/limits for the Socat Pod.\n    socatResources:\n      # Claims lists the names of resources, defined in\n      # spec.resourceClaims, that are used by this container. This is\n      # an alpha field and requires enabling the\n      # DynamicResourceAllocation feature gate. This field is\n      # immutable. It can only be set for containers.\n      claims:\n      - name: string\n      # Limits describes the maximum amount of compute resources\n      # allowed. More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      limits: {}\n      # Requests describes the minimum amount of compute resources\n      # required. If Requests is omitted for a container, it defaults\n      # to Limits if that is explicitly specified, otherwise to an\n      # implementation-defined value. Requests cannot exceed Limits.\n      # More info:\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n      requests: {}\n# KineticaClusterStatus defines the observed state of KineticaCluster\nstatus:\n  # CloudProvider the DB is deployed on\n  cloudProvider: string\n  # CloudRegion the DB is deployed on\n  cloudRegion: string\n  # ClusterSize the current number of ranks & type i.e. CPU or GPU of\n  # the cluster\n  clusterSize:\n    # ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a\n    # representation of the number of nodes in a simple to understand\n    # T-Shirt size scheme. This indicates the size of the cluster i.e.\n    # the number of nodes. It does not identify the size of the cloud\n    # provider nodes. For node size see ClusterTypeEnum. Supported\n    # Values are: - XS S M L XL XXL XXXL\n    tshirtSize: string\n    # ClusterTypeEnum - An Enum of the node types of a KineticaCluster\n    # e.g. CPU, GPU along with the Cloud Provider node size e.g. size\n    # of the VM.\n    tshirtType: string\n  # The number of ranks (replicas) that the cluster was last run with\n  currentReplicas: 0\n  # The first start of a new cluster has completed.\n  firstStartComplete: false\n  # HostManagerStatusResponse - The contents of polling the HostManager\n  # on port 9300 are added to the CR status field. 
This allows clients\n  # to get the Host/Rank/Graph/ML status information.\n  hmStatus:\n    cluster_leader: string\n    cluster_operation: string\n    graph:\n      status: string\n    graph_status: string\n    host_httpd_status: string\n    host_mode: string\n    host_num_gpus: string\n    host_pid: 1\n    host_stats_status: string\n    host_status: string\n    hostname: string\n    hosts:\n      graph_status: string\n      host_httpd_status: string\n      host_mode: string\n      host_pid: 1\n      host_stats_status: string\n      host_status: string\n      ml_status: string\n      query_planner_status: string\n      reveal_status: string\n    license_expiration: string\n    license_status: string\n    license_type: string\n    ml_status: string\n    query_planner_status: string\n    ranks:\n      mode: string\n      # Pid - The OS Process Id for the Rank.\n      pid: 1\n      status: string\n    reveal_status: string\n    system_idle_time: string\n    system_mode: string\n    system_rebalancing: 1\n    system_status: string\n    text:\n      status: string\n    version: string\n  # The fully qualified Ingress routes.\n  ingressUrls:\n    aaw: string\n    dbMonitor: string\n    files: string\n    gadmin: string\n    postgresProxy: string\n    ranks: {}\n    reveal: string\n  # The fully qualified in-cluster Ingress routes.\n  internalIngressUrls:\n    aaw: string\n    dbMonitor: string\n    files: string\n    gadmin: string\n    postgresProxy: string\n    ranks: {}\n    reveal: string\n  # Identify FreeSaaS Cluster\n  isFreeSaaS: false\n  # HostOptions used during DB Cluster Scaling Functions\n  options:\n    ram_limit: 1\n  # OutstandingBilling - A list of hours not yet billed for. Will only\n  # be present if the plan is Pay As You Go and the operator was unable\n  # to send the billing information due to an issue with the cloud\n  # provider's billing APIs.\n  outstandingBillableHour:\n  - billable: true\n    billed: true\n    billedAt: string\n    duration: string\n    end: string\n    start: string\n  # The state or phase of the current DB installation\n  phase: string\n
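
The status: block above is populated by the Kinetica DB Operator and can be read back with kubectl. A minimal sketch, assuming a KineticaCluster named my-kinetica-cluster in the gpudb namespace (both names are illustrative, not taken from this reference):

Read the KineticaCluster status
# List the KineticaCluster CRs in the namespace (the resource name below assumes the CRD's default naming).\nkubectl -n gpudb get kineticaclusters\n# Read selected fields from the status block, e.g. the install phase and T-Shirt size.\nkubectl -n gpudb get kineticacluster my-kinetica-cluster -o jsonpath='{.status.phase}'\nkubectl -n gpudb get kineticacluster my-kinetica-cluster -o jsonpath='{.status.clusterSize.tshirtSize}'\n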
"},{"location":"Reference/kinetica_workbench/","title":"Kinetica Workbench cRD Reference","text":""},{"location":"Reference/kinetica_workbench/#coming-soon","title":"Coming Soon","text":""},{"location":"Reference/workbench/","title":"Kinetica Workbench Configuration","text":"
  • kubectl (yaml)
  • Helm Chart
"},{"location":"Reference/workbench/#workbench","title":"Workbench","text":"kubectl

Using kubectl, a CustomResource of type Workbench is used to define a new Kinetica Workbench in a YAML file.

The basic Group, Version, Kind (GVK) required to instantiate a Kinetica Workbench is as follows: -

Workbench GVK
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\n
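
The Workbench CRD is installed alongside the Kinetica operators, so the GVK above only resolves once those operators are deployed. One way to confirm this is to query the API group from the GVK; a minimal sketch (only the group name from the GVK above is assumed):

Check the Workbench API group is registered
# Lists the resources served under the workbench.com.kinetica group;\n# an empty result means the Workbench CRD is not yet installed.\nkubectl api-resources --api-group=workbench.com.kinetica\n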
"},{"location":"Reference/workbench/#metadata","title":"Metadata","text":"

To this we add a metadata: block for the name of the Workbench CR along with the namespace into which we are targeting the installation.

Workbench metadata
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\nmetadata:\n  name: workbench-kinetica-cluster\n  namespace: gpudb\n

The simplest valid Workbench CR looks as follows: -

workbench.yaml
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\nmetadata:\n  name: workbench-kinetica-cluster\n  namespace: gpudb\nspec:\n  executeSqlLimit: 10000\n  fqdn: kinetica-cluster.saas.kinetica.com\n  image: kineticastagingcloud/workbench:v7.1.9-8.rc1\n  letsEncrypt:\n    enabled: false\n  userIdleTimeout: 60\n  ingressController: nginx-ingress\n

1. clusterName - the user-defined name of the Kinetica DB Cluster

2. clusterSize - a block that defines the number of DB Ranks to run
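
With the workbench.yaml above saved locally, the CR can be submitted to the cluster with kubectl. A minimal sketch, reusing the gpudb namespace and resource name from the example (the lower-case resource name workbench assumes the CRD's default naming):

Apply the Workbench CR
# Create or update the Workbench CR defined in workbench.yaml.\nkubectl -n gpudb apply -f workbench.yaml\n# Confirm the CR exists; use kubectl describe for more detail if needed.\nkubectl -n gpudb get workbench workbench-kinetica-cluster\n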

helm"},{"location":"Setup/","title":"Kinetica for Kubernetes Setup","text":"
  • Prepare to Install

    What you need to know & do before beginning an installation.

    Preparation and Prerequisites

  • Set up in 15 minutes

    Install the Kinetica DB with Helm and get up and running in minutes.

    Installation

  • Beyond a Simple Installation

    Using the Helm Charts and Kinetica CRDs, it is possible to customize your installation in a number of ways.

"},{"location":"Setup/#advanced-topics","title":"Advanced Topics","text":""},{"location":"Support/","title":"Support","text":"
  • Taking the next steps

    Further tutorials or help on configuring Kinetica in different environments.

    Help & Tutorials

  • Locating Issues

    In the unlikely event that you need to troubleshoot your installation, help can be found here.

    Troubleshooting

"},{"location":"Troubleshooting/troubleshooting/","title":"Troubleshooting","text":""},{"location":"Troubleshooting/troubleshooting/#coming-soon","title":"Coming Soon","text":""},{"location":"blog/","title":"Blog","text":""}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml index 0f8724e..ffdadb1 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -1,3 +1,213 @@ + + https://www.kinetica.com/ + 2024-03-25 + daily + + + https://www.kinetica.com/Advanced/advanced_topics/ + 2024-03-25 + daily + + + https://www.kinetica.com/Architecture/ + 2024-03-25 + daily + + + https://www.kinetica.com/Architecture/db_architecture/ + 2024-03-25 + daily + + + https://www.kinetica.com/Architecture/kinetica_for_kubernetes_architecture/ + 2024-03-25 + daily + + + https://www.kinetica.com/GettingStarted/ + 2024-03-25 + daily + + + https://www.kinetica.com/GettingStarted/aks/ + 2024-03-25 + daily + + + https://www.kinetica.com/GettingStarted/eks/ + 2024-03-25 + daily + + + https://www.kinetica.com/GettingStarted/installation/ + 2024-03-25 + daily + + + https://www.kinetica.com/GettingStarted/k3s/ + 2024-03-25 + daily + + + https://www.kinetica.com/GettingStarted/kind/ + 2024-03-25 + daily + + + https://www.kinetica.com/GettingStarted/preparation_and_prerequisites/ + 2024-03-25 + daily + + + https://www.kinetica.com/GettingStarted/quickstart/ + 2024-03-25 + daily + + + https://www.kinetica.com/Help/help_and_tutorials/ + 2024-03-25 + daily + + + https://www.kinetica.com/Monitoring/logs/ + 2024-03-25 + daily + + + https://www.kinetica.com/Monitoring/metrics_and_monitoring/ + 2024-03-25 + daily + + + https://www.kinetica.com/Operations/ + 2024-03-25 + daily + + + https://www.kinetica.com/Operations/backup_and_restore/ + 2024-03-25 + daily + + + https://www.kinetica.com/Operations/otel/ + 2024-03-25 + daily + + + https://www.kinetica.com/Operators/k3s/ + 2024-03-25 + daily + + + https://www.kinetica.com/Operators/k8s/ + 2024-03-25 + daily + + + https://www.kinetica.com/Operators/kind/ + 2024-03-25 + daily + + + https://www.kinetica.com/Operators/kinetica-operators/ + 2024-03-25 + daily + + + https://www.kinetica.com/Reference/ + 2024-03-25 + daily + + + https://www.kinetica.com/Reference/database/ + 2024-03-25 + daily + + + https://www.kinetica.com/Reference/helm_kinetica_operators/ + 2024-03-25 + daily + + + https://www.kinetica.com/Reference/kinetica_cluster_admins/ + 2024-03-25 + daily + + + https://www.kinetica.com/Reference/kinetica_cluster_backups/ + 2024-03-25 + daily + + + https://www.kinetica.com/Reference/kinetica_cluster_grants/ + 2024-03-25 + daily + + + https://www.kinetica.com/Reference/kinetica_cluster_reference/ + 2024-03-25 + daily + + + https://www.kinetica.com/Reference/kinetica_cluster_resource_groups/ + 2024-03-25 + daily + + + https://www.kinetica.com/Reference/kinetica_cluster_restores/ + 2024-03-25 + daily + + + https://www.kinetica.com/Reference/kinetica_cluster_roles/ + 2024-03-25 + daily + + + https://www.kinetica.com/Reference/kinetica_cluster_schemas/ + 2024-03-25 + daily + + + https://www.kinetica.com/Reference/kinetica_cluster_users/ + 2024-03-25 + daily + + + https://www.kinetica.com/Reference/kinetica_clusters/ + 2024-03-25 + daily + + + https://www.kinetica.com/Reference/kinetica_workbench/ + 2024-03-25 + daily + + + https://www.kinetica.com/Reference/workbench/ + 2024-03-25 + daily + + + https://www.kinetica.com/Setup/ + 2024-03-25 + daily + + + https://www.kinetica.com/Support/ + 2024-03-25 + daily + + + https://www.kinetica.com/Troubleshooting/troubleshooting/ + 
2024-03-25 + daily + + + https://www.kinetica.com/blog/ + 2024-03-25 + daily + \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index 3a57807..632b9af 100644 Binary files a/sitemap.xml.gz and b/sitemap.xml.gz differ