diff --git a/Operators/k8s/index.html b/Operators/k8s/index.html
index 6b2ec6f..bc56fde 100644
--- a/Operators/k8s/index.html
+++ b/Operators/k8s/index.html
@@ -3,7 +3,10 @@
 helm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k8s.yaml --set db.gpudbCluster.license="your_license_key" --set dbAdminUser.password="your_password" --set global.defaultStorageClass="your_default_storage_class"
-# if you want to try out a development version,
-helm search repo kinetica-operators --devel --versions
-helm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k8s.yaml --set db.gpudbCluster.license="your_license_key" --set dbAdminUser.password="your_password" --set global.defaultStorageClass="your_default_storage_class" --devel --version 7.2.0-2.rc-2
+# to find your default storage class, you could do the following
+kubectl get sc -o name
+
+# if you want to try out a development version,
+helm search repo kinetica-operators --devel --versions
+helm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k8s.yaml --set db.gpudbCluster.license="your_license_key" --set dbAdminUser.password="your_password" --set global.defaultStorageClass="your_default_storage_class" --devel --version 7.2.0-2.rc-2
\ No newline at end of file
diff --git a/search/search_index.json b/search/search_index.json
index 65621cd..45aa84e 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Overview","text":"

This repository provides the helm chart for deploying the Kubernetes Operators for Database and Workbench. Once the operators are installed, you should be able to use the Kinetica Database and Workbench CRDs to deploy and manage Kinetica clusters and workbenches.

"},{"location":"#add-kinetica-helm-repository","title":"Add Kinetica Helm Repository","text":"

To add the Kinetica Helm repository to Helm 3:

Bash
helm repo add kinetica-operators https://kineticadb.github.io/charts\nhelm repo update\n\n# if you get a 404 error on the .tgz file, you may do the following\n# you could get this if you had previously added the repo and the chart has been updated in development\nhelm repo remove kinetica-operators\nhelm repo add kinetica-operators https://kineticadb.github.io/charts\n

Installation values will depend on the target kubernetes platform you are using.

This chart provides out-of-the-box support for trying Kinetica out in K3s and Kind clusters. For k3s, you should be able to use either the GPU or the CPU version of the database. On these platforms, a non-production configuration of the Kinetica Database and Workbench is also deployed for you to get started. However, you should be able to adapt the k3s values file to deploy on other platforms as well. For fine-grained configuration of the Database or the Workbench, refer to the Database and Workbench documentation.

We use the same chart for our SaaS and AWS Marketplace offerings. If you want to try Kinetica out via SaaS or AWS Marketplace, follow those links; you need not use this chart directly.

The current version of the chart supports Kubernetes versions 1.25 and above.

"},{"location":"#k3s-k3sio","title":"k3s (k3s.io)","text":"

Refer to Kinetica on K3s

"},{"location":"#kind-kubernetes-in-docker-kindsigsk8sio","title":"Kind (kubernetes in docker kind.sigs.k8s.io)","text":"

Refer to Kinetica on Kind

"},{"location":"#k8s-any-flavour-kubernetesio","title":"K8s - Any flavour (kubernetes.io)","text":"

Refer to Kinetica on K8s

"},{"location":"Database/database/","title":"Kinetica Database Configuration","text":""},{"location":"Database/database/#kineticacluster","title":"KineticaCluster","text":"

To deploy a new Database Instance into a Kubernetes cluster...

kubectl

Using kubectl, a CustomResource of type KineticaCluster is used to define a new Kinetica DB Cluster in a YAML file.

The basic Group, Version, Kind or GVK to instantiate a Kinetica DB Cluster is as follows: -

kineticacluster.yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\n
"},{"location":"Database/database/#metadata","title":"Metadata","text":"

To this we add a metadata: block for the name of the DB CR, along with the namespace into which we are targeting the installation of the DB cluster.

kineticacluster.yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\nname: my-kinetica-db-cr\nnamespace: gpudb\nspec:\n
"},{"location":"Database/database/#spec","title":"Spec","text":"

Under the spec: section of the KineticaCluster CR we have a number of sections supporting different aspects of the deployed DB cluster:-

"},{"location":"Database/database/#gpudbcluster","title":"gpudbCluster","text":"

Configuration items specific to the DB itself.

kineticacluster.yaml - gpudbCluster
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\nname: my-kinetica-db-cr\nnamespace: gpudb\nspec:\ngpudbCluster:\n
"},{"location":"Database/database/#gpudbcluster_1","title":"gpudbCluster","text":"cluster name & size
clusterName: kinetica-cluster\nclusterSize:\ntshirtSize: M\ntshirtType: LargeCPU\nfqdn: kinetica-cluster.saas.kinetica.com\nhaRingName: default\nhasPools: false\n

1. clusterName - the user defined name of the Kinetica DB Cluster

2. clusterSize - block that defines the number of DB Ranks to run

3. tshirtSize - sets the cluster size to a defined size based upon the t-shirt size. Valid sizes are: -

4. tshirtType - block that defines the type of DB Ranks to run: -

5. fqdn - The fully qualified URL for the DB cluster. Used on the Ingress records for any exposed services.

6. haRingName - Default: default

7. hasPools - Whether to enable the separate node 'pools' for \"infra\", \"compute\" pod scheduling. Default: false +optional

"},{"location":"Database/database/#autosuspend","title":"autoSuspend","text":"

The DB Cluster autoSuspend section allows the core DB Pods to be spun down, releasing the underlying Kubernetes nodes to reduce infrastructure costs when the DB is not in use.

kineticacluster.yaml - autoSuspend
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\nname: my-kinetica-db-cr\nnamespace: gpudb\nspec:\nautoSuspend:\nenabled: false\ninactivityDuration: 1h0m0s\n

7. the start of the autoSuspend definition

8. enabled when set to true, auto-suspend of the DB cluster is enabled; when set to false, no automatic suspending of the DB takes place. If omitted, it defaults to false

9. inactivityDuration the duration of DB inactivity after which the DB will be suspended

Horizontal Pod Autoscaler

In order for autoSuspend to work correctly the Kubernetes Horizontal Pod Autoscaler needs to be deployed to the cluster.

"},{"location":"Database/database/#gadmin","title":"gadmin","text":"

GAdmin the Database Administration Console

kineticacluster.yaml - gadmin
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\nname: my-kinetica-db-cr\nnamespace: gpudb\nspec:\ngadmin:\ncontainerPort:\ncontainerPort: 8080\nname: gadmin\nprotocol: TCP\nisEnabled: true\n

7. gadmin configuration block definition

8. containerPort configuration block i.e. where gadmin is exposed on the DB Pod

9. containerPort the port number as an integer. Default: 8080

10. name the name of the port being exposed. Default: gadmin

11. protocol the network protocol used. Default: TCP

12. isEnabled whether gadmin is exposed from the DB pod. Default: true

"},{"location":"Database/database/#example-db-cr","title":"Example DB CR","text":"YAML
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\nname: kinetica-cluster\nnamespace: gpudb\nspec:\nautoSuspend:\nenabled: false\ninactivityDuration: 1h0m0s\ndebug: false\ngadmin:\nisEnabled: true\ngpudbCluster:\nclusterName: kinetica-cluster\nclusterSize:\ntshirtSize: M\ntshirtType: LargeCPU\nconfig:\ngraph:\nenable: true\npostgresProxy:\nenablePostgresProxy: true\ntextSearch:\nenableTextSearch: true\nkifs:\nenable: false\nml:\nenable: false\ntieredStorage:\nglobalTier:\ncolocateDisks: true\nconcurrentWaitTimeout: 120\nencryptDataAtRest: true\npersistTier:\ndefault:\nhighWatermark: 90\nlimit: 4Ti\nlowWatermark: 50\nname: ''\npath: default\nprovisioner: docker.io/hostpath\nvolumeClaim:\nmetadata: {}\nspec:\nresources: {}\nstorageClassName: kinetica-db-persist\nstatus: {}\ntieredStrategy:\ndefault: VRAM 1, RAM 5, PERSIST 5\npredicateEvaluationInterval: 60\nfqdn: kinetica-cluster.saas.kinetica.com\nhaRingName: default\nhasPools: false\nhostManagerPort:\ncontainerPort: 9300\nname: hostmanager\nprotocol: TCP\nimage: docker.io/kineticastagingcloud/kinetica-k8s-db:v7.1.9-8.rc1\nimagePullPolicy: IfNotPresent\nletsEncrypt:\nenabled: false\nlicense: >-\nmetricsRegistryRepositoryTag:\nimagePullPolicy: IfNotPresent\nregistry: docker.io\nrepository: kineticastagingcloud/fluent-bit\nsha: ''\ntag: v7.1.9-8.rc1\npodManagementPolicy: Parallel\nranksPerNode: 1\nreplicas: 4\nhostManagerMonitor:\nmonitorRegistryRepositoryTag:\nimagePullPolicy: IfNotPresent\nregistry: docker.io\nrepository: kineticastagingcloud/kinetica-k8s-monitor\nsha: ''\ntag: v7.1.9-8.rc1\nreadinessProbe:\nfailureThreshold: 20\ninitialDelaySeconds: 5\nperiodSeconds: 10\nstartupProbe:\nfailureThreshold: 20\ninitialDelaySeconds: 5\nperiodSeconds: 10\ninfra: on-prem\ningressController: nginx-ingress\nldap:\nhost: openldap\nisInLocalK8S: true\nisLDAPS: false\nnamespace: gpudb\nport: 389\npayAsYouGo: false\nreveal:\ncontainerPort:\ncontainerPort: 8088\nname: reveal\nprotocol: TCP\nisEnabled: 
true\nsupportingImages:\nbusybox:\nimagePullPolicy: IfNotPresent\nregistry: docker.io\nrepository: kineticastagingcloud/busybox\nsha: ''\ntag: v7.1.9-8.rc1\nsocat:\nimagePullPolicy: IfNotPresent\nregistry: docker.io\nrepository: kineticastagingcloud/socat\nsha: ''\ntag: v7.1.9-8.rc1\n
"},{"location":"Database/database/#kineticauser","title":"KineticaUser","text":""},{"location":"Database/database/#kineticagrant","title":"KineticaGrant","text":""},{"location":"Database/database/#kineticaschema","title":"KineticaSchema","text":""},{"location":"Database/database/#kineticaresourcegroup","title":"KineticaResourceGroup","text":""},{"location":"Database/database_reference/","title":"Database CRD/CR Reference","text":""},{"location":"Database/grant_reference/","title":"Kinetica DB User Permission Grant CRD/CR Reference","text":""},{"location":"Database/resource_group_reference/","title":"Kinetica DB User Resource Group CRD/CR Reference","text":""},{"location":"Database/schema_reference/","title":"Kinetica DB User Schema CRD/CR Reference","text":""},{"location":"Database/user_reference/","title":"Kinetica DB User CRD/CR Reference","text":""},{"location":"Operators/k3s/","title":"Overview","text":"

Kinetica Operators can be installed in any on-prem Kubernetes cluster. This document provides instructions for installing the operators on k3s. If you are on another distribution, you should be able to adjust the values file to suit your environment.

You will need a license key for this to work. Please contact Kinetica Support.

"},{"location":"Operators/k3s/#kinetica-on-k3s-k3sio","title":"Kinetica on k3s (k3s.io)","text":"

The current version of the chart supports Kubernetes versions 1.25 and above.

"},{"location":"Operators/k3s/#install-k3s-129","title":"Install k3s 1.29","text":"Bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable=traefik  --node-name kinetica-master --token 12345\" K3S_KUBECONFIG_OUTPUT=~/.kube/config_k3s K3S_KUBECONFIG_MODE=644 INSTALL_K3S_VERSION=v1.29.1+k3s2 sh -\n
"},{"location":"Operators/k3s/#k3s-install-kinetica-operators-including-a-sample-db-to-try-out","title":"K3s - Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.k3s.yaml. It installs the operators and a simple DB with a Workbench installation for a non-production trial.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.

If you are on a local machine that does not have a domain name, add the following entry to your /etc/hosts file or equivalent.

Text Only
127.0.0.1 local.kinetica\n
"},{"location":"Operators/k3s/#k3s-install-the-kinetica-operators-chart","title":"K3s - Install the kinetica-operators chart","text":"Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n
"},{"location":"Operators/k3s/#k3s-install-the-kinetica-operators-chart-gpu-capable-machine","title":"K3s - Install the kinetica-operators chart (GPU Capable Machine)","text":"

If you wish to try out the GPU capabilities, you can use the following values file, provided you are on an NVIDIA GPU-capable machine.

Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.gpu.yaml\n\nhelm -n kinetica-system install kinetica-operators charts/kinetica-operators/ --create-namespace --values values.onPrem.k3s.gpu.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n

You should be able to access the workbench at http://local.kinetica

As per the values file mentioned above, the username is kadmin and the password is Kinetica1234!

"},{"location":"Operators/k3s/#uninstall-k3s","title":"Uninstall k3s","text":"Bash
/usr/local/bin/k3s-uninstall.sh\n
"},{"location":"Operators/k8s/","title":"Overview","text":"

If you are on a Kubernetes flavour other than Kind or K3s, you may follow this generic guide to install the Kinetica Operators.

You will need a license key for this to work. Please contact Kinetica Support.

"},{"location":"Operators/k8s/#install-the-kinetica-operators-chart","title":"Install the kinetica-operators chart","text":"

This installs the operators and a simple DB with a Workbench installation.

If you are in a managed Kubernetes environment, and the nginx ingress controller installed alongside this chart creates a LoadBalancer service, you may need to ensure that you associate the LoadBalancer with the domain you are using.

If you are on a local machine that does not have a domain name, add the following entry to your /etc/hosts file or equivalent. By default, the chart configuration points to local.kinetica.

Text Only
127.0.0.1 local.kinetica\n
Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k8s.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k8s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --set global.defaultStorageClass=\"your_default_storage_class\"\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k8s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --set global.defaultStorageClass=\"your_default_storage_class\" --devel --version 7.2.0-2.rc-2\n
"},{"location":"Operators/kind/","title":"Overview","text":"

This installation in a Kind cluster is for trying out the operators and the database in a non-production environment. This method currently supports installing only a CPU version of the database.

You will need a license key for this to work. Please contact Kinetica Support.

"},{"location":"Operators/kind/#kind-kubernetes-in-docker-kindsigsk8sio","title":"Kind (kubernetes in docker kind.sigs.k8s.io)","text":""},{"location":"Operators/kind/#create-kind-cluster-129","title":"Create Kind Cluster 1.29","text":"Bash
kind create cluster --config charts/kinetica-operators/kind.yaml\n
"},{"location":"Operators/kind/#kind-install-kinetica-operators-including-a-sample-db-to-try-out","title":"Kind - Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.kind.yaml. It installs the operators and a simple DB with a Workbench installation for a non-production trial.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.

"},{"location":"Operators/kind/#kind-install-the-kinetica-operators-chart","title":"Kind - Install the kinetica-operators chart","text":"Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.kind.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators/ --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n

You should be able to access the workbench at http://local.kinetica

As per the values file mentioned above, the username is kadmin and the password is Kinetica1234!

"},{"location":"Workbench/workbench/","title":"Kinetica Workbench Configuration","text":""},{"location":"Workbench/workbench/#workbench","title":"Workbench","text":"kubectl

Using kubectl, a CustomResource of type Workbench is used to define a new Kinetica Workbench in a YAML file.

The basic Group, Version, Kind or GVK to instantiate a Kinetica Workbench is as follows: -

Workbench GVK
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\n
"},{"location":"Workbench/workbench/#metadata","title":"Metadata","text":"

To this we add a metadata: block for the name of the Workbench CR, along with the namespace into which we are targeting the installation of the Workbench.

Workbench metadata
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\nmetadata:\nname: workbench-kinetica-cluster\nnamespace: gpudb\n
"},{"location":"Workbench/workbench/#an-example-workbench-cr","title":"An Example Workbench CR","text":"

The simplest valid Workbench CR looks as follows: -

workbench.yaml
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\nmetadata:\nname: workbench-kinetica-cluster\nnamespace: gpudb\nspec:\nexecuteSqlLimit: 10000\nfqdn: kinetica-cluster.saas.kinetica.com\nimage: kineticastagingcloud/workbench:v7.1.9-8.rc1\nletsEncrypt:\nenabled: false\nuserIdleTimeout: 60\ningressController: nginx-ingress\n
"},{"location":"Workbench/workbench_reference/","title":"Workbench CRD/CR Reference","text":""}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Overview","text":"

This repository provides the helm chart for deploying the Kubernetes Operators for Database and Workbench. Once the operators are installed, you should be able to use the Kinetica Database and Workbench CRDs to deploy and manage Kinetica clusters and workbenches.

"},{"location":"#add-kinetica-helm-repository","title":"Add Kinetica Helm Repository","text":"

To add the Kinetica Helm repository to Helm 3:

Bash
helm repo add kinetica-operators https://kineticadb.github.io/charts\nhelm repo update\n\n# if you get a 404 error on the .tgz file, you may do the following\n# you could get this if you had previously added the repo and the chart has been updated in development\nhelm repo remove kinetica-operators\nhelm repo add kinetica-operators https://kineticadb.github.io/charts\n

Installation values will depend on the target kubernetes platform you are using.

This chart provides out-of-the-box support for trying Kinetica out in K3s and Kind clusters. For k3s, you should be able to use either the GPU or the CPU version of the database. On these platforms, a non-production configuration of the Kinetica Database and Workbench is also deployed for you to get started. However, you should be able to adapt the k3s values file to deploy on other platforms as well. For fine-grained configuration of the Database or the Workbench, refer to the Database and Workbench documentation.

We use the same chart for our SaaS and AWS Marketplace offerings. If you want to try Kinetica out via SaaS or AWS Marketplace, follow those links; you need not use this chart directly.

The current version of the chart supports Kubernetes versions 1.25 and above.
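As a quick sanity check of that 1.25 floor, the minor version can be extracted from a Kubernetes version string and compared numerically. This is an offline sketch: k8s_version is a stand-in value; on a real cluster you would obtain it from kubectl version output (an assumption, not something this chart provides).

```shell
# Offline sketch: check a Kubernetes version string against the chart's 1.25 floor.
# k8s_version is a hypothetical stand-in for the server version reported by kubectl.
k8s_version="v1.29.1"
minor="${k8s_version#v1.}"   # strip the "v1." prefix -> "29.1"
minor="${minor%%.*}"         # keep the minor component -> "29"
if [ "$minor" -ge 25 ]; then
  echo "supported"
else
  echo "chart requires Kubernetes 1.25+"
fi
```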

"},{"location":"#k3s-k3sio","title":"k3s (k3s.io)","text":"

Refer to Kinetica on K3s

"},{"location":"#kind-kubernetes-in-docker-kindsigsk8sio","title":"Kind (kubernetes in docker kind.sigs.k8s.io)","text":"

Refer to Kinetica on Kind

"},{"location":"#k8s-any-flavour-kubernetesio","title":"K8s - Any flavour (kubernetes.io)","text":"

Refer to Kinetica on K8s

"},{"location":"Database/database/","title":"Kinetica Database Configuration","text":""},{"location":"Database/database/#kineticacluster","title":"KineticaCluster","text":"

To deploy a new Database Instance into a Kubernetes cluster...

kubectl

Using kubectl, a CustomResource of type KineticaCluster is used to define a new Kinetica DB Cluster in a YAML file.

The basic Group, Version, Kind or GVK to instantiate a Kinetica DB Cluster is as follows: -

kineticacluster.yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\n
"},{"location":"Database/database/#metadata","title":"Metadata","text":"

To this we add a metadata: block for the name of the DB CR, along with the namespace into which we are targeting the installation of the DB cluster.

kineticacluster.yaml
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\nname: my-kinetica-db-cr\nnamespace: gpudb\nspec:\n
"},{"location":"Database/database/#spec","title":"Spec","text":"

Under the spec: section of the KineticaCluster CR we have a number of sections supporting different aspects of the deployed DB cluster:-

"},{"location":"Database/database/#gpudbcluster","title":"gpudbCluster","text":"

Configuration items specific to the DB itself.

kineticacluster.yaml - gpudbCluster
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\nname: my-kinetica-db-cr\nnamespace: gpudb\nspec:\ngpudbCluster:\n
"},{"location":"Database/database/#gpudbcluster_1","title":"gpudbCluster","text":"cluster name & size
clusterName: kinetica-cluster\nclusterSize:\ntshirtSize: M\ntshirtType: LargeCPU\nfqdn: kinetica-cluster.saas.kinetica.com\nhaRingName: default\nhasPools: false\n

1. clusterName - the user defined name of the Kinetica DB Cluster

2. clusterSize - block that defines the number of DB Ranks to run

3. tshirtSize - sets the cluster size to a defined size based upon the t-shirt size. Valid sizes are: -

4. tshirtType - block that defines the type of DB Ranks to run: -

5. fqdn - The fully qualified URL for the DB cluster. Used on the Ingress records for any exposed services.

6. haRingName - Default: default

7. hasPools - Whether to enable the separate node 'pools' for \"infra\", \"compute\" pod scheduling. Default: false +optional

"},{"location":"Database/database/#autosuspend","title":"autoSuspend","text":"

The DB Cluster autoSuspend section allows the core DB Pods to be spun down, releasing the underlying Kubernetes nodes to reduce infrastructure costs when the DB is not in use.

kineticacluster.yaml - autoSuspend
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\nname: my-kinetica-db-cr\nnamespace: gpudb\nspec:\nautoSuspend:\nenabled: false\ninactivityDuration: 1h0m0s\n

7. the start of the autoSuspend definition

8. enabled when set to true, auto-suspend of the DB cluster is enabled; when set to false, no automatic suspending of the DB takes place. If omitted, it defaults to false

9. inactivityDuration the duration of DB inactivity after which the DB will be suspended
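As a sketch, actually enabling the feature in a CR of the same shape as the example above might look as follows. The 30m0s value is an illustrative assumption; the only format confirmed by the example above is the Go-style duration string 1h0m0s.

```yaml
apiVersion: app.kinetica.com/v1
kind: KineticaCluster
metadata:
  name: my-kinetica-db-cr
  namespace: gpudb
spec:
  autoSuspend:
    enabled: true              # opt in to automatic suspend
    inactivityDuration: 30m0s  # assumed valid; same duration format as 1h0m0s above
```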

Horizontal Pod Autoscaler

In order for autoSuspend to work correctly the Kubernetes Horizontal Pod Autoscaler needs to be deployed to the cluster.

"},{"location":"Database/database/#gadmin","title":"gadmin","text":"

GAdmin the Database Administration Console

kineticacluster.yaml - gadmin
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\nname: my-kinetica-db-cr\nnamespace: gpudb\nspec:\ngadmin:\ncontainerPort:\ncontainerPort: 8080\nname: gadmin\nprotocol: TCP\nisEnabled: true\n

7. gadmin configuration block definition

8. containerPort configuration block i.e. where gadmin is exposed on the DB Pod

9. containerPort the port number as an integer. Default: 8080

10. name the name of the port being exposed. Default: gadmin

11. protocol the network protocol used. Default: TCP

12. isEnabled whether gadmin is exposed from the DB pod. Default: true

"},{"location":"Database/database/#example-db-cr","title":"Example DB CR","text":"YAML
apiVersion: app.kinetica.com/v1\nkind: KineticaCluster\nmetadata:\nname: kinetica-cluster\nnamespace: gpudb\nspec:\nautoSuspend:\nenabled: false\ninactivityDuration: 1h0m0s\ndebug: false\ngadmin:\nisEnabled: true\ngpudbCluster:\nclusterName: kinetica-cluster\nclusterSize:\ntshirtSize: M\ntshirtType: LargeCPU\nconfig:\ngraph:\nenable: true\npostgresProxy:\nenablePostgresProxy: true\ntextSearch:\nenableTextSearch: true\nkifs:\nenable: false\nml:\nenable: false\ntieredStorage:\nglobalTier:\ncolocateDisks: true\nconcurrentWaitTimeout: 120\nencryptDataAtRest: true\npersistTier:\ndefault:\nhighWatermark: 90\nlimit: 4Ti\nlowWatermark: 50\nname: ''\npath: default\nprovisioner: docker.io/hostpath\nvolumeClaim:\nmetadata: {}\nspec:\nresources: {}\nstorageClassName: kinetica-db-persist\nstatus: {}\ntieredStrategy:\ndefault: VRAM 1, RAM 5, PERSIST 5\npredicateEvaluationInterval: 60\nfqdn: kinetica-cluster.saas.kinetica.com\nhaRingName: default\nhasPools: false\nhostManagerPort:\ncontainerPort: 9300\nname: hostmanager\nprotocol: TCP\nimage: docker.io/kineticastagingcloud/kinetica-k8s-db:v7.1.9-8.rc1\nimagePullPolicy: IfNotPresent\nletsEncrypt:\nenabled: false\nlicense: >-\nmetricsRegistryRepositoryTag:\nimagePullPolicy: IfNotPresent\nregistry: docker.io\nrepository: kineticastagingcloud/fluent-bit\nsha: ''\ntag: v7.1.9-8.rc1\npodManagementPolicy: Parallel\nranksPerNode: 1\nreplicas: 4\nhostManagerMonitor:\nmonitorRegistryRepositoryTag:\nimagePullPolicy: IfNotPresent\nregistry: docker.io\nrepository: kineticastagingcloud/kinetica-k8s-monitor\nsha: ''\ntag: v7.1.9-8.rc1\nreadinessProbe:\nfailureThreshold: 20\ninitialDelaySeconds: 5\nperiodSeconds: 10\nstartupProbe:\nfailureThreshold: 20\ninitialDelaySeconds: 5\nperiodSeconds: 10\ninfra: on-prem\ningressController: nginx-ingress\nldap:\nhost: openldap\nisInLocalK8S: true\nisLDAPS: false\nnamespace: gpudb\nport: 389\npayAsYouGo: false\nreveal:\ncontainerPort:\ncontainerPort: 8088\nname: reveal\nprotocol: TCP\nisEnabled: 
true\nsupportingImages:\nbusybox:\nimagePullPolicy: IfNotPresent\nregistry: docker.io\nrepository: kineticastagingcloud/busybox\nsha: ''\ntag: v7.1.9-8.rc1\nsocat:\nimagePullPolicy: IfNotPresent\nregistry: docker.io\nrepository: kineticastagingcloud/socat\nsha: ''\ntag: v7.1.9-8.rc1\n
"},{"location":"Database/database/#kineticauser","title":"KineticaUser","text":""},{"location":"Database/database/#kineticagrant","title":"KineticaGrant","text":""},{"location":"Database/database/#kineticaschema","title":"KineticaSchema","text":""},{"location":"Database/database/#kineticaresourcegroup","title":"KineticaResourceGroup","text":""},{"location":"Database/database_reference/","title":"Database CRD/CR Reference","text":""},{"location":"Database/grant_reference/","title":"Kinetica DB User Permission Grant CRD/CR Reference","text":""},{"location":"Database/resource_group_reference/","title":"Kinetica DB User Resource Group CRD/CR Reference","text":""},{"location":"Database/schema_reference/","title":"Kinetica DB User Schema CRD/CR Reference","text":""},{"location":"Database/user_reference/","title":"Kinetica DB User CRD/CR Reference","text":""},{"location":"Operators/k3s/","title":"Overview","text":"

Kinetica Operators can be installed in any on-prem Kubernetes cluster. This document provides instructions for installing the operators on k3s. If you are on another distribution, you should be able to adjust the values file to suit your environment.

You will need a license key for this to work. Please contact Kinetica Support.

"},{"location":"Operators/k3s/#kinetica-on-k3s-k3sio","title":"Kinetica on k3s (k3s.io)","text":"

The current version of the chart supports Kubernetes versions 1.25 and above.

"},{"location":"Operators/k3s/#install-k3s-129","title":"Install k3s 1.29","text":"Bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable=traefik  --node-name kinetica-master --token 12345\" K3S_KUBECONFIG_OUTPUT=~/.kube/config_k3s K3S_KUBECONFIG_MODE=644 INSTALL_K3S_VERSION=v1.29.1+k3s2 sh -\n
"},{"location":"Operators/k3s/#k3s-install-kinetica-operators-including-a-sample-db-to-try-out","title":"K3s - Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.k3s.yaml. It installs the operators and a simple DB with a Workbench installation for a non-production trial.

As you can see, it creates an ingress pointing towards local.kinetica. If you have a domain pointing to your machine, replace it with the correct domain name.

If you are on a local machine that does not have a domain name, add the following entry to your /etc/hosts file or equivalent.

Text Only
127.0.0.1 local.kinetica\n
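If you rerun the setup, the entry can be added idempotently so it never appears twice. This sketch writes to a scratch file (hosts.demo, a name chosen only for illustration) rather than the real /etc/hosts, which would need sudo on a real machine.

```shell
# Append the local.kinetica entry only if it is not already present.
# hosts_file points at a scratch file for illustration; on a real machine
# you would target /etc/hosts (with sudo) instead.
hosts_file="./hosts.demo"
touch "$hosts_file"
grep -q 'local\.kinetica' "$hosts_file" || \
  printf '127.0.0.1 local.kinetica\n' >> "$hosts_file"
```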
"},{"location":"Operators/k3s/#k3s-install-the-kinetica-operators-chart","title":"K3s - Install the kinetica-operators chart","text":"Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k3s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n
"},{"location":"Operators/k3s/#k3s-install-the-kinetica-operators-chart-gpu-capable-machine","title":"K3s - Install the kinetica-operators chart (GPU Capable Machine)","text":"

If you wish to try out the GPU capabilities, you can use the following values file, provided you are on an NVIDIA GPU-capable machine.

Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k3s.gpu.yaml\n\nhelm -n kinetica-system install kinetica-operators charts/kinetica-operators/ --create-namespace --values values.onPrem.k3s.gpu.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n

You should be able to access the workbench at http://local.kinetica

As set in the values file above, the username is kadmin and the password is Kinetica1234!

"},{"location":"Operators/k3s/#uninstall-k3s","title":"Uninstall k3s","text":"Bash
/usr/local/bin/k3s-uninstall.sh\n
"},{"location":"Operators/k8s/","title":"Overview","text":"

If you are on a Kubernetes flavour other than Kind or K3s, you can follow this generic guide to install the Kinetica Operators.

You will need a license key for this to work. Please contact Kinetica Support.

"},{"location":"Operators/k8s/#install-the-kinetica-operators-chart","title":"Install the kinetica-operators chart","text":"

This installs the operators along with a simple database and Workbench.

If you are in a managed Kubernetes environment and the nginx ingress controller installed along with this chart creates a LoadBalancer service, make sure you associate the LoadBalancer's address with the domain you are using.

If your local machine does not have a domain name, add the following entry to your /etc/hosts file or equivalent. The default chart configuration points to local.kinetica.

Text Only
127.0.0.1 local.kinetica\n
Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k8s.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k8s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --set global.defaultStorageClass=\"your_default_storage_class\"\n# to find your default storage class, you could do the following\nkubectl get sc -o name\n\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.k8s.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --set global.defaultStorageClass=\"your_default_storage_class\" --devel --version 7.2.0-2.rc-2\n
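The default StorageClass is marked with (default) in the kubectl get sc output. As a sketch of extracting its name for global.defaultStorageClass (the sample output below is hypothetical; run the kubectl line against your own cluster):

```shell
# Against a real cluster:  kubectl get sc | awk '/\(default\)/{print $1}'
# Hypothetical sample of 'kubectl get sc' output, used here for demonstration:
sample='NAME                 PROVISIONER             AGE
gp2 (default)        kubernetes.io/aws-ebs   10d
standard             kubernetes.io/gce-pd    10d'
default_sc=$(printf '%s\n' "$sample" | awk '/\(default\)/{print $1}')
echo "$default_sc"   # → gp2
```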
"},{"location":"Operators/kind/","title":"Overview","text":"

This installation in a Kind cluster is for trying out the operators and the database in a non-production environment. This method currently supports only the CPU version of the database.

You will need a license key for this to work. Please contact Kinetica Support.

"},{"location":"Operators/kind/#kind-kubernetes-in-docker-kindsigsk8sio","title":"Kind (kubernetes in docker kind.sigs.k8s.io)","text":""},{"location":"Operators/kind/#create-kind-cluster-129","title":"Create Kind Cluster 1.29","text":"Bash
kind create cluster --config charts/kinetica-operators/kind.yaml\n
"},{"location":"Operators/kind/#kind-install-kinetica-operators-including-a-sample-db-to-try-out","title":"Kind - Install kinetica-operators including a sample db to try out","text":"

Review the values file charts/kinetica-operators/values.onPrem.kind.yaml. It installs the operators along with a simple database and Workbench, intended for a non-production trial.

As you can see, it creates an ingress pointing to local.kinetica. If you have a domain pointing to your machine, replace local.kinetica with that domain name.

"},{"location":"Operators/kind/#kind-install-the-kinetica-operators-chart","title":"Kind - Install the kinetica-operators chart","text":"Bash
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.kind.yaml\n\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\"\n# if you want to try out a development version,\nhelm search repo kinetica-operators --devel --versions\nhelm -n kinetica-system install kinetica-operators kinetica-operators/kinetica-operators --create-namespace --values values.onPrem.kind.yaml --set db.gpudbCluster.license=\"your_license_key\" --set dbAdminUser.password=\"your_password\" --devel --version 7.2.0-2.rc-2\n

You should be able to access the workbench at http://local.kinetica

As set in the values file above, the username is kadmin and the password is Kinetica1234!

"},{"location":"Workbench/workbench/","title":"Kinetica Workbench Configuration","text":""},{"location":"Workbench/workbench/#workbench","title":"Workbench","text":"kubectl

Using kubectl, a CustomResource of type Workbench is used to define a new Kinetica Workbench in a YAML file.

The basic Group, Version, Kind (GVK) needed to instantiate a Kinetica Workbench is as follows:

Workbench GVK
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\n
"},{"location":"Workbench/workbench/#metadata","title":"Metadata","text":"

to which we add a metadata: block giving the name of the Workbench CR, along with the namespace into which we are targeting the installation.

Workbench metadata
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\nmetadata:\n  name: workbench-kinetica-cluster\n  namespace: gpudb\n
"},{"location":"Workbench/workbench/#an-example-workbench-cr","title":"An Example Workbench CR","text":"

The simplest valid Workbench CR looks as follows:

workbench.yaml
apiVersion: workbench.com.kinetica/v1\nkind: Workbench\nmetadata:\n  name: workbench-kinetica-cluster\n  namespace: gpudb\nspec:\n  executeSqlLimit: 10000\n  fqdn: kinetica-cluster.saas.kinetica.com\n  image: kineticastagingcloud/workbench:v7.1.9-8.rc1\n  letsEncrypt:\n    enabled: false\n  userIdleTimeout: 60\n  ingressController: nginx-ingress\n
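To use the example, save the CR to a file and apply it with kubectl. A quick sanity check of the GVK before applying might look like this (the apply line assumes a running cluster with the operator installed, so it is shown commented out):

```shell
# Write the minimal Workbench CR from above to workbench.yaml.
cat > workbench.yaml <<'EOF'
apiVersion: workbench.com.kinetica/v1
kind: Workbench
metadata:
  name: workbench-kinetica-cluster
  namespace: gpudb
spec:
  executeSqlLimit: 10000
  fqdn: kinetica-cluster.saas.kinetica.com
  image: kineticastagingcloud/workbench:v7.1.9-8.rc1
  letsEncrypt:
    enabled: false
  userIdleTimeout: 60
  ingressController: nginx-ingress
EOF
# Sanity-check the GVK before applying.
grep -E '^(apiVersion|kind):' workbench.yaml
# kubectl -n gpudb apply -f workbench.yaml   # requires a cluster with the operator installed
```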
"},{"location":"Workbench/workbench_reference/","title":"Workbench CRD/CR Reference","text":""}]} \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index 73d2415..9b1541b 100644 Binary files a/sitemap.xml.gz and b/sitemap.xml.gz differ