
This Operator is designed to enable K8sGPT within a Kubernetes cluster. It allows you to create a custom resource that defines the behaviour and scope of a managed K8sGPT workload. Analysis and outputs are configurable, enabling integration into existing workflows.

Installation

helm repo add k8sgpt https://charts.k8sgpt.ai/
helm install release k8sgpt/k8sgpt-operator -n ai-sre --create-namespace
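
Before creating any custom resources, you can confirm that the operator came up in the release namespace (a minimal sketch, assuming the namespace and release name used in the command above):

helm list -n ai-sre
kubectl get pods -n ai-sre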

Run the example

  1. Install the operator from the Installation section.

  2. Create a secret containing your OpenAI API key:

kubectl create secret generic k8sgpt-sample-secret --from-literal=openai-api-key=$OPENAI_TOKEN -n ai-sre
  3. Apply the K8sGPT configuration object:
kubectl apply -f - << EOF
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
  namespace: ai-sre
spec:
  ai:
    enabled: true
    model: gpt-3.5-turbo
    backend: openai
    secret:
      name: k8sgpt-sample-secret
      key: openai-api-key
    # anonymized: false
    # language: english
  noCache: false
  version: v0.3.8
  # filters:
  #   - Ingress
  # sink:
  #   type: slack
  #   webhook: <webhook-url>
  # extraOptions:
  #   backstage:
  #     enabled: true
EOF
  4. Once the custom resource has been applied, the K8sGPT deployment will be installed and, after a few minutes, you will be able to see the Result objects of the analysis (if there are any issues in your cluster):
❯ kubectl get results -o json | jq .
{
  "apiVersion": "v1",
  "items": [
    {
      "apiVersion": "core.k8sgpt.ai/v1alpha1",
      "kind": "Result",
      "spec": {
        "details": "The error message means that the service in Kubernetes doesn't have any associated endpoints, which should have been labeled with \"control-plane=controller-manager\". \n\nTo solve this issue, you need to add the \"control-plane=controller-manager\" label to the endpoint that matches the service. Once the endpoint is labeled correctly, Kubernetes can associate it with the service, and the error should be resolved.",

Remote Cache

S3
  1. Install the operator from the Installation section.

  2. Create a secret containing your AWS credentials:

kubectl create secret generic k8sgpt-sample-cache-secret --from-literal=aws_access_key_id=<AWS_ACCESS_KEY_ID> --from-literal=aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> -n k8sgpt-operator-system
  3. Apply the K8sGPT configuration object:
kubectl apply -f - << EOF
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
  namespace: ai-sre
spec:
  model: gpt-3.5-turbo
  backend: openai
  noCache: false
  version: v0.3.0
  enableAI: true
  secret:
    name: k8sgpt-sample-secret
    key: openai-api-key
  remoteCache:
    credentials:
      name: k8sgpt-sample-cache-secret 
    bucketName: foo
    region: us-west-1
EOF
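
Before the operator tries to use the remote cache, you can check that the credentials secret exists and that the bucket referenced in the spec is reachable (a minimal sketch; the bucket name foo and region us-west-1 are the placeholder values from the example above):

kubectl describe secret k8sgpt-sample-cache-secret -n k8sgpt-operator-system
aws s3 ls s3://foo --region us-west-1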

Other AI Backend Examples

AzureOpenAI
  1. Install the operator from the Installation section.

  2. Create a secret containing your Azure OpenAI API key:

kubectl create secret generic k8sgpt-sample-secret --from-literal=azure-api-key=$AZURE_TOKEN -n ai-sre
  3. Apply the K8sGPT configuration object:
kubectl apply -f - << EOF
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
  namespace: ai-sre
spec:
  ai:
    enabled: true
    secret:
      name: k8sgpt-sample-secret
      key: azure-api-key
    model: gpt-35-turbo
    backend: azureopenai
    baseUrl: https://k8sgpt.openai.azure.com/
    engine: llm
  noCache: false
  version: v0.3.8
EOF
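
After applying the object, the operator reconciles it into a K8sGPT workload; a quick way to follow this is to check the deployments it creates (a minimal sketch; the rollout command assumes the deployment is named after the custom resource, which may differ in your setup):

# List the workloads created in the namespace
kubectl get deployments -n ai-sre
# Follow the rollout (assumed deployment name)
kubectl -n ai-sre rollout status deployment/k8sgpt-sample
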
LocalAI
  1. Install the operator from the Installation section.

  2. Follow the LocalAI installation guide to install LocalAI. (No OpenAI secret is required when using LocalAI).

  3. Apply the K8sGPT configuration object:

kubectl apply -f - << EOF
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-local-ai
  namespace: default
spec:
  ai:
    enabled: true
    model: ggml-gpt4all-j
    backend: localai
    baseUrl: http://local-ai.local-ai.svc.cluster.local:8080/v1
  noCache: false
  version: v0.3.8
EOF

Note: ensure that the value of baseUrl is a properly constructed DNS name for the LocalAI Service. It should take the form: http://local-ai.<namespace_local_ai_was_installed_in>.svc.cluster.local:8080/v1.

  4. Same as step 4 in the Run the example section above.
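
To confirm that the baseUrl resolves and the LocalAI API is reachable from inside the cluster, a throwaway curl pod can be used (a minimal sketch; LocalAI exposes an OpenAI-compatible API, so /v1/models should list the available models):

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -s http://local-ai.local-ai.svc.cluster.local:8080/v1/models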

Helm values

For details, please see the chart's default values.
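
The default values can also be inspected directly from the command line (a minimal sketch, using the repository added in the Installation section):

helm show values k8sgpt/k8sgpt-operator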